
In agencies and beyond
Artificial intelligence (AI) has long since become part of everyday work in communication agencies. Whether in text creation, image generation, or data analysis, AI tools offer enormous opportunities, but they also confront us with new ethical and legal challenges. Clear principles and guidelines are needed so that AI use does not turn into the Wild West.
Why AI policies are essential
With the right know-how, AI can increase efficiency and support creative processes. At the same time, without clear guidelines there is a risk that transparency and trust will suffer. Clients rightly expect their agency to handle AI responsibly. AI guidelines create clarity here and strengthen trust in the collaboration.
Principles for using AI
The legal situation around AI is still quite opaque, especially as the technology is evolving rapidly. In addition to the General Data Protection Regulation (GDPR), the European Union has created a dedicated legal framework with the AI Regulation; in Germany, the Federal Data Protection Act (BDSG) and the rules on employee data protection apply on top of that. Let's take a look at the individual points:
General Data Protection Regulation (GDPR)
As a fundamental set of rules for data protection in Europe, the GDPR also applies to the processing of personal data by AI systems. The key points here are:
Lawfulness, fairness and transparency: Data processing requires a legal basis and must be carried out transparently and fairly.
Purpose limitation and data minimization: Personal data may only be collected and processed for specified, explicit and legitimate purposes, and the amount of data should be limited to the necessary minimum.
Accuracy and storage limitation: Data must be accurate and kept up to date, and may only be stored for as long as is necessary for the purposes of processing.
Integrity and confidentiality: Companies must ensure the security of data and protect it from unauthorized access through appropriate technical and organizational measures.
Accountability: Controllers must be able to demonstrate compliance with data protection rules.
AI regulation
In March 2024, the EU adopted the AI Regulation (AI Act) as the world's first comprehensive set of rules for artificial intelligence. It distinguishes between high-risk AI systems and other applications:
High-risk AI systems: These include systems that pose significant risks to health, safety, or fundamental rights, such as biometric identification, critical infrastructure management, or automated judicial decision-making.
Transparency obligations: Companies must provide information about the use of AI, reveal the logic behind automated decisions, and describe their potential effects.
Safety requirements: AI systems must meet strict safety standards and be regularly reviewed.
Various institutions and companies have also formulated their own ethical guidelines, whose principles can be carried over to communication agencies:
Transparency: Disclosure of the use of AI to customers and employees.
Responsibility: Human oversight and control over AI-generated content.
Data protection: Compliance with applicable data protection regulations when using AI tools.
Fairness: Avoiding discrimination and bias in AI applications.
How we handle the topic
Of course, we also use AI systems, for example to generate ideas, create texts or edit images. We are very aware of the power, but also of the responsibility, that comes with AI tools. That is why we have developed our own guidelines for our use of AI, which are based on the applicable rules of the GDPR and the EU AI Regulation. Let's take a look at the key points:
The basics
- AI is a tool, not a decision maker. People always have the last word with us.
- We label AI content where necessary, for example fully automated output.
- We only use AI where it brings real added value — and not because it's trending right now.
- We pay attention to fairness — no manipulative tools, no discriminatory algorithms.
Data protection
We do not feed personal data into AI systems unless it is strictly necessary and legally safeguarded. The GDPR is our baseline here.
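What this can look like in practice: before a prompt leaves our systems, obvious identifiers are stripped out. The following Python sketch is purely illustrative; the patterns and names are our own simplified assumptions, not a complete anonymization solution.

```python
import re

# Illustrative only: a minimal data-minimization step that replaces obvious
# personal identifiers with placeholders before a prompt is sent to an
# external AI tool. These simplified patterns do not catch every kind of
# personal data.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # e-mail addresses
    (re.compile(r"\+?\d[\d /()-]{7,}\d"), "[PHONE]"),      # phone numbers
]

def minimize(text: str) -> str:
    """Replace personal identifiers in a prompt with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(minimize("Draft a reply to max.mustermann@example.com, phone +49 170 1234567."))
# -> Draft a reply to [EMAIL], phone [PHONE].
```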
Risk levels
The EU AI Regulation divides AI into four risk levels. We take a close look at every application; a simplified sketch of this check follows the list:
❌ Prohibited -> Not used (e.g. manipulative systems)
⚠️ High risk -> Only with clear requirements (e.g. in HR systems)
ℹ️ Limited risk -> Transparency obligation (e.g. chatbots)
✅ Low risk -> Free to use, but under editorial control
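As a minimal sketch, this is how such a risk check could be wired into an internal tool-vetting script. The level names follow the EU AI Regulation; the handling rules, function, and tool name are simplified assumptions of our own, not an official implementation.

```python
from enum import Enum

# Sketch: map the four risk levels of the EU AI Regulation to internal
# handling rules. The rules below paraphrase our guidelines and are not
# legal advice.
class RiskLevel(Enum):
    PROHIBITED = "prohibited"   # banned practices, e.g. manipulative systems
    HIGH = "high"               # e.g. AI used in HR decisions
    LIMITED = "limited"         # e.g. chatbots -> transparency obligation
    MINIMAL = "minimal"         # everyday tools under editorial control

HANDLING = {
    RiskLevel.PROHIBITED: "do not use",
    RiskLevel.HIGH: "use only with documented requirements and human oversight",
    RiskLevel.LIMITED: "use with disclosure to the people interacting with it",
    RiskLevel.MINIMAL: "use freely, but keep editorial control over the output",
}

def vet_tool(name: str, level: RiskLevel) -> str:
    """Return the internal handling rule for a tool at the given risk level."""
    return f"{name}: {HANDLING[level]}"

print(vet_tool("website chatbot", RiskLevel.LIMITED))
# -> website chatbot: use with disclosure to the people interacting with it
```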
Training
Everyone who works with AI for us knows what they are doing. We ensure this through internal training and a regular exchange about tools, trends, and legal developments.
Contact person
We always have someone who keeps track of things, ensures quality and is available to answer questions — both internally and externally.
Troubleshooting
When a problem arises with an AI tool or with a result, we deal with it openly. We document, analyze, and change what needs to be changed.
Conclusion: Looking to the future responsibly
AI offers communication agencies huge opportunities, but it also requires a high level of responsibility. Clear guidelines help to avoid ethical and legal pitfalls and to strengthen the trust of customers and employees. By focusing on transparency, responsibility, data protection, and fairness, we can fully exploit the potential of AI while maintaining our values. It is important to always be up to date not only technologically, but also ethically and legally.
AI can promote creativity, efficiency, and innovation if we use it consciously, responsibly, and with attitude. And that is exactly what we are doing.
Note: This article is based on current findings and is only intended to provide an overview. It is in no way intended as legal advice.
