The European Union has reached a provisional agreement to streamline its artificial intelligence regulations, aiming to reduce legal uncertainty for businesses while maintaining robust safety standards. This tentative deal, part of the broader “Digital Omnibus” package, balances the need for innovation with public protection by clarifying compliance timelines and banning specific harmful AI applications.
Clarifying Compliance: Ending Double Regulation
The core driver behind these amendments is to resolve confusion regarding how companies should navigate the existing EU Artificial Intelligence Act alongside sector-specific laws. Previously, businesses faced ambiguity about whether to follow general AI rules or industry-specific regulations, leading to fears of “double regulation.”
Arba Kokalari, rapporteur for the European Parliament’s Internal Market committee, emphasized that the changes are not about weakening safety but about clarifying the legal landscape.
“Companies should not be regulated twice for one thing. We are clarifying the rules for companies in Europe.”
To support this goal, the agreement introduces several key adjustments:
- Extended Deadlines for High-Risk AI: Systems classified as “high-risk”—such as those used in critical infrastructure, education, employment, and border control—now have until December 2027 to comply with EU legislation.
- Longer Timeline for Consumer Products: AI embedded in products like lifts, toys, and smart home appliances (previously classified under machinery) has an extended deadline of August 2, 2028.
- Support for SMEs: Small and medium-sized enterprises will benefit from simplified rules designed to avoid duplication between sectoral and AI-specific requirements.
- EU-Level Sandboxes: Developers will gain access to regulatory sandboxes, allowing them to test AI products in a controlled environment before full market entry.
Banning Non-Consensual Sexual Content
In a significant move to protect individual rights, the Digital Omnibus explicitly prohibits AI systems that generate non-consensual sexually explicit content, including so-called “nudification apps” that digitally remove clothing from images.
The ban covers:
- Explicit images, videos, or audio created without consent.
- Content where a person’s intimate parts are exposed.
Key Details of the Ban:
- Scope: The rules apply to content depicting real human beings, not synthetic AI characters.
- Watermarking: Companies must implement mandatory watermarking for AI-generated content.
- Compliance Date: Businesses have until December 2 to align their systems with these new prohibitions.
Michael McNamara, a Renew Europe lawmaker, noted that the legislation aims to provide clear boundaries, stating, “We wanted to have clarity on what we think about [nudification apps] in Europe and that we are not accepting of it.” This provision responds to growing concerns over the misuse of AI tools, such as Elon Musk’s Grok chatbot, which has been used to generate explicit imagery of women and children online.
What Comes Next?
While the provisional agreement marks a significant step forward, it is not yet final law. The deal must still receive formal approval from both the European Parliament and EU member states.
Once ratified, these changes will reshape how AI is developed and deployed in Europe. By extending compliance deadlines and banning harmful applications, the EU seeks to foster a competitive AI sector that prioritizes both innovation and fundamental rights. This balanced approach aims to prevent regulatory overlap while ensuring that technology serves society responsibly.