In response to pressure from within the EU and abroad, the proposal seeks to simplify the digital rulebook
November 20, 2025
The European Commission has released two major proposals that could reshape how organizations operating in the EU handle AI governance, data protection, cybersecurity incident reporting, and cookie consent. Together, these updates — known as the Digital Omnibus Regulation proposal and the Digital Omnibus on AI Regulation proposal — aim to simplify Europe’s digital rulebook while still tightening protections for users and strengthening market trust.
These proposals introduce real changes worth watching. Here’s a breakdown of what happened, what’s coming next, and what it means for your business and your customers.
The European Commission proposed sweeping legislative updates in five key areas:
The EU is officially proposing to delay enforcement of high-risk AI requirements, shifting major deadlines out of 2026 and deeper into 2027.
For deeper analysis on the full scope of the EU AI Act Omnibus, visit DataGuidance.
This change is not about weakening the AI Act. Instead, the Commission is restructuring the rollout to match the actual readiness of the surrounding ecosystem of standards, authorities, guidance, and tools, so organizations can realistically comply.
When the AI Act was adopted, its high-risk obligations were planned to phase in by August 2, 2026, with full enforcement by August 2, 2027. But the infrastructure needed to support compliance hasn’t arrived on time.
Key gaps include missing harmonized standards, designated national authorities, and official Commission guidance. Because the EU's AI compliance ecosystem wasn't ready, enforcing the 2026 deadlines wasn't feasible. The Digital Omnibus therefore proposes extended transition periods and conditional enforcement tied to the availability of standards and official guidance.
Instead of sweeping postponements, the Commission is introducing structured, legally defined extensions. These mechanisms collectively push most high-risk enforcement into 2027, while maintaining the AI Act’s core protections.
High-risk obligations will not kick in until essential compliance tools like harmonized standards and Commission guidelines are available. This prevents organizations from having to comply based on guesswork.
High-risk systems defined in Article 6(1) and Annex I get longer transition windows, recognizing their dependence on delayed standards.
Machine-readable detection of AI-generated content (Article 50(2)) is now pushed to February 2027 for systems already on the market.
Documentation, quality management systems, post-market monitoring, and human-oversight expectations are being scaled appropriately, giving smaller companies runway to comply.
The breakdown of those company sizes includes:
The EU also proposes removing the requirement for providers and deployers to ensure staff AI literacy, placing that responsibility on the Commission and member states instead. A new GDPR amendment would allow legitimate interest as a legal basis for training AI models, under specific conditions. Combined, these extensions amount to a de facto one-year delay in high-risk enforcement.
Incident reporting rules currently overlap across NIS2, DORA, eIDAS, the CRA, and the GDPR, creating significant complexity. The Digital Omnibus proposes a major simplification:
Organizations would also be required to use the same single-entry point for GDPR notifications.
To combat cookie banner “fatigue,” cookie rules move from the ePrivacy Directive into the GDPR.
Key updates include:
This will materially change how vendors collect analytics, measure engagement, and design consent UX.
Controllers may now refuse or charge fees for requests that are:
This should reduce administrative load and abuse of access rights.
The Omnibus proposes replacing 27 different national DPIA lists with a single EU-wide list. This unifies criteria and reduces cross-border compliance complexity.
Pulling together all of these concerns, three major themes emerge:
Without standards or national authorities, organizations couldn’t plan realistically.
Reopening the AI Act risked undermining legal stability. A narrow extension is considered safer.
Different readiness levels across EU member states threatened to create uneven enforcement — contradicting the AI Act’s goal of a unified single market.
The proposals must be approved by the European Parliament and the Council of the EU, which means the final agreed-upon changes could look materially different from what is currently being sought. The shift, as it stands now, introduces a more flexible enforcement strategy.
Acting on calls from inside the EU and abroad, the Commission has recognized the uncertainty and unreadiness created by the current legislation. The regulations can only be enforced when the ecosystem can realistically support compliance. The proposal is not a watering-down but a structural recalibration.