
How the EU Digital Omnibus Reshapes AI Act Timelines and Governance in 2026

Clearer timelines and defined application dates are changing how AI governance is planned across the EU.

March 31, 2026


The EU’s Digital Omnibus proposal is entering its next legislative phase. Following the European Commission’s initial proposal in late 2025, both the Council of the EU and the European Parliament have now established their positions.

On 13 March 2026, the Council agreed its negotiating position, maintaining the core structure of the proposal while introducing targeted adjustments. On 18 March, Parliament committees adopted their position, which was confirmed in plenary on 26 March.

This confirmation marks a key milestone. The European Parliament has now formally adopted its position on amendments to the AI Act, allowing trilogue negotiations with the Council and Commission to begin. Early reports indicate that political negotiations are already underway, with technical discussions scheduled immediately after. The current timeline targets a final agreement as early as the second trilogue on 28 April.

While the final text remains under negotiation, the updates focus on how and when obligations will apply in practice, particularly for high-risk AI systems.

For privacy and compliance teams, the focus is shifting toward operational timelines, enforceability, and alignment across frameworks.

For a broader overview of the original proposal and its scope, revisit our earlier coverage of the EU Digital Omnibus.

 

AI Act Timelines Move to Fixed Application Dates

The most visible update concerns the timing of high-risk AI obligations. Both the Council and Parliament support fixed application dates:

  • 2 December 2027 for high-risk AI systems listed in the regulation (e.g. employment, education, law enforcement)
  • 2 August 2028 for AI systems embedded in regulated products (e.g. medical devices, machinery)

This replaces earlier proposals linking compliance to the availability of technical standards.

The Parliament’s adopted position reinforces these timelines, confirming that high-risk systems across areas such as biometrics, critical infrastructure, employment, and law enforcement will follow the December 2027 deadline, while systems governed by sectoral safety legislation align with the August 2028 date.

In addition, a separate compliance milestone has been introduced for transparency obligations. Providers are expected to meet watermarking requirements for AI-generated audio, image, video, and text content by 2 November 2026, ensuring clear identification of synthetic content.

The introduction of fixed dates provides a clearer planning horizon. Organizations deploying AI in areas such as hiring or credit scoring can no longer defer governance design pending further guidance. Instead, impact assessments, documentation, and oversight models need to be defined ahead of these dates and applied consistently across systems.

 

Scope and Prohibited Practices Become More Explicit

The Parliament’s position introduces additional clarity around risk and unacceptable uses.

Proposed updates include a ban on systems capable of generating or manipulating non-consensual intimate imagery of identifiable individuals. This expands the list of prohibited practices under Article 5 of the AI Act and aligns with broader enforcement trends around harm prevention and misuse of generative AI.

At the same time, amendments address how the AI Act interacts with existing EU sectoral legislation. Where AI systems fall within established product safety regimes, obligations under the AI Act may be applied in a more targeted way to avoid duplication.

For organizations operating in regulated sectors, this requires a coordinated approach across compliance functions. A medical device manufacturer incorporating AI into diagnostic tools, for example, will need to align product safety requirements with AI-specific documentation, monitoring, and oversight obligations rather than treating them as separate compliance tracks.

 

Bias Detection and Sensitive Data Remain Tightly Scoped

The use of sensitive personal data for bias detection and correction remains a focal point.

While the Commission proposed expanding this capability, the Council maintains stricter conditions. Processing must remain strictly necessary and tied to specific risks affecting health, safety, or fundamental rights.

This maintains a high evidentiary threshold. Bias testing strategies must be targeted and proportionate, with clear justification for why sensitive data is required and why alternative approaches are insufficient.

In practice, this affects how organizations design fairness assessments. A financial institution evaluating potential bias in lending models, for instance, will need to demonstrate that the inclusion of sensitive attributes directly supports the detection of discriminatory outcomes and that safeguards are in place throughout the process.

 

Registration and Oversight Frameworks Evolve

Adjustments to registration requirements illustrate a broader move toward proportionality.

Under current rules, certain AI systems must be registered in the EU database even when they are assessed as not meeting the high-risk threshold. The latest proposals retain this requirement while reducing the amount of information that must be submitted.

This approach preserves transparency while simplifying administrative processes.

Oversight structures are also being refined. The European Commission has proposed expanding the role of the EU AI Office in supervising systems built on general-purpose AI models. The Council supports this direction while maintaining national authority in specific sectors such as financial services, law enforcement, and critical infrastructure.

The result is a more defined allocation of supervisory responsibility, with a combination of centralized and national oversight depending on the use case.

 

From Legislative Timelines to Operational Planning

The Omnibus updates address the gap between regulatory timelines and implementation readiness across the EU.

There is now greater certainty around when obligations will apply, how enforcement responsibilities are distributed, and how the AI Act interacts with existing legal frameworks. This provides a more stable foundation for planning, but it also places emphasis on execution.

Organizations are expected to move beyond high-level interpretation and establish governance processes that operate across the AI lifecycle. This includes identifying where AI is used, defining risk classification, aligning assessments with existing privacy workflows, and maintaining documentation that supports accountability.

In many cases, these requirements build directly on existing privacy program capabilities. The difference lies in applying them to automated systems that influence outcomes in areas such as employment, access to services, and content generation.

 

Next Steps for the EU Digital Omnibus

The final outcome of the Omnibus proposal will depend on trilogue negotiations between the Commission, Council, and Parliament.

With the Parliament’s position now formally adopted, negotiations have moved into the trilogue phase, beginning with an initial political alignment discussion followed by detailed technical sessions. The current schedule points to a potential agreement by late April, which would significantly accelerate the transition from legislative design to implementation planning.

The current positions point to a more structured implementation of the AI Act, with defined timelines, clearer interaction with other EU laws, and governance expectations that are designed to be applied consistently across sectors.

Organizations that begin aligning AI governance with existing privacy, risk, and compliance frameworks are likely to be better prepared as these requirements come into force.

For deeper analysis of the evolving EU regulatory landscape, explore OneTrust DataGuidance.

 

Key Questions on the EU Digital Omnibus and AI Act Updates

 

When do the high-risk AI obligations apply?

Current positions from both the Council and Parliament point to 2 December 2027 for most high-risk AI systems and 2 August 2028 for systems embedded in regulated products.

Are there earlier deadlines for transparency obligations?

Yes. The Parliament’s position introduces a November 2026 deadline for watermarking AI-generated content, adding an earlier requirement focused on transparency and content origin.

Do the updates change the substance of the AI Act’s obligations?

The updates focus on timing, proportionality, and alignment with other EU laws. Core obligations such as risk management, documentation, and oversight remain in place.

Do registration requirements still apply to systems assessed as not high-risk?

Registration requirements remain, but proposals reduce the amount of information required for systems assessed as not high-risk, simplifying the process while maintaining transparency.

How can organizations prepare now?

Preparation includes identifying AI use cases, aligning risk assessments with existing privacy processes, and establishing documentation and oversight mechanisms that can be applied consistently across systems.
