
Regulatory Draft Picks: March Madness Meets US Privacy and AI Laws

A closer look at how four states are shaping privacy and AI governance in practice, from enforcement-driven programs to emerging frameworks. 

March 27, 2026


March Madness always brings a mix of dominant favorites, inconsistent contenders, and quiet teams that outperform expectations. US privacy and AI regulation follows a similar pattern. Some states are refining enforcement, others are introducing new requirements, and a few are building new frameworks from the ground up.

This year’s “regulatory draft picks” highlight four states where recent developments deserve attention and the practical implications for how organizations manage data, AI systems, and compliance operations.

 

Connecticut Huskies: Mature Enforcement And Operational Discipline

Connecticut enters the tournament with strong fundamentals across both sides of the court, and a privacy framework that combines established requirements with increasing enforcement activity.

The Connecticut Data Privacy Act (CTDPA), in force since July 2023, continues to evolve through recent amendments, including Act No. 25-113 with changes taking effect in July 2026. The law applies to organizations operating in Connecticut or targeting its residents, with defined thresholds and exemptions.

The Attorney General (AG) has issued an enforcement report and FAQs, and requirements to honor universal opt-out mechanisms took effect in January 2025. Enforcement authority sits solely with the AG, including the ability to investigate, issue notices of violation, and pursue action under unfair trade practices law.

For organizations, this moves privacy into execution. Opt-out signals must work across websites, apps, and backend systems. A global privacy control signal received through a browser needs to translate into actual suppression of tracking and data sharing.
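Honoring that signal server-side can be as direct as checking the request headers before any tracking tags load. The sketch below is a minimal illustration, not any vendor's implementation: the Sec-GPC header is defined by the Global Privacy Control specification, while the tracker list and suppression logic are hypothetical.

```python
# Minimal sketch: suppress tracking when a Global Privacy Control
# (GPC) opt-out signal is present. Per the GPC specification, the
# signal is sent as the request header "Sec-GPC: 1". The tracker
# names and suppression policy below are illustrative assumptions.

def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

def active_trackers(headers: dict, configured_trackers: list) -> list:
    """Return the tracking tags to load; none if the user opted out."""
    if gpc_opt_out(headers):
        return []  # opt-out received: load no tracking or sharing tags
    return configured_trackers
```

In a real deployment the same check would also need to flow into backend sharing pipelines, not just front-end tag loading, so that the opt-out suppresses downstream data sales as well.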

Data subject requests also require coordination across systems. A request to access or delete data may involve CRM platforms, marketing tools, and archived records. Each step needs to be documented so the organization can show how the request was fulfilled and within what timeframe.
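The cross-system coordination and documentation described above can be sketched as a small orchestrator that touches each system and records what happened, when. The system names and in-memory stores here are hypothetical stand-ins for CRM, marketing, and archive platforms.

```python
# Illustrative sketch: fulfill a deletion request across several
# systems while building an auditable record of each step. The
# "systems" dict of name -> data store is a hypothetical stand-in
# for real CRM, marketing, and archive integrations.
from datetime import datetime, timezone

def fulfill_deletion_request(subject_id: str, systems: dict) -> list:
    """Delete the subject's data from each system, logging every step."""
    audit_log = []
    for name, store in systems.items():
        removed = store.pop(subject_id, None) is not None
        audit_log.append({
            "system": name,
            "subject": subject_id,
            "deleted": removed,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
    return audit_log
```

The returned log is what lets the organization show which systems were searched, what was removed, and within what timeframe the request was completed.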

This is less about policy updates and more about whether processes hold up across systems and over time.

 

UCLA Bruins (California): Operational Privacy and AI Governance at Scale

California continues to set the pace for operational privacy requirements, with recent CCPA updates taking effect in January 2026.

These updates focus on how privacy works in practice. Consent must be as easy to withdraw as it is to give. Interfaces that push users toward a specific choice or make opt-out harder introduce compliance risk. For example, a consent banner that highlights “accept” while muting “decline” may not meet regulatory expectations.
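One way to operationalize that symmetry expectation is a simple lint over consent-banner configuration, flagging the patterns described above. This is a hypothetical check with illustrative field names, not a regulatory test.

```python
# Hypothetical lint for consent-banner configuration: flag asymmetric
# choices such as a highlighted "accept" next to a muted "decline",
# or a decline path that takes more clicks than accepting. The config
# schema ("style", "clicks_required") is an illustrative assumption.

def banner_issues(config: dict) -> list:
    """Return a list of symmetry problems found in the banner config."""
    issues = []
    accept, decline = config["accept"], config["decline"]
    if accept.get("style") != decline.get("style"):
        issues.append("accept and decline are styled differently")
    if decline.get("clicks_required", 1) > accept.get("clicks_required", 1):
        issues.append("declining takes more steps than accepting")
    return issues
```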

Consumer rights now extend across all data environments, including archived or legacy systems. A request to know or delete data must include information stored outside active systems. This often requires coordination between privacy teams, IT, and records management to locate and retrieve older data.

The updates also introduce expectations around risk assessments, cybersecurity audits, and automated decision-making. Systems that influence outcomes such as hiring or lending need to be identified, documented, and, in some cases, explained to individuals affected by those decisions.

For organizations, this shifts privacy into product design, data architecture, and AI governance. Consent flows, data inventories, and automated decision systems all need to be reviewed, tested, and documented as part of ongoing operations.

 

Houston Cougars (Texas): AI Governance With Transparency Requirements

Texas enters as a disciplined contender with a clear identity and a framework focused on accountability and transparency in AI systems.

The Texas Responsible Artificial Intelligence Governance Act took effect on January 1, 2026. Additional proposals address biometric data and AI use in commercial contexts. Requirements include maintaining industry-aligned standards, providing tools to detect AI use, explaining automated decisions, and preventing discriminatory outcomes.

These requirements affect how AI systems are deployed and monitored. For example, a financial services provider using AI to assess creditworthiness needs to document how inputs are used and how decisions are generated. If a customer challenges a decision, the organization must be able to explain the logic behind it.

This creates a need for structured AI inventories and traceability. Teams need visibility into where AI is used, what data feeds those systems, and how outputs are produced. Monitoring must continue after deployment, with processes to identify and address issues such as bias or unintended outcomes.
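The inventory and decision-logging needs described above can be sketched as a single record per AI system, with a trace of each decision attached. Field names here are illustrative assumptions, not terms drawn from the statute.

```python
# Hypothetical sketch of a structured AI inventory entry with a
# decision trace, supporting explainability if a decision is
# challenged. All field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str              # e.g. "creditworthiness assessment"
    data_sources: list        # what data feeds the system
    owner: str                # team accountable for monitoring
    monitored: bool = False
    decision_log: list = field(default_factory=list)

    def record_decision(self, inputs: dict, output, rationale: str):
        """Capture inputs, output, and rationale so a challenged
        decision can be explained after the fact."""
        self.decision_log.append(
            {"inputs": inputs, "output": output, "rationale": rationale}
        )
```

A registry of such records gives teams the visibility the text describes: where AI is used, what data it relies on, and how each output was produced.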

 

Oklahoma Sooners: Emerging Framework With Defined Responsibilities

Oklahoma introduces a new comprehensive privacy framework with clear obligations for both controllers and processors.

Senate Bill 546, signed into law on March 20, 2026, will take effect on January 1, 2027. It applies to organizations operating in Oklahoma or targeting its residents, with thresholds based on data volume and revenue from data sales.

The law requires controllers to limit data collection, implement appropriate security measures, provide clear privacy notices, and conduct Data Protection Impact Assessments for high-risk processing such as targeted advertising or profiling. Processors must support these obligations, including assisting with requests and providing information for assessments.

In practice, this requires coordination between teams. A marketing function running targeted advertising campaigns needs to align with privacy teams conducting impact assessments. Engineering teams need to ensure that systems capture the data required to support those assessments and respond to consumer requests.
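A first step in that coordination is automatically flagging which processing activities trigger an impact assessment. The rule below is a simplified illustration keyed to the high-risk categories named above; actual scoping requires legal review.

```python
# Illustrative rule for flagging activities that need a Data
# Protection Impact Assessment. The trigger set mirrors the
# high-risk categories named in the text (targeted advertising,
# profiling); a real program would scope this with counsel.
HIGH_RISK_PURPOSES = {"targeted_advertising", "profiling"}

def needs_dpia(activity: dict) -> bool:
    """Return True if any declared purpose falls in a high-risk category."""
    return bool(HIGH_RISK_PURPOSES & set(activity.get("purposes", [])))
```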

Enforcement sits with the Attorney General, with civil penalties up to $7,500 per violation following a cure period. This creates a defined window for remediation, but also requires organizations to detect and address issues quickly.

 

Where Does Your Program Stand This Season?

Across these states, regulatory activity is moving into execution. Enforcement is becoming more visible, transparency and explainability are being built into AI requirements, and impact assessments and system inventories are now expected as part of day-to-day operations.

At the same time, differences between states introduce real operational complexity. Organizations need to manage varying requirements without creating fragmented processes across teams, systems, and regions.

This places pressure on how governance is structured. Programs that rely on manual processes or disconnected tools struggle to keep pace as requirements change. Centralizing AI inventories, risk assessments, data mapping, and documentation creates a more stable foundation. It allows teams to update controls, track decisions, and respond to new obligations without rebuilding workflows each time.

Regulations will continue to evolve. The gap between manual compliance and operational governance is becoming more visible.

Explore your position with OneTrust’s March Madness-inspired Privacy Automation Maturity Bracket. Assess how your program performs across DSAR automation, data mapping, risk assessments, vendor oversight, and AI governance, and identify where to strengthen your approach.

Keeping pace with state-level changes also requires continuous visibility into new laws, amendments, and enforcement trends. Discover more on these updates and track evolving privacy and AI regulations with DataGuidance, including detailed analysis, timelines, and practical implications for your program.

 

FAQs

What is the Connecticut Data Privacy Act (CTDPA)?

The CTDPA is Connecticut’s primary privacy law, in force since July 2023. It applies to organizations handling personal data of Connecticut residents and includes requirements for data subject rights, transparency, and opt-out mechanisms, with enforcement led by the Attorney General.

What do the 2026 CCPA updates change?

The 2026 updates focus on operational requirements. Consent must be easy to withdraw, rights requests must cover all data environments including archived systems, and organizations must prepare for risk assessments, cybersecurity audits, and oversight of automated decision-making systems.

What does the Texas Responsible Artificial Intelligence Governance Act require?

This law took effect on January 1, 2026. It introduces requirements for AI governance, including transparency, explainability, and safeguards against bias in AI-driven decisions.

What does Oklahoma’s Senate Bill 546 cover?

Oklahoma’s Senate Bill 546 introduces obligations for controllers and processors, including data minimization, security practices, privacy notices, and Data Protection Impact Assessments for high-risk processing. It will take effect on January 1, 2027.

How can organizations manage differing state requirements?

Organizations need consistent governance across jurisdictions. This includes maintaining AI inventories, aligning risk assessments with system changes, documenting decisions, and ensuring accountability across teams.

Why do AI inventories and risk assessments matter?

They provide visibility into how AI systems operate, support risk identification, and create the documentation needed to demonstrate accountability to regulators.

How can organizations scale AI governance?

By integrating AI governance into existing privacy and risk programs. This allows teams to reuse established processes such as assessments, documentation, and monitoring across both domains.
