AI Governance

What Regulators Want From AI Governance

Luna Khatib - March 31, 2026

The “AI is unregulated” argument is officially retired.

In 2025, regulators stopped signaling and started acting. State attorneys general settled cases. The FTC launched enforcement sweeps. The EU activated compliance deadlines. Courts started ruling. And in 2026, the scrutiny is only intensifying, even as the federal versus state regulatory picture stays complicated.

If your organization is building, or even just using, AI tools, the question is no longer whether regulators will look your way. It’s what they’re looking at when they do, and whether your team is actually prepared for what they’ll find.

Here’s a practical breakdown of what’s drawing the most regulatory attention right now, paired with how privacy and legal practitioners are building programs to stay ahead of it.

What are regulators actually looking at?

Whether your AI claims are actually true

This one catches more organizations off guard than it should.

In September 2024, the FTC launched Operation AI Comply, a targeted enforcement sweep against companies making unsubstantiated or misleading claims about their AI products. The initiative has continued into 2026 under the current administration, signaling that this is an area where enforcement is not going away.

The FTC’s cases have been instructive:

  • One company marketed its AI content detection tool as “98% accurate.” FTC testing put actual accuracy at approximately 53%.
  • Another sold subscriptions to an “AI lawyer” service that didn’t come close to delivering on its promises.
  • Others targeted consumers with guarantees of thousands of dollars monthly in passive income through AI-powered storefronts, resulting in tens of millions of dollars in consumer losses.

The pattern is clear: promises about AI performance, accuracy, automation, and earnings are being treated as verifiable claims. If you cannot substantiate them with real-world evidence, you are exposed. 

If you’re leading AI governance for your organization, this means you need to work closely with the marketing team to substantiate any public claims.
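
What does substantiation look like in practice? One useful pattern, sketched below on placeholder data, is to treat every public accuracy figure as a measurement: evaluate the product on an independently labeled test set and report a confidence interval, not just a point estimate. The dataset, sample sizes, and numbers here are hypothetical.

```python
import math

def accuracy_with_ci(predictions, labels, z=1.96):
    """Accuracy on a labeled test set, plus a 95% normal-approximation
    (Wald) confidence interval on the true accuracy."""
    n = len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    acc = correct / n
    half = z * math.sqrt(acc * (1 - acc) / n)
    return acc, max(0.0, acc - half), min(1.0, acc + half)

# Placeholder evaluation data: 500 independently labeled samples for a
# detector marketed with a specific accuracy claim.
preds = [1] * 200 + [0] * 300
truth = [1] * 250 + [0] * 250
acc, lo, hi = accuracy_with_ci(preds, truth)
print(f"measured accuracy: {acc:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

If the lower bound of that interval doesn’t support the marketing number, the claim isn’t substantiated.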

What data you used to train your models, and whether you disclosed it

Regulators are increasingly interested in the front end of the AI lifecycle: not just what the model does after deployment, but what data went into building it.

Under California AB 2013, effective January 1, 2026, developers of generative AI systems are required to publicly disclose information about their training data, including whether datasets contain personal information, copyrighted material, or licensed content. The EU AI Act similarly requires providers of foundation models to publish detailed training data summaries.

Privacy regulators have also raised a question with significant technical and legal implications: if a user’s personal data was used to train a model, is deleting their record from a database sufficient compliance? Not necessarily. While trained models do not typically store data in a directly retrievable form, research shows that under certain conditions some models memorize training data or allow limited extraction of it. That leaves a gap between traditional data deletion practices and emerging expectations for machine learning systems, one many organizations are still working to close.
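
As a concrete illustration of where disclosure obligations like AB 2013 push organizations, many teams maintain a structured provenance record for each training dataset. Here’s a minimal sketch; the field names are illustrative, not the statute’s required format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TrainingDatasetRecord:
    """Hypothetical provenance record for one training dataset.
    Fields are illustrative, not a statutory schema."""
    name: str
    source: str                        # where the data came from
    collected: str                     # collection window
    contains_personal_info: bool       # a category AB 2013 asks about
    contains_copyrighted_material: bool
    license_basis: str                 # e.g. purchased, first-party, CC-BY
    used_in_models: list = field(default_factory=list)

record = TrainingDatasetRecord(
    name="support-tickets-2024",
    source="internal customer support platform",
    collected="2024-01 to 2024-12",
    contains_personal_info=True,
    contains_copyrighted_material=False,
    license_basis="first-party data, covered by customer terms",
    used_in_models=["ticket-triage-v2"],
)
print(json.dumps(asdict(record), indent=2))
```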

For a deeper look at how AI intersects with data privacy obligations, including the challenge of data deletion in trained models, this guide covers the full landscape.

Whether your AI systems make decisions that affect people

High-risk AI has been in regulatory crosshairs for years, but 2026 is when many of the relevant compliance frameworks actually bite.

Upcoming AI Compliance Deadlines:

  • Already in effect: New York City’s Local Law 144 requires bias audits and public disclosure for AI tools used in employment decisions.
  • June 30, 2026: The Colorado AI Act takes effect, extending protections against algorithmic discrimination to consumers statewide.
  • August 2, 2026: The EU AI Act reaches full enforcement for high-risk AI systems, covering employment, credit, essential services, and healthcare.

What’s important to understand is that regulators aren’t waiting for AI-specific laws to mature. They’re applying existing consumer protection, fair lending, and civil rights frameworks to AI-assisted decisions right now. The fact that a human made the final call offers very limited shelter if an AI system materially influenced an outcome that harmed someone.

If your AI touches any of these domains, risk assessments, bias testing, and documented human oversight aren’t optional. They’re the baseline.
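
NYC Local Law 144’s bias audits, for example, are built on impact ratios: each group’s selection rate divided by the highest group’s selection rate. The sketch below computes them on placeholder data; the 0.8 flag is the EEOC four-fifths rule of thumb, a screening heuristic rather than a legal threshold.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (group, selected_bool). Returns each group's
    selection rate divided by the highest group's selection rate, the
    impact-ratio construction used in LL 144-style bias audits."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, sel in outcomes:
        totals[group] += 1
        selected[group] += int(sel)
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes from an AI resume-ranking tool.
data = [("A", True)] * 40 + [("A", False)] * 60 \
     + [("B", True)] * 25 + [("B", False)] * 75
for group, ratio in impact_ratios(data).items():
    flag = "  <- review" if ratio < 0.8 else ""   # four-fifths heuristic
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```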

How you handle AI chatbots, especially around vulnerable users

This is one of the fastest-moving areas of regulatory attention in 2026, and one that tends to catch organizations off guard.

Regulators across the political spectrum have zeroed in on AI-powered chatbots, specifically around data collection practices, model training on user interactions, retention policies, and protections for minors. It’s not just consumer-facing chatbot companies in the crosshairs. Any organization whose AI product could be accessed by or interact with minors is facing heightened scrutiny.

Multiple states, including Pennsylvania, Michigan, and Washington, have enacted or proposed disclosure requirements, crisis-response protocols, and tighter interaction controls in this space. If this touches your product or your customers, it deserves dedicated legal review now, not when the next enforcement action lands.
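
To make the expectations concrete, here is a minimal, hypothetical guardrail layer reflecting two controls that recur in these state bills: an upfront AI disclosure and a crisis-response escalation path. The keyword list and canned messages are placeholders, not a clinical screening tool.

```python
# Hypothetical guardrail layer; terms and messages are illustrative.
CRISIS_TERMS = {"suicide", "kill myself", "self-harm"}
DISCLOSURE = "You are chatting with an AI assistant, not a human."
CRISIS_RESPONSE = ("It sounds like you may be going through something serious. "
                   "In the US, you can call or text 988 to reach trained counselors.")

def respond(user_message: str, model_reply: str, first_turn: bool) -> str:
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return CRISIS_RESPONSE          # escalate instead of a normal reply
    if first_turn:
        return f"{DISCLOSURE}\n\n{model_reply}"
    return model_reply

print(respond("what's your refund policy?", "Refunds take 5-7 days.", first_turn=True))
```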

Whether your AI governance program is documented and defensible

Across all of these areas, there is one consistent thread: regulators aren’t just asking what happened. They’re asking what controls existed beforehand.

What they want to see is documented policies, tested controls, completed risk assessments, audit trails, and clear accountability for who owns AI governance decisions inside your organization. “We have an AI ethics statement” is not a compliance program. “We conduct documented risk assessments before deploying new AI tools, maintain an inventory of all AI systems in use, and have a named owner for governance decisions” is closer to what holds up.
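
A simple way to see the difference: a defensible program can produce records like the hypothetical inventory below and surface its own gaps on demand. Field names are illustrative.

```python
from datetime import date

# Hypothetical AI system inventory; fields are illustrative.
inventory = [
    {"system": "resume-screener", "owner": "HR Ops / J. Doe",
     "risk_assessment_completed": date(2025, 11, 3), "high_risk": True},
    {"system": "support-chatbot", "owner": None,
     "risk_assessment_completed": None, "high_risk": False},
]

def governance_gaps(inventory):
    """Flag entries that would be hard to defend to a regulator:
    no named owner, or no completed risk assessment."""
    for entry in inventory:
        problems = []
        if not entry["owner"]:
            problems.append("no named owner")
        if entry["risk_assessment_completed"] is None:
            problems.append("no documented risk assessment")
        if problems:
            yield entry["system"], problems

for system, problems in governance_gaps(inventory):
    print(f"{system}: " + "; ".join(problems))
```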

If you’re figuring out where to start, the question of who owns AI governance inside your organization is worth resolving first.

How privacy and legal teams are operationalizing for regulatory scrutiny

Knowing what regulators want to see is one thing. Building a program that can actually demonstrate it is another. Privacy and legal practitioners navigating this in real time are wrestling with a few recurring challenges.

The organizational alignment problem

Turning AI governance principles into day-to-day processes rarely goes smoothly out of the gate. Who is responsible for a given AI system’s data practices? The team that procured it? The team that uses it? Legal? Privacy? Security? When no one has a clear answer, the work falls through the cracks.

Frameworks like ISO/IEC 42001, the EU AI Act, and the NIST AI Risk Management Framework offer useful scaffolding. But for situations involving underrepresented data, where standard fairness metrics break down due to small or skewed samples, or for third-party model drift, where vendor updates silently shift model performance without triggering a contracting event, no framework gives you a clean answer.
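
The third-party drift case, at least, has a common monitoring pattern: freeze a baseline of the vendor model’s output distribution and periodically compare current behavior against it. Below is a minimal sketch using the population stability index (PSI), with illustrative histograms; the thresholds in the comment are the usual practitioner heuristics, not a standard.

```python
import math

def psi(baseline_counts, current_counts, eps=1e-6):
    """Population stability index between two binned score distributions:
    sum over bins of (cur% - base%) * ln(cur% / base%)."""
    base_total = sum(baseline_counts)
    cur_total = sum(current_counts)
    total = 0.0
    for b, c in zip(baseline_counts, current_counts):
        p = max(b / base_total, eps)
        q = max(c / cur_total, eps)
        total += (q - p) * math.log(q / p)
    return total

# Hypothetical weekly histograms of a vendor model's scores (10 bins).
baseline = [120, 200, 260, 180, 90, 60, 40, 25, 15, 10]
current  = [ 60, 120, 180, 200, 160, 110, 80, 50, 25, 15]
# Common heuristic: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
print(f"PSI = {psi(baseline, current):.3f}")
```

The harder cases, like fairness metrics on underrepresented data, don’t reduce to a monitor like this.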

What fills the gap is your people: small, cross-functional working groups of data scientists, legal, privacy, and product leads that meet regularly to surface edge cases, apply quick fixes, and keep human judgment as the final arbiter. This is where responsible AI culture is actually built: at the team level, not the policy level.

The shadow AI problem is bigger than most organizations realize

It’s not just standalone AI tools that matter. Many vendors are embedding AI capabilities into their existing offerings and simply turning those features on, often without any new contracting event, without notifying their customers, and without any visibility for the privacy or security team on the receiving end. A project management tool adds an AI summarization feature. A CRM activates predictive lead scoring. A communications platform deploys an AI writing assistant. None of these trigger a purchase order. All of them may be processing personal data.

This is precisely why system detection is a foundational capability in any serious AI governance program. You cannot govern what you cannot see. The organizations that feel most confident in their AI oversight are the ones that have invested in continuous, automated discovery of the AI systems actually in use, not just the ones they intentionally procured.
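
What does that discovery look like? One signal, sketched below with illustrative log lines and a hand-picked domain list, is scanning egress or DNS logs for traffic to known AI service endpoints and diffing the hits against the approved inventory. Real discovery programs layer several such signals (SSO logs, browser telemetry, vendor questionnaires).

```python
# Illustrative domain list and log format; real tooling is broader.
AI_SERVICE_DOMAINS = {
    "api.openai.com": "OpenAI API",
    "api.anthropic.com": "Anthropic API",
    "generativelanguage.googleapis.com": "Google Gemini API",
}

def unapproved_ai_traffic(dns_log_lines, approved_systems):
    """Yield (domain, service) pairs seen in the logs that are not
    on the approved AI system list."""
    for line in dns_log_lines:
        for domain, service in AI_SERVICE_DOMAINS.items():
            if domain in line and service not in approved_systems:
                yield domain, service

logs = [
    "2026-02-03T10:12:04 host-42 query api.openai.com",
    "2026-02-03T10:12:09 host-17 query example.com",
]
for domain, service in unapproved_ai_traffic(logs, approved_systems={"Anthropic API"}):
    print(f"unapproved AI traffic: {service} ({domain})")
```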

What “working” AI oversight actually looks like in practice

There’s no single metric that proves AI governance is functioning. But the signals are consistent: cross-functional teams that talk to each other before AI tools get deployed, not after. Risk assessments completed and documented in advance. A vendor management process that extends to AI capabilities embedded in third-party tools. And audit trails that can answer a regulator’s question, or a CEO’s, within hours rather than weeks.

The organizations getting this right are treating privacy and AI governance as two sides of the same obligation. AI systems generate value from personal data. The principles of consent, transparency, data minimization, and rights fulfillment that define modern privacy practice apply directly to how AI tools are procured, deployed, and monitored.

Brand reputation depends on minimizing risk and exceeding consumer expectations. That’s as true for AI governance as it is for any other dimension of your privacy program.


The practitioner insights throughout this post were shaped by ongoing conversations inside the Privacy Roundtable. If you’re a privacy, legal, or security professional working through AI governance in practice, not just in theory, join Privacy Roundtable to get in on the conversation.
