IAPP GPS 2026 Recap: Three Insights About Where Data Privacy Is, and Where It's Headed
Every year, the IAPP Global Privacy Summit (“GPS”) gives the data privacy industry a chance to look at itself in the mirror. This year, the reflection was a little more complicated than usual.
Held March 30 through April 2 in Washington, D.C., GPS 2026 drew a crowd that felt, by most accounts, somewhat smaller than in years past. Across panels, networking lunches, and conference floor chatter, attendees cited one major reason: shrinking budgets. Budget cuts have already hit headcount and subscription renewals, and now they are reaching professional development.
That tension played out across the conference as attendees navigated the best ways to scale their privacy operations while advancing their mission as privacy leaders. The ideas that surfaced at GPS 2026 cut to the heart of where privacy is right now: caught between an industry vision of AI that is still sorting itself out, and the ground-level reality of practitioners who have already decided to move.
1. AI Governance Has Officially Moved From Conversation to Crisis
2025 GPS sessions explored questions like “Who needs a seat on an AI governance committee?” and “What should a responsible AI policy include?” In 2026, the rubber met the road. Sessions instead covered what to do when those aspirational programs start to fail.
Attendees joined numerous breakouts to discuss how legacy privacy controls like point-in-time reviews were never designed for systems that can retrain and evolve after deployment. Static governance for dynamic systems does not hold, and what worked for a legacy privacy program can’t scale at even the smallest company once AI joins the picture.
Nobody walked out of these sessions with a clean answer. That honesty was, in its own way, the most useful thing on offer. The field is early, the problems are real, and the organizations that invest now in purpose-built governance infrastructure, rather than retrofitting general-purpose tools after the fact, will be in a considerably better position when the regulatory expectations catch up to the operational reality.
And regulators are not waiting for privacy teams to catch up. During a panel of regulators, Connecticut Deputy Associate Attorney General Michele Lucan declared AI a major enforcement priority for the state. Organizations must ensure transparency around AI training data and the opportunity to appeal profiling decisions to avoid fines in the next phase of enforcement.
2. Thought leaders are saying pump the brakes. Business leaders are hitting the accelerator.
The clearest signal from this year’s summit was a striking mismatch between what the main stage had to say about AI and what was happening throughout the rest of the conference.
Keynote speaker Woodrow Hartzog delivered a pointed critique of AI’s trajectory, making the case that the privacy community had a responsibility to pump the brakes. It was a bold position in a room full of people who had already decided to accelerate.
GPS had a schedule packed with hands-on workshops and panels teaching attendees how to use Claude or ChatGPT to advance privacy work. On the vendor floor, nearly every booth advertised new AI features, and attendees crowded booths to excitedly evaluate the offerings.
The people whose inboxes are full of data subject requests, whose drives are full of lost risk assessments, whose search history is a string of one-off queries for cookie definitions, and who haven't seen a budget increase in years: they need help, and they see AI's potential to close the gap.
Is AI a threat or a lifeline? Realistically, you can’t answer that question without truly knowing and understanding the AI you’re working with. In DataGrail’s case, Vera is not an AI agent that operates in the background, making decisions without oversight. It is a human-governed AI agent, designed specifically for privacy workflows, built on a security-first architecture, and supported by the 2,500+ integrations that make real automation possible across the full complexity of a modern enterprise tech stack.
3. One thing everyone agrees on? Protecting kids.
The conference was full of controversy, even among its own keynote speakers. Prince Harry advocated for the importance of privacy, reflecting on his own experience with tabloid surveillance. A day earlier, author Salman Rushdie explored the life-or-death circumstances where privacy isn't the priority. Reactions to the two starkly different messages were mixed.
And yet, one keynote received unanimous accolades. Alyson Stoner, the actor and advocate who grew up in the public eye as a child performer, delivered what was widely regarded as the highlight keynote of the summit. Their talk connected a deeply personal story, growing up in an industry that treated child privacy as an afterthought, to a clear and practical message for the privacy professionals in the room: this is why the work matters. The response was emotional, genuine, and immediate. It reminded a room full of people who often work in the abstract, in compliance frameworks and regulatory definitions, what the human stakes of privacy actually look like.
Reactions to the keynote mirrored the current political landscape on privacy. The FTC has made children’s privacy a stated enforcement priority for 2026, with Commissioner statements at GPS reinforcing a focus on children’s data practices. A little over a week before the conference, the White House proposed a set of AI policy recommendations emphasizing children’s safety. The bipartisan appetite for children’s online safety legislation has been one of the few consistent signals in an otherwise fractured federal privacy landscape.
This is not a new observation, but GPS 2026 made it feel more urgent. Children’s privacy has moved from a niche concern into a consensus issue, and organizations that have not mapped their data practices against that reality are increasingly exposed.
The Takeaway
GPS 2026 was a conference in productive tension with itself. The stage was cautious. The floor was ready. Children's privacy emerged as a consensus priority that cuts across politics, professional discipline, and company size. Budget pressure is real, but it is accelerating demand for automation rather than killing it. And the market has made its choice about AI: privacy professionals are already racing to put it to work.
The work now is to earn the trust that the keynote speakers are right to ask for. AI agents in privacy need to be governed carefully, operated transparently, and built on a foundation that can actually connect to the places where personal data lives. That is what separates an agent that makes a real difference from one that just makes a good demo.
Vera is purpose-built for that responsibility. If you want to see what human-governed, AI-powered privacy automation looks like in practice, we are happy to show you.
DataGrail exhibited at IAPP GPS 2026 in Washington, D.C.