2025 U.S. & Europe: Data Privacy Trends & Insights
Globally, governments are racing to update or enact regulations to keep pace with fast-moving technologies, safeguard individual rights, and strengthen data security. In the U.S., the absence of federal privacy legislation is driving activity at the state level, with 19 state laws on the books as of June 2025 and more to come. Attorneys general in states with privacy laws are flexing their enforcement power through a flurry of recent high-profile actions against major brands. Europe’s privacy laws are firmly in force, and regulators are stepping up aggressive crackdowns. Bottom line: with enforcement actions making headlines, expect more and tougher penalties to come.
United States
- State-level data privacy enforcement actions are on the rise. Though precise statistics are not available, several indicators point to a significant escalation in enforcement activity, meaning that beyond big tech, consumer brands large and small will be held accountable. The past year has seen several high-profile enforcement actions over data privacy violations: Texas fined Google ($1.375 billion) and Meta ($1.4 billion), and California fined Honda Motor Company ($632,500) and retailer Todd Snyder ($345,000), to name just a few.
- Data Protection Assessments (DPAs): Ten U.S. states now require DPAs for certain high-risk processing activities, ranging from targeted advertising to sensitive data use. Starting in July 2025, Minnesota’s new law will mandate not just the completion of assessments but detailed documentation explaining how the data use aligns with internal privacy policies on fairness, transparency, data minimization, and security. The CPPA’s proposed rules in California would significantly expand the DPA scope to include automated decision-making and AI training purposes, even when not explicitly required under other state laws.
- AI & Automated Decision-Making: AI governance is becoming a top compliance focus. Colorado’s AI Act, effective January 2026, will be the first U.S. law requiring companies to implement a formal AI risk management program, conduct annual risk assessments, publish online descriptions of high-risk AI use cases, notify consumers when AI is involved in significant decisions (like hiring or lending), and report bias-related incidents to the Attorney General.
- California’s draft Automated Decision-Making Technology (ADMT) regulations will likely require detailed assessments, consumer notices, and individual rights to opt out or access decisions made by AI — particularly for profiling, surveillance, or training purposes. These developments suggest that privacy, compliance, and engineering teams must now coordinate to ensure AI use aligns with evolving legal standards.
European Union (EU) & United Kingdom (UK) regulators continue to pursue businesses that fall out of compliance:
- Moves by the EU to ease GDPR: The EU is moving to ease GDPR rules to boost business competitiveness, especially for SMEs, through a broad deregulation package that raises compliance exemptions and relaxes administrative tasks like cookie consent. Though more than 100 civil society groups warn this could weaken accountability and rights protections, the EU aims to balance privacy with simpler, more flexible enforcement.
- Court rulings and enforcement actions: In the EU, regulators have been highly active in enforcement. Ireland’s Data Protection Commission (DPC) fined TikTok €530 million for transferring user data to China without proper safeguards. The Dutch DPA issued major fines, including €290 million to Uber for unlawful driver data transfers and €4.75 million to Netflix for failing to fulfill users’ access requests. LinkedIn was fined €310 million for unlawful data processing, and Clearview AI received a €30.5 million fine for unauthorized biometric data collection. In the UK, enforcement has been more restrained. While the Information Commissioner’s Office (ICO) influenced LinkedIn to improve its AI practices and child safety measures, no major fines were issued in that case.