On August 27, 2024 we kicked off our second annual DataGrail Summit, focused on the future of responsible innovation. We brought together security, legal and privacy experts for a one-day event dedicated to exploring the importance of collaboration, adaptability and responsibility associated with data privacy and its intersection with artificial intelligence.
Trailblazers including keynote speaker and Facebook whistleblower Frances Haugen, Instacart CISO David Tsao, Anthropic CISO Jason Clinton, VP of Human Risk Strategy at Mimecast Masha Sedova, Chief Legal & Privacy Officer at NETGEAR Kirsten Daru, and more took the stage to discuss personal experiences and share perspectives on topics from forming privacy councils to stress-testing AI. In this recap, we’ll break down session highlights, key themes and actionable takeaways from this year’s event.
Theme 1 – AI shines a light on the collaborative imperative
As AI transitions from a nice-to-have to a strategic business imperative, cross-department collaboration is becoming a core pillar of a successful data privacy strategy.
Kirsten Daru took the stage to discuss the importance of building a strong risk governance council, and began with a hard truth: "Privacy is the most challenging legal discipline in existence." As Frances Haugen noted in her keynote address, "part of what makes this moment hard is we don't have expectations yet." Legal, security and privacy departments are all still trying to determine the full scope of the risks and benefits of incorporating AI into a privacy framework. The danger comes when those groups operate in silos.
Kirsten drew on her past experience developing these councils to lay out the dos and don'ts for 100+ DataGrail Summit attendees. The dos include getting buy-in from your organization's senior leaders, building trust with employees and expecting pushback. The don'ts? Writing policy and attempting enforcement on your own. As Kirsten noted, "At the end of the day, privacy and AI are very hard and we don't have to do it by ourselves."
In a panel moderated by Axios Cybersecurity Reporter Sam Sabin, panelists Shannon Yavorsky (Cyber, Privacy and Data Innovation Leader at Orrick), a Supervising Deputy Attorney General from the California Department of Justice, and Mirena Taskova (Chief Privacy Officer at Aura) highlighted the role regulations play.
While AI governance is gaining traction, Stacey, the panel's Supervising Deputy Attorney General, reiterated to our audience that "just because it's new technology, that doesn't mean old laws don't apply." When asked for advice for security leaders navigating the regulatory landscape, Mirena encouraged those feeling stuck to follow the principles behind why those particular laws exist. Finally, Shannon Yavorsky explained that AI and privacy roles increasingly overlap, making close communication between security and legal teams even more crucial.
Theme 2 – The privacy experience needs an overhaul
Privacy and consent are hard. We heard it time and again that privacy professionals feel overwhelmed by the moment we’re living in right now. Whether it’s because of regulations, managing data subject requests or figuring out AI, it’s easy to feel like you’re stuck in the mud.
Tarun Gangwani (Principal Product Manager at DataGrail), Frances Phillips-Taft (Lead Counsel, Global Data Privacy and AI at Legends), and Ty Sbano (CISO of Vercel) emphasized the importance of being the change you want to see in the consent landscape.
Ty even gave a golden, Gen Z-approved piece of advice: think "sus by default." Always err on the side of caution when prompted to share your data, and only grant permissions to brands you really trust.
Why? Consumer preferences are rarely respected. One statistic cited time and again exemplified the issue: in DataGrail's audit of 5,000 websites, 75% of sites did not respect a user's right to opt out of data collection and/or sharing.
This set the stage for DataGrail VP of Product Eric Brinkman to dive into DataGrail's 2025 product strategy and introduce Unified Choice, which replaces disconnected, piecemeal consent methods with a single, integrated approach for collecting and honoring all consumer privacy choices across the digital experience. As Eric noted, "Privacy is messy. The status quo of spreadsheets doesn't cut it anymore." It's time for a new era of privacy if businesses are going to stay compliant and foster consumer trust. Enter DataGrail.
Theme 3 – Threats are on the rise
Adversaries are just as excited as we are about AI. VP of Human Risk Strategy at Mimecast Masha Sedova laid out the internal and external human-centric risks posed by AI usage. On the internal side, IP and sensitive data leakage or loss tops the list. As Masha put it, "I think it's safe to assume that we should plan for failure in this case." Plan for the eventuality that your employees will disclose sensitive data, and have a process in place for risk mitigation.
As for external threats, Masha highlighted deepfakes and business email compromise as the top malicious AI use cases. Although neither threat is new, they’ve certainly gotten a boost in success rates due to the wide availability of genAI. According to Masha, it’s time to expand what we tell employees to distrust as AI makes attacks easier by the second for adversaries.
In the final panel of the day, moderated by VentureBeat Editorial Director Michael Nunez, the CISOs of Instacart (David Tsao) and Anthropic (Jason Clinton) broke down stress-testing AI. The main takeaway? You must look forward if you want to stay ahead. If you're creating an AI governance framework based only on the models that exist today, you're going to fall behind.
Following the final panel, experts convened for lunch, then broke into roundtable discussions to engage with the following top-of-mind topics:
- Building Cross-functional Councils
- Data Privacy & Security Incident Response Plans
- Data Privacy & Compliance: Where They Intersect
- Convergence of Data Security and Data Privacy
- Health Data: What Exactly Constitutes Health Data?
- Navigating the Sea of U.S. State Privacy Laws
Key Takeaways
We can’t possibly capture the brilliance of Tuesday’s event in just one recap, but we can call out a few key takeaways for security, legal and privacy pros looking to foster responsible innovation:
- Collaboration is key: You don’t have to navigate this landscape on your own. Team up with other department leaders to build trust, ideate on solutions to pushback and establish a path forward.
- Set realistic expectations: AI and privacy are relatively new ball games for everyone from the C-Suite to interns. Be patient with your expectations, and know that your privacy strategy will inevitably need to iterate as AI evolves.
- Expect and embrace pushback: With great change often comes great hesitancy. Embrace criticism and engage with those who want to see your organization better itself.
- Instill ‘sus by default’: Educating your employees on the latest threats associated with AI will go a long way in protecting your company from preventable privacy headaches. Take the time to teach your employees what to look out for, and thank yourself later.
- Responsible innovation requires a privacy-centric approach: Center the consumer in every decision you make. We’re all consumers here, whether you’re a chief privacy officer or a one-time website visitor. Think about how you’d want your data to be handled and how you expect to be treated as a consumer…chances are, your next steps will become clear.
To view or re-watch this year’s presentations, head here for on-demand access.
Stay informed about upcoming DataGrail events, webinars, and updates by following DataGrail on LinkedIn and subscribing to our newsletter.