Moderated Panel

Privacy 2.0: Uncovering What’s Next for Data Privacy

Anna Westfelt, Head of Data Privacy, Gunderson Dettmer
Rick Arney, Co-author of CCPA
Ryan O'Leary, Data Privacy Analyst, IDC

The only constant in data privacy is change. New global and U.S. privacy laws emerge and evolve constantly. Consumer expectations are high and their privacy knowledge is maturing. Join leading privacy experts to unpack major business concerns, trends, and shifts companies should be aware of in the rapidly evolving space.


Ryan O'Leary (00:12):

Thanks for joining us today. I appreciate it. As Mary said, I'm Ryan O'Leary. I'm the research director at IDC who heads up our privacy and legal technology coverage area. I spend my days covering the shifting regulatory landscape as well as advising end users, vendors, anyone who will basically listen to me, on how to navigate these waters. Prior to joining IDC, I was managing mass torts e-discovery at General Electric. I'm a recovering attorney at this point, so I definitely echo Alex's statement about maybe avoiding law school. But I'd like to give a moment for my panelists to introduce themselves. Rick and Anna, if you'd like to go ahead and introduce yourselves.

Anna Westfelt (00:50):

Absolutely. Thank you, Ryan. My name is Anna Westfelt. I'm a partner at Gunderson Dettmer and I head up our data privacy group. We work with startup companies all the way from pre-formation through financings, mergers, acquisitions, and IPOs, so we really see the day-to-day matters that startup companies and emerging companies deal with when it comes to data privacy. We also represent venture capital funds, so we look at it from the other side: what issues are really concerning for investors. I do want to mention that the views I represent here today are my own and not necessarily those of my organization.

Ryan O'Leary (01:28):

Of course. Rick?

Rick Arney (01:30):

Thanks. Thanks for having me. Rick Arney. I was a co-author of CCPA and CPRA. I'm actually a finance guy for my day job, but I do ballot initiatives. For those of you who live in California, this is a system we have where you can write a law yourself, collect a million signatures, put it on the ballot, and with a 50%-plus-one vote it becomes law. That's what CPRA is. That's what CCPA was. I'll get into a little bit of the story behind those two, but I am not a lawyer, actually. We have an esteemed lawyer here with us, but I do actually write laws for fun. I did write a significant portion of CCPA and CPRA, and I look forward to sharing with you a little bit about how that came about and what I think is perhaps going to happen going forward with respect to regulation writing.

Ryan O'Leary (02:21):

So before we get into our discussion, I think it's important, before we take a look at privacy 2.0, to take a look at where we are right now in privacy 1.0. This is a survey that I did at the end of last year, interviewing a number of folks across the globe: the United States, the UK, Germany, and a few other countries. All of these folks had an impact on privacy within their organizations, and we asked them what the top challenges are for managing data privacy across their organization. The first one, which Alex just touched on and which we in legal know very well, is trying to help drive regulatory compliance while not being seen as a hindrance to actual revenue generation. It's a huge challenge and one that is going to continue to be a struggle for the foreseeable future, regardless of how many regulations come about.

(03:22):

And then I think everyone in this room knows the variety of regulatory frameworks, the Frankenstein patchwork we have for every multinational operating in the United States and the EU, and for organizations that are even just in the United States and have multi-state regulations to manage. It's a huge challenge. And it really ties into the third bullet up there, the third-highest pain point, which is that people and organizations don't really have an understanding of where their data is. I talk about this all the time as a data closet: if you don't clean out your closet and organize it every once in a while, you can't find your shoes or your shirt when you need them. That's really what's going on here. You have disparate systems across organizations, organizations that started on-premises, went to the cloud, and some that are now going back to on-premises depending on where they are in the world.

(04:21):

And 95% of enterprise data, according to IDC statistics, is unstructured. When you don't have data classification and data discovery capabilities to really manage this and understand where that sensitive data is, you're exposing your organization to a huge amount of risk. It's really the building blocks of data privacy that we haven't managed yet. And that feeds into other pain points like data portability. The right to data portability, the right to give people their data when they ask for it, is a huge challenge: when you can't find your shoes in the closet, you can't give them their shoes back. The same goes for consent management, dynamic consent across organizations, understanding and managing the data that underlies that consent. I mean, how many of the regulatory fines, from Sephora to Meta, have been related to a lack of consent management or to not allowing people to exercise their consent? These are huge challenges that we still have not solved as we go further and further into data privacy compliance.
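As an illustrative aside, here is a minimal sketch of the kind of pattern-based scan that data discovery and classification tools run over unstructured files to surface sensitive data. The directory path and regex patterns are assumptions for illustration only; real products add ML-based classifiers, connectors to cloud and SaaS stores, and lineage tracking on top of something like this.

```python
import re
from pathlib import Path

# Illustrative regexes for common sensitive-data patterns (assumptions, not exhaustive).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_file(path: Path) -> dict:
    """Count matches for each sensitive-data pattern in one text file."""
    text = path.read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

def scan_tree(root: str) -> dict:
    """Walk a directory of unstructured files and report where sensitive data appears."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        hits = scan_file(path)
        if any(hits.values()):
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    # Hypothetical file share; in practice this would be cloud storage, mailboxes, SaaS exports, etc.
    for file, hits in scan_tree("./shared_drive").items():
        print(file, hits)
```

Knowing which systems and files contain personal data in this way is what makes the portability and consent obligations discussed above tractable at all.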

(05:30):

I mean, it's been five years since GDPR and we're still struggling with these things, and I believe you all get a copy of this data, so don't feel like you need to frantically copy down these percentages. One last thing I wanted to touch on before we really get into our discussion is the big, scary data subject access request that was supposed to revolutionize the world. That was the thing covered in 2018 that was going to cripple companies. We haven't really seen any enforcement on that. I asked the same question in a survey in 2020 and again in this one. In 2020, organizations were getting 135 data subject access requests per month. That number is now 2,061 per month. It may not have started in 2018, but it's certainly coming now; the sky is starting to fall a little bit. Now, are all of those 2,000 data subject access requests valid? No, but you still need to sift through them, manage them, and so on. But we haven't really seen enforcement on data subject access requests.

(06:33):

We've seen enforcement on consent management, lawful basis of processing, things like that. But the data subject access request is still kind of the next frontier. I think that's where I want to start our discussion with the rest of the panel: you have these large patchworks of regulations that apply to large swaths of the United States, the EU, India, Australia, all these places, but you only have so many regulators. In that environment, where there are only so many cops on the road pulling people over, Anna, how do you advise your clients on risk appetite, risk calculations, things like that?

Anna Westfelt (07:13):

Absolutely. I agree that we haven't actually seen much action on the DSAR enforcement front, but many of us also know that the regulators, especially in Europe, are incredibly understaffed and backlogged. They have received a lot of complaints relating to data subject requests, and they may get around to investigating them at some point. That does keep a lot of us up at night, knowing that enforcement could be coming. The prospect of regulatory action is a big driver of compliance efforts, but for many of my clients there are so many other drivers. If you are a venture-backed company, or you're looking to be one, it's almost impossible to get through several rounds of financing now without having a solid privacy and security program in place. If you have had data breaches that show you have not taken security seriously, investors are going to be very concerned and may not be interested in proceeding with you through your life cycle.

(08:15):

We see customers as drivers of privacy compliance. As a vendor, you have to take on incredible risk in your contracts when it comes to data breach risk and privacy compliance. If customers aren't comfortable that you've done what you need to do and that you're going to take the safety and security of their data seriously, they may not want to sign your deals. And of course, public perception is a huge driver of privacy compliance. You really want to be seen as a company that takes privacy seriously. At this point, privacy and security are board-level concerns. This has really shifted, I want to say in the last five years or so: companies' boards are very interested in what companies are doing on privacy. They will hold companies accountable, they will ask for regular reporting. They really want to know that a company is doing what its peers are doing and protecting against the risks that can arise if there is privacy or security noncompliance. There are so many drivers beyond just regulatory action.

Ryan O'Leary (09:19):

Well, I mean we've seen that in our research at IDC in terms of trust. We have a huge focus on CEOs driving trusted practices to drive revenue. Rick, from the regulatory standpoint and the regulator's point of view, how are you expecting enforcement to continue and how are you helping to budget for that enforcement when that's a chief concern of the populace?

Rick Arney (09:45):

Thank you. Anna set up very well what goes into the calculus of whether a firm would like to really go deep into compliance or not on a certain topic. And you're right, it comes back to the customers, the company, the pressure they may be receiving from their suppliers, et cetera. But on the regulatory front, with respect to California, it's still early days. I mean, if you look on the web, you'll see that the California Privacy Protection Agency, otherwise known as the CPPA, all these acronyms, is currently hiring for enforcement people. If anyone wants to change careers and go into privacy enforcement, let me know. They're actively hiring right now, and it's very early days.

Ryan O'Leary (10:25):

Do we get a cut of the fine?

Rick Arney (10:28):

Maybe not. In terms of the genesis of enforcement for CPRA, I can tell you what happened behind the scenes: roughly $10 million is spent by the FTC to regulate privacy for the whole country. When we were authoring CPRA, we had to decide how much money to endow the law with, and there are a lot of incentives that go into that question. You still have to go to the voters, and the voters are told in the ballot pamphlet what the expense of a law is, but we also wanted this law to have some teeth, because one thing we realized in writing it is that there are tons of laws that are just frankly not enforced. What we decided was that we would match, for California alone, the FTC budget that is used for the whole country. The baseline budget for California's CPPA for enforcement is $10 million, and it's allowed to go above that if the legislature decides to do so. That's a quite large commitment to enforcement.

(11:31):

The reason you're not feeling it just yet is that it's actually not completely up and running. They're still hiring people, but the attorney general of California and the CPPA are both endowed with the ability to enforce these laws. My forecast is you'll see a lot more of that going forward. You obviously saw what happened with Sephora. There'll be more things like that, and as those happen, there'll be certain prioritization of what they enforce, which we can get into later, but I would stay tuned and think about the relative allocation of resources. One last point: the FTC has struggled. Let's be clear. A lot of the lawsuits they've brought so far have failed. That's partially because of legal issues: the laws just aren't that strong, and the powers the FTC is endowed with aren't that strong. We tried to fix that problem and add more dollars to it.

Ryan O'Leary (12:25):

Well, the FTC has to wiggle their way into these investigations. They have to find creative ways to enforce the laws they have under their power, or they don't have a real privacy remit, as it were. I think it's important to go back as we talk about the federal government. When you were drafting CCPA, California was the fifth-largest GDP in the world, basically its own nation-state. Did you feel a responsibility to drive the country forward on data privacy regulation? How far can California itself drag the federal government?

Rick Arney (13:03):

The answer is yes. One of the neat things we have in this state is that our initiative process does often lead the way for regulation across the country, and it later gets picked up by the federal government. There are many examples of initiatives in California that get passed, and people are like, "Whoa." They tend to get picked up by other states in a weaker, or sometimes, very rarely, stronger format, and then eventually get picked up by Congress. Our intent was to follow the same process, frankly, and we're seeing that play out. Most of the laws that have passed across the states are weaker than CPRA, primarily because they're legislatively passed. When you have a legislatively passed law, from my perspective, it's going to be weaker almost by definition. Now you are seeing a lot of debate about, "Okay, is the federal government going to act now or not?" There's certainly a big push for it, and I understand the push because no company likes to comply with 50 different state laws, although that's just the system we have.

(13:58):

My forecast, I'll just get into it right now: with respect to the federal government, I just don't see it happening yet. The debates are too hot with respect to the private right of action and preemption. The idea there being that the federal government can preempt a state law; they can do it in a way that gets rid of that state law or goes in a weaker direction, and that is what the current proposals do, from my perspective. And then there's the issue around enforcement: will this include the ability of consumers to enforce it themselves, the private right of action? That's the nexus of the debate happening right now, and the problem is it's kind of a null set, so it hasn't led anywhere just yet. One thing I'll add is that there are 55 congresspeople in California, and every single congressional district, from the most left to the most right, voted yes on CPRA by a majority, and those congresspeople know that.

(14:44):

And if there's one thing that keeps congresspeople awake at night, it's polls. But an actual vote by the voters of your own district carries even more weight for them. That bolsters a lot of what happens in Congress, from my perspective, and it will give us a seat at the table with respect to making sure that if there is a nationwide standard, which by the way I am for, California at least becomes the floor and not the ceiling, or something weaker.

Ryan O'Leary (15:16):

Anna, when you are navigating this patchwork of laws and advising clients who are waiting for a federal law, do your clients want a federal private right of action to be able to better defend or enforce these laws? How are they feeling about the ability of private citizens, or private companies, to sue under these laws?

Anna Westfelt (15:42):

Absolutely. The private right of action is a really interesting issue. It's very unpopular. As we see, most of the state privacy laws do not have one. California has a limited private right of action for data breaches involving certain kinds of sensitive information. Washington's My Health My Data Act has a private right of action, but many state bills that had a pretty strong private right of action did not pass because it is extremely unpopular, and I think it is going to be difficult to pass a federal law that has a blanket private right of action for violations of all the rights. It's extremely unpopular with the industry, and the industry is very, very powerful. Whether a private right of action is a good thing or not is an interesting question. In my view, it really benefits only the plaintiffs' lawyers.

(16:34):

As we've seen with some state laws that have a private right of action, for example the Illinois Biometric Information Privacy Act, which has a private right of action and statutory damages, an incredibly active class action industry has grown up around that. I don't think that benefits consumers. I don't think it increases the privacy of consumer data. I really think it just gives rise to the class action industry. We did see, when the CCPA first came into effect, that the AG said they wouldn't be able to bring more than three to five enforcement actions a year. And the AG at the time was supportive of an amendment adding a private right of action, which we're all really grateful did not happen, because, for many of the reasons I stated, we don't think that is the right thing for a privacy law. I think we have a very different landscape now: we have the agency, and possibly other states will look at agencies. And if and when we get a federal law, there may be a stronger privacy division within the FTC, or maybe we'll get a completely different privacy enforcement agency.

(17:38):

I think there are a lot more regulatory resources to enforce now, and I think that is the way to do it rather than a private right of action.

Ryan O'Leary (17:47):

I mean, we've seen class actions in the privacy space before, and it usually just results in you getting a year-long subscription to Norton or some identity monitoring service that really doesn't amount to a hill of beans, and that usually follows a major data breach. We're seeing Scattered Spider wreak havoc in Vegas right now, and there's going to be a lot of fallout from that. How are cybersecurity, and the indemnification of cybersecurity, going to play a role in privacy going forward? Because, as we see behind us, this is where privacy, security, and legal meet, and we're still struggling with that. Anna, how do your clients grapple with cybersecurity when breaches are almost inevitable at this point and social engineering is becoming a huge factor too?

Anna Westfelt (18:33):

It is. Anyone can get hacked, and most companies will get hacked at some point, or they will have some kind of data incident. Whether that leads to an actual data breach or not, it doesn't always, but it is extremely common in the industry. You could really be doing everything right and still have a data breach. A lot of my clients are vendors to larger companies, and this puts them in a really difficult position, because they are being asked to provide unlimited, uncapped indemnities to their large-company customers for their data. They're essentially guaranteeing the data in a way that is somewhat unrealistic, and in turn, they do not get the same guarantees from their own vendors. If they're using sub-processors, they don't get the same kind of uncapped, unlimited indemnities, so they almost become the insurer of the data. This is a really difficult position, especially for startup companies that are vendors.

(19:34):

They need to close these customer deals, so they have to agree to things that are really uncomfortable, and they have to take on a lot of risk that way. They often try to manage it with cyber insurance, and cyber insurance is often a requirement in these large customer contracts. That is definitely useful to an extent. Of course, these insurance policies are written with a lot of exclusions, and the cost of cyber insurance has gone up tremendously in the last few years. But this is really an industry problem: vendors are being asked to basically guarantee or insure the security of data when it is an industry-wide risk that anyone can get hacked.

Ryan O'Leary (20:17):

And you have large penalties under these regulations. You have cyber insurance, and the insurance industry essentially works by mitigating the amount of claims it has to pay out and managing risk. When a loss event is inevitable, how do you even have a viable insurance company in that kind of sphere without really limiting coverage? From a regulatory perspective, Rick, how does cybersecurity fit in? You have these laws penalizing people for data breaches and the like, but how are we really supposed to prevent them if the threat actors are becoming more and more sophisticated?

Rick Arney (20:58):

This is a tough problem, and Anna correctly laid out that the risk sharing for this problem is still being worked out between the company, the suppliers, and the insurance companies. It's kind of up in the air a little bit. With respect to the regulation of this, as you probably all know, CPRA does contain components about cybersecurity and securing your systems. Why is that? Well, from our perspective, and I'll just tell you what happened behind the scenes, we did a lot of polling and focus groups, and if there is one group of people that people hate the most, and it doesn't matter where you came from or who you are or what political party you're from, it's people that hack into systems; they want them in prison. I mean, it's unbelievable. If you watched the videos we have of these focus groups, people have their proclivities, but when you say, "What do you think of people that hack into systems?", the whole conversation changes. And that is why we kept the private right of action in CPRA for cybersecurity. That's why it's there.

(21:58):

It was in the whole bill, and I'm not a huge fan of the private right of action. I think it's necessary in certain cases, targeted, et cetera; without it, I think things fall by the wayside. But with respect to cybersecurity, what hasn't happened yet is enforcement under CPRA of a breach. When that happens, the numbers, I assure you, are going to be eye-popping, because that's how we wrote it. I'm sure Anna knows about this, but they haven't done an action yet. When they do, it's going to be an enormous number. We designed it specifically for that reason. But we put the standard in there, and this gets to Ryan's question: it's hard to prescribe behavior in cybersecurity because it's ever-changing. When you write a law, you can't just sit there and describe it to somebody; you have to write it down.

(22:44):

Writing down on a fixed piece of paper what good cybersecurity is, is very hard to do, because it changes within a day or two, and lots of people have different opinions about it. What we hung our hat on in writing this law was mostly the notion of negligence. We had a lot of discussions about how you write a standard here. We're not crazy; we just want to make sure that people are being responsible with people's information, coming back to valuing people's privacy. It all came back to examining the cases from about four years ago that were totally egregious, ridiculous cases, for lack of a better technical description, where the security patch is sitting there for months and no one applies it. That's the kind of stuff our law is going to take on, and when it does, like I said, the numbers are going to be in the hundreds of millions. That's what it's going to be, and that's the behavior we wanted to incentivize by creating this law.

(23:37):

We created the ability for the agency to create regulations around this that can change, and the legislature can actually amend it with a certain high bar of votes. But the bottom line is: just make sure that a reasonable person would look at what you're doing and say, "That's reasonable. You're being reasonably responsible." We're not asking for NSA-level security; we're just trying to get rid of the really bad actors. That was our goal, for lack of a technical analysis. The guidance I'd give everybody is that that's what we're trying to do. And if you trip over that, well then, shame on you. You shouldn't have taken the information and not respected it. It's going to be very costly if you do that.

Ryan O'Leary (24:18):

From my perspective, when looking at large regulations like GDPR and CCPA, there is a level of, you touched on it, obfuscation or nebulousness to these laws in terms of what is prescribed when it comes to security. In that context, how much of enforcement, and of people scrambling to follow these laws, is based on good faith compliance versus actual letter-of-the-law compliance, from your perspectives?

Rick Arney (24:49):

Yours.

Anna Westfelt (24:53):

I work with a lot of startup companies. They don't have a huge internal legal team, and a lot of the time they don't have internal privacy counsel. We really work along the lines of good faith efforts and risk-based approaches. That's really relevant to the security question too, and it's why I really understand and acknowledge that it's hard to prescribe exactly what you need to do in security, because it depends on what kind of data you have. You have to have a much different level of security for high-risk data if you're processing someone's health data or social security numbers or credit card numbers. Generally, when I talk to clients, we really talk about, "Okay, where are the biggest risks and how can you really show good faith efforts?" And that really has to run through the entire lifecycle of your product and the entirety of your organization. You have to think about it when you design your product, to show that you thought about privacy and security in the way you set your product up.

(25:50):

You have to get the developers and engineers to come to the table. You have to have your marketing team on board to make sure that you have a policy to comply with laws around marketing, for example; that's a very high-risk area because there's a lot of scope for it to go wrong. You really have to make sure that your entire organization buys into this good faith effort: having privacy in mind, privacy by design. Employee training is incredibly important. A really important aspect of a privacy compliance program, and this goes to the concept of good faith efforts, is that this is not a check-the-box or paper-shuffling exercise. You don't just put some policies and procedures in place and put them away in a drawer. It has to really permeate the entirety of your organization, and employees have to be aware of it. They need training. And that's also really relevant to the generative AI use point that I think we'll touch on today as well.

Ryan O'Leary (26:48):

I mean, it just seems that in my line of work, when I talk to a lot of vendors, there are a lot of folks out there selling seals and certifications and all this stuff that seems to be a CYA measure that is just wholly vacuous and not indicative of good faith efforts. It seems like you'd be better off spending that money on actual good faith efforts to comply with laws, as opposed to busting your butt to comply with every last dotted i in the regulation.

Anna Westfelt (27:22):

Related to that, I think a really interesting issue is the cure period. The CCPA had a mandatory 30-day cure period. Sephora had that available to it, and it did not cure its alleged non-compliance within that period. Now it's a discretionary cure period. The other state laws have varying provisions; some have cure periods that will sunset in a few years. But if you are going to show that you are using good faith efforts, you really need to make sure that you take the cure period seriously and treat anything as a notice that starts the clock for that cure period. I think that's one of the most important things. I don't know how it'll be applied now that it's discretionary. I do hope that good faith efforts, and the fact that companies take real, meaningful steps within the cure period, will go into any kind of regulatory determination, but that remains to be seen.

Ryan O'Leary (28:13):

Rick, did you want to jump in on this?

Rick Arney (28:14):

Yeah. I think that's right. I think there are a lot of reasons to believe that being a good actor, acting in good faith, looking through your systems, and having a culture of trying to comply goes a long way. The law itself supports that; the language of the law supports that. The regulators I know who have been hired in this area are, I think, actually reasonable people. I know people worry about all this stuff, but I think they're very reasonable. And then third, for purposes of determining whether you're going to get busted for a foot fault or for something that really is legitimate, it comes back to the lack of resources. Even though we endowed the enforcement of CPRA with a lot of money in relative terms, there are still lots of choices to be made. And Anna, you alluded to Sephora. I mean, a company that commits a violation and just ignores the cure period, it's hard to have a lot of sympathy for that.

(29:09):

I mean, I think the guidance here is good: if the regulators come to you and say, "Hey, there's a problem, let us know what you're doing about it," that goes a long way toward figuring it out. And the only last thing I would say is that regulators have choices to make in what they enforce. Let's be honest, some of it's political, some of it's not. But if you're working with highly sensitive data involving children, healthcare, finance, women's abortion rights, gender, all that stuff, that's just hot right now, to be honest, and be aware that that can change. Women's geolocation, funnily enough, was not that sensitive a piece of information until abortion rights changed; it really wasn't. And that's part of why Sephora ended up in the meat grinder, frankly. It's one of those things to be aware of: what we think is sensitive now might change. But more importantly, just have a culture of compliance, a culture of good faith, trying to make sure you're compliant. That goes a long way.

(30:06):

I would be very disappointed in the whole setup if it ended up prosecuting folks for good faith efforts. We didn't intend that, and I don't see that happening just yet.

Ryan O'Leary (30:19):

As we go on our journey here toward the future of CCPA and GDPR, the pace of technological acceleration seems to be ramping up. We had a whole talk from Alex on generative AI and AI generally, and we can't have a panel on data privacy without talking about AI. If you're not managing AI right now or going through some sort of exercise, it's going to become shadow IT. We saw Samsung, and how many folks were putting sensitive information into ChatGPT initially; it is just getting worse. Anna, when it comes to AI and reverse engineering LLMs, how worried are your clients? Are they thinking about how they're going to enforce the right to be forgotten on large language models? What's the challenge going forward for your clients?

Anna Westfelt (31:14):

Absolutely. It's been a really interesting year in this space because now, of course, with the proliferation of generative AI, we have even more issues to solve. We work with clients looking at the internal use of generative AI tools and also the external integration of generative AI into their products. When it comes to internal use, there's a lot you can do. I don't personally believe in banning generative AI tools. I think that was a bit of a knee-jerk reaction by many organizations in the beginning, because they needed to understand this and needed to understand how their employees were going to use it. Employees could still use it on their own devices, and they probably will, because there are such incredible time-saving features in generative AI. We really work with clients on responsible use: training, again, having policies, putting guardrails in place, and really figuring out how you can do this in a safer and more responsible way. Using an enterprise version, using a walled-off version, or maybe you license a generative AI tool that only runs on your information so it never leaves your system.
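To make the "guardrails" idea concrete, here is a minimal, hypothetical sketch of one such control: redacting likely personal identifiers from a prompt before it is sent to any external model. The patterns and the `send_to_model` callable are assumptions for illustration; an enterprise deployment would pair something like this with logging, access controls, and an approved model endpoint.

```python
import re

# Illustrative patterns an internal-use policy might redact before a prompt
# leaves the company boundary (assumptions, not a complete list).
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(prompt: str) -> str:
    """Replace likely personal identifiers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def guarded_completion(prompt: str, send_to_model) -> str:
    """Redact the prompt, then call the approved model endpoint.

    `send_to_model` is a hypothetical callable wrapping whatever enterprise
    LLM the organization has sanctioned; auditing/logging would also go here.
    """
    return send_to_model(redact(prompt))

if __name__ == "__main__":
    demo = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
    print(redact(demo))
```

The design choice is simply that sensitive values are stripped at the boundary, so employees keep the time-saving benefits without the data leaving the organization's control.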

(32:27):

And then of course, when we're looking at external use of generative AI, we're helping clients figure out how to integrate it into their products, how to diligence their vendors, how to manage their vendors, and how to change their customer contracts to account for this, and that's really where we see a lot of pain points when it comes to privacy law compliance. GDPR is now five years past coming into effect, but it was drafted long before that, and even the CCPA didn't really take generative AI into account. The privacy rights don't really work for generative AI. It's really, really hard for a company that runs a large language model to comply with, for example, the right to be forgotten. It's incredibly difficult, and I think we'll probably see some legislative change there, because I just don't know how anyone can be a hundred percent compliant. It's another area where good faith efforts are really important.

Ryan O'Leary (33:24):

When it comes to generative AI, I think there was certainly that fear of robots taking our jobs at the onset, but at IDC we think about it more as a digital coworker, basically a really high-level assistant who's going to take away a lot of the pain points. I often joke that we might get to see our families on the weekends because we have to spend less time working. But we've seen, across the board at every level, CEOs really putting the brakes on generative AI from a deployment standpoint while ratcheting up the exploration and investment portions of it, because they understand at every level that generative AI is going to be, like Alex said, the new cloud. It is really going to have that transformative impact over the next 20 or 30 years, and they want to get it right this time. They don't want to rush into it. That's something we've seen. We've also seen regulators taking stabs at making sure that this doesn't run amok and create some sort of dystopian sci-fi future for us all.

(34:32):

Rick, when you were writing CCPA, you were definitely thinking about generative AI, right? That's why you put specific provisions in CCPA.

Rick Arney (34:40):

Yeah, no. Here's my chance. The CPPA, which is of course the agency endowed with enforcing CPRA, and I'm going to say this here, is currently the lead regulator of AI, by happenstance. Anna sort of alluded to this. I'll read the code that got us there. I wrote this, and it says... this is a provision that's in CPRA. We never mentioned the word AI ever in our campaign or among our team, but we did talk about the notion of racial profiling because we knew that was hot. Let's dial back to when this thing passed, or when it was proposed, around the George Floyd era. Racial profiling was hot, and when you're running a campaign, you look for things that are hot, you look for what's called fast-moving water, and you want to jump in it. Well, we thought one way to jump into that fast-moving water was the notion of racial profiling, which is connected very clearly to privacy.

(35:38):

We wrote the following. We said, "We're going to endow the agency to issue regulations governing access and opt-out rights with respect to businesses' use of automated decision-making technology, including profiling," and then we defined profiling. We said, "Profiling means any form of automated processing of personal information, as further defined by regulations, to evaluate certain personal aspects relating to a natural person, and in particular to analyze or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location and movements." This is now becoming the code that endows the CPPA with authority for the first real regulations on AI. This is actually where it's going to happen. I come to you today to let you know that it's actually happening, the commission is going to be writing regs on this, and the California legislature has the ability to take this even further.

(36:37):

We built this law knowing that technology changes, and we wanted to make sure it wasn't fixed in the books. The agency can create regulations pursuant to what I just read, and the legislature can do it as well. That is, right now, the one regulatory authority that actually has the ability to write regs on AI. That was all by happenstance; this is literally the law of unintended consequences. So that's number one: you will see some regulations on this, and I encourage all of you, if you want to be involved, to come and testify to the CPPA, because that's where the first regs are going to happen. Number two, the one thing I worry a little bit about with this law we wrote is that the accuracy of AI, and the ability to create AI in the first place, came from unfettered access to data. It was built in the old world where you just owned all this data; you could move it, sell it, buy it, do whatever you wanted with it. It was actually completely unregulated.

(37:36):

I worry that if everyone invokes their rights under CPRA with a DSAR, it actually may drain and hamper the ability to develop AI and make it more accurate and more useful. I worry about that, and that, again, is an unintended consequence. I'm ready and willing to admit, as an author of this law, that I worry, because it is the unfettered access to data that allowed us to create AI, frankly. And if people really invoke their rights, which I'm a big fan of, it's going to mean there are going to be some impediments to that unfettered access, and that the consumer is going to have a seat at the table, which I think is a good thing. The initial formation of AI, though, was done in an unfettered way. Now we have to deal with it, and luckily we actually have the code here to maybe address that. It's still open, but I just wanted to share with you where the source of this is coming from.

Ryan O'Leary (38:29):

Is that where synthetic data might come in, and perhaps pseudonymization, things like that, that could potentially fill the gap? Because obviously something needs to give between corporations' unfettered access to personal data and the individual's right to their data.
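As an illustrative aside on the pseudonymization Ryan mentions, here is a minimal sketch of one common approach: deriving a stable token from a direct identifier with a keyed hash, so records can still be linked for analytics or model training without exposing the raw identifier. The key, field names, and example record are assumptions for illustration only.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would live in a key management system.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier using a keyed hash.

    The same input always maps to the same token, so records remain joinable,
    but the mapping cannot be reversed without the key. This is
    pseudonymization, not anonymization, so it still counts as personal data
    under laws like GDPR.
    """
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

if __name__ == "__main__":
    record = {"email": "jane.doe@example.com", "purchase": "lipstick"}
    record["email"] = pseudonymize(record["email"])
    print(record)
```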

Rick Arney (38:53):

That's right. I completely agree with you. There's going to be a question here, and Anna, I'm sure you can comment on this: when you're running your AI models, how are you going to do it? Frankly, from my perspective, I think the regulation of AI, at least in California, is going to hinge on inputs and outputs. The reality is that a lot of people in the legislature are terrified of what they think the outputs of AI are going to be, and how that's going to change behavior with respect to consumers along the dimensions I mentioned before: race, gender, et cetera. That's the issue. And we all know: junk in, junk out, or accuracy in, accuracy out. That's really the nexus of the debate in my mind. People are afraid of what the answers are, because I think in many cases we know what some of the answers that are going to come out of it are, and we don't like those answers.

(39:35):

And so I think the whole debate, at least from a regulatory standpoint, is going to be how we can get this to produce the answers we want, the ones we think are politically more viable. That's what I think is actually going to happen. That's a hard thing to do, a really hard thing to do. But you asked me where this is heading, and from my perspective, that's the nexus of the debate, at least politically.

Ryan O'Leary (39:59):

Anna, have you seen that from your clients in terms of their thinking about AI and the regulatory impacts of that?

Anna Westfelt (40:07):

Yeah, definitely. Anyone using AI or developing an AI model at this point needs to think about the issues with the underlying data, the real junk-in, junk-out point. It's impossible to have an AI program if you don't consider issues of bias and discrimination, and we've seen, again and again, large language models producing really inaccurate or really biased outputs. I think that is a huge problem to solve. I don't think we can solve it through legislation; I think we need to solve it through better standards and industry leaders working together. But it's really interesting what you say about how the CPPA is going to be the first de facto regulator of AI. I don't necessarily think that a privacy regulator is the best-suited regulator for AI, because there are so many other issues: civil rights issues, bias and discrimination issues. I'm really interested to see where we go with this at the federal level. I know that's going to be a lot slower than what we're seeing in California, but I think it is going to end up with a different AI regulatory agency.

(41:23):

And I think that's what we're seeing in Europe as well. When the EU AI Act comes into force, member states will likely put the responsibility with AI regulatory authorities as opposed to data protection authorities.

Rick Arney (41:37):

I think that's very probable, to be honest. But I will say, in the debate at the national level about regulation of AI, as privacy regulation in California becomes more embedded, as people have to comply and there's enforcement, this AI piece is also going to come up, and that makes it harder to bifurcate the legislation at the federal level. This is where it all comes crashing together at some point. The more people comply and the more there's enforcement, the harder it is to... it just becomes inertia. I don't know if we'll get there with the AI piece. I think the privacy piece has a greater chance of making it and being embedded, such that the California way of doing things probably will be, or hopefully some of it will be, adopted federally. I think you're right about AI; it probably will have its own regulatory entity, but we'll see. I mean, if California starts regulating it and it works, that might be the model.

Ryan O'Leary (42:29):

And we have a challenge at the federal level, where we have an aging congressional delegation that may not necessarily understand AI and the inner workings of the technology, quite frankly. That's another challenge we have to navigate. We have just a few minutes left here, so I want to give you each a second to leave the folks with one key takeaway that you think is going to impact privacy in the near future. Rick, I'll start with you. From a regulatory standpoint, what do you think is the next big thing?

Rick Arney (43:07):

I think basic compliance with DSARs, and knowing where you're sending your information so there isn't leakage, is really what this is all about in the first instance. I mean, making sure that, in a basic way, in a good faith way, you're compliant with those two things. And then I'd throw the cybersecurity piece in there as well. Those are really what this is about. The sensitive areas I'll come back to: I really do believe that the enforcement of all this is going to come down to the headline things involving children or otherwise. That's where things are hot, so I would suggest being very careful in that area. But the major tenets of CPRA are all about managing the DSAR, managing the downstream suppliers, making sure that's tight, making sure there isn't leakage. That's really what it's about at a basic level. And then, of course, cybersecurity is its own area, but I think putting cultures and systems in place to handle it is really where the work should happen. And as I said before, I wouldn't worry about the foot faults.

Ryan O'Leary (44:12):

And Anna, last thoughts from you.

Anna Westfelt (44:15):

Absolutely. Really, what my clients are looking for when it comes to data privacy legislation and regulation is clarity and harmonization. I know those are very big things to ask for, and it's been really hard to achieve in other areas. Take breach notification laws: each state has a different law, and they're all just slightly different, so if you have a data breach, you need to look at all 50 states and their laws. I really don't want to end up there with data privacy. I really recognize all the things Rick says about the themes of good faith efforts, making sure you manage your vendors, and complying with the DSAR rights. The issue is that if you end up with really long privacy policies with a section for each state, and your data processing agreements are also really long with a section for each state, I don't think that really helps anyone.

(45:04):

We try to work toward a highest common denominator to make sure that we're compliant, but even that doesn't really work across the different states, because they're all just slightly different; you can't pick one standard and apply it across the board. I'm really excited to see where we get to at the federal level. I know there's a lot of disagreement on preemption and on the private right of action, but I really think we have a fantastic opportunity now to put in place something that is more uniform, harmonized, and much clearer, clearer for companies to know what they have to comply with and clearer for consumers to know what rights they have with respect to their data.

Ryan O'Leary (45:36):

Cool. Thank you so much. And thank you so much to my panelists, Rick and Anna, for joining me, and thanks to DataGrail for having us. With that, we will conclude our session. I'll turn it back over to Mary.
