Moderated Panel

Staying Ahead of AI and Global Privacy Regulations

Omer Tene Partner, Goodwin
Shannon Yavorsky Cyber, Privacy & Data Innovation, Orrick
Dr. Gabriela Zanfir-Fortuna Vice President for Global Privacy, Future of Privacy Forum
Andy Dale General Counsel & Chief Privacy Officer, OpenAP

AI is advancing faster than global regulators can respond, creating complex and shifting compliance challenges. In this session, Omer Tene (Goodwin), Shannon Yavorsky (Orrick), and Gabriela Zanfir-Fortuna (Future of Privacy Forum) will share insights on how to manage risk and adapt to new regulations. This session will be moderated by Andy Dale (OpenAP). Attendees will leave with practical guidance to navigate today’s fragmented regulatory environment and prepare for what’s ahead.

Transcript

Welcome to our panel discussion on staying ahead of AI and global privacy regulations. I'm Andy Dale, the General Counsel and Chief Privacy Officer of OpenAP, which is a joint venture in the TV ad tech space owned by the large programmers: NBCUniversal, Fox, Warner Bros. and Paramount. I'm thrilled to moderate this panel today with some really sharp people, and friends of mine in the privacy world.

The topic is how fast AI is evolving, whether global privacy laws can keep up, and whether we're creating a maze of compliance challenges. The panel will share practical guidance on managing risk, adapting to new regulations, and staying ahead of the changing global landscape.
So I'm going to introduce our panelists today. I'm joined by Gabriela Zanfir-Fortuna, VP of Global Privacy at the Future of Privacy Forum (FPF). Gabriela, tell the audience a little bit more about yourself and your role.
Thank you, Andy. Hello, everyone. I lead the global privacy work at the Future of Privacy Forum, where we have offices in Brussels, Singapore, Tel Aviv, Nairobi, and New Delhi, keeping an eye on developments around the world.
Great. Next we have Omer Tene, a partner at Goodwin. Will you tell everybody a little bit about you and your role?
Yeah, sure, Andy. Like most other people here, I used to be a privacy lawyer, and for the past (is it three years soon? Almost three years) I've become an AI lawyer as well, trying to keep my finger on the pulse of this dizzying space of AI laws and regulations, and advising deployers and developers, and kind of anything and everything in this space.
Great. And lastly, Shannon Yavorsky, who oversees the cyber, privacy and data innovation practice at Orrick. Shannon, can you introduce yourself?
Thanks, Andy, and thanks to DataGrail and Daniel for having us today. I head up the cyber, privacy and data innovation practice at Orrick. I also co-head our AI practice because, like Omer, we're all AI lawyers now. That's right: it's almost the ChatGPT anniversary in November; it's been a couple of years. I feel like we are advising so many of the companies we took through all the different developments in privacy, the GDPR and the state privacy laws. We're now going through a similar exercise with building AI compliance programs. So I'm really excited to be here today and talk to everyone.
Thank you. And the more things change, the more they stay the same. It feels like we're in recursive patterns here that many of us have lived through, but now we're in the AI pattern, I think. So, Gabriela, let's start with you. To level set for everybody, what's the latest on the EU AI Act? When can we expect it, and what's it going to look like? Then we'll go around and talk about impact as well.
We have a lot of uncertainty around the EU AI Act right now, to be completely honest. I'm just back from a visit to Brussels last week, where the conversations are only enhancing this uncertainty, because the European Commission is in the middle of what they call a simplification of the digital legislation framework in the EU. Only recently they launched a public consultation on an "omnibus," as they call it, which is technically a package of initiatives that aim to simplify existing law in the digital space. The AI Act is actually a part of that effort; it will just be a small part of it, but still. So right now we have this layer of uncertainty on top of the fact that, in any case, the EU AI Act as adopted was meant to become applicable in stages over the next two to three years. As of right now, only specific provisions of the AI Act are actually enforced.

So I think the latest and most important thing to keep in mind is that companies have to operate within this increased, layered uncertainty. This is absolutely not an ideal environment to operate in, particularly knowing that the law is adopted. Even if it's not clearly applicable right now, we know for sure that it will continue to become applicable in tiers over the next couple of years. So perhaps it would be great to move the floor to Omer and Shannon to learn a bit about the operational impact of this uncertainty.
Yeah, I was curious: given that both Omer and Shannon represent a lot of early-stage tech companies, when those companies approach you with questions, particularly if they have (shockingly) raised money behind an AI product, how are you advising them, pragmatically, on the ground, to approach building product today? Do you start with the EU AI Act, or do you tell them to look at the GDPR first if it's global, or to look at the US first? How are you advising them from step one? Maybe Shannon first, and then Omer.
Yeah, sure thing. It's really similar to how we were advising companies around the emergence of all the different global privacy laws: helping people understand the through lines across all the different emerging legislation, both in Europe and the US, and building principles-based governance frameworks. Those generally start with carrying out an inventory of what AI systems are currently in use and being developed or deployed within an organization, which sounds simple, but getting your arms around what people are actually using has proved quite difficult. There's a lot of shadow IT; companies are finding out there's this secret faction within the organization using a whole range of AI tools. So we're helping companies understand how to start building those programs around core principles, looking to the OECD principles, the NIST AI Risk Management Framework, and ISO/IEC 42001, and figuring out how to develop something that's going to be resilient to the new laws that come online over the course of the next, call it, 18 to 24 months.
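The inventory step Shannon describes can be sketched as a simple record type. Everything below is an illustrative assumption (the field names, the shadow-IT heuristic), not a schema any framework prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields only)."""
    name: str                                   # e.g. "support-ticket summarizer"
    role: str                                   # "developed" or "deployed" (or both)
    business_owner: str = "unknown"             # no named owner often signals shadow IT
    use_cases: list = field(default_factory=list)
    frameworks: list = field(default_factory=list)  # e.g. "NIST AI RMF", "ISO/IEC 42001"

# A minimal inventory: flag entries with no identified owner for follow-up
inventory = [
    AISystemRecord("meeting transcriber", "deployed", use_cases=["notes"]),
    AISystemRecord("pricing model", "developed", business_owner="data science"),
]
shadow_it = [r.name for r in inventory if r.business_owner == "unknown"]
```

In practice the hard part is populating the list at all; surveys and network tooling are what surface the "shadow IT" entries whose owner nobody can name.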
Is that similar for you, Omer? You've said you're an AI lawyer now, but you're still a privacy lawyer as well. So where do you start with them? A similar place?
I actually have three threads I'd like to pick up on, briefly. First, on the "AI lawyer" point, I just want to clarify that in my view, all lawyers are now AI lawyers. Shannon, Gabriela, you, Andy: we kind of came to this from privacy, but I work in a law firm that has specialists across the board, and Shannon's of course does too, and all of them are now AI lawyers. Certainly the IP lawyers are AI lawyers; labor and employment, where it touches on employment; consumer protection; and just general safety, liability, corporate, right? A lot of the transactions now are AI. So I don't want to narrowcast this in any way as us being the AI lawyers. This is a technology that's very fundamental to the fabric of all technologies now, and I think it means all lawyers are AI lawyers, partially because, putting aside the EU AI Act, there isn't really an AI law; all laws apply to AI. Tort law applies to AI, and certainly anti-discrimination laws, product liability laws, IP, and privacy. So maybe put a pin in that; we can discuss it later.
To Gabriela's point about uncertainty, I just want to say that I think the uncertainty is much broader than the specter of regulation, what exactly it captures, how exactly it's going to apply, and whether the European Commission is entertaining the idea of pulling some of it back. The uncertainty is still very fundamental to the technology itself, and a lot of the questions, even post the ChatGPT anniversary, as Shannon said, are still very much open. Like: does this thing even work? We've seen a lot of reports about agentic AI specifically not actually being successfully integrated in organizations; I think MIT or IBM had a report that only 15% of deployments even work. Certainly accuracy: it's not a privacy issue, it's not a data issue, but it is a fundamental and potentially earth-shattering issue for this technology. Because if it keeps making up b******t, excuse the term, constantly and very eloquently, I think at some point people will say: wait a minute, can we trust this to do really sophisticated things? So the gap between the prospect of this destroying civilization and this being as harmless as writing a limerick about the latest thought you have is still very broad.
With respect to advising clients: look, as lawyers, you asked specifically about the EU AI Act, so I'll say how we think about that. The first big question is: does it apply? Because if you're stateside and you're an AI startup, hopefully for you it doesn't apply yet, because it is a pretty heavy, bureaucratic law. So, does it apply geographically? Are you even developing or deploying an AI system? I think part of this goes to show that when they drafted this law, they didn't seem to have the right technological mindset, because ChatGPT came very late in the process and was almost tacked on to the EU AI Act as an afterthought: the whole idea of general-purpose AI, and the concept of a model, because the EU AI Act is focused on AI systems, not on the model. But we know that really everyone is consuming the same models from the big LLM providers.

So the first question is: does it even apply to you? If it does, what risk category are you? Hopefully it's not a prohibited AI, and hopefully it's not even high risk, because if it's high risk, it's a world of pain in terms of legal obligations, especially for providers. And that, I think, is the next question: if, unfortunately for you, it's yes, yes, yes, then are you a provider or a deployer of AI? As we all know now, it's not really a dichotomy; it's not bipolar. There are a lot of stops along that spectrum, and most companies are both providers (or developers) and deployers to some extent. So these are some of the initial questions, and if the answer is yes to all (you're a developer or provider of high-risk AI, and it's deployed in the EU), well, prepare for a lot of work.
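Omer's threshold questions read like a short decision procedure. A rough sketch, with invented keys and simplified categories, purely to show the order of the questions (not legal advice or the statute's actual tests):

```python
def ai_act_exposure(profile: dict) -> str:
    """Walk the threshold questions from the discussion above.
    Keys and categories are illustrative assumptions."""
    # 1. Does the EU AI Act reach you at all (geography + "AI system")?
    if not (profile.get("in_eu_market") and profile.get("is_ai_system")):
        return "out of scope (for now)"
    # 2. Which risk category?
    risk = profile.get("risk_category", "minimal")
    if risk == "prohibited":
        return "prohibited practice: do not deploy"
    if risk == "high":
        # 3. Provider or deployer? Most companies are some of both.
        role = profile.get("role", "deployer")
        return f"high-risk obligations ({role}); heaviest for providers"
    return "limited/minimal risk: mostly transparency obligations"
```

The ordering is the point: scope first, then risk tier, then role, with the heavy compliance work concentrated at "high-risk provider."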
Yeah, it is a lot to consider. Obviously the EU leans in in these areas and creates different ways of working and of taking companies to market. I'm an advisor for a consumer-facing technology startup, and they're facing too many questions at the beginning to think about, while leveraging AI pretty significantly. I'm not their counsel, but we are talking about where to prioritize, and I think that's what you're getting at: it's based on a risk assessment of the company and their business, irrespective of which law may or may not apply in the first instance.
So, Gabriela, are there other laws in other countries? Obviously the EU is prominent, and we're going to talk about the US in a minute, but it's a big world out there, and other laws will also have impact here. They certainly affect how people think about where to launch and how to use products.
We are seeing some laws specifically dedicated to AI, but I think there are some important nuances here. Many of those attending now who started their journey in data protection and privacy law some years ago, following the GDPR, will know the "Brussels effect" the GDPR had, with laws around the world being inspired by it, taking a lot of exactly what was in the GDPR and transplanting it into their own jurisdictions. We don't see that with the EU AI Act, and I think there are very clear reasons for this. Perhaps the most important one is that the AI Act, as Omer was pointing out, is a complex, heavily bureaucratic piece of legislation, almost a monstrous monument of bureaucracy if we're honest about it, which makes it very difficult to transplant into other jurisdictions. So we are not seeing that type of effect.

However, some jurisdictions have started to adopt AI laws, but with very different nuances. We have South Korea, which adopted an AI Basic Act earlier this year (or even late last year), and then Japan, which adopted an AI framework act this year. But both of these laws are a much, much lighter touch than the EU AI Act. In fact, Japan's law is very much pro-innovation: it basically creates a framework that supports investment in AI in Japan, and then has only some very high-level principles that would apply to AI systems. The South Korean law is a bit of a mix between high-level principles for the riskiest AI systems and a lot of pro-innovation measures as well. So we are seeing that type of development, but that's about it. Brazil has been considering an AI bill for many years now, but doesn't seem to be making much progress towards adopting one. And looking at what's happening in Brussels now, with the push for simplification (some even say deregulation), I think this will also have an effect on other jurisdictions looking at comprehensive frameworks like the AI Act.
It's good to know there are other things to consider, more risks to try to weigh and cover off. Shannon, let's go to the US a little bit. As everyone knows, there's nothing hyper-specific to AI in the US yet, but which developments are you tracking and thinking about for clients? Different states, obviously: there's a bunch of state-specific privacy laws and things being amended and reviewed all the time. Where are you seeing activity in the US, and where should people be focused?
Yeah, it's a really great question. I like to set the table on this one a little, because when I talk to folks about AI law in the US, I think it's important to understand the broad landscape. I feel like there are five points, and Omer, Gabriela, feel free to fill in or chime in if I'm missing another area. When I'm talking about the body of law: first, you have federal laws that continue to apply; like Omer was saying, tort law, IP law, consumer protection laws, the FTC Act. Then you have privacy laws that address automated decision-making, which applies to AI. Then you have the emerging AI state law landscape: since the beginning of the year, over a thousand laws have been proposed at the state level. So it's an incredibly complex web of laws that is emerging. I know FPF has done a really good job here as well, but Orrick has an AI law tracker, freely available on our website, that shows just the laws that have gone into effect. We built it because I was having trouble keeping track of them all. I was like, okay, we have all the digital strategy laws in Europe, we have all the privacy laws, and now I have a thousand AI laws to know all about. So the tracker on our site just has the laws that have gone into effect, and that's been tremendously helpful, but there are still about 150. It's a lot to keep track of.

So you have the AI laws; then you have guidance documents issued by federal agencies. The EEOC, the FTC, and the SEC have all issued guidance about how existing regulation applies to their area and falls within their jurisdictional authority. Those are quite helpful to look at as well, to get a feel for how they're approaching AI. And beyond that, you have this emerging landscape of investigation and litigation that contours the law: the FTC cases on things like algorithmic disgorgement, and recent settlements that help define how regulators and AGs are thinking about AI enforcement. So, Omer, Gabriela, any other area? I feel like those are the five things: existing federal laws, privacy laws, AI laws, guidance documents, and litigation and investigations. Any others, Omer?
Yeah. So, obviously, great table setting by Shannon. In the AI legislative space, there are a lot of different directions. There's a big focus on health-related AI, and even more specifically on health chatbots and mental health chatbots. There is still a focus in some states on the high-risk categories and consequential decision-making: the Colorado AI Act, and Texas's TRAIGA. And there are some novel laws that just passed in California. The law focusing on frontier AI actually passed last year in a different form, as SB 1047, and was vetoed by the governor; this year it passed as SB 53 and was just signed last week. So there's a lot of color and nuance around it. In my mind, US AI law is still primarily law that doesn't have the words "AI" in it.
And one way we try to think about it is just through the use cases, because companies come and ask: what should I do about AI? Well, that's a difficult and somewhat confusing question. What are your use cases? What are you using AI for? Then we can think about the risks. Are you using AI to code? Then you have copyright issues in the input and the output, and if it uses open-source software, you don't want to contaminate your code base. Are you using AI transcription tools (which I'm assuming, but actually don't see, are being used on this call)? If you are, then there are state wiretapping laws and two-party consent, and there are e-discovery implications, because you're creating records, and bookkeeping obligations for public companies. Are you using AI in HR, in the employment context? And even if you are, what exactly for? Performance reviews? Decisions about hiring or firing? Are you using AI in the context of PII or PHI or NPI, any type of personal information? And if so, what agreements do you have in place with parties up and down the chain, either to protect the data or to make sure you're not violating agreements with your customers or vendors? So I think these are some of the questions. None of them really depends on AI-specific laws and regulations; they're just applications of general law to AI.
So to that end, one common theme here is obviously risk assessment: thinking about your business and about what a company is actually doing or planning to do. So what are the most pragmatic things you can do to help a company assess that? You're probably going to ask whether they have an existing process they can leverage. I think we're all using tools like DataGrail and others, mostly for data mapping and perhaps DSARs and similar privacy-focused activities. So how are you all thinking about the best practices we can talk to people about? Whether it's in the FPF context, maybe policy analysis best practice; we'll start there first, and then the tech company perspective: internal tools, internal processes.
I'm happy to jump in first. I think you were absolutely right in saying that the first thing to do is that internal mapping. We are used to the old exercise of data mapping, but now, as Omer was also saying in his first intervention, it's important to try to figure out to what extent you are relying on AI systems. Try to see whether any of the systems you are producing, working on, putting on the market, relying on for your own internal purposes, or building on to put your own products on the market, can amount to an AI system or an automated process captured by this multiplicity of frameworks we now need to work with, from the AI Act to the US state laws on automated decision-making that are very relevant, and so forth. It might sound like an easy first step, but I think it's very, very complicated, particularly because of the multiplicity of laws, and because right now it's more difficult to know exactly what you even need to count and put on your map. It's not just data anymore, right? So I'll stop here.
And tons of companies don't have these resources yet, or have done this work without knowing they were doing it, right? They were doing it as part of product development or something else. So I'll ask Shannon first: you're talking to a client; what are you asking them? I want to get into this for those on the call who have these actual day-to-day issues. What are you discussing with them? Is it "Can you show us your data maps, if you have them?" or "Walk me through this product's data flow"? Is that the most effective way for you to get at their key issues and risks?
Yeah, so I think there are a couple of things. The first, from a very practical perspective, goes back to getting your arms around exactly what tools are in use and what they're being used for: figuring out what that AI map is, if you will. The second is figuring out how you're going to carry out risk assessments. It may not be required in all circumstances, but figuring out what those gates or triggers are, and then doing AI risk assessments, is a super helpful practice; it's great hygiene for companies to get into. Start doing those to figure out: what is the tool? What are we proposing to use it for? Then think about the risks. What are the potential risks associated with this particular tool? Are they IP risks? Are there bias and discrimination risks? Are there risks because the tool is going to hallucinate, or produce inaccuracies or bad information, as another example? Then think about whether and how that matters. Maybe it matters less because you're not using it in a high-risk kind of area, or maybe it matters more because you're in healthcare or financial services, or there's more sensitive personal information involved. So I think you have to learn that kind of calibration around when it matters and when it's important to build in mitigation: kind of like privacy by design, but AI by design, as another example.

And the one other thing I would mention: we're seeing a lot of companies turn on AI features, and I'm talking to clients about being attuned to, or mindful of, when AI features can be configured with the slide of a toggle, and being aware of when that's happening. Again, that goes to getting your arms around what AI is in use.
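One way to picture the assessment Shannon outlines is as a record per tool plus a simple calibration rule. The fields and the rule below are illustrative assumptions only, a sketch of the shape of the exercise rather than any required methodology:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessment:
    """One AI risk assessment: tool, proposed use, identified risks,
    and the context that calibrates how much the risks matter."""
    tool: str
    proposed_use: str
    risks: list = field(default_factory=list)   # e.g. "IP", "bias", "hallucination"
    high_risk_context: bool = False             # e.g. healthcare, financial services
    sensitive_data: bool = False                # e.g. PII/PHI in scope

    def needs_mitigation(self) -> bool:
        # Calibration: the same risk list weighs more in a high-risk
        # context or when sensitive personal information is involved.
        return bool(self.risks) and (self.high_risk_context or self.sensitive_data)
```

The design point is that the risk list alone doesn't decide anything; the context fields are what turn "hallucination risk" into "build in mitigation before launch."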
So that's great. We've talked a lot about risk; I'll give you the last word, Omer. What's also really important is the reward side of things. We are lawyers, but lawyers that try to encourage innovation, and to encourage our companies and our clients to go out and do things and build things that work really well. So, Omer, I know you lean towards trying to enable your clients, right? To build the thing they want to build. And we're in a moment here with AI. How are you working with them to allow them to run fast in this environment? There's a lot of risk here, but we can't let it weigh us down too much.
Yeah. I think that's table stakes. We are here to enable and facilitate the use of a tool that's beneficial. If it's not beneficial, there's no need to go to the lawyer; just don't use it. But if there are benefits, and certainly here there are mind-bending benefits, then we should find a way to do it responsibly, of course, and safely. But we want to make sure you can do it. And to that point, we as a law firm, and I'm sure Shannon can say the same, use AI internally. This technological revolution hasn't bypassed us. It goes without saying, right? This thing is good at language, and legal work is exactly that. So we are really frontline in terms of having to deploy these tools. A partner of ours said that AI is not going to replace you in your job, but someone who's using AI is going to replace you. So certainly, I think everybody needs to figure out how to use it, as opposed to whether to use it.

Just building on what Gabriela and Shannon were saying: we like to look at it in a red, yellow, green type of framework, as lawyers often do for many issues, and not to oversimplify. I guess it aligns with the way the EU AI Act's drafters saw the world, right? Prohibited, high-risk, and lower-risk categories. I'm not saying our categories are exactly right, and of course, who am I to say? But the way we look at it: in some cases it's a red light, which means if you want to use AI or integrate it into your product for this purpose, talk to Shannon, or talk to me; you need counsel about that. Or, at the very least, talk to Andy, right? Talk to your GC. Not that Andy is any less, but I'm saying before you go out to get outside advice, talk to your counsel. These would be cases like using AI for college admission decisions, which is kind of stating the obvious, or where kids are involved, like a chatbot for kids. I think it's pretty clear where the red light requires some legal analysis, and then everything Shannon said: the risk assessments and the more elaborate legal work.

On the other hand, I think there are some green-light examples, because Andy only has so much time in a day, and he can't sit there fielding a thousand questions a day: can I use ChatGPT for this or that? So if you're using it without compromising PII or copyright, or using it to create a presentation, or maybe using your enterprise version, which protects the data that goes in and out, then it's a green light. And in the middle there's the interim category, which might require adherence to a policy, or front-end risk analysis work. Anyway, it's just a framework. I've been trying for a while to articulate it more clearly and pin down the categories, and when I do, I'll publish an article with Gabriela at FPF.
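Omer's red/yellow/green framework amounts to a small triage function. The keys and cut-offs below are invented for illustration; the point is the three-way sort and the escalation each color implies, not the specific rules:

```python
def triage(use_case: dict) -> str:
    """Red/yellow/green sort for a proposed AI use, mirroring the
    framework sketched above. Keys and thresholds are illustrative
    assumptions, not legal advice."""
    # Red: consequential decisions or vulnerable users -> talk to counsel first
    if use_case.get("consequential_decision") or use_case.get("users") == "children":
        return "red"
    # Yellow: PII or copyright exposure -> policy adherence plus upfront risk analysis
    if use_case.get("touches_pii") or use_case.get("touches_copyright"):
        return "yellow"
    # Green: e.g. drafting a presentation on a protected enterprise tier
    return "green"
```

A triage like this is what keeps the GC from fielding a thousand "can I use ChatGPT for this?" questions a day: only red (and some yellow) items reach counsel.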
Well, obviously we've had a wealth of knowledge here. Thank you all for sharing in this discussion today, and thanks, everyone, for joining us; we really appreciate it. We appreciate DataGrail for having us. Next up are DataGrail's Chief Product Officer and a senior PM to talk about some new product launches and what they have coming next. Thank you very much, everyone. Appreciate it. Thanks.



