Generative AI & Privacy: Risks, Realities, and What Comes Next
Generative AI is rapidly transforming how organizations innovate, but with new capabilities come new risks. This session will unpack the complex privacy challenges posed by generative AI, from data leakage to compliance blind spots. Attendees will gain a clear-eyed view of the risks and realities shaping today’s landscape, along with practical guidance on how to safeguard privacy without slowing down innovation. Join us to explore what’s next for AI and what it means for privacy—and how to build a strategy that drives growth but is rooted in doing the right thing.
All right.
Hello everyone, and welcome to our panel on Gen AI
and Privacy: Risks, Realities, and What Comes Next.
Um, I am Nat Rubio-Licht.
I'm a senior reporter with The Deep View,
a daily newsletter on all things AI,
and I am so excited to be the moderator today.
Um, as generative AI reshapes how organizations operate,
it's redefining what responsible innovation looks like.
Understanding and addressing its privacy
risks isn't optional.
It's essential in building trust, maintaining compliance,
and enabling sustainable growth in the AI era.
That is what we are gonna be talking about today.
With that, I will pass it over to our panelists.
Today, I am joined by Jason Clinton, uh,
deputy CISO at Anthropic.
Jason, if you'd like to introduce yourself. Yeah,
Great to be here.
25 years in tech, a long time at Google,
and I've been at Anthropic for about two
and a half years, so that's been a wild ride.
As all of you know, the AI revolution is causing lots
of things to change.
So glad to talk about those today.
Yeah. Next we have Whitney Merrill, Head
of Data Protection, Privacy, and Compliance at Asana.
Uh, Whitney, if you'd like to give an introduction.
Hi everyone. It's so great to be here.
I've spent my entire career doing privacy
and consumer protection.
I started my career at the Federal Trade Commission
and then went in-house,
and I've been enjoying that ever since.
Looking forward to the discussion
Last, we have, uh, Sunil Agrawal,
chief Security Officer at Glean.
Uh, Sunil, if you'd like
to tell the audience a little
bit more about yourself, that'd be
Great. Absolutely.
Sunil Agrawal, been in the security space
for a good 25-plus years.
And for folks who don't know Glean, you know, we started
as a Google for enterprises, uh, a B2B search engine,
and graduated to being a ChatGPT for enterprises.
You know, you can ask questions, get answers,
and now a big focus is agentic AI.
So here I am. Thanks for having me on the panel.
Yeah. Thank you guys all again for being here.
Uh, let's jump right into the big question.
Um, is privacy keeping pace with innovation,
or is it lagging behind?
Jason, if you'd like to kick us off.
Yeah. If you are working at one of the
tech companies that's sort of been at the edge
of what's happening, you've seen technology
offering solutions to some of the privacy challenges
that we face from a regulatory,
but also from a customer demand perspective.
For example, it is the case today that you can use
row-level encryption to make sure
that only the customer can access their own customer
data, and you have no opportunities for insider risk
or any kind of technical error
that would lead to data leaks.
Those are very powerful controls, and
the technology is
in use today at the most robust organizations
that we all know of.
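For readers who want something concrete, here is a minimal sketch of the per-customer, row-level encryption idea Jason describes, assuming the Python `cryptography` package and an illustrative in-memory key store (a real deployment would hold keys in a KMS or HSM, not in process memory):

```python
# Minimal sketch of per-customer (row-level) encryption.
from cryptography.fernet import Fernet

class CustomerVault:
    """Encrypts each customer's rows with that customer's own key,
    so a dump of the table or a misrouted query never exposes other tenants."""

    def __init__(self):
        self._keys = {}  # customer_id -> key (illustrative; use a KMS/HSM in practice)

    def _key_for(self, customer_id: str) -> Fernet:
        if customer_id not in self._keys:
            self._keys[customer_id] = Fernet.generate_key()
        return Fernet(self._keys[customer_id])

    def encrypt_row(self, customer_id: str, row: bytes) -> bytes:
        return self._key_for(customer_id).encrypt(row)

    def decrypt_row(self, customer_id: str, blob: bytes) -> bytes:
        # Only the owning customer's key decrypts; another tenant's key fails.
        return self._key_for(customer_id).decrypt(blob)

vault = CustomerVault()
token = vault.encrypt_row("acme", b"order=123, email=jane@example.com")
print(vault.decrypt_row("acme", token))
```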
That said, um, AI is changing things relatively quickly
and in a world where, um, data makes it possible
for agentic systems to be useful, there's an opportunity
for anything that we think of
as a quote-unquote agentic system to fail in the same ways
that humans make mistakes.
People frequently, you know, fall
for phishing attacks and send, you know, spreadsheets
of customer data out the front door in response to an email.
And AI is vulnerable in the same way that people are.
And so we have to sort of assume that we need
to put the right guardrails in place in the same way
that we put guardrails in place for people.
Whitney. Yeah, I actually think, you know,
when I think about privacy keeping pace with innovation,
I think about the strong foundation that has kind
of been established over the last 20 years around privacy
and how a lot of that still applies to AI
and to the innovation happening right now.
And so, when I think about the core elements, right,
data processing, storage, the data life cycle, to me,
there's still a lot of governance around the data,
around the processing, regardless of,
you know, the pace of innovation.
Now, there's always gonna be some gaps,
but I think, like, I always think about looking back
to the fundamentals when thinking about
how to take on these challenges.
Yeah, absolutely. Sunil, any thoughts?
Yeah, I think about it from two different angles.
You know, one is from the regulation viewpoint: is privacy
keeping pace with innovation?
And I feel, you know, very good about this.
You know, again, as I've mentioned,
I've been in the tech sector 25 plus years.
We know when the internet went mainstream, you know,
this was the late nineties, early 2000s, there were a bunch
of privacy regulations over that period, correct.
But it took, you know, almost 15 years for GDPR
to come about
and start getting enforced in 2018.
It took a good 15-odd years, I would say.
When it comes to AI, we have been a lot faster.
You know, we have the NIST AI framework,
we have the EU AI Act; you know, everything came out in two,
two and a half years, and is likely
to get adopted the next year and the year after.
So overall, I would say the timeline has definitely
compressed from the privacy regulation
and the framework viewpoint.
Yeah. Now, of course, the adoption
of those frameworks within organizations,
that's a bit uneven.
Correct. Of course. You know, the good part is regulations,
frameworks are there, there are a lot
of security vendors out there, you know, providing
solutions that comply with those frameworks.
However, the mature organizations have adopted those things,
but maybe at the low end, there are still gaps.
Hmm. Absolutely.
How does data leakage occur in
generative AI in the first place?
Uh, Sunil, if you'd like to take this one.
Sure. Now, that's a good question.
And at least
when I think about it from a B2B perspective, I consider
the three ways that data leakage happens:
One is leakage into the model, leakage
around the model and leakage as a side channel.
And I'll talk about what I mean by that, you know,
So leakage into the model: as I just said, you know, a lot
of the organizational maturity
around AI adoption is somewhat uneven.
So what happens is,
although we have great foundation models, you know,
the Anthropics of the world providing you zero data
retention, zero training,
those things are applicable only if you are using the
enterprise version of those foundation models.
Now, many of the users have the shadow AI problem, you know,
they end up copy-pasting their PII
or their source code into apps not approved by the enterprise,
and you do not know what the retention policy is,
what model they're using behind the scenes.
So you end up leaking the data into the model
and they might train on your data.
So that's one form of leakage.
The second form of leakage is, you know, when I say leakage
around the model, now of course we know that, you know,
not every enterprise creates their own foundation model,
but they use this great technology
or architectural concept called RAG,
retrieval-augmented generation.
So you bring your enterprise data
to the foundational models.
When you do that, you know, you've got
to make sure you are only providing the data
that the user is allowed to access in an enterprise context.
If you do not have proper ACLs
or access controls in your RAG system,
you might provide additional data,
and the models might just spit out
that same data that you provided.
Correct. So there could be data leakage
happening around the model.
So that would be the second one.
The third one is what I would call
the side channel attacks.
You know, the model providers,
if you're not using one from, you know, some
of the popular ones like Anthropic,
what could happen is, you know,
they could be creating audit logs, telemetry dashboards,
where they could be leaking some of the information
that you fed in as part of the RAG architecture.
So there are many leakages that could happen.
So primarily three ways.
That's how I think about how leakage happens:
by providing the data to the model,
using the model incorrectly, or through side channel attacks.
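To make the second category concrete, here is a minimal sketch of permission-aware RAG retrieval, with a hypothetical Document structure and group-based ACLs; the point is simply that access-control filtering happens before anything reaches the model's context window:

```python
# Illustrative sketch only: ACL filtering precedes retrieval and prompt assembly.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)

def retrieve_for_user(query: str, user_groups: set, index: list[Document], k: int = 3):
    # 1. Keep only documents the requesting user is actually allowed to see.
    visible = [d for d in index if d.allowed_groups & user_groups]
    # 2. Only then rank them (a trivial keyword match stands in for a real
    #    vector search) and build the context that will be sent to the LLM.
    scored = sorted(visible, key=lambda d: query.lower() in d.text.lower(), reverse=True)
    return [d.text for d in scored[:k]]

index = [
    Document("hr-1", "Salary bands for 2025...", {"hr"}),
    Document("wiki-1", "How to file an expense report...", {"all-employees"}),
]
# An employee outside HR never gets salary data into the prompt at all.
print(retrieve_for_user("expense report", {"all-employees"}, index))
```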
Right. How do unintended outputs in generative
AI lead to IP or privacy exposure? Jason?
Yeah. Um, so very, very similar to what Sunil was saying.
I think another way of saying what he said is that,
fundamentally, a model's context window, the thing
that it's using to decide what it's going to be working
with, can contain all of the context that's necessary
for a model to achieve whatever the objective is
that it's trying to, you know, work on.
So you've got, potentially, you know,
privacy-related information in the context.
Let's just use an example,
like a customer service type example.
The model needs to know some PII about
the person who's requesting help.
And in doing so, if that same model has an opportunity
to call out to a place where that PII is not permitted
by policy, then there's an opportunity for that model
to transmit that information
to wherever, you know, the call goes.
The simplest and lowest-risk example
that I can give here would be, let's say
that there's a database that contains PII
that the customer service agents use.
And then there's another database that the
agents use for, let's say, shipping and receiving, and
PII is not permitted in the second one.
If you had an AI agent working on both of these,
you could imagine that the AI agent puts PII
in the second database by accident, even though it's,
you know, been instructed not to do that.
So that would be an opportunity for
that kind of leakage to occur.
The same kind of thing can happen if the model
involved in that example that I just gave can make outbound
calls to the outside internet.
So an example here would be, everyone is familiar
with Pastebin, as a place
where you can just paste some text
and then get a permanent link.
The model might, you know, pastebin some
PII onto some public website just
by making a trivial single HTTP outbound call,
you know, doing that kind of outbound networking.
So, you know, thinking about it from a privacy protections
perspective and the way we think about certifications and
the risk mitigations that we put in place,
ISO 42001 is a framework for AI risk mitigation.
And there's a draft standard
in the ISO 27000 range for privacy of AI systems,
the number of which
I'm blanking on right now.
I think it's ISO 27563
or something like that, still in draft.
Those are opportunities for organizations
to manage their risks around AI systems.
And so you would enumerate this risk
and then you would put some kind of control in place
to mitigate that particular kind of risk.
So I think it's a pretty important, uh, principle.
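As one concrete, purely illustrative example of such a control, here is a minimal sketch of a pre-execution check on an agent's tool calls, assuming a hypothetical agent runtime that routes every call through it and hypothetical tool names; real PII detection would need far more than two regexes:

```python
# Sketch of a guardrail that blocks external tool calls whose arguments look like PII.
import re

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
]

EXTERNAL_TOOLS = {"http_post", "pastebin_upload"}  # tools that leave the trust boundary

def guard_tool_call(tool_name: str, arguments: str) -> str:
    """Block (or route for human review) any external call whose arguments
    appear to contain PII; internal tools pass through unchanged."""
    if tool_name in EXTERNAL_TOOLS:
        if any(p.search(arguments) for p in PII_PATTERNS):
            raise PermissionError(f"{tool_name} blocked: possible PII in arguments")
    return arguments

# The agent runtime would call this before executing each tool invocation.
guard_tool_call("db_query", "SELECT * FROM orders WHERE email='jane@example.com'")  # allowed
# guard_tool_call("http_post", "jane@example.com bought item 42")  # raises PermissionError
```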
Absolutely. When deploying AI in, I guess, real-world
situations, what are some
of the privacy blind spots, uh, Whitney?
Yeah, I think, you know, a lot of the privacy blind spots
that we're seeing across any vendor tools still apply
to ai, so calling those out.
But I think in the AI tooling
and community in particular,
we're talking about relatively new companies,
relatively new systems re relatively new to tooling,
which means they likely on,
on their side have immature processes
or processes that rely on manual
or human intervention in order to complete them.
Meaning the chances for something not being completed
or not being checked
or not actually happening is really high.
So you could see that along the lines of asking an AI vendor
to have zero retention, right?
Are they actually implementing it?
Can you actually verify
that they aren't retaining that data?
Are they doing what they say that they're doing with that?
And how can you verify that on your end?
I think with a lot of, um, these newer companies,
they're growing really quickly.
It's even hard for them to keep pace with regulations
and compliance requirements of larger
enterprises generally,
because everything's changing and moving so quickly.
So I think like one of the big, like things
that can happen is they just don't do
what they say they're doing,
and it's not like an intentional lie,
but it's just that mistakes happen
and they create these incidents.
So I think that's one thing that can happen,
especially in the AI world
and things that I've generally observed.
Um, the other thing is on the employee usage of AI.
I think that you have a shadow IT problem,
and this is if you don't provide a paved path
of using these tools in a way
that have the right guardrails,
that have the right protections in place,
employees will find a path to use it in some way.
And even if the tooling
that you provide doesn't meet their standards
and doesn't hit the needs that they have,
they're gonna find a path to use something else.
And I've heard this anecdotally from others in the privacy
field, others who just generally like using AI, it's like,
oh, I've tried this, it doesn't work for me,
so I just anonymize my data
and stick it in some other random unapproved tool.
And I think the important thing here is
to really set out those guardrails
and provide robust solutions for employees so
that you can really make sure
that you're protecting the data, um, the confidential data
that's in your possession as a company.
So those are the two biggest ones
that I see from my perspective.
There are obviously a whole bunch of others,
but those I think, um, stand out to me
and if you can focus on those
and potentially mitigate the risks around those, um,
you're gonna be on a better path in the long run. Yeah,
I love Whitney's answer.
Just to add one more anecdote to her example
of the paved roads.
Uh, it is the case today
that people can just take their phone
and hold it up to the screen
and get an answer from whatever AI they want.
You know, the AIs can just read the text on the screen.
So if you are not, you know,
putting some golden path in place
and you allow phones in your office,
you're already opening yourself up to some kind of
opportunity for that to occur. Yeah,
And I also think it creates a tension too, right?
If people find out that that's happening,
they're gonna lock things down more
and this weird tension's going to happen.
So I think there's obligations on all sides
of those using it to find the right balance internally.
And you can do that through training
and enablement to make sure your people understand, right,
those guardrails and why they're important, um,
and why confidentiality potentially can be destroyed.
Because the last thing you want is then to tell employees,
Hey, we've had enough incidents
where people are putting up their phones
and now we've had data leakage
that now you can't have phones in the office.
Right? We don't wanna end up in that world.
Yeah. Yeah. So, Whitney, you mentioned, uh, regulation,
and I'd like to, uh, hone in on that a little bit
with the evolving nature of regulation around AI.
How should businesses navigate compliance when,
uh, deploying the tech?
I think, to my original point, it's: start
with the foundation, start with the fundamentals.
Also look at ways that you can implement
AI governance processes within existing privacy processes.
I think, you know, one of the things I both love
and kind of hate about the conversation around AI is
that people are like, well, it's different and separate.
And you're like, yes, in many ways it is,
but AI's been around for a very long time.
I've been doing some sort of AI
and privacy for the last 10 years.
Um, and I think it's just another version of processing.
So start with the fundamentals.
Look at how you can make sure that you're, um,
looping it into processes, vendor risk assessments,
privacy impact assessments, et cetera.
And I think if you don't have those things in place,
AI's gonna give you the excuse, right?
To say, Hey, we need to mature our compliance program,
or we need to mature our privacy processes
because of these things that are getting more attention,
have higher risk, whatever it might be.
And so I think if you think of
that from a regulatory perspective,
you're not gonna feel like it's just coming at
you from every direction.
I think from there, after you build the fundamentals,
you can start to focus in on the other pieces of regulation,
what's changing, what's tweaking.
And I think this is true of any privacy regulation.
Like everything's coming to everyone on this call.
I'm sure you're all going, oh,
another thing is changing in privacy
or ai, like, it's always going to change.
And so stick with the fundamentals
and then just build layers on top of that
to mature your program over time.
Yeah. So, Whitney, do you have any suggestions?
You know, as we are adopting coding assistants, you know,
we at Glean are adopting them, and we are generating a lot
of code at an amazing pace.
Correct. And we used
to do the PIAs, the DPIAs, a little bit more manually in the
past, but of course that is not going
to work going forward.
Any thoughts as to how we keep pace with our DPIAs,
with the pace at which we are generating
code? Yeah, I mean,
I think it's really difficult.
I think that's one where we would look for some sort
of innovation in privacy tooling to be able
to do automated scanning of code to better understand
what are the actual data flows happening?
Is the code introducing any sort of security bugs
or privacy bugs in ways that are unintended?
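As a purely illustrative sketch of what such automated scanning could look like, here is a hypothetical CI check that flags diff lines where PII-looking fields flow into logging or network sinks; a real tool would use proper data-flow or taint analysis rather than regexes:

```python
# Sketch of a hypothetical pre-merge privacy scan over a unified diff.
import re
import sys

PII_FIELDS = re.compile(r"\b(email|ssn|phone|dob|address)\b", re.IGNORECASE)
RISKY_SINKS = re.compile(r"\b(logger?\.\w+|print|requests\.post)\s*\(")

def scan_diff(diff_text: str) -> list[str]:
    """Flag added lines where a PII-looking field reaches a log or network sink."""
    findings = []
    for n, line in enumerate(diff_text.splitlines(), 1):
        if line.startswith("+") and PII_FIELDS.search(line) and RISKY_SINKS.search(line):
            findings.append(f"line {n}: possible PII reaching a log/network sink: {line.strip()}")
    return findings

if __name__ == "__main__":
    for finding in scan_diff(sys.stdin.read()):
        print(finding)
```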
But I also think,
I know training is never the bulletproof vest,
but I really think training
and enablement for engineers, um,
who are using generative AI is really important.
You need to be responsible for your code,
and you need to make sure that you are following those
processes and you are actually teaching security by design,
privacy, by design to those individuals
so they can spot those pieces.
Um, I was quite shocked when I was in grad school
for engineering, for computer science.
They didn't teach secure coding practices
and they didn't teach privacy,
like secure privacy practices at all.
And I think this is an opportunity to really build
that into the fundamentals of coding.
Couldn't agree more.
And just to add on, uh, it does seem to me
that the entire SDLC is going
to radically alter over the next few years.
Um, and so like literally everything
that we do in software delivery, not only the writing
of the code, but the continuous integration,
the continuous delivery, the, you know, the QA process,
and then the post, um, post-delivery,
post-production monitoring, all of those are going
to be radically altered by AI.
So there's a huge opportunity to put those controls in
as mitigating controls
for whatever's happening in a coding step.
Absolutely. So, as we are all well aware,
AI is constantly changing and evolving.
Um, Jason, why is it important to consider the evolution
of this tech when preparing for the future?
Yeah, so, um, everything
that's happening in AI right now was predicted, um,
more than 15 years ago by folks who were paying attention
to something that we call the scaling laws.
So if you go back,
there's actually this really great
nonprofit called Our World in Data.
If you go look up Our World
in Data, they have a chart on the amount of compute
that's gone into AI models over time.
The chart starts in 1957.
So the perceptron is in the lower left-hand corner,
and you see all the AI models that have been discussed
since then, and it's more
or less a four-x year-over-year increase in the total amount
of compute that goes into these
AI models for their training.
So if you just like took a line through the last 70 years
of trend, and you sort of speculated like,
where are we going to be in terms of the amount of compute
that's going into AI models today?
You would automatically come
to the conclusion that we would have models that cost,
you know, hundreds of millions of dollars to train,
as we are now seeing.
And so all of this was well, well anticipated.
Um, I'm not a betting type of person.
I'm not wanting to go to Vegas
and put some bets down on the table,
but I would be willing to bet that the trend lines are going
to keep going for the next few years at least, at least from
what we can see inside the industry.
So when you think about building your program for ai,
whatever your mitigations are, you have to plan
for the models getting smarter.
It will happen, uh, at least for the next two years.
And so you have to think to yourself, okay,
if the model's gonna be four x more compute next year
and 16 x more compute the year after that,
and I'm putting in place a program that won't launch
for six months from now, you know,
what is the future of AI even going to look like
by the time my program launches?
Those are fundamental questions that you
as a leader in your organization need to be asking yourself.
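A trivial back-of-the-envelope sketch of that planning arithmetic, taking the roughly four-x-per-year compute trend at face value (illustrative only, not a forecast):

```python
# Project training-compute growth under the ~4x/year trend Jason describes.
GROWTH_PER_YEAR = 4.0

def compute_multiple(years: float) -> float:
    """How much more training compute the trend implies after `years`."""
    return GROWTH_PER_YEAR ** years

for years in (0.5, 1, 2):
    print(f"{years:>3} year(s) out: ~{compute_multiple(years):.0f}x today's compute")
# 0.5 year(s) out: ~2x, 1 year: ~4x, 2 years: ~16x -- the horizon your program must plan for.
```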
Chatbots are ancient history as far as
we are concerned today.
And, as Sunil mentioned, agentic is
where we are now.
Now, thinking personally, if I had to look out a few
months to years in the future, it's starting
to look much more like this.
Um, the reason that we're doing all of this is
that these models can automate some manual labor
that we don't want to do as people, right?
And so, the more
and more that the models get smart, the more
and more there will be, you know, parts of workflows
that people do in organizations
that are completely autonomous.
They won't be just trivial, you know,
chatbots, or trivial sort
of input-output systems.
They will have lots of degrees of freedom.
They'll be like a little adventurer
going on an adventure in your enterprise.
And their goal is to solve some task,
and they'll have, you know, dozens of tools available
to them to achieve their task, whatever that is.
And in that world, the opportunities for privacy
issues,
as we've been discussing earlier in the call,
those are all places where you wanna put the guardrails
in to make sure that that goes well, so that you
as an organization can extract the economic value
that we're all chasing here.
Yeah, absolutely. What are, excuse me, sorry.
What are some of the key challenges today
of AI adoption in B2B
and enterprise spaces compared to B2C, uh, Sunil?
Sure. Now, you know, everything that Jason talked about,
all the great innovation that's happening, you know,
and coming to the consumer space, you know,
enterprises love it.
Correct. They want to bring all
that innovation within the enterprise,
however, you know, in a safe, compliant, and privacy-preserving manner.
Correct. So I would say that's the fundamental challenge.
Correct. How do we bring all the amazing innovation
happening in the consumer space within enterprise?
Within the enterprise, of course, you know, one is data
isolation; that's a big thing.
The big difference between the consumer
and the enterprise space is, you know,
making sure your data is isolated from
every other customer's data.
And then, as the employees are using it,
strict permission enforcement:
they're only getting access to the data
that they're privy to, and through AI, through
LLMs, they're not able to bypass any of the controls
that the enterprises have put in over the years.
Very fundamental difference. Correct.
When you go to Claude or ChatGPT, you ask a question,
those models are trained on everything publicly available.
Correct. You're not concerned that, you know,
you might get access to someone else's information.
In enterprises, by and large, that's different. Okay.
All the stuff that Jason was talking about, you want
to automate a lot of those mundane workflows, you know,
as your agents are going on that journey.
Correct. You've got to make sure
that there are sufficient guardrails at every point so
that it cannot violate the basics of enterprise security.
Yeah. And of course, I want to make sure that, you know,
whenever people hear guardrails,
they say, oh, this is going to slow us down.
No, I always say guardrails don't slow you down.
They actually allow you to go faster,
because you don't have to always be thinking about
applying your brakes, because
there's sufficient security there.
Correct. So that is what I would say: fundamentally,
we've got to be thinking differently.
And it's not a one way street, you know, of course
innovation happens in the consumer space.
We bring it to enterprise now.
Everything that's happening in the enterprise in terms
of privacy compliance, now
that's going back to the consumer.
So it's a two-way street.
Now, the consumers are expecting
there to be sufficient privacy.
Even when I'm using my consumer version of Claude
or ChatGPT, I want transparency.
I want control over my data.
I should be able to see my memory,
delete my memory if I want
to, from the ChatGPTs of the world.
Mm-hmm. So I would say that it's a two-way street
and there are definitely fundamental differences,
but both are benefiting from each other. Yeah,
Absolutely.
Uh, before we get into some closing thoughts, uh,
I wanna discuss the future one more time.
Uh, looking ahead, how do you guys see the relationship
between AI and privacy evolving in
the next five to 10 years?
Jason?
Um, gosh, five to 10 years is such a long time.
These models are getting so smart, so fast.
And that really leads me to the immediate answer
that I have in my mind, which is, I think
it's really important actually
to impress upon the audience that in that timeframe,
we're talking about models which approach human performance
in pretty much every cognitive task.
Maybe robotics will be much longer than that,
but five to 10 years is, um,
I think the most pessimistic view at a frontier lab is
that something like a system
that can do pretty much anything a human being can do is
at most 10 years away.
And I think the more
optimistic folks are on the three
to four year time horizon.
So privacy
and security for every organization in that window
is gonna look a lot more like insider risk programs,
insider risk from the security side and the privacy side.
This is true of both
programs; it has been a concern for all of us for about,
you know, six years or so.
You know, insider risk became a really
big focus about six years ago.
Somebody asked in the chat what's on the ten-year horizon:
basically, an AI model that can perform as well
as a human at any cognitive task, literally any kind
of knowledge work,
let's say at the 80th percentile of human performance on
literally anything, is at most 10 years away,
possibly as soon as three to four.
So in that world, um, we need
to be thinking about the models as an insider threat,
just like we think about people. You know, 99 out
of a hundred people are amazing and great
and are wonderful coworkers,
but there's that one bad apple that we need to be able
to put the guardrails in
and make sure that if there is something
that's going wrong there, that we're catching it
before it causes major damage to our business.
And that's gonna be true of, of the models in
that kind of time range.
Like they will need to build trust.
Um, you won't onboard a virtual AI employee
and just give it access to everything.
You'll, you know, sort of work with it
and get to the point where you feel like, you know,
950 times out
of a thousand it does exactly what it's supposed to do.
And you get to that point where you've got
that trust built up, that, you know,
that AI has been putting trust in that trust piggy bank,
and you're at a place
where you feel like you can trust it.
Um, you know, that's the same thing we do
with employees when we onboard them.
The, the first day an employee comes into the company,
they don't get access to everything.
Yeah. They, they earn trust over time.
And so we just have to sort
of start thinking that way. Mm-hmm.
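Here is a minimal sketch of that "earned trust" idea as graded access for an AI agent; the thresholds, review counts, and scope names are illustrative assumptions, not a prescribed policy:

```python
# Sketch: an agent's access scope widens only as its observed success rate
# over supervised runs grows, mirroring how a new employee earns trust.
def access_scope(successes: int, runs: int) -> str:
    if runs < 100:
        return "sandbox-only"                      # new "hire": everything reviewed
    rate = successes / runs
    if rate >= 0.99:
        return "production, sensitive systems"
    if rate >= 0.95:
        return "production, low-risk systems"
    return "sandbox-only"

print(access_scope(successes=952, runs=1000))      # widened, but still not unrestricted
```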
Absolutely. Alright.
Uh, we are coming to the end of our time.
So before we wrap up, if, um,
let's do a quick round of closing thoughts.
Um, what is, I guess, a takeaway
that you think enterprise tech leaders
and privacy leaders should have from today?
Uh, Sunil, if you wanna kick us off,
Well, I would say, you know, AI is your friend.
You know, definitely AI is causing concerns, you know,
but I would say, you know, the
odds were already stacked against the security team.
Take any company: we had a small security team,
and you had potentially billions of attackers out there.
Correct. Now with AI, we can actually have, you know,
an infinite number of inside security folks helping you against
the insider threat Jason just talked about.
Mm-hmm. Correct. So I'm very bullish.
So use AI to govern AI, use AI
for the privacy assessments that Whitney was talking about.
Correct. Because, you know, the pace of innovation,
everything is going to change, use the technology
to actually manage the technology
that created this entire revolution.
Yeah. Whitney,
I think, um, I'm just gonna echo a point Sunil made
earlier, which is controls actually help you move faster.
Guardrails help you move faster.
And I think, um, seeing
and communicating that to the teams you're working
with will help, um, drive them to believe in the mission
and it'll feel like a lot less tension.
I think, you know, sometimes privacy is seen
as something that's gonna hold back
or stop the innovation of AI,
and I don't think that's true at all.
I think, you know, as we continue into the future,
we're gonna be asking ourselves less, can we do that
with AI, and more, should we do that with AI?
And I think the right controls
and the right guardrails will help us make those decisions
and then adoption will become cleaner, clearer, um,
as we continue on
because people will be able to adopt as they see fit
with the control setup for them.
Absolutely. Jason. Yeah.
So look, I agree with everything Sunil
and Whitney said, but I think one
really important principle that every leader dealing
with AI adoption right now should be putting in their tool
belt is you get to decide the level of autonomy
and autonomy equals risk.
Mm-hmm. So if the risk tolerance in your organization is
very low, or you're in a highly regulated industry
where you don't feel that it's appropriate for full levels
of autonomy, you get to put that slider on the scale
of completely autonomous to very,
very guarded at the lowest end of that setting,
where you can say, we're gonna do AI in the most, you know,
guardrails-heavy,
operationally restricted deployment.
And then, from a risk management perspective,
from your ISO 42001
or whatever framework you're using for managing risk,
you have a very, very good risk management story
for that kind of deployment.
If on the other hand, you're a small startup
and you have nothing to lose and you wanna vibe code your
way to success,
and slide that slider all the way to full agent mode,
and give it access to everything,
you know, that's your choice.
Uh, you have the opportunity to do that
and every organization gets to decide where on
that spectrum they are comfortable
accepting risk, and yeah, there's no one right way to do it.
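As a final illustration, here is a minimal sketch of that "autonomy slider" expressed as a deployment policy; the policy names, fields, and thresholds are hypothetical and only show how an organization might encode where it sits on the spectrum:

```python
# Sketch: autonomy level as an explicit, auditable configuration choice.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyPolicy:
    name: str
    allow_external_network: bool
    require_human_approval: bool
    max_tool_calls_per_task: int

GUARDED = AutonomyPolicy("guarded", allow_external_network=False,
                         require_human_approval=True, max_tool_calls_per_task=5)
FULL_AGENT = AutonomyPolicy("full_agent", allow_external_network=True,
                            require_human_approval=False, max_tool_calls_per_task=500)

def pick_policy(regulated: bool, risk_appetite: str) -> AutonomyPolicy:
    # A regulated enterprise slides toward guarded; a startup may accept full autonomy.
    return GUARDED if regulated or risk_appetite == "low" else FULL_AGENT

print(pick_policy(regulated=True, risk_appetite="low").name)   # guarded
```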
Absolutely. Thank you so much, Jason, Whitney,
and Sunil for your insight today.
And thank you everyone for joining us.
We're now gonna break for 30 minutes
before sessions resume at 11:00 AM PT and 2:00 PM ET.
After the break, there's gonna be a panel discussion on
staying ahead of AI with, um, all of the changing global
privacy regulations.
Can't wait to tune into the rest of today's session.
And thanks again everyone.