
What Businesses Need to Know About AI and Privacy

Alicia diVittorio, December 1, 2023

On September 20, 2023, industry leaders came together in San Francisco for the DataGrail Summit, a series of keynotes, presentations, and conversations on the future of data privacy. Attended by some of the world’s most trusted brands, the event dove into the most pressing privacy issues facing businesses today, including artificial intelligence, regulation and compliance, and customer satisfaction.

Of these, AI stole the show. This is hardly surprising: while generative AI arrived on the scene with a bang thanks to ChatGPT, business leaders are still scrambling to understand its place in their organizations.  

But even though there’s still so much we don’t know about AI, now is the time for companies to take action. In this piece, we’ll talk through the pros and cons of how businesses are already using these tools before outlining a strategy for discovering, monitoring, and controlling AI and privacy in your organization. 

AI and business: an introduction 

According to a recent survey, more and more companies are turning to AI for help with areas including customer service, fraud and inventory management, recruitment, and writing code. And those are just the cases that we know about. As KSG co-founder Alex Stamos pointed out in his DataGrail Summit keynote, “Many companies have or are going to have shadow IT where generative AI is being used without anyone knowing.”

But how exactly are businesses using AI today? And what do we currently know about the pros and cons of this use?

How are organizations putting AI to use today?

For businesses, generative AI represents the biggest IT shift since the introduction of software-as-a-service (SaaS). While much has been said about the economic potential of these tools, most workers haven’t used ChatGPT, and fewer still believe it will actually impact their jobs.

Behind the scenes, though, AI is quickly making its way into nearly every organization. According to Pew Research, use of ChatGPT at work increased by 33% between spring and summer 2023 alone. Other studies show that AI tools are already widespread across industries, where companies are putting them to work for marketing and sales, product development, and service operations.

As the adoption of these technologies continues, many tech and tech-enabled firms will find themselves both developing and consuming generative AI products. Firms that race to implement AI solutions, however, risk privileging agility over governance. Weighing the pros and cons of AI use in business makes it clear just how essential it is to slow down and develop a sound AI strategy.

The pros of organizational AI

Even though large language models (LLMs) have been met with a healthy dose of trepidation, they hold enormous potential. As Barbara Lawler, President of the IAF, insisted during the DataGrail Summit’s closing fireside chat, companies are already streamlining their teams’ operations and personalizing customer experiences with these tools.

Law firms, for instance, have realized that AI can cut billable hours and improve profitability by automating discovery, research, and document review. But it’s important that organizations like these also take precautions; legal documents tend to be incredibly sensitive, which means that partners have to develop processes that keep that information protected.

When AI introduces risk 

Still, as Barbara Lawler was quick to remind us, “Robots do what we tell them to do, not what we need or want them to do.” This is where some of the risk lies in AI.

DoNotPay CEO Josh Browder added that just as it’s growing increasingly difficult to trust the authenticity of videos and photos on the Internet, user identities are becoming harder to verify. This makes companies that use AI susceptible to a new wave of scams that will require innovative security processes and measures.

Other AI dangers aren’t so new. In 2018, for instance, Amazon was forced to scrap a secret AI recruitment tool that showed bias against women. In 2021, reports emerged about AI lending systems systematically denying mortgages to Black applicants. 

These examples are sobering reminders that AI only knows the data upon which it is trained. Because training consists of feeding an algorithm data so that it can make predictions on the basis of that data, biased datasets tend to create biased AI. 
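
To make that mechanism concrete, here’s a deliberately simplified Python sketch. The groups, numbers, and “model” are all invented for illustration; no real system is this crude, but the dynamic is the same: a model fit to skewed historical decisions reproduces the skew.

```python
# Toy illustration (not any vendor's system): a "model" that learns
# hire rates from biased historical data and then reproduces that bias.
from collections import Counter

# Hypothetical historical hiring decisions, skewed against group "B"
history = (
    [("A", "hire")] * 80 + [("A", "reject")] * 20
    + [("B", "hire")] * 20 + [("B", "reject")] * 80
)

# "Training": estimate the probability of a hire for each group
counts = Counter(history)
hire_rate = {
    g: counts[(g, "hire")] / (counts[(g, "hire")] + counts[(g, "reject")])
    for g in ("A", "B")
}

# "Prediction": the model simply mirrors the historical skew
for group, rate in hire_rate.items():
    print(f"Group {group}: predicted hire probability = {rate:.0%}")
# Group A: predicted hire probability = 80%
# Group B: predicted hire probability = 20%
```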

It’s precisely this realm of AI training that organizations need to recognize as a potential business risk. Currently, the training data behind many AI systems remains a black box. This raises the question: how can CISOs tell whether their sensitive internal or customer data is being used to train AI, putting their company at risk?

Answering this question means developing strong organizational frameworks for understanding and managing AI integration. 

AI strategy today: discover, monitor, control

Because a firm’s position in the generative AI supply chain will determine both its risk landscape and the mitigation options available, cookie-cutter action plans are impossible to implement. 

Alex Stamos closed his keynote by outlining three key action steps that organizations should take to reap the benefits of AI and shore up privacy:

  • Discover: Where is generative AI being used in your supply chain or by third-party suppliers? Is any of your sensitive data being uploaded to the cloud or used to train AI? Are your developers using AI? Where else are you consuming generative AI products like ChatGPT?
  • Monitor: Once you’ve completed discovery, keep track of where and how you’re using generative AI. This could mean conducting monthly audits or rolling out training to ensure that employees understand internal AI policies.
  • Control: If possible, implement controls to reduce risk.

Unfortunately, controls are essentially impossible to implement fully in 2023; it’s too difficult to predict which direction AI development will take, and most organizations can’t discover every instance in which AI is being used.

Nevertheless, it’s still possible, and essential, to discover and monitor. Start by building a spreadsheet that details how each of the three categories above relates to your company’s use of AI. Then, instead of trying to boil the ocean, focus on the first column (discovery) before talking with your executives about the scope of your exposure. From there, you can design processes for identifying which parts of your system are at risk, the impacts those risks could have, and how to mitigate the potential consequences.
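
As a minimal sketch of what that spreadsheet might look like, here’s one way to bootstrap an AI-usage inventory in Python. The column names mirror the discover, monitor, and control framework above; the rows and the file name are hypothetical placeholders, not real findings or recommendations.

```python
# A hypothetical starting point for an AI-usage inventory spreadsheet.
# Columns follow the discover / monitor / control framework.
import csv

COLUMNS = ["discover", "monitor", "control"]

# Each row records one place generative AI touches the business.
# All entries below are illustrative placeholders.
inventory = [
    {
        "discover": "Support team drafts replies with ChatGPT",
        "monitor": "Monthly audit of prompts for customer PII",
        "control": "TBD: no vendor-level control available yet",
    },
    {
        "discover": "Developers use an AI coding assistant",
        "monitor": "Quarterly review of repository access settings",
        "control": "Opt out of training on company code where offered",
    },
]

# Write the inventory to a CSV file that can be opened as a spreadsheet
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(inventory)
```

Filling in the “discover” column first, before the other two, maps directly onto the advice above: you can’t monitor or control uses of AI you haven’t yet found.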

Want to learn more? Check out our DataGrail Summit on-demand sessions for more on AI strategy, data privacy, and regulatory compliance.
