Businesses of virtually every size and industry are feeling the pressure to quickly adopt AI capabilities in hopes of unlocking organizational efficiencies and competitive advantages. After a number of high-profile privacy lawsuits on the topic of AI, it’s understandable that many privacy managers are alarmed about AI risk. While some teams have responded to the very real risks of fast, unmitigated AI adoption by attempting to completely shut down any AI use, this approach is increasingly untenable.
Meanwhile, regulation feels two steps behind. The EU AI Act faces potential implementation delays, and while some U.S. states have introduced their own AI regulation, the resulting patchwork has been both piecemeal and plentiful, with nearly 500 AI-related state bills introduced in 2024 alone. Many privacy managers have been tasked with AI governance, but must develop their own guidelines in real time.
The simplest place to start is in procurement. AI adoption is widespread, and if your company doesn’t provision AI tools, employees are likely to deploy their own, leading to risk that is much more difficult to track, measure, and reduce. But how do you pick a vendor that has your best privacy interests at heart?
Measuring privacy culture
As with any risk, you’re safest with a vendor that has embedded privacy-by-design into its culture and abides by the minimum-necessary principle. That means your vendors should commit to keeping only the minimum data necessary, for the minimum amount of time, to produce their results.
At Ventrilo.AI, privacy and security practices go beyond the letter of the law. “Every engineer cares about the privacy and security of our customers and the reputation of our company. We hire for it and talk about it regularly. There’s no replacement for that. Checkboxes can’t ensure developers do the right thing, they have to care,” Chou explained.
Culture can be difficult to measure in a procurement process, but if your potential vendor values privacy as much as you do, they should be able to articulate their own privacy and security practices with ease. This could include:
- Documenting all third-party vendors they use, including data privacy agreements for each vendor
- Detailing their data storage security, encryption, and access permissions management, including demonstrating SOC 2 compliance
- Providing examples of the training their own engineering and/or legal teams receive on privacy
- Investing in privacy solutions of their own – for example, Ventrilo.AI uses DataGrail to automate its own privacy compliance.
DataGrail’s own machine learning expert Stephanie Kirmer encourages privacy teams to dig in on how vendors structure their data. For example, if data is aggregated and deidentified, it poses a far lower risk of exposing individuals’ information. Storage and processing of nontextual data, like audio or video files, can also make risk limitation more challenging.
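To make that concrete, here is a minimal sketch – assuming Python with pandas and an entirely hypothetical usage-log schema – of the difference between row-level identifiable data and an aggregated, deidentified view of the same activity:

```python
import pandas as pd

# Hypothetical raw usage log a vendor might hold: each row ties activity
# directly to an identifiable person.
raw = pd.DataFrame({
    "email":      ["ana@acme.com", "bo@acme.com", "ana@acme.com", "cy@acme.com"],
    "department": ["Finance", "Finance", "Finance", "Legal"],
    "feature":    ["summarize", "summarize", "draft", "summarize"],
    "tokens":     [512, 340, 1210, 87],
})

# Deidentify: drop the direct identifier entirely rather than hashing it,
# since a hashed email can often still be re-linked to a person.
deidentified = raw.drop(columns=["email"])

# Aggregate: keep only counts and totals per department and feature, so no
# single row describes one individual's behavior.
aggregated = (
    deidentified
    .groupby(["department", "feature"], as_index=False)
    .agg(requests=("tokens", "size"), total_tokens=("tokens", "sum"))
)

print(aggregated)
```

Even aggregated data can carry residual risk when the groups are very small, so how a vendor handles low-count segments is another worthwhile question to ask.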
Mitigating AI training risk
It’s important to ask your prospective AI vendor whether they will use your data to train their model. According to Andy Chou at Ventrilo.AI, the answer should be a simple “yes” or “no” explicitly articulated in data processing agreements (and for the record, Ventrilo.AI doesn’t train on customer data).
Depending on the subject matter, AI modality, and data protection strategy, a vendor using your data to train its model may not always represent an excessive risk to your company, but at minimum the vendor should not have difficulty answering this question openly. A vendor with a vague answer is a red flag.
You may also want to look for vendors with a straightforward training opt-out experience. However, remember that once your data has been used in training, completely removing it from the model can be extremely complex. It is reasonable and expected for an opt-out to take time – in fact, an immediate turnaround could even signal that the data is not actually being extricated from the model.
Continue assessing risk post-purchase
Of course, not all AI usage at your company will come from net new products. Your colleagues are also likely to take advantage of emerging AI capabilities in existing purchased products. This can be especially dangerous, as the usage comes into play before your team has any opportunity to complete an AI risk assessment. It can be too easy for an end-user to expose sensitive or proprietary data to these models without recognizing they are creating unvetted business risk.
This “Shadow AI” is exactly why it is so critical to monitor your company’s tools beyond procurement. With DataGrail Live Data Map, you can easily surface systems with detected AI capabilities at any time, even the tools that didn’t have AI capabilities when they were first purchased. Plan on vetting your system inventory periodically to review and document all AI usage and prevent unnecessary risk such as vendors training on your proprietary data.
Your next steps
Once you’re ready to start preparing your own internal guidelines for responsible AI usage, use our worksheet to create AI principles consistent with company values.