Organizations across industries are experimenting with generative AI in areas like marketing, sales, customer support, and product research. The results, in some cases, have been impressive, and businesses continue to take notice and explore potential uses for the technology.
But questions surrounding the data privacy issues of generative AI remain, especially as they relate to consent—as DataGrail CEO Daniel Barber covered in VentureBeat. Since generative AI platforms require data in order to run, user consent and data privacy need to be chief among any discussions about generative AI and its implementation in any industry.
Let’s take stock of the current controversies surrounding consent and privacy with generative AI and explore the implications for consumers and businesses.
Generative AI Risks Spur Action Through Entire Countries and Industries
By now, you can’t get far into a conversation (or a Google search) about generative AI without encountering questions about its various risks, as well as reports of action both in the U.S. and abroad to address those concerns. In March, Italy briefly banned ChatGPT until OpenAI addressed the country’s concerns and put measures in place to assure regulators that EU citizens’ data privacy rights were being observed. Spain and Canada have launched probes over similar concerns, and other countries have considered precautionary actions as well.
In the U.S., Congress has introduced legislation aimed at regulating the government’s use of AI and monitoring its competitiveness relative to other countries, and President Joe Biden recently signed an executive order that prohibits or restricts U.S. investments in Chinese artificial intelligence systems.
However, most stateside action addressing generative AI has come from outside the government. The first half of 2023 has been marked by various creative industries taking legal or collective action against generative AI platforms and companies, as well as against employers looking to adopt the technology.
In the art world, creators are currently in litigation with Stability AI and Midjourney over the unauthorized use of their copyrighted work. Getty Images also found itself in a legal battle with Stability, accusing the company of using 12 million of its photos without credit or compensation. And recently, comedian Sarah Silverman, alongside two other authors, filed class-action lawsuits against OpenAI and Meta over the use of their written work to train AI platforms.
Generative AI is also one of the chief concerns at the heart of the current WGA and SAG-AFTRA strikes. Writers for film and television are voicing concerns about AI being indiscriminately implemented in union-covered projects, as well as about material covered under union agreements being used to train AI. On the other side of the camera, actors are concerned about their likenesses being replicated by AI without pay or permission.
Generative AI’s Consent and Privacy Implications for Consumers and Companies
If data is publicly accessible, it’s likely been scraped to train generative AI models and put to use within platforms that harness the technology. Given the size of most people’s digital footprint, that means there’s an overwhelming amount of personal data at generative AI platforms’ disposal.
So, organizations that use generative AI in any capacity (the use cases for which are steadily growing) need to keep privacy and consent in mind. Any information you feed into the algorithm is no longer confidential, which matters for high-value company data like proprietary product-focused content and internal communications. It’s for this very reason that Samsung banned ChatGPT after an employee accidentally uploaded sensitive code to the platform.
But, of course, an even greater concern is how you treat the sensitive data of customers with whom you hope to foster a relationship built on trust. Every interaction with an AI system carries the risk of personal data (names, addresses, contact details, or financial and health information) being collected and processed. That means customer data may be shared with a third party that customers never granted access to when they agreed to let you use their data in the first place.
As we explore in our Privacy Trends Report, consumer concern about data privacy (as well as about generative AI) has been growing over the past few years, especially as the reality of our digital world has set in. With public awareness of AI increasing, we can expect concern about AI to grow in kind.
DataGrail Is Staying On Top of Developments with Generative AI
Generative AI isn’t going away, and even if the U.S. does begin regulating its use, the technology will probably remain several steps ahead of whatever capabilities fall under the latest legislation.
As the technology progresses and gets refined, its use cases and capabilities will increase along with the opportunities to use generative AI in your organization. With that will come more potential pitfalls as it relates to data privacy and consent. Adopting new technologies conscientiously means addressing these pitfalls and risks—and the more proactive you can be, the better. This starts with:
1) Developing ethical use policies for generative AI before aggressively exploring ways to implement it.
2) Doing what you can to discover where AI is used in your supply chain. Once you’ve identified where it appears, communicate the size and scope of the problem to your executive team.
3) Actively monitoring where generative AI is used in your supply chain, by your third-party vendors, and by your employees. In the absence of any real ways to control AI usage, set up regular audits to ensure you’re following your ethical use policies.
We’ll continue covering developments within the world of generative AI and offering action items that help companies respond to these developments as they occur, so subscribe to our blog and newsletter if you’d like to stay posted.
If you’re ready to explore an approach to data privacy that enables you to capitalize on new and exciting tech while honoring consumer concerns and respecting their data privacy rights, contact DataGrail today.