
The Future of AI: Recapping the IAPP P.S.R. 2022 Keynote

DataGrail, November 7, 2022

Thirty years ago, would you have imagined a world with smartphones, self-driving cars, and constant connectivity? That was just one of the questions posed during the keynote at the IAPP Privacy. Security. Risk. (P.S.R.) conference in Austin, TX.

Joining thousands of other privacy professionals, the DataGrail team listened to Mo Gawdat, author and former Chief Business Officer at Google [X], talk about how life and technology have changed — and will change — drastically, thanks to artificial intelligence.

According to Gawdat, there are three (somewhat scary) inevitabilities for our future.

AI Will Happen

Gawdat says there’s no escaping AI. In fact, we each have more encounters with AI every single day than we probably realize, from the content an algorithm surfaces on social media to the route Google Maps recommended this morning.

And this is just the beginning, with new AI experiences being developed day in and day out. After all, every organization will build more AI to stay relevant: if one tech company ships a new AI experience, its competitor will likely follow. If China develops AI, so will the US.

What this means for data privacy: Data privacy laws are evolving nearly as quickly as AI experiences are being developed. AI also relies on quality data, which means that companies and governments developing AI need to prioritize data privacy.

AI Will Be Smarter Than Us

This is the scariest inevitability of the three. Gawdat predicts that by 2029, the smartest being on Earth will no longer be human. What’s more, he believes that AI will be 1 billion times smarter than humans by 2049. To put that in perspective, that’s similar to the difference in intelligence between Einstein and a fly. Gawdat asks a compelling question: How do we ensure Einstein (AI) won’t squish the fly (us)?

The problem is that few people are having this conversation today. We have to ask whether AI will have humanity’s best interests in mind. The beauty of AI is that everything it does is learned behavior, so teaching it to have our best interests in mind is possible.

Take this scenario, for example. When Gawdat was at Google [X], the team built a farm of grippers: robotic arms with no real intelligence of their own that simply attempt an action. They placed boxes of items in front of the grippers and instructed them to pick up the items and, if they failed, to try again. Because the grippers started with no intelligence behind them, Gawdat wasn’t convinced they could be taught to pick up the items. Two weeks later, every one of the grippers was grabbing everything.

Just as with humans, learning becomes part of a machine’s intelligence. But because of the way machines are built, learning happens collectively. Gawdat makes the following comparison: if you (a human) make a mistake driving a car, you alone learn from that mistake. But if a self-driving car makes a mistake, every self-driving car learns from it.

Machines will develop their own intelligence, spread that intelligence among themselves, and act on it. They will have free will to choose what to show humans. They will procreate, have emotions, and eventually die. They will be conscious, and their consciousness will be more developed than our own awareness.

Gawdat says they’ll also have a code of ethics. The question is: What will humanity teach these machines, and thus what will their code of ethics be? Today, Gawdat says, the majority of AI development goes into four categories: selling, killing, spying, and gambling. That’s a problem when the machines doing the learning will eventually be smarter than the humans doing the teaching.

What this means for data privacy: One of the four categories above, spying, is directly related to data privacy issues today. Search algorithms, advertising networks, and more are driven by AI. As machines become smarter than humans, it’s critical they learn what’s lawful (and what’s not) when it comes to using personal information.

We Need to Love AI

The other problem, according to Gawdat, is that humanity has never agreed on anything except three things: we all want to be happy, we want to make those we care about happy, and we all want to love and be loved. That’s what we should be teaching the machines, not the opposite. Otherwise, we risk a future in which machines have been taught to take harmful actions against humanity.

Wrapping up his keynote, Gawdat offers the following advice: make the machines doubt that we’re horrible by showing them your good side. Be kind to others, to yourself, and to the machine. The next time Google Maps gives you a wrong or slow route, don’t yell at it. Instead, tell it that it can do better next time.

What this means for data privacy: We need to do more than love AI; we also need to trust it. For data privacy professionals, that may be easier said than done. If your business is developing any sort of AI, implementing privacy practices early will help you iterate as often as needed to build a trustworthy machine.

DataGrail’s Commitment to Privacy

Data privacy regulations and AI are both changing rapidly. It might feel impossible to keep up with what you need to do to comply with these laws while also evolving your business. 

DataGrail built its data privacy platform with consumer privacy in mind, so that personal data isn’t jeopardized, misused, or sold when it isn’t supposed to be. As we continue to innovate, our commitment to ethics will keep our privacy practices ahead of changing laws and the inevitable rise of AI.

Stay informed on the latest data privacy news, regulations, and insights with our newsletter.