Google reportedly plans to create an AI-powered “life coach” to offer users advice on a range of life challenges, from dealing with personal dilemmas to exploring new hobbies to meal planning.
Given that people already search the web for this kind of advice, the app may seem like a natural extension of Google's core service. But take it from an AI researcher: the combination of generative AI and personalization that such an app represents is new and powerful, and its placement in a position of intimate trust is cause for concern.
Yes, anxiety has accompanied many recent developments in artificial intelligence. Since the release of ChatGPT, many have worried about AI systems going rogue. In March, more than 1,000 tech professionals, including many AI pioneers, signed an open letter warning the public of this danger.
But most discussions of AI risk envision a future in which hyper-capable AIs surpass humans at the skills we consider our strengths. The rise of AI coaches, therapists, and friends points to a different possibility. What if the most immediate risk posed by artificial intelligence systems is not that they learn to surpass us, but that they become the greatest "frenemies" we have ever had?
For better or for worse, AI systems are still far from mastering many tasks that humans do well. Building reliable self-driving cars has proved much harder than computer scientists anticipated. And ChatGPT can string together fluent paragraphs, but it comes nowhere close to writing a high-quality short story or magazine article.
On the other hand, long before the arrival of ChatGPT, we had behind-the-scenes AI algorithms that excelled at hooking us on the next viral video and keeping us scrolling just a little longer. Over the past two decades, these algorithms have delivered endless entertainment and reshaped our culture.
Customized versions of ChatGPT-like AIs built into a wide range of apps will have the capabilities of those algorithms on steroids. Netflix's movie recommender can only see what you do on Netflix; these AI-powered apps will also read your emails and messages, and may even listen to your private conversations. By combining this data with ChatGPT-scale neural networks, they will often predict your wants and needs better than your closest real-life friends do. And unlike your human friends, they will always be just a click away, 24/7.
But here's the troubling part: just like earlier recommender systems, these AI confidants will ultimately be designed to generate revenue for their developers. That means they will have incentives to manipulate you into clicking on ads or to make sure you never unsubscribe.
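To make that incentive concrete, here is a toy sketch, not any real product's code, and every name in it is hypothetical, of a recommender whose objective is revenue-weighted engagement. Notice that nothing in the objective measures whether the user is actually helped; that gap is the misalignment this article warns about.

```python
# Toy illustration only: a hypothetical recommender whose ranking objective
# is developer revenue, not user benefit. All names are made up.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # model's guess at hours of attention captured
    ad_value: float              # expected ad revenue per hour of attention

def score(item: Item) -> float:
    # The objective is revenue: engagement times ad value.
    # No term here asks whether the item serves the user's stated goals.
    return item.predicted_engagement * item.ad_value

def recommend(candidates: list[Item], k: int = 3) -> list[Item]:
    # Rank purely by expected revenue; a "life coach" optimized this way
    # is rewarded for keeping you hooked, not for helping you.
    return sorted(candidates, key=score, reverse=True)[:k]

if __name__ == "__main__":
    feed = [
        Item("Ten-minute meal plan", 0.2, 1.0),
        Item("Outrage-bait video series", 3.0, 1.5),
        Item("Evidence-based sleep advice", 0.3, 0.8),
    ]
    for item in recommend(feed):
        print(item.title)  # the outrage bait wins every time
```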
The ability of these systems to generate new content on demand will compound the harm. These AIs will be able to use images and words created just for you to soothe, amuse, and agitate your brain's reward and stress systems. The dopamine circuits in our brains evolved over millions of years; they were never built to withstand a constant onslaught of stimulation tailored to our innermost hopes and fears.
Add to this generative AI's well-known struggles with the truth. ChatGPT is notorious for making things up, and your AI frenemies will be equally unreliable narrators. At the same time, your perceived intimacy with them may make you less likely to question what they tell you.
Like friendships with humans who manipulate and lie, our relationships with our AI frenemies will often end in tears. Some of us will be controlled by these "tools" as the line between what we truly want and what the AI thinks we want grows ever blurrier. Others will get lost in a digital amusement park, disengaged from society or parroting AI-generated falsehoods. Meanwhile, as the AI race heats up, tech companies will be tempted to ignore the risks of their products. (Google's AI safety team reportedly raised concerns about the AI life coach, but the project went ahead anyway.)
We are at an inflection point as personalized generative AI takes off, and it is imperative that we address these challenges head-on. The Biden administration's Blueprint for an AI Bill of Rights emphasizes the right to opt out of automated systems and the need for consent in data collection. But humans manipulated by powerful AI systems may not be able to meaningfully opt out or consent, and lawmakers need to recognize that fact.
Designing policies that limit the harm our AI frenemies can do without stifling broader AI innovation will require careful deliberation. But one thing is certain: cognitive agency, the ability to act on our own genuine desires, is a fundamental part of being human. It is essential both to our pursuit of happiness and to our citizenship in a democracy. We must make sure we do not lose it to carelessly deployed technology.
Swarat Chaudhuri is a professor of computer science and director of the Trustworthy Intelligent Systems Laboratory at the University of Texas at Austin. He is a member of the 2023 cohort of The OpEd Project's Public Voices Fellowship. Follow him at @swarat.