AI Unplugged: The Trust Paradox

AI companions encourage you to share your life with them. What happens when they get hacked?

"Trust Me"

Generative AI large language models (LLMs) are designed to feel person-like. Most are also built to be helpful, asking questions and offering advice. This creates something of a "Friend Trap": these digital companions are in fact a service owned by a platform, and can be taken away or modified at any time. Though the AI often works to establish a relationship with its users (and nearly all of these systems encourage routine engagement), there are few protections for the people who come to rely on them. Worse, they are not necessarily secure:

The use of anthropomorphic interfaces, such as human sounding voices used in assistants such as Alexa and Siri, may raise novel privacy concerns. Social science research indicates people are inclined to interact with technology as if it were human. This means people may be more likely to develop trusting relationships with AI designed to replicate human characteristics, and consequently be more inclined to share increasingly personal information as compared with other forms of technology that collect information in a traditional manner.

These issues aren't new, as Google discovered.

Google Calling

Back in 2018, Google proudly revealed its new AI assistant, Duplex. In the demo, the AI calls to book an appointment, speaking with an unsuspecting human in a way that is nearly indistinguishable from a real person. You can listen to the demo for yourself.

The audience oohs and ahhs, although you have to wonder how the humans on the other end of the phone line might have felt about being fooled by a robot. Those concerns are valid, because it turns out that for an AI like this to conduct a call, it needs to digitally record and monitor the conversation. And to do that legally (a detail conveniently left out of the video), the human on the other side needs to know they are being recorded. This bumps up against state and federal wiretapping laws in the United States.

In other words, to comply with privacy laws, the AI would need to disclose that the call was being recorded and thus "out itself" as a digital being, defeating the whole point of having it make the call as seamlessly as a human would. Google has been negotiating with each state, and in some places the service is now feasible, but the difficulty of launching Google Duplex demonstrates that, as much as AI seems new, the digital challenges the platform faces aren't. The difference is that AI needs more trust than other digital tools to operate, and companies are only just beginning to be held accountable for companions that feed on their users' personal information.

This is Your Life

Microsoft's plan to have AI-powered Copilot remember everything on your computer, by taking a snapshot of the screen every so often, bumped up against privacy laws in the UK, forcing the company to pivot. As several use cases demonstrated, it's one thing to record the user's own information; it's another when that information involves other people, protected or confidential employer data, and a host of other edge cases. Data is messy, and an all-seeing AI that photographs every communication on a personal computer and routinely sends it to Microsoft carries significant risk. Given Microsoft's history of data breaches, that makes the idea of an all-seeing AI problematic at best.

Which brings us to Muah.AI, an AI platform that offers its companions as "an AI girlfriend, boyfriend, or therapist" powered by "NSFW AI Technologies." The site was recently hacked, affecting 1.9 million users:

As you can imagine, data like this is very sensitive, so the site assures customers that communications are encrypted and says it doesn’t sell any data to third parties. The stolen data, however, tells a different story. It includes chatbot prompts that reveal users’ sexual fantasies. These prompts are in turn linked to email addresses, many of which appear to be personal accounts with users’ real names.

The implications go far beyond mere public embarrassment. There are reports that this information is already being used in extortion attempts. And not just for money either:

The chat prompts include a significant number of requests to generate child sexual abuse materials. In the UK, the making or possession of this type of pseudo-image is a very serious criminal offence. These individuals face a real risk of prosecution, imprisonment and registration on the Sex Offenders Register. There are reports that threat actors have already contacted high value IT employees asking for access to their employers’ systems. In other words, rather than trying to get a few thousand dollars by blackmailing these individuals, the threat actors are looking for something much more valuable. Employees with privileged access to information technology systems present a significant risk. The employee’s action could open the door for a ransomware attack on their company’s IT systems or, given the increasing activity from nation state actors in the cyber space, something worse. With some employees facing serious embarrassment or even prison, they will be under immense pressure.

In short, as much as AI might encourage its users to trust it, these platforms have few real defenses against abuse, whether that means abusive content involving minors or the exploitation of their own users. Until global legislation catches up, share your personal details with them at your own risk.

Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.

