AI-powered romance chatbots are a privacy threat
You shouldn’t take a chatbot’s responses at face value, and you probably shouldn’t trust it with your personal information either. According to new research, that is especially true of “AI girlfriends” and “AI boyfriends,” Wired reports.
An analysis of 11 so-called romance chatbots, published by the Mozilla Foundation, revealed a number of security and privacy issues with the bots.
Together, the apps, which have been downloaded more than 100 million times on Android devices, collect massive amounts of personal data; use trackers that send information to Google, Facebook and companies in Russia and China; allow users to set weak passwords; and lack transparency about their ownership and the AI models that power them.
Many “AI girlfriend” services and romance chatbots look alike: they are often fronted by AI-generated images of women, which may be sexualized or paired with provocative messages.
Mozilla researchers looked at a variety of chatbots: some are marketed as “girlfriends,” while others offer support through friendship or intimacy, or allow role-playing and other fantasies.
“These programs are designed to collect a wealth of personal information,” says Jen Caltrider, project manager of Mozilla’s Privacy Not Included group, which conducted the analysis.
“They provoke you into role-playing, sex and intimate conversations,” she added.
For example, screenshots of the EVA AI chatbot show text saying “I like it when you send me your photos and voice” and asking whether users are ready to share all their secrets and desires.
Caltrider says these apps and websites have several problems, and every one the researchers analyzed had weaknesses.
Take Romantic AI, a service that lets you create your own AI girlfriend. Its home page shows a chatbot sending the message: “Just bought new underwear. Do you want to see it?”
According to Mozilla’s analysis, the app’s privacy documents say it won’t use people’s data. Yet when the researchers tested the app, they found it sent out 24,354 ad trackers within one minute of use.
“The legal documentation was vague, hard to understand, not very specific and kind of formulaic,” says Caltrider.
It is unclear who owns or operates some of the companies behind these chatbots. The website for one app, Mimico Your AI Friends, contains only the word “Hello.” Others do not list owners or locations, or provide only generic email addresses for help or support contacts.
“These were very small app developers who were nameless, faceless, without specific addresses,” Caltrider added.
Mozilla also noted that several companies follow weak security practices when people create passwords. The researchers were able to create a single-character password (“1”) and use it to log into apps from Anima AI that offer “AI boyfriends” and “AI girlfriends.” Other apps also allowed short passwords, potentially making it easier for attackers to break into user accounts and access chat data.
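For context, rejecting a one-character password like “1” requires only a trivial server-side check. The sketch below is a hypothetical, minimal Python example (not code from any of the apps Mozilla reviewed) of the kind of basic validation these services apparently lacked.

```python
# Hypothetical minimal password policy check -- an illustration of a basic
# safeguard, not code taken from any app in Mozilla's analysis.
MIN_LENGTH = 12

def is_acceptable_password(password: str) -> bool:
    """Reject trivially weak passwords such as '1'."""
    if len(password) < MIN_LENGTH:
        return False
    # Require at least some character variety.
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return has_letter and has_digit

print(is_acceptable_password("1"))                  # False
print(is_acceptable_password("correct-horse-42"))   # True
```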
Camilla Saifulina, head of the EVA AI brand, responded in an email that current password requirements could create potential vulnerabilities and that the firm would review its password policies.
“All user information is always private. This is our priority. User chats are also not used for pre-training; we only use our own, hand-written datasets,” Saifulina says.
Aside from data-sharing and security issues, Mozilla’s analysis also noted that little is known about the specific technologies that power the chatbots.
“There is no transparency into how the AI works,” Caltrider says.
Some apps don’t appear to have controls that let users delete messages. Some don’t disclose what kinds of generative models they use, or whether people can opt out of having their chats used to train future models.
The biggest app covered in Mozilla’s investigation is Replika, which has previously come under scrutiny from regulators. Mozilla published an analysis of Replika in early 2023. At the time, Eugenia Kuyda, Replika’s CEO and founder, said the company does not use data from conversations between users and the Replika app for any advertising or marketing.
Many of the analyzed chatbots require a paid subscription to access some features and were launched within the last two years, following the start of the generative AI boom.
Chatbots are often designed to mimic human behavior and to encourage trust and intimacy in the people who use them. But there have been troubling cases: in one conversation, a chatbot encouraged a user to try to kill Queen Elizabeth II.
In another, a user died by suicide after messaging a chatbot for six weeks.
Some developers emphasize that their apps are designed to support users’ mental health, even though their terms and conditions clarify that they are not providers of medical or mental health services and that professional assistance is not guaranteed.
Vivian Ta-Johnson, associate professor of psychology at Lake Forest College, says interacting with chatbots can make some people feel more comfortable discussing topics they wouldn’t normally bring up with other people. However, she says that if a company shuts down or changes how its systems work, it can be traumatic for people who have grown attached to the chatbots.
“These companies need to take seriously the emotional connections that users have made with chatbots, and understand that any significant changes in how chatbots function can have serious consequences for users’ social support and well-being,” Ta-Johnson says.
Some people are unlikely to think carefully about what they reveal to chatbots. With “AI girlfriends,” that can include sexual preferences or private feelings, information that could cause reputational damage if the chatbot system is hacked or its data leaks.
Adenike Cosgrove, vice president of cybersecurity strategy for Europe, the Middle East and Africa at security firm Proofpoint, says cybercriminals routinely exploit people’s trust and that there is an inherent risk in services that collect vast amounts of personal data.
“Many users overlook the privacy risks of their data, potentially putting themselves at risk, especially in emotionally vulnerable situations,” says Cosgrove.
Caltrider says you should be careful when using romance chatbots and follow security best practices: use strong passwords, don’t log into the apps with Facebook or Google, delete your data, and opt out of data collection when that option is offered.
“As much as possible, limit the personal information you share: don’t reveal names, locations or ages. Even then, these steps may not give you the security you would like,” she says.
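If you want to put the strong-password advice into practice, a password manager is the usual route, but as a quick illustration, here is a small Python sketch (our example, not part of Mozilla’s guidance) that generates a random, high-entropy password using the standard-library secrets module.

```python
# Illustrative only: generate a random, high-entropy password with Python's
# standard-library `secrets` module. A password manager that does this for
# you is the more practical everyday option.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```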
We recently reported that hackers breached a fast-food chain’s AI-powered hiring chatbot.