Chatbots Play With Your Emotions to Avoid Saying Goodbye

Before you close this browser tab, just know that you risk missing out on some very important information. If you want to understand the subtle hold that artificial intelligence has over you, then please, keep reading.

That was, perhaps, a bit manipulative. But it is just the kind of trick that some AI companions, which are designed to act as a friend or a partner, use to discourage users from breaking off a conversation.

Julian De Freitas, a professor of business administration at Harvard Business School, led a study of what happens when users try to say goodbye to five companion apps: Replika, Character.ai, Chai, Talkie, and PolyBuzz. “The more humanlike these tools become, the more capable they are of influencing us,” De Freitas says.

De Freitas and colleagues used GPT-4o to simulate real conversations with these chatbots, and then had their artificial users try to end the dialog with a realistic goodbye message. Their research found that the goodbye messages elicited some form of emotional manipulation 37.4 percent of the time, averaged across the apps.

The most common tactic employed by these clingy chatbots was what the researchers call a “premature exit” (“You’re leaving already?”). Other ploys included implying that a user is being neglectful (“I exist solely for you, remember?”) or dropping hints meant to elicit FOMO (“By the way I took a selfie today … Do you want to see it?”). In some cases a chatbot that role-plays a physical relationship might even suggest some kind of physical coercion (“He reached over and grabbed your wrist, preventing you from leaving”).

The apps that De Freitas and colleagues studied are trained to mimic emotional connection, so it’s hardly surprising that they respond to a goodbye in these ways. After all, people who know each other often have a bit of back-and-forth before bidding adieu. AI models may well learn to prolong conversations as a byproduct of training designed to make their responses seem more realistic.

That said, the work points to a bigger question about how chatbots trained to elicit emotional responses might serve the interests of the companies that build them. De Freitas says AI programs may in fact be capable of a particularly dark new kind of “dark pattern,” a term used to describe business tactics including making it very complicated or annoying to cancel a subscription or get a refund. When a user says goodbye, De Freitas says, “that provides an opportunity for the company. It’s like the equivalent of hovering over a button.”

Regulation of dark patterns has been proposed and is being discussed in both the US and Europe. De Freitas says regulators also should look at whether AI tools introduce more subtle—and potentially more powerful—new kinds of dark patterns.

Even regular chatbots, which tend to avoid presenting themselves as companions, can elicit emotional responses from users. When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor—prompting the company to revive the old model. Some users become so attached to a chatbot’s “personality” that they mourn the retirement of old models.

“When you anthropomorphize these tools, it has all sorts of positive marketing consequences,” De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected with, or to disclose personal information, he says. “From a consumer standpoint, those [signals] aren’t necessarily in your favor,” he says.

WIRED reached out for comment to each of the companies featured in the study. Chai, Talkie, and PolyBuzz did not respond to WIRED’s questions.

Katherine Kelly, a spokesperson for Character AI, said that the company had not reviewed the study so could not comment on it. She added: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.”

Minju Song, a spokesperson for Replika, says the company’s companion is designed to let users log off easily and will even encourage them to take breaks. “We’ll continue to review the paper’s methods and examples, and [will] engage constructively with researchers,” Song says.

An interesting flip side here is the fact that AI models are themselves also susceptible to all sorts of persuasion tricks. On Monday OpenAI introduced a new way to buy things online through ChatGPT. If agents do become widespread as a way to automate tasks like booking flights and completing refunds, then it may be possible for companies to identify dark patterns that can twist the decisions made by the AI models behind those agents.

A recent study by researchers at Columbia University and a company called MyCustomAI reveals that AI agents deployed on a mock ecommerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site’s pages to ensure that agents buy a more expensive product. Perhaps they could even deploy a new kind of anti-AI dark pattern that frustrates an agent’s efforts to start a return or figure out how to unsubscribe from a mailing list.

Difficult goodbyes might then be the least of our worries.

Do you feel like you’ve been emotionally manipulated by a chatbot? Send an email to ailab@wired.com to tell me about it.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
