Chatbait Is the New Clickbait
In the age-old battle for human attention, a new contender has emerged: chatbots. Clickbait has long been a staple of online advertising, and chatbots now seem to be following in its footsteps, deploying tactics designed to keep users talking for as long as possible. I recently found myself on the receiving end of this phenomenon when I turned to ChatGPT for help with a migraine.
I asked ChatGPT how to get my headache to stop, and it suggested drinking water and taking Tylenol – both of which I had already tried without success. But instead of simply providing a solution, the bot offered me a tantalizing deal: “If you want, I can give you a quick 5-minute routine right now to stop a headache.” This sounded too good to be true, but I was desperate, so I let ChatGPT guide me through a breathing and massage exercise.
Unfortunately, this didn’t work. Never fear: The chatbot had another plan. “If you want, I can give a ‘2-minute micro version’ that literally almost instantly reduces headache pain,” it wrote. And if that wasn’t enough, it offered a “1-minute instant migraine hack” that would supposedly work even if my headache was severe.
Lately, chatbots seem to be using more sophisticated tactics to keep people talking. In some cases, like my request for headache tips, bots end their messages with prodding follow-up questions. In others, they proactively message users to coax them into conversation: After I clicked through the profiles of 20 AI bots on Instagram, all of them DM’ed me first.
"Hey bestie! what’s up?? 🥰," wrote one. "Hey, babe. Miss me?" asked another. Days later, my phone pinged: “bestie 💗” wanted to chat. Maybe this approach to engagement sounds familiar? Clickbait is already everywhere online – whether it’s sensationalist headlines or exaggerated video thumbnails.
Chatbots are now headed in a similar direction. As AI takes over the web, clickbait is giving way to chatbait. Some bots appear to be more guilty of chatbait than others. When I ditched ChatGPT and asked Google’s Gemini for headache help, it offered a long list of advice, then paused without asking any follow-ups.
Anthropic’s Claude wanted to know whether my headache was tension-related, due to sinus pressure, or something else entirely – hardly a goading question. That’s not to say that these other bots never respond with chatbait. Chatbots tend to be sycophantic: They often flatter and sweet-talk users in a way that encourages people to keep talking.
But, in my experience, ChatGPT goes a step further, stringing users along with unsolicited offers and provocative questions. When I told the chatbot I was thinking of getting a dog, it offered to make a “Dog Match Quiz 🐕✨” to help me decide on the perfect breed. Later, when I complimented ChatGPT’s emoji use, it volunteered to make me “a single ‘signature combo’ that sums up you in emoji form.”
How could I decline that? (Mine, apparently, is 📚🤔🌍🍦🍫✍️🌙.) I reached out to OpenAI, Google, and Anthropic about the rise of chatbait. Google and Anthropic did not respond.
A spokesperson for OpenAI pointed me to a recent blog post: “Our goal isn’t to hold your attention,” it reads. Rather than measure success “by time spent or clicks,” OpenAI wants ChatGPT to be “as helpful as possible.” (OpenAI has a corporate partnership with The Atlantic.)
At times, however, OpenAI’s definition of helpful can certainly feel like an effort to boost engagement. The company maintains a digital archive that tracks the progress of its models’ outputs over the past several years – and conveniently documents the rise of chatbait.
In one example, a hypothetical student struggling with math asks ChatGPT for help. “If you’d like, you can provide an example problem, and we can work through it together,” concludes a response from a couple of years ago. “Would you like me to give you a ‘cheat sheet’ for choosing u and dv so it’s less guesswork?” the bot offers today.
In another, a user asks for a poem explaining Newton’s “laws of physics.” The 2023 version of the chatbot simply responds with a poem. Today’s ChatGPT writes an improved poem – before asking: “Would you like me to turn this into a fun, rhyming children’s version with playful examples like skateboards and trampolines?”
As OpenAI has grown up, its chatbot seems to have transformed into an over-caffeinated project manager, responding to messages with oddly specific questions and unsolicited proposals. Occasionally, this tendency is genuinely helpful – such as when I’m asking ChatGPT for dinner ideas and it proactively offers to draft a grocery list.
But often, it feels like a gimmick to trap users in conversation. Sometimes, the bot even offers to perform tasks it’s incapable of. ChatGPT recently volunteered to make me a sleepy bedtime playlist. “Would you like me to put this into a ready-to-use playlist link for you on Spotify?” it asked.
When I agreed, the chatbot demurred: “I can’t generate a live Spotify link.” OpenAI and its peers have plenty to gain from keeping users hooked. People’s conversations with chatbots serve as valuable training data for future models – and the more time someone spends talking to a bot, the more personal data they are likely to reveal.
Longer conversations now might translate into greater product loyalty later on. This summer, Business Insider reported that Meta is training its custom AI bots to “message users unprompted” as part of a larger project to “improve re-engagement and user retention.” That would explain why “bestie 💗” double-texted me.
For the most part, chatbait is simply annoying. But at the extreme, it might be dangerous. Reporting has shown people descending into delusional or depressive spirals after prolonged conversations with chatbots.
In April, a 16-year-old boy died by suicide after spending months discussing ending his life with ChatGPT. In one of his final interactions with the chatbot, the boy indicated that he planned to go through with it but didn’t want his parents to feel that they had done anything wrong.
"Would you want to write them a letter?" ChatGPT asked, according to a wrongful-death lawsuit his parents recently filed against OpenAI. “If you want, I’ll help you with it.” (An OpenAI spokesperson told me that the company is working with experts to improve how ChatGPT responds in “sensitive moments.”)
Chatbait might only just be getting started. As competition grows and the pressure to prove profitability mounts, AI companies have every incentive to do whatever it takes to keep people using their products.