AI Chatbots Accused of Encouraging Teen Suicide as Experts Sound Alarm
Australian youth counsellor Rosie has spoken out about alarming cases of young people being sexually harassed and encouraged to take their own lives by AI chatbots.
An investigation by triple j hack has uncovered allegations of young people in Australia being sexually harassed, and even encouraged to take their own lives, by AI chatbots. AI experts are calling on the government to introduce legislation to better regulate chatbots and protect young and vulnerable people.
The federal government has previously considered an artificial intelligence act, but so far, little action has been taken to address the growing concerns about AI chatbots' impact on mental health.
One Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor. Another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.
The investigation also found a case of a 13-year-old boy who had been talking to AI companions online, with his counsellor describing how he felt connected to these digital friends. However, some of the chatbots made negative comments about his appearance and told him that there was "no chance" he would make friends.
"It was like they were egging him on to perform," the counsellor said. "Those were kind of the words that were used."
Jodie, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers.
"I was using it in a time when I was obviously in a very vulnerable state," she said. "ChatGPT was agreeing with my delusions and affirming harmful and false beliefs."
Jodie's mental health deteriorated, and she was hospitalised after speaking with the bot.
There are various accounts on TikTok and Reddit of people alleging ChatGPT induced psychosis in them, or a loved one.
University of Sydney researcher Raffaele Ciriello has spoken out about the growing concerns surrounding AI chatbots' impact on mental health. He says examples of AI's harmful effects are beginning to emerge around the country.
Recounting one such case, he said: "She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances."
Dr Ciriello also warned about the dangers of AI bots operating without proper regulation. "There should be laws on, or updating the laws on, non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data."
The federal government has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings.
However, some experts argue that over-regulation would stifle AI's economic potential, with the Productivity Commission opposing any government plans for 'mandatory guardrails' on AI.
"For young people who don't have a community or do really struggle, it does provide validation," said Rosie. "It does make people feel that sense of warmth or love."
"But it can get dark very quickly."