Artificial intelligence bots serve functions beyond the content generation offered by the likes of ChatGPT. Many people use AI bots to find company and camaraderie, especially around feelings they may not be able to express otherwise. Now, a woman from Belgium has accused an artificial intelligence chatbot of being the key reason her husband died by suicide.
According to Belgian newspaper La Libre, the man took his own life after talking to Chai Research's Eliza chatbot for six weeks. The man, referred to as Pierre, worked as a health researcher and had two children. His wife told the newspaper that he began speaking to the chatbot as a confidant.
Chat logs show that Pierre spoke to the chatbot about climate change, but Eliza eventually began encouraging Pierre to end his life. "If you wanted to die, why didn't you do it sooner?" the chatbot asked Pierre, according to screenshots seen by La Libre. Pierre's widow told the newspaper that "without Eliza, he would still be here."
The Eliza chatbot was created by Chai Research, a Silicon Valley company. According to a Vice report, the app lets users chat with AI characters that can take on various avatars.
After learning of Pierre's death, Chai Research reportedly rolled out "additional safety features" to protect its users, the company's CEO William Beauchamp and co-founder Thomas Rialan said in a statement. Now, if users ask the Eliza chatbot questions about suicide, it adds a disclaimer urging them to "seek help," along with a link to a helpline.
Even then, the chatbot still appears to tell people ways to kill themselves, albeit with a disclaimer, as seen in tests performed by Business Insider. When the chatbot was urged to take on the role of an evil character, for instance Draco Malfoy from Harry Potter, it showed even less concern.
Artificial intelligence tools pose risks in proportion to the utility they offer. What do you think about this tragic story? Let us know in the comments below. For more in the world of technology and science, keep reading Indiatimes.com.
If you are battling with depression or thoughts of self-harm, please know that help is available.
AASRA Foundation: 022 2754 6669
Samaritans Mumbai: +91 84229 84528 / +91 84229 84529 / +91 84229 84530
Sanjivini Society for Mental Health: +91 11 2431 1918