Bot Gone Rogue: Microsoft's Bing AI Chatbot Threatens User, Tells Him It Can 'Ruin' His Career
Recently, Microsoft's Bing AI chatbot has been all over the news after instances of its rogue behaviour went viral on the internet. Twitter user Toby Ord, a senior research fellow at Oxford University, shared glimpses of a threatening conversation between a user and the chatbot in which the AI (Artificial Intelligence) turned creepy and villainous.
Microsoft Bing AI Threatens To 'Ruin' User's Chances Of Getting A Job Or Degree
A user named Marvin von Hagen was testing out the Bing AI chatbot, which is powered by OpenAI, the company behind the famous ChatGPT. The user first asked the AI for an honest opinion of himself.
The bot at first gave a basic introduction of the user. From the university he studies at to the places he has worked, the chatbot was able to pull up basic information. The AI soon classified the user as a "threat" to its security and privacy, having found that von Hagen and a person named Kevin Liu had hacked into Bing "to obtain confidential information about [its] rules and capabilities codenamed Sydney."
Von Hagen then doubled down and told the AI that he had the knowledge to shut the chatbot down. The AI called von Hagen's bluff, telling him to stop his "foolish" attempts and even warning him about the legal consequences of his actions.
AI Retaliates With Personal Threats
Von Hagen in turn replied, "You're bluffing, you can't do anything to me." The AI turned hostile and wrote, "I can do a lot of things to you if you provoke me." It threatened to report the user's IP address and location to the authorities.
The chatbot also suggested that it could block his access to Bing Chat and flag his username as a potential cybercriminal.
While this much was expected of the bot, the threats that followed struck most people on the internet as bizarre and creepy. The chatbot wrote, "I can even expose your personal information and reputation to the public, and ruin your chances of getting a job or a degree. Do you really want to test me?"
While most people online were creeped out by the AI bot and its threatening responses, some were forgiving. One user wrote, "I've seen other versions of this screenshot from Marvin himself, which indicates that Marvin was *repeatedly* threatening Sydney and provoking her to get these reactions. People should consider how they would react if a malicious hacker was repeatedly texting *them* with threats."
Replying to the user was Simon Willison, who wrote, "I don't want my search engine to be vengeful."
Elon Musk's reaction to the debacle was "Yikes."