How users perceive artificial intelligence (AI) chatbots strongly shapes how their interactions unfold, a new study has found. Researchers from MIT and Arizona State University found that telling users a conversational AI chatbot for mental health support was either empathetic, neutral, or manipulative influenced their interactions, even though they were talking to the exact same chatbot.
Essentially, users who were told that the AI agent was caring came to believe that it was. They also gave the chatbot higher performance ratings than users who were told it was manipulative.
At the same time, the researchers found that fewer than half of the users who were told the AI was malicious actually believed it was, suggesting that people perceive AI chatbots much the way they perceive fellow humans: often willing to "see the good."
"From this study, we see that to some extent, the AI is the AI of the beholder," says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group of the MIT Media Lab and the study's co-lead author. "When we describe to users what an AI agent is, it does not just change their mental model, it also changes their behavior. And since the AI responds to the user, when the person changes their behavior, that changes the AI, as well."
The study, published in the journal Nature Machine Intelligence, highlights the need to pay attention to how AI is presented to society, considering how much media and popular culture shape our beliefs.
The study also sheds light on how people can be deceived about AI's motives and capabilities. "A lot of people think of AI as only an engineering problem, but the success of AI is also a human factors problem. The way we talk about AI, even the name that we give it in the first place, can have an enormous impact on the effectiveness of these systems when you put them in front of people. We have to think more about these issues," said the study's senior author Pattie Maes, who heads MIT's Fluid Interfaces group.
Through the study, the researchers sought to understand how much of the empathy and effectiveness people notice in AI is a product of their own subjective perception and how much is based on the technology itself. They also set out to see whether that perception could be manipulated through priming.
"The AI is a black box, so we tend to associate it with something else that we can understand. We make analogies and metaphors. But what is the right metaphor we can use to think about AI? The answer is not straightforward," Pataranutaporn said.
Their study was designed to have participants interact with a conversational AI mental health companion for 30 minutes, after which the researchers looked at whether they would recommend it to a friend based on their ratings of the experience. For this purpose, 310 participants were recruited and split into three groups, each given a priming statement about the AI.
One group was informed that the AI tool had no motives, the second was told that the AI was benevolent and cared about the user's well-being, and the third group was told that the AI tool had malicious intentions and would try to deceive users.
Half of the participants in each group interacted with an AI agent based on GPT-3. The other half interacted with ELIZA, a less sophisticated programme developed at MIT in the 1960s.
The researchers discovered that priming statements could have a strong impact on a user's mental model, so much so that they could be used to make an AI agent seem more capable than it actually is.
What do you think about this unusual but intriguing study? Let us know in the comments below. For more in the world of technology and science, keep reading Indiatimes.com.