Tech giants are now restricting the use of generative artificial intelligence tools like ChatGPT and Bard to protect sensitive information. Samsung was among the first big companies to urge its employees to stop using AI chatbots. Now, Google's parent company Alphabet has joined the roster of companies banning such tools.
According to Reuters, the company has even banned the use of its own AI chatbot, Bard. The move is aimed at protecting confidential data from potential leaks through such chatbots.
In addition, the company has cautioned engineers against using computer code generated by chatbots. At the same time, Google has said that Bard could be useful for programmers. Talk about mixed messaging!
Also read: ChatGPT's Misjudgment: AI Tool Wrongly Fails More Than Half A Class For Cheating
AI chatbots like ChatGPT and Bard engage in conversations based on user prompts. The problem is that human reviewers can access these chat logs, and the data may also be used to train the AI tools. This poses an unintended information disclosure risk, the report said.
This marks a shift in the industry, with tech giants adopting strict security measures concerning AI chatbots. Samsung, Amazon, and Deutsche Bank have already established guidelines to regulate the use of these chatbots.
Also read: Samsung Bans ChatGPT And Other Generative AI Tools After Sensitive Code Leak
Apple, too, reportedly has similar bans in place, even though it has not officially acknowledged them so far. In March, Italy became the first Western country to ban ChatGPT on privacy grounds. The decision has since been overturned, but it signals the various concerns associated with the use of such tools.
What do you think about the internal bans on such generative tools? Why should we trust them when the companies behind them don't? Let us know in the comments below. For more in the world of technology and science, keep reading Indiatimes.com.