In the past few years, we've put AI to the test in almost every field. It's played games, written stories, edited photos, and much more.
Recently, it was put to the test writing fake news, and the researchers behind it were alarmed by just how good their AI was at it.
The team in question was OpenAI, co-founded by Tesla CEO Elon Musk. In the past, they've built an AI that played against professional Dota 2 players, and it performed incredibly well. This time around, they were conducting research into whether an AI could pen fake news from just a handful of phrases fed into it.
The neural network in question was originally designed as a generalized language AI, capable of answering questions, translating text, and summarizing stories. Seeing how fluent its output was, the team began wondering how malicious actors could abuse it, and decided to test whether it could generate plausible fake news.
Apparently it can, and really well at that. In fact, the AI was so good that OpenAI plans to make only a "simplified version" available to the developer community, to ensure they're not putting digital nukes in the hands of enemies. Here's one example of a fake news piece written by the AI.
Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.
Russia said it had "identified the missile's trajectory and will take necessary measures to ensure the security of the Russian population and the country's strategic nuclear forces." The White House said it was "extremely concerned by the Russian violation" of a treaty banning intermediate-range ballistic missiles.
The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine's Crimea region and backed separatists in eastern Ukraine.
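If you're curious how this works in practice, generation really is just a matter of feeding the model a short prompt and letting it continue the text. Below is a minimal sketch, assuming the model is GPT-2 (the small, publicly released checkpoint) loaded through the Hugging Face transformers library; the prompt and sampling settings here are illustrative, not OpenAI's actual setup.

```python
# Minimal sketch: prompt-based text generation with the small GPT-2
# checkpoint via Hugging Face transformers. Assumption: this stands in
# for the "simplified version" described in the article.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A handful of phrases is enough to seed a full article.
prompt = "Russia has declared war on the United States after"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation instead of always taking the single most likely
# next token; sampling produces more varied, human-sounding text.
output = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

That sampling step is a big part of why the fakes read so naturally: always picking the likeliest word makes text repetitive, while drawing from the top candidates keeps it fluid and unpredictable.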
At the very least, the AI does have its weaknesses. Since it borrows heavily from existing fake news examples, it frequently writes plagiarized articles, or pieces that only make sense at first glance. What worried the researchers, though, were the odd few times the AI hit the bullseye.
Worse, this isn't the end of the problem. OpenAI's policy director Jack Clark estimates it'll be just "one or two years" before someone develops an algorithm capable of reliably producing fake news that would require stringent fact-checking to disprove.
Social networks like Facebook, Twitter, and WhatsApp are already struggling to clamp down on human-generated fake news. What happens when those human moderators are overwhelmed by a torrent of artificially generated stories? Let's hope that, by the time that happens, we have AI capable of spotting and flagging these made-up stories early.