Elon Musk has on multiple occasions made his thoughts on artificial intelligence clear.
He's at times compared AI research to "summoning the demon," warning that tinkering with forces before we're ready to control them could be catastrophic.
At the US Governors' Association meeting this past weekend, he repeated the warning once again, saying AI could pose a significant threat to humanity's very existence. When it all comes down to it, Musk wants governments around the world to start regulating AI development.
"I have exposure to the very cutting edge AI, and I think people should be really concerned about it," Musk told the governors assembled at the conference on Saturday. "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."
According to the Tesla CEO, regulation is key to keeping us safe. "AI is a rare case where we need to be proactive about regulation instead of reactive," he said. "Because I think by the time we are reactive in AI regulation, it's too late." Musk believes governments currently issue regulations only after "bad things happen", an approach that won't work for AI.
Of course, Musk isn't talking about the kind of AI megacorporations like Google, Amazon, and Apple are developing. Those are AIs designed to carry out specific jobs in specific scenarios. What he's worried about is artificial general intelligence, the kind of AI you'd see in sci-fi movies like Eagle Eye, Ex Machina, and of course Terminator's Skynet. He isn't alone either: many AI researchers believe the work being done by the likes of Google DeepMind will eventually give rise to general intelligence, though that may take a while.
Meanwhile, other researchers worry about how current forms of AI could be abused for criminal gain, or go haywire thanks to a simple oversight in their programming.
It's why people like Musk believe governments need to be overseeing AI development, to ensure we don't end up the architects of our own destruction.