Google's AI Research Group Is Considering The Ethics Of AI Development & Its Impact On Society
The group plans to do its part by publishing its research openly, to help prevent AI from being misused in the future.
After months of tech icons and researchers butting heads over the sort of regulations we should have regarding developing artificial intelligence, it seems Google is finally sitting up and taking notice.
The company's AI development arm DeepMind today announced it's forming a new research collaborative tasked with tackling serious ethical issues in the field — things like how to regulate AI systems to prevent them from learning biases, and the economic impact of automated workers.
The DeepMind Ethics & Society (DMES) group will henceforth publish research papers on these and other topics, beginning in 2018. The group currently has eight full-time members, though DeepMind says it plans to grow that number to a 25-strong team by 2019.
In addition, there are six unpaid external research contributors, including philosopher and AI expert Nick Bostrom, and the team will also partner with the likes of New York University and other academic groups.
"If AI technologies are to serve society, they must be shaped by society's priorities and concerns."
In a blog post accompanying the announcement, lead researchers Verity Harding and Sean Legassick wrote that DMES will help "explore and understand the real-world impacts of AI."
This could include everything from preventing racial bias in the criminal justice system from seeping into policing algorithms, to deciding how an autonomous vehicle should steer in a crash — whether to prioritize protecting the driver or a pedestrian — and more.