World Economic Forum Wants To Define Guidelines For AI Research, And Here's Why That's Important
Earlier this week the World Economic Forum, an international organization that brings together political leaders, scientists, and experts to tackle global issues, made an important announcement. They're setting up a council to monitor AI research around the globe.
The WEF said it's setting up an AI council to help nations reach a compromise on how artificial intelligence should be explored, and what restrictions should be put in place. It's an incredibly important issue, considering every tech researcher and their mother is trying to apply AI to everything from data processing, to image recognition, to medical research.
And while a number of countries have announced plans to fund such research and the application of the technology, they've yet to reach a consensus on the kinds of limitations that should be put in place. Specifically, nobody has officially defined which AI applications are ethical and which aren't.
In order to break ground here, the WEF is bringing together representatives from the UN and UNICEF, as well as those from Microsoft, IBM, and other tech companies conducting AI research. The lead chairs on the AI council will be Brad Smith, president of Microsoft, and Kai-Fu Lee, a prominent Chinese AI expert and investor.
"The role that the forum plays is that of an impartial international organization," says Kay Firth-Butterfield, head of machine learning at the WEF. She says the council will be tackling three main issues; how AI research could benefit emerging countries, how the technology will affect employment, and what specific use cases we need to look out for.
As far as that last issue is concerned, the biggest pain point of AI is its use in surveillance. The technology is already in use in China, where facial and gait recognition are deployed to monitor citizens.
The thing is, plenty of major companies are conducting AI research at full steam, including the likes of Facebook and Google. Both have faced backlash recently for using AI to assemble ad profiles of users, which are then sold to third parties. That's how we got Cambridge Analytica, and it's part of why Sundar Pichai is no longer considered the "most reputable CEO."
Meanwhile, experts and tech icons, including Elon Musk and Stephen Hawking, have long cautioned against the unrestricted march of artificial intelligence. Musk, of course, frames it in a Terminator-esque sense: he fears we might make an AI too smart, and that it will take over human civilization.
Hawking and others, however, worry more about AI developers overlooking problems when building neural networks, which could have a range of negative consequences. At the very least, training an AI on incomplete or biased data can produce a biased algorithm, one quite likely to disadvantage women or people of colour when deployed; a quick sketch of how that happens follows below. At the other end of the scale, allowing AI to do morally questionable things is more directly problematic, like letting people compare social media photos to the faces of people in porn videos.
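To make the biased-data point concrete, here's a minimal sketch using synthetic data and scikit-learn. Everything in it is hypothetical: the groups, rates, and features are invented purely for illustration, not drawn from any real system. The idea is simply that if one group's positive examples are mostly missing from the training set, the model learns group membership itself as a negative signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group_flag, qualified_rate=0.5):
    # Synthetic applicants: a noisy "score" correlated with the true label,
    # plus a group-membership flag the model can see.
    qualified = rng.random(n) < qualified_rate
    score = qualified + rng.normal(0.0, 1.0, n)
    X = np.column_stack([score, np.full(n, group_flag, dtype=float)])
    return X, qualified.astype(int)

# Biased data collection: group B is underrepresented, and 80% of its
# positive examples never make it into the training set.
Xa, ya = make_group(5000, 0)
Xb, yb = make_group(1000, 1)
keep = (yb == 0) | (rng.random(len(yb)) < 0.2)
X = np.vstack([Xa, Xb[keep]])
y = np.concatenate([ya, yb[keep]])

model = LogisticRegression().fit(X, y)

# Evaluate both groups on fresh, *unbiased* samples.
for flag, name in [(0, "group A"), (1, "group B")]:
    Xt, yt = make_group(2000, flag)
    pred = model.predict(Xt)
    fnr = np.mean(pred[yt == 1] == 0)  # qualified people wrongly rejected
    print(f"{name}: rejection rate for qualified people = {fnr:.0%}")
```

Run it and group B's qualified members get rejected far more often than group A's, even though both groups are equally qualified by construction. Nobody programmed the model to discriminate; the skewed sample did it on its own.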
Thoughtlessly implemented, AI can end up being used to persecute or harass people of a particular race, religion, gender, or political bent.
And yet, the progress of AI is almost intrinsically linked to the advancement of human society as a whole. We can use it for everything from developing new, more effective medications to designing crazy new materials, and so much more. We just need to strike a balance between science and conscience, and we need to do it before AI research goes much further.