MIT Shows How Image Recognition AI Can Be Easily Fooled By Just Changing A Few Pixels
The university's team managed to make rifles look like a helicopter, at least in the computer's eyes.
With artificial intelligence representing such a huge revenue opportunity these days, technology companies are quick to point out how well their own systems perform, whether in computer vision, facial recognition, or otherwise.
However, these systems all currently share one major flaw that can be exploited.
Unlike human observers, computer vision algorithms can be thrown off by something called an “adversarial example”. These are optical illusions specifically designed to fool computers into thinking a photo of one thing is actually a photo of something else.
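For readers who like to see the idea in code, here is a minimal sketch of what “adversarial” means in practice. The `classify` function and the pixel budget are illustrative assumptions, not part of MIT's work or Google's API.

```python
# Conceptual sketch only: `classify` stands in for any image classifier
# (a hypothetical placeholder, not Google's API or MIT's code).
import numpy as np

def is_adversarial(original: np.ndarray, perturbed: np.ndarray,
                   classify, budget: float = 8.0) -> bool:
    """Return True if `perturbed` is an adversarial example for `original`."""
    # The pixels may differ by at most `budget` on the 0-255 scale,
    # so a human sees two identical pictures...
    imperceptible = np.max(np.abs(original.astype(float) -
                                  perturbed.astype(float))) <= budget
    # ...while the classifier gives a different answer (e.g. "frog" vs "cat").
    fooled = classify(perturbed) != classify(original)
    return imperceptible and fooled
```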
So what if a computer can be fooled into thinking a frog is a cat, right? How much harm can that do? Well, the same method can also mess with a self-driving car's AI, for instance, making it mistake a stop sign for a speed limit sign.
According to new research by MIT's Computer Science and Artificial Intelligence Laboratory, these adversarial examples are actually much easier to create than previously thought. Last week, the team managed to reliably fool Google's Cloud Vision API, a commonly used system today, even under “black box” conditions. Basically, that means they fooled the AI without any insight into how it functions and processes data.
Why an error-free vision processing AI is critical
In this particular case, they targeted the part of the AI that assigns identifying labels to objects in a photo, like labelling a picture with a kitten in it as “cat”. After a trial-and-error process, they were able to manipulate Google's computer vision system into thinking a row of machine guns was a helicopter. And they say the label they chose isn't really important: they can reliably make an AI think an object is something totally different, just by tweaking pixels in the photo. The final image looks exactly the same to the naked eye, but it still confuses the machine learning algorithm.
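To give a flavour of how such a black-box, trial-and-error attack works, here is a hedged sketch. The `query_label_confidence` function is a hypothetical stand-in for an image-labelling API that only reports confidence scores, and the random pixel tweaks are a generic strategy, not the exact algorithm MIT used.

```python
import numpy as np

def query_label_confidence(image: np.ndarray, target_label: str) -> float:
    """Hypothetical stand-in for a labelling API: returns the classifier's
    confidence (0-1) that `image` shows `target_label`. The attacker sees
    only this number, never the model's internals."""
    raise NotImplementedError

def black_box_attack(image: np.ndarray, target_label: str,
                     budget: int = 8, steps: int = 10_000) -> np.ndarray:
    """Nudge random pixels of an RGB uint8 image (H, W, 3), keeping only the
    changes that raise the target label's score, while never drifting more
    than `budget` from the original (so it still looks identical to people)."""
    rng = np.random.default_rng(0)
    adversarial = image.copy()
    best = query_label_confidence(adversarial, target_label)
    for _ in range(steps):
        candidate = adversarial.astype(int)
        # Tweak a handful of random pixel positions/channels by a small amount.
        ys = rng.integers(0, image.shape[0], size=16)
        xs = rng.integers(0, image.shape[1], size=16)
        cs = rng.integers(0, image.shape[2], size=16)
        candidate[ys, xs, cs] += rng.integers(-4, 5, size=16)
        # Stay within the imperceptibility budget and the valid pixel range.
        candidate = np.clip(candidate, image.astype(int) - budget,
                            image.astype(int) + budget)
        candidate = np.clip(candidate, 0, 255).astype(np.uint8)
        score = query_label_confidence(candidate, target_label)
        if score > best:  # keep the tweak only if it fools the model more
            adversarial, best = candidate, score
    return adversarial
```

In spirit, this is why the black-box setting matters: the attacker never needs the model's code or weights, only the ability to ask it questions and watch how its answers shift.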
In this way, the experiment demonstrates that attackers can fairly easily create these kinds of adversarial examples to confuse image recognition AI. For instance, an automated baggage scanner could be fooled into thinking a bomb in a suitcase is actually a teddy bear, or facial recognition at airport gates could let someone else through on your ticket.
Thankfully, Google is already aware of the issue and working on a fix. Indeed, input from MIT's research brings that work a step closer to its goal. However, it might be a few years before we see a significant breakthrough. Until then, don't expect AI to play too large a part in automated functions.