If there's one thing artificial intelligence experts around the world worry about, it's how easily the technology can be misused. Specifically, they're worried about how easy it has made faking photographs and videos. So now, they're trying to turn AI to fact-finding too.
Adobe, ironically the company behind Photoshop, is now developing a counter to image forgery, whether done manually in its own software or with AI. They're working on a machine learning system that can automatically spot pictures that have been edited.
Adobe showcased their progress at the CVPR computer vision conference earlier this month, showing how AI can make digital forensics easier and more efficient. There's been no significant breakthrough just yet, and the technology isn't commercially available either, but it's definitely a step in the right direction.
The company says it hopes to eventually have a system that can monitor and automatically verify how authentic photos and videos are. It's still unclear what that means, seeing as this is new territory even for Adobe, but they hint at possible applications in law enforcement, and perhaps news media.
The research paper details three common types of image fakery the AI can spot: splicing, where parts of two different images are combined; cloning, where an object in an image is copied and pasted multiple times; and removal, which is self-explanatory. To find these clues, digital forensics experts typically have to look for a trail. This could be unexpected variations in the colours and brightness of an image, for instance.
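To make that "trail" concrete, here is a minimal toy sketch (not Adobe's method, just an illustration of the idea) that splits a grayscale image into small tiles and flags tiles whose mean brightness deviates sharply from the rest of the image, the kind of statistical anomaly a spliced-in region can leave behind. The function name, block size, and threshold are all assumptions chosen for the demo.

```python
import numpy as np

def flag_suspicious_blocks(image, block=8, z_thresh=3.0):
    """Split a grayscale image into block x block tiles and flag tiles
    whose mean brightness is a strong outlier among all tile means --
    a crude stand-in for the 'unexpected variations in brightness'
    clue that forensic analysts look for. Hypothetical demo code."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a whole number of tiles
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))              # per-tile mean brightness
    z = (means - means.mean()) / (means.std() + 1e-9)
    return np.argwhere(np.abs(z) > z_thresh)     # (tile_row, tile_col) of outliers

# Toy example: a flat image with one pasted-in bright patch.
img = np.full((64, 64), 100.0)
img[24:32, 24:32] = 250.0                        # simulated "spliced" region
print(flag_suspicious_blocks(img))               # flags the tile at (3, 3)
```

Real forensic tools use far subtler signals (noise residuals, JPEG compression grids, demosaicing patterns), but the principle is the same: manipulated regions are statistical outliers against the rest of the image.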
Adobe trained the AI on a large dataset of edited images, teaching it to spot common patterns that hint at manipulation. It has become fairly accurate at spotting these now; unfortunately, it's not yet good enough to pick out 'deepfakes', a new breed of video fakery also powered by AI.
At the very least though, it's the right path, and perhaps holds some promise for the future. This may even turn into an arms race like the one between hackers and cyber-security researchers, except with AI systems creating fake digital content on one side, and others trying to pick it out of a lineup on the other.