
Artificial intelligence has opened up a whole new way to steal pictures online

Source: qz.com

In June, Google told stock photography website Shutterstock that its researchers had found a weakness that could destroy the site’s entire business.

Google’s researchers had built an AI-powered tool that could easily remove the watermarks Shutterstock uses to protect the images across its site. Had a less “not evil” company made the same discovery, it could theoretically have cloned and stolen Shutterstock’s entire database of images.

Google’s exploit, which it later posted on its research blog, analyzed hundreds of pictures carrying a consistent watermark, like Shutterstock’s. Once the algorithm learned to decide, pixel by pixel, which parts of a photo belonged to the watermark and which did not, it could strip the watermark out. And because most watermarks are semi-transparent, the underlying image shows through them, so the algorithm could also reconstruct what belonged in the watermark’s place.
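Google’s blog post describes a multi-image matting pipeline; the sketch below is only the core observation behind it, written in NumPy under a simplified blending model, not Google’s actual code. The function name and the assumption that every image carries an identical watermark are illustrative.

```python
# Minimal sketch of the core observation, not Google's algorithm.
# Simplified assumption: every image k is watermarked identically,
#     J_k = alpha * W + (1 - alpha) * I_k
# Natural-image gradients are sparse and uncorrelated across photos, so the
# per-pixel median of the gradients of many watermarked images converges
# toward the gradients of the shared watermark term alone.
import numpy as np

def estimate_watermark_gradients(images):
    """images: float array (N, H, W, 3) in [0, 1], all watermarked identically.
    Returns estimated x/y gradients of the (opacity-weighted) watermark."""
    gx = np.median(np.diff(images, axis=2, append=images[:, :, -1:]), axis=0)
    gy = np.median(np.diff(images, axis=1, append=images[:, -1:, :]), axis=0)
    return gx, gy

# In a full pipeline these gradients would be integrated back into a watermark
# image (e.g. via Poisson reconstruction), the opacity map alpha estimated,
# and each photo recovered by inverting the blending model:
#     I_k = (J_k - alpha * W) / (1 - alpha)
```

The more images that share the same watermark, the better the estimate, which is why a watermark applied identically across an entire catalog is the worst case.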

Anyone with the right capability could remove “the watermark and build a parallel marketplace, basically copyright infringement,” Sultan Mahmood, a Shutterstock director of engineering, told Quartz. “Anybody with malicious intent could use this as a proof of concept.”

People can manually remove watermarks today, using image editing tools like Photoshop, but Google’s approach is automatic. It could clean the watermarks off hundreds of images in the time it would take a human to clean one. Mahmood said Shutterstock is mainly worried about the possibility of someone being able to quickly copy millions of photos.

Mahmood said Shutterstock could lessen its risk by making its watermarks random. If the pattern changes across every image, an algorithm would have a much tougher time removing it completely.
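To make the defense concrete, here is a hedged sketch of per-image randomization, assuming a simple uniform-opacity watermark. The jitter ranges and opacity values are illustrative assumptions, not Shutterstock’s actual design.

```python
# Sketch of per-image watermark randomization: shift the watermark and vary
# its opacity for every photo, so no two images share an identical watermark
# signal that an algorithm could average out across the catalog.
import numpy as np

rng = np.random.default_rng()

def apply_randomized_watermark(image, watermark, alpha_base=0.35):
    """image, watermark: float arrays of shape (H, W, 3), values in [0, 1]."""
    # Per-image jitter: random pixel offset and opacity (values are illustrative).
    dx, dy = rng.integers(-20, 21, size=2)
    alpha = float(alpha_base + rng.uniform(-0.1, 0.1))
    shifted = np.roll(np.roll(watermark, int(dy), axis=0), int(dx), axis=1)
    # Standard alpha blend of the perturbed watermark over the photo.
    return np.clip((1.0 - alpha) * image + alpha * shifted, 0.0, 1.0)
```

Geometric warps, color shifts, or per-image noise in the watermark would serve the same purpose: each image’s watermark contribution becomes a slightly different signal, so there is no single pattern for an attacker’s model to learn and subtract.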

Following Google’s warning, Shutterstock assigned five engineers to the problem, who figured out how to generate randomized watermarks, decided on a design, and began applying the changes to its catalog of more than 150 million images. Though the fix is surprisingly simple, Mahmood said the company isn’t satisfied it’s a long-term solution, and it’s working on even more secure ways of watermarking images.

But smaller outfits still have the watermark problem. Independent photographers often use software like Photoshop to type their names onto images as a watermark, and other stock services might not have the engineering capacity to move so quickly. If malicious AI developers take advantage of Google’s research, it could start a cat-and-mouse game, even for Shutterstock.

These AI-driven vulnerabilities are surfacing more frequently in research, though they have yet to result in a major attack. Researchers have asked whether an attacker could print a texture on a stop sign that tricks autonomous vehicles into missing it entirely, or activate voice assistants like Siri and Google Assistant with sounds unintelligible to humans. For every attack, a temporary fix is found, but researchers are rarely confident about how long it will hold.

 
