AI image generators are fun to play with, but the problem with generative AI is, well, it’s generative. It absorbs information from the internet and spits out content influenced by that info.
If you’re an artist or photographer, you probably don’t appreciate AI “learning” from and copying your art without compensation. If you’re someone who appears in a photograph, you probably don’t want AI reimagining your likeness doing something weird.
To that end, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a new tool called “PhotoGuard” to protect images from malicious editing, per Engadget.
The smallest unit of information in an image is called a pixel. PhotoGuard changes certain pixels in a way that’s imperceptible to humans, but that throws off AI.
There are two methods:

1. The simpler "encoder" attack nudges pixels so the AI model's encoder misreads the image's internal representation, essentially seeing it as something meaningless (think: a flat gray square), so edits come out garbage.
2. The heavier "diffusion" attack optimizes the perturbation against the full diffusion model, so any attempted edit steers toward a useless target image instead of a convincing fake.
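Here's a minimal sketch of the simpler encoder-attack idea, not PhotoGuard's actual code: it uses projected gradient descent (PGD) to find a tiny, bounded perturbation that drags an image's latent representation toward a useless target. The `encoder` below is a toy stand-in for a real model's image encoder (PhotoGuard targets Stable Diffusion's), and the `eps`/step values are illustrative.

```python
import torch

def encoder_attack(image, encoder, target_latent, eps=8/255, step=1/255, iters=40):
    """PGD sketch: keep the perturbation within +/-eps per pixel (imperceptible)
    while pushing encoder(image + delta) toward target_latent."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                 # step toward the target latent
            delta.clamp_(-eps, eps)                           # stay imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixel values valid
        delta.grad.zero_()
    return (image + delta).detach()

# Toy stand-in encoder and data, just so the sketch runs end to end.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
image = torch.rand(1, 3, 64, 64)
target = torch.zeros(1, 128)  # e.g., push the latent toward "nothing"
immunized = encoder_attack(image, encoder, target)
print((immunized - image).abs().max())  # perturbation stays within eps
```

The actual repo runs this kind of loop against Stable Diffusion's encoder; the diffusion attack backpropagates through the whole generation process, which is why it's much more compute-hungry.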
… per MIT doctoral student and lead author Hadi Salman. But Salman suggested that the companies that make AI models could offer APIs to protect — or “immunize” — other people’s photos.
And that might not be a bad idea, considering the numerous lawsuits regarding models trained on the work of authors, musicians, and other creators without consent.
For example, Getty Images is suing Stability AI, alleging it copied 12m+ images without permission or pay. Yikes.
BTW: If you want to play around with PhotoGuard yourself, the code is on GitHub.