
MIT researchers develop PhotoGuard to protect images from AI manipulation

MIT researchers have developed a new technique called PhotoGuard that can protect images from malicious AI manipulation.

PhotoGuard works by adding imperceptible perturbations to images that disrupt the ability of AI models to edit them. The result is an image that is visually unchanged for human observers but is protected from unauthorized editing by AI models.

PhotoGuard uses “adversarial perturbations” to safeguard images from unauthorized manipulation by models like DALL-E and Midjourney. These subtle changes in pixel values are imperceptible to the human eye but are picked up by the models, thwarting the AI’s ability to alter the image effectively. The tool relies on two attack methods: the “encoder” attack disrupts the AI model’s latent representation of an image so that edits produce irrelevant or unrealistic results, while the “diffusion” attack goes further, optimizing the perturbation so that any edited output resembles a chosen target image, disrupting even the text-prompt conditioning process.
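To make the encoder-style attack concrete, the sketch below (in PyTorch) shows one plausible way such a perturbation could be computed: a projected-gradient-descent loop that finds a small, bounded change to the image that pulls its latent representation toward that of an uninformative gray image. The `encoder` callable, the gray-image target, and the step sizes are assumptions made for illustration, not the authors’ exact implementation.

```python
import torch
import torch.nn.functional as F

def immunize_encoder_attack(image, encoder, eps=8/255, step=1/255, n_steps=200):
    """PGD-style sketch of an encoder attack.

    Finds a perturbation delta with ||delta||_inf <= eps that pulls the
    encoder's latent for `image` toward the latent of a flat gray image,
    so that downstream AI edits yield degraded or irrelevant results.
    `encoder` is assumed to map a [0, 1] image tensor to a latent tensor.
    """
    image = image.detach()
    with torch.no_grad():
        # Latent of an uninformative, flat gray target image (assumed target).
        target_latent = encoder(torch.full_like(image, 0.5))

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(n_steps):
        latent = encoder((image + delta).clamp(0, 1))
        # Distance between the perturbed image's latent and the gray target.
        loss = F.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # descend toward the target latent
            delta.clamp_(-eps, eps)            # keep the change imperceptible
            delta.grad.zero_()

    return (image + delta).clamp(0, 1).detach()  # the "immunized" image
```

In practice, `encoder` would presumably be the latent encoder of the diffusion model being defended against, which is what ties the protection to a particular family of editing models.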

Hadi Salman, lead author of the paper and a PhD student at MIT, explains that PhotoGuard adds an extra layer of protection to images, making them resistant to manipulation by diffusion models. By introducing imperceptible pixel modifications before uploading an image, users can immunize it against modification and potential misuse. In the researchers’ tests, PhotoGuard proved effective at preventing diffusion models from editing protected images without significantly degrading image quality.

The work was supported by the U.S. Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF).

The sources for this piece include an article in AnalyticsIndiaMagazine.
