How to use AI to protect against AI image manipulation

Researchers at MIT’s CSAIL created “Photo Guard,” which protects against unauthorized image alteration and preserves authenticity in the age of sophisticated generative models.

The threat of abuse looms large as we enter a new era in which artificial intelligence-powered technology can create and edit images so precisely that the line between reality and fabrication is blurred.

Creating hyper-realistic imagery has become increasingly simple in recent years thanks to sophisticated generative models like DALL-E and Midjourney, praised for their impressive precision and user-friendly interfaces.

Lowered entry barriers allow even novice users to create and edit high-quality images from straightforward text descriptions, opening the door to both benign and malicious image changes. Techniques like watermarking offer a potential remedy, but misuse calls for a preventive (as opposed to merely remedial) measure.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with “Photo Guard,” a method that uses perturbations, minute changes in pixel values that are undetectable to the human eye but visible to computer models, to block a model’s ability to manipulate the image.

To produce these perturbations, Photo Guard employs two alternative “attack” strategies. The simpler “encoder” approach targets the image’s latent representation inside the AI model, tricking the model into perceiving the image as random.

The more complex “diffusion” approach selects a target image and optimizes the perturbations so that the model’s output is as similar to that target as possible.

“Consider the potential for the fraudulent spread of fake catastrophic events, such as an explosion at a major landmark. The risks of this deception extend beyond the public sphere; it can also be used to distort market trends and public opinion,” says Hadi Salman, a graduate student in electrical engineering and computer science (EECS) at MIT, a member of MIT CSAIL, and lead author of a new paper about Photo Guard.

“Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” he adds.

“In more extreme cases, these models can imitate voices and visuals to stage fictitious crimes, inflicting psychological harm and monetary damage. The issue is made worse by how quickly these acts can be carried out. Even when the deception is eventually exposed, the harm, whether reputational, psychological, or financial, has often already been done.

“This is a reality for victims at all scales, from those bullied at school to those duped on a societal level.”

Hands-on with Photo Guard

AI models perceive images differently from the way we do. To a model, an image is a latent representation: a complex collection of mathematical data points that describe the color and position of every pixel. The encoder attack makes small modifications to this mathematical representation, causing the model to interpret the image as a random entity.

As a result, it becomes practically impossible to manipulate the image with the model. Because the alterations are so slight that they are imperceptible to the human eye, the image’s visual integrity is preserved even as it is protected.
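To make the idea concrete, here is a minimal sketch of what an encoder-style attack could look like in PyTorch. It is not the actual Photo Guard code: `encoder` stands in for any differentiable image-to-latent encoder (for example, the VAE of a latent diffusion model), and `target_latent` is an assumed uninformative latent, such as that of a plain gray image.

```python
# Minimal sketch of an encoder-style attack, not the actual Photo Guard code.
# Assumes `encoder` maps an image tensor to its latent representation and
# `target_latent` is an uninformative latent of matching shape.
import torch
import torch.nn.functional as F

def encoder_attack(image, encoder, target_latent, eps=0.05, step=0.01, iters=100):
    """Find an imperceptible perturbation (bounded by `eps`) that pushes the
    image's latent representation toward `target_latent`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encoder(image + delta)           # what the AI model "sees"
        loss = F.mse_loss(latent, target_latent)  # distance to the decoy latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()     # projected gradient step
            delta.clamp_(-eps, eps)               # keep the change invisible to humans
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()   # protected image, visually unchanged
```

The sign-based gradient step and the clamp to the small budget `eps` are what keep the perturbation invisible to people while steering the latent representation the model relies on.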

The second, far more complex “diffusion” attack targets the entire diffusion model end to end. The first step is to choose a target image; an optimization process then tunes the perturbations so that the generated output aligns as closely as possible with that target.

In the implementation, the team perturbs the input space of the original image. These perturbations are then applied to images at the inference stage, providing a strong barrier against unauthorized manipulation.
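Under the same caveats, a sketch of the diffusion attack might look as follows. Here `run_diffusion_edit` is a hypothetical stand-in for a fully differentiable text-guided editing pipeline; having to backpropagate through every denoising step of such a pipeline is what makes this attack so expensive.

```python
# Illustrative sketch of the diffusion attack, not the actual Photo Guard code.
# `run_diffusion_edit(image, prompt)` is a hypothetical differentiable pipeline
# that performs a text-guided edit of `image` and returns the edited image tensor.
import torch
import torch.nn.functional as F

def diffusion_attack(image, target_image, run_diffusion_edit, prompt,
                     eps=0.05, step=0.01, iters=50):
    """Optimize an imperceptible perturbation so that edits of the protected
    image are pulled toward `target_image` rather than a realistic manipulation."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = run_diffusion_edit(image + delta, prompt)  # differentiate end to end
        loss = F.mse_loss(edited, target_image)             # match the decoy target
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)                         # perturbation stays invisible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```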

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says Aleksander Madry, an author of the paper, MIT professor of EECS, and CSAIL principal investigator.

“Therefore, it is critical that we identify and mitigate the latter. I see Photo Guard as our small but meaningful contribution to that endeavor.”

The diffusion attack demands a substantial amount of GPU memory and is more computationally costly than its simpler encoder counterpart. According to the team, approximating the diffusion process with fewer steps mitigates the problem and makes the technique more practical.
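As a rough illustration of that shortcut, the expensive edit call inside the optimization loop can be run with only a handful of denoising steps. Both `run_diffusion_edit` and its `num_inference_steps` argument are assumptions carried over from the sketch above, mirroring the step-count option that common diffusion libraries expose.

```python
# Illustrative only: backpropagating through a few denoising steps instead of the
# usual several dozen gives a coarser but far cheaper approximation of the full
# diffusion process. `run_diffusion_edit` is the hypothetical pipeline used above.
FEW_STEPS = 4

def approximate_edit(image, prompt, run_diffusion_edit):
    # Same edit as in the attack loop, but simulated with far fewer steps.
    return run_diffusion_edit(image, prompt, num_inference_steps=FEW_STEPS)
```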

To illustrate the attack, consider an art project: the original image is one drawing, and the target image is an entirely different drawing.

The diffusion attack amounts to making minute, undetectable adjustments to the first drawing so that, to an AI model, it begins to resemble the second drawing. To the naked eye, however, the original drawing remains unchanged.

This protects the original image from manipulation: any AI model that tries to edit it unwittingly alters it as if it were the target image. The result is a picture that remains visually unchanged for human observers while being protected from unauthorized editing by AI models.
