
Researchers launch Nightshade to protect artists’ copyright

A new tool called Nightshade, developed by a team led by Ben Zhao, a professor at the University of Chicago, could change the way artists protect their copyright in the age of generative AI.

Nightshade allows artists to inject invisible alterations into the pixels of their artwork before uploading it online. If that artwork is later scraped into an AI training set, the hidden changes poison the resulting models, deterring AI companies from replicating artists' creations without consent or compensation.
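The article does not detail Nightshade's algorithm, and the real tool computes carefully optimized perturbations rather than random noise. Purely to illustrate the idea of pixel changes too small for a viewer to notice, here is a minimal toy sketch in Python; the filenames and the ±2 noise amplitude are assumptions for the example, not part of Nightshade:

```python
# Conceptual sketch only: Nightshade uses optimized, model-aware
# perturbations; random +/-epsilon noise here just demonstrates
# shifting pixel values by imperceptible amounts before upload.
import numpy as np
from PIL import Image

def add_small_perturbation(path_in: str, path_out: str, epsilon: int = 2) -> None:
    # Load as RGB and widen the dtype so the addition cannot wrap around.
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    # Stand-in perturbation: each channel shifts by at most +/-epsilon out of 255.
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Hypothetical filenames for the example.
add_small_perturbation("artwork.png", "artwork_protected.png")
```

A change of two intensity levels per channel is invisible to the eye, which is why such alterations can pass unnoticed into scraped training data.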

Nightshade works by exploiting a security vulnerability in generative AI models: because they are trained on vast amounts of data scraped from the web, poisoned images slip into their training sets and leave lasting damage that is hard to detect and remove. The tool is also open source, so its potential impact grows as more users adopt and adapt it. There are concerns that malicious actors could misuse the technique to sabotage AI models, but experts argue that inflicting significant damage on larger, more powerful models would require thousands of poisoned samples, a daunting undertaking given that those models are trained on billions of images.
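To see why scale matters, a back-of-envelope calculation makes the point; the figures below are illustrative assumptions (the article only says "thousands" versus "billions"), not numbers from the source:

```python
# Illustrative numbers only: "thousands" of poisoned samples against
# a training set of "billions", per the article's framing.
poisoned_samples = 5_000
training_set_size = 5_000_000_000

fraction = poisoned_samples / training_set_size
print(f"Poisoned fraction of the training set: {fraction:.2e}")  # 1.00e-06
```

At these assumed scales, poisoned images would make up roughly one in a million training samples, which is why deliberately sabotaging a large model this way is considered difficult.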

Nightshade is particularly promising against prominent AI models like DALL-E, Midjourney, and Stable Diffusion, which can generate images that are virtually indistinguishable from human-created works. When trained on enough poisoned images, these models produce distorted results, such as dogs turning into cats and cars morphing into cows. This is a step toward protecting artists' intellectual property.

The team behind Nightshade has also developed Glaze, a tool that lets artists mask their personal style to keep it from being harvested by AI companies. Nightshade is set to be integrated into Glaze, letting artists decide whether they wish to employ the data-poisoning technique.

The sources for this piece include an article in MIT Technology Review.
