Ian Mulvany

July 20, 2023

How likely is it that you'll generate a copyrighted image using a generative AI tool such as Stable Diffusion?

How likely is it that you'll generate a copyrighted image using a generative AI tool such as Stable Diffusion? This question came up in a work context last week and I've been thinking about it.

I asked GPT how these tools work, and it came back with:

"generative image models like Stable Diffusion learn from data to generate new images by applying a diffusion process that progressively refines an initial image or latent code. This approach allows for the creation of diverse and realistic images. Generative image models, including Stable Diffusion, have the capability to generate images that resemble the ones they were trained on. However, it is important to note that these models do not have direct access to the specific training images or their exact representations. Instead, they learn the underlying statistical distribution of the training dataset. During the training process, the model learns the patterns, textures, and structures present in the training images. It captures the statistical properties of the dataset, rather than memorizing individual images. As a result, while the generated images may resemble the training images in terms of style and content, they are not direct replicas of any specific training image.

It is also worth mentioning that generative image models are probabilistic in nature, and the generated images can vary based on factors such as random initialization and sampling techniques. So, while it is possible to generate images that share similarities with the training images, the exact replication of a specific training image is highly unlikely."

So, given how they work, it seems quite unlikely that they will create a copy of an image they were trained on, but can we get any more precision around how unlikely?
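To get an intuition for the "progressively refines" part of the description above, here is a drastically simplified toy sketch. This is my own illustration, not anything from Stable Diffusion itself: start from pure noise and repeatedly nudge the sample toward a learned target, so the result resembles the training distribution without being a stored copy of it.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Start from pure noise and repeatedly blend toward a 'learned' target.

    A toy stand-in for diffusion sampling: each step removes a little noise,
    and a dash of fresh randomness keeps outputs from being exact copies.
    """
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in target]  # pure noise to start
    for _ in range(steps):
        x = [0.9 * xi + 0.1 * ti + rng.gauss(0, 0.01)
             for xi, ti in zip(x, target)]
    return x

target = [0.2, 0.5, 0.8]      # stand-in for what the model has "learned"
sample = toy_denoise(target)  # close to target, but never identical to it
```

The injected noise at each step is what makes the process probabilistic: the output lands near the learned distribution, not exactly on any one point in it.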

The best view I've found on this so far comes from this paper - https://arxiv.org/pdf/2301.13188.pdf - which tried to "attack" an early version of Stable Diffusion to see whether it would leak any of its training images. One way to think about this is to imagine a scenario in which our image generation model was trained on only one image. That's it, just one image. Then anything we ask of it is pretty much going to reproduce that one image, with a bit of noise. Given enough turns of the handle, we are going to come up with an image that is close enough to the training image. The paper looked at whether there are pockets of training images that appear in the training data set multiple times, so that there might be a higher chance of re-creating them. They call this ability of the model to carry a potential for recreating an original training image "memorization".
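The paper's notion of "close enough" was a patch-level distance between images. As a much cruder sketch of the same idea (my own simplification, not the paper's actual measure), you could flag a generated image as a near-duplicate when its root-mean-square pixel distance to a training image falls below a threshold; the threshold here is arbitrary, purely for illustration.

```python
import math

def is_near_duplicate(generated, training, threshold=0.1):
    # Root-mean-square distance between flattened pixel values in [0, 1].
    # A crude stand-in for the paper's patch-based similarity measure.
    n = len(generated)
    dist = math.sqrt(sum((g - t) ** 2 for g, t in zip(generated, training)) / n)
    return dist < threshold

print(is_near_duplicate([0.5] * 16, [0.5] * 16))  # identical images: True
print(is_near_duplicate([0.0] * 16, [1.0] * 16))  # maximally different: False
```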

They worked on an older version of Stable Diffusion, one trained on about 160 million images. They had access to the training data set and the descriptions of the training images. What they did was pick the 350,000 images, out of the 160 million, that appeared more frequently than other images in the training data set. They then took the image descriptions and got Stable Diffusion to generate 500 images for each description. Finally, they looked at the generated images to see if any of them looked like training images.

That meant they were looking for near-duplicates among 175 million generated images. They found 1,280.
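The arithmetic behind those numbers, using the figures quoted above:

```python
prompts = 350_000        # heavily duplicated captions selected from the training set
per_prompt = 500         # generations per caption
near_duplicates = 1_280  # generated images judged close to a training image

total = prompts * per_prompt    # 175,000,000 generated images
rate = near_duplicates / total  # ≈ 7.3e-06, i.e. about 0.0007%
```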

So the probability of stumbling upon an image that looks close enough to a training image was about 1,280 in 175 million, or roughly 0.0007%.

However, that's just for images that are highly represented in the training corpus, not for all of the images.

They recognised that these images were often of famous people, or well known brands, and the attack worked because they could bias their trial runs towards images that fell into this cohort of images that were over-represented in the training corpus.

So for a normal user, who is not looking to uncover something in the training data set, what might be the chance that they create an image that is copyrighted?

First, they need to be picking a famous person or topic, rather than something more generic. Let's imagine that they do this 20% of the time, and the other 80% of the time they are creating more complex or abstract images.

Next, that famous person or brand needs to come from the 350k images out of the 160m. That is only about 0.2% of the corpus, so the person would first need to be picking a topic that falls in this small subset of the training data. Let's be pessimistic and say that our user is actually looking for something in this part of the training set 10% of the time, instead of 0.2%.
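For concreteness, the size of that over-represented subset relative to the whole corpus works out as:

```python
subset = 350_000      # most-duplicated images targeted by the attack
corpus = 160_000_000  # full training set of this Stable Diffusion version

fraction = subset / corpus
print(f"{fraction:.2%}")  # 0.22%
```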

Next, they would need to use the exact description of the image. If the idea is very atomic, such as "barbie" or "Barack Obama", then there might be a good chance that they get this right. Let's assume 50%.

Finally, the image that is generated has to be actually copyrighted. I've seen an estimate that about 2% of the current Stable Diffusion training corpus is copyrighted, but let's imagine that in our case this happens 50% of the time, as we are picking a well known topic or brand.

That gives us our combined probabilities. 

The probability of accidentally creating a copyrighted image should be less than the following:

0.0007% × 50% × 50% × 10% × 20% ≈ 0.0000037%.

This is pretty small. You would need to create about one image per second, every second, for nearly a year, before you would expect to generate a single copyrighted image. And that's based on what we know from an earlier version of Stable Diffusion.
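Putting the chain together in code, with every factor other than the paper's measured rate being my own assumed guess from the paragraphs above:

```python
base_rate = 1_280 / 175_000_000  # measured near-duplicate rate from the paper
p_famous = 0.20       # assumed: prompt targets a famous person or brand
p_subset = 0.10       # assumed: subject sits in the over-represented 350k images
p_exact = 0.50        # assumed: prompt matches the training caption closely
p_copyrighted = 0.50  # assumed: the matched image is actually under copyright

p = base_rate * p_famous * p_subset * p_exact * p_copyrighted
print(f"{p:.2e}")                    # ≈ 3.66e-08 per generated image
print(f"{1 / p / 86_400:.0f} days")  # ≈ 316 days at one image per second
```

Changing any of the assumed factors just rescales the answer linearly, so even generous guesses leave the final probability tiny.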

As the training corpus gets larger, we can assume that the chances of uncovering a "memorized" image get smaller.

This calculation was based on a version of Stable Diffusion with 160M images in its training set. The current version is trained on something closer to 5 billion images, so we can be confident that the chances of creating a copyrighted image are vanishingly small.

Nonetheless, if you want to be very certain, here are some things that might help:

- Don't pick simple atomic subjects on their own. 
- Make your image prompts a bit longer, and add some styling to your prompts. 
- Hew towards concepts, or abstract ideas, rather than people or brands. 

One other interesting question is who owns the copyright in the images that you create with these tools. At present it appears that no copyright can be assigned, as there is no human authorship in the creation of the images, so that is something to consider.

I've found these two posts pretty interesting on the topic: 



About Ian Mulvany

Hi, I'm Ian - I work on academic publishing systems. You can find out more about me at mulvany.net. I'm always interested in engaging with folk on these topics, if you have made your way here don't hesitate to reach out if there is anything you want to share, discuss, or ask for help with!