Stable Diffusion is open source, which means anyone can test and analyze it. Imagen is closed, but Google gave the researchers access. Singh says the work is a great example of how important research access to these models is, and argues that companies should be similarly transparent with other AI models, such as OpenAI’s ChatGPT.
However, while the results are impressive, they come with some caveats. The images the researchers were able to extract either appeared frequently in the training data or were highly unusual compared with the rest of the dataset, says Florian Tramèr, an assistant professor of computer science at ETH Zürich who is part of the team.
People who look unusual or have unusual names are at higher risk of being memorized, Tramèr says.
The researchers were able to extract only a relatively small number of exact copies of individuals’ photos from the AI model: just one in a million images, Webster says.
But that’s still a concern, Tramèr says: “I really hope that nobody looks at these results and says, ‘Oh, really, these numbers aren’t that bad if they’re only one in a million.’”

“The fact that they are greater than zero is what matters,” he adds.