Getty Images has banned the upload and sale of any images generated by an AI—a bid to keep itself safe from any legal issues that may arise from what is effectively a Wild West of art generation today.
“There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery,” Getty Images CEO Craig Peters told The Verge.
With the rise of AI art tools such as DALL-E, Stable Diffusion, and Midjourney, among others, there has been a sudden influx of AI-generated images on the web. For the most part, we’ve seen these images come and go as entertaining gags on Twitter and other social media platforms, but as these AI algorithms become more complex and effective at image creation, we’ll see these images used for a whole lot more.
And that’s a business that Getty, one of the leading curated image library providers, wants to stay well clear of.
Getty’s CEO refused to say if the company had already received legal challenges regarding AI-generated images, although he did assert that it had “extremely limited” AI-generated content in its library.
All AI image generation algorithms require training, and massive image sets are required to do this effectively. As The Verge reports, Stable Diffusion is trained on images scraped from the web via a dataset from the German charity LAION. This dataset was created in compliance with German law, the Stable Diffusion website states, although it admits that the exact legality regarding copyright for images created using its tool “will vary from jurisdiction to jurisdiction.”
As such, it’s likely to become increasingly difficult to tell whether an artwork is derived from another copyrighted image.
There are other concerns regarding image datasets and scraping techniques: a California-based artist discovered private medical record photographs, taken by their doctor, within the LAION-5B image set. The artist, Lapine, found the images via ‘Have I Been Trained?’, a website specifically designed to tell artists whether their work has been included in these sorts of datasets.
Ars Technica confirmed the images in an interview with Lapine, who has kept their identity confidential for privacy reasons. That privacy clearly wasn’t afforded to the supposedly confidential medical records held by the artist’s doctor, who died in 2018, and it’s worrying to think how those photographs ended up in a very public dataset without permission in the years since.
Lapine does not appear to be the only person affected, either: Ars states that while searching for Lapine’s photos it discovered other images that may have been obtained through similar means.
“🚩My face is in the #LAION dataset. In 2013, a doctor photographed my face as part of clinical documentation. He died in 2018 and somehow that image ended up somewhere online and then ended up in the dataset – the image that I signed a consent form for my doctor – not for a dataset. pic.twitter.com/TrvjdZtyjD” (September 16, 2022)
When asked about the image set, the CEO of Stability AI, the company behind Stable Diffusion, said that he could not speak for LAION. He did state that it might be possible to un-train Stable Diffusion so that certain images are removed from its algorithm, but also that the end result as it stands today is not an exact copy of any information from a given image set.
There are burgeoning privacy and legal concerns that will undoubtedly rise to the surface in the coming months and years regarding the production and distribution of AI-generated images. What is a fun tool, and perhaps even a handy one at times, is very likely to become a sticky topic for lawmakers, rights holders, and private citizens.
I don’t blame age-old image libraries for taking a step back from the technology in the meantime.