
In a decision[1] that was highly anticipated by AI providers and the creative industry alike, the UK High Court has determined that Stability AI was not liable under the UK secondary copyright infringement provisions because its AI model was found not to include an “infringing copy” of the relevant copyright works. The UK provisions under consideration differ in important respects from the Australian secondary infringement provisions.
The case may therefore be influential, but will not be determinative, in relation to any cases in Australian courts.
This article considers the secondary infringement aspects of the decision and identifies some key differences between the provisions under consideration and Australian secondary infringement provisions.
Background
In 2023, two of the world’s largest providers of licensed stock images and video content online, Getty Images (US), Inc. and related parties (together, Getty) sued Stability AI Limited (Stability) in the United Kingdom. The crux of Getty’s case was that Stable Diffusion, Stability’s open-source text-to-image generative AI model, had been created using Getty’s content. Specifically, Getty claimed that the model had been trained on millions of copyright-protected images that were scraped from Getty’s websites and, as a result, when prompted, Stable Diffusion reproduced those images, including Getty’s watermarks, in synthetic image outputs.
Getty initially sued Stability for primary and secondary copyright infringement, database rights infringement, trade mark infringement and passing off. However, Getty lacked evidence that any training or development of Stable Diffusion had occurred in the UK.
By the end of the trial, Getty was pursuing only the secondary infringement, trade mark infringement and passing off claims. This article considers the findings in respect of those claims.
What is 'Stable Diffusion'?
To understand the decision, it is important to understand how Stable Diffusion operates. The Stable Diffusion models that were the subject of the case each had the following characteristics:
- input in the form of a user command or prompt;
- output in the form of a synthesised image;
- model architecture designed to be trained by repeated exposure to large data sets from online content, including millions of captioned images scraped from the internet and image metadata;
- model parameters (or “model weights”) which were given random values by Stability before training; and
- model weights that adjusted as they were exposed to training data and the models learned to recognise statistical patterns between text labels and visual features.
Importantly (and relevant to the Court’s finding on copyright infringement), the Court found that the model itself did not, at any point, store or contain copies of Getty’s content. Instead, the AI model drew on probability distributions associated with particular concepts to generate “new” images in response to user prompts. The model’s output was therefore influenced both by how a prompt was framed and by the probabilistic processes the model developed during training. As a result, an output could resemble input from the training data or could bear little resemblance to it, i.e. be entirely novel.
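For readers with a technical background, the Court’s characterisation of model weights as learned statistical patterns, rather than stored copies, can be illustrated with a deliberately simplified sketch (a toy illustration only; it is not Stability’s code and does not reflect how diffusion models are actually implemented):

```python
# Toy illustration (hypothetical, not Stability's code): a "model" whose only
# retained parameters are aggregate statistics learned from training images.
# After training, those parameters summarise patterns across the dataset but
# do not store any individual training image.
import random

def train(images):
    """Learn the per-pixel mean and maximum deviation across all images."""
    n = len(images)
    width = len(images[0])
    means = [sum(img[i] for img in images) / n for i in range(width)]
    spreads = [max(abs(img[i] - means[i]) for img in images) for i in range(width)]
    # These two lists are the only "weights" the model keeps.
    return {"means": means, "spreads": spreads}

def generate(weights, seed=None):
    """Sample a new 'image' from the learned statistics."""
    rng = random.Random(seed)
    return [m + rng.uniform(-s, s) for m, s in zip(weights["means"], weights["spreads"])]

training_images = [[0, 10, 20], [2, 12, 22], [4, 14, 24]]
weights = train(training_images)   # six numbers; no image is retained
output = generate(weights, seed=1)  # a fresh sample, not a retrieved copy
```

Even in this crude sketch, the trained “weights” cannot be unwound to recover any particular training image; an output may resemble the training data only because it is drawn from statistics the data shaped. This is, in essence, the distinction on which the “infringing copy” finding turned.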
Trade mark infringement – in some cases
Getty’s success in establishing trade mark infringement was something of a Pyrrhic victory. The Court observed that the trade mark infringement findings were both “historic and extremely limited in scope”.
Broadly, Getty claimed that some Stable Diffusion outputs bore watermarks (Marks) that were identical or similar to several registered iStock and GETTY IMAGES marks, and that Stability had used the Marks in relation to the service of providing synthetic image outputs. Each example of the Marks that Getty relied on was either the result of “prompt experiments” conducted by Getty’s lawyers or was drawn from a small number of online examples generated by third-party users in the UK. Stability argued that it was not liable for trade mark infringement where the outputs were generated by user prompts, that any trade mark use was not in the course of trade, and that the Marks were only generated where a user deliberately set out to produce them.
The trade mark aspect of the decision was lengthy, which highlights the complexity involved in assessing trade mark infringement by AI models.
The Court was tasked with considering a threshold question: whether Getty Images had established that any user of a version of Stable Diffusion in the UK had ever been presented with a Mark on a synthetic image output. In undertaking this assessment, the Court differentiated between outputs produced by different versions of Stable Diffusion (as not all versions of the model had been trained on the same dataset). This problem is likely to be common to other AI models, which go through many iterations and updates after release. In the limited instances where the threshold question was answered affirmatively, the mere presence of a watermark was not always sufficient to give rise to a claim.
The Court rejected Stability’s arguments that it was not liable for trade mark infringement because the Marks were user-generated, and that the trade marks were not used in the course of trade. In doing so, the Court agreed (at [339] to [340]) with Getty’s submissions that:
“Stability is using the sign for its own commercial communication: the communication that bears the watermark* in the form of the output image is the commercial communication of Stability because it is generated by its Model. This, says Getty Images, involves more than merely storing the output images (unlike in Coty) – but instead involves offering the service of generating the images and putting those images onto the market…this case also involves active behaviour and control on the part of Stability because (i) it is the entity that trained the Model; (ii) it is the entity that could have filtered out watermarked images in order to ensure that its model did not produce outputs bearing watermarks*; (iii) it makes the Model available to consumers through GitHub, Hugging Face, the Developer Platform and DreamStudio (which I have accepted in relation to v2.x; SD-XL and v1.6. For v1.x the position is more complex); and (iv) it is the entity making the communication that bears the relevant signs. None of this can be said to be the independent action of another economic operator.”
The Court also found (at [348 to 349]) that:
“Stable Diffusion is a machine learning system which derives its primary function largely from learning patterns from a curated training dataset. Its final function is not directly controlled in its entirety by the engineers who designed it, but a large part of its functionality is indirectly controlled via the training data. The model weights are learned from the training data and it is the model weights which control the functionality of the network. Although the process of inference does not require the use of training data, the outputs generated during inference will (at least indirectly) be a function of that training data. Thus, as the Experts agree, the generation of watermarks* by the Model “is due to the fact that the model was trained on some number of images containing this visible watermark”. This is the responsibility of Stability.
…
As Getty Images submit, the only entity with any control in any meaningful sense of the word over the generation of watermarks* on synthetic images is Stability. It is certainly not “passive” as Stability submits.”
There is an interesting juxtaposition between this position and the position taken by the Court on secondary copyright infringement.
Both expert witnesses in the case took the view that later models of Stable Diffusion likely deployed a watermark filter.
We note that care should be taken in relation to any removal of watermarks in Australia. Depending on the circumstances, removal may contravene section 116B of the Copyright Act 1968 (Cth) (Copyright Act). That section prohibits the removal or alteration of electronic rights management information relating to a copyright work or other subject matter without the permission of the copyright owner, where the person knows, or ought reasonably to have known, that the removal or alteration would induce, enable, facilitate or conceal an infringement of the copyright in the work or other subject matter.
Getty's copyright claims
Getty originally made claims of primary copyright infringement relating to the training and development of Stable Diffusion and the model’s allegedly infringing ‘outputs’, but as mentioned above, these claims were abandoned due to difficulty proving that training or development occurred in the UK. The judgment does not therefore directly consider the question of whether reproductions made in the course of training or development infringed copyright.
Getty’s claim for secondary copyright infringement was to the effect that “contrary to sections 22 and 23 of the Copyright, Designs and Patents Act 1988 (CDPA), Stability has imported into the UK, otherwise than for private and domestic use, possessed in the course of business, sold or let for hire or offered or exposed for sale or hire, or distributed an article, namely Stable Diffusion, which is and which Stability knew or had reason to believe is an infringing copy of the Copyright Works” [at 10].
There were two key matters of statutory construction in issue in relation to this claim: first, whether Stable Diffusion was relevantly an “article”, and second, whether it was an “infringing copy”. We consider each of these issues in turn, followed by some interesting obiter remarks.
Stable Diffusion was an article
It was held that Stable Diffusion was relevantly an article. Justice Smith applied the “always speaking” principle (to the effect that when a new state of affairs arises, courts should consider whether it falls within the relevant Parliamentary intention) to find that “an electronic copy stored in an intangible medium (such as the AWS Cloud) is, …, capable of being an infringing copy and thus also capable of being an “article”” (at [583]). As discussed below, this is consistent with the drafting of a key definition in Australia.
Stable Diffusion was not an “infringing copy”
The Court held that Stable Diffusion was not an infringing copy. In arriving at this conclusion, the Court agreed with Stability that an article which is “purely the product of the patterns and features which they have learnt” cannot itself be an “infringing copy” unless it had at some point “contained or stored an infringing copy”.
Section 27(2) and (3) of the CDPA contain the definition of ‘infringing copy’. They provide as follows:
(2) An article is an infringing copy if its making constituted an infringement of the copyright in the work in question.
(3) An article is also an infringing copy if:
(a) it has been or is proposed to be imported into the United Kingdom; and
(b) its making in the United Kingdom would have constituted an infringement of the copyright in the work in question, or a breach of an exclusive licence agreement relating to that work.
Getty Images submitted that it was sufficient for the purpose of these definitions that an article’s making constituted an infringement, and that there was no requirement that the article thus made must continue to retain a copy or copies of the work. Justice Smith rejected this submission.
The Court held that “an infringing copy must be a copy”. That is, “the essence of the infringement is that there has been an infringement of copyright by the reproduction of the work (including by its storage in any medium by electronic means) in any material form” (at [597]).
The Court found that Stable Diffusion did not store the visual information in the Copyright Works and was not therefore an “infringing copy”. Specifically, her Honour said:
“while it is true that the model weights are altered during training by exposure to Copyright Works, by the end of that process, the model itself does not store any of those Copyright Works; the model weights are not themselves an infringing copy and they do not store an infringing copy. They are purely the product of the patterns and features which they have learnt over time during the training process …The fact that its development involved the reproduction of Copyright Works (through storing the images locally and in cloud computing resources and then exposing the model weights to those images) is of no relevance …The model weights for each version of Stable Diffusion in their final iteration have never contained or stored an infringing copy.”
Further observations
The Court made a number of obiter observations in case its conclusions were wrong in relation to “infringing copy”. Those observations are potentially relevant to any future proceedings and include:
- making Stable Diffusion available for download on Hugging Face amounted to importation for the purpose of the CDPA (at [604(i)]);
- “There could never be any act of secondary infringement by reason of provision of remote software services” (at [604(iii)]). Specifically, gaining access to the Model via DreamStudio did not involve importation or even transfer of a copy of Stable Diffusion to the UK because all inference and output occurred outside the UK; and
- if (contrary to the finding) Stable Diffusion was an infringing copy, then it could be inferred from the evidence that Stability had knowledge or reason to believe that Stable Diffusion was an infringing copy.
German decision
A week after the UK High Court’s decision, a German court (Munich I Regional Court)[2] delivered a judgment in which OpenAI was held liable for primary copyright infringement on the basis that the process of “memorization” involved reproduction of copyright material within the jurisdiction, rather than the mere extraction of information from a training dataset. The decision may be appealed, but the contrast with the German court’s approach is instructive.
Australian position
Australian test cases in relation to whether AI models infringe trade marks and/or copyright (including secondary infringement provisions) are yet to emerge.
When they do, international decisions such as those set out above are likely to be taken into account.
The UK decision may be persuasive for Australian courts in respect of trade mark infringement and indirect copyright infringement such as importation for sale or hire or infringement by sale and other dealings under sections 37 and 38 of the Copyright Act.
In relation to the definition of an “article”, it is worth noting that section 38(3) of the Copyright Act specifically provides that “article includes a reproduction or copy of a work or other subject-matter, being a reproduction or copy in electronic form”. This aligns with the Court’s approach in Getty Images in relation to the definition of an “article”.
In relation to the question of whether an article must contain an infringing copy for sections 37 and 38 to apply, the express wording of those provisions is similar, but not identical to, the UK provisions. They do not use “infringing copy” (though as can be seen above, section 38(3) includes a reproduction in the definition of “article”).
In relation to the question of whether an AI model contains an infringing copy, section 24 of the Copyright Act may be relevant. Section 24 provides that, “for the purposes of the Act, sounds or visual images shall be taken to have been embodied in an article or thing if the article or thing has been so treated in relation to those sounds or visual images that those sounds or visual images are capable, with or without the aid of some other device, of being reproduced from the article or thing.”
The most important thing to note in relation to the Australian secondary infringement provisions is that there will not generally be any secondary infringement if an article has been made in a Berne Convention country and did not infringe copyright in that country: see s 44F and ss 10AA to 10AC. Because much AI is developed outside Australia in Berne Convention countries (such as the US and China), the Australian position will in many cases depend on the position reached by courts in the countries where AI is commonly developed and trained. US cases on these issues are emerging: see, for example, Thomson Reuters Enterprise Centre GmbH & West Publishing Corp v Ross Intelligence Inc, District Court of Delaware (Bibas J), 11 February 2025; Bartz v Anthropic PBC, 3:24-cv-05417 (N.D. Cal.); and Kadrey v Meta Platforms Inc, 3:23-cv-03417 (N.D. Cal.).
Key points for Australia
- If Australian courts follow the Getty Images decision, then hosting an AI model in another country and making it available to Australian users online may not, by itself, constitute sale or importation (as opposed to the downloading of such models, which does constitute importation on this decision). However, there is no guarantee that Australian courts will follow this aspect of the decision.
- AI developers may choose to use licensed material for training their models. Those who don’t may strategically consider where to develop and train their model, with a view to training models in a Berne Convention jurisdiction where they are likely to be in a strong position to defend reproduction for those purposes.
- The Federal Government does not currently propose to introduce a text and data mining exception into the Copyright Act, which contains only limited fair dealing defences.
- Rights holders should take measures to support their preferred position in relation to AI use of their copyright material, including:
- adopting website terms which reflect their preferred position;
- putting in place software to track and, if they wish, block AI bots. Technological protection measures provide a layer of technical security, and the Copyright Act contains provisions against circumventing technological protection measures, which may be relied on if those measures are circumvented; and
- using watermarks and other technological measures to protect their intellectual property.
- Cases like Getty Images underline the importance of investing in business processes to collate accurate records, including of AI prompts or examples of infringement in the wild. In collating evidence, it is important for IP rights holders to ensure that they do not over-engineer prompts to generate examples of infringement. Further, a significant number of infringement examples is needed to demonstrate a pattern of infringement by an AI model.
[1] Getty Images (US) Inc v Stability AI Ltd [2025] EWHC 2863 (Ch) (4 November 2025).
[2] GEMA v OpenAI (Case No. 42 O 14139/24).