Australian artificial intelligence regulation: a work in progress

Written by Sophie Dawson (Partner), Emily Lau (Senior Associate), Joshua Angelo (Law Graduate), Liesel Millard (Law Graduate)

As AI becomes a part of day-to-day life globally, regulators are grappling with the challenges it poses, and where to strike the regulatory balance.

Submissions to the Australian Senate Committee tasked with considering AI reveal some of the key themes emerging from this discussion in Australia.

The recent debate (on 25 June 2024) in relation to a Bill to address deepfake pornography also marks the continuation of the trend towards greater regulation of AI in Australia.

Introduction: the state of play

On 21 May 2024, the EU AI Act was finally approved by the Council of the EU. It will come into effect 20 days after it is published in the EU’s official journal, and a transition period of two years will apply to many of its provisions. The AI Act takes a graded approach to the regulation of AI: AI systems posing an unacceptable risk are prohibited; high-risk systems must be registered and will be regulated; AI systems posing limited risk will be subject to transparency obligations; and low-risk AI systems will be unregulated.

Australia is at a much earlier stage when it comes to deciding on a way forward in relation to AI. Australian legislators are in the process of considering the benefits and harms which AI poses with an eye to whether further regulation is required. This article discusses some of the key themes emerging in that process, and also discusses a Bill currently under consideration by the Australian Parliament in relation to deepfake pornography.

Senate committee process and terms of reference

The Senate has resolved that a Select Committee on Adopting Artificial Intelligence (AI) (Committee) be established to “inquire into and report on the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia”.

The establishment of the Committee comes alongside several other recent inquiries into, and consultations on, AI in Australia.

In its 17 January 2024 response to the consultation on safe and responsible AI in Australia, the Government concluded that AI is insufficiently regulated in Australia, and that it would seek to regulate AI in a risk-based, technology-neutral way.

The terms of reference of the Committee are to report on and consider:

  • recent trends and opportunities in the development and adoption of AI technologies in Australia and overseas, in particular regarding generative AI;
  • risks and harms arising from the adoption of AI technologies, including bias, discrimination and error;
  • emerging international approaches to mitigating AI risks;
  • opportunities to adopt AI in ways that benefit citizens, the environment and/or economic growth, for example in health and climate management;
  • opportunities to foster a responsible AI industry in Australia;
  • potential threats to democracy and trust in institutions from generative AI; and
  • environmental impacts of AI technologies and opportunities for limiting and mitigating impacts.

The Committee received written submissions, with the submission period closing on 10 May 2024. The Committee is due to report to Parliament by 19 September 2024.

The Committee received 216 public submissions, as well as oral submissions given at two public hearings in Sydney and Canberra. Submitters ranged from individuals to multinationals and industry organisations. Key submissions included those of Microsoft, Google, FreeTV, OpenAI and the Law Council of Australia (LCA).

Key themes emerging from submissions to senate inquiry

The Committee received a large volume of submissions, and certain risks and harms emerged across a number of them. This section touches on some key examples and discusses key legal developments in relation to deepfake pornography.

Deepfake technology

Some submissions argued that the rise of AI-generated deepfakes poses significant concerns for both Australia’s democratic processes and personal privacy. They submitted that deepfakes have the capacity to mislead the public, compounding the proliferation of misinformation already present on social media.

Doctored content has now also extended to AI “voice cloning”, which the Australian Recording Industry Association theorises could lead to copyright concerns. Submitters also argued that deepfakes pose personal privacy risks, such as the fabrication of non-consensual intimate imagery and financial fraud. As discussed further below, deepfake pornography is the subject of a Bill currently under consideration by the Australian House of Representatives.

Transparency

Some submitters argued that there should be transparency in relation to aspects of the development of AI products. Submissions such as that put forward by Annalise.ai, an AI-enhanced medical imaging corporation, pointed to a need for oversight in the testing and development stages of AI-enhanced products, specifically of the source datasets. Additionally, some submissions argued that users of AI-enhanced systems should be made aware of their presence through clear disclosures. The submission by FreeTV also argued that there are risks to democracy through misinformation and disinformation where users are unaware that they are viewing AI-generated content.

Bias

Algorithmic bias occurs when the training data used in machine learning contains existing biases or limitations. These biases can produce skewed results that replicate discrimination. The LCA and Microsoft emphasised the importance of risk management processes to limit the effects of algorithmic bias. As some submissions argued, users should be made aware of the source data used and of how biases have been mitigated.

Benefits of AI

Although the adoption of AI comes with risks, the submissions identified many areas in which AI is underutilised and which could benefit from its adoption. Disaster response and sustainability practices are emerging areas in which Google and Microsoft tools are being developed for environmental benefit. Additionally, healthcare and educational uses of AI have the potential to leave a lasting societal impact.

Deepfake pornography

Deepfake pornography can involve the use of AI to manipulate non-pornographic images of a person to create fake images which make it appear that the person has participated in pornography.

Legal developments in relation to deepfake pornography

Defamation law and existing offences in Australia provide some protection in relation to deepfake pornography. A Bill is in train which, if enacted, will put in place an additional offence.

The posting of deepfake pornography may give rise to an imputation (meaning) that the victim has voluntarily participated in producing pornography. It is highly likely that such a meaning is defamatory. The Ettingshausen case is arguably an early precedent for this; in that case a photograph showing Mr Ettingshausen exposed in a changing room shower was found to be defamatory because it gave rise to an imputation that he had “deliberately permitted” such a photograph to be taken for the purpose of reproduction in a publication with a widespread readership. An imputation that the plaintiff is a person whose genitals had been exposed to the readers of the defendant’s magazine HQ, a publication with a widespread readership, was also found by Justice Hunt to be defamatory on the basis that it would subject the plaintiff to ridicule: see Ettingshausen v Australian Consolidated Press Ltd (1991) 23 NSWLR 443 at 449.

Deepfake pornography is also likely in many cases to meet the serious harm test which now applies in relation to defamation in Australia.

A defamation action will however be of limited utility where the publisher has few resources or, more significantly, where the victim does not have the resources to track down and sue each publisher (of whom there may be many, particularly if an image goes ‘viral’).

There are existing offences of potential relevance to deepfake pornography, but their specific application in that context is largely untested, and specific issues may arise which do not arise in relation to real photographs and videos. For example, section 474.17(1) of the Criminal Code Act 1995 (Cth) (the Criminal Code) prohibits use of a carriage service in a way that reasonable persons would regard as being, in all the circumstances, menacing, harassing or offensive, and carries a maximum penalty of three years’ imprisonment, or five years for an aggravated offence. Section 474.17A of the Criminal Code provides for an aggravated offence of transmitting, making available, publishing, distributing, advertising or promoting “private sexual material”. More specifically, section 75 of the Online Safety Act 2021 (Cth) prohibits, as a civil penalty provision, the posting online of intimate images of a person without their consent, and attracts a maximum penalty of 500 penalty units. States and Territories have similar offences: see, for example, Crimes Act 1958 (Vic), section 53S; Criminal Code Act 1899 (Qld), Schedule 1, section 223; Criminal Code Compilation Act 1913 (WA), section 221BD; Summary Offences Act 1953 (SA), subsection 26B(2); Crimes Act 1900 (ACT), section 72C; Crimes Act 1900 (NSW), section 91Q; Police Offences Act 1935 (Tas), section 13B. Publication of material which would be refused classification is also an offence under classification laws, and where children are involved, additional child pornography offences (including under the Crimes Act) will apply.

Deepfake images give rise to new challenges under some of these provisions. For example, in submissions to a cybercrime Senate Committee inquiry quoted in the Bills Digest for the Bill discussed below, the Commonwealth Director of Public Prosecutions (CDPP) argued that section 474.17A may not cover deepfakes because victims would not necessarily have the requisite expectation of privacy in relation to the images (as they are likely to have been unaware of their creation).

Only Victoria has a specific offence aimed at deepfake pornography, which is in section 53R of the Crimes Act 1958 (Vic).

On 1 May 2024, the Federal Attorney-General announced that the Government would introduce a new offence to prohibit the creation and the non-consensual distribution of deepfake pornography.

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 (the Bill) was read a first time on 5 June 2024 and was the subject of a second reading debate on 25 June 2024. If it becomes law, the Bill will replace the existing offences in section 474.17A with an offence of using a carriage service to transmit sexual material without consent. That offence will be committed by a person (the first person) where:

  • “(a) the first person uses a carriage service to transmit material of another person; and
  • (b) the other person is, or appears to be, 18 years of age or older; and
  • (c) the material depicts, or appears to depict:
  • (i) the other person engaging in a sexual pose or sexual activity (whether or not in the presence of other persons); or
  • (ii) a sexual organ or the anal region of the other person; or
  • (iii) if the other person is female—the other person’s breasts; and
  • (d) the first person:
  • (i) knows that the other person does not consent to the transmission of the material; or
  • (ii) is reckless as to whether the other person consents to the transmission of the material.”

It will provide for a number of defences, including where a reasonable person would consider transmitting the material to be acceptable, having regard to a number of specified matters: the nature and content of the material; the circumstances in which the material was transmitted; the age, intellectual capacity, vulnerability or other relevant circumstances of the person depicted, or appearing to be depicted, in the material; the degree to which the transmission of the material affects the privacy of that person; the relationship between the person transmitting the material and that person; and any other relevant matters.

It will also create two aggravated offences (in section 474.17AA): one where the offence occurs after certain civil penalty orders have been made against the person, and one where the person created or altered the sexual material which was transmitted.

On 25 June 2024, the Hon. Paul Fletcher, for the Opposition, indicated that the Opposition supported the “intent” of the Bill, but raised issues about the drafting of the Bill, questioned the argument by the CDPP about the application of existing laws referred to above, and asked that the drafting be “closely scrutinised”. The Bill was referred to the Federation Chamber (a debating committee) on 25 June 2024.

Conclusion

The regulation of AI is a developing area of law globally and within Australia. The development of a regulatory approach to AI in the coming years will continue the wave of technology-focused law reform seen over the past seven years.

Important Disclaimer: The material contained in this article is comment of a general nature only and is not and nor is it intended to be advice on any specific professional matter. In that the effectiveness or accuracy of any professional advice depends upon the particular circumstances of each case, neither the firm nor any individual author accepts any responsibility whatsoever for any acts or omissions resulting from reliance upon the content of any articles. Before acting on the basis of any material contained in this publication, we recommend that you consult your professional adviser. Liability limited by a scheme approved under Professional Standards Legislation (Australia-wide except in Tasmania).
