Responsible use of AI: call for submissions on new safety Standard and consultation paper

Written by Keith Robinson (Partner) and Sophie Dawson (Partner)

The Federal Government has this week released a Voluntary AI Safety Standard, together with a consultation paper, ‘Safe and responsible AI in Australia’, proposing the introduction of mandatory guardrails for AI in high-risk settings.

The paper seeks submissions on:

  • the proposed definition of high-risk AI;
  • 10 proposed mandatory guardrails for the development and deployment of high-risk AI; and
  • the approach to the implementation of regulation.

Submissions are due on 4 October 2024.

The proposed approach

The proposed risk-based approach focuses on preventing harms before people interact with, or are subject to, an AI system, and will predominantly apply to AI developers and deployers. This approach will bring Australia into line with the approaches being adopted in the European Union, Canada and the United Kingdom, and is consistent with the multilateral Bletchley and Seoul Declarations, to which Australia is a signatory.

The Voluntary AI Safety Standard is intended to support and promote best practice, and to mitigate the potential adverse impacts of AI developers and deployers adopting inconsistent approaches, while the government considers its options on mandatory guardrails. It also clearly sets expectations for what future legislation may look like.

Defining high-risk AI

The proposed approach is to regulate two categories of “high-risk” AI.

The first category is where the proposed use or application of the AI system or general-purpose AI (GPAI) model is known or foreseeable. For this category, whether the AI system or GPAI model is high-risk will be determined by applying a set of principles requiring regard to be given to:

  • the risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations;
  • the risk of adverse impacts to an individual’s physical or mental health or safety;
  • the risk of adverse legal effects, defamation or similarly significant effects on an individual;
  • the risk of adverse impacts to groups of individuals or collective rights of cultural groups;
  • the risk of adverse impacts to the broader Australian economy, society, environment and rule of law; and
  • the severity and extent of the adverse impacts outlined in the principles above.

The second proposed category of high-risk AI applies to advanced, highly capable GPAI models whose possible risks and applications cannot all be predicted.

The mandatory guardrails outlined below would apply to all high-risk AI.

Following the approach in Canada’s proposed Artificial Intelligence and Data Act (AIDA), the paper proposes defining GPAI as ‘An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems’.

Specific uses

The paper also refers to the approach adopted in the EU AI Act, which explicitly prohibits certain uses of AI that:

  • exert subliminal influence;
  • exploit vulnerability in a way likely to cause significant harm;
  • apply social scoring with detrimental consequences;
  • assess the risk of an individual committing a criminal offence;
  • build facial recognition databases through untargeted internet scraping;
  • infer emotions;
  • conduct biometric categorisation inferring sensitive attributes; or
  • deliver real-time biometric identification for law enforcement (subject to exceptions).

The EU AI Act also deems AI used in the following areas to be high risk:

  • biometrics;
  • critical infrastructure;
  • education;
  • employment;
  • access to essential services; and
  • law enforcement, immigration, justice and democratic processes.

The paper seeks feedback on types of AI use that could present an unacceptable level of risk in Australia and should be banned.

The mandatory guardrails

The mandatory guardrails are intended to be interoperable with those in other comparable jurisdictions (and, in particular, the EU’s AI Act and Canada’s proposed AIDA), and to align with national and international standards such as ISO/IEC 42001:2023 Artificial Intelligence Management System, which has been recognised as a standard that will support AI governance in Australia and internationally.

The proposed mandatory guardrails largely replicate those in the Voluntary AI Safety Standard and require organisations developing or deploying high-risk AI systems to:

  1. establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  2. establish and implement a risk management process to identify and mitigate risks;
  3. protect AI systems, and implement data governance measures to manage data quality and provenance;
  4. test AI models and systems to evaluate model performance and monitor the system once deployed;
  5. enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  7. establish processes for people impacted by AI systems to challenge use or outcomes;
  8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. keep and maintain records to allow third parties to assess compliance with guardrails; and
  10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.

The only deviation between the mandatory and voluntary guardrails is the 10th. In the voluntary standard, the 10th guardrail is ‘engage your stakeholders and evaluate their needs and circumstances, with a focus on safety’, emphasising ongoing stakeholder engagement rather than conformity assessment.

Regulatory implementation options

The paper considers (and seeks submissions on) three regulatory options available to mandate the proposed mandatory guardrails:

  1. a domain-specific approach – adopting the guardrails within existing regulatory frameworks as needed;
  2. a framework approach – introducing new framework legislation to adapt existing regulatory frameworks across the economy; and
  3. a whole-of-economy approach – introducing a new cross-economy AI-specific Act (for example, an Australian AI Act).

For any questions about the Government’s new Voluntary AI Safety Standard and consultation paper on ‘Safe and responsible AI in Australia’, please contact our Technology team.

Important Disclaimer: The material contained in this article is comment of a general nature only and is not, and is not intended to be, advice on any specific professional matter. Because the effectiveness or accuracy of any professional advice depends upon the particular circumstances of each case, neither the firm nor any individual author accepts any responsibility whatsoever for any acts or omissions resulting from reliance upon the content of any articles. Before acting on the basis of any material contained in this publication, we recommend that you consult your professional adviser. Liability limited by a scheme approved under Professional Standards Legislation (Australia-wide except in Tasmania).