1 July 2025

New model clauses for AI and what they mean for procurement and compliance

Sophie Dawson, Helen Clarke, Keith Robinson, Emily Lau, Liesel Millard

The Digital Transformation Agency has issued new AI model clauses for use by Commonwealth Government agencies. While these clauses are for Commonwealth Government contracts and are not mandatory, they provide a clear indication of the Government’s expectations of suppliers and are likely to be followed, at least in part, by other customers of AI-driven products and services. 

The model clauses are designed to be used when procuring services from sellers that incorporate or use AI in delivering those services, and when asking sellers to assist with the development of in-house AI tools. Model clauses for Commonwealth Government agencies procuring a software product with AI capabilities are to be developed as part of the Software and Cloud Module of the DMP 2 Panel Agreement.

Use of AI in providing services

The model AI clauses are closely aligned with the guardrails in the Voluntary AI Safety Standard, and require sellers using AI to provide their services to:

  1. be transparent – by obtaining the buyer’s approval of the use of the relevant AI system for the intended purpose and ensuring that the seller’s use of AI remains within the scope of the buyer’s consent;
  2. engage in quality assurance – by checking outputs of the AI system;
  3. keep records – of the AI system used, the scope of the seller’s use of the AI system, the data collected, processed and stored, and the systems with which the AI interconnects; and
  4. not use banned AI systems – which are listed in the contract (DeepSeek being an obvious example).

What these clauses don’t do is recognise the buyer’s role in the safe and controlled use of AI. For example, in many use cases, it will be the buyer that is responsible for the data input into the AI system and for the systems from which that data is drawn. In these circumstances, it may be difficult for the seller to carry out quality assurance and keep the required records, meaning these clauses may need to be adjusted to reflect specific use cases.

Developing an AI tool for the buyer

Where a seller is helping a Commonwealth Government agency to develop an AI tool in-house, the model AI clauses require sellers to:

  1. be transparent – by obtaining the buyer’s approval of the use of the relevant AI system for the intended purpose and ensuring that the seller’s use of AI remains within the scope of the buyer’s consent;
  2. notify the buyer – if the development, use or malfunction of the AI system results in harm to individuals or property, disruption to critical infrastructure, or a breach of law;
  3. include a circuit breaker – that allows the AI system to be immediately interrupted or stopped;
  4. provide oversight – through effective human oversight, training, testing and monitoring of the AI system;
  5. manage risk – by having and implementing a comprehensive AI risk management system approved by the buyer;
  6. promote fairness – by ensuring that the AI system does not discriminate against or otherwise harm individuals, and by identifying uncontrolled bias in training data; and
  7. keep records – regarding training data, discrimination assessment and human oversight.

The model AI clauses suggest that the buyer should conduct due diligence on the seller’s supply chain and risk management system. Inevitably, this will require suppliers to have a comprehensive view of where AI is used in their supply chains, to address the associated risks in their risk management systems, and to provide visibility to the buyer. This is consistent with the broader trend towards requiring entities to understand and properly manage risks in their supply chains, embodied in a variety of regulatory instruments (for example, APRA’s Prudential Standard CPS 230 and modern slavery laws).

Parallels between the model AI clauses and other AI-related principles

The new model AI clauses align with the voluntary principles contained in Australia’s AI Ethics Principles and the Voluntary AI Safety Standard in relation to the development and use of AI, as well as the OAIC’s Guidance on privacy and the use of commercially available AI products. The AI Ethics Principles and the Voluntary AI Safety Standard both emphasise the need for accountability, transparency, reliability and safety, human oversight, and fairness.

Similarly, the OAIC Guidance promotes safety by recommending that buyers conduct due diligence to ensure a product is suitable for its intended use, and advocates transparency by encouraging businesses to update their privacy policies to address their use of AI. Notably, in the interests of increasing transparency around automated decision making (which may include the use of AI), the recent privacy reforms scheduled to commence on 10 December 2026 will require an APP entity that uses automated decision making to include certain information in its privacy policy.

The model AI clauses illustrate a clear trend: suppliers will be expected to ensure quality assurance, transparency, risk management and appropriate oversight, and will be held contractually responsible for doing so. They also support buyers’ expectations of harm minimisation, accountability on the part of those developing or using AI systems to provide services, data security, and the opportunity to conduct due diligence on a seller’s supply chain and risk management systems.

These model AI clauses are quite buyer-friendly and do little to address the buyer’s role in training AI systems and curating the data inputs. Given this, sellers negotiating with government agencies are likely to seek changes to these clauses to address the buyer’s responsibilities and factors outside the seller’s control – for example, quality assurance of outputs where the buyer provides the inputs and is best placed to judge the quality of the outputs.

We can expect increasing regulation and more guidance in this area as industry trends on contractual positions emerge.