Amendments to the Privacy Act 1988 (Cth) (the Privacy Act) passed in December 2024 include new transparency requirements that will apply where entities bound by the Privacy Act (APP entities) use computer programs in relation to certain decision making involving personal information.
What are the new transparency requirements?
The Privacy and Other Legislation Amendment Act 2024 (Cth) introduces provisions, scheduled to commence on 10 December 2026, that will provide greater transparency for individuals affected by automated decision making.
If an APP entity uses automated decision making, it must include certain information in its privacy policy (under a new APP 1.7 to be inserted into the Australian Privacy Principles (APPs) in Schedule 1 of the Privacy Act).
Once the reforms are in effect, an APP entity will be considered under the Privacy Act to be using automated decision making if:
- the APP entity has arranged for a computer program to make, or do a thing that is substantially and directly related to making, a decision in relation to an individual;
- that decision could reasonably be expected to significantly affect the individual’s rights or interests; and
- personal information is used in the operation of the computer program to make the decision or do the thing that is substantially and directly related to making the decision (new APP 1.7).
This broadly drafted definition will create challenges for APP entities trying to comply with the new transparency requirements.
A new APP 1.8 will require that a privacy policy includes the following information if automated decision making is used:
- the kinds of personal information used in the operation of such computer programs;
- the kinds of such decisions made solely by the operation of such computer programs; and
- the kinds of such decisions for which a thing, that is substantially and directly related to making the decision, is done by the operation of such computer programs.
The obligation on entities to include information in their privacy policy about the kinds of personal information used in such computer programs, and the kinds of such decisions, is not expected to require disclosure of commercial-in-confidence information about automated decision making systems.
A new APP 1.9 clarifies that a decision may affect an individual’s rights or interests whether those rights or interests are adversely or beneficially affected, and provides the following examples of the kinds of decisions that may affect an individual’s rights or interests:
- a decision made under a provision of an Act or a legislative instrument to grant, or to refuse to grant, a benefit to the individual;
- a decision that affects the individual’s rights under a contract, agreement or arrangement; and
- a decision that affects the individual’s access to a significant service or support.
Once these amendments come into effect, the new powers of Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC), to issue infringement notices and compliance notices (which commenced on 11 December 2024) will apply to an APP entity’s failure to have a privacy policy that meets the automated decision making transparency requirements in APP 1.7. An APP entity may also be subject to civil penalties for such a failure.
Practical implications of automated decision making reforms
APP entities will need to consider the technology they use to help them make decisions in their operations and map data flows to assess whether they use personal information in automated decision making.
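By way of illustration only, this mapping exercise can be supported by a simple internal register of candidate systems. The sketch below (in Python, with hypothetical field and function names that loosely track the elements of the new APP 1.7 definition) shows one way an entity might record, for each computer program, the personal information it uses and how it relates to decision making; it is a rough first-pass filter, not a substitute for legal analysis:

```python
from dataclasses import dataclass

@dataclass
class AdmRegisterEntry:
    """One candidate system in an automated decision making inventory.

    Field names are hypothetical; they loosely track the elements of the
    new APP 1.7 definition and are not a legal conclusion.
    """
    program: str                          # the computer program or tool
    decision: str                         # the decision it relates to
    personal_information: list[str]       # kinds of personal information used
    makes_decision_solely: bool           # does the program make the decision itself?
    substantially_directly_related: bool  # or do a thing substantially and directly
                                          # related to making the decision?
    significant_effect: bool              # could the decision reasonably be expected to
                                          # significantly affect rights or interests?

def may_trigger_app_1_7(entry: AdmRegisterEntry) -> bool:
    """First-pass flag: all three elements of the definition appear present."""
    uses_personal_information = bool(entry.personal_information)
    involved_in_decision = (entry.makes_decision_solely
                            or entry.substantially_directly_related)
    return uses_personal_information and involved_in_decision and entry.significant_effect

# Hypothetical example: a credit-scoring tool used in loan approvals.
entry = AdmRegisterEntry(
    program="credit scoring model",
    decision="approve or refuse a personal loan",
    personal_information=["income", "repayment history"],
    makes_decision_solely=False,
    substantially_directly_related=True,
    significant_effect=True,
)
print(may_trigger_app_1_7(entry))  # True -> candidate for APP 1.7/1.8 disclosure
```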
Careful decisions will also be required as to how to draft privacy policies so as to satisfy the requirements of APPs 1.7 and 1.8.
Some entities may also choose to conduct Privacy Impact Assessments (PIAs) in relation to any automated decision making. Australian Government agencies are obliged to carry out PIAs in relation to high-risk projects;[1] for organisations, PIAs are not legally required but are encouraged by the OAIC. A requirement for PIAs may be included in the next tranche of privacy reform. When undertaking the PIA process, it is prudent to consider carefully how to preserve legal professional privilege over any legal advice.
Automated decision making – risks and opportunities
Automated decision making is a term generally used to describe the use of technology to automate a decision making process.[2] It can range from using a simple rules-based formula to confirm if someone meets criteria, to the use of predictive algorithms where a computer learns to make a decision (as opposed to being programmed to execute a decision making process in a specified way).[3] A decision making process can be partially or wholly automated.[4]
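To make that distinction concrete, the sketch below (illustrative only, with made-up eligibility criteria) contrasts a simple rules-based check, where a human writes the decision logic explicitly, with a predictive model that learns its logic from historical examples; scikit-learn is used purely as a typical example of such a library:

```python
# Illustrative only: the decision logic is written out explicitly by a human.
def eligible_rules_based(age: int, annual_income: float) -> bool:
    return age >= 18 and annual_income >= 30_000

# By contrast, a predictive model learns its decision logic from historical
# examples rather than following hand-written rules.
from sklearn.tree import DecisionTreeClassifier

history = [[25, 45_000], [17, 10_000], [40, 28_000]]  # past applicants (age, income)
outcomes = [1, 0, 0]                                  # past decisions (1 = approved)

model = DecisionTreeClassifier().fit(history, outcomes)

print(eligible_rules_based(25, 45_000))  # True: explicit rule
print(model.predict([[25, 45_000]]))     # learned prediction, e.g. [1]
```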
On one hand, automated decision making presents opportunities for increasing the efficiency, accuracy and consistency of decisions. The Positioning Australia as a leader in digital economy regulation – Automated Decision Making and AI Regulation Issues Paper described such benefits as including the reduction of waiting times for applications to be processed and approvals to be granted;[5] lower costs to business and government of producing and delivering goods and services to consumers;[6] and allowing human intervention to be focused on higher-risk and more complex areas.[7]
On the other hand, using and relying on automated decision making has its risks, including discrimination arising from the use of historical data, large amounts of data being exposed to data breach risks, a lack of transparency and accountability in decisions made, and the possibility of automated decision making systems not being fit for purpose.
A high-profile example of the risks associated with automated decision making is the previous debt recovery and assessment scheme known as the ‘Robodebt Scheme’. The Robodebt Scheme used automated decision making technology to identify possible instances of social service overpayments based on data-matching and assessment of discrepancies between income information from the Australian Taxation Office and income information disclosed by individuals to the Department of Human Services (DHS), with limited (or sometimes no) human intervention.[8]
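As a much-simplified sketch (not the scheme’s actual code), the central automated check effectively averaged annual ATO income across 26 fortnights and flagged fortnights where the individual’s reported income fell below that average, an assumption of even earning that often did not hold for casual or intermittent workers:

```python
FORTNIGHTS_PER_YEAR = 26

def flag_discrepancies(annual_ato_income: float,
                       reported_fortnightly: list[float]) -> list[int]:
    """Flag fortnights where reported income falls below the annual average.

    This mirrors the scheme's flawed assumption that income was earned
    evenly across the year.
    """
    average = annual_ato_income / FORTNIGHTS_PER_YEAR
    return [i for i, reported in enumerate(reported_fortnightly)
            if reported < average]

# A worker who truthfully reported $2,000 per fortnight for half the year and
# nothing for the other half is flagged for every zero-income fortnight, even
# though the annual totals match.
reported = [2_000.0] * 13 + [0.0] * 13
print(flag_discrepancies(26_000.0, reported))  # [13, 14, ..., 25]
```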
The Royal Commission into the Robodebt Scheme recommended:
- consideration of legislative reform to introduce a consistent legal framework for the automation of government services, to ensure that where automated decision making is implemented:
  - there is a clear path for those affected by decisions to seek review;
  - departmental websites contain information advising that automated decision making is used and explaining in plain language how the process works; and
  - business rules and algorithms are made available, to enable independent expert scrutiny; and
- the establishment of a body to monitor and audit automated decision making processes with regard to their technical aspects and their impact in respect of fairness, the avoidance of bias, and client usability.[9]
In particular, the Commission considered that transparency regarding the use of automation in decision making, and the ability of affected persons to seek review of such decisions, are vital safeguards in the use of automated decision making.[10]
The reforms outlined above are similarly intended to increase transparency about the use of personal information in automated decision making.
Broad definition of automated decision making
The broadly drafted definition of automated decision making has raised concerns amongst some organisations. The Explanatory Memorandum indicates that “substantially” means the thing done by a computer program is a key factor in facilitating human decision making, and “directly” means the thing done by a computer program has a direct connection with making the decision. However, APP entities will need to take a view as to when the thing done by a computer program is enough to be a “key factor” in, or “directly connected to”, human decision making. Issues also arise as to how the definition applies where computer programs are used at numerous stages of a decision making tree. Some submissions have expressed concern that the definition of automated decision making could capture any software playing a part in decision making, and that the Australian definition captures decisions that would not be captured in other jurisdictions, such as under the EU GDPR, which targets decisions that are “based solely on automated processing”.
APP entities will also need to form a view as to when a decision “significantly affects the individual’s rights or interests”, in circumstances where this phrase has not been clarified by the courts or elsewhere. The kinds of decisions mentioned in APP 1.9 could capture many decisions, including a decision whether to provide services to a customer or whether to grant a job applicant an interview. Each APP entity will need to identify which of its decisions may be caught, and identify any relevant computer programs used in connection with those decisions.
Finally, some submissions have criticised the automated decision making reforms for not being risk-focused. Because some automated decision making uses AI, there are concerns that the broad approach to be taken under the Privacy Act does not align with the proposed mandatory guardrails for AI, which focus on high-risk settings.
Risk of over-disclosure
A desire to comply with the new requirements, coupled with the broad definition of automated decision making, could lead some APP entities to include dense volumes of information in their privacy policies. APP entities may take a cautious approach because non-compliant privacy policies expose them to infringement notices, compliance notices and civil penalties.
APP entities should nonetheless endeavour to take a clear, succinct approach where possible, so that consumers can easily review and understand their disclosures.
Conclusion
APP entities have 24 months to prepare before the new transparency requirements regarding automated decision making come into force. As a priority, APP entities should assess the computer programs used within their organisation to make decisions, or to do things related to decision making, against the broad definition of automated decision making to be included in the Privacy Act. If an APP entity considers that the new disclosure obligations will apply to it, it should update its privacy policy to include the above disclosures by 10 December 2026.
[1] Privacy (Australian Government Agencies – Governance) APP Code 2017.
[2] Privacy Act Review Report, p 188.
[3] Positioning Australia as a leader in digital economy regulation – Automated Decision Making and AI Regulation Issues Paper 2022, p 3.
[4] Positioning Australia as a leader in digital economy regulation – Automated Decision Making and AI Regulation Issues Paper 2022, p 3.
[5] Positioning Australia as a leader in digital economy regulation – Automated Decision Making and AI Regulation Issues Paper 2022, p 3.
[6] Positioning Australia as a leader in digital economy regulation – Automated Decision Making and AI Regulation Issues Paper 2022, p 3.
[7] Positioning Australia as a leader in digital economy regulation – Automated Decision Making and AI Regulation Issues Paper 2022, p 3.
[8] Report of the Royal Commission into the Robodebt Scheme 2023, p xxiv.
[9] Report of the Royal Commission into the Robodebt Scheme 2023, p 488.
[10] Report of the Royal Commission into the Robodebt Scheme 2023, p 486.