
This instalment of Digital Bytes fills you in on all the important developments in the privacy, cyber, AI and data space, from 2025's Privacy Awareness Week through to the latest significant updates from eSafety on social media age-gating and new compliance obligations governing access to ‘age-inappropriate’ online material.
In that time, key legal obligations that have taken effect are:
- on 10 June 2025 – the tort for serious invasions of privacy in Schedule 2 to the Privacy Act 1988 (Cth) (Privacy Act); and
- on 1 July 2025 – for entities regulated by the Australian Prudential Regulation Authority (APRA), Prudential Standard CPS 230 on operational risk management, which replaces the now-retired CPS 231 (Outsourcing) and CPS 232 (Business Continuity Management).
Australian Clinical Labs agrees to $5.8m penalty over 2022 data breach
On 29 September 2025, Australian Clinical Labs (ACL) announced that it had agreed to pay an A$5.8 million penalty in what will be the first civil penalty ordered under the Privacy Act.
The Office of the Australian Information Commissioner (OAIC) commenced proceedings against ACL in the Federal Court in November 2023. This followed an investigation into ACL’s privacy practices prompted by a 2022 cyber incident affecting its Medlab Pathology business. The incident compromised the personal information, including sensitive health information and credit card information, of approximately 223,000 individuals.
The penalty agreed with the OAIC, which remains subject to the Federal Court’s approval, comprises:
- A$4.2 million for failing to take reasonable steps to protect the personal information of Medlab customers;
- A$800,000 for failing to carry out, within 30 days, a reasonable and expeditious assessment of whether there were reasonable grounds to believe that an eligible data breach had occurred, in breach of section 26WH(2); and
- A$800,000 for failing to notify the OAIC as soon as practicable after it had reasonable grounds to believe there had been an eligible data breach, in breach of section 26WK(2).
ACL has also proposed that it pay A$400,000 towards the OAIC’s legal costs.
The Federal Court asked counsel appearing for the OAIC to address why the penalty was adequate given ACL’s size and profitability, expressing concern as to whether the proposed penalty was sufficient to achieve general deterrence.
Justice Halley also noted the lack of an explanation from ACL as to its delay in recognising the extent and scope of the data breach, which occurred in February 2022 but was not notified to the OAIC until July 2022.
OAIC commences civil penalty proceedings in relation to the 2022 Optus data breach
On 8 August 2025, the OAIC commenced civil penalty proceedings against Optus alleging that it failed to take reasonable steps to protect personal information, in breach of the Privacy Act.
These proceedings show us that:
- the OAIC continues to expand its enforcement focus;
- there can be a delay between an incident and the commencement of enforcement action – especially for data breaches where it can take some time to work through forensic and technical evidence; and
- regulators are willing to take separate enforcement action in relation to the same matter – noting that the Australian Communications and Media Authority (ACMA) commenced civil penalty proceedings for a breach of telecommunications legislation in relation to the same data breach in May 2024, as we reported on in an earlier edition of Digital Bytes.
As with the breaches underlying the ACL and Medibank proceedings, the Optus data breach pre-dates the substantial change to the maximum penalty regime that took effect in December 2022. However, despite the maximum penalty at the time being A$2.2 million per contravention, the OAIC is seeking a finding of one contravention for each of the 9.5 million affected individuals – an approach that, if accepted, would put the theoretical maximum exposure in the order of trillions of dollars.
The OAIC’s announcement in relation to the proceedings reminds organisations of its expectations that they:
- implement procedures to allocate responsibility for internet-facing domains;
- verify a person’s authority before giving them access to customers’ personal information;
- layer security controls to avoid a single point of failure (see the illustrative sketch after this list);
- implement robust security monitoring processes and procedures to ensure detection of vulnerabilities and timely response to incidents;
- allocate appropriate resources to privacy and cyber security, even when outsourced; and
- regularly review practices and systems, including actively assessing critical and sensitive infrastructure, and promptly act on areas for improvement.
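To make the expectation of layered controls concrete, here is a minimal sketch in Python of an internet-facing endpoint that stacks independent checks – authentication, authorisation and audit logging – so that no single failure exposes customers’ personal information. All names, keys and roles are hypothetical illustrations; nothing here is drawn from the Optus proceedings or the OAIC’s announcement.

```python
from dataclasses import dataclass

@dataclass
class Request:
    api_key: str | None   # credential presented by the calling system
    user_role: str        # role asserted for the human operator
    customer_id: str

# Hypothetical stores, for illustration only.
VALID_API_KEYS = {"key-issued-to-internal-app"}
PERMITTED_ROLES = {"support_agent"}
CUSTOMERS = {"C100": {"name": "Example Customer"}}

def audit_log(req: Request) -> None:
    # Layer 3: feed every access to security monitoring so anomalous
    # patterns (for example, bulk enumeration) can be detected quickly.
    print(f"access: role={req.user_role} customer={req.customer_id}")

def get_customer_record(req: Request) -> dict:
    # Layer 1: authenticate every request – no open, internet-facing endpoint.
    if req.api_key not in VALID_API_KEYS:
        raise PermissionError("unauthenticated request rejected")
    # Layer 2: verify the caller's authority before releasing personal information.
    if req.user_role not in PERMITTED_ROLES:
        raise PermissionError("caller not authorised for personal information")
    audit_log(req)
    return CUSTOMERS[req.customer_id]

print(get_customer_record(Request("key-issued-to-internal-app", "support_agent", "C100")))
```

Because each check is independent, a single failure (such as a leaked credential) does not by itself expose personal information, and the audit trail supports the monitoring and timely incident response the OAIC expects.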
OAIC report praises de-identification of health information used to train AI
In a closely watched development at the intersection of AI and health privacy, the OAIC has concluded its preliminary inquiries into I-MED Radiology Network Limited (I-MED), an imaging and radiology provider. This follows reports that I-MED had disclosed patient imaging scans to an AI joint venture partner for the purpose of training diagnostic AI models.
The OAIC’s report confirms that where personal information has been sufficiently de-identified, it is no longer ‘personal information’ for the purposes of the Privacy Act and obligations under the Australian Privacy Principles (including consent and notice) no longer apply.
‘Sufficiently’ de-identified is the key concept. This requires robust layers of de-identification and active assessment and mitigation of the risk of re-identification. Where that has been put in place, information may be considered de-identified even though:
- the risk of re-identification has not been reduced to absolute zero; and
- re-identification is technically possible, but it is so impracticable, excessively time-consuming or costly that there is almost no likelihood of it occurring.
The OAIC found that I-MED had sufficiently de-identified personal information and taken additional steps with its AI partner to reasonably prevent any residual risk of re-identification. The OAIC noted that I-MED had de-identified the images by:
- segregating the patient data from the underlying dataset;
- scanning the records with text recognition software;
- using two hashing techniques (one for unique identifiers such as patient ID numbers, and another for names, addresses and phone numbers);
- time-shifting dates (to a random date within a specified number of years);
- aggregating certain fields into large cohorts to avoid identification of outliers; and
- redacting any text that appeared within an image scan or within 10 per cent of its boundary.
The OAIC considered whether these steps aligned with practices endorsed by the National Institute of Standards and Technology.
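While the OAIC’s report does not disclose I-MED’s actual implementation, a minimal sketch can illustrate the kinds of transformations listed above – keyed hashing of identifiers, time-shifting dates within a window, and aggregating exact values into broad cohorts. The key, field names and parameters below are hypothetical, and a single hashing function stands in for the two distinct techniques the report describes.

```python
import hashlib
import hmac
import random
from datetime import date, timedelta

# Hypothetical secret key; in practice it would be generated and stored
# securely, segregated from the de-identified dataset.
SECRET_KEY = b"example-key-held-separately"

def pseudonymise(value: str) -> str:
    # Keyed (HMAC) hashing: the same input always maps to the same token,
    # preserving linkage within the dataset, but the mapping cannot be
    # reversed without the key.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def shift_date(original: date, max_years: int = 2) -> date:
    # Time-shift to a random date within a specified number of years.
    return original + timedelta(days=random.randint(-max_years * 365, max_years * 365))

def age_cohort(age: int, width: int = 10) -> str:
    # Aggregate an exact value into a broad cohort to avoid identifying outliers.
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

record = {"patient_id": "P123456", "name": "Jane Citizen",
          "scan_date": date(2022, 2, 1), "age": 47}
deidentified = {
    "patient_ref": pseudonymise(record["patient_id"]),
    "name_ref": pseudonymise(record["name"]),
    "scan_date": shift_date(record["scan_date"]),
    "age_band": age_cohort(record["age"]),
}
print(deidentified)
```

The keyed hash preserves linkage across records (the same patient ID always maps to the same token) without the token being reversible by anyone who lacks the key – which is why segregating the key and the underlying data from the de-identified dataset matters.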
I-MED had also imposed contractual obligations on its AI partner:
- prohibiting the provider from doing any act, or engaging in any practice, that would result in the patient data becoming 'reasonably identifiable';
- prohibiting the provider from disclosing or publishing the patient data for any purpose (to prevent wider dissemination of the dataset and accordingly reduce the risk that the patient data may become re-identifiable in the hands of other third parties or the public domain);
- requiring the provider to store the patient data in a secure environment; and
- requiring the provider to notify I-MED if it inadvertently received any patient personal information.
The OAIC determined that I-MED had not breached the Privacy Act by using and disclosing that de-identified patient data to train AI without patient consent or notice.
At a time when the OAIC has developed guidance materials on the training and use of AI and has included AI in its areas of regulatory focus, this report provides some comfort that there are lawful ways to train AI in compliance with the Privacy Act.
DHA releases discussion paper to kick off consultation on Horizon 2 of the Australian Cyber Security Strategy
Australia’s Cyber Security Strategy 2023-2030 was released in late 2023 and set out three ‘Horizons’ to build cyber resilience and security across six domains or ‘cyber shields’.
Many of the actions for ‘Horizon 1’ have been delivered, including through the introduction of the Cyber Security Act 2024 (Cth). The Department of Home Affairs has now commenced consultation on ‘Horizon 2’ through the publication of a discussion paper. ‘Horizon 2’ moves beyond strengthening foundations to expanding Australia’s reach. Its key themes are:
- embedding cyber security messaging, standards, capabilities and efforts across all aspects of society, organisations and government;
- empowering businesses, not-for-profits and citizens to implement cyber protections; and
- enhancing cyber regulatory frameworks.
While the discussion paper asks a broad range of questions across different focus areas, notably it seeks feedback on:
- simplifying cyber regulation to promote best practice and efficiency;
- developing appropriate standards for high-risk devices, protecting vulnerable datasets and using emerging technology safely; and
- enhancements to the regulatory framework for critical infrastructure.
Consultation closed on 29 August 2025, and the government is now reviewing submissions.
OAIC’s regulatory action priorities show a focus on emerging technologies and encouraging change
The OAIC has now released its regulatory action priorities for 2025-26, reflecting a shift toward proactive, harm-based regulation, with an emphasis on restoring public trust and rebalancing power in the digital economy.
In addition to government-focused priorities, some of its areas of focus include:
- practices that erode information access and privacy rights in the application of AI;
- excessive collection and retention of personal information;
- facial recognition technology and forms of biometric scanning; and
- new surveillance technologies, such as location data tracking in apps.
Sectors in the spotlight include rental and property, credit reporting, data brokerage, and ad tech and pixel tracking.
OAIC finds Kmart’s use of facial recognition to be unlawful
On the topic of facial recognition technology (FRT), the OAIC recently published its determination on Kmart Australia’s use of this technology in its retail stores for the purpose of detecting and preventing refund fraud. The Commissioner found that Kmart’s practices interfered with individuals’ privacy by failing to notify customers or seek their consent to use FRT to collect their biometric information.
Kmart claimed that it did not need customers’ consent on the basis that a ‘permitted general situation’ (and therefore an exception to the APP 3 consent requirement) applied – it reasonably believed the collection of biometric information was necessary to address unlawful activity or serious misconduct. However, this was rejected by the Commissioner because:
- The FRT system indiscriminately captured sensitive biometric information from every customer who entered a store, regardless of whether they were suspected of refund fraud.
- Alternative and less privacy-intrusive measures were available to address refund fraud. While those measures may have been less effective or could have affected customer experience, that did not make them impracticable, particularly given their reduced impact on individuals’ privacy.
- The utility and effectiveness of the FRT system in preventing refund fraud was limited.
- The collection of biometric data from thousands of individuals, who were not suspected of refund fraud, was a disproportionate interference with privacy.
This is the OAIC’s second determination with respect to the use of FRT in retail environments – the OAIC similarly found that Bunnings’ use of FRT interfered with individuals’ privacy in October 2024 (although this is currently subject to review by the Administrative Review Tribunal).
The Commissioner has made it clear that the use of FRT is not prohibited, and that customer and staff safety and fraud prevention and detection are legitimate reasons for considering the use of such emerging technologies. However, organisations must ensure that their use of these technologies complies with the Privacy Act. The Commissioner’s blog post provides additional guidance on FRT considerations, and the OAIC’s previous guidance on FRT (following the Bunnings determination) remains relevant.
Long-awaited guidance on social media minimum age restrictions and new industry codes for ‘age-inappropriate’ online material
The eSafety Commissioner (eSafety) has released its regulatory guidance on expectations for ‘age-restricted social media platforms’. From 10 December 2025, those platforms will be required to prevent Australians under the age of 16 from creating or using a social media account. The guidance is based on key principles and sets out eSafety’s expectations of the reasonable steps ‘age-restricted social media platforms’ will need to take to comply with this new obligation. It also includes case studies which will provide more practical guidance for industry.
eSafety’s key expectations include that regulated platforms will:
- detect and deactivate or remove age-restricted users’ accounts;
- prevent age-restricted users from creating new accounts; and
- mitigate the risk of age-restricted users re-registering or circumventing the measures in place.
Importantly, eSafety has made its position on self-declaration (of a user’s age) clear – this alone will not be sufficient to discharge a platform’s regulatory obligations.
eSafety is encouraging platforms to:
- implement a range of tools and measures to address different risks and circumstances (see the illustrative sketch after this list);
- communicate with age-restricted users in a kind, careful and clear way;
- provide accessible review mechanisms for users; and
- continuously monitor and improve systems and measures, and be transparent with the general public and eSafety about the age assurance measures in place.
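As an illustration of what combining ‘a range of tools and measures’ might look like, the sketch below merges several age signals – a self-declared age, a confidence score from an age-estimation tool, and an optional higher-assurance verification – so that self-declaration alone never clears a user. The signal names and threshold are hypothetical and are not drawn from eSafety’s guidance.

```python
from dataclasses import dataclass

@dataclass
class AgeSignals:
    self_declared_age: int | None   # user-entered; insufficient on its own
    estimation_score: float | None  # 0-1 confidence from an age-estimation tool that the user is 16+
    verified_16_plus: bool          # optional higher-assurance check (one of several methods offered)

def treat_as_under_16(signals: AgeSignals, threshold: float = 0.9) -> bool:
    # Self-declaration alone never clears a user: at least one independent,
    # higher-assurance signal must corroborate an age of 16 or over.
    if signals.verified_16_plus:
        return False
    if signals.estimation_score is not None and signals.estimation_score >= threshold:
        return False
    # No corroborating signal: treat as potentially under 16 and escalate
    # to further checks rather than relying on the self-declared age.
    return True

print(treat_as_under_16(AgeSignals(self_declared_age=18, estimation_score=None, verified_16_plus=False)))
```

A platform taking this approach would route users flagged by the function to further checks or the review mechanisms described above, rather than deactivating accounts on a single weak signal.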
The guidance also clarifies what eSafety will not require platforms to do:
- Platforms will not be required to verify the age of all users.
- Platforms must not require users to provide government identification details as the sole method of age verification – reasonable alternatives must be provided.
- Platforms will not be required to use specific technologies or measures.
- Platforms will not be required to retain personal information obtained as part of age verification, and record-keeping should focus on a platform’s systems and processes.
- Platforms should not automatically transfer underage users to other platforms or services without explicit user consent and opt-in.
The guidance encourages platforms to minimise the information and data required to be handled as part of these age verification measures, and reiterates that platforms must still ensure they comply with their information and privacy obligations under Part 4A of the Online Safety Act 2021 (Cth), as well as the Privacy Act and the Australian Privacy Principles.
eSafety has also recently published guidance to assist online service providers to assess whether their services will be subject to the upcoming social media minimum age restrictions.
To complement these new obligations, eSafety has finalised new mandatory industry codes for providers of internet carriage services, hosting services and search engine services, which come into effect from 27 December 2025. The codes impose additional minimum compliance measures to protect children from ‘age-inappropriate’ online material, including requirements for internet search engine providers to implement age assurance measures for account holders, and for internet service providers to give Australian end-users information about filtering and restricting access to certain online material.
Productivity Commission generates discussion with new proposed direction for privacy and AI law reform
In our article, Sophie Dawson provides an overview of the recommendations and key considerations from the Productivity Commission’s interim report, “Harnessing data and digital technology”.
Public responses to the interim report have been mixed. Australia’s Privacy Commissioner, Carly Kind, has responded strongly to the Productivity Commission’s suggested alternative compliance model which would permit businesses to focus on outcomes rather than controls-based rules. Commissioner Kind suggested the Productivity Commission was “proposing a path whereby organisations would no longer have to meet the prescribed requirements of the Privacy Act. Such a system would be, at its best, unworkable.”
Wide-ranging ACCC recommendations in the Digital Platforms Services Inquiry Final Report
The Australian Competition and Consumer Commission (ACCC) has released its 10th and final report in its five-year inquiry into digital platform services. In our article, Jennifer Dean takes you through the report’s key findings and recommendations – including for retail online marketplaces, cloud computing and generative AI, online gaming and more broadly.
Government reviews AI in health care and under the therapeutic goods regime
In July 2025, the Department of Health, Disability and Ageing (Department) released its final report into Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review.
Key findings from the Department’s report include:
- regulatory gaps may be filled by economy-wide AI guardrails;
- high-quality contemporary guidance should be developed to support the implementation of AI in health care, including addressing datasets, training and validation, accuracy of outputs, product selection, and AI trials;
- a centralised, high-quality and trusted information source should be developed to support consumers and clinicians to make informed decisions about AI; and
- regulation should be updated to clearly address patient data ownership, consent requirements, and responsibility and accountability for data use by AI.
In the same month, the Therapeutic Goods Administration (TGA) released its report into Clarifying and Strengthening the Regulation of Medical Device Software including Artificial Intelligence (AI), setting out its review of the legislative framework and recommendations for improvement to mitigate risks and leverage opportunities.
The TGA’s report finds that the use of AI in medical devices, in managing clinical trials, and in designing and manufacturing therapeutic goods is accommodated within the existing therapeutic goods regulatory framework. However, it makes 14 findings that identify potential improvements to the regime to accommodate AI, such as changes to definitions and clearer allocation of responsibility, and to improve compliance. The TGA is now taking steps to implement changes to address those findings.
IBM's latest 'Cost of a Data Breach' report shows the first fall in average global costs
This year’s instalment of IBM’s Cost of a Data Breach report focuses on AI – both how it can assist in, and reduce the cost of, incident response, and how breaches of AI systems are on the rise because those systems are not being implemented with strong security and governance.
The Cost of a Data Breach report finds that the average global cost of a data breach is US$4.44 million, down around 9 per cent from US$4.88 million in 2024, despite the cost of a data breach in the US increasing over the same period. It attributes the fall to faster identification and containment of breaches. In Australia (based on a survey of 30 organisations), the average total cost of a data breach fell from US$2.78 million in 2024 to US$2.55 million in 2025.
The report includes a breakdown of the costs of data breaches in different sectors, and the relative average reduction or increase in the cost of data breaches based on a variety of factors, such as employee training, generative AI security tools, and Board-level oversight.
Some organisations may see the report as providing a useful reference point for considering data breach liability in contracts.
AI developments that should be on Australian businesses’ radar
The Full Federal Court’s decision in Aristocrat Technologies Australia Pty Ltd v Commissioner of Patents [2025] FCAFC 131 has confirmed that computer-implemented inventions can be patentable even without an ‘advance in computer technology’.
Further afield, there has been a spate of litigation in the UK and the US where content creators have sued AI companies for infringing copyright in their works in the training of AI models and/or in outputs, including:
- litigation against Anthropic by a group of authors, which was recently settled;
- litigation against Stability AI by Getty Images alleging secondary copyright infringement, trade mark infringement, and passing off; and
- Disney and NBCUniversal’s action against Midjourney.
Further, on 26 August 2025, the UN General Assembly resolved to:
- establish an Independent Scientific Panel on AI, to assess its risks, opportunities and impact; and
- establish a global dialogue to strengthen global AI governance.
Other key privacy, cyber, data and AI updates
- The Digital Platform Regulators Forum (DP-REG) has published a new working paper on immersive technologies, such as virtual reality (VR), mixed reality (MR), augmented reality (AR) and haptics. The working paper makes clear that the OAIC, the ACCC, the ACMA and eSafety will continue to apply existing regulatory frameworks to the use of these technologies;
- The OAIC has released a Privacy Foundations self-assessment tool to assist organisations to manage their privacy compliance maturity;
- After a spate of credential stuffing attacks, APRA has written to superannuation entities directing them to assess their compliance with Prudential Standard CPS 234 (Information Security), and setting the clear expectation that multi-factor authentication or equivalent controls should be in place for all high-risk activities (such as changing member details and making withdrawals);
- The ACMA continues to pursue enforcement action for breaches of the Spam Act 2003 (Cth), issuing infringement notices to TAB and Betfair for non-compliances associated with communications as part of their respective VIP programs;
- The OAIC has issued its first determination in relation to a breach of the consumer data right (CDR) regime, finding that Regional Australia Bank was responsible for a breach through the conduct of its outsourced service provider, where up to 197 consumers’ data became co-mingled and was at risk of being incorrectly disclosed;
- In addition, NAB paid four infringement notices issued by the ACCC for breaches of CDR Rules requiring the disclosure of accurate credit limit data in response to a valid request;
- Following proceedings against FIIG Securities earlier this year, the Australian Securities and Investments Commission (ASIC) continues to pursue actions against Australian financial services licensees for failing to properly manage and mitigate cyber security risks, with proceedings brought against Fortnum Private Wealth in July 2025;
- Standards Australia has adopted a national standard for securing operational technology in critical infrastructure (AS IEC 62443) and has announced that it will include a new Part in the series to address industrial Internet of Things (IoT) devices;
- Australia’s national benchmark for AI adoption, the Australian Responsible AI Index, was released on 26 August 2025 and surprisingly shows no change in the average maturity of organisations adopting AI from the 2024 report, although there is a slight rise in the highest category of ‘leading’ organisations;
- According to the latest Operational Technology and IoT Security Report from Nozomi Networks Labs, Australia continues to be the fourth most targeted country for cyber threats to critical infrastructure;
- The Australian Cyber Security Centre (ACSC), along with its US, UK and NZ counterparts, has jointly issued best practice guidance for securing data in AI systems throughout the data lifecycle;
- The OAIC’s determination on ‘data scraping practices’ against Court Data Australia has been upheld by the Administrative Review Tribunal, affirming that the Privacy Act applies equally to publicly available and private data; and
- Australia’s new statutory tort for serious invasions of privacy will be considered for the first time by the Federal Court in the Groth v Herald Sun dispute, which will test the application of the statutory tort in the journalism and media context.
Looking ahead
The Children’s Online Privacy Code under the Privacy Act is due to be implemented by 10 December 2026. Submissions to the OAIC on the Code have recently closed, with further public consultation on the draft Code to occur in 2026.
Meanwhile, there has been no indication of the timeline for progress on the Tranche 2 privacy reforms or the proposed mandatory guardrails for the use of AI in high-risk settings; however, there is a raft of other changes to stay abreast of in this fast-evolving area of law.
How can we assist?
We have a large team of privacy, data protection and cyber specialists, with substantial experience across the whole spectrum of data, privacy and cyber compliance and incident management.
For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation to manage its risks in these rapidly evolving areas, please get in touch.