
After the flurry of legislative reform in late 2024, the dust is settling in the privacy, cyber, AI and data space in Australia. While many changes took immediate effect, a number of key changes were deferred for six months and take effect imminently. Notably:
- ransomware payment reporting obligations under the Cyber Security Act 2024 (Cth) commence on 30 May 2025. The Cyber Security (Ransomware Payment Reporting) Rules 2025 (Cth) confirm these obligations apply to entities whose turnover for the previous financial year exceeded A$3 million (or a pro rata amount), and specify the information that must be notified to the Department of Home Affairs and the Australian Signals Directorate; and
- the statutory tort of privacy in new Schedule 2 to the Privacy Act 1988 (Cth) takes effect on 10 June 2025, giving individuals a direct right of action against persons who seriously and intentionally (or recklessly) intrude on their seclusion (i.e. physically intruding into the individual’s private space, or watching, listening to or recording the individual’s private activities or private affairs), or misuse their information.
Work is progressing on further reforms, including the social media minimum age laws (which take effect later this year) and, for late 2026, the Children’s Online Privacy Code and automated decision-making transparency requirements. To learn more, see our ‘Internet, privacy and data – a year in review’ article.
The recent Australian federal election has meant there has not been the same volume of legislative change in the last quarter.
However, there have still been plenty of notable privacy, cyber and data learnings from enforcement action and reports.
ASIC sues for alleged systemic and prolonged cyber security failures
The Australian Securities and Investments Commission (ASIC) commenced proceedings against FIIG Securities (FIIG) in the Federal Court on 12 March 2025 for allegedly failing to have adequate cyber security measures for over four years.
FIIG had been the victim of a significant cyber attack on 19 May 2023, when a threat actor accessed its IT network and removed approximately 385GB of confidential data. FIIG notified approximately 18,000 of its clients that their personal information may have been compromised.
In its Concise Statement, ASIC alleges that between 13 March 2019 and 8 June 2023, FIIG failed to take adequate steps to protect itself and its clients against cyber security risks, in breach of its obligations as an Australian Financial Services (AFS) licensee.
ASIC set out an extensive list of cyber security measures that it considered FIIG’s AFS licensee obligations required it to have in place, but which were not implemented. The list included:
- a cyber incident response plan approved by the organisation and communicated and accessible to employees;
- appropriate management of privileged access to accounts on FIIG’s networks, computer systems and applications;
- multi-factor authentication for all remote access users;
- appropriately configured and monitored firewalls;
- mandatory training for employees on the organisation’s key cyber security risks and the employees’ responsibilities; and
- a process to review and evaluate the effectiveness of existing technical cyber security controls at least quarterly.
Among the remedies sought by ASIC is a civil penalty. The maximum civil penalty (at the time) per contravention of the applicable provisions is the greatest of A$13.75 million, three times the benefit obtained and detriment avoided, and 10 per cent of annual turnover (capped at A$687.5 million).
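As a purely illustrative sketch, the penalty structure described above (the greatest of a fixed amount, a benefit multiple and a capped turnover percentage) can be expressed as a simple calculation. The figures are those quoted in this article; how a court would quantify "benefit obtained and detriment avoided" in practice is a separate legal question.

```python
def max_civil_penalty(benefit_obtained: float, annual_turnover: float) -> float:
    """Maximum penalty per contravention: the greatest of a fixed
    A$13.75 million, three times the benefit obtained (and detriment
    avoided), and 10% of annual turnover capped at A$687.5 million."""
    fixed_amount = 13_750_000.0
    benefit_multiple = 3 * benefit_obtained
    turnover_component = min(0.10 * annual_turnover, 687_500_000.0)
    return max(fixed_amount, benefit_multiple, turnover_component)

# For a company with A$2 billion turnover and no quantifiable benefit,
# the 10% turnover limb (A$200 million) exceeds the fixed amount.
```

The turnover cap only bites for very large entities: 10 per cent of turnover reaches the A$687.5 million ceiling at A$6.875 billion in annual turnover.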
ASIC has identified AFS licensees’ cyber security obligations as an enforcement priority. ASIC Chair Joe Longo says this matter “should serve as a wake-up call to all companies on the dangers of neglecting your cyber security systems”.
Security of Critical Infrastructure updates
Recent updates relating to the Security of Critical Infrastructure Act 2018 (Cth) (SOCI Act) include the following.
- On 4 April 2025, a suite of legislative instruments took effect to extend the SOCI Act to the telecommunications sector and to clarify regulated entities’ Critical Infrastructure Risk Management Program (CIRMP) obligations.
- As of late March 2025, over 220 critical infrastructure assets have been declared as ‘systems of national significance’ under the SOCI Act.
- In early March 2025, the Cyber and Infrastructure Security Centre (CISC) issued a report on regulated entities’ compliance with the 12-hour and 72-hour cyber incident notification obligations under the SOCI Act. It reported that, in 2023-24, a large proportion of notifications were made outside the required timeframe. The food and water sectors had the highest proportion of late notifications, at 50 per cent.
The CISC’s report noted numerous reports of data exposure, data theft and data leaks via insider threat. It urges regulated entities to mitigate insider threat risks through background and security checks on personnel, management of systems access, and monitoring to detect unauthorised activity.
Privilege over data breach documents challenged in Medibank class action
In March 2025, in a class action against Medibank in relation to its 2022 data breach, the Federal Court handed down its decision on whether certain documents were subject to legal professional privilege.
In brief, the Court found that reports and communications with CyberCX, Coveware, CrowdStrike and Threat Intelligence were privileged as they were created for the dominant purpose of Medibank’s lawyers providing legal advice and assisting with proceedings. As such, they were not required to be disclosed.
However, three reports prepared by Deloitte were found not to be privileged because they were prepared for multiple purposes, including for the purpose of alleviating market concerns and avoiding the need for an independent APRA review. The fact that those reports were referred to in public communications by Medibank indicated that they were not for the dominant purpose of legal advice.
The Medibank and Optus privilege cases demonstrate the complexity of privilege in cyber incident response. It is important to ensure that any commissioned reports are for the dominant purpose of obtaining legal advice or for actual or reasonably contemplated litigation. Where an adviser is engaged for multiple purposes, the work and work product should ideally be separated into privileged and non-privileged workstreams. Further, care needs to be taken when making public statements about the purpose for which an adviser has been engaged, as this may result in a waiver of privilege.
OAIC statistics show record number of serious data breach notifications
The Office of the Australian Information Commissioner (OAIC) has released its latest notifiable data breaches report for the second half of 2024, which highlights a record number of data breach notifications.
From July to December 2024, the OAIC was notified of 595 serious data breaches, bringing the total number of notifications in 2024 to 1,113. This represents a 25 per cent increase from the 893 notifications reported for 2023.
Malicious and criminal attacks remained the main source of data breaches, accounting for 69 per cent of breaches during the report period. This was followed by human error (29 per cent) and system fault (2 per cent).
Cyber security incidents accounted for 42 per cent of all data breaches reported, and 61 per cent of malicious or criminal attacks. Common cyber incidents involved:
- phishing (34 per cent of cyber incidents);
- ransomware (24 per cent of cyber incidents); and
- compromised or stolen credentials (21 per cent of cyber incidents).
Hacking, brute force attack and malware also contributed to a smaller proportion of cyber incidents.
The Australian Privacy Commissioner Carly Kind has said these trends suggest the threat of data breaches is unlikely to diminish and “[b]usinesses and government agencies need to step up privacy and security measures to keep pace”.
Separately, Check Point Threat Intelligence data suggests that the rate of cyber threats faced by Australia, as part of the APAC region, is 60 per cent higher than the global average.
Check Point’s APAC President, Ruma Balasubramanian, considers that the AI-rich environment in the region and the use of generative AI allow threats such as phishing and ransomware to be “much more severe and operate at scale”.
New model AI clauses for Commonwealth Government released
On 20 May 2025, the Digital Transformation Agency released model clauses for use by Commonwealth Government agencies when procuring services from sellers that utilise artificial intelligence (AI) or when procuring an AI tool or system.
The clause bank is detailed and addresses a range of matters, including compliance with laws, privacy, oversight, explainability and transparency, training, testing and monitoring, updates, security, record keeping, IP, data and risk management.
Noting that government contract templates are generally buyer-favoured, the clause bank may be a useful reference point for non-government organisations procuring or supplying AI.
Workplace surveillance laws overhaul recommended in Victoria
On 13 May 2025, the Economy and Infrastructure Committee of the Victorian Legislative Assembly released its report following its inquiry into workplace surveillance. In light of the growing adoption of workplace surveillance technologies, it recommends the modernisation of Victoria’s workplace surveillance laws.
In Australia, surveillance (including in the workplace) is governed by disparate surveillance devices laws at the federal level and in each State and Territory. Only New South Wales, the Australian Capital Territory and Victoria have workplace-specific surveillance laws, and the States and Territories differ in the types of surveillance devices they regulate and in what way.
The report notes that federal workplace laws do not regulate surveillance devices, and the Privacy Act 1988 (Cth) provides limited protection because it includes an exemption in relation to the handling of employee records.
The report recommends that Victoria introduce principles-based workplace surveillance laws that require an employer to ensure that any surveillance is “reasonable, necessary and proportionate to achieve an employer’s legitimate objective”. The proposed regime is intended to be technology-neutral, capturing all forms of surveillance. The report also recommends transparency measures, consultation requirements, data retention requirements, special rules for handling biometric data, a notifiable data breaches scheme and the establishment of an independent regulator.
It will be interesting to see if any other States or Territories follow Victoria in implementing new or strengthening existing workplace surveillance laws.
New mandatory security standards for smart devices
As part of the suite of reforms under the Cyber Security Act 2024 (Cth), the government has now introduced minimum cyber security standards for smart devices.
The new Cyber Security (Security Standards for Smart Devices) Rules 2025 were made in March 2025 and take effect on 4 March 2026. The rules set out obligations applicable to network- or internet-connected (Internet of Things, or IoT) consumer devices, other than computers (and similar devices), smartphones, therapeutic goods and road vehicles.
Manufacturers of these smart devices for consumers must:
- meet minimum password requirements, including that device passwords must be unique to the device (and meet certain requirements) or be set by the user;
- publish information about how people can report device security issues to the manufacturer;
- publish information about the defined support period during which the manufacturer will provide security updates; and
- issue a statement with the supply of the device which includes a declaration that, in the opinion of the manufacturer, the device complies with the security standards.
The definition of a ‘manufacturer’ is broad and also covers entities such as importers.
While the security standards are specific to smart devices for consumers, they reflect a growing uplift in security expectations for all IT-related products and services.
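By way of illustration only, the password uniqueness requirement described above can be thought of as a simple audit rule: a factory default password must not be shared across devices (unless the password is set by the user). The device records and field names below are entirely hypothetical.

```python
from collections import Counter

# Hypothetical manufacturing records: (device_id, default_password, user_set).
# A default of None with user_set=True means the user chooses the password.
devices = [
    ("dev-001", "k3!fP9qZ", False),
    ("dev-002", "x7@Lm2Wd", False),
    ("dev-003", None, True),         # password set by the user at setup
    ("dev-004", "k3!fP9qZ", False),  # duplicate factory default
]

def non_compliant(devices):
    """Flag devices whose factory default password is shared with another
    device, reflecting the per-device uniqueness expectation."""
    defaults = Counter(pw for _, pw, user_set in devices if not user_set)
    return [dev_id for dev_id, pw, user_set in devices
            if not user_set and defaults[pw] > 1]
```

Here `non_compliant(devices)` would flag `dev-001` and `dev-004`, since they share the same factory default.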
Company responsible for verifying payment details in financial redirection case
Financial redirection fraud has become increasingly common in Australia. The fraud typically involves a threat actor first obtaining unauthorised access to a company’s email account. The threat actor identifies one or more invoices for payment and, masquerading as the email account holder, requests payment of the invoices to a bank account under the threat actor’s control.
This is what happened in Mobius Group Pty Ltd v Inoteq Pty Ltd [2024] WADC 114.
Mobius, an electrical engineering contractor, issued invoices to Inoteq for $235,400 in early 2022. A threat actor obtained access to Mobius’ email account and sent an email to Inoteq requesting to update Mobius’ bank account details to those contained in a fraudulent invoice. There was conflicting evidence about the steps Inoteq took to verify the change, but ultimately Inoteq sent an email requesting confirmation and the threat actor responded as Mobius. Inoteq then paid the $235,400 to the account set out in the invoice. When it discovered the fraud, Inoteq’s bank was only able to recover $43,541.13. Inoteq refused to pay the remaining $191,859.16 and so Mobius commenced proceedings in the Western Australian District Court.
The Court handed down its judgment on 20 December 2024, ordering Inoteq to pay Mobius $191,859.16. The decision provides some of the first guidance on where liability may lie for financial redirection fraud.
The Court considered four key issues:
1. Indemnity clauses
Inoteq sought to rely on an indemnity Mobius gave in the agreement between the parties against all loss or liability arising out of Mobius’ performance or non-performance of the services. The Court rejected this argument, finding that the loss arose from the threat actor’s actions, not from Mobius’ performance or non-performance of the services.
2. Duty of care
Inoteq claimed Mobius owed it a duty of care to take reasonable steps to avoid economic harm arising from unauthorised communications. Inoteq argued that Mobius had failed to put adequate security arrangements and access controls in place to prevent the threat actor from accessing its email account. The Court found that such a duty of care did not exist and noted that Inoteq was in a better position to prevent the effects of the fraud by contacting Mobius directly to personally verify the updated bank account details.
3. Notice of change of bank details
Inoteq submitted that fraudulent emails (being the initial email and a follow up) sent by the threat actor constituted notice of change of bank details under the agreement. The Court rejected this, since the emails were not actually sent by Mobius.
4. Apportionment of liability
Inoteq argued that liability should be apportioned between the parties under the Civil Liability Act 2002 (WA) because Mobius was a “concurrent wrongdoer” and caused or contributed to the loss. The Court rejected this on the basis that there was no duty of care (as noted above).
The takeaway for Australian businesses is to always obtain verbal verification of any change to a counterparty’s bank account details before making payment. This verification should always use a known, trusted telephone number rather than a number provided in an invoice (which may be falsified).
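That verification discipline amounts to a simple control: when requested payment details differ from the master record, confirm verbally using only the contact number already on file, never a number supplied with the change request. The sketch below is a minimal, hypothetical illustration of such a check; the record structure and values are invented.

```python
# Hypothetical master record of counterparty details, captured at onboarding
# and verified out-of-band at that time.
KNOWN_CONTACTS = {
    "Mobius Group": {"phone": "+61 8 0000 0000",
                     "bsb": "000-000", "account": "12345678"},
}

def change_requires_callback(counterparty, new_bsb, new_account):
    """If the requested details differ from the master record, return the
    phone number on file to call for verbal confirmation. Never use a
    number taken from the request or invoice itself."""
    record = KNOWN_CONTACTS.get(counterparty)
    if record is None:
        raise ValueError("unknown counterparty - verify before any payment")
    if (new_bsb, new_account) != (record["bsb"], record["account"]):
        return record["phone"]  # verbal confirmation needed on this number
    return None  # details unchanged, no callback required
```

The design point is that the trusted number lives in the master record, so a threat actor who controls the email channel (as in Mobius) cannot substitute their own callback number.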
eSafety report on children’s use of social media services
A recent transparency report published by the eSafety Commissioner found that, despite a significant proportion of children using social media platforms, existing minimum age rules for social media platforms are inadequate and poorly enforced.
The report was informed by the results of a national survey of social media use by Australian children aged 8 to 15, together with information provided to the eSafety Commissioner by Discord, Meta (Facebook and Instagram), Reddit, Snapchat, TikTok, Twitch and Google (YouTube) about children’s social media access and the safeguards in place to assess the age of users.
Key findings of the national survey include:
- 95 per cent of 13- to 15-year-olds surveyed used social media in 2024.
- 80 per cent of 8- to 12-year-olds surveyed used social media in 2024, despite social media platforms having 13+ age restrictions in place.
- Only 10 per cent of children aged between 8 and 12 had their social media account closed for being under the age of 13.
There is inconsistency across the industry in the steps taken to assess end-users’ ages, and the existing measures employed by social media services (such as language analysis, age estimation classifiers and facial estimation technology tools) failed to effectively verify users’ ages at the point of sign-up. In particular, the eSafety Commissioner noted that, at the point of sign-up, the social media services rely on truthful self-declaration, with no additional upfront verification or assurance tools.
The report is timely given social media platforms are expected to be required to develop and roll out systems to prevent Australians under 16 years old from having accounts on their platforms, under the new social media minimum age restrictions which take effect on 11 December 2025. Existing reliance on ‘truthful self-declaration’ is likely to be inadequate, and platforms will need to take further steps to comply when the laws take effect.
In parallel, the OAIC is developing a Children’s Online Privacy Code, to take effect in late 2026. The report into usage of social media platforms by children, and the extent to which that use is currently undetected by those platforms, is illuminating considering the enhanced obligations that are likely to apply to platforms in relation to children’s data when the Code takes effect.
EU IP Office releases report on generative AI and copyright
On 12 May 2025, the European Union Intellectual Property Office (EUIPO) released its findings in a comprehensive study into generative AI and copyright which looks at:
- how copyright-protected works are used to train generative AI models;
- the IP status of content generated by generative AI, and associated legal issues; and
- the broader impact on creators, AI developers, and the legal framework for copyright.
The report notes that litigation by copyright holders in relation to the use of their works to train AI models has been brought in the US, China, Canada, the UK, India and the EU. However, it also notes that there is an emerging market for copyright holders to license copyright in their works to generative AI developers for training.
The report focusses on the ‘text and data mining’ provisions of the Copyright in the Digital Single Market Directive, and the associated opt-out mechanism for copyright holders under that Directive and the EU AI Act. However, it also notes that to manage copyright issues into the future, it will be important to have accurate information about:
- the identity of a copyright holder;
- permissible uses for a work, including whether it can be used for generative AI training; and
- content that is developed by generative AI.
The EUIPO is launching a Copyright Knowledge Centre by the end of 2025, which will continue to consider how generative AI interacts with copyright.
The report raises similar themes to two of the 13 recommendations of the Senate Committee on Adopting Artificial Intelligence’s Final Report released in November 2024, which recommend that the Australian Government:
- “require the developers of AI products to be transparent about the use of copyrighted works in their training datasets, and that the use of such works is appropriately licenced and paid for”; and
- “urgently undertake further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems”.
Other privacy, cyber, AI and data updates
In addition to what is covered in detail above:
1. A$16 million worth of penalties for breaches of the Spam Act 2003 (Cth) have been issued over the last 18 months. In March 2025, Telstra was fined A$626,000 for sending:
a. more than 10 million text messages over 21 months where Telstra’s unsubscribe option failed to meet legal requirements, because it required customers to provide personal information in order to unsubscribe; and
b. over 40,000 text messages over 10 months to persons who had not consented or who had withdrawn their consent.
2. Following Oxfam’s January 2021 data breach affecting 1.7 million records, Oxfam provided an enforceable undertaking which requires Oxfam to implement a range of security measures and to not store certain personal information for more than seven years.
3. X is challenging whether the eSafety Commissioner’s safety standard for harmful online content applies to X and other social media platforms. The standard came into effect in December 2024 and will be enforced from June 2025. X is arguing that it and other social media platforms should instead continue to comply with the existing Social Media Services Online Safety Code, an industry code.
4. Further to the August 2024 expansion of Australia’s Consumer Data Right (CDR) regime (see our October 2024 Digital Bytes update), in March 2025 the government announced an additional expansion of the CDR regime to include non-bank lenders and Buy Now, Pay Later schemes and to reduce the mandatory data retention period to 2 years. Amendments to the CDR Rules take effect in mid-2026.
5. As part of the Productivity Commission’s review into pillars of Government productivity, including ‘data and digital technology’, the Productivity Commission is seeking feedback across the areas of privacy regulation, consumer data access and controls, digital financial reporting and AI. Consultation is open until 6 June 2025.
6. On 1 April 2025, the National Health (Privacy) Rules 2025 took effect. These rules regulate how Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Schedule (PBS) claim information can be handled.
7. The Scams Prevention Framework Act 2025 (Cth) was passed in February 2025, introducing a framework for obligations to be imposed on regulated sectors, such as banking and financial services, telecommunications providers and digital platforms, to prevent scams. You can read our insights about the framework and what it means for regulated entities in our earlier article.
8. Stolen password and credential stuffing attacks continue to make the news, most recently in relation to a spate of attacks on Australian superannuation member accounts. This underscores a growing expectation that organisations put in place increased security controls, such as multi-factor authentication on end-user accounts.
9. A review of data retention requirements under Australian laws is underway, delivering on commitments made in Australia’s Cyber Security Strategy 2023-30. A number of organisations have published submissions, including some which spruik the benefits of Digital ID as a mechanism to minimise data retention.
10. Treasury has opened a review of the data sharing framework for Australian government bodies under the Data Availability and Transparency Act 2022 (Cth) by releasing an Issues Paper. Interested persons can make submissions up to 30 May 2025.
11. The Australian Institute of Company Directors (AICD) has recently published Data Governance Foundations for Boards, which provides guidance to Australian company directors on data governance oversight, and risk and incident management. The resource includes a Snapshot and Checklist for SME and NFP Boards.
What’s next?
There is hope that now the election is over, there will be some progress towards anticipated legislative changes, such as tranche two of the Privacy Act reforms, and dedicated AI legislation as recommended by the Senate Committee on Adopting Artificial Intelligence’s Final Report.
However, in a number of public engagements, the Australian Privacy Commissioner has made clear that enforcement action is not on hold pending the second tranche of privacy reforms. She is looking for opportunities to take effective enforcement action to incentivise broader compliance and achieve general deterrence. We expect to see the OAIC look for appropriate cases to exercise its new infringement notice powers, and to seek cases to pursue in relation to its new tiers of civil penalties. For more detail, see our previous article.
Areas of focus for the OAIC include connected cars, rent technology and the real estate industry, identity verification and facial recognition, data scraping and using personal information to train AI models.
How can we assist?
We have a large team of privacy and cyber specialists, with substantial experience across the whole spectrum of data, privacy, AI and cyber compliance and incident management.
For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation to manage its risks in these rapidly evolving areas, please get in touch.