In 2024, laws were passed which effect significant changes to the regulation of the internet, privacy and data security in Australia. At the same time, Australian regulators such as the Privacy Commissioner made it clear that they are focused on ensuring those laws are more robustly enforced. Consequently, it is more important than ever for entities to map their relevant digital practices and ensure that appropriate compliance measures are in place. This article examines some of the key changes, with a specific focus on:
- online safety concerns: including those relating to children and young people, scams, misinformation and disinformation and other online consumer harms;
- cyber security and data protection: including the recent reforms and cyber security incidents;
- artificial intelligence (AI): including consultation on proposed mandatory measures and further review being undertaken by the Australian Government; and
- other privacy developments: including the first tranche of privacy reforms and enforcement under the Spam Act 2003 (Cth).
If you would like further detail about any of these developments or if you would like to discuss compliance measures, please contact our expert team.
Online safety
Online safety was a high priority on the Australian Government’s agenda in 2024 and we anticipate it will remain a key focus as we move into 2025. With the rapid advancement and use of technologies such as AI and automated systems, the challenges related to online harms such as scams and harmful content will likely continue to grow and receive continued focus from the Government and regulators.
This includes the work of the eSafety Commissioner (eSafety), in collaboration with other Australian regulators and international partners, including through the Digital Platform Regulators Forum (DP-REG), to improve online safety and target issues such as cyberbullying, image-based abuse and harmful online content. Formed in 2022, DP-REG comprises the Australian Competition and Consumer Commission (ACCC), the Australian Communications and Media Authority (ACMA), eSafety and the Office of the Australian Information Commissioner (OAIC), which have been working together to understand, assess and respond to the benefits, risks and harms of technology, including AI models.
2024 saw significant developments in Australia in this important area, including the commencement of Online Safety codes and standards, and the introduction of scams prevention framework legislation to Parliament.
Online Safety Act 2021 (Cth)
OSA Statutory Review
In late 2023, the Government announced it was bringing forward the prescribed statutory review of the Online Safety Act 2021 (Cth) (OSA) by 12 months. The OSA regulates a wide range of content online, including the most harmful material and content which can be harmful to children.
The scope of the OSA review is broad, including:
- evaluating the current regime and identifying the best regulatory measures to support eSafety in reducing Australians’ exposure to online harms;
- considering whether additional functions and powers should be granted to eSafety and whether the circumstances in which penalties apply should be expanded;
- assessing the adequacy of current measures in addressing issues such as cyber-bullying, cyber-abuse material targeting both children and adults, non-consensual sharing of intimate images and the operation of the Basic Online Safety Expectations (BOSE) regime; and
- exploring the need for further arrangements to tackle online harms, such as online hate and harm arising from technologies including AI and end-to-end encryption.
In April 2024, the Government released an issues paper for consultation. The Government welcomed the final report of the OSA Review on 1 November 2024; however, a copy of the report has not yet been published.
Digital Duty of Care
Following the recommendations from the OSA Review, on 14 November 2024, the Government announced that it would be legislating a new Digital Duty of Care. While limited detail has been provided at this stage, the Government has indicated that it would:
- be placing “the onus on digital platforms to proactively keep Australians safe and better prevent online harms”;
- align with the approaches taken in the United Kingdom and European Union, requiring digital platforms “to take reasonable steps to prevent foreseeable harms on their platforms and services, with the framework to be underpinned by risk assessment and risk mitigation, and informed by safety-by-design principles”; and
- enable the regulator to “draw on strong penalty arrangements” where platforms seriously breach their duty of care and where there are systemic failures (see relevant speech).
As we move into 2025, we anticipate that the Australian Government will consider whether to implement further measures from the OSA Review. A federal election is expected by 17 May 2025. In view of the limited number of sitting days that remain between now and the beginning of the caretaker period, the Government’s consideration of these measures may follow the election.
Online safety industry codes and standards
The OSA provides for industry bodies or associations to develop codes to regulate class 1 and class 2 material. Class 1 and class 2 material is defined by reference to the National Classification Scheme. eSafety can register the codes if it considers they meet statutory requirements. If it determines they do not, then eSafety can instead impose mandatory standards for regulating this material.
eSafety has been taking a phased approach to the introduction of these industry codes and standards. It started with the most harmful types of content, in class 1A and class 1B, and is now in the course of ensuring codes or standards are in place for the next most serious categories of material (class 1C and class 2). Class 1A and class 1B material cover very harmful content including certain child sexual exploitation material, pro-terror material, extreme crime and violence material, and drug-related material.
Phase 1 Codes and Standards regulating class 1A and class 1B content have now commenced. There are currently six industry codes in operation, which capture a number of service providers including app distribution, internet carriage and social media services. Phase 1 Codes for five industry sections regarding class 1A and class 1B material were registered on 16 June 2023 and came into effect on 16 December 2023. An additional Phase 1 Code for internet search engine services was registered on 12 September 2023 and commenced on 12 March 2024. The Phase 1 Standards were registered on 21 June 2024 and came into effect on 22 December 2024. The Phase 1 Standards apply to designated internet services and relevant electronic services.
The Phase 1 Standards are the:
- Online Safety (Designated Internet Services – Class 1A and 1B Material) Industry Standard 2024; and
- Online Safety (Relevant Electronic Services – Class 1A and 1B Material) Industry Standard 2024.
The development of the Phase 2 codes is ongoing, and eSafety issued notices to industry representatives for the development of these codes by 19 December 2024. The Phase 2 codes relate to class 1C and class 2 material, which includes online pornography and other high-impact material as defined under the National Classification Scheme.
In broad terms, the objectives of the codes and standards include to:
- require industry participants to take proactive steps to prevent or limit (as appropriate) access to and hosting of harmful material;
- facilitate co-operation within industries and with eSafety in relation to the removal, disruption and restriction of harmful material;
- empower end-users to manage their own access and/or exposure to harmful material; and
- strengthen transparency in relation to relevant material.
Scams
The regulation of scams and the development of scam prevention mechanisms continue to be a focus for the Government, with scams reported to have cost Australian consumers $2.7 billion in 2023.
There have been a few recent developments, including:
Scams Prevention Framework: On 7 November 2024, the Government introduced the Scams Prevention Framework Bill 2024 into Parliament, following earlier consultation this year on exposure draft legislation. The Bill has been referred to the Senate Economics Legislation Committee and a report is due on 3 February 2025.
The framework:
- includes measures to take stronger action against scams, including through tougher penalties for non-compliance and dispute resolution pathways for consumers to seek redress;
- is intended to apply first to the banking and telecommunications sectors as well as digital platform service providers, commencing with social media, paid search engine advertising and direct messaging services.
SMS Sender ID Register: Further to the passing of the Telecommunications Amendment (SMS Sender ID Register) Act 2024 (which received Royal Assent on 5 September 2024), the Government announced on 3 December 2024 that it will direct ACMA to develop an enforceable mandatory industry standard targeted at addressing SMS scams. The standard will require telecommunications providers to check whether messages being sent under a brand name correspond with the legitimate registered sender. If the sender ID is not on the Register, the SMS will either be blocked or flagged with a warning.
Framework for Practical Cooperation: On 3 December 2024, ACMA announced a new framework between Australia and the United Kingdom targeted at combatting phone scams, spam and unsolicited calls.
Regulator enforcement: There has also been further enforcement action from regulators in relation to scams this year, including:
- two infringement notices issued by ACMA against Telstra for failures to comply with scam rules and disclosure of unlisted phone numbers; and
- proceedings commenced by ASIC against HSBC Australia in relation to allegations of failures to adequately protect customers from scams.
Children and young people
One of the key issues currently in the spotlight for Australia and other jurisdictions is online safety for children and young people: minimising the risk of harm and creating a safer online environment.
Children (Social Media Safety) Bill 2024 (SA)
On 9 September 2024, the Honourable Robert French AC released the ‘Report of the Independent Legal Examination into Banning Children’s Access to Social Media’ (French Review) which included a consultation exposure draft of the Children (Social Media Safety) Bill 2024 (SA) for the Premier of South Australia. See our previous article.
The French Review considered a potential legislative model which would ban access to social media services for children under the age of 14 and restrict access to social media services for children between 14 and 16, through requiring social media companies to establish parental consent before allowing access to their platforms.
The Australian Government commented that it would consider the recommendations and the proposed draft Bill to inform the design of a national response. Since that time, the Australian Government has taken the measures further than the South Australian proposal, with the national social media ban for children recently passed in Parliament. Further detail is set out below.
Social media ban for children
In a decision that has attracted the attention of international governments and regulators, on 29 November 2024, the Australian Parliament passed the Online Safety Amendment (Social Media Minimum Age) Act 2024, to enforce a minimum age of 16 years for access to social media.
The Act will impose an obligation on social media platforms captured by the Act to take reasonable steps to prevent children under 16 years of age from having an account. The Act contains restrictions on the use of government identifiers and other personal information specified in the rules. At the centre of the Act is the civil penalty provision, which imposes a maximum civil penalty of 150,000 penalty units (currently equivalent to A$49.5 million) on any relevant platform which does not meet this obligation.
That provision will commence on a date to be determined by notifiable instrument within 12 months from 11 December 2024. According to the Minister, this will provide platforms with the necessary time to develop and implement the required systems to comply with the Act. However, details on how platforms are expected to enforce age restrictions have not been specified in the Act.
The means by which platforms can enforce age restrictions and age assurance measures have been the subject of debate and consideration by the Australian Government. The Government is in the midst of an age assurance trial, which is expected to conclude in mid-2025. This follows an issues paper published by eSafety in July 2024.
Age assurance continues to present challenges in balancing the use of particular methods to evaluate age against privacy and data considerations. As we start 2025 and as relevant platforms continue to consider the changes they will need to implement as a result of the Act, we anticipate that further guidance will be issued by eSafety, including based in part on the report produced from the age assurance trial.
Children's Online Privacy Code
The Children’s Online Privacy Code is one of the outcomes of the Australian Government’s review of the Privacy Act 1988 (Cth) (Privacy Act) released in late 2023, and the subsequent passing of the Privacy and Other Legislation Amendment Act 2024 (Cth), which we set out in further detail below.
Relevantly, as a result of the Act, the Information Commissioner will be required to develop and register a Children’s Online Privacy Code within two years of 10 December 2024. It will automatically apply to APP entities that meet the criteria, including social media services, relevant electronic services or designated internet services (as defined under the OSA) where the service is likely to be accessed by children, excluding certain entities providing a health service. In effect, the Code is likely to have broad application and has been designed to give the Information Commissioner flexibility to specify who will and will not be bound by it. The Code is intended to align with the UK Information Commissioner’s Age Appropriate Design Code and the Online Safety (Basic Online Safety Expectations) Determination 2022.
Over the next 12 months, entities that are likely to be captured should be watching closely for further consultation and guidance which is released as the code is developed.
Misinformation and disinformation
One of the compliance priorities for ACMA for 2024-25 has been to address misinformation and disinformation on digital platforms. ACMA has been overseeing the performance of digital platforms under the voluntary Australian Code on Disinformation and Misinformation, and published its third report in September 2024. ACMA’s report indicated increased concern about misinformation, with data showing that 75 per cent of Australians remain concerned, and noted that internationally, governments are demanding more transparency from digital platforms and establishing frameworks to hold them accountable.
Further to this focus and following the release of an exposure draft bill in June 2023, the Australian Government introduced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 into Parliament on 12 September 2024. The Bill was considered in some detail by Parliament but met resistance in the Senate, in particular as to whether it contained adequate protections for freedom of expression. The Government subsequently announced on 24 November 2024 that it would not proceed with the Bill.
The Bill would have provided ACMA with new powers to address misinformation and disinformation considered to be seriously harmful.
Other consumer online harms
One of the ACCC priorities for 2024-25 includes addressing consumer and fair-trading issues in the digital economy, with a focus on misleading or deceptive advertising within influencer marketing, online reviews, in-app purchases and price comparison websites.
Across the past 12 months, the Australian Government has introduced a number of initiatives and published findings in relation to targeting and combatting online consumer harms, including the below.
- Review of AI and the Australian Consumer Law (ACL): On 15 October 2024, the Government released a discussion paper in relation to its review into AI and the ACL. The paper builds on earlier consultation undertaken in 2023 on Safe and responsible AI in Australia and focuses on AI-enabled goods or services. The Government sought submissions on whether the ACL remains suitable to protect consumers who use AI and to support the safe and responsible use of AI by businesses. This work forms part of the Government’s broader review in relation to addressing AI-related risks and harms (set out in further detail below).
- Unfair trading prohibition: On 16 October 2024, the Government announced that it proposes to ban unfair trading practices under the ACL and expects to publish its proposed ACL reforms in the first half of 2025. The proposal seeks to prohibit a wide range of unfair trading practices which the Government considers are not adequately caught by current legislation. This includes targeting practices such as ‘subscription traps’ and deceptive and manipulative online practices. Our previous article sets out some further detail about these proposed changes.
- Digital Platform Services Inquiry (DPSI): A few reports were published by the DPSI in 2024.
- Eighth report: On 21 May 2024, the ACCC published its eighth interim report of the DPSI, considering potential competition and consumer issues in the supply of data products and services by data firms in Australia.
- Ninth report: On 4 December 2024, the ACCC published its ninth interim report of the DPSI, examining search services and the impact of the emergence of new technologies like generative AI.
- Final report: The final report of the DPSI is due to be provided to the Australian Government by 31 March 2025 and will focus on:
- recent international legislative and regulatory developments in markets for digital platform services and their impact on competition and consumers;
- major developments and key trends in certain markets for digital platform services; and
- potential and emerging competition and consumer issues which relate to digital platform services.
Cyber security developments
CrowdStrike outage
On 19 July 2024, a software update from cyber security firm CrowdStrike inadvertently triggered a global IT outage. CrowdStrike’s root cause analysis determined that a number of issues together resulted in the system crash.
Microsoft reported that up to 8.5 million devices running its operating system were affected, with significant repercussions for critical sectors such as airlines, media and broadcasting, banking and retail. The financial impact on Australian businesses was also reported by the media as surpassing A$1 billion.
The incident has sparked discussions around the vulnerability of global technological systems and the potential consequences of large-scale cyber attacks. According to the Australian Government’s Annual Cyber Threat Report 2023-2024, cyber security attacks are at an all-time high, amid increased geopolitical challenges for Australia. Key findings from the report include the following.
- Cyber Security Hotline activity: The Australian Signals Directorate (ASD) received over 36,700 calls to its Cyber Security Hotline, a 12 per cent increase from the previous year.
- Ransomware incidents: The ASD responded to 121 ransomware cases, accounting for 11 per cent of all reported incidents, a 3 per cent increase from the previous year.
- Malicious domain blocking: The Australian Protective Domain Name System blocked access to 82 million malicious domains, a 21 per cent increase from the previous year.
- Domain Takedown Service: Over 189,000 malicious domains targeting Australian servers were taken down, a 49 per cent surge from the previous year.
As the cyber threat landscape grows, the passing of the Cyber Security Legislative Package (set out below) brings timely regulatory measures, which will require organisations to review and update their security policies and frameworks to comply with the new requirements, and strengthen their cyber resilience.
Cyber Security Legislative Package
On 25 November 2024, the Australian Parliament passed a suite of legislative reforms, reinforcing the Government’s commitment to enhancing Australia’s cyber security. The Cyber Security Legislative Package, which received Royal Assent and became law on 29 November 2024, consists of:
- Cyber Security Act 2024;
- Intelligence Services and Other Legislation Amendment (Cyber Security) Act 2024; and
- Security of Critical Infrastructure and Other Legislation Amendment (Enhanced Response and Prevention) Act 2024.
The package implements seven initiatives under the 2023-2030 Australian Cyber Security Strategy, which seeks to address legislative gaps and align Australia with international best practice. Australia now has a standalone Cyber Security Act, which includes the following measures.
Minimum cyber security standards for smart devices
Part 2 of the Cyber Security Act enables the Minister for Cyber Security to prescribe rules that will establish mandatory security standards for smart devices. The provisions will apply to ‘relevant connectable products’, being products capable of sending or receiving data that have not been exempted. Manufacturers and suppliers of smart devices covered by the rules will be required to produce a statement of compliance confirming that each relevant device meets the requirements under the relevant standard and providing specified details.
Mandatory ransomware and cyber extortion reporting
Part 3 of the Cyber Security Act establishes a ransomware and cyber extortion payment reporting obligation on reporting business entities. A reporting business entity must make a report within 72 hours of making the ransomware payment (or becoming aware that the ransomware payment has been made). This obligation will commence within six months following Royal Assent.
Limited use restriction on the National Cyber Security Coordinator (NCSC)
Part 4 of the Cyber Security Act introduces a ‘limited use’ restriction on the NCSC when sharing information with other Government entities and regulators (including under the mandatory ransomware reporting regime). This is intended to encourage organisations to voluntarily share information with the Government when responding to a significant cyber security incident.
Cyber Incident Review Board (CIRB)
Part 5 of the Cyber Security Act establishes a CIRB to conduct no-fault, post-incident reviews of significant cyber security incidents and make recommendations to Government and industry about actions that could be taken to prevent, detect, respond to, or minimise the impact of cyber security incidents of a similar nature in the future. The CIRB will be comprised of a Chair and at least two, but no more than six, members.
OAIC’s notifiable data breaches report for January to June 2024
In the latest notifiable data breaches report for January to June 2024, the OAIC reported a 9 per cent increase in data breach notifications compared to the previous six months, marking the highest number since 2020. According to the report, cyber security incidents continue to be a prevalent cause of data breaches, accounting for 38 per cent of all breaches. Measures suggested by the OAIC to mitigate cyber threats include:
- implementing multi-factor authentication;
- enforcing strong password policies;
- applying layered security controls to avoid a single point of failure;
- ensuring users have appropriate levels of access to information assets depending on their role and responsibilities; and
- implementing timely security monitoring processes and procedures to detect, respond to and report incidents, or suspicious activity.
The Privacy Commissioner has argued that “the coverage of Australia’s privacy legislation lags behind the advancing skills of malicious cyber actors”. Accordingly, when the OAIC welcomed the passing of the Privacy and Other Legislation Amendment Act 2024, it also indicated its support for a further, second tranche, of reforms.
See below for further detail.
Digital ID laws
The Digital ID Act 2024 and Digital ID (Transitional and Consequential Provisions) Act 2024 were passed on 16 May 2024 and have recently commenced. The Acts follow the launch of several digital ID pilots by the NSW Government over the past couple of years, and legislate a voluntary Accreditation Scheme for digital ID service providers. The Digital ID Act establishes a framework for organisations that provide or use digital ID services for government or commercial services.
The ACCC is the Digital ID Regulator and the OAIC regulates the privacy aspects of Australia’s Digital ID System. Accredited private businesses will be able to apply to participate in the Australian Government Digital ID System by December 2026.
The commencement of the Digital ID system is likely to be relevant as platforms continue to navigate age assurance challenges and for cyber security purposes, as the increase in data breaches involving identity information may steer more entities towards participating in the Digital ID System.
The regulation of AI
As AI becomes more commonly used in our daily lives and with demand for AI-enabled goods and services continuing to grow, the Australian Government and regulators have been grappling with how best to regulate its use and manage some of the associated challenges. This includes in relation to deepfake technology, transparency and disclosures around AI use, intellectual property implications, algorithmic bias and considerations around privacy.
There has been a significantly increased focus in this area in the past 12 months and this will continue in the coming years both in Australia and globally. See our previous articles, ‘Australian artificial intelligence regulation: a work in progress’ and ‘Responsible use of AI: call for submissions on new safety Standard and consultation paper’.
Some of the key developments from 2024 in this area include the following.
- Government Interim Response: On 17 January 2024, the Government published its Interim response to Safe and Responsible AI in Australia consultation, indicating that it would consider adopting a ‘risk-based’ approach with specific rules for the use of AI in high-risk settings.
- Select Committee on Adopting Artificial Intelligence: On 26 March 2024, the Senate resolved to establish the Select Committee on Adopting Artificial Intelligence to inquire into and report on the opportunities and impacts for Australia from the adoption of AI in Australia. Public hearings were held between May and September 2024 and the final report was tabled in November 2024, setting out 13 recommendations to the Government.
- Voluntary measures: Currently, the Australian response to AI has been through voluntary measures, including the AI Ethics Principles, and the Voluntary AI Safety Standard which was released on 5 September 2024.
- Mandatory measures proposal: In addition to the Voluntary AI Safety Standard, the Australian Government released a proposal for the introduction of 10 mandatory guardrails which AI developers and deployers must comply with for AI in ‘high-risk settings’, which included a proposed definition of ‘high-risk AI’. The consultation paper sought submissions on the proposal and is one of the immediate actions arising from the Government’s Interim Response.
- Deepfake regulation: The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 was passed on 21 August 2024 and received Royal Assent on 2 September 2024. It amends the Criminal Code Act 1995 (Cth) to strengthen offences targeting the creation and non-consensual sharing of sexually explicit material online, including material that has been created or altered using AI technology (commonly referred to as ‘deepfakes’).
- Further review and guidance in relation to AI: In addition to the review into AI and ACL (set out above):
- the Government continues to conduct its review work in relation to AI and copyright through the Copyright and Artificial Intelligence Reference Group;
- DP-REG released Working Paper 3: Examination of technology – Multimodal Foundation Models (MFMs) in September 2024. The paper focuses on MFMs, a type of generative AI that can process and output multiple data types, and their impact on consumer protection, competition, the media and information environment, privacy and online safety in the digital platform context. This followed DP-REG’s working papers on algorithms and AI published at the end of 2023; and
- on 21 October 2024, the OAIC published new AI guidance setting out how Australian privacy law applies to AI, and the OAIC’s expectations as to privacy compliance in relation to the development and use of AI products.
Other privacy developments and reforms
The OAIC indicated in its 2024-25 Corporate plan that one of its priorities was to focus on identifying the unseen harms that impact privacy rights in the digital environment. As described by the OAIC, there is an unprecedented focus on privacy protection and information access and we are at a “pivotal moment for the OAIC and our country”.
In line with this focus, the long-anticipated first tranche of privacy reforms was introduced and passed on the last sitting day of Parliament for 2024. The OAIC has also released privacy guidance on tracking pixels and privacy obligations, and taken further enforcement action against large entities such as Medibank Private Limited (see further detail in our previous article).
In addition, on 17 December 2024, Meta settled enforcement proceedings brought by the OAIC in relation to the Cambridge Analytica matter, on terms requiring Meta to enter into an enforceable undertaking providing for a A$50 million settlement fund. Details are available in the OAIC’s media release.
Privacy and Other Legislation Amendment Act 2024 (Cth)
On 29 November 2024, the Senate passed the Privacy and Other Legislation Amendment Bill 2024. This followed the publication by the Senate Legal and Constitutional Affairs Legislation Committee of its report on 14 November 2024, recommending passage of the Bill in the Senate, subject to a number of amendments. See our earlier article.
The Bill received Royal Assent on 10 December 2024 and represents the first tranche of reforms to the Privacy Act and implements a number of legislative proposals agreed by the Government in its Response to the Privacy Act Review.
The key changes which came into effect on 11 December 2024 are summarised in our earlier article and include the following.
1. New powers for the OAIC to issue infringement notices and compliance notices:
- The OAIC has a new power to issue infringement notices for breaches of specified APPs (such as failure to have a privacy policy which meets the requirements of APP 1.4) or a non-compliant notifiable data breach statement without having to engage in protracted litigation, allowing it to resolve matters more efficiently. Infringement notices for each contravention of any of these provisions can be for up to A$19,800 for a body corporate (other than a publicly listed corporation) and A$66,000 for a publicly listed company.
- The OAIC may also give an entity a compliance notice if the OAIC reasonably believes that the entity has contravened any of the provisions for which infringement notices can be issued. The pecuniary penalty for a failure to comply with a compliance notice is A$330,000 for a body corporate.
2. Civil penalties:
- The existing maximum civil penalty for serious interference with privacy (the greater of A$50 million, three times the value of the benefit obtained from the conduct or, if the benefit cannot be determined, 30 per cent of adjusted turnover) is unchanged, but the Privacy Act now relevantly refers to a serious interference with privacy (instead of a serious or repeated interference with privacy) and clarifies what conduct constitutes “serious interference with privacy”.
- A new civil penalty applies for interferences with privacy that are not a serious interference, being a maximum of A$3.3 million for a body corporate.
- A new civil penalty applies for a contravention of any of the specified APPs or for a non-compliant notifiable data breach statement, being a maximum of A$330,000 for a body corporate.
3. New powers to ‘whitelist’ overseas jurisdictions for the purposes of overseas disclosure of personal information: The Governor-General is now able to make regulations that prescribe countries and binding schemes as providing substantially similar protection to the APPs and having mechanisms that the individual can access to enforce those protections. However, at this stage no ‘whitelist’ exists.
4. Information security obligations clarified: A new APP 11.3 clarifies that an APP entity’s pre-existing obligations to take reasonable steps to protect personal information from misuse, interference and loss, and unauthorised access, modification or disclosure under APP 11.1, and to destroy or de-identify personal information it no longer needs for any purpose for which the information may be used or disclosed and is not legally required to retain under APP 11.2, include technical and organisational measures.
5. A new regime to permit, by Ministerial declaration in the wake of a notifiable data breach, certain collections, uses and disclosures of personal information that would otherwise breach privacy laws, duties of confidence or some statutory secrecy provisions: This will apply where the Minister is satisfied that the declaration is necessary to prevent or reduce the risk of harm to individuals affected by the data breach.
Other key changes that will come into effect later are requirements for APP entities to include specific information in their privacy policies about the use of personal information in automated decision making (scheduled to commence on 10 December 2026) and the new cause of action in tort for serious invasions of privacy (commencement date to be confirmed by a proclamation but no later than 10 June 2025).
The Attorney-General has indicated that a second tranche of reforms is forthcoming. However, with the current Government approaching the end of its term and a federal election due by May 2025, time for significant reform is running out. As a result, the future of Australia’s privacy reform remains uncertain, and it is quite possible that the second tranche of reforms will not be introduced to Parliament until after the election.
Spam Act and enforcement
ACMA has been increasingly active in its enforcement of the Spam Act 2003 (Cth), with one of its compliance priorities for 2024-25 being to target misleading spam messages.
On 1 July 2024, ACMA released a Statement of Expectations, emphasising the importance of compliance under the Spam Act 2003 (Cth) and the Do Not Call Register Act 2006 (Cth) and protecting Australians from ‘unwanted intrusion on their privacy and inappropriate use of their personal information for marketing purposes’.
Some of ACMA’s expectations include that businesses should:
- obtain explicit and informed consent from consumers for e-marketing activities and otherwise carefully consider whether to rely upon the use of inferred consent;
- use express consent based on clear terms and conditions that are readily accessible at the point of consent;
- not hide the terms and conditions in fine print, lengthy privacy policies or multiple click-throughs;
- consider using a double opt-in procedure to obtain consent – for example, an email confirmation that consent has been given, with the email providing a click-through link to a ‘manage user preferences’ page; and
- action unsubscribe requests as quickly as practicable, and always within a maximum of five business days.
ACMA has continued to ramp up its enforcement actions, with some larger penalty notices issued in 2024, including:
- in May 2024, issuing a A$2.5 million penalty to a multi-national food chain for sending over 10 million marketing messages in breach of Australian spam laws across a four-month period. In issuing the penalty, ACMA noted that the “enforcement of the spam unsubscribe rules is a current ACMA compliance priority”.
- in October 2024, issuing a A$7.5 million penalty to a national bank for sending over 170 million marketing messages which did not include a method to unsubscribe and/or were sent to individuals who had not consented or had withdrawn their consent. This is the largest penalty issued under the Spam Act to date and follows an earlier breach of Australian spam laws by the same bank, which resulted in a A$3.55 million penalty in May 2023.
The increasing enforcement from ACMA serves as a good reminder for businesses to remain vigilant and keep their Spam Act and related privacy obligations front of mind, particularly before sending communications to large groups of addressees.
Conclusion
2024 marked a distinct increase in regulation of the internet and of privacy in Australia, and further changes will commence over the coming 24 months. It will be important for entities to understand the impact of these changes and put compliance measures in place in a timely manner. In an enforcement environment which includes a greater range of penalty provisions and enforcement-minded regulators, the risks of non-compliance will be substantial.