
This first instalment of Digital Bytes for 2026 brings you up to date on all the important developments in the privacy, cyber, AI and data space from the last quarter of 2025 into the new year.
Social media age-gating in Australia takes effect
The world has been watching as – from 10 December 2025 – new social media minimum age (SMMA) restrictions came into effect under the Online Safety Act 2021 (Cth). The SMMA restrictions require age-restricted online platforms to take reasonable steps to prevent Australian children under 16 from creating or keeping an account.
In guidance issued in late November 2025, the eSafety Commissioner published its view that the following mainstream social media platforms are subject to the SMMA restrictions: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter) and YouTube. The Federal Government announced that over 4.7 million accounts maintained by under 16s have been removed or restricted in compliance with the new SMMA obligations. Meta in particular has since reported that it had removed approximately 544,000 accounts maintained by under 16s, and announced its intention to incorporate new ‘AgeKey’ age verification technology into its Australian apps in 2026.
In the same November 2025 guidance, the eSafety Commissioner notes it has also formed the view that the following platforms are not subject to SMMA obligations: Discord, GitHub, Google Classroom, LEGO Play, Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids.
Other online platforms should review how account holders use their services to consider whether the SMMA restrictions apply or whether the service is lawfully exempt under the Online Safety (Age Restricted Social Media Platforms) Rules 2025.
As noted in our previous instalment of Digital Bytes, the eSafety Commissioner’s regulatory guidance and assessment guide may assist with this assessment.
Failure by a regulated platform to take reasonable steps to prevent Australian under 16s from creating or keeping an account on their platform may result in penalties of up to A$49.5 million.
Looking forward
As under 16s migrate to alternative existing and new platforms, we expect the eSafety Commissioner to form the view that additional platforms are subject to the SMMA restrictions.
Some impacted stakeholders, including Reddit and teenagers affected by the social media ban, have filed claims in the High Court of Australia arguing that the SMMA restrictions impinge upon the implied freedom of political communication.
Additionally, the eSafety Commissioner’s regulatory guidance is scheduled to be updated by June 2026, and periodically thereafter to maintain pace with regulatory and technological developments.
Children’s online safety looks set to remain a high priority for regulators around the world in 2026. Several other jurisdictions have implemented, or are considering, age assurance measures designed to protect children online. Measures implemented or under consideration include:
- age-appropriate design codes for online services;
- new rules requiring parental consent for under-16s’ access to social media platforms; and
- requirements for app stores to implement age assurance measures.
In Australia, the OAIC is expected to release the draft Children’s Online Privacy Code for consultation early this year, in the lead-up to its implementation in late 2026. The services subject to the Code are expected to be broader than those subject to the SMMA restrictions.
Australia’s new National AI Plan and AI Safety Institute
On 2 December 2025, the Australian Government released its National AI Plan for developing the artificial intelligence industry in Australia. The Plan scraps the previously proposed mandatory guardrails for high-risk AI, establishes an AI Safety Institute and sets the direction of AI regulation and investment.
The National AI Plan confirms that the Australian Government will not implement the previously proposed mandatory guardrails for high-risk AI, and will instead leverage the existing regulatory landscape to address AI-related risks and harms. This approach aligns with the Productivity Commission’s recommendations in its final report, Harnessing data and digital technology, released on 19 December 2025 (see below for more detail).
The National AI Plan indicates the Australian Government will prioritise fostering innovation and economic growth in the AI sector, balanced against the need to keep Australians safe from AI-related harms. The Plan is built on three pillars:
- capturing the opportunity: by building the infrastructure to support AI development in Australia and promoting investment in Australian based technology;
- spreading the benefits: by facilitating broadscale AI adoption, training Australians on using AI, and facilitating access to AI by public institutions like schools and government departments; and
- keeping Australians safe: by mitigating potential harms through the newly established AI Safety Institute (see below); understanding how AI intersects with existing regimes, including online safety, the Australian Consumer Law, intellectual property rights, healthcare regulation, national security and privacy; and supporting responsible AI adoption through clear guidance and governance on AI use.
In addition, the National AI Plan refers to the establishment of a new AI Safety Institute.
The Institute will:
- assess upstream risks in the way AI systems are built that could amplify harm, as well as downstream AI harms to affected individuals;
- gain industry insights by engaging with Australian unions, businesses and industry experts; and
- facilitate domestic and international engagement with the National AI Centre and the International Network of AI Safety Institutes.
In addition, in mid-December 2025, the ACCC released a snapshot of AI developments, which highlighted that:
- key players continue to develop and improve their AI models’ reasoning, e.g. with the release of Google’s Gemini 3 in November 2025 and OpenAI’s GPT-5 in August 2025;
- different types of AI have been growing and developing, including multimodal AI (text, images and sound), ‘neurosymbolic AI’, use of agentic AI (e.g. AI chatbots) and multi-agent AI systems; and
- the ACCC recognises that these AI developments pose regulatory and legal issues, such as the use of consumer data for training AI, AI scams, ‘AI-washing’ (i.e. overstating the AI’s capabilities) and liability issues for agentic AI.
Treasury releases final report on how Australia’s consumer laws apply to AI
In October 2025, Treasury released its final report into the Review of AI and the Australian Consumer Law (ACL).
The Report suggests many of the ACL’s existing provisions are suitable to address the consumer law risks of AI-enabled goods and services, including the principle-based protections, provisions around remedy and liability, manufacturer defences and ACCC powers.
The Report proposed some additional targeted amendments to enhance clarity, such as:
- amendments to the legislated list of items in the ACL’s definition of ‘goods’, supported by updates to regulatory guidance and other materials, to clarify the currently ambiguous distinction between AI-enabled goods and services;
- targeted clarifications to the definition of ‘manufacturer’; and
- technical amendments as required to the manufacturer defences to ensure they have the intended effect for software-enabled goods where the manufacturer continues exercising control over the goods post-supply.
While Treasury did not find any changes to the ACCC’s powers were necessary, it recommended regular review of these to keep pace with legal and technological developments.
Latest ACSC guidance on AI use and security for critical infrastructure and small businesses
In December 2025, the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) issued guidance for critical infrastructure owners and operators on how to enhance the safety and security of AI systems in operational technology environments.
Some of the key actions the ACSC recommends include undertaking an assessment of the business case for AI use, conducting audits and risk assessments throughout the AI lifecycle, and training personnel on AI use to support human-in-the-loop decision-making. The ACSC also recommends implementing AI governance and assurance frameworks, and updating or developing important operational documents such as SOPs, AI integration plans, a data integrity program and data security policies.
The ACSC also published guidance in January 2026 for small businesses on using AI and on emerging AI-related cyber security risks. The guidance includes an AI cyber security checklist to help small businesses assess their use of AI and its impact on their risk profile, and to take steps to strengthen the safety and cyber security of their business.
Australian Government rules out introduction of a “text and data mining” copyright exemption
The Australian Government has confirmed it will not introduce a text and data mining (TDM) exception into the Copyright Act 1968 (Cth), meaning technology companies will generally not be able to use copyright-protected material to train AI systems without a licence from the copyright owner. Attorney-General Michelle Rowland has made clear there are “no plans to weaken copyright protections when it comes to AI”.
The EU Copyright Directive defines TDM as “any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations”.
The Government’s announcement follows the Productivity Commission’s interim report, Harnessing data and digital technology, released on 5 August 2025, which had floated the idea of a TDM exception to bring Australia in line with some overseas jurisdictions. In its final report, the Productivity Commission recommended a “wait and see” approach, monitoring how a TDM exception might operate in terms of commercial and non-commercial use, a fairness test and interaction with the existing research and study exception.
Currently, the Copyright Act 1968 (Cth) provides a limited set of “fair dealing” exceptions to copyright infringement. These exceptions allow the use of copyright material without the copyright owner’s permission only for specific purposes, such as research or study, criticism or review, news reporting, parody or satire and legal advice. Importantly, these exceptions do not extend to the large-scale, automated copying of works for TDM or AI training, meaning that such activities remain outside the scope of permitted fair dealing under Australian copyright law.
This issue was at the heart of the highly anticipated UK decision in Getty Images (US) Inc. and others v Stability AI Ltd, where Getty sued Stability AI, alleging its Stable Diffusion model was trained on millions of copyright-protected images scraped from Getty’s websites. In our article, we examine why the UK High Court ruled Stable Diffusion was not an “infringing copy” and consider the implications of the decision for Australian copyright law in the context of AI.
Rather than legislating a broad TDM exception, the Government has reconvened the Copyright and AI Reference Group to focus on three key areas:
- encouraging fair, legal avenues for AI use of copyright material: the Group will consider whether to introduce a new paid collective licensing framework for AI or to continue with the current voluntary licensing model.
- improving legal certainty: the Group will explore options to clarify or update how copyright law applies to material generated by or with the assistance of AI.
- lower-cost enforcement: the Government is investigating the creation of a small claims forum to make it easier and more affordable to enforce copyright in lower-value infringement matters.
The Government’s announcement shelving any plans for a TDM exception provides certainty for creators and rightsholders, ensuring that any use of their works for AI training will require a licence (which may be granted subject to compensation). The Government is calling on the tech industry and creative sector to collaborate on “sensible and workable solutions to support innovation while ensuring creators are compensated”.
Productivity Commission’s final report on harnessing data and digital technology doubles down on alternative privacy regulation proposal
On 19 December 2025, the Productivity Commission released its final report, Harnessing data and digital technology. We had previously dissected the interim report in our article.
Many of the recommendations remain substantively similar to those in the interim report, with some notable changes, including recommendations:
- to phase out the APPs under the Privacy Act in favour of outcomes-based privacy obligations, rather than as an alternative dual-track compliance pathway as previously suggested in the interim report (Recommendation 4.1). The outcomes-based approach would impose an overarching duty for APP entities to deal with personal information in a way that is fair and reasonable in the circumstances and would be guided by a non-exhaustive list of considerations like proportionality, necessity and transparency;
- to consult with the newly established AI Safety Institute as part of the previous recommendation that agencies and regulatory bodies undertake gap analysis reviews of risks or harms not addressed by the existing regulatory landscape (Recommendation 1);
- for the Government to undertake a three-year monitoring program to review the implications of AI in Australian copyright settings, including licensing markets for open web materials, AI impacts on royalties and creative incomes and overseas limitations on AI-related copyright exceptions (e.g. how the fair use doctrine develops) (Recommendation 2.1);
- to streamline the Consumer Data Right framework to simplify processes and support broader adoption (Recommendation 3.1); and
- to require disclosing entities to lodge mandatory digital half-yearly and annual financial reports under the Corporations Act 2001 (Cth). This would be supported by existing regulatory frameworks and additional regulatory oversight by ASIC to ensure high-quality data (Recommendation 5.1).
These suggested actions align with the Productivity Commission’s recommendation that AI-specific regulation, such as the previously proposed national AI guardrails or an EU AI Act-style regime, be used only as a last resort (Recommendation 1.2).
The recommendation to phase out the APPs is a step further than what was proposed in the interim report. Public reception to this recommendation is still unfolding, but as with the interim report, we expect views will be mixed and strongly held – noting the Privacy Commissioner’s strong response to the interim report recommendation.
Lessons from the OAIC’s latest privacy determinations and investigations on data breaches and employee records disclosure
Vinomofo required to take remediation steps after data breach caused by failure to take reasonable steps to protect personal information
On 17 October 2025, following an OAIC-initiated investigation, the OAIC determined that online wine wholesaler Vinomofo had interfered with the privacy of its customers by failing to take reasonable steps to protect their personal information, as required by APP 11.1. This finding followed a significant data breach that occurred on 25 September 2022, which Vinomofo notified to the OAIC as an eligible data breach on 17 October 2022.
The breach involved the exfiltration of a database containing 928,760 customer records, including identity, contact and financial information. The attacker demanded a ransom and subsequently posted the data for sale on the dark web on 16 October 2022. A sample of the data was sold on 20 October 2022.
To determine what reasonable steps Vinomofo was required to take to protect personal information, the OAIC considered several key factors:
- nature and volume of data: the database contained personal information relating to 928,760 customers, including gender, date of birth, contact information, sales order history and invoice information. The APP Guidelines make clear that as the amount and sensitivity of personal information held by an entity increases, so too does the level of security required.
- nature of the entity: Vinomofo is a reasonably well-resourced, for-profit business with A$72 million annual revenue and 120 employees.
- possible adverse consequences: the unauthorised access to or disclosure of personal information exposed individuals to significant risks, including identity theft, fraud, publication and sale of their information on the dark web and targeting by online scams. The OAIC noted that more rigorous security measures are required as the risk of adverse consequences increases.
The OAIC acknowledged that Vinomofo had taken some steps to protect personal information, but ultimately found these measures were insufficient when considered in their totality and in the circumstances. In particular, the OAIC found that Vinomofo should have implemented:
- additional security logging to monitor and record system activity;
- appropriate cloud infrastructure controls and configuration of security settings;
- access monitoring controls to detect and alert the company to suspicious or unauthorised activity;
- formal policies documenting information security, security roles and responsibilities, and the acceptable use of assets such as laptops, emails and passwords; and
- robust security governance led by personnel with formal qualifications and certifications in cyber security.
The OAIC found that Vinomofo failed to meet the “reasonable steps” standard under APP 11.1, observing that had it “enabled the appropriate logging and access monitoring controls on the database, it would have been more likely to proactively detect and respond to the unauthorised access and exfiltration”.
As often occurs in determinations and investigations, the OAIC also scrutinised Vinomofo’s Privacy Policy. While no specific finding was made, the OAIC noted that Vinomofo’s previous public-facing privacy policy was titled “the boring stuff” and impliedly encouraged readers not to read it, and that this language has since been removed.
The OAIC ordered Vinomofo to undertake a series of remedial actions within 90 days, including:
- implementing security logging in all Amazon Web Services (AWS) environments storing personal information;
- applying appropriate security access settings to all databases holding personal information;
- introducing systems or controls to monitor for unauthorised activity;
- developing written policies and procedures that meet minimum security baseline standards;
- engaging an independent reviewer to review the adequacy of staff with cyber security expertise and address any shortcomings identified; and
- promoting a privacy- and security-aware culture through appropriate staff training.
Vinomofo was also required to engage a suitably qualified independent reviewer within six months to review the effectiveness of these measures, provide a report to the OAIC and act on any recommendations within specified timeframes.
This determination reinforces that organisations handling large volumes of personal information must implement comprehensive, formal and proactive security measures. Informal approaches to privacy, and misconceptions about the allocation of security responsibilities when using cloud services, will not shield organisations from regulatory scrutiny or liability. This investigation adds to the weight of existing authority on the kinds of security measures the OAIC expects to be in place to protect types of personal information – see for example our update on the Optus proceedings and on the Medibank proceedings.
Employee record exemption did not apply to unauthorised disclosure of employee medical certificate
On 15 September 2025, the OAIC issued a determination against Fortrend Securities Pty Ltd, finding the company had engaged in unauthorised disclosure of personal information when its Managing Director maliciously disclosed a former employee’s medical certificate to a client. The complainant was awarded A$13,500 in compensation.
The circumstances were that the employee had taken medical leave (supported by medical certificates) in response to hostile and aggressive behaviour experienced after resigning, while working through their notice period. The Managing Director then told a client that the employee had suffered a “complete nervous breakdown” and sent the client a copy of the employee’s medical certificate.
Fortrend sought to rely on the employee records exemption, arguing the medical certificate was an “employee record” and therefore exempt from the APPs. The OAIC accepted the certificate was provided during employment for employment purposes, but found the disclosure to the client occurred after the employment had ended and was not for any employment-related purpose. Therefore, the exemption did not apply and there was no valid basis under APP 6 permitting the disclosure.
Compensation orders are not particularly common in privacy determinations; however, in this case the OAIC:
- awarded A$10,000 for non-economic loss, accepting the complainant suffered significant humiliation, hurt feelings and embarrassment as a result of the breach;
- awarded an additional A$3,500 in aggravated damages, reflecting the “malicious, improper, and unjustifiable” nature of Fortrend’s conduct, its indifference towards its privacy obligations in respect of the employee’s sensitive health information, and its lack of cooperation during the OAIC’s investigation (including providing unreliable and inaccurate information); and
- ordered Fortrend to issue a written apology and to undertake an independent review of its privacy policies, procedures and training.
The decision serves as a reminder that the employee records exemption is not a free pass for employers, with multiple recent determinations interpreting it narrowly. Malicious disclosure of an employee’s sensitive health information, coupled with indifference towards privacy obligations, can lead to regulatory action and significant financial consequences. The case also suggests that transparent and proactive engagement with the OAIC may have avoided some of the additional orders made here.
Other key privacy updates from the latest OAIC activities
New notifiable data breaches statistics dashboard
On 4 November 2025, the OAIC launched its new Notifiable Data Breaches (NDB) statistics dashboard. This interactive tool gives reporting entities, media and the public easy access to data reported under the NDB scheme since 2018, allowing users to explore trends and insights. Under the NDB scheme, any organisation or agency bound by the Privacy Act must notify affected individuals and the OAIC when a data breach is likely to cause serious harm to an individual whose personal information is involved. The dashboard will be updated every six months.
Updates to APP Guidelines
The following aspects of the APP Guidelines were updated in October 2025 to address the privacy reforms passed in late 2024:
- APP 1, addressing the new requirements to address automated decision-making in privacy policies from December 2026;
- APP 8, addressing the new mechanism for authorised overseas disclosure of personal information where a country is declared by the Minister; and
- APP 11, addressing the new requirement to implement technical and organisational measures to protect and dispose of personal information and to update guidance in relation to destruction and de-identification of personal information.
New paragraphs 11.30 and 11.31 in the APP 11 guidance highlight the importance of destroying and de-identifying aged and redundant personal information:
“Destroying or de-identifying personal information no longer needed is an important concept that can help reduce privacy and security risks. For example, retaining too many categories of personal information can increase the harm to an individual in the event of a data breach or unauthorised access. Retaining a high volume of personal information can also amplify the reputational and financial risks to the APP entity.
It is expected that an APP entity actively considers the privacy and security risks of any personal information it retains, and actively seeks to mitigate these risks by taking reasonable steps to destroy or de-identify personal information that is no longer needed.”
Workplace surveillance and digital work system law reforms progress in New South Wales and Victoria
Employers with employees in New South Wales and Victoria should be aware of significant reforms to workplace surveillance and digital work system laws in those states.
The Victorian Government has given in-principle support to 15 of the 18 recommendations made in the final report of the Legislative Assembly Economy and Infrastructure Committee’s Inquiry into workplace surveillance, released on 13 May 2025.
Key recommendations from the inquiry include new obligations for employers, such as:
- demonstrating that any workplace surveillance is reasonable, necessary, and proportionate, supported by a risk assessment;
- providing workers with 14 days’ written notice of surveillance, including clear details about the methods, scope, timing, purpose, and data-handling practices;
- maintaining a workplace surveillance policy and ensuring it is provided to all employees; and
- giving employees, upon request, access to surveillance data generated about them.
The Victorian Government released its official response on 18 November 2025, stating it will give detailed consideration to new laws that reflect these recommendations and respond to significant developments in workplace surveillance technology.
Meanwhile, the Work Health and Safety Amendment (Digital Work Systems) Bill 2025 (Bill) has been introduced in New South Wales and targets the use of digital work systems such as algorithms, AI, and automation in the workplace. The Bill seeks to impose a new duty on Persons Conducting a Business or Undertaking (PCBUs) to ensure, so far as reasonably practicable, that digital systems do not create risks to workers’ health and safety. This includes considering whether the allocation of work by or use of a digital system results in excessive workloads, unreasonable performance metrics, excessive monitoring or discriminatory practices. Digital tools that may be covered by the Bill include automated rostering, performance tracking software and workflow or task allocation tools.
The Bill also requires PCBUs to provide reasonable assistance to WHS entry permit holders, such as unions, seeking to access and inspect a digital work system relevant to a suspected contravention of the Work Health and Safety Act 2011 (NSW). The Bill is expected to pass later this year.
Both developments signal a strong commitment in those states to addressing concerns about workplace surveillance and emerging technologies. It remains to be seen whether other states, most of which do not currently have specific workplace surveillance laws, will follow suit.
Insights from the latest reports on cyber and critical infrastructure threats
Annual Cyber Threat Report
The ACSC has released its Annual Cyber Threat Report for 2024-25.
Top takeaways from the report are that:
- while calls to the ACSC and cybercrime reports to ReportCyber were down from FY2023-24, the number of cyber security incidents the ACSC responded to, and the number of entities it notified of potential malicious cyber activity, both increased;
- the average self-reported cost of cybercrime per report for businesses was 50 per cent higher than in the previous financial year, with large businesses suffering the greatest increase; and
- the top three self-reported cybercrime threats for businesses were email compromise resulting in no financial loss (19 per cent), business email compromise (BEC) fraud resulting in financial loss (15 per cent) and identity fraud (11 per cent).
The recommendations the ACSC highlighted to reduce cyber risk are to:
- use phishing-resistant multi-factor authentication (MFA) wherever possible;
- use strong and unique passwords or passphrases;
- regularly back up important data;
- be alert for phishing messages and scams;
- keep software on devices updated;
- implement best-practice logging;
- replace legacy IT;
- effectively manage third-party risk; and
- prepare for post-quantum cryptography.
The ACSC also emphasised that businesses should have a cyber security incident response plan that is regularly tested, and should report suspicious activity, cyber security incidents and vulnerabilities to the ACSC at cyber.gov.au/report.
Critical infrastructure report
The Critical Infrastructure Security Centre (CISC) released its Third Edition of the Critical Infrastructure Annual Risk Review in November 2025. The Annual Risk Review outlines the emerging risks to Australia’s critical infrastructure throughout 2025 which have escalated since the previous year. The CISC identified an uncertain global geopolitical landscape, changes to international supply chains for software and hardware, and rapid advancements in technology as driving an increased level of risk.
Cyber security priorities for boards in 2025-26
The ACSC and the Australian Institute of Company Directors (AICD) have released guidance setting out the cyber security priorities for boards in 2025-26.
According to the guidance, released in October, boards should focus on:
- understanding whether technology used or provided to customers is secure by design and secure by default; and
- prioritising the defence of the organisation’s most critical assets.
The guidance steps out questions boards should ask management to understand their organisation’s cyber security posture in the 2025-26 cyber threat environment. There are both “threshold governance questions” to help boards determine the cyber security posture of organisations and “supplementary technical questions” to help boards better understand the cyber security controls in place within organisations.
Broadly, the threshold and supplementary questions address the same recommendations set out in the ACSC’s Annual Cyber Threat Report (discussed above), being:
- implementing event logging and threat detection;
- managing legacy IT;
- managing cyber supply chain risk; and
- implementing a post-quantum cryptography transition plan.
Further context to the questions is set out in the AICD and Cyber Security Cooperative Research Centre’s Cyber Security Governance Principles and Governing Through a Cyber Crisis - Cyber Incident Response and Recovery for Australian Directors publications.
Data centre investment and momentum grow with the launch of Data Centres Australia
A new peak industry body for Australia’s data centre sector, Data Centres Australia, officially launched on 28 November 2025.
Chief Executive Officer Belinda Dennett outlined that Australia has “already produced world-class data centre companies”, noting Australia’s “proximity to growing markets, political stability, land availability and cost-competitive renewable energy make [it] well-positioned for growth,” but highlighted that what is required is “coordination, collaboration and urgency”.
Data Centres Australia’s goals are to:
- secure Australia’s position in the global race for AI infrastructure, to ensure the benefits flow to Australians;
- position Australia as a regional hub for AI infrastructure investment and to be a leader in sustainable data centre development; and
- ensure Australia is at the forefront of technical innovation and that we are building the workforce we need to support the industry,
by:
- establishing a public presence;
- advocating for effective and evidence-based policy;
- collaborating, within the sector and with other parts of the economy; and
- developing research and helping to educate stakeholders and the community.
As identified by the ACCC in its AI developments snapshot (discussed above), investment in Australian data centres to enable AI growth continues to rise. Recent high-profile projects include an A$7 billion data centre project at Eastern Creek in Sydney, the expansion of existing data centres in Sydney, and a proposed new data centre in Huntingwood, New South Wales.
What Australia’s significant merger reforms mean for technology and IP transactions
Australia's new merger regime became mandatory on 1 January 2026. These reforms affect all industries and sectors, including organisations acquiring, licensing or transferring intellectual property (IP) rights and assets (including software). Merger clearance is now a key consideration for all organisations and is no longer limited to traditional M&A or asset sale transactions.
Under the new merger regime, if a transaction meets the revenue or deal value thresholds and is ‘connected with Australia’, the parties must notify the ACCC and obtain ACCC approval before completion. The ACCC will assess whether the proposed acquisition is likely to substantially lessen competition. Failure to notify the ACCC will render the transaction legally void and may attract substantial penalties.
For organisations that deal with IP rights and assets, this means that assignments, licences and other IP dealings may trigger notification obligations. Organisations that deal with patentable inventions and technologies should also be aware that the “ordinary course of business” exception no longer applies, so routine patent-related transactions may also require notification to the ACCC.
Our Competition team’s recent article on the new merger regime and the most recent changes provides a useful summary of the key changes and considerations for organisations, including recommendations to ensure compliance and minimise regulatory risk such as:
- being aware of the thresholds for notification of a transaction to the ACCC;
- understanding the waiver process available where a transaction does not raise material competition issues;
- allowing for the lead time required to go through the new process and the ACCC's timeframes for issuing a decision; and
- budgeting for the applicable fees.
Other key telco regulatory and data updates
In the telecommunications and spam space:
- telcos face new penalties of up to A$30 million for non-compliance with Triple Zero rules, under amendments to federal telecommunications legislation, including new obligations to share real-time information on emergency call service outages with ACMA and emergency services;
- Optus has paid over A$12 million in penalties for failing to comply with emergency call rules during its network outage on 8 November 2023. Optus is also implementing the 21 recommendations from the independent review conducted by Kerry Schott into the September 2025 Triple Zero outages;
- while there have been no new spam compliance enforcement actions in the last quarter, the ACMA has issued a number of infringement notices for failure to comply with anti-scam rules;
- ACMA has rejected a proposed new telecommunications consumer protection (TCP) industry code on the basis that it does not provide adequate protections for consumers. It remains to be seen whether ACMA will consider a new TCP code rectifying the deficiencies or instead develop a new industry standard; and
- ACMA has amended customer identity check rules to allow telcos to use digital ID verification services for prepaid mobiles. This follows an investigation which found that, in the course of activating over 18,000 prepaid mobile services, Telstra's method of identity verification did not adhere to the rules.
In the data space:
- the Regulatory Reform Omnibus Bill 2025 (Cth), which received royal assent on 4 December 2025, aims to facilitate the wider use and adoption of healthcare identifiers (which identify individuals, health practitioners and provider organisations) by expanding the class of entities that can access the healthcare identifier directory, and to permit use of healthcare identifiers for health administration purposes; and
- Consumer Data Right compliance enforcement continues, with CBA being fined almost A$800,000 for failing to comply with the Consumer Data Right rules in relation to access and data sharing.
In the online space, caching, conduit or storage service providers, and search engine providers, should be aware that updates to the Defamation Act 2005 (Qld) bring Queensland closer into line with other states in implementing a digital intermediaries exemption and defence for defamation actions.
Looking ahead
- On 9 January 2026, the OAIC announced it was undertaking a privacy compliance sweep of privacy policies, with a particular focus on where personal information is collected in person. Sectors of focus are rental and property, chemists and pharmacists, licensed venues, car rental companies, car dealerships, pawnbrokers and second hand dealers.
- After an interim judgment was delivered in a New South Wales District Court case alleging an infringement of the new statutory tort of serious invasion of privacy under the Privacy Act (Kurraba Group Pty Ltd & Anor v Williams [2025] NSWDC 396), it will be interesting to see what insights into the operation of the tort emerge from the final judgment.
- Banks, telcos and some digital platforms will be interested in the various draft instruments for the Scams Prevention Framework released by Treasury. Consultation closed on 5 January 2026.
- The Department of Home Affairs is consulting on proposed amendments to the Critical Infrastructure Risk Management Program (CIRMP) Rules under the Security of Critical Infrastructure Act 2018 (Cth), with consultation open until 13 February 2026. The Department’s proposed updates to the Rules are discussed in its Consultation Paper.
- The Australian Government has confirmed that draft legislation amending the Australian Consumer Law to address unfair subscription practices, which primarily occur through digital user journeys, will be released for public consultation in early 2026. In the meantime, the ACCC has not waited for this new legislation to take action on these issues, recently commencing Federal Court proceedings against HelloFresh and YouFoodz for subscription traps.
- New mandatory codes under the Online Safety Act 2021 (Cth) will commence on 9 March 2026. The codes require various types of digital service providers (including AI providers) to protect children against certain age-restricted material. The commencement of these codes is timely, with the eSafety Commissioner recently announcing increased reports of misuse of the Grok AI chatbot to develop material harmful to children, such as sexually explicit content.
- The status of the 'Tranche 2' Privacy Act reforms is currently unknown. In 2025, the OAIC confirmed that the Attorney-General's Department will lead the next round of reforms; however, consultation has not yet commenced.
What we think 2026 has in store for privacy, cyber, data and AI
In previous years, most predictions about the year ahead for privacy, cyber, data and AI have highlighted their increasing impact across all sectors.
These matters are now high on the agenda for businesses and organisations across all sectors, and it is possible that, for many organisations, that attention has peaked.
In our view, 2026 is likely to have in store:
- sustained interest by the OAIC in data breaches and personal information security, in an effort to ensure business and government put privacy and cyber as high on their agendas as possible;
- continued focus on ensuring that cyber is not an "IT-only" issue but the responsibility of everyone in the organisation, including active Board interest and participation in cyber matters and cyber exercises; and
- ongoing discussion about whether there are harms arising from AI that justify bespoke regulation, across all domains that AI touches, such as intellectual property, confidentiality and privacy, and the workforce. AI risk and opportunity will also remain a clear imperative for Boards.
The new year is an opportunity to take stock of your organisation's privacy, cyber, data and AI maturity, to identify risk and compliance gaps and areas for improvement, and to take well-informed steps to address them.
How can we assist?
We have a large team of privacy, data protection and cyber specialists, with substantial experience across the whole spectrum of data, privacy and cyber compliance and incident management.
For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation to manage its risks in these rapidly evolving areas, please get in touch.