
Welcome to the May 2026 edition of Digital Bytes, our quarterly snapshot of key legal and regulatory developments shaping Australia’s digital, data and technology landscape.
This edition arrives for Privacy Awareness Week 2026 and at a time of sustained regulatory momentum and increasing enforcement activity. Children’s online privacy remains firmly in focus with the release of the exposure draft of the Children’s Online Privacy Code, while recent penalty outcomes underscore the financial and reputational consequences of governance and compliance failures related to cyber security. Courts and regulators are also continuing to shape the emerging contours of Australia’s data and AI framework, highlighted by the latest decision on facial recognition and privacy, and new cases testing the intersection of artificial intelligence and legal professional privilege.
Beyond privacy and AI, this edition canvasses significant developments in online safety, ransomware and cyber security threats, and the expanding reach of critical infrastructure and surveillance regimes. We also consider the data transfer implications of the Australia-EU trade agreement and what the proposed new consumer laws on unfair trading and subscriptions mean for the technology sector and its customers.
As always, Digital Bytes is designed to be quick, practical and accessible, flagging what matters, what’s changing and what organisations should be thinking about now. We hope you find this edition helpful and we welcome discussions on what these developments mean for your business. As part of Privacy Awareness Week (4–10 May 2026), we have also shared five practical tips for managing privacy complaints effectively, which you can read in our latest article.
Latest privacy determination focuses on overcollection and unfair collection of personal information
On 21 April 2026, Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC), released a determination following its commissioner-initiated investigation into IRE Pty Ltd (IRE), the operator of the 2Apply rental technology platform, which has processed over 8.5 million tenant applications.
The OAIC found that the 2Apply platform collected personal information:
- which was not reasonably necessary for IRE’s functions and activities – in breach of Australian Privacy Principle (APP) 3.2; and
- by unfair means – in breach of APP 3.5.
Overcollection
In relation to overcollection, after analysing what information IRE had a legitimate interest in collecting, the OAIC found that IRE could have discharged its functions and activities (including the provision of services to real estate agents) without collecting the following personal information at the tenancy application stage:
- gender;
- details (names and ages) of dependents;
- student status;
- bankruptcy status;
- retirement status;
- previous living history (two years);
- current/intended ownership of principal place of residence/investment property;
- current applications for other properties;
- bond and rent assistance application status;
- citizenship status and visa enquiry;
- emergency contact; and
- vehicle details.
The OAIC also indicated that smoker status and bond history/claims may not be reasonably necessary for IRE’s functions and activities, but that interested parties may have countervailing views.
The OAIC also set out a list of information for which it considered that IRE could have collected less information to discharge its functions and activities, such as identification documents, proof of income and employment.
Fairness of collection
In relation to the fairness of collection, the OAIC noted that fairness takes into account the context of the collection of personal information – in this case, the significant power imbalance, the rental crisis, individuals’ limited choices of rental platforms, the overcollection of personal information, and the way 2Apply set out the user journey (which the OAIC described as the ‘Online Choice Architecture’).
Interestingly, the OAIC raised a number of issues that the government is also seeking to address through the proposed introduction of the unfair trading practices provisions, which we summarise below. The OAIC identified the following as issues affecting the fairness of the collection of personal information:
- ‘confirmshaming’ – using emotive language to make a user feel guilty or embarrassed for not taking an action that is beneficial to the entity;
- biased framing – presenting choices in a way that emphasises their supposed benefits (positive framing) or downsides (negative framing), where the framing benefits the interests of the entity; and
- bundled consent – requesting consent for multiple purposes in a single consent.
While bundled consent has been the topic of OAIC guidance for some time, this is the first clear indication that the OAIC will carefully consider the user journey when assessing the lawful handling of personal information.
The OAIC found that the Online Choice Architecture employed in 2Apply was likely to lead individuals to provide more personal information than they would have otherwise provided.
Direct marketing
The OAIC also noted that 2Apply required individuals to agree to the use of their personal information for direct marketing, with the ability to opt out later; there was no option to opt out at the point of collection.
Most direct marketing is now conducted through electronic messaging, which is the domain of Australia’s Spam Act 2003 (Cth) (Spam Act). The OAIC’s commentary may be read as suggesting that requiring an individual to give consent as a precondition to accessing services may breach the Privacy Act 1988 (Cth) (Privacy Act) – in which case that consent may not be valid for the purposes of the Spam Act.
Outcome
The OAIC ordered that IRE:
- undertake an independent review of its privacy practices, not only in relation to collection of personal information but also in relation to compliance with information security obligations; and
- provide a copy of the review report to the OAIC and then report to the OAIC on its progress to address and respond to the report’s recommendations.
The OAIC’s determination also invites all rent-tech providers and real estate agents to undertake an assessment of their personal information collection practices and to ‘turn their minds’ to the privacy of tenancy applicants.
Exposure draft of the Children’s Online Privacy Code released
The OAIC has released the exposure draft of the new Children’s Online Privacy Code, signalling a significant shift in the regulation of digital services likely to be accessed by children – anyone under 18 years old. While the Code will sit within the existing Privacy Act framework, it introduces enhanced obligations that will require many organisations to reassess and re-design how their services collect, use and manage the personal information of children.
Key highlights of the exposure draft include the following.
- Age ascertainment obligations. Entities must either take reasonable steps to determine an end user’s age before collecting their personal information, or apply the requirements under the Code to all end users.
- Data minimisation by default. Services must be designed so that, by default, only personal information that is strictly necessary to provide the service is collected, used or disclosed.
- “Best interests of the child”. Collection, use and disclosure of a child’s personal information must be consistent with the child’s best interests.
- Consent framework. Children aged 15 or older may give consent for themselves. For younger children, consent must be obtained from a person with parental responsibility. Entities must take reasonable steps to confirm that the person giving consent is a person with parental responsibility. Consent must be voluntary, informed, current, specific, unambiguous, capable of being withdrawn and time‑limited to a maximum of 12 months.
- Child assent requirements. In some circumstances, entities must seek the child’s assent to collection, use and disclosure, and to contacting their person with parental responsibility to obtain consent.
- Enhanced transparency and notices. Entities with services likely to be accessed by children must maintain child‑specific privacy policies, provide age‑appropriate collection notices and ensure access and correction rights are delivered in formats children can meaningfully understand and engage with.
- Direct marketing restrictions. Use or disclosure of children’s personal information for direct marketing requires consent and must be consistent with the child’s best interests. In addition, sensitive information can only be used for direct marketing if collected from the child directly (or from their person with parental responsibility, if applicable).
- Right to request information about processing. Children (or persons with parental responsibility, where applicable) may, when requesting access to personal information, seek and receive information about an entity’s handling of the child’s personal information.
- New destruction rights. Children (or persons with parental responsibility, where applicable) may request destruction of personal information, subject to limited exceptions.
- Parental control transparency. Where parental monitoring or control mechanisms are offered, children must be given an age-appropriate notification about that monitoring and control.
- Higher governance expectations. Entities must conduct privacy impact assessments (PIAs) for new or materially changed services likely to affect children’s privacy, maintain and publish a PIA register, implement child‑friendly complaints processes and provide regular staff training.
Although still in draft, the Code clearly reflects the regulator’s intention to move beyond more general notice and consent models and impose broader design, governance and accountability obligations on entities operating in the digital ecosystem that are likely to interact with children. Key areas of concern that the Code aims to address are the amount of personal information collected about children, ‘nudge techniques’ used to increase the likelihood that children provide further data, and specific categories of data such as location data.
The Code operates alongside Australia’s age-gating requirements under online safety legislation (see below), which require certain online services to take reasonable steps to ensure children under 16 do not hold accounts. The intention is that the Code regulates the same online services in respect of their 16 and 17 year old users, as well as a range of other online services that are not subject to the age-gating requirements.
Consultation on the exposure draft of the Code is open until 5 June 2026, after which the OAIC will consider submissions and any updates to the draft. The Code is due to be registered on 10 December 2026; however, its commencement date has not yet been specified.
Facial recognition technology can be used in retail settings without consent to address serious threats and serious misconduct – Bunnings appeal decision
Facial recognition technology has been a recent focus of the OAIC.
In October 2024, as we reported in our previous issue of Digital Bytes, the OAIC found that Bunnings’ use of facial recognition technology to combat significant retail crime and to protect staff and customers was not permitted under the Privacy Act.
Bunnings appealed the decision to the Administrative Review Tribunal (Tribunal). The Tribunal’s decision confirmed the OAIC’s findings that:
- Bunnings had “collected” the personal information of all individuals it had captured on its CCTV systems, even though that information was only held for (on average) 4.17 milliseconds before it was deleted; and
- Bunnings had breached parts of APPs 1 and 5, relating to transparency in its privacy policy and collection notice, and to its failure to implement adequate practices, procedures and systems to address privacy risks and compliance in relation to the implementation of facial recognition technology.
However, the Tribunal overturned part of the OAIC’s decision by finding that Bunnings was permitted to collect sensitive information of individuals without their consent, because Bunnings could rely on the exception that it:
- had reason to suspect that there was unlawful activity, or misconduct of a serious nature, that related to its functions or activities; and
- reasonably believed that the collection was necessary in order for it to take appropriate action in relation to the matter.
The Tribunal commented that Bunnings also could have relied on the exception where there is a serious threat to life, health or safety.
The OAIC has announced that it will not appeal the decision, but has warned retailers that this decision is not a “green light” for deploying facial recognition technology. There is still a “high bar” for the lawful deployment of that technology under the Privacy Act.
First ASIC cyber incident penalty: FIIG Securities ordered to pay $2.5 million for cyber security failures
On 9 February 2026, the Federal Court ordered FIIG Securities (FIIG) to pay a $2.5 million penalty for various cyber security failures, in the first civil penalty of its kind. FIIG was also ordered to contribute $500,000 towards ASIC’s legal costs and undertake a compliance program, including engaging an independent expert to report on FIIG’s cyber security posture and identify further remedial actions.
FIIG suffered a cyber-attack in 2023 when a threat actor accessed its IT network and stole a significant amount of personal information. Some personal information was leaked on the dark web, including driver’s licences, passport information, bank account details and tax file numbers. FIIG notified 18,000 of its clients that their personal information may have been compromised.
ASIC commenced proceedings against FIIG on 12 March 2025, alleging that it failed to have adequate cyber security measures in place for over four years in breach of its obligations as an Australian Financial Services Licence (AFSL) holder under the Corporations Act 2001 (Cth). FIIG accepted, and the Federal Court declared, that FIIG contravened its obligations to:
- do all things necessary to provide financial services efficiently, honestly and fairly;
- have adequate financial, technological and human resources; and
- have adequate risk management systems.
Among other cyber security failures, FIIG did not:
- have an appropriate cyber incident response plan in place that was tested at least annually;
- implement appropriate access controls and password protections for privileged accounts and review access rights to ensure that they were appropriate on a quarterly basis;
- have multi-factor authentication for its remote access users;
- have appropriately configured firewalls and security software in place and implement vulnerability scanning over its network or end points;
- conduct regular penetration testing;
- have a process to regularly review the effectiveness of its existing cyber security controls;
- have appropriately skilled IT personnel monitoring threat alerts to identify and respond to unusual or suspicious activity;
- provide mandatory cyber security awareness training to its staff;
- allocate sufficient financial resources to have adequate cyber security measures in place, or employ or outsource people with the skills to ensure those measures were adequate; and
- fully implement the procedures and controls set out in its IT information security and cyber and information security policies.
The Federal Court’s decision reaffirms previous case law that cyber security sits squarely within the duties of an AFSL holder. It also serves as a warning for businesses not to underinvest in cyber security, as the cost of compliance is likely to be less than the amount of a potential penalty.
This decision marks ASIC’s second successful cyber security enforcement action, following its proceedings against RI Advice in which compliance orders were made but there was no civil penalty. ASIC has identified cyber-attacks, data breaches and inadequate operational resilience and crisis management in its 2026 key issues outlook and we expect to see continued enforcement focus in this area.
Injunctions after data breaches and regulatory focus
We have seen further instances of organisations affected by a data breach involving the publication of stolen data on the dark web applying for injunctions to restrain the use or disclosure of that data. In February, the NSW Supreme Court delivered judgments in ReadyTech Holdings Ltd v Persons Unknown [2026] NSWSC 66 and Ansell Ltd v Persons Unknown [2026] NSWSC 65, in both cases granting injunctions requiring the threat actor to remove the datasets from the internet and not to handle or publish them.
Cyber security remains a high priority for regulators. In addition to ASIC’s enforcement action relating to FIIG’s cyber security failures (above), the Chair of the Australian Prudential Regulation Authority (APRA) has recently stated that lifting cyber security policies and practices across APRA-regulated industries is a “top priority”. It is particularly focused on working with entities to assess potential risks from concentrations of third party service providers.
The takeaway for Australian businesses is clear: with cyber threats intensifying and regulators raising expectations, organisations must remain vigilant in keeping cyber security policies, incident response plans and staff training current and fit for purpose.
The new Australia-EU Free Trade Agreement aims to facilitate cross-border data transfer
The newly concluded Australia–EU Free Trade Agreement (FTA) includes a dedicated Digital Trade chapter aimed at facilitating cross‑border data flows to support digital trade, subject to each jurisdiction’s privacy and data protection laws.
Notably, the FTA includes a commitment that cross-border data flows between the parties must not be restricted by requiring data or computing facilities/network elements to be located in a particular party’s territory (e.g. data localisation). However, this operates alongside each country’s privacy laws, so parties to a contract must undertake the steps required (e.g. adopting standard contractual clauses) to permit cross-border transfers in compliance with applicable laws.
In addition, to facilitate digital trade, the FTA reinforces the importance of laws relating to electronic documents and signatures, consumer protections in electronic commerce transactions, spam (unsolicited direct marketing) and making government data available.
The FTA also prohibits each jurisdiction from requiring a person from the other jurisdiction to transfer or give access to source code, with limited exceptions for critical infrastructure, law enforcement, the protection and enforcement of IP rights, and government procurement.
While the FTA does not specifically harmonise the parties’ data transfer regimes, it sends a strong signal in favour of facilitating open digital markets and cross‑border data flows. For Australian organisations receiving personal data from the EU, existing GDPR transfer requirements will continue to apply.
How using AI can threaten legal professional privilege
Generative AI is changing how organisations create, store and share legal advice. However, recent US litigation highlights that those efficiencies can affect legal professional privilege. Our colleagues’ article explores how privilege can be inadvertently waived, why “internal use” assumptions may be unreliable and the practical steps organisations should be taking now.
Proposed updates to the Security of Critical Infrastructure framework progress, including new guidance on cyber security incident notification
Following the independent review of the Security of Critical Infrastructure Act 2018 (Cth) (SOCI Act) released in February 2026, the Department of Home Affairs is now consulting on amendments to the Ministerial direction powers and an exposure draft of amendments to the Critical Infrastructure Risk Management Program (CIRMP) Rules.
Ministerial direction powers
The Federal Government is proposing to:
- replace the existing adverse security assessment requirement, introduce a limited carve out from the prescribed administrative action framework and update the ‘regulatory exhaustion’ requirement;
- introduce a new power to apply conditions on reporting entities where their ownership, control or governance arrangements create a material risk to national security that cannot be sufficiently mitigated through existing regulatory obligations or measures;
- introduce a new power which can be directed at all regulated entities to address vendors or technology dependencies that create a material national security risk;
- introduce a limited power to delay an entity’s disclosure obligations under the Corporations Act 2001 (Cth) to avoid compromising national security or public safety; and
- increase the maximum civil penalty for non-compliance with a Part 3 Ministerial direction.
Exposure draft of the enhanced CIRMP Rules
The exposure draft proposes to specify ‘enhanced’ obligations addressing cyber and information security hazards, supply chain hazards, personnel security hazards and physical and natural hazards, for certain prescribed asset classes.[1] At a high level, the enhanced obligations address:
- cyber and information security – requirements relate to meeting maturity level two of one of the listed cyber security frameworks, implementing phishing-resistant multi-factor authentication, inventorying critical systems, implementing network segregation between critical systems and other networks, recovering and restoring critical systems, and addressing specified material cyber and information security risks.
- supply chain – requirements relate to mapping the supply chain for major suppliers and critical systems, identifying vulnerabilities, identifying maximum tolerable outages and mitigating the associated risks, and conducting vendor assessments, including to identify foreign ownership, control or influence risks.
- personnel security – requirements relate to access management, mapping onshore and offshore critical workers and conducting AusCheck background checks and suitability assessments.
- physical and natural hazards – requirements relate to specifying physical access controls and response measures for unauthorised access and minimising physical risks arising from any other category of risk.
In addition, CIRMPs must address impairments to assets’ functions that could prejudice the social stability, economic stability, national security or defence of Australia, as well as impairments or compromises arising from or in connection with foreign ownership, control or influence risks.
Consultation is open until 1 May 2026. In addition, the Department proposes to develop guidance for these enhanced CIRMP requirements to assist regulated entities.
Separately, the Cyber and Infrastructure Security Centre (CISC) has issued new guidance on SOCI entities’ notification obligations under the SOCI Act, including examples of what might (or might not) constitute a notifiable critical or other cyber security incident and a step-by-step guide to reporting an incident.
The guidance describes how cyber security incidents involve different phases of malicious activity, such as where an actor:
- “conducts reconnaissance (e.g. scan network gateways for open ports);
- delivers malicious software (e.g. sending emails that may include malicious attachments or direct the user to download a malicious file, i.e. phishing);
- exploits unauthorised access to install malicious code (e.g. installing ransomware), referred to as the ‘exploitation phase’; and
- undertakes subsequent malicious activities using that access (e.g. steal data or change how systems operate).”
The guidance makes clear that SOCI entities must submit a report if a cyber security incident is detected at or beyond the exploitation phase of malicious activity (despite any prevention or mitigation action taken).
-------
[1] The proposed prescribed asset classes are critical energy market operator, electricity, gas, liquid fuel, water, broadcasting, domain name systems, freight service and freight infrastructure assets.
NSW’s laws protecting workers from digital work systems may impact surveillance and monitoring
On 12 February 2026, NSW passed the Work Health and Safety Amendment (Digital Work Systems) Bill 2026 (NSW), amending the Work Health and Safety Act 2011 (NSW) to prevent risks to workers arising from digital work systems (“an algorithm, artificial intelligence, automation or online platform”).
The amendments will require employers in NSW to ensure, so far as is reasonably practicable, that the health and safety of workers is not put at risk from the use of, or the allocation of work by, an employer’s digital work system. Employers will also need to consider risks such as excessive or unreasonable workloads, performance metrics and monitoring or surveillance, as well as discriminatory practices or decision-making – each a well-recognised driver of physical and psychological harm. These requirements will commence one month after SafeWork NSW publishes guidelines under the WHS Act. In preparing the guidelines, SafeWork NSW must undertake public consultation and consider any feedback.
Union officials will also have new powers to access workplaces and inspect digital work systems relevant to suspected contraventions, subject to notice requirements. While there may be concerns about privacy and data security implications, these may be addressed in the SafeWork NSW guidelines.
Although employers’ key obligations have not yet commenced, employers can stay ahead of the curve by reviewing their use of digital work systems, AI and performance metrics, and their impacts on workers. Employers may also wish to consider engaging with the NSW Government and SafeWork NSW so that concerns can be taken into account when supporting materials are developed.
New consumer laws target digital user journeys and subscriptions
The Competition and Consumer Amendment (Unfair Trading Practices) Bill 2026 (Cth), introduced into Parliament on 1 April 2026, aims to strengthen Australian consumer laws by targeting:
- ‘unfair’ conduct that falls short of the elements of existing prohibitions;
- drip pricing; and
- subscriptions with consumers and small businesses.
Central to the reforms is a new, economy‑wide prohibition on unfair trading practices, targeting conduct that manipulates consumers or unreasonably distorts their decision‑making environment.
While the prohibition is technology-neutral, it has come about due to concern about ‘dark patterns’ in digital user journeys that force or lead consumers into businesses’ preferred actions, such as:
- confusing or complex menus with pre-selections or defaults;
- making options a consumer is trying to access difficult to find;
- omitting or obfuscating material information; and
- exerting pressure during a transaction process.
For digital businesses, it will be important to assess and critically analyse user journey design.
In addition, there are a range of new requirements for subscription‑based businesses, with new transparency, reminder and cancellation requirements applying to subscriptions with consumers and small businesses.
While these reforms will directly affect businesses with digital user journeys and those that sell subscriptions to consumers and small businesses, they will also indirectly affect the platform providers that sell to those businesses. Platform providers will be expected to design platforms that address these requirements and risks.
The laws are proposed to commence on 1 July 2027. For more detail on the proposed laws, see our colleagues’ article. Those interested in consumer law will also want to be aware of new legislation doubling the maximum monetary penalty and the review of the updates to the unfair contract terms (UCT) regime.
Latest eSafety and online safety developments
The eSafety Commissioner (eSafety) continues to be an active regulator in Australia. One of eSafety’s current focuses is assessing the effectiveness of, and enforcing compliance with, Australia’s Social Media Minimum Age (SMMA) regime and related measures.
- A new legislative rule has clarified the scope of ‘age-restricted social media platforms’ – the SMMA obligation applies to services that adopt ‘recommender systems’ (i.e. algorithms) and/or at least one of the following features (referred to as ‘logged-in features’):
- an ‘endless-feed feature’ (e.g. news feed with infinite material);
- a ‘feedback feature’ (e.g. displaying the number of ‘likes’ or ‘upvotes’ a user has received); or
- a ‘time-limited feature’ (e.g. disappearing ‘stories’).
- In March 2026, eSafety published its first compliance report on implementation of the SMMA obligation by 10 age-restricted social media platforms – Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter) and YouTube. The report notes that, although these platforms have taken some steps to comply with the SMMA obligation, a substantial number of children aged under 16 retain accounts and eSafety is actively investigating potential non-compliance by a number of platforms. The report also reinforces the expectation that platforms should:
- provide clear messaging to users about age assurance;
- apply a layered or waterfall approach (of successive validation) to age determination; and
- provide accessible reporting pathways.
- eSafety has also commenced a study on the SMMA obligation, in partnership with Stanford University’s Social Media Lab and an ‘Academic Advisory Group’ of 11 Australian and international experts. The study will assess how the SMMA obligation is being implemented, examine impacts and provide insights to guide future decision-making, with findings to be progressively released from later this year.
eSafety has separately announced that it is closely monitoring Roblox’s compliance with online safety industry codes and standards, including the effectiveness of earlier commitments made by Roblox to address concerns relating to children’s use of the service. eSafety has also issued transparency notices to several other gaming platforms to investigate the steps they are taking to address the risks of grooming, radicalisation, cyber bullying and online hate on those platforms.
To complement the SMMA obligation, the remaining mandatory ‘Age Restricted Material Codes’ came into effect on 9 March 2026, completing the suite of binding, industry-wide obligations to prevent children’s exposure to age-inappropriate content. These Codes require service providers (including those which might not otherwise be subject to the SMMA obligation) to implement meaningful safeguards, including age assurance measures (which will come into effect on 9 September 2026), restrictions for AI systems (such as AI companions) and default blurring of certain search results. eSafety has warned it will actively monitor and assess compliance with the new Codes and will take enforcement action where there is systemic non-compliance.
From a privacy perspective, the OAIC has also published new guidance on age assurance technologies, emphasising the need to carefully balance the requirements of the Online Safety Act 2021 (Cth) with privacy rights and impacts. As noted in a recent media release, the OAIC continues to emphasise the need to respect children’s privacy, including through the Children’s Online Privacy Code. In addition, the OAIC and eSafety have entered into a memorandum of understanding addressing cooperation and information sharing to better facilitate each other’s regulatory activities.
When it comes to the online safety of adults, Senator Fatima Payman has introduced a private member’s bill, proposing to amend the Online Safety Act 2021 (Cth) to expand protections for adult cyber abuse.
Other updates
In the privacy space:
- The OAIC has participated in the 2025 Global Privacy Enforcement Network’s privacy sweep of websites and apps used by children, which found that risks to children’s privacy are on the rise.
- Entities previously exempt from Privacy Act obligations as small businesses should note they will now be subject to privacy obligations if they are regulated by Australia’s expanded anti-money laundering and counter-terrorism financing (AML/CTF) regime. The OAIC has updated its guidance to reflect this.
- The OAIC has released a privacy assessment of the Document Verification Service (DVS) Hub, with a particular focus on governance and ensuring documentation such as the participating agreements and privacy statements is accurate and complete.
- The OAIC has made changes to its privacy complaints handling to address its current backlog. It expects individuals to attempt to resolve privacy issues with organisations directly and to provide adequate information about a complaint when referring it to the OAIC. The OAIC will not take all individual complaints through to investigation, especially where they do not meet a seriousness threshold. Some complaints will be redirected to Commissioner‑initiated investigations, guidance or broader regulatory action instead of individual outcomes.
In the cyber security space, 4 March 2026 marked the commencement of the smart device cyber security standards set out in the Cyber Security (Security Standards for Smart Devices) Rules 2025 (Cth) under the Cyber Security Act 2024 (Cth). The security standards address requirements for passwords, reporting security issues and defining support periods and security updates. Manufacturers are required to issue statements of compliance when supplying a regulated smart device.
In the telecommunications and spam space:
- Lululemon has paid a $702,900 penalty and given an enforceable undertaking after the ACMA found it sent more than 370,000 emails without unsubscribe facilities. This is a timely reminder that including marketing and promotional content in otherwise factual and transactional messages (e.g. shipping updates) engages e-marketing obligations under the Spam Act.
- ACMA has announced it will replace the current Telecommunications Consumer Protections (TCP) Code with a new regulatory framework intended to deliver stronger protections for telco customers and to give the ACMA greater enforcement powers.
- The Triple Zero Custodian will undertake a review of Triple Zero legislation and regulations, delivering on a recommendation from the March 2024 report into a Triple Zero outage.
In the data space, following a review of Commonwealth secrecy provisions, a Bill has been introduced to Parliament to remove criminal liability from more than 300 secrecy provisions. The intention is to ensure that criminal liability only arises where necessary and proportionate to protect the most sensitive information. There is also a requirement to obtain ministerial consent before prosecuting a journalist and related administrative staff for secrecy offences.
In the AI space:
- A lawsuit brought by Nippon Life Insurance against OpenAI in the US alleges tortious interference with contract, abuse of process and the unlicensed practice of law by ChatGPT, after a Nippon Life insured used ChatGPT to prepare legal material and arguments to re-open a previously settled claim. The case raises questions about whether and how AI can give professional advice.
- The NSW Government has updated its AI Assessment Framework to replace subjective self‑assessments with a more standardised, guidance‑driven approach.
More generally, recent intellectual property developments will be of interest to those watching the technology and digital space:
- Parliament has passed amendments to Australia’s copyright legislation to provide for a framework for ‘orphan works’ where the copyright owner cannot be found, and to modernise provisions to address the use of copyright material in education.
- In this article, our colleagues explain the impact of the Federal Court’s decision in McCallum v Projector Films and how to get moral rights waivers and consents right.
For those interested in data centres, JWS regularly publishes updates as part of its ‘Data centre development in focus’ series. Recent articles explore:
- the NSW Legislative Council’s inquiry into data centres, including how policymakers are grappling with energy use, land planning, critical infrastructure risk and the rapid expansion of data centre developments across the state; and
- building social licence for data centres, examining the regulatory and commercial pressures shaping Australia’s data centre boom, from planning approvals and grid access to community expectations and maintaining social licence in an increasingly energy‑constrained environment.
You can also sign up to be notified of other JWS publications, including the popular Above Board series, by subscribing.
Looking ahead
Private sector entities will be permitted to apply to access and use the Australian Government Digital ID System from 30 November 2026, opening up new opportunities to enhance privacy and minimise collection of identity documents. This expansion coincides with the due date for the statutory review of the Digital ID Act 2024 (Cth).
The OAIC is preparing to release an issues paper on automated decision‑making (ADM) ahead of the commencement of new transparency obligations in December 2026, which will set out regulatory expectations about when ADM transparency obligations apply and how they should be addressed in privacy policies.
The OAIC has indicated that it is progressing enforcement activity in sectors relying heavily on digital platforms and connected technologies, with investigations nearing completion in the rental technology space and active investigations into overseas vehicle manufacturers focusing on voice monitoring and in‑vehicle data collection. It is yet to publish the outcomes of this activity.
It is safe to say that privacy, cyber, AI and data remain top issues for organisations doing business in Australia.
How can we assist?
We have a large team of privacy, data protection and cyber specialists, with substantial experience across the whole spectrum of data, privacy and cyber compliance and incident management.
For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation to manage its risks in these rapidly evolving areas, please get in touch.