Digital Bytes – cyber, privacy & data update

Written by Helen Clarke (Partner), Viva Swords (Senior Associate) and Lydia Cowan-Dillon (Associate)

Quick summary

2024 is off to a brisk start in the cyber, privacy and data space – regulatory developments in cyber security and artificial intelligence (AI) continue at pace.

This instalment summarises the key updates on:

  • The Australian Government’s 2023-2030 Australian Cyber Security Strategy, and consultation on proposed legislative reform.
  • The release of the Cyber and Infrastructure Security Centre’s ‘Overview of Cyber Security Obligations for Corporate Leaders’.
  • Recent developments in the regulation of AI.
  • Generative AI and copyright law issues, including the establishment of a new reference group in Australia.
  • The Australian Government’s sanctions against the hacker behind the Medibank data breach.
  • The Australian Prudential Regulation Authority’s (APRA) data risk management study findings.
  • Updates to the Australian Government’s cyber security Essential Eight Maturity Model.

Also in the news:

  • NIST Cybersecurity Framework 2.0 released: The US National Institute of Standards and Technology (NIST) has released the updated NIST Cybersecurity Framework 2.0. The Framework is widely used, including by Australian organisations, to inform their cybersecurity risk management processes and procedures. Updated core guidance extends to all organisations (not just those in critical infrastructure). Updates introduce a new ‘Govern’ core function, which supports the other ‘Identify’, ‘Protect’, ‘Detect’, ‘Respond’ and ‘Recover’ functions (which relate to the lifecycle of an organisation’s management of cybersecurity risk), and highlights the importance of incorporating cybersecurity risk management into an organisation’s broader risk management strategy.
  • Digital ID laws progress: The Digital ID Bill 2023 (Digital ID Bill) released late last year continues to progress through Parliament.  It was referred to the Senate Economics Legislation Committee, which is due to provide a report on 28 February 2024.
  • Review of the Online Safety Act: A review of the Online Safety Act 2021 (Cth) (OSA) has been brought forward (earlier than is required under the Act) to ensure the framework remains fit for purpose. The review will consider the appropriateness and effectiveness of existing provisions (including the Basic Online Safety Expectations (BOSE)), and whether any additional protections are required (including to address online harms such as online hate and technology-facilitated abuse). An issues paper is expected to be released in the first half of this year, with a final report to be provided to the Minister by 31 October 2024.
  • New online safety industry codes: A number of mandatory codes under the OSA have come into effect. The mandatory codes are developed by industry to protect Australian end-users from harmful material and apply a uniform set of minimum compliance measures for certain service providers. Codes for social media service providers, app distributors, internet service providers, hosting providers, and device manufacturers and suppliers came into effect on 16 December 2023. A further industry code for internet search engine services, which will address generative AI models, comes into effect on 12 March 2024. Two further codes for designated internet services (e.g. websites) and relevant electronic services (e.g. online email services, online gaming services) are currently being developed.
  • Optus appeal on claim of legal professional privilege: Optus has appealed the decision of the Federal Court of Australia rejecting its claim of legal professional privilege in relation to a review of its 2022 data breach and related documents (covered in our previous Digital Bytes instalment).
  • OAIC’s latest data breach report highlights supply chain risks: The Office of the Australian Information Commissioner (OAIC) has released its latest notifiable data breaches report for the July to December 2023 period. The report highlights the increased occurrence of multi-party incidents involving breaches of cloud or software providers, and the need for organisations to proactively manage privacy and data risks in their contractual arrangements with third-party service providers. Measures suggested by the OAIC include having clear policies in place for handling personal information and a data breach response plan that assigns roles and responsibilities for managing an incident and meeting regulatory reporting obligations.
  • Update on the OAIC’s representative complaints: The OAIC has published an overview of its current proceedings in relation to representative complaints (the equivalent of ‘class actions’ but for privacy complaints to the OAIC). The Australian Information Commissioner has accepted two representative complaints against Medibank (in respect of its October 2022 data breach) and Optus (in respect of its September 2022 data breach). The Medibank and Optus representative complaints will be investigated concurrently with the Commissioner-initiated investigations, and information gathered in the Commissioner-initiated investigations will be used for the purposes of the representative complaints. The Federal Court has rejected an application by Medibank to restrain the representative complaint and the Commissioner-initiated investigations in favour of a separate class action. Medibank unsuccessfully argued that there was a real risk of inconsistent factual and legal findings being made in the separate proceedings.
  • ACMA enforcement: The Australian Communications and Media Authority (ACMA) has continued its spam enforcement focus, including a $302,500 penalty issued to Outdoor Supacentre Pty Ltd (trading as 4WD Supacentre) for sending more than 83,000 marketing text messages in breach of spam laws, and a $259,440 penalty to Medion Australia (a carriage service provider) for failing to comply with customer identification rules, resulting in a number of people falling victim to SIM-swap scams.

Australia releases its roadmap for cyber security regulation and strategy in the short and medium term

On 22 November 2023, the Australian Government released the 2023–2030 Australian Cyber Security Strategy (Strategy). The Strategy acknowledges that Australia is an ‘attractive’ target for cyber criminals, and emerging technologies are creating new opportunities and challenges for cyber security.  At a high level, the Strategy focuses on supporting small and medium businesses, strengthening critical infrastructure and enhancing government cyber security, improving regional and global cyber resilience, and responding to ransomware attacks.

The Strategy, which commits to Australia being a world leader in cyber security by 2030, establishes six ‘cyber shields’ as additional layers of defence for Australians and businesses against cyber threats:

  1. Strong businesses and citizens: citizens and businesses are better protected from cyber threats, and can recover quickly following a cyber attack.
  2. Safe technology: Australians can trust that their digital products and services are safe, secure and fit for purpose.
  3. World-class threat sharing and blocking: Australia has access to real-time threat data, and can block threats at scale.
  4. Protected critical infrastructure: Critical infrastructure and essential government systems can withstand and bounce back from cyber attacks.
  5. Sovereign capabilities: Australia has a ‘flourishing’ cyber industry, enabled by a diverse and professional cyber workforce.
  6. Resilient region and global leadership: Australia’s region is more cyber resilient, and will prosper from the digital economy. This includes continuing to uphold international law and norms and shape global rules and standards in line with shared interests.

The Strategy will be delivered in three phases:

  • ‘Horizon 1’ (2023 to 2025): Strengthening foundations by addressing critical gaps in Australia’s cyber shields, building better protections for vulnerable citizens and businesses, and supporting improved cyber maturity uplift across the region.
  • ‘Horizon 2’ (2026 to 2028): Scaling cyber maturity across the economy, including through further investments in the broader cyber ecosystem, continuing to scale-up the Australian cyber industry, and growing a diverse cyber workforce.
  • ‘Horizon 3’ (2029 to 2030): Advancing the global frontier of cyber security by leading the development of emerging cyber technologies capable of adapting to new cyber risks and opportunities.

The Government also released the Cyber Security Strategy Action Plan (Action Plan), which outlines the initiatives to be delivered in the first phase for each ‘cyber shield’.  Of particular interest to businesses, the Government will:

  1. make cyber ‘health checks’ (cyber maturity assessments), with tailored guidance on improving cyber security, available to small and medium businesses;
  2. establish a Small Business Cyber Security Resilience Service;
  3. create a ‘ransomware playbook’ to guide businesses to prepare for and manage ransomware and cyber extortion incidents;
  4. provide additional guidance summarising cyber governance obligations – see the update on CISC guidance below;
  5. consider options to develop a single reporting portal for cyber incidents to allow affected entities to more easily meet their regulatory reporting obligations;
  6. work with industry to design a voluntary data classification model that businesses can adopt to assess and classify their datasets; and
  7. encourage and incentivise businesses to share intelligence about threats.

As part of ‘Horizon 1’, the first phase of the Strategy, the Government has released a Consultation Paper on proposed legislative reforms. The reforms aim to strengthen Australia’s national cyber defences, build cyber resilience, and address gaps in the current legislative and regulatory framework through new cyber security legislation and amendments to the Security of Critical Infrastructure Act 2018 (Cth) (SOCI Act).

The proposed new cyber security legislation comprises:

  • Secure-by-design standards for Internet of Things devices: a mandatory cyber security standard for consumer-grade smart devices, in addition to a commitment under the Strategy to work with industry to develop a voluntary labelling scheme for those devices.
  • Ransomware reporting for businesses: introduction of legislated ‘no-fault, no-liability’ ransomware reporting obligations for businesses.
  • Limited use obligation on the Australian Signals Directorate and the National Cyber Security Coordinator: a ‘limited use’ obligation (not a ‘safe harbour’, as has previously been considered), which will clarify how information voluntarily disclosed during a cyber incident can be used, to encourage collaboration with the Government on incident response and management.
  • Cyber Incident Review Board: establishment of a ‘Cyber Incident Review Board’, which will conduct ‘no-fault’ incident reviews and share lessons learned to improve cyber resilience.

The proposed amendments to the SOCI Act seek to address gaps that limit the ability to prepare for, prevent and respond to cyber incidents – to clarify and enhance the security standards of critical infrastructure, consistently capture the secondary systems where vulnerabilities could have a ‘relevant impact’ on critical infrastructure, and allow for coordinated responses to incidents with appropriate support from the Government. The proposed amendments are:

  • Data storage systems and business critical data: extension of the SOCI Act to data storage systems that hold ‘business critical data’, and clarifying that material risks to be addressed in an entity’s critical infrastructure risk management program (CIRMP) include risks to data storage systems holding ‘business critical data’ (as well as the systems that access the data).
  • Consequence management powers: providing a broad ‘all-hazards power of last resort’ to allow the Minister for Home Affairs to respond to incidents (if there is no existing power available to support a fast and effective response). 
  • Simplifying protected information provisions: clarification of the operation of these provisions so that entities know when they can disclose information for the purposes of the operation or risk mitigation of their critical infrastructure assets, and to clarify information sharing rights between Government agencies.
  • Review and remedy powers: introduction of a formal, written directions power to address seriously deficient elements of a CIRMP.
  • Consolidation of telecommunications security requirements under the SOCI Act: consolidation of security regulation for the telecommunications sector, including by moving obligations under Part 14 of the Telecommunications Act 1997 (Cth) to the SOCI Act.

Consultation on the proposed legislative reform closes on Friday, 1 March 2024.  Submissions can be made via the consultation form.

CISC releases guidance for corporate leaders on cyber security obligations

In response to concerns raised about the complex cyber security regulatory environment during consultation on the Strategy (above), the Australian Government’s Cyber and Infrastructure Security Centre (CISC) has released an ‘Overview of Cyber Security Obligations for Corporate Leaders’.

The guidance identifies the rules, regulations and laws that apply to critical infrastructure sectors in:

  1. preparing for a cyber incident;
  2. reporting to regulators before, during or after a cyber incident; 
  3. responding to the consequences of a cyber incident,

and provides a summary of relevant obligations, including those under the Privacy Act 1988 (Cth), the SOCI Act and other regulatory instruments such as APRA prudential standards.

The guidance is intended to be read in conjunction with other domestic and international guidance as part of a best practice framework, including the ‘Cyber Security Principles’ published by the Australian Institute of Company Directors and the Cyber Security Cooperative Research Centre in 2022.

Developments in AI regulation

A patchwork of laws relating to corporate governance, privacy, intellectual property, online safety and anti-discrimination currently regulates AI. However, as adoption of AI grows and new legal risks emerge, more specific regulation is required to address gaps in the current regulatory framework.

The regulation of AI continues to develop:

1. The Australian Government has published its interim response to the 'Safe and responsible AI in Australia' discussion paper.

After considering submissions from interested parties, the Government’s interim response indicates that it will consider adopting a ‘risk-based’ approach with specific rules for the use of AI in high-risk settings, including healthcare, employment and law enforcement. This could include mandatory safeguards for the development or deployment of AI systems in legitimate, high-risk settings to ensure AI systems are safe when harms are difficult or impossible to reverse. The Government will engage in further consultation prior to introducing any legislation, and will also consult on other initiatives including a voluntary AI Safety Standard, a voluntary labelling/watermarking scheme for AI-generated material and establishing an expert advisory body to advise on other AI regulations and rules.

2. The Australian Signals Directorate’s (ASD) Australian Cyber Security Centre (ACSC) has released guidance on engaging with AI, which has been developed in collaboration with international partners.

The ACSC’s guidelines for engaging with AI focus on the safe and secure use of AI systems (rather than their development), and provide guidance on a range of threats to the safe use of AI (including case studies) and possible risk mitigation strategies.

3. The Digital Platform Regulators Forum (DP-REG), which is made up of the OAIC, the ACMA, the eSafety Commissioner and the Australian Competition and Consumer Commission (ACCC), has published working papers on algorithms and AI.

The DP-REG’s working papers focus on understanding the risks and harms, as well as evaluating the benefits, of algorithms and generative AI. The working papers also provide some relevant examples of proposed or enacted regulatory initiatives that are aimed at addressing the identified risks and harms.

4. The chair of the Australian Securities and Investments Commission (ASIC), Joe Longo, has given a keynote address, focussing on the current and future frameworks for regulation of AI.

Longo noted that responsibility for good governance does not change just because the technology is new, and that the existing regulatory ‘toolkit’ allows ASIC to regulate AI. However, he also acknowledged that more can be done to specifically regulate AI, particularly in relation to “transparency, explainability and rapidity”.

Generative AI remains in the spotlight

The legal issues associated with the use of generative AI continue to evolve. While intellectual property has long been identified as a potential risk area, recent litigation brings it sharply into focus.

Whether the use of content in training AI models and generating content constitutes ‘fair use’ under US copyright law is being considered in the New York Times’ (The Times) copyright lawsuit against OpenAI and Microsoft.

The Times alleges that by using its articles to train the ChatGPT and Copilot chatbots without authorisation, OpenAI and Microsoft are using its journalism to generate competing material. In particular, The Times says its copyright has been infringed by OpenAI and Microsoft:

  • building datasets for training that contain millions of copies of The Times’ works;
  • using that data for training the AI;
  • storing, processing and reproducing the AI models, which have ‘memorised’ The Times’ works; and
  • disseminating generative content, which contains copies and derivatives of The Times’ works.

While Australia’s equivalents to the US ‘fair use’ exception are far narrower, this legal action also raises questions about the extent to which a user of a generative AI system may be liable for IP infringement by the system.

Separately, the Australian Government has announced it will establish a reference group to consider issues associated with copyright and AI generated content. The reference group will engage with stakeholders in relation to a number of important copyright issues, including use of material to train AI models, transparency of inputs and outputs, the use of AI to create imitative works, and whether and when AI-generated works should receive copyright protection.

Australian Government’s first cyber crime sanctions imposed on hacker behind Medibank data breach

The Australian Government has, for the first time, exercised its power to impose cyber sanctions under the Autonomous Sanctions Act 2011 (Cth) on a Russian national for his role in the unauthorised release and publication of 9.7 million records containing Australians’ personal information, including names, dates of birth, Medicare numbers and sensitive medical information.

The sanctions make it a criminal offence to provide assets to this individual or to use or deal with his assets, including through cryptocurrency wallets or ransomware payments. The offence is punishable by up to 10 years’ imprisonment or large fines. The sanctioned individual is also banned from travelling to, or remaining in, Australia.

The US Department of the Treasury and the UK Sanctions Minister have also announced similar sanctions against the hacker.

While sanctioning one individual is unlikely to have a significant practical or deterrent effect on cyber crime more broadly, it demonstrates the willingness of governments to exercise sanctions powers. It is also a timely reminder for organisations to ensure that their cyber incident response plans are up to date and include consideration of sanctions lists when deciding whether to make ransomware payments. While paying a ransom is not unlawful per se, if the cyber attacker is a sanctioned individual, a payment is likely to breach sanctions laws (subject to any available defences).

APRA’s data risk study indicates more work to do to achieve better practice data risk management

The Australian Prudential Regulation Authority (APRA) has released the findings of a multi-year pilot study with a selection of banks to understand the status of data risk management.

While APRA found there have been recent improvements in data practices, which have been driven in part by APRA’s supervisory activities, progress is slow and there is a significant gap between current and better practice in relation to data risk management.

As part of its findings, APRA noted the following as ‘better practice activities’:

  • implementing and adopting organisation-wide data programs and processes;
  • ensuring an organisation’s chosen technology strategy is scalable and adaptable to changes in business requirements;
  • improving data accessibility by offering ‘data as a product’, where data domains are used to create ready-to-use data sets that can be accessed across the organisation;
  • ensuring there is an effective data issues management framework in place, which governs how data issues are identified, assessed, and remediated, while addressing the root cause, on an ongoing basis; and
  • using Governance Risk and Compliance (GRC) systems to support data risk reporting enhancement.

The study recommended a number of areas for improvement by all APRA-regulated entities:

  1. Establish data governance with a unified data strategy.
  2. Provide clarity on roles and responsibilities for ownership of critical data elements and processes across the data lifecycle.
  3. Simplify the technology and data architecture environment through improved platform solutions and by decommissioning legacy assets.
  4. Identify critical data elements and create a consistent set of data controls.
  5. Establish mechanisms to monitor data quality and ensure timely remediation of errors based on business requirements.
  6. Integrate data management risk into risk management frameworks.

APRA has indicated that it will continue its focus on data risk management through its Operational Risk Management prudential standard, CPS 230, which is scheduled to commence in July 2025 – for more, see our August 2023 Digital Bytes.

Australian Cyber Security Centre updates Essential Eight Maturity Model

The Essential Eight framework sets out eight essential cyber risk mitigation strategies recommended to help businesses, organisations and government better protect their internet-connected IT networks from cyber threats. These mitigation strategies include patching applications and operating systems, enabling multi-factor authentication and regularly backing up data.

The Essential Eight Maturity Model (E8MM) provides advice on how to implement the Essential Eight, and is updated annually to ensure it provides cyber security advice that is contemporary, fit for purpose and practical. The E8MM provides for four maturity levels, which assist an organisation to identify and plan for a target maturity level that is suitable for the organisation.

At the end of last year, the E8MM was updated, including to balance patching timeframes, increase adoption of phishing-resistant multi-factor authentication, support management of cloud services, and perform incident detection and response for internet-facing infrastructure. Organisations seeking to achieve compliance with a particular Essential Eight maturity level should review the changes and uplift their practices and procedures.

What else is in store for 2024?

This year, we are likely to see:

  • legislative reform – particularly the release of exposure draft legislation and further consultation in relation to the Privacy Act review, and progress on the cyber security legislative reforms described above.
  • more active regulators and enforcement action – including the OAIC’s legal proceedings against Australian Clinical Labs (ACL), in which the OAIC alleges that ACL seriously interfered with the privacy of millions of Australians by failing to take reasonable steps to secure personal information, and failing to comply with the notifiable data breaches scheme in relation to a data breach. We are also expecting to see progress in the OAIC’s proceedings against Facebook and Google, and its investigation into retail use of facial recognition.
  • further coordination between Australian and international regulators – including the work of the DP-REG.
  • regulation of AI – including regulation of high-risk uses of AI, changes to the Privacy Act in relation to automated decision making, and the EU’s new AI Act.
  • developments in copyright law and its interaction with generative AI – including arising from the Australian Government’s new reference group.

For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation manage its risks in these rapidly evolving areas, please get in touch.

Important Disclaimer: The material contained in this article is comment of a general nature only and is not and nor is it intended to be advice on any specific professional matter. In that the effectiveness or accuracy of any professional advice depends upon the particular circumstances of each case, neither the firm nor any individual author accepts any responsibility whatsoever for any acts or omissions resulting from reliance upon the content of any articles. Before acting on the basis of any material contained in this publication, we recommend that you consult your professional adviser. Liability limited by a scheme approved under Professional Standards Legislation (Australia-wide except in Tasmania).
