2024 is off to a brisk start in the cyber, privacy and data space – regulatory developments in cyber security and artificial intelligence (AI) continue at pace.
This instalment summarises the key updates on:
Also in the news:
On 22 November 2023, the Australian Government released the 2023–2030 Australian Cyber Security Strategy (Strategy). The Strategy acknowledges that Australia is an ‘attractive’ target for cyber criminals and that emerging technologies are creating new opportunities and challenges for cyber security. At a high level, the Strategy focuses on supporting small and medium businesses, strengthening critical infrastructure and enhancing government cyber security, improving regional and global cyber resilience, and responding to ransomware attacks.
The Strategy, which commits Australia to being a world leader in cyber security by 2030, introduces six ‘cyber shields’ as additional layers of defence for Australians and businesses against cyber threats:
The Strategy will be delivered in three phases:
The Government also released the Cyber Security Strategy Action Plan (Action Plan), which outlines the initiatives to be delivered in the first phase for each ‘cyber shield’. Of particular interest to businesses, the Government will:
As part of ‘Horizon 1’, the first phase of the Strategy, the Government has released a Consultation Paper on proposed legislative reforms, which aim to strengthen Australia’s national cyber defences, build cyber resilience, and address gaps in the current legislative and regulatory framework through new cyber security legislation and amendments to the Security of Critical Infrastructure Act 2018 (Cth) (SOCI Act).
The proposed new cyber security legislation comprises:
The proposed amendments to the SOCI Act seek to address gaps that limit the ability to prepare for, prevent and respond to cyber incidents – by clarifying and enhancing the security standards applicable to critical infrastructure, consistently capturing secondary systems in which vulnerabilities could have a ‘relevant impact’ on critical infrastructure, and allowing for coordinated responses to incidents with appropriate support from the Government. The proposed amendments are:
Consultation on the proposed legislative reform closes on Friday, 1 March 2024. Submissions can be made via the consultation form.
In response to concerns raised about the complex cyber security regulatory environment during consultation on the Strategy (above), the Australian Government’s Cyber and Infrastructure Security Centre (CISC) has released an ‘Overview of Cyber Security Obligations for Corporate Leaders’.
The guidance identifies the rules, regulations and laws that apply to critical infrastructure sectors in:
and provides a summary of relevant obligations, including those under the Privacy Act 1988 (Cth), the SOCI Act and other regulatory instruments such as APRA prudential standards.
The guidance is intended to be read in conjunction with other domestic and international guidance as part of a best practice framework, including the ‘Cyber Security Principles’ published by the Australian Institute of Company Directors and the Cyber Security Cooperative Research Centre in 2022.
A patchwork of laws relating to corporate governance, privacy, intellectual property, online safety and anti-discrimination currently regulates AI. However, as adoption of AI grows and new legal risks emerge, more specific regulation is required to address gaps in the current regulatory framework.
The regulation of AI continues to develop:
1. The Australian Government has published its interim response to the 'Safe and responsible AI in Australia' discussion paper.
After considering submissions from interested parties, the Government’s interim response indicates that it will consider adopting a ‘risk-based’ approach with specific rules for the use of AI in high-risk settings, including healthcare, employment and law enforcement. This could include mandatory safeguards for the development or deployment of AI systems in legitimate, high-risk settings to ensure AI systems are safe when harms are difficult or impossible to reverse. The Government will engage in further consultation before introducing any legislation, and will also consult on other initiatives, including a voluntary AI Safety Standard, a voluntary labelling/watermarking scheme for AI-generated material and the establishment of an expert advisory body to advise on other AI regulations and rules.
2. The Australian Signals Directorate’s (ASD) Australian Cyber Security Centre (ACSC) has released guidance on engaging with AI, developed in collaboration with international partners.
The ACSC’s guidelines for engagement with AI focus on the safe and secure use of AI systems (rather than their development), and provide guidance on a range of threats to the safe use of AI (including case studies) and possible risk mitigation strategies.
3. The Digital Platform Regulators Forum (DP-REG), which is made up of the Office of the Australian Information Commissioner (OAIC), the Australian Communications and Media Authority (ACMA), the eSafety Commissioner and the Australian Competition and Consumer Commission (ACCC), has published working papers on algorithms and AI.
The DP-REG’s working papers focus on understanding the risks and harms, as well as evaluating the benefits, of algorithms and generative AI. The working papers also provide relevant examples of proposed or enacted regulatory initiatives aimed at addressing the identified risks and harms.
4. The chair of the Australian Securities and Investments Commission (ASIC), Joe Longo, has given a keynote address focusing on the current and future frameworks for the regulation of AI.
Longo noted that responsibility for good governance does not change just because the technology is new, and that the existing regulatory ‘toolkit’ allows ASIC to regulate AI. However, he also acknowledged that more can be done to specifically regulate AI, particularly in relation to ‘transparency, explainability and rapidity’.
The legal issues associated with the use of generative AI continue to evolve. While intellectual property has long been identified as a potential risk area, recent litigation brings it sharply into focus.
Whether the use of copyright-protected content to train AI models and generate new content constitutes ‘fair use’ under US copyright law is being considered in the New York Times’ (The Times) copyright lawsuit against OpenAI and Microsoft.
The Times alleges that, by using its articles to train the ChatGPT and Copilot chatbots without authorisation, OpenAI and Microsoft are using its journalism to generate competing material. In particular, The Times says its copyright has been infringed by OpenAI and Microsoft:
While Australia’s equivalents to the US ‘fair use’ exception are far narrower, this legal action also raises questions about the extent to which a user of a generative AI system may be liable for IP infringement by the system.
Separately, the Australian Government has announced it will establish a reference group to consider issues associated with copyright and AI-generated content. The reference group will engage with stakeholders on a number of important copyright issues, including the use of material to train AI models, transparency of inputs and outputs, the use of AI to create imitative works, and whether and when AI-generated works should receive copyright protection.
The Australian Government has, for the first time, exercised its power to impose cyber sanctions under the Autonomous Sanctions Act 2011 (Cth) on a Russian national for his role in the unauthorised release and publication of 9.7 million records containing Australians’ personal information, including names, dates of birth, Medicare numbers and sensitive medical information.
The sanctions make it a criminal offence to provide assets to this individual, or to use or deal with his assets, including through cryptocurrency wallets or ransomware payments. The offence is punishable by up to 10 years’ imprisonment or large fines. The sanctioned individual is also banned from travelling to, or remaining in, Australia.
The US Department of the Treasury and the UK Sanctions Minister have also announced similar sanctions against the hacker.
While sanctioning one individual is unlikely to have a significant practical or deterrent effect on cyber crime more broadly, it demonstrates the willingness of governments to exercise sanctions powers. It is also a timely reminder for organisations to ensure that their cyber incident response plans are up to date and include consideration of sanctions lists when deciding whether to make ransomware payments. While paying a ransom is not unlawful per se, if the cyber attacker is a sanctioned individual, a payment is likely to breach sanctions laws (subject to any available defences).
The Australian Prudential Regulation Authority (APRA) has released the findings of a multi-year pilot study with a selection of banks to understand the status of data risk management.
While APRA found there have been recent improvements in data practices, which have been driven in part by APRA’s supervisory activities, progress is slow and there is a significant gap between current and better practice in relation to data risk management.
As part of its findings, APRA noted the following as ‘better practice activities’:
The study recommended a number of areas for improvement by all APRA-regulated entities:
APRA has indicated that it will continue its focus on data risk management through its Operational Risk Management prudential standard, CPS 230, which is scheduled to commence in July 2025 – for more, see our August 2023 Digital Bytes.
The Essential Eight framework sets out eight essential cyber risk mitigation strategies recommended to help businesses, organisations and government better protect their internet-connected IT networks from cyber threats. These mitigation strategies include patching applications and operating systems, enabling multi-factor authentication and regularly backing up data.
The Essential Eight Maturity Model (E8MM) provides advice on how to implement the Essential Eight, and is updated annually to ensure it provides cyber security advice that is contemporary, fit for purpose and practical. The E8MM provides for four maturity levels, which assist an organisation to identify and plan for a target maturity level that is suitable for the organisation.
At the end of last year, the E8MM was updated, including to balance patching timeframes, increase adoption of phishing-resistant multi-factor authentication, support management of cloud services, and perform incident detection and response for internet-facing infrastructure. Organisations seeking to achieve compliance with a particular Essential Eight maturity level should review the changes and uplift their practices and procedures.
This year, we are likely to see:
For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation manage its risks in these rapidly evolving areas, please get in touch.