As you will have seen in the media, the Australian government has banned DeepSeek from federal devices due to national security concerns. DeepSeek’s rise in prominence highlights various challenges for Australian businesses, which must adapt to the evolving world of AI. This article provides a round-up of current legal considerations associated with DeepSeek and AI more broadly, and offers insights into the challenges faced by investors, employers, regulators and other stakeholders.
Established in 2023, the Chinese start-up DeepSeek is disrupting the artificial intelligence industry. The DeepSeek story is particularly eye-catching, and not just because it has emerged from a hedge fund instead of a traditional tech giant.
The DeepSeek AI model has recently overtaken ChatGPT as the most downloaded free app on the Apple App Store in the United States. DeepSeek boasts of its ability to run effectively on fewer resources, resulting in costs that are a fraction of those of its counterparts. While many of these claims remain unsubstantiated, it is clear that DeepSeek’s rapid success in the market has sent shockwaves through the technology sector and challenged traditional thinking about the AI landscape.
The release of DeepSeek’s newest model this month, DeepSeek-R1, comes at an interesting time considering the recent announcement of the staggering ‘Stargate Project’, involving the joint venture created by OpenAI, Oracle, SoftBank and the investment firm MGX. The $500 billion investment plan to fund AI infrastructure appeared to be overshadowed by the release of DeepSeek-R1 – the disruptive and competitive AI model claiming to have required just $6 million to develop in 55 days.
The emergence of DeepSeek as a market competitor raises various macro considerations. The resulting market and regulator reaction – including the Australian government’s decision to ban the DeepSeek app from all government devices due to national security concerns, effective immediately – prompts us to consider various relevant legal issues, including investment in digital infrastructure and energy transition, impacts on tax regulation, employment and industrial relations, cyber security and data privacy issues, and IP protection.
Investment in digital and energy transition infrastructure
In recent years, there has been substantial investment in digital and energy transition infrastructure both to continue support for traditional services as well as support the advancement of AI capabilities. The ever-increasing demand for computing power, including to drive AI evolution, has fuelled many Australian investments in this sector. Sites suitable for development into large-scale data centres have been in high demand, commanding a price premium.
The emergence of DeepSeek has challenged traditional thinking about the infrastructure capacity requirements for the technology sector. There are still many unknowns associated with DeepSeek's impact. Investors are asking: What does DeepSeek's emergence mean for demand in data centres and energy? If DeepSeek lowers barriers to entry and drives greater efficiency, will this reduce demand for digital infrastructure and energy? Not necessarily – in fact, we think it unlikely.
Regardless, we expect that investors in the digital infrastructure and energy transition space will carefully consider their investment strategies to ensure agility and adaptability in response to further disruption in the AI landscape. This requires a focus on flexibility in contractual arrangements and adaptability in the developed built form. We anticipate this will remain a key focus in the coming months.
Notwithstanding the immediate shocks and short-term uncertainty (which are not uncommon in the technology market), what has happened serves as a reminder of some fundamentals:
- existing assets (and much of the immediate digital infrastructure pipeline) are already financed against non-AI services;
- the financing of future assets may face some temporary hurdles while the equity and debt markets come to terms with revised capex programs and rebalanced demand-led pricing (noting that on all estimates demand exceeds supply, and in the longer term, lowering cost will likely serve to increase overall demand);
- given the sensitive nature of data and its use, regulation (including foreign investment regulation) will continue to impact investment in the sector;
- it is possible that a user-led bifurcation of markets will occur (say between those servicing the consumer and the business-to-business market, as one example); and
- across the markets, from hyperscalers to start-ups and all adjacencies in between, an increase in competition and a degree of uncertainty and change will inevitably lead to an uptick in consolidation and M&A.
Tax regulation
Digitalisation of the world economy has also highlighted the challenge tax regulators around the globe face in allocating taxing rights to income generated by intangible assets that are used and exploited across various jurisdictions.
The Australian Taxation Office (ATO) currently has a focus on tax involving intangible assets, including, in particular:

- whether foreign entities have a taxable presence in Australia (e.g. a permanent establishment);
- whether the Australian group has been structured in a way that reduces Australian tax; and
- whether any payments made by Australian subsidiaries appropriately reflect the use, or right to use, intellectual property – which would result in an Australian royalty withholding tax liability.
The ATO is also actively pursuing these issues in the courts in order to seek clarity on tax positions taken. This means that new entrants like DeepSeek ought to expect scrutiny from the ATO if their products are being used and exploited within Australia.
Employment and industrial relations
The use of generative AI in the workplace is on the rise. Some reports indicate that up to 84 per cent of Australians are already using generative AI at work. At a time when labour productivity is in decline, this should be good news for employers and the broader community. AI could, for example, automate routine tasks, analyse data, reduce human error, and allow employees to focus on more meaningful and strategic work, leading to more efficient and productive workplaces.
Of course, in time, this could reshape the future of ‘work’. It has the potential to change the roles we need, the way we recruit for those roles, and how we manage performance. The Australian Council of Trade Unions has suggested that a third of Australian workers are at risk of job loss by 2030 due to the introduction of AI, advocating that ‘workers must be involved in every debate about AI – nothing about us, without us’. Indeed, unions have long battled against the automation of roles and are already contesting the introduction of such AI tools in the workplace. Recently, Woolworths faced industrial action across three warehouses because of, among other things, the introduction of an AI-based productivity framework. This will undoubtedly pose challenges for employers at the bargaining table for years to come.
However, perhaps the more immediate challenge faced by employers is controlling the information employees share with free and unsanctioned AI tools. Last year, for example, a child protection worker from the Victorian Government Department of Families, Fairness and Housing was found to have used ChatGPT to draft a ‘Protection Application Report’, a report submitted to the Children’s Court to inform decisions about when a child requires protection. The Department was investigated and found to have contravened the Victorian Information Privacy Principles.
There is also a real risk that any information shared with a free AI tool, as well as questions asked and answered, will be used by the tool’s makers or shared with a third party. It is certainly possible that Australian employers will follow the federal government’s lead and restrict the use of DeepSeek on company devices until some of these security concerns are addressed.
In any event, all employers should develop policies and provide training to their employees on the safe, ethical, and compliant use of generative AI in the workplace.
Cyber security and privacy
The rapid adoption of DeepSeek is happening amid increased scrutiny by the Australian government and regulators of the use of AI more generally, driven by concerns around privacy, cybersecurity, and ethical obligations. The swift uptake of major AI platforms has raised potential privacy challenges and security risks, as highlighted by Australian Science Minister Ed Husic and a recent study by the Bristol Cyber Security Group. Australia’s Privacy Commissioner, Carly Kind, has also noted this week that her team is “getting a lot of questions about whether the OAIC … will be looking into DeepSeek's handling of personal information”, and that she has “real concerns about the implications of generative AI for privacy”.
The Australian government has now banned DeepSeek from all federal government devices effective immediately, after advice from national security agencies that the app poses an “unacceptable risk” to Australian government technology. The mandatory direction requires federal government agencies to remove all instances of DeepSeek from government IT systems, to prevent the app from being installed, and to report to Home Affairs when these steps have been taken. Commenting on the ban, the Minister for Home Affairs, Tony Burke, noted that “AI is a technology full of potential and opportunity, but the government will not hesitate to act when our agencies identify a national security risk”.
While AI offers significant benefits for efficiency and productivity, organisations must carefully consider privacy and security implications before implementing generative AI (Gen AI) products, including by ensuring that the AI tools they use to handle personal information maintain confidentiality, for example through an appropriate enterprise-specific software instance. Important measures include evaluating how information is used and retained, assessing security risks, ensuring reliable outputs, maintaining transparent AI use disclosures, and avoiding the input of confidential information into public AI tools. In some cases, it may be appropriate to complete a privacy impact assessment. The Office of the Australian Information Commissioner (OAIC) expects organisations to approach AI use cautiously and ensure that all relevant acts and practices comply with the Privacy Act 1988 (Cth).
The NSW Supreme Court's Practice Note SC Gen 23 further emphasises the need for careful use of Gen AI in legal proceedings. As AI products like DeepSeek continue to evolve, organisations must conduct thorough due diligence and exercise caution in their deployment.
These developments are also taking place in the context of an evolving AI regulatory landscape. Australia’s AI Ethics Principles and the Voluntary AI Safety Standard contain voluntary principles in relation to the development and use of AI. The Australian government is also currently considering the implementation of mandatory AI guardrails for the use of AI in high-risk settings.
IP considerations
The emergence of DeepSeek underscores several copyright issues related to AI training and outputs under Australian law. Copyright infringement risks arise from the content used to train AI and the AI outputs that reflect those inputs. Unauthorised reproduction or communication of copyright works or other matter during AI training may constitute infringement, as copyright subsists in various works and matter without registration.
Australia's fair dealing defences are limited, and it is not clear that any of them would apply to AI training generally (though some are likely to apply in specific circumstances). Temporary copies made during communication or technical processes are also the subject of specific defences, which may not always apply. Establishing that a specific copyright work was reproduced by an AI program as part of the training or operation of the program can be challenging, but Australia's preliminary discovery provisions can help copyright owners gather necessary information. AI-generated outputs can also infringe copyright, for example if they reproduce well-known works in which copyright subsists, leading to complex questions about who (if anyone) authorised the infringement. The applicable copyright law will generally be the law of the place where the infringing act occurred, which could lead to "safe havens" with broader defences for AI training.
Enforcing judgments can be difficult, particularly in countries without reciprocal enforcement agreements with Australia, but judgments can be enforced against assets in Australia or in jurisdictions that recognise Australian judgments. These issues, together with the potential for class actions against AI platforms, are the subject of test cases around the world, including the landmark January 2025 UK High Court judgment in Getty Images (US) Inc and others v Stability AI Ltd [2025] EWHC 38.
Legislative solutions may be introduced to solve some of these challenges, including to ensure that media organisations are compensated when their work is utilised by AI platforms. The Australian government has announced that it will establish a News Bargaining Incentive to require digital platforms such as Meta to contribute to the sustainability of news media in Australia, so there is a precedent for this approach. The details of the proposed law have not yet been finalised, and will be the subject of consultation in early 2025. The Government’s announcement of it on 12 December 2024 included a statement that the “incentive builds on significant work underway to ensure Australian laws keep pace with digital technologies including … ongoing work related to artificial intelligence”.
Key takeaways for businesses
The rise of DeepSeek marks a significant shift in the AI landscape, challenging traditional models and prompting a re-evaluation of legal, infrastructural, tax, employment and industrial relations, and cybersecurity frameworks. As DeepSeek and others continue to disrupt the market with more cost-effective and efficient AI solutions, it is crucial for stakeholders to remain adaptable and focused on the practical impacts.
Organisations should conduct thorough due diligence, implement robust risk management strategies, and ensure compliance with evolving regulations to harness the benefits of AI while mitigating potential risks. The future of AI holds immense potential, and with careful consideration and proactive measures, businesses can prosper in the age of AI.