Online harm: an expanding responsibility for online platforms

Written by Daniel Thompson (Partner) and LuAnna Han (Law Graduate)

Background

In line with recent global trends, online platforms have come under scrutiny from Australian regulators and legislators on numerous fronts, ranging from market power and competition to consumer rights and privacy.

Following Germany’s NetzDG laws and the introduction of the UK’s Online Safety Bill, Australia has undertaken its own online safety legislative reform with the Online Safety Act 2021 (Act), which comes into effect in January 2022 (six months after receiving royal assent on 23 July 2021). The Act will expand the legal responsibilities of a range of online service providers for the content they host, including in respect of cyberbullying, harassment, prohibited content, and violent material.

Online service providers face a range of legal risks in respect of user content, most of which are not new. Notably, these risks include potential liability for user content that is defamatory, copyright infringing, or in breach of other laws regulating offensive content. Mitigating these risks is not straightforward given the already enormous, and rapidly growing, volume of user content.

Although the Act appears primarily targeted at regulating Big Tech social media platforms, it will raise concerns for a far broader range of online service providers. The Act is stated to apply to providers of “social media services” and “relevant electronic services”, as well as to “internet service providers”. These concepts are broadly defined – for instance, “relevant electronic services” includes email, instant messaging and other forms of electronic chat, SMS, MMS, and multiplayer online gaming. Further, the Act extends to service providers that facilitate online activity and content, such as cloud infrastructure providers and internet service providers, who often do not have visibility of, or control over, the specific content that may be regulated.

Online service providers operating in Australia will need to reassess their user content policies, legal terms, and systems and processes used to monitor and moderate user content to ensure they are in a position to comply with the Act.

Existing legislative frameworks

Prior to the Act taking effect, a patchwork of existing legislation regulates online content, including:

  • Broadcasting Services Act 1992 (BSA);
  • Criminal Code Act 1995 and the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (AVMA); and
  • Enhancing Online Safety Act 2015 (EOSA).

This existing legislative framework imposes (often overlapping) obligations on online service providers that host, deliver, publish or facilitate the distribution or communication of certain types of content, such as age-classified content, ‘intimate images’, ‘abhorrent violent material’, and content that constitutes cyber-bullying. Such obligations include:

  • notifying regulators and law enforcement on becoming aware of abhorrent violent material;
  • complying with the cyber abuse scheme, including take-down notices and directions to delete links issued by regulators in respect of prohibited content or cyber-bullying targeted at Australian children; and
  • maintaining a complaints-based mechanism through which they can be alerted to material of concern.

The eSafety Commissioner (the Commissioner), an independent statutory office supported by the Australian Communications and Media Authority (ACMA), is the primary regulator under this legislative framework. The Commissioner may impose penalties for non-compliance, which vary depending on the specific breach and its severity and duration, but can be significant (e.g. the greater of AUD 11.1 million or 10% of annual turnover for failure to remove abhorrent violent material).

The Act and obligations from January 2022

The Act consolidates and expands on existing legislative frameworks, including by:

  • expanding the cyber abuse scheme so as to apply to material targeting Australian adults (in addition to children) and to private messaging;
  • reducing the time for compliance with take-down and link-deletion notices from 48 hours to 24 hours;
  • introducing ‘Basic Online Safety Expectations’ (BOSE), which include ‘core expectations’ that relevant providers must meet, such as taking reasonable steps to minimise the extent to which harmful content is provided on their services; and
  • requiring reporting on BOSE compliance, which may include public disclosure of such reports.

The Act also extends the Commissioner’s investigative powers, including powers to obtain identifying information from relevant providers about anonymous accounts and to ‘do all things necessary or convenient’ for the performance of the Commissioner’s functions. Non-compliance with the BOSE may attract civil penalties, and the Commissioner may publicly name providers that do not meet the expectations, while publishing statements of compliance for those that do.

BOSE consultation and development

The Department of Infrastructure, Transport, Regional Development and Communications is conducting the consultation on the BOSE and has released a draft BOSE determination for public submissions. The consultation is due to close on 15 October 2021, with the BOSE determination due to be implemented in June 2022.

The BOSE are principles-based and drafted broadly, reflecting the policy intention of giving service providers ‘flexibility’ in how they meet the expectations. In addition to the core expectations, there are ‘additional expectations’, which provide further clarity as to what is expected of providers. The BOSE will apply to providers of social media services, ‘relevant electronic services’ and ‘designated internet services’.

Broadly, the draft determination is set out in three parts, which cover expectations regarding safe use, reporting and complaints mechanisms, and the provision of information to end-users and the Commissioner. Notably, under the proposed BOSE, online service providers will be required to:

  • make judgements on what constitutes ‘harmful’ content in order to take active steps to protect their end-users. There is a further expectation that, if the service uses encryption, the service provider will take reasonable steps to implement processes to detect and address material that is harmful or unlawful – an expectation of particular concern to cloud service providers who may not have visibility of their customers’ content;
  • take ‘reasonable steps’ to prevent anonymous accounts from being used for unlawful or harmful content. The Commissioner identifies reasonable steps as including processes that prevent the same person from repeatedly using anonymous accounts and/or processes for verifying identity; and
  • have a complaints mechanism in place for end-users and provide information to the Commissioner on complaints and breaches of terms of service. Further, the service provider will need to keep records of reports and complaints for 5 years from the date of the complaint or report, and may be required to provide such information to the Commissioner on request.

Industry concerns

The Act and the draft BOSE have given rise to concerns from a variety of industry participants regarding their far-reaching scope and the practical difficulties of achieving compliance. For example:

  • the Act applies to a broad range of cloud service providers. In some cases it may be impractical for cloud providers to remove specific content where they have no visibility of, or control over, their customers’ content (e.g. due to encryption). This is particularly true for infrastructure-as-a-service providers, which may need to suspend a customer’s entire computing environment in order to comply with a take-down notice in respect of specific offending content; and
  • instant messaging platforms may be required to remove offending material, including in private conversations, to avoid non-compliance. For platforms that use encryption such that only the parties to a conversation can view the communications, monitoring content and complying with removal notices raise significant technical and privacy concerns.

At this stage, it is not clear how such issues will be overcome across the vast array of online service providers likely to be subject to the Act. In many cases, compliance will likely require significant investment in the processes and automation technology needed to monitor and respond to harmful content.

Key takeaways

Online service providers should consider:

  • reviewing and updating systems and processes used to monitor and respond to cyberbullying and other harmful content, and to comply with take-down notices;
  • reviewing and updating their user terms and acceptable use policies; and
  • participating in the BOSE consultation process and the development of industry codes with industry participants and the Commissioner.

Important Disclaimer: The material contained in this article is commentary of a general nature only and is not, nor is it intended to be, advice on any specific professional matter. As the effectiveness or accuracy of any professional advice depends upon the particular circumstances of each case, neither the firm nor any individual author accepts any responsibility whatsoever for any acts or omissions resulting from reliance upon the content of any articles. Before acting on the basis of any material contained in this publication, we recommend that you consult your professional adviser. Liability limited by a scheme approved under Professional Standards Legislation (Australia-wide except in Tasmania).