Australia
Artificial Intelligence
This is the first edition of the Law Over Borders Artificial Intelligence guide.
Introduction
The Australian Government is committed to becoming a world leader in the development and adoption of trusted, secure and responsible AI. While Australia’s AI industry is in its nascent stages, a 2019 report by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Data61 titled “Artificial Intelligence: Solving problems, growing the economy and improving our quality of life” estimated that AI technologies will contribute AUD 315 billion to the Australian economy by 2028.
The Australian Government’s Digital Economy Strategy, which aims to position Australia as a global leader in AI technology by 2030, includes a targeted AUD 124.1 million AI Action Plan. The AI Action Plan, established in June 2021, includes the following initiatives to realise this vision:
- the National AI Centre within CSIRO’s Data61, to coordinate Australia’s AI expertise and capabilities and address barriers to small and medium enterprises (SMEs) adopting and developing AI;
- the Next Generation AI Graduates program to attract and train AI specialists through a national scholarship program in collaboration with Australian universities and industry bodies; and
- a number of grant programs to support regional development of AI and SME adoption of AI (e.g., the AI and Digital Capability Centres grant).
There is no dedicated legislative regime in Australia regulating AI, Big Data or any form of automated decision-making. For the present, existing laws of general application apply.
There are gaps in Australia’s existing laws with respect to novel issues raised by the use of AI technologies, such as the ownership of AI-generated works or inventions. It is expected that these gaps will be addressed in a future Australian AI law.
Aside from existing laws, there are a number of government publications which will guide the future development of Australia’s AI regulations, including:
- The Australian Human Rights Commission’s 2021 Human Rights and Technology report, which sets out a number of key responsible AI recommendations.
- The CSIRO and Data61’s 2019 AI Technology Roadmap, which identifies potential areas of AI specialisation for Australia.
- Standards Australia’s 2020 AI Standards Roadmap, which provides a framework for the development of future standards with respect to the use of AI in Australia.
- The Australian Department of Industry, Science, Energy and Resources’ 2019 AI Ethics Framework, which sets out AI Ethics Principles, as follows:
- “human, societal and environmental wellbeing” which purports that AI systems should benefit individuals, society and the environment, and the impacts of AI systems should be accounted for throughout their life cycle;
- “human-centred values” which purports that AI systems should respect human rights, diversity and the autonomy of individuals;
- “fairness” which purports that AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups;
- “privacy protection and security” which purports that AI systems should respect and uphold privacy rights and ensure the security and protection of data;
- “reliability and safety” which purports that AI systems should reliably operate in accordance with their intended purpose;
- “transparency and explainability” which purports that there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI is engaging with them;
- “contestability” which purports that when an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system; and
- “accountability” which purports that people responsible for the different phases of the AI system life cycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
1 . Constitutional law and fundamental human rights
1.1. Domestic constitutional provisions
The Australian Constitution does not contain any express provisions in respect of AI. Section 51(v) of the Constitution empowers the Australian Parliament to legislate with respect to “postal, telegraphic, telephonic and other like services”, and this head of power may be available to support legislation relating to AI.
Separately, the Australian Constitution enshrines only limited human rights provisions which might operate to restrict the use of AI technologies in Australia or otherwise protect individuals’ rights from a constitutional perspective.
1.2. Human rights decisions and conventions
The Australian Human Rights Commission (HRC) has played a leading role in guiding the development of AI regulation, reflecting the overall sentiment that human rights considerations are likely to be central to the adoption of AI in Australia. In its 2021 Human Rights and Technology Final Report (HRC Report), the HRC proposed a number of recommendations, including:
- The creation of an independent regulator (AI Safety Commissioner) to promote safety and protect human rights in the development and use of AI. In particular, the HRC advocated that the AI Safety Commissioner should be empowered to assess the impact of the development and use of AI on vulnerable and marginalised people in Australia.
- Mandating the completion of Human Rights Impacts Assessments in respect of government use of AI, and requiring government agencies to provide notice regarding their use of AI. Furthermore, individuals subjected to government decisions made with the use of AI should be provided with a right to reasons explaining the basis of the decision, and recourse to an independent merits review tribunal in respect of such decisions.
- Encouraging private sector organisations’ use of the AI Ethics Principles framework when developing and deploying AI technologies.
- A general moratorium on the use of biometric technologies in the context of high-risk decision making, subject to further reform to ensure better human rights and privacy protections regarding the use of such technologies.
2 . Intellectual property
2.1. Patents
The main source of law governing patents in Australia is the Patents Act 1990 (Cth) (Patents Act). There are no express references to AI under the Patents Act. However, there has been contention as to whether an AI system can be named as the “inventor” on a patent application. In Thaler v. Commissioner of Patents [2021] FCA 879, the Federal Court held at first instance that the relevant AI system (i.e., DABUS) could be considered an “inventor” within the meaning of Section 15(1) of the Patents Act, on the basis that an “inventor is an agent noun” and “an agent can be a person or a thing that invents”.
On appeal, the Full Court of the Federal Court of Australia in Commissioner of Patents v. Thaler [2022] FCAFC 62 overturned the first instance decision, holding that an “inventor”, within the meaning of Section 15(1) of the Patents Act, must be a natural person, citing the historical role of the inventor in patent law, the plain reading of the section, and the structure and policy objectives of the Patents Act. As at the date of publication, an application for leave to appeal the Full Court’s decision has been lodged with the High Court of Australia and the proceedings are ongoing.
These decisions are a prelude to the imminent policy debate in Australia regarding inventions involving AI. Interestingly, the Full Court considered:
- whether an “inventor” should be redefined to expressly include AI, and if so, to whom such an AI invented patent could be granted, and the standard of the inventive step that should be applied; and
- that its decision did not necessarily preclude the granting of patents for AI-devised inventions in another case.
2.2. Copyright
The main source of law governing copyright in Australia is the Copyright Act 1968 (Cth) (Copyright Act). There are no express references to AI under the Copyright Act. However, to the extent that an AI algorithm is written (e.g., as represented in software as source code), the software will be considered a “literary work” and potentially subject to the protections under the Copyright Act, including a prohibition on unauthorised reproduction.
In respect of computer-generated works, the Copyright Act confines copyright protection to works originating from an “author” – that is, a person who brings the work into existence in its material form. An author must be a human, and works emerging solely from the operation of a computer system do not originate from an individual (see Telstra Corp Ltd v. Phone Directories Co Pty Ltd [2010] FCA 44 (Telstra)). In Telstra, the Court held that phone directories which had been largely organised and presented by a computer program were not subject to copyright protection, as the compilation did not originate from an individual (i.e., there was an absence of human authorship).
Whether works generated by both a human author and a computer program together will be subject to copyright protection will depend on:
- the authorial contribution of the person;
- the control the person exerts over the final material form of the work; and
- the extent to which the relevant computer program is used as a “tool”.
2.3. Trade secrets/confidentiality
There are no specific trade secrets or confidentiality law requirements in respect of AI.
3 . Data
3.1. Domestic data law treatment
The main source of law governing privacy in Australia is the Privacy Act 1988 (Cth) (Privacy Act). The Privacy Act does not contain any express provisions which directly regulate the use of AI. However, the Privacy Act may regulate the use of data in AI systems to the extent that such data is “personal information” – essentially, information about a reasonably identifiable individual. For example, an organisation which uses personal information as input to an AI system must comply with the requirements under the Privacy Act, notably the Australian Privacy Principles (e.g., the personal information must be used for the primary purpose for which it was collected, or a secondary purpose within the reasonable expectation of the identified individual).
3.2. General data protection regulation
The European Union’s General Data Protection Regulation (GDPR) will apply to Australian organisations that fall within its extraterritorial ambit.
3.3. Open data & data sharing
There are “open data” legislative regimes in Australia that enable the sharing of data which may support the development and adoption of AI technologies in Australia.
The Data Availability and Transparency Act 2021 (Cth) (DAT Act) facilitates the sharing of “public sector data” (meaning data that is lawfully created, collected or held by or on behalf of a Commonwealth body) with government departments and universities to stimulate the use of public sector data for prescribed purposes, including research and development. The DAT Act sets out a comprehensive accreditation framework, and establishes requirements in order for accredited users to access the relevant datasets (e.g., the use must align with the “data sharing purposes” and must generally be consistent with the data sharing principles).
Separately, the Consumer Data Right (CDR) was enacted by the Treasury Laws Amendment (Consumer Data Right) Act 2019 (Cth), amending the Competition and Consumer Act 2010 (Cth). Essentially, the CDR grants consumers the right to access data held about them by businesses or “data holders” in prescribed regulated industries (e.g., energy, banking and telecommunications) and to have that data transferred to an accredited recipient.
3.4. Biometric data: voice data and facial recognition data
The use of biometric data, including voice and facial recognition data, is currently regulated under the Privacy Act, which prescribes that biometric information used for the purpose of automated biometric verification or biometric identification, and biometric templates, are “sensitive information”. There are requirements regarding the collection, use and disclosure of sensitive information, for example:
- an organisation to which the Privacy Act applies must not collect sensitive information unless the relevant individual consents to the collection, and the information is reasonably necessary for one or more of the entity’s functions or activities; and
- an organisation to which the Privacy Act applies may only use sensitive information if such use is within the reasonable expectation of the relevant individual, and directly related to the primary purpose for which the information was collected.
In 2021, a determination was made by the Office of the Australian Information Commissioner (OAIC) against Clearview AI, Inc. for breaches of the Privacy Act for scraping individuals’ biometric information from the web and disclosing it through a facial recognition tool. The OAIC held that the collection and use of such sensitive information was unreasonably intrusive and unfair, and carried a significant risk of harm to individuals, including vulnerable groups such as children and victims of crime.
The HRC Report (noted above in Section 1.2 Human rights decisions and conventions) recommends a moratorium on the use of biometric technologies in high-risk decision-making, such as in schools and policing. The HRC recommends that this moratorium remain in place until further law reform establishes express human rights protections regarding the use of biometric technology.
The HRC has cited a number of key human rights concerns as the basis of its recommendation, including:
- the high rate of error, especially in the use of one-to-many facial recognition technology which disproportionately affects vulnerable people by reference to characteristics like their skin colour, gender and disability;
- the increasing rate of facial recognition trials in high-stakes government contexts in Australia, including policing, education and service delivery, and the corresponding increase in human rights risks that potential errors present; and
- the increased risk of mass surveillance that can affect human rights including freedom of expression and association which stems from the cumulative impact of an increase in facial recognition technologies.
The University of Technology Sydney, in collaboration with a former Human Rights Commissioner, is currently developing a report outlining a Model Law for Facial Recognition Systems in Australia. It is likely that a dedicated facial recognition law will be developed in the future.
4 . Bias and discrimination
The Australian Government and the HRC regard bias and discrimination as a central issue that must be addressed in shaping the development of AI regulation in Australia. The HRC’s 2020 report “Using Artificial Intelligence to make decisions: Addressing the problem of algorithmic bias” is intended to provide guidance to governments and industry bodies on creating and using fairer decision-making processes driven by AI systems.
Separately, the Australian AI Ethics Framework was established to highlight the Australian Government’s commitment to ensuring the use of responsible and inclusive AI. The eight AI Ethics Principles are intended to encourage business and governments employing AI systems to practice the highest ethical standards when designing, developing and implementing AI (see Section 1. Introduction).
4.1. Domestic anti-discrimination and equality legislation treatment
A key source of law governing bias and discriminatory practices in Australia is the Disability Discrimination Act 1992 (Cth) (Discrimination Act). The Discrimination Act does not contain any express references to AI. Instead, the Discrimination Act generally prohibits an organisation from discriminating against a person on the basis of their disability when providing goods and services to that person. To avoid discriminatory conduct, the organisation must take steps to make the relevant goods and services accessible to persons with a disability by making reasonable adjustments to the manner in which goods and services are provided to that person. However, as an exception, an organisation is not required to make such reasonable adjustments or otherwise take action to avoid discriminatory conduct if doing so would impose an unjustifiable hardship on the organisation.
It is possible that an AI system would be considered a good or service for the purposes of the Discrimination Act. Therefore, the general requirements set out above would apply to the use of an AI system.
Operating in conjunction with the above, there are other anti-discrimination laws which may be relevant to the types of AI that may be developed and their algorithmic content such as:
- the Racial Discrimination Act 1975 (Cth), which prohibits discrimination on the basis of race, colour, descent, nationality, ethnicity or immigration status;
- the Sex Discrimination Act 1984 (Cth), which prohibits discrimination on the basis of sex, gender identity, marital or relationship status, or pregnancy; and
- the Age Discrimination Act 2004 (Cth), which prohibits discrimination on the basis of age.
5 . Trade, anti-trust and competition
5.1. AI related anti-competitive behaviour
AI-related anti-competitive behaviour is explored in Section 5.2 Domestic regulation below.
5.2. Domestic regulation
The main source of law governing trade, anti-trust and competition in Australia is the Competition and Consumer Act 2010 (Cth) (CCA), and compliance with these laws is regulated and enforced by the Australian Competition and Consumer Commission (ACCC).
While there are no specific provisions under the CCA which govern the use of AI technologies, the ACCC has previously considered the applicability of AI-type technologies to a number of trade and competition law issues in its 2017 publication “The ACCC’s approach to colluding robots”. The ACCC raised the following issues:
- The implications of Big Data in the context of authorising mergers and acquisitions, and the impact of Big Data in determining market share and barriers to entry – the 2019 Digital Platforms Inquiry report also addressed the anti-trust risks associated with the accumulation of market power with respect to data.
- The use of price algorithms and the potential for collusion or concerted practices. The ACCC raised the possibility that machine learning algorithms which compare and set prices in a given market could produce collusive outcomes and facilitate practices such as price-fixing.
- The use of anti-competitive algorithms. For example, Google Inc v. ACCC [2013] HCA 1 concerned Google’s display of sponsored links, in which searches for a business name would return advertisements for competitors who had entered into advertising arrangements with Google. The High Court allowed Google’s appeal and ultimately held that such conduct was not misleading or deceptive on the basis that Google had not itself created the sponsored links.
6 . Domestic legislative developments
There are a number of additional regulatory initiatives carried out by government and industry bodies in Australia. In summary:
- The Digital Technology Taskforce, within the Department of the Prime Minister and Cabinet, is undertaking a public consultation process with the aim of publishing an issues paper titled “Positioning Australia as a Leader in Digital Economy Regulation: Automated Decision Making and AI Regulation” in 2022. This paper will identify possible reforms and legislative action to be taken in respect of AI in Australia.
- The National Transport Commission’s (NTC) Automated Vehicle Program Approach, published in September 2020, outlines the NTC’s planned reforms regarding the use of automated vehicles. In particular, the report considers on-road enforcement actions, insurance arrangements, in-service safety requirements and government access to vehicle-generated data, and sets out guidelines for trials of automated vehicles.
- Standards Australia published a final report titled “An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard” in December 2020 which sets out eight recommendations with respect to AI regulation and best practice standards in Australia. These recommendations include calls to:
- explore avenues for enhanced cooperation with international bodies, including the United States National Institute of Standards and Technology, with the aim of improving Australia’s knowledge of, and influence in, international AI standards development; and
- grow Australia’s capacity to develop and share best practice in design, deployment and evaluation of AI systems with a Standards Hub and security-by-design initiative.
7 . Frequently asked questions
1. What is the state of regulation of AI in Australia, and what is the trajectory for AI regulation in Australia?
There is no specific law regulating AI in Australia, such as the kind being prepared in Europe. The expectation is that a dedicated AI law will be introduced in Australia which will, at a minimum, address the concerns raised by the HRC and other government and industry body reports.
It may well be that the regulation of AI in Australia will be modelled on the EU’s AI Act and will adopt a similar risk-based approach, prescribing requirements based on the degree of risk the relevant AI system presents and the industry in which it is deployed.
2. Who is responsible for decisions made by AI systems, and how will liability in respect of such decisions be attributed?
There is no legislative guidance or jurisprudence which expressly deals with responsibility for decisions made by AI systems, or any liability that flows from such decisions. For example, there is no standard position as to whether responsibility and liability should fall on the party deploying the AI system, the end-user of the AI system, or other parties who have contributed to its development (e.g., hardware and software manufacturers, programmers and data suppliers).
The attribution of responsibility and liability in respect of AI systems will likely depend upon whether the decision can be traced back through the decision-making matrix to identify the “bad actor” or fault component, which in turn depends on the extent to which the factors contributing to the relevant decision can be identified. Future AI laws in Australia will likely include robust and prescriptive requirements with respect to transparency and the explainability of decisions made by AI systems, both of which are integral to this evaluative process.
3. Who owns the output of AI-generated creations, and can AI systems own the outputs they produce?
The issue of ownership with respect to AI is highlighted by the decisions in Thaler and Telstra, in the context of AI-devised inventions and AI-generated works respectively. The cases reveal a gap in the relevant legislative regimes (i.e., the Patents Act and Copyright Act), which are not currently suited to address the increasingly complex and prevalent issue of AI-generated inventions and works, and how ownership of such IP rights should be designated. In light of these decisions, there is a risk that inventions and works generated without human intervention or direction will not receive any intellectual property protection.