Technology and Telecommunications

Australian Government takes the next step to legislate AI in Australia

September 19, 2024

The Australian Government is taking the next step towards its goal of ensuring that Australia's regulatory framework meets the challenges and opportunities posed by the development and use of artificial intelligence (AI) in Australia.

On 5 September 2024 the Government released a paper proposing a risk-based approach to applying mandatory guardrails to developers and deployers of AI in high-risk settings. The Government also introduced a new voluntary AI standard.1

It is currently seeking feedback on the proposed mandatory guardrails and the regulatory framework through which they will be introduced. The consultation period closes on 4 October 2024.

This follows the Australian Government's assessment of the feedback received in response to its recent consultation paper2 and the recommendations of the Artificial Intelligence Expert Group, established earlier this year3 to advise the Government on the introduction of the mandatory guardrails.

Organisations that are developing or proposing to deploy AI in their business should be mindful of these developments. With the imminent introduction of the long-awaited privacy reforms, it is important for organisations to formulate their data governance arrangements with these developments in mind.

Mandatory guardrails and voluntary AI standards

The Australian Government's Department of Industry, Science and Resources (the Department) has recently released two publications:

  1. the Proposals Paper for Safe and Responsible AI in Australia (Proposals Paper); and
  2. the Voluntary AI Safety Standard (Voluntary Standard).

These publications are designed to shape Australia's regulatory framework and to align it with international advances in AI regulation (specifically, in the European Union, the United States and Canada).

The Proposals Paper contains 10 mandatory guardrails (Mandatory Guardrails) for developers and deployers of AI systems in high-risk settings.  The Mandatory Guardrails are proposed to form the basis of Australia's approach to future regulation in the AI landscape.  

Together with the Proposals Paper, the Department released the Voluntary Standard, which sets out a best practice approach for Australian organisations to implementing safe and responsible AI in their AI supply chains.

These publications provide organisations with valuable insight into the likely form that the regulatory landscape for AI may take in Australia.

In this article, we highlight the key aspects of the Proposals Paper (including the Mandatory Guardrails and the proposed definition of high-risk AI) and the Voluntary Standard, so that Australian organisations impacted by these regulatory reforms can decide whether to provide feedback on the Proposals Paper, and can consider developing their current AI systems and their approach to data governance in light of these potential reforms.

High-risk AI

The Australian Government proposes to define high-risk AI in two broad categories. The first relates to AI systems whose uses are known or foreseeable; the second relates to general-purpose AI (GPAI) models.

High-risk AI: Intended and foreseeable uses

In defining an AI system as high-risk based on its intended and foreseeable uses, it is proposed that regard must be given to the extent and severity of the risk of:

  1. adverse impacts to human rights (with a particular focus on discrimination on the basis of age, disability, race and sex);
  2. adverse impacts to physical or mental health or safety;
  3. adverse legal effects (such as law enforcement and equitable access to essential services);
  4. adverse impacts to the collective rights of cultural groups (including First Nations people); and
  5. adverse impacts to the broader Australian economy, society, environment and rule of law (such as harmful synthetic content capable of widespread disinformation).

In assessing the extent and severity of these adverse impacts, account will be taken of the demographics adversely affected, the scale and intensity of the impacts, and the likely consequences and any disproportionate impacts.

The Government proposes to implement a principles-based approach to defining AI systems as high-risk where the risk is known or foreseeable, as opposed to the list-based approach applied in other jurisdictions such as the European Union and Canada. A list-based approach would identify specific domain areas, such as employment or law enforcement, and describe high-risk AI systems with respect to those areas. A principles-based approach, on the other hand, would require organisations themselves to assess the severity and extent of the adverse impacts. Feedback is being sought on which approach should be implemented in Australia.

The Proposals Paper provides examples of these approaches.

High-risk AI: General-purpose AI

The second way the Australian Government proposes to define high-risk AI relates to highly advanced GPAI models whose capabilities are such that the extent of their uses and risks is unknown.

The Proposals Paper treats GPAI models as categorically higher risk than other AI systems due to their adaptability and the wide variety of their potential uses. The Australian Government is therefore proposing to apply the Mandatory Guardrails to all GPAI models.

In Canada, mandatory guardrails apply to all GPAI models, on the presumption that all GPAI is high-risk.

In the European Union, by contrast, a minimum set of guardrails applies to all GPAI models, with further obligations for GPAI models that pose "systemic risks".4

In the United States, reporting requirements apply to GPAI models that meet certain capability thresholds.

The Australian Government is seeking feedback on whether all GPAI models should be subject to the Mandatory Guardrails, whether only a subset of the Mandatory Guardrails should apply, and whether GPAI models require further guardrails beyond those proposed.

Guardrails

The Proposals Paper outlines the following 10 Mandatory Guardrails for Australian organisations that are developing or deploying high-risk AI systems:

  1. establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;  
  2. establish and implement a risk management process to identify and mitigate risks;
  3. protect AI systems, and implement data governance measures to manage data quality and provenance;
  4. test AI models and systems to evaluate model performance and monitor the system once deployed;  
  5. enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  7. establish processes for people impacted by AI systems to challenge use or outcomes;
  8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. keep and maintain records to allow third parties to assess compliance with guardrails; and
  10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.

The Mandatory Guardrails are almost identical to the voluntary guardrails set out in the Voluntary Standard (Voluntary Guardrails). The 10th Voluntary Guardrail differs in that it focuses on engagement with stakeholders rather than the undertaking of conformity assessments.

The Voluntary Guardrails complement the Mandatory Guardrails and clarify the Australian Government's current expectations for the safe and responsible use of AI.  Organisations that adopt the Voluntary Guardrails will be better prepared to meet the future regulations as they are developed and introduced by the Australian Government.  
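
By way of illustration only (this example is ours and is not drawn from the Proposals Paper or the Voluntary Standard), guardrail 3 contemplates data governance measures addressing data quality and provenance, and guardrail 9 contemplates records that third parties can use to assess compliance. The following minimal sketch shows one way a developer or deployer might begin to operationalise both; all field, file and function names are assumptions made for the purpose of the example:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DatasetProvenanceRecord:
    """One auditable record per dataset used to train or fine-tune a model.

    Field names are illustrative assumptions, not taken from the
    Voluntary Standard or the Proposals Paper.
    """
    dataset_name: str
    source: str                # where the data came from (vendor, URL, internal system)
    licence: str               # licence or legal basis for using the data
    collected_at: str          # ISO 8601 timestamp of collection
    content_sha256: str        # hash of the dataset file, for integrity checks
    quality_checks: list[str]  # checks performed (deduplication, PII scan, etc.)

def record_provenance(dataset_path: str, name: str, source: str,
                      licence: str, checks: list[str]) -> DatasetProvenanceRecord:
    """Hash the dataset and assemble a provenance record for the audit log."""
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return DatasetProvenanceRecord(
        dataset_name=name,
        source=source,
        licence=licence,
        collected_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=digest,
        quality_checks=checks,
    )

# Append each record to a write-once audit log that third parties
# could later inspect (guardrail 9: record keeping).
record = record_provenance("training_data.csv", "customer-enquiries-2024",
                           "internal CRM export", "internal use only",
                           ["deduplicated", "PII redaction pass"])
with open("provenance_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

An append-only log of this kind is one possible way an organisation could later evidence its data quality checks and data provenance if conformity assessments (guardrail 10) are ultimately required.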

In addition, the Voluntary Standard aligns with international benchmarks, assisting impacted Australian organisations to keep their practices consistent with those in other jurisdictions.

Next Steps

Organisations that develop and deploy AI systems in Australia may wish to review the proposed definition of high-risk AI and the Mandatory Guardrails in the Proposals Paper (together with the relevant discussion questions) and consider making a submission to the Department before the 4 October 2024 deadline.

These organisations should also consider the Voluntary Standard and assess their current AI position and data governance practices against the Voluntary Guardrails, to demonstrate implementation of the current best practice approach ahead of the reforms.

If you would like assistance, please contact a member of our Technology and Telecommunications team.

Authors

Demetrios Christou | Partner | +61 2 8248 3428 | dchristou@tglaw.com.au

Dianne Beer | Special Counsel | +61 2 8248 5816 | DeBeer@tglaw.com.au

Ashlee Broadbent | Associate | +61 8 8236 1185 | abroadbent@tglaw.com.au

Notes

1 Department of Industry, Science and Resources, 'The Albanese Government acts to make AI safer' (Media Release, 5 September 2024) <The Albanese Government acts to make AI safer | Ministers for the Department of Industry, Science and Resources>

2 Department of Industry, Science and Resources, Safe and Responsible AI in Australia (Discussion Paper, June 2023) and Safe and Responsible AI in Australia consultation (Interim response, January 2024)

3 Department of Industry, Science and Resources, 'New artificial intelligence expert group' (Media Release, 14 February 2024) <New artificial intelligence expert group | Ministers for the Department of Industry, Science and Resources>

4 The EU AI Act defines "systemic risk" as "a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the supply chain".
