The importance of managing risk for AI in corporate Australia

February 27, 2024

Corporate risk as a topic is not new, but the scope of risk for corporate Australia is constantly changing with new risks emerging and heightened responsibilities on directors and officers.  

Warren Buffett observed that '[r]isk comes from not knowing what you are doing'.1  This is particularly apt in the context of artificial intelligence (AI), where the technology is complex and the rate of application of AI is (at least on one view, given the development and deployment of AI since the release of ChatGPT) exponential.

One challenge for organisations is the lack of targeted regulation.  The challenge for directors leading those organisations is to ensure, where they may not be 'AI-literate', that their conduct is consistent with their duties: to act in good faith, to exercise due care and diligence and to act in the best interests of the company.2

This article provides guidance on the expectations on corporates and their directors to adopt responsible and ethical AI and on some practical steps each can take to meet those expectations.  

Common terms associated with AI

The term 'AI' has become part of our vernacular, but other, more qualitative terms have emerged in contemporary writings.  The term first appeared during a summer program at Dartmouth College in the United States in 1956, when John McCarthy (a mathematician) brought together a small cohort of scientists to work on the following study:

     The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. ...3

We now understand the distinction between 'weak AI' and 'strong AI'.  'Weak AI' has been described as:

     [focussing] on performing a specific task, such as answering questions based on user input or playing chess […].  [It] relies on human interference to define the parameters of its learning algorithms and to provide the relevant training data to ensure accuracy.4

IBM gives Siri and self-driving cars as examples of Weak AI.  'Strong AI' has been described in the following terms:

     Strong AI can perform a variety of functions, eventually teaching itself to solve for new problems. […]  While human input accelerates the growth phase of Strong AI, it is not required, and over time, it develops a human-like consciousness instead of simulating it like Weak AI.5

Strong AI is really something for the future.  It is aspirational, but likely.

Many current resources refer to 'Generative AI'.  There are various definitions of this term, one of which describes it as:

     … deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.

     … At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that's similar, but not identical, to the original data.6

The application and/or use of AI has attracted two descriptors: responsible and ethical.  Responsible AI is often used in the context of corporate risk.

The CSIRO describes responsible AI as 'a practice of developing and using AI systems in a way that provides benefits to individuals, groups, and wider society, while minimising the risk of negative consequences'.7

Ethical AI is a more amorphous concept: it is usually described by identifying its elements rather than by a single definition, and it involves adherence to guidelines or principles.

PwC identifies the use of ethical AI as 'one of the ideal ways to help minimise risks that range from compliance failures to brand damage', but it does not offer a specific definition.8   Instead, it proposes 10 core principles as elements of ethical AI:9  

  1. Interpretability (that involves transparency around the AI decision-making process);
  2. Reliability and robustness;
  3. Security (including protection from cyber threats);
  4. Accountability (someone needs to have assigned responsibility);
  5. Beneficiality (regard to the common good);
  6. Privacy (protecting personal data);
  7. Human agency (where the ethics risk is high there should be greater human oversight);
  8. Lawfulness;
  9. Fairness (without bias); and
  10. Safety.

These core principles build on eight core principles developed and published by the CSIRO in 2019: (a) generation of net-benefits; (b) do no harm; (c) compliance with regulations and legislation; (d) protection of privacy; (e) fairness; (f) transparency and explainability; (g) contestability; and (h) accountability.10

Potential harms of AI

The Human Technology Institute at the University of Technology Sydney (HTI) published a short paper on AI and corporate governance in May 2023 (2023 HTI May Report).11   This paper has been augmented by a summary report called 'Insight Summary: The State of AI Governance in Australia' (published in November 2023) (2023 HTI November Summary).12

The earlier paper reported that 66% of Australian business decision makers surveyed were already using, or planning to use, AI.  That figure has most likely since been overtaken.  There may also be a significant number of entities within the Australian business community using AI without expressly recognising or acknowledging this.  The 2023 HTI November Summary stated that:

     AI systems are being applied closer to the 'core' of organisations, with the most rapid growth in strategy, corporate finance and risk functions.13

The Chief Executive of Singapore's Infocomm Media Development Authority and Commissioner of its Personal Data Protection Commission observed in the foreword to Singapore's Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organisations14 that:

     At the brink of entering a new decade, artificial intelligence seems to be shrouded under a cloud of ambivalence.

He acknowledged, however, that there are no easy answers to striking the right balance between managing risks and encouraging advances in AI technology.  

Turning now to potential harms: the 2023 HTI May Report identified a number of potential harms which could arise from the use and application of AI.

In the 2023 HTI November Summary, these harms are categorised as either 'harms arising to individuals' or 'collective harms'.  Collective harms include job displacement and social and political manipulation.15

The following harms are categorised as 'harms arising to individuals':16

  1. Misleading advice and information;
  2. Unfair treatment;
  3. Unlawful discrimination and exclusion;
  4. Violation of human and protected rights;
  5. Physical, psychological, economic or reputational harm; and
  6. Breach of privacy.

One additional harm not included in the summary is breach of intellectual property rights.

Causes of harm

A recent blog post published by Georgetown's Center for Security and Emerging Technology (CSET) observed that:17

     AI systems can cause various types of harm.  These harms can affect many different types of people, particularly vulnerable groups, but they can also create challenges related to the environment, property, infrastructure, and social structure.  Thinking systematically about harm, its elements, and the various categories of AI harms can help us rigorously address different concerns that new AI applications might create. […]

     … AI harm can be straightforward, involving one AI system and a single harmed entity, but it can also be more complex and multifaceted.  AI harm can involve multiple harmed entities, AI systems, or types of harm.

The 2023 HTI November Summary provides insight into the sources of harms of AI systems.  These include: (a) AI system failures; (b) malicious or misleading deployment; and (c) overuse, inappropriate or reckless use.18  Harms caused by AI, particularly generative AI, can be unintentional.  This will become a significant issue for Strong AI.  The Australian public is not ignorant of the likely harms: the 2023 HTI November Summary reports that 'only one third of Australians say that they trust AI systems, and less than half believe the benefits of AI outweigh the risks'.19  The summary also categorises potential harms as reputational, commercial and regulatory.

Directors' duties

Directors are subject to specific duties.  These arise under statute (Corporations Act 2001 (Cth) (Corporations Act)) and at common law.  The statutory duties reflect the common law duties.  These duties extend to directors' conduct when those directors are considering the use of AI within an organisation.  There will also be risks to a company where a director breaches his or her duty in relation to AI and that breach exposes the company to a potential breach of regulation or to harm, such as damage to brand or reputation.

Directors have duties, amongst other things, to:

  1. act in good faith;20
  2. exercise reasonable care and diligence;21 and
  3. act in the best interests of the company.22

As to the third of these duties, the Australian Institute of Company Directors observes that:

     Acting in the best interests of the company means directors should focus on sustainable value creation over time, rather than short-term profit maximisation.  While the interests of shareholders are central, directors can, and should, as a matter of practice, consider other stakeholders such as employees, customers and the environment when discharging their duty.  Maintaining and advancing the organisation's reputation and community standing are key considerations.23

Cassimatis v ASIC [2020] FCAFC 52 (and the trial judgment: ASIC v Cassimatis (No 8) [2016] FCA 1023) provides guidance to directors which is relevant, inter alia, in the context of AI.  

Mr and Mrs Cassimatis were the directors and sole shareholders of Storm Financial Pty Ltd (Storm).  ASIC brought proceedings against them for breach of section 180(1) of the Corporations Act.  Edelman J found at trial, relevantly, that Mr and Mrs Cassimatis had failed to exercise the degree of care and diligence that a reasonable director would exercise as the director of a company in that company's circumstances.

Mr and Mrs Cassimatis appealed.  The majority of the Full Court of the Federal Court upheld his Honour's decision.

The contravention arose because the directors' actions were found to have caused the company to breach certain provisions of the Corporations Act (and the company was prosecuted for those breaches).  Essentially, the directors had exposed the company to a possible contravention with serious consequences and, in those circumstances, had breached their statutory duties.  The Full Court observed that the directors' conduct which led to the company's contraventions 'contained within it a foreseeable risk of serious harm to Storm's interests […] which reasonable directors, with the responsibilities of the directors standing in Storm's circumstances, ought to have guarded against'.24

The Full Court emphasised that the trial judge had not found that the directors had breached their duties because they had participated (in an accessorial capacity) in conduct which caused the company to contravene the Corporations Act.  In this respect, the trial judge observed (at [527]) that:

     Accessory liability is recognised in s 79 of the Corporations Act […].  The considerations that arise in s 180(1) are quite different.  That section does not require any contravention, or threatened contravention, by the corporation.  It is concerned with whether an officer exercised the degree of care and diligence of the reasonable person (as described) in the exercise of the officer's powers and discharge of his or her duties.  Contraventions, or risk of contraventions, by the corporations are circumstances to be taken into account in that assessment.  They are not the only circumstances. And they are not conditions for liability.

The Full Court majority found that the prospect of Storm contravening provisions of the Corporations Act was not only foreseeable in all the circumstances but carried a strong likelihood of harm.  It was that foreseeability, together with the directors' failure to take steps to guard against the risk, that was a key element in the finding of breach of directors' duties.

On that issue, the primary judge's decision in Cassimatis includes a discussion of the nature of the harm which might be caused by a director's breach of the statutory duty to exercise due care and diligence.  Edelman J concluded (at [483]) that (emphasis added):

     … the foreseeable risk of harm to the corporation which falls to be considered in s 180(1) is not confined to financial harm.  It includes harm to all the interests of the corporation.  The interests of the corporation, including its reputation, include its interests which relate to compliance with the law.

His Honour explained (on the question of reputation) (at [482]) that:

     … A corporation has a real and substantial interest in the lawful or legitimate conduct of its activity independently of whether the illegitimacy of that conduct will be detected or would cause loss.  One reason for that interest is the corporation's reputation.  Corporations have reputations, independently of any financial concerns, just as individuals do.  Another is that the corporation itself exists as a vehicle for lawful activity.  For instance, it would be hard to imagine examples where it could be in a corporation's interests for the corporation to engage in serious unlawful conduct even if that serious unlawful conduct was highly profitable and was reasonably considered by the director to be virtually undetectable during a limitation period for liability.

Given the range of harms which can arise from AI and the role directors will have in decision making around AI for their organisation, the Cassimatis decisions are important.

AI governance

The role of directors

The challenge faced by directors and boards when making strategic and operational decisions about the use of AI in their organisation is significant.  

Amongst other things, in order to comply with their statutory duties, directors must understand the AI technology which is under consideration (including its genesis, its limitations and the scope of its application).  

Without this understanding, directors will not be well placed to interrogate proposals or assess whether there is any foreseeable harm which might be caused to the company by adopting the AI system.  

All of this is difficult for directors where:

  1. AI technology is developing rapidly;
  2. the use of AI is not always transparent or obvious; and
  3. AI requires continuous oversight (so a level of understanding at one point in time might not translate to a level of understanding at a later date).

A further issue which is often overlooked by directors when considering a proposal to adopt new AI systems is the level of control the organisation will have over the application.  AI's ability to 're-invent' itself and to produce unexpected outcomes is a key factor in any risk analysis.  Many AI systems need continuous monitoring to ensure that they continue to operate in an ethical and responsible manner.

Corporate governance

A board and/or its directors can be supported in their roles by the adoption of an AI governance framework.

The Harvard Law School Forum on Corporate Governance publishes on the topic of board responsibility, including for AI oversight.  

As early as January 2022, the forum identified five key steps for boards to ensure responsible AI governance:25

  1. Establish an AI governance framework;
  2. Identify a designated person in the C-suite as the point of contact;
  3. Designate stages of the AI lifecycle when testing will be conducted;
  4. Document relevant findings at the completion of each stage; and
  5. Implement routine auditing.

A further paper published by the forum in late 202326 observed that:

     [A] board's oversight obligations extend to the company's use of AI, and the same fiduciary mindset and attention to internal controls and policies are necessary. Directors must understand how AI impacts the company and its obligations, opportunities, and risks, and apply the same general oversight approach as they apply to other management, compliance, risk, and disclosure topics.

The oversight of the board (assisted by senior management and operations) should include:

  1. Engaging appropriately qualified AI experts to evaluate AI systems both prior to implementation and at regular intervals during their lifecycle;
  2. Developing real time protocols to interrogate AI systems and assess their risk profile;
  3. Ensuring that the use of AI within the organisation is consistent with the company's values and brand and fostering a culture which is consistent with ethical and responsible AI;
  4. Ongoing review of regulatory and compliance requirements;
  5. Protecting against unlawful disclosure of information, data breaches and cyberattacks; and
  6. Protecting intellectual property and confidential information (the disclosure of which might also result in a breach of privacy).

To a large extent many boards, both large and small, are only now recognising the responsibilities which come with the use of AI in their organisation.  And a number of boards (and individual directors) are yet to appreciate the complexities of this topic and the rate at which AI is developing.  

A paper published by the World Economic Forum's AI Governance Alliance in January 2024 (as part of a series of three) suggested that a key issue for corporations when assessing AI is whether the organisation has the resources, not only to manage the AI in a practical sense, but to also manage associated risks.  The paper describes this as 'organisational readiness'.27  

There are two concrete steps which might be considered to assist boards in meeting the challenge of AI: first, establishing an AI committee (akin to an audit or risk committee) to advise the board on AI issues; and, secondly, retaining an external AI advisor.

Both courses would provide boards (and directors) with some reassurance that decisions for the company are being made where the company is fully briefed on the AI technology, its application (which may be multi-faceted across a large corporate), how the AI was developed (including data inputs) and potential risks.

Summing up

Effectively managing corporate risk in any context starts with awareness.  

On a number of fronts, AI is in a unique category.  It is constantly developing, and surveys show that customers and end users are sceptical about its trustworthiness.

It is also complex, the potential for its application is significant, and it is a specialist subject.

The majority of directors, irrespective of their level of experience, will not have the requisite understanding of AI (particularly generative AI and, in the future, Strong AI) to make fully informed decisions about how and whether an organisation should deploy AI systems.  

Directors who engage in decision-making around AI without a full understanding and awareness of the scope (and limits) of AI risk breaching their statutory and common law duties as officers of the company.

Organisations which fail to put in place proper governance procedures around the development and use of AI will be ill-prepared for the future.  

In the words of billionaire investor, Seth Klarman, '[u]nprecedented events occur with some regularity, so be prepared'.28

For more information on directors' duties and AI, please contact the author Bronwyn Lincoln or our Corporate and Advisory team.

Authors

Bronwyn Lincoln | Partner | +61 3 8080 3565 | blincoln@tglaw.com.au

Julia Nguyen | Lawyer

Footnotes

1Warren Buffett (Speech, 1993) (Web Page, accessed 22 February 2024) <https://www.azquotes.com/author/2136-Warren_Buffett/tag/risk>.

2Corporations Act 2001 (Cth) ss 180 and 181 ('Corporations Act').

3John McCarthy et al, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (31 August 1955) <http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf>.

4'What is strong AI?' IBM (Web Page, accessed 22 February 2024) <https://www.ibm.com/topics/strong-ai>.

5Ibid.

6'What is generative AI?' IBM (Web Page, accessed 22 February 2024) <https://research.ibm.com/blog/what-is-generative-AI>.

7'Good AI, bad AI: decoding responsible artificial intelligence' CSIRO (Web Page, accessed 21 February 2024) <https://www.csiro.au/en/news/All/Articles/2023/November/Responsible-AI-explainer>.

8'Ten principles for ethical AI' PwC (Web Page, accessed 22 February 2024) <https://www.pwc.com.au/digitalpulse/ten-principles-ethical-ai.html>.

9Ibid.

10CSIRO, Artificial Intelligence: Australia's Ethics Framework (2019) p 6 <https://www.csiro.au/en/news/All/Articles/2023/November/Responsible-AI-explainer, with link to AI Ethics Principles PDF>.

11Human Technology Institute, The State of AI Governance in Australia (May 2023) <https://www.uts.edu.au/human-technology-institute/news/report-launch-state-ai-governance-australia> ('2023 HTI May Report').

12Human Technology Institute, Insight Summary: The State of AI Governance in Australia (November 2023), p 5 <https://www.uts.edu.au/human-technology-institute/publications> ('2023 HTI November Summary').

132023 HTI November Summary (n 12), p 5.

14World Economic Forum and Info-communications Media Development Authority of Singapore, Companion to the Model AI Governance Framework (January 2020), p 4 <https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework, with link to Implementation and Self Assessment Guide for Organisations (ISAGO)>.

152023 HTI November Summary (n 12), p 8.

16Ibid.

17Heather Frase and Owen Daniels, Understanding AI Harms: An Overview (11 August 2023) <https://cset.georgetown.edu/article/understanding-ai-harms-an-overview/>.

182023 HTI November Summary (n 12), p 9.

192023 HTI November Summary (n 12), p 10.

20Corporations Act (n 2) s 181(1)(a).

21Corporations Act (n 2) s 180(1).

22Corporations Act (n 2) s 181(1)(a).

23'Best interests duty', Australian Institute of Company Directors (Web Page, 1 August 2022) <https://www.aicd.com.au/company-policies/corporate-social-responsibility/examples/best-interests-duty.html>.

24Cassimatis v ASIC [2020] FCAFC 52, [77]; see also [74], where his Honour described the directors' conduct which led to the company's contraventions as 'primary direct failures on the part of the appellants to discharge the obligation cast upon them by s 180(1) measured by the objective standard of that section as earlier described'.

25Robert G Eccles and Miriam Vogel, Board Responsibility for Artificial Intelligence Oversight (5 January 2022) (accessed 21 February 2024) <https://corpgov.law.harvard.edu/2022/01/05/board-responsibility-for-artificial-intelligence-oversight>.

26Holly J Gregory, AI and the Role of the Board of Directors (7 October 2023) (accessed 21 February 2024) <https://corpgov.law.harvard.edu/2023/10/07/ai-and-the-role-of-the-board-of-directors>.

27World Economic Forum, Unlocking Value from Generative AI: Guidance for Responsible Transformation (18 January 2024) (accessed 22 February 2024) <https://www.weforum.org/publications/ai-governance-alliance-briefing-paper-series, with link to paper>.

28Seth Klarman (Speech) (Web Page, accessed 22 February 2024) <https://novelinvestor.com/quote-author/seth-klarman>.
