Opinion article

Connecting principles and practice: Responsible AI needs ongoing engagement, not one-off fixes

Scale, complexity and autonomy are three challenges that differentiate artificial intelligence from other technological advances and explain the heightened focus on responsible AI. Liming Zhu, Research Director of the Software and Computational Systems division at CSIRO's Data61, writes that comprehensive strategies are needed to address the broad spectrum of challenges inherent to responsible AI.



Friday 8 December 2023



The term "operationalising responsible AI" can easily be dismissed as jargon. However, for executives and board members steering business strategy built on new technologies such as AI, it is a critical concern. The challenge stems from the kaleidoscope of stakeholders involved – from policymakers and investors to the wider community and internal AI experts. Each group requires tailored, concrete practices suited to its specific needs and level of understanding. If the responsible deployment of AI is not methodically operationalised, we risk creating competing silos of risk management that waste effort and overlook gaps.

The Fluidity of "Concrete Practices"

Even within an organisation, the term "concrete practices" can be elusive. What the board considers concrete – like high-level governance questions – may appear abstract to an AI engineer who needs detailed technical guidance. Additionally, popular checklist-based approaches like model cards and data sheets can create a false sense of progress. Checklists work only if they are backed by rigorously standardised, high-quality practices. Simply ticking off a box is insufficient and can be misleading.
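As a sketch of what backing a checklist with a standard might look like in practice, the snippet below pairs a minimal model-card structure with a machine-checkable quality bar. The field names loosely echo the published model-card proposal; the quality bar itself is a hypothetical illustration, not an established standard.

```python
from dataclasses import dataclass

# Illustrative only: field names loosely follow the published "model card"
# proposal; the quality bar in is_substantive() is a made-up example of
# backing a checklist item with a concrete, checkable standard.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]          # known misuses, explicitly ruled out
    evaluation_metrics: dict[str, float]  # e.g. error rate per demographic subgroup
    limitations: str

def is_substantive(card: ModelCard) -> bool:
    """Ticking the 'model card exists' box only counts if the card
    meets a concrete quality bar (a hypothetical one, for illustration)."""
    return (
        len(card.evaluation_metrics) >= 2   # evaluated on more than one subgroup
        and len(card.out_of_scope_uses) >= 1
        and len(card.limitations.strip()) > 0
    )
```

The point is that the check sits on the content of the card, not on its mere existence; a box that can be ticked by an empty document is no control at all.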

Silos and Information/Decision Chains

To complicate matters, the output of one practice often serves as the input for another—say, from a policy unit to an engineering team for implementation, and then back again for compliance and monitoring. This interconnectedness requires clarity and trust at all levels. Stakeholders need to trust not only the output but also the process and individuals behind it. Trust is not just a subjective or intuitive belief; it's an evidence-based expectation that the people and systems involved will behave as they should.

The Unique Challenges of AI

One might ask why AI warrants this heightened focus when all technology should be developed responsibly. Three factors differentiate AI:

Scale: Consider a debt-collection system. When humans manually issue collection letters and handle contested cases, the total potential harm is limited, ironically, by that very inefficiency. When these processes move to a large-scale automated system, even if its error rate is lower than the human one, the sheer scale can make the total harm significantly greater (a back-of-the-envelope example follows this list).

Complexity: AI algorithms, unlike traditional systems, often function as 'black boxes', with internal operations that are complex and difficult to explain. This makes traditional auditing and accountability mechanisms less effective.

Autonomy: AI systems can autonomously identify and solve problems. For example, AI in autonomous vehicles decides how to navigate traffic without human intervention. This cedes some level of control and complicates how we establish checks and balances.
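To see why scale can dominate accuracy, here is the back-of-the-envelope comparison promised above; the volumes and error rates are entirely hypothetical:

```python
# All numbers are hypothetical; they only illustrate the arithmetic of scale.
manual_letters, manual_error_rate = 50_000, 0.02            # human-run process
automated_letters, automated_error_rate = 2_000_000, 0.005  # automated system

manual_harms = manual_letters * manual_error_rate            # 1,000 wrongful notices
automated_harms = automated_letters * automated_error_rate   # 10,000 wrongful notices

# Per decision, the automated system is four times more accurate,
# yet it produces ten times the absolute harm.
print(f"manual: {manual_harms:.0f}, automated: {automated_harms:.0f}")
```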

The Best Practice/Pattern-Oriented Solution

Addressing these complexities requires a framework that is both flexible and robust. We advocate for a pattern-oriented approach to operationalising responsible AI. Patterns are simply reusable solutions that can be applied consistently across different problems and sectors. Unlike best practices, which are often touted as universally beneficial, patterns also capture the nuanced pros, cons and costs of a reusable solution, as well as the contexts in which it applies. In a business setting, a governance pattern could set up a multi-layer oversight mechanism, while a process pattern could standardise ethical considerations throughout the product development cycle; a minimal sketch of the idea follows Figure 1.

Figure 1: Overview of Responsible AI Pattern Catalogue (RAIC).
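As a rough sketch of how a pattern differs from a bare best practice, the structure below records benefits, drawbacks, costs and applicable context alongside the solution itself. The field names and the example entry are our own illustrations, not taken from the Responsible AI Pattern Catalogue.

```python
from dataclasses import dataclass

# A hypothetical, minimal encoding of what distinguishes a pattern from a
# bare "best practice": it carries its trade-offs and applicability with it.
@dataclass
class Pattern:
    name: str
    level: str            # e.g. "governance", "process" or "product"
    context: str          # when the pattern is applicable
    benefits: list[str]
    drawbacks: list[str]  # including new risks the pattern may introduce
    cost: str

multi_layer_oversight = Pattern(
    name="Multi-layer oversight mechanism",
    level="governance",
    context="Organisations deploying consequential AI decisions at scale",
    benefits=["Independent review at board, management and team levels"],
    drawbacks=["Slower decision cycles; risk of duplicated, siloed reviews"],
    cost="Ongoing staffing and coordination overhead",
)
```

Recording drawbacks explicitly is what later allows patterns to be linked: the risk one pattern introduces becomes the context in which another applies.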

Operate at Multiple Levels

A comprehensive strategy requires multi-layer, multi-aspect patterns spanning industry norms, organisational policies and team-level practices. It also involves multiple perspectives: governance, process and product. These patterns are interconnected, feeding into a unified reference architecture that integrates the AI supply chain, system layer and operational frameworks.

Risk and Trust Dimensions

Effective pattern implementation manages multiple, interrelated risks. Adopting one pattern might mitigate a particular risk but introduce another. Acknowledging these trade-offs allows us to link patterns in a way that addresses the broad spectrum of challenges inherent to responsible AI. Trust is the final layer, going beyond just system robustness. It involves diverse stakeholder engagement and transparent communication, creating an environment where technology is both reliable and socially accountable.

Operationalising responsible AI is neither a box-ticking exercise nor a one-off initiative. It's an ongoing, interconnected endeavour that requires a pattern-oriented strategy, actively engaging multiple stakeholders across various governance levels and lifecycle stages. By methodically navigating these complexities, we can implement responsible AI in business with both a high level of governance confidence and technical soundness.


References
Lu, Q., Zhu, L., Whittle, J., Xu, X., 2024. Overview of the Responsible AI Pattern Catalogue, in: Responsible AI: Best Practices for Creating Trustworthy AI Systems. Addison-Wesley.

About the authors
Liming Zhu
Professor Liming Zhu is a Research Director at CSIRO's Data61 and a conjoint full professor at the University of New South Wales (UNSW). He chairs Standards Australia's blockchain committee and contributes to its AI trustworthiness committee. He is a member of the OECD.AI expert group on AI Risks and Accountability, as well as a member of the Responsible AI at Scale think tank at Australia's National AI Centre. His upcoming book, "Responsible AI: Best Practices for Creating Trustworthy AI Systems", will be published by Addison-Wesley in late 2023.