Scale, complexity and autonomy are three of the challenges unique to artificial intelligence that differentiate it from other technological advances and explain the heightened focus on responsible AI. Liming Zhu, Research Director of the Software and Computational Systems division at CSIRO's Data61, writes that comprehensive strategies are needed to address the broad spectrum of challenges inherent to responsible AI.
The term "operationalising responsible AI" can easily be dismissed as jargon. However, for executives and board members engaged in steering business strategy centred around new technology such as AI, this is a critical concern. The challenge stems from the kaleidoscope of stakeholders involved – from policymakers and investors to the wider community and internal AI experts. Each group requires tailored, concrete practices suited to their specific needs and levels of understanding. If the responsible deployment of AI is not methodically operationalised, we risk creating competing silos of risk management that waste effort and overlook gaps.
Even within an organisation, the term "concrete practices" can be elusive. What the board considers concrete – like high-level governance questions – may appear abstract to an AI engineer who needs detailed technical guidance. Additionally, popular checklist-based approaches like model cards and data sheets can create a false sense of progress. Checklists work only if they are backed by rigorously standardised, high-quality practices. Simply ticking off a box is insufficient and can be misleading.
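To illustrate why ticking the box is insufficient, consider a minimal model-card-style record sketched in Python. The fields and values are simplified and hypothetical, not a real model-card schema: a completeness check passes even when the entries behind it are superficial.

```python
# Simplified, illustrative model-card fields; real model cards are far richer.
model_card = {
    "intended_use": "Prioritising customer support tickets",
    "evaluation_data": "Internal sample, Q3",
    "fairness_assessment": "Completed",     # ticked, but against what standard?
    "known_limitations": "None identified",
}

# A checklist pass only confirms the fields are filled in, not that the work
# behind them was rigorous or standardised.
assert all(model_card.values()), "Checklist incomplete"
print("Checklist complete - but nothing here evidences the quality of the work.")
```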
To complicate matters, the output of one practice often serves as the input for another—say, from a policy unit to an engineering team for implementation, and then back again for compliance and monitoring. This interconnectedness requires clarity and trust at all levels. Stakeholders need to trust not only the output but also the process and individuals behind it. Trust is not just a subjective or intuitive belief; it's an evidence-based expectation that the people and systems involved will behave as they should.
One might ask why AI warrants this heightened focus when any technology should be deployed responsibly. Three factors differentiate AI:
Scale: Let's consider a debt-collection system. When humans manually issue collection letters and run contesting channels, the total potential harm is limited, ironically, by that very inefficiency. When these processes are moved to a large-scale automated system, even if its error rate is lower than the human error rate, the sheer scale can make the total harm significantly higher (see the sketch after these three factors).
Complexity: AI algorithms, unlike traditional systems, often function as 'black boxes', with internal operations that are complex and difficult to explain. This makes traditional auditing and accountability mechanisms less effective.
Autonomy: AI systems can autonomously identify and solve problems. For example, AI in autonomous vehicles decides how to navigate traffic without human intervention. This cedes some level of control and complicates how we establish checks and balances.
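To make the scale point concrete, here is a back-of-the-envelope comparison in Python. The case volumes and error rates are purely illustrative assumptions, not figures from any real scheme:

```python
# Illustrative only: hypothetical volumes and error rates.
manual_cases, manual_error_rate = 10_000, 0.05           # low volume, higher error rate
automated_cases, automated_error_rate = 2_000_000, 0.01  # high volume, lower error rate

manual_harm = manual_cases * manual_error_rate            # 500 erroneous notices
automated_harm = automated_cases * automated_error_rate   # 20,000 erroneous notices

print(f"Manual process:    {manual_harm:.0f} erroneous notices")
print(f"Automated process: {automated_harm:.0f} erroneous notices "
      f"({automated_harm / manual_harm:.0f}x more, despite a 5x lower error rate)")
```

Even with a fivefold improvement in accuracy, the automated system in this hypothetical produces forty times more wrongly issued notices simply because it processes two hundred times as many cases.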
Addressing these complexities requires a framework that is both flexible and robust. We advocate for a pattern-oriented approach to operationalising responsible AI. Patterns are simply reusable solutions that can be applied consistently across different problems and sectors. Unlike merely touting best practices as universally beneficial, patterns also capture the nuanced pros, cons and costs of reusable solutions, as well as their applicable contexts. In a business setting, a governance pattern could set up a multi-layer oversight mechanism, while a process pattern could standardise ethical considerations throughout the product development cycle.
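As a rough sketch of what capturing a pattern might involve, the illustrative record below stores the applicable context, benefits, drawbacks and costs alongside the solution itself. The field names and example values are assumptions made for this sketch, not the actual schema of the Responsible AI Pattern Catalogue.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResponsibleAIPattern:
    """Illustrative record for a reusable responsible-AI pattern."""
    name: str
    perspective: str                    # e.g. "governance", "process", "product"
    level: str                          # e.g. "industry", "organisation", "team"
    context: str                        # when the pattern applies
    benefits: List[str] = field(default_factory=list)
    drawbacks: List[str] = field(default_factory=list)
    costs: List[str] = field(default_factory=list)

# Hypothetical example entry.
multi_layer_oversight = ResponsibleAIPattern(
    name="Multi-layer oversight",
    perspective="governance",
    level="organisation",
    context="High-stakes AI decisions affecting customers or the public",
    benefits=["Clear escalation paths", "Board visibility of AI risk"],
    drawbacks=["Slower decision cycles"],
    costs=["Additional review staff and reporting overhead"],
)
```

Recording drawbacks and costs next to benefits is what distinguishes a pattern from a bare best-practice label: it lets a team judge whether the solution fits their context before adopting it.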
Figure 1: Overview of Responsible AI Pattern Catalogue (RAIC).
A comprehensive strategy requires multi-layer, multi-aspect patterns spanning industry norms, organisational policies and team-level practices. It also involves multiple perspectives: governance, process and product. These patterns are interconnected, feeding into a unified reference architecture that integrates the AI supply chain, system layer and operational frameworks.
Effective pattern implementation manages multiple, interrelated risks. Adopting one pattern might mitigate a particular risk but introduce another. Acknowledging these trade-offs allows us to link patterns in a way that addresses the broad spectrum of challenges inherent to responsible AI. Trust is the final layer, going beyond just system robustness. It involves diverse stakeholder engagement and transparent communication, creating an environment where technology is both reliable and socially accountable.
Operationalising responsible AI is neither a box-ticking exercise nor a one-off initiative. It's an ongoing, interconnected endeavour that requires a pattern-oriented strategy, actively engaging multiple stakeholders across various governance levels and lifecycle stages. By methodically navigating these complexities, we can implement responsible AI in business with both a high level of governance confidence and technical soundness.
References
Lu, Q., Zhu, L., Whittle, J. and Xu, X. 2024, 'Overview of the Responsible AI Pattern Catalogue', in Responsible AI: Best Practices for Creating Trustworthy AI Systems, Addison-Wesley.