Scale, complexity and autonomy are three of the challenges unique to artificial intelligence that differentiate it from other technological advances and explain the heightened focus on responsible AI. Liming Zhu, Research Director of the Software and Computational Systems division at CSIRO's Data61, writes that comprehensive strategies are needed to address the broad spectrum of challenges inherent to responsible AI.
The term "operationalising responsible AI" can easily be dismissed as jargon. However, for executives and board members engaged in steering business strategy centred around new technology such as AI, this is a critical concern. The challenge stems from the kaleidoscope of stakeholders involved – from policymakers and investors to the wider community and internal AI experts. Each group requires tailored, concrete practices suited to their specific needs and levels of understanding. If the responsible deployment of AI is not methodically operationalised, we risk creating competing silos of risk management that waste effort and overlook gaps.
Even within an organisation, the term "concrete practices" can be elusive. What the board considers concrete – like high-level governance questions – may appear abstract to an AI engineer who needs detailed technical guidance. Additionally, popular checklist-based approaches like model cards and data sheets can create a false sense of progress. Checklists work only if they are backed by rigorously standardised, high-quality practices. Simply ticking off a box is insufficient and can be misleading.
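To make the point concrete, here is a minimal sketch of what an evidence-backed checklist entry could look like. The class, field names and file path are hypothetical illustrations, not drawn from the model-card or datasheet standards; the point is simply that a tick should not count without the evidence behind it.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """Hypothetical checklist entry that cannot be 'ticked' without evidence."""
    question: str                  # e.g. "Was the training data audited for bias?"
    owner: str                     # who is accountable for answering it
    evidence: list[str] = field(default_factory=list)  # audits, test reports, sign-offs

    @property
    def complete(self) -> bool:
        # A box only counts as ticked once concrete evidence backs it.
        return len(self.evidence) > 0

item = ChecklistItem(
    question="Was the training data audited for demographic bias?",
    owner="data-governance team",
)
assert not item.complete                            # a bare tick is not enough
item.evidence.append("reports/bias-audit-2024.pdf") # hypothetical artefact
assert item.complete                                # complete only with evidence attached
```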
To complicate matters, the output of one practice often serves as the input for another—say, from a policy unit to an engineering team for implementation, and then back again for compliance and monitoring. This interconnectedness requires clarity and trust at all levels. Stakeholders need to trust not only the output but also the process and individuals behind it. Trust is not just a subjective or intuitive belief; it's an evidence-based expectation that the people and systems involved will behave as they should.
One may ask why responsible AI warrants heightened focus when all technology should be developed responsibly. Three factors differentiate AI:
Scale: Consider a debt-collection system. When humans manually issue collection letters and handle appeals, the total potential harm is limited, ironically, by that very inefficiency. When these processes move to a large-scale automated system, even if its error rate is lower than a human's, the sheer scale can make the aggregate harm significantly higher.
Complexity: AI algorithms, unlike traditional systems, often function as 'black boxes', with internal operations that are complex and difficult to explain. This makes traditional auditing and accountability mechanisms less effective.
Autonomy: AI systems can autonomously identify and solve problems. For example, AI in autonomous vehicles decides how to navigate traffic without human intervention. This cedes some level of control and complicates how we establish checks and balances.
Addressing these complexities requires a framework that is both flexible and robust. We advocate a pattern-oriented approach to operationalising responsible AI. Patterns are simply reusable solutions that can be applied consistently across different problems and sectors. Unlike best practices touted as universally beneficial, patterns also capture the nuanced pros, cons and costs of a reusable solution, along with the contexts in which it applies. In a business setting, a governance pattern could set up a multi-layer oversight mechanism, while a process pattern could standardise ethical considerations throughout the product development cycle.
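As a minimal sketch of the idea, a catalogue entry records not just the solution but its trade-offs and applicable context. The field names and example values below are illustrative assumptions, not the actual schema of the Responsible AI Pattern Catalogue.

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """Illustrative catalogue entry: a reusable solution with trade-offs and context."""
    name: str
    category: str            # e.g. "governance", "process" or "product"
    context: str             # when the pattern applies
    solution: str            # the reusable practice itself
    benefits: list[str] = field(default_factory=list)
    drawbacks: list[str] = field(default_factory=list)
    costs: list[str] = field(default_factory=list)

oversight = Pattern(
    name="Multi-layer oversight",
    category="governance",
    context="AI systems with material impact on customers",
    solution="Layer board, management and team-level review of AI decisions",
    benefits=["Clear accountability at every level"],
    drawbacks=["Slower decision cycles"],
    costs=["Staffing and training for each oversight layer"],
)
```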
Figure 1: Overview of Responsible AI Pattern Catalogue (RAIC).
A comprehensive strategy requires multi-layer, multi-aspect patterns spanning industry norms, organisational policies and team-level practices. It also involves multiple perspectives: governance, process and product. These patterns are interconnected, feeding into a unified reference architecture that integrates the AI supply chain, system layer and operational frameworks.
Effective pattern implementation manages multiple, interrelated risks. Adopting one pattern might mitigate a particular risk but introduce another. Acknowledging these trade-offs allows us to link patterns in a way that addresses the broad spectrum of challenges inherent to responsible AI. Trust is the final layer, going beyond just system robustness. It involves diverse stakeholder engagement and transparent communication, creating an environment where technology is both reliable and socially accountable.
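A hedged sketch of the bookkeeping this implies follows; the pattern and risk names are hypothetical. Each adopted pattern records both the risks it mitigates and the risks it introduces, so that residual exposure stays visible and can be covered by linking in further patterns.

```python
# Hypothetical trade-off ledger: each pattern both mitigates and introduces risks.
adopted_patterns = {
    "Automated bias testing": {
        "mitigates": {"discriminatory outcomes"},
        "introduces": {"over-reliance on test coverage"},
    },
    "Human-in-the-loop review": {
        "mitigates": {"over-reliance on test coverage", "automation at harmful scale"},
        "introduces": {"reviewer fatigue"},
    },
}

def residual_risks(patterns: dict) -> set[str]:
    """Risks introduced by some pattern but not mitigated by any other."""
    mitigated = set().union(*(p["mitigates"] for p in patterns.values()))
    introduced = set().union(*(p["introduces"] for p in patterns.values()))
    return introduced - mitigated

print(residual_risks(adopted_patterns))  # {'reviewer fatigue'} -> needs a linked pattern
```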
Operationalising responsible AI is neither a box-ticking exercise nor a one-off initiative. It's an ongoing, interconnected endeavour that requires a pattern-oriented strategy, actively engaging multiple stakeholders across various governance levels and lifecycle stages. By methodically navigating these complexities, we can implement responsible AI in business with both a high level of governance confidence and technical soundness.
References
Lu, Q., Zhu, L., Whittle, J. and Xu, X., 2024. 'Overview of the Responsible AI Pattern Catalogue', in Responsible AI: Best Practices for Creating Trustworthy AI Systems. Addison-Wesley.