Opinion article

Regulating AI will take ambition and global cooperation

SAS Chief Privacy Strategist for Europe and Asia-Pacific, Kalliopi Spyridaki, writes that to regulate artificial intelligence effectively, countries need to start small, think big and work together.

Societal values and norms should filter through all aspects of our lives. That’s why it is important that regulation keeps up with the development of technology and data use.

The recent pandemic has demonstrated both the benefits and risks of ubiquitous technologies. It has accelerated and intensified the AI regulatory debate, introducing new considerations related to commercial versus government control of data, as well as the invasiveness and potential misuse of powerful technology. The debate also takes in issues of global competitiveness, national security and digital sovereignty, and this is driving the start of a race among governments worldwide to regulate AI.

The global picture today

AI’s autonomy, often combined with its opacity and complexity, arguably increases the risks it presents and raises regulatory concerns. Europe, Japan and Canada are like-minded thought leaders in this space. These countries have recently adopted AI ethics principles and have started regulating specific aspects of AI, such as algorithmic ranking and AI decision-making.

Australia recently joined a wider club of countries in this arena, the Global Partnership on Artificial Intelligence (GPAI). GPAI’s secretariat is hosted by the Organisation for Economic Co-operation and Development (OECD), which has produced important work on AI principles with the consensus of major economies globally.

The first horizontal law with mandatory rules for AI development and use is expected to be proposed in Europe at the end of 2020. The European Union is reflecting on whether to act big or small this time, against the background of the global success of the General Data Protection Regulation (GDPR), which created an ongoing ripple effect of privacy laws in every corner of the world. It will be interesting to observe the approach that China and the US eventually take on AI policy.

The possibility of a common global approach is challenged by diversity in culture and societal values. However, remarkably, all AI principles frameworks today have at least two characteristics in common. Firstly, they call for human-centric AI aiming to protect human dignity, safety and autonomy. Secondly, they pursue AI uptake by promoting trust through principles of transparency, fairness, explicability and accountability.

How should governments navigate this potential turning point in history, and how can regulation protect societal values and address the risks while enabling further technological innovation and the transformative potential of AI to better our world?
 

Think big: A Charter of AI ethics

We need an AI ethics code to guide the design and implementation of AI: a Charter of AI ethics that safeguards human rights and societal values, and that can be global and future-proof. The Charter of Fundamental Rights of the European Union or UNESCO’s Universal Declaration on Bioethics and Human Rights can inspire this endeavour.
 

Start small: Targeted AI regulation

A one-size-fits-all regulatory approach is not appropriate for AI, given the breadth and range of AI technologies. Many AI applications today are trivial and need not be disrupted by prescriptive regulation.
 

Risk-based rules

The type and level of risk that specific AI uses pose to individuals’ safety, well-being, rights and freedoms is a useful starting point. For instance, AI rules may need to focus specifically on liability for autonomous cars; physical harm related to diagnostic tools in healthcare; discrimination in the context of law enforcement; privacy protections for AI uses in smart homes; or algorithmic price-fixing and collusion in an antitrust context.
 

Actionable and enforceable rules

AI regulation should aim to guide organisations on how to operationalise AI ethics principles and ensure accountable AI. In practice, this means flexible, principles-based rules on the governance of AI models and AI systems, as well as on data quality.
 

Work together: Avoid the regulatory race

People, planet and the economy will benefit more from AI in the long run if governments do not pursue a regulatory race. Despite the natural barriers to cross-cultural cooperation and the inherent competition between countries, the nature and transformative potential of AI require global regulatory cooperation.



 
About the author

Kalliopi Spyridaki

Kalliopi Spyridaki is Chief Privacy Strategist at SAS. She joined SAS in 2007. In her role today, Kalliopi focuses on public policy and legal compliance in Europe and Asia Pacific.
 
Kalliopi works with regulators and policymakers to help shape laws and government policies that impact SAS and its customers, related to privacy, artificial intelligence and the wide spectrum of data governance. She also assists with SAS’s privacy compliance program, focusing particularly on the Asia Pacific region. For her, the most intriguing part of the work is striving to bridge the gaps between the making of a law, its implementation, and the rapid pace of technology evolution with its transformative power for business and society.
 
Kalliopi has lived and worked in Brussels since 2002. Before joining SAS, she worked on data protection law, competition law and various aspects of consumer law. Her work experience includes positions in European and Greek law firms, an industry trade association, a public affairs consulting firm, the European Commission and the Greek Ministry of Foreign Affairs.