PROGRESS 2050: Toward a prosperous future for all Australians
Keys to successful secure AI adoption
Artificial Intelligence is transforming everyday life at a rapid pace, but realising its full potential in a way that is responsible, trustworthy and safe will require careful consideration and strategic implementation. While more organisations are looking to implement their own internal AI chatbots trained on company data, any new approaches require input from a multidisciplinary team across technology, risk, legal, security and change management.
As the generative AI revolution accelerates, businesses will need to consider risks, opportunities and use cases to unlock new possibilities for growth.
KPMG applies a Trusted AI strategic approach to designing, building, deploying and using AI solutions in a responsible and ethical manner. The Trusted AI framework sets out the principles and pillars that guide KPMG’s responsible development, procurement and use of AI.
The framework prompts the team to ask, “What are the potential human impacts of what we’re developing?” It ensures that fairness, transparency and sustainability are at the core of KPMG’s AI development, adoption and use.
Every generative AI solution at KPMG needs to be reviewed against the framework as part of the firm’s AI management system. That system includes a robust layer of governance, led by the Trusted AI council, which is formed from a cross-organisational group of senior stakeholders from across core business units and divisions. There is also an independent external member to provide an objective view.
KPMG Australia’s internal AI, KymChat, is a case study in the early adoption of a secure AI chatbot trained on company data. Employees can access the capabilities of ChatGPT in a protected environment, which has helped drive innovation across the firm.
In partnership with Microsoft, the KymChat team had early insight and input into the quickly evolving world of generative AI. They were able to learn, understand and play with the technology in a secure setting.
During beta testing, the team ensured all existing data protection measures were applied. The chat was clearly labelled with a ‘beta badge’ and every request was tracked, which allowed the team to work through any issues and identify gaps to address.
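The beta approach described above, labelling every response and logging every request so issues can be traced, can be sketched in a few lines. This is a hypothetical illustration: the `ChatLogEntry` fields, `answer_with_tracking` function and stand-in model are invented for this example, not KymChat's actual implementation.

```python
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class ChatLogEntry:
    """One tracked request/response pair from the beta rollout."""
    request_id: str
    timestamp: float
    prompt: str
    response: str
    beta: bool = True  # surfaced to users as the 'beta badge'

def answer_with_tracking(prompt: str, model_fn, log: list) -> dict:
    """Call the model, record the exchange, and tag the reply as beta."""
    response = model_fn(prompt)
    entry = ChatLogEntry(
        request_id=str(uuid.uuid4()),
        timestamp=time.time(),
        prompt=prompt,
        response=response,
    )
    log.append(asdict(entry))
    return {"text": response, "badge": "beta" if entry.beta else None}

# Usage with a stand-in model function:
log: list = []
reply = answer_with_tracking(
    "What is our leave policy?", lambda p: "Draft answer.", log
)
```

Because every exchange lands in the log with an ID and timestamp, the team can replay problem prompts and spot gaps in the model's coverage.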
More than 40,000 prompts were answered in the first eight weeks. To improve accuracy, the team created custom libraries of curated datasets, such as the tax precedence database, raising accuracy from 60 per cent to 94 per cent. Applying a specific subset of data to KymChat was key to building an AI solution that employees could trust.
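Grounding a chatbot in a curated dataset typically means retrieving the most relevant vetted passages and supplying them to the model as context, rather than relying on the base model alone. A minimal sketch, assuming a simple word-overlap ranking and an invented `tax_library` of example passages (not KPMG's actual data):

```python
def tokenize(text: str) -> set:
    """Lowercase bag-of-words tokenisation, enough for a toy ranker."""
    return set(text.lower().split())

def retrieve(query: str, library: list, k: int = 2) -> list:
    """Rank curated passages by word overlap with the query, return top k."""
    q = tokenize(query)
    scored = sorted(
        library, key=lambda doc: len(q & tokenize(doc)), reverse=True
    )
    return scored[:k]

# Illustrative stand-in for a curated library of vetted documents.
tax_library = [
    "Ruling 2021-04: deductions for home office expenses.",
    "Guidance note: GST treatment of digital services.",
    "Precedent: capital gains on inherited property.",
]

context = retrieve("home office expense deductions", tax_library)
# The retrieved passages would be prepended to the model prompt and
# cited back to the user so answers can be verified against sources.
```

A production system would use semantic embeddings rather than word overlap, but the design point is the same: restricting retrieval to a vetted corpus is what lets users trust, and verify, the answers.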
Staff can also rate the accuracy of responses and use citations for verification.
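Collecting accuracy ratings per response lets the team monitor quality over time. A minimal sketch, with invented function and field names (the actual rating scale and storage are assumptions):

```python
from collections import defaultdict

# Maps a response ID to the list of ratings it has received.
ratings = defaultdict(list)

def rate_response(response_id: str, score: int) -> None:
    """Record a 1-5 accuracy rating for a given response."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    ratings[response_id].append(score)

def average_rating(response_id: str) -> float:
    """Mean accuracy rating, or 0.0 if the response is unrated."""
    scores = ratings[response_id]
    return sum(scores) / len(scores) if scores else 0.0

# Usage: two staff members rate the same response.
rate_response("resp-001", 5)
rate_response("resp-001", 4)
```

Aggregated over many responses, these scores highlight which curated libraries or use cases need further tuning.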
With 10,000 unique users since its launch, the internal AI has been used to assist staff in conducting research; creating first drafts of emails, presentations and documents; and understanding policies.
The strategy for the AI chatbot is evolving based on lessons learnt from each use case and the rapid pace of AI technology development.
There are now several purpose-built versions of KymChat, including a tax version that helps to create the first draft of tax advice for review by KPMG subject matter experts. Other versions are being used to support compliance and obligation management.
KPMG is now helping other businesses implement their own version of KymChat through an accelerator solution that combines KymChat use cases, features, lessons learnt and bespoke consulting.
As part of the inaugural KPMG Human Rights Innovation Challenge, we’re also exploring how some of Australia's most impactful not-for-profit organisations can use AI to drive social change.
The rapid development of AI has prompted governments and industries to consider regulations and establish guidelines to de-risk AI adoption.
Existing laws covering issues such as consumer rights and security already apply to AI, while others, such as Australia's Privacy Act, are being updated to better address AI's unique challenges.
As AI becomes a bigger part of everyday life, it is also important to create new laws that support innovation and ensure AI is used safely.
Europe is at the forefront with its EU AI Act, a set of rules focused on managing the risks of AI. Australia is planning to follow a similar path, first with a voluntary AI safety standard and then with mandatory guardrails that organisations must follow to ensure AI is used safely.
Alongside these changes in the law, new international standards are also being introduced covering the requirements for trustworthy AI. ISO/IEC 42001 focuses on how organisations manage their AI systems, including risk management, AI system impact assessment, system lifecycle management and third-party suppliers.
Staying on the front foot with this evolving regulatory landscape is core to KPMG’s commitment to trustworthy AI.
Kath McNiff is the Content Lead at KPMG.