How to build trust in AI

The accountancy profession can assist in leading on emerging AI issues, states Alistair Brisbourne, Head of Technology in the ACCA Policy & Insights team.

Artificial intelligence (AI) is reinventing the world as we know it. Over the next few years, it is set to transform virtually every aspect of our lives, from work and education through to health and transportation. For businesses, AI presents significant opportunities to enhance processes, achieve efficiencies and deliver a better customer experience. Yet it also poses major challenges. These challenges relate to important issues such as accountability, bias and discrimination, data use and reliability.

ACCA’s new insights series, titled AI Monitor, explores the rise of AI through the lens of finance professionals, examining key issues including data strategies, risk and controls, sustainability and talent. The first report in the series, Trust in an AI-enabled accountancy profession, focuses specifically on how AI could affect trust.

Trusted AI

Accountants are expected to uphold high ethical standards. So, it’s natural that they should play an important role in helping to manage the risks associated with AI and ensure that AI tools are used in a trustworthy manner. This is particularly true in the context of finance, but it also applies to how their organisation develops and deploys AI more broadly. Accountants can work with their organisation’s technology team, as well as other teams that are using AI models, to ensure the technologies are being used ethically, effectively and safely.

When it comes to traditional accounting and financial reporting practices, AI can present a variety of trust-related issues. For example, trust could be undermined if AI models are used to make consequential business forecasts and recommendations without clear explanations of their rationale. In the case of audit and assurance, there is a risk that auditors over-rely on AI-dependent procedures and don’t apply sufficient human professional scepticism and judgement.

Also, while AI tools can be invaluable in areas such as fraud detection, risk assessment and compliance monitoring, they may still be biased or make errors, potentially flagging false positives or missing relevant issues. Further challenges are presented by AI-powered virtual assistants, which are starting to be used by finance professionals. These chatbots could provide inaccurate or inappropriate responses, potentially harming an organisation’s reputation for competence and reliability.

The research highlighted that, to mitigate the risks associated with AI, it’s necessary to balance technical and social considerations. On the technical side, AI systems should be transparent and explainable, and tested to reduce the risks of bias and errors. On the social side, human judgment and accountability are key to ensuring that AI appropriately manages tasks and decision-making. Additionally, since trust is often built through human interaction and relationships, AI tools should not be deployed in ways that significantly reduce human-to-human engagement.  

Know the use cases

There are a host of different use cases for AI, both within finance and across the broader business environment. The complexities and concerns associated with the use of AI will vary according to the use case and the specific business context in which the AI tool is deployed, and so will the best strategies for mitigating risk. For example, data quality checks and system monitoring are among the strategies that can be used to mitigate the risks associated with AI-driven invoice-processing tools. Meanwhile, a human review of outputs is critical to ensuring that AI fraud detection tools are operating effectively.
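To make the idea of data quality checks concrete, the sketch below shows one way a finance team might validate invoice records before they reach an AI-driven processing tool. It is an illustrative example only: the field names, checks and routing rule are hypothetical, not drawn from any specific product or standard.

```python
# Illustrative sketch: a minimal data-quality gate for invoice records
# ahead of an AI-driven invoice-processing tool. Field names and checks
# are hypothetical examples.

def check_invoice(record):
    """Return a list of data-quality issues found in a single invoice record."""
    issues = []
    # Required fields must be present and non-empty.
    for field in ["invoice_id", "supplier", "amount", "currency", "date"]:
        if not record.get(field):
            issues.append(f"missing field: {field}")
    # Amounts should be positive numbers.
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount <= 0:
        issues.append("non-positive amount")
    return issues

invoices = [
    {"invoice_id": "INV-001", "supplier": "Acme", "amount": 120.0,
     "currency": "GBP", "date": "2024-05-01"},
    {"invoice_id": "INV-002", "supplier": "", "amount": -5.0,
     "currency": "GBP", "date": "2024-05-02"},
]
# Records with issues can be routed to human review rather than fed
# straight into the AI pipeline.
flagged = {inv["invoice_id"]: check_invoice(inv) for inv in invoices}
```

The design choice here mirrors the point in the text: automated checks catch obvious data problems, while anything flagged is escalated to a human rather than processed automatically.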

Governance is the foundation for appropriate use of AI within organisations. So, governance mechanisms should reflect how AI is used in practice, with policies considering the common risks and specific challenges that are unique to the organisation, the types of AI employed, and the information derived from these applications.

Data governance is particularly critical since the data used to train and operate AI models has a huge influence on their performance and fairness. Accountancy and finance organisations therefore need robust data lineage, quality controls, security measures and access policies in place. Additionally, model governance is key. Clear policies around AI model validation, monitoring and retraining can help to ensure performance remains reliable over time.

Since auditability is a foundation for trust, it is vital to have detailed audit trails and decision logs that allow for the examination of individual AI outcomes. Machine Learning Operations practices (MLOps) – such as data and model version histories and dashboards that track AI model performance – can embed governance standards into the actual implementation and running of AI systems. 
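The audit trails and decision logs described above can be sketched very simply in code. The example below is a hedged illustration, not a specific MLOps product: it records each model decision alongside the model version and a hash of the inputs, so that individual AI outcomes can be examined later. All names are hypothetical.

```python
# Illustrative sketch of a decision log for AI model outputs.
# Each entry captures when the decision was made, which model version
# produced it, a hash of the inputs, and the output itself.

import hashlib
import json
from datetime import datetime, timezone

decision_log = []

def log_decision(model_version, inputs, output):
    """Append an auditable record of one model decision and return it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashing the (canonically serialised) inputs lets auditors verify
        # later that a logged decision corresponds to a given input record.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    decision_log.append(entry)
    return entry

entry = log_decision(
    "fraud-model-v1.2",
    {"invoice_id": "INV-007", "amount": 950},
    "flagged for review",
)
```

Versioning the model identifier in each entry is what links the log back to the model version histories mentioned above, so a questioned outcome can be traced to the exact model that produced it.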

Talking talent

Talent enables the development of trusted AI, which is why it will be the focus of ACCA’s next AI Monitor report in the autumn. Going forward, accountants will need robust data and AI literacy skills to understand the different types of AI tools that exist, which use cases they can be applied to, and which outcomes can be achieved. They should also know how to develop prototypes and deploy those prototyped models into a real-world setting, putting appropriate processes and governance in place.

ACCA’s research highlights that more must be done to equip accountants with AI literacy skills, especially literacy skills that relate to the specific use cases for AI within their organisation. Even among organisations that are developing AI at a relatively rapid pace, only around a third have developed plans to upskill their staff on AI tools, as well as on the risks associated with different use cases.

Positive social impact

Discussions around the impact of AI on finance often centre on AI’s potential to automate mundane tasks, taking them off accountants’ hands. The real impact of AI on the profession is more far-reaching than that, however. In the era of AI, accountants will be responsible for ensuring that data is used in the right way, for the right reasons, to genuinely support the organisation in achieving its objectives.

To get the most out of AI, accountants should aim to create value for their organisation that extends well beyond its profit and loss statement. They should not just use AI to enhance processes and achieve efficiencies – important though this is. They should also look to deploy AI in ways that generate trust, leave a positive social impact, and help build a better world.

Key recommendations

Accountants can support their organisations to develop and deploy trusted AI systems by ensuring that appropriate AI governance and risk management are in place. The AI Monitor suggests these three key actions:

  • Develop an AI governance framework. Beginning with critical uses, finance professionals should take steps to establish clear policies, oversight and governance practices within their organisation. 
  • Invest in AI literacy and skills development. Undertake education and training to critically evaluate AI outputs, communicate clearly with key stakeholders, and make informed decisions.
  • Collaborate via cross-functional teams. Finance professionals should actively engage with IT, data science, legal and risk management teams.

Main image: Alistair Brisbourne, head of technology, ACCA Policy & Insights team