Should Operational Risk Be so Dazzled by AI?

There is considerable hype around artificial intelligence (AI) within the governance, risk and compliance (GRC) technology space today. Is it justified? Is the discipline looking in the right direction as it seeks to find solutions to the challenges that confront it?

In a new article, Chase Cooper explores these questions and concludes that AI itself may not really be giving the discipline much that is new. However, the use of Bayesian reasoning within the right technology framework could begin to tackle one of operational risk’s perennial challenges.

Understanding uncertainty

For operational risk managers, uncertain data is an ongoing issue. The problem is so acute that the Basel Committee on Banking Supervision recently rewrote the way banks calculate their regulatory capital requirements for operational risk. After prolonged argument over the impact uncertain data had on those calculations, a variety of stakeholders concluded that the data simply wasn't reliable enough to base capital on.

So operational risk executives are used to being told that their data is too uncertain to do much with – particularly by those in other, more quantitatively oriented risk disciplines. It's little wonder that the op risk discipline has long hoped for some kind of magic bullet to improve the situation. AI technology – often in the form of its Big Data applications – has attracted considerable attention as a result.

The idea – roughly speaking – is that if AI can comb through enough Big Data, it will discover patterns that could be considered “certain data”, shedding new and improved light on operational risks. The industry is, at the moment, nowhere near attaining this dream.

Instead, much of the GRC software currently marketed as "AI" is little more than what has already existed for some time – many tools are simply look-up tables or automated workflows. This doesn't really do justice to the term AI.

Embracing Bayesian reasoning

The goal of risk management software development should not be to create something that can be labelled an “AI application.” Instead, it should be to create a solution that will deliver intelligence and insight superior to what humans can achieve on their own. The key word in AI is “Intelligence”, not “Artificial”.

Bayesianism is a discipline that accepts and embraces uncertainty. While Bayesian networks proved too unwieldy to use within risk management, Bayesian reasoning is different: it uses the tools of Bayesianism without making a network map necessary. Incorporated into a Risk Intelligence approach, it can help operational risk managers transform uncertain data such as risk and control self-assessment (RCSA) results into information that can be used in decision-making across the organization.
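To give a flavour of what Bayesian reasoning means in practice, the sketch below shows one of its simplest forms: a conjugate Beta-Binomial update of a belief about a control's failure rate as new test results arrive. This is an illustrative example only – the function name, prior, and figures are hypothetical, not drawn from any particular Risk Intelligence product.

```python
def update_failure_belief(prior_alpha, prior_beta, failures, tests):
    """Update a Beta(alpha, beta) belief about a control's failure rate
    after observing `failures` failures in `tests` control tests.

    With a Beta prior and Binomial evidence, the posterior is again a
    Beta distribution - the update is just two additions.
    """
    post_alpha = prior_alpha + failures
    post_beta = prior_beta + (tests - failures)
    expected_rate = post_alpha / (post_alpha + post_beta)
    return post_alpha, post_beta, expected_rate

# Hypothetical prior: we roughly expect a 10% failure rate, Beta(1, 9).
# New (hypothetical) evidence: 3 failures observed in 20 control tests.
a, b, rate = update_failure_belief(1, 9, failures=3, tests=20)
print(f"posterior Beta({a}, {b}), expected failure rate {rate:.2f}")
```

The point is that the uncertain prior view is not discarded; it is weighed against the evidence, and the posterior quantifies how much the new data should shift the organisation's assessment.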

This new article looks at how risk managers should begin to think differently about the challenges operational risk faces as a discipline, and how technology could be used to solve those challenges. It proposes that while it may be too soon for AI to provide any real answers, Bayesian reasoning – harnessed to technology and fed with operational risk data – could give op risk teams a fresh approach to understanding and managing the risks their organizations face.

Download the article here, or fill in the form below.