Practice Notes

Thought Piece

Being Realistic About Artificial Intelligence

Peter Williams asked Peter Charles and Michael Berns how we can be realistic about the potential of Artificial Intelligence and see through the current hype cycle.

"For the CFO two AI questions are crucial: what will the impact be on the business; and what impact could it have on the finance function?"

"The reality is that in terms of automation or even semi--automation most finance functions, even the largest and most sophisticated, are somewhere in the 1970s."

"AI offers the possibility of handing power back to knowledge workers, giving them insights they could not have previously hoped for and radically improving the quality of output and productivity."

CFOs should be assessing the impact of artificial intelligence (AI). As with any new technology, it is a question of separating the hype from the reality. For the CFO, two AI questions are crucial: what will the impact be on the business; and what impact could it have on the finance function?

There is a fair amount of scepticism about the impact of AI and about how real it can be; much of the talk around AI used to treat the development as a curse. Most sources agree that General Artificial Intelligence, in the sense of a truly intelligent machine at which one could throw a wide range of use cases, is still far away. However, an increasing number of Narrow AI solutions are being used and implemented in the corporate world.
A quick Google search suggests that tech companies should be honest about the jobs that AI will destroy. On the other hand, a growing body of research suggests that the impact won't be as bad as feared, or that AI might in fact create more jobs than it destroys.

Against the background of AI's potential, and its downside, the reality is that in terms of automation or even semi-automation most finance functions, even the largest and most sophisticated, are somewhere in the 1970s.
The truth of AI will be more nuanced than the headlines would suggest. The iterations of AI that we are seeing so far appear to suggest that this technology, well deployed, could do work which human beings either don't like doing or are not very good at doing, and perhaps the two overlap.
Examples include law firms checking whether a confidentiality agreement, a fairly standard piece of legal documentation, is fair. In this case a Narrow AI solution is used to empower humans: an intelligent assistant allows the expert to focus on exceptions and edge cases.

The confidentiality agreement example is a good one: machines work really well at ferreting through large bundles of data, while human beings find the task difficult and far from rewarding. How could the confidentiality agreement example translate across to the work of a finance function? Shared service centres (SSCs) are big data factories. One of the problems of running an SSC is getting the reporting right: human beings aren't always too sure what other human beings, their bosses or their clients, want to know.
The default position is to report the big numbers, but that may not always be right. For instance, one division of a business may have a collection problem. If the manual review only looks at the big number, the small, unusual number may be missed.
AI could crunch through all the data and report on the unusual number, whether big or small, highlighting a problem that a human analytical review, however careful, may have missed.
Such use of AI protects against risks that have a low probability of occurring but carry high consequences if they do.
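As a rough illustration of the kind of exception reporting described above, the sketch below flags overdue balances that sit far outside a division's usual pattern. The data, column names and threshold are invented for the example; a production system would use richer data and a more sophisticated model.

```python
# Illustrative sketch only: flag unusual collection figures per division.
# The sample data, column names and z-score threshold are assumptions.
import pandas as pd

def flag_unusual_balances(ledger: pd.DataFrame, threshold: float = 3.0) -> pd.DataFrame:
    """Return rows whose overdue balance is an outlier within its division."""
    # Per-division mean and standard deviation stand in for a real model.
    stats = ledger.groupby("division")["overdue_balance"].agg(["mean", "std"])
    joined = ledger.join(stats, on="division")
    z = (joined["overdue_balance"] - joined["mean"]) / joined["std"]
    return ledger[z.abs().fillna(0) > threshold]

if __name__ == "__main__":
    sample = pd.DataFrame({
        "division": ["North"] * 5 + ["South"] * 5,
        "overdue_balance": [100, 110, 95, 105, 990,   # one unusually large balance
                            20, 22, 19, 21, 20],
    })
    print(flag_unusual_balances(sample, threshold=1.5))
```

The point is not the statistics but the workflow: the machine scans every number and surfaces the exceptions, leaving the human reviewer to judge whether the flagged item really is a collection problem.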
Currently in the financial world Tier 1 banks are taking AI seriously as they build up their big data infrastructure and platforms. Much of this is driven by regulation and the cost of compliance.
Regulators in the US and Europe have imposed $342 billion of fines on banks since 2009 for misconduct, including violation of anti-money laundering rules, and that is likely to top $400 billion by 2020.


Too often workers waste time wandering helplessly through reams of data rather than doing the real job of interpretation and analysis, which leads to better decision making.


These massive fines have driven innovation and technology. Among financial professionals it is perhaps auditors who face similar pressure to the banks, with large fines being handed out where audit standards have been found to have fallen short.
No serious question exists over whether AI works; we can have a high level of confidence that what is flagged is good, and another few years should see accuracy rates reaching the 90%-95% level. Even at today's accuracy levels of 70%-80%, depending on the use case and data quality, banks are seeing the regulatory benefit of using AI to spot bad conduct, ultimately changing behaviours and avoiding fines.
Central banks and regulators are also exploring AI in specific Accelerators and Tech Sprints. For instance, the Bank of England has run several iterations of its Fintech Accelerator, with example use cases such as using a machine learning solution to ingest and classify large amounts of weakly structured data and draw out sentiment.
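As an illustrative sketch of that kind of use case, and not a description of the Bank of England's actual implementation, the snippet below trains a simple bag-of-words classifier to label short, weakly structured text snippets as positive or negative. The tiny training set and labels are invented for the example.

```python
# Illustrative sketch only: classify weakly structured text by sentiment.
# The training texts and labels are made up; a real deployment would use
# far more data and likely a stronger model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "complaints rising sharply this quarter",
    "customers unhappy with delayed settlements",
    "strong results and positive client feedback",
    "collections improving and backlog cleared",
]
train_labels = ["negative", "negative", "positive", "positive"]

# TF-IDF features plus logistic regression: a common baseline for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["backlog of unresolved complaints growing"]))
```

The same pattern scales from a toy example like this to ingesting large volumes of documents, with the classifier's labels feeding dashboards or alerts rather than a print statement.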

Gaming, healthcare and even the third sector are using AI. Actor-investor Ashton Kutcher's tech non-profit Thorn is using AI and other emerging technologies to combat human trafficking and child abuse.

One focus of AI is the development of the intelligent assistant: smart speakers have already become an accepted part of many homes, and some companies are looking to bring this into the corporate world as an AI-enabled diary management and booking service.
Too often workers waste time wandering helplessly through reams of data rather than doing the real job of interpretation and analysis, which leads to better decision making.
AI offers the possibility of handing power back to knowledge workers, giving them insights they could not have previously hoped for and so radically improving the quality of output and productivity.

**Michael Berns** is an AI Thought Leader with extensive international experience in leading rapidly growing businesses and client engagements. He spent a decade working on the compliance and risk management side before disrupting that space after his Executive MBA at London Business School.

Peter Williams
