Peter Williams asked Peter Charles and Michael Berns how we can be realistic about the potential of Artificial Intelligence and see through the current hype cycle.
CFOs should be assessing the impact of artificial intelligence (AI). As with any new technology, the task is separating the hype from the reality. For the CFO, two AI questions are crucial: what will the impact be on the business, and what impact could it have on the finance function?
There is a fair amount of scepticism about AI's impact and about how real it can be, and much of the talk around AI used to cast the technology as a curse. Most sources agree that General Artificial Intelligence, a truly intelligent machine at which one could throw a wide range of use cases, is still far away. However, an increasing number of Narrow AI solutions are being used and implemented in the corporate world.
A quick Google search suggests that tech companies should be honest about the jobs AI will destroy. On the other hand, a growing body of research suggests that the impact won't be as bad as feared, or that AI might in fact create more jobs than it destroys.
Against the background of AI's potential and its downside, the reality is that, in terms of automation or even semi-automation, most finance functions, even the largest and most sophisticated, are somewhere in the 1970s.
The truth of AI will be more nuanced than the headlines would suggest. The iterations of AI that we are seeing so far appear to suggest that this technology, well deployed, could do work which human beings either don't like doing or are not very good at doing — and perhaps the two overlap.
One example is law firms using AI to check whether a confidentiality agreement is fair, this being fairly standard legal documentation. Here a Narrow AI solution is used to empower humans: an intelligent assistant allows the expert to focus on exceptions and edge cases.
The confidentiality agreement example is a good one: machines work really well at ferreting through large bundles of data, while human beings find the task difficult and far from rewarding. How could the confidentiality agreement example translate across to the work of a finance function? Shared service centres (SSCs) are big data factories. One of the problems of running an SSC is getting the reporting right: human beings aren't always too sure what other human beings, their bosses or their clients, want to know.
The default position is to report the big numbers, but that may not always be right. For instance, one division of a business may have a collection problem. If the manual review only looks at the big number, the small, unusual number may be missed.
AI could crunch through all the data and could report on the unusual number — whether big or small — which highlights a problem which a human analytical review, however careful, may have missed.
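The idea of flagging the unusual number, whether big or small, can be illustrated with a minimal sketch. The divisional figures, metric, and threshold below are invented for illustration, and a real deployment would use far richer models and actual ledger data; the point is simply that a statistical screen catches an outlier in a small division that a review of headline totals would miss.

```python
def robust_z_scores(values):
    """Return a robust z-score for each value, based on the median
    and the median absolute deviation (MAD) rather than the mean,
    so one extreme figure does not mask itself."""
    n = len(values)
    ordered = sorted(values)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    deviations = sorted(abs(v - median) for v in values)
    mad = (deviations[n // 2] if n % 2 else
           (deviations[n // 2 - 1] + deviations[n // 2]) / 2)
    if mad == 0:
        return [0.0 for _ in values]
    # 1.4826 scales MAD to match the standard deviation for normal data.
    return [(v - median) / (1.4826 * mad) for v in values]

# Hypothetical days-sales-outstanding figures by division: the small
# "Export" division has the collection problem.
dso_by_division = {
    "North": 42, "South": 44, "East": 41, "West": 43,
    "Central": 45, "Export": 78,
}

scores = robust_z_scores(list(dso_by_division.values()))
flagged = [d for d, z in zip(dso_by_division, scores) if abs(z) > 3.0]
print(flagged)  # the Export division stands out
```

A human review of the group total would likely show collections as broadly healthy; the screen surfaces the one division that is not.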
Such use of AI protects against risks that have a low probability of occurring but carry high consequences when they do.
Currently in the financial world Tier 1 banks are taking AI seriously as they build up their big data infrastructure and platforms. Much of this is driven by regulation and the cost of compliance.
Regulators in the US and Europe have imposed $342 billion of fines on banks since 2009 for misconduct, including violation of anti-money laundering rules, and that is likely to top $400 billion by 2020.
Peter Williams
A Consulting Interim Team
Peter Charles Limited