From Siri to self-driving cars, artificial intelligence is becoming an ever-more-present partner in countless aspects of life, and banking is no different. Even legacy banks are getting in on the act.
While public-facing apps, such as troubleshooting chatbots, may be the most visible example of AI in the banking sector, the biggest area in which banks are putting the technology to use is managing risk. Indeed, as the technology improves, it’s becoming not only desirable but essential to the secure running of modern banks. But relentless as the march of AI may seem, there are numerous obstacles it still has to overcome.
How AI manages risk
To understand these obstacles, we first have to look at just what AI is doing behind the scenes to keep risk at reasonable levels.
- Natural language processing software allows traders, investors and financial advisors to study historical trading data and assess lending risk by quickly and thoroughly analyzing the backgrounds of potential customers
- AI can crunch the vast quantities of complex data needed to aid investigations into fraud and cyber security
- It aids with regulatory compliance in multiple territories by sifting through thousands of regulatory documents
This isn’t just abstract potential. There are already plenty of concrete examples of big banks granting AI an ever-more central role in their operations. JP Morgan Chase is using machine learning and natural language processing to help identify which clients are best positioned for follow-on equity offerings. Citibank has made multiple investments in AI firms, including the vendor Feedzai, which monitors customer payments for deviations from normal activity in order to watch for fraud. BNY Mellon, meanwhile, uses robotic process automation that integrates AI to ensure the smooth running of transfers – whether by automatically responding to audits or by correcting formatting and data mistakes. So far, it is said to have generated $300,000 in annual savings.
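In its simplest statistical form, the deviation-from-normal-activity check described above amounts to comparing a new payment against a customer's history. The sketch below is a minimal illustration of that idea, not Feedzai's actual method; the function name and the three-standard-deviation threshold are hypothetical choices for demonstration:

```python
from statistics import mean, stdev

def flag_anomaly(history, new_amount, threshold=3.0):
    """Return True if a payment deviates sharply from a customer's history.

    history: past payment amounts for one customer (illustrative data).
    A payment is flagged when it lies more than `threshold` sample
    standard deviations from the historical mean.
    """
    if len(history) < 2:
        return False  # too little data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu  # identical history: any change stands out
    z = abs(new_amount - mu) / sigma
    return z > threshold

# A customer who usually pays around 50 suddenly sends 5,000
past = [45.0, 52.0, 49.5, 55.0, 48.0, 51.0]
print(flag_anomaly(past, 5000.0))  # True: flagged for review
print(flag_anomaly(past, 52.0))   # False: consistent with normal activity
```

Production systems learn far richer behavioural profiles (merchant, time of day, location, velocity), but the core idea – score each transaction against a model of the customer's normal activity and flag outliers – is the same.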
So far so financially prudent. But, of course, change is seldom quite so straightforward.
Issues of talent and trust
- When it comes to AI, the talent pool of programmers is relatively small, and the trouble is that many are more attracted to out-and-out tech companies than to banks. These companies often provide showier public-facing services, and with a higher profit-per-employee ratio, they can offer salaries with which even wealthy banks struggle to compete
- There’s a fear AI will result in job losses, as lots of back room, number-crunching staff will no longer be needed. Banks like to refer to AI as an “augmenting” rather than replacing force, but the reality is that banks – especially legacy banks – currently have thousands of employees doing precisely the type of tasks that AI can be used to compute
- “Validation frameworks” for AI are insufficient – which is to say that, ironically, we cannot perfectly judge the risks that AI itself might pose. The flash crash of 2016 is an example of this. Despite vast investment in the technology, AI still presents certain unknowns
- AI covers a lot of fields and technologies. Legislating for these is tricky, as it’s nearly impossible to understand their full breadth. AI often works like a black box: we don’t know exactly what’s going on inside the system, but it works and is efficient. That introduces an element of trust, which is, of course, difficult to quantify and, therefore, to legislate. Furthermore, legislation can slow down such an innovative, fast-moving technology, reducing its efficiency. As a result, legislation is currently applied in small touches according to proven risks.
Despite these obstacles, however, it’s a foregone conclusion among analysts that AI will shape the future of banking, with McKinsey Global claiming it will generate value of more than $250 billion for the industry. Banks have to adapt.
Clearing AI obstacles
In many respects, clearing these obstacles in order to use AI to regulate risk comes down to restructuring – pushing money into certain areas of a bank’s operations, which necessarily means drawing it away from others.
Financial software providers Ayasdi and Feedzai are already attracting and holding on to genuine AI talent, and as the job market changes, we’re sure to see an increasing number of talented programmers graduating to fill similar positions.
Similarly, “validation frameworks” are being better negotiated every day. With every setback, we learn how to avoid the same mistakes in the future. As is often the case, new tech seems like it should be deployed at the speed it is developed, but the reality is that checks and balances slow down progress. This does not mean, though, that they stop it. Working with regulators – as Ayasdi did to help pass the Federal Reserve’s Comprehensive Capital Analysis and Review process – will also slowly help make the AI revolution an everyday reality.
The road to an AI future
The obstacles facing the use of AI in managing risk for banks are not dissimilar to many of those facing the autonomous car revolution. In both cases, there’s an instinctive mistrust that must be overcome. Self-driving cars have to prove themselves much safer than their human counterparts to be accepted. Similarly, the current regulations that guard against risk by limiting AI are actually opening banks up to more risk.
If the technology is to fulfil its promise of taking a pivotal role in the banking sector, AI will have to wait for trust to emerge, not just in finance but in all areas of society. This will require an improvement of the technology across multiple sectors, its normalization in our daily lives and the active work of manufacturers to convince regulators of its safety. As this is gradually accomplished, banks will be able to further secure their operations and improve their customers’ experiences.