
Breakingviews - Bank of England’s AI approach will toughen up

LONDON (Reuters Breakingviews) - The Bank of England last year picked artificial-intelligence pioneer Alan Turing as the face of its new 50-pound note. It’s an apt choice given the central bank will quickly have to figure out how to ensure that the spread of smart robots throughout finance is a force for good rather than a destabilising influence.

A tour bus passes the Bank of England in London, Britain, August 1, 2018.

Computer programmes with humanlike problem-solving skills are already a fixture of the UK financial system. Two-thirds of the companies the BoE surveyed make use of such systems, Tom Mutton, the central bank’s fintech director, told Breakingviews in an interview. Banks and insurers are further ahead than most. They have built anti-money-laundering programmes that scan and interpret documents, or ones that help weigh up the risk of underwriting loans or insurance policies.

The central bank welcomes such efforts. One reason is that deploying AI might cut costs in financial services, and the savings could be passed on to consumers. Boston Consulting Group, for example, reckons that by 2027 AI software may reduce the administrative grunt-work required of staff by 2.4 hours a day per employee in banking and by 2.9 hours at capital-markets firms. Another reason is that machines have turned out to be better than humans at spotting suspicious transactions. AI could help reduce the 1.2 billion pounds stolen through fraud and scams each year, based on UK Finance data.

Artificial intelligence may also help people who rely on informal credit to regain access to the banking system. Consumers with patchy credit histories may be turned away by mainstream lenders and forced to resort to high-cost payday loans. But AI might offer a fairer representation of their credit histories and give these people a shot at taking out a normal loan, according to Mutton.

There’s some evidence to support the thesis. AI models can be better at assessing creditworthiness than traditional ones, according to a recent Bank for International Settlements study that was based on data from a Chinese fintech firm. Because AI models can incorporate a range of data, such as phone-bill payments and other transactions, they can help people who might slip through the cracks of the traditional credit-scoring process.
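
To make the mechanism concrete, here is a minimal Python sketch built on entirely synthetic data and scikit-learn, rather than anything drawn from the BIS study itself. The “alternative” features, such as the share of phone bills paid on time, are hypothetical stand-ins; the point is simply that a borrower with a thin formal credit file can be ranked more accurately when behavioural data supplements a sparse history.

# Illustrative sketch: does adding "alternative" data (e.g. phone-bill payment
# history) improve credit-risk prediction for thin-file borrowers?
# All data is synthetic and the feature names are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Traditional bureau-style feature: years of formal credit history (often near
# zero for people outside the banking system).
credit_history_years = rng.exponential(scale=2.0, size=n)

# Hypothetical alternative features: share of phone bills paid on time and a
# rough measure of how regular the applicant's transaction patterns are.
phone_bills_on_time = rng.beta(5, 2, size=n)
txn_regularity = rng.beta(4, 3, size=n)

# Synthetic "true" default risk, driven partly by behaviour the bureau can't see.
logit = -1.0 - 0.3 * credit_history_years - 2.0 * phone_bills_on_time - 1.0 * txn_regularity
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X_traditional = credit_history_years.reshape(-1, 1)
X_augmented = np.column_stack([credit_history_years, phone_bills_on_time, txn_regularity])

for name, X in [("traditional only", X_traditional), ("with alternative data", X_augmented)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:>22}: AUC = {auc:.3f}")

In this toy setup the extra behavioural features lift the model’s ability to rank borrowers; with real data the gain would depend entirely on data quality, coverage and consent.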

ROBOT OVERLORDS

But for all the potential benefits, AI presents some worrying risks. Those are likely to be aired at a forum bringing together the public and private sectors, which the BoE and the Financial Conduct Authority will launch in March. One potential issue is the extent to which AI models influence humans’ decisions and willingness to take risks. That’s something regulators would be interested in regardless of the technology being used. But closer attention is warranted if, for example, banks increasingly defer credit assessments to robots and are lulled into a false sense of security about the safety of their loans.

Another possible problem is what Mutton termed “pro-cyclicality and herding”. Different AI trading programmes may be developed using similar data sets. The resulting algorithms might then choose to buy and sell the same securities at the same time, potentially amplifying swings in asset prices and worsening liquidity shortages during market panics. A 2017 Financial Stability Board paper pointed out that AI algorithms could make decisions that humans find incomprehensible, and therefore difficult to unpick once acted on.
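
A toy simulation, again synthetic and not drawn from the FSB paper, shows how such herding can arise: two models fitted independently to overlapping slices of the same market history end up issuing near-identical buy and sell signals.

# Toy illustration of "herding": two independently built trading models fitted
# to largely overlapping data end up issuing near-identical buy/sell signals,
# so they would trade the same securities at the same time. Purely synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_days, n_features = 2_000, 5

features = rng.normal(size=(n_days, n_features))           # shared market data
true_weights = rng.normal(size=n_features)
next_day_return = features @ true_weights + rng.normal(scale=0.5, size=n_days)

# Each firm samples its own training window, but from the same underlying history.
idx_a = rng.choice(n_days, size=1_500, replace=False)
idx_b = rng.choice(n_days, size=1_500, replace=False)

model_a = LinearRegression().fit(features[idx_a], next_day_return[idx_a])
model_b = LinearRegression().fit(features[idx_b], next_day_return[idx_b])

# Trade rule: buy (+1) if the predicted return is positive, otherwise sell (-1).
signals_a = np.sign(model_a.predict(features))
signals_b = np.sign(model_b.predict(features))

agreement = (signals_a == signals_b).mean()
print(f"Days on which both models trade in the same direction: {agreement:.1%}")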

Mutton also says that AI algorithms have the potential to engage in what regulators call “market abuse”, for example by frontrunning or spoofing trades. These are illegal attempts to gain an edge in financial markets by trading on private information or feigning interest in a security to stoke more demand. Equally troubling were the findings of researchers at the University of Bologna who looked at AI algorithms that were taught to optimise the price of consumer goods in order to maximise profit. Through trial and error, and without directly communicating, the algorithms ended up colluding with each other to reduce competition and boost their takings.
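
The flavour of that experiment can be reproduced in a simplified sketch, which is not the Bologna team’s actual setup: two Q-learning agents repeatedly set prices, never communicate, observe only last period’s prices and their own profits, and often end up sustaining prices above the competitive level.

# Simplified illustration of algorithmic collusion: two Q-learning agents set
# prices in a repeated game. Demand goes to the cheaper seller; ties split it.
# This is a toy model, not the richer logit-demand setup used in the research.
import itertools
import numpy as np

rng = np.random.default_rng(2)
prices = np.array([1.0, 1.5, 2.0, 2.5, 3.0])    # allowed price levels
n_actions = len(prices)
states = list(itertools.product(range(n_actions), range(n_actions)))
state_index = {s: i for i, s in enumerate(states)}

def profits(a0, a1):
    """Cheaper firm captures the whole market; ties split it. Costs are zero."""
    p0, p1 = prices[a0], prices[a1]
    demand = 10.0
    if p0 < p1:
        return p0 * demand, 0.0
    if p1 < p0:
        return 0.0, p1 * demand
    return p0 * demand / 2, p1 * demand / 2

# One Q-table per agent: rows are states (last period's price pair), columns actions.
Q = [np.zeros((len(states), n_actions)) for _ in range(2)]
alpha, gamma = 0.1, 0.95
state = state_index[(0, 0)]

for t in range(100_000):
    eps = max(0.01, np.exp(-t / 10_000))         # exploration decays over time
    actions = []
    for agent in range(2):
        if rng.random() < eps:
            actions.append(int(rng.integers(n_actions)))
        else:
            actions.append(int(np.argmax(Q[agent][state])))
    rewards = profits(actions[0], actions[1])
    next_state = state_index[(actions[0], actions[1])]
    for agent in range(2):
        best_next = Q[agent][next_state].max()
        Q[agent][state, actions[agent]] += alpha * (
            rewards[agent] + gamma * best_next - Q[agent][state, actions[agent]]
        )
    state = next_state

# The competitive outcome here is the lowest price; after learning, the agents
# frequently settle on something higher, despite never exchanging a message.
final = [prices[int(np.argmax(Q[a][state]))] for a in range(2)]
print("Lowest (competitive) price:", prices[0], "| learned prices:", final)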

Finally, there are ethical questions, such as how to stop AI credit-scoring models from importing programmers’ racial or gender biases, or even developing new biases of their own. Without tight oversight, bank managers could find themselves struggling to explain why some borrowers have been turned down for a loan. The regulator can help by asking tougher questions about artificial intelligence sooner rather than later.
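
A hypothetical example of the kind of check supervisors might expect, again built on synthetic data: fit a credit model that never sees the protected attribute, then compare approval and false-rejection rates across groups. The rates can still diverge, because other features are correlated with the attribute.

# Minimal sketch of a pre-deployment fairness check on a credit-scoring model.
# Data, features, group labels and the 0.5 approval threshold are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 10_000
group = rng.integers(0, 2, size=n)               # protected attribute (e.g. gender)
income = rng.normal(30_000 + 5_000 * group, 8_000, size=n)
history = rng.exponential(3, size=n)
repay_prob = 1 / (1 + np.exp(-(income - 30_000) / 10_000 - 0.2 * history))
repaid = rng.random(n) < repay_prob

# Note: `group` is deliberately excluded from the features, yet it correlates
# with income, so the model can still produce skewed outcomes.
X = np.column_stack([income, history])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, repaid, group, test_size=0.3, random_state=0
)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
approve = model.predict_proba(X_te)[:, 1] > 0.5

for g in (0, 1):
    mask = g_te == g
    approval_rate = approve[mask].mean()
    # Share of genuinely creditworthy applicants the model would wrongly reject.
    false_rejects = (~approve[mask] & y_te[mask]).mean() / max(y_te[mask].mean(), 1e-9)
    print(f"group {g}: approval rate {approval_rate:.1%}, false-reject rate {false_rejects:.1%}")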

