“AI, especially when it comes to machine learning, has both high promise and serious risk for the financial and regulatory service industry,” according to Prof. Yves Le Traon from the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg. His background in software engineering at SnT has led him to continually work with industry partners to develop new solutions to help them streamline their work.
“The legacy systems that many banks and existing financial sector actors have would likely be very much improved by starting to implement some form of artificial intelligence,” said Prof. Le Traon. “On the proviso that it is used and maintained properly.”
Keep it sharp
Once the development of an AI system has finished, the importance of keeping it up to date cannot be overstated. Le Traon painted a worrying picture of an AI fraud detection system that, just one year after being launched, is incapable of detecting a newly invented fraud. This is an all-too-familiar problem.
“It’s not the fault of the AI system; instead, it is likely related to the fact that it has to be used as an instrument in the right way,” he said. “Also, it has to be sharpened continuously, like a knife, to prevent these investments from going to waste.”
The way to avoid such disappointment is to put a high priority on software engineering. “The main focus should be on what the best practice is, and how we can improve the way we develop, devise and design software so that it is high-quality and meets the different performance requirements.”
Having first worked with Luxembourg companies CETREL (now part of the SIX Group) and BGL BNP Paribas, Prof. Le Traon has more recently collaborated with the globally known PayPal, an internet payment services firm with which SnT has had a long and fruitful collaboration, including a funded PayPal research chair.
“What is very interesting with PayPal is that they do not define themselves as a financial company or as a bank, as I thought they would,” said Le Traon. “It means that they are very aware that IT is what drives opportunities for business.”
Prof. Le Traon observed first-hand how PayPal practises what it calls ‘continuous integration’. He explained, “The time between developers modifying the code and when this code is used by the final user is becoming shorter and shorter.” While this process may once have taken a month, it can now happen within a single day.
“One of the challenges in continuous integration,” he explained, “is to be able to select the best tests to execute in order to detect a bug, if there is one.” Large firms like Google may have tens of millions of tests running continuously.
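The test-selection problem he describes can be sketched in a few lines. The heuristic below — prioritising tests that cover the changed files and that have failed often in the past — is a minimal illustration of the idea, not PayPal's or Google's actual tooling; all names and data are hypothetical.

```python
# Hypothetical sketch of regression-test prioritisation for continuous
# integration: run the tests most likely to catch a bug first.

def prioritise_tests(tests, changed_files):
    """Order tests by (overlap with changed files, historical failure rate).

    Each test is a dict with:
      - 'name': test identifier
      - 'covers': set of source files the test exercises
      - 'failure_rate': fraction of past runs in which it failed
    """
    def score(test):
        # Tests touching the changed files are most relevant; among those,
        # historically failure-prone tests are scheduled earliest.
        overlap = len(test['covers'] & changed_files)
        return (overlap, test['failure_rate'])

    return sorted(tests, key=score, reverse=True)

tests = [
    {'name': 'test_login',   'covers': {'auth.py'},            'failure_rate': 0.01},
    {'name': 'test_payment', 'covers': {'pay.py', 'fraud.py'}, 'failure_rate': 0.10},
    {'name': 'test_fraud',   'covers': {'fraud.py'},           'failure_rate': 0.25},
]

order = [t['name'] for t in prioritise_tests(tests, changed_files={'fraud.py'})]
print(order)  # ['test_fraud', 'test_payment', 'test_login']
```

With tens of millions of tests, even a crude ranking like this lets a pipeline surface a likely failure in minutes rather than hours, which is what makes a one-day release cycle feasible.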
Another interesting aspect of machine learning is that it can be used both to devise and deploy new financial services and to increase the software’s security and reliability, explained Prof. Le Traon, who advocates what he calls ‘robustifying’ AI components: making them reliable and highly secure.
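One simple form of such a robustness check is verifying that a model's decision does not flip under small perturbations of its inputs. The sketch below illustrates the idea on a toy linear scoring rule; the model, features and threshold are hypothetical stand-ins, not any real fraud-detection system.

```python
# Hypothetical sketch of a local robustness check: the classification should
# be stable when any single feature is nudged by a small epsilon.

def classify(features, weights, threshold=0.5):
    """Toy scorer: flag the input when the weighted sum exceeds the threshold."""
    score = sum(f * w for f, w in zip(features, weights))
    return score > threshold

def is_robust(features, weights, epsilon):
    """Return True if perturbing any single feature by +/-epsilon
    never changes the classification."""
    base = classify(features, weights)
    for i in range(len(features)):
        for direction in (-1, 1):
            perturbed = list(features)
            perturbed[i] += direction * epsilon
            if classify(perturbed, weights) != base:
                return False
    return True

weights = [0.4, 0.6]
print(is_robust([0.9, 0.9], weights, epsilon=0.05))  # True: far from the boundary
print(is_robust([0.5, 0.5], weights, epsilon=0.05))  # False: sits on the boundary
```

A failing check like the second case is a signal that the component needs hardening — for example, retraining with perturbed inputs — before it can be trusted in production.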
“There are various approaches to ensure the provision of quality and secure AI/machine learning software to the financial industry,” he explained. “We need first to have a very solid and well-founded process with quality gates in order to assure the basic quality of a machine-learning-based system.”