Artificial intelligence (AI) has revolutionized many fields in recent years, including the banking sector. Its implementation has brought both benefits and drawbacks, in particular the issue of algorithmic discrimination in lending.
In Canada and more broadly around the world, the implementation of AI within major banks has led to increased productivity while offering greater personalization of services.
According to the IEEE Global Survey, the adoption of AI-based solutions is expected to double globally by 2025, reaching 80 per cent of financial institutions.
Some banks are more advanced, such as BMO Financial Group, which has created specific positions to oversee the integration of AI into its digital services in order to remain competitive. As a result, the global banking industry’s profits could exceed US$2 trillion by 2028 thanks to AI, representing growth of nearly nine per cent between 2024 and 2028.
As a professor of knowledge and innovation management at Laval University and a science communicator, I was assisted in writing this analysis by Kandet Oumar Bah, author of a research project on algorithmic discrimination, and Aziza Halilem, an expert in governance and cyber risk at the French Prudential Supervision and Resolution Authority.
How does AI improve bank performance?
The integration of AI in the banking sector has already significantly optimized financial processes, with a 25 to 40 per cent gain in operational efficiency. Combined with the growing capabilities of big data, that is, the massive collection and analysis of data, AI offers powerful analytics that can already reduce the error margins of financial systems by 18 to 30 per cent.
It also makes it possible to monitor millions of transactions in real time, detect suspicious behaviour and even preventively block certain fraudulent transactions. This is one of the uses implemented by J.P. Morgan.
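To make this concrete, here is a minimal sketch of the idea behind real-time transaction monitoring. It is purely illustrative, not a description of J.P. Morgan's actual system: production fraud detectors use far richer models, and the function name, account history and threshold below are all hypothetical.

```python
# Illustrative sketch: flag a new transaction as suspicious when its
# amount deviates too far from the account's historical pattern.
# All names, figures and the z-score threshold are hypothetical.
from statistics import mean, stdev

def is_suspicious(history, new_amount, threshold=3.0):
    """Flag new_amount if it lies more than `threshold` standard
    deviations away from the account's historical average."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Hypothetical account history (typical purchases in dollars)
history = [42.0, 55.5, 38.0, 61.0, 47.5, 50.0, 44.0]

print(is_suspicious(history, 5000.0))  # an outlier purchase
print(is_suspicious(history, 52.0))    # an ordinary purchase
```

A rule this simple would generate many false positives on its own; real systems combine hundreds of behavioural signals, but the principle of scoring each transaction against the customer's own baseline is the same.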
In addition, platforms such as FICO, which specialize in AI-based decision analysis, help financial institutions leverage a variety of customer data, refining their credit decisions through advanced predictive models.
Several banks around the world now rely on automated rating algorithms that can analyze numerous parameters, including income, credit history and debt ratios, in a matter of seconds. In the credit market, these tools significantly improve the processing of applications, particularly for “standard” cases, such as those with explicit loan guarantees.
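The scoring step described above can be sketched in a few lines. This assumes a logistic model whose weights were fitted offline; the feature names, coefficients and applicant data below are hypothetical, not any bank's actual model.

```python
# Illustrative sketch of an automated credit-scoring step.
# Hypothetical coefficients: higher income and a longer credit history
# raise the score; a higher debt ratio lowers it.
import math

WEIGHTS = {"income_k": 0.03, "history_years": 0.15, "debt_ratio": -2.5}
BIAS = -1.0

def approval_score(applicant):
    """Map an applicant's features to a probability-like score in (0, 1)
    via a weighted sum passed through the logistic function."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A hypothetical "standard" case with explicit, favourable parameters
standard_case = {"income_k": 85, "history_years": 10, "debt_ratio": 0.25}
print(approval_score(standard_case))
```

The speed gain the article mentions comes precisely from this kind of computation: once the weights exist, each application is scored in microseconds, which is why standard cases clear so quickly.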
But what about the other cases?
Formalizing injustice?
As American researchers Tambari Nuka and Amos Ogunola point out, the illusion that algorithms produce fair and objective predictions poses a major risk for the banking sector.
Reviewing the scientific literature, they warn against the temptation to blindly delegate the assessment of complex human behaviour to automated systems. Several central banks, including Canada’s, have also expressed strong reservations about this, warning…
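One common way auditors probe such systems for the discrimination these researchers describe is the disparate impact ratio: the approval rate of one group divided by that of a reference group, with the "four-fifths rule" heuristic flagging ratios below 0.8. The sketch below uses invented data purely for illustration.

```python
# Illustrative audit check for algorithmic discrimination in lending:
# the disparate impact ratio. Decision lists here are hypothetical.
def approval_rate(decisions):
    """Share of approved applications (1 = approved, 0 = refused)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group A's approval rate to reference group B's."""
    return approval_rate(group_a) / approval_rate(group_b)

group_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30 per cent approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70 per cent approved

ratio = disparate_impact(group_a, group_b)
print(ratio < 0.8)  # below the four-fifths threshold: worth auditing
```

A low ratio does not prove the algorithm is discriminatory, and a high one does not prove fairness; it is a screening signal that should trigger the kind of human review the researchers call for.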


