No industry has escaped the massive disruption brought about by recent innovations in AI and machine learning. In financial services, AI has opened the door to new operating models and business opportunities. In fintech alone, the global AI market is expected to grow to $79.4 billion by 2030, up from just $22.9 billion in 2023.
Moreover, around 80% of equity trades in the US and EU are executed by algorithms, and the algorithmic trading market is expected to almost quadruple in size by 2032. Even high-value trades are increasingly being informed by algorithms, including generative AI systems that can interpret data in everything from earnings reports to stock charts.
Machine learning is also revolutionizing financial services in myriad other ways. It’s providing analysts with much deeper insights into market trends, increasing financial forecast accuracy, automating risk assessments in loan applications, and facilitating fraud prevention initiatives.
However, these developments are not without risk, especially with regard to data protection regulations and algorithmic bias. Fintechs in particular tend to be among the early adopters of new and emerging technologies, but they also need to be wary of the rapidly evolving regulatory landscape concerning AI.
The EU AI Act, which came into force on August 1, 2024, is the world’s first legal framework concerning the use of AI. The framework uses a risk-based approach spanning four categories – minimal, limited, high, and unacceptable risk.
For instance, high-risk areas include the use of AI in conducting creditworthiness or insurance risk assessments. Such systems are subject to strict regulatory obligations intended to safeguard individuals from the inherent flaws of AI systems, such as discriminatory practices resulting from model bias. In addition, the EU AI Act bans certain use cases outright, such as using AI to manipulate financial decision-making.
The US, by contrast, has no overarching federal legislation on the use of AI. Instead, regulation and risk management tend to be industry-specific, risk-based, and highly distributed, resulting in a complex regulatory landscape that diverges heavily from the approach taken in the EU and the UK.
Despite the regulatory, operational, and reputational risks, financial services organizations are still investing heavily in AI, particularly in lower-risk use cases, such as personalizing customer experiences. Also, next-generation chatbots powered by generative AI and verticalized for the financial services sector are starting to make an impact on customer service.
Especially in an era of escalating cyberthreats – including those specifically targeting AI systems – and increasing regulatory oversight, financial services organizations must be deliberate about their AI adoption strategies. Financial services is an industry built on trust, so transparency, privacy, security, and ethics must be squarely at the heart of any implementation. Only then can the sector safely and sustainably deploy AI to drive greater agility and value creation.