AI-augmented disinformation campaigns pose a growing threat to global economic stability, according to a November 2025 warning from the G20 Financial Stability Board. The threat was recently quantified in a report by London-based research group Say No to Disinfo, which concluded that for every $12.48 spent on misleading social media advertising, as much as $1.28 million in customer deposits could be moved. The calculation drew on the average deposits held by UK customers, the cost of social media advertising, and estimates of how many people would likely see the ads.
Perhaps unsurprisingly, AI-generated content has become a key driver of disinformation campaigns. While disinformation itself is nothing new, generative AI greatly increases its scale by allowing malicious parties – such as rival states or companies – to publish huge volumes of ads that appear convincing yet are built on falsehoods. Given growing global economic instability in the wake of trade disputes between the US, China, and other parties, the threat of disinformation in the financial sector will likely only continue to rise.
A growing divide between banks and regulators?
Despite the latest revelations, financial services institutions remain broadly confident in generative AI technology, citing it as a way to boost customer engagement and experience. Regulators are not so sure, however, given the enormous potential of fake news – greatly augmented by generative AI – to influence people’s financial decisions on a scale that simply wasn’t possible before. As a result, there is a growing divide between industry executives and regulators over how to balance AI adoption against the inherent risks it poses.
What are the implications for fintech companies?
The fintech sector has been quick to capitalize on the generative AI opportunity, and we’ve seen a flurry of new AI-powered finance and banking tools over the last couple of years. Many of these now widely adopted tools leverage generative AI for customer interactions, marketing, and fraud detection. While these are all legitimate use cases, they come with significant risks to brand reputation, regulatory compliance, fraud prevention, and governance. Given these risks, it’s vital that fintechs implement effective safeguards to prevent their own tools from inadvertently contributing to financial panic.
There are also, of course, the external risks. While mainstream generative AI tools like ChatGPT, Claude, or Copilot are widely trusted, they’re far from immune to producing disinformation. More worrying, however, are similar tools developed under the auspices of rival states, which may not have the same safeguards in place. January’s launch of DeepSeek – a Chinese-developed alternative to ChatGPT – has already spawned no small amount of controversy for this very reason.