Last November, the Financial Crimes Enforcement Network released a bulletin warning about an increase in fraud schemes using AI-generated deepfakes. According to FinCEN’s analysis of Bank Secrecy Act data, criminals have leveraged generative AI to, for instance, create fake accounts that can receive and launder proceeds from other fraud schemes; impersonate executives; and instruct employees to transfer large sums of money to scammers’ accounts.
This, in part, is the world that proliferating AI has enabled—and the reason why 61% of survey respondents who expect financial crime risk to increase cite cybercriminals’ increased use of AI, the second most cited factor after cybersecurity and data breaches.
Yet AI is a double-edged sword: Even as bad actors use the technology to commit financial crime, organizations in industries from financial services to accountancy to insurance are using the same tools to stop it. As then-Deputy U.S. Attorney General Lisa Monaco noted last year, AI “has the potential to be an indispensable tool to help identify, disrupt, and deter criminals, terrorists and hostile nation-states from doing us harm.”
It’s fitting, then, that 57% of respondents agree AI will benefit financial crime compliance programs, even as 49% agree it poses a significant risk. Both trends will likely accelerate as generative AI fuels the equivalent of an AI arms race.
Please state your agreement with the following statements.
AI Adoption Grows, but Organizations Struggle with Implementation
AI has been part of the financial crime prevention landscape for several years, aiding in everything from customer risk assessments and automated due diligence to transaction monitoring and data analysis. But as adoption of AI tools grows, so too does the awareness that they can be challenging to leverage effectively—whether the difficulty stems from bad data or missing employee skillsets.
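To make one of those use cases concrete, the sketch below shows how unsupervised anomaly detection might be applied to transaction monitoring. It is illustrative only and not drawn from the survey or any respondent's program; the feature names, sample values and contamination rate are hypothetical.

```python
# Illustrative sketch of ML-assisted transaction monitoring (not a production system).
# Feature names, sample values and the contamination rate are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-transaction features a monitoring team might engineer.
transactions = pd.DataFrame({
    "amount": [120.0, 85.5, 9800.0, 45.0, 210.0, 10250.0],
    "hour_of_day": [14, 9, 3, 16, 11, 2],
    "days_since_last_activity": [1, 3, 45, 2, 4, 60],
    "counterparties_30d": [4, 6, 1, 5, 7, 1],
})

# An unsupervised model scores how unusual each transaction looks relative to
# the rest of the portfolio; rows scored -1 are routed to human review.
model = IsolationForest(contamination=0.2, random_state=42)
transactions["flag"] = model.fit_predict(transactions)

for_review = transactions[transactions["flag"] == -1]
print(for_review)
```

In practice, the flagged rows would feed an investigator queue rather than a print statement, and the feature set and thresholds would be tuned to the institution's own risk appetite.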
Over half of all respondents to our latest survey are investing in AI solutions to fight financial crime. Twenty-five percent say AI is an established part of their financial crime compliance program, and 30% say they are in the early stages of adoption.
Which of the following best summarizes your organization’s current usage of AI and/or machine learning solutions as part of your financial crime compliance program?
Taken together, the findings represent a notable rise in adoption from our 2023 report. Most respondents who are already using or considering AI to fight financial crime are doing so to identify suspicious behaviors or patterns (63%), analyze networks (54%), identify risk signals (44%) and automate administrative tasks (41%).
This is not necessarily new ground. Back in 2022, the growing use of AI/ML in preventing financial crime even led The Wall Street Journal to report that regulators would start expecting banks to adopt such tools.
What remains an open question is how generative AI can be used to advance crime prevention tools like pattern recognition and data analysis to fight bad actors, who themselves will increasingly use large language models (LLMs) to generate fraudulent audio and text. Recent research has shown that LLMs can reduce the cost of the phishing process by more than 95%. Even more concerning: Nearly 80% of recipients in another study opened AI-written phishing emails—and 21% clicked on malicious content.
The latest technology offers benefits, too. Amid ongoing economic volatility and limited resources, generative AI embedded in financial crime prevention programs can help automate more mundane tasks and guide employees to the right decisions in real time, overcoming an ongoing compliance challenge.
For now, however, as AI adoption rates increase, our survey reveals that positive perceptions of these tools’ effects on financial crime compliance are decreasing. Only 20% of those currently using AI/ML say it has a very positive impact, compared with nearly 40% in 2023.
What has been your perception to date of the impact of AI and/or machine learning on the financial crime compliance framework?
On the one hand, the barrier to entry has dropped significantly as the technology gets cheaper and easier to use. On the other hand, organizations still have a relatively immature set of policies and procedures for integrating these technologies—and many employees lack the sophistication to effectively oversee them. For instance, the European Banking Authority recently reported that roughly half of the cases against the 256 financial services companies fined in 2024 involved an “unthinking” reliance on new technologies, including AI.
These issues, coupled with some general disillusionment with much-hyped AI tools, can lead to disconnects between what businesses expect the technology to deliver and what it actually can. AI could produce a high rate of false positives in know your customer (KYC) reviews, flag everyone of a certain gender without explaining why, or fail to flag a transaction it should have. As they grow more familiar with AI, executives may also be discovering what they don’t know—and that solving AI-related issues may not be as simple as they once thought.
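To ground the false-positive and bias concern, here is a minimal sketch of the kind of disparity check a compliance team might run before trusting a model's alerts. The dataset, column names and group variable are hypothetical; a real program would use investigator-confirmed outcomes at scale.

```python
# Illustrative sketch of a simple pre-deployment disparity check on model alerts.
# The data, column names and what counts as a concerning gap are hypothetical.
import pandas as pd

alerts = pd.DataFrame({
    "flagged": [1, 0, 1, 1, 0, 0, 1, 0],               # model decision
    "confirmed_suspicious": [1, 0, 0, 0, 0, 0, 1, 0],  # investigator outcome
    "gender": ["F", "M", "F", "F", "M", "F", "M", "M"],
})

# False-positive rate per group: the share of genuinely non-suspicious cases
# that the model nevertheless flagged for review.
clean = alerts[alerts["confirmed_suspicious"] == 0]
fpr_by_group = clean.groupby("gender")["flagged"].mean()
print(fpr_by_group)  # a large gap between groups warrants investigation
```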
At the root of many of these issues is a key organizational problem: bad data. Many might have assumed that AI itself would solve this problem, when in fact it only exacerbates it.
Regulatory Hurdles Mount
Just 55% of respondents agree (and only 16% strongly agree) that their financial crime compliance program is prepared to meet AI regulatory developments. For those in legal services and real estate, this percentage was significantly lower (30% and 40%, respectively).
This state of play is understandable given the regulatory approaches emerging around the world, like the EU’s newly passed AI Act or a patchwork of U.S. state laws related to algorithmic discrimination, automated employment decision-making and AI bills of rights.
U.S. federal regulatory agencies have staked out positions on AI, too. The Consumer Financial Protection Bureau (CFPB), for instance, said in August 2024 that existing consumer protection laws—like the Consumer Financial Protection Act and Equal Credit Opportunity Act—also apply to new technologies, emphasizing that companies must “comply with crucial consumer protections that protect people from, for example, unlawful discrimination” and noting it may take steps to enhance oversight of algorithms used to inform lending decisions. Of course, these initiatives are poised to shift given the new Trump administration, which has already rescinded a Biden-era executive order aimed at establishing safeguards for AI use and ordered the CFPB to stop work.
So far, the UK is acting similarly to the U.S.—promising a less centralized, principles-based approach, at least for now—while China implemented Interim Measures for the Management of Generative Artificial Intelligence Services in August 2023. That said, the U.S./UK approach doesn’t necessarily lower the compliance bar for businesses. These regulators take the stance that principles of good governance apply regardless of AI adoption, and that regulated firms that do adopt AI must comply with those existing rules.
Even the most prescriptive laws, like the EU’s AI Act, apply more broadly than some might think, particularly for businesses operating in highly regulated sectors like banking, insurance and healthcare. Having good governance structures and flexible, risk-based programs in place is the best way to ensure legal compliance—and avoid the steep legal, economic and reputational risks of getting AI wrong.
Fortunately, 55% of respondents have AI policies and guidelines in place. That’s just a first step, though. Executives will have to ensure they’re evolving these policies year over year as today’s hype matures into disciplined implementation and governance.
That may be a hurdle given the amount of documentation required of tech departments and third-party vendors, especially for organizations without the internal expertise to handle these risks and with a limited budget to expand teams. For example, the EU AI Act imposes extensive technical documentation requirements, while the EU’s Digital Operational Resilience Act (DORA), which went into effect this January, creates a binding, comprehensive information and communication technology risk management framework for the region’s financial sector.
Best Practices
How can business leaders successfully implement AI capabilities into their financial crime compliance programs while continuing to defend against the threats AI poses from outside? Here are a few best practices to keep top of mind:
- Get the right team in place. To manage the scope of AI-related documentation and governance, organizations must form cross-functional teams. Go beyond IT and cyber teams and involve those in AML, compliance, legal, product and senior management. Achieving sound AI governance and implementation requires an all-organization approach to understand the use cases, risks and guardrails—and to communicate them effectively to regulators, customers and employees.
- Frequent training, testing and education are key. As suggested above, simply updating policies and expecting your workforce to abide by them isn’t enough. There has to be focused, hands-on training with new AI tools. These trainings should be updated and repeated as the organization implements new AI capabilities and the regulatory and risk landscape changes. Firms should also undertake comprehensive testing before deploying AI and have sufficient monitoring in place to ensure it is working as intended.
- To combat AI-related fraud, maintain a “back to the basics” approach. Focus on fundamental human intervention and confirmation procedures—regardless of how convincing or time-sensitive circumstances appear.
More AI to Come
In the year ahead, nearly half of respondents (49%) expect their organization will invest in AI solutions to tackle financial crime, and 47% say the same about their cybersecurity budgets. These investments complement one another, even as the mounting focus on AI may put added pressure on cyber programs—particularly in a resource-strained environment.
Which steps will your organization take internally in the next year to tackle the likely increase in financial crime?
These and other findings from our latest survey reveal that organizations are headed in the right direction when it comes to AI use in financial crime prevention. However, obstacles remain, and business leaders must accelerate their organization’s AI understanding, adoption and implementation. Bad actors will continue to innovate as new technologies become increasingly accessible. Those trying to stop them will have to as well.