As attackers harness large language models to create synthetic identities and automate scams, financial institutions face an uncomfortable truth: the tools they rely on to fight fraud were not built for this moment.
The arms race has tilted
Fraud has always been an arms race, but AI has fundamentally shifted the balance. Attackers can now generate convincing synthetic identities, launch phishing campaigns at scale, and use deepfake audio and video to impersonate account holders, all with a laptop and a subscription.
Global fraud losses exceeded $485.6 billion in 2023, according to Nasdaq’s Global Financial Crime Report, with payments fraud dominating. These losses reflect how the cost of launching sophisticated attacks has fallen. Fraud-as-a-service toolkits, amplified by generative AI, make it easier than ever to attack financial systems. Meanwhile, defenders largely run systems built for a different era.
Two approaches, two distinct problems
Most fraud teams rely on combinations of rule-based systems, machine learning, graph analytics, and behavioral biometrics. These are powerful tools, but architectural limitations hamper operational effectiveness.
Rule-based systems rely on thresholds. If a customer exceeds five transfers in ten minutes, the transaction is flagged. Analysts can explain these rules to regulators, but fraudsters can probe thresholds to stay just below alerts. Updating rules takes weeks, by which time attack patterns have evolved.
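The threshold pattern described above can be sketched as a simple velocity rule. The limit, window, and function name here are illustrative, not taken from any real vendor's system:

```python
from datetime import datetime, timedelta

# Hypothetical velocity rule: flag an account if more than MAX_TRANSFERS
# transfers fall inside any sliding WINDOW. Limits are illustrative.
MAX_TRANSFERS = 5
WINDOW = timedelta(minutes=10)

def flag_velocity(transfer_times: list[datetime]) -> bool:
    """Return True if any WINDOW-sized span contains more than MAX_TRANSFERS transfers."""
    times = sorted(transfer_times)
    start = 0
    for end in range(len(times)):
        # Shrink the window until it spans at most WINDOW.
        while times[end] - times[start] > WINDOW:
            start += 1
        if end - start + 1 > MAX_TRANSFERS:
            return True
    return False
```

The weakness the article points out falls straight out of the code: a fraudster who probes the limit can simply keep to five transfers per window and wait, and the rule never fires.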
Machine learning models detect unusual activity without explicit rules and can incorporate network or behavioral signals. Yet flagged transactions often come with a statistical score and a list of contributing factors that are difficult for analysts to act on in real time. Graph analytics may reveal proximity to fraud networks, but translating that into immediate, defensible action is challenging. This gap contributes to false positive rates reaching 95 percent, according to LexisNexis Risk Solutions.
The cost of getting this wrong
Modern payment rails, including US FedNow, Europe’s SEPA Instant, UK Faster Payments, and Nigeria’s NIP, process transactions irreversibly in milliseconds. Slow or noisy detection systems create immediate financial risk and operational strain.
Regulators are watching closely. The EU AI Act requires transparency and human oversight. In the US, Federal Reserve guidance under SR 11-7 mandates validated, documentable, and reviewable models. African regulators are aligning their frameworks: the Central Bank of Nigeria now requires real-time automated monitoring with clear rationale. Draft authorised push payment (APP) fraud guidelines mandate structured reimbursement with strict investigative timelines. Across all regions, explainable automated decisions are increasingly mandatory.
A third architecture: The protocol approach
Previous “third-way” solutions, such as FICO Falcon, SAS Fraud Management, Feedzai, and Featurespace, combine orchestration layers with ML and rules. Analysts still face the same question: why did this alert fire, and what action should they take? Valuable signals require interpretation before becoming actionable.
“Fraud rarely announces itself through a single dramatic signal. It shows up as a cluster of things that are each slightly off. A system that can only ask yes or no questions will always struggle with that reality,” said Solomon Ehi Olumese, Global Head of Operations at Loci Fraud AI.
What has been missing is not better models, but a structured, auditable protocol for expressing fraud detection intent, converting it into executable logic, and ensuring transparency.
Why a protocol, not another platform
Lagos-based fraud infrastructure company Loci Fraud AI has developed the Fraud Language Model (FLM), a protocol designed to close this gap. Unlike orchestration platforms that layer on complexity, FLM unifies rules, ML outputs, and analyst intent into a single, explainable artifact.
FLM operates across three layers:
- Domain-Constrained AI – Analysts describe fraud scenarios in plain language; the AI layer interprets them through a formal fraud vocabulary, producing structured, operational specifications.
- Structured Policy Representation – Policies are human-readable and machine-executable, enabling analysts to act and regulators to audit.
- Deterministic Execution – Policies run reproducibly in Loci’s infrastructure. AI assists in authoring policies, not in making decisions, preserving auditability without exposing sensitive data.
FLM integrates with existing ML investments. Existing model outputs feed in as one signal among many, with FLM providing the explainable logic that wraps around them.
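FLM's actual policy format has not been published, so as a loose illustration only, here is one way a human-readable, machine-executable policy might combine an ML score, graph proximity, and a velocity rule into a single explainable decision. Every field name, threshold, and function here is hypothetical:

```python
# Illustrative sketch only: not FLM's real format. The idea is a policy
# that treats each detector as one named signal with a stated reason,
# and holds a payment when a cluster of weak signals fires together.
POLICY = {
    "name": "instant_transfer_review",
    "signals": [
        {"id": "ml_score", "test": lambda s: s["ml_score"] > 0.7,
         "reason": "ML model score above 0.7"},
        {"id": "graph", "test": lambda s: s["hops_to_known_fraud"] <= 2,
         "reason": "account within 2 hops of a known fraud ring"},
        {"id": "velocity", "test": lambda s: s["transfers_last_10m"] > 5,
         "reason": "more than 5 transfers in 10 minutes"},
    ],
    "hold_if_at_least": 2,  # a cluster of slightly-off signals, not one dramatic one
}

def evaluate(policy: dict, signals: dict) -> dict:
    """Deterministically evaluate a policy; return the decision and the reasons that fired."""
    fired = [s["reason"] for s in policy["signals"] if s["test"](signals)]
    return {
        "policy": policy["name"],
        "decision": "hold" if len(fired) >= policy["hold_if_at_least"] else "allow",
        "reasons": fired,  # the audit trail an analyst or regulator can read
    }
```

For a transfer scoring 0.8 from an account one hop from a known fraud ring, two signals fire, the decision is "hold", and both human-readable reasons come back with it, which is the kind of legible, composable output the article describes.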
“Most institutions don’t have a model problem. They have a coordination problem. They’ve invested in strong machine learning tools, but those tools operate in silos. FLM gives them a unifying layer; one that makes every signal legible, composable, and accountable. That’s what modern fraud defence actually requires,” Olumese noted.
Built for today’s adversary
In 2010, the challenge was building ML models that beat rules. By 2020, it was integrating multiple detection tools. In 2026, the challenge is different. Attackers iterate in hours, payment rails are instant, and regulators demand explainability.
Sophisticated models alone do not solve the problem. Analysts need trust, clarity, and operational agility. Systems that take weeks to update concede the field to attackers who iterate in days. The right approach is designing detection infrastructure where capability, clarity, speed, and explainability coexist. FLM embodies that approach.
The fraud arms race is accelerating. Institutions that bake explainability into their architecture, rather than retrofit it, will be better positioned. Ultimately, the question is stark: is your detection infrastructure built for the adversary of today or the one from five years ago? The answer affects regulatory compliance, analyst effectiveness, and how much of the $485.6 billion annual problem lands on your balance sheet.
Written by Solomon Ehi Olumese, Global Head of Operations at Loci Fraud AI
Last updated: March 31, 2026
