The fraud landscape is evolving at breakneck speed, fuelled by ever more sophisticated techniques and a growing reliance on digital channels.
In response, the financial sector is undergoing a quiet but profound transformation: the shift from supportive to agentic AI.
Unlike conventional AI, which flags suspicious activity for later human review, agentic AI can act autonomously, taking preventative measures in real time.
Agentic AI, Done Well
This evolution is more than a technical upgrade.
Done well, agentic AI augments fraud teams by reducing manual workload, enabling faster and more accurate decisions, and responding within milliseconds to emerging threats. Done poorly, it risks adding complexity and eroding public trust.
The threat landscape itself is also evolving. Fraudsters are deploying generative AI to launch attacks at scale, adjust tactics dynamically, and penetrate traditional defences.
Crucially, most modern fraud doesn’t involve system breaches – it exploits systems that are functioning exactly as designed. Authentication is passed. Transactions are approved.
But somewhere between login and payment, the attack unfolds, unnoticed.
Historical Defences
Historically, defences have centred on perimeter security – firewalls, multifactor authentication, access control.
These barriers are essential, but their protection largely stops once a session is authenticated.
That’s where agentic AI steps in, operating within the authenticated session to detect anomalies, synthesise cross-channel signals, and adapt to new threats in real time.
It doesn’t replace existing defences but acts as a force multiplier, allowing analysts to cut through noise and focus on what matters.
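To make the force-multiplier idea concrete, the sketch below shows one way an agent might synthesise cross-channel signals into a single session risk score and surface only the riskiest sessions to analysts. It is a minimal illustration: the signal names, weights, and threshold are assumptions for exposition, not any vendor's model.

```python
# Illustrative sketch only: signal names, weights and the threshold are
# assumptions for exposition, not a production fraud model.
from dataclasses import dataclass

@dataclass
class SessionSignal:
    channel: str      # e.g. "web", "mobile", "call_centre"
    name: str         # e.g. "new_device", "remote_access_tool", "payee_added"
    score: float      # normalised 0..1 risk contribution

def session_risk(signals: list[SessionSignal], weights: dict[str, float]) -> float:
    """Combine cross-channel signals into one session-level risk score."""
    total = sum(weights.get(s.name, 0.0) * s.score for s in signals)
    return min(total, 1.0)

signals = [
    SessionSignal("mobile", "new_device", 0.8),
    SessionSignal("web", "remote_access_tool", 0.9),
    SessionSignal("web", "payee_added", 0.6),
]
weights = {"new_device": 0.3, "remote_access_tool": 0.5, "payee_added": 0.2}

if session_risk(signals, weights) > 0.7:   # assumed review threshold
    print("Surface session to analyst queue")
```

The point of the sketch is the triage effect: most sessions never cross the threshold, so analysts see only the handful that combine several independent warning signs.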
APP Fraud
A key case in point is authorised push payment (APP) fraud, which continues to rise despite increased awareness.
Many see it as intractable. Yet these scams often follow a recognisable pattern: reconnaissance, social engineering, session manipulation. All leave behind digital footprints.
With high-quality telemetry and context-aware AI models, these subtle clues become early warning signs.
Agentic AI can flag these signs mid-session, offering a critical window for human intervention before money moves.
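As a hedged illustration of how those footprints might become a mid-session flag, the sketch below checks whether telemetry events from two or more phases of the typical APP scam pattern have been observed before a payment is released. The phase labels, event names, and mapping are hypothetical, chosen only to mirror the pattern described above.

```python
# Hypothetical sketch: phase labels and the event-to-phase mapping are
# illustrative, not a documented detection rule set.
APP_PHASES = ("reconnaissance", "social_engineering", "session_manipulation")

# Example telemetry events mapped to scam phases (assumed mapping).
EVENT_PHASE = {
    "balance_checks_spike": "reconnaissance",
    "inbound_call_overlap": "social_engineering",
    "screen_sharing_active": "session_manipulation",
    "new_payee_high_value": "session_manipulation",
}

def app_fraud_warning(session_events: list[str]) -> bool:
    """Return True if two or more APP-scam phases are visible mid-session."""
    observed = {EVENT_PHASE[e] for e in session_events if e in EVENT_PHASE}
    return len(observed) >= 2

# Holding the payment gives the bank a window to contact the customer
# before the money moves.
events = ["balance_checks_spike", "inbound_call_overlap", "new_payee_high_value"]
if app_fraud_warning(events):
    print("Hold payment and prompt human review")
```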
This isn’t about replacing fraud teams. It’s about empowering them.
By using generative AI co-pilots – summarising session behaviour and suggesting next steps – banks can ease the burden on analysts and increase resilience at scale.
But speed must not come at the expense of accountability.
Agentic systems must be built with guardrails: clear audit trails, human oversight protocols, and the ability to escalate when uncertainty is high.
Automation that lacks transparency risks undermining trust. Smart automation – designed with intent, precision, and control – offers a better way forward.
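One possible shape for those guardrails is sketched below, under assumed thresholds: every automated decision is written to an audit log, and anything below a confidence floor is escalated to a human rather than actioned. The thresholds, actions, and log format are illustrative assumptions, not a prescribed design.

```python
# Illustrative guardrail sketch; thresholds and log format are assumptions.
import json
import time

AUDIT_LOG = []           # stand-in for an append-only audit store
CONFIDENCE_FLOOR = 0.85  # below this, the agent must escalate, not act

def decide(session_id: str, risk: float, confidence: float) -> str:
    """Act autonomously only when confident; otherwise escalate to a human."""
    if confidence < CONFIDENCE_FLOOR:
        action = "escalate_to_analyst"
    elif risk > 0.7:
        action = "block_and_notify"
    else:
        action = "allow"
    # Every decision, including escalations, leaves an auditable record.
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "session": session_id,
        "risk": risk, "confidence": confidence, "action": action,
    }))
    return action

print(decide("sess-123", risk=0.82, confidence=0.64))  # -> escalate_to_analyst
```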