The hidden risks of AI in fraud prevention
AI is fighting on both sides of the fraud landscape: as a defence tool, and as a potent weapon for attackers. SecurityBrief reports that AI-powered fraud threats surged for Australian firms over the past year, with 65% of surveyed organisations recording an increase in year-on-year fraud losses. Generative AI and deepfake technologies are empowering fraudsters to scale their operations like never before, automating everything from voice impersonations to the creation of synthetic identities.
These attacks are hitting critical touchpoints: contact centres, urgent payment requests, and executive impersonations. Retail has been particularly hard-hit, with bots orchestrating low-value refund scams that fly under the radar. Fraud has evolved from manual schemes to an automated supply chain, making scams faster, cheaper, and tougher to spot.
But amid this acceleration, an emerging danger lies in the hype surrounding AI as a fraud prevention saviour. The market narrative pushes “fight AI with AI” as the ultimate fix, yet it's introducing fresh risks that organisations are grappling with behind closed doors: operational overload, governance gaps, and regulatory pitfalls. Senior leaders are privately admitting that AI systems often over-promise, swamp teams with alerts, and create accountability voids that won't hold up under scrutiny.
In a recent Forbes article, Emil Sayegh argues that the current artificial intelligence gold rush has escalated into dangerous over-enthusiasm, with companies recklessly investing billions and slapping "AI-powered" labels on virtually everything (including fraud detection systems) without adequate scrutiny. He emphasises that AI is not magical but merely advanced software relying on pattern recognition from vast datasets, yet relentless marketing hype has distorted perceptions, leading to unchecked adoption and "cybersecurity regrets."
Quest Events spoke with Prof. Dali Kaafar (Founder and CEO of APATE.AI and Executive Director of Macquarie University Cyber Security Hub), Gaby Carney (Senior Fellow – Strategic AI, UTS Human Technology Institute) and Dr Ahmed Al-Ani (Senior Data Scientist, Cuscal) to gain their insights into this issue ahead of FraudCon 2026.
The promise of AI in fraud prevention is seductive: real-time detection, adaptive learning, and reduced human error. Yet, the reality is a 'black box' paradox: models that decide in milliseconds but leave teams scrambling to explain or verify those decisions. The key friction points exist where AI's speed clashes with operational integrity.
The speed trap
AI models breed overconfidence, especially in real-time payment (RTP) and point-of-sale (POS) systems that rely on fast AI: lightweight heuristic models that prioritise low latency over deep analysis. In many live environments speed becomes the primary success metric, and optimising to clear high volumes can mask correctness problems: the system efficiently filters obvious cases while missing sophisticated ones that appear legitimate, creating false confidence.
APATE.AI Founder and CEO Dali Kaafar notes that speed is essential in fraud environments, but the risk arises when speed becomes the primary success metric. “Many AI systems are optimised to clear volume and keep BAU moving, which is useful for filtering obvious, repeat patterns,” he says. “False confidence sets in when that efficiency is mistaken for correctness across all cases. AI should be weaponised for good by removing noise and buying humans time. The failure mode is letting speed obscure what’s happening at the edges, where fraud is adaptive and deliberately designed to look legitimate.”
Blind spots
Fraud rooted in human manipulation rather than technical compromise remains the hardest to detect. “Romance scams, authorised push payment fraud, and mule recruitment often evade controls because they sit comfortably within ‘normal’ behavioural ranges, and more importantly because they run for a relatively long period of time,” explains Kaafar.
Can this blind spot be blamed on an over-reliance on AI models? It would be fairer to attribute it to an over-reliance on passive detection, which means organisations wait for anomalies after harm has already begun. Criminals actively test and adapt to models, operating in the grey rather than triggering obvious red flags.
Al-Ani concurs: “Investment, romance and impersonation scams are quite hard to detect with AI models, as they appear to be ‘legitimate’ events,” he says. For example, the victim initiates the transfer in an APP fraud, so biometrics, device IDs, and locations check out as legitimate, blinding AI to the social engineering at play. “When comparing detection and false‑positive rates at a specific alert rate, AI models generally outperform legacy rule‑based systems,” adds Al-Ani.
“In practice, the most effective approach is a hybrid one: an AI model that performs broad fraud detection, complemented by a set of rules designed to capture specific fraud scenarios the model may miss. Rule‑based controls can also be developed and deployed quickly, making them well-suited for detecting emerging fraud patterns.”
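For readers who want a concrete picture of the hybrid pattern Al-Ani describes, here is a minimal sketch in Python. The transaction fields, thresholds and rules are illustrative assumptions only, not any vendor's production logic; the point is that a deterministic rule layer and a broad model score can each escalate a case independently.

```python
# Minimal sketch of a hybrid fraud triage: a broad ML score plus hand-written
# rules for specific scam scenarios the model may miss. All fields, thresholds
# and rule logic below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    new_payee: bool             # first payment to this payee
    minutes_since_contact: int  # time since an inbound call/SMS to the customer
    model_score: float          # fraud probability from the broad ML model (0-1)

def rule_flags(tx: Transaction) -> list[str]:
    """Deterministic rules targeting known scam patterns (quick to deploy)."""
    flags = []
    if tx.new_payee and tx.amount > 5_000:
        flags.append("large_first_payment_to_new_payee")
    if tx.new_payee and tx.minutes_since_contact < 30:
        flags.append("payment_shortly_after_unsolicited_contact")
    return flags

def triage(tx: Transaction, model_threshold: float = 0.8) -> str:
    """Either the model score or a rule hit can escalate to a human analyst."""
    if tx.model_score >= model_threshold or rule_flags(tx):
        return "hold_for_review"
    return "release"

print(triage(Transaction(amount=7_200, new_payee=True,
                         minutes_since_contact=12, model_score=0.35)))
# -> hold_for_review: the model score alone would have cleared this payment,
#    but the rule layer catches the APP-style pattern.
```

The design choice worth noting is that the rules do not override the model; they run alongside it, so an emerging scam pattern can be covered the day it is identified, without retraining.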
The accountability gap
Who is actually accountable when an AI-driven fraud-detection decision is wrong? The C-level owner? The data science team? Or the AI product vendor?
HTI’s Gaby Carney says that generally, the organisation that makes a decision is accountable for it, regardless of whether it was made by or with the support of an AI system. “However, depending on the circumstances, an organisation may have contractual or other claims against the developer or vendor of the AI system if it operates in unexpected ways,” she adds. “Organisations using AI systems for fraud detection should implement appropriate governance at both the organisational and system levels. This should include testing, human oversight and other controls to minimise the risk of the system making inaccurate decisions.”
Al-Ani points to the data science team as holding accountability, but notes that AI models are not expected to achieve 100% detection accuracy and a 0% false-positive rate.
Kaafar believes accountability sits with the organisation, not the model: “I don’t think AI is supposed to remove responsibility,” he says, “if anything, it concentrates it. The real question is who decided where automation ends, and human judgment begins. Regulators and customers don’t accept ‘the system made the decision’ as an explanation, and nor should they. Well-governed deployments are explicit about decision boundaries and escalation paths, rather than treating AI as a substitute for ownership. This is why AI decisions/output backed by human-interpretable evidence (of fraud or scam-related activities) is becoming more important than ever.”
Explaining AI decisions
This brings us to the “explainability gap”. How are organisations faring when required to explain or defend AI decisions to regulators or at the Board level?
Many organisations can show performance metrics and dashboards, but struggle to explain individual outcomes in plain language. Kaafar explains that Boards and regulators increasingly want to understand why a decision was made, not just how the model performs in aggregate.
“Organisations should be able to have a good grasp of what evidence and proof points are behind any AI-based decision, and that includes human-interpretable datapoints that explain these decisions,” Kaafar adds.
“AI explainability is a tough problem, but transparency and evidence-based checkpoints should not be negotiable in fraud environments. Teams that deploy AI as decision support, augmenting analysts rather than replacing them, are generally better positioned to defend outcomes, because the reasoning chain still exists.”
A lack of interpretability in complex models weakens trust and forensic accountability. Carney believes organisations are taking quite a cautious approach to using AI systems to make decisions in areas that can directly impact individuals. “This is particularly the case where they are operating in highly regulated areas. In these cases, they are more likely to use AI systems to support a decision and will have controls in place to ensure that the decision itself is explainable.”
Al-Ani notes that explainability depends on the model type. “For example, neural networks differ significantly from tree‑based models in how their decisions can be explained,” he says.
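To illustrate Al-Ani's point, the short Python sketch below (synthetic data, hypothetical feature names) shows one reason tree-based models are easier to defend: scikit-learn exposes feature importances directly, whereas a neural network typically needs post-hoc techniques such as SHAP or LIME to produce a comparable explanation.

```python
# Illustrative only: explanation effort differs by model family. A tree
# ensemble yields global feature importances for free; a neural network
# would need post-hoc tooling to explain comparable decisions.
# Feature names and data here are synthetic assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["amount", "new_payee", "hour_of_day", "device_age_days"]
X = rng.random((1_000, len(features)))
y = (X[:, 0] > 0.9).astype(int)  # toy label: very large amounts flagged as "fraud"

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

for name, weight in zip(features, model.feature_importances_):
    print(f"{name:16s} {weight:.2f}")  # built-in importances from the tree ensemble
```

Even so, global importances are not the case-level reasoning Boards and regulators increasingly ask for; they only make that reasoning easier to reconstruct.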
Foundational capabilities before scaling
“Organisations should implement some core AI governance capabilities before adopting any AI systems at scale,” says Carney. “These are important to ensure that they are using AI lawfully, managing risk effectively, and aligning with the key principles of responsible AI, which are fairness, transparency, and accountability.”
According to Carney, key initiatives include having clear, single-point accountability for AI governance, defined AI risk management processes, and a governance structure that supports quick decision-making on AI adoption and oversight.
“Responsible scale starts with intent,” says Kaafar. “AI should first surface signals and then remove noise to finally free up human attention and certainly not automate accountability away.” He lists the three foundations he believes matter most:
Clear boundaries on what AI can decide autonomously versus what requires review.
Explainability at the case level, not just at the model level.
An adversarial mindset that assumes fraudsters will probe, learn, and adapt.
“When AI is used to filter the obvious and elevate the ambiguous, it becomes a force multiplier,” Kaafar adds. “When it’s deployed blindly, it creates fragile systems that fail quietly … until they don’t.”
Join the conversation at FraudCon 2026. Hear more from Dali Kaafar, Gaby Carney, Ahmed Al-Ani and other thought leaders, 5-7 May at the Sydney Masonic Centre. Learn more.

