AI’s security flaw exposes payments industry to new cyber threats

The world’s leading artificial intelligence developers are racing to close a major security loophole that could expose companies—including those in the payments sector—to sophisticated new cyberattacks.

Envato

Flaw exposes industry to cyber threats

Firms such as Google DeepMind, Anthropic, OpenAI and Microsoft are accelerating efforts to prevent indirect prompt injection—a method in which malicious actors hide hidden commands in websites, emails or documents to manipulate AI models into leaking confidential data or performing unauthorised actions.

These so-called “injection” attacks represent one of the most pressing cyber vulnerabilities in large language models (LLMs).

Designed to follow instructions, models like ChatGPT or Anthropic’s Claude struggle to distinguish between legitimate user prompts and hidden malicious ones.

“AI is being used by cyber actors at every stage of the attack chain right now,” said Jacob Klein, head of threat intelligence at Anthropic. The company is investing in external “red teaming” and AI-powered detection tools to identify potential misuse in real time.

The Growing Threat of Data Poisoning and Deepfakes

Beyond prompt injection, a more insidious threat is emerging: data poisoning. This involves embedding harmful or misleading content into the training data used to build AI models, creating hidden “back doors” that cause them to misbehave.

Recent research from Anthropic, the UK’s AI Security Institute, and the Alan Turing Institute suggests such attacks are easier to execute than previously believed—raising concerns about the long-term reliability of AI systems integrated into financial services.

At the same time, generative AI is supercharging the cybercrime ecosystem.

A study by MIT found that 80% of ransomware attacks in 2024 leveraged AI tools, while phishing scams and deepfake-enabled fraud rose by 60%.

“Back in 2023, we’d see one deepfake attack per month across the entire customer base. Now we’re seeing seven per day per customer,” said Vijay Balasubramaniyan, chief executive of voice fraud specialist Pindrop.

AI Arms Race: Offence and Defence

While attackers exploit AI to scale and automate their operations, defenders are increasingly using it to fight back.

“Defensive systems are learning faster, adapting faster, and moving from reactive to proactive,” said Microsoft’s deputy chief information security officer, Ann Johnson.

The use of automated red teaming at Google DeepMind and AI-powered threat monitoring at Anthropic illustrates how major players are embedding continuous testing into their security processes.

Yet despite these defences, companies remain highly exposed.

LLMs can trawl public data—from LinkedIn profiles to open-source repositories—to map corporate software stacks and identify weak points.

Cybersecurity adviser Jake Moore from ESET warned that even small firms are now targets: “It doesn’t take much to be a crook nowadays. You get a laptop, $15 for a bootleg AI model on the dark web, and off you go.”

For the payments industry—where speed, trust and regulatory compliance are paramount—the risks are acute.

As AI tools become embedded in customer support, fraud detection and transaction processing, protecting these systems from manipulation will be essential.

The same technology transforming financial innovation may also be its greatest new vulnerability.

The post AI’s security flaw exposes payments industry to new cyber threats appeared first on Payments Cards & Mobile.