CeFPro Connect

Banks Turn to AI Amid Fears Over Growing Credibility of Social Engineering Fraud
Banks are increasingly combining artificial intelligence with traditional security controls to combat a surge in sophisticated social engineering and business email compromise attacks. As fraudsters deploy automation and generative AI, financial institutions are shifting toward adaptive, behavior-based defenses to protect payments, customers, and trust.
Jan 19, 2026
Tags: Industry News, AI and Technology (including Fintech)
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization.
  • Business email compromise remains a leading fraud threat to banks
  • Attackers increasingly use automation and generative AI
  • Social engineering exploits trust rather than technical weaknesses
  • Traditional rule-based controls often fail under operational pressure
  • AI enables context- and behavior-based fraud detection
  • Natural language processing helps identify executive impersonation
  • Behavioral analytics flag abnormal transactions and workflows
  • Human oversight remains essential alongside AI tools

Banks are accelerating the use of artificial intelligence to counter a growing wave of social engineering and business email compromise attacks that continue to inflict heavy financial and reputational damage across the sector.

Business email compromise remains one of the most effective cybercrime techniques because it exploits human trust rather than technical weaknesses.

Fraudsters impersonate senior executives, suppliers, or customers to pressure staff into authorizing payments or releasing sensitive information, often slipping past traditional perimeter defenses.

“Social engineering attacks succeed because they manipulate urgency, authority, and familiarity rather than exploiting code,” said Quadri Owolabi, Technology Project Management leader at HSBC, writing for Finextra.

“That makes them particularly difficult to stop using static, rule-based security models alone.”

The threat is intensifying as attackers adopt automation, impersonation, and generative AI to produce messages that closely resemble legitimate business communications.

These techniques allow criminals to scale attacks while making fraudulent requests harder to distinguish from normal activity.

Banks are attractive targets due to the value and speed of transactions they process. Common scenarios include fake executive requests for urgent wire transfers, invoice redirection schemes aimed at finance teams, and social engineering of customer service staff to bypass identity checks.

These attacks frequently strike during periods of operational pressure, such as quarter-end or major transactions, when employees are more likely to act quickly.

“Because these requests align with everyday workflows, traditional controls often fail to flag them in time,” Owolabi said. “That is why banks are increasingly turning to AI to analyze context and behavior rather than relying solely on predefined indicators.”

AI-driven defenses allow banks to examine multiple signals simultaneously. Natural language processing can detect subtle linguistic cues associated with impersonation, while behavioral analytics can identify anomalies in transaction patterns, approval chains, communication history, and timing.
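The multi-signal approach described above can be sketched in simplified form. All signal names, weights, and thresholds below are illustrative assumptions, not details of any bank's actual system; in practice the linguistic score would come from an upstream NLP model.

```python
# Illustrative sketch: combining linguistic and behavioral signals
# into a single risk score for a payment request. Weights and
# thresholds are hypothetical assumptions for demonstration only.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    sender_known: bool            # sender appears in prior communication history
    urgency_language: float       # 0..1 score from an assumed upstream NLP model
    outside_business_hours: bool
    approver_in_usual_chain: bool

def anomaly_score(req: PaymentRequest) -> float:
    """Blend behavioral and linguistic signals into one 0..1 risk score."""
    score = 0.0
    if not req.sender_known:
        score += 0.35             # unfamiliar sender is a strong signal
    score += 0.30 * req.urgency_language
    if req.outside_business_hours:
        score += 0.15
    if not req.approver_in_usual_chain:
        score += 0.20
    return min(score, 1.0)

# Example: urgent request from an unfamiliar sender, outside the usual approval chain
req = PaymentRequest(amount=250_000, sender_known=False, urgency_language=0.9,
                     outside_business_hours=True, approver_in_usual_chain=False)
print(round(anomaly_score(req), 2))  # 0.97
```

The point of the sketch is that no single signal decides the outcome; a request that matches everyday workflows on most dimensions can still accumulate enough anomalies to be flagged.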

When integrated with security orchestration platforms, AI alerts can automatically pause or escalate suspicious transactions before funds leave the bank.
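That pause-or-escalate step might be gated on the risk score roughly as follows. The thresholds and action names here are hypothetical, intended only to show how an alert can hold funds before release rather than after.

```python
# Hypothetical sketch of routing a scored transaction through an
# orchestration layer. Thresholds and actions are illustrative assumptions.

def route_transaction(score: float) -> str:
    """Map a 0..1 risk score to an orchestration action."""
    if score >= 0.8:
        return "pause"      # hold funds pending analyst review
    if score >= 0.5:
        return "escalate"   # allow processing but open an investigation
    return "release"

print(route_transaction(0.97))  # pause
print(route_transaction(0.60))  # escalate
print(route_transaction(0.10))  # release
```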

Owolabi highlighted a case involving a large international retail and commercial bank that experienced a spike in executive impersonation attacks targeting treasury and finance teams.

Fraudsters posed as senior leaders and requested urgent cross border transfers linked to confidential initiatives.

Despite strong baseline controls such as multi-factor authentication and dual authorization, attackers exploited moments of high workload and targeted staff with delegated authority.

The bank responded by deploying AI-driven monitoring integrated directly into its payments environment.

“The system assessed language patterns, historical communication behavior, transaction context, and timing anomalies all at once,” Owolabi said. “In one case, an urgent request that appeared legitimate was flagged as anomalous and automatically paused.”

A subsequent investigation confirmed the request was fraudulent, preventing a high six-figure loss.

Following deployment, the bank reported faster detection of social engineering attempts, fewer successful incidents, and improved visibility for risk and audit teams.

While AI has strengthened detection and response, Owolabi stressed that it does not replace governance or human judgment.

Foundational controls such as access management, transaction limits, and authentication remain essential, supported by analysts who review high-risk alerts before irreversible actions are taken.

“AI works best when combined with strong governance and human oversight,” he said. “Trust and accountability remain critical, especially when automated systems are influencing financial decisions.”
