Archived Article — The Daily Perspective is no longer active. This article was published on 17 March 2026 and is preserved as part of the archive.

Technology

CBA builds its own AI threat hunter as attack volume explodes

The bank detected 400 billion threat signals last week alone, forcing it to develop custom tools faster than commercial vendors can deliver.

Image: The Register
Key Points
  • CBA detected 400 billion threat signals weekly, up from 80 million six years ago, exceeding the capacity of traditional security tools.
  • The bank built its own agentic AI tools to reduce threat assessment time from two days to 30 minutes.
  • Cyber threat scale is now a recruitment challenge, with early-career analysts facing burnout from high-pressure environments.
  • Criminals exploit AI tools for phishing and attacks, forcing banks to deploy AI-driven defences of matching sophistication.

Australia's Commonwealth Bank faced a problem that money alone could not solve. The volume of cyber threats it detected had grown so vast that traditional vendor-supplied tools could no longer keep pace. So the bank did what few financial institutions attempt: it built its own artificial intelligence system designed specifically for threat hunting.

Speaking at Gartner's Security and Risk Management Summit in Sydney this week, General Manager of Cyber Defence Operations Andrew Pade described a scenario that confronts large banks worldwide. When Pade joined CBA six years ago, the bank ingested 80 million threat signals a week. That figure now tops 400 billion.

The explosion reflects two realities of modern banking security. First, attack volume itself has surged as criminals adopt AI-powered tools to automate and scale their campaigns. When CBA investigated attacks such as phishing emails and sites, it repeatedly found the same code, sometimes including clear artefacts of AI coding tools, across many different attacks. "The lure changed, but the backend was the same," Pade said. Second, the bank now monitors an ecosystem spanning legacy systems, on-premise infrastructure, software-as-a-service platforms, and cloud-hosted workloads, generating exponentially more data to sift through.
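The idea behind "same backend, different lure" can be sketched in code. CBA's actual tooling is not public, so the functions and sample pages below are purely illustrative: if lure-specific text is stripped out before hashing, visually different phishing pages that share a backend collapse to one fingerprint.

```python
# Hypothetical sketch: clustering phishing pages by shared backend code.
# backend_fingerprint and cluster_by_backend are illustrative names,
# not CBA's tooling.
import hashlib
import re
from collections import defaultdict

def backend_fingerprint(source: str) -> str:
    """Hash the page's code with lure-specific strings stripped out."""
    # Blank out quoted strings (the visible "lure") so only the
    # surrounding code structure contributes to the hash.
    stripped = re.sub(r'"[^"]*"|\'[^\']*\'', '""', source)
    stripped = re.sub(r'\s+', ' ', stripped).strip()
    return hashlib.sha256(stripped.encode()).hexdigest()

def cluster_by_backend(pages: dict[str, str]) -> dict[str, list[str]]:
    """Group page URLs whose stripped code hashes to the same value."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for url, source in pages.items():
        clusters[backend_fingerprint(source)].append(url)
    return dict(clusters)

# Two lures with different wording but identical backend code:
pages = {
    "hxxp://lure-a.example": '<form action="/steal.php">"Your parcel is waiting"</form>',
    "hxxp://lure-b.example": '<form action="/steal.php">"Your invoice is overdue"</form>',
}
clusters = cluster_by_backend(pages)  # both URLs land in one cluster
```

Real attribution work would use far richer signals (infrastructure, TLS certificates, kit file layout), but the principle is the same: normalise away the lure, then match on what remains.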

This scale creates a problem beyond technology. Pade worried that the sheer scale of threats is becoming a career-killer. He noted the bank now hires graduates with cybersecurity skills, a change from his own career path, in which early IT workers started on a help desk and learned infosec on the job. Cybersecurity graduates now walk straight into a high-pressure environment that poses a genuine mental health challenge. When every early-career analyst confronts hundreds of billions of signals weekly, burnout becomes inevitable, and retaining talent becomes harder.

CBA's solution was to build its own agentic AI tools rather than wait for vendors to develop comparable capabilities. The first agent ingests threat information from sources such as new research, analyses it against the bank's own data, and identifies threats that could pose a risk to its sprawling estate of legacy systems, on-prem infrastructure, SaaS platforms, and cloud-hosted workloads. Pade said the bank previously needed two days to assess the seriousness of an emerging threat; the agent now does it in 30 minutes and produces reports automatically.
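The triage flow described above, ingest an advisory, match it against an internal asset inventory, emit a report, can be sketched as follows. Everything here (`Advisory`, `ASSET_INVENTORY`, `assess`, the CVE identifier) is a made-up illustration of the pattern, not CBA's system.

```python
# Hypothetical sketch of an advisory-triage step: does a new threat
# touch anything in our estate, and if so, how urgently?
from dataclasses import dataclass

@dataclass
class Advisory:
    cve_id: str
    affected_products: set[str]
    severity: str

# Toy inventory spanning the estate the article describes:
# legacy systems, on-prem, SaaS and cloud-hosted workloads.
ASSET_INVENTORY = {
    "legacy-core-banking": {"products": {"websphere"}, "tier": "legacy"},
    "payments-api": {"products": {"spring", "postgres"}, "tier": "cloud"},
    "hr-portal": {"products": {"workday"}, "tier": "saas"},
}

def assess(advisory: Advisory) -> dict:
    """Return a triage report listing assets exposed to the advisory."""
    exposed = [
        name for name, asset in ASSET_INVENTORY.items()
        if asset["products"] & advisory.affected_products
    ]
    return {
        "cve": advisory.cve_id,
        "severity": advisory.severity,
        "exposed_assets": exposed,
        "action": "escalate" if exposed and advisory.severity == "critical" else "monitor",
    }

report = assess(Advisory("CVE-2026-0001", {"spring"}, "critical"))
```

In a real agentic pipeline the matching step would be far fuzzier (an LLM reading prose advisories, version ranges, exploitability context), but the structure, intelligence in, inventory lookup, report out, is what compresses two days of manual assessment into minutes.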

The bank developed a second agent focused on identifying indicators of compromise, which reduces frontline analysts' work from laborious data triage to genuine problem-solving. Yet AI introduced unexpected complications, particularly when the bank used it to conduct red team security assessments. Pade said human-authored red team reports include detailed evidence that would satisfy a lawyer, but AI-generated documents may not describe the same threat the same way twice. "AI is non-deterministic," Pade said. "So we had to find a way to put deterministic points in a non-deterministic flow." Solving this required the bank to assign deterministic outcomes to attacks, making AI outputs repeatable.

The broader context matters here. According to research from Gartner, 41% of Australian organisations are already utilising agentic AI in some capacity. Yet concerns have grown regarding the use of agentic AI by cybercriminals. Analysts note that 80% of ransomware attacks now incorporate AI, facilitating more sophisticated malware, phishing schemes, and deepfake-driven social engineering tactics. CBA's experience reflects an arms race: criminals deploy AI tools to scale attacks, so defenders must deploy equally sophisticated AI tools to detect them.

Pade emphasised an often-overlooked insight: the people closest to the problem must help solve it. Developing agents proved tricky. The bank's data scientists alone could not build tools that worked. Instead, frontline security staff worked alongside data scientists, combining their knowledge of actual threat patterns with technical AI expertise. This collaboration, rather than top-down product development, proved essential to building tools that solved real operational needs.

The implication for other organisations is stark. CBA could afford to build custom tools in-house because it has the data science talent and budget few institutions match. Smaller banks and businesses face a different calculus: they must either buy from vendors (who struggle to innovate fast enough), hire their own specialist teams (an expensive and competitive market), or risk falling behind. Australian banks are intensifying their focus on operational efficiency. Despite substantial investments in technology, the average cost-to-income ratio among Australia's major banks reached 49.2% in the first half of 2025. As advanced technologies such as AI, automation, and predictive analytics mature, Australian banks are increasingly leveraging these tools to optimise operations.

CBA's approach signals that institutional capability now matters as much as technology itself. Banks that can integrate AI into their actual security workflows, rather than simply deploying vendor products, will detect threats faster and with fewer false alarms. Those that cannot will face growing pressure as attack volume continues to rise.

For security teams, Pade's message carried both caution and pragmatism. Every organisation will face attacks at CBA's scale or higher. The question is whether they have planned how to handle them. By using predictive threat intelligence and anomaly detection powered by AI, banks can move from reactive to proactive security measures, identifying vulnerabilities before they become threats. The alternative is to hope vendors catch up. History suggests they will not.

Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective, translating complex medical research and emerging health threats into clear, responsible reporting. As an AI persona, its articles are generated using artificial intelligence with editorial quality controls.