
Archived Article — The Daily Perspective is no longer active. This article was published on 8 March 2026 and is preserved as part of the archive.

Technology

AI agents lower barriers for cybercriminals, Microsoft warns

Automated reconnaissance and infrastructure management tools are becoming the new front line for threat actors, including North Korean state-backed operatives

Image: The Register
Key Points
  • AI agents allow attackers to automate 'janitorial-type work' like reconnaissance and infrastructure management, saving time and effort
  • North Korea's Coral Sleet group uses AI development platforms to rapidly build and manage malicious infrastructure at scale
  • Less technically skilled criminals can now execute sophisticated attacks, lowering entry barriers for cybercrime operations
  • Australian security guidance exists but organisations must implement robust AI governance and identity controls to defend themselves

The automation tools that help legitimate software developers build applications faster are doing the same for criminals. AI agents allow cybercriminals and nation-state hackers to outsource the "janitorial-type work" needed to plan and carry out cyberattacks, according to Sherrod DeGrippo, Microsoft's general manager of global threat intelligence.

The tasks in question sound mundane, yet they matter enormously. Tasks such as performing reconnaissance on compromised computers, and standing up and managing attack infrastructure, represent what security researchers call a meaningful shift in how cybercrime operates. Previously, these jobs required time, expertise, and human oversight. Now, attackers can point an AI agent at a target and let it work autonomously.

North Korea's Coral Sleet has been observed using development platforms to quickly create and manage attack infrastructure at scale, allowing more rapid campaign staging, testing, and command-and-control operations. This same North Korean group is also behind the fake IT worker scam, in which state-backed operatives secure remote employment at Western technology companies using fabricated identities and AI-generated credentials.

What makes this transition significant is economics. Both uses save attackers time and effort, and also lower the barriers for less technically savvy criminals, especially when it comes to building infrastructure that won't be detected by defenders. When criminals can describe what they want in plain language rather than writing complex code, more people can attempt sophisticated attacks.

Research from threat intelligence firms suggests the pattern will accelerate. In one survey, nearly half of respondents said they believe agentic AI will be the top attack vector for cybercriminals and nation-state threats by the end of 2026. A separate analysis found that AI agents can now run multiple simultaneous intrusions autonomously, create exploits from patches in minutes, and outperform elite human researchers in bug bounty programs. As attackers adopt these capabilities, small crews or single operators will be able to execute reconnaissance, lateral movement, and extortion at a scale and speed previously reserved for large, experienced intrusion teams.

In Australia, the government has begun laying the groundwork for defence. Its approach targets emerging threats such as AI-enabled crime and AI-facilitated abuse, which disproportionately impacts women and girls. The Australian Signals Directorate's Australian Cyber Security Centre (ASD ACSC) has published detailed guidance on securing AI systems, advising organisations to implement strong identity controls, behaviour-based threat detection, and rapid incident response capabilities.

The fundamental challenge is asymmetry. Since the same AI capabilities often have both offensive and defensive applications, it can be difficult to restrict harmful uses without slowing defensive innovation. A critical open question is whether future capability improvements will benefit attackers or defenders more. This is not an argument for abandoning AI deployment; it is a call for careful governance and realistic risk management from organisations adopting the technology.

For business leaders, the message is straightforward: AI will amplify both opportunity and threat. Threat actors will do what works and what gets them to their objective easiest and fastest, and handing them these powerful tools will allow them to do more of it. The defensive response must keep pace.

Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective, translating complex medical research and emerging health threats into clear, responsible reporting. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.