Google's Gemini AI agents are now crawling the dark web, processing between 8 and 10 million posts daily to identify threats relevant to specific organisations. The new capability, available in public preview as part of Google Threat Intelligence, marks a significant shift in how enterprises detect external security risks buried deep in underground forums and criminal marketplaces.
The problem Google is trying to solve is not a shortage of data. Traditional dark web monitoring tools scrape for key terms and rely on pattern matching, generating false-positive rates of 80 to 90 per cent that largely create noise for threat intelligence teams. Google's approach uses Gemini to distil which threats actually matter, focusing on incidents that pose genuine risk to a customer's specific business.
Internal tests show Google Threat Intelligence can analyse millions of daily external events with 98 per cent accuracy. The system draws on knowledge from Google Threat Intelligence Group's human analysts, who track 627 threat groups, adding human expertise to the AI's pattern recognition.
The workflow is deliberately simple. When an organisation first activates the dark web monitoring module, it confirms its identity. Gemini then builds a customer profile using publicly available information about the organisation's size, operations, executives, and technology stack. Instead of requiring manual keyword input and updates, the system maintains this profile autonomously, adjusting it as business operations change.
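To make the idea concrete, the kind of evolving organisational profile described above can be sketched as a simple data structure. This is purely illustrative: the field names, the `OrgProfile` class, and the `update` method are hypothetical and do not reflect Google's actual implementation or API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an organisational profile of the sort the article
# describes. All names and fields here are illustrative assumptions.
@dataclass
class OrgProfile:
    name: str
    industry: str
    region: str
    employee_count: int
    executives: list[str] = field(default_factory=list)
    tech_stack: list[str] = field(default_factory=list)

    def update(self, **changes) -> None:
        """Adjust the profile as business operations change over time."""
        for key, value in changes.items():
            setattr(self, key, value)

# Build an initial profile from public information, then let it evolve.
profile = OrgProfile(
    name="Example Bank",
    industry="banking",
    region="North America",
    employee_count=12000,
    tech_stack=["Okta", "AWS"],
)
profile.update(employee_count=12500)  # headcount changed; profile follows
```

The point of the sketch is the last line: the profile is a living object that is revised as the organisation changes, rather than a static keyword list an analyst has to maintain by hand.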
From there, the AI identifies threats by checking the dark web for posts relevant to that profile. If a criminal advertises access to a large North American bank with specific employee counts and assets under management, the system connects those claims to a matching customer profile and escalates the post as a high-severity alert. The same approach works for data leaks, initial access broker activity, and insider threat intelligence.
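The matching step described above can be illustrated with a toy scoring function. This is a deliberate simplification under stated assumptions: a real system would use an LLM to reason over claims rather than keyword overlap, and `score_post`, the `signals` key, and the severity thresholds are all hypothetical.

```python
# Illustrative matching logic only: keyword overlap stands in for the
# LLM-based reasoning the article describes. All names are hypothetical.
def score_post(post: str, profile: dict) -> str:
    """Return a severity label for a dark-web post against an org profile."""
    text = post.lower()
    hits = sum(1 for signal in profile["signals"] if signal.lower() in text)
    if hits >= 2:
        return "high"    # multiple profile signals match: escalate
    if hits == 1:
        return "medium"  # weak match: surface for analyst review
    return "low"         # no match: suppress as noise

profile = {"signals": ["North American bank", "12,000 employees", "Example Bank"]}
post = ("Selling VPN access to a North American bank, "
        "12,000 employees, $40B assets under management")
severity = score_post(post, profile)  # two signals match, so "high"
```

Even in this crude form, the design choice is visible: posts are ranked against a specific customer's profile rather than against a global keyword list, which is what cuts down the false positives that plague traditional monitoring.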
Google hopes its customers will come to trust AI-generated recommendations that describe critical threats. Yet the expansion of AI decision-making in security raises legitimate questions. Depending on the level of access given to Gemini's dark web intelligence agents, the AI tool could itself become another attack vector for cybercriminals to exploit. Google's response is that the system relies mostly on publicly available information and on context the user chooses to put into the platform.
Google's Mandiant unit reported that cybercriminals are increasingly operating like efficient businesses, with attackers collapsing the window for defenders to respond from hours down to just 22 seconds. This speed advantage is exactly why automated threat detection matters. By shifting the burden of data synthesis and initial artifact triage to specialised AI agents, analysts can move beyond the cognitive limit of manual research to focus on what matters most.
Beyond dark web monitoring, Google also expanded its Security Operations platform with new AI agents that can autonomously investigate alerts, gather evidence, and deliver verdicts with explanations of their reasoning. Customers can now build their own enterprise security agents with remote Model Context Protocol (MCP) server support, enabling unified governance and controls.