As anxiety spreads through software markets about AI replacing human workers and making SaaS subscriptions redundant, Datadog is pursuing a contrarian strategy: building its own AI models instead of relying on off-the-shelf large language models.
The observability platform has already built a model called Toto-Open-Base, trained on more than two trillion time-series data points, which appears to be the largest training corpus of any publicly released time-series model. All the training data came from Datadog's own systems, accumulated over years of operating an observability service. The company is now preparing a revised version, chief product officer Yanbing Li said in an interview.
The Toto model is released as open source, a conspicuous show of confidence in the approach. Li's reasoning is direct: rather than chase generic AI trends, SaaS companies should focus on their core domain. "What is the SaaS company's role?" Li asked during the conversation. "To innovate in their domain."
That approach directly responds to what many investors and analysts now call the SaaSpocalypse. In early February 2026, software valuations cratered when Anthropic released Claude Cowork, a tool designed to automate workflows with AI agents. The market immediately asked: if AI can now do the work that SaaS tools were built for, why pay recurring subscription fees? Roughly two trillion dollars in market value evaporated in weeks.
The concern is economically real. AI coding tools make it cheaper and faster for companies to build custom software internally rather than buy from vendors. Even the threat of building a replacement tool gives customers leverage at contract renewal time. Some enterprises, like Klarna, have already publicly cut SaaS contracts in favour of custom AI-driven alternatives.
But not all SaaS is equally vulnerable. The companies that have held up best during the sell-off are those providing infrastructure, compliance, or security tools that sit deep in the enterprise stack. Datadog itself saw modest declines relative to sector-wide pain, suggesting investors see some insulation in observability work. Tools managing compliance or critical operations remain harder to replace because the risks of AI errors are too severe.
Datadog's model strategy rests on a different proposition: domain-specific models can outperform generic LLMs on specialized tasks. A model trained on trillions of observability metrics understands the unique characteristics of production monitoring in ways a general-purpose language model never will. That edge in accuracy and speed could make Datadog's platform valuable enough that customers don't bother building alternatives.
Li emphasised two concrete benefits of owning the model layer. First, customers no longer need to budget for tokens from external AI services; the cost sits within Datadog's platform. Second, proprietary models can improve the quality of anomaly detection and remediation suggestions, potentially making Datadog's agents faster and more reliable than generic AI tools applied to the same problem.
The honest complication: AI agents remain unpredictable and prone to hallucination, so teams running critical infrastructure must verify suggestions before acting on them. Li acknowledged this directly, saying that for AI systems to win trust, their outputs must be both explainable and verifiable. And because Datadog built the model itself, it can also build tooling to watch the model while it works and detect signs of failure, creating a feedback loop that generic tools cannot match.
Whether that proves enough depends on whether domain specialisation truly defeats commoditisation. The SaaSpocalypse assumed that generic AI could replace narrowly focused SaaS tools. Datadog's argument is that narrowly focused models trained on real production data beat generic tools in their domain, making replacement uneconomical. That's not a certainty, but it reflects the genuine choice facing every SaaS company right now: compete on features and UI, or own the data and intelligence layer.