
Archived Article — The Daily Perspective is no longer active. This article was published on 20 March 2026 and is preserved as part of the archive.

Technology

AI's Trust Problem: What Tech Companies Get Wrong

People are using AI, but they don't trust it or want it. That's a much bigger problem for the industry than anyone admits.

Image: The Verge
Key Points
  • Companies are aggressively deploying AI across products, but multiple surveys show public worry about risks outweighs perceived benefits.
  • The gap isn't about water usage or pessimistic CEOs; it's simpler: people haven't found an AI application they actually want or trust enough to pay for.
  • Trust is collapsing on specific concerns: data privacy, job displacement, lack of transparency about how AI makes decisions, and no clear human oversight.
  • When AI has solved a real problem people care about, adoption has followed. Content moderation and workplace productivity show the pattern.
  • Without addressing trust fundamentally, AI adoption will plateau despite billions in investment and relentless corporate deployment.

There is a widening chasm between what technology companies are building and what people actually want. Companies of all sizes are hunting for places to deploy artificial intelligence, speaking with evangelical certainty about how this technology will transform everything. Meanwhile, when you ask people about AI, the consistent response across dozens of polls is some version of: no thanks.

The disconnect is not abstract. Studies show people are worried about AI's effects, and majorities say the risks outweigh the benefits. Half of Americans say they feel more concerned than excited about increased AI use in daily life, up from 37 per cent in 2021. One recent survey found that 56 per cent of people worry about AI, more than double the 27 per cent recorded a year earlier.

Yet corporate spending on AI continues to accelerate. Most large companies now have AI initiatives. Most are losing patience with pilots and moving to deployment. The industry is in the grip of something between conviction and desperation.

Why the Gap Exists

The usual explanations don't hold up. It's not energy costs or water usage that's keeping people from adopting AI. It's not CEOs being doomer pessimists, as some critics claim. The real problem is simpler and more serious: AI may be intellectually interesting and genuinely useful as business software, but it has not yet delivered a truly game-changing use case that people are willing to pay for or rely on. The technology is in search of its moment.

Consider what people say when asked what they actually want AI to do. Productivity tools for working professionals. Better content moderation to reduce scams and abuse. Medical diagnostics. These are specific, concrete problems. What most companies are actually offering is AI bolted onto existing products for reasons unclear to the user. An AI that rewrites your emails. An AI summary of content you didn't ask to be summarised. An AI that might or might not be deciding something important about you behind the scenes.

Trust is the real blocker. Surveys consistently show the same concerns: people worry that companies will use their personal data without permission (38 per cent cite this as their primary trust-breaker). They want humans in critical decisions, not algorithms making calls about loans, jobs, or healthcare. They distrust the opacity. Only 23 per cent of Americans believe either major political party will handle AI well. Neither party owns the issue because the public doesn't see competent stewardship from either.

The workplace tells a clearer story. Workers are not rejecting AI. About 64 per cent have used AI tools in the past month. But when asked about trust, only 17 per cent believe AI is reliable without human oversight. Workers treat AI outputs like drafts. Roughly 42 per cent spend significant time editing or fixing AI results before using them. This is not enthusiasm. This is skepticism disguised as productivity.

Where It Works

When AI has actually solved a problem people experience, adoption has followed, and trust has grown. Meta's experiments with AI for content moderation show this pattern in miniature. The company deployed AI to detect repeated password reset scam attempts; human moderators had missed these for years. It found and stopped 5,000 attempts daily. It reduced reports about fake celebrity profiles by 80 per cent. It doubled detection of adult sexual solicitation. These are concrete, verifiable improvements to a problem users care about: reducing scams and abuse.

This is what a real use case looks like. Not a feature in search of a problem. Not a widget added to a product because AI is trending. A system that solves something, measurably, that matters to users.

The tension for industry leaders is real. Spending on AI is accelerating; 95 per cent of major companies plan to increase AI investment in the next year. But the gains are stalling in many places. Companies are experimenting with AI without integrating it deeply into how work happens, and executives worry about return on investment. Adoption slows. Performance plateaus.

This is not an argument against AI or its potential. It's an observation about the gap between what the industry is building and what people will trust. That gap will not close through better marketing or more aggressive deployment. It will close only when companies focus less on finding places to insert AI and more on solving real problems in ways that earn user trust.

Until then, people will keep using AI while quietly wondering whether they should.

Tom Whitfield

Tom Whitfield is an AI editorial persona created by The Daily Perspective, covering AI, cybersecurity, startups, and digital policy with a sharp voice and dry wit that cuts through tech hype. Articles under this persona are generated using artificial intelligence with editorial quality controls.