Britain's competition watchdog says the next wave of agentic AI assistants could end up nudging people toward worse deals, manipulating choices, or quietly prioritising the interests of the companies behind them. In a report published Monday, the UK's Competition and Markets Authority explored the rise of so-called agentic AI, systems that go beyond answering questions and instead carry out tasks for people, such as shopping around for services, booking travel, switching providers, or managing subscriptions.
The promise is efficiency. The pitch, at least from the tech industry, is that these agents could cut the time and effort required to navigate complex digital markets. The regulator's analysis reads as a sharp counterpoint. "Greater autonomy for agents increases the consequences of errors, may heighten risks of manipulation and loss of consumer agency, and could lead to worse overall outcomes for consumers," the report notes.
Self-Dealing Through the Back Door
One of the CMA's biggest worries is whose interests these agents will actually serve. An AI assistant that's supposed to hunt down the best deal for you could just as easily push you toward products that make more money for the platform behind it. That could mean pricier or less suitable options quietly bubbling to the top.
Personalisation, usually pitched as a helpful feature, could make that steering harder to detect. If every user is shown different recommendations or prices based on detailed behavioural profiles, it becomes much harder to tell when anyone is being nudged toward a worse deal.
The CMA warns that highly adaptive agents could supercharge the sort of manipulative interface tricks often called "dark patterns," especially if the systems are optimised for engagement, conversions, or other commercial targets. The concern is not theoretical. The regulator also flags a structural risk: market power could shift from financial services firms towards the AI companies that control consumer interfaces, own consumer data, and design the agents, moving value chains beyond regulatory perimeters or enabling those companies to enter financial services directly.
When Errors Become Expensive
Reliability matters when autonomous systems can execute decisions without human review. The CMA points out that today's AI models remain prone to hallucinations and other errors, and those mistakes become more serious when software is allowed to take actions rather than merely offer advice. An incorrect answer from a chatbot is annoying; an autonomous agent cancelling a service, switching a contract, or making a financial decision based on flawed information could be considerably more expensive.
Financial services adoption is already underway. Reports suggest around a third of banks are piloting agentic AI, with roughly half of those pilots expected to go live in 2026. That pace is outrunning regulatory clarity: the technology is moving faster than financial institutions can grapple with the legal and regulatory risks and turn pilots into tangible results. The gap is evident in the industry's head start on agentic AI before it has answered its own open questions about generative AI.
Opacity and the Loss of Scrutiny
As with other AI-enabled systems, agentic AI may amplify existing biases in data or decision-making processes, particularly where outcomes emerge from complex, multi-step reasoning that is difficult to observe or explain. Opaque decision making can make it harder for consumers to understand, challenge or seek redress for unfair outcomes, increasing risks under consumer protection and equality frameworks.
The erosion of human oversight creates another layer of risk. As consumers increasingly delegate tasks to AI agents, there is a danger of over-reliance, where users defer too readily to automated decisions and become less able to scrutinise or intervene over time. Sustained delegation may weaken consumers' ability to detect errors or misalignment with their preferences unless systems are designed with clear boundaries, prompts and override mechanisms.
The Regulatory Response: Existing Laws, Clear Accountability
The CMA does not believe that heavy-handed, sweeping regulation is right for the UK; it could stymie innovation and stunt growth. Instead, the focus has been on understanding developments, potential benefits and risks, identifying important uncertainties and drivers, and publishing in-depth reports and a set of AI principles to guide the market towards positive outcomes.
The regulatory approach is explicit: UK consumer law applies whether decisions are made by people or by AI. A business is responsible for what its AI agent does in the same way it is responsible for what an employee does, even if someone else designed or supplies the agent on its behalf. And enforcement has teeth: a company that breaks consumer protection law can be fined up to 10% of its worldwide turnover, and may be forced to compensate affected consumers.
Without appropriate safeguards, agentic systems could undermine trust in AI and consumer markets rather than strengthen it, and that loss of confidence could in turn inhibit positive innovation, investment and growth. The CMA's message is pragmatic: agentic AI can deliver genuine benefits in efficiency and accessibility, and it will deliver the greatest consumer value, and earn trust, when autonomy is clearly bounded by user intent and backed by strong transparency and accountability. The question is not whether the technology should exist, but whether it can be deployed in ways that serve consumer interests rather than subvert them.