Model Context Protocol (MCP) is the emerging open standard that lets AI models connect to external tools and data sources.
You can think of MCP as a USB‑C port for AI: it standardizes how a large language model (LLM) interacts with external services such as databases, web APIs, and file tools.
Essentially, the application hosting the LLM (the MCP Host) embeds an MCP Client, which maintains a one-to-one connection with an MCP Server that exposes specific functions to the model.
The LLM never talks directly to the outside world; all requests go through this client-server layer. Adoption is growing fast, with researchers finding around 20,000 MCP server implementations on GitHub.
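To make the moving parts concrete, here is a minimal MCP server sketch in Python. It assumes the official `mcp` Python SDK and its `FastMCP` helper; the server name and the `lookup_balance` tool are illustrative stubs, and exact APIs may differ between SDK versions.

```python
# A minimal MCP server sketch, assuming the official `mcp` Python SDK and
# its FastMCP helper; the tool body is a stub for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing-demo")  # illustrative server name

@mcp.tool()
def lookup_balance(customer_id: str) -> str:
    """Return a customer's account balance (stubbed for this example)."""
    # A real server would query a billing database or internal API here.
    return f"Customer {customer_id} has a balance of $0.00"

if __name__ == "__main__":
    # FastMCP defaults to the stdio transport, which is how local hosts
    # (e.g. desktop AI clients) typically launch MCP servers.
    mcp.run()
```

The key detail is that the LLM never calls `lookup_balance` directly: the host’s MCP client invokes the tool on the model’s behalf, which is exactly why the server’s own safeguards matter.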
These servers are enabling new agentic AI workflows: for example, an AI support bot that can query a customer’s account balance or update a database through MCP.
That said, it’s not all sunshine and roses, and as you can imagine, anything involving LLMs comes with new security challenges.
By design, MCP offloads security decisions, such as authentication and input validation, to developers of each server and client. In most early implementations, security was not built in by default.
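Because those decisions sit with each server’s author, even a simple tool handler needs its own guardrails. The sketch below is purely illustrative (the handler, allow-list, and checks are hypothetical) and shows the kind of input validation MCP itself will not perform for you.

```python
# Sketch: MCP leaves validation to the server author, so each tool handler
# has to defend itself. The handler, allow-list, and checks are illustrative.
import re

ALLOWED_TABLES = {"customers", "orders"}  # explicit allow-list

def handle_query_tool(table: str, customer_id: str) -> str:
    # Reject anything outside the allow-list instead of trusting the LLM.
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table '{table}' is not permitted")
    # Validate identifiers before they reach a database or shell.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", customer_id):
        raise ValueError("customer_id has an unexpected format")
    # ... perform the (parameterized) query here ...
    return f"queried {table} for {customer_id}"
```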
Below, we’ll explore what MCP security means for AI-powered applications.
Primary MCP Security Risks
MCP introduces several major security risks. For example, researchers have noted that some early MCP sessions leaked sensitive tokens in URL query strings.
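One straightforward way to avoid that particular leak is to keep secrets out of URLs altogether and send them as headers. Below is a minimal sketch using Python’s `requests` library; the endpoint and the environment variable name are hypothetical.

```python
# Sketch: query strings end up in access logs, proxies, and browser history,
# so secrets belong in headers, not URLs. Endpoint and env var are hypothetical.
import os
import requests

token = os.environ["MCP_SERVER_TOKEN"]

# Risky pattern seen in early integrations:
#   requests.get(f"https://mcp.example.com/session?token={token}")

# Safer pattern: send the token as a bearer credential in a header.
resp = requests.get(
    "https://mcp.example.com/session",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
resp.raise_for_status()
```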
Probably the biggest risk, though, is that an MCP server is just executable code. Red Hat’s analysis warns that “MCP servers are composed of executable code, so users should only use MCP servers that they trust” (and, ideally, ones that have been cryptographically signed).
In other words, MCP expands the AI attack surface. Any flaw in an MCP server or its tool definitions can mislead an LLM into harmful actions, and attackers are already crafting inputs deliberately designed to cause exactly that.
This risk is magnified by scale. Independent research shows AI bot traffic grew 4.5× in 2025, with automated requests now exceeding human browsing behaviour—fundamentally undermining traditional visibility, governance, and security controls.
Security experts have identified several high‑risk issues in MCP deployments. Among them are:
- Supply-chain and tool poisoning: Malicious code or prompts can be injected into MCP servers or their tool metadata.
- Credential management vulnerabilities: Astrix’s large-scale study found that almost 88% of MCP servers require credentials, but 53% of them rely on long-lived static API keys or PATs, and only about 8.5% use modern OAuth-based delegation.
- Over-permissive “confused deputy” attacks: MCP does not inherently carry the end user’s identity into the server. If an MCP server holds powerful permissions, an attacker can trick the LLM into invoking it on their behalf (a mitigation sketch follows this list).
- Prompt and context injection: Prompt injection can fool a standalone LLM, but MCP introduces more sophisticated variants. An attacker can subtly poison a data source or file, for example, by inserting an invisible malicious prompt, so that when the agent fetches it via MCP, the harmful instruction is executed before the user even sees a response.
- Unverified third-party servers: Hundreds of MCP servers for GitHub, Slack, and other services exist online, and any developer can install one from a public registry, creating classic supply-chain risks.
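To make the “confused deputy” and credential items above more concrete, here is a minimal sketch of a server-side check that insists on a per-user, narrowly scoped grant before a destructive action runs. The `CallerContext` shape, the scope names, and the tool itself are all hypothetical.

```python
# Sketch: instead of acting on the server's own broad permissions, require
# an explicit, per-user, narrowly scoped grant for sensitive actions.
# The CallerContext shape and scope names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class CallerContext:
    user_id: str
    scopes: frozenset  # e.g. frozenset({"billing:read"})

def delete_customer(ctx: CallerContext, customer_id: str) -> str:
    # Refuse unless the calling user was explicitly delegated this right;
    # this blocks a "confused deputy" invocation made on an attacker's behalf.
    if "billing:write" not in ctx.scopes:
        raise PermissionError(f"user {ctx.user_id} lacks billing:write")
    # ... call the backing billing API with a short-lived, scoped token ...
    return f"deleted customer {customer_id} on behalf of {ctx.user_id}"
```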
Taken together, these risks make it clear that MCP cannot be secured with traditional API or application controls alone.
Purpose-built MCP security solutions are emerging to address these challenges—providing visibility into agent-to-tool interactions, enforcing least-privilege access, validating third-party servers, and detecting malicious or anomalous MCP behaviour at runtime.
AI Bot Pressure on Digital Businesses
The security risks introduced by MCP are colliding with a sharp rise in AI-driven bot traffic, particularly across e-commerce and high-traffic online services.
As AI agents become more capable, they are increasingly used to scale abuse that was once manual—credential stuffing, scraping, fake account creation, and inventory scalping—at unprecedented volumes.
Industry data shows that AI crawler and agent traffic has surged dramatically. Across DataDome’s customer base, for example, LLM bots grew from around 2.6% of all bot requests to over 10.1% between January and August 2025.
During peak retail periods, this activity intensifies further, amplifying fraud attempts and putting login flows, forms, and checkout pages under sustained pressure.
These are precisely the areas where users submit credentials and payment data, making them high-value targets for automated attacks.
Many organizations remain poorly defended. Large-scale testing of popular websites reveals that only a small fraction can reliably stop automated abuse, while the majority fail to block even basic scripted bots – let alone adaptive AI agents that mimic human behavior.
This gap highlights how quickly legacy, signature-based controls are falling behind.
Platforms such as DataDome show how modern defenses are shifting toward intent-based traffic analysis, using behavioral signals to distinguish malicious automation from legitimate users and approved AI agents.
This model allows organizations to respond dynamically as attack patterns evolve, rather than relying on static rules or brittle fingerprints.
Mitigating AI-driven bot risk now requires tighter controls on high-risk entry points, especially account creation, authentication, and form submissions. It also requires real-time detection that can scale alongside automated traffic.
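As one deliberately simplified illustration of tightening a high-risk entry point, the sketch below throttles repeated login attempts per client IP. It is not any particular vendor’s detection logic; a production setup would rely on shared state and behavioural or intent signals rather than an in-memory dictionary.

```python
# Sketch: a naive sliding-window rate limit on login attempts per client IP.
# The thresholds and the in-memory store are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 10
_attempts = defaultdict(deque)  # client_ip -> timestamps of recent attempts

def allow_login_attempt(client_ip: str) -> bool:
    now = time.monotonic()
    window = _attempts[client_ip]
    # Discard attempts that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # challenge or block this client upstream
    window.append(now)
    return True
```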
DataDome reports blocking hundreds of billions of bot-driven attacks annually, highlighting the security challenges we’re facing and the need for AI-aware protection as MCP-enabled applications become mainstream.