A practical security guide for MCP servers in production. Credential handling, authentication, GDPR, input validation, and the mistakes we see most often.
MCP is moving from experimental to production fast. AI assistants connected to live APIs, databases, and third-party services are no longer proofs of concept. They're running in companies of all sizes, handling real data, right now.
The security model for MCP is still being worked out. Anthropic's specification defines the protocol but deliberately leaves security implementation to the server operator. That's the right architectural choice. It's also why developers building MCP servers need to think carefully about security from the start.
This guide covers 10 security considerations that matter most in production. Each one includes what the risk is, who should care about it, and what to do.
[Diagram: defence in depth. Five security layers sit between an MCP client and your upstream API credentials, beginning with tool schema validation (constrained inputs, type checking, pattern matching) and ending at the upstream APIs (Google, Meta, Amazon, YouTube), whose credentials never leave the server.]
1. Keep credentials off user machines
Who cares: Everyone. This is the foundation.
The most common MCP security mistake is putting upstream API credentials in a configuration file that lives on the user's machine.
This pattern comes from the early stdio era of MCP, when most servers ran locally. Developer-focused clients like Cursor, VS Code, Cline, and Continue still support local stdio servers and still accept configs shaped like this.
```json
{
  "mcpServers": {
    "analytics": {
      "env": {
        "GOOGLE_API_KEY": "AIza...",
        "META_APP_SECRET": "abc123..."
      }
    }
  }
}
```
Every process on that machine can read those credentials. If the machine is shared, compromised, or the config file is accidentally committed to a repo, those credentials are exposed.
The modern path avoids this entirely. Remote MCP servers, accessed via the Streamable HTTP transport, live on the operator's infrastructure. Clients like ChatGPT, Claude (web, desktop, mobile), Gemini, Cursor, Windsurf, and VS Code connect by pasting a URL into a Connectors UI or adding a url entry in their MCP config. Upstream credentials never touch the end user's machine. For any multi-tenant or production deployment, remote is the only defensible architecture. The old claude_desktop_config.json stdio bridge does not even accept remote servers anymore.
According to GitGuardian's 2024 report, over 12.8 million new secrets were detected in public GitHub commits in 2023 alone (GitGuardian, 2024). API keys in config files are one of the most common vectors.
The fix: Credentials live on your server, not the client. The client authenticates to your MCP server using a token (a license key, a session token, an OAuth access token). Your server holds the upstream API keys and proxies requests on the client's behalf. This is the server-side proxy pattern. It's the only pattern that keeps upstream credentials safe in a multi-tenant deployment. Ooty's architecture uses this pattern for the same reason.
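A minimal sketch of the server-side proxy pattern in Python (the session store, tenant IDs, and `api.example.com` endpoint are placeholders, not any product's actual implementation):

```python
import os

# Server-side session store: client tokens map to tenants.
# Upstream keys live only in the server environment, never in responses.
SESSIONS = {"tok_abc123": "tenant_42"}
UPSTREAM_KEYS = {"google": os.environ.get("GOOGLE_API_KEY", "server-held-key")}

def proxy_tool_call(client_token, upstream, params):
    """Validate the client's token, then attach the server-held upstream key."""
    tenant = SESSIONS.get(client_token)
    if tenant is None:
        return {"status": 401, "error": "invalid token"}
    # The upstream request is built entirely server-side; the client
    # supplies only its own token and the tool parameters.
    upstream_request = {
        "url": f"https://api.example.com/{upstream}",
        "headers": {"Authorization": f"Bearer {UPSTREAM_KEYS[upstream]}"},
        "params": params,
        "tenant": tenant,
    }
    # In production this would be an outbound HTTP call; the sketch
    # returns the built request so the shape is visible.
    return {"status": 200, "request": upstream_request}
```

The point of the pattern is visible in the return value: the upstream key appears only in the server-built request, never in anything echoed back to the client.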
2. Authenticate every endpoint
Who cares: Developers building MCP servers.
MCP servers that accept HTTP connections need authentication on every route. "I'll add auth later" is how security holes ship to production.
Two primary options:
Bearer token authentication. The client includes Authorization: Bearer {token} in every request. Your server validates the token before processing any tool calls. Appropriate for license-key or API-key based authentication.
OAuth 2.0. For user-delegated access (the user authorises your MCP server to act on their behalf), OAuth is the correct standard. The Streamable HTTP transport supports OAuth token passing natively. Appropriate when users are authorising access to their own accounts on platforms like Google or Meta.
For development environments, no-auth is fine. For anything that touches real data, implement auth before you write the first tool handler.
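A bearer-token check that runs before any tool dispatch might look like this sketch (the license key and handler names are hypothetical):

```python
import hmac

# Hypothetical license keys, stored and validated server-side.
VALID_TOKENS = {"lk_live_9f8e7d"}

def authenticate(headers):
    """Reject any request without a valid Bearer token; runs before dispatch."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids timing side channels on token checks.
    return any(hmac.compare_digest(token, valid) for valid in VALID_TOKENS)

def handle_request(headers, body):
    """Every route goes through authenticate() first, with no exceptions."""
    if not authenticate(headers):
        return {"status": 401, "error": "unauthorized"}
    return {"status": 200, "result": f"dispatched {body.get('tool')}"}
```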
3. Request minimum OAuth scopes
Who cares: Both CTOs and end users.
When implementing OAuth flows, request only the scopes your tool needs.
Bad practice: an analytics tool requesting Google OAuth scopes like these:
analytics.readonly
mail.google.com (full Gmail access)
calendar
Requesting Gmail and Calendar access for an analytics tool is unnecessary and alarming to users. OAuth consent screens showing excessive permissions reduce conversion and create legitimate security concerns.
This matters for GDPR compliance too. GDPR's data minimisation principle (Article 5(1)(c)) requires that personal data is "adequate, relevant and limited to what is necessary." Requesting unnecessary scopes creates GDPR exposure.
The rule: If your MCP server reads Google Ads data, you need the Google Ads readonly scope. That's it. Nothing else.
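As a sketch, a consent URL built with only the analytics scope (the client ID and redirect URI are placeholders):

```python
from urllib.parse import urlencode

# Minimum scope for a read-only analytics tool: one scope, nothing else.
SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]

def build_consent_url(client_id, redirect_uri):
    """Build a Google OAuth consent URL requesting only the scopes listed above."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(SCOPES),
        "access_type": "offline",
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
```

If a new tool genuinely needs another scope, add it to the list deliberately; the consent screen the user sees is exactly this list.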
[Diagram: MCP threat model. Six security risks in MCP deployments, sorted by severity.]
4. Design session tokens properly
Who cares: Developers building MCP servers.
If your MCP server issues its own session tokens, they need:
Cryptographically random generation (a CSPRNG, never sequential or guessable IDs)
Appropriate expiry (24 hours is a reasonable default)
Machine or client fingerprint binding if you want to prevent token sharing
Server-side storage in a database or Redis, not stateless JWT if you need revocation
Token refresh before expiry so users don't need to reauthenticate mid-session
On JWT specifically. JWTs are stateless, which means they can't be revoked without a blocklist. For MCP session tokens that need to be revocable (think: license cancellation, security incident), store sessions server-side and validate against the database.
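The token properties listed above can be sketched in a few lines (an in-memory dict stands in for the database or Redis you'd use in production):

```python
import secrets
import time

SESSION_TTL = 24 * 3600  # 24-hour expiry, the reasonable default noted above

sessions = {}  # server-side store; a database or Redis in production

def issue_token(license_key):
    """Cryptographically random token, stored server-side so it can be revoked."""
    token = secrets.token_urlsafe(32)
    sessions[token] = {"license": license_key,
                       "expires": time.time() + SESSION_TTL}
    return token

def validate_token(token):
    """Reject unknown or expired tokens; expired entries are cleaned on access."""
    record = sessions.get(token)
    if record is None or time.time() > record["expires"]:
        sessions.pop(token, None)
        return False
    return True

def revoke_token(token):
    """Immediate revocation, possible precisely because sessions live server-side."""
    sessions.pop(token, None)
```

A stateless JWT cannot implement `revoke_token` without an external blocklist, which is the trade-off discussed above.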
5. Treat all tool parameters as untrusted input
Who cares: Developers.
MCP tool parameters are user-controlled inputs. Full stop. Treat them like you would treat form submissions on a public website.
The attack surface depends on what your tools do with the parameters:
SQL queries. Parameterise everything. Never interpolate user-supplied values into query strings.
Shell commands. Don't construct shell commands from tool parameters. If you must call external processes, use argument arrays, not string interpolation.
API calls. Validate parameter types, ranges, and formats before passing to upstream APIs. Unexpected values can cause upstream errors that leak backend information.
File paths. If your MCP server reads files based on tool parameters, validate and sanitise paths. Path traversal (../../etc/passwd) is trivially constructed with string concatenation.
The MCP specification doesn't validate tool parameters. That's your responsibility.
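For the file-path case specifically, traversal-safe resolution can be sketched like this (the allowed root directory is an assumed example):

```python
from pathlib import Path

# All tool-accessible files live under this root (an illustrative path).
ALLOWED_ROOT = Path("/srv/mcp-data").resolve()

def safe_read_path(user_path):
    """Resolve the requested path and refuse anything outside the allowed root."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    # resolve() collapses any ../ segments, so a traversal attempt ends up
    # outside ALLOWED_ROOT and is rejected here rather than opened.
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"path traversal rejected: {user_path!r}")
    return candidate
```

The same shape applies to the other rows: validate after normalisation, then fail closed.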
6. Rate limit at the MCP layer
Who cares: Anyone paying API bills.
Your MCP server sits between the AI assistant and your upstream APIs. If you don't rate limit at the MCP layer, a runaway agent can exhaust your upstream API quotas in minutes.
This is both a security concern and a cost concern. Many upstream APIs (Google Ads, Meta, Amazon) have both rate limits and cost-per-call billing. An agent stuck in a loop can generate significant unexpected costs.
Three levels of rate limiting:
Per-session. Limit tool calls per session per minute (e.g., 60 calls/minute)
Per-license-key. Limit total daily calls per license (appropriate for SaaS billing tiers)
Per-upstream-endpoint. Respect upstream API rate limits with queuing or backoff
Return the right error. When rate limited, return HTTP 429 with Retry-After header. MCP clients that implement proper error handling will back off automatically. Clients that don't will at least get a clear error rather than a cryptic failure.
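A per-session sliding-window limiter returning a 429-shaped result might look like this sketch (the 60 calls/minute figure matches the example above):

```python
import time
from collections import deque

WINDOW = 60.0   # seconds
MAX_CALLS = 60  # 60 calls/minute per session

_calls = {}  # session ID -> deque of recent call timestamps

def check_rate_limit(session_id, now=None):
    """Sliding-window limiter; returns a 429 with Retry-After when exceeded."""
    now = time.monotonic() if now is None else now
    window = _calls.setdefault(session_id, deque())
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW:
        window.popleft()
    if len(window) >= MAX_CALLS:
        retry_after = int(WINDOW - (now - window[0])) + 1
        return {"status": 429, "headers": {"Retry-After": str(retry_after)}}
    window.append(now)
    return {"status": 200}
```

The per-license and per-upstream tiers follow the same structure with a longer window and a shared key, respectively.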
7. Handle GDPR before you need to
Who cares: Any company with European users.
If your MCP server handles data about EU residents, and that includes most marketing data (analytics, ad performance, social media metrics), GDPR applies. The common areas that catch people:
Logging. If you log tool requests for debugging, be careful about what you log. Analytics data and search queries may constitute personal data if they're linkable to individuals. Log request metadata (timing, error codes, tool names) rather than request payloads.
Data processing agreements. If you proxy requests to upstream APIs on behalf of users, you're a data processor. Your terms of service and privacy policy need to reflect this.
Right to erasure. If you store any user data server-side (session tokens, cached API responses, usage logs), you need a deletion mechanism. Build the deletion path before you need it.
Cross-border transfer. US servers processing EU user data need a valid legal basis for the transfer. The EU-US Data Privacy Framework covers many US-based services, but you need to be enrolled.
Retention limits. Don't store upstream API responses longer than necessary. If you're caching for performance, set cache TTLs in hours, not months, and implement automated cleanup.
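A cache with an hours-scale TTL and cleanup-on-access can be sketched like this (the one-hour TTL is an illustrative choice, not a requirement):

```python
import time

CACHE_TTL = 3600.0  # one hour; TTLs in hours, not months

_cache = {}

def cache_put(key, value, now=None):
    """Store an upstream response with its storage timestamp."""
    now = time.time() if now is None else now
    _cache[key] = {"value": value, "stored_at": now}

def cache_get(key, now=None):
    """Return a cached response only while it is within TTL; expired
    entries are deleted on access, giving automated cleanup for free."""
    now = time.time() if now is None else now
    entry = _cache.get(key)
    if entry is None or now - entry["stored_at"] > CACHE_TTL:
        _cache.pop(key, None)
        return None
    return entry["value"]
```

A periodic sweep would also remove entries that are never read again, which access-time cleanup alone does not.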
8. Log intelligently
Who cares: Security teams, on-call engineers, compliance officers.
Good security posture requires knowing what happened when something goes wrong. MCP server logs should capture enough to reconstruct events without storing sensitive data.
Log:
Tool name called
License key or session ID (truncated or hashed)
Response time
Error codes and types
Rate limit events
Don't log:
Full request payloads (may contain personal data)
Upstream API responses (same concern)
Complete token values (truncate to first and last 4 characters)
OAuth access tokens (never, under any circumstances)
Use structured JSON logs with consistent fields. This makes it practical to search for specific patterns when investigating incidents.
Keep access logs for three months. That's typically sufficient for incident investigation. Longer retention increases your GDPR exposure.
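A log-entry builder following these rules might look like this sketch (field names are illustrative):

```python
import hashlib
import json

def build_log_entry(tool, session_token, duration_ms, error=None):
    """Structured JSON log line: metadata only, no payloads, no full tokens."""
    entry = {
        "tool": tool,
        # Hash the session token so entries are correlatable but not replayable.
        "session": hashlib.sha256(session_token.encode()).hexdigest()[:16],
        # Truncated form for human debugging: first and last 4 characters only.
        "token_hint": f"{session_token[:4]}...{session_token[-4:]}",
        "duration_ms": duration_ms,
        "error": error,
    }
    return json.dumps(entry, sort_keys=True)
```

Nothing in the resulting line can reconstruct the original token or any request payload, but the hashed session ID still lets you group all activity from one session during an investigation.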
9. Manage your dependencies
Who cares: Developers and DevOps teams.
MCP servers are typically built on npm or pip ecosystems. These ecosystems have a supply chain problem: transitive dependencies can introduce vulnerabilities that are invisible in your direct dependency list.
The specific MCP risk. MCP servers using the stdio transport run as local processes on the user's machine with the user's permissions. A compromised dependency executes with full user-level access. This is still a meaningful threat for stdio servers distributed as npm packages, which is most of what developer clients like Cursor, Cline, and Continue run locally.
For server-side remote MCP deployments (Streamable HTTP transport), which is what ChatGPT, Claude, and Gemini now use for anything production-grade, the risk profile is similar to any other web service. Standard practices apply.
10. Use the tool schema as a security boundary
Who cares: Everyone building MCP tools.
The MCP tool schema (the JSON Schema definition of what parameters each tool accepts) is a security control, not just documentation.
A poorly defined schema, one that accepts arbitrary untyped strings with no constraints, is an invitation to misuse.
The schema does three things: tells the AI client what's expected (reducing hallucinated parameters), provides a first line of validation before your code runs, and limits the attack surface for adversarial inputs.
Validate against the schema in your server code too. Don't trust that the client has enforced schema constraints.
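As a sketch, a constrained schema plus a minimal server-side re-check (the tool fields and bounds are hypothetical; in production you would likely use a full JSON Schema validator rather than hand-rolling one):

```python
# Constrained tool schema: enum, bounds, and a closed property set
# instead of bare, unconstrained strings.
REPORT_SCHEMA = {
    "type": "object",
    "properties": {
        "metric": {"type": "string",
                   "enum": ["sessions", "clicks", "conversions"]},
        "days": {"type": "integer", "minimum": 1, "maximum": 90},
    },
    "required": ["metric"],
    "additionalProperties": False,
}

def validate_params(params, schema=REPORT_SCHEMA):
    """Server-side re-check of the constraints the schema declares;
    never trust that the client already enforced them."""
    props = schema["properties"]
    if schema.get("additionalProperties") is False and set(params) - set(props):
        return False  # unknown parameters rejected outright
    for name in schema.get("required", []):
        if name not in params:
            return False
    for name, value in params.items():
        rules = props[name]
        if rules["type"] == "string":
            if not isinstance(value, str):
                return False
            if "enum" in rules and value not in rules["enum"]:
                return False
        elif rules["type"] == "integer":
            if not isinstance(value, int):
                return False
            if value < rules.get("minimum", value) or value > rules.get("maximum", value):
                return False
    return True
```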
A note on SOC 2
If you're building MCP infrastructure for enterprise customers, SOC 2 Type II is the most relevant compliance framework. It evaluates security controls across five trust service criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy.
Every control described in this guide (access controls, audit logging, encryption, rate limiting, vulnerability management) maps directly to SOC 2 Security criteria. The audit is primarily about demonstrating that controls are in place and operating consistently over time. If SOC 2 is on your roadmap, implement these controls systematically and document them from the start.
The 10-Point Security Checklist
The baseline for responsible MCP server operation
1. Server-side credentials: no upstream API keys on client machines
2. Auth on every endpoint: bearer token or OAuth before shipping
3. Minimum OAuth scopes: request only what the tool needs
4. Proper token design: random, expiring, revocable, stored securely
5. Input validation: all tool parameters treated as untrusted
6. Three-tier rate limiting: session, license, and upstream API
7. GDPR-compliant handling: minimise storage, know your legal basis
8. Safe audit logging: log metadata, not payloads, and set retention
9. Dependency hygiene: audit regularly, lock files, keep updated
10. Schema as security control: constrain inputs, validate server-side
The bottom line
These 10 points represent the baseline for responsible MCP server operation. The floor, not the ceiling.
The MCP ecosystem is growing fast, and the companies that get security right early will have a real advantage over those that retrofit it later. Every week we see another headline about a data breach that started with exposed credentials or missing authentication. MCP doesn't have to repeat those mistakes.