MCP Inspector is the official debugging tool for MCP servers. Run it via npx (no install step needed): npx @modelcontextprotocol/inspector. Usage: npx @modelcontextprotocol/inspector node server.js (stdio) or npx @modelcontextprotocol/inspector http://localhost:3000/mcp (HTTP). Features: interactive tool testing, JSON-RPC message inspection, capability verification, session-management testing. UI shows: available tools list, parameter input forms, response visualization, raw JSON-RPC messages. Use for: testing tools before Claude Desktop integration, debugging tool responses, verifying the initialization flow, checking capability advertisement. Essential for development - test locally before production deployment; a minimal server to point the Inspector at is sketched below.
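For something concrete to exercise in the Inspector, here is a minimal stdio server sketch. It assumes the official TypeScript SDK's McpServer/StdioServerTransport API and zod for parameter schemas; the echo tool is purely illustrative.

```typescript
// echo-server.ts - minimal stdio MCP server to exercise with the Inspector.
// Run: npx @modelcontextprotocol/inspector npx tsx echo-server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "echo-server", version: "1.0.0" });

// A trivial tool so the Inspector's tool list and parameter form have something to show.
server.tool("echo", { message: z.string() }, async ({ message }) => ({
  content: [{ type: "text", text: `You said: ${message}` }],
}));

// stdio transport: JSON-RPC flows over stdin/stdout, so keep stdout free of debug logs.
await server.connect(new StdioServerTransport());
```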
MCP Testing & Debugging FAQ & Answers
8 expert MCP Testing & Debugging answers researched from official documentation. Every answer cites authoritative sources you can verify.
Use MCP Inspector for interactive testing, or write behavioral tests with the MCP SDK. Inspector method: npx @modelcontextprotocol/inspector node server.js (see the usage notes above). SDK method: connect a test client to the server and assert on the protocol behavior it actually exhibits, as sketched below.
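A behavioral-test sketch using the TypeScript SDK's client over stdio; the search tool and its q argument are placeholders for whatever your server exposes, and node:assert stands in for your test framework of choice.

```typescript
// server.behavior.test.ts - spawn the server and assert on real protocol behavior.
import assert from "node:assert/strict";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "behavior-test", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["server.js"] })
);

// The tool we expect should be advertised after initialization.
const { tools } = await client.listTools();
assert.ok(tools.some((t) => t.name === "search"), "search tool not advertised");

// Calling it should return a result, not an error flag.
const result = await client.callTool({ name: "search", arguments: { q: "term" } });
assert.ok(!result.isError, "tool call reported an error");

await client.close();
```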
Test the 3-step handshake and capability negotiation. Test cases: (1) Version compatibility: send initialize with protocolVersion 2025-03-26 and verify the server accepts it or negotiates a supported version. (2) Capability advertisement: verify the server returns a complete capabilities object in the initialize response. (3) initialized notification: verify the server accepts the initialized confirmation. (4) Session management: for HTTP, verify Mcp-Session-Id header handling. (5) Error scenarios: send an unsupported version and verify the error response. (6) Timeout handling: delay the initialized notification and verify server behavior. Pattern: an automated test suite covering the happy path plus the error paths (see the sketch below). Critical: test version-mismatch handling and session security.
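A handshake-test sketch against a Streamable HTTP endpoint using raw JSON-RPC. The URL is an assumption, and the sketch assumes the server answers the initialize POST with application/json rather than an SSE stream; per the spec, the Accept header must allow both, and HTTP servers return the session id in the Mcp-Session-Id header.

```typescript
// initialize-handshake.test.ts - exercise the 3-step handshake with raw JSON-RPC.
import assert from "node:assert/strict";

const MCP_URL = "http://localhost:3000/mcp"; // assumed endpoint
const headers = {
  "Content-Type": "application/json",
  Accept: "application/json, text/event-stream",
};

// Step 1: initialize request with the protocol version under test.
const initRes = await fetch(MCP_URL, {
  method: "POST",
  headers,
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26",
      capabilities: {},
      clientInfo: { name: "handshake-test", version: "1.0.0" },
    },
  }),
});
assert.equal(initRes.status, 200);

// Step 2: the response must negotiate a version and advertise capabilities;
// over HTTP it should also hand out a session id.
const sessionId = initRes.headers.get("Mcp-Session-Id");
const { result } = await initRes.json();
assert.ok(result.protocolVersion, "no protocolVersion negotiated");
assert.ok(result.capabilities, "no capabilities advertised");

// Step 3: initialized notification (no id), echoing the session id back.
const notifyRes = await fetch(MCP_URL, {
  method: "POST",
  headers: { ...headers, ...(sessionId ? { "Mcp-Session-Id": sessionId } : {}) },
  body: JSON.stringify({ jsonrpc: "2.0", method: "notifications/initialized" }),
});
assert.ok(notifyRes.status === 202 || notifyRes.ok, "initialized notification rejected");
```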
Enable verbose logging and use MCP Inspector's raw message view. Logging pattern (write to stderr so stdio transports aren't corrupted, since stdout carries the JSON-RPC stream): console.error(JSON.stringify({direction: 'sent', message: rpcMessage})); console.error(JSON.stringify({direction: 'received', message: response}));. Check: (1) Requests have jsonrpc: '2.0', id, method, and (optionally) params. (2) Responses have jsonrpc, id, and result OR error (never both). (3) Notifications have method but no id. Inspector: the Messages tab shows the full JSON-RPC exchange. Debug checklist: verify id matching (response id === request id), check the error field structure, validate params against the schema. Common issues: missing id in a request, both result and error in a response, wrong jsonrpc version.
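A small checker reflecting that checklist, written as a sketch around plain JSON-RPC 2.0 message shapes:

```typescript
// jsonrpc-check.ts - sanity-check a response against the request that produced it.
type JsonRpcId = string | number | null;

interface JsonRpcRequest { jsonrpc: "2.0"; id: JsonRpcId; method: string; params?: unknown; }
interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: JsonRpcId;
  result?: unknown;
  error?: { code: number; message: string; data?: unknown };
}

function checkResponse(req: JsonRpcRequest, res: JsonRpcResponse): string[] {
  const problems: string[] = [];
  if (res.jsonrpc !== "2.0") problems.push(`wrong jsonrpc version: ${res.jsonrpc}`);
  if (res.id !== req.id) problems.push(`id mismatch: sent ${req.id}, got ${res.id}`);
  const hasResult = "result" in res;
  const hasError = "error" in res;
  if (hasResult === hasError) problems.push("response must contain exactly one of result or error");
  if (hasError && typeof res.error?.code !== "number") problems.push("error.code must be a number");
  return problems;
}

// Example: log the exchange plus any violations to stderr, keeping stdout clean.
const req: JsonRpcRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };
const res: JsonRpcResponse = { jsonrpc: "2.0", id: 1, result: { tools: [] } };
console.error(JSON.stringify({ direction: "received", message: res, problems: checkResponse(req, res) }));
```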
Test 5 error categories: (1) Invalid parameters: missing required params, wrong types, out-of-range values. (2) External service failures: database timeouts, API errors, network issues. (3) Permission errors: access denied, invalid credentials. (4) Resource not found: entity doesn't exist, file missing. (5) Timeout errors: long-running operations. Pattern: return a JSON-RPC error with code and message: {jsonrpc: '2.0', id, error: {code: -32602, message: 'Invalid params', data: {field: 'slug', reason: 'required'}}}. Error codes: -32700 Parse error, -32600 Invalid request, -32601 Method not found, -32602 Invalid params, -32603 Internal error. Test that errors don't crash the server (see the sketch below).
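A sketch of an error-code table and response builder, plus an assertion that a missing required field maps to -32602 rather than an unhandled exception (the slug field mirrors the example above):

```typescript
// errors.ts - standard JSON-RPC error codes and a response builder.
import assert from "node:assert/strict";

export const JsonRpcErrorCode = {
  ParseError: -32700,
  InvalidRequest: -32600,
  MethodNotFound: -32601,
  InvalidParams: -32602,
  InternalError: -32603,
} as const;

export function errorResponse(
  id: string | number | null,
  code: number,
  message: string,
  data?: unknown
) {
  return {
    jsonrpc: "2.0" as const,
    id,
    error: { code, message, ...(data !== undefined ? { data } : {}) },
  };
}

// Example test: a missing required param should come back as Invalid params,
// never as an unhandled exception that kills the process.
const res = errorResponse(7, JsonRpcErrorCode.InvalidParams, "Invalid params", {
  field: "slug",
  reason: "required",
});
assert.equal(res.error.code, -32602);
assert.ok(!("result" in res), "error responses must not carry a result");
```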
Test the 5-step PKCE flow against a mock OAuth server (PKCE is mandatory in OAuth 2.1). Recommended mock server: navikt/mock-oauth2-server (OAuth 2.1 compliant, maintained in 2025). Setup: docker run -p 8080:8080 ghcr.io/navikt/mock-oauth2-server:latest OR npm install oauth2-mock-server. Test PKCE: (1) generate a code_verifier (43-128 chars), (2) create code_challenge = base64url(sha256(code_verifier)), (3) request /authorize?code_challenge=...&code_challenge_method=S256, (4) exchange the code with the code_verifier at /token, (5) verify the server validates the hash match (a sketch of steps 1-3 follows). Test cases: valid flow, invalid code_verifier (expect 400), expired code, token refresh, consent revocation. Manual testing: use Zuplo's bash script with curl/openssl. Production: Microsoft Entra ID supports MCP OAuth 2.1 with PKCE. Verify tokens are stored securely and never logged.
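A sketch of steps 1-3 in Node: generate the verifier and challenge per RFC 7636 and build the authorize URL. The mock-server endpoints, client id, and redirect URI are assumptions to adapt to your setup.

```typescript
// pkce.ts - generate verifier/challenge and build the authorize URL for the mock server.
import { createHash, randomBytes } from "node:crypto";

// 32 random bytes -> 43 base64url characters, the minimum length RFC 7636 allows.
const codeVerifier = randomBytes(32).toString("base64url");

// code_challenge = base64url(sha256(code_verifier)) with method S256.
const codeChallenge = createHash("sha256").update(codeVerifier).digest().toString("base64url");

// Assumed endpoints for navikt/mock-oauth2-server's default issuer.
const authorizeUrl = new URL("http://localhost:8080/default/authorize");
authorizeUrl.searchParams.set("response_type", "code");
authorizeUrl.searchParams.set("client_id", "mcp-test-client");        // assumed client id
authorizeUrl.searchParams.set("redirect_uri", "http://localhost:3000/callback"); // assumed
authorizeUrl.searchParams.set("code_challenge", codeChallenge);
authorizeUrl.searchParams.set("code_challenge_method", "S256");

console.error("authorize at:", authorizeUrl.toString());
// Later, exchange the returned code at /default/token with grant_type=authorization_code
// and the original code_verifier; the server should reject a mismatched verifier.
```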
Log 5 key events: (1) Initialization: Log client name, version, capabilities requested. (2) Tool invocations: Log tool name, parameters (sanitized), execution time. (3) Errors: Log full error with stack trace, context. (4) Session events: Log session creation, expiration, reuse. (5) External calls: Log API requests, database queries, timeouts. Pattern: Structured logging with Winston/Pino: logger.info({event: 'tool_call', tool: 'search', params: {q: 'term'}, duration_ms: 145}). Avoid logging: Sensitive parameters (API keys, passwords), full response bodies (too verbose), PII without consent. Production: Use log levels (ERROR, WARN, INFO, DEBUG), rotate logs daily, aggregate with ELK/Datadog.
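A structured-logging sketch with Pino covering a few of those events; the redact paths, event names, and field values are illustrative, with Pino's redact option masking secrets before they reach the log stream.

```typescript
// logging.ts - structured logger with sensitive fields redacted.
import pino from "pino";

const logger = pino({
  level: process.env.LOG_LEVEL ?? "info",
  // Mask secrets before they are serialized; these paths are examples.
  redact: { paths: ["params.apiKey", "params.password", "*.authorization"], censor: "[redacted]" },
});

// (1) Initialization: client name, version, requested capabilities.
logger.info({ event: "initialize", client: "example-client", version: "1.0.0" }, "client initialized");

// (2) Tool invocation: name, sanitized params, execution time.
const start = Date.now();
// ... run the tool here ...
logger.info({ event: "tool_call", tool: "search", params: { q: "term" }, duration_ms: Date.now() - start });

// (3) Errors: full error object with context.
try {
  throw new Error("upstream API timed out");
} catch (err) {
  logger.error({ event: "tool_error", tool: "search", err }, "tool call failed");
}
```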
Use load-testing tools for concurrent-session testing. 2025 options: Artillery (spike tests under 30 min, soak tests of 6-12 hours, distributed testing built in), k6 (JavaScript-based), and the mcp-performance-test TypeScript library (MCP-specific). Artillery config: target: http://localhost:3000/mcp, phases: [{duration: 60, arrivalRate: 10}], scenarios: [initialize, call_tool, cleanup]. Test transport performance: Streamable HTTP shows a 10x difference between shared and unique sessions (Aug 2025 research). Test cases: (1) session isolation (Client A's session ≠ Client B's), (2) scale to 10,000+ concurrent agents (achievable with optimization), (3) race conditions (concurrent calls don't corrupt state), (4) connection pooling. Targets: <500ms p95 latency, <1% error rate. Use the mcpjam Inspector for real LLM interaction testing. Monitor CPU, memory, and error rates. Run load generators horizontally scaled (not from dev machines); a session-isolation sketch follows.
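Alongside the load tools above, session isolation can be checked directly: have N clients initialize concurrently against a Streamable HTTP endpoint and require every Mcp-Session-Id to be distinct. The URL is an assumption, and the sketch assumes JSON (not SSE) responses to the initialize POST.

```typescript
// session-isolation.test.ts - concurrent initialize calls must not share sessions.
import assert from "node:assert/strict";

const MCP_URL = "http://localhost:3000/mcp"; // assumed endpoint
const CLIENTS = 50;

async function initialize(clientIndex: number): Promise<string | null> {
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "initialize",
      params: {
        protocolVersion: "2025-03-26",
        capabilities: {},
        clientInfo: { name: `load-client-${clientIndex}`, version: "1.0.0" },
      },
    }),
  });
  assert.equal(res.status, 200, `client ${clientIndex} failed to initialize`);
  return res.headers.get("Mcp-Session-Id");
}

// Fire all initializations concurrently, then require distinct session ids.
const sessionIds = await Promise.all(
  Array.from({ length: CLIENTS }, (_, i) => initialize(i))
);
const unique = new Set(sessionIds.filter(Boolean));
assert.equal(unique.size, CLIENTS, "session ids were reused across clients");
console.error(`all ${CLIENTS} clients received distinct sessions`);
```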