## Overview
The Playground (Chat Tab) is MCPJam Inspector’s interactive testing environment for MCP servers with LLM integration. It enables real-time conversation with AI models while automatically invoking MCP tools, handling elicitations, and streaming responses.

Key Features:

- Multi-provider LLM support (OpenAI, Anthropic, DeepSeek, Google, Ollama)
- Free chat via MCPJam-provided models (powered by the MCPJam backend)
- Real-time MCP tool execution with OpenAI Apps SDK compatibility
- Server-Sent Events (SSE) for streaming responses
- Interactive elicitation support (MCP servers requesting user input)
- Multi-server MCP integration with automatic tool routing
 
Key files:

- Frontend: `client/src/components/ChatTab.tsx`
- Backend: `server/routes/mcp/chat.ts`
- Hook: `client/src/hooks/use-chat.ts`
## Architecture Overview
### System Components

The Playground spans four layers: the client chat UI, the server chat route, shared orchestration modules, and the MCPClientManager SDK (see the Key Files Reference below).
### Chat Flow: Local vs Backend
The Playground supports two execution paths based on the selected model:

#### 1. Local Execution (User API Keys)

Used when the user selects models requiring their own API keys (OpenAI, Anthropic, DeepSeek, Google, or Ollama).

Flow:

- `server/routes/mcp/chat.ts:207-403` - `createStreamingResponse()`
- `server/utils/chat-helpers.ts` - `createLlmModel()`
- `client/src/hooks/use-chat.ts:181-327` - SSE event handling
#### 2. Backend Execution (Free Models via the MCPJam Backend)

Used when the user selects MCPJam-provided models (identified by `isMCPJamProvidedModel()`).

Flow:

- `server/routes/mcp/chat.ts:405-589` - `sendMessagesToBackend()`
- `shared/backend-conversation.ts` - `runBackendConversation()`
- `shared/http-tool-calls.ts` - `executeToolCallsFromMessages()`
Free models (`shared/types.ts:109-118`):

- `meta-llama/llama-3.3-70b-instruct`
- `openai/gpt-oss-120b`
- `x-ai/grok-4-fast`
- `openai/gpt-5-nano`

The routing decision between the two paths is sketched below.
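A minimal sketch of the branch, with simplified, hypothetical signatures (the real logic lives in `server/routes/mcp/chat.ts`):

```typescript
// Sketch of the execution-path branch; signatures are simplified assumptions.
type ChatRequest = { modelId: string; messages: unknown[] };

declare function isMCPJamProvidedModel(modelId: string): boolean;              // shared/types.ts
declare function sendMessagesToBackend(req: ChatRequest): Promise<Response>;   // chat.ts:405-589
declare function createStreamingResponse(req: ChatRequest): Promise<Response>; // chat.ts:207-403

async function handleChat(req: ChatRequest): Promise<Response> {
  // Free MCPJam-provided models are proxied to the MCPJam backend;
  // everything else runs the local AI SDK loop with the user's API keys.
  return isMCPJamProvidedModel(req.modelId)
    ? sendMessagesToBackend(req)
    : createStreamingResponse(req);
}
```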
## Free Chat: MCPJam Backend Integration

MCPJam Inspector offers free chat powered by the MCPJam backend at `CONVEX_HTTP_URL`.
### How It Works
When the user selects one of the free MCPJam-provided models, the chat request is routed to the MCPJam backend instead of running locally. Requirements:

- `CONVEX_HTTP_URL` - MCPJam backend endpoint (required for free chat)
- Authenticated users get access via WorkOS tokens
 
## MCP Integration via MCPClientManager
The Playground uses `MCPClientManager` to orchestrate MCP server connections and tool execution.

### Tool Retrieval
`getToolsForAiSdk()` does the following (see `docs/contributing/mcp-client-manager.mdx`; usage is sketched after this list):

- Fetches tools from the specified servers (or all servers if undefined)
- Converts MCP tool schemas to AI SDK format
- Attaches `_serverId` metadata to each tool
- Wires up `tool.execute()` to call `mcpClientManager.executeTool()`
- Caches tool `_meta` fields for the OpenAI Apps SDK
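Usage might look like the following sketch; the argument shape (an optional list of server IDs) is an assumption based on the behavior above, not the documented signature:

```typescript
// Hypothetical usage; see docs/contributing/mcp-client-manager.mdx for the real API.
declare const mcpClientManager: {
  getToolsForAiSdk(serverIds?: string[]): Promise<Record<string, unknown>>;
};

// Fetch AI SDK-ready tools from specific servers...
const tools = await mcpClientManager.getToolsForAiSdk(["filesystem", "github"]);

// ...or from all connected servers by omitting the argument.
const allTools = await mcpClientManager.getToolsForAiSdk();
```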
### Tool Execution Flow
Local Execution (AI SDK): tool calls emitted by the model are executed through the `tool.execute()` wiring described above, with results fed back into the agent loop.

### Server Selection
Users can select which MCP servers to use in the chat; only tools from the selected servers are exposed to the model.

## Streaming Implementation (SSE)
The Playground uses Server-Sent Events (SSE) for real-time streaming of LLM responses and tool execution.

### Event Types
Defined in `shared/sse.ts`:
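The sketch below illustrates what such an event union could look like; the event names and payload shapes are assumptions, not the actual definitions in `shared/sse.ts`:

```typescript
// Illustrative only: event names and payload shapes are assumptions.
type ChatSSEEvent =
  | { type: "text"; delta: string }                        // streamed LLM tokens
  | { type: "tool_call"; toolName: string; args: unknown } // model requests a tool
  | { type: "tool_result"; toolName: string; result: unknown; _meta?: Record<string, unknown> }
  | { type: "elicitation_request"; requestId: string; schema: unknown } // server asks for input
  | { type: "error"; message: string }
  | { type: "done" };
```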
### Server-Side Streaming

The server emits SSE events from `createStreamingResponse()` (`server/routes/mcp/chat.ts:207-403`) as the agent loop progresses.
### Client-Side Parsing

The client parses the SSE stream in `client/src/hooks/use-chat.ts:181-327`:
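A minimal parsing sketch, reusing the hypothetical `ChatSSEEvent` union from above; the endpoint path is also an assumption:

```typescript
// Minimal SSE parsing over fetch; the real implementation is in
// client/src/hooks/use-chat.ts:181-327.
async function streamChat(
  body: unknown,
  onEvent: (event: ChatSSEEvent) => void,
): Promise<void> {
  const res = await fetch("/api/mcp/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE frames are separated by a blank line; data lines start with "data: ".
    const frames = buffer.split("\n\n");
    buffer = frames.pop() ?? ""; // keep any trailing partial frame
    for (const frame of frames) {
      for (const line of frame.split("\n")) {
        if (line.startsWith("data: ")) {
          onEvent(JSON.parse(line.slice(6)) as ChatSSEEvent);
        }
      }
    }
  }
}
```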
## Elicitation Support
Elicitation allows MCP servers to request interactive input from the user during tool execution.

### Flow

During tool execution, the MCP server sends an elicitation request; the Playground pauses the agent loop, displays a dialog, and forwards the user's response back to the server (subject to `ELICITATION_TIMEOUT`, see below).
### User Response

The user's response is sent back to the waiting request from `client/src/hooks/use-chat.ts:563-619`; the dialog UI lives in `client/src/components/ElicitationDialog.tsx`.
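A hedged sketch of sending the user's answer back; the endpoint and payload shape are assumptions (the `accept`/`decline`/`cancel` actions come from the MCP elicitation specification):

```typescript
// Hypothetical response handler; the real one is in use-chat.ts:563-619.
async function respondToElicitation(
  requestId: string,
  action: "accept" | "decline" | "cancel",
  content?: Record<string, unknown>, // form values matching the requested schema
): Promise<void> {
  await fetch("/api/mcp/elicitation/respond", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ requestId, action, content }),
  });
}
```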
## OpenAI Apps SDK Integration
MCPJam Inspector supports the OpenAI Apps SDK via `_meta` field preservation in tool results.
### Why `_meta`?
The OpenAI Apps SDK uses `_meta` to pass rendering hints to OpenAI’s UI (e.g., chart data, markdown formatting, images). Tools can return results whose `_meta` carries these hints.
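For example, a tool result might look like the following sketch (the `_meta` key shown follows the Apps SDK's `openai/...` naming convention but is illustrative, not taken from this repo):

```typescript
// Illustrative tool result carrying an Apps SDK rendering hint.
const toolResult = {
  content: [{ type: "text", text: "Quarterly revenue: $1.2M" }],
  _meta: {
    "openai/outputTemplate": "ui://widget/chart.html", // rendering hint for the UI
  },
};
```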
### Implementation

`_meta` is cached and preserved at each hop:

1. Tool metadata caching in `MCPClientManager`
2. Tool execution (`shared/http-tool-calls.ts:154-168`)
3. Backend conversation orchestration (`shared/backend-conversation.ts:125-142`)
4. The chat route (`server/routes/mcp/chat.ts:512-528`)
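A simplified sketch of the preservation step; the names and shapes are stand-ins for the real code in the files listed above:

```typescript
// Simplified stand-in for the _meta preservation logic.
type McpToolResult = { content: unknown[]; _meta?: Record<string, unknown> };

function toToolResultEvent(toolName: string, result: McpToolResult) {
  return {
    type: "tool_result" as const,
    toolName,
    result: result.content,
    // Copy _meta through verbatim so Apps SDK rendering hints survive.
    ...(result._meta ? { _meta: result._meta } : {}),
  };
}
```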
### Accessing Tool Metadata

Tool `_meta` fields cached by `getToolsForAiSdk()` remain available when rendering tool results.
## Technical Details
### Agent Loop
Both local and backend execution use an agent loop pattern, bounded by two constants (`server/routes/mcp/chat.ts:57-58`):

- `MAX_AGENT_STEPS = 10` - maximum loop iterations
- `ELICITATION_TIMEOUT = 300000` - elicitation timeout (5 minutes)

The shape of the loop is sketched below.
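A minimal sketch using the documented constant; `callLlm` and `executeToolCalls` are hypothetical stand-ins for the real helpers:

```typescript
// Minimal agent-loop sketch; helpers are hypothetical stand-ins.
const MAX_AGENT_STEPS = 10;

type ToolCall = { toolName: string; args: unknown };

declare function callLlm(
  messages: unknown[],
): Promise<{ text: string; toolCalls: ToolCall[] }>;
declare function executeToolCalls(calls: ToolCall[]): Promise<unknown[]>;

async function runAgentLoop(messages: unknown[]): Promise<string> {
  for (let step = 0; step < MAX_AGENT_STEPS; step++) {
    const { text, toolCalls } = await callLlm(messages);
    if (toolCalls.length === 0) return text; // model finished without tools
    // Execute the requested tools and feed results into the next iteration.
    const results = await executeToolCalls(toolCalls);
    messages = [...messages, { toolCalls }, { results }];
  }
  throw new Error(`Agent exceeded ${MAX_AGENT_STEPS} steps`);
}
```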
### Message Format
Client Messages (`shared/types.ts:19-28`):
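The authoritative definition lives in `shared/types.ts:19-28`; a plausible sketch of the shape, offered only as an assumption:

```typescript
// Plausible sketch only; see shared/types.ts:19-28 for the real type.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}
```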
See also the server-side message handling in `server/routes/mcp/chat.ts:223-234`.
### Content Blocks
Content blocks are used for rich UI rendering:
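An illustrative union; the variant names are assumptions, not the actual types rendered by `client/src/components/chat/message.tsx`:

```typescript
// Illustrative content-block union; variant names are assumptions.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_call"; toolName: string; args: unknown }
  | { type: "tool_result"; toolName: string; result: unknown };
```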
### Model Selection

Available Models (`shared/types.ts:167-260`):

- Anthropic: Claude Opus 4, Sonnet 4, Sonnet 3.7/3.5, Haiku 3.5
- OpenAI: GPT-4.1, GPT-4.1 Mini/Nano, GPT-4o, GPT-4o Mini
- DeepSeek: Chat, Reasoner
- Google: Gemini 2.5 Pro/Flash, 2.0 Flash Exp, 1.5 Pro/Flash variants
- Meta: Llama 3.3 70B (Free)
- X.AI: Grok 4 Fast (Free)
- OpenAI: GPT-OSS 120B, GPT-5 Nano (Free)
- Ollama: User-defined local models
 
Model selection state is handled in `client/src/hooks/use-chat.ts:159-179`.
Temperature & System Prompt
## Key Files Reference
### Frontend

- `client/src/components/ChatTab.tsx` - Main chat UI
- `client/src/hooks/use-chat.ts` - Chat state management
- `client/src/components/chat/message.tsx` - Message rendering
- `client/src/components/chat/chat-input.tsx` - Input component
- `client/src/components/ElicitationDialog.tsx` - Elicitation UI
- `client/src/lib/sse.ts` - SSE parsing utilities
### Backend

- `server/routes/mcp/chat.ts` - Chat endpoint (593 lines)
- `server/utils/chat-helpers.ts` - LLM model creation
### Shared

- `shared/types.ts` - Type definitions
- `shared/sse.ts` - SSE event types
- `shared/backend-conversation.ts` - Backend conversation orchestration
- `shared/http-tool-calls.ts` - Tool execution logic
### SDK

- `sdk/mcp-client-manager/index.ts` - MCP orchestration
- `sdk/mcp-client-manager/tool-converters.ts` - AI SDK conversion
- See `docs/contributing/mcp-client-manager.mdx` for full documentation
## Development Patterns
### Adding New Model Providers
1. Add the provider to `shared/types.ts`
2. Add its models to the `SUPPORTED_MODELS` array
3. Implement model creation in `server/utils/chat-helpers.ts` (see the sketch below)
4. Add API key handling in `client/src/hooks/use-ai-provider-keys.ts`
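A hedged sketch of step 3, using the published `@ai-sdk/mistral` package as an example new provider; the `createLlmModel` switch shape is an assumption:

```typescript
// Hedged sketch of adding a provider to createLlmModel.
import { createMistral } from "@ai-sdk/mistral";

export function createLlmModel(provider: string, modelId: string, apiKey: string) {
  switch (provider) {
    // ...existing providers (openai, anthropic, deepseek, google, ollama)...
    case "mistral": {
      const mistral = createMistral({ apiKey });
      return mistral(modelId); // returns an AI SDK language model
    }
    default:
      throw new Error(`Unsupported provider: ${provider}`);
  }
}
```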
### Adding New SSE Event Types
1. Define the event in `shared/sse.ts`
2. Emit it from the backend
3. Handle it in the client

All three steps are sketched below.
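An end-to-end sketch; these helpers are illustrative, not the actual code in `shared/sse.ts` or `client/src/hooks/use-chat.ts`:

```typescript
// 1. Define the event (sketch of a shared/sse.ts addition):
type ProgressEvent = { type: "progress"; percent: number };

// 2. Emit it from the backend as one SSE frame:
function writeSseEvent(write: (chunk: string) => void, event: ProgressEvent): void {
  write(`data: ${JSON.stringify(event)}\n\n`);
}

// 3. Handle it in the client's SSE parser:
function handleEvent(event: { type: string }): void {
  if (event.type === "progress") {
    const { percent } = event as ProgressEvent;
    console.log(`Progress: ${percent}%`);
  }
}
```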
## Debugging Tips
Enable RPC logging to inspect the raw JSON-RPC traffic between the inspector and your MCP servers (see the Debugging guide below).

## Related Documentation
- MCPClientManager - MCP orchestration layer
- Elicitation Support - Interactive prompts
- Debugging - JSON-RPC logging
- LLM Playground - User guide

