From 90580581bab90c0b39eeb357b8264aae87023bef Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Mon, 24 Mar 2025 21:31:36 +0000 Subject: [PATCH 01/16] feat(wip): initial commits for sub-tasks 1,2,3 for task 23 --- mcp-server/README.md | 170 +++ mcp-server/server.js | 44 + mcp-server/src/api-handlers.js | 970 +++++++++++++++++ mcp-server/src/auth.js | 285 +++++ mcp-server/src/context-manager.js | 873 ++++++++++++++++ mcp-server/src/index.js | 366 +++++++ package-lock.json | 1610 ++++++++++++++++++++++++++++- package.json | 23 +- tasks/task_023.txt | 115 +++ tasks/tasks.json | 64 +- 10 files changed, 4476 insertions(+), 44 deletions(-) create mode 100644 mcp-server/README.md create mode 100755 mcp-server/server.js create mode 100644 mcp-server/src/api-handlers.js create mode 100644 mcp-server/src/auth.js create mode 100644 mcp-server/src/context-manager.js create mode 100644 mcp-server/src/index.js diff --git a/mcp-server/README.md b/mcp-server/README.md new file mode 100644 index 00000000..9c8b1300 --- /dev/null +++ b/mcp-server/README.md @@ -0,0 +1,170 @@ +# Task Master MCP Server + +This module implements a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server for Task Master, allowing external applications to access Task Master functionality and context through a standardized API. + +## Features + +- MCP-compliant server implementation using FastMCP +- RESTful API for context management +- Authentication and authorization for secure access +- Context storage and retrieval with metadata and tagging +- Context windowing and truncation for handling size limits +- Integration with Task Master for task management operations + +## Installation + +The MCP server is included with Task Master. 
Install Task Master globally to use the MCP server: + +```bash +npm install -g task-master-ai +``` + +Or use it locally: + +```bash +npm install task-master-ai +``` + +## Environment Configuration + +The MCP server can be configured using environment variables or a `.env` file: + +| Variable | Description | Default | +| -------------------- | ---------------------------------------- | ----------------------------- | +| `MCP_SERVER_PORT` | Port for the MCP server | 3000 | +| `MCP_SERVER_HOST` | Host for the MCP server | localhost | +| `MCP_CONTEXT_DIR` | Directory for context storage | ./mcp-server/contexts | +| `MCP_API_KEYS_FILE` | File for API key storage | ./mcp-server/api-keys.json | +| `MCP_JWT_SECRET` | Secret for JWT token generation | task-master-mcp-server-secret | +| `MCP_JWT_EXPIRATION` | JWT token expiration time | 24h | +| `LOG_LEVEL` | Logging level (debug, info, warn, error) | info | + +## Getting Started + +### Starting the Server + +Start the MCP server as a standalone process: + +```bash +npx task-master-mcp-server +``` + +Or start it programmatically: + +```javascript +import { TaskMasterMCPServer } from "task-master-ai/mcp-server"; + +const server = new TaskMasterMCPServer(); +await server.start({ port: 3000, host: "localhost" }); +``` + +### Authentication + +The MCP server uses API key authentication with JWT tokens for secure access. A default admin API key is generated on first startup and can be found in the `api-keys.json` file. 
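The token flow can also be driven programmatically. A minimal sketch, assuming Node 18+ for the global `fetch` and assuming the token endpoint responds with a JSON body of the form `{ "token": "..." }` (the exact response shape is not shown in this patch):

```javascript
// Sketch: exchange an API key for a JWT at /auth/token, then build the
// Authorization header used by subsequent requests. Endpoint paths mirror
// the curl examples in this section; the response shape is an assumption.
async function getToken(baseUrl, apiKey) {
  const res = await fetch(`${baseUrl}/auth/token`, {
    method: "POST",
    headers: { "x-api-key": apiKey },
  });
  if (!res.ok) throw new Error(`Auth failed: HTTP ${res.status}`);
  const { token } = await res.json();
  return token;
}

// Pure helper: format a bearer token as a request header object.
function authHeader(token) {
  return { Authorization: `Bearer ${token}` };
}
```

A caller would then spread `authHeader(token)` into the headers of each request, matching the `Authorization: Bearer ...` curl example below.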
+ +To get a JWT token: + +```bash +curl -X POST http://localhost:3000/auth/token \ + -H "x-api-key: YOUR_API_KEY" +``` + +Use the token for subsequent requests: + +```bash +curl http://localhost:3000/mcp/tools \ + -H "Authorization: Bearer YOUR_JWT_TOKEN" +``` + +### Creating a New API Key + +Admin users can create new API keys: + +```bash +curl -X POST http://localhost:3000/auth/api-keys \ + -H "Authorization: Bearer ADMIN_JWT_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"clientId": "user1", "role": "user"}' +``` + +## Available MCP Endpoints + +The MCP server implements the following MCP-compliant endpoints: + +### Context Management + +- `GET /mcp/context` - List all contexts +- `POST /mcp/context` - Create a new context +- `GET /mcp/context/{id}` - Get a specific context +- `PUT /mcp/context/{id}` - Update a context +- `DELETE /mcp/context/{id}` - Delete a context + +### Models + +- `GET /mcp/models` - List available models +- `GET /mcp/models/{id}` - Get model details + +### Execution + +- `POST /mcp/execute` - Execute an operation with context + +## Available MCP Tools + +The MCP server provides the following tools: + +### Context Tools + +- `createContext` - Create a new context +- `getContext` - Retrieve a context by ID +- `updateContext` - Update an existing context +- `deleteContext` - Delete a context +- `listContexts` - List available contexts +- `addTags` - Add tags to a context +- `truncateContext` - Truncate a context to a maximum size + +### Task Master Tools + +- `listTasks` - List tasks from Task Master +- `getTaskDetails` - Get detailed task information +- `executeWithContext` - Execute operations using context + +## Examples + +### Creating a Context + +```javascript +// Using the MCP client +const client = new MCPClient("http://localhost:3000"); +await client.authenticate("YOUR_API_KEY"); + +const context = await client.createContext("my-context", { + title: "My Project", + tasks: ["Implement feature X", "Fix bug Y"], +}); +``` + 
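### Truncation Strategies

The `truncateContext` tool listed above takes a `maxSize` and one of three strategies. Its behavior can be illustrated with a plain-string sketch (illustrative only; the server's actual implementation lives in `context-manager.js` and may measure size differently):

```javascript
// Illustrative character-count truncation mirroring the three strategies
// accepted by the truncateContext tool: "end" keeps the head, "start"
// keeps the tail, "middle" keeps both ends and drops the middle.
function truncate(text, maxSize, strategy = "end") {
  if (text.length <= maxSize) return text;
  switch (strategy) {
    case "start": // drop the beginning, keep the tail
      return text.slice(text.length - maxSize);
    case "middle": { // keep head and tail halves
      const head = Math.ceil(maxSize / 2);
      const tail = maxSize - head;
      return text.slice(0, head) + text.slice(text.length - tail);
    }
    case "end": // drop the ending, keep the head
    default:
      return text.slice(0, maxSize);
  }
}
```

For example, truncating a ten-character string to four characters yields the first four characters under `end`, the last four under `start`, and the first two plus last two under `middle`.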
+### Executing an Operation with Context + +```javascript +// Using the MCP client +const result = await client.execute("generateTask", "my-context", { + title: "New Task", + description: "Create a new task based on context", +}); +``` + +## Integration with Other Tools + +The Task Master MCP server can be integrated with other MCP-compatible tools and clients: + +- LLM applications that support the MCP protocol +- Task management systems that support context-aware operations +- Development environments with MCP integration + +## Contributing + +Contributions are welcome! Please feel free to submit a Pull Request. + +## License + +This project is licensed under the MIT License - see the LICENSE file for details. diff --git a/mcp-server/server.js b/mcp-server/server.js new file mode 100755 index 00000000..ed5c3c69 --- /dev/null +++ b/mcp-server/server.js @@ -0,0 +1,44 @@ +#!/usr/bin/env node + +import TaskMasterMCPServer from "./src/index.js"; +import dotenv from "dotenv"; +import { logger } from "../scripts/modules/utils.js"; + +// Load environment variables +dotenv.config(); + +// Constants +const PORT = process.env.MCP_SERVER_PORT || 3000; +const HOST = process.env.MCP_SERVER_HOST || "localhost"; + +/** + * Start the MCP server + */ +async function startServer() { + const server = new TaskMasterMCPServer(); + + // Handle graceful shutdown + process.on("SIGINT", async () => { + logger.info("Received SIGINT, shutting down gracefully..."); + await server.stop(); + process.exit(0); + }); + + process.on("SIGTERM", async () => { + logger.info("Received SIGTERM, shutting down gracefully..."); + await server.stop(); + process.exit(0); + }); + + try { + await server.start({ port: PORT, host: HOST }); + logger.info(`MCP server running at http://${HOST}:${PORT}`); + logger.info("Press Ctrl+C to stop"); + } catch (error) { + logger.error(`Failed to start MCP server: ${error.message}`); + process.exit(1); + } +} + +// Start the server +startServer(); diff --git 
a/mcp-server/src/api-handlers.js b/mcp-server/src/api-handlers.js new file mode 100644 index 00000000..ead546f2 --- /dev/null +++ b/mcp-server/src/api-handlers.js @@ -0,0 +1,970 @@ +import { z } from "zod"; +import { logger } from "../../scripts/modules/utils.js"; +import ContextManager from "./context-manager.js"; + +/** + * MCP API Handlers class + * Implements handlers for the MCP API endpoints + */ +class MCPApiHandlers { + constructor(server) { + this.server = server; + this.contextManager = new ContextManager(); + this.logger = logger; + + // Bind methods + this.registerEndpoints = this.registerEndpoints.bind(this); + this.setupContextHandlers = this.setupContextHandlers.bind(this); + this.setupModelHandlers = this.setupModelHandlers.bind(this); + this.setupExecuteHandlers = this.setupExecuteHandlers.bind(this); + + // Register all handlers + this.registerEndpoints(); + } + + /** + * Register all MCP API endpoints + */ + registerEndpoints() { + this.setupContextHandlers(); + this.setupModelHandlers(); + this.setupExecuteHandlers(); + + this.logger.info("Registered all MCP API endpoint handlers"); + } + + /** + * Set up handlers for the /context endpoint + */ + setupContextHandlers() { + // Add a tool to create context + this.server.addTool({ + name: "createContext", + description: + "Create a new context with the given data and optional metadata", + parameters: z.object({ + contextId: z.string().describe("Unique identifier for the context"), + data: z.any().describe("The context data to store"), + metadata: z + .object({}) + .optional() + .describe("Optional metadata for the context"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.createContext( + args.contextId, + args.data, + args.metadata || {} + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error creating context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to get context 
+ this.server.addTool({ + name: "getContext", + description: + "Retrieve a context by its ID, optionally a specific version", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to retrieve"), + versionId: z + .string() + .optional() + .describe("Optional specific version ID to retrieve"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.getContext( + args.contextId, + args.versionId + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error retrieving context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to update context + this.server.addTool({ + name: "updateContext", + description: "Update an existing context with new data and/or metadata", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to update"), + data: z + .any() + .optional() + .describe("New data to update the context with"), + metadata: z + .object({}) + .optional() + .describe("New metadata to update the context with"), + createNewVersion: z + .boolean() + .optional() + .default(true) + .describe( + "Whether to create a new version (true) or update in place (false)" + ), + }), + execute: async (args) => { + try { + const context = await this.contextManager.updateContext( + args.contextId, + args.data || {}, + args.metadata || {}, + args.createNewVersion + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error updating context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to delete context + this.server.addTool({ + name: "deleteContext", + description: "Delete a context by its ID", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to delete"), + }), + execute: async (args) => { + try { + const result = await this.contextManager.deleteContext( + args.contextId + ); + return { success: result }; + } 
catch (error) { + this.logger.error(`Error deleting context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to list contexts with pagination and advanced filtering + this.server.addTool({ + name: "listContexts", + description: + "List available contexts with filtering, pagination and sorting", + parameters: z.object({ + // Filtering parameters + filters: z + .object({ + tag: z.string().optional().describe("Filter contexts by tag"), + metadataKey: z + .string() + .optional() + .describe("Filter contexts by metadata key"), + metadataValue: z + .string() + .optional() + .describe("Filter contexts by metadata value"), + createdAfter: z + .string() + .optional() + .describe("Filter contexts created after date (ISO format)"), + updatedAfter: z + .string() + .optional() + .describe("Filter contexts updated after date (ISO format)"), + }) + .optional() + .describe("Filters to apply to the context list"), + + // Pagination parameters + limit: z + .number() + .optional() + .default(100) + .describe("Maximum number of contexts to return"), + offset: z + .number() + .optional() + .default(0) + .describe("Number of contexts to skip"), + + // Sorting parameters + sortBy: z + .string() + .optional() + .default("updated") + .describe("Field to sort by (id, created, updated, size)"), + sortDirection: z + .enum(["asc", "desc"]) + .optional() + .default("desc") + .describe("Sort direction"), + + // Search query + query: z.string().optional().describe("Free text search query"), + }), + execute: async (args) => { + try { + const result = await this.contextManager.listContexts(args); + return { + success: true, + ...result, + }; + } catch (error) { + this.logger.error(`Error listing contexts: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to get context history + this.server.addTool({ + name: "getContextHistory", + description: "Get the version history of a context", + 
parameters: z.object({ + contextId: z + .string() + .describe("The ID of the context to get history for"), + }), + execute: async (args) => { + try { + const history = await this.contextManager.getContextHistory( + args.contextId + ); + return { + success: true, + history, + contextId: args.contextId, + }; + } catch (error) { + this.logger.error(`Error getting context history: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to merge contexts + this.server.addTool({ + name: "mergeContexts", + description: "Merge multiple contexts into a new context", + parameters: z.object({ + contextIds: z + .array(z.string()) + .describe("Array of context IDs to merge"), + newContextId: z.string().describe("ID for the new merged context"), + metadata: z + .object({}) + .optional() + .describe("Optional metadata for the new context"), + }), + execute: async (args) => { + try { + const mergedContext = await this.contextManager.mergeContexts( + args.contextIds, + args.newContextId, + args.metadata || {} + ); + return { + success: true, + context: mergedContext, + }; + } catch (error) { + this.logger.error(`Error merging contexts: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to add tags to a context + this.server.addTool({ + name: "addTags", + description: "Add tags to a context", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to tag"), + tags: z + .array(z.string()) + .describe("Array of tags to add to the context"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.addTags( + args.contextId, + args.tags + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error adding tags to context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to remove tags from a context + this.server.addTool({ + name: "removeTags", + 
description: "Remove tags from a context", + parameters: z.object({ + contextId: z + .string() + .describe("The ID of the context to remove tags from"), + tags: z + .array(z.string()) + .describe("Array of tags to remove from the context"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.removeTags( + args.contextId, + args.tags + ); + return { success: true, context }; + } catch (error) { + this.logger.error( + `Error removing tags from context: ${error.message}` + ); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to truncate context + this.server.addTool({ + name: "truncateContext", + description: "Truncate a context to a maximum size", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to truncate"), + maxSize: z + .number() + .describe("Maximum size (in characters) for the context"), + strategy: z + .enum(["start", "end", "middle"]) + .default("end") + .describe("Truncation strategy: start, end, or middle"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.truncateContext( + args.contextId, + args.maxSize, + args.strategy + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error truncating context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + this.logger.info("Registered context endpoint handlers"); + } + + /** + * Set up handlers for the /models endpoint + */ + setupModelHandlers() { + // Add a tool to list available models + this.server.addTool({ + name: "listModels", + description: "List all available models with their capabilities", + parameters: z.object({}), + execute: async () => { + // Here we could get models from a more dynamic source + // For now, returning static list of models supported by Task Master + const models = [ + { + id: "claude-3-opus-20240229", + provider: "anthropic", + capabilities: [ + "text-generation", + 
"embeddings", + "context-window-100k", + ], + }, + { + id: "claude-3-7-sonnet-20250219", + provider: "anthropic", + capabilities: [ + "text-generation", + "embeddings", + "context-window-200k", + ], + }, + { + id: "sonar-medium-online", + provider: "perplexity", + capabilities: ["text-generation", "web-search", "research"], + }, + ]; + + return { success: true, models }; + }, + }); + + // Add a tool to get model details + this.server.addTool({ + name: "getModelDetails", + description: "Get detailed information about a specific model", + parameters: z.object({ + modelId: z.string().describe("The ID of the model to get details for"), + }), + execute: async (args) => { + // Here we could get model details from a more dynamic source + // For now, returning static information + const modelsMap = { + "claude-3-opus-20240229": { + id: "claude-3-opus-20240229", + provider: "anthropic", + capabilities: [ + "text-generation", + "embeddings", + "context-window-100k", + ], + maxTokens: 100000, + temperature: { min: 0, max: 1, default: 0.7 }, + pricing: { input: 0.000015, output: 0.000075 }, + }, + "claude-3-7-sonnet-20250219": { + id: "claude-3-7-sonnet-20250219", + provider: "anthropic", + capabilities: [ + "text-generation", + "embeddings", + "context-window-200k", + ], + maxTokens: 200000, + temperature: { min: 0, max: 1, default: 0.7 }, + pricing: { input: 0.000003, output: 0.000015 }, + }, + "sonar-medium-online": { + id: "sonar-medium-online", + provider: "perplexity", + capabilities: ["text-generation", "web-search", "research"], + maxTokens: 4096, + temperature: { min: 0, max: 1, default: 0.7 }, + }, + }; + + const model = modelsMap[args.modelId]; + if (!model) { + return { + success: false, + error: `Model with ID ${args.modelId} not found`, + }; + } + + return { success: true, model }; + }, + }); + + this.logger.info("Registered models endpoint handlers"); + } + + /** + * Set up handlers for the /execute endpoint + */ + setupExecuteHandlers() { + // Add a tool to 
execute operations with context + this.server.addTool({ + name: "executeWithContext", + description: "Execute an operation with the provided context", + parameters: z.object({ + operation: z.string().describe("The operation to execute"), + contextId: z.string().describe("The ID of the context to use"), + parameters: z + .record(z.any()) + .optional() + .describe("Additional parameters for the operation"), + versionId: z + .string() + .optional() + .describe("Optional specific context version to use"), + }), + execute: async (args) => { + try { + // Get the context first, with version if specified + const context = await this.contextManager.getContext( + args.contextId, + args.versionId + ); + + // Execute different operations based on the operation name + switch (args.operation) { + case "generateTask": + return await this.executeGenerateTask(context, args.parameters); + case "expandTask": + return await this.executeExpandTask(context, args.parameters); + case "analyzeComplexity": + return await this.executeAnalyzeComplexity( + context, + args.parameters + ); + case "mergeContexts": + return await this.executeMergeContexts(context, args.parameters); + case "searchContexts": + return await this.executeSearchContexts(args.parameters); + case "extractInsights": + return await this.executeExtractInsights( + context, + args.parameters + ); + case "syncWithRepository": + return await this.executeSyncWithRepository( + context, + args.parameters + ); + default: + return { + success: false, + error: `Unknown operation: ${args.operation}`, + }; + } + } catch (error) { + this.logger.error(`Error executing operation: ${error.message}`); + return { + success: false, + error: error.message, + operation: args.operation, + contextId: args.contextId, + }; + } + }, + }); + + // Add tool for batch operations + this.server.addTool({ + name: "executeBatchOperations", + description: "Execute multiple operations in a single request", + parameters: z.object({ + operations: z + .array( + 
z.object({ + operation: z.string().describe("The operation to execute"), + contextId: z.string().describe("The ID of the context to use"), + parameters: z + .record(z.any()) + .optional() + .describe("Additional parameters"), + versionId: z + .string() + .optional() + .describe("Optional context version"), + }) + ) + .describe("Array of operations to execute in sequence"), + }), + execute: async (args) => { + const results = []; + let hasErrors = false; + + for (const op of args.operations) { + try { + const context = await this.contextManager.getContext( + op.contextId, + op.versionId + ); + + let result; + switch (op.operation) { + case "generateTask": + result = await this.executeGenerateTask(context, op.parameters); + break; + case "expandTask": + result = await this.executeExpandTask(context, op.parameters); + break; + case "analyzeComplexity": + result = await this.executeAnalyzeComplexity( + context, + op.parameters + ); + break; + case "mergeContexts": + result = await this.executeMergeContexts( + context, + op.parameters + ); + break; + case "searchContexts": + result = await this.executeSearchContexts(op.parameters); + break; + case "extractInsights": + result = await this.executeExtractInsights( + context, + op.parameters + ); + break; + case "syncWithRepository": + result = await this.executeSyncWithRepository( + context, + op.parameters + ); + break; + default: + result = { + success: false, + error: `Unknown operation: ${op.operation}`, + }; + hasErrors = true; + } + + results.push({ + operation: op.operation, + contextId: op.contextId, + result: result, + }); + + if (!result.success) { + hasErrors = true; + } + } catch (error) { + this.logger.error( + `Error in batch operation ${op.operation}: ${error.message}` + ); + results.push({ + operation: op.operation, + contextId: op.contextId, + result: { + success: false, + error: error.message, + }, + }); + hasErrors = true; + } + } + + return { + success: !hasErrors, + results: results, + }; + }, + }); + 
+ this.logger.info("Registered execute endpoint handlers"); + } + + /** + * Execute the generateTask operation + * @param {object} context - The context to use + * @param {object} parameters - Additional parameters + * @returns {Promise} The result of the operation + */ + async executeGenerateTask(context, parameters = {}) { + // This is a placeholder for actual task generation logic + // In a real implementation, this would use Task Master's task generation + + this.logger.info(`Generating task with context ${context.id}`); + + // Improved task generation with more detailed result + const task = { + id: Math.floor(Math.random() * 1000), + title: parameters.title || "New Task", + description: parameters.description || "Task generated from context", + status: "pending", + dependencies: parameters.dependencies || [], + priority: parameters.priority || "medium", + details: `This task was generated using context ${ + context.id + }.\n\n${JSON.stringify(context.data, null, 2)}`, + metadata: { + generatedAt: new Date().toISOString(), + generatedFrom: context.id, + contextVersion: context.metadata.version, + generatedBy: parameters.user || "system", + }, + }; + + return { + success: true, + task, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } + + /** + * Execute the expandTask operation + * @param {object} context - The context to use + * @param {object} parameters - Additional parameters + * @returns {Promise} The result of the operation + */ + async executeExpandTask(context, parameters = {}) { + // This is a placeholder for actual task expansion logic + // In a real implementation, this would use Task Master's task expansion + + this.logger.info(`Expanding task with context ${context.id}`); + + // Enhanced task expansion with more configurable options + const numSubtasks = parameters.numSubtasks || 3; + const subtaskPrefix = parameters.subtaskPrefix || ""; + const subtasks = []; + + for (let i = 1; i <= numSubtasks; i++) { + 
subtasks.push({ + id: `${subtaskPrefix}${i}`, + title: parameters.titleTemplate + ? parameters.titleTemplate.replace("{i}", i) + : `Subtask ${i}`, + description: parameters.descriptionTemplate + ? parameters.descriptionTemplate + .replace("{i}", i) + .replace("{taskId}", parameters.taskId || "unknown") + : `Subtask ${i} for ${parameters.taskId || "unknown task"}`, + dependencies: i > 1 ? [i - 1] : [], + status: "pending", + metadata: { + expandedAt: new Date().toISOString(), + expandedFrom: context.id, + contextVersion: context.metadata.version, + expandedBy: parameters.user || "system", + }, + }); + } + + return { + success: true, + taskId: parameters.taskId, + subtasks, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } + + /** + * Execute the analyzeComplexity operation + * @param {object} context - The context to use + * @param {object} parameters - Additional parameters + * @returns {Promise} The result of the operation + */ + async executeAnalyzeComplexity(context, parameters = {}) { + // This is a placeholder for actual complexity analysis logic + // In a real implementation, this would use Task Master's complexity analysis + + this.logger.info(`Analyzing complexity with context ${context.id}`); + + // Enhanced complexity analysis with more detailed factors + const complexityScore = Math.floor(Math.random() * 10) + 1; + const recommendedSubtasks = Math.floor(complexityScore / 2) + 1; + + // More detailed analysis with weighted factors + const factors = [ + { + name: "Task scope breadth", + score: Math.floor(Math.random() * 10) + 1, + weight: 0.3, + description: "How broad is the scope of this task", + }, + { + name: "Technical complexity", + score: Math.floor(Math.random() * 10) + 1, + weight: 0.4, + description: "How technically complex is the implementation", + }, + { + name: "External dependencies", + score: Math.floor(Math.random() * 10) + 1, + weight: 0.2, + description: "How many external dependencies does this task 
have", + }, + { + name: "Risk assessment", + score: Math.floor(Math.random() * 10) + 1, + weight: 0.1, + description: "What is the risk level of this task", + }, + ]; + + return { + success: true, + analysis: { + taskId: parameters.taskId || "unknown", + complexityScore, + recommendedSubtasks, + factors, + recommendedTimeEstimate: `${complexityScore * 2}-${ + complexityScore * 4 + } hours`, + metadata: { + analyzedAt: new Date().toISOString(), + analyzedUsing: context.id, + contextVersion: context.metadata.version, + analyzedBy: parameters.user || "system", + }, + }, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } + + /** + * Execute the mergeContexts operation + * @param {object} primaryContext - The primary context to use + * @param {object} parameters - Additional parameters + * @returns {Promise} The result of the operation + */ + async executeMergeContexts(primaryContext, parameters = {}) { + this.logger.info( + `Merging contexts with primary context ${primaryContext.id}` + ); + + if ( + !parameters.contextIds || + !Array.isArray(parameters.contextIds) || + parameters.contextIds.length === 0 + ) { + return { + success: false, + error: "No context IDs provided for merging", + }; + } + + if (!parameters.newContextId) { + return { + success: false, + error: "New context ID is required for the merged context", + }; + } + + try { + // Add the primary context to the list if not already included + if (!parameters.contextIds.includes(primaryContext.id)) { + parameters.contextIds.unshift(primaryContext.id); + } + + const mergedContext = await this.contextManager.mergeContexts( + parameters.contextIds, + parameters.newContextId, + { + mergedAt: new Date().toISOString(), + mergedBy: parameters.user || "system", + mergeStrategy: parameters.strategy || "concatenate", + ...parameters.metadata, + } + ); + + return { + success: true, + mergedContext, + sourceContexts: parameters.contextIds, + }; + } catch (error) { + 
this.logger.error(`Error merging contexts: ${error.message}`); + return { + success: false, + error: error.message, + }; + } + } + + /** + * Execute the searchContexts operation + * @param {object} parameters - Search parameters + * @returns {Promise} The result of the operation + */ + async executeSearchContexts(parameters = {}) { + this.logger.info( + `Searching contexts with query: ${parameters.query || ""}` + ); + + try { + const searchResults = await this.contextManager.listContexts({ + query: parameters.query || "", + filters: parameters.filters || {}, + limit: parameters.limit || 100, + offset: parameters.offset || 0, + sortBy: parameters.sortBy || "updated", + sortDirection: parameters.sortDirection || "desc", + }); + + return { + success: true, + ...searchResults, + }; + } catch (error) { + this.logger.error(`Error searching contexts: ${error.message}`); + return { + success: false, + error: error.message, + }; + } + } + + /** + * Execute the extractInsights operation + * @param {object} context - The context to analyze + * @param {object} parameters - Additional parameters + * @returns {Promise} The result of the operation + */ + async executeExtractInsights(context, parameters = {}) { + this.logger.info(`Extracting insights from context ${context.id}`); + + // Placeholder for actual insight extraction + // In a real implementation, this would perform analysis on the context data + + const insights = [ + { + type: "summary", + content: `Summary of context ${context.id}`, + confidence: 0.85, + }, + { + type: "key_points", + content: ["First key point", "Second key point", "Third key point"], + confidence: 0.78, + }, + { + type: "recommendations", + content: ["First recommendation", "Second recommendation"], + confidence: 0.72, + }, + ]; + + return { + success: true, + insights, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + metadata: { + extractedAt: new Date().toISOString(), + model: parameters.model || "default", + 
extractedBy: parameters.user || "system", + }, + }; + } + + /** + * Execute the syncWithRepository operation + * @param {object} context - The context to sync + * @param {object} parameters - Additional parameters + * @returns {Promise} The result of the operation + */ + async executeSyncWithRepository(context, parameters = {}) { + this.logger.info(`Syncing context ${context.id} with repository`); + + // Placeholder for actual repository sync + // In a real implementation, this would sync the context with an external repository + + return { + success: true, + syncStatus: "complete", + syncedTo: parameters.repository || "default", + syncTimestamp: new Date().toISOString(), + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } +} + +export default MCPApiHandlers; diff --git a/mcp-server/src/auth.js b/mcp-server/src/auth.js new file mode 100644 index 00000000..22c36973 --- /dev/null +++ b/mcp-server/src/auth.js @@ -0,0 +1,285 @@ +import jwt from "jsonwebtoken"; +import { logger } from "../../scripts/modules/utils.js"; +import crypto from "crypto"; +import fs from "fs/promises"; +import path from "path"; +import { fileURLToPath } from "url"; + +// Constants +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); +const API_KEYS_FILE = + process.env.MCP_API_KEYS_FILE || path.join(__dirname, "../api-keys.json"); +const JWT_SECRET = + process.env.MCP_JWT_SECRET || "task-master-mcp-server-secret"; +const JWT_EXPIRATION = process.env.MCP_JWT_EXPIRATION || "24h"; + +/** + * Authentication middleware and utilities for MCP server + */ +class MCPAuth { + constructor() { + this.apiKeys = new Map(); + this.logger = logger; + this.loadApiKeys(); + } + + /** + * Load API keys from disk + */ + async loadApiKeys() { + try { + // Create API keys file if it doesn't exist + try { + await fs.access(API_KEYS_FILE); + } catch (error) { + // File doesn't exist, create it with a default admin key + const defaultApiKey = 
this.generateApiKey(); + const defaultApiKeys = { + keys: [ + { + id: "admin", + key: defaultApiKey, + role: "admin", + created: new Date().toISOString(), + }, + ], + }; + + await fs.mkdir(path.dirname(API_KEYS_FILE), { recursive: true }); + await fs.writeFile( + API_KEYS_FILE, + JSON.stringify(defaultApiKeys, null, 2), + "utf8" + ); + + this.logger.info( + `Created default API keys file with admin key: ${defaultApiKey}` + ); + } + + // Load API keys + const data = await fs.readFile(API_KEYS_FILE, "utf8"); + const apiKeys = JSON.parse(data); + + apiKeys.keys.forEach((key) => { + this.apiKeys.set(key.key, { + id: key.id, + role: key.role, + created: key.created, + }); + }); + + this.logger.info(`Loaded ${this.apiKeys.size} API keys`); + } catch (error) { + this.logger.error(`Failed to load API keys: ${error.message}`); + throw error; + } + } + + /** + * Save API keys to disk + */ + async saveApiKeys() { + try { + const keys = []; + + this.apiKeys.forEach((value, key) => { + keys.push({ + id: value.id, + key, + role: value.role, + created: value.created, + }); + }); + + await fs.writeFile( + API_KEYS_FILE, + JSON.stringify({ keys }, null, 2), + "utf8" + ); + + this.logger.info(`Saved ${keys.length} API keys`); + } catch (error) { + this.logger.error(`Failed to save API keys: ${error.message}`); + throw error; + } + } + + /** + * Generate a new API key + * @returns {string} The generated API key + */ + generateApiKey() { + return crypto.randomBytes(32).toString("hex"); + } + + /** + * Create a new API key + * @param {string} id - Client identifier + * @param {string} role - Client role (admin, user) + * @returns {string} The generated API key + */ + async createApiKey(id, role = "user") { + const apiKey = this.generateApiKey(); + + this.apiKeys.set(apiKey, { + id, + role, + created: new Date().toISOString(), + }); + + await this.saveApiKeys(); + + this.logger.info(`Created new API key for ${id} with role ${role}`); + return apiKey; + } + + /** + * Revoke an API key + 
* @param {string} apiKey - The API key to revoke + * @returns {boolean} True if the key was revoked + */ + async revokeApiKey(apiKey) { + if (!this.apiKeys.has(apiKey)) { + return false; + } + + this.apiKeys.delete(apiKey); + await this.saveApiKeys(); + + this.logger.info(`Revoked API key`); + return true; + } + + /** + * Validate an API key + * @param {string} apiKey - The API key to validate + * @returns {object|null} The API key details if valid, null otherwise + */ + validateApiKey(apiKey) { + return this.apiKeys.get(apiKey) || null; + } + + /** + * Generate a JWT token for a client + * @param {string} clientId - Client identifier + * @param {string} role - Client role + * @returns {string} The JWT token + */ + generateToken(clientId, role) { + return jwt.sign({ clientId, role }, JWT_SECRET, { + expiresIn: JWT_EXPIRATION, + }); + } + + /** + * Verify a JWT token + * @param {string} token - The JWT token to verify + * @returns {object|null} The token payload if valid, null otherwise + */ + verifyToken(token) { + try { + return jwt.verify(token, JWT_SECRET); + } catch (error) { + this.logger.error(`Failed to verify token: ${error.message}`); + return null; + } + } + + /** + * Express middleware for API key authentication + * @param {object} req - Express request object + * @param {object} res - Express response object + * @param {function} next - Express next function + */ + authenticateApiKey(req, res, next) { + const apiKey = req.headers["x-api-key"]; + + if (!apiKey) { + return res.status(401).json({ + success: false, + error: "API key is required", + }); + } + + const keyDetails = this.validateApiKey(apiKey); + + if (!keyDetails) { + return res.status(401).json({ + success: false, + error: "Invalid API key", + }); + } + + // Attach client info to request + req.client = { + id: keyDetails.id, + role: keyDetails.role, + }; + + next(); + } + + /** + * Express middleware for JWT authentication + * @param {object} req - Express request object + * @param {object} 
res - Express response object + * @param {function} next - Express next function + */ + authenticateToken(req, res, next) { + const authHeader = req.headers["authorization"]; + const token = authHeader && authHeader.split(" ")[1]; + + if (!token) { + return res.status(401).json({ + success: false, + error: "Authentication token is required", + }); + } + + const payload = this.verifyToken(token); + + if (!payload) { + return res.status(401).json({ + success: false, + error: "Invalid or expired token", + }); + } + + // Attach client info to request + req.client = { + id: payload.clientId, + role: payload.role, + }; + + next(); + } + + /** + * Express middleware for role-based authorization + * @param {Array} roles - Array of allowed roles + * @returns {function} Express middleware + */ + authorizeRoles(roles) { + return (req, res, next) => { + if (!req.client || !req.client.role) { + return res.status(401).json({ + success: false, + error: "Unauthorized: Authentication required", + }); + } + + if (!roles.includes(req.client.role)) { + return res.status(403).json({ + success: false, + error: "Forbidden: Insufficient permissions", + }); + } + + next(); + }; + } +} + +export default MCPAuth; diff --git a/mcp-server/src/context-manager.js b/mcp-server/src/context-manager.js new file mode 100644 index 00000000..5b94b538 --- /dev/null +++ b/mcp-server/src/context-manager.js @@ -0,0 +1,873 @@ +import { logger } from "../../scripts/modules/utils.js"; +import fs from "fs/promises"; +import path from "path"; +import { fileURLToPath } from "url"; +import crypto from "crypto"; +import Fuse from "fuse.js"; + +// Constants +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); +const CONTEXT_DIR = + process.env.MCP_CONTEXT_DIR || path.join(__dirname, "../contexts"); +const MAX_CONTEXT_HISTORY = parseInt( + process.env.MCP_MAX_CONTEXT_HISTORY || "10", + 10 +); + +/** + * Context Manager for MCP server + * Handles storage, retrieval, and 
manipulation of context data + * Implements efficient indexing, versioning, and advanced context operations + */ +class ContextManager { + constructor() { + this.contexts = new Map(); + this.contextHistory = new Map(); // For version history + this.contextIndex = null; // For fuzzy search + this.logger = logger; + this.ensureContextDir(); + this.rebuildSearchIndex(); + } + + /** + * Ensure the contexts directory exists + */ + async ensureContextDir() { + try { + await fs.mkdir(CONTEXT_DIR, { recursive: true }); + this.logger.info(`Context directory ensured at ${CONTEXT_DIR}`); + + // Also create a versions subdirectory for history + await fs.mkdir(path.join(CONTEXT_DIR, "versions"), { recursive: true }); + } catch (error) { + this.logger.error(`Failed to create context directory: ${error.message}`); + throw error; + } + } + + /** + * Rebuild the search index for efficient context lookup + */ + async rebuildSearchIndex() { + await this.loadAllContextsFromDisk(); + + const contextsForIndex = Array.from(this.contexts.values()).map((ctx) => ({ + id: ctx.id, + content: + typeof ctx.data === "string" ? 
ctx.data : JSON.stringify(ctx.data), + tags: ctx.tags.join(" "), + metadata: Object.entries(ctx.metadata) + .map(([k, v]) => `${k}:${v}`) + .join(" "), + })); + + this.contextIndex = new Fuse(contextsForIndex, { + keys: ["id", "content", "tags", "metadata"], + includeScore: true, + threshold: 0.6, + }); + + this.logger.info( + `Rebuilt search index with ${contextsForIndex.length} contexts` + ); + } + + /** + * Create a new context + * @param {string} contextId - Unique identifier for the context + * @param {object|string} contextData - Initial context data + * @param {object} metadata - Optional metadata for the context + * @returns {object} The created context + */ + async createContext(contextId, contextData, metadata = {}) { + if (this.contexts.has(contextId)) { + throw new Error(`Context with ID ${contextId} already exists`); + } + + const timestamp = new Date().toISOString(); + const versionId = this.generateVersionId(); + + const context = { + id: contextId, + data: contextData, + metadata: { + created: timestamp, + updated: timestamp, + version: versionId, + ...metadata, + }, + tags: metadata.tags || [], + size: this.estimateSize(contextData), + }; + + this.contexts.set(contextId, context); + + // Initialize version history + this.contextHistory.set(contextId, [ + { + versionId, + timestamp, + data: JSON.parse(JSON.stringify(contextData)), // Deep clone + metadata: { ...context.metadata }, + }, + ]); + + await this.persistContext(contextId); + await this.persistContextVersion(contextId, versionId); + + // Update the search index + this.rebuildSearchIndex(); + + this.logger.info(`Created context: ${contextId} (version: ${versionId})`); + return context; + } + + /** + * Retrieve a context by ID + * @param {string} contextId - The context ID to retrieve + * @param {string} versionId - Optional specific version to retrieve + * @returns {object} The context object + */ + async getContext(contextId, versionId = null) { + // If specific version requested, try to 
get it from history + if (versionId) { + return this.getContextVersion(contextId, versionId); + } + + // Try to get from memory first + if (this.contexts.has(contextId)) { + return this.contexts.get(contextId); + } + + // Try to load from disk + try { + const context = await this.loadContextFromDisk(contextId); + if (context) { + this.contexts.set(contextId, context); + return context; + } + } catch (error) { + this.logger.error( + `Failed to load context ${contextId}: ${error.message}` + ); + } + + throw new Error(`Context with ID ${contextId} not found`); + } + + /** + * Get a specific version of a context + * @param {string} contextId - The context ID + * @param {string} versionId - The version ID + * @returns {object} The versioned context + */ + async getContextVersion(contextId, versionId) { + // Check if version history is in memory + if (this.contextHistory.has(contextId)) { + const history = this.contextHistory.get(contextId); + const version = history.find((v) => v.versionId === versionId); + if (version) { + return { + id: contextId, + data: version.data, + metadata: version.metadata, + tags: version.metadata.tags || [], + size: this.estimateSize(version.data), + versionId: version.versionId, + }; + } + } + + // Try to load from disk + try { + const versionPath = path.join( + CONTEXT_DIR, + "versions", + `${contextId}_${versionId}.json` + ); + const data = await fs.readFile(versionPath, "utf8"); + const version = JSON.parse(data); + + // Add to memory cache + if (!this.contextHistory.has(contextId)) { + this.contextHistory.set(contextId, []); + } + const history = this.contextHistory.get(contextId); + history.push(version); + + return { + id: contextId, + data: version.data, + metadata: version.metadata, + tags: version.metadata.tags || [], + size: this.estimateSize(version.data), + versionId: version.versionId, + }; + } catch (error) { + this.logger.error( + `Failed to load context version ${contextId}@${versionId}: ${error.message}` + ); + throw new 
Error( + `Context version ${versionId} for ${contextId} not found` + ); + } + } + + /** + * Update an existing context + * @param {string} contextId - The context ID to update + * @param {object|string} contextData - New context data + * @param {object} metadata - Optional metadata updates + * @param {boolean} createNewVersion - Whether to create a new version + * @returns {object} The updated context + */ + async updateContext( + contextId, + contextData, + metadata = {}, + createNewVersion = true + ) { + const context = await this.getContext(contextId); + const timestamp = new Date().toISOString(); + + // Generate a new version ID if requested + const versionId = createNewVersion + ? this.generateVersionId() + : context.metadata.version; + + // Create a backup of the current state for versioning + if (createNewVersion) { + // Store the current version in history + if (!this.contextHistory.has(contextId)) { + this.contextHistory.set(contextId, []); + } + + const history = this.contextHistory.get(contextId); + + // Add current state to history + history.push({ + versionId: context.metadata.version, + timestamp: context.metadata.updated, + data: JSON.parse(JSON.stringify(context.data)), // Deep clone + metadata: { ...context.metadata }, + }); + + // Trim history if it exceeds the maximum size + if (history.length > MAX_CONTEXT_HISTORY) { + const excessVersions = history.splice( + 0, + history.length - MAX_CONTEXT_HISTORY + ); + // Clean up excess versions from disk + for (const version of excessVersions) { + this.removeContextVersionFile(contextId, version.versionId).catch( + (err) => + this.logger.error( + `Failed to remove old version file: ${err.message}` + ) + ); + } + } + + // Persist version + await this.persistContextVersion(contextId, context.metadata.version); + } + + // Update the context + context.data = contextData; + context.metadata = { + ...context.metadata, + ...metadata, + updated: timestamp, + }; + + if (createNewVersion) { + 
context.metadata.previousVersion = context.metadata.version;
+ context.metadata.version = versionId;
+ }
+
+ if (metadata.tags) {
+ context.tags = metadata.tags;
+ }
+
+ // Update size estimate
+ context.size = this.estimateSize(contextData);
+
+ this.contexts.set(contextId, context);
+ await this.persistContext(contextId);
+
+ // Update the search index
+ this.rebuildSearchIndex();
+
+ this.logger.info(`Updated context: ${contextId} (version: ${versionId})`);
+ return context;
+ }
+
+ /**
+ * Delete a context and all its versions
+ * @param {string} contextId - The context ID to delete
+ * @returns {boolean} True if deletion was successful
+ */
+ async deleteContext(contextId) {
+ if (!this.contexts.has(contextId)) {
+ const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
+ try {
+ await fs.access(contextPath);
+ } catch (error) {
+ throw new Error(`Context with ID ${contextId} not found`);
+ }
+ }
+
+ this.contexts.delete(contextId);
+
+ // Remove from history
+ const history = this.contextHistory.get(contextId) || [];
+ this.contextHistory.delete(contextId);
+
+ try {
+ // Delete main context file
+ const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
+ await fs.unlink(contextPath);
+
+ // Delete all version files
+ for (const version of history) {
+ await this.removeContextVersionFile(contextId, version.versionId);
+ }
+
+ // Update the search index
+ this.rebuildSearchIndex();
+
+ this.logger.info(`Deleted context: ${contextId}`);
+ return true;
+ } catch (error) {
+ this.logger.error(
+ `Failed to delete context files for ${contextId}: ${error.message}`
+ );
+ throw error;
+ }
+ }
+
+ /**
+ * List all available contexts with pagination and advanced filtering
+ * @param {object} options - Options for listing contexts
+ * @param {object} options.filters - Filters to apply
+ * @param {number} options.limit - Maximum number of contexts to return
+ * @param {number} options.offset - Number of contexts to skip
+ * @param {string} 
options.sortBy - Field to sort by + * @param {string} options.sortDirection - Sort direction ('asc' or 'desc') + * @param {string} options.query - Free text search query + * @returns {Array} Array of context objects + */ + async listContexts(options = {}) { + // Load all contexts from disk first + await this.loadAllContextsFromDisk(); + + const { + filters = {}, + limit = 100, + offset = 0, + sortBy = "updated", + sortDirection = "desc", + query = "", + } = options; + + let contexts; + + // If there's a search query, use the search index + if (query && this.contextIndex) { + const searchResults = this.contextIndex.search(query); + contexts = searchResults.map((result) => + this.contexts.get(result.item.id) + ); + } else { + contexts = Array.from(this.contexts.values()); + } + + // Apply filters + if (filters.tag) { + contexts = contexts.filter( + (ctx) => ctx.tags && ctx.tags.includes(filters.tag) + ); + } + + if (filters.metadataKey && filters.metadataValue) { + contexts = contexts.filter( + (ctx) => + ctx.metadata && + ctx.metadata[filters.metadataKey] === filters.metadataValue + ); + } + + if (filters.createdAfter) { + const timestamp = new Date(filters.createdAfter); + contexts = contexts.filter( + (ctx) => new Date(ctx.metadata.created) >= timestamp + ); + } + + if (filters.updatedAfter) { + const timestamp = new Date(filters.updatedAfter); + contexts = contexts.filter( + (ctx) => new Date(ctx.metadata.updated) >= timestamp + ); + } + + // Apply sorting + contexts.sort((a, b) => { + let valueA, valueB; + + if (sortBy === "created" || sortBy === "updated") { + valueA = new Date(a.metadata[sortBy]).getTime(); + valueB = new Date(b.metadata[sortBy]).getTime(); + } else if (sortBy === "size") { + valueA = a.size || 0; + valueB = b.size || 0; + } else if (sortBy === "id") { + valueA = a.id; + valueB = b.id; + } else { + valueA = a.metadata[sortBy]; + valueB = b.metadata[sortBy]; + } + + if (valueA === valueB) return 0; + + const sortFactor = sortDirection === "asc" 
? 1 : -1; + return valueA < valueB ? -1 * sortFactor : 1 * sortFactor; + }); + + // Apply pagination + const paginatedContexts = contexts.slice(offset, offset + limit); + + return { + contexts: paginatedContexts, + total: contexts.length, + offset, + limit, + hasMore: offset + limit < contexts.length, + }; + } + + /** + * Get the version history of a context + * @param {string} contextId - The context ID + * @returns {Array} Array of version objects + */ + async getContextHistory(contextId) { + // Ensure context exists + await this.getContext(contextId); + + // Load history if not in memory + if (!this.contextHistory.has(contextId)) { + await this.loadContextHistoryFromDisk(contextId); + } + + const history = this.contextHistory.get(contextId) || []; + + // Return versions in reverse chronological order (newest first) + return history.sort((a, b) => { + const timeA = new Date(a.timestamp).getTime(); + const timeB = new Date(b.timestamp).getTime(); + return timeB - timeA; + }); + } + + /** + * Add tags to a context + * @param {string} contextId - The context ID + * @param {Array} tags - Array of tags to add + * @returns {object} The updated context + */ + async addTags(contextId, tags) { + const context = await this.getContext(contextId); + + const currentTags = context.tags || []; + const uniqueTags = [...new Set([...currentTags, ...tags])]; + + // Update context with new tags + return this.updateContext( + contextId, + context.data, + { + tags: uniqueTags, + }, + false + ); // Don't create a new version for tag updates + } + + /** + * Remove tags from a context + * @param {string} contextId - The context ID + * @param {Array} tags - Array of tags to remove + * @returns {object} The updated context + */ + async removeTags(contextId, tags) { + const context = await this.getContext(contextId); + + const currentTags = context.tags || []; + const newTags = currentTags.filter((tag) => !tags.includes(tag)); + + // Update context with new tags + return this.updateContext( 
+ contextId, + context.data, + { + tags: newTags, + }, + false + ); // Don't create a new version for tag updates + } + + /** + * Handle context windowing and truncation + * @param {string} contextId - The context ID + * @param {number} maxSize - Maximum size in tokens/chars + * @param {string} strategy - Truncation strategy ('start', 'end', 'middle') + * @returns {object} The truncated context + */ + async truncateContext(contextId, maxSize, strategy = "end") { + const context = await this.getContext(contextId); + const contextText = + typeof context.data === "string" + ? context.data + : JSON.stringify(context.data); + + if (contextText.length <= maxSize) { + return context; // No truncation needed + } + + let truncatedData; + + switch (strategy) { + case "start": + truncatedData = contextText.slice(contextText.length - maxSize); + break; + case "middle": + const halfSize = Math.floor(maxSize / 2); + truncatedData = + contextText.slice(0, halfSize) + + "...[truncated]..." + + contextText.slice(contextText.length - halfSize); + break; + case "end": + default: + truncatedData = contextText.slice(0, maxSize); + break; + } + + // If original data was an object, try to parse the truncated data + // Otherwise use it as a string + let updatedData; + if (typeof context.data === "object") { + try { + // This may fail if truncation broke JSON structure + updatedData = { + ...context.data, + truncated: true, + truncation_strategy: strategy, + original_size: contextText.length, + truncated_size: truncatedData.length, + }; + } catch (error) { + updatedData = truncatedData; + } + } else { + updatedData = truncatedData; + } + + // Update with truncated data + return this.updateContext( + contextId, + updatedData, + { + truncated: true, + truncation_strategy: strategy, + original_size: contextText.length, + truncated_size: truncatedData.length, + }, + true + ); // Create a new version for the truncated data + } + + /** + * Merge multiple contexts into a new context + * @param 
{Array} contextIds - Array of context IDs to merge + * @param {string} newContextId - ID for the new merged context + * @param {object} metadata - Optional metadata for the new context + * @returns {object} The new merged context + */ + async mergeContexts(contextIds, newContextId, metadata = {}) { + if (contextIds.length === 0) { + throw new Error("At least one context ID must be provided for merging"); + } + + if (this.contexts.has(newContextId)) { + throw new Error(`Context with ID ${newContextId} already exists`); + } + + // Load all contexts to be merged + const contextsToMerge = []; + for (const id of contextIds) { + try { + const context = await this.getContext(id); + contextsToMerge.push(context); + } catch (error) { + this.logger.error( + `Could not load context ${id} for merging: ${error.message}` + ); + throw new Error(`Failed to merge contexts: ${error.message}`); + } + } + + // Check data types and decide how to merge + const allStrings = contextsToMerge.every((c) => typeof c.data === "string"); + const allObjects = contextsToMerge.every( + (c) => typeof c.data === "object" && c.data !== null + ); + + let mergedData; + + if (allStrings) { + // Merge strings with newlines between them + mergedData = contextsToMerge.map((c) => c.data).join("\n\n"); + } else if (allObjects) { + // Merge objects by combining their properties + mergedData = {}; + for (const context of contextsToMerge) { + mergedData = { ...mergedData, ...context.data }; + } + } else { + // Convert everything to strings and concatenate + mergedData = contextsToMerge + .map((c) => + typeof c.data === "string" ? 
c.data : JSON.stringify(c.data) + ) + .join("\n\n"); + } + + // Collect all tags from merged contexts + const allTags = new Set(); + for (const context of contextsToMerge) { + for (const tag of context.tags || []) { + allTags.add(tag); + } + } + + // Create merged metadata + const mergedMetadata = { + ...metadata, + tags: [...allTags], + merged_from: contextIds, + merged_at: new Date().toISOString(), + }; + + // Create the new merged context + return this.createContext(newContextId, mergedData, mergedMetadata); + } + + /** + * Persist a context to disk + * @param {string} contextId - The context ID to persist + * @returns {Promise} + */ + async persistContext(contextId) { + const context = this.contexts.get(contextId); + if (!context) { + throw new Error(`Context with ID ${contextId} not found`); + } + + const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); + try { + await fs.writeFile(contextPath, JSON.stringify(context, null, 2), "utf8"); + this.logger.debug(`Persisted context ${contextId} to disk`); + } catch (error) { + this.logger.error( + `Failed to persist context ${contextId}: ${error.message}` + ); + throw error; + } + } + + /** + * Persist a context version to disk + * @param {string} contextId - The context ID + * @param {string} versionId - The version ID + * @returns {Promise} + */ + async persistContextVersion(contextId, versionId) { + if (!this.contextHistory.has(contextId)) { + throw new Error(`Context history for ${contextId} not found`); + } + + const history = this.contextHistory.get(contextId); + const version = history.find((v) => v.versionId === versionId); + + if (!version) { + throw new Error(`Version ${versionId} of context ${contextId} not found`); + } + + const versionPath = path.join( + CONTEXT_DIR, + "versions", + `${contextId}_${versionId}.json` + ); + try { + await fs.writeFile(versionPath, JSON.stringify(version, null, 2), "utf8"); + this.logger.debug( + `Persisted context version ${contextId}@${versionId} to disk` + ); + 
} catch (error) { + this.logger.error( + `Failed to persist context version ${contextId}@${versionId}: ${error.message}` + ); + throw error; + } + } + + /** + * Remove a context version file from disk + * @param {string} contextId - The context ID + * @param {string} versionId - The version ID + * @returns {Promise} + */ + async removeContextVersionFile(contextId, versionId) { + const versionPath = path.join( + CONTEXT_DIR, + "versions", + `${contextId}_${versionId}.json` + ); + try { + await fs.unlink(versionPath); + this.logger.debug( + `Removed context version file ${contextId}@${versionId}` + ); + } catch (error) { + if (error.code !== "ENOENT") { + this.logger.error( + `Failed to remove context version file ${contextId}@${versionId}: ${error.message}` + ); + throw error; + } + } + } + + /** + * Load a context from disk + * @param {string} contextId - The context ID to load + * @returns {Promise} The loaded context + */ + async loadContextFromDisk(contextId) { + const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); + try { + const data = await fs.readFile(contextPath, "utf8"); + const context = JSON.parse(data); + this.logger.debug(`Loaded context ${contextId} from disk`); + return context; + } catch (error) { + this.logger.error( + `Failed to load context ${contextId} from disk: ${error.message}` + ); + throw error; + } + } + + /** + * Load context history from disk + * @param {string} contextId - The context ID + * @returns {Promise} The loaded history + */ + async loadContextHistoryFromDisk(contextId) { + try { + const files = await fs.readdir(path.join(CONTEXT_DIR, "versions")); + const versionFiles = files.filter( + (file) => file.startsWith(`${contextId}_`) && file.endsWith(".json") + ); + + const history = []; + + for (const file of versionFiles) { + try { + const data = await fs.readFile( + path.join(CONTEXT_DIR, "versions", file), + "utf8" + ); + const version = JSON.parse(data); + history.push(version); + } catch (error) { + 
this.logger.error( + `Failed to load context version file ${file}: ${error.message}` + ); + } + } + + this.contextHistory.set(contextId, history); + this.logger.debug( + `Loaded ${history.length} versions for context ${contextId}` + ); + + return history; + } catch (error) { + this.logger.error( + `Failed to load context history for ${contextId}: ${error.message}` + ); + this.contextHistory.set(contextId, []); + return []; + } + } + + /** + * Load all contexts from disk + * @returns {Promise} + */ + async loadAllContextsFromDisk() { + try { + const files = await fs.readdir(CONTEXT_DIR); + const contextFiles = files.filter((file) => file.endsWith(".json")); + + for (const file of contextFiles) { + const contextId = path.basename(file, ".json"); + if (!this.contexts.has(contextId)) { + try { + const context = await this.loadContextFromDisk(contextId); + this.contexts.set(contextId, context); + } catch (error) { + // Already logged in loadContextFromDisk + } + } + } + + this.logger.info(`Loaded ${this.contexts.size} contexts from disk`); + } catch (error) { + this.logger.error(`Failed to load contexts from disk: ${error.message}`); + throw error; + } + } + + /** + * Generate a unique version ID + * @returns {string} A unique version ID + */ + generateVersionId() { + return crypto.randomBytes(8).toString("hex"); + } + + /** + * Estimate the size of context data + * @param {object|string} data - The context data + * @returns {number} Estimated size in bytes + */ + estimateSize(data) { + if (typeof data === "string") { + return Buffer.byteLength(data, "utf8"); + } + + if (typeof data === "object" && data !== null) { + return Buffer.byteLength(JSON.stringify(data), "utf8"); + } + + return 0; + } +} + +export default ContextManager; diff --git a/mcp-server/src/index.js b/mcp-server/src/index.js new file mode 100644 index 00000000..eb820f95 --- /dev/null +++ b/mcp-server/src/index.js @@ -0,0 +1,366 @@ +import { FastMCP } from "fastmcp"; +import { z } from "zod"; +import 
path from "path"; +import fs from "fs/promises"; +import dotenv from "dotenv"; +import { fileURLToPath } from "url"; +import express from "express"; +import cors from "cors"; +import helmet from "helmet"; +import { logger } from "../../scripts/modules/utils.js"; +import MCPAuth from "./auth.js"; +import MCPApiHandlers from "./api-handlers.js"; +import ContextManager from "./context-manager.js"; + +// Load environment variables +dotenv.config(); + +// Constants +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); +const DEFAULT_PORT = process.env.MCP_SERVER_PORT || 3000; +const DEFAULT_HOST = process.env.MCP_SERVER_HOST || "localhost"; + +/** + * Main MCP server class that integrates with Task Master + */ +class TaskMasterMCPServer { + constructor(options = {}) { + this.options = { + name: "Task Master MCP Server", + version: process.env.PROJECT_VERSION || "1.0.0", + ...options, + }; + + this.server = new FastMCP(this.options); + this.expressApp = null; + this.initialized = false; + this.auth = new MCPAuth(); + this.contextManager = new ContextManager(); + + // Bind methods + this.init = this.init.bind(this); + this.start = this.start.bind(this); + this.stop = this.stop.bind(this); + + // Setup logging + this.logger = logger; + } + + /** + * Initialize the MCP server with necessary tools and routes + */ + async init() { + if (this.initialized) return; + + this.logger.info("Initializing Task Master MCP server..."); + + // Set up express for additional customization if needed + this.expressApp = express(); + this.expressApp.use(cors()); + this.expressApp.use(helmet()); + this.expressApp.use(express.json()); + + // Set up authentication middleware + this.setupAuthentication(); + + // Register API handlers + this.apiHandlers = new MCPApiHandlers(this.server); + + // Register additional task master specific tools + this.registerTaskMasterTools(); + + this.initialized = true; + this.logger.info("Task Master MCP server 
initialized successfully"); + + return this; + } + + /** + * Set up authentication for the MCP server + */ + setupAuthentication() { + // Add a health check endpoint that doesn't require authentication + this.expressApp.get("/health", (req, res) => { + res.status(200).json({ + status: "ok", + service: this.options.name, + version: this.options.version, + }); + }); + + // Add an authenticate endpoint to get a JWT token using an API key + this.expressApp.post("/auth/token", async (req, res) => { + const apiKey = req.headers["x-api-key"]; + + if (!apiKey) { + return res.status(401).json({ + success: false, + error: "API key is required", + }); + } + + const keyDetails = this.auth.validateApiKey(apiKey); + + if (!keyDetails) { + return res.status(401).json({ + success: false, + error: "Invalid API key", + }); + } + + const token = this.auth.generateToken(keyDetails.id, keyDetails.role); + + res.status(200).json({ + success: true, + token, + expiresIn: process.env.MCP_JWT_EXPIRATION || "24h", + clientId: keyDetails.id, + role: keyDetails.role, + }); + }); + + // Create authenticator middleware for FastMCP + this.server.setAuthenticator((request) => { + // Get token from Authorization header + const authHeader = request.headers?.authorization; + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return null; + } + + const token = authHeader.split(" ")[1]; + const payload = this.auth.verifyToken(token); + + if (!payload) { + return null; + } + + return { + clientId: payload.clientId, + role: payload.role, + }; + }); + + // Set up a protected route for API key management (admin only) + this.expressApp.post( + "/auth/api-keys", + (req, res, next) => { + this.auth.authenticateToken(req, res, next); + }, + (req, res, next) => { + this.auth.authorizeRoles(["admin"])(req, res, next); + }, + async (req, res) => { + const { clientId, role } = req.body; + + if (!clientId) { + return res.status(400).json({ + success: false, + error: "Client ID is required", + }); + } + + try 
{ + const apiKey = await this.auth.createApiKey(clientId, role || "user"); + + res.status(201).json({ + success: true, + apiKey, + clientId, + role: role || "user", + }); + } catch (error) { + this.logger.error(`Error creating API key: ${error.message}`); + + res.status(500).json({ + success: false, + error: "Failed to create API key", + }); + } + } + ); + + this.logger.info("Set up MCP authentication"); + } + + /** + * Register Task Master specific tools with the MCP server + */ + registerTaskMasterTools() { + // Add a tool to get tasks from Task Master + this.server.addTool({ + name: "listTasks", + description: "List all tasks from Task Master", + parameters: z.object({ + status: z.string().optional().describe("Filter tasks by status"), + withSubtasks: z + .boolean() + .optional() + .describe("Include subtasks in the response"), + }), + execute: async (args) => { + try { + // In a real implementation, this would use the Task Master API + // to fetch tasks. For now, returning mock data. 
+ + this.logger.info( + `Listing tasks with filters: ${JSON.stringify(args)}` + ); + + // Mock task data + const tasks = [ + { + id: 1, + title: "Implement Task Data Structure", + status: "done", + dependencies: [], + priority: "high", + }, + { + id: 2, + title: "Develop Command Line Interface Foundation", + status: "done", + dependencies: [1], + priority: "high", + }, + { + id: 23, + title: "Implement MCP Server Functionality", + status: "in-progress", + dependencies: [22], + priority: "medium", + subtasks: [ + { + id: "23.1", + title: "Create Core MCP Server Module", + status: "in-progress", + dependencies: [], + }, + { + id: "23.2", + title: "Implement Context Management System", + status: "pending", + dependencies: ["23.1"], + }, + ], + }, + ]; + + // Apply status filter if provided + let filteredTasks = tasks; + if (args.status) { + filteredTasks = tasks.filter((task) => task.status === args.status); + } + + // Remove subtasks if not requested + if (!args.withSubtasks) { + filteredTasks = filteredTasks.map((task) => { + const { subtasks, ...taskWithoutSubtasks } = task; + return taskWithoutSubtasks; + }); + } + + return { success: true, tasks: filteredTasks }; + } catch (error) { + this.logger.error(`Error listing tasks: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to get task details + this.server.addTool({ + name: "getTaskDetails", + description: "Get detailed information about a specific task", + parameters: z.object({ + taskId: z + .union([z.number(), z.string()]) + .describe("The ID of the task to get details for"), + }), + execute: async (args) => { + try { + // In a real implementation, this would use the Task Master API + // to fetch task details. For now, returning mock data. 
+ + this.logger.info(`Getting details for task ${args.taskId}`); + + // Mock task details + const taskDetails = { + id: 23, + title: "Implement MCP Server Functionality", + description: + "Extend Task Master to function as an MCP server, allowing it to provide context management services to other applications.", + status: "in-progress", + dependencies: [22], + priority: "medium", + details: + "This task involves implementing the Model Context Protocol server capabilities within Task Master.", + testStrategy: + "Testing should include unit tests, integration tests, and compatibility tests.", + subtasks: [ + { + id: "23.1", + title: "Create Core MCP Server Module", + status: "in-progress", + dependencies: [], + }, + { + id: "23.2", + title: "Implement Context Management System", + status: "pending", + dependencies: ["23.1"], + }, + ], + }; + + return { success: true, task: taskDetails }; + } catch (error) { + this.logger.error(`Error getting task details: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + this.logger.info("Registered Task Master specific tools"); + } + + /** + * Start the MCP server + */ + async start({ port = DEFAULT_PORT, host = DEFAULT_HOST } = {}) { + if (!this.initialized) { + await this.init(); + } + + this.logger.info( + `Starting Task Master MCP server on http://${host}:${port}` + ); + + // Start the FastMCP server + await this.server.start({ + port, + host, + transportType: "sse", + expressApp: this.expressApp, + }); + + this.logger.info( + `Task Master MCP server running at http://${host}:${port}` + ); + + return this; + } + + /** + * Stop the MCP server + */ + async stop() { + if (this.server) { + this.logger.info("Stopping Task Master MCP server..."); + await this.server.stop(); + this.logger.info("Task Master MCP server stopped"); + } + } +} + +export default TaskMasterMCPServer; diff --git a/package-lock.json b/package-lock.json index acf6ee8d..345d3081 100644 --- a/package-lock.json +++ 
b/package-lock.json @@ -1,12 +1,12 @@ { "name": "task-master-ai", - "version": "0.9.16", + "version": "0.9.18", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "task-master-ai", - "version": "0.9.16", + "version": "0.9.18", "license": "MIT", "dependencies": { "@anthropic-ai/sdk": "^0.39.0", @@ -14,9 +14,14 @@ "chalk": "^4.1.2", "cli-table3": "^0.6.5", "commander": "^11.1.0", + "cors": "^2.8.5", "dotenv": "^16.3.1", + "express": "^4.21.2", + "fastmcp": "^1.20.5", "figlet": "^1.8.0", "gradient-string": "^3.0.0", + "helmet": "^8.1.0", + "jsonwebtoken": "^9.0.2", "openai": "^4.89.0", "ora": "^8.2.0" }, @@ -988,6 +993,365 @@ "@jridgewell/sourcemap-codec": "^1.4.14" } }, + "node_modules/@modelcontextprotocol/sdk": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.7.0.tgz", + "integrity": "sha512-IYPe/FLpvF3IZrd/f5p5ffmWhMc3aEMuM2wGJASDqC2Ge7qatVCdbfPx3n/5xFeb19xN0j/911M2AaFuircsWA==", + "license": "MIT", + "dependencies": { + "content-type": "^1.0.5", + "cors": "^2.8.5", + "eventsource": "^3.0.2", + "express": "^5.0.1", + "express-rate-limit": "^7.5.0", + "pkce-challenge": "^4.1.0", + "raw-body": "^3.0.0", + "zod": "^3.23.8", + "zod-to-json-schema": "^3.24.1" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/accepts": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz", + "integrity": "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==", + "license": "MIT", + "dependencies": { + "mime-types": "^3.0.0", + "negotiator": "^1.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/body-parser": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.1.0.tgz", + "integrity": "sha512-/hPxh61E+ll0Ujp24Ilm64cykicul1ypfwjVttduAiEdtnJFvLePSrIPk+HMImtNv5270wOGCb1Tns2rybMkoQ==", + 
"license": "MIT", + "dependencies": { + "bytes": "^3.1.2", + "content-type": "^1.0.5", + "debug": "^4.4.0", + "http-errors": "^2.0.0", + "iconv-lite": "^0.5.2", + "on-finished": "^2.4.1", + "qs": "^6.14.0", + "raw-body": "^3.0.0", + "type-is": "^2.0.0" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/content-disposition": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-1.0.0.tgz", + "integrity": "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg==", + "license": "MIT", + "dependencies": { + "safe-buffer": "5.2.1" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/cookie-signature": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.2.2.tgz", + "integrity": "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==", + "license": "MIT", + "engines": { + "node": ">=6.6.0" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/express/-/express-5.0.1.tgz", + "integrity": "sha512-ORF7g6qGnD+YtUG9yx4DFoqCShNMmUKiXuT5oWMHiOvt/4WFbHC6yCwQMTSBMno7AqntNCAzzcnnjowRkTL9eQ==", + "license": "MIT", + "dependencies": { + "accepts": "^2.0.0", + "body-parser": "^2.0.1", + "content-disposition": "^1.0.0", + "content-type": "~1.0.4", + "cookie": "0.7.1", + "cookie-signature": "^1.2.1", + "debug": "4.3.6", + "depd": "2.0.0", + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "finalhandler": "^2.0.0", + "fresh": "2.0.0", + "http-errors": "2.0.0", + "merge-descriptors": "^2.0.0", + "methods": "~1.1.2", + "mime-types": "^3.0.0", + "on-finished": "2.4.1", + "once": "1.4.0", + "parseurl": "~1.3.3", + "proxy-addr": "~2.0.7", + "qs": "6.13.0", + "range-parser": "~1.2.1", + "router": "^2.0.0", 
+ "safe-buffer": "5.2.1", + "send": "^1.1.0", + "serve-static": "^2.1.0", + "setprototypeof": "1.2.0", + "statuses": "2.0.1", + "type-is": "^2.0.0", + "utils-merge": "1.0.1", + "vary": "~1.1.2" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express/node_modules/debug": { + "version": "4.3.6", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.6.tgz", + "integrity": "sha512-O/09Bd4Z1fBrU4VzkhFqVgpPzaGbw6Sm9FEkBT1A/YBXQFGuuSxa1dN2nxgxS34JmKXqYx8CZAwEVoJFImUXIg==", + "license": "MIT", + "dependencies": { + "ms": "2.1.2" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express/node_modules/ms": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", + "license": "MIT" + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express/node_modules/qs": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", + "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/finalhandler": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.0.tgz", + "integrity": "sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "on-finished": "^2.4.1", + "parseurl": "^1.3.3", + "statuses": "^2.0.1" + }, + "engines": { + "node": ">= 
0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz", + "integrity": "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/iconv-lite": { + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.5.2.tgz", + "integrity": "sha512-kERHXvpSaB4aU3eANwidg79K8FlrN77m8G9V+0vOR3HYaRifrlwMEpT7ZBJqLSEIHnEgJTHcWK82wwLwwKwtag==", + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/media-typer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-1.1.0.tgz", + "integrity": "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/merge-descriptors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-2.0.0.tgz", + "integrity": "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/mime-db": { + "version": "1.54.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz", + "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/mime-types": { + "version": "3.0.0", + "resolved": 
"https://registry.npmjs.org/mime-types/-/mime-types-3.0.0.tgz", + "integrity": "sha512-XqoSHeCGjVClAmoGFG3lVFqQFRIrTVw2OH3axRqAcfaw+gHWIfnASS92AV+Rl/mk0MupgZTRHQOjxY6YVnzK5w==", + "license": "MIT", + "dependencies": { + "mime-db": "^1.53.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/negotiator": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-1.0.0.tgz", + "integrity": "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/raw-body": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-3.0.0.tgz", + "integrity": "sha512-RmkhL8CAyCRPXCE28MMH0z2PNWQBNk2Q09ZdxM9IOOXwxwZbN+qbWaatPkdkWIKL2ZVDImrN/pK5HTRz2PcS4g==", + "license": "MIT", + "dependencies": { + "bytes": "3.1.2", + "http-errors": "2.0.0", + "iconv-lite": "0.6.3", + "unpipe": "1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/raw-body/node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/send": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/send/-/send-1.1.0.tgz", + "integrity": "sha512-v67WcEouB5GxbTWL/4NeToqcZiAWEq90N888fczVArY8A79J0L4FD7vj5hm3eUMua5EpoQ59wa/oovY6TLvRUA==", + "license": "MIT", + "dependencies": { + "debug": "^4.3.5", + "destroy": "^1.2.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "fresh": "^0.5.2", + "http-errors": "^2.0.0", + 
"mime-types": "^2.1.35", + "ms": "^2.1.3", + "on-finished": "^2.4.1", + "range-parser": "^1.2.1", + "statuses": "^2.0.1" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/send/node_modules/fresh": { + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", + "integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/send/node_modules/mime-db": { + "version": "1.52.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", + "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/send/node_modules/mime-types": { + "version": "2.1.35", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", + "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", + "license": "MIT", + "dependencies": { + "mime-db": "1.52.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/serve-static": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.1.0.tgz", + "integrity": "sha512-A3We5UfEjG8Z7VkDv6uItWw6HY2bBSBJT1KtVESn6EOoOr2jAxNhxWCLY3jDE2WcuHXByWju74ck3ZgLwL8xmA==", + "license": "MIT", + "dependencies": { + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "parseurl": "^1.3.3", + "send": "^1.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/type-is": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-2.0.0.tgz", + "integrity": 
"sha512-gd0sGezQYCbWSbkZr75mln4YBidWUN60+devscpLF5mtRDUpiaTvKpBNrdaCvel1NdR2k6vclXybU5fBd2i+nw==", + "license": "MIT", + "dependencies": { + "content-type": "^1.0.5", + "media-typer": "^1.1.0", + "mime-types": "^3.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@sec-ant/readable-stream": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/@sec-ant/readable-stream/-/readable-stream-0.4.1.tgz", + "integrity": "sha512-831qok9r2t8AlxLko40y2ebgSDhenenCatLVeW/uBtnHPyhHOvG0C7TvfgecV+wHzIm5KUICgzmVpWS+IMEAeg==", + "license": "MIT" + }, "node_modules/@sinclair/typebox": { "version": "0.27.8", "resolved": "https://registry.npmjs.org/@sinclair/typebox/-/typebox-0.27.8.tgz", @@ -995,6 +1359,18 @@ "dev": true, "license": "MIT" }, + "node_modules/@sindresorhus/merge-streams": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/@sindresorhus/merge-streams/-/merge-streams-4.0.0.tgz", + "integrity": "sha512-tlqY9xq5ukxTUZBmoOp+m61cqwQD5pHJtFY3Mn8CA8ps6yghLH/Hw8UPdqg4OLmFW3IFlcXnQNmo/dh8HzXYIQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/@sinonjs/commons": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/@sinonjs/commons/-/commons-3.0.1.tgz", @@ -1015,6 +1391,30 @@ "@sinonjs/commons": "^3.0.0" } }, + "node_modules/@tokenizer/inflate": { + "version": "0.2.7", + "resolved": "https://registry.npmjs.org/@tokenizer/inflate/-/inflate-0.2.7.tgz", + "integrity": "sha512-MADQgmZT1eKjp06jpI2yozxaU9uVs4GzzgSL+uEq7bVcJ9V1ZXQkeGNql1fsSI0gMy1vhvNTNbUqrx+pZfJVmg==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "fflate": "^0.8.2", + "token-types": "^6.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, + "node_modules/@tokenizer/token": { + "version": "0.3.0", + "resolved": 
"https://registry.npmjs.org/@tokenizer/token/-/token-0.3.0.tgz", + "integrity": "sha512-OvjF+z51L3ov0OyAU0duzsYuvO01PH7x4t6DJx+guahgTnBHkhJdG7soQeTSFLWN3efnHyibZ4Z8l2EuWwJN3A==", + "license": "MIT" + }, "node_modules/@types/babel__core": { "version": "7.20.5", "resolved": "https://registry.npmjs.org/@types/babel__core/-/babel__core-7.20.5.tgz", @@ -1169,6 +1569,19 @@ "node": ">=6.5" } }, + "node_modules/accepts": { + "version": "1.3.8", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz", + "integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==", + "license": "MIT", + "dependencies": { + "mime-types": "~2.1.34", + "negotiator": "0.6.3" + }, + "engines": { + "node": ">= 0.6" + } + }, "node_modules/agentkeepalive": { "version": "4.6.0", "resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-4.6.0.tgz", @@ -1311,6 +1724,12 @@ "sprintf-js": "~1.0.2" } }, + "node_modules/array-flatten": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz", + "integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==", + "license": "MIT" + }, "node_modules/asap": { "version": "2.0.6", "resolved": "https://registry.npmjs.org/asap/-/asap-2.0.6.tgz", @@ -1447,6 +1866,60 @@ "dev": true, "license": "MIT" }, + "node_modules/body-parser": { + "version": "1.20.3", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz", + "integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==", + "license": "MIT", + "dependencies": { + "bytes": "3.1.2", + "content-type": "~1.0.5", + "debug": "2.6.9", + "depd": "2.0.0", + "destroy": "1.2.0", + "http-errors": "2.0.0", + "iconv-lite": "0.4.24", + "on-finished": "2.4.1", + "qs": "6.13.0", + "raw-body": "2.5.2", + "type-is": "~1.6.18", + "unpipe": "1.0.0" + }, + "engines": { + "node": ">= 0.8", + 
"npm": "1.2.8000 || >= 1.4.16" + } + }, + "node_modules/body-parser/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/body-parser/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, + "node_modules/body-parser/node_modules/qs": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", + "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/boxen": { "version": "8.0.1", "resolved": "https://registry.npmjs.org/boxen/-/boxen-8.0.1.tgz", @@ -1548,6 +2021,12 @@ "node-int64": "^0.4.0" } }, + "node_modules/buffer-equal-constant-time": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz", + "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==", + "license": "BSD-3-Clause" + }, "node_modules/buffer-from": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", @@ -1555,6 +2034,15 @@ "dev": true, "license": "MIT" }, + "node_modules/bytes": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", + "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "license": "MIT", + 
"engines": { + "node": ">= 0.8" + } + }, "node_modules/call-bind-apply-helpers": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", @@ -1572,7 +2060,6 @@ "version": "1.0.4", "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", - "dev": true, "license": "MIT", "dependencies": { "call-bind-apply-helpers": "^1.0.2", @@ -1776,7 +2263,6 @@ "version": "8.0.1", "resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz", "integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==", - "dev": true, "license": "ISC", "dependencies": { "string-width": "^4.2.0", @@ -1791,7 +2277,6 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -1801,14 +2286,12 @@ "version": "8.0.0", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", - "dev": true, "license": "MIT" }, "node_modules/cliui/node_modules/string-width": { "version": "4.2.3", "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", - "dev": true, "license": "MIT", "dependencies": { "emoji-regex": "^8.0.0", @@ -1823,7 +2306,6 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, "license": "MIT", "dependencies": { 
"ansi-regex": "^5.0.1" @@ -1836,7 +2318,6 @@ "version": "7.0.0", "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", - "dev": true, "license": "MIT", "dependencies": { "ansi-styles": "^4.0.0", @@ -1924,6 +2405,27 @@ "dev": true, "license": "MIT" }, + "node_modules/content-disposition": { + "version": "0.5.4", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz", + "integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==", + "license": "MIT", + "dependencies": { + "safe-buffer": "5.2.1" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/content-type": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", + "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, "node_modules/convert-source-map": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-2.0.0.tgz", @@ -1931,6 +2433,21 @@ "dev": true, "license": "MIT" }, + "node_modules/cookie": { + "version": "0.7.1", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz", + "integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie-signature": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", + "integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==", + "license": "MIT" + }, "node_modules/cookiejar": { "version": "2.1.4", "resolved": "https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.4.tgz", @@ 
-1938,6 +2455,19 @@ "dev": true, "license": "MIT" }, + "node_modules/cors": { + "version": "2.8.5", + "resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz", + "integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==", + "license": "MIT", + "dependencies": { + "object-assign": "^4", + "vary": "^1" + }, + "engines": { + "node": ">= 0.10" + } + }, "node_modules/create-jest": { "version": "29.7.0", "resolved": "https://registry.npmjs.org/create-jest/-/create-jest-29.7.0.tgz", @@ -1964,7 +2494,6 @@ "version": "7.0.6", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", - "dev": true, "license": "MIT", "dependencies": { "path-key": "^3.1.0", @@ -1988,7 +2517,6 @@ "version": "4.4.0", "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz", "integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==", - "dev": true, "license": "MIT", "dependencies": { "ms": "^2.1.3" @@ -2036,6 +2564,25 @@ "node": ">=0.4.0" } }, + "node_modules/depd": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", + "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/destroy": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", + "integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==", + "license": "MIT", + "engines": { + "node": ">= 0.8", + "npm": "1.2.8000 || >= 1.4.16" + } + }, "node_modules/detect-newline": { "version": "3.1.0", "resolved": "https://registry.npmjs.org/detect-newline/-/detect-newline-3.1.0.tgz", @@ -2093,6 +2640,21 @@ "node": ">= 0.4" } }, + 
"node_modules/ecdsa-sig-formatter": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz", + "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==", + "license": "Apache-2.0", + "dependencies": { + "safe-buffer": "^5.0.1" + } + }, + "node_modules/ee-first": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", + "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==", + "license": "MIT" + }, "node_modules/electron-to-chromium": { "version": "1.5.123", "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.123.tgz", @@ -2119,6 +2681,15 @@ "integrity": "sha512-EC+0oUMY1Rqm4O6LLrgjtYDvcVYTy7chDnM4Q7030tP4Kwj3u/pR6gP9ygnp2CJMK5Gq+9Q2oqmrFJAz01DXjw==", "license": "MIT" }, + "node_modules/encodeurl": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", + "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/error-ex": { "version": "1.3.2", "resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.2.tgz", @@ -2178,12 +2749,17 @@ "version": "3.2.0", "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", - "dev": true, "license": "MIT", "engines": { "node": ">=6" } }, + "node_modules/escape-html": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", + "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==", + "license": "MIT" + }, "node_modules/escape-string-regexp": { "version": "2.0.0", "resolved": 
"https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-2.0.0.tgz", @@ -2208,6 +2784,15 @@ "node": ">=4" } }, + "node_modules/etag": { + "version": "1.8.1", + "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", + "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, "node_modules/event-target-shim": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz", @@ -2217,6 +2802,27 @@ "node": ">=6" } }, + "node_modules/eventsource": { + "version": "3.0.5", + "resolved": "https://registry.npmjs.org/eventsource/-/eventsource-3.0.5.tgz", + "integrity": "sha512-LT/5J605bx5SNyE+ITBDiM3FxffBiq9un7Vx0EwMDM3vg8sWKx/tO2zC+LMqZ+smAM0F2hblaDZUVZF0te2pSw==", + "license": "MIT", + "dependencies": { + "eventsource-parser": "^3.0.0" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/eventsource-parser": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/eventsource-parser/-/eventsource-parser-3.0.0.tgz", + "integrity": "sha512-T1C0XCUimhxVQzW4zFipdx0SficT651NnkR0ZSH3yQwh+mFMdLfgjABVi4YtMTtaL4s168593DaoaRLMqryavA==", + "license": "MIT", + "engines": { + "node": ">=18.0.0" + } + }, "node_modules/execa": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/execa/-/execa-5.1.1.tgz", @@ -2290,6 +2896,97 @@ "node": "^14.15.0 || ^16.10.0 || >=18.0.0" } }, + "node_modules/express": { + "version": "4.21.2", + "resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz", + "integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==", + "license": "MIT", + "dependencies": { + "accepts": "~1.3.8", + "array-flatten": "1.1.1", + "body-parser": "1.20.3", + "content-disposition": "0.5.4", + "content-type": "~1.0.4", + "cookie": "0.7.1", + "cookie-signature": "1.0.6", + "debug": "2.6.9", + "depd": "2.0.0", 
+ "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "finalhandler": "1.3.1", + "fresh": "0.5.2", + "http-errors": "2.0.0", + "merge-descriptors": "1.0.3", + "methods": "~1.1.2", + "on-finished": "2.4.1", + "parseurl": "~1.3.3", + "path-to-regexp": "0.1.12", + "proxy-addr": "~2.0.7", + "qs": "6.13.0", + "range-parser": "~1.2.1", + "safe-buffer": "5.2.1", + "send": "0.19.0", + "serve-static": "1.16.2", + "setprototypeof": "1.2.0", + "statuses": "2.0.1", + "type-is": "~1.6.18", + "utils-merge": "1.0.1", + "vary": "~1.1.2" + }, + "engines": { + "node": ">= 0.10.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/express-rate-limit": { + "version": "7.5.0", + "resolved": "https://registry.npmjs.org/express-rate-limit/-/express-rate-limit-7.5.0.tgz", + "integrity": "sha512-eB5zbQh5h+VenMPM3fh+nw1YExi5nMr6HUCR62ELSP11huvxm/Uir1H1QEyTkk5QX6A58pX6NmaTMceKZ0Eodg==", + "license": "MIT", + "engines": { + "node": ">= 16" + }, + "funding": { + "url": "https://github.com/sponsors/express-rate-limit" + }, + "peerDependencies": { + "express": "^4.11 || 5 || ^5.0.0-beta.1" + } + }, + "node_modules/express/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/express/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, + "node_modules/express/node_modules/qs": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", + "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", 
+ "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/fast-json-stable-stringify": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", @@ -2304,6 +3001,131 @@ "dev": true, "license": "MIT" }, + "node_modules/fastmcp": { + "version": "1.20.5", + "resolved": "https://registry.npmjs.org/fastmcp/-/fastmcp-1.20.5.tgz", + "integrity": "sha512-jwcPgMF9bcE9qsEG82YMlAG26/n5CSYsr95e60ntqWWd+3kgTBbUIasB3HfpqHLTNaQuoX6/jl18fpDcybBjcQ==", + "license": "MIT", + "dependencies": { + "@modelcontextprotocol/sdk": "^1.6.0", + "execa": "^9.5.2", + "file-type": "^20.3.0", + "fuse.js": "^7.1.0", + "mcp-proxy": "^2.10.4", + "strict-event-emitter-types": "^2.0.0", + "undici": "^7.4.0", + "uri-templates": "^0.2.0", + "yargs": "^17.7.2", + "zod": "^3.24.2", + "zod-to-json-schema": "^3.24.3" + }, + "bin": { + "fastmcp": "dist/bin/fastmcp.js" + } + }, + "node_modules/fastmcp/node_modules/execa": { + "version": "9.5.2", + "resolved": "https://registry.npmjs.org/execa/-/execa-9.5.2.tgz", + "integrity": "sha512-EHlpxMCpHWSAh1dgS6bVeoLAXGnJNdR93aabr4QCGbzOM73o5XmRfM/e5FUqsw3aagP8S8XEWUWFAxnRBnAF0Q==", + "license": "MIT", + "dependencies": { + "@sindresorhus/merge-streams": "^4.0.0", + "cross-spawn": "^7.0.3", + "figures": "^6.1.0", + "get-stream": "^9.0.0", + "human-signals": "^8.0.0", + "is-plain-obj": "^4.1.0", + "is-stream": "^4.0.1", + "npm-run-path": "^6.0.0", + "pretty-ms": "^9.0.0", + "signal-exit": "^4.1.0", + "strip-final-newline": "^4.0.0", + "yoctocolors": "^2.0.0" + }, + "engines": { + "node": "^18.19.0 || >=20.5.0" + }, + "funding": { + "url": "https://github.com/sindresorhus/execa?sponsor=1" + } + }, + "node_modules/fastmcp/node_modules/get-stream": { + "version": "9.0.1", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-9.0.1.tgz", + 
"integrity": "sha512-kVCxPF3vQM/N0B1PmoqVUqgHP+EeVjmZSQn+1oCRPxd2P21P2F19lIgbR3HBosbB1PUhOAoctJnfEn2GbN2eZA==", + "license": "MIT", + "dependencies": { + "@sec-ant/readable-stream": "^0.4.1", + "is-stream": "^4.0.1" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/fastmcp/node_modules/human-signals": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-8.0.0.tgz", + "integrity": "sha512-/1/GPCpDUCCYwlERiYjxoczfP0zfvZMU/OWgQPMya9AbAE24vseigFdhAMObpc8Q4lc/kjutPfUddDYyAmejnA==", + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/fastmcp/node_modules/is-stream": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-4.0.1.tgz", + "integrity": "sha512-Dnz92NInDqYckGEUJv689RbRiTSEHCQ7wOVeALbkOz999YpqT46yMRIGtSNl2iCL1waAZSx40+h59NV/EwzV/A==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/fastmcp/node_modules/npm-run-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-6.0.0.tgz", + "integrity": "sha512-9qny7Z9DsQU8Ou39ERsPU4OZQlSTP47ShQzuKZ6PRXpYLtIFgl/DEBYEXKlvcEa+9tHVcK8CF81Y2V72qaZhWA==", + "license": "MIT", + "dependencies": { + "path-key": "^4.0.0", + "unicorn-magic": "^0.3.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/fastmcp/node_modules/path-key": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-4.0.0.tgz", + "integrity": "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + 
"node_modules/fastmcp/node_modules/strip-final-newline": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-4.0.0.tgz", + "integrity": "sha512-aulFJcD6YK8V1G7iRB5tigAP4TsHBZZrOV8pjV++zdUwmeV8uzbY7yn6h9MswN62adStNZFuCIx4haBnRuMDaw==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/fb-watchman": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/fb-watchman/-/fb-watchman-2.0.2.tgz", @@ -2346,6 +3168,12 @@ "node": ">= 8" } }, + "node_modules/fflate": { + "version": "0.8.2", + "resolved": "https://registry.npmjs.org/fflate/-/fflate-0.8.2.tgz", + "integrity": "sha512-cPJU47OaAoCbg0pBvzsgpTPhmhqI5eJjh/JIu8tPj5q+T7iLvW/JAYUqmE7KOB4R1ZyEhzBaIQpQpardBF5z8A==", + "license": "MIT" + }, "node_modules/figlet": { "version": "1.8.0", "resolved": "https://registry.npmjs.org/figlet/-/figlet-1.8.0.tgz", @@ -2358,6 +3186,39 @@ "node": ">= 0.4.0" } }, + "node_modules/figures": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/figures/-/figures-6.1.0.tgz", + "integrity": "sha512-d+l3qxjSesT4V7v2fh+QnmFnUWv9lSpjarhShNTgBOfA0ttejbQUAlHLitbjkoRiDulW0OPoQPYIGhIC8ohejg==", + "license": "MIT", + "dependencies": { + "is-unicode-supported": "^2.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/file-type": { + "version": "20.4.1", + "resolved": "https://registry.npmjs.org/file-type/-/file-type-20.4.1.tgz", + "integrity": "sha512-hw9gNZXUfZ02Jo0uafWLaFVPter5/k2rfcrjFJJHX/77xtSDOfJuEFb6oKlFV86FLP1SuyHMW1PSk0U9M5tKkQ==", + "license": "MIT", + "dependencies": { + "@tokenizer/inflate": "^0.2.6", + "strtok3": "^10.2.0", + "token-types": "^6.0.0", + "uint8array-extras": "^1.4.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sindresorhus/file-type?sponsor=1" + } + }, "node_modules/fill-range": { 
"version": "7.1.1", "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz", @@ -2371,6 +3232,39 @@ "node": ">=8" } }, + "node_modules/finalhandler": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz", + "integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==", + "license": "MIT", + "dependencies": { + "debug": "2.6.9", + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "on-finished": "2.4.1", + "parseurl": "~1.3.3", + "statuses": "2.0.1", + "unpipe": "~1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/finalhandler/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/finalhandler/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, "node_modules/find-up": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", @@ -2446,6 +3340,24 @@ "url": "https://ko-fi.com/tunnckoCore/commissions" } }, + "node_modules/forwarded": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", + "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/fresh": { + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", + "integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", + "license": "MIT", + "engines": { 
+ "node": ">= 0.6" + } + }, "node_modules/fs.realpath": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", @@ -2477,6 +3389,15 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/fuse.js": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/fuse.js/-/fuse.js-7.1.0.tgz", + "integrity": "sha512-trLf4SzuuUxfusZADLINj+dE8clK1frKdmqiJNb1Es75fmI5oY6X2mxLVUciLLjxqw/xr72Dhy+lER6dGd02FQ==", + "license": "Apache-2.0", + "engines": { + "node": ">=10" + } + }, "node_modules/gensync": { "version": "1.0.0-beta.2", "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz", @@ -2491,7 +3412,6 @@ "version": "2.0.5", "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", - "dev": true, "license": "ISC", "engines": { "node": "6.* || 8.* || >= 10.*" @@ -2693,6 +3613,15 @@ "node": ">= 0.4" } }, + "node_modules/helmet": { + "version": "8.1.0", + "resolved": "https://registry.npmjs.org/helmet/-/helmet-8.1.0.tgz", + "integrity": "sha512-jOiHyAZsmnr8LqoPGmCjYAaiuWwjAPLgY8ZX2XrmHawt99/u1y6RgrZMTeoPfpUbV96HOalYgz1qzkRbw54Pmg==", + "license": "MIT", + "engines": { + "node": ">=18.0.0" + } + }, "node_modules/hexoid": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/hexoid/-/hexoid-2.0.0.tgz", @@ -2710,6 +3639,22 @@ "dev": true, "license": "MIT" }, + "node_modules/http-errors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", + "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", + "license": "MIT", + "dependencies": { + "depd": "2.0.0", + "inherits": "2.0.4", + "setprototypeof": "1.2.0", + "statuses": "2.0.1", + "toidentifier": "1.0.1" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/human-signals": { "version": "2.1.0", 
"resolved": "https://registry.npmjs.org/human-signals/-/human-signals-2.1.0.tgz", @@ -2729,6 +3674,38 @@ "ms": "^2.0.0" } }, + "node_modules/iconv-lite": { + "version": "0.4.24", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", + "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "BSD-3-Clause" + }, "node_modules/import-local": { "version": "3.2.0", "resolved": "https://registry.npmjs.org/import-local/-/import-local-3.2.0.tgz", @@ -2775,9 +3752,17 @@ "version": "2.0.4", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", - "dev": true, "license": "ISC" }, + "node_modules/ipaddr.js": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", + "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", + "license": "MIT", + "engines": { + "node": ">= 0.10" + } + }, "node_modules/is-arrayish": { "version": "0.2.1", "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz", @@ -2842,6 +3827,24 @@ "node": ">=0.12.0" } }, + "node_modules/is-plain-obj": { + "version": "4.1.0", + "resolved": 
"https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-4.1.0.tgz", + "integrity": "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-promise": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-4.0.0.tgz", + "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==", + "license": "MIT" + }, "node_modules/is-stream": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz", @@ -2871,7 +3874,6 @@ "version": "2.0.0", "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", - "dev": true, "license": "ISC" }, "node_modules/istanbul-lib-coverage": { @@ -3608,6 +4610,61 @@ "node": ">=6" } }, + "node_modules/jsonwebtoken": { + "version": "9.0.2", + "resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz", + "integrity": "sha512-PRp66vJ865SSqOlgqS8hujT5U4AOgMfhrwYIuIhfKaoSCZcirrmASQr8CX7cUg+RMih+hgznrjp99o+W4pJLHQ==", + "license": "MIT", + "dependencies": { + "jws": "^3.2.2", + "lodash.includes": "^4.3.0", + "lodash.isboolean": "^3.0.3", + "lodash.isinteger": "^4.0.4", + "lodash.isnumber": "^3.0.3", + "lodash.isplainobject": "^4.0.6", + "lodash.isstring": "^4.0.1", + "lodash.once": "^4.0.0", + "ms": "^2.1.1", + "semver": "^7.5.4" + }, + "engines": { + "node": ">=12", + "npm": ">=6" + } + }, + "node_modules/jsonwebtoken/node_modules/semver": { + "version": "7.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz", + "integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==", + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + 
"engines": { + "node": ">=10" + } + }, + "node_modules/jwa": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/jwa/-/jwa-1.4.1.tgz", + "integrity": "sha512-qiLX/xhEEFKUAJ6FiBMbes3w9ATzyk5W7Hvzpa/SLYdxNtng+gcurvrI7TbACjIXlsJyr05/S1oUhZrc63evQA==", + "license": "MIT", + "dependencies": { + "buffer-equal-constant-time": "1.0.1", + "ecdsa-sig-formatter": "1.0.11", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/jws": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/jws/-/jws-3.2.2.tgz", + "integrity": "sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==", + "license": "MIT", + "dependencies": { + "jwa": "^1.4.1", + "safe-buffer": "^5.0.1" + } + }, "node_modules/kleur": { "version": "3.0.3", "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz", @@ -3648,6 +4705,48 @@ "node": ">=8" } }, + "node_modules/lodash.includes": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz", + "integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==", + "license": "MIT" + }, + "node_modules/lodash.isboolean": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz", + "integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==", + "license": "MIT" + }, + "node_modules/lodash.isinteger": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz", + "integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==", + "license": "MIT" + }, + "node_modules/lodash.isnumber": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz", + "integrity": 
"sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==", + "license": "MIT" + }, + "node_modules/lodash.isplainobject": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz", + "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==", + "license": "MIT" + }, + "node_modules/lodash.isstring": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz", + "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==", + "license": "MIT" + }, + "node_modules/lodash.once": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz", + "integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==", + "license": "MIT" + }, "node_modules/log-symbols": { "version": "6.0.0", "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-6.0.0.tgz", @@ -3746,6 +4845,38 @@ "node": ">= 0.4" } }, + "node_modules/mcp-proxy": { + "version": "2.12.0", + "resolved": "https://registry.npmjs.org/mcp-proxy/-/mcp-proxy-2.12.0.tgz", + "integrity": "sha512-hL2Y6EtK7vkgAOZxOQe9M4Z9g5xEnvR4ZYBKqFi/5tjhz/1jyNEz5NL87Uzv46k8iZQPVNEof/T6arEooBU5bQ==", + "license": "MIT", + "dependencies": { + "@modelcontextprotocol/sdk": "^1.6.0", + "eventsource": "^3.0.5", + "yargs": "^17.7.2" + }, + "bin": { + "mcp-proxy": "dist/bin/mcp-proxy.js" + } + }, + "node_modules/media-typer": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", + "integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/merge-descriptors": { + "version": "1.0.3", + "resolved": 
"https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz", + "integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/merge-stream": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", @@ -3757,7 +4888,6 @@ "version": "1.1.2", "resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz", "integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==", - "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" @@ -3869,6 +4999,15 @@ "dev": true, "license": "MIT" }, + "node_modules/negotiator": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz", + "integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, "node_modules/node-domexception": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz", @@ -3943,11 +5082,19 @@ "node": ">=8" } }, + "node_modules/object-assign": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", + "integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/object-inspect": { "version": "1.13.4", "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", - "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" @@ -3956,11 +5103,22 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/on-finished": { + 
"version": "2.4.1", + "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", + "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", + "license": "MIT", + "dependencies": { + "ee-first": "1.1.1" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/once": { "version": "1.4.0", "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", - "dev": true, "license": "ISC", "dependencies": { "wrappy": "1" @@ -4120,6 +5278,27 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/parse-ms": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/parse-ms/-/parse-ms-4.0.0.tgz", + "integrity": "sha512-TXfryirbmq34y8QBwgqCVLi+8oA3oWx2eAnSn62ITyEhEYaWRlVZ2DvMM9eZbMs/RfxPu/PK/aBLyGj4IrqMHw==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/parseurl": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", + "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", @@ -4144,7 +5323,6 @@ "version": "3.1.1", "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -4157,6 +5335,25 @@ "dev": true, "license": "MIT" }, + "node_modules/path-to-regexp": { + "version": "0.1.12", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz", + "integrity": 
"sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==", + "license": "MIT" + }, + "node_modules/peek-readable": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/peek-readable/-/peek-readable-7.0.0.tgz", + "integrity": "sha512-nri2TO5JE3/mRryik9LlHFT53cgHfRK0Lt0BAZQXku/AW3E6XLt2GaY8siWi7dvW/m1z0ecn+J+bpDa9ZN3IsQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, "node_modules/picocolors": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", @@ -4187,6 +5384,15 @@ "node": ">= 6" } }, + "node_modules/pkce-challenge": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/pkce-challenge/-/pkce-challenge-4.1.0.tgz", + "integrity": "sha512-ZBmhE1C9LcPoH9XZSdwiPtbPHZROwAnMy+kIFQVrnMCxY4Cudlz3gBOpzilgc0jOgRaiT3sIWfpMomW2ar2orQ==", + "license": "MIT", + "engines": { + "node": ">=16.20.0" + } + }, "node_modules/pkg-dir": { "version": "4.2.0", "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", @@ -4228,6 +5434,21 @@ "url": "https://github.com/chalk/ansi-styles?sponsor=1" } }, + "node_modules/pretty-ms": { + "version": "9.2.0", + "resolved": "https://registry.npmjs.org/pretty-ms/-/pretty-ms-9.2.0.tgz", + "integrity": "sha512-4yf0QO/sllf/1zbZWYnvWw3NxCQwLXKzIj0G849LSufP15BXKM0rbD2Z3wVnkMfjdn/CB0Dpp444gYAACdsplg==", + "license": "MIT", + "dependencies": { + "parse-ms": "^4.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/prompts": { "version": "2.4.2", "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz", @@ -4242,6 +5463,19 @@ "node": ">= 6" } }, + "node_modules/proxy-addr": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", + "integrity": 
"sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", + "license": "MIT", + "dependencies": { + "forwarded": "0.2.0", + "ipaddr.js": "1.9.1" + }, + "engines": { + "node": ">= 0.10" + } + }, "node_modules/pure-rand": { "version": "6.1.0", "resolved": "https://registry.npmjs.org/pure-rand/-/pure-rand-6.1.0.tgz", @@ -4263,7 +5497,6 @@ "version": "6.14.0", "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz", "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==", - "dev": true, "license": "BSD-3-Clause", "dependencies": { "side-channel": "^1.1.0" @@ -4275,6 +5508,30 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/range-parser": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", + "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/raw-body": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", + "integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", + "license": "MIT", + "dependencies": { + "bytes": "3.1.2", + "http-errors": "2.0.0", + "iconv-lite": "0.4.24", + "unpipe": "1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/react-is": { "version": "18.3.1", "resolved": "https://registry.npmjs.org/react-is/-/react-is-18.3.1.tgz", @@ -4286,7 +5543,6 @@ "version": "2.1.1", "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", - "dev": true, "license": "MIT", "engines": { "node": ">=0.10.0" @@ -4362,6 +5618,55 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/router": { + 
"version": "2.1.0", + "resolved": "https://registry.npmjs.org/router/-/router-2.1.0.tgz", + "integrity": "sha512-/m/NSLxeYEgWNtyC+WtNHCF7jbGxOibVWKnn+1Psff4dJGOfoXP+MuC/f2CwSmyiHdOIzYnYFp4W6GxWfekaLA==", + "license": "MIT", + "dependencies": { + "is-promise": "^4.0.0", + "parseurl": "^1.3.3", + "path-to-regexp": "^8.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/router/node_modules/path-to-regexp": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-8.2.0.tgz", + "integrity": "sha512-TdrF7fW9Rphjq4RjrW0Kp2AW0Ahwu9sRGTkS6bvDi0SCwZlEZYmcfDbEsTz8RVk0EHIS/Vd1bv3JhG+1xZuAyQ==", + "license": "MIT", + "engines": { + "node": ">=16" + } + }, + "node_modules/safe-buffer": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT" + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "license": "MIT" + }, "node_modules/semver": { "version": "6.3.1", "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz", @@ -4372,11 +5677,91 @@ "semver": "bin/semver.js" } }, + "node_modules/send": { + "version": "0.19.0", + "resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz", + "integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==", + "license": "MIT", + "dependencies": { + "debug": "2.6.9", + "depd": "2.0.0", + "destroy": "1.2.0", + 
"encodeurl": "~1.0.2", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "fresh": "0.5.2", + "http-errors": "2.0.0", + "mime": "1.6.0", + "ms": "2.1.3", + "on-finished": "2.4.1", + "range-parser": "~1.2.1", + "statuses": "2.0.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/send/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/send/node_modules/debug/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, + "node_modules/send/node_modules/encodeurl": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", + "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/send/node_modules/mime": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", + "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "license": "MIT", + "bin": { + "mime": "cli.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/serve-static": { + "version": "1.16.2", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz", + "integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==", + "license": "MIT", + "dependencies": { + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "parseurl": "~1.3.3", + "send": "0.19.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + 
"node_modules/setprototypeof": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", + "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==", + "license": "ISC" + }, "node_modules/shebang-command": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", - "dev": true, "license": "MIT", "dependencies": { "shebang-regex": "^3.0.0" @@ -4389,7 +5774,6 @@ "version": "3.0.0", "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -4399,7 +5783,6 @@ "version": "1.1.0", "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", - "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", @@ -4419,7 +5802,6 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", - "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", @@ -4436,7 +5818,6 @@ "version": "1.0.1", "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", - "dev": true, "license": "MIT", "dependencies": { "call-bound": "^1.0.2", @@ -4455,7 +5836,6 @@ "version": "1.0.2", "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", "integrity": 
"sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", - "dev": true, "license": "MIT", "dependencies": { "call-bound": "^1.0.2", @@ -4541,6 +5921,15 @@ "node": ">=10" } }, + "node_modules/statuses": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", + "integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/stdin-discarder": { "version": "0.2.2", "resolved": "https://registry.npmjs.org/stdin-discarder/-/stdin-discarder-0.2.2.tgz", @@ -4553,6 +5942,12 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/strict-event-emitter-types": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/strict-event-emitter-types/-/strict-event-emitter-types-2.0.0.tgz", + "integrity": "sha512-Nk/brWYpD85WlOgzw5h173aci0Teyv8YdIAEtV+N88nDB0dLlazZyJMIsN6eo1/AR61l+p6CJTG1JIyFaoNEEA==", + "license": "ISC" + }, "node_modules/string-length": { "version": "4.0.2", "resolved": "https://registry.npmjs.org/string-length/-/string-length-4.0.2.tgz", @@ -4655,6 +6050,23 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/strtok3": { + "version": "10.2.2", + "resolved": "https://registry.npmjs.org/strtok3/-/strtok3-10.2.2.tgz", + "integrity": "sha512-Xt18+h4s7Z8xyZ0tmBoRmzxcop97R4BAh+dXouUDCYn+Em+1P3qpkUfI5ueWLT8ynC5hZ+q4iPEmGG1urvQGBg==", + "license": "MIT", + "dependencies": { + "@tokenizer/token": "^0.3.0", + "peek-readable": "^7.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, "node_modules/superagent": { "version": "9.0.2", "resolved": "https://registry.npmjs.org/superagent/-/superagent-9.0.2.tgz", @@ -4766,6 +6178,32 @@ "node": ">=8.0" } }, + "node_modules/toidentifier": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", + "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", + "license": "MIT", + "engines": { + "node": ">=0.6" + } + }, + "node_modules/token-types": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/token-types/-/token-types-6.0.0.tgz", + "integrity": "sha512-lbDrTLVsHhOMljPscd0yitpozq7Ga2M5Cvez5AjGg8GASBjtt6iERCAJ93yommPmz62fb45oFIXHEZ3u9bfJEA==", + "license": "MIT", + "dependencies": { + "@tokenizer/token": "^0.3.0", + "ieee754": "^1.2.1" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, "node_modules/type-detect": { "version": "4.0.8", "resolved": "https://registry.npmjs.org/type-detect/-/type-detect-4.0.8.tgz", @@ -4788,12 +6226,67 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/type-is": { + "version": "1.6.18", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", + "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", + "license": "MIT", + "dependencies": { + "media-typer": "0.3.0", + "mime-types": "~2.1.24" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/uint8array-extras": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/uint8array-extras/-/uint8array-extras-1.4.0.tgz", + "integrity": "sha512-ZPtzy0hu4cZjv3z5NW9gfKnNLjoz4y6uv4HlelAjDK7sY/xOkKZv9xK/WQpcsBB3jEybChz9DPC2U/+cusjJVQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/undici": { + "version": "7.5.0", + "resolved": "https://registry.npmjs.org/undici/-/undici-7.5.0.tgz", + "integrity": "sha512-NFQG741e8mJ0fLQk90xKxFdaSM7z4+IQpAgsFI36bCDY9Z2+aXXZjVy2uUksMouWfMI9+w5ejOq5zYYTBCQJDQ==", + "license": "MIT", + "engines": { + "node": ">=20.18.1" + } 
+ }, "node_modules/undici-types": { "version": "5.26.5", "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz", "integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==", "license": "MIT" }, + "node_modules/unicorn-magic": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/unicorn-magic/-/unicorn-magic-0.3.0.tgz", + "integrity": "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/unpipe": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", + "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/update-browserslist-db": { "version": "1.1.3", "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.3.tgz", @@ -4825,6 +6318,21 @@ "browserslist": ">= 4.21.0" } }, + "node_modules/uri-templates": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/uri-templates/-/uri-templates-0.2.0.tgz", + "integrity": "sha512-EWkjYEN0L6KOfEoOH6Wj4ghQqU7eBZMJqRHQnxQAq+dSEzRPClkWjf8557HkWQXF6BrAUoLSAyy9i3RVTliaNg==", + "license": "http://geraintluff.github.io/tv4/LICENSE.txt" + }, + "node_modules/utils-merge": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz", + "integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==", + "license": "MIT", + "engines": { + "node": ">= 0.4.0" + } + }, "node_modules/v8-to-istanbul": { "version": "9.3.0", "resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-9.3.0.tgz", @@ -4840,6 +6348,15 @@ "node": ">=10.12.0" } }, + 
"node_modules/vary": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", + "integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/walker": { "version": "1.0.8", "resolved": "https://registry.npmjs.org/walker/-/walker-1.0.8.tgz", @@ -4863,7 +6380,6 @@ "version": "2.0.2", "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", - "dev": true, "license": "ISC", "dependencies": { "isexe": "^2.0.0" @@ -4923,7 +6439,6 @@ "version": "1.0.2", "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", - "dev": true, "license": "ISC" }, "node_modules/write-file-atomic": { @@ -4951,7 +6466,6 @@ "version": "5.0.8", "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==", - "dev": true, "license": "ISC", "engines": { "node": ">=10" @@ -4968,7 +6482,6 @@ "version": "17.7.2", "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", "integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==", - "dev": true, "license": "MIT", "dependencies": { "cliui": "^8.0.1", @@ -4987,7 +6500,6 @@ "version": "21.1.1", "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz", "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==", - "dev": true, "license": "ISC", "engines": { "node": ">=12" @@ -4997,7 +6509,6 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": 
"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -5007,14 +6518,12 @@ "version": "8.0.0", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", - "dev": true, "license": "MIT" }, "node_modules/yargs/node_modules/string-width": { "version": "4.2.3", "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", - "dev": true, "license": "MIT", "dependencies": { "emoji-regex": "^8.0.0", @@ -5029,7 +6538,6 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -5050,6 +6558,36 @@ "funding": { "url": "https://github.com/sponsors/sindresorhus" } + }, + "node_modules/yoctocolors": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/yoctocolors/-/yoctocolors-2.1.1.tgz", + "integrity": "sha512-GQHQqAopRhwU8Kt1DDM8NjibDXHC8eoh1erhGAJPEyveY9qqVeXvVikNKrDz69sHowPMorbPUrH/mx8c50eiBQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/zod": { + "version": "3.24.2", + "resolved": "https://registry.npmjs.org/zod/-/zod-3.24.2.tgz", + "integrity": "sha512-lY7CDW43ECgW9u1TcT3IoXHflywfVqDYze4waEz812jR/bZ8FHDsl7pFQoSZTz5N+2NqRXs8GBwnAwo3ZNxqhQ==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/colinhacks" + } + }, + "node_modules/zod-to-json-schema": { + "version": "3.24.5", + "resolved": 
"https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.24.5.tgz", + "integrity": "sha512-/AuWwMP+YqiPbsJx5D6TfgRTc4kTLjsh5SOcd4bLsfUg2RcEXrFMJl1DGgdHy2aCfsIA/cr/1JM0xcB2GZji8g==", + "license": "ISC", + "peerDependencies": { + "zod": "^3.24.1" + } } } } diff --git a/package.json b/package.json index 8c9500d9..3be7aba0 100644 --- a/package.json +++ b/package.json @@ -6,7 +6,8 @@ "type": "module", "bin": { "task-master": "bin/task-master.js", - "task-master-init": "bin/task-master-init.js" + "task-master-init": "bin/task-master-init.js", + "task-master-mcp-server": "mcp-server/server.js" }, "scripts": { "test": "node --experimental-vm-modules node_modules/.bin/jest", @@ -14,7 +15,8 @@ "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage", "prepare-package": "node scripts/prepare-package.js", "prepublishOnly": "npm run prepare-package", - "prepare": "chmod +x bin/task-master.js bin/task-master-init.js" + "prepare": "chmod +x bin/task-master.js bin/task-master-init.js mcp-server/server.js", + "mcp-server": "node mcp-server/server.js" }, "keywords": [ "claude", @@ -24,7 +26,9 @@ "development", "cursor", "anthropic", - "llm" + "llm", + "mcp", + "context" ], "author": "Eyal Toledano", "license": "MIT", @@ -34,11 +38,17 @@ "chalk": "^4.1.2", "cli-table3": "^0.6.5", "commander": "^11.1.0", + "cors": "^2.8.5", "dotenv": "^16.3.1", + "express": "^4.21.2", + "fastmcp": "^1.20.5", "figlet": "^1.8.0", "gradient-string": "^3.0.0", + "helmet": "^8.1.0", + "jsonwebtoken": "^9.0.2", "openai": "^4.89.0", - "ora": "^8.2.0" + "ora": "^8.2.0", + "fuse.js": "^7.0.0" }, "engines": { "node": ">=14.0.0" @@ -59,7 +69,8 @@ ".cursor/**", "README-task-master.md", "index.js", - "bin/**" + "bin/**", + "mcp-server/**" ], "overrides": { "node-fetch": "^3.3.2", @@ -72,4 +83,4 @@ "mock-fs": "^5.5.0", "supertest": "^7.1.0" } -} \ No newline at end of file +} diff --git a/tasks/task_023.txt b/tasks/task_023.txt index a34085a0..35e721d4 100644 --- 
a/tasks/task_023.txt +++ b/tasks/task_023.txt @@ -56,3 +56,118 @@ Testing for the MCP server functionality should include: - Test for common API vulnerabilities (injection, CSRF, etc.) All tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman. + +# Subtasks: +## 1. Create Core MCP Server Module and Basic Structure [done] +### Dependencies: None +### Description: Create the foundation for the MCP server implementation by setting up the core module structure, configuration, and server initialization. +### Details: +Implementation steps: +1. Create a new module `mcp-server.js` with the basic server structure +2. Implement configuration options to enable/disable the MCP server +3. Set up Express.js routes for the required MCP endpoints (/context, /models, /execute) +4. Create middleware for request validation and response formatting +5. Implement basic error handling according to MCP specifications +6. Add logging infrastructure for MCP operations +7. Create initialization and shutdown procedures for the MCP server +8. Set up integration with the main Task Master application + +Testing approach: +- Unit tests for configuration loading and validation +- Test server initialization and shutdown procedures +- Verify that routes are properly registered +- Test basic error handling with invalid requests + +## 2. Implement Context Management System [done] +### Dependencies: 23.1 +### Description: Develop a robust context management system that can efficiently store, retrieve, and manipulate context data according to the MCP specification. +### Details: +Implementation steps: +1. Design and implement data structures for context storage +2. Create methods for context creation, retrieval, updating, and deletion +3. Implement context windowing and truncation algorithms for handling size limits +4. Add support for context metadata and tagging +5. 
Create utilities for context serialization and deserialization +6. Implement efficient indexing for quick context lookups +7. Add support for context versioning and history +8. Develop mechanisms for context persistence (in-memory, disk-based, or database) + +Testing approach: +- Unit tests for all context operations (CRUD) +- Performance tests for context retrieval with various sizes +- Test context windowing and truncation with edge cases +- Verify metadata handling and tagging functionality +- Test persistence mechanisms with simulated failures + +## 3. Implement MCP Endpoints and API Handlers [done] +### Dependencies: 23.1, 23.2 +### Description: Develop the complete API handlers for all required MCP endpoints, ensuring they follow the protocol specification and integrate with the context management system. +### Details: +Implementation steps: +1. Implement the `/context` endpoint for: + - GET: retrieving existing context + - POST: creating new context + - PUT: updating existing context + - DELETE: removing context +2. Implement the `/models` endpoint to list available models +3. Develop the `/execute` endpoint for performing operations with context +4. Create request validators for each endpoint +5. Implement response formatters according to MCP specifications +6. Add detailed error handling for each endpoint +7. Set up proper HTTP status codes for different scenarios +8. Implement pagination for endpoints that return lists + +Testing approach: +- Unit tests for each endpoint handler +- Integration tests with mock context data +- Test various request formats and edge cases +- Verify response formats match MCP specifications +- Test error handling with invalid inputs +- Benchmark endpoint performance + +## 4. 
Implement Authentication and Authorization System [pending] +### Dependencies: 23.1, 23.3 +### Description: Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality. +### Details: +Implementation steps: +1. Design authentication scheme (API keys, OAuth, JWT, etc.) +2. Implement authentication middleware for all MCP endpoints +3. Create an API key management system for client applications +4. Develop role-based access control for different operations +5. Implement rate limiting to prevent abuse +6. Add secure token validation and handling +7. Create endpoints for managing client credentials +8. Implement audit logging for authentication events + +Testing approach: +- Security testing for authentication mechanisms +- Test access control with various permission levels +- Verify rate limiting functionality +- Test token validation with valid and invalid tokens +- Simulate unauthorized access attempts +- Verify audit logs contain appropriate information + +## 5. Optimize Performance and Finalize Documentation [pending] +### Dependencies: 23.1, 23.2, 23.3, 23.4 +### Description: Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users. +### Details: +Implementation steps: +1. Profile the MCP server to identify performance bottlenecks +2. Implement caching mechanisms for frequently accessed contexts +3. Optimize context serialization and deserialization +4. Add connection pooling for database operations (if applicable) +5. Implement request batching for bulk operations +6. Create comprehensive API documentation with examples +7. Add setup and configuration guides to the Task Master documentation +8. Create example client implementations +9. Add monitoring endpoints for server health and metrics +10. 
Implement graceful degradation under high load + +Testing approach: +- Load testing with simulated concurrent clients +- Measure response times for various operations +- Test with large context sizes to verify performance +- Verify documentation accuracy with sample requests +- Test monitoring endpoints +- Perform stress testing to identify failure points + diff --git a/tasks/tasks.json b/tasks/tasks.json index 5f8ac41f..f8eb211a 100644 --- a/tasks/tasks.json +++ b/tasks/tasks.json @@ -1343,8 +1343,68 @@ 22 ], "priority": "medium", - "details": "This task involves implementing the Model Context Protocol server capabilities within Task Master using FastMCP. The implementation should:\n\n1. Use FastMCP to create the MCP server module (`mcp-server.ts` or equivalent)\n2. Implement the required MCP endpoints using FastMCP:\n - `/context` - For retrieving and updating context\n - `/models` - For listing available models\n - `/execute` - For executing operations with context\n3. Utilize FastMCP's built-in features for context management, including:\n - Efficient context storage and retrieval\n - Context windowing and truncation\n - Metadata and tagging support\n4. Add authentication and authorization mechanisms using FastMCP capabilities\n5. Implement error handling and response formatting as per MCP specifications\n6. Configure Task Master to enable/disable MCP server functionality via FastMCP settings\n7. Add documentation on using Task Master as an MCP server with FastMCP\n8. Ensure compatibility with existing MCP clients by adhering to FastMCP's compliance features\n9. Optimize performance using FastMCP tools, especially for context retrieval operations\n10. Add logging for MCP server operations using FastMCP's logging utilities\n\nThe implementation should follow RESTful API design principles and leverage FastMCP's concurrency handling for multiple client requests. 
Consider using TypeScript for better type safety and integration with FastMCP[1][2].", - "testStrategy": "Testing for the MCP server functionality should include:\n\n1. Unit tests:\n - Test each MCP endpoint handler function independently using FastMCP\n - Verify context storage and retrieval mechanisms provided by FastMCP\n - Test authentication and authorization logic\n - Validate error handling for various failure scenarios\n\n2. Integration tests:\n - Set up a test MCP server instance using FastMCP\n - Test complete request/response cycles for each endpoint\n - Verify context persistence across multiple requests\n - Test with various payload sizes and content types\n\n3. Compatibility tests:\n - Test with existing MCP client libraries\n - Verify compliance with the MCP specification\n - Ensure backward compatibility with any MCP versions supported by FastMCP\n\n4. Performance tests:\n - Measure response times for context operations with various context sizes\n - Test concurrent request handling using FastMCP's concurrency tools\n - Verify memory usage remains within acceptable limits during extended operation\n\n5. Security tests:\n - Verify authentication mechanisms cannot be bypassed\n - Test for common API vulnerabilities (injection, CSRF, etc.)\n\nAll tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman." + "details": "This task involves implementing the Model Context Protocol server capabilities within Task Master. The implementation should:\n\n1. Create a new module `mcp-server.js` that implements the core MCP server functionality\n2. Implement the required MCP endpoints:\n - `/context` - For retrieving and updating context\n - `/models` - For listing available models\n - `/execute` - For executing operations with context\n3. 
Develop a context management system that can:\n - Store and retrieve context data efficiently\n - Handle context windowing and truncation when limits are reached\n - Support context metadata and tagging\n4. Add authentication and authorization mechanisms for MCP clients\n5. Implement proper error handling and response formatting according to MCP specifications\n6. Create configuration options in Task Master to enable/disable the MCP server functionality\n7. Add documentation for how to use Task Master as an MCP server\n8. Ensure the implementation is compatible with existing MCP clients\n9. Optimize for performance, especially for context retrieval operations\n10. Add logging for MCP server operations\n\nThe implementation should follow RESTful API design principles and should be able to handle concurrent requests from multiple clients.", + "testStrategy": "Testing for the MCP server functionality should include:\n\n1. Unit tests:\n - Test each MCP endpoint handler function independently\n - Verify context storage and retrieval mechanisms\n - Test authentication and authorization logic\n - Validate error handling for various failure scenarios\n\n2. Integration tests:\n - Set up a test MCP server instance\n - Test complete request/response cycles for each endpoint\n - Verify context persistence across multiple requests\n - Test with various payload sizes and content types\n\n3. Compatibility tests:\n - Test with existing MCP client libraries\n - Verify compliance with the MCP specification\n - Ensure backward compatibility with any MCP versions supported\n\n4. Performance tests:\n - Measure response times for context operations with various context sizes\n - Test concurrent request handling\n - Verify memory usage remains within acceptable limits during extended operation\n\n5. 
Security tests:\n - Verify authentication mechanisms cannot be bypassed\n - Test for common API vulnerabilities (injection, CSRF, etc.)\n\nAll tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman.", + "subtasks": [ + { + "id": 1, + "title": "Create Core MCP Server Module and Basic Structure", + "description": "Create the foundation for the MCP server implementation by setting up the core module structure, configuration, and server initialization.", + "dependencies": [], + "details": "Implementation steps:\n1. Create a new module `mcp-server.js` with the basic server structure\n2. Implement configuration options to enable/disable the MCP server\n3. Set up Express.js routes for the required MCP endpoints (/context, /models, /execute)\n4. Create middleware for request validation and response formatting\n5. Implement basic error handling according to MCP specifications\n6. Add logging infrastructure for MCP operations\n7. Create initialization and shutdown procedures for the MCP server\n8. Set up integration with the main Task Master application\n\nTesting approach:\n- Unit tests for configuration loading and validation\n- Test server initialization and shutdown procedures\n- Verify that routes are properly registered\n- Test basic error handling with invalid requests", + "status": "done", + "parentTaskId": 23 + }, + { + "id": 2, + "title": "Implement Context Management System", + "description": "Develop a robust context management system that can efficiently store, retrieve, and manipulate context data according to the MCP specification.", + "dependencies": [ + 1 + ], + "details": "Implementation steps:\n1. Design and implement data structures for context storage\n2. Create methods for context creation, retrieval, updating, and deletion\n3. Implement context windowing and truncation algorithms for handling size limits\n4. 
Add support for context metadata and tagging\n5. Create utilities for context serialization and deserialization\n6. Implement efficient indexing for quick context lookups\n7. Add support for context versioning and history\n8. Develop mechanisms for context persistence (in-memory, disk-based, or database)\n\nTesting approach:\n- Unit tests for all context operations (CRUD)\n- Performance tests for context retrieval with various sizes\n- Test context windowing and truncation with edge cases\n- Verify metadata handling and tagging functionality\n- Test persistence mechanisms with simulated failures", + "status": "done", + "parentTaskId": 23 + }, + { + "id": 3, + "title": "Implement MCP Endpoints and API Handlers", + "description": "Develop the complete API handlers for all required MCP endpoints, ensuring they follow the protocol specification and integrate with the context management system.", + "dependencies": [ + 1, + 2 + ], + "details": "Implementation steps:\n1. Implement the `/context` endpoint for:\n - GET: retrieving existing context\n - POST: creating new context\n - PUT: updating existing context\n - DELETE: removing context\n2. Implement the `/models` endpoint to list available models\n3. Develop the `/execute` endpoint for performing operations with context\n4. Create request validators for each endpoint\n5. Implement response formatters according to MCP specifications\n6. Add detailed error handling for each endpoint\n7. Set up proper HTTP status codes for different scenarios\n8. 
Implement pagination for endpoints that return lists\n\nTesting approach:\n- Unit tests for each endpoint handler\n- Integration tests with mock context data\n- Test various request formats and edge cases\n- Verify response formats match MCP specifications\n- Test error handling with invalid inputs\n- Benchmark endpoint performance", + "status": "done", + "parentTaskId": 23 + }, + { + "id": 4, + "title": "Implement Authentication and Authorization System", + "description": "Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality.", + "dependencies": [ + 1, + 3 + ], + "details": "Implementation steps:\n1. Design authentication scheme (API keys, OAuth, JWT, etc.)\n2. Implement authentication middleware for all MCP endpoints\n3. Create an API key management system for client applications\n4. Develop role-based access control for different operations\n5. Implement rate limiting to prevent abuse\n6. Add secure token validation and handling\n7. Create endpoints for managing client credentials\n8. Implement audit logging for authentication events\n\nTesting approach:\n- Security testing for authentication mechanisms\n- Test access control with various permission levels\n- Verify rate limiting functionality\n- Test token validation with valid and invalid tokens\n- Simulate unauthorized access attempts\n- Verify audit logs contain appropriate information", + "status": "pending", + "parentTaskId": 23 + }, + { + "id": 5, + "title": "Optimize Performance and Finalize Documentation", + "description": "Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users.", + "dependencies": [ + 1, + 2, + 3, + 4 + ], + "details": "Implementation steps:\n1. Profile the MCP server to identify performance bottlenecks\n2. Implement caching mechanisms for frequently accessed contexts\n3. 
Optimize context serialization and deserialization\n4. Add connection pooling for database operations (if applicable)\n5. Implement request batching for bulk operations\n6. Create comprehensive API documentation with examples\n7. Add setup and configuration guides to the Task Master documentation\n8. Create example client implementations\n9. Add monitoring endpoints for server health and metrics\n10. Implement graceful degradation under high load\n\nTesting approach:\n- Load testing with simulated concurrent clients\n- Measure response times for various operations\n- Test with large context sizes to verify performance\n- Verify documentation accuracy with sample requests\n- Test monitoring endpoints\n- Perform stress testing to identify failure points", + "status": "pending", + "parentTaskId": 23 + } + ] }, { "id": 24, From 21e74ab8f5f5152877936998ae9f03a2ac379bdc Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Tue, 25 Mar 2025 00:39:20 +0000 Subject: [PATCH 02/16] feat(wip): set up mcp server and tools, but mcp on cursor not working despite working in inspector --- .cursor/mcp.json | 8 + mcp-server/server.js | 12 +- mcp-server/src/api-handlers.js | 970 -------------------------- mcp-server/src/auth.js | 285 -------- mcp-server/src/context-manager.js | 873 ----------------------- mcp-server/src/index.js | 314 +-------- mcp-server/src/logger.js | 68 ++ mcp-server/src/tools/addTask.js | 56 ++ mcp-server/src/tools/expandTask.js | 66 ++ mcp-server/src/tools/index.js | 29 + mcp-server/src/tools/listTasks.js | 51 ++ mcp-server/src/tools/nextTask.js | 45 ++ mcp-server/src/tools/setTaskStatus.js | 52 ++ mcp-server/src/tools/showTask.js | 45 ++ mcp-server/src/tools/utils.js | 90 +++ 15 files changed, 529 insertions(+), 2435 deletions(-) create mode 100644 .cursor/mcp.json delete mode 100644 mcp-server/src/api-handlers.js delete mode 100644 mcp-server/src/auth.js delete mode 100644 mcp-server/src/context-manager.js 
create mode 100644 mcp-server/src/logger.js create mode 100644 mcp-server/src/tools/addTask.js create mode 100644 mcp-server/src/tools/expandTask.js create mode 100644 mcp-server/src/tools/index.js create mode 100644 mcp-server/src/tools/listTasks.js create mode 100644 mcp-server/src/tools/nextTask.js create mode 100644 mcp-server/src/tools/setTaskStatus.js create mode 100644 mcp-server/src/tools/showTask.js create mode 100644 mcp-server/src/tools/utils.js diff --git a/.cursor/mcp.json b/.cursor/mcp.json new file mode 100644 index 00000000..3b7160ae --- /dev/null +++ b/.cursor/mcp.json @@ -0,0 +1,8 @@ +{ + "mcpServers": { + "taskMaster": { + "command": "node", + "args": ["mcp-server/server.js"] + } + } +} diff --git a/mcp-server/server.js b/mcp-server/server.js index ed5c3c69..dfca0f55 100755 --- a/mcp-server/server.js +++ b/mcp-server/server.js @@ -2,15 +2,11 @@ import TaskMasterMCPServer from "./src/index.js"; import dotenv from "dotenv"; -import { logger } from "../scripts/modules/utils.js"; +import logger from "./src/logger.js"; // Load environment variables dotenv.config(); -// Constants -const PORT = process.env.MCP_SERVER_PORT || 3000; -const HOST = process.env.MCP_SERVER_HOST || "localhost"; - /** * Start the MCP server */ @@ -19,21 +15,17 @@ async function startServer() { // Handle graceful shutdown process.on("SIGINT", async () => { - logger.info("Received SIGINT, shutting down gracefully..."); await server.stop(); process.exit(0); }); process.on("SIGTERM", async () => { - logger.info("Received SIGTERM, shutting down gracefully..."); await server.stop(); process.exit(0); }); try { - await server.start({ port: PORT, host: HOST }); - logger.info(`MCP server running at http://${HOST}:${PORT}`); - logger.info("Press Ctrl+C to stop"); + await server.start(); } catch (error) { logger.error(`Failed to start MCP server: ${error.message}`); process.exit(1); diff --git a/mcp-server/src/api-handlers.js b/mcp-server/src/api-handlers.js deleted file mode 100644 index 
ead546f2..00000000 --- a/mcp-server/src/api-handlers.js +++ /dev/null @@ -1,970 +0,0 @@ -import { z } from "zod"; -import { logger } from "../../scripts/modules/utils.js"; -import ContextManager from "./context-manager.js"; - -/** - * MCP API Handlers class - * Implements handlers for the MCP API endpoints - */ -class MCPApiHandlers { - constructor(server) { - this.server = server; - this.contextManager = new ContextManager(); - this.logger = logger; - - // Bind methods - this.registerEndpoints = this.registerEndpoints.bind(this); - this.setupContextHandlers = this.setupContextHandlers.bind(this); - this.setupModelHandlers = this.setupModelHandlers.bind(this); - this.setupExecuteHandlers = this.setupExecuteHandlers.bind(this); - - // Register all handlers - this.registerEndpoints(); - } - - /** - * Register all MCP API endpoints - */ - registerEndpoints() { - this.setupContextHandlers(); - this.setupModelHandlers(); - this.setupExecuteHandlers(); - - this.logger.info("Registered all MCP API endpoint handlers"); - } - - /** - * Set up handlers for the /context endpoint - */ - setupContextHandlers() { - // Add a tool to create context - this.server.addTool({ - name: "createContext", - description: - "Create a new context with the given data and optional metadata", - parameters: z.object({ - contextId: z.string().describe("Unique identifier for the context"), - data: z.any().describe("The context data to store"), - metadata: z - .object({}) - .optional() - .describe("Optional metadata for the context"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.createContext( - args.contextId, - args.data, - args.metadata || {} - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error creating context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to get context - this.server.addTool({ - name: "getContext", - description: - "Retrieve a context by its 
ID, optionally a specific version", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to retrieve"), - versionId: z - .string() - .optional() - .describe("Optional specific version ID to retrieve"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.getContext( - args.contextId, - args.versionId - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error retrieving context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to update context - this.server.addTool({ - name: "updateContext", - description: "Update an existing context with new data and/or metadata", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to update"), - data: z - .any() - .optional() - .describe("New data to update the context with"), - metadata: z - .object({}) - .optional() - .describe("New metadata to update the context with"), - createNewVersion: z - .boolean() - .optional() - .default(true) - .describe( - "Whether to create a new version (true) or update in place (false)" - ), - }), - execute: async (args) => { - try { - const context = await this.contextManager.updateContext( - args.contextId, - args.data || {}, - args.metadata || {}, - args.createNewVersion - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error updating context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to delete context - this.server.addTool({ - name: "deleteContext", - description: "Delete a context by its ID", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to delete"), - }), - execute: async (args) => { - try { - const result = await this.contextManager.deleteContext( - args.contextId - ); - return { success: result }; - } catch (error) { - this.logger.error(`Error deleting context: ${error.message}`); - return 
{ success: false, error: error.message }; - } - }, - }); - - // Add a tool to list contexts with pagination and advanced filtering - this.server.addTool({ - name: "listContexts", - description: - "List available contexts with filtering, pagination and sorting", - parameters: z.object({ - // Filtering parameters - filters: z - .object({ - tag: z.string().optional().describe("Filter contexts by tag"), - metadataKey: z - .string() - .optional() - .describe("Filter contexts by metadata key"), - metadataValue: z - .string() - .optional() - .describe("Filter contexts by metadata value"), - createdAfter: z - .string() - .optional() - .describe("Filter contexts created after date (ISO format)"), - updatedAfter: z - .string() - .optional() - .describe("Filter contexts updated after date (ISO format)"), - }) - .optional() - .describe("Filters to apply to the context list"), - - // Pagination parameters - limit: z - .number() - .optional() - .default(100) - .describe("Maximum number of contexts to return"), - offset: z - .number() - .optional() - .default(0) - .describe("Number of contexts to skip"), - - // Sorting parameters - sortBy: z - .string() - .optional() - .default("updated") - .describe("Field to sort by (id, created, updated, size)"), - sortDirection: z - .enum(["asc", "desc"]) - .optional() - .default("desc") - .describe("Sort direction"), - - // Search query - query: z.string().optional().describe("Free text search query"), - }), - execute: async (args) => { - try { - const result = await this.contextManager.listContexts(args); - return { - success: true, - ...result, - }; - } catch (error) { - this.logger.error(`Error listing contexts: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to get context history - this.server.addTool({ - name: "getContextHistory", - description: "Get the version history of a context", - parameters: z.object({ - contextId: z - .string() - .describe("The ID of the context to get 
history for"), - }), - execute: async (args) => { - try { - const history = await this.contextManager.getContextHistory( - args.contextId - ); - return { - success: true, - history, - contextId: args.contextId, - }; - } catch (error) { - this.logger.error(`Error getting context history: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to merge contexts - this.server.addTool({ - name: "mergeContexts", - description: "Merge multiple contexts into a new context", - parameters: z.object({ - contextIds: z - .array(z.string()) - .describe("Array of context IDs to merge"), - newContextId: z.string().describe("ID for the new merged context"), - metadata: z - .object({}) - .optional() - .describe("Optional metadata for the new context"), - }), - execute: async (args) => { - try { - const mergedContext = await this.contextManager.mergeContexts( - args.contextIds, - args.newContextId, - args.metadata || {} - ); - return { - success: true, - context: mergedContext, - }; - } catch (error) { - this.logger.error(`Error merging contexts: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to add tags to a context - this.server.addTool({ - name: "addTags", - description: "Add tags to a context", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to tag"), - tags: z - .array(z.string()) - .describe("Array of tags to add to the context"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.addTags( - args.contextId, - args.tags - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error adding tags to context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to remove tags from a context - this.server.addTool({ - name: "removeTags", - description: "Remove tags from a context", - parameters: z.object({ - contextId: z - .string() - 
.describe("The ID of the context to remove tags from"), - tags: z - .array(z.string()) - .describe("Array of tags to remove from the context"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.removeTags( - args.contextId, - args.tags - ); - return { success: true, context }; - } catch (error) { - this.logger.error( - `Error removing tags from context: ${error.message}` - ); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to truncate context - this.server.addTool({ - name: "truncateContext", - description: "Truncate a context to a maximum size", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to truncate"), - maxSize: z - .number() - .describe("Maximum size (in characters) for the context"), - strategy: z - .enum(["start", "end", "middle"]) - .default("end") - .describe("Truncation strategy: start, end, or middle"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.truncateContext( - args.contextId, - args.maxSize, - args.strategy - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error truncating context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - this.logger.info("Registered context endpoint handlers"); - } - - /** - * Set up handlers for the /models endpoint - */ - setupModelHandlers() { - // Add a tool to list available models - this.server.addTool({ - name: "listModels", - description: "List all available models with their capabilities", - parameters: z.object({}), - execute: async () => { - // Here we could get models from a more dynamic source - // For now, returning static list of models supported by Task Master - const models = [ - { - id: "claude-3-opus-20240229", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-100k", - ], - }, - { - id: "claude-3-7-sonnet-20250219", - provider: 
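The `truncateContext` tool above accepts a `maxSize` and one of three strategies: `start`, `end`, or `middle`. The diff does not show the removed `ContextManager.truncateContext` implementation, so the following is one plausible reading of those strategies (drop the beginning, drop the end, or keep both ends), not the original code:

```javascript
// Hypothetical sketch of the "start" / "end" / "middle" truncation
// strategies named by the deleted truncateContext tool. The exact
// semantics are an assumption; the diff only shows the tool's schema.
function truncate(text, maxSize, strategy = "end") {
  if (text.length <= maxSize) return text;
  switch (strategy) {
    case "start":
      return text.slice(text.length - maxSize); // keep the newest tail
    case "end":
      return text.slice(0, maxSize); // keep the oldest head
    case "middle": {
      // Keep both ends, cutting the middle out.
      const head = Math.ceil(maxSize / 2);
      const tail = Math.floor(maxSize / 2);
      return text.slice(0, head) + text.slice(text.length - tail);
    }
    default:
      throw new Error(`Unknown truncation strategy: ${strategy}`);
  }
}
```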
"anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-200k", - ], - }, - { - id: "sonar-medium-online", - provider: "perplexity", - capabilities: ["text-generation", "web-search", "research"], - }, - ]; - - return { success: true, models }; - }, - }); - - // Add a tool to get model details - this.server.addTool({ - name: "getModelDetails", - description: "Get detailed information about a specific model", - parameters: z.object({ - modelId: z.string().describe("The ID of the model to get details for"), - }), - execute: async (args) => { - // Here we could get model details from a more dynamic source - // For now, returning static information - const modelsMap = { - "claude-3-opus-20240229": { - id: "claude-3-opus-20240229", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-100k", - ], - maxTokens: 100000, - temperature: { min: 0, max: 1, default: 0.7 }, - pricing: { input: 0.000015, output: 0.000075 }, - }, - "claude-3-7-sonnet-20250219": { - id: "claude-3-7-sonnet-20250219", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-200k", - ], - maxTokens: 200000, - temperature: { min: 0, max: 1, default: 0.7 }, - pricing: { input: 0.000003, output: 0.000015 }, - }, - "sonar-medium-online": { - id: "sonar-medium-online", - provider: "perplexity", - capabilities: ["text-generation", "web-search", "research"], - maxTokens: 4096, - temperature: { min: 0, max: 1, default: 0.7 }, - }, - }; - - const model = modelsMap[args.modelId]; - if (!model) { - return { - success: false, - error: `Model with ID ${args.modelId} not found`, - }; - } - - return { success: true, model }; - }, - }); - - this.logger.info("Registered models endpoint handlers"); - } - - /** - * Set up handlers for the /execute endpoint - */ - setupExecuteHandlers() { - // Add a tool to execute operations with context - this.server.addTool({ - name: "executeWithContext", - description: 
"Execute an operation with the provided context", - parameters: z.object({ - operation: z.string().describe("The operation to execute"), - contextId: z.string().describe("The ID of the context to use"), - parameters: z - .record(z.any()) - .optional() - .describe("Additional parameters for the operation"), - versionId: z - .string() - .optional() - .describe("Optional specific context version to use"), - }), - execute: async (args) => { - try { - // Get the context first, with version if specified - const context = await this.contextManager.getContext( - args.contextId, - args.versionId - ); - - // Execute different operations based on the operation name - switch (args.operation) { - case "generateTask": - return await this.executeGenerateTask(context, args.parameters); - case "expandTask": - return await this.executeExpandTask(context, args.parameters); - case "analyzeComplexity": - return await this.executeAnalyzeComplexity( - context, - args.parameters - ); - case "mergeContexts": - return await this.executeMergeContexts(context, args.parameters); - case "searchContexts": - return await this.executeSearchContexts(args.parameters); - case "extractInsights": - return await this.executeExtractInsights( - context, - args.parameters - ); - case "syncWithRepository": - return await this.executeSyncWithRepository( - context, - args.parameters - ); - default: - return { - success: false, - error: `Unknown operation: ${args.operation}`, - }; - } - } catch (error) { - this.logger.error(`Error executing operation: ${error.message}`); - return { - success: false, - error: error.message, - operation: args.operation, - contextId: args.contextId, - }; - } - }, - }); - - // Add tool for batch operations - this.server.addTool({ - name: "executeBatchOperations", - description: "Execute multiple operations in a single request", - parameters: z.object({ - operations: z - .array( - z.object({ - operation: z.string().describe("The operation to execute"), - contextId: 
z.string().describe("The ID of the context to use"), - parameters: z - .record(z.any()) - .optional() - .describe("Additional parameters"), - versionId: z - .string() - .optional() - .describe("Optional context version"), - }) - ) - .describe("Array of operations to execute in sequence"), - }), - execute: async (args) => { - const results = []; - let hasErrors = false; - - for (const op of args.operations) { - try { - const context = await this.contextManager.getContext( - op.contextId, - op.versionId - ); - - let result; - switch (op.operation) { - case "generateTask": - result = await this.executeGenerateTask(context, op.parameters); - break; - case "expandTask": - result = await this.executeExpandTask(context, op.parameters); - break; - case "analyzeComplexity": - result = await this.executeAnalyzeComplexity( - context, - op.parameters - ); - break; - case "mergeContexts": - result = await this.executeMergeContexts( - context, - op.parameters - ); - break; - case "searchContexts": - result = await this.executeSearchContexts(op.parameters); - break; - case "extractInsights": - result = await this.executeExtractInsights( - context, - op.parameters - ); - break; - case "syncWithRepository": - result = await this.executeSyncWithRepository( - context, - op.parameters - ); - break; - default: - result = { - success: false, - error: `Unknown operation: ${op.operation}`, - }; - hasErrors = true; - } - - results.push({ - operation: op.operation, - contextId: op.contextId, - result: result, - }); - - if (!result.success) { - hasErrors = true; - } - } catch (error) { - this.logger.error( - `Error in batch operation ${op.operation}: ${error.message}` - ); - results.push({ - operation: op.operation, - contextId: op.contextId, - result: { - success: false, - error: error.message, - }, - }); - hasErrors = true; - } - } - - return { - success: !hasErrors, - results: results, - }; - }, - }); - - this.logger.info("Registered execute endpoint handlers"); - } - - /** - * Execute 
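The `executeBatchOperations` tool runs its operations sequentially, records a per-operation result even on failure, and reports overall `success` only when nothing errored. The core loop can be sketched as follows, with `runOne` standing in for the per-operation `switch` dispatch in the original handler:

```javascript
// Sketch of the batch-execution loop from the deleted
// executeBatchOperations tool: sequential execution, per-op results,
// and a hasErrors flag that flips overall success.
async function executeBatch(operations, runOne) {
  const results = [];
  let hasErrors = false;

  for (const op of operations) {
    try {
      const result = await runOne(op);
      results.push({ operation: op.operation, result });
      if (!result.success) hasErrors = true;
    } catch (error) {
      // A throw in one operation does not abort the batch.
      results.push({
        operation: op.operation,
        result: { success: false, error: error.message },
      });
      hasErrors = true;
    }
  }

  return { success: !hasErrors, results };
}
```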
the generateTask operation - * @param {object} context - The context to use - * @param {object} parameters - Additional parameters - * @returns {Promise} The result of the operation - */ - async executeGenerateTask(context, parameters = {}) { - // This is a placeholder for actual task generation logic - // In a real implementation, this would use Task Master's task generation - - this.logger.info(`Generating task with context ${context.id}`); - - // Improved task generation with more detailed result - const task = { - id: Math.floor(Math.random() * 1000), - title: parameters.title || "New Task", - description: parameters.description || "Task generated from context", - status: "pending", - dependencies: parameters.dependencies || [], - priority: parameters.priority || "medium", - details: `This task was generated using context ${ - context.id - }.\n\n${JSON.stringify(context.data, null, 2)}`, - metadata: { - generatedAt: new Date().toISOString(), - generatedFrom: context.id, - contextVersion: context.metadata.version, - generatedBy: parameters.user || "system", - }, - }; - - return { - success: true, - task, - contextUsed: { - id: context.id, - version: context.metadata.version, - }, - }; - } - - /** - * Execute the expandTask operation - * @param {object} context - The context to use - * @param {object} parameters - Additional parameters - * @returns {Promise} The result of the operation - */ - async executeExpandTask(context, parameters = {}) { - // This is a placeholder for actual task expansion logic - // In a real implementation, this would use Task Master's task expansion - - this.logger.info(`Expanding task with context ${context.id}`); - - // Enhanced task expansion with more configurable options - const numSubtasks = parameters.numSubtasks || 3; - const subtaskPrefix = parameters.subtaskPrefix || ""; - const subtasks = []; - - for (let i = 1; i <= numSubtasks; i++) { - subtasks.push({ - id: `${subtaskPrefix}${i}`, - title: parameters.titleTemplate - ? 
parameters.titleTemplate.replace("{i}", i) - : `Subtask ${i}`, - description: parameters.descriptionTemplate - ? parameters.descriptionTemplate - .replace("{i}", i) - .replace("{taskId}", parameters.taskId || "unknown") - : `Subtask ${i} for ${parameters.taskId || "unknown task"}`, - dependencies: i > 1 ? [i - 1] : [], - status: "pending", - metadata: { - expandedAt: new Date().toISOString(), - expandedFrom: context.id, - contextVersion: context.metadata.version, - expandedBy: parameters.user || "system", - }, - }); - } - - return { - success: true, - taskId: parameters.taskId, - subtasks, - contextUsed: { - id: context.id, - version: context.metadata.version, - }, - }; - } - - /** - * Execute the analyzeComplexity operation - * @param {object} context - The context to use - * @param {object} parameters - Additional parameters - * @returns {Promise} The result of the operation - */ - async executeAnalyzeComplexity(context, parameters = {}) { - // This is a placeholder for actual complexity analysis logic - // In a real implementation, this would use Task Master's complexity analysis - - this.logger.info(`Analyzing complexity with context ${context.id}`); - - // Enhanced complexity analysis with more detailed factors - const complexityScore = Math.floor(Math.random() * 10) + 1; - const recommendedSubtasks = Math.floor(complexityScore / 2) + 1; - - // More detailed analysis with weighted factors - const factors = [ - { - name: "Task scope breadth", - score: Math.floor(Math.random() * 10) + 1, - weight: 0.3, - description: "How broad is the scope of this task", - }, - { - name: "Technical complexity", - score: Math.floor(Math.random() * 10) + 1, - weight: 0.4, - description: "How technically complex is the implementation", - }, - { - name: "External dependencies", - score: Math.floor(Math.random() * 10) + 1, - weight: 0.2, - description: "How many external dependencies does this task have", - }, - { - name: "Risk assessment", - score: Math.floor(Math.random() * 10) + 
1, - weight: 0.1, - description: "What is the risk level of this task", - }, - ]; - - return { - success: true, - analysis: { - taskId: parameters.taskId || "unknown", - complexityScore, - recommendedSubtasks, - factors, - recommendedTimeEstimate: `${complexityScore * 2}-${ - complexityScore * 4 - } hours`, - metadata: { - analyzedAt: new Date().toISOString(), - analyzedUsing: context.id, - contextVersion: context.metadata.version, - analyzedBy: parameters.user || "system", - }, - }, - contextUsed: { - id: context.id, - version: context.metadata.version, - }, - }; - } - - /** - * Execute the mergeContexts operation - * @param {object} primaryContext - The primary context to use - * @param {object} parameters - Additional parameters - * @returns {Promise} The result of the operation - */ - async executeMergeContexts(primaryContext, parameters = {}) { - this.logger.info( - `Merging contexts with primary context ${primaryContext.id}` - ); - - if ( - !parameters.contextIds || - !Array.isArray(parameters.contextIds) || - parameters.contextIds.length === 0 - ) { - return { - success: false, - error: "No context IDs provided for merging", - }; - } - - if (!parameters.newContextId) { - return { - success: false, - error: "New context ID is required for the merged context", - }; - } - - try { - // Add the primary context to the list if not already included - if (!parameters.contextIds.includes(primaryContext.id)) { - parameters.contextIds.unshift(primaryContext.id); - } - - const mergedContext = await this.contextManager.mergeContexts( - parameters.contextIds, - parameters.newContextId, - { - mergedAt: new Date().toISOString(), - mergedBy: parameters.user || "system", - mergeStrategy: parameters.strategy || "concatenate", - ...parameters.metadata, - } - ); - - return { - success: true, - mergedContext, - sourceContexts: parameters.contextIds, - }; - } catch (error) { - this.logger.error(`Error merging contexts: ${error.message}`); - return { - success: false, - error: 
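The complexity factors above carry weights that sum to 1.0 (0.3 + 0.4 + 0.2 + 0.1), but the placeholder draws `complexityScore` at random rather than deriving it from them. A weighted average is the natural way to make those weights meaningful; this helper is an illustration of that idea, not the original behavior:

```javascript
// Illustrative only: the deleted analyzeComplexity stub stores weighted
// factors but picks its overall score randomly. A weighted average would
// actually use the weights.
function weightedComplexity(factors) {
  const totalWeight = factors.reduce((sum, f) => sum + f.weight, 0);
  const weighted = factors.reduce((sum, f) => sum + f.score * f.weight, 0);
  // Normalize defensively in case the weights do not sum exactly to 1.
  return weighted / totalWeight;
}
```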
error.message, - }; - } - } - - /** - * Execute the searchContexts operation - * @param {object} parameters - Search parameters - * @returns {Promise} The result of the operation - */ - async executeSearchContexts(parameters = {}) { - this.logger.info( - `Searching contexts with query: ${parameters.query || ""}` - ); - - try { - const searchResults = await this.contextManager.listContexts({ - query: parameters.query || "", - filters: parameters.filters || {}, - limit: parameters.limit || 100, - offset: parameters.offset || 0, - sortBy: parameters.sortBy || "updated", - sortDirection: parameters.sortDirection || "desc", - }); - - return { - success: true, - ...searchResults, - }; - } catch (error) { - this.logger.error(`Error searching contexts: ${error.message}`); - return { - success: false, - error: error.message, - }; - } - } - - /** - * Execute the extractInsights operation - * @param {object} context - The context to analyze - * @param {object} parameters - Additional parameters - * @returns {Promise} The result of the operation - */ - async executeExtractInsights(context, parameters = {}) { - this.logger.info(`Extracting insights from context ${context.id}`); - - // Placeholder for actual insight extraction - // In a real implementation, this would perform analysis on the context data - - const insights = [ - { - type: "summary", - content: `Summary of context ${context.id}`, - confidence: 0.85, - }, - { - type: "key_points", - content: ["First key point", "Second key point", "Third key point"], - confidence: 0.78, - }, - { - type: "recommendations", - content: ["First recommendation", "Second recommendation"], - confidence: 0.72, - }, - ]; - - return { - success: true, - insights, - contextUsed: { - id: context.id, - version: context.metadata.version, - }, - metadata: { - extractedAt: new Date().toISOString(), - model: parameters.model || "default", - extractedBy: parameters.user || "system", - }, - }; - } - - /** - * Execute the syncWithRepository operation 
- * @param {object} context - The context to sync - * @param {object} parameters - Additional parameters - * @returns {Promise} The result of the operation - */ - async executeSyncWithRepository(context, parameters = {}) { - this.logger.info(`Syncing context ${context.id} with repository`); - - // Placeholder for actual repository sync - // In a real implementation, this would sync the context with an external repository - - return { - success: true, - syncStatus: "complete", - syncedTo: parameters.repository || "default", - syncTimestamp: new Date().toISOString(), - contextUsed: { - id: context.id, - version: context.metadata.version, - }, - }; - } -} - -export default MCPApiHandlers; diff --git a/mcp-server/src/auth.js b/mcp-server/src/auth.js deleted file mode 100644 index 22c36973..00000000 --- a/mcp-server/src/auth.js +++ /dev/null @@ -1,285 +0,0 @@ -import jwt from "jsonwebtoken"; -import { logger } from "../../scripts/modules/utils.js"; -import crypto from "crypto"; -import fs from "fs/promises"; -import path from "path"; -import { fileURLToPath } from "url"; - -// Constants -const __filename = fileURLToPath(import.meta.url); -const __dirname = path.dirname(__filename); -const API_KEYS_FILE = - process.env.MCP_API_KEYS_FILE || path.join(__dirname, "../api-keys.json"); -const JWT_SECRET = - process.env.MCP_JWT_SECRET || "task-master-mcp-server-secret"; -const JWT_EXPIRATION = process.env.MCP_JWT_EXPIRATION || "24h"; - -/** - * Authentication middleware and utilities for MCP server - */ -class MCPAuth { - constructor() { - this.apiKeys = new Map(); - this.logger = logger; - this.loadApiKeys(); - } - - /** - * Load API keys from disk - */ - async loadApiKeys() { - try { - // Create API keys file if it doesn't exist - try { - await fs.access(API_KEYS_FILE); - } catch (error) { - // File doesn't exist, create it with a default admin key - const defaultApiKey = this.generateApiKey(); - const defaultApiKeys = { - keys: [ - { - id: "admin", - key: defaultApiKey, - 
role: "admin", - created: new Date().toISOString(), - }, - ], - }; - - await fs.mkdir(path.dirname(API_KEYS_FILE), { recursive: true }); - await fs.writeFile( - API_KEYS_FILE, - JSON.stringify(defaultApiKeys, null, 2), - "utf8" - ); - - this.logger.info( - `Created default API keys file with admin key: ${defaultApiKey}` - ); - } - - // Load API keys - const data = await fs.readFile(API_KEYS_FILE, "utf8"); - const apiKeys = JSON.parse(data); - - apiKeys.keys.forEach((key) => { - this.apiKeys.set(key.key, { - id: key.id, - role: key.role, - created: key.created, - }); - }); - - this.logger.info(`Loaded ${this.apiKeys.size} API keys`); - } catch (error) { - this.logger.error(`Failed to load API keys: ${error.message}`); - throw error; - } - } - - /** - * Save API keys to disk - */ - async saveApiKeys() { - try { - const keys = []; - - this.apiKeys.forEach((value, key) => { - keys.push({ - id: value.id, - key, - role: value.role, - created: value.created, - }); - }); - - await fs.writeFile( - API_KEYS_FILE, - JSON.stringify({ keys }, null, 2), - "utf8" - ); - - this.logger.info(`Saved ${keys.length} API keys`); - } catch (error) { - this.logger.error(`Failed to save API keys: ${error.message}`); - throw error; - } - } - - /** - * Generate a new API key - * @returns {string} The generated API key - */ - generateApiKey() { - return crypto.randomBytes(32).toString("hex"); - } - - /** - * Create a new API key - * @param {string} id - Client identifier - * @param {string} role - Client role (admin, user) - * @returns {string} The generated API key - */ - async createApiKey(id, role = "user") { - const apiKey = this.generateApiKey(); - - this.apiKeys.set(apiKey, { - id, - role, - created: new Date().toISOString(), - }); - - await this.saveApiKeys(); - - this.logger.info(`Created new API key for ${id} with role ${role}`); - return apiKey; - } - - /** - * Revoke an API key - * @param {string} apiKey - The API key to revoke - * @returns {boolean} True if the key was revoked - 
*/ - async revokeApiKey(apiKey) { - if (!this.apiKeys.has(apiKey)) { - return false; - } - - this.apiKeys.delete(apiKey); - await this.saveApiKeys(); - - this.logger.info(`Revoked API key`); - return true; - } - - /** - * Validate an API key - * @param {string} apiKey - The API key to validate - * @returns {object|null} The API key details if valid, null otherwise - */ - validateApiKey(apiKey) { - return this.apiKeys.get(apiKey) || null; - } - - /** - * Generate a JWT token for a client - * @param {string} clientId - Client identifier - * @param {string} role - Client role - * @returns {string} The JWT token - */ - generateToken(clientId, role) { - return jwt.sign({ clientId, role }, JWT_SECRET, { - expiresIn: JWT_EXPIRATION, - }); - } - - /** - * Verify a JWT token - * @param {string} token - The JWT token to verify - * @returns {object|null} The token payload if valid, null otherwise - */ - verifyToken(token) { - try { - return jwt.verify(token, JWT_SECRET); - } catch (error) { - this.logger.error(`Failed to verify token: ${error.message}`); - return null; - } - } - - /** - * Express middleware for API key authentication - * @param {object} req - Express request object - * @param {object} res - Express response object - * @param {function} next - Express next function - */ - authenticateApiKey(req, res, next) { - const apiKey = req.headers["x-api-key"]; - - if (!apiKey) { - return res.status(401).json({ - success: false, - error: "API key is required", - }); - } - - const keyDetails = this.validateApiKey(apiKey); - - if (!keyDetails) { - return res.status(401).json({ - success: false, - error: "Invalid API key", - }); - } - - // Attach client info to request - req.client = { - id: keyDetails.id, - role: keyDetails.role, - }; - - next(); - } - - /** - * Express middleware for JWT authentication - * @param {object} req - Express request object - * @param {object} res - Express response object - * @param {function} next - Express next function - */ - 
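The key lifecycle in the deleted auth module is small: keys are 32 random bytes rendered as hex via `crypto.randomBytes(32).toString("hex")`, stored in a `Map` keyed by the key string, and validated by a plain lookup. Stripped of the disk persistence and logging, the pattern looks like this:

```javascript
import crypto from "crypto";

// Sketch of the API-key lifecycle from the deleted auth.js:
// generate 64 hex characters, store metadata in a Map keyed by the key,
// validate by lookup. Persistence to api-keys.json is omitted here.
function generateApiKey() {
  return crypto.randomBytes(32).toString("hex"); // 64 hex characters
}

const apiKeys = new Map();

function createApiKey(id, role = "user") {
  const apiKey = generateApiKey();
  apiKeys.set(apiKey, { id, role, created: new Date().toISOString() });
  return apiKey;
}

function validateApiKey(apiKey) {
  return apiKeys.get(apiKey) || null;
}
```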
authenticateToken(req, res, next) { - const authHeader = req.headers["authorization"]; - const token = authHeader && authHeader.split(" ")[1]; - - if (!token) { - return res.status(401).json({ - success: false, - error: "Authentication token is required", - }); - } - - const payload = this.verifyToken(token); - - if (!payload) { - return res.status(401).json({ - success: false, - error: "Invalid or expired token", - }); - } - - // Attach client info to request - req.client = { - id: payload.clientId, - role: payload.role, - }; - - next(); - } - - /** - * Express middleware for role-based authorization - * @param {Array} roles - Array of allowed roles - * @returns {function} Express middleware - */ - authorizeRoles(roles) { - return (req, res, next) => { - if (!req.client || !req.client.role) { - return res.status(401).json({ - success: false, - error: "Unauthorized: Authentication required", - }); - } - - if (!roles.includes(req.client.role)) { - return res.status(403).json({ - success: false, - error: "Forbidden: Insufficient permissions", - }); - } - - next(); - }; - } -} - -export default MCPAuth; diff --git a/mcp-server/src/context-manager.js b/mcp-server/src/context-manager.js deleted file mode 100644 index 5b94b538..00000000 --- a/mcp-server/src/context-manager.js +++ /dev/null @@ -1,873 +0,0 @@ -import { logger } from "../../scripts/modules/utils.js"; -import fs from "fs/promises"; -import path from "path"; -import { fileURLToPath } from "url"; -import crypto from "crypto"; -import Fuse from "fuse.js"; - -// Constants -const __filename = fileURLToPath(import.meta.url); -const __dirname = path.dirname(__filename); -const CONTEXT_DIR = - process.env.MCP_CONTEXT_DIR || path.join(__dirname, "../contexts"); -const MAX_CONTEXT_HISTORY = parseInt( - process.env.MCP_MAX_CONTEXT_HISTORY || "10", - 10 -); - -/** - * Context Manager for MCP server - * Handles storage, retrieval, and manipulation of context data - * Implements efficient indexing, versioning, and 
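The `authorizeRoles` middleware deleted above is a factory that closes over the allowed roles and returns an Express-style `(req, res, next)` handler: 401 when no authenticated client is attached, 403 when the client's role is not in the list. A self-contained version of the same logic:

```javascript
// Role-based authorization middleware, as in the deleted auth.js:
// assumes an earlier middleware attached req.client = { id, role }.
function authorizeRoles(roles) {
  return (req, res, next) => {
    if (!req.client || !req.client.role) {
      return res.status(401).json({
        success: false,
        error: "Unauthorized: Authentication required",
      });
    }
    if (!roles.includes(req.client.role)) {
      return res.status(403).json({
        success: false,
        error: "Forbidden: Insufficient permissions",
      });
    }
    next();
  };
}
```

It can be exercised without Express by passing a stub `res` whose `status()` and `json()` record what was sent.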
advanced context operations - */ -class ContextManager { - constructor() { - this.contexts = new Map(); - this.contextHistory = new Map(); // For version history - this.contextIndex = null; // For fuzzy search - this.logger = logger; - this.ensureContextDir(); - this.rebuildSearchIndex(); - } - - /** - * Ensure the contexts directory exists - */ - async ensureContextDir() { - try { - await fs.mkdir(CONTEXT_DIR, { recursive: true }); - this.logger.info(`Context directory ensured at ${CONTEXT_DIR}`); - - // Also create a versions subdirectory for history - await fs.mkdir(path.join(CONTEXT_DIR, "versions"), { recursive: true }); - } catch (error) { - this.logger.error(`Failed to create context directory: ${error.message}`); - throw error; - } - } - - /** - * Rebuild the search index for efficient context lookup - */ - async rebuildSearchIndex() { - await this.loadAllContextsFromDisk(); - - const contextsForIndex = Array.from(this.contexts.values()).map((ctx) => ({ - id: ctx.id, - content: - typeof ctx.data === "string" ? 
ctx.data : JSON.stringify(ctx.data), - tags: ctx.tags.join(" "), - metadata: Object.entries(ctx.metadata) - .map(([k, v]) => `${k}:${v}`) - .join(" "), - })); - - this.contextIndex = new Fuse(contextsForIndex, { - keys: ["id", "content", "tags", "metadata"], - includeScore: true, - threshold: 0.6, - }); - - this.logger.info( - `Rebuilt search index with ${contextsForIndex.length} contexts` - ); - } - - /** - * Create a new context - * @param {string} contextId - Unique identifier for the context - * @param {object|string} contextData - Initial context data - * @param {object} metadata - Optional metadata for the context - * @returns {object} The created context - */ - async createContext(contextId, contextData, metadata = {}) { - if (this.contexts.has(contextId)) { - throw new Error(`Context with ID ${contextId} already exists`); - } - - const timestamp = new Date().toISOString(); - const versionId = this.generateVersionId(); - - const context = { - id: contextId, - data: contextData, - metadata: { - created: timestamp, - updated: timestamp, - version: versionId, - ...metadata, - }, - tags: metadata.tags || [], - size: this.estimateSize(contextData), - }; - - this.contexts.set(contextId, context); - - // Initialize version history - this.contextHistory.set(contextId, [ - { - versionId, - timestamp, - data: JSON.parse(JSON.stringify(contextData)), // Deep clone - metadata: { ...context.metadata }, - }, - ]); - - await this.persistContext(contextId); - await this.persistContextVersion(contextId, versionId); - - // Update the search index - this.rebuildSearchIndex(); - - this.logger.info(`Created context: ${contextId} (version: ${versionId})`); - return context; - } - - /** - * Retrieve a context by ID - * @param {string} contextId - The context ID to retrieve - * @param {string} versionId - Optional specific version to retrieve - * @returns {object} The context object - */ - async getContext(contextId, versionId = null) { - // If specific version requested, try to 
get it from history - if (versionId) { - return this.getContextVersion(contextId, versionId); - } - - // Try to get from memory first - if (this.contexts.has(contextId)) { - return this.contexts.get(contextId); - } - - // Try to load from disk - try { - const context = await this.loadContextFromDisk(contextId); - if (context) { - this.contexts.set(contextId, context); - return context; - } - } catch (error) { - this.logger.error( - `Failed to load context ${contextId}: ${error.message}` - ); - } - - throw new Error(`Context with ID ${contextId} not found`); - } - - /** - * Get a specific version of a context - * @param {string} contextId - The context ID - * @param {string} versionId - The version ID - * @returns {object} The versioned context - */ - async getContextVersion(contextId, versionId) { - // Check if version history is in memory - if (this.contextHistory.has(contextId)) { - const history = this.contextHistory.get(contextId); - const version = history.find((v) => v.versionId === versionId); - if (version) { - return { - id: contextId, - data: version.data, - metadata: version.metadata, - tags: version.metadata.tags || [], - size: this.estimateSize(version.data), - versionId: version.versionId, - }; - } - } - - // Try to load from disk - try { - const versionPath = path.join( - CONTEXT_DIR, - "versions", - `${contextId}_${versionId}.json` - ); - const data = await fs.readFile(versionPath, "utf8"); - const version = JSON.parse(data); - - // Add to memory cache - if (!this.contextHistory.has(contextId)) { - this.contextHistory.set(contextId, []); - } - const history = this.contextHistory.get(contextId); - history.push(version); - - return { - id: contextId, - data: version.data, - metadata: version.metadata, - tags: version.metadata.tags || [], - size: this.estimateSize(version.data), - versionId: version.versionId, - }; - } catch (error) { - this.logger.error( - `Failed to load context version ${contextId}@${versionId}: ${error.message}` - ); - throw new 
Error( - `Context version ${versionId} for ${contextId} not found` - ); - } - } - - /** - * Update an existing context - * @param {string} contextId - The context ID to update - * @param {object|string} contextData - New context data - * @param {object} metadata - Optional metadata updates - * @param {boolean} createNewVersion - Whether to create a new version - * @returns {object} The updated context - */ - async updateContext( - contextId, - contextData, - metadata = {}, - createNewVersion = true - ) { - const context = await this.getContext(contextId); - const timestamp = new Date().toISOString(); - - // Generate a new version ID if requested - const versionId = createNewVersion - ? this.generateVersionId() - : context.metadata.version; - - // Create a backup of the current state for versioning - if (createNewVersion) { - // Store the current version in history - if (!this.contextHistory.has(contextId)) { - this.contextHistory.set(contextId, []); - } - - const history = this.contextHistory.get(contextId); - - // Add current state to history - history.push({ - versionId: context.metadata.version, - timestamp: context.metadata.updated, - data: JSON.parse(JSON.stringify(context.data)), // Deep clone - metadata: { ...context.metadata }, - }); - - // Trim history if it exceeds the maximum size - if (history.length > MAX_CONTEXT_HISTORY) { - const excessVersions = history.splice( - 0, - history.length - MAX_CONTEXT_HISTORY - ); - // Clean up excess versions from disk - for (const version of excessVersions) { - this.removeContextVersionFile(contextId, version.versionId).catch( - (err) => - this.logger.error( - `Failed to remove old version file: ${err.message}` - ) - ); - } - } - - // Persist version - await this.persistContextVersion(contextId, context.metadata.version); - } - - // Update the context - context.data = contextData; - context.metadata = { - ...context.metadata, - ...metadata, - updated: timestamp, - }; - - if (createNewVersion) { - 
context.metadata.version = versionId; - context.metadata.previousVersion = context.metadata.version; - } - - if (metadata.tags) { - context.tags = metadata.tags; - } - - // Update size estimate - context.size = this.estimateSize(contextData); - - this.contexts.set(contextId, context); - await this.persistContext(contextId); - - // Update the search index - this.rebuildSearchIndex(); - - this.logger.info(`Updated context: ${contextId} (version: ${versionId})`); - return context; - } - - /** - * Delete a context and all its versions - * @param {string} contextId - The context ID to delete - * @returns {boolean} True if deletion was successful - */ - async deleteContext(contextId) { - if (!this.contexts.has(contextId)) { - const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); - try { - await fs.access(contextPath); - } catch (error) { - throw new Error(`Context with ID ${contextId} not found`); - } - } - - this.contexts.delete(contextId); - - // Remove from history - const history = this.contextHistory.get(contextId) || []; - this.contextHistory.delete(contextId); - - try { - // Delete main context file - const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); - await fs.unlink(contextPath); - - // Delete all version files - for (const version of history) { - await this.removeContextVersionFile(contextId, version.versionId); - } - - // Update the search index - this.rebuildSearchIndex(); - - this.logger.info(`Deleted context: ${contextId}`); - return true; - } catch (error) { - this.logger.error( - `Failed to delete context files for ${contextId}: ${error.message}` - ); - throw error; - } - } - - /** - * List all available contexts with pagination and advanced filtering - * @param {object} options - Options for listing contexts - * @param {object} options.filters - Filters to apply - * @param {number} options.limit - Maximum number of contexts to return - * @param {number} options.offset - Number of contexts to skip - * @param {string} 
options.sortBy - Field to sort by - * @param {string} options.sortDirection - Sort direction ('asc' or 'desc') - * @param {string} options.query - Free text search query - * @returns {Array} Array of context objects - */ - async listContexts(options = {}) { - // Load all contexts from disk first - await this.loadAllContextsFromDisk(); - - const { - filters = {}, - limit = 100, - offset = 0, - sortBy = "updated", - sortDirection = "desc", - query = "", - } = options; - - let contexts; - - // If there's a search query, use the search index - if (query && this.contextIndex) { - const searchResults = this.contextIndex.search(query); - contexts = searchResults.map((result) => - this.contexts.get(result.item.id) - ); - } else { - contexts = Array.from(this.contexts.values()); - } - - // Apply filters - if (filters.tag) { - contexts = contexts.filter( - (ctx) => ctx.tags && ctx.tags.includes(filters.tag) - ); - } - - if (filters.metadataKey && filters.metadataValue) { - contexts = contexts.filter( - (ctx) => - ctx.metadata && - ctx.metadata[filters.metadataKey] === filters.metadataValue - ); - } - - if (filters.createdAfter) { - const timestamp = new Date(filters.createdAfter); - contexts = contexts.filter( - (ctx) => new Date(ctx.metadata.created) >= timestamp - ); - } - - if (filters.updatedAfter) { - const timestamp = new Date(filters.updatedAfter); - contexts = contexts.filter( - (ctx) => new Date(ctx.metadata.updated) >= timestamp - ); - } - - // Apply sorting - contexts.sort((a, b) => { - let valueA, valueB; - - if (sortBy === "created" || sortBy === "updated") { - valueA = new Date(a.metadata[sortBy]).getTime(); - valueB = new Date(b.metadata[sortBy]).getTime(); - } else if (sortBy === "size") { - valueA = a.size || 0; - valueB = b.size || 0; - } else if (sortBy === "id") { - valueA = a.id; - valueB = b.id; - } else { - valueA = a.metadata[sortBy]; - valueB = b.metadata[sortBy]; - } - - if (valueA === valueB) return 0; - - const sortFactor = sortDirection === "asc" 
? 1 : -1; - return valueA < valueB ? -1 * sortFactor : 1 * sortFactor; - }); - - // Apply pagination - const paginatedContexts = contexts.slice(offset, offset + limit); - - return { - contexts: paginatedContexts, - total: contexts.length, - offset, - limit, - hasMore: offset + limit < contexts.length, - }; - } - - /** - * Get the version history of a context - * @param {string} contextId - The context ID - * @returns {Array} Array of version objects - */ - async getContextHistory(contextId) { - // Ensure context exists - await this.getContext(contextId); - - // Load history if not in memory - if (!this.contextHistory.has(contextId)) { - await this.loadContextHistoryFromDisk(contextId); - } - - const history = this.contextHistory.get(contextId) || []; - - // Return versions in reverse chronological order (newest first) - return history.sort((a, b) => { - const timeA = new Date(a.timestamp).getTime(); - const timeB = new Date(b.timestamp).getTime(); - return timeB - timeA; - }); - } - - /** - * Add tags to a context - * @param {string} contextId - The context ID - * @param {Array} tags - Array of tags to add - * @returns {object} The updated context - */ - async addTags(contextId, tags) { - const context = await this.getContext(contextId); - - const currentTags = context.tags || []; - const uniqueTags = [...new Set([...currentTags, ...tags])]; - - // Update context with new tags - return this.updateContext( - contextId, - context.data, - { - tags: uniqueTags, - }, - false - ); // Don't create a new version for tag updates - } - - /** - * Remove tags from a context - * @param {string} contextId - The context ID - * @param {Array} tags - Array of tags to remove - * @returns {object} The updated context - */ - async removeTags(contextId, tags) { - const context = await this.getContext(contextId); - - const currentTags = context.tags || []; - const newTags = currentTags.filter((tag) => !tags.includes(tag)); - - // Update context with new tags - return this.updateContext( 
- contextId, - context.data, - { - tags: newTags, - }, - false - ); // Don't create a new version for tag updates - } - - /** - * Handle context windowing and truncation - * @param {string} contextId - The context ID - * @param {number} maxSize - Maximum size in tokens/chars - * @param {string} strategy - Truncation strategy ('start', 'end', 'middle') - * @returns {object} The truncated context - */ - async truncateContext(contextId, maxSize, strategy = "end") { - const context = await this.getContext(contextId); - const contextText = - typeof context.data === "string" - ? context.data - : JSON.stringify(context.data); - - if (contextText.length <= maxSize) { - return context; // No truncation needed - } - - let truncatedData; - - switch (strategy) { - case "start": - truncatedData = contextText.slice(contextText.length - maxSize); - break; - case "middle": - const halfSize = Math.floor(maxSize / 2); - truncatedData = - contextText.slice(0, halfSize) + - "...[truncated]..." + - contextText.slice(contextText.length - halfSize); - break; - case "end": - default: - truncatedData = contextText.slice(0, maxSize); - break; - } - - // If original data was an object, try to parse the truncated data - // Otherwise use it as a string - let updatedData; - if (typeof context.data === "object") { - try { - // This may fail if truncation broke JSON structure - updatedData = { - ...context.data, - truncated: true, - truncation_strategy: strategy, - original_size: contextText.length, - truncated_size: truncatedData.length, - }; - } catch (error) { - updatedData = truncatedData; - } - } else { - updatedData = truncatedData; - } - - // Update with truncated data - return this.updateContext( - contextId, - updatedData, - { - truncated: true, - truncation_strategy: strategy, - original_size: contextText.length, - truncated_size: truncatedData.length, - }, - true - ); // Create a new version for the truncated data - } - - /** - * Merge multiple contexts into a new context - * @param 
{Array} contextIds - Array of context IDs to merge - * @param {string} newContextId - ID for the new merged context - * @param {object} metadata - Optional metadata for the new context - * @returns {object} The new merged context - */ - async mergeContexts(contextIds, newContextId, metadata = {}) { - if (contextIds.length === 0) { - throw new Error("At least one context ID must be provided for merging"); - } - - if (this.contexts.has(newContextId)) { - throw new Error(`Context with ID ${newContextId} already exists`); - } - - // Load all contexts to be merged - const contextsToMerge = []; - for (const id of contextIds) { - try { - const context = await this.getContext(id); - contextsToMerge.push(context); - } catch (error) { - this.logger.error( - `Could not load context ${id} for merging: ${error.message}` - ); - throw new Error(`Failed to merge contexts: ${error.message}`); - } - } - - // Check data types and decide how to merge - const allStrings = contextsToMerge.every((c) => typeof c.data === "string"); - const allObjects = contextsToMerge.every( - (c) => typeof c.data === "object" && c.data !== null - ); - - let mergedData; - - if (allStrings) { - // Merge strings with newlines between them - mergedData = contextsToMerge.map((c) => c.data).join("\n\n"); - } else if (allObjects) { - // Merge objects by combining their properties - mergedData = {}; - for (const context of contextsToMerge) { - mergedData = { ...mergedData, ...context.data }; - } - } else { - // Convert everything to strings and concatenate - mergedData = contextsToMerge - .map((c) => - typeof c.data === "string" ? 
c.data : JSON.stringify(c.data) - ) - .join("\n\n"); - } - - // Collect all tags from merged contexts - const allTags = new Set(); - for (const context of contextsToMerge) { - for (const tag of context.tags || []) { - allTags.add(tag); - } - } - - // Create merged metadata - const mergedMetadata = { - ...metadata, - tags: [...allTags], - merged_from: contextIds, - merged_at: new Date().toISOString(), - }; - - // Create the new merged context - return this.createContext(newContextId, mergedData, mergedMetadata); - } - - /** - * Persist a context to disk - * @param {string} contextId - The context ID to persist - * @returns {Promise} - */ - async persistContext(contextId) { - const context = this.contexts.get(contextId); - if (!context) { - throw new Error(`Context with ID ${contextId} not found`); - } - - const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); - try { - await fs.writeFile(contextPath, JSON.stringify(context, null, 2), "utf8"); - this.logger.debug(`Persisted context ${contextId} to disk`); - } catch (error) { - this.logger.error( - `Failed to persist context ${contextId}: ${error.message}` - ); - throw error; - } - } - - /** - * Persist a context version to disk - * @param {string} contextId - The context ID - * @param {string} versionId - The version ID - * @returns {Promise} - */ - async persistContextVersion(contextId, versionId) { - if (!this.contextHistory.has(contextId)) { - throw new Error(`Context history for ${contextId} not found`); - } - - const history = this.contextHistory.get(contextId); - const version = history.find((v) => v.versionId === versionId); - - if (!version) { - throw new Error(`Version ${versionId} of context ${contextId} not found`); - } - - const versionPath = path.join( - CONTEXT_DIR, - "versions", - `${contextId}_${versionId}.json` - ); - try { - await fs.writeFile(versionPath, JSON.stringify(version, null, 2), "utf8"); - this.logger.debug( - `Persisted context version ${contextId}@${versionId} to disk` - ); - 
} catch (error) { - this.logger.error( - `Failed to persist context version ${contextId}@${versionId}: ${error.message}` - ); - throw error; - } - } - - /** - * Remove a context version file from disk - * @param {string} contextId - The context ID - * @param {string} versionId - The version ID - * @returns {Promise} - */ - async removeContextVersionFile(contextId, versionId) { - const versionPath = path.join( - CONTEXT_DIR, - "versions", - `${contextId}_${versionId}.json` - ); - try { - await fs.unlink(versionPath); - this.logger.debug( - `Removed context version file ${contextId}@${versionId}` - ); - } catch (error) { - if (error.code !== "ENOENT") { - this.logger.error( - `Failed to remove context version file ${contextId}@${versionId}: ${error.message}` - ); - throw error; - } - } - } - - /** - * Load a context from disk - * @param {string} contextId - The context ID to load - * @returns {Promise} The loaded context - */ - async loadContextFromDisk(contextId) { - const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); - try { - const data = await fs.readFile(contextPath, "utf8"); - const context = JSON.parse(data); - this.logger.debug(`Loaded context ${contextId} from disk`); - return context; - } catch (error) { - this.logger.error( - `Failed to load context ${contextId} from disk: ${error.message}` - ); - throw error; - } - } - - /** - * Load context history from disk - * @param {string} contextId - The context ID - * @returns {Promise} The loaded history - */ - async loadContextHistoryFromDisk(contextId) { - try { - const files = await fs.readdir(path.join(CONTEXT_DIR, "versions")); - const versionFiles = files.filter( - (file) => file.startsWith(`${contextId}_`) && file.endsWith(".json") - ); - - const history = []; - - for (const file of versionFiles) { - try { - const data = await fs.readFile( - path.join(CONTEXT_DIR, "versions", file), - "utf8" - ); - const version = JSON.parse(data); - history.push(version); - } catch (error) { - 
this.logger.error( - `Failed to load context version file ${file}: ${error.message}` - ); - } - } - - this.contextHistory.set(contextId, history); - this.logger.debug( - `Loaded ${history.length} versions for context ${contextId}` - ); - - return history; - } catch (error) { - this.logger.error( - `Failed to load context history for ${contextId}: ${error.message}` - ); - this.contextHistory.set(contextId, []); - return []; - } - } - - /** - * Load all contexts from disk - * @returns {Promise} - */ - async loadAllContextsFromDisk() { - try { - const files = await fs.readdir(CONTEXT_DIR); - const contextFiles = files.filter((file) => file.endsWith(".json")); - - for (const file of contextFiles) { - const contextId = path.basename(file, ".json"); - if (!this.contexts.has(contextId)) { - try { - const context = await this.loadContextFromDisk(contextId); - this.contexts.set(contextId, context); - } catch (error) { - // Already logged in loadContextFromDisk - } - } - } - - this.logger.info(`Loaded ${this.contexts.size} contexts from disk`); - } catch (error) { - this.logger.error(`Failed to load contexts from disk: ${error.message}`); - throw error; - } - } - - /** - * Generate a unique version ID - * @returns {string} A unique version ID - */ - generateVersionId() { - return crypto.randomBytes(8).toString("hex"); - } - - /** - * Estimate the size of context data - * @param {object|string} data - The context data - * @returns {number} Estimated size in bytes - */ - estimateSize(data) { - if (typeof data === "string") { - return Buffer.byteLength(data, "utf8"); - } - - if (typeof data === "object" && data !== null) { - return Buffer.byteLength(JSON.stringify(data), "utf8"); - } - - return 0; - } -} - -export default ContextManager; diff --git a/mcp-server/src/index.js b/mcp-server/src/index.js index eb820f95..3fe17b58 100644 --- a/mcp-server/src/index.js +++ b/mcp-server/src/index.js @@ -1,16 +1,10 @@ import { FastMCP } from "fastmcp"; -import { z } from "zod"; import 
path from "path"; -import fs from "fs/promises"; import dotenv from "dotenv"; import { fileURLToPath } from "url"; -import express from "express"; -import cors from "cors"; -import helmet from "helmet"; -import { logger } from "../../scripts/modules/utils.js"; -import MCPAuth from "./auth.js"; -import MCPApiHandlers from "./api-handlers.js"; -import ContextManager from "./context-manager.js"; +import fs from "fs"; +import logger from "./logger.js"; +import { registerTaskMasterTools } from "./tools/index.js"; // Load environment variables dotenv.config(); @@ -18,25 +12,27 @@ dotenv.config(); // Constants const __filename = fileURLToPath(import.meta.url); const __dirname = path.dirname(__filename); -const DEFAULT_PORT = process.env.MCP_SERVER_PORT || 3000; -const DEFAULT_HOST = process.env.MCP_SERVER_HOST || "localhost"; /** * Main MCP server class that integrates with Task Master */ class TaskMasterMCPServer { - constructor(options = {}) { + constructor() { + // Get version from package.json using synchronous fs + const packagePath = path.join(__dirname, "../../package.json"); + const packageJson = JSON.parse(fs.readFileSync(packagePath, "utf8")); + this.options = { name: "Task Master MCP Server", - version: process.env.PROJECT_VERSION || "1.0.0", - ...options, + version: packageJson.version, }; this.server = new FastMCP(this.options); - this.expressApp = null; this.initialized = false; - this.auth = new MCPAuth(); - this.contextManager = new ContextManager(); + + // this.server.addResource({}); + + // this.server.addResourceTemplate({}); // Bind methods this.init = this.init.bind(this); @@ -53,301 +49,27 @@ class TaskMasterMCPServer { async init() { if (this.initialized) return; - this.logger.info("Initializing Task Master MCP server..."); - - // Set up express for additional customization if needed - this.expressApp = express(); - this.expressApp.use(cors()); - this.expressApp.use(helmet()); - this.expressApp.use(express.json()); - - // Set up authentication 
middleware - this.setupAuthentication(); - - // Register API handlers - this.apiHandlers = new MCPApiHandlers(this.server); - - // Register additional task master specific tools - this.registerTaskMasterTools(); + // Register Task Master tools + registerTaskMasterTools(this.server); this.initialized = true; - this.logger.info("Task Master MCP server initialized successfully"); return this; } - /** - * Set up authentication for the MCP server - */ - setupAuthentication() { - // Add a health check endpoint that doesn't require authentication - this.expressApp.get("/health", (req, res) => { - res.status(200).json({ - status: "ok", - service: this.options.name, - version: this.options.version, - }); - }); - - // Add an authenticate endpoint to get a JWT token using an API key - this.expressApp.post("/auth/token", async (req, res) => { - const apiKey = req.headers["x-api-key"]; - - if (!apiKey) { - return res.status(401).json({ - success: false, - error: "API key is required", - }); - } - - const keyDetails = this.auth.validateApiKey(apiKey); - - if (!keyDetails) { - return res.status(401).json({ - success: false, - error: "Invalid API key", - }); - } - - const token = this.auth.generateToken(keyDetails.id, keyDetails.role); - - res.status(200).json({ - success: true, - token, - expiresIn: process.env.MCP_JWT_EXPIRATION || "24h", - clientId: keyDetails.id, - role: keyDetails.role, - }); - }); - - // Create authenticator middleware for FastMCP - this.server.setAuthenticator((request) => { - // Get token from Authorization header - const authHeader = request.headers?.authorization; - if (!authHeader || !authHeader.startsWith("Bearer ")) { - return null; - } - - const token = authHeader.split(" ")[1]; - const payload = this.auth.verifyToken(token); - - if (!payload) { - return null; - } - - return { - clientId: payload.clientId, - role: payload.role, - }; - }); - - // Set up a protected route for API key management (admin only) - this.expressApp.post( - "/auth/api-keys", - 
(req, res, next) => { - this.auth.authenticateToken(req, res, next); - }, - (req, res, next) => { - this.auth.authorizeRoles(["admin"])(req, res, next); - }, - async (req, res) => { - const { clientId, role } = req.body; - - if (!clientId) { - return res.status(400).json({ - success: false, - error: "Client ID is required", - }); - } - - try { - const apiKey = await this.auth.createApiKey(clientId, role || "user"); - - res.status(201).json({ - success: true, - apiKey, - clientId, - role: role || "user", - }); - } catch (error) { - this.logger.error(`Error creating API key: ${error.message}`); - - res.status(500).json({ - success: false, - error: "Failed to create API key", - }); - } - } - ); - - this.logger.info("Set up MCP authentication"); - } - - /** - * Register Task Master specific tools with the MCP server - */ - registerTaskMasterTools() { - // Add a tool to get tasks from Task Master - this.server.addTool({ - name: "listTasks", - description: "List all tasks from Task Master", - parameters: z.object({ - status: z.string().optional().describe("Filter tasks by status"), - withSubtasks: z - .boolean() - .optional() - .describe("Include subtasks in the response"), - }), - execute: async (args) => { - try { - // In a real implementation, this would use the Task Master API - // to fetch tasks. For now, returning mock data. 
- - this.logger.info( - `Listing tasks with filters: ${JSON.stringify(args)}` - ); - - // Mock task data - const tasks = [ - { - id: 1, - title: "Implement Task Data Structure", - status: "done", - dependencies: [], - priority: "high", - }, - { - id: 2, - title: "Develop Command Line Interface Foundation", - status: "done", - dependencies: [1], - priority: "high", - }, - { - id: 23, - title: "Implement MCP Server Functionality", - status: "in-progress", - dependencies: [22], - priority: "medium", - subtasks: [ - { - id: "23.1", - title: "Create Core MCP Server Module", - status: "in-progress", - dependencies: [], - }, - { - id: "23.2", - title: "Implement Context Management System", - status: "pending", - dependencies: ["23.1"], - }, - ], - }, - ]; - - // Apply status filter if provided - let filteredTasks = tasks; - if (args.status) { - filteredTasks = tasks.filter((task) => task.status === args.status); - } - - // Remove subtasks if not requested - if (!args.withSubtasks) { - filteredTasks = filteredTasks.map((task) => { - const { subtasks, ...taskWithoutSubtasks } = task; - return taskWithoutSubtasks; - }); - } - - return { success: true, tasks: filteredTasks }; - } catch (error) { - this.logger.error(`Error listing tasks: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to get task details - this.server.addTool({ - name: "getTaskDetails", - description: "Get detailed information about a specific task", - parameters: z.object({ - taskId: z - .union([z.number(), z.string()]) - .describe("The ID of the task to get details for"), - }), - execute: async (args) => { - try { - // In a real implementation, this would use the Task Master API - // to fetch task details. For now, returning mock data. 
- - this.logger.info(`Getting details for task ${args.taskId}`); - - // Mock task details - const taskDetails = { - id: 23, - title: "Implement MCP Server Functionality", - description: - "Extend Task Master to function as an MCP server, allowing it to provide context management services to other applications.", - status: "in-progress", - dependencies: [22], - priority: "medium", - details: - "This task involves implementing the Model Context Protocol server capabilities within Task Master.", - testStrategy: - "Testing should include unit tests, integration tests, and compatibility tests.", - subtasks: [ - { - id: "23.1", - title: "Create Core MCP Server Module", - status: "in-progress", - dependencies: [], - }, - { - id: "23.2", - title: "Implement Context Management System", - status: "pending", - dependencies: ["23.1"], - }, - ], - }; - - return { success: true, task: taskDetails }; - } catch (error) { - this.logger.error(`Error getting task details: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - this.logger.info("Registered Task Master specific tools"); - } - /** * Start the MCP server */ - async start({ port = DEFAULT_PORT, host = DEFAULT_HOST } = {}) { + async start() { if (!this.initialized) { await this.init(); } - this.logger.info( - `Starting Task Master MCP server on http://${host}:${port}` - ); - // Start the FastMCP server await this.server.start({ - port, - host, - transportType: "sse", - expressApp: this.expressApp, + transportType: "stdio", }); - this.logger.info( - `Task Master MCP server running at http://${host}:${port}` - ); - return this; } @@ -356,9 +78,7 @@ class TaskMasterMCPServer { */ async stop() { if (this.server) { - this.logger.info("Stopping Task Master MCP server..."); await this.server.stop(); - this.logger.info("Task Master MCP server stopped"); } } } diff --git a/mcp-server/src/logger.js b/mcp-server/src/logger.js new file mode 100644 index 00000000..80c0e55c --- /dev/null +++ 
b/mcp-server/src/logger.js @@ -0,0 +1,68 @@ +import chalk from "chalk"; + +// Define log levels +const LOG_LEVELS = { + debug: 0, + info: 1, + warn: 2, + error: 3, + success: 4, +}; + +// Get log level from environment or default to info +const LOG_LEVEL = process.env.LOG_LEVEL + ? LOG_LEVELS[process.env.LOG_LEVEL.toLowerCase()] + : LOG_LEVELS.info; + +/** + * Logs a message with the specified level + * @param {string} level - The log level (debug, info, warn, error, success) + * @param {...any} args - Arguments to log + */ +function log(level, ...args) { + const icons = { + debug: chalk.gray("🔍"), + info: chalk.blue("â„šī¸"), + warn: chalk.yellow("âš ī¸"), + error: chalk.red("❌"), + success: chalk.green("✅"), + }; + + if (LOG_LEVELS[level] >= LOG_LEVEL) { + const icon = icons[level] || ""; + + if (level === "error") { + console.error(icon, chalk.red(...args)); + } else if (level === "warn") { + console.warn(icon, chalk.yellow(...args)); + } else if (level === "success") { + console.log(icon, chalk.green(...args)); + } else if (level === "info") { + console.log(icon, chalk.blue(...args)); + } else { + console.log(icon, ...args); + } + } +} + +/** + * Create a logger object with methods for different log levels + * Can be used as a drop-in replacement for existing logger initialization + * @returns {Object} Logger object with info, error, debug, warn, and success methods + */ +export function createLogger() { + return { + debug: (message) => log("debug", message), + info: (message) => log("info", message), + warn: (message) => log("warn", message), + error: (message) => log("error", message), + success: (message) => log("success", message), + log: log, // Also expose the raw log function + }; +} + +// Export a default logger instance +const logger = createLogger(); + +export default logger; +export { log, LOG_LEVELS }; diff --git a/mcp-server/src/tools/addTask.js b/mcp-server/src/tools/addTask.js new file mode 100644 index 00000000..0622d0e8 --- /dev/null +++ 
b/mcp-server/src/tools/addTask.js @@ -0,0 +1,56 @@ +/** + * tools/addTask.js + * Tool to add a new task using AI + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the addTask tool with the MCP server + * @param {FastMCP} server - FastMCP server instance + */ +export function registerAddTaskTool(server) { + server.addTool({ + name: "addTask", + description: "Add a new task using AI", + parameters: z.object({ + prompt: z.string().describe("Description of the task to add"), + dependencies: z + .string() + .optional() + .describe("Comma-separated list of task IDs this task depends on"), + priority: z + .string() + .optional() + .describe("Task priority (high, medium, low)"), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Adding new task: ${args.prompt}`); + + const cmdArgs = [`--prompt="${args.prompt}"`]; + if (args.dependencies) + cmdArgs.push(`--dependencies=${args.dependencies}`); + if (args.priority) cmdArgs.push(`--priority=${args.priority}`); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("add-task", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error adding task: ${error.message}`); + return createErrorResponse(`Error adding task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/expandTask.js b/mcp-server/src/tools/expandTask.js new file mode 100644 index 00000000..b94d00d4 --- /dev/null +++ b/mcp-server/src/tools/expandTask.js @@ -0,0 +1,66 @@ +/** + * tools/expandTask.js + * Tool to break down a task into detailed subtasks + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the 
expandTask tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerExpandTaskTool(server) { + server.addTool({ + name: "expandTask", + description: "Break down a task into detailed subtasks", + parameters: z.object({ + id: z.union([z.string(), z.number()]).describe("Task ID to expand"), + num: z.number().optional().describe("Number of subtasks to generate"), + research: z + .boolean() + .optional() + .describe( + "Enable Perplexity AI for research-backed subtask generation" + ), + prompt: z + .string() + .optional() + .describe("Additional context to guide subtask generation"), + force: z + .boolean() + .optional() + .describe( + "Force regeneration of subtasks for tasks that already have them" + ), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Expanding task ${args.id}`); + + const cmdArgs = [`--id=${args.id}`]; + if (args.num) cmdArgs.push(`--num=${args.num}`); + if (args.research) cmdArgs.push("--research"); + if (args.prompt) cmdArgs.push(`--prompt="${args.prompt}"`); + if (args.force) cmdArgs.push("--force"); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("expand", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error expanding task: ${error.message}`); + return createErrorResponse(`Error expanding task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/index.js b/mcp-server/src/tools/index.js new file mode 100644 index 00000000..97d47438 --- /dev/null +++ b/mcp-server/src/tools/index.js @@ -0,0 +1,29 @@ +/** + * tools/index.js + * Export all Task Master CLI tools for MCP server + */ + +import logger from "../logger.js"; +import { registerListTasksTool } from "./listTasks.js"; +import { registerShowTaskTool } from "./showTask.js"; +import { 
registerSetTaskStatusTool } from "./setTaskStatus.js"; +import { registerExpandTaskTool } from "./expandTask.js"; +import { registerNextTaskTool } from "./nextTask.js"; +import { registerAddTaskTool } from "./addTask.js"; + +/** + * Register all Task Master tools with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerTaskMasterTools(server) { + registerListTasksTool(server); + registerShowTaskTool(server); + registerSetTaskStatusTool(server); + registerExpandTaskTool(server); + registerNextTaskTool(server); + registerAddTaskTool(server); +} + +export default { + registerTaskMasterTools, +}; diff --git a/mcp-server/src/tools/listTasks.js b/mcp-server/src/tools/listTasks.js new file mode 100644 index 00000000..7da65692 --- /dev/null +++ b/mcp-server/src/tools/listTasks.js @@ -0,0 +1,51 @@ +/** + * tools/listTasks.js + * Tool to list all tasks from Task Master + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the listTasks tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerListTasksTool(server) { + server.addTool({ + name: "listTasks", + description: "List all tasks from Task Master", + parameters: z.object({ + status: z.string().optional().describe("Filter tasks by status"), + withSubtasks: z + .boolean() + .optional() + .describe("Include subtasks in the response"), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Listing tasks with filters: ${JSON.stringify(args)}`); + + const cmdArgs = []; + if (args.status) cmdArgs.push(`--status=${args.status}`); + if (args.withSubtasks) cmdArgs.push("--with-subtasks"); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("list", log, cmdArgs); + + if (!result.success) { + throw new 
Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error listing tasks: ${error.message}`); + return createErrorResponse(`Error listing tasks: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/nextTask.js b/mcp-server/src/tools/nextTask.js new file mode 100644 index 00000000..4003ce04 --- /dev/null +++ b/mcp-server/src/tools/nextTask.js @@ -0,0 +1,45 @@ +/** + * tools/nextTask.js + * Tool to show the next task to work on based on dependencies and status + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the nextTask tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerNextTaskTool(server) { + server.addTool({ + name: "nextTask", + description: + "Show the next task to work on based on dependencies and status", + parameters: z.object({ + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Finding next task to work on`); + + const cmdArgs = []; + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("next", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error finding next task: ${error.message}`); + return createErrorResponse(`Error finding next task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/setTaskStatus.js b/mcp-server/src/tools/setTaskStatus.js new file mode 100644 index 00000000..5681dd7b --- /dev/null +++ b/mcp-server/src/tools/setTaskStatus.js @@ -0,0 +1,52 @@ +/** + * tools/setTaskStatus.js + * Tool to set the status of a task + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from 
"./utils.js"; + +/** + * Register the setTaskStatus tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerSetTaskStatusTool(server) { + server.addTool({ + name: "setTaskStatus", + description: "Set the status of a task", + parameters: z.object({ + id: z + .union([z.string(), z.number()]) + .describe("Task ID (can be comma-separated for multiple tasks)"), + status: z + .string() + .describe("New status (todo, in-progress, review, done)"), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Setting status of task(s) ${args.id} to: ${args.status}`); + + const cmdArgs = [`--id=${args.id}`, `--status=${args.status}`]; + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("set-status", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error setting task status: ${error.message}`); + return createErrorResponse( + `Error setting task status: ${error.message}` + ); + } + }, + }); +} diff --git a/mcp-server/src/tools/showTask.js b/mcp-server/src/tools/showTask.js new file mode 100644 index 00000000..c44d9463 --- /dev/null +++ b/mcp-server/src/tools/showTask.js @@ -0,0 +1,45 @@ +/** + * tools/showTask.js + * Tool to show detailed information about a specific task + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the showTask tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerShowTaskTool(server) { + server.addTool({ + name: "showTask", + description: "Show detailed information about a specific task", + parameters: z.object({ + id: z.union([z.string(), z.number()]).describe("Task ID to show"), + file: 
z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Showing task details for ID: ${args.id}`); + + const cmdArgs = [args.id]; + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("show", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error showing task: ${error.message}`); + return createErrorResponse(`Error showing task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/utils.js b/mcp-server/src/tools/utils.js new file mode 100644 index 00000000..24745d2e --- /dev/null +++ b/mcp-server/src/tools/utils.js @@ -0,0 +1,90 @@ +/** + * tools/utils.js + * Utility functions for Task Master CLI integration + */ + +import { spawnSync } from "child_process"; + +/** + * Execute a Task Master CLI command using child_process + * @param {string} command - The command to execute + * @param {Object} log - The logger object from FastMCP + * @param {Array} args - Arguments for the command + * @returns {Object} - The result of the command execution + */ +export function executeTaskMasterCommand(command, log, args = []) { + try { + log.info( + `Executing task-master ${command} with args: ${JSON.stringify(args)}` + ); + + // Prepare full arguments array + const fullArgs = [command, ...args]; + + // Execute the command using the global task-master CLI or local script + // Try the global CLI first + let result = spawnSync("task-master", fullArgs, { encoding: "utf8" }); + + // If global CLI is not available, try fallback to the local script + if (result.error && result.error.code === "ENOENT") { + log.info("Global task-master not found, falling back to local script"); + result = spawnSync("node", ["scripts/dev.js", ...fullArgs], { + encoding: "utf8", + }); + } + + if (result.error) { + throw new Error(`Command execution error: 
${result.error.message}`); + } + + if (result.status !== 0) { + throw new Error( + `Command failed with exit code ${result.status}: ${result.stderr}` + ); + } + + return { + success: true, + stdout: result.stdout, + stderr: result.stderr, + }; + } catch (error) { + log.error(`Error executing task-master command: ${error.message}`); + return { + success: false, + error: error.message, + }; + } +} + +/** + * Creates standard content response for tools + * @param {string} text - Text content to include in response + * @returns {Object} - Content response object + */ +export function createContentResponse(text) { + return { + content: [ + { + text, + type: "text", + }, + ], + }; +} + +/** + * Creates error response for tools + * @param {string} errorMessage - Error message to include in response + * @returns {Object} - Error content response object + */ +export function createErrorResponse(errorMessage) { + return { + content: [ + { + text: errorMessage, + type: "text", + }, + ], + }; +} From a3c86148d46ec98c5001f3291342e7113cc553db Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Tue, 25 Mar 2025 19:00:00 +0000 Subject: [PATCH 03/16] fix(mcp): get everything working, cleanup, and test all tools --- .cursor/mcp.json | 8 -- README-task-master.md | 70 ++++++++++- README.md | 70 ++++++++++- mcp-server/README.md | 170 -------------------------- mcp-server/src/tools/addTask.js | 12 +- mcp-server/src/tools/expandTask.js | 16 ++- mcp-server/src/tools/listTasks.js | 16 ++- mcp-server/src/tools/nextTask.js | 14 ++- mcp-server/src/tools/setTaskStatus.js | 16 ++- mcp-server/src/tools/showTask.js | 18 ++- mcp-server/src/tools/utils.js | 32 +++-- package-lock.json | 4 +- 12 files changed, 244 insertions(+), 202 deletions(-) delete mode 100644 mcp-server/README.md diff --git a/.cursor/mcp.json b/.cursor/mcp.json index 3b7160ae..e69de29b 100644 --- a/.cursor/mcp.json +++ b/.cursor/mcp.json @@ -1,8 +0,0 @@ -{ - "mcpServers": { - 
"taskMaster": { - "command": "node", - "args": ["mcp-server/server.js"] - } - } -} diff --git a/README-task-master.md b/README-task-master.md index cf46772c..26cce92b 100644 --- a/README-task-master.md +++ b/README-task-master.md @@ -1,4 +1,5 @@ # Task Master + ### by [@eyaltoledano](https://x.com/eyaltoledano) A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI. @@ -15,9 +16,11 @@ A task management system for AI-driven development with Claude, designed to work The script can be configured through environment variables in a `.env` file at the root of the project: ### Required Configuration + - `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude ### Optional Configuration + - `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219") - `MAX_TOKENS`: Maximum tokens for model responses (default: 4000) - `TEMPERATURE`: Temperature for model responses (default: 0.7) @@ -123,6 +126,21 @@ Claude Task Master is designed to work seamlessly with [Cursor AI](https://www.c 3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`) 4. Open Cursor's AI chat and switch to Agent mode +### Setting up MCP in Cursor + +To enable enhanced task management capabilities directly within Cursor using the Model Context Protocol (MCP): + +1. Go to Cursor settings +2. Navigate to the MCP section +3. Click on "Add New MCP Server" +4. Configure with the following details: + - Name: "Task Master" + - Type: "Command" + - Command: "npx -y task-master-mcp" +5. Save the settings + +Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience. + ### Initial Task Generation In Cursor's AI chat, instruct the agent to generate tasks from your PRD: @@ -132,11 +150,13 @@ Please use the task-master parse-prd command to generate tasks from my PRD. 
The ``` The agent will execute: + ```bash task-master parse-prd scripts/prd.txt ``` This will: + - Parse your PRD document - Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies - The agent will understand this process due to the Cursor rules @@ -150,6 +170,7 @@ Please generate individual task files from tasks.json ``` The agent will execute: + ```bash task-master generate ``` @@ -169,6 +190,7 @@ What tasks are available to work on next? ``` The agent will: + - Run `task-master list` to see all tasks - Run `task-master next` to determine the next task to work on - Analyze dependencies to determine which tasks are ready to be worked on @@ -178,12 +200,14 @@ The agent will: ### 2. Task Implementation When implementing a task, the agent will: + - Reference the task's details section for implementation specifics - Consider dependencies on previous tasks - Follow the project's coding standards - Create appropriate tests based on the task's testStrategy You can ask: + ``` Let's implement task 3. What does it involve? ``` @@ -191,6 +215,7 @@ Let's implement task 3. What does it involve? ### 3. Task Verification Before marking a task as complete, verify it according to: + - The task's specified testStrategy - Any automated tests in the codebase - Manual verification if required @@ -204,6 +229,7 @@ Task 3 is now complete. Please update its status. ``` The agent will execute: + ```bash task-master set-status --id=3 --status=done ``` @@ -211,16 +237,19 @@ task-master set-status --id=3 --status=done ### 5. Handling Implementation Drift If during implementation, you discover that: + - The current approach differs significantly from what was planned - Future tasks need to be modified due to current implementation choices - New dependencies or requirements have emerged Tell the agent: + ``` We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change. 
``` The agent will execute: + ```bash task-master update --from=4 --prompt="Now we are using Express instead of Fastify." ``` @@ -236,36 +265,43 @@ Task 5 seems complex. Can you break it down into subtasks? ``` The agent will execute: + ```bash task-master expand --id=5 --num=3 ``` You can provide additional context: + ``` Please break down task 5 with a focus on security considerations. ``` The agent will execute: + ```bash task-master expand --id=5 --prompt="Focus on security aspects" ``` You can also expand all pending tasks: + ``` Please break down all pending tasks into subtasks. ``` The agent will execute: + ```bash task-master expand --all ``` For research-backed subtask generation using Perplexity AI: + ``` Please break down task 5 using research-backed generation. ``` The agent will execute: + ```bash task-master expand --id=5 --research ``` @@ -275,6 +311,7 @@ task-master expand --id=5 --research Here's a comprehensive reference of all available commands: ### Parse PRD + ```bash # Parse a PRD file and generate tasks task-master parse-prd @@ -284,6 +321,7 @@ task-master parse-prd --num-tasks=10 ``` ### List Tasks + ```bash # List all tasks task-master list @@ -299,12 +337,14 @@ task-master list --status= --with-subtasks ``` ### Show Next Task + ```bash # Show the next task to work on based on dependencies and status task-master next ``` ### Show Specific Task + ```bash # Show details of a specific task task-master show @@ -316,18 +356,21 @@ task-master show 1.2 ``` ### Update Tasks + ```bash # Update tasks from a specific ID and provide context task-master update --from= --prompt="" ``` ### Generate Task Files + ```bash # Generate individual task files from tasks.json task-master generate ``` ### Set Task Status + ```bash # Set status of a single task task-master set-status --id= --status= @@ -342,6 +385,7 @@ task-master set-status --id=1.1,1.2 --status= When marking a task as "done", all of its subtasks will automatically be marked as "done" as well. 
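The status cascade described above — marking a parent task "done" automatically marks all of its subtasks "done" — can be sketched in a few lines of JavaScript. This is an illustrative model only; the `applyStatus` helper and the task shape are assumptions, not Task Master's actual implementation:

```javascript
// Sketch of the documented rule: setting a parent task to "done" also marks
// every one of its subtasks "done". Other statuses leave subtasks untouched.
function applyStatus(task, status) {
  task.status = status;
  if (status === "done" && Array.isArray(task.subtasks)) {
    for (const subtask of task.subtasks) {
      subtask.status = "done";
    }
  }
  return task;
}

const task = {
  id: 3,
  status: "in-progress",
  subtasks: [
    { id: "3.1", status: "pending" },
    { id: "3.2", status: "in-progress" },
  ],
};

applyStatus(task, "done");
// task.status and both subtask statuses are now "done"
```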
### Expand Tasks + ```bash # Expand a specific task with subtasks task-master expand --id= --num= @@ -363,6 +407,7 @@ task-master expand --all --research ``` ### Clear Subtasks + ```bash # Clear subtasks from a specific task task-master clear-subtasks --id= @@ -375,6 +420,7 @@ task-master clear-subtasks --all ``` ### Analyze Task Complexity + ```bash # Analyze complexity of all tasks task-master analyze-complexity @@ -396,6 +442,7 @@ task-master analyze-complexity --research ``` ### View Complexity Report + ```bash # Display the task complexity analysis report task-master complexity-report @@ -405,6 +452,7 @@ task-master complexity-report --file=my-report.json ``` ### Managing Task Dependencies + ```bash # Add a dependency to a task task-master add-dependency --id= --depends-on= @@ -420,6 +468,7 @@ task-master fix-dependencies ``` ### Add a New Task + ```bash # Add a new task using AI task-master add-task --prompt="Description of the new task" @@ -436,6 +485,7 @@ task-master add-task --prompt="Description" --priority=high ### Analyzing Task Complexity The `analyze-complexity` command: + - Analyzes each task using AI to assess its complexity on a scale of 1-10 - Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS - Generates tailored prompts for expanding each task @@ -443,6 +493,7 @@ The `analyze-complexity` command: - Saves the report to scripts/task-complexity-report.json by default The generated report contains: + - Complexity analysis for each task (scored 1-10) - Recommended number of subtasks based on complexity - AI-generated expansion prompts customized for each task @@ -451,6 +502,7 @@ The generated report contains: ### Viewing Complexity Report The `complexity-report` command: + - Displays a formatted, easy-to-read version of the complexity analysis report - Shows tasks organized by complexity score (highest to lowest) - Provides complexity distribution statistics (low, medium, high) @@ -463,12 +515,14 @@ The `complexity-report` 
command: The `expand` command automatically checks for and uses the complexity report: When a complexity report exists: + - Tasks are automatically expanded using the recommended subtask count and prompts - When expanding all tasks, they're processed in order of complexity (highest first) - Research-backed generation is preserved from the complexity analysis - You can still override recommendations with explicit command-line options Example workflow: + ```bash # Generate the complexity analysis report with research capabilities task-master analyze-complexity --research @@ -485,6 +539,7 @@ task-master expand --all ### Finding the Next Task The `next` command: + - Identifies tasks that are pending/in-progress and have all dependencies satisfied - Prioritizes tasks by priority level, dependency count, and task ID - Displays comprehensive information about the selected task: @@ -499,6 +554,7 @@ The `next` command: ### Viewing Specific Task Details The `show` command: + - Displays comprehensive details about a specific task or subtask - Shows task status, priority, dependencies, and detailed implementation notes - For parent tasks, displays all subtasks and their status @@ -529,43 +585,51 @@ The `show` command: ## Example Cursor AI Interactions ### Starting a new project + ``` -I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. +I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. Can you help me parse it and set up the initial tasks? ``` ### Working on tasks + ``` What's the next task I should work on? Please consider dependencies and priorities. ``` ### Implementing a specific task + ``` I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it? ``` ### Managing subtasks + ``` I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them? 
``` ### Handling changes + ``` We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change? ``` ### Completing work + ``` -I've finished implementing the authentication system described in task 2. All tests are passing. +I've finished implementing the authentication system described in task 2. All tests are passing. Please mark it as complete and tell me what I should work on next. ``` ### Analyzing complexity + ``` Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further? ``` ### Viewing complexity report + ``` Can you show me the complexity report in a more readable format? -``` \ No newline at end of file +``` diff --git a/README.md b/README.md index 6e24c651..b0803a99 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,5 @@ # Task Master + ### by [@eyaltoledano](https://x.com/eyaltoledano) A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI. @@ -15,9 +16,11 @@ A task management system for AI-driven development with Claude, designed to work The script can be configured through environment variables in a `.env` file at the root of the project: ### Required Configuration + - `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude ### Optional Configuration + - `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219") - `MAX_TOKENS`: Maximum tokens for model responses (default: 4000) - `TEMPERATURE`: Temperature for model responses (default: 0.7) @@ -123,6 +126,21 @@ Claude Task Master is designed to work seamlessly with [Cursor AI](https://www.c 3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`) 4. Open Cursor's AI chat and switch to Agent mode +### Setting up MCP in Cursor + +To enable enhanced task management capabilities directly within Cursor using the Model Context Protocol (MCP): + +1. Go to Cursor settings +2. Navigate to the MCP section +3. 
Click on "Add New MCP Server" +4. Configure with the following details: + - Name: "Task Master" + - Type: "Command" + - Command: "npx -y task-master-mcp" +5. Save the settings + +Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience. + ### Initial Task Generation In Cursor's AI chat, instruct the agent to generate tasks from your PRD: @@ -132,11 +150,13 @@ Please use the task-master parse-prd command to generate tasks from my PRD. The ``` The agent will execute: + ```bash task-master parse-prd scripts/prd.txt ``` This will: + - Parse your PRD document - Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies - The agent will understand this process due to the Cursor rules @@ -150,6 +170,7 @@ Please generate individual task files from tasks.json ``` The agent will execute: + ```bash task-master generate ``` @@ -169,6 +190,7 @@ What tasks are available to work on next? ``` The agent will: + - Run `task-master list` to see all tasks - Run `task-master next` to determine the next task to work on - Analyze dependencies to determine which tasks are ready to be worked on @@ -178,12 +200,14 @@ The agent will: ### 2. Task Implementation When implementing a task, the agent will: + - Reference the task's details section for implementation specifics - Consider dependencies on previous tasks - Follow the project's coding standards - Create appropriate tests based on the task's testStrategy You can ask: + ``` Let's implement task 3. What does it involve? ``` @@ -191,6 +215,7 @@ Let's implement task 3. What does it involve? ### 3. Task Verification Before marking a task as complete, verify it according to: + - The task's specified testStrategy - Any automated tests in the codebase - Manual verification if required @@ -204,6 +229,7 @@ Task 3 is now complete. Please update its status. 
``` The agent will execute: + ```bash task-master set-status --id=3 --status=done ``` @@ -211,16 +237,19 @@ task-master set-status --id=3 --status=done ### 5. Handling Implementation Drift If during implementation, you discover that: + - The current approach differs significantly from what was planned - Future tasks need to be modified due to current implementation choices - New dependencies or requirements have emerged Tell the agent: + ``` We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change. ``` The agent will execute: + ```bash task-master update --from=4 --prompt="Now we are using Express instead of Fastify." ``` @@ -236,36 +265,43 @@ Task 5 seems complex. Can you break it down into subtasks? ``` The agent will execute: + ```bash task-master expand --id=5 --num=3 ``` You can provide additional context: + ``` Please break down task 5 with a focus on security considerations. ``` The agent will execute: + ```bash task-master expand --id=5 --prompt="Focus on security aspects" ``` You can also expand all pending tasks: + ``` Please break down all pending tasks into subtasks. ``` The agent will execute: + ```bash task-master expand --all ``` For research-backed subtask generation using Perplexity AI: + ``` Please break down task 5 using research-backed generation. 
``` The agent will execute: + ```bash task-master expand --id=5 --research ``` @@ -275,6 +311,7 @@ task-master expand --id=5 --research Here's a comprehensive reference of all available commands: ### Parse PRD + ```bash # Parse a PRD file and generate tasks task-master parse-prd @@ -284,6 +321,7 @@ task-master parse-prd --num-tasks=10 ``` ### List Tasks + ```bash # List all tasks task-master list @@ -299,12 +337,14 @@ task-master list --status= --with-subtasks ``` ### Show Next Task + ```bash # Show the next task to work on based on dependencies and status task-master next ``` ### Show Specific Task + ```bash # Show details of a specific task task-master show @@ -316,18 +356,21 @@ task-master show 1.2 ``` ### Update Tasks + ```bash # Update tasks from a specific ID and provide context task-master update --from= --prompt="" ``` ### Generate Task Files + ```bash # Generate individual task files from tasks.json task-master generate ``` ### Set Task Status + ```bash # Set status of a single task task-master set-status --id= --status= @@ -342,6 +385,7 @@ task-master set-status --id=1.1,1.2 --status= When marking a task as "done", all of its subtasks will automatically be marked as "done" as well. 
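Since `--id` accepts a comma-separated list (e.g. `1.1,1.2`), a wrapper that drives the CLI either passes the value through unchanged or splits it itself. A minimal sketch of such a splitter — `parseTaskIds` is a hypothetical helper, not part of Task Master:

```javascript
// Hypothetical helper: normalize an --id value that may name one task ("3"),
// a subtask ("3.1"), or several targets separated by commas ("1.1,1.2").
// Trims whitespace around each entry and drops empty segments.
function parseTaskIds(idArg) {
  return String(idArg)
    .split(",")
    .map((id) => id.trim())
    .filter((id) => id.length > 0);
}

const targets = parseTaskIds("1.1, 1.2");
// → ["1.1", "1.2"]
```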
### Expand Tasks + ```bash # Expand a specific task with subtasks task-master expand --id= --num= @@ -363,6 +407,7 @@ task-master expand --all --research ``` ### Clear Subtasks + ```bash # Clear subtasks from a specific task task-master clear-subtasks --id= @@ -375,6 +420,7 @@ task-master clear-subtasks --all ``` ### Analyze Task Complexity + ```bash # Analyze complexity of all tasks task-master analyze-complexity @@ -396,6 +442,7 @@ task-master analyze-complexity --research ``` ### View Complexity Report + ```bash # Display the task complexity analysis report task-master complexity-report @@ -405,6 +452,7 @@ task-master complexity-report --file=my-report.json ``` ### Managing Task Dependencies + ```bash # Add a dependency to a task task-master add-dependency --id= --depends-on= @@ -420,6 +468,7 @@ task-master fix-dependencies ``` ### Add a New Task + ```bash # Add a new task using AI task-master add-task --prompt="Description of the new task" @@ -866,6 +915,7 @@ task-master add-task --prompt="Description" --priority=high ### Analyzing Task Complexity The `analyze-complexity` command: + - Analyzes each task using AI to assess its complexity on a scale of 1-10 - Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS - Generates tailored prompts for expanding each task @@ -873,6 +923,7 @@ The `analyze-complexity` command: - Saves the report to scripts/task-complexity-report.json by default The generated report contains: + - Complexity analysis for each task (scored 1-10) - Recommended number of subtasks based on complexity - AI-generated expansion prompts customized for each task @@ -881,6 +932,7 @@ The generated report contains: ### Viewing Complexity Report The `complexity-report` command: + - Displays a formatted, easy-to-read version of the complexity analysis report - Shows tasks organized by complexity score (highest to lowest) - Provides complexity distribution statistics (low, medium, high) @@ -893,12 +945,14 @@ The `complexity-report` 
command: The `expand` command automatically checks for and uses the complexity report: When a complexity report exists: + - Tasks are automatically expanded using the recommended subtask count and prompts - When expanding all tasks, they're processed in order of complexity (highest first) - Research-backed generation is preserved from the complexity analysis - You can still override recommendations with explicit command-line options Example workflow: + ```bash # Generate the complexity analysis report with research capabilities task-master analyze-complexity --research @@ -915,6 +969,7 @@ task-master expand --all ### Finding the Next Task The `next` command: + - Identifies tasks that are pending/in-progress and have all dependencies satisfied - Prioritizes tasks by priority level, dependency count, and task ID - Displays comprehensive information about the selected task: @@ -929,6 +984,7 @@ The `next` command: ### Viewing Specific Task Details The `show` command: + - Displays comprehensive details about a specific task or subtask - Shows task status, priority, dependencies, and detailed implementation notes - For parent tasks, displays all subtasks and their status @@ -959,43 +1015,51 @@ The `show` command: ## Example Cursor AI Interactions ### Starting a new project + ``` -I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. +I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. Can you help me parse it and set up the initial tasks? ``` ### Working on tasks + ``` What's the next task I should work on? Please consider dependencies and priorities. ``` ### Implementing a specific task + ``` I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it? ``` ### Managing subtasks + ``` I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them? 
``` ### Handling changes + ``` We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change? ``` ### Completing work + ``` -I've finished implementing the authentication system described in task 2. All tests are passing. +I've finished implementing the authentication system described in task 2. All tests are passing. Please mark it as complete and tell me what I should work on next. ``` ### Analyzing complexity + ``` Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further? ``` ### Viewing complexity report + ``` Can you show me the complexity report in a more readable format? -``` \ No newline at end of file +``` diff --git a/mcp-server/README.md b/mcp-server/README.md deleted file mode 100644 index 9c8b1300..00000000 --- a/mcp-server/README.md +++ /dev/null @@ -1,170 +0,0 @@ -# Task Master MCP Server - -This module implements a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server for Task Master, allowing external applications to access Task Master functionality and context through a standardized API. - -## Features - -- MCP-compliant server implementation using FastMCP -- RESTful API for context management -- Authentication and authorization for secure access -- Context storage and retrieval with metadata and tagging -- Context windowing and truncation for handling size limits -- Integration with Task Master for task management operations - -## Installation - -The MCP server is included with Task Master. 
Install Task Master globally to use the MCP server: - -```bash -npm install -g task-master-ai -``` - -Or use it locally: - -```bash -npm install task-master-ai -``` - -## Environment Configuration - -The MCP server can be configured using environment variables or a `.env` file: - -| Variable | Description | Default | -| -------------------- | ---------------------------------------- | ----------------------------- | -| `MCP_SERVER_PORT` | Port for the MCP server | 3000 | -| `MCP_SERVER_HOST` | Host for the MCP server | localhost | -| `MCP_CONTEXT_DIR` | Directory for context storage | ./mcp-server/contexts | -| `MCP_API_KEYS_FILE` | File for API key storage | ./mcp-server/api-keys.json | -| `MCP_JWT_SECRET` | Secret for JWT token generation | task-master-mcp-server-secret | -| `MCP_JWT_EXPIRATION` | JWT token expiration time | 24h | -| `LOG_LEVEL` | Logging level (debug, info, warn, error) | info | - -## Getting Started - -### Starting the Server - -Start the MCP server as a standalone process: - -```bash -npx task-master-mcp-server -``` - -Or start it programmatically: - -```javascript -import { TaskMasterMCPServer } from "task-master-ai/mcp-server"; - -const server = new TaskMasterMCPServer(); -await server.start({ port: 3000, host: "localhost" }); -``` - -### Authentication - -The MCP server uses API key authentication with JWT tokens for secure access. A default admin API key is generated on first startup and can be found in the `api-keys.json` file. 
- -To get a JWT token: - -```bash -curl -X POST http://localhost:3000/auth/token \ - -H "x-api-key: YOUR_API_KEY" -``` - -Use the token for subsequent requests: - -```bash -curl http://localhost:3000/mcp/tools \ - -H "Authorization: Bearer YOUR_JWT_TOKEN" -``` - -### Creating a New API Key - -Admin users can create new API keys: - -```bash -curl -X POST http://localhost:3000/auth/api-keys \ - -H "Authorization: Bearer ADMIN_JWT_TOKEN" \ - -H "Content-Type: application/json" \ - -d '{"clientId": "user1", "role": "user"}' -``` - -## Available MCP Endpoints - -The MCP server implements the following MCP-compliant endpoints: - -### Context Management - -- `GET /mcp/context` - List all contexts -- `POST /mcp/context` - Create a new context -- `GET /mcp/context/{id}` - Get a specific context -- `PUT /mcp/context/{id}` - Update a context -- `DELETE /mcp/context/{id}` - Delete a context - -### Models - -- `GET /mcp/models` - List available models -- `GET /mcp/models/{id}` - Get model details - -### Execution - -- `POST /mcp/execute` - Execute an operation with context - -## Available MCP Tools - -The MCP server provides the following tools: - -### Context Tools - -- `createContext` - Create a new context -- `getContext` - Retrieve a context by ID -- `updateContext` - Update an existing context -- `deleteContext` - Delete a context -- `listContexts` - List available contexts -- `addTags` - Add tags to a context -- `truncateContext` - Truncate a context to a maximum size - -### Task Master Tools - -- `listTasks` - List tasks from Task Master -- `getTaskDetails` - Get detailed task information -- `executeWithContext` - Execute operations using context - -## Examples - -### Creating a Context - -```javascript -// Using the MCP client -const client = new MCPClient("http://localhost:3000"); -await client.authenticate("YOUR_API_KEY"); - -const context = await client.createContext("my-context", { - title: "My Project", - tasks: ["Implement feature X", "Fix bug Y"], -}); -``` - 
-### Executing an Operation with Context
-
-```javascript
-// Using the MCP client
-const result = await client.execute("generateTask", "my-context", {
-  title: "New Task",
-  description: "Create a new task based on context",
-});
-```
-
-## Integration with Other Tools
-
-The Task Master MCP server can be integrated with other MCP-compatible tools and clients:
-
-- LLM applications that support the MCP protocol
-- Task management systems that support context-aware operations
-- Development environments with MCP integration
-
-## Contributing
-
-Contributions are welcome! Please feel free to submit a Pull Request.
-
-## License
-
-This project is licensed under the MIT License - see the LICENSE file for details.
diff --git a/mcp-server/src/tools/addTask.js b/mcp-server/src/tools/addTask.js
index 0622d0e8..0b12d9fc 100644
--- a/mcp-server/src/tools/addTask.js
+++ b/mcp-server/src/tools/addTask.js
@@ -29,6 +29,11 @@ export function registerAddTaskTool(server) {
         .optional()
         .describe("Task priority (high, medium, low)"),
       file: z.string().optional().describe("Path to the tasks file"),
+      projectRoot: z
+        .string()
+        .describe(
+          "Root directory of the project (default: current working directory)"
+        ),
     }),
     execute: async (args, { log }) => {
       try {
@@ -40,7 +45,14 @@
       if (args.priority) cmdArgs.push(`--priority=${args.priority}`);
       if (args.file) cmdArgs.push(`--file=${args.file}`);
 
-      const result = executeTaskMasterCommand("add-task", log, cmdArgs);
+      const projectRoot = args.projectRoot;
+
+      const result = executeTaskMasterCommand(
+        "add-task",
+        log,
+        cmdArgs,
+        projectRoot
+      );
 
       if (!result.success) {
         throw new Error(result.error);
diff --git a/mcp-server/src/tools/expandTask.js b/mcp-server/src/tools/expandTask.js
index b94d00d4..ae0b4550 100644
--- a/mcp-server/src/tools/expandTask.js
+++ b/mcp-server/src/tools/expandTask.js
@@ -19,7 +19,7 @@ export function registerExpandTaskTool(server) {
     name: "expandTask",
     description: "Break down a task into detailed
subtasks", parameters: z.object({ - id: z.union([z.string(), z.number()]).describe("Task ID to expand"), + id: z.string().describe("Task ID to expand"), num: z.number().optional().describe("Number of subtasks to generate"), research: z .boolean() @@ -38,6 +38,11 @@ export function registerExpandTaskTool(server) { "Force regeneration of subtasks for tasks that already have them" ), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -50,7 +55,14 @@ export function registerExpandTaskTool(server) { if (args.force) cmdArgs.push("--force"); if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("expand", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "expand", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/listTasks.js b/mcp-server/src/tools/listTasks.js index 7da65692..af6f4844 100644 --- a/mcp-server/src/tools/listTasks.js +++ b/mcp-server/src/tools/listTasks.js @@ -25,6 +25,11 @@ export function registerListTasksTool(server) { .optional() .describe("Include subtasks in the response"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -35,12 +40,21 @@ export function registerListTasksTool(server) { if (args.withSubtasks) cmdArgs.push("--with-subtasks"); if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("list", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "list", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new 
Error(result.error); } + log.info(`Listing tasks result: ${result.stdout}`, result.stdout); + return createContentResponse(result.stdout); } catch (error) { log.error(`Error listing tasks: ${error.message}`); diff --git a/mcp-server/src/tools/nextTask.js b/mcp-server/src/tools/nextTask.js index 4003ce04..729c5fec 100644 --- a/mcp-server/src/tools/nextTask.js +++ b/mcp-server/src/tools/nextTask.js @@ -21,6 +21,11 @@ export function registerNextTaskTool(server) { "Show the next task to work on based on dependencies and status", parameters: z.object({ file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -29,7 +34,14 @@ export function registerNextTaskTool(server) { const cmdArgs = []; if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("next", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "next", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/setTaskStatus.js b/mcp-server/src/tools/setTaskStatus.js index 5681dd7b..d2c0b2c1 100644 --- a/mcp-server/src/tools/setTaskStatus.js +++ b/mcp-server/src/tools/setTaskStatus.js @@ -20,12 +20,17 @@ export function registerSetTaskStatusTool(server) { description: "Set the status of a task", parameters: z.object({ id: z - .union([z.string(), z.number()]) + .string() .describe("Task ID (can be comma-separated for multiple tasks)"), status: z .string() .describe("New status (todo, in-progress, review, done)"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -34,7 +39,14 @@ export function 
registerSetTaskStatusTool(server) { const cmdArgs = [`--id=${args.id}`, `--status=${args.status}`]; if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("set-status", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "set-status", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/showTask.js b/mcp-server/src/tools/showTask.js index c44d9463..86130570 100644 --- a/mcp-server/src/tools/showTask.js +++ b/mcp-server/src/tools/showTask.js @@ -19,17 +19,29 @@ export function registerShowTaskTool(server) { name: "showTask", description: "Show detailed information about a specific task", parameters: z.object({ - id: z.union([z.string(), z.number()]).describe("Task ID to show"), + id: z.string().describe("Task ID to show"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { log.info(`Showing task details for ID: ${args.id}`); - const cmdArgs = [args.id]; + const cmdArgs = [`--id=${args.id}`]; if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("show", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "show", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/utils.js b/mcp-server/src/tools/utils.js index 24745d2e..872363e0 100644 --- a/mcp-server/src/tools/utils.js +++ b/mcp-server/src/tools/utils.js @@ -10,27 +10,39 @@ import { spawnSync } from "child_process"; * @param {string} command - The command to execute * @param {Object} log - The logger object from FastMCP * @param {Array} args - Arguments for the command + * @param {string} cwd - Working directory for command 
execution (defaults to current project root) * @returns {Object} - The result of the command execution */ -export function executeTaskMasterCommand(command, log, args = []) { +export function executeTaskMasterCommand( + command, + log, + args = [], + cwd = process.cwd() +) { try { log.info( - `Executing task-master ${command} with args: ${JSON.stringify(args)}` + `Executing task-master ${command} with args: ${JSON.stringify( + args + )} in directory: ${cwd}` ); // Prepare full arguments array const fullArgs = [command, ...args]; + // Common options for spawn + const spawnOptions = { + encoding: "utf8", + cwd: cwd, + }; + // Execute the command using the global task-master CLI or local script // Try the global CLI first - let result = spawnSync("task-master", fullArgs, { encoding: "utf8" }); + let result = spawnSync("task-master", fullArgs, spawnOptions); // If global CLI is not available, try fallback to the local script if (result.error && result.error.code === "ENOENT") { log.info("Global task-master not found, falling back to local script"); - result = spawnSync("node", ["scripts/dev.js", ...fullArgs], { - encoding: "utf8", - }); + result = spawnSync("node", ["scripts/dev.js", ...fullArgs], spawnOptions); } if (result.error) { @@ -38,8 +50,14 @@ export function executeTaskMasterCommand(command, log, args = []) { } if (result.status !== 0) { + // Improve error handling by combining stderr and stdout if stderr is empty + const errorOutput = result.stderr + ? result.stderr.trim() + : result.stdout + ? 
result.stdout.trim() + : "Unknown error"; throw new Error( - `Command failed with exit code ${result.status}: ${result.stderr}` + `Command failed with exit code ${result.status}: ${errorOutput}` ); } diff --git a/package-lock.json b/package-lock.json index 345d3081..42eee10f 100644 --- a/package-lock.json +++ b/package-lock.json @@ -19,6 +19,7 @@ "express": "^4.21.2", "fastmcp": "^1.20.5", "figlet": "^1.8.0", + "fuse.js": "^7.0.0", "gradient-string": "^3.0.0", "helmet": "^8.1.0", "jsonwebtoken": "^9.0.2", @@ -27,7 +28,8 @@ }, "bin": { "task-master": "bin/task-master.js", - "task-master-init": "bin/task-master-init.js" + "task-master-init": "bin/task-master-init.js", + "task-master-mcp": "mcp-server/server.js" }, "devDependencies": { "@types/jest": "^29.5.14", From edc8adf6c61f57071789d8abc287186299988136 Mon Sep 17 00:00:00 2001 From: Eyal Toledano Date: Thu, 27 Mar 2025 00:00:38 -0400 Subject: [PATCH 04/16] adds 'tm' and 'taskmaster' aliases to zshrc or bashrc automatically, added as options in the init questions. 
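The improved error handling added to `executeTaskMasterCommand` in the `utils.js` hunk above (prefer `stderr`, fall back to `stdout` when `stderr` is empty) can be exercised in isolation. A minimal sketch — `extractErrorOutput` is a name invented for this illustration, and the result objects below stand in for what `spawnSync` returns for a failing child process:

```javascript
// Mirrors the fallback logic from the utils.js diff: prefer stderr, fall
// back to stdout when stderr is empty, otherwise report a generic message.
function extractErrorOutput(result) {
  return result.stderr
    ? result.stderr.trim()
    : result.stdout
      ? result.stdout.trim()
      : "Unknown error";
}

// spawnSync-like results for failing child processes (illustrative data):
const onlyStdout = { status: 1, stderr: "", stdout: "something broke\n" };
const onlyStderr = { status: 1, stderr: "bad flag\n", stdout: "" };
const silent = { status: 1, stderr: "", stdout: "" };

console.log(extractErrorOutput(onlyStdout)); // something broke
console.log(extractErrorOutput(onlyStderr)); // bad flag
console.log(extractErrorOutput(silent)); // Unknown error
```

Before this change, a CLI that printed its diagnostic to stdout and exited non-zero would surface an empty error message; with the fallback, the stdout text is recovered instead.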
--- scripts/init.js | 74 ++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 70 insertions(+), 4 deletions(-) diff --git a/scripts/init.js b/scripts/init.js index 50d18fed..3ac1521f 100755 --- a/scripts/init.js +++ b/scripts/init.js @@ -39,6 +39,7 @@ program .option('-a, --author ', 'Author name') .option('--skip-install', 'Skip installing dependencies') .option('--dry-run', 'Show what would be done without making changes') + .option('--aliases', 'Add shell aliases (tm, taskmaster)') .parse(process.argv); const options = program.opts(); @@ -133,6 +134,53 @@ function ensureDirectoryExists(dirPath) { } } +// Function to add shell aliases to the user's shell configuration +function addShellAliases() { + const homeDir = process.env.HOME || process.env.USERPROFILE; + let shellConfigFile; + + // Determine which shell config file to use + if (process.env.SHELL?.includes('zsh')) { + shellConfigFile = path.join(homeDir, '.zshrc'); + } else if (process.env.SHELL?.includes('bash')) { + shellConfigFile = path.join(homeDir, '.bashrc'); + } else { + log('warn', 'Could not determine shell type. Aliases not added.'); + return false; + } + + try { + // Check if file exists + if (!fs.existsSync(shellConfigFile)) { + log('warn', `Shell config file ${shellConfigFile} not found. 
Aliases not added.`); + return false; + } + + // Check if aliases already exist + const configContent = fs.readFileSync(shellConfigFile, 'utf8'); + if (configContent.includes('alias tm=\'task-master\'')) { + log('info', 'Task Master aliases already exist in shell config.'); + return true; + } + + // Add aliases to the shell config file + const aliasBlock = ` +# Task Master aliases added on ${new Date().toLocaleDateString()} +alias tm='task-master' +alias taskmaster='task-master' +`; + + fs.appendFileSync(shellConfigFile, aliasBlock); + log('success', `Added Task Master aliases to ${shellConfigFile}`); + log('info', 'To use the aliases in your current terminal, run: source ' + shellConfigFile); + + return true; + } catch (error) { + log('error', `Failed to add aliases: ${error.message}`); + return false; + } +} + // Function to copy a file from the package to the target directory function copyTemplateFile(templateName, targetPath, replacements = {}) { // Get the file content from the appropriate source directory @@ -299,6 +347,7 @@ async function initializeProject(options = {}) { const authorName = options.authorName || ''; const dryRun = options.dryRun || false; const skipInstall = options.skipInstall || false; + const addAliases = options.addAliases || false; if (dryRun) { log('info', 'DRY RUN MODE: No files will be modified'); @@ -306,6 +355,9 @@ async function initializeProject(options = {}) { log('info', `Description: ${projectDescription}`); log('info', `Author: ${authorName || 'Not specified'}`); log('info', 'Would create/update necessary project files'); + if (addAliases) { + log('info', 'Would add shell aliases for task-master'); + } if (!skipInstall) { log('info', 'Would install dependencies'); } @@ -318,7 +370,7 @@ async function initializeProject(options = {}) { }; } - createProjectStructure(projectName, projectDescription, projectVersion, authorName, skipInstall); + createProjectStructure(projectName, projectDescription, projectVersion, authorName, 
skipInstall, addAliases); return { projectName, projectDescription, @@ -340,6 +392,10 @@ async function initializeProject(options = {}) { const projectVersionInput = await promptQuestion(rl, chalk.cyan('Enter project version (default: 1.0.0): ')); const authorName = await promptQuestion(rl, chalk.cyan('Enter your name: ')); + // Ask about shell aliases + const addAliasesInput = await promptQuestion(rl, chalk.cyan('Add shell aliases for task-master? (Y/n): ')); + const addAliases = addAliasesInput.trim().toLowerCase() !== 'n'; + // Set default version if not provided const projectVersion = projectVersionInput.trim() ? projectVersionInput : '1.0.0'; @@ -349,6 +405,7 @@ async function initializeProject(options = {}) { console.log(chalk.blue('Description:'), chalk.white(projectDescription)); console.log(chalk.blue('Version:'), chalk.white(projectVersion)); console.log(chalk.blue('Author:'), chalk.white(authorName || 'Not specified')); + console.log(chalk.blue('Add shell aliases:'), chalk.white(addAliases ? 'Yes' : 'No')); const confirmInput = await promptQuestion(rl, chalk.yellow('\nDo you want to continue with these settings? 
(Y/n): ')); const shouldContinue = confirmInput.trim().toLowerCase() !== 'n'; @@ -367,6 +424,9 @@ async function initializeProject(options = {}) { if (dryRun) { log('info', 'DRY RUN MODE: No files will be modified'); log('info', 'Would create/update necessary project files'); + if (addAliases) { + log('info', 'Would add shell aliases for task-master'); + } if (!skipInstall) { log('info', 'Would install dependencies'); } @@ -380,7 +440,7 @@ async function initializeProject(options = {}) { } // Create the project structure - createProjectStructure(projectName, projectDescription, projectVersion, authorName, skipInstall); + createProjectStructure(projectName, projectDescription, projectVersion, authorName, skipInstall, addAliases); return { projectName, @@ -405,7 +465,7 @@ function promptQuestion(rl, question) { } // Function to create the project structure -function createProjectStructure(projectName, projectDescription, projectVersion, authorName, skipInstall) { +function createProjectStructure(projectName, projectDescription, projectVersion, authorName, skipInstall, addAliases) { const targetDir = process.cwd(); log('info', `Initializing project in ${targetDir}`); @@ -571,6 +631,11 @@ function createProjectStructure(projectName, projectDescription, projectVersion, } )); + // Add shell aliases if requested + if (addAliases) { + addShellAliases(); + } + // Display next steps in a nice box console.log(boxen( chalk.cyan.bold('Things you can now do:') + '\n\n' + @@ -619,7 +684,8 @@ console.log('process.argv:', process.argv); projectVersion: options.version || '1.0.0', authorName: options.author || '', dryRun: options.dryRun || false, - skipInstall: options.skipInstall || false + skipInstall: options.skipInstall || false, + addAliases: options.aliases || false }); } else { // Otherwise, prompt for input normally From c4f7de8845e30773fcb5f9f1bb391ff841d6fd8f Mon Sep 17 00:00:00 2001 From: Eyal Toledano Date: Thu, 27 Mar 2025 00:58:14 -0400 Subject: [PATCH 05/16] Adds 3 
docs for MCP related context provision. Also updates the system prompt for the task update command. Updated the system prompt with clear guidelines about: Preserving completed subtasks exactly as they are Building upon what has already been done Creating new subtasks instead of modifying completed ones Making new subtasks specific and targeted Added specific instructions to the Perplexity AI system message to emphasize preserving completed subtasks Added an informative boxed message to the user explaining how completed subtasks will be handled during the update process Added emphatic instructions in the user prompts to both Claude and Perplexity to highlight completed subtasks that must be preserved These changes ensure that: Completed subtasks will be preserved The AI will build on top of what's already been done If something needs to be changed/undone, it will be handled through new subtasks The user is clearly informed about how subtasks are handled. --- docs/fastmcp-docs.txt | 3849 ++++++++ docs/mcp-js-sdk-docs.txt | 14618 ++++++++++++++++++++++++++++++ docs/mcp-protocol-docs.txt | 6649 ++++++++++++++ scripts/modules/task-manager.js | 21 +- 4 files changed, 25136 insertions(+), 1 deletion(-) create mode 100644 docs/fastmcp-docs.txt create mode 100644 docs/mcp-js-sdk-docs.txt create mode 100644 docs/mcp-protocol-docs.txt diff --git a/docs/fastmcp-docs.txt b/docs/fastmcp-docs.txt new file mode 100644 index 00000000..f116c2e7 --- /dev/null +++ b/docs/fastmcp-docs.txt @@ -0,0 +1,3849 @@ +Directory Structure: + +└── ./ + ├── src + │ ├── bin + │ │ └── fastmcp.ts + │ ├── examples + │ │ └── addition.ts + │ ├── FastMCP.test.ts + │ └── FastMCP.ts + ├── eslint.config.js + ├── package.json + ├── README.md + └── vitest.config.js + + + +--- +File: /src/bin/fastmcp.ts +--- + +#!/usr/bin/env node + +import yargs from "yargs"; +import { hideBin } from "yargs/helpers"; +import { execa } from "execa"; + +await yargs(hideBin(process.argv)) + .scriptName("fastmcp") + .command( + 
"dev ", + "Start a development server", + (yargs) => { + return yargs.positional("file", { + type: "string", + describe: "The path to the server file", + demandOption: true, + }); + }, + async (argv) => { + try { + await execa({ + stdin: "inherit", + stdout: "inherit", + stderr: "inherit", + })`npx @wong2/mcp-cli npx tsx ${argv.file}`; + } catch { + process.exit(1); + } + }, + ) + .command( + "inspect ", + "Inspect a server file", + (yargs) => { + return yargs.positional("file", { + type: "string", + describe: "The path to the server file", + demandOption: true, + }); + }, + async (argv) => { + try { + await execa({ + stdout: "inherit", + stderr: "inherit", + })`npx @modelcontextprotocol/inspector npx tsx ${argv.file}`; + } catch { + process.exit(1); + } + }, + ) + .help() + .parseAsync(); + + + +--- +File: /src/examples/addition.ts +--- + +/** + * This is a complete example of an MCP server. + */ +import { FastMCP } from "../FastMCP.js"; +import { z } from "zod"; + +const server = new FastMCP({ + name: "Addition", + version: "1.0.0", +}); + +server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, +}); + +server.addResource({ + uri: "file:///logs/app.log", + name: "Application Logs", + mimeType: "text/plain", + async load() { + return { + text: "Example log content", + }; + }, +}); + +server.addPrompt({ + name: "git-commit", + description: "Generate a Git commit message", + arguments: [ + { + name: "changes", + description: "Git diff or description of changes", + required: true, + }, + ], + load: async (args) => { + return `Generate a concise but descriptive commit message for these changes:\n\n${args.changes}`; + }, +}); + +server.start({ + transportType: "stdio", +}); + + + +--- +File: /src/FastMCP.test.ts +--- + +import { FastMCP, FastMCPSession, UserError, imageContent } from "./FastMCP.js"; +import { z } from "zod"; 
+import { test, expect, vi } from "vitest"; +import { Client } from "@modelcontextprotocol/sdk/client/index.js"; +import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js"; +import { getRandomPort } from "get-port-please"; +import { setTimeout as delay } from "timers/promises"; +import { + CreateMessageRequestSchema, + ErrorCode, + ListRootsRequestSchema, + LoggingMessageNotificationSchema, + McpError, + PingRequestSchema, + Root, +} from "@modelcontextprotocol/sdk/types.js"; +import { createEventSource, EventSourceClient } from 'eventsource-client'; + +const runWithTestServer = async ({ + run, + client: createClient, + server: createServer, +}: { + server?: () => Promise; + client?: () => Promise; + run: ({ + client, + server, + }: { + client: Client; + server: FastMCP; + session: FastMCPSession; + }) => Promise; +}) => { + const port = await getRandomPort(); + + const server = createServer + ? await createServer() + : new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + try { + const client = createClient + ? 
await createClient() + : new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport = new SSEClientTransport( + new URL(`http://localhost:${port}/sse`), + ); + + const session = await new Promise((resolve) => { + server.on("connect", (event) => { + + resolve(event.session); + }); + + client.connect(transport); + }); + + await run({ client, server, session }); + } finally { + await server.stop(); + } + + return port; +}; + +test("adds tools", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, + }); + + return server; + }, + run: async ({ client }) => { + expect(await client.listTools()).toEqual({ + tools: [ + { + name: "add", + description: "Add two numbers", + inputSchema: { + additionalProperties: false, + $schema: "http://json-schema.org/draft-07/schema#", + type: "object", + properties: { + a: { type: "number" }, + b: { type: "number" }, + }, + required: ["a", "b"], + }, + }, + ], + }); + }, + }); +}); + +test("calls a tool", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, + }); + + return server; + }, + run: async ({ client }) => { + expect( + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [{ type: "text", text: "3" }], + }); + }, + }); +}); + +test("returns a list", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + 
name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async () => { + return { + content: [ + { type: "text", text: "a" }, + { type: "text", text: "b" }, + ], + }; + }, + }); + + return server; + }, + run: async ({ client }) => { + expect( + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [ + { type: "text", text: "a" }, + { type: "text", text: "b" }, + ], + }); + }, + }); +}); + +test("returns an image", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async () => { + return imageContent({ + buffer: Buffer.from( + "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=", + "base64", + ), + }); + }, + }); + + return server; + }, + run: async ({ client }) => { + expect( + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [ + { + type: "image", + data: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=", + mimeType: "image/png", + }, + ], + }); + }, + }); +}); + +test("handles UserError errors", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async () => { + throw new UserError("Something went wrong"); + }, + }); + + return server; + }, + run: async ({ client }) => { + expect( + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: 
[{ type: "text", text: "Something went wrong" }], + isError: true, + }); + }, + }); +}); + +test("calling an unknown tool throws McpError with MethodNotFound code", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + return server; + }, + run: async ({ client }) => { + try { + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }); + } catch (error) { + expect(error).toBeInstanceOf(McpError); + + // @ts-expect-error - we know that error is an McpError + expect(error.code).toBe(ErrorCode.MethodNotFound); + } + }, + }); +}); + +test("tracks tool progress", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args, { reportProgress }) => { + reportProgress({ + progress: 0, + total: 10, + }); + + await delay(100); + + return String(args.a + args.b); + }, + }); + + return server; + }, + run: async ({ client }) => { + const onProgress = vi.fn(); + + await client.callTool( + { + name: "add", + arguments: { + a: 1, + b: 2, + }, + }, + undefined, + { + onprogress: onProgress, + }, + ); + + expect(onProgress).toHaveBeenCalledTimes(1); + expect(onProgress).toHaveBeenCalledWith({ + progress: 0, + total: 10, + }); + }, + }); +}); + +test("sets logging levels", async () => { + await runWithTestServer({ + run: async ({ client, session }) => { + await client.setLoggingLevel("debug"); + + expect(session.loggingLevel).toBe("debug"); + + await client.setLoggingLevel("info"); + + expect(session.loggingLevel).toBe("info"); + }, + }); +}); + +test("sends logging messages to the client", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + 
server.addTool({
+      name: "add",
+      description: "Add two numbers",
+      parameters: z.object({
+        a: z.number(),
+        b: z.number(),
+      }),
+      execute: async (args, { log }) => {
+        log.debug("debug message", {
+          foo: "bar",
+        });
+        log.error("error message");
+        log.info("info message");
+        log.warn("warn message");
+
+        return String(args.a + args.b);
+      },
+    });
+
+    return server;
+  },
+  run: async ({ client }) => {
+    const onLog = vi.fn();
+
+    client.setNotificationHandler(
+      LoggingMessageNotificationSchema,
+      (message) => {
+        if (message.method === "notifications/message") {
+          onLog({
+            level: message.params.level,
+            ...(message.params.data ?? {}),
+          });
+        }
+      },
+    );
+
+    await client.callTool({
+      name: "add",
+      arguments: {
+        a: 1,
+        b: 2,
+      },
+    });
+
+    expect(onLog).toHaveBeenCalledTimes(4);
+    expect(onLog).toHaveBeenNthCalledWith(1, {
+      level: "debug",
+      message: "debug message",
+      context: {
+        foo: "bar",
+      },
+    });
+    expect(onLog).toHaveBeenNthCalledWith(2, {
+      level: "error",
+      message: "error message",
+    });
+    expect(onLog).toHaveBeenNthCalledWith(3, {
+      level: "info",
+      message: "info message",
+    });
+    expect(onLog).toHaveBeenNthCalledWith(4, {
+      level: "warning",
+      message: "warn message",
+    });
+  },
+  });
+});
+
+test("adds resources", async () => {
+  await runWithTestServer({
+    server: async () => {
+      const server = new FastMCP({
+        name: "Test",
+        version: "1.0.0",
+      });
+
+      server.addResource({
+        uri: "file:///logs/app.log",
+        name: "Application Logs",
+        mimeType: "text/plain",
+        async load() {
+          return {
+            text: "Example log content",
+          };
+        },
+      });
+
+      return server;
+    },
+    run: async ({ client }) => {
+      expect(await client.listResources()).toEqual({
+        resources: [
+          {
+            uri: "file:///logs/app.log",
+            name: "Application Logs",
+            mimeType: "text/plain",
+          },
+        ],
+      });
+    },
+  });
+});
+
+test("client reads a resource", async () => {
+  await runWithTestServer({
+    server: async () => {
+      const server = new FastMCP({
+        name: "Test",
+        version: "1.0.0",
+      });
+
+      server.addResource({
+        uri: "file:///logs/app.log",
+        name: "Application Logs",
+        mimeType: "text/plain",
+        async load() {
+          return {
+            text: "Example log content",
+          };
+        },
+      });
+
+      return server;
+    },
+    run: async ({ client }) => {
+      expect(
+        await client.readResource({
+          uri: "file:///logs/app.log",
+        }),
+      ).toEqual({
+        contents: [
+          {
+            uri: "file:///logs/app.log",
+            name: "Application Logs",
+            text: "Example log content",
+            mimeType: "text/plain",
+          },
+        ],
+      });
+    },
+  });
+});
+
+test("client reads a resource that returns multiple resources", async () => {
+  await runWithTestServer({
+    server: async () => {
+      const server = new FastMCP({
+        name: "Test",
+        version: "1.0.0",
+      });
+
+      server.addResource({
+        uri: "file:///logs/app.log",
+        name: "Application Logs",
+        mimeType: "text/plain",
+        async load() {
+          return [
+            {
+              text: "a",
+            },
+            {
+              text: "b",
+            },
+          ];
+        },
+      });
+
+      return server;
+    },
+    run: async ({ client }) => {
+      expect(
+        await client.readResource({
+          uri: "file:///logs/app.log",
+        }),
+      ).toEqual({
+        contents: [
+          {
+            uri: "file:///logs/app.log",
+            name: "Application Logs",
+            text: "a",
+            mimeType: "text/plain",
+          },
+          {
+            uri: "file:///logs/app.log",
+            name: "Application Logs",
+            text: "b",
+            mimeType: "text/plain",
+          },
+        ],
+      });
+    },
+  });
+});
+
+test("adds prompts", async () => {
+  await runWithTestServer({
+    server: async () => {
+      const server = new FastMCP({
+        name: "Test",
+        version: "1.0.0",
+      });
+
+      server.addPrompt({
+        name: "git-commit",
+        description: "Generate a Git commit message",
+        arguments: [
+          {
+            name: "changes",
+            description: "Git diff or description of changes",
+            required: true,
+          },
+        ],
+        load: async (args) => {
+          return `Generate a concise but descriptive commit message for these changes:\n\n${args.changes}`;
+        },
+      });
+
+      return server;
+    },
+    run: async ({ client }) => {
+      expect(
+        await client.getPrompt({
+          name: "git-commit",
+          arguments: {
+            changes: "foo",
+          },
+        }),
+      ).toEqual({
+        description:
"Generate a Git commit message", + messages: [ + { + role: "user", + content: { + type: "text", + text: "Generate a concise but descriptive commit message for these changes:\n\nfoo", + }, + }, + ], + }); + + expect(await client.listPrompts()).toEqual({ + prompts: [ + { + name: "git-commit", + description: "Generate a Git commit message", + arguments: [ + { + name: "changes", + description: "Git diff or description of changes", + required: true, + }, + ], + }, + ], + }); + }, + }); +}); + +test("uses events to notify server of client connect/disconnect", async () => { + const port = await getRandomPort(); + + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + const onConnect = vi.fn(); + const onDisconnect = vi.fn(); + + server.on("connect", onConnect); + server.on("disconnect", onDisconnect); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + const client = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport = new SSEClientTransport( + new URL(`http://localhost:${port}/sse`), + ); + + await client.connect(transport); + + await delay(100); + + expect(onConnect).toHaveBeenCalledTimes(1); + expect(onDisconnect).toHaveBeenCalledTimes(0); + + expect(server.sessions).toEqual([expect.any(FastMCPSession)]); + + await client.close(); + + await delay(100); + + expect(onConnect).toHaveBeenCalledTimes(1); + expect(onDisconnect).toHaveBeenCalledTimes(1); + + await server.stop(); +}); + +test("handles multiple clients", async () => { + const port = await getRandomPort(); + + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + const client1 = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport1 = new SSEClientTransport( + new 
URL(`http://localhost:${port}/sse`), + ); + + await client1.connect(transport1); + + const client2 = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport2 = new SSEClientTransport( + new URL(`http://localhost:${port}/sse`), + ); + + await client2.connect(transport2); + + await delay(100); + + expect(server.sessions).toEqual([ + expect.any(FastMCPSession), + expect.any(FastMCPSession), + ]); + + await server.stop(); +}); + +test("session knows about client capabilities", async () => { + await runWithTestServer({ + client: async () => { + const client = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: { + roots: { + listChanged: true, + }, + }, + }, + ); + + client.setRequestHandler(ListRootsRequestSchema, () => { + return { + roots: [ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + ], + }; + }); + + return client; + }, + run: async ({ session }) => { + expect(session.clientCapabilities).toEqual({ + roots: { + listChanged: true, + }, + }); + }, + }); +}); + +test("session knows about roots", async () => { + await runWithTestServer({ + client: async () => { + const client = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: { + roots: { + listChanged: true, + }, + }, + }, + ); + + client.setRequestHandler(ListRootsRequestSchema, () => { + return { + roots: [ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + ], + }; + }); + + return client; + }, + run: async ({ session }) => { + expect(session.roots).toEqual([ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + ]); + }, + }); +}); + +test("session listens to roots changes", async () => { + let clientRoots: Root[] = [ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + ]; + + await runWithTestServer({ + client: async () => { + const 
client = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: { + roots: { + listChanged: true, + }, + }, + }, + ); + + client.setRequestHandler(ListRootsRequestSchema, () => { + return { + roots: clientRoots, + }; + }); + + return client; + }, + run: async ({ session, client }) => { + expect(session.roots).toEqual([ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + ]); + + clientRoots.push({ + uri: "file:///home/user/projects/backend", + name: "Backend Repository", + }); + + await client.sendRootsListChanged(); + + const onRootsChanged = vi.fn(); + + session.on("rootsChanged", onRootsChanged); + + await delay(100); + + expect(session.roots).toEqual([ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + { + uri: "file:///home/user/projects/backend", + name: "Backend Repository", + }, + ]); + + expect(onRootsChanged).toHaveBeenCalledTimes(1); + expect(onRootsChanged).toHaveBeenCalledWith({ + roots: [ + { + uri: "file:///home/user/projects/frontend", + name: "Frontend Repository", + }, + { + uri: "file:///home/user/projects/backend", + name: "Backend Repository", + }, + ], + }); + }, + }); +}); + +test("session sends pings to the client", async () => { + await runWithTestServer({ + run: async ({ client }) => { + const onPing = vi.fn().mockReturnValue({}); + + client.setRequestHandler(PingRequestSchema, onPing); + + await delay(2000); + + expect(onPing).toHaveBeenCalledTimes(1); + }, + }); +}); + +test("completes prompt arguments", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addPrompt({ + name: "countryPoem", + description: "Writes a poem about a country", + load: async ({ name }) => { + return `Hello, ${name}!`; + }, + arguments: [ + { + name: "name", + description: "Name of the country", + required: true, + complete: async (value) => { + if (value === 
"Germ") { + return { + values: ["Germany"], + }; + } + + return { + values: [], + }; + }, + }, + ], + }); + + return server; + }, + run: async ({ client }) => { + const response = await client.complete({ + ref: { + type: "ref/prompt", + name: "countryPoem", + }, + argument: { + name: "name", + value: "Germ", + }, + }); + + expect(response).toEqual({ + completion: { + values: ["Germany"], + }, + }); + }, + }); +}); + +test("adds automatic prompt argument completion when enum is provided", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addPrompt({ + name: "countryPoem", + description: "Writes a poem about a country", + load: async ({ name }) => { + return `Hello, ${name}!`; + }, + arguments: [ + { + name: "name", + description: "Name of the country", + required: true, + enum: ["Germany", "France", "Italy"], + }, + ], + }); + + return server; + }, + run: async ({ client }) => { + const response = await client.complete({ + ref: { + type: "ref/prompt", + name: "countryPoem", + }, + argument: { + name: "name", + value: "Germ", + }, + }); + + expect(response).toEqual({ + completion: { + values: ["Germany"], + total: 1, + }, + }); + }, + }); +}); + +test("completes template resource arguments", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addResourceTemplate({ + uriTemplate: "issue:///{issueId}", + name: "Issue", + mimeType: "text/plain", + arguments: [ + { + name: "issueId", + description: "ID of the issue", + complete: async (value) => { + if (value === "123") { + return { + values: ["123456"], + }; + } + + return { + values: [], + }; + }, + }, + ], + load: async ({ issueId }) => { + return { + text: `Issue ${issueId}`, + }; + }, + }); + + return server; + }, + run: async ({ client }) => { + const response = await client.complete({ + ref: { + type: "ref/resource", + 
uri: "issue:///{issueId}",
+        },
+        argument: {
+          name: "issueId",
+          value: "123",
+        },
+      });
+
+      expect(response).toEqual({
+        completion: {
+          values: ["123456"],
+        },
+      });
+    },
+  });
+});
+
+test("lists resource templates", async () => {
+  await runWithTestServer({
+    server: async () => {
+      const server = new FastMCP({
+        name: "Test",
+        version: "1.0.0",
+      });
+
+      server.addResourceTemplate({
+        uriTemplate: "file:///logs/{name}.log",
+        name: "Application Logs",
+        mimeType: "text/plain",
+        arguments: [
+          {
+            name: "name",
+            description: "Name of the log",
+            required: true,
+          },
+        ],
+        load: async ({ name }) => {
+          return {
+            text: `Example log content for ${name}`,
+          };
+        },
+      });
+
+      return server;
+    },
+    run: async ({ client }) => {
+      expect(await client.listResourceTemplates()).toEqual({
+        resourceTemplates: [
+          {
+            name: "Application Logs",
+            uriTemplate: "file:///logs/{name}.log",
+          },
+        ],
+      });
+    },
+  });
+});
+
+test("client reads a resource accessed via a resource template", async () => {
+  const loadSpy = vi.fn((_args) => {
+    return {
+      text: "Example log content",
+    };
+  });
+
+  await runWithTestServer({
+    server: async () => {
+      const server = new FastMCP({
+        name: "Test",
+        version: "1.0.0",
+      });
+
+      server.addResourceTemplate({
+        uriTemplate: "file:///logs/{name}.log",
+        name: "Application Logs",
+        mimeType: "text/plain",
+        arguments: [
+          {
+            name: "name",
+            description: "Name of the log",
+          },
+        ],
+        async load(args) {
+          return loadSpy(args);
+        },
+      });
+
+      return server;
+    },
+    run: async ({ client }) => {
+      expect(
+        await client.readResource({
+          uri: "file:///logs/app.log",
+        }),
+      ).toEqual({
+        contents: [
+          {
+            uri: "file:///logs/app.log",
+            name: "Application Logs",
+            text: "Example log content",
+            mimeType: "text/plain",
+          },
+        ],
+      });
+
+      expect(loadSpy).toHaveBeenCalledWith({
+        name: "app",
+      });
+    },
+  });
+});
+
+test("makes a sampling request", async () => {
+  const onMessageRequest = vi.fn(() => {
+    return {
+      model: "gpt-3.5-turbo",
role: "assistant", + content: { + type: "text", + text: "The files are in the current directory.", + }, + }; + }); + + await runWithTestServer({ + client: async () => { + const client = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + return client; + }, + run: async ({ client, session }) => { + client.setRequestHandler(CreateMessageRequestSchema, onMessageRequest); + + const response = await session.requestSampling({ + messages: [ + { + role: "user", + content: { + type: "text", + text: "What files are in the current directory?", + }, + }, + ], + systemPrompt: "You are a helpful file system assistant.", + includeContext: "thisServer", + maxTokens: 100, + }); + + expect(response).toEqual({ + model: "gpt-3.5-turbo", + role: "assistant", + content: { + type: "text", + text: "The files are in the current directory.", + }, + }); + + expect(onMessageRequest).toHaveBeenCalledTimes(1); + }, + }); +}); + +test("throws ErrorCode.InvalidParams if tool parameters do not match zod schema", async () => { + await runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, + }); + + return server; + }, + run: async ({ client }) => { + try { + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: "invalid", + }, + }); + } catch (error) { + expect(error).toBeInstanceOf(McpError); + + // @ts-expect-error - we know that error is an McpError + expect(error.code).toBe(ErrorCode.InvalidParams); + + // @ts-expect-error - we know that error is an McpError + expect(error.message).toBe("MCP error -32602: MCP error -32602: Invalid add parameters"); + } + }, + }); +}); + +test("server remains usable after InvalidParams error", async () => { + await 
runWithTestServer({ + server: async () => { + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, + }); + + return server; + }, + run: async ({ client }) => { + try { + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: "invalid", + }, + }); + } catch (error) { + expect(error).toBeInstanceOf(McpError); + + // @ts-expect-error - we know that error is an McpError + expect(error.code).toBe(ErrorCode.InvalidParams); + + // @ts-expect-error - we know that error is an McpError + expect(error.message).toBe("MCP error -32602: MCP error -32602: Invalid add parameters"); + } + + expect( + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [{ type: "text", text: "3" }], + }); + }, + }); +}); + +test("allows new clients to connect after a client disconnects", async () => { + const port = await getRandomPort(); + + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + const client1 = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport1 = new SSEClientTransport( + new URL(`http://localhost:${port}/sse`), + ); + + await client1.connect(transport1); + + expect( + await client1.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [{ type: "text", text: "3" }], + }); + + await client1.close(); + + const client2 = new Client( + { + name: "example-client", + version: 
"1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport2 = new SSEClientTransport( + new URL(`http://localhost:${port}/sse`), + ); + + await client2.connect(transport2); + + expect( + await client2.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [{ type: "text", text: "3" }], + }); + + await client2.close(); + + await server.stop(); +}); + +test("able to close server immediately after starting it", async () => { + const port = await getRandomPort(); + + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + // We were previously not waiting for the server to start. + // Therefore, this would have caused error 'Server is not running.'. + await server.stop(); +}); + +test("closing event source does not produce error", async () => { + const port = await getRandomPort(); + + const server = new FastMCP({ + name: "Test", + version: "1.0.0", + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + const eventSource = await new Promise((onMessage) => { + const eventSource = createEventSource({ + onConnect: () => { + console.info('connected'); + }, + onDisconnect: () => { + console.info('disconnected'); + }, + onMessage: () => { + onMessage(eventSource); + }, + url: `http://127.0.0.1:${port}/sse`, + }); + }); + + expect(eventSource.readyState).toBe('open'); + + eventSource.close(); + + // We were getting unhandled error 'Not connected' + // https://github.com/punkpeye/mcp-proxy/commit/62cf27d5e3dfcbc353e8d03c7714a62c37177b52 + await delay(1000); + + await server.stop(); +}); + +test("provides auth to tools", async () => { + const port = await 
getRandomPort(); + + const authenticate = vi.fn(async () => { + return { + id: 1, + }; + }); + + const server = new FastMCP<{id: number}>({ + name: "Test", + version: "1.0.0", + authenticate, + }); + + const execute = vi.fn(async (args) => { + return String(args.a + args.b); + }); + + server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute, + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + const client = new Client( + { + name: "example-client", + version: "1.0.0", + }, + { + capabilities: {}, + }, + ); + + const transport = new SSEClientTransport( + new URL(`http://localhost:${port}/sse`), + { + eventSourceInit: { + fetch: async (url, init) => { + return fetch(url, { + ...init, + headers: { + ...init?.headers, + "x-api-key": "123", + }, + }); + }, + }, + }, + ); + + await client.connect(transport); + + expect(authenticate, "authenticate should have been called").toHaveBeenCalledTimes(1); + + expect( + await client.callTool({ + name: "add", + arguments: { + a: 1, + b: 2, + }, + }), + ).toEqual({ + content: [{ type: "text", text: "3" }], + }); + + expect(execute, "execute should have been called").toHaveBeenCalledTimes(1); + + expect(execute).toHaveBeenCalledWith({ + a: 1, + b: 2, + }, { + log: { + debug: expect.any(Function), + error: expect.any(Function), + info: expect.any(Function), + warn: expect.any(Function), + }, + reportProgress: expect.any(Function), + session: { id: 1 }, + }); +}); + +test("blocks unauthorized requests", async () => { + const port = await getRandomPort(); + + const server = new FastMCP<{id: number}>({ + name: "Test", + version: "1.0.0", + authenticate: async () => { + throw new Response(null, { + status: 401, + statusText: "Unauthorized", + }); + }, + }); + + await server.start({ + transportType: "sse", + sse: { + endpoint: "/sse", + port, + }, + }); + + const client = new Client( + { + name: 
"example-client",
+      version: "1.0.0",
+    },
+    {
+      capabilities: {},
+    },
+  );
+
+  const transport = new SSEClientTransport(
+    new URL(`http://localhost:${port}/sse`),
+  );
+
+  await expect(async () => {
+    await client.connect(transport);
+  }).rejects.toThrow("SSE error: Non-200 status code (401)");
+});
+
+
+---
+File: /src/FastMCP.ts
+---
+
+import { Server } from "@modelcontextprotocol/sdk/server/index.js";
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+import {
+  CallToolRequestSchema,
+  ClientCapabilities,
+  CompleteRequestSchema,
+  CreateMessageRequestSchema,
+  ErrorCode,
+  GetPromptRequestSchema,
+  ListPromptsRequestSchema,
+  ListResourcesRequestSchema,
+  ListResourceTemplatesRequestSchema,
+  ListToolsRequestSchema,
+  McpError,
+  ReadResourceRequestSchema,
+  Root,
+  RootsListChangedNotificationSchema,
+  ServerCapabilities,
+  SetLevelRequestSchema,
+} from "@modelcontextprotocol/sdk/types.js";
+import { zodToJsonSchema } from "zod-to-json-schema";
+import { z } from "zod";
+import { setTimeout as delay } from "timers/promises";
+import { readFile } from "fs/promises";
+import { fileTypeFromBuffer } from "file-type";
+import { StrictEventEmitter } from "strict-event-emitter-types";
+import { EventEmitter } from "events";
+import Fuse from "fuse.js";
+import { startSSEServer } from "mcp-proxy";
+import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js";
+import parseURITemplate from "uri-templates";
+import http from "http";
+import {
+  fetch
+} from "undici";
+
+export type SSEServer = {
+  close: () => Promise<void>;
+};
+
+type FastMCPEvents<T extends FastMCPSessionAuth> = {
+  connect: (event: { session: FastMCPSession<T> }) => void;
+  disconnect: (event: { session: FastMCPSession<T> }) => void;
+};
+
+type FastMCPSessionEvents = {
+  rootsChanged: (event: { roots: Root[] }) => void;
+  error: (event: { error: Error }) => void;
+};
+
+/**
+ * Generates an image content object from a URL, file path, or buffer.
+ */
+export const imageContent = async (
+  input: { url: string } | { path: string } | { buffer: Buffer },
+): Promise<ImageContent> => {
+  let rawData: Buffer;
+
+  if ("url" in input) {
+    const response = await fetch(input.url);
+
+    if (!response.ok) {
+      throw new Error(`Failed to fetch image from URL: ${response.statusText}`);
+    }
+
+    rawData = Buffer.from(await response.arrayBuffer());
+  } else if ("path" in input) {
+    rawData = await readFile(input.path);
+  } else if ("buffer" in input) {
+    rawData = input.buffer;
+  } else {
+    throw new Error(
+      "Invalid input: Provide a valid 'url', 'path', or 'buffer'",
+    );
+  }
+
+  const mimeType = await fileTypeFromBuffer(rawData);
+
+  const base64Data = rawData.toString("base64");
+
+  return {
+    type: "image",
+    data: base64Data,
+    mimeType: mimeType?.mime ?? "image/png",
+  } as const;
+};
+
+abstract class FastMCPError extends Error {
+  public constructor(message?: string) {
+    super(message);
+    this.name = new.target.name;
+  }
+}
+
+type Extra = unknown;
+
+type Extras = Record<string, Extra>;
+
+export class UnexpectedStateError extends FastMCPError {
+  public extras?: Extras;
+
+  public constructor(message: string, extras?: Extras) {
+    super(message);
+    this.name = new.target.name;
+    this.extras = extras;
+  }
+}
+
+/**
+ * An error that is meant to be surfaced to the user.
+ */
+export class UserError extends UnexpectedStateError {}
+
+type ToolParameters = z.ZodTypeAny;
+
+type Literal = boolean | null | number | string | undefined;
+
+type SerializableValue =
+  | Literal
+  | SerializableValue[]
+  | { [key: string]: SerializableValue };
+
+type Progress = {
+  /**
+   * The progress thus far. This should increase every time progress is made, even if the total is unknown.
+   */
+  progress: number;
+  /**
+   * Total number of items to process (or total progress required), if known.
+   */
+  total?: number;
+};
+
+type Context<T extends FastMCPSessionAuth> = {
+  session: T | undefined;
+  reportProgress: (progress: Progress) => Promise<void>;
+  log: {
+    debug: (message: string, data?: SerializableValue) => void;
+    error: (message: string, data?: SerializableValue) => void;
+    info: (message: string, data?: SerializableValue) => void;
+    warn: (message: string, data?: SerializableValue) => void;
+  };
+};
+
+type TextContent = {
+  type: "text";
+  text: string;
+};
+
+const TextContentZodSchema = z
+  .object({
+    type: z.literal("text"),
+    /**
+     * The text content of the message.
+     */
+    text: z.string(),
+  })
+  .strict() satisfies z.ZodType<TextContent>;
+
+type ImageContent = {
+  type: "image";
+  data: string;
+  mimeType: string;
+};
+
+const ImageContentZodSchema = z
+  .object({
+    type: z.literal("image"),
+    /**
+     * The base64-encoded image data.
+     */
+    data: z.string().base64(),
+    /**
+     * The MIME type of the image. Different providers may support different image types.
+     */
+    mimeType: z.string(),
+  })
+  .strict() satisfies z.ZodType<ImageContent>;
+
+type Content = TextContent | ImageContent;
+
+const ContentZodSchema = z.discriminatedUnion("type", [
+  TextContentZodSchema,
+  ImageContentZodSchema,
+]) satisfies z.ZodType<Content>;
+
+type ContentResult = {
+  content: Content[];
+  isError?: boolean;
+};
+
+const ContentResultZodSchema = z
+  .object({
+    content: ContentZodSchema.array(),
+    isError: z.boolean().optional(),
+  })
+  .strict() satisfies z.ZodType<ContentResult>;
+
+type Completion = {
+  values: string[];
+  total?: number;
+  hasMore?: boolean;
+};
+
+/**
+ * https://github.com/modelcontextprotocol/typescript-sdk/blob/3164da64d085ec4e022ae881329eee7b72f208d4/src/types.ts#L983-L1003
+ */
+const CompletionZodSchema = z.object({
+  /**
+   * An array of completion values. Must not exceed 100 items.
+   */
+  values: z.array(z.string()).max(100),
+  /**
+   * The total number of completion options available. This can exceed the number of values actually sent in the response.
+   */
+  total: z.optional(z.number().int()),
+  /**
+   * Indicates whether there are additional completion options beyond those provided in the current response, even if the exact total is unknown.
+   */
+  hasMore: z.optional(z.boolean()),
+}) satisfies z.ZodType<Completion>;
+
+type Tool<
+  T extends FastMCPSessionAuth,
+  Params extends ToolParameters = ToolParameters,
+> = {
+  name: string;
+  description?: string;
+  parameters?: Params;
+  execute: (
+    args: z.infer<Params>,
+    context: Context<T>,
+  ) => Promise<string | ContentResult | TextContent | ImageContent>;
+};
+
+type ResourceResult =
+  | {
+      text: string;
+    }
+  | {
+      blob: string;
+    };
+
+type InputResourceTemplateArgument = Readonly<{
+  name: string;
+  description?: string;
+  complete?: ArgumentValueCompleter;
+}>;
+
+type ResourceTemplateArgument = Readonly<{
+  name: string;
+  description?: string;
+  complete?: ArgumentValueCompleter;
+}>;
+
+type ResourceTemplate<
+  Arguments extends ResourceTemplateArgument[] = ResourceTemplateArgument[],
+> = {
+  uriTemplate: string;
+  name: string;
+  description?: string;
+  mimeType?: string;
+  arguments: Arguments;
+  complete?: (name: string, value: string) => Promise<Completion>;
+  load: (
+    args: ResourceTemplateArgumentsToObject<Arguments>,
+  ) => Promise<ResourceResult>;
+};
+
+type ResourceTemplateArgumentsToObject<T extends ResourceTemplateArgument[]> = {
+  [K in T[number]["name"]]: string;
+};
+
+type InputResourceTemplate<
+  Arguments extends ResourceTemplateArgument[] = ResourceTemplateArgument[],
+> = {
+  uriTemplate: string;
+  name: string;
+  description?: string;
+  mimeType?: string;
+  arguments: Arguments;
+  load: (
+    args: ResourceTemplateArgumentsToObject<Arguments>,
+  ) => Promise<ResourceResult>;
+};
+
+type Resource = {
+  uri: string;
+  name: string;
+  description?: string;
+  mimeType?: string;
+  load: () => Promise<ResourceResult | ResourceResult[]>;
+  complete?: (name: string, value: string) => Promise<Completion>;
+};
+
+type ArgumentValueCompleter = (value: string) => Promise<Completion>;
+
+type InputPromptArgument = Readonly<{
+  name: string;
+  description?: string;
+  required?: boolean;
+  complete?: ArgumentValueCompleter;
+  enum?: string[];
+}>;
+
+type PromptArgumentsToObject<T extends { name: string; required?: boolean }[]> =
+  {
+    [K in T[number]["name"]]: Extract<
+      T[number],
+      { name: K }
+    >["required"]
extends true
+      ? string
+      : string | undefined;
+  };
+
+type InputPrompt<
+  Arguments extends InputPromptArgument[] = InputPromptArgument[],
+  Args = PromptArgumentsToObject<Arguments>,
+> = {
+  name: string;
+  description?: string;
+  arguments?: InputPromptArgument[];
+  load: (args: Args) => Promise<string>;
+};
+
+type PromptArgument = Readonly<{
+  name: string;
+  description?: string;
+  required?: boolean;
+  complete?: ArgumentValueCompleter;
+  enum?: string[];
+}>;
+
+type Prompt<
+  Arguments extends PromptArgument[] = PromptArgument[],
+  Args = PromptArgumentsToObject<Arguments>,
+> = {
+  arguments?: PromptArgument[];
+  complete?: (name: string, value: string) => Promise<Completion>;
+  description?: string;
+  load: (args: Args) => Promise<string>;
+  name: string;
+};
+
+type ServerOptions<T extends FastMCPSessionAuth> = {
+  name: string;
+  version: `${number}.${number}.${number}`;
+  authenticate?: Authenticate<T>;
+};
+
+type LoggingLevel =
+  | "debug"
+  | "info"
+  | "notice"
+  | "warning"
+  | "error"
+  | "critical"
+  | "alert"
+  | "emergency";
+
+const FastMCPSessionEventEmitterBase: {
+  new (): StrictEventEmitter<EventEmitter, FastMCPSessionEvents>;
+} = EventEmitter;
+
+class FastMCPSessionEventEmitter extends FastMCPSessionEventEmitterBase {}
+
+type SamplingResponse = {
+  model: string;
+  stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string;
+  role: "user" | "assistant";
+  content: TextContent | ImageContent;
+};
+
+type FastMCPSessionAuth = Record<string, unknown> | undefined;
+
+export class FastMCPSession<
+  T extends FastMCPSessionAuth = FastMCPSessionAuth,
+> extends FastMCPSessionEventEmitter {
+  #capabilities: ServerCapabilities = {};
+  #clientCapabilities?: ClientCapabilities;
+  #loggingLevel: LoggingLevel = "info";
+  #prompts: Prompt[] = [];
+  #resources: Resource[] = [];
+  #resourceTemplates: ResourceTemplate[] = [];
+  #roots: Root[] = [];
+  #server: Server;
+  #auth: T | undefined;
+
+  constructor({
+    auth,
+    name,
+    version,
+    tools,
+    resources,
+    resourcesTemplates,
+    prompts,
+  }: {
+    auth?: T;
+    name: string;
+    version: string;
+    tools: Tool<T>[];
+    resources: Resource[];
+    resourcesTemplates: InputResourceTemplate[];
+
prompts: Prompt[]; + }) { + super(); + + this.#auth = auth; + + if (tools.length) { + this.#capabilities.tools = {}; + } + + if (resources.length || resourcesTemplates.length) { + this.#capabilities.resources = {}; + } + + if (prompts.length) { + for (const prompt of prompts) { + this.addPrompt(prompt); + } + + this.#capabilities.prompts = {}; + } + + this.#capabilities.logging = {}; + + this.#server = new Server( + { name: name, version: version }, + { capabilities: this.#capabilities }, + ); + + this.setupErrorHandling(); + this.setupLoggingHandlers(); + this.setupRootsHandlers(); + this.setupCompleteHandlers(); + + if (tools.length) { + this.setupToolHandlers(tools); + } + + if (resources.length || resourcesTemplates.length) { + for (const resource of resources) { + this.addResource(resource); + } + + this.setupResourceHandlers(resources); + + if (resourcesTemplates.length) { + for (const resourceTemplate of resourcesTemplates) { + this.addResourceTemplate(resourceTemplate); + } + + this.setupResourceTemplateHandlers(resourcesTemplates); + } + } + + if (prompts.length) { + this.setupPromptHandlers(prompts); + } + } + + private addResource(inputResource: Resource) { + this.#resources.push(inputResource); + } + + private addResourceTemplate(inputResourceTemplate: InputResourceTemplate) { + const completers: Record = {}; + + for (const argument of inputResourceTemplate.arguments ?? []) { + if (argument.complete) { + completers[argument.name] = argument.complete; + } + } + + const resourceTemplate = { + ...inputResourceTemplate, + complete: async (name: string, value: string) => { + if (completers[name]) { + return await completers[name](value); + } + + return { + values: [], + }; + }, + }; + + this.#resourceTemplates.push(resourceTemplate); + } + + private addPrompt(inputPrompt: InputPrompt) { + const completers: Record = {}; + const enums: Record = {}; + + for (const argument of inputPrompt.arguments ?? 
[]) { + if (argument.complete) { + completers[argument.name] = argument.complete; + } + + if (argument.enum) { + enums[argument.name] = argument.enum; + } + } + + const prompt = { + ...inputPrompt, + complete: async (name: string, value: string) => { + if (completers[name]) { + return await completers[name](value); + } + + if (enums[name]) { + const fuse = new Fuse(enums[name], { + keys: ["value"], + }); + + const result = fuse.search(value); + + return { + values: result.map((item) => item.item), + total: result.length, + }; + } + + return { + values: [], + }; + }, + }; + + this.#prompts.push(prompt); + } + + public get clientCapabilities(): ClientCapabilities | null { + return this.#clientCapabilities ?? null; + } + + public get server(): Server { + return this.#server; + } + + #pingInterval: ReturnType | null = null; + + public async requestSampling( + message: z.infer["params"], + ): Promise { + return this.#server.createMessage(message); + } + + public async connect(transport: Transport) { + if (this.#server.transport) { + throw new UnexpectedStateError("Server is already connected"); + } + + await this.#server.connect(transport); + + let attempt = 0; + + while (attempt++ < 10) { + const capabilities = await this.#server.getClientCapabilities(); + + if (capabilities) { + this.#clientCapabilities = capabilities; + + break; + } + + await delay(100); + } + + if (!this.#clientCapabilities) { + console.warn('[warning] FastMCP could not infer client capabilities') + } + + if (this.#clientCapabilities?.roots?.listChanged) { + try { + const roots = await this.#server.listRoots(); + this.#roots = roots.roots; + } catch(e) { + console.error(`[error] FastMCP received error listing roots.\n\n${e instanceof Error ? 
e.stack : JSON.stringify(e)}`) + } + } + + this.#pingInterval = setInterval(async () => { + try { + await this.#server.ping(); + } catch (error) { + this.emit("error", { + error: error as Error, + }); + } + }, 1000); + } + + public get roots(): Root[] { + return this.#roots; + } + + public async close() { + if (this.#pingInterval) { + clearInterval(this.#pingInterval); + } + + try { + await this.#server.close(); + } catch (error) { + console.error("[MCP Error]", "could not close server", error); + } + } + + private setupErrorHandling() { + this.#server.onerror = (error) => { + console.error("[MCP Error]", error); + }; + } + + public get loggingLevel(): LoggingLevel { + return this.#loggingLevel; + } + + private setupCompleteHandlers() { + this.#server.setRequestHandler(CompleteRequestSchema, async (request) => { + if (request.params.ref.type === "ref/prompt") { + const prompt = this.#prompts.find( + (prompt) => prompt.name === request.params.ref.name, + ); + + if (!prompt) { + throw new UnexpectedStateError("Unknown prompt", { + request, + }); + } + + if (!prompt.complete) { + throw new UnexpectedStateError("Prompt does not support completion", { + request, + }); + } + + const completion = CompletionZodSchema.parse( + await prompt.complete( + request.params.argument.name, + request.params.argument.value, + ), + ); + + return { + completion, + }; + } + + if (request.params.ref.type === "ref/resource") { + const resource = this.#resourceTemplates.find( + (resource) => resource.uriTemplate === request.params.ref.uri, + ); + + if (!resource) { + throw new UnexpectedStateError("Unknown resource", { + request, + }); + } + + if (!("uriTemplate" in resource)) { + throw new UnexpectedStateError("Unexpected resource"); + } + + if (!resource.complete) { + throw new UnexpectedStateError( + "Resource does not support completion", + { + request, + }, + ); + } + + const completion = CompletionZodSchema.parse( + await resource.complete( + request.params.argument.name, + 
request.params.argument.value, + ), + ); + + return { + completion, + }; + } + + throw new UnexpectedStateError("Unexpected completion request", { + request, + }); + }); + } + + private setupRootsHandlers() { + this.#server.setNotificationHandler( + RootsListChangedNotificationSchema, + () => { + this.#server.listRoots().then((roots) => { + this.#roots = roots.roots; + + this.emit("rootsChanged", { + roots: roots.roots, + }); + }); + }, + ); + } + + private setupLoggingHandlers() { + this.#server.setRequestHandler(SetLevelRequestSchema, (request) => { + this.#loggingLevel = request.params.level; + + return {}; + }); + } + + private setupToolHandlers(tools: Tool[]) { + this.#server.setRequestHandler(ListToolsRequestSchema, async () => { + return { + tools: tools.map((tool) => { + return { + name: tool.name, + description: tool.description, + inputSchema: tool.parameters + ? zodToJsonSchema(tool.parameters) + : undefined, + }; + }), + }; + }); + + this.#server.setRequestHandler(CallToolRequestSchema, async (request) => { + const tool = tools.find((tool) => tool.name === request.params.name); + + if (!tool) { + throw new McpError( + ErrorCode.MethodNotFound, + `Unknown tool: ${request.params.name}`, + ); + } + + let args: any = undefined; + + if (tool.parameters) { + const parsed = tool.parameters.safeParse(request.params.arguments); + + if (!parsed.success) { + throw new McpError( + ErrorCode.InvalidParams, + `Invalid ${request.params.name} parameters`, + ); + } + + args = parsed.data; + } + + const progressToken = request.params?._meta?.progressToken; + + let result: ContentResult; + + try { + const reportProgress = async (progress: Progress) => { + await this.#server.notification({ + method: "notifications/progress", + params: { + ...progress, + progressToken, + }, + }); + }; + + const log = { + debug: (message: string, context?: SerializableValue) => { + this.#server.sendLoggingMessage({ + level: "debug", + data: { + message, + context, + }, + }); + }, + error: 
(message: string, context?: SerializableValue) => { + this.#server.sendLoggingMessage({ + level: "error", + data: { + message, + context, + }, + }); + }, + info: (message: string, context?: SerializableValue) => { + this.#server.sendLoggingMessage({ + level: "info", + data: { + message, + context, + }, + }); + }, + warn: (message: string, context?: SerializableValue) => { + this.#server.sendLoggingMessage({ + level: "warning", + data: { + message, + context, + }, + }); + }, + }; + + const maybeStringResult = await tool.execute(args, { + reportProgress, + log, + session: this.#auth, + }); + + if (typeof maybeStringResult === "string") { + result = ContentResultZodSchema.parse({ + content: [{ type: "text", text: maybeStringResult }], + }); + } else if ("type" in maybeStringResult) { + result = ContentResultZodSchema.parse({ + content: [maybeStringResult], + }); + } else { + result = ContentResultZodSchema.parse(maybeStringResult); + } + } catch (error) { + if (error instanceof UserError) { + return { + content: [{ type: "text", text: error.message }], + isError: true, + }; + } + + return { + content: [{ type: "text", text: `Error: ${error}` }], + isError: true, + }; + } + + return result; + }); + } + + private setupResourceHandlers(resources: Resource[]) { + this.#server.setRequestHandler(ListResourcesRequestSchema, async () => { + return { + resources: resources.map((resource) => { + return { + uri: resource.uri, + name: resource.name, + mimeType: resource.mimeType, + }; + }), + }; + }); + + this.#server.setRequestHandler( + ReadResourceRequestSchema, + async (request) => { + if ("uri" in request.params) { + const resource = resources.find( + (resource) => + "uri" in resource && resource.uri === request.params.uri, + ); + + if (!resource) { + for (const resourceTemplate of this.#resourceTemplates) { + const uriTemplate = parseURITemplate( + resourceTemplate.uriTemplate, + ); + + const match = uriTemplate.fromUri(request.params.uri); + + if (!match) { + continue; + } 
+ + const uri = uriTemplate.fill(match); + + const result = await resourceTemplate.load(match); + + return { + contents: [ + { + uri: uri, + mimeType: resourceTemplate.mimeType, + name: resourceTemplate.name, + ...result, + }, + ], + }; + } + + throw new McpError( + ErrorCode.MethodNotFound, + `Unknown resource: ${request.params.uri}`, + ); + } + + if (!("uri" in resource)) { + throw new UnexpectedStateError("Resource does not support reading"); + } + + let maybeArrayResult: Awaited>; + + try { + maybeArrayResult = await resource.load(); + } catch (error) { + throw new McpError( + ErrorCode.InternalError, + `Error reading resource: ${error}`, + { + uri: resource.uri, + }, + ); + } + + if (Array.isArray(maybeArrayResult)) { + return { + contents: maybeArrayResult.map((result) => ({ + uri: resource.uri, + mimeType: resource.mimeType, + name: resource.name, + ...result, + })), + }; + } else { + return { + contents: [ + { + uri: resource.uri, + mimeType: resource.mimeType, + name: resource.name, + ...maybeArrayResult, + }, + ], + }; + } + } + + throw new UnexpectedStateError("Unknown resource request", { + request, + }); + }, + ); + } + + private setupResourceTemplateHandlers(resourceTemplates: ResourceTemplate[]) { + this.#server.setRequestHandler( + ListResourceTemplatesRequestSchema, + async () => { + return { + resourceTemplates: resourceTemplates.map((resourceTemplate) => { + return { + name: resourceTemplate.name, + uriTemplate: resourceTemplate.uriTemplate, + }; + }), + }; + }, + ); + } + + private setupPromptHandlers(prompts: Prompt[]) { + this.#server.setRequestHandler(ListPromptsRequestSchema, async () => { + return { + prompts: prompts.map((prompt) => { + return { + name: prompt.name, + description: prompt.description, + arguments: prompt.arguments, + complete: prompt.complete, + }; + }), + }; + }); + + this.#server.setRequestHandler(GetPromptRequestSchema, async (request) => { + const prompt = prompts.find( + (prompt) => prompt.name === request.params.name, 
+ ); + + if (!prompt) { + throw new McpError( + ErrorCode.MethodNotFound, + `Unknown prompt: ${request.params.name}`, + ); + } + + const args = request.params.arguments; + + for (const arg of prompt.arguments ?? []) { + if (arg.required && !(args && arg.name in args)) { + throw new McpError( + ErrorCode.InvalidRequest, + `Missing required argument: ${arg.name}`, + ); + } + } + + let result: Awaited>; + + try { + result = await prompt.load(args as Record); + } catch (error) { + throw new McpError( + ErrorCode.InternalError, + `Error loading prompt: ${error}`, + ); + } + + return { + description: prompt.description, + messages: [ + { + role: "user", + content: { type: "text", text: result }, + }, + ], + }; + }); + } +} + +const FastMCPEventEmitterBase: { + new (): StrictEventEmitter>; +} = EventEmitter; + +class FastMCPEventEmitter extends FastMCPEventEmitterBase {} + +type Authenticate = (request: http.IncomingMessage) => Promise; + +export class FastMCP | undefined = undefined> extends FastMCPEventEmitter { + #options: ServerOptions; + #prompts: InputPrompt[] = []; + #resources: Resource[] = []; + #resourcesTemplates: InputResourceTemplate[] = []; + #sessions: FastMCPSession[] = []; + #sseServer: SSEServer | null = null; + #tools: Tool[] = []; + #authenticate: Authenticate | undefined; + + constructor(public options: ServerOptions) { + super(); + + this.#options = options; + this.#authenticate = options.authenticate; + } + + public get sessions(): FastMCPSession[] { + return this.#sessions; + } + + /** + * Adds a tool to the server. + */ + public addTool(tool: Tool) { + this.#tools.push(tool as unknown as Tool); + } + + /** + * Adds a resource to the server. + */ + public addResource(resource: Resource) { + this.#resources.push(resource); + } + + /** + * Adds a resource template to the server. 
+ */ + public addResourceTemplate< + const Args extends InputResourceTemplateArgument[], + >(resource: InputResourceTemplate) { + this.#resourcesTemplates.push(resource); + } + + /** + * Adds a prompt to the server. + */ + public addPrompt( + prompt: InputPrompt, + ) { + this.#prompts.push(prompt); + } + + /** + * Starts the server. + */ + public async start( + options: + | { transportType: "stdio" } + | { + transportType: "sse"; + sse: { endpoint: `/${string}`; port: number }; + } = { + transportType: "stdio", + }, + ) { + if (options.transportType === "stdio") { + const transport = new StdioServerTransport(); + + const session = new FastMCPSession({ + name: this.#options.name, + version: this.#options.version, + tools: this.#tools, + resources: this.#resources, + resourcesTemplates: this.#resourcesTemplates, + prompts: this.#prompts, + }); + + await session.connect(transport); + + this.#sessions.push(session); + + this.emit("connect", { + session, + }); + + } else if (options.transportType === "sse") { + this.#sseServer = await startSSEServer>({ + endpoint: options.sse.endpoint as `/${string}`, + port: options.sse.port, + createServer: async (request) => { + let auth: T | undefined; + + if (this.#authenticate) { + auth = await this.#authenticate(request); + } + + return new FastMCPSession({ + auth, + name: this.#options.name, + version: this.#options.version, + tools: this.#tools, + resources: this.#resources, + resourcesTemplates: this.#resourcesTemplates, + prompts: this.#prompts, + }); + }, + onClose: (session) => { + this.emit("disconnect", { + session, + }); + }, + onConnect: async (session) => { + this.#sessions.push(session); + + this.emit("connect", { + session, + }); + }, + }); + + console.info( + `server is running on SSE at http://localhost:${options.sse.port}${options.sse.endpoint}`, + ); + } else { + throw new Error("Invalid transport type"); + } + } + + /** + * Stops the server. 
+ */ + public async stop() { + if (this.#sseServer) { + this.#sseServer.close(); + } + } +} + +export type { Context }; +export type { Tool, ToolParameters }; +export type { Content, TextContent, ImageContent, ContentResult }; +export type { Progress, SerializableValue }; +export type { Resource, ResourceResult }; +export type { ResourceTemplate, ResourceTemplateArgument }; +export type { Prompt, PromptArgument }; +export type { InputPrompt, InputPromptArgument }; +export type { ServerOptions, LoggingLevel }; +export type { FastMCPEvents, FastMCPSessionEvents }; + + + +--- +File: /eslint.config.js +--- + +import perfectionist from "eslint-plugin-perfectionist"; + +export default [perfectionist.configs["recommended-alphabetical"]]; + + + +--- +File: /package.json +--- + +{ + "name": "fastmcp", + "version": "1.0.0", + "main": "dist/FastMCP.js", + "scripts": { + "build": "tsup", + "test": "vitest run && tsc && jsr publish --dry-run", + "format": "prettier --write . && eslint --fix ." + }, + "bin": { + "fastmcp": "dist/bin/fastmcp.js" + }, + "keywords": [ + "MCP", + "SSE" + ], + "type": "module", + "author": "Frank Fiegel ", + "license": "MIT", + "description": "A TypeScript framework for building MCP servers.", + "module": "dist/FastMCP.js", + "types": "dist/FastMCP.d.ts", + "dependencies": { + "@modelcontextprotocol/sdk": "^1.6.0", + "execa": "^9.5.2", + "file-type": "^20.3.0", + "fuse.js": "^7.1.0", + "mcp-proxy": "^2.10.4", + "strict-event-emitter-types": "^2.0.0", + "undici": "^7.4.0", + "uri-templates": "^0.2.0", + "yargs": "^17.7.2", + "zod": "^3.24.2", + "zod-to-json-schema": "^3.24.3" + }, + "repository": { + "url": "https://github.com/punkpeye/fastmcp" + }, + "homepage": "https://glama.ai/mcp", + "release": { + "branches": [ + "main" + ], + "plugins": [ + "@semantic-release/commit-analyzer", + "@semantic-release/release-notes-generator", + "@semantic-release/npm", + "@semantic-release/github", + "@sebbo2002/semantic-release-jsr" + ] + }, + "devDependencies": 
{ + "@sebbo2002/semantic-release-jsr": "^2.0.4", + "@tsconfig/node22": "^22.0.0", + "@types/node": "^22.13.5", + "@types/uri-templates": "^0.1.34", + "@types/yargs": "^17.0.33", + "eslint": "^9.21.0", + "eslint-plugin-perfectionist": "^4.9.0", + "eventsource-client": "^1.1.3", + "get-port-please": "^3.1.2", + "jsr": "^0.13.3", + "prettier": "^3.5.2", + "semantic-release": "^24.2.3", + "tsup": "^8.4.0", + "typescript": "^5.7.3", + "vitest": "^3.0.7" + }, + "tsup": { + "entry": [ + "src/FastMCP.ts", + "src/bin/fastmcp.ts" + ], + "format": [ + "esm" + ], + "dts": true, + "splitting": true, + "sourcemap": true, + "clean": true + } +} + + + +--- +File: /README.md +--- + +# FastMCP + +A TypeScript framework for building [MCP](https://glama.ai/mcp) servers capable of handling client sessions. + +> [!NOTE] +> +> For a Python implementation, see [FastMCP](https://github.com/jlowin/fastmcp). + +## Features + +- Simple Tool, Resource, Prompt definition +- [Authentication](#authentication) +- [Sessions](#sessions) +- [Image content](#returning-an-image) +- [Logging](#logging) +- [Error handling](#errors) +- [SSE](#sse) +- CORS (enabled by default) +- [Progress notifications](#progress) +- [Typed server events](#typed-server-events) +- [Prompt argument auto-completion](#prompt-argument-auto-completion) +- [Sampling](#requestsampling) +- Automated SSE pings +- Roots +- CLI for [testing](#test-with-mcp-cli) and [debugging](#inspect-with-mcp-inspector) + +## Installation + +```bash +npm install fastmcp +``` + +## Quickstart + +```ts +import { FastMCP } from "fastmcp"; +import { z } from "zod"; + +const server = new FastMCP({ + name: "My Server", + version: "1.0.0", +}); + +server.addTool({ + name: "add", + description: "Add two numbers", + parameters: z.object({ + a: z.number(), + b: z.number(), + }), + execute: async (args) => { + return String(args.a + args.b); + }, +}); + +server.start({ + transportType: "stdio", +}); +``` + +_That's it!_ You have a working MCP server. 
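To make the quickstart concrete, here is a rough, framework-free sketch (not FastMCP internals) of what happens when a client calls the `add` tool: the client sends an MCP `tools/call` request, the server looks up the tool, validates the arguments, runs `execute`, and wraps the returned string in a text content block. The `handleToolsCall` helper below is a hypothetical name used only for illustration.

```typescript
// Hypothetical dispatch sketch -- not FastMCP code, just the shape of a
// tools/call round trip for the quickstart's `add` tool.
type ToolCallParams = { name: string; arguments: { a: number; b: number } };
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

function handleToolsCall(params: ToolCallParams): ToolResult {
  if (params.name !== "add") {
    // Unknown tools are reported as errors rather than crashing the server.
    return { content: [{ type: "text", text: "Unknown tool" }], isError: true };
  }
  const { a, b } = params.arguments;
  const text = String(a + b); // what the tool's execute() returned
  // A plain string result is wrapped in a single text content block.
  return { content: [{ type: "text", text }] };
}

const result = handleToolsCall({ name: "add", arguments: { a: 2, b: 3 } });
console.log(result.content[0].text); // "5"
```

The real server additionally speaks JSON-RPC over a transport and validates arguments with the tool's zod schema, but the lookup-validate-execute-wrap flow is the same.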
+
+You can test the server in the terminal with:
+
+```bash
+git clone https://github.com/punkpeye/fastmcp.git
+cd fastmcp
+
+npm install
+
+# Test the addition server example using CLI:
+npx fastmcp dev src/examples/addition.ts
+# Test the addition server example using MCP Inspector:
+npx fastmcp inspect src/examples/addition.ts
+```
+
+### SSE
+
+You can also run the server with SSE support:
+
+```ts
+server.start({
+  transportType: "sse",
+  sse: {
+    endpoint: "/sse",
+    port: 8080,
+  },
+});
+```
+
+This will start the server and listen for SSE connections on `http://localhost:8080/sse`.
+
+You can then use `SSEClientTransport` to connect to the server:
+
+```ts
+import { Client } from "@modelcontextprotocol/sdk/client/index.js";
+import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
+
+const client = new Client(
+  {
+    name: "example-client",
+    version: "1.0.0",
+  },
+  {
+    capabilities: {},
+  },
+);
+
+const transport = new SSEClientTransport(new URL(`http://localhost:8080/sse`));
+
+await client.connect(transport);
+```
+
+## Core Concepts
+
+### Tools
+
+[Tools](https://modelcontextprotocol.io/docs/concepts/tools) in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions.
+ +```js +server.addTool({ + name: "fetch", + description: "Fetch the content of a url", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + return await fetchWebpageContent(args.url); + }, +}); +``` + +#### Returning a string + +`execute` can return a string: + +```js +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + return "Hello, world!"; + }, +}); +``` + +The latter is equivalent to: + +```js +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + return { + content: [ + { + type: "text", + text: "Hello, world!", + }, + ], + }; + }, +}); +``` + +#### Returning a list + +If you want to return a list of messages, you can return an object with a `content` property: + +```js +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + return { + content: [ + { type: "text", text: "First message" }, + { type: "text", text: "Second message" }, + ], + }; + }, +}); +``` + +#### Returning an image + +Use the `imageContent` to create a content object for an image: + +```js +import { imageContent } from "fastmcp"; + +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + return imageContent({ + url: "https://example.com/image.png", + }); + + // or... + // return imageContent({ + // path: "/path/to/image.png", + // }); + + // or... + // return imageContent({ + // buffer: Buffer.from("iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=", "base64"), + // }); + + // or... + // return { + // content: [ + // await imageContent(...) 
+ // ], + // }; + }, +}); +``` + +The `imageContent` function takes the following options: + +- `url`: The URL of the image. +- `path`: The path to the image file. +- `buffer`: The image data as a buffer. + +Only one of `url`, `path`, or `buffer` must be specified. + +The above example is equivalent to: + +```js +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + return { + content: [ + { + type: "image", + data: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=", + mimeType: "image/png", + }, + ], + }; + }, +}); +``` + +#### Logging + +Tools can log messages to the client using the `log` object in the context object: + +```js +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args, { log }) => { + log.info("Downloading file...", { + url, + }); + + // ... + + log.info("Downloaded file"); + + return "done"; + }, +}); +``` + +The `log` object has the following methods: + +- `debug(message: string, data?: SerializableValue)` +- `error(message: string, data?: SerializableValue)` +- `info(message: string, data?: SerializableValue)` +- `warn(message: string, data?: SerializableValue)` + +#### Errors + +The errors that are meant to be shown to the user should be thrown as `UserError` instances: + +```js +import { UserError } from "fastmcp"; + +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: z.string(), + }), + execute: async (args) => { + if (args.url.startsWith("https://example.com")) { + throw new UserError("This URL is not allowed"); + } + + return "done"; + }, +}); +``` + +#### Progress + +Tools can report progress by calling `reportProgress` in the context object: + +```js +server.addTool({ + name: "download", + description: "Download a file", + parameters: z.object({ + url: 
z.string(), + }), + execute: async (args, { reportProgress }) => { + reportProgress({ + progress: 0, + total: 100, + }); + + // ... + + reportProgress({ + progress: 100, + total: 100, + }); + + return "done"; + }, +}); +``` + +### Resources + +[Resources](https://modelcontextprotocol.io/docs/concepts/resources) represent any kind of data that an MCP server wants to make available to clients. This can include: + +- File contents +- Screenshots and images +- Log files +- And more + +Each resource is identified by a unique URI and can contain either text or binary data. + +```ts +server.addResource({ + uri: "file:///logs/app.log", + name: "Application Logs", + mimeType: "text/plain", + async load() { + return { + text: await readLogFile(), + }; + }, +}); +``` + +> [!NOTE] +> +> `load` can return multiple resources. This could be used, for example, to return a list of files inside a directory when the directory is read. +> +> ```ts +> async load() { +> return [ +> { +> text: "First file content", +> }, +> { +> text: "Second file content", +> }, +> ]; +> } +> ``` + +You can also return binary contents in `load`: + +```ts +async load() { + return { + blob: 'base64-encoded-data' + }; +} +``` + +### Resource templates + +You can also define resource templates: + +```ts +server.addResourceTemplate({ + uriTemplate: "file:///logs/{name}.log", + name: "Application Logs", + mimeType: "text/plain", + arguments: [ + { + name: "name", + description: "Name of the log", + required: true, + }, + ], + async load({ name }) { + return { + text: `Example log content for ${name}`, + }; + }, +}); +``` + +#### Resource template argument auto-completion + +Provide `complete` functions for resource template arguments to enable automatic completion: + +```ts +server.addResourceTemplate({ + uriTemplate: "file:///logs/{name}.log", + name: "Application Logs", + mimeType: "text/plain", + arguments: [ + { + name: "name", + description: "Name of the log", + required: true, + complete: async (value) 
=> { + if (value === "Example") { + return { + values: ["Example Log"], + }; + } + + return { + values: [], + }; + }, + }, + ], + async load({ name }) { + return { + text: `Example log content for ${name}`, + }; + }, +}); +``` + +### Prompts + +[Prompts](https://modelcontextprotocol.io/docs/concepts/prompts) enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions. + +```ts +server.addPrompt({ + name: "git-commit", + description: "Generate a Git commit message", + arguments: [ + { + name: "changes", + description: "Git diff or description of changes", + required: true, + }, + ], + load: async (args) => { + return `Generate a concise but descriptive commit message for these changes:\n\n${args.changes}`; + }, +}); +``` + +#### Prompt argument auto-completion + +Prompts can provide auto-completion for their arguments: + +```js +server.addPrompt({ + name: "countryPoem", + description: "Writes a poem about a country", + load: async ({ name }) => { + return `Hello, ${name}!`; + }, + arguments: [ + { + name: "name", + description: "Name of the country", + required: true, + complete: async (value) => { + if (value === "Germ") { + return { + values: ["Germany"], + }; + } + + return { + values: [], + }; + }, + }, + ], +}); +``` + +#### Prompt argument auto-completion using `enum` + +If you provide an `enum` array for an argument, the server will automatically provide completions for the argument. 
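Under the hood, FastMCP runs the partial value through a fuzzy search (Fuse.js) over the `enum` values and returns the matches as completions. A rough, dependency-free sketch of that behaviour, using case-insensitive substring matching in place of real fuzzy scoring:

```typescript
// Simplified stand-in for the enum completion fallback; the actual
// implementation uses Fuse.js fuzzy search rather than substring matching.
function completeEnumValue(
  enumValues: string[],
  partial: string,
): { values: string[]; total: number } {
  const query = partial.toLowerCase();
  const values = enumValues.filter((v) => v.toLowerCase().includes(query));
  return { values, total: values.length };
}

console.log(completeEnumValue(["Germany", "France", "Italy"], "germ"));
// { values: [ 'Germany' ], total: 1 }
```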
+
+```js
+server.addPrompt({
+  name: "countryPoem",
+  description: "Writes a poem about a country",
+  load: async ({ name }) => {
+    return `Hello, ${name}!`;
+  },
+  arguments: [
+    {
+      name: "name",
+      description: "Name of the country",
+      required: true,
+      enum: ["Germany", "France", "Italy"],
+    },
+  ],
+});
+```
+
+### Authentication
+
+FastMCP allows you to `authenticate` clients using a custom function:
+
+```ts
+const server = new FastMCP({
+  name: "My Server",
+  version: "1.0.0",
+  authenticate: async (request) => {
+    const apiKey = request.headers["x-api-key"];
+
+    if (apiKey !== "123") {
+      throw new Response(null, {
+        status: 401,
+        statusText: "Unauthorized",
+      });
+    }
+
+    // Whatever you return here will be accessible in the `context.session` object.
+    return {
+      id: 1,
+    };
+  },
+});
+```
+
+Now you can access the authenticated session data in your tools:
+
+```ts
+server.addTool({
+  name: "sayHello",
+  execute: async (args, { session }) => {
+    return `Hello, ${session.id}!`;
+  },
+});
+```
+
+### Sessions
+
+The `session` object is an instance of `FastMCPSession` and describes an active client session.
+
+```ts
+server.sessions;
+```
+
+We allocate a new server instance for each client connection to enable 1:1 communication between a client and the server.
+
+### Typed server events
+
+You can listen to events emitted by the server using the `on` method:
+
+```ts
+server.on("connect", (event) => {
+  console.log("Client connected:", event.session);
+});
+
+server.on("disconnect", (event) => {
+  console.log("Client disconnected:", event.session);
+});
+```
+
+## `FastMCPSession`
+
+`FastMCPSession` represents a client session and provides methods to interact with the client.
+
+Refer to [Sessions](#sessions) for examples of how to obtain a `FastMCPSession` instance.
+
+### `requestSampling`
+
+`requestSampling` creates a [sampling](https://modelcontextprotocol.io/docs/concepts/sampling) request and returns the response.
+ +```ts +await session.requestSampling({ + messages: [ + { + role: "user", + content: { + type: "text", + text: "What files are in the current directory?", + }, + }, + ], + systemPrompt: "You are a helpful file system assistant.", + includeContext: "thisServer", + maxTokens: 100, +}); +``` + +### `clientCapabilities` + +The `clientCapabilities` property contains the client capabilities. + +```ts +session.clientCapabilities; +``` + +### `loggingLevel` + +The `loggingLevel` property describes the logging level as set by the client. + +```ts +session.loggingLevel; +``` + +### `roots` + +The `roots` property contains the roots as set by the client. + +```ts +session.roots; +``` + +### `server` + +The `server` property contains an instance of MCP server that is associated with the session. + +```ts +session.server; +``` + +### Typed session events + +You can listen to events emitted by the session using the `on` method: + +```ts +session.on("rootsChanged", (event) => { + console.log("Roots changed:", event.roots); +}); + +session.on("error", (event) => { + console.error("Error:", event.error); +}); +``` + +## Running Your Server + +### Test with `mcp-cli` + +The fastest way to test and debug your server is with `fastmcp dev`: + +```bash +npx fastmcp dev server.js +npx fastmcp dev server.ts +``` + +This will run your server with [`mcp-cli`](https://github.com/wong2/mcp-cli) for testing and debugging your MCP server in the terminal. + +### Inspect with `MCP Inspector` + +Another way is to use the official [`MCP Inspector`](https://modelcontextprotocol.io/docs/tools/inspector) to inspect your server with a Web UI: + +```bash +npx fastmcp inspect server.ts +``` + +## FAQ + +### How to use with Claude Desktop? 
+
+Follow the guide https://modelcontextprotocol.io/quickstart/user and add the following configuration:
+
+```json
+{
+  "mcpServers": {
+    "my-mcp-server": {
+      "command": "npx",
+      "args": [
+        "tsx",
+        "/PATH/TO/YOUR_PROJECT/src/index.ts"
+      ],
+      "env": {
+        "YOUR_ENV_VAR": "value"
+      }
+    }
+  }
+}
+```
+
+## Showcase
+
+> [!NOTE]
+>
+> If you've developed a server using FastMCP, please [submit a PR](https://github.com/punkpeye/fastmcp) to showcase it here!
+
+- https://github.com/apinetwork/piapi-mcp-server
+- https://github.com/Meeting-Baas/meeting-mcp - Meeting BaaS MCP server that enables AI assistants to create meeting bots, search transcripts, and manage recording data
+
+## Acknowledgements
+
+- FastMCP is inspired by the [Python implementation](https://github.com/jlowin/fastmcp) by [Jonathan Lowin](https://github.com/jlowin).
+- Parts of the codebase were adapted from [LiteMCP](https://github.com/wong2/litemcp).
+- Parts of the codebase were adapted from [Trying SSE with Model Context Protocol](https://dev.classmethod.jp/articles/mcp-sse/) (article originally in Japanese).
+ + + +--- +File: /vitest.config.js +--- + +import { defineConfig } from "vitest/config"; + +export default defineConfig({ + test: { + poolOptions: { + forks: { execArgv: ["--experimental-eventsource"] }, + }, + }, +}); + diff --git a/docs/mcp-js-sdk-docs.txt b/docs/mcp-js-sdk-docs.txt new file mode 100644 index 00000000..3c200fe7 --- /dev/null +++ b/docs/mcp-js-sdk-docs.txt @@ -0,0 +1,14618 @@ +Directory Structure: + +└── ./ + ├── src + │ ├── __mocks__ + │ │ └── pkce-challenge.ts + │ ├── client + │ │ ├── auth.test.ts + │ │ ├── auth.ts + │ │ ├── index.test.ts + │ │ ├── index.ts + │ │ ├── sse.test.ts + │ │ ├── sse.ts + │ │ ├── stdio.test.ts + │ │ ├── stdio.ts + │ │ └── websocket.ts + │ ├── integration-tests + │ │ └── process-cleanup.test.ts + │ ├── server + │ │ ├── auth + │ │ │ ├── handlers + │ │ │ │ ├── authorize.test.ts + │ │ │ │ ├── authorize.ts + │ │ │ │ ├── metadata.test.ts + │ │ │ │ ├── metadata.ts + │ │ │ │ ├── register.test.ts + │ │ │ │ ├── register.ts + │ │ │ │ ├── revoke.test.ts + │ │ │ │ ├── revoke.ts + │ │ │ │ ├── token.test.ts + │ │ │ │ └── token.ts + │ │ │ ├── middleware + │ │ │ │ ├── allowedMethods.test.ts + │ │ │ │ ├── allowedMethods.ts + │ │ │ │ ├── bearerAuth.test.ts + │ │ │ │ ├── bearerAuth.ts + │ │ │ │ ├── clientAuth.test.ts + │ │ │ │ └── clientAuth.ts + │ │ │ ├── clients.ts + │ │ │ ├── errors.ts + │ │ │ ├── provider.ts + │ │ │ ├── router.test.ts + │ │ │ ├── router.ts + │ │ │ └── types.ts + │ │ ├── completable.test.ts + │ │ ├── completable.ts + │ │ ├── index.test.ts + │ │ ├── index.ts + │ │ ├── mcp.test.ts + │ │ ├── mcp.ts + │ │ ├── sse.ts + │ │ ├── stdio.test.ts + │ │ └── stdio.ts + │ ├── shared + │ │ ├── auth.ts + │ │ ├── protocol.test.ts + │ │ ├── protocol.ts + │ │ ├── stdio.test.ts + │ │ ├── stdio.ts + │ │ ├── transport.ts + │ │ ├── uriTemplate.test.ts + │ │ └── uriTemplate.ts + │ ├── cli.ts + │ ├── inMemory.test.ts + │ ├── inMemory.ts + │ └── types.ts + ├── CLAUDE.md + ├── package.json + └── README.md + + + +--- +File: 
/src/__mocks__/pkce-challenge.ts +--- + +export default function pkceChallenge() { + return { + code_verifier: "test_verifier", + code_challenge: "test_challenge", + }; +} + + +--- +File: /src/client/auth.test.ts +--- + +import { + discoverOAuthMetadata, + startAuthorization, + exchangeAuthorization, + refreshAuthorization, + registerClient, +} from "./auth.js"; + +// Mock fetch globally +const mockFetch = jest.fn(); +global.fetch = mockFetch; + +describe("OAuth Authorization", () => { + beforeEach(() => { + mockFetch.mockReset(); + }); + + describe("discoverOAuthMetadata", () => { + const validMetadata = { + issuer: "https://auth.example.com", + authorization_endpoint: "https://auth.example.com/authorize", + token_endpoint: "https://auth.example.com/token", + registration_endpoint: "https://auth.example.com/register", + response_types_supported: ["code"], + code_challenge_methods_supported: ["S256"], + }; + + it("returns metadata when discovery succeeds", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => validMetadata, + }); + + const metadata = await discoverOAuthMetadata("https://auth.example.com"); + expect(metadata).toEqual(validMetadata); + const calls = mockFetch.mock.calls; + expect(calls.length).toBe(1); + const [url, options] = calls[0]; + expect(url.toString()).toBe("https://auth.example.com/.well-known/oauth-authorization-server"); + expect(options.headers).toEqual({ + "MCP-Protocol-Version": "2024-11-05" + }); + }); + + it("returns metadata when first fetch fails but second without MCP header succeeds", async () => { + // Set up a counter to control behavior + let callCount = 0; + + // Mock implementation that changes behavior based on call count + mockFetch.mockImplementation((_url, _options) => { + callCount++; + + if (callCount === 1) { + // First call with MCP header - fail with TypeError (simulating CORS error) + // We need to use TypeError specifically because that's what the implementation checks 
for + return Promise.reject(new TypeError("Network error")); + } else { + // Second call without header - succeed + return Promise.resolve({ + ok: true, + status: 200, + json: async () => validMetadata + }); + } + }); + + // Should succeed with the second call + const metadata = await discoverOAuthMetadata("https://auth.example.com"); + expect(metadata).toEqual(validMetadata); + + // Verify both calls were made + expect(mockFetch).toHaveBeenCalledTimes(2); + + // Verify first call had MCP header + expect(mockFetch.mock.calls[0][1]?.headers).toHaveProperty("MCP-Protocol-Version"); + }); + + it("throws an error when all fetch attempts fail", async () => { + // Set up a counter to control behavior + let callCount = 0; + + // Mock implementation that changes behavior based on call count + mockFetch.mockImplementation((_url, _options) => { + callCount++; + + if (callCount === 1) { + // First call - fail with TypeError + return Promise.reject(new TypeError("First failure")); + } else { + // Second call - fail with different error + return Promise.reject(new Error("Second failure")); + } + }); + + // Should fail with the second error + await expect(discoverOAuthMetadata("https://auth.example.com")) + .rejects.toThrow("Second failure"); + + // Verify both calls were made + expect(mockFetch).toHaveBeenCalledTimes(2); + }); + + it("returns undefined when discovery endpoint returns 404", async () => { + mockFetch.mockResolvedValueOnce({ + ok: false, + status: 404, + }); + + const metadata = await discoverOAuthMetadata("https://auth.example.com"); + expect(metadata).toBeUndefined(); + }); + + it("throws on non-404 errors", async () => { + mockFetch.mockResolvedValueOnce({ + ok: false, + status: 500, + }); + + await expect( + discoverOAuthMetadata("https://auth.example.com") + ).rejects.toThrow("HTTP 500"); + }); + + it("validates metadata schema", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + // Missing required fields + 
issuer: "https://auth.example.com", + }), + }); + + await expect( + discoverOAuthMetadata("https://auth.example.com") + ).rejects.toThrow(); + }); + }); + + describe("startAuthorization", () => { + const validMetadata = { + issuer: "https://auth.example.com", + authorization_endpoint: "https://auth.example.com/auth", + token_endpoint: "https://auth.example.com/tkn", + response_types_supported: ["code"], + code_challenge_methods_supported: ["S256"], + }; + + const validClientInfo = { + client_id: "client123", + client_secret: "secret123", + redirect_uris: ["http://localhost:3000/callback"], + client_name: "Test Client", + }; + + it("generates authorization URL with PKCE challenge", async () => { + const { authorizationUrl, codeVerifier } = await startAuthorization( + "https://auth.example.com", + { + clientInformation: validClientInfo, + redirectUrl: "http://localhost:3000/callback", + } + ); + + expect(authorizationUrl.toString()).toMatch( + /^https:\/\/auth\.example\.com\/authorize\?/ + ); + expect(authorizationUrl.searchParams.get("response_type")).toBe("code"); + expect(authorizationUrl.searchParams.get("code_challenge")).toBe("test_challenge"); + expect(authorizationUrl.searchParams.get("code_challenge_method")).toBe( + "S256" + ); + expect(authorizationUrl.searchParams.get("redirect_uri")).toBe( + "http://localhost:3000/callback" + ); + expect(codeVerifier).toBe("test_verifier"); + }); + + it("uses metadata authorization_endpoint when provided", async () => { + const { authorizationUrl } = await startAuthorization( + "https://auth.example.com", + { + metadata: validMetadata, + clientInformation: validClientInfo, + redirectUrl: "http://localhost:3000/callback", + } + ); + + expect(authorizationUrl.toString()).toMatch( + /^https:\/\/auth\.example\.com\/auth\?/ + ); + }); + + it("validates response type support", async () => { + const metadata = { + ...validMetadata, + response_types_supported: ["token"], // Does not support 'code' + }; + + await expect( + 
startAuthorization("https://auth.example.com", { + metadata, + clientInformation: validClientInfo, + redirectUrl: "http://localhost:3000/callback", + }) + ).rejects.toThrow(/does not support response type/); + }); + + it("validates PKCE support", async () => { + const metadata = { + ...validMetadata, + response_types_supported: ["code"], + code_challenge_methods_supported: ["plain"], // Does not support 'S256' + }; + + await expect( + startAuthorization("https://auth.example.com", { + metadata, + clientInformation: validClientInfo, + redirectUrl: "http://localhost:3000/callback", + }) + ).rejects.toThrow(/does not support code challenge method/); + }); + }); + + describe("exchangeAuthorization", () => { + const validTokens = { + access_token: "access123", + token_type: "Bearer", + expires_in: 3600, + refresh_token: "refresh123", + }; + + const validClientInfo = { + client_id: "client123", + client_secret: "secret123", + redirect_uris: ["http://localhost:3000/callback"], + client_name: "Test Client", + }; + + it("exchanges code for tokens", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => validTokens, + }); + + const tokens = await exchangeAuthorization("https://auth.example.com", { + clientInformation: validClientInfo, + authorizationCode: "code123", + codeVerifier: "verifier123", + }); + + expect(tokens).toEqual(validTokens); + expect(mockFetch).toHaveBeenCalledWith( + expect.objectContaining({ + href: "https://auth.example.com/token", + }), + expect.objectContaining({ + method: "POST", + headers: { + "Content-Type": "application/x-www-form-urlencoded", + }, + }) + ); + + const body = mockFetch.mock.calls[0][1].body as URLSearchParams; + expect(body.get("grant_type")).toBe("authorization_code"); + expect(body.get("code")).toBe("code123"); + expect(body.get("code_verifier")).toBe("verifier123"); + expect(body.get("client_id")).toBe("client123"); + expect(body.get("client_secret")).toBe("secret123"); + }); + + 
it("validates token response schema", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + // Missing required fields + access_token: "access123", + }), + }); + + await expect( + exchangeAuthorization("https://auth.example.com", { + clientInformation: validClientInfo, + authorizationCode: "code123", + codeVerifier: "verifier123", + }) + ).rejects.toThrow(); + }); + + it("throws on error response", async () => { + mockFetch.mockResolvedValueOnce({ + ok: false, + status: 400, + }); + + await expect( + exchangeAuthorization("https://auth.example.com", { + clientInformation: validClientInfo, + authorizationCode: "code123", + codeVerifier: "verifier123", + }) + ).rejects.toThrow("Token exchange failed"); + }); + }); + + describe("refreshAuthorization", () => { + const validTokens = { + access_token: "newaccess123", + token_type: "Bearer", + expires_in: 3600, + refresh_token: "newrefresh123", + }; + + const validClientInfo = { + client_id: "client123", + client_secret: "secret123", + redirect_uris: ["http://localhost:3000/callback"], + client_name: "Test Client", + }; + + it("exchanges refresh token for new tokens", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => validTokens, + }); + + const tokens = await refreshAuthorization("https://auth.example.com", { + clientInformation: validClientInfo, + refreshToken: "refresh123", + }); + + expect(tokens).toEqual(validTokens); + expect(mockFetch).toHaveBeenCalledWith( + expect.objectContaining({ + href: "https://auth.example.com/token", + }), + expect.objectContaining({ + method: "POST", + headers: { + "Content-Type": "application/x-www-form-urlencoded", + }, + }) + ); + + const body = mockFetch.mock.calls[0][1].body as URLSearchParams; + expect(body.get("grant_type")).toBe("refresh_token"); + expect(body.get("refresh_token")).toBe("refresh123"); + expect(body.get("client_id")).toBe("client123"); + 
expect(body.get("client_secret")).toBe("secret123"); + }); + + it("validates token response schema", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + // Missing required fields + access_token: "newaccess123", + }), + }); + + await expect( + refreshAuthorization("https://auth.example.com", { + clientInformation: validClientInfo, + refreshToken: "refresh123", + }) + ).rejects.toThrow(); + }); + + it("throws on error response", async () => { + mockFetch.mockResolvedValueOnce({ + ok: false, + status: 400, + }); + + await expect( + refreshAuthorization("https://auth.example.com", { + clientInformation: validClientInfo, + refreshToken: "refresh123", + }) + ).rejects.toThrow("Token refresh failed"); + }); + }); + + describe("registerClient", () => { + const validClientMetadata = { + redirect_uris: ["http://localhost:3000/callback"], + client_name: "Test Client", + }; + + const validClientInfo = { + client_id: "client123", + client_secret: "secret123", + client_id_issued_at: 1612137600, + client_secret_expires_at: 1612224000, + ...validClientMetadata, + }; + + it("registers client and returns client information", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => validClientInfo, + }); + + const clientInfo = await registerClient("https://auth.example.com", { + clientMetadata: validClientMetadata, + }); + + expect(clientInfo).toEqual(validClientInfo); + expect(mockFetch).toHaveBeenCalledWith( + expect.objectContaining({ + href: "https://auth.example.com/register", + }), + expect.objectContaining({ + method: "POST", + headers: { + "Content-Type": "application/json", + }, + body: JSON.stringify(validClientMetadata), + }) + ); + }); + + it("validates client information response schema", async () => { + mockFetch.mockResolvedValueOnce({ + ok: true, + status: 200, + json: async () => ({ + // Missing required fields + client_secret: "secret123", + }), + }); + + await expect( + 
registerClient("https://auth.example.com", { + clientMetadata: validClientMetadata, + }) + ).rejects.toThrow(); + }); + + it("throws when registration endpoint not available in metadata", async () => { + const metadata = { + issuer: "https://auth.example.com", + authorization_endpoint: "https://auth.example.com/authorize", + token_endpoint: "https://auth.example.com/token", + response_types_supported: ["code"], + }; + + await expect( + registerClient("https://auth.example.com", { + metadata, + clientMetadata: validClientMetadata, + }) + ).rejects.toThrow(/does not support dynamic client registration/); + }); + + it("throws on error response", async () => { + mockFetch.mockResolvedValueOnce({ + ok: false, + status: 400, + }); + + await expect( + registerClient("https://auth.example.com", { + clientMetadata: validClientMetadata, + }) + ).rejects.toThrow("Dynamic client registration failed"); + }); + }); +}); + + +--- +File: /src/client/auth.ts +--- + +import pkceChallenge from "pkce-challenge"; +import { LATEST_PROTOCOL_VERSION } from "../types.js"; +import type { OAuthClientMetadata, OAuthClientInformation, OAuthTokens, OAuthMetadata, OAuthClientInformationFull } from "../shared/auth.js"; +import { OAuthClientInformationFullSchema, OAuthMetadataSchema, OAuthTokensSchema } from "../shared/auth.js"; + +/** + * Implements an end-to-end OAuth client to be used with one MCP server. + * + * This client relies upon a concept of an authorized "session," the exact + * meaning of which is application-defined. Tokens, authorization codes, and + * code verifiers should not cross different sessions. + */ +export interface OAuthClientProvider { + /** + * The URL to redirect the user agent to after authorization. + */ + get redirectUrl(): string | URL; + + /** + * Metadata about this OAuth client. 
+   */
+  get clientMetadata(): OAuthClientMetadata;
+
+  /**
+   * Loads information about this OAuth client, as registered already with the
+   * server, or returns `undefined` if the client is not registered with the
+   * server.
+   */
+  clientInformation(): OAuthClientInformation | undefined | Promise<OAuthClientInformation | undefined>;
+
+  /**
+   * If implemented, this permits the OAuth client to dynamically register with
+   * the server. Client information saved this way should later be read via
+   * `clientInformation()`.
+   *
+   * This method is not required to be implemented if client information is
+   * statically known (e.g., pre-registered).
+   */
+  saveClientInformation?(clientInformation: OAuthClientInformationFull): void | Promise<void>;
+
+  /**
+   * Loads any existing OAuth tokens for the current session, or returns
+   * `undefined` if there are no saved tokens.
+   */
+  tokens(): OAuthTokens | undefined | Promise<OAuthTokens | undefined>;
+
+  /**
+   * Stores new OAuth tokens for the current session, after a successful
+   * authorization.
+   */
+  saveTokens(tokens: OAuthTokens): void | Promise<void>;
+
+  /**
+   * Invoked to redirect the user agent to the given URL to begin the authorization flow.
+   */
+  redirectToAuthorization(authorizationUrl: URL): void | Promise<void>;
+
+  /**
+   * Saves a PKCE code verifier for the current session, before redirecting to
+   * the authorization flow.
+   */
+  saveCodeVerifier(codeVerifier: string): void | Promise<void>;
+
+  /**
+   * Loads the PKCE code verifier for the current session, necessary to validate
+   * the authorization result.
+   */
+  codeVerifier(): string | Promise<string>;
+}
+
+export type AuthResult = "AUTHORIZED" | "REDIRECT";
+
+export class UnauthorizedError extends Error {
+  constructor(message?: string) {
+    super(message ?? "Unauthorized");
+  }
+}
+
+/**
+ * Orchestrates the full auth flow with a server.
+ *
+ * This can be used as a single entry point for all authorization functionality,
+ * instead of linking together the other lower-level functions in this module.
+ */
+export async function auth(
+  provider: OAuthClientProvider,
+  { serverUrl, authorizationCode }: { serverUrl: string | URL, authorizationCode?: string }): Promise<AuthResult> {
+  const metadata = await discoverOAuthMetadata(serverUrl);
+
+  // Handle client registration if needed
+  let clientInformation = await Promise.resolve(provider.clientInformation());
+  if (!clientInformation) {
+    if (authorizationCode !== undefined) {
+      throw new Error("Existing OAuth client information is required when exchanging an authorization code");
+    }
+
+    if (!provider.saveClientInformation) {
+      throw new Error("OAuth client information must be saveable for dynamic registration");
+    }
+
+    const fullInformation = await registerClient(serverUrl, {
+      metadata,
+      clientMetadata: provider.clientMetadata,
+    });
+
+    await provider.saveClientInformation(fullInformation);
+    clientInformation = fullInformation;
+  }
+
+  // Exchange authorization code for tokens
+  if (authorizationCode !== undefined) {
+    const codeVerifier = await provider.codeVerifier();
+    const tokens = await exchangeAuthorization(serverUrl, {
+      metadata,
+      clientInformation,
+      authorizationCode,
+      codeVerifier,
+    });
+
+    await provider.saveTokens(tokens);
+    return "AUTHORIZED";
+  }
+
+  const tokens = await provider.tokens();
+
+  // Handle token refresh or new authorization
+  if (tokens?.refresh_token) {
+    try {
+      // Attempt to refresh the token
+      const newTokens = await refreshAuthorization(serverUrl, {
+        metadata,
+        clientInformation,
+        refreshToken: tokens.refresh_token,
+      });
+
+      await provider.saveTokens(newTokens);
+      return "AUTHORIZED";
+    } catch (error) {
+      console.error("Could not refresh OAuth tokens:", error);
+    }
+  }
+
+  // Start new authorization flow
+  const { authorizationUrl, codeVerifier } = await startAuthorization(serverUrl, {
+    metadata,
+    clientInformation,
+    redirectUrl: provider.redirectUrl
+  });
+
+  await provider.saveCodeVerifier(codeVerifier);
+  await provider.redirectToAuthorization(authorizationUrl);
+
return "REDIRECT"; +} + +/** + * Looks up RFC 8414 OAuth 2.0 Authorization Server Metadata. + * + * If the server returns a 404 for the well-known endpoint, this function will + * return `undefined`. Any other errors will be thrown as exceptions. + */ +export async function discoverOAuthMetadata( + serverUrl: string | URL, + opts?: { protocolVersion?: string }, +): Promise { + const url = new URL("/.well-known/oauth-authorization-server", serverUrl); + let response: Response; + try { + response = await fetch(url, { + headers: { + "MCP-Protocol-Version": opts?.protocolVersion ?? LATEST_PROTOCOL_VERSION + } + }); + } catch (error) { + // CORS errors come back as TypeError + if (error instanceof TypeError) { + response = await fetch(url); + } else { + throw error; + } + } + + if (response.status === 404) { + return undefined; + } + + if (!response.ok) { + throw new Error( + `HTTP ${response.status} trying to load well-known OAuth metadata`, + ); + } + + return OAuthMetadataSchema.parse(await response.json()); +} + +/** + * Begins the authorization flow with the given server, by generating a PKCE challenge and constructing the authorization URL. 
+ */ +export async function startAuthorization( + serverUrl: string | URL, + { + metadata, + clientInformation, + redirectUrl, + }: { + metadata?: OAuthMetadata; + clientInformation: OAuthClientInformation; + redirectUrl: string | URL; + }, +): Promise<{ authorizationUrl: URL; codeVerifier: string }> { + const responseType = "code"; + const codeChallengeMethod = "S256"; + + let authorizationUrl: URL; + if (metadata) { + authorizationUrl = new URL(metadata.authorization_endpoint); + + if (!metadata.response_types_supported.includes(responseType)) { + throw new Error( + `Incompatible auth server: does not support response type ${responseType}`, + ); + } + + if ( + !metadata.code_challenge_methods_supported || + !metadata.code_challenge_methods_supported.includes(codeChallengeMethod) + ) { + throw new Error( + `Incompatible auth server: does not support code challenge method ${codeChallengeMethod}`, + ); + } + } else { + authorizationUrl = new URL("/authorize", serverUrl); + } + + // Generate PKCE challenge + const challenge = await pkceChallenge(); + const codeVerifier = challenge.code_verifier; + const codeChallenge = challenge.code_challenge; + + authorizationUrl.searchParams.set("response_type", responseType); + authorizationUrl.searchParams.set("client_id", clientInformation.client_id); + authorizationUrl.searchParams.set("code_challenge", codeChallenge); + authorizationUrl.searchParams.set( + "code_challenge_method", + codeChallengeMethod, + ); + authorizationUrl.searchParams.set("redirect_uri", String(redirectUrl)); + + return { authorizationUrl, codeVerifier }; +} + +/** + * Exchanges an authorization code for an access token with the given server. 
+ */
+export async function exchangeAuthorization(
+  serverUrl: string | URL,
+  {
+    metadata,
+    clientInformation,
+    authorizationCode,
+    codeVerifier,
+  }: {
+    metadata?: OAuthMetadata;
+    clientInformation: OAuthClientInformation;
+    authorizationCode: string;
+    codeVerifier: string;
+  },
+): Promise<OAuthTokens> {
+  const grantType = "authorization_code";
+
+  let tokenUrl: URL;
+  if (metadata) {
+    tokenUrl = new URL(metadata.token_endpoint);
+
+    if (
+      metadata.grant_types_supported &&
+      !metadata.grant_types_supported.includes(grantType)
+    ) {
+      throw new Error(
+        `Incompatible auth server: does not support grant type ${grantType}`,
+      );
+    }
+  } else {
+    tokenUrl = new URL("/token", serverUrl);
+  }
+
+  // Exchange code for tokens
+  const params = new URLSearchParams({
+    grant_type: grantType,
+    client_id: clientInformation.client_id,
+    code: authorizationCode,
+    code_verifier: codeVerifier,
+  });
+
+  if (clientInformation.client_secret) {
+    params.set("client_secret", clientInformation.client_secret);
+  }
+
+  const response = await fetch(tokenUrl, {
+    method: "POST",
+    headers: {
+      "Content-Type": "application/x-www-form-urlencoded",
+    },
+    body: params,
+  });
+
+  if (!response.ok) {
+    throw new Error(`Token exchange failed: HTTP ${response.status}`);
+  }
+
+  return OAuthTokensSchema.parse(await response.json());
+}
+
+/**
+ * Exchange a refresh token for an updated access token.
+ */
+export async function refreshAuthorization(
+  serverUrl: string | URL,
+  {
+    metadata,
+    clientInformation,
+    refreshToken,
+  }: {
+    metadata?: OAuthMetadata;
+    clientInformation: OAuthClientInformation;
+    refreshToken: string;
+  },
+): Promise<OAuthTokens> {
+  const grantType = "refresh_token";
+
+  let tokenUrl: URL;
+  if (metadata) {
+    tokenUrl = new URL(metadata.token_endpoint);
+
+    if (
+      metadata.grant_types_supported &&
+      !metadata.grant_types_supported.includes(grantType)
+    ) {
+      throw new Error(
+        `Incompatible auth server: does not support grant type ${grantType}`,
+      );
+    }
+  } else {
+    tokenUrl = new URL("/token", serverUrl);
+  }
+
+  // Exchange refresh token
+  const params = new URLSearchParams({
+    grant_type: grantType,
+    client_id: clientInformation.client_id,
+    refresh_token: refreshToken,
+  });
+
+  if (clientInformation.client_secret) {
+    params.set("client_secret", clientInformation.client_secret);
+  }
+
+  const response = await fetch(tokenUrl, {
+    method: "POST",
+    headers: {
+      "Content-Type": "application/x-www-form-urlencoded",
+    },
+    body: params,
+  });
+
+  if (!response.ok) {
+    throw new Error(`Token refresh failed: HTTP ${response.status}`);
+  }
+
+  return OAuthTokensSchema.parse(await response.json());
+}
+
+/**
+ * Performs OAuth 2.0 Dynamic Client Registration according to RFC 7591.
+ */
+export async function registerClient(
+  serverUrl: string | URL,
+  {
+    metadata,
+    clientMetadata,
+  }: {
+    metadata?: OAuthMetadata;
+    clientMetadata: OAuthClientMetadata;
+  },
+): Promise<OAuthClientInformationFull> {
+  let registrationUrl: URL;
+
+  if (metadata) {
+    if (!metadata.registration_endpoint) {
+      throw new Error("Incompatible auth server: does not support dynamic client registration");
+    }
+
+    registrationUrl = new URL(metadata.registration_endpoint);
+  } else {
+    registrationUrl = new URL("/register", serverUrl);
+  }
+
+  const response = await fetch(registrationUrl, {
+    method: "POST",
+    headers: {
+      "Content-Type": "application/json",
+    },
+    body: JSON.stringify(clientMetadata),
+  });
+
+  if (!response.ok) {
+    throw new Error(`Dynamic client registration failed: HTTP ${response.status}`);
+  }
+
+  return OAuthClientInformationFullSchema.parse(await response.json());
+}
+
+
+---
+File: /src/client/index.test.ts
+---
+
+/* eslint-disable @typescript-eslint/no-unused-vars */
+/* eslint-disable no-constant-binary-expression */
+/* eslint-disable @typescript-eslint/no-unused-expressions */
+import { Client } from "./index.js";
+import { z } from "zod";
+import {
+  RequestSchema,
+  NotificationSchema,
+  ResultSchema,
+  LATEST_PROTOCOL_VERSION,
+  SUPPORTED_PROTOCOL_VERSIONS,
+  InitializeRequestSchema,
+  ListResourcesRequestSchema,
+  ListToolsRequestSchema,
+  CreateMessageRequestSchema,
+  ListRootsRequestSchema,
+  ErrorCode,
+} from "../types.js";
+import { Transport } from "../shared/transport.js";
+import { Server } from "../server/index.js";
+import { InMemoryTransport } from "../inMemory.js";
+
+test("should initialize with matching protocol version", async () => {
+  const clientTransport: Transport = {
+    start: jest.fn().mockResolvedValue(undefined),
+    close: jest.fn().mockResolvedValue(undefined),
+    send: jest.fn().mockImplementation((message) => {
+      if (message.method === "initialize") {
+        clientTransport.onmessage?.({
+          jsonrpc: "2.0",
+          id: message.id,
+          result: {
protocolVersion: LATEST_PROTOCOL_VERSION, + capabilities: {}, + serverInfo: { + name: "test", + version: "1.0", + }, + instructions: "test instructions", + }, + }); + } + return Promise.resolve(); + }), + }; + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + await client.connect(clientTransport); + + // Should have sent initialize with latest version + expect(clientTransport.send).toHaveBeenCalledWith( + expect.objectContaining({ + method: "initialize", + params: expect.objectContaining({ + protocolVersion: LATEST_PROTOCOL_VERSION, + }), + }), + ); + + // Should have the instructions returned + expect(client.getInstructions()).toEqual("test instructions"); +}); + +test("should initialize with supported older protocol version", async () => { + const OLD_VERSION = SUPPORTED_PROTOCOL_VERSIONS[1]; + const clientTransport: Transport = { + start: jest.fn().mockResolvedValue(undefined), + close: jest.fn().mockResolvedValue(undefined), + send: jest.fn().mockImplementation((message) => { + if (message.method === "initialize") { + clientTransport.onmessage?.({ + jsonrpc: "2.0", + id: message.id, + result: { + protocolVersion: OLD_VERSION, + capabilities: {}, + serverInfo: { + name: "test", + version: "1.0", + }, + }, + }); + } + return Promise.resolve(); + }), + }; + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + await client.connect(clientTransport); + + // Connection should succeed with the older version + expect(client.getServerVersion()).toEqual({ + name: "test", + version: "1.0", + }); + + // Expect no instructions + expect(client.getInstructions()).toBeUndefined(); +}); + +test("should reject unsupported protocol version", async () => { + const clientTransport: Transport = { + start: jest.fn().mockResolvedValue(undefined), + close: jest.fn().mockResolvedValue(undefined), + send: 
jest.fn().mockImplementation((message) => { + if (message.method === "initialize") { + clientTransport.onmessage?.({ + jsonrpc: "2.0", + id: message.id, + result: { + protocolVersion: "invalid-version", + capabilities: {}, + serverInfo: { + name: "test", + version: "1.0", + }, + }, + }); + } + return Promise.resolve(); + }), + }; + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + await expect(client.connect(clientTransport)).rejects.toThrow( + "Server's protocol version is not supported: invalid-version", + ); + + expect(clientTransport.close).toHaveBeenCalled(); +}); + +test("should respect server capabilities", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + resources: {}, + tools: {}, + }, + }, + ); + + server.setRequestHandler(InitializeRequestSchema, (_request) => ({ + protocolVersion: LATEST_PROTOCOL_VERSION, + capabilities: { + resources: {}, + tools: {}, + }, + serverInfo: { + name: "test", + version: "1.0", + }, + })); + + server.setRequestHandler(ListResourcesRequestSchema, () => ({ + resources: [], + })); + + server.setRequestHandler(ListToolsRequestSchema, () => ({ + tools: [], + })); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + enforceStrictCapabilities: true, + }, + ); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // Server supports resources and tools, but not prompts + expect(client.getServerCapabilities()).toEqual({ + resources: {}, + tools: {}, + }); + + // These should work + await expect(client.listResources()).resolves.not.toThrow(); + await expect(client.listTools()).resolves.not.toThrow(); + + // This should throw because prompts are not supported + await 
expect(client.listPrompts()).rejects.toThrow( + "Server does not support prompts", + ); +}); + +test("should respect client notification capabilities", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: {}, + }, + ); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + roots: { + listChanged: true, + }, + }, + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // This should work because the client has the roots.listChanged capability + await expect(client.sendRootsListChanged()).resolves.not.toThrow(); + + // Create a new client without the roots.listChanged capability + const clientWithoutCapability = new Client( + { + name: "test client without capability", + version: "1.0", + }, + { + capabilities: {}, + enforceStrictCapabilities: true, + }, + ); + + await clientWithoutCapability.connect(clientTransport); + + // This should throw because the client doesn't have the roots.listChanged capability + await expect(clientWithoutCapability.sendRootsListChanged()).rejects.toThrow( + /^Client does not support/, + ); +}); + +test("should respect server notification capabilities", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + logging: {}, + resources: { + listChanged: true, + }, + }, + }, + ); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: {}, + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // These should work because the server has the corresponding capabilities + await expect( + server.sendLoggingMessage({ level: "info", data: "Test" }), + 
).resolves.not.toThrow(); + await expect(server.sendResourceListChanged()).resolves.not.toThrow(); + + // This should throw because the server doesn't have the tools capability + await expect(server.sendToolListChanged()).rejects.toThrow( + "Server does not support notifying of tool list changes", + ); +}); + +test("should only allow setRequestHandler for declared capabilities", () => { + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + // This should work because sampling is a declared capability + expect(() => { + client.setRequestHandler(CreateMessageRequestSchema, () => ({ + model: "test-model", + role: "assistant", + content: { + type: "text", + text: "Test response", + }, + })); + }).not.toThrow(); + + // This should throw because roots listing is not a declared capability + expect(() => { + client.setRequestHandler(ListRootsRequestSchema, () => ({})); + }).toThrow("Client does not support roots capability"); +}); + +/* + Test that custom request/notification/result schemas can be used with the Client class. 
+ */
+test("should typecheck", () => {
+  const GetWeatherRequestSchema = RequestSchema.extend({
+    method: z.literal("weather/get"),
+    params: z.object({
+      city: z.string(),
+    }),
+  });
+
+  const GetForecastRequestSchema = RequestSchema.extend({
+    method: z.literal("weather/forecast"),
+    params: z.object({
+      city: z.string(),
+      days: z.number(),
+    }),
+  });
+
+  const WeatherForecastNotificationSchema = NotificationSchema.extend({
+    method: z.literal("weather/alert"),
+    params: z.object({
+      severity: z.enum(["warning", "watch"]),
+      message: z.string(),
+    }),
+  });
+
+  const WeatherRequestSchema = GetWeatherRequestSchema.or(
+    GetForecastRequestSchema,
+  );
+  const WeatherNotificationSchema = WeatherForecastNotificationSchema;
+  const WeatherResultSchema = ResultSchema.extend({
+    temperature: z.number(),
+    conditions: z.string(),
+  });
+
+  type WeatherRequest = z.infer<typeof WeatherRequestSchema>;
+  type WeatherNotification = z.infer<typeof WeatherNotificationSchema>;
+  type WeatherResult = z.infer<typeof WeatherResultSchema>;
+
+  // Create a typed Client for weather data
+  const weatherClient = new Client<
+    WeatherRequest,
+    WeatherNotification,
+    WeatherResult
+  >(
+    {
+      name: "WeatherClient",
+      version: "1.0.0",
+    },
+    {
+      capabilities: {
+        sampling: {},
+      },
+    },
+  );
+
+  // Typecheck that only valid weather requests/notifications/results are allowed
+  false &&
+    weatherClient.request(
+      {
+        method: "weather/get",
+        params: {
+          city: "Seattle",
+        },
+      },
+      WeatherResultSchema,
+    );
+
+  false &&
+    weatherClient.notification({
+      method: "weather/alert",
+      params: {
+        severity: "warning",
+        message: "Storm approaching",
+      },
+    });
+});
+
+test("should handle client cancelling a request", async () => {
+  const server = new Server(
+    {
+      name: "test server",
+      version: "1.0",
+    },
+    {
+      capabilities: {
+        resources: {},
+      },
+    },
+  );
+
+  // Set up server to delay responding to listResources
+  server.setRequestHandler(
+    ListResourcesRequestSchema,
+    async (request, extra) => {
+      await new Promise((resolve) => setTimeout(resolve, 1000));
+      return {
resources: [], + }; + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: {}, + }, + ); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // Set up abort controller + const controller = new AbortController(); + + // Issue request but cancel it immediately + const listResourcesPromise = client.listResources(undefined, { + signal: controller.signal, + }); + controller.abort("Cancelled by test"); + + // Request should be rejected + await expect(listResourcesPromise).rejects.toBe("Cancelled by test"); +}); + +test("should handle request timeout", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + resources: {}, + }, + }, + ); + + // Set up server with a delayed response + server.setRequestHandler( + ListResourcesRequestSchema, + async (_request, extra) => { + const timer = new Promise((resolve) => { + const timeout = setTimeout(resolve, 100); + extra.signal.addEventListener("abort", () => clearTimeout(timeout)); + }); + + await timer; + return { + resources: [], + }; + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: {}, + }, + ); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // Request with 0 msec timeout should fail immediately + await expect( + client.listResources(undefined, { timeout: 0 }), + ).rejects.toMatchObject({ + code: ErrorCode.RequestTimeout, + }); +}); + + + +--- +File: /src/client/index.ts +--- + +import { + mergeCapabilities, + Protocol, + ProtocolOptions, + RequestOptions, +} from "../shared/protocol.js"; +import { Transport } from "../shared/transport.js"; +import { + CallToolRequest, + 
CallToolResultSchema, + ClientCapabilities, + ClientNotification, + ClientRequest, + ClientResult, + CompatibilityCallToolResultSchema, + CompleteRequest, + CompleteResultSchema, + EmptyResultSchema, + GetPromptRequest, + GetPromptResultSchema, + Implementation, + InitializeResultSchema, + LATEST_PROTOCOL_VERSION, + ListPromptsRequest, + ListPromptsResultSchema, + ListResourcesRequest, + ListResourcesResultSchema, + ListResourceTemplatesRequest, + ListResourceTemplatesResultSchema, + ListToolsRequest, + ListToolsResultSchema, + LoggingLevel, + Notification, + ReadResourceRequest, + ReadResourceResultSchema, + Request, + Result, + ServerCapabilities, + SubscribeRequest, + SUPPORTED_PROTOCOL_VERSIONS, + UnsubscribeRequest, +} from "../types.js"; + +export type ClientOptions = ProtocolOptions & { + /** + * Capabilities to advertise as being supported by this client. + */ + capabilities?: ClientCapabilities; +}; + +/** + * An MCP client on top of a pluggable transport. + * + * The client will automatically begin the initialization flow with the server when connect() is called. 
+ *
+ * To use with custom types, extend the base Request/Notification/Result types and pass them as type parameters:
+ *
+ * ```typescript
+ * // Custom schemas
+ * const CustomRequestSchema = RequestSchema.extend({...})
+ * const CustomNotificationSchema = NotificationSchema.extend({...})
+ * const CustomResultSchema = ResultSchema.extend({...})
+ *
+ * // Type aliases
+ * type CustomRequest = z.infer<typeof CustomRequestSchema>
+ * type CustomNotification = z.infer<typeof CustomNotificationSchema>
+ * type CustomResult = z.infer<typeof CustomResultSchema>
+ *
+ * // Create typed client
+ * const client = new Client<CustomRequest, CustomNotification, CustomResult>({
+ *   name: "CustomClient",
+ *   version: "1.0.0"
+ * })
+ * ```
+ */
+export class Client<
+  RequestT extends Request = Request,
+  NotificationT extends Notification = Notification,
+  ResultT extends Result = Result,
+> extends Protocol<
+  ClientRequest | RequestT,
+  ClientNotification | NotificationT,
+  ClientResult | ResultT
+> {
+  private _serverCapabilities?: ServerCapabilities;
+  private _serverVersion?: Implementation;
+  private _capabilities: ClientCapabilities;
+  private _instructions?: string;
+
+  /**
+   * Initializes this client with the given name and version information.
+   */
+  constructor(
+    private _clientInfo: Implementation,
+    options?: ClientOptions,
+  ) {
+    super(options);
+    this._capabilities = options?.capabilities ?? {};
+  }
+
+  /**
+   * Registers new capabilities. This can only be called before connecting to a transport.
+   *
+   * The new capabilities will be merged with any existing capabilities previously given (e.g., at initialization).
+   */
+  public registerCapabilities(capabilities: ClientCapabilities): void {
+    if (this.transport) {
+      throw new Error(
+        "Cannot register capabilities after connecting to transport",
+      );
+    }
+
+    this._capabilities = mergeCapabilities(this._capabilities, capabilities);
+  }
+
+  protected assertCapability(
+    capability: keyof ServerCapabilities,
+    method: string,
+  ): void {
+    if (!this._serverCapabilities?.[capability]) {
+      throw new Error(
+        `Server does not support ${capability} (required for ${method})`,
+      );
+    }
+  }
+
+  override async connect(transport: Transport): Promise<void> {
+    await super.connect(transport);
+
+    try {
+      const result = await this.request(
+        {
+          method: "initialize",
+          params: {
+            protocolVersion: LATEST_PROTOCOL_VERSION,
+            capabilities: this._capabilities,
+            clientInfo: this._clientInfo,
+          },
+        },
+        InitializeResultSchema,
+      );
+
+      if (result === undefined) {
+        throw new Error(`Server sent invalid initialize result: ${result}`);
+      }
+
+      if (!SUPPORTED_PROTOCOL_VERSIONS.includes(result.protocolVersion)) {
+        throw new Error(
+          `Server's protocol version is not supported: ${result.protocolVersion}`,
+        );
+      }
+
+      this._serverCapabilities = result.capabilities;
+      this._serverVersion = result.serverInfo;
+
+      this._instructions = result.instructions;
+
+      await this.notification({
+        method: "notifications/initialized",
+      });
+    } catch (error) {
+      // Disconnect if initialization fails.
+      void this.close();
+      throw error;
+    }
+  }
+
+  /**
+   * After initialization has completed, this will be populated with the server's reported capabilities.
+   */
+  getServerCapabilities(): ServerCapabilities | undefined {
+    return this._serverCapabilities;
+  }
+
+  /**
+   * After initialization has completed, this will be populated with information about the server's name and version.
+ */ + getServerVersion(): Implementation | undefined { + return this._serverVersion; + } + + /** + * After initialization has completed, this may be populated with information about the server's instructions. + */ + getInstructions(): string | undefined { + return this._instructions; + } + + protected assertCapabilityForMethod(method: RequestT["method"]): void { + switch (method as ClientRequest["method"]) { + case "logging/setLevel": + if (!this._serverCapabilities?.logging) { + throw new Error( + `Server does not support logging (required for ${method})`, + ); + } + break; + + case "prompts/get": + case "prompts/list": + if (!this._serverCapabilities?.prompts) { + throw new Error( + `Server does not support prompts (required for ${method})`, + ); + } + break; + + case "resources/list": + case "resources/templates/list": + case "resources/read": + case "resources/subscribe": + case "resources/unsubscribe": + if (!this._serverCapabilities?.resources) { + throw new Error( + `Server does not support resources (required for ${method})`, + ); + } + + if ( + method === "resources/subscribe" && + !this._serverCapabilities.resources.subscribe + ) { + throw new Error( + `Server does not support resource subscriptions (required for ${method})`, + ); + } + + break; + + case "tools/call": + case "tools/list": + if (!this._serverCapabilities?.tools) { + throw new Error( + `Server does not support tools (required for ${method})`, + ); + } + break; + + case "completion/complete": + if (!this._serverCapabilities?.prompts) { + throw new Error( + `Server does not support prompts (required for ${method})`, + ); + } + break; + + case "initialize": + // No specific capability required for initialize + break; + + case "ping": + // No specific capability required for ping + break; + } + } + + protected assertNotificationCapability( + method: NotificationT["method"], + ): void { + switch (method as ClientNotification["method"]) { + case "notifications/roots/list_changed": + if 
(!this._capabilities.roots?.listChanged) { + throw new Error( + `Client does not support roots list changed notifications (required for ${method})`, + ); + } + break; + + case "notifications/initialized": + // No specific capability required for initialized + break; + + case "notifications/cancelled": + // Cancellation notifications are always allowed + break; + + case "notifications/progress": + // Progress notifications are always allowed + break; + } + } + + protected assertRequestHandlerCapability(method: string): void { + switch (method) { + case "sampling/createMessage": + if (!this._capabilities.sampling) { + throw new Error( + `Client does not support sampling capability (required for ${method})`, + ); + } + break; + + case "roots/list": + if (!this._capabilities.roots) { + throw new Error( + `Client does not support roots capability (required for ${method})`, + ); + } + break; + + case "ping": + // No specific capability required for ping + break; + } + } + + async ping(options?: RequestOptions) { + return this.request({ method: "ping" }, EmptyResultSchema, options); + } + + async complete(params: CompleteRequest["params"], options?: RequestOptions) { + return this.request( + { method: "completion/complete", params }, + CompleteResultSchema, + options, + ); + } + + async setLoggingLevel(level: LoggingLevel, options?: RequestOptions) { + return this.request( + { method: "logging/setLevel", params: { level } }, + EmptyResultSchema, + options, + ); + } + + async getPrompt( + params: GetPromptRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "prompts/get", params }, + GetPromptResultSchema, + options, + ); + } + + async listPrompts( + params?: ListPromptsRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "prompts/list", params }, + ListPromptsResultSchema, + options, + ); + } + + async listResources( + params?: ListResourcesRequest["params"], + options?: RequestOptions, + ) { + return 
this.request( + { method: "resources/list", params }, + ListResourcesResultSchema, + options, + ); + } + + async listResourceTemplates( + params?: ListResourceTemplatesRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "resources/templates/list", params }, + ListResourceTemplatesResultSchema, + options, + ); + } + + async readResource( + params: ReadResourceRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "resources/read", params }, + ReadResourceResultSchema, + options, + ); + } + + async subscribeResource( + params: SubscribeRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "resources/subscribe", params }, + EmptyResultSchema, + options, + ); + } + + async unsubscribeResource( + params: UnsubscribeRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "resources/unsubscribe", params }, + EmptyResultSchema, + options, + ); + } + + async callTool( + params: CallToolRequest["params"], + resultSchema: + | typeof CallToolResultSchema + | typeof CompatibilityCallToolResultSchema = CallToolResultSchema, + options?: RequestOptions, + ) { + return this.request( + { method: "tools/call", params }, + resultSchema, + options, + ); + } + + async listTools( + params?: ListToolsRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "tools/list", params }, + ListToolsResultSchema, + options, + ); + } + + async sendRootsListChanged() { + return this.notification({ method: "notifications/roots/list_changed" }); + } +} + + + +--- +File: /src/client/sse.test.ts +--- + +import { createServer, type IncomingMessage, type Server } from "http"; +import { AddressInfo } from "net"; +import { JSONRPCMessage } from "../types.js"; +import { SSEClientTransport } from "./sse.js"; +import { OAuthClientProvider, UnauthorizedError } from "./auth.js"; +import { OAuthTokens } from "../shared/auth.js"; + 
+describe("SSEClientTransport", () => { + let server: Server; + let transport: SSEClientTransport; + let baseUrl: URL; + let lastServerRequest: IncomingMessage; + let sendServerMessage: ((message: string) => void) | null = null; + + beforeEach((done) => { + // Reset state + lastServerRequest = null as unknown as IncomingMessage; + sendServerMessage = null; + + // Create a test server that will receive the EventSource connection + server = createServer((req, res) => { + lastServerRequest = req; + + // Send SSE headers + res.writeHead(200, { + "Content-Type": "text/event-stream", + "Cache-Control": "no-cache", + Connection: "keep-alive", + }); + + // Send the endpoint event + res.write("event: endpoint\n"); + res.write(`data: ${baseUrl.href}\n\n`); + + // Store reference to send function for tests + sendServerMessage = (message: string) => { + res.write(`data: ${message}\n\n`); + }; + + // Handle request body for POST endpoints + if (req.method === "POST") { + let body = ""; + req.on("data", (chunk) => { + body += chunk; + }); + req.on("end", () => { + (req as IncomingMessage & { body: string }).body = body; + res.end(); + }); + } + }); + + // Start server on random port + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + done(); + }); + }); + + afterEach(async () => { + await transport.close(); + await server.close(); + + jest.clearAllMocks(); + }); + + describe("connection handling", () => { + it("establishes SSE connection and receives endpoint", async () => { + transport = new SSEClientTransport(baseUrl); + await transport.start(); + + expect(lastServerRequest.headers.accept).toBe("text/event-stream"); + expect(lastServerRequest.method).toBe("GET"); + }); + + it("rejects if server returns non-200 status", async () => { + // Create a server that returns 403 + await server.close(); + + server = createServer((req, res) => { + res.writeHead(403); + res.end(); + }); + + await 
new Promise((resolve) => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl); + await expect(transport.start()).rejects.toThrow(); + }); + + it("closes EventSource connection on close()", async () => { + transport = new SSEClientTransport(baseUrl); + await transport.start(); + + const closePromise = new Promise((resolve) => { + lastServerRequest.on("close", resolve); + }); + + await transport.close(); + await closePromise; + }); + }); + + describe("message handling", () => { + it("receives and parses JSON-RPC messages", async () => { + const receivedMessages: JSONRPCMessage[] = []; + transport = new SSEClientTransport(baseUrl); + transport.onmessage = (msg) => receivedMessages.push(msg); + + await transport.start(); + + const testMessage: JSONRPCMessage = { + jsonrpc: "2.0", + id: "test-1", + method: "test", + params: { foo: "bar" }, + }; + + sendServerMessage!(JSON.stringify(testMessage)); + + // Wait for message processing + await new Promise((resolve) => setTimeout(resolve, 50)); + + expect(receivedMessages).toHaveLength(1); + expect(receivedMessages[0]).toEqual(testMessage); + }); + + it("handles malformed JSON messages", async () => { + const errors: Error[] = []; + transport = new SSEClientTransport(baseUrl); + transport.onerror = (err) => errors.push(err); + + await transport.start(); + + sendServerMessage!("invalid json"); + + // Wait for message processing + await new Promise((resolve) => setTimeout(resolve, 50)); + + expect(errors).toHaveLength(1); + expect(errors[0].message).toMatch(/JSON/); + }); + + it("handles messages via POST requests", async () => { + transport = new SSEClientTransport(baseUrl); + await transport.start(); + + const testMessage: JSONRPCMessage = { + jsonrpc: "2.0", + id: "test-1", + method: "test", + params: { foo: "bar" }, + }; + + await transport.send(testMessage); + + 
// Wait for request processing + await new Promise((resolve) => setTimeout(resolve, 50)); + + expect(lastServerRequest.method).toBe("POST"); + expect(lastServerRequest.headers["content-type"]).toBe( + "application/json", + ); + expect( + JSON.parse( + (lastServerRequest as IncomingMessage & { body: string }).body, + ), + ).toEqual(testMessage); + }); + + it("handles POST request failures", async () => { + // Create a server that returns 500 for POST + await server.close(); + + server = createServer((req, res) => { + if (req.method === "GET") { + res.writeHead(200, { + "Content-Type": "text/event-stream", + "Cache-Control": "no-cache", + Connection: "keep-alive", + }); + res.write("event: endpoint\n"); + res.write(`data: ${baseUrl.href}\n\n`); + } else { + res.writeHead(500); + res.end("Internal error"); + } + }); + + await new Promise((resolve) => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl); + await transport.start(); + + const testMessage: JSONRPCMessage = { + jsonrpc: "2.0", + id: "test-1", + method: "test", + params: {}, + }; + + await expect(transport.send(testMessage)).rejects.toThrow(/500/); + }); + }); + + describe("header handling", () => { + it("uses custom fetch implementation from EventSourceInit to add auth headers", async () => { + const authToken = "Bearer test-token"; + + // Create a fetch wrapper that adds auth header + const fetchWithAuth = (url: string | URL, init?: RequestInit) => { + const headers = new Headers(init?.headers); + headers.set("Authorization", authToken); + return fetch(url.toString(), { ...init, headers }); + }; + + transport = new SSEClientTransport(baseUrl, { + eventSourceInit: { + fetch: fetchWithAuth, + }, + }); + + await transport.start(); + + // Verify the auth header was received by the server + 
expect(lastServerRequest.headers.authorization).toBe(authToken);
+    });
+
+    it("passes custom headers to fetch requests", async () => {
+      const customHeaders = {
+        Authorization: "Bearer test-token",
+        "X-Custom-Header": "custom-value",
+      };
+
+      transport = new SSEClientTransport(baseUrl, {
+        requestInit: {
+          headers: customHeaders,
+        },
+      });
+
+      await transport.start();
+
+      // Store original fetch
+      const originalFetch = global.fetch;
+
+      try {
+        // Mock fetch for the message sending test
+        global.fetch = jest.fn().mockResolvedValue({
+          ok: true,
+        });
+
+        const message: JSONRPCMessage = {
+          jsonrpc: "2.0",
+          id: "1",
+          method: "test",
+          params: {},
+        };
+
+        await transport.send(message);
+
+        // Verify fetch was called with correct headers
+        expect(global.fetch).toHaveBeenCalledWith(
+          expect.any(URL),
+          expect.objectContaining({
+            headers: expect.any(Headers),
+          }),
+        );
+
+        const calledHeaders = (global.fetch as jest.Mock).mock.calls[0][1]
+          .headers;
+        expect(calledHeaders.get("Authorization")).toBe(
+          customHeaders.Authorization,
+        );
+        expect(calledHeaders.get("X-Custom-Header")).toBe(
+          customHeaders["X-Custom-Header"],
+        );
+        expect(calledHeaders.get("content-type")).toBe("application/json");
+      } finally {
+        // Restore original fetch
+        global.fetch = originalFetch;
+      }
+    });
+  });
+
+  describe("auth handling", () => {
+    let mockAuthProvider: jest.Mocked<OAuthClientProvider>;
+
+    beforeEach(() => {
+      mockAuthProvider = {
+        get redirectUrl() { return "http://localhost/callback"; },
+        get clientMetadata() { return { redirect_uris: ["http://localhost/callback"] }; },
+        clientInformation: jest.fn(() => ({ client_id: "test-client-id", client_secret: "test-client-secret" })),
+        tokens: jest.fn(),
+        saveTokens: jest.fn(),
+        redirectToAuthorization: jest.fn(),
+        saveCodeVerifier: jest.fn(),
+        codeVerifier: jest.fn(),
+      };
+    });
+
+    it("attaches auth header from provider on SSE connection", async () => {
+      mockAuthProvider.tokens.mockResolvedValue({
+        access_token: "test-token",
token_type: "Bearer" + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await transport.start(); + + expect(lastServerRequest.headers.authorization).toBe("Bearer test-token"); + expect(mockAuthProvider.tokens).toHaveBeenCalled(); + }); + + it("attaches auth header from provider on POST requests", async () => { + mockAuthProvider.tokens.mockResolvedValue({ + access_token: "test-token", + token_type: "Bearer" + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await transport.start(); + + const message: JSONRPCMessage = { + jsonrpc: "2.0", + id: "1", + method: "test", + params: {}, + }; + + await transport.send(message); + + expect(lastServerRequest.headers.authorization).toBe("Bearer test-token"); + expect(mockAuthProvider.tokens).toHaveBeenCalled(); + }); + + it("attempts auth flow on 401 during SSE connection", async () => { + // Create server that returns 401s + await server.close(); + + server = createServer((req, res) => { + lastServerRequest = req; + if (req.url !== "/") { + res.writeHead(404).end(); + } else { + res.writeHead(401).end(); + } + }); + + await new Promise(resolve => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await expect(() => transport.start()).rejects.toThrow(UnauthorizedError); + expect(mockAuthProvider.redirectToAuthorization.mock.calls).toHaveLength(1); + }); + + it("attempts auth flow on 401 during POST request", async () => { + // Create server that accepts SSE but returns 401 on POST + await server.close(); + + server = createServer((req, res) => { + lastServerRequest = req; + + switch (req.method) { + case "GET": + if (req.url !== "/") { + res.writeHead(404).end(); + return; + } + + res.writeHead(200, { + "Content-Type": 
"text/event-stream", + "Cache-Control": "no-cache", + Connection: "keep-alive", + }); + res.write("event: endpoint\n"); + res.write(`data: ${baseUrl.href}\n\n`); + break; + + case "POST": + res.writeHead(401); + res.end(); + break; + } + }); + + await new Promise(resolve => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await transport.start(); + + const message: JSONRPCMessage = { + jsonrpc: "2.0", + id: "1", + method: "test", + params: {}, + }; + + await expect(() => transport.send(message)).rejects.toThrow(UnauthorizedError); + expect(mockAuthProvider.redirectToAuthorization.mock.calls).toHaveLength(1); + }); + + it("respects custom headers when using auth provider", async () => { + mockAuthProvider.tokens.mockResolvedValue({ + access_token: "test-token", + token_type: "Bearer" + }); + + const customHeaders = { + "X-Custom-Header": "custom-value", + }; + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + requestInit: { + headers: customHeaders, + }, + }); + + await transport.start(); + + const message: JSONRPCMessage = { + jsonrpc: "2.0", + id: "1", + method: "test", + params: {}, + }; + + await transport.send(message); + + expect(lastServerRequest.headers.authorization).toBe("Bearer test-token"); + expect(lastServerRequest.headers["x-custom-header"]).toBe("custom-value"); + }); + + it("refreshes expired token during SSE connection", async () => { + // Mock tokens() to return expired token until saveTokens is called + let currentTokens: OAuthTokens = { + access_token: "expired-token", + token_type: "Bearer", + refresh_token: "refresh-token" + }; + mockAuthProvider.tokens.mockImplementation(() => currentTokens); + mockAuthProvider.saveTokens.mockImplementation((tokens) => { + currentTokens = tokens; + }); + + // Create 
server that returns 401 for expired token, then accepts new token + await server.close(); + + let connectionAttempts = 0; + server = createServer((req, res) => { + lastServerRequest = req; + + if (req.url === "/token" && req.method === "POST") { + // Handle token refresh request + let body = ""; + req.on("data", chunk => { body += chunk; }); + req.on("end", () => { + const params = new URLSearchParams(body); + if (params.get("grant_type") === "refresh_token" && + params.get("refresh_token") === "refresh-token" && + params.get("client_id") === "test-client-id" && + params.get("client_secret") === "test-client-secret") { + res.writeHead(200, { "Content-Type": "application/json" }); + res.end(JSON.stringify({ + access_token: "new-token", + token_type: "Bearer", + refresh_token: "new-refresh-token" + })); + } else { + res.writeHead(400).end(); + } + }); + return; + } + + if (req.url !== "/") { + res.writeHead(404).end(); + return; + } + + const auth = req.headers.authorization; + if (auth === "Bearer expired-token") { + res.writeHead(401).end(); + return; + } + + if (auth === "Bearer new-token") { + res.writeHead(200, { + "Content-Type": "text/event-stream", + "Cache-Control": "no-cache", + Connection: "keep-alive", + }); + res.write("event: endpoint\n"); + res.write(`data: ${baseUrl.href}\n\n`); + connectionAttempts++; + return; + } + + res.writeHead(401).end(); + }); + + await new Promise(resolve => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await transport.start(); + + expect(mockAuthProvider.saveTokens).toHaveBeenCalledWith({ + access_token: "new-token", + token_type: "Bearer", + refresh_token: "new-refresh-token" + }); + expect(connectionAttempts).toBe(1); + expect(lastServerRequest.headers.authorization).toBe("Bearer new-token"); + }); + + 
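+    // A minimal sketch of the refresh exchange the mock server above implements
+    // (illustrative only, not SDK API): the transport POSTs an RFC 6749
+    // refresh_token grant to the token endpoint and the provider persists the
+    // rotated tokens it gets back, roughly:
+    //
+    //   const body = new URLSearchParams({
+    //     grant_type: "refresh_token",
+    //     refresh_token: "refresh-token",
+    //     client_id: "test-client-id",
+    //     client_secret: "test-client-secret",
+    //   });
+    //   const response = await fetch(new URL("/token", baseUrl), {
+    //     method: "POST",
+    //     headers: { "Content-Type": "application/x-www-form-urlencoded" },
+    //     body,
+    //   });
+    //   await mockAuthProvider.saveTokens(await response.json());
+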
it("refreshes expired token during POST request", async () => { + // Mock tokens() to return expired token until saveTokens is called + let currentTokens: OAuthTokens = { + access_token: "expired-token", + token_type: "Bearer", + refresh_token: "refresh-token" + }; + mockAuthProvider.tokens.mockImplementation(() => currentTokens); + mockAuthProvider.saveTokens.mockImplementation((tokens) => { + currentTokens = tokens; + }); + + // Create server that accepts SSE but returns 401 on POST with expired token + await server.close(); + + let postAttempts = 0; + server = createServer((req, res) => { + lastServerRequest = req; + + if (req.url === "/token" && req.method === "POST") { + // Handle token refresh request + let body = ""; + req.on("data", chunk => { body += chunk; }); + req.on("end", () => { + const params = new URLSearchParams(body); + if (params.get("grant_type") === "refresh_token" && + params.get("refresh_token") === "refresh-token" && + params.get("client_id") === "test-client-id" && + params.get("client_secret") === "test-client-secret") { + res.writeHead(200, { "Content-Type": "application/json" }); + res.end(JSON.stringify({ + access_token: "new-token", + token_type: "Bearer", + refresh_token: "new-refresh-token" + })); + } else { + res.writeHead(400).end(); + } + }); + return; + } + + switch (req.method) { + case "GET": + if (req.url !== "/") { + res.writeHead(404).end(); + return; + } + + res.writeHead(200, { + "Content-Type": "text/event-stream", + "Cache-Control": "no-cache", + Connection: "keep-alive", + }); + res.write("event: endpoint\n"); + res.write(`data: ${baseUrl.href}\n\n`); + break; + + case "POST": { + if (req.url !== "/") { + res.writeHead(404).end(); + return; + } + + const auth = req.headers.authorization; + if (auth === "Bearer expired-token") { + res.writeHead(401).end(); + return; + } + + if (auth === "Bearer new-token") { + res.writeHead(200).end(); + postAttempts++; + return; + } + + res.writeHead(401).end(); + break; + } + } + }); 
+ + await new Promise(resolve => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await transport.start(); + + const message: JSONRPCMessage = { + jsonrpc: "2.0", + id: "1", + method: "test", + params: {}, + }; + + await transport.send(message); + + expect(mockAuthProvider.saveTokens).toHaveBeenCalledWith({ + access_token: "new-token", + token_type: "Bearer", + refresh_token: "new-refresh-token" + }); + expect(postAttempts).toBe(1); + expect(lastServerRequest.headers.authorization).toBe("Bearer new-token"); + }); + + it("redirects to authorization if refresh token flow fails", async () => { + // Mock tokens() to return expired token until saveTokens is called + let currentTokens: OAuthTokens = { + access_token: "expired-token", + token_type: "Bearer", + refresh_token: "refresh-token" + }; + mockAuthProvider.tokens.mockImplementation(() => currentTokens); + mockAuthProvider.saveTokens.mockImplementation((tokens) => { + currentTokens = tokens; + }); + + // Create server that returns 401 for all tokens + await server.close(); + + server = createServer((req, res) => { + lastServerRequest = req; + + if (req.url === "/token" && req.method === "POST") { + // Handle token refresh request - always fail + res.writeHead(400).end(); + return; + } + + if (req.url !== "/") { + res.writeHead(404).end(); + return; + } + res.writeHead(401).end(); + }); + + await new Promise(resolve => { + server.listen(0, "127.0.0.1", () => { + const addr = server.address() as AddressInfo; + baseUrl = new URL(`http://127.0.0.1:${addr.port}`); + resolve(); + }); + }); + + transport = new SSEClientTransport(baseUrl, { + authProvider: mockAuthProvider, + }); + + await expect(() => transport.start()).rejects.toThrow(UnauthorizedError); + 
expect(mockAuthProvider.redirectToAuthorization).toHaveBeenCalled(); + }); + }); +}); + + + +--- +File: /src/client/sse.ts +--- + +import { EventSource, type ErrorEvent, type EventSourceInit } from "eventsource"; +import { Transport } from "../shared/transport.js"; +import { JSONRPCMessage, JSONRPCMessageSchema } from "../types.js"; +import { auth, AuthResult, OAuthClientProvider, UnauthorizedError } from "./auth.js"; + +export class SseError extends Error { + constructor( + public readonly code: number | undefined, + message: string | undefined, + public readonly event: ErrorEvent, + ) { + super(`SSE error: ${message}`); + } +} + +/** + * Configuration options for the `SSEClientTransport`. + */ +export type SSEClientTransportOptions = { + /** + * An OAuth client provider to use for authentication. + * + * When an `authProvider` is specified and the SSE connection is started: + * 1. The connection is attempted with any existing access token from the `authProvider`. + * 2. If the access token has expired, the `authProvider` is used to refresh the token. + * 3. If token refresh fails or no access token exists, and auth is required, `OAuthClientProvider.redirectToAuthorization` is called, and an `UnauthorizedError` will be thrown from `connect`/`start`. + * + * After the user has finished authorizing via their user agent, and is redirected back to the MCP client application, call `SSEClientTransport.finishAuth` with the authorization code before retrying the connection. + * + * If an `authProvider` is not provided, and auth is required, an `UnauthorizedError` will be thrown. + * + * `UnauthorizedError` might also be thrown when sending any message over the SSE transport, indicating that the session has expired, and needs to be re-authed and reconnected. + */ + authProvider?: OAuthClientProvider; + + /** + * Customizes the initial SSE request to the server (the request that begins the stream). 
+   *
+   * NOTE: Setting this property will prevent an `Authorization` header from
+   * being automatically attached to the SSE request, if an `authProvider` is
+   * also given. This can be worked around by setting the `Authorization` header
+   * manually.
+   */
+  eventSourceInit?: EventSourceInit;
+
+  /**
+   * Customizes recurring POST requests to the server.
+   */
+  requestInit?: RequestInit;
+};
+
+/**
+ * Client transport for SSE: this will connect to a server using Server-Sent Events for receiving
+ * messages and make separate POST requests for sending messages.
+ */
+export class SSEClientTransport implements Transport {
+  private _eventSource?: EventSource;
+  private _endpoint?: URL;
+  private _abortController?: AbortController;
+  private _url: URL;
+  private _eventSourceInit?: EventSourceInit;
+  private _requestInit?: RequestInit;
+  private _authProvider?: OAuthClientProvider;
+
+  onclose?: () => void;
+  onerror?: (error: Error) => void;
+  onmessage?: (message: JSONRPCMessage) => void;
+
+  constructor(
+    url: URL,
+    opts?: SSEClientTransportOptions,
+  ) {
+    this._url = url;
+    this._eventSourceInit = opts?.eventSourceInit;
+    this._requestInit = opts?.requestInit;
+    this._authProvider = opts?.authProvider;
+  }
+
+  private async _authThenStart(): Promise<void> {
+    if (!this._authProvider) {
+      throw new UnauthorizedError("No auth provider");
+    }
+
+    let result: AuthResult;
+    try {
+      result = await auth(this._authProvider, { serverUrl: this._url });
+    } catch (error) {
+      this.onerror?.(error as Error);
+      throw error;
+    }
+
+    if (result !== "AUTHORIZED") {
+      throw new UnauthorizedError();
+    }
+
+    return await this._startOrAuth();
+  }
+
+  private async _commonHeaders(): Promise<HeadersInit> {
+    const headers: HeadersInit = {};
+    if (this._authProvider) {
+      const tokens = await this._authProvider.tokens();
+      if (tokens) {
+        headers["Authorization"] = `Bearer ${tokens.access_token}`;
+      }
+    }
+
+    return headers;
+  }
+
+  private _startOrAuth(): Promise<void> {
+    return new Promise((resolve, reject) 
=> { + this._eventSource = new EventSource( + this._url.href, + this._eventSourceInit ?? { + fetch: (url, init) => this._commonHeaders().then((headers) => fetch(url, { + ...init, + headers: { + ...headers, + Accept: "text/event-stream" + } + })), + }, + ); + this._abortController = new AbortController(); + + this._eventSource.onerror = (event) => { + if (event.code === 401 && this._authProvider) { + this._authThenStart().then(resolve, reject); + return; + } + + const error = new SseError(event.code, event.message, event); + reject(error); + this.onerror?.(error); + }; + + this._eventSource.onopen = () => { + // The connection is open, but we need to wait for the endpoint to be received. + }; + + this._eventSource.addEventListener("endpoint", (event: Event) => { + const messageEvent = event as MessageEvent; + + try { + this._endpoint = new URL(messageEvent.data, this._url); + if (this._endpoint.origin !== this._url.origin) { + throw new Error( + `Endpoint origin does not match connection origin: ${this._endpoint.origin}`, + ); + } + } catch (error) { + reject(error); + this.onerror?.(error as Error); + + void this.close(); + return; + } + + resolve(); + }); + + this._eventSource.onmessage = (event: Event) => { + const messageEvent = event as MessageEvent; + let message: JSONRPCMessage; + try { + message = JSONRPCMessageSchema.parse(JSON.parse(messageEvent.data)); + } catch (error) { + this.onerror?.(error as Error); + return; + } + + this.onmessage?.(message); + }; + }); + } + + async start() { + if (this._eventSource) { + throw new Error( + "SSEClientTransport already started! If using Client class, note that connect() calls start() automatically.", + ); + } + + return await this._startOrAuth(); + } + + /** + * Call this method after the user has finished authorizing via their user agent and is redirected back to the MCP client application. This will exchange the authorization code for an access token, enabling the next connection attempt to successfully auth. 
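+   *
+   * Hypothetical sketch of the redirect-callback wiring (names are illustrative):
+   *
+   *     const code = callbackUrl.searchParams.get("code");
+   *     if (code) {
+   *       await transport.finishAuth(code);
+   *       // ...then retry the connection
+   *     }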
+   */
+  async finishAuth(authorizationCode: string): Promise<void> {
+    if (!this._authProvider) {
+      throw new UnauthorizedError("No auth provider");
+    }
+
+    const result = await auth(this._authProvider, { serverUrl: this._url, authorizationCode });
+    if (result !== "AUTHORIZED") {
+      throw new UnauthorizedError("Failed to authorize");
+    }
+  }
+
+  async close(): Promise<void> {
+    this._abortController?.abort();
+    this._eventSource?.close();
+    this.onclose?.();
+  }
+
+  async send(message: JSONRPCMessage): Promise<void> {
+    if (!this._endpoint) {
+      throw new Error("Not connected");
+    }
+
+    try {
+      const commonHeaders = await this._commonHeaders();
+      const headers = new Headers({ ...commonHeaders, ...this._requestInit?.headers });
+      headers.set("content-type", "application/json");
+      const init = {
+        ...this._requestInit,
+        method: "POST",
+        headers,
+        body: JSON.stringify(message),
+        signal: this._abortController?.signal,
+      };
+
+      const response = await fetch(this._endpoint, init);
+      if (!response.ok) {
+        if (response.status === 401 && this._authProvider) {
+          const result = await auth(this._authProvider, { serverUrl: this._url });
+          if (result !== "AUTHORIZED") {
+            throw new UnauthorizedError();
+          }
+
+          // Purposely _not_ awaited, so we don't call onerror twice
+          return this.send(message);
+        }
+
+        const text = await response.text().catch(() => null);
+        throw new Error(
+          `Error POSTing to endpoint (HTTP ${response.status}): ${text}`,
+        );
+      }
+    } catch (error) {
+      this.onerror?.(error as Error);
+      throw error;
+    }
+  }
+}
+
+
+
+---
+File: /src/client/stdio.test.ts
+---
+
+import { JSONRPCMessage } from "../types.js";
+import { StdioClientTransport, StdioServerParameters } from "./stdio.js";
+
+const serverParameters: StdioServerParameters = {
+  command: "/usr/bin/tee",
+};
+
+test("should start then close cleanly", async () => {
+  const client = new StdioClientTransport(serverParameters);
+  client.onerror = (error) => {
+    throw error;
+  };
+
+  let didClose = false;
+  client.onclose = () 
=> {
+    didClose = true;
+  };
+
+  await client.start();
+  expect(didClose).toBeFalsy();
+  await client.close();
+  expect(didClose).toBeTruthy();
+});
+
+test("should read messages", async () => {
+  const client = new StdioClientTransport(serverParameters);
+  client.onerror = (error) => {
+    throw error;
+  };
+
+  const messages: JSONRPCMessage[] = [
+    {
+      jsonrpc: "2.0",
+      id: 1,
+      method: "ping",
+    },
+    {
+      jsonrpc: "2.0",
+      method: "notifications/initialized",
+    },
+  ];
+
+  const readMessages: JSONRPCMessage[] = [];
+  const finished = new Promise<void>((resolve) => {
+    client.onmessage = (message) => {
+      readMessages.push(message);
+
+      if (JSON.stringify(message) === JSON.stringify(messages[1])) {
+        resolve();
+      }
+    };
+  });
+
+  await client.start();
+  await client.send(messages[0]);
+  await client.send(messages[1]);
+  await finished;
+  expect(readMessages).toEqual(messages);
+
+  await client.close();
+});
+
+
+
+---
+File: /src/client/stdio.ts
+---
+
+import { ChildProcess, IOType, spawn } from "node:child_process";
+import process from "node:process";
+import { Stream } from "node:stream";
+import { ReadBuffer, serializeMessage } from "../shared/stdio.js";
+import { Transport } from "../shared/transport.js";
+import { JSONRPCMessage } from "../types.js";
+
+export type StdioServerParameters = {
+  /**
+   * The executable to run to start the server.
+   */
+  command: string;
+
+  /**
+   * Command line arguments to pass to the executable.
+   */
+  args?: string[];
+
+  /**
+   * The environment to use when spawning the process.
+   *
+   * If not specified, the result of getDefaultEnvironment() will be used.
+   */
+  env?: Record<string, string>;
+
+  /**
+   * How to handle stderr of the child process. This matches the semantics of Node's `child_process.spawn`.
+   *
+   * The default is "inherit", meaning messages to stderr will be printed to the parent process's stderr.
+   */
+  stderr?: IOType | Stream | number;
+
+  /**
+   * The working directory to use when spawning the process.
+   *
+   * If not specified, the current working directory will be inherited.
+   */
+  cwd?: string;
+};
+
+/**
+ * Environment variables to inherit by default, if an environment is not explicitly given.
+ */
+export const DEFAULT_INHERITED_ENV_VARS =
+  process.platform === "win32"
+    ? [
+      "APPDATA",
+      "HOMEDRIVE",
+      "HOMEPATH",
+      "LOCALAPPDATA",
+      "PATH",
+      "PROCESSOR_ARCHITECTURE",
+      "SYSTEMDRIVE",
+      "SYSTEMROOT",
+      "TEMP",
+      "USERNAME",
+      "USERPROFILE",
+    ]
+    : /* list inspired by the default env inheritance of sudo */
+    ["HOME", "LOGNAME", "PATH", "SHELL", "TERM", "USER"];
+
+/**
+ * Returns a default environment object including only environment variables deemed safe to inherit.
+ */
+export function getDefaultEnvironment(): Record<string, string> {
+  const env: Record<string, string> = {};
+
+  for (const key of DEFAULT_INHERITED_ENV_VARS) {
+    const value = process.env[key];
+    if (value === undefined) {
+      continue;
+    }
+
+    if (value.startsWith("()")) {
+      // Skip functions, which are a security risk.
+      continue;
+    }
+
+    env[key] = value;
+  }
+
+  return env;
+}
+
+/**
+ * Client transport for stdio: this will connect to a server by spawning a process and communicating with it over stdin/stdout.
+ *
+ * This transport is only available in Node.js environments.
+ */
+export class StdioClientTransport implements Transport {
+  private _process?: ChildProcess;
+  private _abortController: AbortController = new AbortController();
+  private _readBuffer: ReadBuffer = new ReadBuffer();
+  private _serverParams: StdioServerParameters;
+
+  onclose?: () => void;
+  onerror?: (error: Error) => void;
+  onmessage?: (message: JSONRPCMessage) => void;
+
+  constructor(server: StdioServerParameters) {
+    this._serverParams = server;
+  }
+
+  /**
+   * Starts the server process and prepares to communicate with it.
+   */
+  async start(): Promise<void> {
+    if (this._process) {
+      throw new Error(
+        "StdioClientTransport already started! If using Client class, note that connect() calls start() automatically." 
+ ); + } + + return new Promise((resolve, reject) => { + this._process = spawn( + this._serverParams.command, + this._serverParams.args ?? [], + { + env: this._serverParams.env ?? getDefaultEnvironment(), + stdio: ["pipe", "pipe", this._serverParams.stderr ?? "inherit"], + shell: false, + signal: this._abortController.signal, + windowsHide: process.platform === "win32" && isElectron(), + cwd: this._serverParams.cwd, + } + ); + + this._process.on("error", (error) => { + if (error.name === "AbortError") { + // Expected when close() is called. + this.onclose?.(); + return; + } + + reject(error); + this.onerror?.(error); + }); + + this._process.on("spawn", () => { + resolve(); + }); + + this._process.on("close", (_code) => { + this._process = undefined; + this.onclose?.(); + }); + + this._process.stdin?.on("error", (error) => { + this.onerror?.(error); + }); + + this._process.stdout?.on("data", (chunk) => { + this._readBuffer.append(chunk); + this.processReadBuffer(); + }); + + this._process.stdout?.on("error", (error) => { + this.onerror?.(error); + }); + }); + } + + /** + * The stderr stream of the child process, if `StdioServerParameters.stderr` was set to "pipe" or "overlapped". + * + * This is only available after the process has been started. + */ + get stderr(): Stream | null { + return this._process?.stderr ?? 
null;
+  }
+
+  private processReadBuffer() {
+    while (true) {
+      try {
+        const message = this._readBuffer.readMessage();
+        if (message === null) {
+          break;
+        }
+
+        this.onmessage?.(message);
+      } catch (error) {
+        this.onerror?.(error as Error);
+      }
+    }
+  }
+
+  async close(): Promise<void> {
+    this._abortController.abort();
+    this._process = undefined;
+    this._readBuffer.clear();
+  }
+
+  send(message: JSONRPCMessage): Promise<void> {
+    return new Promise((resolve) => {
+      if (!this._process?.stdin) {
+        throw new Error("Not connected");
+      }
+
+      const json = serializeMessage(message);
+      if (this._process.stdin.write(json)) {
+        resolve();
+      } else {
+        this._process.stdin.once("drain", resolve);
+      }
+    });
+  }
+}
+
+function isElectron() {
+  return "type" in process;
+}
+
+
+
+---
+File: /src/client/websocket.ts
+---
+
+import { Transport } from "../shared/transport.js";
+import { JSONRPCMessage, JSONRPCMessageSchema } from "../types.js";
+
+const SUBPROTOCOL = "mcp";
+
+/**
+ * Client transport for WebSocket: this will connect to a server over the WebSocket protocol.
+ */
+export class WebSocketClientTransport implements Transport {
+  private _socket?: WebSocket;
+  private _url: URL;
+
+  onclose?: () => void;
+  onerror?: (error: Error) => void;
+  onmessage?: (message: JSONRPCMessage) => void;
+
+  constructor(url: URL) {
+    this._url = url;
+  }
+
+  start(): Promise<void> {
+    if (this._socket) {
+      throw new Error(
+        "WebSocketClientTransport already started! If using Client class, note that connect() calls start() automatically.",
+      );
+    }
+
+    return new Promise((resolve, reject) => {
+      this._socket = new WebSocket(this._url, SUBPROTOCOL);
+
+      this._socket.onerror = (event) => {
+        const error =
+          "error" in event
+            ? 
(event.error as Error)
+            : new Error(`WebSocket error: ${JSON.stringify(event)}`);
+        reject(error);
+        this.onerror?.(error);
+      };
+
+      this._socket.onopen = () => {
+        resolve();
+      };
+
+      this._socket.onclose = () => {
+        this.onclose?.();
+      };
+
+      this._socket.onmessage = (event: MessageEvent) => {
+        let message: JSONRPCMessage;
+        try {
+          message = JSONRPCMessageSchema.parse(JSON.parse(event.data));
+        } catch (error) {
+          this.onerror?.(error as Error);
+          return;
+        }
+
+        this.onmessage?.(message);
+      };
+    });
+  }
+
+  async close(): Promise<void> {
+    this._socket?.close();
+  }
+
+  send(message: JSONRPCMessage): Promise<void> {
+    return new Promise((resolve, reject) => {
+      if (!this._socket) {
+        reject(new Error("Not connected"));
+        return;
+      }
+
+      this._socket?.send(JSON.stringify(message));
+      resolve();
+    });
+  }
+}
+
+
+
+---
+File: /src/integration-tests/process-cleanup.test.ts
+---
+
+import { Server } from "../server/index.js";
+import { StdioServerTransport } from "../server/stdio.js";
+
+describe("Process cleanup", () => {
+  jest.setTimeout(5000); // 5 second timeout
+
+  it("should exit cleanly after closing transport", async () => {
+    const server = new Server(
+      {
+        name: "test-server",
+        version: "1.0.0",
+      },
+      {
+        capabilities: {},
+      }
+    );
+
+    const transport = new StdioServerTransport();
+    await server.connect(transport);
+
+    // Close the transport
+    await transport.close();
+
+    // If we reach here without hanging, the test passes
+    // The test runner will fail if the process hangs
+    expect(true).toBe(true);
+  });
+});
+
+
+---
+File: /src/server/auth/handlers/authorize.test.ts
+---
+
+import { authorizationHandler, AuthorizationHandlerOptions } from './authorize.js';
+import { OAuthServerProvider, AuthorizationParams } from '../provider.js';
+import { OAuthRegisteredClientsStore } from '../clients.js';
+import { OAuthClientInformationFull, OAuthTokens } from '../../../shared/auth.js';
+import express, { Response } from 'express';
+import supertest from 'supertest';
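+// The handler under test reports errors in two ways: failures validating
+// client_id/redirect_uri come back as direct 4xx JSON responses, while
+// later validation failures are delivered as `error` parameters on a 302
+// redirect to the already-validated redirect_uri (see authorize.ts below).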
+import { AuthInfo } from '../types.js';
+import { InvalidTokenError } from '../errors.js';
+
+describe('Authorization Handler', () => {
+  // Mock client data
+  const validClient: OAuthClientInformationFull = {
+    client_id: 'valid-client',
+    client_secret: 'valid-secret',
+    redirect_uris: ['https://example.com/callback'],
+    scope: 'profile email'
+  };
+
+  const multiRedirectClient: OAuthClientInformationFull = {
+    client_id: 'multi-redirect-client',
+    client_secret: 'valid-secret',
+    redirect_uris: [
+      'https://example.com/callback1',
+      'https://example.com/callback2'
+    ],
+    scope: 'profile email'
+  };
+
+  // Mock client store
+  const mockClientStore: OAuthRegisteredClientsStore = {
+    async getClient(clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      if (clientId === 'valid-client') {
+        return validClient;
+      } else if (clientId === 'multi-redirect-client') {
+        return multiRedirectClient;
+      }
+      return undefined;
+    }
+  };
+
+  // Mock provider
+  const mockProvider: OAuthServerProvider = {
+    clientsStore: mockClientStore,
+
+    async authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise<void> {
+      // Mock implementation - redirects to redirectUri with code and state
+      const redirectUrl = new URL(params.redirectUri);
+      redirectUrl.searchParams.set('code', 'mock_auth_code');
+      if (params.state) {
+        redirectUrl.searchParams.set('state', params.state);
+      }
+      res.redirect(302, redirectUrl.toString());
+    },
+
+    async challengeForAuthorizationCode(): Promise<string> {
+      return 'mock_challenge';
+    },
+
+    async exchangeAuthorizationCode(): Promise<OAuthTokens> {
+      return {
+        access_token: 'mock_access_token',
+        token_type: 'bearer',
+        expires_in: 3600,
+        refresh_token: 'mock_refresh_token'
+      };
+    },
+
+    async exchangeRefreshToken(): Promise<OAuthTokens> {
+      return {
+        access_token: 'new_mock_access_token',
+        token_type: 'bearer',
+        expires_in: 3600,
+        refresh_token: 'new_mock_refresh_token'
+      };
+    },
+
+    async verifyAccessToken(token: string): Promise<AuthInfo> {
+      if (token === 'valid_token') {
+        return 
{
+          token,
+          clientId: 'valid-client',
+          scopes: ['read', 'write'],
+          expiresAt: Date.now() / 1000 + 3600
+        };
+      }
+      throw new InvalidTokenError('Token is invalid or expired');
+    },
+
+    async revokeToken(): Promise<void> {
+      // Do nothing in mock
+    }
+  };
+
+  // Setup express app with handler
+  let app: express.Express;
+  let options: AuthorizationHandlerOptions;
+
+  beforeEach(() => {
+    app = express();
+    options = { provider: mockProvider };
+    const handler = authorizationHandler(options);
+    app.use('/authorize', handler);
+  });
+
+  describe('HTTP method validation', () => {
+    it('rejects non-GET/POST methods', async () => {
+      const response = await supertest(app)
+        .put('/authorize')
+        .query({ client_id: 'valid-client' });
+
+      expect(response.status).toBe(405); // Method not allowed response from handler
+    });
+  });
+
+  describe('Client validation', () => {
+    it('requires client_id parameter', async () => {
+      const response = await supertest(app)
+        .get('/authorize');
+
+      expect(response.status).toBe(400);
+      expect(response.text).toContain('client_id');
+    });
+
+    it('validates that client exists', async () => {
+      const response = await supertest(app)
+        .get('/authorize')
+        .query({ client_id: 'nonexistent-client' });
+
+      expect(response.status).toBe(400);
+    });
+  });
+
+  describe('Redirect URI validation', () => {
+    it('uses the only redirect_uri if client has just one and none provided', async () => {
+      const response = await supertest(app)
+        .get('/authorize')
+        .query({
+          client_id: 'valid-client',
+          response_type: 'code',
+          code_challenge: 'challenge123',
+          code_challenge_method: 'S256'
+        });
+
+      expect(response.status).toBe(302);
+      const location = new URL(response.header.location);
+      expect(location.origin + location.pathname).toBe('https://example.com/callback');
+    });
+
+    it('requires redirect_uri if client has multiple', async () => {
+      const response = await supertest(app)
+        .get('/authorize')
+        .query({
+          client_id: 'multi-redirect-client',
+          
response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256' + }); + + expect(response.status).toBe(400); + }); + + it('validates redirect_uri against client registered URIs', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://malicious.com/callback', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256' + }); + + expect(response.status).toBe(400); + }); + + it('accepts valid redirect_uri that client registered with', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256' + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.origin + location.pathname).toBe('https://example.com/callback'); + }); + }); + + describe('Authorization request validation', () => { + it('requires response_type=code', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'token', // invalid - we only support code flow + code_challenge: 'challenge123', + code_challenge_method: 'S256' + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.get('error')).toBe('invalid_request'); + }); + + it('requires code_challenge parameter', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + code_challenge_method: 'S256' + // Missing code_challenge + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + 
expect(location.searchParams.get('error')).toBe('invalid_request'); + }); + + it('requires code_challenge_method=S256', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'plain' // Only S256 is supported + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.get('error')).toBe('invalid_request'); + }); + }); + + describe('Scope validation', () => { + it('validates requested scopes against client registered scopes', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256', + scope: 'profile email admin' // 'admin' not in client scopes + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.get('error')).toBe('invalid_scope'); + }); + + it('accepts valid scopes subset', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256', + scope: 'profile' // subset of client scopes + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.has('code')).toBe(true); + }); + }); + + describe('Successful authorization', () => { + it('handles successful authorization with all parameters', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + 
code_challenge: 'challenge123', + code_challenge_method: 'S256', + scope: 'profile email', + state: 'xyz789' + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.origin + location.pathname).toBe('https://example.com/callback'); + expect(location.searchParams.get('code')).toBe('mock_auth_code'); + expect(location.searchParams.get('state')).toBe('xyz789'); + }); + + it('preserves state parameter in response', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + redirect_uri: 'https://example.com/callback', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256', + state: 'state-value-123' + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.get('state')).toBe('state-value-123'); + }); + + it('handles POST requests the same as GET', async () => { + const response = await supertest(app) + .post('/authorize') + .type('form') + .send({ + client_id: 'valid-client', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256' + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.has('code')).toBe(true); + }); + }); +}); + + +--- +File: /src/server/auth/handlers/authorize.ts +--- + +import { RequestHandler } from "express"; +import { z } from "zod"; +import express from "express"; +import { OAuthServerProvider } from "../provider.js"; +import { rateLimit, Options as RateLimitOptions } from "express-rate-limit"; +import { allowedMethods } from "../middleware/allowedMethods.js"; +import { + InvalidRequestError, + InvalidClientError, + InvalidScopeError, + ServerError, + TooManyRequestsError, + OAuthError +} from "../errors.js"; + +export type AuthorizationHandlerOptions = { + provider: OAuthServerProvider; + /** + * Rate limiting 
configuration for the authorization endpoint.
+   * Set to false to disable rate limiting for this endpoint.
+   */
+  rateLimit?: Partial<RateLimitOptions> | false;
+};
+
+// Parameters that must be validated in order to issue redirects.
+const ClientAuthorizationParamsSchema = z.object({
+  client_id: z.string(),
+  redirect_uri: z.string().optional().refine((value) => value === undefined || URL.canParse(value), { message: "redirect_uri must be a valid URL" }),
+});
+
+// Parameters that must be validated for a successful authorization request. Failure can be reported to the redirect URI.
+const RequestAuthorizationParamsSchema = z.object({
+  response_type: z.literal("code"),
+  code_challenge: z.string(),
+  code_challenge_method: z.literal("S256"),
+  scope: z.string().optional(),
+  state: z.string().optional(),
+});
+
+export function authorizationHandler({ provider, rateLimit: rateLimitConfig }: AuthorizationHandlerOptions): RequestHandler {
+  // Create a router to apply middleware
+  const router = express.Router();
+  router.use(allowedMethods(["GET", "POST"]));
+  router.use(express.urlencoded({ extended: false }));
+
+  // Apply rate limiting unless explicitly disabled
+  if (rateLimitConfig !== false) {
+    router.use(rateLimit({
+      windowMs: 15 * 60 * 1000, // 15 minutes
+      max: 100, // 100 requests per windowMs
+      standardHeaders: true,
+      legacyHeaders: false,
+      message: new TooManyRequestsError('You have exceeded the rate limit for authorization requests').toResponseObject(),
+      ...rateLimitConfig
+    }));
+  }
+
+  router.all("/", async (req, res) => {
+    res.setHeader('Cache-Control', 'no-store');
+
+    // In the authorization flow, errors are split into two categories:
+    // 1. Pre-redirect errors (direct response with 400)
+    // 2. Post-redirect errors (redirect with error parameters)
+
+    // Phase 1: Validate client_id and redirect_uri. Any errors here must be direct responses.
+ let client_id, redirect_uri, client; + try { + const result = ClientAuthorizationParamsSchema.safeParse(req.method === 'POST' ? req.body : req.query); + if (!result.success) { + throw new InvalidRequestError(result.error.message); + } + + client_id = result.data.client_id; + redirect_uri = result.data.redirect_uri; + + client = await provider.clientsStore.getClient(client_id); + if (!client) { + throw new InvalidClientError("Invalid client_id"); + } + + if (redirect_uri !== undefined) { + if (!client.redirect_uris.includes(redirect_uri)) { + throw new InvalidRequestError("Unregistered redirect_uri"); + } + } else if (client.redirect_uris.length === 1) { + redirect_uri = client.redirect_uris[0]; + } else { + throw new InvalidRequestError("redirect_uri must be specified when client has multiple registered URIs"); + } + } catch (error) { + // Pre-redirect errors - return direct response + // + // These don't need to be JSON encoded, as they'll be displayed in a user + // agent, but OTOH they all represent exceptional situations (arguably, + // "programmer error"), so presenting a nice HTML page doesn't help the + // user anyway. + if (error instanceof OAuthError) { + const status = error instanceof ServerError ? 500 : 400; + res.status(status).json(error.toResponseObject()); + } else { + console.error("Unexpected error looking up client:", error); + const serverError = new ServerError("Internal Server Error"); + res.status(500).json(serverError.toResponseObject()); + } + + return; + } + + // Phase 2: Validate other parameters. Any errors here should go into redirect responses. + let state; + try { + // Parse and validate authorization parameters + const parseResult = RequestAuthorizationParamsSchema.safeParse(req.method === 'POST' ? 
req.body : req.query); + if (!parseResult.success) { + throw new InvalidRequestError(parseResult.error.message); + } + + const { scope, code_challenge } = parseResult.data; + state = parseResult.data.state; + + // Validate scopes + let requestedScopes: string[] = []; + if (scope !== undefined) { + requestedScopes = scope.split(" "); + const allowedScopes = new Set(client.scope?.split(" ")); + + // Check each requested scope against allowed scopes + for (const scope of requestedScopes) { + if (!allowedScopes.has(scope)) { + throw new InvalidScopeError(`Client was not registered with scope ${scope}`); + } + } + } + + // All validation passed, proceed with authorization + await provider.authorize(client, { + state, + scopes: requestedScopes, + redirectUri: redirect_uri, + codeChallenge: code_challenge, + }, res); + } catch (error) { + // Post-redirect errors - redirect with error parameters + if (error instanceof OAuthError) { + res.redirect(302, createErrorRedirect(redirect_uri, error, state)); + } else { + console.error("Unexpected error during authorization:", error); + const serverError = new ServerError("Internal Server Error"); + res.redirect(302, createErrorRedirect(redirect_uri, serverError, state)); + } + } + }); + + return router; +} + +/** + * Helper function to create redirect URL with error parameters + */ +function createErrorRedirect(redirectUri: string, error: OAuthError, state?: string): string { + const errorUrl = new URL(redirectUri); + errorUrl.searchParams.set("error", error.errorCode); + errorUrl.searchParams.set("error_description", error.message); + if (error.errorUri) { + errorUrl.searchParams.set("error_uri", error.errorUri); + } + if (state) { + errorUrl.searchParams.set("state", state); + } + return errorUrl.href; +} + + +--- +File: /src/server/auth/handlers/metadata.test.ts +--- + +import { metadataHandler } from './metadata.js'; +import { OAuthMetadata } from '../../../shared/auth.js'; +import express from 'express'; +import supertest 
from 'supertest'; + +describe('Metadata Handler', () => { + const exampleMetadata: OAuthMetadata = { + issuer: 'https://auth.example.com', + authorization_endpoint: 'https://auth.example.com/authorize', + token_endpoint: 'https://auth.example.com/token', + registration_endpoint: 'https://auth.example.com/register', + revocation_endpoint: 'https://auth.example.com/revoke', + scopes_supported: ['profile', 'email'], + response_types_supported: ['code'], + grant_types_supported: ['authorization_code', 'refresh_token'], + token_endpoint_auth_methods_supported: ['client_secret_basic'], + code_challenge_methods_supported: ['S256'] + }; + + let app: express.Express; + + beforeEach(() => { + // Setup express app with metadata handler + app = express(); + app.use('/.well-known/oauth-authorization-server', metadataHandler(exampleMetadata)); + }); + + it('requires GET method', async () => { + const response = await supertest(app) + .post('/.well-known/oauth-authorization-server') + .send({}); + + expect(response.status).toBe(405); + expect(response.headers.allow).toBe('GET'); + expect(response.body).toEqual({ + error: "method_not_allowed", + error_description: "The method POST is not allowed for this endpoint" + }); + }); + + it('returns the metadata object', async () => { + const response = await supertest(app) + .get('/.well-known/oauth-authorization-server'); + + expect(response.status).toBe(200); + expect(response.body).toEqual(exampleMetadata); + }); + + it('includes CORS headers in response', async () => { + const response = await supertest(app) + .get('/.well-known/oauth-authorization-server') + .set('Origin', 'https://example.com'); + + expect(response.header['access-control-allow-origin']).toBe('*'); + }); + + it('supports OPTIONS preflight requests', async () => { + const response = await supertest(app) + .options('/.well-known/oauth-authorization-server') + .set('Origin', 'https://example.com') + .set('Access-Control-Request-Method', 'GET'); + + 
expect(response.status).toBe(204); + expect(response.header['access-control-allow-origin']).toBe('*'); + }); + + it('works with minimal metadata', async () => { + // Setup a new express app with minimal metadata + const minimalApp = express(); + const minimalMetadata: OAuthMetadata = { + issuer: 'https://auth.example.com', + authorization_endpoint: 'https://auth.example.com/authorize', + token_endpoint: 'https://auth.example.com/token', + response_types_supported: ['code'] + }; + minimalApp.use('/.well-known/oauth-authorization-server', metadataHandler(minimalMetadata)); + + const response = await supertest(minimalApp) + .get('/.well-known/oauth-authorization-server'); + + expect(response.status).toBe(200); + expect(response.body).toEqual(minimalMetadata); + }); +}); + + +--- +File: /src/server/auth/handlers/metadata.ts +--- + +import express, { RequestHandler } from "express"; +import { OAuthMetadata } from "../../../shared/auth.js"; +import cors from 'cors'; +import { allowedMethods } from "../middleware/allowedMethods.js"; + +export function metadataHandler(metadata: OAuthMetadata): RequestHandler { + // Nested router so we can configure middleware and restrict HTTP method + const router = express.Router(); + + // Configure CORS to allow any origin, to make accessible to web-based MCP clients + router.use(cors()); + + router.use(allowedMethods(['GET'])); + router.get("/", (req, res) => { + res.status(200).json(metadata); + }); + + return router; +} + + +--- +File: /src/server/auth/handlers/register.test.ts +--- + +import { clientRegistrationHandler, ClientRegistrationHandlerOptions } from './register.js'; +import { OAuthRegisteredClientsStore } from '../clients.js'; +import { OAuthClientInformationFull, OAuthClientMetadata } from '../../../shared/auth.js'; +import express from 'express'; +import supertest from 'supertest'; + +describe('Client Registration Handler', () => { + // Mock client store with registration support + const mockClientStoreWithRegistration: 
OAuthRegisteredClientsStore = {
+    async getClient(_clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      return undefined;
+    },
+
+    async registerClient(client: OAuthClientInformationFull): Promise<OAuthClientInformationFull> {
+      // Return the client info as-is in the mock
+      return client;
+    }
+  };
+
+  // Mock client store without registration support
+  const mockClientStoreWithoutRegistration: OAuthRegisteredClientsStore = {
+    async getClient(_clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      return undefined;
+    }
+    // No registerClient method
+  };
+
+  describe('Handler creation', () => {
+    it('throws error if client store does not support registration', () => {
+      const options: ClientRegistrationHandlerOptions = {
+        clientsStore: mockClientStoreWithoutRegistration
+      };
+
+      expect(() => clientRegistrationHandler(options)).toThrow('does not support registering clients');
+    });
+
+    it('creates handler if client store supports registration', () => {
+      const options: ClientRegistrationHandlerOptions = {
+        clientsStore: mockClientStoreWithRegistration
+      };
+
+      expect(() => clientRegistrationHandler(options)).not.toThrow();
+    });
+  });
+
+  describe('Request handling', () => {
+    let app: express.Express;
+    let spyRegisterClient: jest.SpyInstance;
+
+    beforeEach(() => {
+      // Setup express app with registration handler
+      app = express();
+      const options: ClientRegistrationHandlerOptions = {
+        clientsStore: mockClientStoreWithRegistration,
+        clientSecretExpirySeconds: 86400 // 1 day for testing
+      };
+
+      app.use('/register', clientRegistrationHandler(options));
+
+      // Spy on the registerClient method
+      spyRegisterClient = jest.spyOn(mockClientStoreWithRegistration, 'registerClient');
+    });
+
+    afterEach(() => {
+      spyRegisterClient.mockRestore();
+    });
+
+    it('requires POST method', async () => {
+      const response = await supertest(app)
+        .get('/register')
+        .send({
+          redirect_uris: ['https://example.com/callback']
+        });
+
+      expect(response.status).toBe(405);
+      expect(response.headers.allow).toBe('POST');
+      expect(response.body).toEqual({
error: "method_not_allowed", + error_description: "The method GET is not allowed for this endpoint" + }); + expect(spyRegisterClient).not.toHaveBeenCalled(); + }); + + it('validates required client metadata', async () => { + const response = await supertest(app) + .post('/register') + .send({ + // Missing redirect_uris (required) + client_name: 'Test Client' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client_metadata'); + expect(spyRegisterClient).not.toHaveBeenCalled(); + }); + + it('validates redirect URIs format', async () => { + const response = await supertest(app) + .post('/register') + .send({ + redirect_uris: ['invalid-url'] // Invalid URL format + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client_metadata'); + expect(response.body.error_description).toContain('redirect_uris'); + expect(spyRegisterClient).not.toHaveBeenCalled(); + }); + + it('successfully registers client with minimal metadata', async () => { + const clientMetadata: OAuthClientMetadata = { + redirect_uris: ['https://example.com/callback'] + }; + + const response = await supertest(app) + .post('/register') + .send(clientMetadata); + + expect(response.status).toBe(201); + + // Verify the generated client information + expect(response.body.client_id).toBeDefined(); + expect(response.body.client_secret).toBeDefined(); + expect(response.body.client_id_issued_at).toBeDefined(); + expect(response.body.client_secret_expires_at).toBeDefined(); + expect(response.body.redirect_uris).toEqual(['https://example.com/callback']); + + // Verify client was registered + expect(spyRegisterClient).toHaveBeenCalledTimes(1); + }); + + it('sets client_secret to undefined for token_endpoint_auth_method=none', async () => { + const clientMetadata: OAuthClientMetadata = { + redirect_uris: ['https://example.com/callback'], + token_endpoint_auth_method: 'none' + }; + + const response = await supertest(app) + .post('/register') + 
.send(clientMetadata);
+
+      expect(response.status).toBe(201);
+      expect(response.body.client_secret).toBeUndefined();
+      expect(response.body.client_secret_expires_at).toBeUndefined();
+    });
+
+    it('sets client_secret_expires_at for confidential clients only', async () => {
+      // Test for confidential client (token_endpoint_auth_method not 'none')
+      const confidentialClientMetadata: OAuthClientMetadata = {
+        redirect_uris: ['https://example.com/callback'],
+        token_endpoint_auth_method: 'client_secret_basic'
+      };
+
+      const confidentialResponse = await supertest(app)
+        .post('/register')
+        .send(confidentialClientMetadata);
+
+      expect(confidentialResponse.status).toBe(201);
+      expect(confidentialResponse.body.client_secret).toBeDefined();
+      expect(confidentialResponse.body.client_secret_expires_at).toBeDefined();
+
+      // Test for public client (token_endpoint_auth_method is 'none')
+      const publicClientMetadata: OAuthClientMetadata = {
+        redirect_uris: ['https://example.com/callback'],
+        token_endpoint_auth_method: 'none'
+      };
+
+      const publicResponse = await supertest(app)
+        .post('/register')
+        .send(publicClientMetadata);
+
+      expect(publicResponse.status).toBe(201);
+      expect(publicResponse.body.client_secret).toBeUndefined();
+      expect(publicResponse.body.client_secret_expires_at).toBeUndefined();
+    });
+
+    it('sets expiry based on clientSecretExpirySeconds', async () => {
+      // Create handler with custom expiry time
+      const customApp = express();
+      const options: ClientRegistrationHandlerOptions = {
+        clientsStore: mockClientStoreWithRegistration,
+        clientSecretExpirySeconds: 3600 // 1 hour
+      };
+
+      customApp.use('/register', clientRegistrationHandler(options));
+
+      const response = await supertest(customApp)
+        .post('/register')
+        .send({
+          redirect_uris: ['https://example.com/callback']
+        });
+
+      expect(response.status).toBe(201);
+
+      // Verify the expiration time (~1 hour from now)
+      const issuedAt = response.body.client_id_issued_at;
+      const expiresAt = response.body.client_secret_expires_at;
+ expect(expiresAt - issuedAt).toBe(3600); + }); + + it('sets no expiry when clientSecretExpirySeconds=0', async () => { + // Create handler with no expiry + const customApp = express(); + const options: ClientRegistrationHandlerOptions = { + clientsStore: mockClientStoreWithRegistration, + clientSecretExpirySeconds: 0 // No expiry + }; + + customApp.use('/register', clientRegistrationHandler(options)); + + const response = await supertest(customApp) + .post('/register') + .send({ + redirect_uris: ['https://example.com/callback'] + }); + + expect(response.status).toBe(201); + expect(response.body.client_secret_expires_at).toBe(0); + }); + + it('handles client with all metadata fields', async () => { + const fullClientMetadata: OAuthClientMetadata = { + redirect_uris: ['https://example.com/callback'], + token_endpoint_auth_method: 'client_secret_basic', + grant_types: ['authorization_code', 'refresh_token'], + response_types: ['code'], + client_name: 'Test Client', + client_uri: 'https://example.com', + logo_uri: 'https://example.com/logo.png', + scope: 'profile email', + contacts: ['dev@example.com'], + tos_uri: 'https://example.com/tos', + policy_uri: 'https://example.com/privacy', + jwks_uri: 'https://example.com/jwks', + software_id: 'test-software', + software_version: '1.0.0' + }; + + const response = await supertest(app) + .post('/register') + .send(fullClientMetadata); + + expect(response.status).toBe(201); + + // Verify all metadata was preserved + Object.entries(fullClientMetadata).forEach(([key, value]) => { + expect(response.body[key]).toEqual(value); + }); + }); + + it('includes CORS headers in response', async () => { + const response = await supertest(app) + .post('/register') + .set('Origin', 'https://example.com') + .send({ + redirect_uris: ['https://example.com/callback'] + }); + + expect(response.header['access-control-allow-origin']).toBe('*'); + }); + }); +}); + + +--- +File: /src/server/auth/handlers/register.ts +--- + +import express, { 
RequestHandler } from "express"; +import { OAuthClientInformationFull, OAuthClientMetadataSchema } from "../../../shared/auth.js"; +import crypto from 'node:crypto'; +import cors from 'cors'; +import { OAuthRegisteredClientsStore } from "../clients.js"; +import { rateLimit, Options as RateLimitOptions } from "express-rate-limit"; +import { allowedMethods } from "../middleware/allowedMethods.js"; +import { + InvalidClientMetadataError, + ServerError, + TooManyRequestsError, + OAuthError +} from "../errors.js"; + +export type ClientRegistrationHandlerOptions = { + /** + * A store used to save information about dynamically registered OAuth clients. + */ + clientsStore: OAuthRegisteredClientsStore; + + /** + * The number of seconds after which to expire issued client secrets, or 0 to prevent expiration of client secrets (not recommended). + * + * If not set, defaults to 30 days. + */ + clientSecretExpirySeconds?: number; + + /** + * Rate limiting configuration for the client registration endpoint. + * Set to false to disable rate limiting for this endpoint. + * Registration endpoints are particularly sensitive to abuse and should be rate limited. 
+   */
+  rateLimit?: Partial<RateLimitOptions> | false;
+};
+
+const DEFAULT_CLIENT_SECRET_EXPIRY_SECONDS = 30 * 24 * 60 * 60; // 30 days
+
+export function clientRegistrationHandler({
+  clientsStore,
+  clientSecretExpirySeconds = DEFAULT_CLIENT_SECRET_EXPIRY_SECONDS,
+  rateLimit: rateLimitConfig
+}: ClientRegistrationHandlerOptions): RequestHandler {
+  if (!clientsStore.registerClient) {
+    throw new Error("Client registration store does not support registering clients");
+  }
+
+  // Nested router so we can configure middleware and restrict HTTP method
+  const router = express.Router();
+
+  // Configure CORS to allow any origin, to make accessible to web-based MCP clients
+  router.use(cors());
+
+  router.use(allowedMethods(["POST"]));
+  router.use(express.json());
+
+  // Apply rate limiting unless explicitly disabled - stricter limits for registration
+  if (rateLimitConfig !== false) {
+    router.use(rateLimit({
+      windowMs: 60 * 60 * 1000, // 1 hour
+      max: 20, // 20 requests per hour - stricter as registration is sensitive
+      standardHeaders: true,
+      legacyHeaders: false,
+      message: new TooManyRequestsError('You have exceeded the rate limit for client registration requests').toResponseObject(),
+      ...rateLimitConfig
+    }));
+  }
+
+  router.post("/", async (req, res) => {
+    res.setHeader('Cache-Control', 'no-store');
+
+    try {
+      const parseResult = OAuthClientMetadataSchema.safeParse(req.body);
+      if (!parseResult.success) {
+        throw new InvalidClientMetadataError(parseResult.error.message);
+      }
+
+      const clientMetadata = parseResult.data;
+      const isPublicClient = clientMetadata.token_endpoint_auth_method === 'none'
+
+      // Generate client credentials
+      const clientId = crypto.randomUUID();
+      const clientSecret = isPublicClient
+        ? undefined
+        : crypto.randomBytes(32).toString('hex');
+      const clientIdIssuedAt = Math.floor(Date.now() / 1000);
+
+      // Calculate client secret expiry time
+      const clientsDoExpire = clientSecretExpirySeconds > 0
+      const secretExpiryTime = clientsDoExpire ?
clientIdIssuedAt + clientSecretExpirySeconds : 0
+      const clientSecretExpiresAt = isPublicClient ? undefined : secretExpiryTime
+
+      let clientInfo: OAuthClientInformationFull = {
+        ...clientMetadata,
+        client_id: clientId,
+        client_secret: clientSecret,
+        client_id_issued_at: clientIdIssuedAt,
+        client_secret_expires_at: clientSecretExpiresAt,
+      };
+
+      clientInfo = await clientsStore.registerClient!(clientInfo);
+      res.status(201).json(clientInfo);
+    } catch (error) {
+      if (error instanceof OAuthError) {
+        const status = error instanceof ServerError ? 500 : 400;
+        res.status(status).json(error.toResponseObject());
+      } else {
+        console.error("Unexpected error registering client:", error);
+        const serverError = new ServerError("Internal Server Error");
+        res.status(500).json(serverError.toResponseObject());
+      }
+    }
+  });
+
+  return router;
+}
+
+
+---
+File: /src/server/auth/handlers/revoke.test.ts
+---
+
+import { revocationHandler, RevocationHandlerOptions } from './revoke.js';
+import { OAuthServerProvider, AuthorizationParams } from '../provider.js';
+import { OAuthRegisteredClientsStore } from '../clients.js';
+import { OAuthClientInformationFull, OAuthTokenRevocationRequest, OAuthTokens } from '../../../shared/auth.js';
+import express, { Response } from 'express';
+import supertest from 'supertest';
+import { AuthInfo } from '../types.js';
+import { InvalidTokenError } from '../errors.js';
+
+describe('Revocation Handler', () => {
+  // Mock client data
+  const validClient: OAuthClientInformationFull = {
+    client_id: 'valid-client',
+    client_secret: 'valid-secret',
+    redirect_uris: ['https://example.com/callback']
+  };
+
+  // Mock client store
+  const mockClientStore: OAuthRegisteredClientsStore = {
+    async getClient(clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      if (clientId === 'valid-client') {
+        return validClient;
+      }
+      return undefined;
+    }
+  };
+
+  // Mock provider with revocation capability
+  const mockProviderWithRevocation: OAuthServerProvider = {
+    clientsStore:
mockClientStore,
+
+    async authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise<void> {
+      res.redirect('https://example.com/callback?code=mock_auth_code');
+    },
+
+    async challengeForAuthorizationCode(): Promise<string> {
+      return 'mock_challenge';
+    },
+
+    async exchangeAuthorizationCode(): Promise<OAuthTokens> {
+      return {
+        access_token: 'mock_access_token',
+        token_type: 'bearer',
+        expires_in: 3600,
+        refresh_token: 'mock_refresh_token'
+      };
+    },
+
+    async exchangeRefreshToken(): Promise<OAuthTokens> {
+      return {
+        access_token: 'new_mock_access_token',
+        token_type: 'bearer',
+        expires_in: 3600,
+        refresh_token: 'new_mock_refresh_token'
+      };
+    },
+
+    async verifyAccessToken(token: string): Promise<AuthInfo> {
+      if (token === 'valid_token') {
+        return {
+          token,
+          clientId: 'valid-client',
+          scopes: ['read', 'write'],
+          expiresAt: Date.now() / 1000 + 3600
+        };
+      }
+      throw new InvalidTokenError('Token is invalid or expired');
+    },
+
+    async revokeToken(_client: OAuthClientInformationFull, _request: OAuthTokenRevocationRequest): Promise<void> {
+      // Success - do nothing in mock
+    }
+  };
+
+  // Mock provider without revocation capability
+  const mockProviderWithoutRevocation: OAuthServerProvider = {
+    clientsStore: mockClientStore,
+
+    async authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise<void> {
+      res.redirect('https://example.com/callback?code=mock_auth_code');
+    },
+
+    async challengeForAuthorizationCode(): Promise<string> {
+      return 'mock_challenge';
+    },
+
+    async exchangeAuthorizationCode(): Promise<OAuthTokens> {
+      return {
+        access_token: 'mock_access_token',
+        token_type: 'bearer',
+        expires_in: 3600,
+        refresh_token: 'mock_refresh_token'
+      };
+    },
+
+    async exchangeRefreshToken(): Promise<OAuthTokens> {
+      return {
+        access_token: 'new_mock_access_token',
+        token_type: 'bearer',
+        expires_in: 3600,
+        refresh_token: 'new_mock_refresh_token'
+      };
+    },
+
+    async verifyAccessToken(token: string): Promise<AuthInfo> {
+      if (token === 'valid_token') {
+        return {
token, + clientId: 'valid-client', + scopes: ['read', 'write'], + expiresAt: Date.now() / 1000 + 3600 + }; + } + throw new InvalidTokenError('Token is invalid or expired'); + } + // No revokeToken method + }; + + describe('Handler creation', () => { + it('throws error if provider does not support token revocation', () => { + const options: RevocationHandlerOptions = { provider: mockProviderWithoutRevocation }; + expect(() => revocationHandler(options)).toThrow('does not support revoking tokens'); + }); + + it('creates handler if provider supports token revocation', () => { + const options: RevocationHandlerOptions = { provider: mockProviderWithRevocation }; + expect(() => revocationHandler(options)).not.toThrow(); + }); + }); + + describe('Request handling', () => { + let app: express.Express; + let spyRevokeToken: jest.SpyInstance; + + beforeEach(() => { + // Setup express app with revocation handler + app = express(); + const options: RevocationHandlerOptions = { provider: mockProviderWithRevocation }; + app.use('/revoke', revocationHandler(options)); + + // Spy on the revokeToken method + spyRevokeToken = jest.spyOn(mockProviderWithRevocation, 'revokeToken'); + }); + + afterEach(() => { + spyRevokeToken.mockRestore(); + }); + + it('requires POST method', async () => { + const response = await supertest(app) + .get('/revoke') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + token: 'token_to_revoke' + }); + + expect(response.status).toBe(405); + expect(response.headers.allow).toBe('POST'); + expect(response.body).toEqual({ + error: "method_not_allowed", + error_description: "The method GET is not allowed for this endpoint" + }); + expect(spyRevokeToken).not.toHaveBeenCalled(); + }); + + it('requires token parameter', async () => { + const response = await supertest(app) + .post('/revoke') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret' + // Missing token + }); + + expect(response.status).toBe(400); + 
expect(response.body.error).toBe('invalid_request'); + expect(spyRevokeToken).not.toHaveBeenCalled(); + }); + + it('authenticates client before revoking token', async () => { + const response = await supertest(app) + .post('/revoke') + .type('form') + .send({ + client_id: 'invalid-client', + client_secret: 'wrong-secret', + token: 'token_to_revoke' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client'); + expect(spyRevokeToken).not.toHaveBeenCalled(); + }); + + it('successfully revokes token', async () => { + const response = await supertest(app) + .post('/revoke') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + token: 'token_to_revoke' + }); + + expect(response.status).toBe(200); + expect(response.body).toEqual({}); // Empty response on success + expect(spyRevokeToken).toHaveBeenCalledTimes(1); + expect(spyRevokeToken).toHaveBeenCalledWith(validClient, { + token: 'token_to_revoke' + }); + }); + + it('accepts optional token_type_hint', async () => { + const response = await supertest(app) + .post('/revoke') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + token: 'token_to_revoke', + token_type_hint: 'refresh_token' + }); + + expect(response.status).toBe(200); + expect(spyRevokeToken).toHaveBeenCalledWith(validClient, { + token: 'token_to_revoke', + token_type_hint: 'refresh_token' + }); + }); + + it('includes CORS headers in response', async () => { + const response = await supertest(app) + .post('/revoke') + .type('form') + .set('Origin', 'https://example.com') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + token: 'token_to_revoke' + }); + + expect(response.header['access-control-allow-origin']).toBe('*'); + }); + }); +}); + + +--- +File: /src/server/auth/handlers/revoke.ts +--- + +import { OAuthServerProvider } from "../provider.js"; +import express, { RequestHandler } from "express"; +import cors from "cors"; 
+import { authenticateClient } from "../middleware/clientAuth.js";
+import { OAuthTokenRevocationRequestSchema } from "../../../shared/auth.js";
+import { rateLimit, Options as RateLimitOptions } from "express-rate-limit";
+import { allowedMethods } from "../middleware/allowedMethods.js";
+import {
+  InvalidRequestError,
+  ServerError,
+  TooManyRequestsError,
+  OAuthError
+} from "../errors.js";
+
+export type RevocationHandlerOptions = {
+  provider: OAuthServerProvider;
+  /**
+   * Rate limiting configuration for the token revocation endpoint.
+   * Set to false to disable rate limiting for this endpoint.
+   */
+  rateLimit?: Partial<RateLimitOptions> | false;
+};
+
+export function revocationHandler({ provider, rateLimit: rateLimitConfig }: RevocationHandlerOptions): RequestHandler {
+  if (!provider.revokeToken) {
+    throw new Error("Auth provider does not support revoking tokens");
+  }
+
+  // Nested router so we can configure middleware and restrict HTTP method
+  const router = express.Router();
+
+  // Configure CORS to allow any origin, to make accessible to web-based MCP clients
+  router.use(cors());
+
+  router.use(allowedMethods(["POST"]));
+  router.use(express.urlencoded({ extended: false }));
+
+  // Apply rate limiting unless explicitly disabled
+  if (rateLimitConfig !== false) {
+    router.use(rateLimit({
+      windowMs: 15 * 60 * 1000, // 15 minutes
+      max: 50, // 50 requests per windowMs
+      standardHeaders: true,
+      legacyHeaders: false,
+      message: new TooManyRequestsError('You have exceeded the rate limit for token revocation requests').toResponseObject(),
+      ...rateLimitConfig
+    }));
+  }
+
+  // Authenticate and extract client details
+  router.use(authenticateClient({ clientsStore: provider.clientsStore }));
+
+  router.post("/", async (req, res) => {
+    res.setHeader('Cache-Control', 'no-store');
+
+    try {
+      const parseResult = OAuthTokenRevocationRequestSchema.safeParse(req.body);
+      if (!parseResult.success) {
+        throw new InvalidRequestError(parseResult.error.message);
+      }
+
+      const
client = req.client;
+      if (!client) {
+        // This should never happen
+        console.error("Missing client information after authentication");
+        throw new ServerError("Internal Server Error");
+      }
+
+      await provider.revokeToken!(client, parseResult.data);
+      res.status(200).json({});
+    } catch (error) {
+      if (error instanceof OAuthError) {
+        const status = error instanceof ServerError ? 500 : 400;
+        res.status(status).json(error.toResponseObject());
+      } else {
+        console.error("Unexpected error revoking token:", error);
+        const serverError = new ServerError("Internal Server Error");
+        res.status(500).json(serverError.toResponseObject());
+      }
+    }
+  });
+
+  return router;
+}
+
+
+
+---
+File: /src/server/auth/handlers/token.test.ts
+---
+
+import { tokenHandler, TokenHandlerOptions } from './token.js';
+import { OAuthServerProvider, AuthorizationParams } from '../provider.js';
+import { OAuthRegisteredClientsStore } from '../clients.js';
+import { OAuthClientInformationFull, OAuthTokenRevocationRequest, OAuthTokens } from '../../../shared/auth.js';
+import express, { Response } from 'express';
+import supertest from 'supertest';
+import * as pkceChallenge from 'pkce-challenge';
+import { InvalidGrantError, InvalidTokenError } from '../errors.js';
+import { AuthInfo } from '../types.js';
+
+// Mock pkce-challenge
+jest.mock('pkce-challenge', () => ({
+  verifyChallenge: jest.fn().mockImplementation(async (verifier, challenge) => {
+    return verifier === 'valid_verifier' && challenge === 'mock_challenge';
+  })
+}));
+
+describe('Token Handler', () => {
+  // Mock client data
+  const validClient: OAuthClientInformationFull = {
+    client_id: 'valid-client',
+    client_secret: 'valid-secret',
+    redirect_uris: ['https://example.com/callback']
+  };
+
+  // Mock client store
+  const mockClientStore: OAuthRegisteredClientsStore = {
+    async getClient(clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      if (clientId === 'valid-client') {
+        return validClient;
+      }
+      return undefined;
+    }
+  };
+
+  // Mock provider
let mockProvider: OAuthServerProvider;
+  let app: express.Express;
+
+  beforeEach(() => {
+    // Create fresh mocks for each test
+    mockProvider = {
+      clientsStore: mockClientStore,
+
+      async authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise<void> {
+        res.redirect('https://example.com/callback?code=mock_auth_code');
+      },
+
+      async challengeForAuthorizationCode(client: OAuthClientInformationFull, authorizationCode: string): Promise<string> {
+        if (authorizationCode === 'valid_code') {
+          return 'mock_challenge';
+        } else if (authorizationCode === 'expired_code') {
+          throw new InvalidGrantError('The authorization code has expired');
+        }
+        throw new InvalidGrantError('The authorization code is invalid');
+      },
+
+      async exchangeAuthorizationCode(client: OAuthClientInformationFull, authorizationCode: string): Promise<OAuthTokens> {
+        if (authorizationCode === 'valid_code') {
+          return {
+            access_token: 'mock_access_token',
+            token_type: 'bearer',
+            expires_in: 3600,
+            refresh_token: 'mock_refresh_token'
+          };
+        }
+        throw new InvalidGrantError('The authorization code is invalid or has expired');
+      },
+
+      async exchangeRefreshToken(client: OAuthClientInformationFull, refreshToken: string, scopes?: string[]): Promise<OAuthTokens> {
+        if (refreshToken === 'valid_refresh_token') {
+          const response: OAuthTokens = {
+            access_token: 'new_mock_access_token',
+            token_type: 'bearer',
+            expires_in: 3600,
+            refresh_token: 'new_mock_refresh_token'
+          };
+
+          if (scopes) {
+            response.scope = scopes.join(' ');
+          }
+
+          return response;
+        }
+        throw new InvalidGrantError('The refresh token is invalid or has expired');
+      },
+
+      async verifyAccessToken(token: string): Promise<AuthInfo> {
+        if (token === 'valid_token') {
+          return {
+            token,
+            clientId: 'valid-client',
+            scopes: ['read', 'write'],
+            expiresAt: Date.now() / 1000 + 3600
+          };
+        }
+        throw new InvalidTokenError('Token is invalid or expired');
+      },
+
+      async revokeToken(_client: OAuthClientInformationFull, _request:
OAuthTokenRevocationRequest): Promise<void> {
+        // Do nothing in mock
+      }
+    };
+
+    // Mock PKCE verification
+    (pkceChallenge.verifyChallenge as jest.Mock).mockImplementation(
+      async (verifier: string, challenge: string) => {
+        return verifier === 'valid_verifier' && challenge === 'mock_challenge';
+      }
+    );
+
+    // Setup express app with token handler
+    app = express();
+    const options: TokenHandlerOptions = { provider: mockProvider };
+    app.use('/token', tokenHandler(options));
+  });
+
+  describe('Basic request validation', () => {
+    it('requires POST method', async () => {
+      const response = await supertest(app)
+        .get('/token')
+        .send({
+          client_id: 'valid-client',
+          client_secret: 'valid-secret',
+          grant_type: 'authorization_code'
+        });
+
+      expect(response.status).toBe(405);
+      expect(response.headers.allow).toBe('POST');
+      expect(response.body).toEqual({
+        error: "method_not_allowed",
+        error_description: "The method GET is not allowed for this endpoint"
+      });
+    });
+
+    it('requires grant_type parameter', async () => {
+      const response = await supertest(app)
+        .post('/token')
+        .type('form')
+        .send({
+          client_id: 'valid-client',
+          client_secret: 'valid-secret'
+          // Missing grant_type
+        });
+
+      expect(response.status).toBe(400);
+      expect(response.body.error).toBe('invalid_request');
+    });
+
+    it('rejects unsupported grant types', async () => {
+      const response = await supertest(app)
+        .post('/token')
+        .type('form')
+        .send({
+          client_id: 'valid-client',
+          client_secret: 'valid-secret',
+          grant_type: 'password' // Unsupported grant type
+        });
+
+      expect(response.status).toBe(400);
+      expect(response.body.error).toBe('unsupported_grant_type');
+    });
+  });
+
+  describe('Client authentication', () => {
+    it('requires valid client credentials', async () => {
+      const response = await supertest(app)
+        .post('/token')
+        .type('form')
+        .send({
+          client_id: 'invalid-client',
+          client_secret: 'wrong-secret',
+          grant_type: 'authorization_code'
+        });
+
expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client'); + }); + + it('accepts valid client credentials', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'valid_code', + code_verifier: 'valid_verifier' + }); + + expect(response.status).toBe(200); + }); + }); + + describe('Authorization code grant', () => { + it('requires code parameter', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + // Missing code + code_verifier: 'valid_verifier' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_request'); + }); + + it('requires code_verifier parameter', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'valid_code' + // Missing code_verifier + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_request'); + }); + + it('verifies code_verifier against challenge', async () => { + // Setup invalid verifier + (pkceChallenge.verifyChallenge as jest.Mock).mockResolvedValueOnce(false); + + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'valid_code', + code_verifier: 'invalid_verifier' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_grant'); + expect(response.body.error_description).toContain('code_verifier'); + }); + + it('rejects expired or invalid authorization codes', async () => { + const response = await supertest(app) + .post('/token') + 
.type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'expired_code', + code_verifier: 'valid_verifier' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_grant'); + }); + + it('returns tokens for valid code exchange', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'valid_code', + code_verifier: 'valid_verifier' + }); + + expect(response.status).toBe(200); + expect(response.body.access_token).toBe('mock_access_token'); + expect(response.body.token_type).toBe('bearer'); + expect(response.body.expires_in).toBe(3600); + expect(response.body.refresh_token).toBe('mock_refresh_token'); + }); + }); + + describe('Refresh token grant', () => { + it('requires refresh_token parameter', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'refresh_token' + // Missing refresh_token + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_request'); + }); + + it('rejects invalid refresh tokens', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'refresh_token', + refresh_token: 'invalid_refresh_token' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_grant'); + }); + + it('returns new tokens for valid refresh token', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'refresh_token', + refresh_token: 'valid_refresh_token' + }); + + expect(response.status).toBe(200); + 
expect(response.body.access_token).toBe('new_mock_access_token'); + expect(response.body.token_type).toBe('bearer'); + expect(response.body.expires_in).toBe(3600); + expect(response.body.refresh_token).toBe('new_mock_refresh_token'); + }); + + it('respects requested scopes on refresh', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'refresh_token', + refresh_token: 'valid_refresh_token', + scope: 'profile email' + }); + + expect(response.status).toBe(200); + expect(response.body.scope).toBe('profile email'); + }); + }); + + describe('CORS support', () => { + it('includes CORS headers in response', async () => { + const response = await supertest(app) + .post('/token') + .type('form') + .set('Origin', 'https://example.com') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'valid_code', + code_verifier: 'valid_verifier' + }); + + expect(response.header['access-control-allow-origin']).toBe('*'); + }); + }); +}); + + +--- +File: /src/server/auth/handlers/token.ts +--- + +import { z } from "zod"; +import express, { RequestHandler } from "express"; +import { OAuthServerProvider } from "../provider.js"; +import cors from "cors"; +import { verifyChallenge } from "pkce-challenge"; +import { authenticateClient } from "../middleware/clientAuth.js"; +import { rateLimit, Options as RateLimitOptions } from "express-rate-limit"; +import { allowedMethods } from "../middleware/allowedMethods.js"; +import { + InvalidRequestError, + InvalidGrantError, + UnsupportedGrantTypeError, + ServerError, + TooManyRequestsError, + OAuthError +} from "../errors.js"; + +export type TokenHandlerOptions = { + provider: OAuthServerProvider; + /** + * Rate limiting configuration for the token endpoint. + * Set to false to disable rate limiting for this endpoint. 
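For illustration, the partial-override semantics described above can be sketched as follows (a hedged standalone sketch; `defaults`, `override`, and `effective` are hypothetical names, using the same `windowMs` and `max` defaults the handler applies):

```typescript
// Sketch: a user-supplied rate-limit config is spread over the built-in
// defaults, so any field left unset keeps its default value.
const defaults = { windowMs: 15 * 60 * 1000, max: 50 };
const override = { max: 10 }; // hypothetical partial override
const effective = { ...defaults, ...override };
console.log(effective.windowMs, effective.max); // 900000 10
```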
+   */
+  rateLimit?: Partial<RateLimitOptions> | false;
+};
+
+const TokenRequestSchema = z.object({
+  grant_type: z.string(),
+});
+
+const AuthorizationCodeGrantSchema = z.object({
+  code: z.string(),
+  code_verifier: z.string(),
+});
+
+const RefreshTokenGrantSchema = z.object({
+  refresh_token: z.string(),
+  scope: z.string().optional(),
+});
+
+export function tokenHandler({ provider, rateLimit: rateLimitConfig }: TokenHandlerOptions): RequestHandler {
+  // Nested router so we can configure middleware and restrict HTTP method
+  const router = express.Router();
+
+  // Configure CORS to allow any origin, to make accessible to web-based MCP clients
+  router.use(cors());
+
+  router.use(allowedMethods(["POST"]));
+  router.use(express.urlencoded({ extended: false }));
+
+  // Apply rate limiting unless explicitly disabled
+  if (rateLimitConfig !== false) {
+    router.use(rateLimit({
+      windowMs: 15 * 60 * 1000, // 15 minutes
+      max: 50, // 50 requests per windowMs
+      standardHeaders: true,
+      legacyHeaders: false,
+      message: new TooManyRequestsError('You have exceeded the rate limit for token requests').toResponseObject(),
+      ...rateLimitConfig
+    }));
+  }
+
+  // Authenticate and extract client details
+  router.use(authenticateClient({ clientsStore: provider.clientsStore }));
+
+  router.post("/", async (req, res) => {
+    res.setHeader('Cache-Control', 'no-store');
+
+    try {
+      const parseResult = TokenRequestSchema.safeParse(req.body);
+      if (!parseResult.success) {
+        throw new InvalidRequestError(parseResult.error.message);
+      }
+
+      const { grant_type } = parseResult.data;
+
+      const client = req.client;
+      if (!client) {
+        // This should never happen
+        console.error("Missing client information after authentication");
+        throw new ServerError("Internal Server Error");
+      }
+
+      switch (grant_type) {
+        case "authorization_code": {
+          const parseResult = AuthorizationCodeGrantSchema.safeParse(req.body);
+          if (!parseResult.success) {
+            throw new InvalidRequestError(parseResult.error.message);
+          }
+
+          const
{ code, code_verifier } = parseResult.data; + + // Verify PKCE challenge + const codeChallenge = await provider.challengeForAuthorizationCode(client, code); + if (!(await verifyChallenge(code_verifier, codeChallenge))) { + throw new InvalidGrantError("code_verifier does not match the challenge"); + } + + const tokens = await provider.exchangeAuthorizationCode(client, code); + res.status(200).json(tokens); + break; + } + + case "refresh_token": { + const parseResult = RefreshTokenGrantSchema.safeParse(req.body); + if (!parseResult.success) { + throw new InvalidRequestError(parseResult.error.message); + } + + const { refresh_token, scope } = parseResult.data; + + const scopes = scope?.split(" "); + const tokens = await provider.exchangeRefreshToken(client, refresh_token, scopes); + res.status(200).json(tokens); + break; + } + + // Not supported right now + //case "client_credentials": + + default: + throw new UnsupportedGrantTypeError( + "The grant type is not supported by this authorization server." + ); + } + } catch (error) { + if (error instanceof OAuthError) { + const status = error instanceof ServerError ? 
500 : 400; + res.status(status).json(error.toResponseObject()); + } else { + console.error("Unexpected error exchanging token:", error); + const serverError = new ServerError("Internal Server Error"); + res.status(500).json(serverError.toResponseObject()); + } + } + }); + + return router; +} + + +--- +File: /src/server/auth/middleware/allowedMethods.test.ts +--- + +import { allowedMethods } from "./allowedMethods.js"; +import express, { Request, Response } from "express"; +import request from "supertest"; + +describe("allowedMethods", () => { + let app: express.Express; + + beforeEach(() => { + app = express(); + + // Set up a test router with a GET handler and 405 middleware + const router = express.Router(); + + router.get("/test", (req, res) => { + res.status(200).send("GET success"); + }); + + // Add method not allowed middleware for all other methods + router.all("/test", allowedMethods(["GET"])); + + app.use(router); + }); + + test("allows specified HTTP method", async () => { + const response = await request(app).get("/test"); + expect(response.status).toBe(200); + expect(response.text).toBe("GET success"); + }); + + test("returns 405 for unspecified HTTP methods", async () => { + const methods = ["post", "put", "delete", "patch"]; + + for (const method of methods) { + // @ts-expect-error - dynamic method call + const response = await request(app)[method]("/test"); + expect(response.status).toBe(405); + expect(response.body).toEqual({ + error: "method_not_allowed", + error_description: `The method ${method.toUpperCase()} is not allowed for this endpoint` + }); + } + }); + + test("includes Allow header with specified methods", async () => { + const response = await request(app).post("/test"); + expect(response.headers.allow).toBe("GET"); + }); + + test("works with multiple allowed methods", async () => { + const multiMethodApp = express(); + const router = express.Router(); + + router.get("/multi", (req: Request, res: Response) => { + 
res.status(200).send("GET"); + }); + router.post("/multi", (req: Request, res: Response) => { + res.status(200).send("POST"); + }); + router.all("/multi", allowedMethods(["GET", "POST"])); + + multiMethodApp.use(router); + + // Allowed methods should work + const getResponse = await request(multiMethodApp).get("/multi"); + expect(getResponse.status).toBe(200); + + const postResponse = await request(multiMethodApp).post("/multi"); + expect(postResponse.status).toBe(200); + + // Unallowed methods should return 405 + const putResponse = await request(multiMethodApp).put("/multi"); + expect(putResponse.status).toBe(405); + expect(putResponse.headers.allow).toBe("GET, POST"); + }); +}); + + +--- +File: /src/server/auth/middleware/allowedMethods.ts +--- + +import { RequestHandler } from "express"; +import { MethodNotAllowedError } from "../errors.js"; + +/** + * Middleware to handle unsupported HTTP methods with a 405 Method Not Allowed response. + * + * @param allowedMethods Array of allowed HTTP methods for this endpoint (e.g., ['GET', 'POST']) + * @returns Express middleware that returns a 405 error if method not in allowed list + */ +export function allowedMethods(allowedMethods: string[]): RequestHandler { + return (req, res, next) => { + if (allowedMethods.includes(req.method)) { + next(); + return; + } + + const error = new MethodNotAllowedError(`The method ${req.method} is not allowed for this endpoint`); + res.status(405) + .set('Allow', allowedMethods.join(', ')) + .json(error.toResponseObject()); + }; +} + + +--- +File: /src/server/auth/middleware/bearerAuth.test.ts +--- + +import { Request, Response } from "express"; +import { requireBearerAuth } from "./bearerAuth.js"; +import { AuthInfo } from "../types.js"; +import { InsufficientScopeError, InvalidTokenError, OAuthError, ServerError } from "../errors.js"; +import { OAuthServerProvider } from "../provider.js"; +import { OAuthRegisteredClientsStore } from "../clients.js"; + +// Mock provider +const 
mockVerifyAccessToken = jest.fn();
+const mockProvider: OAuthServerProvider = {
+  clientsStore: {} as OAuthRegisteredClientsStore,
+  authorize: jest.fn(),
+  challengeForAuthorizationCode: jest.fn(),
+  exchangeAuthorizationCode: jest.fn(),
+  exchangeRefreshToken: jest.fn(),
+  verifyAccessToken: mockVerifyAccessToken,
+};
+
+describe("requireBearerAuth middleware", () => {
+  let mockRequest: Partial<Request>;
+  let mockResponse: Partial<Response>;
+  let nextFunction: jest.Mock;
+
+  beforeEach(() => {
+    mockRequest = {
+      headers: {},
+    };
+    mockResponse = {
+      status: jest.fn().mockReturnThis(),
+      json: jest.fn(),
+      set: jest.fn().mockReturnThis(),
+    };
+    nextFunction = jest.fn();
+    jest.clearAllMocks();
+  });
+
+  it("should call next when token is valid", async () => {
+    const validAuthInfo: AuthInfo = {
+      token: "valid-token",
+      clientId: "client-123",
+      scopes: ["read", "write"],
+    };
+    mockVerifyAccessToken.mockResolvedValue(validAuthInfo);
+
+    mockRequest.headers = {
+      authorization: "Bearer valid-token",
+    };
+
+    const middleware = requireBearerAuth({ provider: mockProvider });
+    await middleware(mockRequest as Request, mockResponse as Response, nextFunction);
+
+    expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token");
+    expect(mockRequest.auth).toEqual(validAuthInfo);
+    expect(nextFunction).toHaveBeenCalled();
+    expect(mockResponse.status).not.toHaveBeenCalled();
+    expect(mockResponse.json).not.toHaveBeenCalled();
+  });
+
+  it("should reject expired tokens", async () => {
+    const expiredAuthInfo: AuthInfo = {
+      token: "expired-token",
+      clientId: "client-123",
+      scopes: ["read", "write"],
+      expiresAt: Math.floor(Date.now() / 1000) - 100, // Token expired 100 seconds ago
+    };
+    mockVerifyAccessToken.mockResolvedValue(expiredAuthInfo);
+
+    mockRequest.headers = {
+      authorization: "Bearer expired-token",
+    };
+
+    const middleware = requireBearerAuth({ provider: mockProvider });
+    await middleware(mockRequest as Request, mockResponse as Response, nextFunction);
+
expect(mockVerifyAccessToken).toHaveBeenCalledWith("expired-token"); + expect(mockResponse.status).toHaveBeenCalledWith(401); + expect(mockResponse.set).toHaveBeenCalledWith( + "WWW-Authenticate", + expect.stringContaining('Bearer error="invalid_token"') + ); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "invalid_token", error_description: "Token has expired" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should accept non-expired tokens", async () => { + const nonExpiredAuthInfo: AuthInfo = { + token: "valid-token", + clientId: "client-123", + scopes: ["read", "write"], + expiresAt: Math.floor(Date.now() / 1000) + 3600, // Token expires in an hour + }; + mockVerifyAccessToken.mockResolvedValue(nonExpiredAuthInfo); + + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockRequest.auth).toEqual(nonExpiredAuthInfo); + expect(nextFunction).toHaveBeenCalled(); + expect(mockResponse.status).not.toHaveBeenCalled(); + expect(mockResponse.json).not.toHaveBeenCalled(); + }); + + it("should require specific scopes when configured", async () => { + const authInfo: AuthInfo = { + token: "valid-token", + clientId: "client-123", + scopes: ["read"], + }; + mockVerifyAccessToken.mockResolvedValue(authInfo); + + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + const middleware = requireBearerAuth({ + provider: mockProvider, + requiredScopes: ["read", "write"] + }); + + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockResponse.status).toHaveBeenCalledWith(403); + expect(mockResponse.set).toHaveBeenCalledWith( + "WWW-Authenticate", + 
expect.stringContaining('Bearer error="insufficient_scope"') + ); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "insufficient_scope", error_description: "Insufficient scope" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should accept token with all required scopes", async () => { + const authInfo: AuthInfo = { + token: "valid-token", + clientId: "client-123", + scopes: ["read", "write", "admin"], + }; + mockVerifyAccessToken.mockResolvedValue(authInfo); + + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + const middleware = requireBearerAuth({ + provider: mockProvider, + requiredScopes: ["read", "write"] + }); + + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockRequest.auth).toEqual(authInfo); + expect(nextFunction).toHaveBeenCalled(); + expect(mockResponse.status).not.toHaveBeenCalled(); + expect(mockResponse.json).not.toHaveBeenCalled(); + }); + + it("should return 401 when no Authorization header is present", async () => { + const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).not.toHaveBeenCalled(); + expect(mockResponse.status).toHaveBeenCalledWith(401); + expect(mockResponse.set).toHaveBeenCalledWith( + "WWW-Authenticate", + expect.stringContaining('Bearer error="invalid_token"') + ); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "invalid_token", error_description: "Missing Authorization header" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should return 401 when Authorization header format is invalid", async () => { + mockRequest.headers = { + authorization: "InvalidFormat", + }; + + const middleware = requireBearerAuth({ provider: mockProvider }); + await 
middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).not.toHaveBeenCalled(); + expect(mockResponse.status).toHaveBeenCalledWith(401); + expect(mockResponse.set).toHaveBeenCalledWith( + "WWW-Authenticate", + expect.stringContaining('Bearer error="invalid_token"') + ); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ + error: "invalid_token", + error_description: "Invalid Authorization header format, expected 'Bearer TOKEN'" + }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should return 401 when token verification fails with InvalidTokenError", async () => { + mockRequest.headers = { + authorization: "Bearer invalid-token", + }; + + mockVerifyAccessToken.mockRejectedValue(new InvalidTokenError("Token expired")); + + const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("invalid-token"); + expect(mockResponse.status).toHaveBeenCalledWith(401); + expect(mockResponse.set).toHaveBeenCalledWith( + "WWW-Authenticate", + expect.stringContaining('Bearer error="invalid_token"') + ); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "invalid_token", error_description: "Token expired" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should return 403 when access token has insufficient scopes", async () => { + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + mockVerifyAccessToken.mockRejectedValue(new InsufficientScopeError("Required scopes: read, write")); + + const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockResponse.status).toHaveBeenCalledWith(403); + 
expect(mockResponse.set).toHaveBeenCalledWith( + "WWW-Authenticate", + expect.stringContaining('Bearer error="insufficient_scope"') + ); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "insufficient_scope", error_description: "Required scopes: read, write" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should return 500 when a ServerError occurs", async () => { + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + mockVerifyAccessToken.mockRejectedValue(new ServerError("Internal server issue")); + + const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockResponse.status).toHaveBeenCalledWith(500); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "server_error", error_description: "Internal server issue" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should return 400 for generic OAuthError", async () => { + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + mockVerifyAccessToken.mockRejectedValue(new OAuthError("custom_error", "Some OAuth error")); + + const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockResponse.status).toHaveBeenCalledWith(400); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "custom_error", error_description: "Some OAuth error" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); + + it("should return 500 when unexpected error occurs", async () => { + mockRequest.headers = { + authorization: "Bearer valid-token", + }; + + mockVerifyAccessToken.mockRejectedValue(new Error("Unexpected error")); + 
+ const middleware = requireBearerAuth({ provider: mockProvider }); + await middleware(mockRequest as Request, mockResponse as Response, nextFunction); + + expect(mockVerifyAccessToken).toHaveBeenCalledWith("valid-token"); + expect(mockResponse.status).toHaveBeenCalledWith(500); + expect(mockResponse.json).toHaveBeenCalledWith( + expect.objectContaining({ error: "server_error", error_description: "Internal Server Error" }) + ); + expect(nextFunction).not.toHaveBeenCalled(); + }); +}); + + +--- +File: /src/server/auth/middleware/bearerAuth.ts +--- + +import { RequestHandler } from "express"; +import { InsufficientScopeError, InvalidTokenError, OAuthError, ServerError } from "../errors.js"; +import { OAuthServerProvider } from "../provider.js"; +import { AuthInfo } from "../types.js"; + +export type BearerAuthMiddlewareOptions = { + /** + * A provider used to verify tokens. + */ + provider: OAuthServerProvider; + + /** + * Optional scopes that the token must have. + */ + requiredScopes?: string[]; +}; + +declare module "express-serve-static-core" { + interface Request { + /** + * Information about the validated access token, if the `requireBearerAuth` middleware was used. + */ + auth?: AuthInfo; + } +} + +/** + * Middleware that requires a valid Bearer token in the Authorization header. + * + * This will validate the token with the auth provider and add the resulting auth info to the request object. 
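As a minimal standalone sketch of the scope check this middleware performs (hypothetical scope values; the real check runs against `authInfo.scopes` below):

```typescript
// Sketch: a token is accepted only if it carries every required scope.
const requiredScopes = ["read", "write"];
const tokenScopes = ["read", "write", "admin"]; // hypothetical token scopes
const hasAllScopes = requiredScopes.every((scope) => tokenScopes.includes(scope));
console.log(hasAllScopes); // true
```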
+ */ +export function requireBearerAuth({ provider, requiredScopes = [] }: BearerAuthMiddlewareOptions): RequestHandler { + return async (req, res, next) => { + try { + const authHeader = req.headers.authorization; + if (!authHeader) { + throw new InvalidTokenError("Missing Authorization header"); + } + + const [type, token] = authHeader.split(' '); + if (type.toLowerCase() !== 'bearer' || !token) { + throw new InvalidTokenError("Invalid Authorization header format, expected 'Bearer TOKEN'"); + } + + const authInfo = await provider.verifyAccessToken(token); + + // Check if token has the required scopes (if any) + if (requiredScopes.length > 0) { + const hasAllScopes = requiredScopes.every(scope => + authInfo.scopes.includes(scope) + ); + + if (!hasAllScopes) { + throw new InsufficientScopeError("Insufficient scope"); + } + } + + // Check if the token is expired + if (!!authInfo.expiresAt && authInfo.expiresAt < Date.now() / 1000) { + throw new InvalidTokenError("Token has expired"); + } + + req.auth = authInfo; + next(); + } catch (error) { + if (error instanceof InvalidTokenError) { + res.set("WWW-Authenticate", `Bearer error="${error.errorCode}", error_description="${error.message}"`); + res.status(401).json(error.toResponseObject()); + } else if (error instanceof InsufficientScopeError) { + res.set("WWW-Authenticate", `Bearer error="${error.errorCode}", error_description="${error.message}"`); + res.status(403).json(error.toResponseObject()); + } else if (error instanceof ServerError) { + res.status(500).json(error.toResponseObject()); + } else if (error instanceof OAuthError) { + res.status(400).json(error.toResponseObject()); + } else { + console.error("Unexpected error authenticating bearer token:", error); + const serverError = new ServerError("Internal Server Error"); + res.status(500).json(serverError.toResponseObject()); + } + } + }; +} + + +--- +File: /src/server/auth/middleware/clientAuth.test.ts +--- + +import { authenticateClient, 
ClientAuthenticationMiddlewareOptions } from './clientAuth.js';
+import { OAuthRegisteredClientsStore } from '../clients.js';
+import { OAuthClientInformationFull } from '../../../shared/auth.js';
+import express from 'express';
+import supertest from 'supertest';
+
+describe('clientAuth middleware', () => {
+  // Mock client store
+  const mockClientStore: OAuthRegisteredClientsStore = {
+    async getClient(clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      if (clientId === 'valid-client') {
+        return {
+          client_id: 'valid-client',
+          client_secret: 'valid-secret',
+          redirect_uris: ['https://example.com/callback']
+        };
+      } else if (clientId === 'expired-client') {
+        // Client with no secret
+        return {
+          client_id: 'expired-client',
+          redirect_uris: ['https://example.com/callback']
+        };
+      } else if (clientId === 'client-with-expired-secret') {
+        // Client with an expired secret
+        return {
+          client_id: 'client-with-expired-secret',
+          client_secret: 'expired-secret',
+          client_secret_expires_at: Math.floor(Date.now() / 1000) - 3600, // Expired 1 hour ago
+          redirect_uris: ['https://example.com/callback']
+        };
+      }
+      return undefined;
+    }
+  };
+
+  // Setup Express app with middleware
+  let app: express.Express;
+  let options: ClientAuthenticationMiddlewareOptions;
+
+  beforeEach(() => {
+    app = express();
+    app.use(express.json());
+
+    options = {
+      clientsStore: mockClientStore
+    };
+
+    // Setup route with client auth
+    app.post('/protected', authenticateClient(options), (req, res) => {
+      res.status(200).json({ success: true, client: req.client });
+    });
+  });
+
+  it('authenticates valid client credentials', async () => {
+    const response = await supertest(app)
+      .post('/protected')
+      .send({
+        client_id: 'valid-client',
+        client_secret: 'valid-secret'
+      });
+
+    expect(response.status).toBe(200);
+    expect(response.body.success).toBe(true);
+    expect(response.body.client.client_id).toBe('valid-client');
+  });
+
+  it('rejects invalid client_id', async () => {
+    const response = await supertest(app)
+ .post('/protected') + .send({ + client_id: 'non-existent-client', + client_secret: 'some-secret' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client'); + expect(response.body.error_description).toBe('Invalid client_id'); + }); + + it('rejects invalid client_secret', async () => { + const response = await supertest(app) + .post('/protected') + .send({ + client_id: 'valid-client', + client_secret: 'wrong-secret' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client'); + expect(response.body.error_description).toBe('Invalid client_secret'); + }); + + it('rejects missing client_id', async () => { + const response = await supertest(app) + .post('/protected') + .send({ + client_secret: 'valid-secret' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_request'); + }); + + it('allows missing client_secret if client has none', async () => { + const response = await supertest(app) + .post('/protected') + .send({ + client_id: 'expired-client' + }); + + // Since the client has no secret, this should pass without providing one + expect(response.status).toBe(200); + }); + + it('rejects request when client secret has expired', async () => { + const response = await supertest(app) + .post('/protected') + .send({ + client_id: 'client-with-expired-secret', + client_secret: 'expired-secret' + }); + + expect(response.status).toBe(400); + expect(response.body.error).toBe('invalid_client'); + expect(response.body.error_description).toBe('Client secret has expired'); + }); + + it('handles malformed request body', async () => { + const response = await supertest(app) + .post('/protected') + .send('not-json-format'); + + expect(response.status).toBe(400); + }); + + // Testing request with extra fields to ensure they're ignored + it('ignores extra fields in request', async () => { + const response = await supertest(app) + .post('/protected') + .send({ + client_id: 
'valid-client', + client_secret: 'valid-secret', + extra_field: 'should be ignored' + }); + + expect(response.status).toBe(200); + }); +}); + + +--- +File: /src/server/auth/middleware/clientAuth.ts +--- + +import { z } from "zod"; +import { RequestHandler } from "express"; +import { OAuthRegisteredClientsStore } from "../clients.js"; +import { OAuthClientInformationFull } from "../../../shared/auth.js"; +import { InvalidRequestError, InvalidClientError, ServerError, OAuthError } from "../errors.js"; + +export type ClientAuthenticationMiddlewareOptions = { + /** + * A store used to read information about registered OAuth clients. + */ + clientsStore: OAuthRegisteredClientsStore; +} + +const ClientAuthenticatedRequestSchema = z.object({ + client_id: z.string(), + client_secret: z.string().optional(), +}); + +declare module "express-serve-static-core" { + interface Request { + /** + * The authenticated client for this request, if the `authenticateClient` middleware was used. + */ + client?: OAuthClientInformationFull; + } +} + +export function authenticateClient({ clientsStore }: ClientAuthenticationMiddlewareOptions): RequestHandler { + return async (req, res, next) => { + try { + const result = ClientAuthenticatedRequestSchema.safeParse(req.body); + if (!result.success) { + throw new InvalidRequestError(String(result.error)); + } + + const { client_id, client_secret } = result.data; + const client = await clientsStore.getClient(client_id); + if (!client) { + throw new InvalidClientError("Invalid client_id"); + } + + // If client has a secret, validate it + if (client.client_secret) { + // Check if client_secret is required but not provided + if (!client_secret) { + throw new InvalidClientError("Client secret is required"); + } + + // Check if client_secret matches + if (client.client_secret !== client_secret) { + throw new InvalidClientError("Invalid client_secret"); + } + + // Check if client_secret has expired + if (client.client_secret_expires_at && 
client.client_secret_expires_at < Math.floor(Date.now() / 1000)) {
+          throw new InvalidClientError("Client secret has expired");
+        }
+      }
+
+      req.client = client;
+      next();
+    } catch (error) {
+      if (error instanceof OAuthError) {
+        const status = error instanceof ServerError ? 500 : 400;
+        res.status(status).json(error.toResponseObject());
+      } else {
+        console.error("Unexpected error authenticating client:", error);
+        const serverError = new ServerError("Internal Server Error");
+        res.status(500).json(serverError.toResponseObject());
+      }
+    }
+  }
+}
+
+
+---
+File: /src/server/auth/clients.ts
+---
+
+import { OAuthClientInformationFull } from "../../shared/auth.js";
+
+/**
+ * Stores information about registered OAuth clients for this server.
+ */
+export interface OAuthRegisteredClientsStore {
+  /**
+   * Returns information about a registered client, based on its ID.
+   */
+  getClient(clientId: string): OAuthClientInformationFull | undefined | Promise<OAuthClientInformationFull | undefined>;
+
+  /**
+   * Registers a new client with the server. The client ID and secret will be automatically generated by the library. A modified version of the client information can be returned to reflect specific values enforced by the server.
+   *
+   * NOTE: Implementations should NOT delete expired client secrets in-place. Auth middleware provided by this library will automatically check the `client_secret_expires_at` field and reject requests with expired secrets. Any custom logic for authenticating clients should check the `client_secret_expires_at` field as well.
+   *
+   * If unimplemented, dynamic client registration is unsupported.
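The expiry comparison that the NOTE above refers to can be sketched as (hypothetical timestamps; the auth middleware compares against the current Unix time in seconds):

```typescript
// Sketch: a client secret is expired when client_secret_expires_at
// (a Unix timestamp in seconds) is earlier than the current time.
const nowSeconds = Math.floor(Date.now() / 1000);
const client_secret_expires_at = nowSeconds - 3600; // hypothetical: expired an hour ago
const expired = client_secret_expires_at < nowSeconds;
console.log(expired); // true
```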
+   */
+  registerClient?(client: OAuthClientInformationFull): OAuthClientInformationFull | Promise<OAuthClientInformationFull>;
+}
+
+
+---
+File: /src/server/auth/errors.ts
+---
+
+import { OAuthErrorResponse } from "../../shared/auth.js";
+
+/**
+ * Base class for all OAuth errors
+ */
+export class OAuthError extends Error {
+  constructor(
+    public readonly errorCode: string,
+    message: string,
+    public readonly errorUri?: string
+  ) {
+    super(message);
+    this.name = this.constructor.name;
+  }
+
+  /**
+   * Converts the error to a standard OAuth error response object
+   */
+  toResponseObject(): OAuthErrorResponse {
+    const response: OAuthErrorResponse = {
+      error: this.errorCode,
+      error_description: this.message
+    };
+
+    if (this.errorUri) {
+      response.error_uri = this.errorUri;
+    }
+
+    return response;
+  }
+}
+
+/**
+ * Invalid request error - The request is missing a required parameter,
+ * includes an invalid parameter value, includes a parameter more than once,
+ * or is otherwise malformed.
+ */
+export class InvalidRequestError extends OAuthError {
+  constructor(message: string, errorUri?: string) {
+    super("invalid_request", message, errorUri);
+  }
+}
+
+/**
+ * Invalid client error - Client authentication failed (e.g., unknown client, no client
+ * authentication included, or unsupported authentication method).
+ */
+export class InvalidClientError extends OAuthError {
+  constructor(message: string, errorUri?: string) {
+    super("invalid_client", message, errorUri);
+  }
+}
+
+/**
+ * Invalid grant error - The provided authorization grant or refresh token is
+ * invalid, expired, revoked, does not match the redirection URI used in the
+ * authorization request, or was issued to another client.
+ */
+export class InvalidGrantError extends OAuthError {
+  constructor(message: string, errorUri?: string) {
+    super("invalid_grant", message, errorUri);
+  }
+}
+
+/**
+ * Unauthorized client error - The authenticated client is not authorized to use
+ * this authorization grant type.
+ */ +export class UnauthorizedClientError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("unauthorized_client", message, errorUri); + } +} + +/** + * Unsupported grant type error - The authorization grant type is not supported + * by the authorization server. + */ +export class UnsupportedGrantTypeError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("unsupported_grant_type", message, errorUri); + } +} + +/** + * Invalid scope error - The requested scope is invalid, unknown, malformed, or + * exceeds the scope granted by the resource owner. + */ +export class InvalidScopeError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("invalid_scope", message, errorUri); + } +} + +/** + * Access denied error - The resource owner or authorization server denied the request. + */ +export class AccessDeniedError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("access_denied", message, errorUri); + } +} + +/** + * Server error - The authorization server encountered an unexpected condition + * that prevented it from fulfilling the request. + */ +export class ServerError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("server_error", message, errorUri); + } +} + +/** + * Temporarily unavailable error - The authorization server is currently unable to + * handle the request due to a temporary overloading or maintenance of the server. + */ +export class TemporarilyUnavailableError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("temporarily_unavailable", message, errorUri); + } +} + +/** + * Unsupported response type error - The authorization server does not support + * obtaining an authorization code using this method. 
+ */ +export class UnsupportedResponseTypeError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("unsupported_response_type", message, errorUri); + } +} + +/** + * Unsupported token type error - The authorization server does not support + * the requested token type. + */ +export class UnsupportedTokenTypeError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("unsupported_token_type", message, errorUri); + } +} + +/** + * Invalid token error - The access token provided is expired, revoked, malformed, + * or invalid for other reasons. + */ +export class InvalidTokenError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("invalid_token", message, errorUri); + } +} + +/** + * Method not allowed error - The HTTP method used is not allowed for this endpoint. + * (Custom, non-standard error) + */ +export class MethodNotAllowedError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("method_not_allowed", message, errorUri); + } +} + +/** + * Too many requests error - Rate limit exceeded. + * (Custom, non-standard error based on RFC 6585) + */ +export class TooManyRequestsError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("too_many_requests", message, errorUri); + } +} + +/** + * Invalid client metadata error - The client metadata is invalid. + * (Custom error for dynamic client registration - RFC 7591) + */ +export class InvalidClientMetadataError extends OAuthError { + constructor(message: string, errorUri?: string) { + super("invalid_client_metadata", message, errorUri); + } +} + +/** + * Insufficient scope error - The request requires higher privileges than provided by the access token. 
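The error hierarchy above maps each exception onto a standard OAuth error response via `toResponseObject()`. As a minimal illustration, the snippet below mirrors the `OAuthError` base class and one subclass from the file above so it runs standalone; in the SDK you would import these from `./errors.js` instead of redefining them.

```typescript
// Mirrors the OAuthError/InvalidClientError definitions above (simplified
// response type) so the snippet is self-contained for illustration.
class OAuthError extends Error {
  constructor(
    public readonly errorCode: string,
    message: string,
    public readonly errorUri?: string
  ) {
    super(message);
  }

  toResponseObject(): { error: string; error_description: string; error_uri?: string } {
    const response: { error: string; error_description: string; error_uri?: string } = {
      error: this.errorCode,
      error_description: this.message,
    };
    if (this.errorUri) response.error_uri = this.errorUri;
    return response;
  }
}

class InvalidClientError extends OAuthError {
  constructor(message: string, errorUri?: string) {
    super("invalid_client", message, errorUri);
  }
}

// An expired-secret failure becomes a standard OAuth error body:
const err = new InvalidClientError("Client secret has expired");
console.log(err.toResponseObject());
// → { error: 'invalid_client', error_description: 'Client secret has expired' }
```

The middleware earlier in this patch relies on exactly this shape: it serializes any thrown `OAuthError` with `res.status(...).json(error.toResponseObject())`.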
+ */
+export class InsufficientScopeError extends OAuthError {
+  constructor(message: string, errorUri?: string) {
+    super("insufficient_scope", message, errorUri);
+  }
+}
+
+
+
+---
+File: /src/server/auth/provider.ts
+---
+
+import { Response } from "express";
+import { OAuthRegisteredClientsStore } from "./clients.js";
+import { OAuthClientInformationFull, OAuthTokenRevocationRequest, OAuthTokens } from "../../shared/auth.js";
+import { AuthInfo } from "./types.js";
+
+export type AuthorizationParams = {
+  state?: string;
+  scopes?: string[];
+  codeChallenge: string;
+  redirectUri: string;
+};
+
+/**
+ * Implements an end-to-end OAuth server.
+ */
+export interface OAuthServerProvider {
+  /**
+   * A store used to read information about registered OAuth clients.
+   */
+  get clientsStore(): OAuthRegisteredClientsStore;
+
+  /**
+   * Begins the authorization flow, which can either be implemented by this server itself or via redirection to a separate authorization server.
+   *
+   * This server must eventually issue a redirect with an authorization response or an error response to the given redirect URI. Per OAuth 2.1:
+   * - In the successful case, the redirect MUST include the `code` and `state` (if present) query parameters.
+   * - In the error case, the redirect MUST include the `error` query parameter, and MAY include an optional `error_description` query parameter.
+   */
+  authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise<void>;
+
+  /**
+   * Returns the `codeChallenge` that was used when the indicated authorization began.
+   */
+  challengeForAuthorizationCode(client: OAuthClientInformationFull, authorizationCode: string): Promise<string>;
+
+  /**
+   * Exchanges an authorization code for an access token.
+   */
+  exchangeAuthorizationCode(client: OAuthClientInformationFull, authorizationCode: string): Promise<OAuthTokens>;
+
+  /**
+   * Exchanges a refresh token for an access token.
+   */
+  exchangeRefreshToken(client: OAuthClientInformationFull, refreshToken: string, scopes?: string[]): Promise<OAuthTokens>;
+
+  /**
+   * Verifies an access token and returns information about it.
+   */
+  verifyAccessToken(token: string): Promise<AuthInfo>;
+
+  /**
+   * Revokes an access or refresh token. If unimplemented, token revocation is not supported (not recommended).
+   *
+   * If the given token is invalid or already revoked, this method should do nothing.
+   */
+  revokeToken?(client: OAuthClientInformationFull, request: OAuthTokenRevocationRequest): Promise<void>;
+}
+
+
+---
+File: /src/server/auth/router.test.ts
+---
+
+import { mcpAuthRouter, AuthRouterOptions } from './router.js';
+import { OAuthServerProvider, AuthorizationParams } from './provider.js';
+import { OAuthRegisteredClientsStore } from './clients.js';
+import { OAuthClientInformationFull, OAuthTokenRevocationRequest, OAuthTokens } from '../../shared/auth.js';
+import express, { Response } from 'express';
+import supertest from 'supertest';
+import { AuthInfo } from './types.js';
+import { InvalidTokenError } from './errors.js';
+
+describe('MCP Auth Router', () => {
+  // Setup mock provider with full capabilities
+  const mockClientStore: OAuthRegisteredClientsStore = {
+    async getClient(clientId: string): Promise<OAuthClientInformationFull | undefined> {
+      if (clientId === 'valid-client') {
+        return {
+          client_id: 'valid-client',
+          client_secret: 'valid-secret',
+          redirect_uris: ['https://example.com/callback']
+        };
+      }
+      return undefined;
+    },
+
+    async registerClient(client: OAuthClientInformationFull): Promise<OAuthClientInformationFull> {
+      return client;
+    }
+  };
+
+  const mockProvider: OAuthServerProvider = {
+    clientsStore: mockClientStore,
+
+    async authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise<void> {
+      const redirectUrl = new URL(params.redirectUri);
+      redirectUrl.searchParams.set('code', 'mock_auth_code');
+      if (params.state) {
+        redirectUrl.searchParams.set('state', params.state);
+      }
+      res.redirect(302, redirectUrl.toString());
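The mock `authorize` above shows the successful-authorization redirect the provider interface requires: per OAuth 2.1, the redirect must carry `code` and, if the client supplied one, `state`. That mapping can be sketched as a standalone helper (`buildAuthorizationRedirect` is a hypothetical name, not part of the SDK):

```typescript
// Builds the OAuth 2.1 success redirect described by OAuthServerProvider.authorize:
// the redirect URI plus a `code` query parameter, and `state` only if present.
function buildAuthorizationRedirect(redirectUri: string, code: string, state?: string): string {
  const url = new URL(redirectUri);
  url.searchParams.set("code", code);
  if (state !== undefined) {
    url.searchParams.set("state", state);
  }
  return url.toString();
}

console.log(buildAuthorizationRedirect("https://example.com/callback", "abc123", "xyz"));
// → https://example.com/callback?code=abc123&state=xyz
```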
+ }, + + async challengeForAuthorizationCode(): Promise { + return 'mock_challenge'; + }, + + async exchangeAuthorizationCode(): Promise { + return { + access_token: 'mock_access_token', + token_type: 'bearer', + expires_in: 3600, + refresh_token: 'mock_refresh_token' + }; + }, + + async exchangeRefreshToken(): Promise { + return { + access_token: 'new_mock_access_token', + token_type: 'bearer', + expires_in: 3600, + refresh_token: 'new_mock_refresh_token' + }; + }, + + async verifyAccessToken(token: string): Promise { + if (token === 'valid_token') { + return { + token, + clientId: 'valid-client', + scopes: ['read', 'write'], + expiresAt: Date.now() / 1000 + 3600 + }; + } + throw new InvalidTokenError('Token is invalid or expired'); + }, + + async revokeToken(_client: OAuthClientInformationFull, _request: OAuthTokenRevocationRequest): Promise { + // Success - do nothing in mock + } + }; + + // Provider without registration and revocation + const mockProviderMinimal: OAuthServerProvider = { + clientsStore: { + async getClient(clientId: string): Promise { + if (clientId === 'valid-client') { + return { + client_id: 'valid-client', + client_secret: 'valid-secret', + redirect_uris: ['https://example.com/callback'] + }; + } + return undefined; + } + }, + + async authorize(client: OAuthClientInformationFull, params: AuthorizationParams, res: Response): Promise { + const redirectUrl = new URL(params.redirectUri); + redirectUrl.searchParams.set('code', 'mock_auth_code'); + if (params.state) { + redirectUrl.searchParams.set('state', params.state); + } + res.redirect(302, redirectUrl.toString()); + }, + + async challengeForAuthorizationCode(): Promise { + return 'mock_challenge'; + }, + + async exchangeAuthorizationCode(): Promise { + return { + access_token: 'mock_access_token', + token_type: 'bearer', + expires_in: 3600, + refresh_token: 'mock_refresh_token' + }; + }, + + async exchangeRefreshToken(): Promise { + return { + access_token: 'new_mock_access_token', + 
token_type: 'bearer', + expires_in: 3600, + refresh_token: 'new_mock_refresh_token' + }; + }, + + async verifyAccessToken(token: string): Promise { + if (token === 'valid_token') { + return { + token, + clientId: 'valid-client', + scopes: ['read'], + expiresAt: Date.now() / 1000 + 3600 + }; + } + throw new InvalidTokenError('Token is invalid or expired'); + } + }; + + describe('Router creation', () => { + it('throws error for non-HTTPS issuer URL', () => { + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('http://auth.example.com') + }; + + expect(() => mcpAuthRouter(options)).toThrow('Issuer URL must be HTTPS'); + }); + + it('allows localhost HTTP for development', () => { + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('http://localhost:3000') + }; + + expect(() => mcpAuthRouter(options)).not.toThrow(); + }); + + it('throws error for issuer URL with fragment', () => { + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('https://auth.example.com#fragment') + }; + + expect(() => mcpAuthRouter(options)).toThrow('Issuer URL must not have a fragment'); + }); + + it('throws error for issuer URL with query string', () => { + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('https://auth.example.com?param=value') + }; + + expect(() => mcpAuthRouter(options)).toThrow('Issuer URL must not have a query string'); + }); + + it('successfully creates router with valid options', () => { + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('https://auth.example.com') + }; + + expect(() => mcpAuthRouter(options)).not.toThrow(); + }); + }); + + describe('Metadata endpoint', () => { + let app: express.Express; + + beforeEach(() => { + // Setup full-featured router + app = express(); + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('https://auth.example.com'), + 
serviceDocumentationUrl: new URL('https://docs.example.com') + }; + app.use(mcpAuthRouter(options)); + }); + + it('returns complete metadata for full-featured router', async () => { + const response = await supertest(app) + .get('/.well-known/oauth-authorization-server'); + + expect(response.status).toBe(200); + + // Verify essential fields + expect(response.body.issuer).toBe('https://auth.example.com/'); + expect(response.body.authorization_endpoint).toBe('https://auth.example.com/authorize'); + expect(response.body.token_endpoint).toBe('https://auth.example.com/token'); + expect(response.body.registration_endpoint).toBe('https://auth.example.com/register'); + expect(response.body.revocation_endpoint).toBe('https://auth.example.com/revoke'); + + // Verify supported features + expect(response.body.response_types_supported).toEqual(['code']); + expect(response.body.grant_types_supported).toEqual(['authorization_code', 'refresh_token']); + expect(response.body.code_challenge_methods_supported).toEqual(['S256']); + expect(response.body.token_endpoint_auth_methods_supported).toEqual(['client_secret_post']); + expect(response.body.revocation_endpoint_auth_methods_supported).toEqual(['client_secret_post']); + + // Verify optional fields + expect(response.body.service_documentation).toBe('https://docs.example.com/'); + }); + + it('returns minimal metadata for minimal router', async () => { + // Setup minimal router + const minimalApp = express(); + const options: AuthRouterOptions = { + provider: mockProviderMinimal, + issuerUrl: new URL('https://auth.example.com') + }; + minimalApp.use(mcpAuthRouter(options)); + + const response = await supertest(minimalApp) + .get('/.well-known/oauth-authorization-server'); + + expect(response.status).toBe(200); + + // Verify essential endpoints + expect(response.body.issuer).toBe('https://auth.example.com/'); + expect(response.body.authorization_endpoint).toBe('https://auth.example.com/authorize'); + 
expect(response.body.token_endpoint).toBe('https://auth.example.com/token'); + + // Verify missing optional endpoints + expect(response.body.registration_endpoint).toBeUndefined(); + expect(response.body.revocation_endpoint).toBeUndefined(); + expect(response.body.revocation_endpoint_auth_methods_supported).toBeUndefined(); + expect(response.body.service_documentation).toBeUndefined(); + }); + }); + + describe('Endpoint routing', () => { + let app: express.Express; + + beforeEach(() => { + // Setup full-featured router + app = express(); + const options: AuthRouterOptions = { + provider: mockProvider, + issuerUrl: new URL('https://auth.example.com') + }; + app.use(mcpAuthRouter(options)); + }); + + it('routes to authorization endpoint', async () => { + const response = await supertest(app) + .get('/authorize') + .query({ + client_id: 'valid-client', + response_type: 'code', + code_challenge: 'challenge123', + code_challenge_method: 'S256' + }); + + expect(response.status).toBe(302); + const location = new URL(response.header.location); + expect(location.searchParams.has('code')).toBe(true); + }); + + it('routes to token endpoint', async () => { + // Setup verifyChallenge mock for token handler + jest.mock('pkce-challenge', () => ({ + verifyChallenge: jest.fn().mockResolvedValue(true) + })); + + const response = await supertest(app) + .post('/token') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + grant_type: 'authorization_code', + code: 'valid_code', + code_verifier: 'valid_verifier' + }); + + // The request will fail in testing due to mocking limitations, + // but we can verify the route was matched + expect(response.status).not.toBe(404); + }); + + it('routes to registration endpoint', async () => { + const response = await supertest(app) + .post('/register') + .send({ + redirect_uris: ['https://example.com/callback'] + }); + + // The request will fail in testing due to mocking limitations, + // but we can verify the 
route was matched + expect(response.status).not.toBe(404); + }); + + it('routes to revocation endpoint', async () => { + const response = await supertest(app) + .post('/revoke') + .type('form') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + token: 'token_to_revoke' + }); + + // The request will fail in testing due to mocking limitations, + // but we can verify the route was matched + expect(response.status).not.toBe(404); + }); + + it('excludes endpoints for unsupported features', async () => { + // Setup minimal router + const minimalApp = express(); + const options: AuthRouterOptions = { + provider: mockProviderMinimal, + issuerUrl: new URL('https://auth.example.com') + }; + minimalApp.use(mcpAuthRouter(options)); + + // Registration should not be available + const regResponse = await supertest(minimalApp) + .post('/register') + .send({ + redirect_uris: ['https://example.com/callback'] + }); + expect(regResponse.status).toBe(404); + + // Revocation should not be available + const revokeResponse = await supertest(minimalApp) + .post('/revoke') + .send({ + client_id: 'valid-client', + client_secret: 'valid-secret', + token: 'token_to_revoke' + }); + expect(revokeResponse.status).toBe(404); + }); + }); +}); + + +--- +File: /src/server/auth/router.ts +--- + +import express, { RequestHandler } from "express"; +import { clientRegistrationHandler, ClientRegistrationHandlerOptions } from "./handlers/register.js"; +import { tokenHandler, TokenHandlerOptions } from "./handlers/token.js"; +import { authorizationHandler, AuthorizationHandlerOptions } from "./handlers/authorize.js"; +import { revocationHandler, RevocationHandlerOptions } from "./handlers/revoke.js"; +import { metadataHandler } from "./handlers/metadata.js"; +import { OAuthServerProvider } from "./provider.js"; + +export type AuthRouterOptions = { + /** + * A provider implementing the actual authorization logic for this router. 
+   */
+  provider: OAuthServerProvider;
+
+  /**
+   * The authorization server's issuer identifier, which is a URL that uses the "https" scheme and has no query or fragment components.
+   */
+  issuerUrl: URL;
+
+  /**
+   * An optional URL of a page containing human-readable information that developers might want or need to know when using the authorization server.
+   */
+  serviceDocumentationUrl?: URL;
+
+  // Individual options per route
+  authorizationOptions?: Omit<AuthorizationHandlerOptions, "provider">;
+  clientRegistrationOptions?: Omit<ClientRegistrationHandlerOptions, "clientsStore">;
+  revocationOptions?: Omit<RevocationHandlerOptions, "provider">;
+  tokenOptions?: Omit<TokenHandlerOptions, "provider">;
+};
+
+/**
+ * Installs standard MCP authorization endpoints, including dynamic client registration and token revocation (if supported). Also advertises standard authorization server metadata, for easier discovery of supported configurations by clients.
+ *
+ * By default, rate limiting is applied to all endpoints to prevent abuse.
+ *
+ * This router MUST be installed at the application root, like so:
+ *
+ *  const app = express();
+ *  app.use(mcpAuthRouter(...));
+ */
+export function mcpAuthRouter(options: AuthRouterOptions): RequestHandler {
+  const issuer = options.issuerUrl;
+
+  // Technically RFC 8414 does not permit a localhost HTTPS exemption, but this will be necessary for ease of testing
+  if (issuer.protocol !== "https:" && issuer.hostname !== "localhost" && issuer.hostname !== "127.0.0.1") {
+    throw new Error("Issuer URL must be HTTPS");
+  }
+  if (issuer.hash) {
+    throw new Error("Issuer URL must not have a fragment");
+  }
+  if (issuer.search) {
+    throw new Error("Issuer URL must not have a query string");
+  }
+
+  const authorization_endpoint = "/authorize";
+  const token_endpoint = "/token";
+  const registration_endpoint = options.provider.clientsStore.registerClient ? "/register" : undefined;
+  const revocation_endpoint = options.provider.revokeToken ?
"/revoke" : undefined; + + const metadata = { + issuer: issuer.href, + service_documentation: options.serviceDocumentationUrl?.href, + + authorization_endpoint: new URL(authorization_endpoint, issuer).href, + response_types_supported: ["code"], + code_challenge_methods_supported: ["S256"], + + token_endpoint: new URL(token_endpoint, issuer).href, + token_endpoint_auth_methods_supported: ["client_secret_post"], + grant_types_supported: ["authorization_code", "refresh_token"], + + revocation_endpoint: revocation_endpoint ? new URL(revocation_endpoint, issuer).href : undefined, + revocation_endpoint_auth_methods_supported: revocation_endpoint ? ["client_secret_post"] : undefined, + + registration_endpoint: registration_endpoint ? new URL(registration_endpoint, issuer).href : undefined, + }; + + const router = express.Router(); + + router.use( + authorization_endpoint, + authorizationHandler({ provider: options.provider, ...options.authorizationOptions }) + ); + + router.use( + token_endpoint, + tokenHandler({ provider: options.provider, ...options.tokenOptions }) + ); + + router.use("/.well-known/oauth-authorization-server", metadataHandler(metadata)); + + if (registration_endpoint) { + router.use( + registration_endpoint, + clientRegistrationHandler({ + clientsStore: options.provider.clientsStore, + ...options, + }) + ); + } + + if (revocation_endpoint) { + router.use( + revocation_endpoint, + revocationHandler({ provider: options.provider, ...options.revocationOptions }) + ); + } + + return router; +} + + +--- +File: /src/server/auth/types.ts +--- + +/** + * Information about a validated access token, provided to request handlers. + */ +export interface AuthInfo { + /** + * The access token. + */ + token: string; + + /** + * The client ID associated with this token. + */ + clientId: string; + + /** + * Scopes associated with this token. + */ + scopes: string[]; + + /** + * When the token expires (in seconds since epoch). 
+   */
+  expiresAt?: number;
+}
+
+
+---
+File: /src/server/completable.test.ts
+---
+
+import { z } from "zod";
+import { completable } from "./completable.js";
+
+describe("completable", () => {
+  it("preserves types and values of underlying schema", () => {
+    const baseSchema = z.string();
+    const schema = completable(baseSchema, () => []);
+
+    expect(schema.parse("test")).toBe("test");
+    expect(() => schema.parse(123)).toThrow();
+  });
+
+  it("provides access to completion function", async () => {
+    const completions = ["foo", "bar", "baz"];
+    const schema = completable(z.string(), () => completions);
+
+    expect(await schema._def.complete("")).toEqual(completions);
+  });
+
+  it("allows async completion functions", async () => {
+    const completions = ["foo", "bar", "baz"];
+    const schema = completable(z.string(), async () => completions);
+
+    expect(await schema._def.complete("")).toEqual(completions);
+  });
+
+  it("passes current value to completion function", async () => {
+    const schema = completable(z.string(), (value) => [value + "!"]);
+
+    expect(await schema._def.complete("test")).toEqual(["test!"]);
+  });
+
+  it("works with number schemas", async () => {
+    const schema = completable(z.number(), () => [1, 2, 3]);
+
+    expect(schema.parse(1)).toBe(1);
+    expect(await schema._def.complete(0)).toEqual([1, 2, 3]);
+  });
+
+  it("preserves schema description", () => {
+    const desc = "test description";
+    const schema = completable(z.string().describe(desc), () => []);
+
+    expect(schema.description).toBe(desc);
+  });
+});
+
+
+---
+File: /src/server/completable.ts
+---
+
+import {
+  ZodTypeAny,
+  ZodTypeDef,
+  ZodType,
+  ParseInput,
+  ParseReturnType,
+  RawCreateParams,
+  ZodErrorMap,
+  ProcessedCreateParams,
+} from "zod";
+
+export enum McpZodTypeKind {
+  Completable = "McpCompletable",
+}
+
+export type CompleteCallback<T extends ZodTypeAny = ZodTypeAny> = (
+  value: T["_input"],
+) => T["_input"][] | Promise<T["_input"][]>;
+
+export interface CompletableDef<T extends ZodTypeAny = ZodTypeAny>
+  extends ZodTypeDef {
+  type: T;
+  complete: CompleteCallback<T>;
+  typeName: McpZodTypeKind.Completable;
+}
+
+export class Completable<T extends ZodTypeAny> extends ZodType<
+  T["_output"],
+  CompletableDef<T>,
+  T["_input"]
+> {
+  _parse(input: ParseInput): ParseReturnType<this["_output"]> {
+    const { ctx } = this._processInputParams(input);
+    const data = ctx.data;
+    return this._def.type._parse({
+      data,
+      path: ctx.path,
+      parent: ctx,
+    });
+  }
+
+  unwrap() {
+    return this._def.type;
+  }
+
+  static create = <T extends ZodTypeAny>(
+    type: T,
+    params: RawCreateParams & {
+      complete: CompleteCallback<T>;
+    },
+  ): Completable<T> => {
+    return new Completable({
+      type,
+      typeName: McpZodTypeKind.Completable,
+      complete: params.complete,
+      ...processCreateParams(params),
+    });
+  };
+}
+
+/**
+ * Wraps a Zod type to provide autocompletion capabilities. Useful for, e.g., prompt arguments in MCP.
+ */
+export function completable<T extends ZodTypeAny>(
+  schema: T,
+  complete: CompleteCallback<T>,
+): Completable<T> {
+  return Completable.create(schema, { ...schema._def, complete });
+}
+
+// Not sure why this isn't exported from Zod:
+// https://github.com/colinhacks/zod/blob/f7ad26147ba291cb3fb257545972a8e00e767470/src/types.ts#L130
+function processCreateParams(params: RawCreateParams): ProcessedCreateParams {
+  if (!params) return {};
+  const { errorMap, invalid_type_error, required_error, description } = params;
+  if (errorMap && (invalid_type_error || required_error)) {
+    throw new Error(
+      `Can't use "invalid_type_error" or "required_error" in conjunction with custom error map.`,
+    );
+  }
+  if (errorMap) return { errorMap: errorMap, description };
+  const customMap: ZodErrorMap = (iss, ctx) => {
+    const { message } = params;
+
+    if (iss.code === "invalid_enum_value") {
+      return { message: message ?? ctx.defaultError };
+    }
+    if (typeof ctx.data === "undefined") {
+      return { message: message ?? required_error ?? ctx.defaultError };
+    }
+    if (iss.code !== "invalid_type") return { message: ctx.defaultError };
+    return { message: message ?? invalid_type_error ??
ctx.defaultError }; + }; + return { errorMap: customMap, description }; +} + + + +--- +File: /src/server/index.test.ts +--- + +/* eslint-disable @typescript-eslint/no-unused-vars */ +/* eslint-disable no-constant-binary-expression */ +/* eslint-disable @typescript-eslint/no-unused-expressions */ +import { Server } from "./index.js"; +import { z } from "zod"; +import { + RequestSchema, + NotificationSchema, + ResultSchema, + LATEST_PROTOCOL_VERSION, + SUPPORTED_PROTOCOL_VERSIONS, + CreateMessageRequestSchema, + ListPromptsRequestSchema, + ListResourcesRequestSchema, + ListToolsRequestSchema, + SetLevelRequestSchema, + ErrorCode, +} from "../types.js"; +import { Transport } from "../shared/transport.js"; +import { InMemoryTransport } from "../inMemory.js"; +import { Client } from "../client/index.js"; + +test("should accept latest protocol version", async () => { + let sendPromiseResolve: (value: unknown) => void; + const sendPromise = new Promise((resolve) => { + sendPromiseResolve = resolve; + }); + + const serverTransport: Transport = { + start: jest.fn().mockResolvedValue(undefined), + close: jest.fn().mockResolvedValue(undefined), + send: jest.fn().mockImplementation((message) => { + if (message.id === 1 && message.result) { + expect(message.result).toEqual({ + protocolVersion: LATEST_PROTOCOL_VERSION, + capabilities: expect.any(Object), + serverInfo: { + name: "test server", + version: "1.0", + }, + instructions: "Test instructions", + }); + sendPromiseResolve(undefined); + } + return Promise.resolve(); + }), + }; + + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {}, + logging: {}, + }, + instructions: "Test instructions", + }, + ); + + await server.connect(serverTransport); + + // Simulate initialize request with latest version + serverTransport.onmessage?.({ + jsonrpc: "2.0", + id: 1, + method: "initialize", + params: { + protocolVersion: LATEST_PROTOCOL_VERSION, + 
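The heart of `completable` is its completion-callback contract: given the current (partial) input value, return matching candidates, synchronously or asynchronously. The sketch below restates that contract with a plain, non-Zod generic (`CompleteCallback<V>` here is a simplified stand-in for the `ZodTypeAny`-based type above, and the city list is hypothetical):

```typescript
// Simplified restatement of the completion-callback contract from completable.ts:
// map a partial input value to candidate completions, sync or async.
type CompleteCallback<V> = (value: V) => V[] | Promise<V[]>;

// A synchronous callback satisfying the contract (illustrative data).
const completeCity = (value: string): string[] =>
  ["Paris", "Prague", "Porto"].filter((c) => c.startsWith(value));

// completeCity is assignable to the callback type:
const asCallback: CompleteCallback<string> = completeCity;

console.log(asCallback("Pa")); // → [ 'Paris' ]
```

In the SDK, such a callback is attached to a schema with `completable(z.string(), completeCity)` and later invoked through the schema's `_def.complete`, as the tests above demonstrate.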
capabilities: {}, + clientInfo: { + name: "test client", + version: "1.0", + }, + }, + }); + + await expect(sendPromise).resolves.toBeUndefined(); +}); + +test("should accept supported older protocol version", async () => { + const OLD_VERSION = SUPPORTED_PROTOCOL_VERSIONS[1]; + let sendPromiseResolve: (value: unknown) => void; + const sendPromise = new Promise((resolve) => { + sendPromiseResolve = resolve; + }); + + const serverTransport: Transport = { + start: jest.fn().mockResolvedValue(undefined), + close: jest.fn().mockResolvedValue(undefined), + send: jest.fn().mockImplementation((message) => { + if (message.id === 1 && message.result) { + expect(message.result).toEqual({ + protocolVersion: OLD_VERSION, + capabilities: expect.any(Object), + serverInfo: { + name: "test server", + version: "1.0", + }, + }); + sendPromiseResolve(undefined); + } + return Promise.resolve(); + }), + }; + + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {}, + logging: {}, + }, + }, + ); + + await server.connect(serverTransport); + + // Simulate initialize request with older version + serverTransport.onmessage?.({ + jsonrpc: "2.0", + id: 1, + method: "initialize", + params: { + protocolVersion: OLD_VERSION, + capabilities: {}, + clientInfo: { + name: "test client", + version: "1.0", + }, + }, + }); + + await expect(sendPromise).resolves.toBeUndefined(); +}); + +test("should handle unsupported protocol version", async () => { + let sendPromiseResolve: (value: unknown) => void; + const sendPromise = new Promise((resolve) => { + sendPromiseResolve = resolve; + }); + + const serverTransport: Transport = { + start: jest.fn().mockResolvedValue(undefined), + close: jest.fn().mockResolvedValue(undefined), + send: jest.fn().mockImplementation((message) => { + if (message.id === 1 && message.result) { + expect(message.result).toEqual({ + protocolVersion: LATEST_PROTOCOL_VERSION, + capabilities: 
expect.any(Object), + serverInfo: { + name: "test server", + version: "1.0", + }, + }); + sendPromiseResolve(undefined); + } + return Promise.resolve(); + }), + }; + + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {}, + logging: {}, + }, + }, + ); + + await server.connect(serverTransport); + + // Simulate initialize request with unsupported version + serverTransport.onmessage?.({ + jsonrpc: "2.0", + id: 1, + method: "initialize", + params: { + protocolVersion: "invalid-version", + capabilities: {}, + clientInfo: { + name: "test client", + version: "1.0", + }, + }, + }); + + await expect(sendPromise).resolves.toBeUndefined(); +}); + +test("should respect client capabilities", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {}, + logging: {}, + }, + enforceStrictCapabilities: true, + }, + ); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + // Implement request handler for sampling/createMessage + client.setRequestHandler(CreateMessageRequestSchema, async (request) => { + // Mock implementation of createMessage + return { + model: "test-model", + role: "assistant", + content: { + type: "text", + text: "This is a test response", + }, + }; + }); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + expect(server.getClientCapabilities()).toEqual({ sampling: {} }); + + // This should work because sampling is supported by the client + await expect( + server.createMessage({ + messages: [], + maxTokens: 10, + }), + ).resolves.not.toThrow(); + + // This should still throw because roots are not supported by the client + await 
expect(server.listRoots()).rejects.toThrow(/^Client does not support/); +}); + +test("should respect server notification capabilities", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + logging: {}, + }, + enforceStrictCapabilities: true, + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await server.connect(serverTransport); + + // This should work because logging is supported by the server + await expect( + server.sendLoggingMessage({ + level: "info", + data: "Test log message", + }), + ).resolves.not.toThrow(); + + // This should throw because resource notificaitons are not supported by the server + await expect( + server.sendResourceUpdated({ uri: "test://resource" }), + ).rejects.toThrow(/^Server does not support/); +}); + +test("should only allow setRequestHandler for declared capabilities", () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + }, + }, + ); + + // These should work because the capabilities are declared + expect(() => { + server.setRequestHandler(ListPromptsRequestSchema, () => ({ prompts: [] })); + }).not.toThrow(); + + expect(() => { + server.setRequestHandler(ListResourcesRequestSchema, () => ({ + resources: [], + })); + }).not.toThrow(); + + // These should throw because the capabilities are not declared + expect(() => { + server.setRequestHandler(ListToolsRequestSchema, () => ({ tools: [] })); + }).toThrow(/^Server does not support tools/); + + expect(() => { + server.setRequestHandler(SetLevelRequestSchema, () => ({})); + }).toThrow(/^Server does not support logging/); +}); + +/* + Test that custom request/notification/result schemas can be used with the Server class. 
+ */ +test("should typecheck", () => { + const GetWeatherRequestSchema = RequestSchema.extend({ + method: z.literal("weather/get"), + params: z.object({ + city: z.string(), + }), + }); + + const GetForecastRequestSchema = RequestSchema.extend({ + method: z.literal("weather/forecast"), + params: z.object({ + city: z.string(), + days: z.number(), + }), + }); + + const WeatherForecastNotificationSchema = NotificationSchema.extend({ + method: z.literal("weather/alert"), + params: z.object({ + severity: z.enum(["warning", "watch"]), + message: z.string(), + }), + }); + + const WeatherRequestSchema = GetWeatherRequestSchema.or( + GetForecastRequestSchema, + ); + const WeatherNotificationSchema = WeatherForecastNotificationSchema; + const WeatherResultSchema = ResultSchema.extend({ + temperature: z.number(), + conditions: z.string(), + }); + + type WeatherRequest = z.infer<typeof WeatherRequestSchema>; + type WeatherNotification = z.infer<typeof WeatherNotificationSchema>; + type WeatherResult = z.infer<typeof WeatherResultSchema>; + + // Create a typed Server for weather data + const weatherServer = new Server< + WeatherRequest, + WeatherNotification, + WeatherResult + >( + { + name: "WeatherServer", + version: "1.0.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {}, + logging: {}, + }, + }, + ); + + // Typecheck that only valid weather requests/notifications/results are allowed + weatherServer.setRequestHandler(GetWeatherRequestSchema, (request) => { + return { + temperature: 72, + conditions: "sunny", + }; + }); + + weatherServer.setNotificationHandler( + WeatherForecastNotificationSchema, + (notification) => { + console.log(`Weather alert: ${notification.params.message}`); + }, + ); +}); + +test("should handle server cancelling a request", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + // Set up
client to delay responding to createMessage + client.setRequestHandler( + CreateMessageRequestSchema, + async (_request, extra) => { + await new Promise((resolve) => setTimeout(resolve, 1000)); + return { + model: "test", + role: "assistant", + content: { + type: "text", + text: "Test response", + }, + }; + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // Set up abort controller + const controller = new AbortController(); + + // Issue request but cancel it immediately + const createMessagePromise = server.createMessage( + { + messages: [], + maxTokens: 10, + }, + { + signal: controller.signal, + }, + ); + controller.abort("Cancelled by test"); + + // Request should be rejected + await expect(createMessagePromise).rejects.toBe("Cancelled by test"); +}); + +test("should handle request timeout", async () => { + const server = new Server( + { + name: "test server", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + // Set up client that delays responses + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + client.setRequestHandler( + CreateMessageRequestSchema, + async (_request, extra) => { + await new Promise((resolve, reject) => { + const timeout = setTimeout(resolve, 100); + extra.signal.addEventListener("abort", () => { + clearTimeout(timeout); + reject(extra.signal.reason); + }); + }); + + return { + model: "test", + role: "assistant", + content: { + type: "text", + text: "Test response", + }, + }; + }, + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + server.connect(serverTransport), + ]); + + // Request with 0 msec timeout should fail immediately + await expect( + server.createMessage( + { + messages: 
[], + maxTokens: 10, + }, + { timeout: 0 }, + ), + ).rejects.toMatchObject({ + code: ErrorCode.RequestTimeout, + }); +}); + + + +--- +File: /src/server/index.ts +--- + +import { + mergeCapabilities, + Protocol, + ProtocolOptions, + RequestOptions, +} from "../shared/protocol.js"; +import { + ClientCapabilities, + CreateMessageRequest, + CreateMessageResultSchema, + EmptyResultSchema, + Implementation, + InitializedNotificationSchema, + InitializeRequest, + InitializeRequestSchema, + InitializeResult, + LATEST_PROTOCOL_VERSION, + ListRootsRequest, + ListRootsResultSchema, + LoggingMessageNotification, + Notification, + Request, + ResourceUpdatedNotification, + Result, + ServerCapabilities, + ServerNotification, + ServerRequest, + ServerResult, + SUPPORTED_PROTOCOL_VERSIONS, +} from "../types.js"; + +export type ServerOptions = ProtocolOptions & { + /** + * Capabilities to advertise as being supported by this server. + */ + capabilities?: ServerCapabilities; + + /** + * Optional instructions describing how to use the server and its features. + */ + instructions?: string; +}; + +/** + * An MCP server on top of a pluggable transport. + * + * This server will automatically respond to the initialization flow as initiated from the client. 
+ * + * To use with custom types, extend the base Request/Notification/Result types and pass them as type parameters: + * + * ```typescript + * // Custom schemas + * const CustomRequestSchema = RequestSchema.extend({...}) + * const CustomNotificationSchema = NotificationSchema.extend({...}) + * const CustomResultSchema = ResultSchema.extend({...}) + * + * // Type aliases + * type CustomRequest = z.infer<typeof CustomRequestSchema> + * type CustomNotification = z.infer<typeof CustomNotificationSchema> + * type CustomResult = z.infer<typeof CustomResultSchema> + * + * // Create typed server + * const server = new Server<CustomRequest, CustomNotification, CustomResult>({ + * name: "CustomServer", + * version: "1.0.0" + * }) + * ``` + */ +export class Server< + RequestT extends Request = Request, + NotificationT extends Notification = Notification, + ResultT extends Result = Result, +> extends Protocol< + ServerRequest | RequestT, + ServerNotification | NotificationT, + ServerResult | ResultT +> { + private _clientCapabilities?: ClientCapabilities; + private _clientVersion?: Implementation; + private _capabilities: ServerCapabilities; + private _instructions?: string; + + /** + * Callback for when initialization has fully completed (i.e., the client has sent an `initialized` notification). + */ + oninitialized?: () => void; + + /** + * Initializes this server with the given name and version information. + */ + constructor( + private _serverInfo: Implementation, + options?: ServerOptions, + ) { + super(options); + this._capabilities = options?.capabilities ?? {}; + this._instructions = options?.instructions; + + this.setRequestHandler(InitializeRequestSchema, (request) => + this._oninitialize(request), + ); + this.setNotificationHandler(InitializedNotificationSchema, () => + this.oninitialized?.(), + ); + } + + /** + * Registers new capabilities. This can only be called before connecting to a transport. + * + * The new capabilities will be merged with any existing capabilities previously given (e.g., at initialization).
+ */ + public registerCapabilities(capabilities: ServerCapabilities): void { + if (this.transport) { + throw new Error( + "Cannot register capabilities after connecting to transport", + ); + } + + this._capabilities = mergeCapabilities(this._capabilities, capabilities); + } + + protected assertCapabilityForMethod(method: RequestT["method"]): void { + switch (method as ServerRequest["method"]) { + case "sampling/createMessage": + if (!this._clientCapabilities?.sampling) { + throw new Error( + `Client does not support sampling (required for ${method})`, + ); + } + break; + + case "roots/list": + if (!this._clientCapabilities?.roots) { + throw new Error( + `Client does not support listing roots (required for ${method})`, + ); + } + break; + + case "ping": + // No specific capability required for ping + break; + } + } + + protected assertNotificationCapability( + method: (ServerNotification | NotificationT)["method"], + ): void { + switch (method as ServerNotification["method"]) { + case "notifications/message": + if (!this._capabilities.logging) { + throw new Error( + `Server does not support logging (required for ${method})`, + ); + } + break; + + case "notifications/resources/updated": + case "notifications/resources/list_changed": + if (!this._capabilities.resources) { + throw new Error( + `Server does not support notifying about resources (required for ${method})`, + ); + } + break; + + case "notifications/tools/list_changed": + if (!this._capabilities.tools) { + throw new Error( + `Server does not support notifying of tool list changes (required for ${method})`, + ); + } + break; + + case "notifications/prompts/list_changed": + if (!this._capabilities.prompts) { + throw new Error( + `Server does not support notifying of prompt list changes (required for ${method})`, + ); + } + break; + + case "notifications/cancelled": + // Cancellation notifications are always allowed + break; + + case "notifications/progress": + // Progress notifications are always allowed + 
break; + } + } + + protected assertRequestHandlerCapability(method: string): void { + switch (method) { + case "sampling/createMessage": + if (!this._capabilities.sampling) { + throw new Error( + `Server does not support sampling (required for ${method})`, + ); + } + break; + + case "logging/setLevel": + if (!this._capabilities.logging) { + throw new Error( + `Server does not support logging (required for ${method})`, + ); + } + break; + + case "prompts/get": + case "prompts/list": + if (!this._capabilities.prompts) { + throw new Error( + `Server does not support prompts (required for ${method})`, + ); + } + break; + + case "resources/list": + case "resources/templates/list": + case "resources/read": + if (!this._capabilities.resources) { + throw new Error( + `Server does not support resources (required for ${method})`, + ); + } + break; + + case "tools/call": + case "tools/list": + if (!this._capabilities.tools) { + throw new Error( + `Server does not support tools (required for ${method})`, + ); + } + break; + + case "ping": + case "initialize": + // No specific capability required for these methods + break; + } + } + + private async _oninitialize( + request: InitializeRequest, + ): Promise<InitializeResult> { + const requestedVersion = request.params.protocolVersion; + + this._clientCapabilities = request.params.capabilities; + this._clientVersion = request.params.clientInfo; + + return { + protocolVersion: SUPPORTED_PROTOCOL_VERSIONS.includes(requestedVersion) + ? requestedVersion + : LATEST_PROTOCOL_VERSION, + capabilities: this.getCapabilities(), + serverInfo: this._serverInfo, + ...(this._instructions && { instructions: this._instructions }), + }; + } + + /** + * After initialization has completed, this will be populated with the client's reported capabilities.
+ */ + getClientCapabilities(): ClientCapabilities | undefined { + return this._clientCapabilities; + } + + /** + * After initialization has completed, this will be populated with information about the client's name and version. + */ + getClientVersion(): Implementation | undefined { + return this._clientVersion; + } + + private getCapabilities(): ServerCapabilities { + return this._capabilities; + } + + async ping() { + return this.request({ method: "ping" }, EmptyResultSchema); + } + + async createMessage( + params: CreateMessageRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "sampling/createMessage", params }, + CreateMessageResultSchema, + options, + ); + } + + async listRoots( + params?: ListRootsRequest["params"], + options?: RequestOptions, + ) { + return this.request( + { method: "roots/list", params }, + ListRootsResultSchema, + options, + ); + } + + async sendLoggingMessage(params: LoggingMessageNotification["params"]) { + return this.notification({ method: "notifications/message", params }); + } + + async sendResourceUpdated(params: ResourceUpdatedNotification["params"]) { + return this.notification({ + method: "notifications/resources/updated", + params, + }); + } + + async sendResourceListChanged() { + return this.notification({ + method: "notifications/resources/list_changed", + }); + } + + async sendToolListChanged() { + return this.notification({ method: "notifications/tools/list_changed" }); + } + + async sendPromptListChanged() { + return this.notification({ method: "notifications/prompts/list_changed" }); + } +} + + + +--- +File: /src/server/mcp.test.ts +--- + +import { McpServer } from "./mcp.js"; +import { Client } from "../client/index.js"; +import { InMemoryTransport } from "../inMemory.js"; +import { z } from "zod"; +import { + ListToolsResultSchema, + CallToolResultSchema, + ListResourcesResultSchema, + ListResourceTemplatesResultSchema, + ReadResourceResultSchema, + ListPromptsResultSchema, + 
GetPromptResultSchema, + CompleteResultSchema, +} from "../types.js"; +import { ResourceTemplate } from "./mcp.js"; +import { completable } from "./completable.js"; +import { UriTemplate } from "../shared/uriTemplate.js"; + +describe("McpServer", () => { + test("should expose underlying Server instance", () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + expect(mcpServer.server).toBeDefined(); + }); + + test("should allow sending notifications via Server", async () => { + const mcpServer = new McpServer( + { + name: "test server", + version: "1.0", + }, + { capabilities: { logging: {} } }, + ); + + const client = new Client({ + name: "test client", + version: "1.0", + }); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + // This should work because we're using the underlying server + await expect( + mcpServer.server.sendLoggingMessage({ + level: "info", + data: "Test log message", + }), + ).resolves.not.toThrow(); + }); +}); + +describe("ResourceTemplate", () => { + test("should create ResourceTemplate with string pattern", () => { + const template = new ResourceTemplate("test://{category}/{id}", { + list: undefined, + }); + expect(template.uriTemplate.toString()).toBe("test://{category}/{id}"); + expect(template.listCallback).toBeUndefined(); + }); + + test("should create ResourceTemplate with UriTemplate", () => { + const uriTemplate = new UriTemplate("test://{category}/{id}"); + const template = new ResourceTemplate(uriTemplate, { list: undefined }); + expect(template.uriTemplate).toBe(uriTemplate); + expect(template.listCallback).toBeUndefined(); + }); + + test("should create ResourceTemplate with list callback", async () => { + const list = jest.fn().mockResolvedValue({ + resources: [{ name: "Test", uri: "test://example" }], + }); + + const template = new 
ResourceTemplate("test://{id}", { list }); + expect(template.listCallback).toBe(list); + + const abortController = new AbortController(); + const result = await template.listCallback?.({ + signal: abortController.signal, + }); + expect(result?.resources).toHaveLength(1); + expect(list).toHaveBeenCalled(); + }); +}); + +describe("tool()", () => { + test("should register zero-argument tool", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.tool("test", async () => ({ + content: [ + { + type: "text", + text: "Test response", + }, + ], + })); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "tools/list", + }, + ListToolsResultSchema, + ); + + expect(result.tools).toHaveLength(1); + expect(result.tools[0].name).toBe("test"); + expect(result.tools[0].inputSchema).toEqual({ + type: "object", + }); + }); + + test("should register tool with args schema", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.tool( + "test", + { + name: z.string(), + value: z.number(), + }, + async ({ name, value }) => ({ + content: [ + { + type: "text", + text: `${name}: ${value}`, + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "tools/list", + }, + ListToolsResultSchema, + ); + + expect(result.tools).toHaveLength(1); + expect(result.tools[0].name).toBe("test"); + 
expect(result.tools[0].inputSchema).toMatchObject({ + type: "object", + properties: { + name: { type: "string" }, + value: { type: "number" }, + }, + }); + }); + + test("should register tool with description", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.tool("test", "Test description", async () => ({ + content: [ + { + type: "text", + text: "Test response", + }, + ], + })); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "tools/list", + }, + ListToolsResultSchema, + ); + + expect(result.tools).toHaveLength(1); + expect(result.tools[0].name).toBe("test"); + expect(result.tools[0].description).toBe("Test description"); + }); + + test("should validate tool args", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + tools: {}, + }, + }, + ); + + mcpServer.tool( + "test", + { + name: z.string(), + value: z.number(), + }, + async ({ name, value }) => ({ + content: [ + { + type: "text", + text: `${name}: ${value}`, + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + await expect( + client.request( + { + method: "tools/call", + params: { + name: "test", + arguments: { + name: "test", + value: "not a number", + }, + }, + }, + CallToolResultSchema, + ), + ).rejects.toThrow(/Invalid arguments/); + }); + + test("should prevent duplicate tool registration", () => { + const mcpServer = new McpServer({ + name: "test server", 
+ version: "1.0", + }); + + mcpServer.tool("test", async () => ({ + content: [ + { + type: "text", + text: "Test response", + }, + ], + })); + + expect(() => { + mcpServer.tool("test", async () => ({ + content: [ + { + type: "text", + text: "Test response 2", + }, + ], + })); + }).toThrow(/already registered/); + }); + + test("should allow registering multiple tools", () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + // This should succeed + mcpServer.tool("tool1", () => ({ content: [] })); + + // This should also succeed and not throw about request handlers + mcpServer.tool("tool2", () => ({ content: [] })); + }); + + test("should pass sessionId to tool callback via RequestHandlerExtra", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + tools: {}, + }, + }, + ); + + let receivedSessionId: string | undefined; + mcpServer.tool("test-tool", async (extra) => { + receivedSessionId = extra.sessionId; + return { + content: [ + { + type: "text", + text: "Test response", + }, + ], + }; + }); + + const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair(); + // Set a test sessionId on the server transport + serverTransport.sessionId = "test-session-123"; + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + await client.request( + { + method: "tools/call", + params: { + name: "test-tool", + }, + }, + CallToolResultSchema, + ); + + expect(receivedSessionId).toBe("test-session-123"); + }); + + test("should allow client to call server tools", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + tools: {}, + }, + }, + ); + + mcpServer.tool( + "test", + "Test 
tool", + { + input: z.string(), + }, + async ({ input }) => ({ + content: [ + { + type: "text", + text: `Processed: ${input}`, + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "tools/call", + params: { + name: "test", + arguments: { + input: "hello", + }, + }, + }, + CallToolResultSchema, + ); + + expect(result.content).toEqual([ + { + type: "text", + text: "Processed: hello", + }, + ]); + }); + + test("should handle server tool errors gracefully", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + tools: {}, + }, + }, + ); + + mcpServer.tool("error-test", async () => { + throw new Error("Tool execution failed"); + }); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "tools/call", + params: { + name: "error-test", + }, + }, + CallToolResultSchema, + ); + + expect(result.isError).toBe(true); + expect(result.content).toEqual([ + { + type: "text", + text: "Tool execution failed", + }, + ]); + }); + + test("should throw McpError for invalid tool name", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + tools: {}, + }, + }, + ); + + mcpServer.tool("test-tool", async () => ({ + content: [ + { + type: "text", + text: "Test response", + }, + ], + })); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await 
Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + await expect( + client.request( + { + method: "tools/call", + params: { + name: "nonexistent-tool", + }, + }, + CallToolResultSchema, + ), + ).rejects.toThrow(/Tool nonexistent-tool not found/); + }); +}); + +describe("resource()", () => { + test("should register resource with uri and readCallback", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource("test", "test://resource", async () => ({ + contents: [ + { + uri: "test://resource", + text: "Test content", + }, + ], + })); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "resources/list", + }, + ListResourcesResultSchema, + ); + + expect(result.resources).toHaveLength(1); + expect(result.resources[0].name).toBe("test"); + expect(result.resources[0].uri).toBe("test://resource"); + }); + + test("should register resource with metadata", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource( + "test", + "test://resource", + { + description: "Test resource", + mimeType: "text/plain", + }, + async () => ({ + contents: [ + { + uri: "test://resource", + text: "Test content", + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "resources/list", + }, + ListResourcesResultSchema, + ); + + 
expect(result.resources).toHaveLength(1); + expect(result.resources[0].description).toBe("Test resource"); + expect(result.resources[0].mimeType).toBe("text/plain"); + }); + + test("should register resource template", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource( + "test", + new ResourceTemplate("test://resource/{id}", { list: undefined }), + async () => ({ + contents: [ + { + uri: "test://resource/123", + text: "Test content", + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "resources/templates/list", + }, + ListResourceTemplatesResultSchema, + ); + + expect(result.resourceTemplates).toHaveLength(1); + expect(result.resourceTemplates[0].name).toBe("test"); + expect(result.resourceTemplates[0].uriTemplate).toBe( + "test://resource/{id}", + ); + }); + + test("should register resource template with listCallback", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource( + "test", + new ResourceTemplate("test://resource/{id}", { + list: async () => ({ + resources: [ + { + name: "Resource 1", + uri: "test://resource/1", + }, + { + name: "Resource 2", + uri: "test://resource/2", + }, + ], + }), + }), + async (uri) => ({ + contents: [ + { + uri: uri.href, + text: "Test content", + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "resources/list", 
+ }, + ListResourcesResultSchema, + ); + + expect(result.resources).toHaveLength(2); + expect(result.resources[0].name).toBe("Resource 1"); + expect(result.resources[0].uri).toBe("test://resource/1"); + expect(result.resources[1].name).toBe("Resource 2"); + expect(result.resources[1].uri).toBe("test://resource/2"); + }); + + test("should pass template variables to readCallback", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource( + "test", + new ResourceTemplate("test://resource/{category}/{id}", { + list: undefined, + }), + async (uri, { category, id }) => ({ + contents: [ + { + uri: uri.href, + text: `Category: ${category}, ID: ${id}`, + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "resources/read", + params: { + uri: "test://resource/books/123", + }, + }, + ReadResourceResultSchema, + ); + + expect(result.contents[0].text).toBe("Category: books, ID: 123"); + }); + + test("should prevent duplicate resource registration", () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + mcpServer.resource("test", "test://resource", async () => ({ + contents: [ + { + uri: "test://resource", + text: "Test content", + }, + ], + })); + + expect(() => { + mcpServer.resource("test2", "test://resource", async () => ({ + contents: [ + { + uri: "test://resource", + text: "Test content 2", + }, + ], + })); + }).toThrow(/already registered/); + }); + + test("should allow registering multiple resources", () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + // This should succeed + mcpServer.resource("resource1", "test://resource1", async () 
=> ({ + contents: [ + { + uri: "test://resource1", + text: "Test content 1", + }, + ], + })); + + // This should also succeed and not throw about request handlers + mcpServer.resource("resource2", "test://resource2", async () => ({ + contents: [ + { + uri: "test://resource2", + text: "Test content 2", + }, + ], + })); + }); + + test("should prevent duplicate resource template registration", () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + mcpServer.resource( + "test", + new ResourceTemplate("test://resource/{id}", { list: undefined }), + async () => ({ + contents: [ + { + uri: "test://resource/123", + text: "Test content", + }, + ], + }), + ); + + expect(() => { + mcpServer.resource( + "test", + new ResourceTemplate("test://resource/{id}", { list: undefined }), + async () => ({ + contents: [ + { + uri: "test://resource/123", + text: "Test content 2", + }, + ], + }), + ); + }).toThrow(/already registered/); + }); + + test("should handle resource read errors gracefully", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource("error-test", "test://error", async () => { + throw new Error("Resource read failed"); + }); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + await expect( + client.request( + { + method: "resources/read", + params: { + uri: "test://error", + }, + }, + ReadResourceResultSchema, + ), + ).rejects.toThrow(/Resource read failed/); + }); + + test("should throw McpError for invalid resource URI", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + const client = new Client({ + name: "test client", + version: "1.0", + }); + + mcpServer.resource("test", "test://resource", 
async () => ({ + contents: [ + { + uri: "test://resource", + text: "Test content", + }, + ], + })); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + await expect( + client.request( + { + method: "resources/read", + params: { + uri: "test://nonexistent", + }, + }, + ReadResourceResultSchema, + ), + ).rejects.toThrow(/Resource test:\/\/nonexistent not found/); + }); + + test("should support completion of resource template parameters", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + resources: {}, + }, + }, + ); + + mcpServer.resource( + "test", + new ResourceTemplate("test://resource/{category}", { + list: undefined, + complete: { + category: () => ["books", "movies", "music"], + }, + }), + async () => ({ + contents: [ + { + uri: "test://resource/test", + text: "Test content", + }, + ], + }), + ); + + const [clientTransport, serverTransport] = + InMemoryTransport.createLinkedPair(); + + await Promise.all([ + client.connect(clientTransport), + mcpServer.server.connect(serverTransport), + ]); + + const result = await client.request( + { + method: "completion/complete", + params: { + ref: { + type: "ref/resource", + uri: "test://resource/{category}", + }, + argument: { + name: "category", + value: "", + }, + }, + }, + CompleteResultSchema, + ); + + expect(result.completion.values).toEqual(["books", "movies", "music"]); + expect(result.completion.total).toBe(3); + }); + + test("should support filtered completion of resource template parameters", async () => { + const mcpServer = new McpServer({ + name: "test server", + version: "1.0", + }); + + const client = new Client( + { + name: "test client", + version: "1.0", + }, + { + capabilities: { + resources: {}, + }, + }, + 
    );

    mcpServer.resource(
      "test",
      new ResourceTemplate("test://resource/{category}", {
        list: undefined,
        complete: {
          category: (test: string) =>
            ["books", "movies", "music"].filter((value) =>
              value.startsWith(test),
            ),
        },
      }),
      async () => ({
        contents: [
          {
            uri: "test://resource/test",
            text: "Test content",
          },
        ],
      }),
    );

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    const result = await client.request(
      {
        method: "completion/complete",
        params: {
          ref: {
            type: "ref/resource",
            uri: "test://resource/{category}",
          },
          argument: {
            name: "category",
            value: "m",
          },
        },
      },
      CompleteResultSchema,
    );

    expect(result.completion.values).toEqual(["movies", "music"]);
    expect(result.completion.total).toBe(2);
  });
});

describe("prompt()", () => {
  test("should register zero-argument prompt", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });
    const client = new Client({
      name: "test client",
      version: "1.0",
    });

    mcpServer.prompt("test", async () => ({
      messages: [
        {
          role: "assistant",
          content: {
            type: "text",
            text: "Test response",
          },
        },
      ],
    }));

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    const result = await client.request(
      {
        method: "prompts/list",
      },
      ListPromptsResultSchema,
    );

    expect(result.prompts).toHaveLength(1);
    expect(result.prompts[0].name).toBe("test");
    expect(result.prompts[0].arguments).toBeUndefined();
  });

  test("should register prompt with args schema", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });
    const client = new Client({
      name: "test client",
      version: "1.0",
    });

    mcpServer.prompt(
      "test",
      {
        name: z.string(),
        value: z.string(),
      },
      async ({ name, value }) => ({
        messages: [
          {
            role: "assistant",
            content: {
              type: "text",
              text: `${name}: ${value}`,
            },
          },
        ],
      }),
    );

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    const result = await client.request(
      {
        method: "prompts/list",
      },
      ListPromptsResultSchema,
    );

    expect(result.prompts).toHaveLength(1);
    expect(result.prompts[0].name).toBe("test");
    expect(result.prompts[0].arguments).toEqual([
      { name: "name", required: true },
      { name: "value", required: true },
    ]);
  });

  test("should register prompt with description", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });
    const client = new Client({
      name: "test client",
      version: "1.0",
    });

    mcpServer.prompt("test", "Test description", async () => ({
      messages: [
        {
          role: "assistant",
          content: {
            type: "text",
            text: "Test response",
          },
        },
      ],
    }));

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    const result = await client.request(
      {
        method: "prompts/list",
      },
      ListPromptsResultSchema,
    );

    expect(result.prompts).toHaveLength(1);
    expect(result.prompts[0].name).toBe("test");
    expect(result.prompts[0].description).toBe("Test description");
  });

  test("should validate prompt args", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    const client = new Client(
      {
        name: "test client",
        version: "1.0",
      },
      {
        capabilities: {
          prompts: {},
        },
      },
    );

    mcpServer.prompt(
      "test",
      {
        name: z.string(),
        value: z.string().min(3),
      },
      async ({ name, value }) => ({
        messages: [
          {
            role: "assistant",
            content: {
              type: "text",
              text: `${name}: ${value}`,
            },
          },
        ],
      }),
    );

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    await expect(
      client.request(
        {
          method: "prompts/get",
          params: {
            name: "test",
            arguments: {
              name: "test",
              value: "ab", // Too short
            },
          },
        },
        GetPromptResultSchema,
      ),
    ).rejects.toThrow(/Invalid arguments/);
  });

  test("should prevent duplicate prompt registration", () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    mcpServer.prompt("test", async () => ({
      messages: [
        {
          role: "assistant",
          content: {
            type: "text",
            text: "Test response",
          },
        },
      ],
    }));

    expect(() => {
      mcpServer.prompt("test", async () => ({
        messages: [
          {
            role: "assistant",
            content: {
              type: "text",
              text: "Test response 2",
            },
          },
        ],
      }));
    }).toThrow(/already registered/);
  });

  test("should allow registering multiple prompts", () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    // This should succeed
    mcpServer.prompt("prompt1", async () => ({
      messages: [
        {
          role: "assistant",
          content: {
            type: "text",
            text: "Test response 1",
          },
        },
      ],
    }));

    // This should also succeed and not throw about request handlers
    mcpServer.prompt("prompt2", async () => ({
      messages: [
        {
          role: "assistant",
          content: {
            type: "text",
            text: "Test response 2",
          },
        },
      ],
    }));
  });

  test("should allow registering prompts with arguments", () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    // This should succeed
    mcpServer.prompt(
      "echo",
      { message: z.string() },
      ({ message }) => ({
        messages: [{
          role: "user",
          content: {
            type: "text",
            text: `Please process this message: ${message}`
          }
        }]
      })
    );
  });

  test("should allow registering both resources and prompts with completion handlers", () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    // Register a resource with completion
    mcpServer.resource(
      "test",
      new ResourceTemplate("test://resource/{category}", {
        list: undefined,
        complete: {
          category: () => ["books", "movies", "music"],
        },
      }),
      async () => ({
        contents: [
          {
            uri: "test://resource/test",
            text: "Test content",
          },
        ],
      }),
    );

    // Register a prompt with completion
    mcpServer.prompt(
      "echo",
      { message: completable(z.string(), () => ["hello", "world"]) },
      ({ message }) => ({
        messages: [{
          role: "user",
          content: {
            type: "text",
            text: `Please process this message: ${message}`
          }
        }]
      })
    );
  });

  test("should throw McpError for invalid prompt name", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    const client = new Client(
      {
        name: "test client",
        version: "1.0",
      },
      {
        capabilities: {
          prompts: {},
        },
      },
    );

    mcpServer.prompt("test-prompt", async () => ({
      messages: [
        {
          role: "assistant",
          content: {
            type: "text",
            text: "Test response",
          },
        },
      ],
    }));

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    await expect(
      client.request(
        {
          method: "prompts/get",
          params: {
            name: "nonexistent-prompt",
          },
        },
        GetPromptResultSchema,
      ),
    ).rejects.toThrow(/Prompt nonexistent-prompt not found/);
  });

  test("should support completion of prompt arguments", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    const client = new Client(
      {
        name: "test client",
        version: "1.0",
      },
      {
        capabilities: {
          prompts: {},
        },
      },
    );

    mcpServer.prompt(
      "test-prompt",
      {
        name: completable(z.string(), () => ["Alice", "Bob", "Charlie"]),
      },
      async ({ name }) => ({
        messages: [
          {
            role: "assistant",
            content: {
              type: "text",
              text: `Hello ${name}`,
            },
          },
        ],
      }),
    );

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    const result = await client.request(
      {
        method: "completion/complete",
        params: {
          ref: {
            type: "ref/prompt",
            name: "test-prompt",
          },
          argument: {
            name: "name",
            value: "",
          },
        },
      },
      CompleteResultSchema,
    );

    expect(result.completion.values).toEqual(["Alice", "Bob", "Charlie"]);
    expect(result.completion.total).toBe(3);
  });

  test("should support filtered completion of prompt arguments", async () => {
    const mcpServer = new McpServer({
      name: "test server",
      version: "1.0",
    });

    const client = new Client(
      {
        name: "test client",
        version: "1.0",
      },
      {
        capabilities: {
          prompts: {},
        },
      },
    );

    mcpServer.prompt(
      "test-prompt",
      {
        name: completable(z.string(), (test) =>
          ["Alice", "Bob", "Charlie"].filter((value) => value.startsWith(test)),
        ),
      },
      async ({ name }) => ({
        messages: [
          {
            role: "assistant",
            content: {
              type: "text",
              text: `Hello ${name}`,
            },
          },
        ],
      }),
    );

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      client.connect(clientTransport),
      mcpServer.server.connect(serverTransport),
    ]);

    const result = await client.request(
      {
        method: "completion/complete",
        params: {
          ref: {
            type: "ref/prompt",
            name: "test-prompt",
          },
          argument: {
            name: "name",
            value: "A",
          },
        },
      },
      CompleteResultSchema,
    );

    expect(result.completion.values).toEqual(["Alice"]);
    expect(result.completion.total).toBe(1);
  });
});


---
File: /src/server/mcp.ts
---

import { Server, ServerOptions } from "./index.js";
import { zodToJsonSchema } from "zod-to-json-schema";
import {
  z,
  ZodRawShape,
  ZodObject,
  ZodString,
  AnyZodObject,
  ZodTypeAny,
  ZodType,
  ZodTypeDef,
  ZodOptional,
} from "zod";
import {
  Implementation,
  Tool,
  ListToolsResult,
  CallToolResult,
  McpError,
  ErrorCode,
  CompleteRequest,
  CompleteResult,
  PromptReference,
  ResourceReference,
  Resource,
  ListResourcesResult,
  ListResourceTemplatesRequestSchema,
  ReadResourceRequestSchema,
  ListToolsRequestSchema,
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  ListPromptsRequestSchema,
  GetPromptRequestSchema,
  CompleteRequestSchema,
  ListPromptsResult,
  Prompt,
  PromptArgument,
  GetPromptResult,
  ReadResourceResult,
} from "../types.js";
import { Completable, CompletableDef } from "./completable.js";
import { UriTemplate, Variables } from "../shared/uriTemplate.js";
import { RequestHandlerExtra } from "../shared/protocol.js";
import { Transport } from "../shared/transport.js";

/**
 * High-level MCP server that provides a simpler API for working with resources, tools, and prompts.
 * For advanced usage (like sending notifications or setting custom request handlers), use the underlying
 * Server instance available via the `server` property.
 */
export class McpServer {
  /**
   * The underlying Server instance, useful for advanced operations like sending notifications.
   */
  public readonly server: Server;

  private _registeredResources: { [uri: string]: RegisteredResource } = {};
  private _registeredResourceTemplates: {
    [name: string]: RegisteredResourceTemplate;
  } = {};
  private _registeredTools: { [name: string]: RegisteredTool } = {};
  private _registeredPrompts: { [name: string]: RegisteredPrompt } = {};

  constructor(serverInfo: Implementation, options?: ServerOptions) {
    this.server = new Server(serverInfo, options);
  }

  /**
   * Attaches to the given transport, starts it, and starts listening for messages.
   *
   * The `server` object assumes ownership of the Transport, replacing any callbacks that have already been set, and expects that it is the only user of the Transport instance going forward.
   */
  async connect(transport: Transport): Promise<void> {
    return await this.server.connect(transport);
  }

  /**
   * Closes the connection.
   */
  async close(): Promise<void> {
    await this.server.close();
  }

  private _toolHandlersInitialized = false;

  private setToolRequestHandlers() {
    if (this._toolHandlersInitialized) {
      return;
    }

    this.server.assertCanSetRequestHandler(
      ListToolsRequestSchema.shape.method.value,
    );
    this.server.assertCanSetRequestHandler(
      CallToolRequestSchema.shape.method.value,
    );

    this.server.registerCapabilities({
      tools: {},
    });

    this.server.setRequestHandler(
      ListToolsRequestSchema,
      (): ListToolsResult => ({
        tools: Object.entries(this._registeredTools).map(
          ([name, tool]): Tool => {
            return {
              name,
              description: tool.description,
              inputSchema: tool.inputSchema
                ?
 (zodToJsonSchema(tool.inputSchema, {
                    strictUnions: true,
                  }) as Tool["inputSchema"])
                : EMPTY_OBJECT_JSON_SCHEMA,
            };
          },
        ),
      }),
    );

    this.server.setRequestHandler(
      CallToolRequestSchema,
      async (request, extra): Promise<CallToolResult> => {
        const tool = this._registeredTools[request.params.name];
        if (!tool) {
          throw new McpError(
            ErrorCode.InvalidParams,
            `Tool ${request.params.name} not found`,
          );
        }

        if (tool.inputSchema) {
          const parseResult = await tool.inputSchema.safeParseAsync(
            request.params.arguments,
          );
          if (!parseResult.success) {
            throw new McpError(
              ErrorCode.InvalidParams,
              `Invalid arguments for tool ${request.params.name}: ${parseResult.error.message}`,
            );
          }

          const args = parseResult.data;
          const cb = tool.callback as ToolCallback<ZodRawShape>;
          try {
            return await Promise.resolve(cb(args, extra));
          } catch (error) {
            return {
              content: [
                {
                  type: "text",
                  text: error instanceof Error ? error.message : String(error),
                },
              ],
              isError: true,
            };
          }
        } else {
          const cb = tool.callback as ToolCallback<undefined>;
          try {
            return await Promise.resolve(cb(extra));
          } catch (error) {
            return {
              content: [
                {
                  type: "text",
                  text: error instanceof Error ?
 error.message : String(error),
                },
              ],
              isError: true,
            };
          }
        }
      },
    );

    this._toolHandlersInitialized = true;
  }

  private _completionHandlerInitialized = false;

  private setCompletionRequestHandler() {
    if (this._completionHandlerInitialized) {
      return;
    }

    this.server.assertCanSetRequestHandler(
      CompleteRequestSchema.shape.method.value,
    );

    this.server.setRequestHandler(
      CompleteRequestSchema,
      async (request): Promise<CompleteResult> => {
        switch (request.params.ref.type) {
          case "ref/prompt":
            return this.handlePromptCompletion(request, request.params.ref);

          case "ref/resource":
            return this.handleResourceCompletion(request, request.params.ref);

          default:
            throw new McpError(
              ErrorCode.InvalidParams,
              `Invalid completion reference: ${request.params.ref}`,
            );
        }
      },
    );

    this._completionHandlerInitialized = true;
  }

  private async handlePromptCompletion(
    request: CompleteRequest,
    ref: PromptReference,
  ): Promise<CompleteResult> {
    const prompt = this._registeredPrompts[ref.name];
    if (!prompt) {
      throw new McpError(
        ErrorCode.InvalidParams,
        `Prompt ${request.params.ref.name} not found`,
      );
    }

    if (!prompt.argsSchema) {
      return EMPTY_COMPLETION_RESULT;
    }

    const field = prompt.argsSchema.shape[request.params.argument.name];
    if (!(field instanceof Completable)) {
      return EMPTY_COMPLETION_RESULT;
    }

    const def: CompletableDef<ZodString> = field._def;
    const suggestions = await def.complete(request.params.argument.value);
    return createCompletionResult(suggestions);
  }

  private async handleResourceCompletion(
    request: CompleteRequest,
    ref: ResourceReference,
  ): Promise<CompleteResult> {
    const template = Object.values(this._registeredResourceTemplates).find(
      (t) => t.resourceTemplate.uriTemplate.toString() === ref.uri,
    );

    if (!template) {
      if (this._registeredResources[ref.uri]) {
        // Attempting to autocomplete a fixed resource URI is not an error in the spec (but probably should be).
        return EMPTY_COMPLETION_RESULT;
      }

      throw new McpError(
        ErrorCode.InvalidParams,
        `Resource template ${request.params.ref.uri} not found`,
      );
    }

    const completer = template.resourceTemplate.completeCallback(
      request.params.argument.name,
    );
    if (!completer) {
      return EMPTY_COMPLETION_RESULT;
    }

    const suggestions = await completer(request.params.argument.value);
    return createCompletionResult(suggestions);
  }

  private _resourceHandlersInitialized = false;

  private setResourceRequestHandlers() {
    if (this._resourceHandlersInitialized) {
      return;
    }

    this.server.assertCanSetRequestHandler(
      ListResourcesRequestSchema.shape.method.value,
    );
    this.server.assertCanSetRequestHandler(
      ListResourceTemplatesRequestSchema.shape.method.value,
    );
    this.server.assertCanSetRequestHandler(
      ReadResourceRequestSchema.shape.method.value,
    );

    this.server.registerCapabilities({
      resources: {},
    });

    this.server.setRequestHandler(
      ListResourcesRequestSchema,
      async (request, extra) => {
        const resources = Object.entries(this._registeredResources).map(
          ([uri, resource]) => ({
            uri,
            name: resource.name,
            ...resource.metadata,
          }),
        );

        const templateResources: Resource[] = [];
        for (const template of Object.values(
          this._registeredResourceTemplates,
        )) {
          if (!template.resourceTemplate.listCallback) {
            continue;
          }

          const result = await template.resourceTemplate.listCallback(extra);
          for (const resource of result.resources) {
            templateResources.push({
              ...resource,
              ...template.metadata,
            });
          }
        }

        return { resources: [...resources, ...templateResources] };
      },
    );

    this.server.setRequestHandler(
      ListResourceTemplatesRequestSchema,
      async () => {
        const resourceTemplates = Object.entries(
          this._registeredResourceTemplates,
        ).map(([name, template]) => ({
          name,
          uriTemplate: template.resourceTemplate.uriTemplate.toString(),
          ...template.metadata,
        }));

        return {
 resourceTemplates };
      },
    );

    this.server.setRequestHandler(
      ReadResourceRequestSchema,
      async (request, extra) => {
        const uri = new URL(request.params.uri);

        // First check for exact resource match
        const resource = this._registeredResources[uri.toString()];
        if (resource) {
          return resource.readCallback(uri, extra);
        }

        // Then check templates
        for (const template of Object.values(
          this._registeredResourceTemplates,
        )) {
          const variables = template.resourceTemplate.uriTemplate.match(
            uri.toString(),
          );
          if (variables) {
            return template.readCallback(uri, variables, extra);
          }
        }

        throw new McpError(
          ErrorCode.InvalidParams,
          `Resource ${uri} not found`,
        );
      },
    );

    this.setCompletionRequestHandler();

    this._resourceHandlersInitialized = true;
  }

  private _promptHandlersInitialized = false;

  private setPromptRequestHandlers() {
    if (this._promptHandlersInitialized) {
      return;
    }

    this.server.assertCanSetRequestHandler(
      ListPromptsRequestSchema.shape.method.value,
    );
    this.server.assertCanSetRequestHandler(
      GetPromptRequestSchema.shape.method.value,
    );

    this.server.registerCapabilities({
      prompts: {},
    });

    this.server.setRequestHandler(
      ListPromptsRequestSchema,
      (): ListPromptsResult => ({
        prompts: Object.entries(this._registeredPrompts).map(
          ([name, prompt]): Prompt => {
            return {
              name,
              description: prompt.description,
              arguments: prompt.argsSchema
                ?
 promptArgumentsFromSchema(prompt.argsSchema)
                : undefined,
            };
          },
        ),
      }),
    );

    this.server.setRequestHandler(
      GetPromptRequestSchema,
      async (request, extra): Promise<GetPromptResult> => {
        const prompt = this._registeredPrompts[request.params.name];
        if (!prompt) {
          throw new McpError(
            ErrorCode.InvalidParams,
            `Prompt ${request.params.name} not found`,
          );
        }

        if (prompt.argsSchema) {
          const parseResult = await prompt.argsSchema.safeParseAsync(
            request.params.arguments,
          );
          if (!parseResult.success) {
            throw new McpError(
              ErrorCode.InvalidParams,
              `Invalid arguments for prompt ${request.params.name}: ${parseResult.error.message}`,
            );
          }

          const args = parseResult.data;
          const cb = prompt.callback as PromptCallback<PromptArgsRawShape>;
          return await Promise.resolve(cb(args, extra));
        } else {
          const cb = prompt.callback as PromptCallback<undefined>;
          return await Promise.resolve(cb(extra));
        }
      },
    );

    this.setCompletionRequestHandler();

    this._promptHandlersInitialized = true;
  }

  /**
   * Registers a resource `name` at a fixed URI, which will use the given callback to respond to read requests.
   */
  resource(name: string, uri: string, readCallback: ReadResourceCallback): void;

  /**
   * Registers a resource `name` at a fixed URI with metadata, which will use the given callback to respond to read requests.
   */
  resource(
    name: string,
    uri: string,
    metadata: ResourceMetadata,
    readCallback: ReadResourceCallback,
  ): void;

  /**
   * Registers a resource `name` with a template pattern, which will use the given callback to respond to read requests.
   */
  resource(
    name: string,
    template: ResourceTemplate,
    readCallback: ReadResourceTemplateCallback,
  ): void;

  /**
   * Registers a resource `name` with a template pattern and metadata, which will use the given callback to respond to read requests.
   */
  resource(
    name: string,
    template: ResourceTemplate,
    metadata: ResourceMetadata,
    readCallback: ReadResourceTemplateCallback,
  ): void;

  resource(
    name: string,
    uriOrTemplate: string | ResourceTemplate,
    ...rest: unknown[]
  ): void {
    let metadata: ResourceMetadata | undefined;
    if (typeof rest[0] === "object") {
      metadata = rest.shift() as ResourceMetadata;
    }

    const readCallback = rest[0] as
      | ReadResourceCallback
      | ReadResourceTemplateCallback;

    if (typeof uriOrTemplate === "string") {
      if (this._registeredResources[uriOrTemplate]) {
        throw new Error(`Resource ${uriOrTemplate} is already registered`);
      }

      this._registeredResources[uriOrTemplate] = {
        name,
        metadata,
        readCallback: readCallback as ReadResourceCallback,
      };
    } else {
      if (this._registeredResourceTemplates[name]) {
        throw new Error(`Resource template ${name} is already registered`);
      }

      this._registeredResourceTemplates[name] = {
        resourceTemplate: uriOrTemplate,
        metadata,
        readCallback: readCallback as ReadResourceTemplateCallback,
      };
    }

    this.setResourceRequestHandlers();
  }

  /**
   * Registers a zero-argument tool `name`, which will run the given function when the client calls it.
   */
  tool(name: string, cb: ToolCallback): void;

  /**
   * Registers a zero-argument tool `name` (with a description) which will run the given function when the client calls it.
   */
  tool(name: string, description: string, cb: ToolCallback): void;

  /**
   * Registers a tool `name` accepting the given arguments, which must be an object containing named properties associated with Zod schemas. When the client calls it, the function will be run with the parsed and validated arguments.
   */
  tool<Args extends ZodRawShape>(
    name: string,
    paramsSchema: Args,
    cb: ToolCallback<Args>,
  ): void;

  /**
   * Registers a tool `name` (with a description) accepting the given arguments, which must be an object containing named properties associated with Zod schemas.
   * When the client calls it, the function will be run with the parsed and validated arguments.
   */
  tool<Args extends ZodRawShape>(
    name: string,
    description: string,
    paramsSchema: Args,
    cb: ToolCallback<Args>,
  ): void;

  tool(name: string, ...rest: unknown[]): void {
    if (this._registeredTools[name]) {
      throw new Error(`Tool ${name} is already registered`);
    }

    let description: string | undefined;
    if (typeof rest[0] === "string") {
      description = rest.shift() as string;
    }

    let paramsSchema: ZodRawShape | undefined;
    if (rest.length > 1) {
      paramsSchema = rest.shift() as ZodRawShape;
    }

    const cb = rest[0] as ToolCallback<ZodRawShape | undefined>;
    this._registeredTools[name] = {
      description,
      inputSchema:
        paramsSchema === undefined ? undefined : z.object(paramsSchema),
      callback: cb,
    };

    this.setToolRequestHandlers();
  }

  /**
   * Registers a zero-argument prompt `name`, which will run the given function when the client calls it.
   */
  prompt(name: string, cb: PromptCallback): void;

  /**
   * Registers a zero-argument prompt `name` (with a description) which will run the given function when the client calls it.
   */
  prompt(name: string, description: string, cb: PromptCallback): void;

  /**
   * Registers a prompt `name` accepting the given arguments, which must be an object containing named properties associated with Zod schemas. When the client calls it, the function will be run with the parsed and validated arguments.
   */
  prompt<Args extends PromptArgsRawShape>(
    name: string,
    argsSchema: Args,
    cb: PromptCallback<Args>,
  ): void;

  /**
   * Registers a prompt `name` (with a description) accepting the given arguments, which must be an object containing named properties associated with Zod schemas. When the client calls it, the function will be run with the parsed and validated arguments.
   */
  prompt<Args extends PromptArgsRawShape>(
    name: string,
    description: string,
    argsSchema: Args,
    cb: PromptCallback<Args>,
  ): void;

  prompt(name: string, ...rest: unknown[]): void {
    if (this._registeredPrompts[name]) {
      throw new Error(`Prompt ${name} is already registered`);
    }

    let description: string | undefined;
    if (typeof rest[0] === "string") {
      description = rest.shift() as string;
    }

    let argsSchema: PromptArgsRawShape | undefined;
    if (rest.length > 1) {
      argsSchema = rest.shift() as PromptArgsRawShape;
    }

    const cb = rest[0] as PromptCallback<PromptArgsRawShape | undefined>;
    this._registeredPrompts[name] = {
      description,
      argsSchema: argsSchema === undefined ? undefined : z.object(argsSchema),
      callback: cb,
    };

    this.setPromptRequestHandlers();
  }
}

/**
 * A callback to complete one variable within a resource template's URI template.
 */
export type CompleteResourceTemplateCallback = (
  value: string,
) => string[] | Promise<string[]>;

/**
 * A resource template combines a URI pattern with optional functionality to enumerate
 * all resources matching that pattern.
 */
export class ResourceTemplate {
  private _uriTemplate: UriTemplate;

  constructor(
    uriTemplate: string | UriTemplate,
    private _callbacks: {
      /**
       * A callback to list all resources matching this template. This is required to be specified, even if `undefined`, to avoid accidentally forgetting resource listing.
       */
      list: ListResourcesCallback | undefined;

      /**
       * An optional callback to autocomplete variables within the URI template. Useful for clients and users to discover possible values.
       */
      complete?: {
        [variable: string]: CompleteResourceTemplateCallback;
      };
    },
  ) {
    this._uriTemplate =
      typeof uriTemplate === "string"
        ? new UriTemplate(uriTemplate)
        : uriTemplate;
  }

  /**
   * Gets the URI template pattern.
   */
  get uriTemplate(): UriTemplate {
    return this._uriTemplate;
  }

  /**
   * Gets the list callback, if one was provided.
   */
  get listCallback(): ListResourcesCallback | undefined {
    return this._callbacks.list;
  }

  /**
   * Gets the callback for completing a specific URI template variable, if one was provided.
   */
  completeCallback(
    variable: string,
  ): CompleteResourceTemplateCallback | undefined {
    return this._callbacks.complete?.[variable];
  }
}

/**
 * Callback for a tool handler registered with Server.tool().
 *
 * Parameters will include tool arguments, if applicable, as well as other request handler context.
 */
export type ToolCallback<Args extends undefined | ZodRawShape = undefined> =
  Args extends ZodRawShape
    ? (
        args: z.objectOutputType<Args, ZodTypeAny>,
        extra: RequestHandlerExtra,
      ) => CallToolResult | Promise<CallToolResult>
    : (extra: RequestHandlerExtra) => CallToolResult | Promise<CallToolResult>;

type RegisteredTool = {
  description?: string;
  inputSchema?: AnyZodObject;
  callback: ToolCallback<undefined | ZodRawShape>;
};

const EMPTY_OBJECT_JSON_SCHEMA = {
  type: "object" as const,
};

/**
 * Additional, optional information for annotating a resource.
 */
export type ResourceMetadata = Omit<Resource, "uri" | "name">;

/**
 * Callback to list all resources matching a given template.
 */
export type ListResourcesCallback = (
  extra: RequestHandlerExtra,
) => ListResourcesResult | Promise<ListResourcesResult>;

/**
 * Callback to read a resource at a given URI.
 */
export type ReadResourceCallback = (
  uri: URL,
  extra: RequestHandlerExtra,
) => ReadResourceResult | Promise<ReadResourceResult>;

type RegisteredResource = {
  name: string;
  metadata?: ResourceMetadata;
  readCallback: ReadResourceCallback;
};

/**
 * Callback to read a resource at a given URI, following a filled-in URI template.
 */
export type ReadResourceTemplateCallback = (
  uri: URL,
  variables: Variables,
  extra: RequestHandlerExtra,
) => ReadResourceResult | Promise<ReadResourceResult>;

type RegisteredResourceTemplate = {
  resourceTemplate: ResourceTemplate;
  metadata?: ResourceMetadata;
  readCallback: ReadResourceTemplateCallback;
};

type PromptArgsRawShape = {
  [k: string]:
    | ZodType<string, ZodTypeDef, string>
    | ZodOptional<ZodType<string, ZodTypeDef, string>>;
};

export type PromptCallback<
  Args extends undefined | PromptArgsRawShape = undefined,
> = Args extends PromptArgsRawShape
  ? (
      args: z.objectOutputType<Args, ZodTypeAny>,
      extra: RequestHandlerExtra,
    ) => GetPromptResult | Promise<GetPromptResult>
  : (extra: RequestHandlerExtra) => GetPromptResult | Promise<GetPromptResult>;

type RegisteredPrompt = {
  description?: string;
  argsSchema?: ZodObject<PromptArgsRawShape>;
  callback: PromptCallback<undefined | PromptArgsRawShape>;
};

function promptArgumentsFromSchema(
  schema: ZodObject<PromptArgsRawShape>,
): PromptArgument[] {
  return Object.entries(schema.shape).map(
    ([name, field]): PromptArgument => ({
      name,
      description: field.description,
      required: !field.isOptional(),
    }),
  );
}

function createCompletionResult(suggestions: string[]): CompleteResult {
  return {
    completion: {
      values: suggestions.slice(0, 100),
      total: suggestions.length,
      hasMore: suggestions.length > 100,
    },
  };
}

const EMPTY_COMPLETION_RESULT: CompleteResult = {
  completion: {
    values: [],
    hasMore: false,
  },
};


---
File: /src/server/sse.ts
---

import { randomUUID } from "node:crypto";
import { IncomingMessage, ServerResponse } from "node:http";
import { Transport } from "../shared/transport.js";
import { JSONRPCMessage, JSONRPCMessageSchema } from "../types.js";
import getRawBody from "raw-body";
import contentType from "content-type";

const MAXIMUM_MESSAGE_SIZE = "4mb";

/**
 * Server transport for SSE: this will send messages over an SSE connection and receive messages from HTTP POST requests.
 *
 * This transport is only available in Node.js environments.
 */
export class SSEServerTransport implements Transport {
  private _sseResponse?: ServerResponse;
  private _sessionId: string;

  onclose?: () => void;
  onerror?: (error: Error) => void;
  onmessage?: (message: JSONRPCMessage) => void;

  /**
   * Creates a new SSE server transport, which will direct the client to POST messages to the relative or absolute URL identified by `_endpoint`.
   */
  constructor(
    private _endpoint: string,
    private res: ServerResponse,
  ) {
    this._sessionId = randomUUID();
  }

  /**
   * Handles the initial SSE connection request.
   *
   * This should be called when a GET request is made to establish the SSE stream.
   */
  async start(): Promise<void> {
    if (this._sseResponse) {
      throw new Error(
        "SSEServerTransport already started! If using Server class, note that connect() calls start() automatically.",
      );
    }

    this.res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });

    // Send the endpoint event
    this.res.write(
      `event: endpoint\ndata: ${encodeURI(this._endpoint)}?sessionId=${this._sessionId}\n\n`,
    );

    this._sseResponse = this.res;
    this.res.on("close", () => {
      this._sseResponse = undefined;
      this.onclose?.();
    });
  }

  /**
   * Handles incoming POST messages.
   *
   * This should be called when a POST request is made to send a message to the server.
   */
  async handlePostMessage(
    req: IncomingMessage,
    res: ServerResponse,
    parsedBody?: unknown,
  ): Promise<void> {
    if (!this._sseResponse) {
      const message = "SSE connection not established";
      res.writeHead(500).end(message);
      throw new Error(message);
    }

    let body: string | unknown;
    try {
      const ct = contentType.parse(req.headers["content-type"] ?? "");
      if (ct.type !== "application/json") {
        throw new Error(`Unsupported content-type: ${ct}`);
      }

      body = parsedBody ?? await getRawBody(req, {
        limit: MAXIMUM_MESSAGE_SIZE,
        encoding: ct.parameters.charset ??
"utf-8", + }); + } catch (error) { + res.writeHead(400).end(String(error)); + this.onerror?.(error as Error); + return; + } + + try { + await this.handleMessage(typeof body === 'string' ? JSON.parse(body) : body); + } catch { + res.writeHead(400).end(`Invalid message: ${body}`); + return; + } + + res.writeHead(202).end("Accepted"); + } + + /** + * Handle a client message, regardless of how it arrived. This can be used to inform the server of messages that arrive via a means different than HTTP POST. + */ + async handleMessage(message: unknown): Promise { + let parsedMessage: JSONRPCMessage; + try { + parsedMessage = JSONRPCMessageSchema.parse(message); + } catch (error) { + this.onerror?.(error as Error); + throw error; + } + + this.onmessage?.(parsedMessage); + } + + async close(): Promise { + this._sseResponse?.end(); + this._sseResponse = undefined; + this.onclose?.(); + } + + async send(message: JSONRPCMessage): Promise { + if (!this._sseResponse) { + throw new Error("Not connected"); + } + + this._sseResponse.write( + `event: message\ndata: ${JSON.stringify(message)}\n\n`, + ); + } + + /** + * Returns the session ID for this transport. + * + * This can be used to route incoming POST requests. + */ + get sessionId(): string { + return this._sessionId; + } +} + + + +--- +File: /src/server/stdio.test.ts +--- + +import { Readable, Writable } from "node:stream"; +import { ReadBuffer, serializeMessage } from "../shared/stdio.js"; +import { JSONRPCMessage } from "../types.js"; +import { StdioServerTransport } from "./stdio.js"; + +let input: Readable; +let outputBuffer: ReadBuffer; +let output: Writable; + +beforeEach(() => { + input = new Readable({ + // We'll use input.push() instead. 
+ read: () => {}, + }); + + outputBuffer = new ReadBuffer(); + output = new Writable({ + write(chunk, encoding, callback) { + outputBuffer.append(chunk); + callback(); + }, + }); +}); + +test("should start then close cleanly", async () => { + const server = new StdioServerTransport(input, output); + server.onerror = (error) => { + throw error; + }; + + let didClose = false; + server.onclose = () => { + didClose = true; + }; + + await server.start(); + expect(didClose).toBeFalsy(); + await server.close(); + expect(didClose).toBeTruthy(); +}); + +test("should not read until started", async () => { + const server = new StdioServerTransport(input, output); + server.onerror = (error) => { + throw error; + }; + + let didRead = false; + const readMessage = new Promise((resolve) => { + server.onmessage = (message) => { + didRead = true; + resolve(message); + }; + }); + + const message: JSONRPCMessage = { + jsonrpc: "2.0", + id: 1, + method: "ping", + }; + input.push(serializeMessage(message)); + + expect(didRead).toBeFalsy(); + await server.start(); + expect(await readMessage).toEqual(message); +}); + +test("should read multiple messages", async () => { + const server = new StdioServerTransport(input, output); + server.onerror = (error) => { + throw error; + }; + + const messages: JSONRPCMessage[] = [ + { + jsonrpc: "2.0", + id: 1, + method: "ping", + }, + { + jsonrpc: "2.0", + method: "notifications/initialized", + }, + ]; + + const readMessages: JSONRPCMessage[] = []; + const finished = new Promise((resolve) => { + server.onmessage = (message) => { + readMessages.push(message); + if (JSON.stringify(message) === JSON.stringify(messages[1])) { + resolve(); + } + }; + }); + + input.push(serializeMessage(messages[0])); + input.push(serializeMessage(messages[1])); + + await server.start(); + await finished; + expect(readMessages).toEqual(messages); +}); + + + +--- +File: /src/server/stdio.ts +--- + +import process from "node:process"; +import { Readable, Writable } from 
"node:stream"; +import { ReadBuffer, serializeMessage } from "../shared/stdio.js"; +import { JSONRPCMessage } from "../types.js"; +import { Transport } from "../shared/transport.js"; + +/** + * Server transport for stdio: this communicates with a MCP client by reading from the current process' stdin and writing to stdout. + * + * This transport is only available in Node.js environments. + */ +export class StdioServerTransport implements Transport { + private _readBuffer: ReadBuffer = new ReadBuffer(); + private _started = false; + + constructor( + private _stdin: Readable = process.stdin, + private _stdout: Writable = process.stdout, + ) {} + + onclose?: () => void; + onerror?: (error: Error) => void; + onmessage?: (message: JSONRPCMessage) => void; + + // Arrow functions to bind `this` properly, while maintaining function identity. + _ondata = (chunk: Buffer) => { + this._readBuffer.append(chunk); + this.processReadBuffer(); + }; + _onerror = (error: Error) => { + this.onerror?.(error); + }; + + /** + * Starts listening for messages on stdin. + */ + async start(): Promise { + if (this._started) { + throw new Error( + "StdioServerTransport already started! 
If using Server class, note that connect() calls start() automatically.", + ); + } + + this._started = true; + this._stdin.on("data", this._ondata); + this._stdin.on("error", this._onerror); + } + + private processReadBuffer() { + while (true) { + try { + const message = this._readBuffer.readMessage(); + if (message === null) { + break; + } + + this.onmessage?.(message); + } catch (error) { + this.onerror?.(error as Error); + } + } + } + + async close(): Promise { + // Remove our event listeners first + this._stdin.off("data", this._ondata); + this._stdin.off("error", this._onerror); + + // Check if we were the only data listener + const remainingDataListeners = this._stdin.listenerCount('data'); + if (remainingDataListeners === 0) { + // Only pause stdin if we were the only listener + // This prevents interfering with other parts of the application that might be using stdin + this._stdin.pause(); + } + + // Clear the buffer and notify closure + this._readBuffer.clear(); + this.onclose?.(); + } + + send(message: JSONRPCMessage): Promise { + return new Promise((resolve) => { + const json = serializeMessage(message); + if (this._stdout.write(json)) { + resolve(); + } else { + this._stdout.once("drain", resolve); + } + }); + } +} + + + +--- +File: /src/shared/auth.ts +--- + +import { z } from "zod"; + +/** + * RFC 8414 OAuth 2.0 Authorization Server Metadata + */ +export const OAuthMetadataSchema = z + .object({ + issuer: z.string(), + authorization_endpoint: z.string(), + token_endpoint: z.string(), + registration_endpoint: z.string().optional(), + scopes_supported: z.array(z.string()).optional(), + response_types_supported: z.array(z.string()), + response_modes_supported: z.array(z.string()).optional(), + grant_types_supported: z.array(z.string()).optional(), + token_endpoint_auth_methods_supported: z.array(z.string()).optional(), + token_endpoint_auth_signing_alg_values_supported: z + .array(z.string()) + .optional(), + service_documentation: z.string().optional(), 
+ revocation_endpoint: z.string().optional(), + revocation_endpoint_auth_methods_supported: z.array(z.string()).optional(), + revocation_endpoint_auth_signing_alg_values_supported: z + .array(z.string()) + .optional(), + introspection_endpoint: z.string().optional(), + introspection_endpoint_auth_methods_supported: z + .array(z.string()) + .optional(), + introspection_endpoint_auth_signing_alg_values_supported: z + .array(z.string()) + .optional(), + code_challenge_methods_supported: z.array(z.string()).optional(), + }) + .passthrough(); + +/** + * OAuth 2.1 token response + */ +export const OAuthTokensSchema = z + .object({ + access_token: z.string(), + token_type: z.string(), + expires_in: z.number().optional(), + scope: z.string().optional(), + refresh_token: z.string().optional(), + }) + .strip(); + +/** + * OAuth 2.1 error response + */ +export const OAuthErrorResponseSchema = z + .object({ + error: z.string(), + error_description: z.string().optional(), + error_uri: z.string().optional(), + }); + +/** + * RFC 7591 OAuth 2.0 Dynamic Client Registration metadata + */ +export const OAuthClientMetadataSchema = z.object({ + redirect_uris: z.array(z.string()).refine((uris) => uris.every((uri) => URL.canParse(uri)), { message: "redirect_uris must contain valid URLs" }), + token_endpoint_auth_method: z.string().optional(), + grant_types: z.array(z.string()).optional(), + response_types: z.array(z.string()).optional(), + client_name: z.string().optional(), + client_uri: z.string().optional(), + logo_uri: z.string().optional(), + scope: z.string().optional(), + contacts: z.array(z.string()).optional(), + tos_uri: z.string().optional(), + policy_uri: z.string().optional(), + jwks_uri: z.string().optional(), + jwks: z.any().optional(), + software_id: z.string().optional(), + software_version: z.string().optional(), +}).strip(); + +/** + * RFC 7591 OAuth 2.0 Dynamic Client Registration client information + */ +export const OAuthClientInformationSchema = z.object({ + 
client_id: z.string(),
+  client_secret: z.string().optional(),
+  client_id_issued_at: z.number().optional(),
+  client_secret_expires_at: z.number().optional(),
+}).strip();
+
+/**
+ * RFC 7591 OAuth 2.0 Dynamic Client Registration full response (client information plus metadata)
+ */
+export const OAuthClientInformationFullSchema = OAuthClientMetadataSchema.merge(OAuthClientInformationSchema);
+
+/**
+ * RFC 7591 OAuth 2.0 Dynamic Client Registration error response
+ */
+export const OAuthClientRegistrationErrorSchema = z.object({
+  error: z.string(),
+  error_description: z.string().optional(),
+}).strip();
+
+/**
+ * RFC 7009 OAuth 2.0 Token Revocation request
+ */
+export const OAuthTokenRevocationRequestSchema = z.object({
+  token: z.string(),
+  token_type_hint: z.string().optional(),
+}).strip();
+
+export type OAuthMetadata = z.infer<typeof OAuthMetadataSchema>;
+export type OAuthTokens = z.infer<typeof OAuthTokensSchema>;
+export type OAuthErrorResponse = z.infer<typeof OAuthErrorResponseSchema>;
+export type OAuthClientMetadata = z.infer<typeof OAuthClientMetadataSchema>;
+export type OAuthClientInformation = z.infer<typeof OAuthClientInformationSchema>;
+export type OAuthClientInformationFull = z.infer<typeof OAuthClientInformationFullSchema>;
+export type OAuthClientRegistrationError = z.infer<typeof OAuthClientRegistrationErrorSchema>;
+export type OAuthTokenRevocationRequest = z.infer<typeof OAuthTokenRevocationRequestSchema>;
+
+
+---
+File: /src/shared/protocol.test.ts
+---
+
+import { ZodType, z } from "zod";
+import {
+  ClientCapabilities,
+  ErrorCode,
+  McpError,
+  Notification,
+  Request,
+  Result,
+  ServerCapabilities,
+} from "../types.js";
+import { Protocol, mergeCapabilities } from "./protocol.js";
+import { Transport } from "./transport.js";
+
+// Mock Transport class
+class MockTransport implements Transport {
+  onclose?: () => void;
+  onerror?: (error: Error) => void;
+  onmessage?: (message: unknown) => void;
+
+  async start(): Promise<void> {}
+  async close(): Promise<void> {
+    this.onclose?.();
+  }
+  async send(_message: unknown): Promise<void> {}
+}
+
+describe("protocol tests", () => {
+  let protocol: Protocol<Request, Notification, Result>;
+  let transport: MockTransport;
+
+  beforeEach(() => {
+    transport = new MockTransport();
+    protocol = new (class extends
Protocol<Request, Notification, Result> {
+      protected assertCapabilityForMethod(): void {}
+      protected assertNotificationCapability(): void {}
+      protected assertRequestHandlerCapability(): void {}
+    })();
+  });
+
+  test("should throw a timeout error if the request exceeds the timeout", async () => {
+    await protocol.connect(transport);
+    const request = { method: "example", params: {} };
+    try {
+      const mockSchema: ZodType<{ result: string }> = z.object({
+        result: z.string(),
+      });
+      await protocol.request(request, mockSchema, {
+        timeout: 0,
+      });
+    } catch (error) {
+      expect(error).toBeInstanceOf(McpError);
+      if (error instanceof McpError) {
+        expect(error.code).toBe(ErrorCode.RequestTimeout);
+      }
+    }
+  });
+
+  test("should invoke onclose when the connection is closed", async () => {
+    const oncloseMock = jest.fn();
+    protocol.onclose = oncloseMock;
+    await protocol.connect(transport);
+    await transport.close();
+    expect(oncloseMock).toHaveBeenCalled();
+  });
+
+  describe("progress notification timeout behavior", () => {
+    beforeEach(() => {
+      jest.useFakeTimers();
+    });
+    afterEach(() => {
+      jest.useRealTimers();
+    });
+
+    test("should reset timeout when progress notification is received", async () => {
+      await protocol.connect(transport);
+      const request = { method: "example", params: {} };
+      const mockSchema: ZodType<{ result: string }> = z.object({
+        result: z.string(),
+      });
+      const onProgressMock = jest.fn();
+      const requestPromise = protocol.request(request, mockSchema, {
+        timeout: 1000,
+        resetTimeoutOnProgress: true,
+        onprogress: onProgressMock,
+      });
+      jest.advanceTimersByTime(800);
+      if (transport.onmessage) {
+        transport.onmessage({
+          jsonrpc: "2.0",
+          method: "notifications/progress",
+          params: {
+            progressToken: 0,
+            progress: 50,
+            total: 100,
+          },
+        });
+      }
+      await Promise.resolve();
+      expect(onProgressMock).toHaveBeenCalledWith({
+        progress: 50,
+        total: 100,
+      });
+      jest.advanceTimersByTime(800);
+      if (transport.onmessage) {
+        transport.onmessage({
+          jsonrpc: "2.0",
+          id:
0, + result: { result: "success" }, + }); + } + await Promise.resolve(); + await expect(requestPromise).resolves.toEqual({ result: "success" }); + }); + + test("should respect maxTotalTimeout", async () => { + await protocol.connect(transport); + const request = { method: "example", params: {} }; + const mockSchema: ZodType<{ result: string }> = z.object({ + result: z.string(), + }); + const onProgressMock = jest.fn(); + const requestPromise = protocol.request(request, mockSchema, { + timeout: 1000, + maxTotalTimeout: 150, + resetTimeoutOnProgress: true, + onprogress: onProgressMock, + }); + + // First progress notification should work + jest.advanceTimersByTime(80); + if (transport.onmessage) { + transport.onmessage({ + jsonrpc: "2.0", + method: "notifications/progress", + params: { + progressToken: 0, + progress: 50, + total: 100, + }, + }); + } + await Promise.resolve(); + expect(onProgressMock).toHaveBeenCalledWith({ + progress: 50, + total: 100, + }); + jest.advanceTimersByTime(80); + if (transport.onmessage) { + transport.onmessage({ + jsonrpc: "2.0", + method: "notifications/progress", + params: { + progressToken: 0, + progress: 75, + total: 100, + }, + }); + } + await expect(requestPromise).rejects.toThrow("Maximum total timeout exceeded"); + expect(onProgressMock).toHaveBeenCalledTimes(1); + }); + + test("should timeout if no progress received within timeout period", async () => { + await protocol.connect(transport); + const request = { method: "example", params: {} }; + const mockSchema: ZodType<{ result: string }> = z.object({ + result: z.string(), + }); + const requestPromise = protocol.request(request, mockSchema, { + timeout: 100, + resetTimeoutOnProgress: true, + }); + jest.advanceTimersByTime(101); + await expect(requestPromise).rejects.toThrow("Request timed out"); + }); + + test("should handle multiple progress notifications correctly", async () => { + await protocol.connect(transport); + const request = { method: "example", params: {} }; + const 
mockSchema: ZodType<{ result: string }> = z.object({ + result: z.string(), + }); + const onProgressMock = jest.fn(); + const requestPromise = protocol.request(request, mockSchema, { + timeout: 1000, + resetTimeoutOnProgress: true, + onprogress: onProgressMock, + }); + + // Simulate multiple progress updates + for (let i = 1; i <= 3; i++) { + jest.advanceTimersByTime(800); + if (transport.onmessage) { + transport.onmessage({ + jsonrpc: "2.0", + method: "notifications/progress", + params: { + progressToken: 0, + progress: i * 25, + total: 100, + }, + }); + } + await Promise.resolve(); + expect(onProgressMock).toHaveBeenNthCalledWith(i, { + progress: i * 25, + total: 100, + }); + } + if (transport.onmessage) { + transport.onmessage({ + jsonrpc: "2.0", + id: 0, + result: { result: "success" }, + }); + } + await Promise.resolve(); + await expect(requestPromise).resolves.toEqual({ result: "success" }); + }); + }); +}); + +describe("mergeCapabilities", () => { + it("should merge client capabilities", () => { + const base: ClientCapabilities = { + sampling: {}, + roots: { + listChanged: true, + }, + }; + + const additional: ClientCapabilities = { + experimental: { + feature: true, + }, + roots: { + newProp: true, + }, + }; + + const merged = mergeCapabilities(base, additional); + expect(merged).toEqual({ + sampling: {}, + roots: { + listChanged: true, + newProp: true, + }, + experimental: { + feature: true, + }, + }); + }); + + it("should merge server capabilities", () => { + const base: ServerCapabilities = { + logging: {}, + prompts: { + listChanged: true, + }, + }; + + const additional: ServerCapabilities = { + resources: { + subscribe: true, + }, + prompts: { + newProp: true, + }, + }; + + const merged = mergeCapabilities(base, additional); + expect(merged).toEqual({ + logging: {}, + prompts: { + listChanged: true, + newProp: true, + }, + resources: { + subscribe: true, + }, + }); + }); + + it("should override existing values with additional values", () => { + const 
base: ServerCapabilities = { + prompts: { + listChanged: false, + }, + }; + + const additional: ServerCapabilities = { + prompts: { + listChanged: true, + }, + }; + + const merged = mergeCapabilities(base, additional); + expect(merged.prompts!.listChanged).toBe(true); + }); + + it("should handle empty objects", () => { + const base = {}; + const additional = {}; + const merged = mergeCapabilities(base, additional); + expect(merged).toEqual({}); + }); +}); + + + +--- +File: /src/shared/protocol.ts +--- + +import { ZodLiteral, ZodObject, ZodType, z } from "zod"; +import { + CancelledNotificationSchema, + ClientCapabilities, + ErrorCode, + JSONRPCError, + JSONRPCNotification, + JSONRPCRequest, + JSONRPCResponse, + McpError, + Notification, + PingRequestSchema, + Progress, + ProgressNotification, + ProgressNotificationSchema, + Request, + RequestId, + Result, + ServerCapabilities, +} from "../types.js"; +import { Transport } from "./transport.js"; + +/** + * Callback for progress notifications. + */ +export type ProgressCallback = (progress: Progress) => void; + +/** + * Additional initialization options. + */ +export type ProtocolOptions = { + /** + * Whether to restrict emitted requests to only those that the remote side has indicated that they can handle, through their advertised capabilities. + * + * Note that this DOES NOT affect checking of _local_ side capabilities, as it is considered a logic error to mis-specify those. + * + * Currently this defaults to false, for backwards compatibility with SDK versions that did not advertise capabilities correctly. In future, this will default to true. + */ + enforceStrictCapabilities?: boolean; +}; + +/** + * The default request timeout, in miliseconds. + */ +export const DEFAULT_REQUEST_TIMEOUT_MSEC = 60000; + +/** + * Options that can be given per request. + */ +export type RequestOptions = { + /** + * If set, requests progress notifications from the remote end (if supported). 
When progress notifications are received, this callback will be invoked. + */ + onprogress?: ProgressCallback; + + /** + * Can be used to cancel an in-flight request. This will cause an AbortError to be raised from request(). + */ + signal?: AbortSignal; + + /** + * A timeout (in milliseconds) for this request. If exceeded, an McpError with code `RequestTimeout` will be raised from request(). + * + * If not specified, `DEFAULT_REQUEST_TIMEOUT_MSEC` will be used as the timeout. + */ + timeout?: number; + + /** + * If true, receiving a progress notification will reset the request timeout. + * This is useful for long-running operations that send periodic progress updates. + * Default: false + */ + resetTimeoutOnProgress?: boolean; + + /** + * Maximum total time (in milliseconds) to wait for a response. + * If exceeded, an McpError with code `RequestTimeout` will be raised, regardless of progress notifications. + * If not specified, there is no maximum total timeout. + */ + maxTotalTimeout?: number; +}; + +/** + * Extra data given to request handlers. + */ +export type RequestHandlerExtra = { + /** + * An abort signal used to communicate if the request was cancelled from the sender's side. + */ + signal: AbortSignal; + + /** + * The session ID from the transport, if available. + */ + sessionId?: string; +}; + +/** + * Information about a request's timeout state + */ +type TimeoutInfo = { + timeoutId: ReturnType; + startTime: number; + timeout: number; + maxTotalTimeout?: number; + onTimeout: () => void; +}; + +/** + * Implements MCP protocol framing on top of a pluggable transport, including + * features like request/response linking, notifications, and progress. 
+ */
+export abstract class Protocol<
+  SendRequestT extends Request,
+  SendNotificationT extends Notification,
+  SendResultT extends Result,
+> {
+  private _transport?: Transport;
+  private _requestMessageId = 0;
+  private _requestHandlers: Map<
+    string,
+    (
+      request: JSONRPCRequest,
+      extra: RequestHandlerExtra,
+    ) => Promise<SendResultT>
+  > = new Map();
+  private _requestHandlerAbortControllers: Map<RequestId, AbortController> =
+    new Map();
+  private _notificationHandlers: Map<
+    string,
+    (notification: JSONRPCNotification) => Promise<void>
+  > = new Map();
+  private _responseHandlers: Map<
+    number,
+    (response: JSONRPCResponse | Error) => void
+  > = new Map();
+  private _progressHandlers: Map<number, ProgressCallback> = new Map();
+  private _timeoutInfo: Map<number, TimeoutInfo> = new Map();
+
+  /**
+   * Callback for when the connection is closed for any reason.
+   *
+   * This is invoked when close() is called as well.
+   */
+  onclose?: () => void;
+
+  /**
+   * Callback for when an error occurs.
+   *
+   * Note that errors are not necessarily fatal; they are used for reporting any kind of exceptional condition out of band.
+   */
+  onerror?: (error: Error) => void;
+
+  /**
+   * A handler to invoke for any request types that do not have their own handler installed.
+   */
+  fallbackRequestHandler?: (request: Request) => Promise<SendResultT>;
+
+  /**
+   * A handler to invoke for any notification types that do not have their own handler installed.
+   */
+  fallbackNotificationHandler?: (notification: Notification) => Promise<void>;
+
+  constructor(private _options?: ProtocolOptions) {
+    this.setNotificationHandler(CancelledNotificationSchema, (notification) => {
+      const controller = this._requestHandlerAbortControllers.get(
+        notification.params.requestId,
+      );
+      controller?.abort(notification.params.reason);
+    });
+
+    this.setNotificationHandler(ProgressNotificationSchema, (notification) => {
+      this._onprogress(notification as unknown as ProgressNotification);
+    });
+
+    this.setRequestHandler(
+      PingRequestSchema,
+      // Automatic pong by default.
+ (_request) => ({}) as SendResultT, + ); + } + + private _setupTimeout( + messageId: number, + timeout: number, + maxTotalTimeout: number | undefined, + onTimeout: () => void + ) { + this._timeoutInfo.set(messageId, { + timeoutId: setTimeout(onTimeout, timeout), + startTime: Date.now(), + timeout, + maxTotalTimeout, + onTimeout + }); + } + + private _resetTimeout(messageId: number): boolean { + const info = this._timeoutInfo.get(messageId); + if (!info) return false; + + const totalElapsed = Date.now() - info.startTime; + if (info.maxTotalTimeout && totalElapsed >= info.maxTotalTimeout) { + this._timeoutInfo.delete(messageId); + throw new McpError( + ErrorCode.RequestTimeout, + "Maximum total timeout exceeded", + { maxTotalTimeout: info.maxTotalTimeout, totalElapsed } + ); + } + + clearTimeout(info.timeoutId); + info.timeoutId = setTimeout(info.onTimeout, info.timeout); + return true; + } + + private _cleanupTimeout(messageId: number) { + const info = this._timeoutInfo.get(messageId); + if (info) { + clearTimeout(info.timeoutId); + this._timeoutInfo.delete(messageId); + } + } + + /** + * Attaches to the given transport, starts it, and starts listening for messages. + * + * The Protocol object assumes ownership of the Transport, replacing any callbacks that have already been set, and expects that it is the only user of the Transport instance going forward. 
+ */ + async connect(transport: Transport): Promise { + this._transport = transport; + this._transport.onclose = () => { + this._onclose(); + }; + + this._transport.onerror = (error: Error) => { + this._onerror(error); + }; + + this._transport.onmessage = (message) => { + if (!("method" in message)) { + this._onresponse(message); + } else if ("id" in message) { + this._onrequest(message); + } else { + this._onnotification(message); + } + }; + + await this._transport.start(); + } + + private _onclose(): void { + const responseHandlers = this._responseHandlers; + this._responseHandlers = new Map(); + this._progressHandlers.clear(); + this._transport = undefined; + this.onclose?.(); + + const error = new McpError(ErrorCode.ConnectionClosed, "Connection closed"); + for (const handler of responseHandlers.values()) { + handler(error); + } + } + + private _onerror(error: Error): void { + this.onerror?.(error); + } + + private _onnotification(notification: JSONRPCNotification): void { + const handler = + this._notificationHandlers.get(notification.method) ?? + this.fallbackNotificationHandler; + + // Ignore notifications not being subscribed to. + if (handler === undefined) { + return; + } + + // Starting with Promise.resolve() puts any synchronous errors into the monad as well. + Promise.resolve() + .then(() => handler(notification)) + .catch((error) => + this._onerror( + new Error(`Uncaught error in notification handler: ${error}`), + ), + ); + } + + private _onrequest(request: JSONRPCRequest): void { + const handler = + this._requestHandlers.get(request.method) ?? 
this.fallbackRequestHandler; + + if (handler === undefined) { + this._transport + ?.send({ + jsonrpc: "2.0", + id: request.id, + error: { + code: ErrorCode.MethodNotFound, + message: "Method not found", + }, + }) + .catch((error) => + this._onerror( + new Error(`Failed to send an error response: ${error}`), + ), + ); + return; + } + + const abortController = new AbortController(); + this._requestHandlerAbortControllers.set(request.id, abortController); + + // Create extra object with both abort signal and sessionId from transport + const extra: RequestHandlerExtra = { + signal: abortController.signal, + sessionId: this._transport?.sessionId, + }; + + // Starting with Promise.resolve() puts any synchronous errors into the monad as well. + Promise.resolve() + .then(() => handler(request, extra)) + .then( + (result) => { + if (abortController.signal.aborted) { + return; + } + + return this._transport?.send({ + result, + jsonrpc: "2.0", + id: request.id, + }); + }, + (error) => { + if (abortController.signal.aborted) { + return; + } + + return this._transport?.send({ + jsonrpc: "2.0", + id: request.id, + error: { + code: Number.isSafeInteger(error["code"]) + ? error["code"] + : ErrorCode.InternalError, + message: error.message ?? 
"Internal error", + }, + }); + }, + ) + .catch((error) => + this._onerror(new Error(`Failed to send response: ${error}`)), + ) + .finally(() => { + this._requestHandlerAbortControllers.delete(request.id); + }); + } + + private _onprogress(notification: ProgressNotification): void { + const { progressToken, ...params } = notification.params; + const messageId = Number(progressToken); + + const handler = this._progressHandlers.get(messageId); + if (!handler) { + this._onerror(new Error(`Received a progress notification for an unknown token: ${JSON.stringify(notification)}`)); + return; + } + + const responseHandler = this._responseHandlers.get(messageId); + if (this._timeoutInfo.has(messageId) && responseHandler) { + try { + this._resetTimeout(messageId); + } catch (error) { + responseHandler(error as Error); + return; + } + } + + handler(params); + } + + private _onresponse(response: JSONRPCResponse | JSONRPCError): void { + const messageId = Number(response.id); + const handler = this._responseHandlers.get(messageId); + if (handler === undefined) { + this._onerror( + new Error( + `Received a response for an unknown message ID: ${JSON.stringify(response)}`, + ), + ); + return; + } + + this._responseHandlers.delete(messageId); + this._progressHandlers.delete(messageId); + this._cleanupTimeout(messageId); + + if ("result" in response) { + handler(response); + } else { + const error = new McpError( + response.error.code, + response.error.message, + response.error.data, + ); + handler(error); + } + } + + get transport(): Transport | undefined { + return this._transport; + } + + /** + * Closes the connection. + */ + async close(): Promise { + await this._transport?.close(); + } + + /** + * A method to check if a capability is supported by the remote side, for the given method to be called. + * + * This should be implemented by subclasses. 
+ */
+  protected abstract assertCapabilityForMethod(
+    method: SendRequestT["method"],
+  ): void;
+
+  /**
+   * A method to check if a notification is supported by the local side, for the given method to be sent.
+   *
+   * This should be implemented by subclasses.
+   */
+  protected abstract assertNotificationCapability(
+    method: SendNotificationT["method"],
+  ): void;
+
+  /**
+   * A method to check if a request handler is supported by the local side, for the given method to be handled.
+   *
+   * This should be implemented by subclasses.
+   */
+  protected abstract assertRequestHandlerCapability(method: string): void;
+
+  /**
+   * Sends a request and waits for a response.
+   *
+   * Do not use this method to emit notifications! Use notification() instead.
+   */
+  request<T extends ZodType<object>>(
+    request: SendRequestT,
+    resultSchema: T,
+    options?: RequestOptions,
+  ): Promise<z.infer<T>> {
+    return new Promise((resolve, reject) => {
+      if (!this._transport) {
+        reject(new Error("Not connected"));
+        return;
+      }
+
+      if (this._options?.enforceStrictCapabilities === true) {
+        this.assertCapabilityForMethod(request.method);
+      }
+
+      options?.signal?.throwIfAborted();
+
+      const messageId = this._requestMessageId++;
+      const jsonrpcRequest: JSONRPCRequest = {
+        ...request,
+        jsonrpc: "2.0",
+        id: messageId,
+      };
+
+      if (options?.onprogress) {
+        this._progressHandlers.set(messageId, options.onprogress);
+        jsonrpcRequest.params = {
+          ...request.params,
+          _meta: { progressToken: messageId },
+        };
+      }
+
+      const cancel = (reason: unknown) => {
+        this._responseHandlers.delete(messageId);
+        this._progressHandlers.delete(messageId);
+        this._cleanupTimeout(messageId);
+
+        this._transport
+          ?.send({
+            jsonrpc: "2.0",
+            method: "notifications/cancelled",
+            params: {
+              requestId: messageId,
+              reason: String(reason),
+            },
+          })
+          .catch((error) =>
+            this._onerror(new Error(`Failed to send cancellation: ${error}`)),
+          );
+
+        reject(reason);
+      };
+
+      this._responseHandlers.set(messageId, (response) => {
+        if (options?.signal?.aborted) {
+          return;
+        }
+
+        if (response instanceof Error) {
+          return reject(response);
+        }
+
+        try {
+          const result = resultSchema.parse(response.result);
+          resolve(result);
+        } catch (error) {
+          reject(error);
+        }
+      });
+
+      options?.signal?.addEventListener("abort", () => {
+        cancel(options?.signal?.reason);
+      });
+
+      const timeout = options?.timeout ?? DEFAULT_REQUEST_TIMEOUT_MSEC;
+      const timeoutHandler = () => cancel(new McpError(
+        ErrorCode.RequestTimeout,
+        "Request timed out",
+        { timeout }
+      ));
+
+      this._setupTimeout(messageId, timeout, options?.maxTotalTimeout, timeoutHandler);
+
+      this._transport.send(jsonrpcRequest).catch((error) => {
+        this._cleanupTimeout(messageId);
+        reject(error);
+      });
+    });
+  }
+
+  /**
+   * Emits a notification, which is a one-way message that does not expect a response.
+   */
+  async notification(notification: SendNotificationT): Promise<void> {
+    if (!this._transport) {
+      throw new Error("Not connected");
+    }
+
+    this.assertNotificationCapability(notification.method);
+
+    const jsonrpcNotification: JSONRPCNotification = {
+      ...notification,
+      jsonrpc: "2.0",
+    };
+
+    await this._transport.send(jsonrpcNotification);
+  }
+
+  /**
+   * Registers a handler to invoke when this protocol object receives a request with the given method.
+   *
+   * Note that this will replace any previous request handler for the same method.
+   */
+  setRequestHandler<
+    T extends ZodObject<{
+      method: ZodLiteral<string>;
+    }>,
+  >(
+    requestSchema: T,
+    handler: (
+      request: z.infer<T>,
+      extra: RequestHandlerExtra,
+    ) => SendResultT | Promise<SendResultT>,
+  ): void {
+    const method = requestSchema.shape.method.value;
+    this.assertRequestHandlerCapability(method);
+    this._requestHandlers.set(method, (request, extra) =>
+      Promise.resolve(handler(requestSchema.parse(request), extra)),
+    );
+  }
+
+  /**
+   * Removes the request handler for the given method.
+   */
+  removeRequestHandler(method: string): void {
+    this._requestHandlers.delete(method);
+  }
+
+  /**
+   * Asserts that a request handler has not already been set for the given method, in preparation for a new one being automatically installed.
+   */
+  assertCanSetRequestHandler(method: string): void {
+    if (this._requestHandlers.has(method)) {
+      throw new Error(
+        `A request handler for ${method} already exists, which would be overridden`,
+      );
+    }
+  }
+
+  /**
+   * Registers a handler to invoke when this protocol object receives a notification with the given method.
+   *
+   * Note that this will replace any previous notification handler for the same method.
+   */
+  setNotificationHandler<
+    T extends ZodObject<{
+      method: ZodLiteral<string>;
+    }>,
+  >(
+    notificationSchema: T,
+    handler: (notification: z.infer<T>) => void | Promise<void>,
+  ): void {
+    this._notificationHandlers.set(
+      notificationSchema.shape.method.value,
+      (notification) =>
+        Promise.resolve(handler(notificationSchema.parse(notification))),
+    );
+  }
+
+  /**
+   * Removes the notification handler for the given method.
+   */
+  removeNotificationHandler(method: string): void {
+    this._notificationHandlers.delete(method);
+  }
+}
+
+export function mergeCapabilities<
+  T extends ServerCapabilities | ClientCapabilities,
+>(base: T, additional: T): T {
+  return Object.entries(additional).reduce(
+    (acc, [key, value]) => {
+      if (value && typeof value === "object") {
+        acc[key] = acc[key] ?
{ ...acc[key], ...value } : value; + } else { + acc[key] = value; + } + return acc; + }, + { ...base }, + ); +} + + + +--- +File: /src/shared/stdio.test.ts +--- + +import { JSONRPCMessage } from "../types.js"; +import { ReadBuffer } from "./stdio.js"; + +const testMessage: JSONRPCMessage = { + jsonrpc: "2.0", + method: "foobar", +}; + +test("should have no messages after initialization", () => { + const readBuffer = new ReadBuffer(); + expect(readBuffer.readMessage()).toBeNull(); +}); + +test("should only yield a message after a newline", () => { + const readBuffer = new ReadBuffer(); + + readBuffer.append(Buffer.from(JSON.stringify(testMessage))); + expect(readBuffer.readMessage()).toBeNull(); + + readBuffer.append(Buffer.from("\n")); + expect(readBuffer.readMessage()).toEqual(testMessage); + expect(readBuffer.readMessage()).toBeNull(); +}); + +test("should be reusable after clearing", () => { + const readBuffer = new ReadBuffer(); + + readBuffer.append(Buffer.from("foobar")); + readBuffer.clear(); + expect(readBuffer.readMessage()).toBeNull(); + + readBuffer.append(Buffer.from(JSON.stringify(testMessage))); + readBuffer.append(Buffer.from("\n")); + expect(readBuffer.readMessage()).toEqual(testMessage); +}); + + + +--- +File: /src/shared/stdio.ts +--- + +import { JSONRPCMessage, JSONRPCMessageSchema } from "../types.js"; + +/** + * Buffers a continuous stdio stream into discrete JSON-RPC messages. + */ +export class ReadBuffer { + private _buffer?: Buffer; + + append(chunk: Buffer): void { + this._buffer = this._buffer ? 
Buffer.concat([this._buffer, chunk]) : chunk;
+  }
+
+  readMessage(): JSONRPCMessage | null {
+    if (!this._buffer) {
+      return null;
+    }
+
+    const index = this._buffer.indexOf("\n");
+    if (index === -1) {
+      return null;
+    }
+
+    const line = this._buffer.toString("utf8", 0, index);
+    this._buffer = this._buffer.subarray(index + 1);
+    return deserializeMessage(line);
+  }
+
+  clear(): void {
+    this._buffer = undefined;
+  }
+}
+
+export function deserializeMessage(line: string): JSONRPCMessage {
+  return JSONRPCMessageSchema.parse(JSON.parse(line));
+}
+
+export function serializeMessage(message: JSONRPCMessage): string {
+  return JSON.stringify(message) + "\n";
+}
+
+
+
+---
+File: /src/shared/transport.ts
+---
+
+import { JSONRPCMessage } from "../types.js";
+
+/**
+ * Describes the minimal contract for an MCP transport that a client or server can communicate over.
+ */
+export interface Transport {
+  /**
+   * Starts processing messages on the transport, including any connection steps that might need to be taken.
+   *
+   * This method should only be called after callbacks are installed, or else messages may be lost.
+   *
+   * NOTE: This method should not be called explicitly when using Client, Server, or Protocol classes, as they will implicitly call start().
+   */
+  start(): Promise<void>;
+
+  /**
+   * Sends a JSON-RPC message (request or response).
+   */
+  send(message: JSONRPCMessage): Promise<void>;
+
+  /**
+   * Closes the connection.
+   */
+  close(): Promise<void>;
+
+  /**
+   * Callback for when the connection is closed for any reason.
+   *
+   * This should be invoked when close() is called as well.
+   */
+  onclose?: () => void;
+
+  /**
+   * Callback for when an error occurs.
+   *
+   * Note that errors are not necessarily fatal; they are used for reporting any kind of exceptional condition out of band.
+   */
+  onerror?: (error: Error) => void;
+
+  /**
+   * Callback for when a message (request or response) is received over the connection.
+ */ + onmessage?: (message: JSONRPCMessage) => void; + + /** + * The session ID generated for this connection. + */ + sessionId?: string; +} + + + +--- +File: /src/shared/uriTemplate.test.ts +--- + +import { UriTemplate } from "./uriTemplate.js"; + +describe("UriTemplate", () => { + describe("isTemplate", () => { + it("should return true for strings containing template expressions", () => { + expect(UriTemplate.isTemplate("{foo}")).toBe(true); + expect(UriTemplate.isTemplate("/users/{id}")).toBe(true); + expect(UriTemplate.isTemplate("http://example.com/{path}/{file}")).toBe(true); + expect(UriTemplate.isTemplate("/search{?q,limit}")).toBe(true); + }); + + it("should return false for strings without template expressions", () => { + expect(UriTemplate.isTemplate("")).toBe(false); + expect(UriTemplate.isTemplate("plain string")).toBe(false); + expect(UriTemplate.isTemplate("http://example.com/foo/bar")).toBe(false); + expect(UriTemplate.isTemplate("{}")).toBe(false); // Empty braces don't count + expect(UriTemplate.isTemplate("{ }")).toBe(false); // Just whitespace doesn't count + }); + }); + + describe("simple string expansion", () => { + it("should expand simple string variables", () => { + const template = new UriTemplate("http://example.com/users/{username}"); + expect(template.expand({ username: "fred" })).toBe( + "http://example.com/users/fred", + ); + }); + + it("should handle multiple variables", () => { + const template = new UriTemplate("{x,y}"); + expect(template.expand({ x: "1024", y: "768" })).toBe("1024,768"); + }); + + it("should encode reserved characters", () => { + const template = new UriTemplate("{var}"); + expect(template.expand({ var: "value with spaces" })).toBe( + "value%20with%20spaces", + ); + }); + }); + + describe("reserved expansion", () => { + it("should not encode reserved characters with + operator", () => { + const template = new UriTemplate("{+path}/here"); + expect(template.expand({ path: "/foo/bar" })).toBe("/foo/bar/here"); + }); 
+ }); + + describe("fragment expansion", () => { + it("should add # prefix and not encode reserved chars", () => { + const template = new UriTemplate("X{#var}"); + expect(template.expand({ var: "/test" })).toBe("X#/test"); + }); + }); + + describe("label expansion", () => { + it("should add . prefix", () => { + const template = new UriTemplate("X{.var}"); + expect(template.expand({ var: "test" })).toBe("X.test"); + }); + }); + + describe("path expansion", () => { + it("should add / prefix", () => { + const template = new UriTemplate("X{/var}"); + expect(template.expand({ var: "test" })).toBe("X/test"); + }); + }); + + describe("query expansion", () => { + it("should add ? prefix and name=value format", () => { + const template = new UriTemplate("X{?var}"); + expect(template.expand({ var: "test" })).toBe("X?var=test"); + }); + }); + + describe("form continuation expansion", () => { + it("should add & prefix and name=value format", () => { + const template = new UriTemplate("X{&var}"); + expect(template.expand({ var: "test" })).toBe("X&var=test"); + }); + }); + + describe("matching", () => { + it("should match simple strings and extract variables", () => { + const template = new UriTemplate("http://example.com/users/{username}"); + const match = template.match("http://example.com/users/fred"); + expect(match).toEqual({ username: "fred" }); + }); + + it("should match multiple variables", () => { + const template = new UriTemplate("/users/{username}/posts/{postId}"); + const match = template.match("/users/fred/posts/123"); + expect(match).toEqual({ username: "fred", postId: "123" }); + }); + + it("should return null for non-matching URIs", () => { + const template = new UriTemplate("/users/{username}"); + const match = template.match("/posts/123"); + expect(match).toBeNull(); + }); + + it("should handle exploded arrays", () => { + const template = new UriTemplate("{/list*}"); + const match = template.match("/red,green,blue"); + expect(match).toEqual({ list: ["red", 
"green", "blue"] }); + }); + }); + + describe("edge cases", () => { + it("should handle empty variables", () => { + const template = new UriTemplate("{empty}"); + expect(template.expand({})).toBe(""); + expect(template.expand({ empty: "" })).toBe(""); + }); + + it("should handle undefined variables", () => { + const template = new UriTemplate("{a}{b}{c}"); + expect(template.expand({ b: "2" })).toBe("2"); + }); + + it("should handle special characters in variable names", () => { + const template = new UriTemplate("{$var_name}"); + expect(template.expand({ "$var_name": "value" })).toBe("value"); + }); + }); + + describe("complex patterns", () => { + it("should handle nested path segments", () => { + const template = new UriTemplate("/api/{version}/{resource}/{id}"); + expect(template.expand({ + version: "v1", + resource: "users", + id: "123" + })).toBe("/api/v1/users/123"); + }); + + it("should handle query parameters with arrays", () => { + const template = new UriTemplate("/search{?tags*}"); + expect(template.expand({ + tags: ["nodejs", "typescript", "testing"] + })).toBe("/search?tags=nodejs,typescript,testing"); + }); + + it("should handle multiple query parameters", () => { + const template = new UriTemplate("/search{?q,page,limit}"); + expect(template.expand({ + q: "test", + page: "1", + limit: "10" + })).toBe("/search?q=test&page=1&limit=10"); + }); + }); + + describe("matching complex patterns", () => { + it("should match nested path segments", () => { + const template = new UriTemplate("/api/{version}/{resource}/{id}"); + const match = template.match("/api/v1/users/123"); + expect(match).toEqual({ + version: "v1", + resource: "users", + id: "123" + }); + }); + + it("should match query parameters", () => { + const template = new UriTemplate("/search{?q}"); + const match = template.match("/search?q=test"); + expect(match).toEqual({ q: "test" }); + }); + + it("should match multiple query parameters", () => { + const template = new 
UriTemplate("/search{?q,page}");
+      const match = template.match("/search?q=test&page=1");
+      expect(match).toEqual({ q: "test", page: "1" });
+    });
+
+    it("should handle partial matches correctly", () => {
+      const template = new UriTemplate("/users/{id}");
+      expect(template.match("/users/123/extra")).toBeNull();
+      expect(template.match("/users")).toBeNull();
+    });
+  });
+
+  describe("security and edge cases", () => {
+    it("should handle extremely long input strings", () => {
+      const longString = "x".repeat(100000);
+      const template = new UriTemplate(`/api/{param}`);
+      expect(template.expand({ param: longString })).toBe(`/api/${longString}`);
+      expect(template.match(`/api/${longString}`)).toEqual({ param: longString });
+    });
+
+    it("should handle deeply nested template expressions", () => {
+      const template = new UriTemplate("{a}{b}{c}{d}{e}{f}{g}{h}{i}{j}".repeat(1000));
+      expect(() => template.expand({
+        a: "1", b: "2", c: "3", d: "4", e: "5",
+        f: "6", g: "7", h: "8", i: "9", j: "0"
+      })).not.toThrow();
+    });
+
+    it("should handle malformed template expressions", () => {
+      expect(() => new UriTemplate("{unclosed")).toThrow();
+      expect(() => new UriTemplate("{}")).not.toThrow();
+      expect(() => new UriTemplate("{,}")).not.toThrow();
+      expect(() => new UriTemplate("{a}{")).toThrow();
+    });
+
+    it("should handle pathological regex patterns", () => {
+      const template = new UriTemplate("/api/{param}");
+      // Create a string that could cause catastrophic backtracking
+      const input = "/api/" + "a".repeat(100000);
+      expect(() => template.match(input)).not.toThrow();
+    });
+
+    it("should handle invalid UTF-8 sequences", () => {
+      const template = new UriTemplate("/api/{param}");
+      const invalidUtf8 = "\uFFFD\uFFFD\uFFFD";
+      expect(() => template.expand({ param: invalidUtf8 })).not.toThrow();
+      expect(() => template.match(`/api/${invalidUtf8}`)).not.toThrow();
+    });
+
+    it("should handle template/URI length mismatches", () => {
+      const template = new UriTemplate("/api/{param}");
+
      expect(template.match("/api/")).toBeNull();
+      expect(template.match("/api")).toBeNull();
+      expect(template.match("/api/value/extra")).toBeNull();
+    });
+
+    it("should handle repeated operators", () => {
+      const template = new UriTemplate("{?a}{?b}{?c}");
+      expect(template.expand({ a: "1", b: "2", c: "3" })).toBe("?a=1&b=2&c=3");
+    });
+
+    it("should handle overlapping variable names", () => {
+      const template = new UriTemplate("{var}{vara}");
+      expect(template.expand({ var: "1", vara: "2" })).toBe("12");
+    });
+
+    it("should handle empty segments", () => {
+      const template = new UriTemplate("///{a}////{b}////");
+      expect(template.expand({ a: "1", b: "2" })).toBe("///1////2////");
+      expect(template.match("///1////2////")).toEqual({ a: "1", b: "2" });
+    });
+
+    it("should handle maximum template expression limit", () => {
+      // Create a template with many expressions
+      const expressions = Array(10000).fill("{param}").join("");
+      expect(() => new UriTemplate(expressions)).not.toThrow();
+    });
+
+    it("should handle maximum variable name length", () => {
+      const longName = "a".repeat(10000);
+      const template = new UriTemplate(`{${longName}}`);
+      const vars: Record<string, string> = {};
+      vars[longName] = "value";
+      expect(() => template.expand(vars)).not.toThrow();
+    });
+  });
+});
+
+
+
+---
+File: /src/shared/uriTemplate.ts
+---
+
+// Claude-authored implementation of RFC 6570 URI Templates
+
+export type Variables = Record<string, string | string[]>;
+
+const MAX_TEMPLATE_LENGTH = 1000000; // 1MB
+const MAX_VARIABLE_LENGTH = 1000000; // 1MB
+const MAX_TEMPLATE_EXPRESSIONS = 10000;
+const MAX_REGEX_LENGTH = 1000000; // 1MB
+
+export class UriTemplate {
+  /**
+   * Returns true if the given string contains any URI template expressions.
+   * A template expression is a sequence of characters enclosed in curly braces,
+   * like {foo} or {?bar}.
+ */ + static isTemplate(str: string): boolean { + // Look for any sequence of characters between curly braces + // that isn't just whitespace + return /\{[^}\s]+\}/.test(str); + } + + private static validateLength( + str: string, + max: number, + context: string, + ): void { + if (str.length > max) { + throw new Error( + `${context} exceeds maximum length of ${max} characters (got ${str.length})`, + ); + } + } + private readonly template: string; + private readonly parts: Array< + | string + | { name: string; operator: string; names: string[]; exploded: boolean } + >; + + constructor(template: string) { + UriTemplate.validateLength(template, MAX_TEMPLATE_LENGTH, "Template"); + this.template = template; + this.parts = this.parse(template); + } + + toString(): string { + return this.template; + } + + private parse( + template: string, + ): Array< + | string + | { name: string; operator: string; names: string[]; exploded: boolean } + > { + const parts: Array< + | string + | { name: string; operator: string; names: string[]; exploded: boolean } + > = []; + let currentText = ""; + let i = 0; + let expressionCount = 0; + + while (i < template.length) { + if (template[i] === "{") { + if (currentText) { + parts.push(currentText); + currentText = ""; + } + const end = template.indexOf("}", i); + if (end === -1) throw new Error("Unclosed template expression"); + + expressionCount++; + if (expressionCount > MAX_TEMPLATE_EXPRESSIONS) { + throw new Error( + `Template contains too many expressions (max ${MAX_TEMPLATE_EXPRESSIONS})`, + ); + } + + const expr = template.slice(i + 1, end); + const operator = this.getOperator(expr); + const exploded = expr.includes("*"); + const names = this.getNames(expr); + const name = names[0]; + + // Validate variable name length + for (const name of names) { + UriTemplate.validateLength( + name, + MAX_VARIABLE_LENGTH, + "Variable name", + ); + } + + parts.push({ name, operator, names, exploded }); + i = end + 1; + } else { + currentText += 
template[i]; + i++; + } + } + + if (currentText) { + parts.push(currentText); + } + + return parts; + } + + private getOperator(expr: string): string { + const operators = ["+", "#", ".", "/", "?", "&"]; + return operators.find((op) => expr.startsWith(op)) || ""; + } + + private getNames(expr: string): string[] { + const operator = this.getOperator(expr); + return expr + .slice(operator.length) + .split(",") + .map((name) => name.replace("*", "").trim()) + .filter((name) => name.length > 0); + } + + private encodeValue(value: string, operator: string): string { + UriTemplate.validateLength(value, MAX_VARIABLE_LENGTH, "Variable value"); + if (operator === "+" || operator === "#") { + return encodeURI(value); + } + return encodeURIComponent(value); + } + + private expandPart( + part: { + name: string; + operator: string; + names: string[]; + exploded: boolean; + }, + variables: Variables, + ): string { + if (part.operator === "?" || part.operator === "&") { + const pairs = part.names + .map((name) => { + const value = variables[name]; + if (value === undefined) return ""; + const encoded = Array.isArray(value) + ? value.map((v) => this.encodeValue(v, part.operator)).join(",") + : this.encodeValue(value.toString(), part.operator); + return `${name}=${encoded}`; + }) + .filter((pair) => pair.length > 0); + + if (pairs.length === 0) return ""; + const separator = part.operator === "?" ? "?" : "&"; + return separator + pairs.join("&"); + } + + if (part.names.length > 1) { + const values = part.names + .map((name) => variables[name]) + .filter((v) => v !== undefined); + if (values.length === 0) return ""; + return values.map((v) => (Array.isArray(v) ? v[0] : v)).join(","); + } + + const value = variables[part.name]; + if (value === undefined) return ""; + + const values = Array.isArray(value) ? 
value : [value]; + const encoded = values.map((v) => this.encodeValue(v, part.operator)); + + switch (part.operator) { + case "": + return encoded.join(","); + case "+": + return encoded.join(","); + case "#": + return "#" + encoded.join(","); + case ".": + return "." + encoded.join("."); + case "/": + return "/" + encoded.join("/"); + default: + return encoded.join(","); + } + } + + expand(variables: Variables): string { + let result = ""; + let hasQueryParam = false; + + for (const part of this.parts) { + if (typeof part === "string") { + result += part; + continue; + } + + const expanded = this.expandPart(part, variables); + if (!expanded) continue; + + // Convert ? to & if we already have a query parameter + if ((part.operator === "?" || part.operator === "&") && hasQueryParam) { + result += expanded.replace("?", "&"); + } else { + result += expanded; + } + + if (part.operator === "?" || part.operator === "&") { + hasQueryParam = true; + } + } + + return result; + } + + private escapeRegExp(str: string): string { + return str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); + } + + private partToRegExp(part: { + name: string; + operator: string; + names: string[]; + exploded: boolean; + }): Array<{ pattern: string; name: string }> { + const patterns: Array<{ pattern: string; name: string }> = []; + + // Validate variable name length for matching + for (const name of part.names) { + UriTemplate.validateLength(name, MAX_VARIABLE_LENGTH, "Variable name"); + } + + if (part.operator === "?" || part.operator === "&") { + for (let i = 0; i < part.names.length; i++) { + const name = part.names[i]; + const prefix = i === 0 ? "\\" + part.operator : "&"; + patterns.push({ + pattern: prefix + this.escapeRegExp(name) + "=([^&]+)", + name, + }); + } + return patterns; + } + + let pattern: string; + const name = part.name; + + switch (part.operator) { + case "": + pattern = part.exploded ? 
"([^/]+(?:,[^/]+)*)" : "([^/,]+)"; + break; + case "+": + case "#": + pattern = "(.+)"; + break; + case ".": + pattern = "\\.([^/,]+)"; + break; + case "/": + pattern = "/" + (part.exploded ? "([^/]+(?:,[^/]+)*)" : "([^/,]+)"); + break; + default: + pattern = "([^/]+)"; + } + + patterns.push({ pattern, name }); + return patterns; + } + + match(uri: string): Variables | null { + UriTemplate.validateLength(uri, MAX_TEMPLATE_LENGTH, "URI"); + let pattern = "^"; + const names: Array<{ name: string; exploded: boolean }> = []; + + for (const part of this.parts) { + if (typeof part === "string") { + pattern += this.escapeRegExp(part); + } else { + const patterns = this.partToRegExp(part); + for (const { pattern: partPattern, name } of patterns) { + pattern += partPattern; + names.push({ name, exploded: part.exploded }); + } + } + } + + pattern += "$"; + UriTemplate.validateLength( + pattern, + MAX_REGEX_LENGTH, + "Generated regex pattern", + ); + const regex = new RegExp(pattern); + const match = uri.match(regex); + + if (!match) return null; + + const result: Variables = {}; + for (let i = 0; i < names.length; i++) { + const { name, exploded } = names[i]; + const value = match[i + 1]; + const cleanName = name.replace("*", ""); + + if (exploded && value.includes(",")) { + result[cleanName] = value.split(","); + } else { + result[cleanName] = value; + } + } + + return result; + } +} + + + +--- +File: /src/cli.ts +--- + +import WebSocket from "ws"; + +// eslint-disable-next-line @typescript-eslint/no-explicit-any +(global as any).WebSocket = WebSocket; + +import express from "express"; +import { Client } from "./client/index.js"; +import { SSEClientTransport } from "./client/sse.js"; +import { StdioClientTransport } from "./client/stdio.js"; +import { WebSocketClientTransport } from "./client/websocket.js"; +import { Server } from "./server/index.js"; +import { SSEServerTransport } from "./server/sse.js"; +import { StdioServerTransport } from "./server/stdio.js"; +import { 
ListResourcesResultSchema } from "./types.js"; + +async function runClient(url_or_command: string, args: string[]) { + const client = new Client( + { + name: "mcp-typescript test client", + version: "0.1.0", + }, + { + capabilities: { + sampling: {}, + }, + }, + ); + + let clientTransport; + + let url: URL | undefined = undefined; + try { + url = new URL(url_or_command); + } catch { + // Ignore + } + + if (url?.protocol === "http:" || url?.protocol === "https:") { + clientTransport = new SSEClientTransport(new URL(url_or_command)); + } else if (url?.protocol === "ws:" || url?.protocol === "wss:") { + clientTransport = new WebSocketClientTransport(new URL(url_or_command)); + } else { + clientTransport = new StdioClientTransport({ + command: url_or_command, + args, + }); + } + + console.log("Connected to server."); + + await client.connect(clientTransport); + console.log("Initialized."); + + await client.request({ method: "resources/list" }, ListResourcesResultSchema); + + await client.close(); + console.log("Closed."); +} + +async function runServer(port: number | null) { + if (port !== null) { + const app = express(); + + let servers: Server[] = []; + + app.get("/sse", async (req, res) => { + console.log("Got new SSE connection"); + + const transport = new SSEServerTransport("/message", res); + const server = new Server( + { + name: "mcp-typescript test server", + version: "0.1.0", + }, + { + capabilities: {}, + }, + ); + + servers.push(server); + + server.onclose = () => { + console.log("SSE connection closed"); + servers = servers.filter((s) => s !== server); + }; + + await server.connect(transport); + }); + + app.post("/message", async (req, res) => { + console.log("Received message"); + + const sessionId = req.query.sessionId as string; + const transport = servers + .map((s) => s.transport as SSEServerTransport) + .find((t) => t.sessionId === sessionId); + if (!transport) { + res.status(404).send("Session not found"); + return; + } + + await 
transport.handlePostMessage(req, res); + }); + + app.listen(port, () => { + console.log(`Server running on http://localhost:${port}/sse`); + }); + } else { + const server = new Server( + { + name: "mcp-typescript test server", + version: "0.1.0", + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {}, + logging: {}, + }, + }, + ); + + const transport = new StdioServerTransport(); + await server.connect(transport); + + console.log("Server running on stdio"); + } +} + +const args = process.argv.slice(2); +const command = args[0]; +switch (command) { + case "client": + if (args.length < 2) { + console.error("Usage: client [args...]"); + process.exit(1); + } + + runClient(args[1], args.slice(2)).catch((error) => { + console.error(error); + process.exit(1); + }); + + break; + + case "server": { + const port = args[1] ? parseInt(args[1]) : null; + runServer(port).catch((error) => { + console.error(error); + process.exit(1); + }); + + break; + } + + default: + console.error("Unrecognized command:", command); +} + + + +--- +File: /src/inMemory.test.ts +--- + +import { InMemoryTransport } from "./inMemory.js"; +import { JSONRPCMessage } from "./types.js"; + +describe("InMemoryTransport", () => { + let clientTransport: InMemoryTransport; + let serverTransport: InMemoryTransport; + + beforeEach(() => { + [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair(); + }); + + test("should create linked pair", () => { + expect(clientTransport).toBeDefined(); + expect(serverTransport).toBeDefined(); + }); + + test("should start without error", async () => { + await expect(clientTransport.start()).resolves.not.toThrow(); + await expect(serverTransport.start()).resolves.not.toThrow(); + }); + + test("should send message from client to server", async () => { + const message: JSONRPCMessage = { + jsonrpc: "2.0", + method: "test", + id: 1, + }; + + let receivedMessage: JSONRPCMessage | undefined; + serverTransport.onmessage = (msg) => { + receivedMessage 
= msg; + }; + + await clientTransport.send(message); + expect(receivedMessage).toEqual(message); + }); + + test("should send message from server to client", async () => { + const message: JSONRPCMessage = { + jsonrpc: "2.0", + method: "test", + id: 1, + }; + + let receivedMessage: JSONRPCMessage | undefined; + clientTransport.onmessage = (msg) => { + receivedMessage = msg; + }; + + await serverTransport.send(message); + expect(receivedMessage).toEqual(message); + }); + + test("should handle close", async () => { + let clientClosed = false; + let serverClosed = false; + + clientTransport.onclose = () => { + clientClosed = true; + }; + + serverTransport.onclose = () => { + serverClosed = true; + }; + + await clientTransport.close(); + expect(clientClosed).toBe(true); + expect(serverClosed).toBe(true); + }); + + test("should throw error when sending after close", async () => { + await clientTransport.close(); + await expect( + clientTransport.send({ jsonrpc: "2.0", method: "test", id: 1 }), + ).rejects.toThrow("Not connected"); + }); + + test("should queue messages sent before start", async () => { + const message: JSONRPCMessage = { + jsonrpc: "2.0", + method: "test", + id: 1, + }; + + let receivedMessage: JSONRPCMessage | undefined; + serverTransport.onmessage = (msg) => { + receivedMessage = msg; + }; + + await clientTransport.send(message); + await serverTransport.start(); + expect(receivedMessage).toEqual(message); + }); +}); + + + +--- +File: /src/inMemory.ts +--- + +import { Transport } from "./shared/transport.js"; +import { JSONRPCMessage } from "./types.js"; + +/** + * In-memory transport for creating clients and servers that talk to each other within the same process. 
+ */
+export class InMemoryTransport implements Transport {
+  private _otherTransport?: InMemoryTransport;
+  private _messageQueue: JSONRPCMessage[] = [];
+
+  onclose?: () => void;
+  onerror?: (error: Error) => void;
+  onmessage?: (message: JSONRPCMessage) => void;
+  sessionId?: string;
+
+  /**
+   * Creates a pair of linked in-memory transports that can communicate with each other. One should be passed to a Client and one to a Server.
+   */
+  static createLinkedPair(): [InMemoryTransport, InMemoryTransport] {
+    const clientTransport = new InMemoryTransport();
+    const serverTransport = new InMemoryTransport();
+    clientTransport._otherTransport = serverTransport;
+    serverTransport._otherTransport = clientTransport;
+    return [clientTransport, serverTransport];
+  }
+
+  async start(): Promise<void> {
+    // Process any messages that were queued before start was called
+    while (this._messageQueue.length > 0) {
+      const message = this._messageQueue.shift();
+      if (message) {
+        this.onmessage?.(message);
+      }
+    }
+  }
+
+  async close(): Promise<void> {
+    const other = this._otherTransport;
+    this._otherTransport = undefined;
+    await other?.close();
+    this.onclose?.();
+  }
+
+  async send(message: JSONRPCMessage): Promise<void> {
+    if (!this._otherTransport) {
+      throw new Error("Not connected");
+    }
+
+    if (this._otherTransport.onmessage) {
+      this._otherTransport.onmessage(message);
+    } else {
+      this._otherTransport._messageQueue.push(message);
+    }
+  }
+}
+
+
+
+---
+File: /src/types.ts
+---
+
+import { z, ZodTypeAny } from "zod";
+
+export const LATEST_PROTOCOL_VERSION = "2024-11-05";
+export const SUPPORTED_PROTOCOL_VERSIONS = [
+  LATEST_PROTOCOL_VERSION,
+  "2024-10-07",
+];
+
+/* JSON-RPC types */
+export const JSONRPC_VERSION = "2.0";
+
+/**
+ * A progress token, used to associate progress notifications with the original request.
+ */
+export const ProgressTokenSchema = z.union([z.string(), z.number().int()]);
+
+/**
+ * An opaque token used to represent a cursor for pagination.
+ */ +export const CursorSchema = z.string(); + +const BaseRequestParamsSchema = z + .object({ + _meta: z.optional( + z + .object({ + /** + * If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications. + */ + progressToken: z.optional(ProgressTokenSchema), + }) + .passthrough(), + ), + }) + .passthrough(); + +export const RequestSchema = z.object({ + method: z.string(), + params: z.optional(BaseRequestParamsSchema), +}); + +const BaseNotificationParamsSchema = z + .object({ + /** + * This parameter name is reserved by MCP to allow clients and servers to attach additional metadata to their notifications. + */ + _meta: z.optional(z.object({}).passthrough()), + }) + .passthrough(); + +export const NotificationSchema = z.object({ + method: z.string(), + params: z.optional(BaseNotificationParamsSchema), +}); + +export const ResultSchema = z + .object({ + /** + * This result property is reserved by the protocol to allow clients and servers to attach additional metadata to their responses. + */ + _meta: z.optional(z.object({}).passthrough()), + }) + .passthrough(); + +/** + * A uniquely identifying ID for a request in JSON-RPC. + */ +export const RequestIdSchema = z.union([z.string(), z.number().int()]); + +/** + * A request that expects a response. + */ +export const JSONRPCRequestSchema = z + .object({ + jsonrpc: z.literal(JSONRPC_VERSION), + id: RequestIdSchema, + }) + .merge(RequestSchema) + .strict(); + +/** + * A notification which does not expect a response. + */ +export const JSONRPCNotificationSchema = z + .object({ + jsonrpc: z.literal(JSONRPC_VERSION), + }) + .merge(NotificationSchema) + .strict(); + +/** + * A successful (non-error) response to a request. 
+ */ +export const JSONRPCResponseSchema = z + .object({ + jsonrpc: z.literal(JSONRPC_VERSION), + id: RequestIdSchema, + result: ResultSchema, + }) + .strict(); + +/** + * Error codes defined by the JSON-RPC specification. + */ +export enum ErrorCode { + // SDK error codes + ConnectionClosed = -32000, + RequestTimeout = -32001, + + // Standard JSON-RPC error codes + ParseError = -32700, + InvalidRequest = -32600, + MethodNotFound = -32601, + InvalidParams = -32602, + InternalError = -32603, +} + +/** + * A response to a request that indicates an error occurred. + */ +export const JSONRPCErrorSchema = z + .object({ + jsonrpc: z.literal(JSONRPC_VERSION), + id: RequestIdSchema, + error: z.object({ + /** + * The error type that occurred. + */ + code: z.number().int(), + /** + * A short description of the error. The message SHOULD be limited to a concise single sentence. + */ + message: z.string(), + /** + * Additional information about the error. The value of this member is defined by the sender (e.g. detailed error information, nested errors etc.). + */ + data: z.optional(z.unknown()), + }), + }) + .strict(); + +export const JSONRPCMessageSchema = z.union([ + JSONRPCRequestSchema, + JSONRPCNotificationSchema, + JSONRPCResponseSchema, + JSONRPCErrorSchema, +]); + +/* Empty result */ +/** + * A response that indicates success but carries no data. + */ +export const EmptyResultSchema = ResultSchema.strict(); + +/* Cancellation */ +/** + * This notification can be sent by either side to indicate that it is cancelling a previously-issued request. + * + * The request SHOULD still be in-flight, but due to communication latency, it is always possible that this notification MAY arrive after the request has already finished. + * + * This notification indicates that the result will be unused, so any associated processing SHOULD cease. + * + * A client MUST NOT attempt to cancel its `initialize` request. 
+ */ +export const CancelledNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/cancelled"), + params: BaseNotificationParamsSchema.extend({ + /** + * The ID of the request to cancel. + * + * This MUST correspond to the ID of a request previously issued in the same direction. + */ + requestId: RequestIdSchema, + + /** + * An optional string describing the reason for the cancellation. This MAY be logged or presented to the user. + */ + reason: z.string().optional(), + }), +}); + +/* Initialization */ +/** + * Describes the name and version of an MCP implementation. + */ +export const ImplementationSchema = z + .object({ + name: z.string(), + version: z.string(), + }) + .passthrough(); + +/** + * Capabilities a client may support. Known capabilities are defined here, in this schema, but this is not a closed set: any client can define its own, additional capabilities. + */ +export const ClientCapabilitiesSchema = z + .object({ + /** + * Experimental, non-standard capabilities that the client supports. + */ + experimental: z.optional(z.object({}).passthrough()), + /** + * Present if the client supports sampling from an LLM. + */ + sampling: z.optional(z.object({}).passthrough()), + /** + * Present if the client supports listing roots. + */ + roots: z.optional( + z + .object({ + /** + * Whether the client supports issuing notifications for changes to the roots list. + */ + listChanged: z.optional(z.boolean()), + }) + .passthrough(), + ), + }) + .passthrough(); + +/** + * This request is sent from the client to the server when it first connects, asking it to begin initialization. + */ +export const InitializeRequestSchema = RequestSchema.extend({ + method: z.literal("initialize"), + params: BaseRequestParamsSchema.extend({ + /** + * The latest version of the Model Context Protocol that the client supports. The client MAY decide to support older versions as well. 
+ */ + protocolVersion: z.string(), + capabilities: ClientCapabilitiesSchema, + clientInfo: ImplementationSchema, + }), +}); + +/** + * Capabilities that a server may support. Known capabilities are defined here, in this schema, but this is not a closed set: any server can define its own, additional capabilities. + */ +export const ServerCapabilitiesSchema = z + .object({ + /** + * Experimental, non-standard capabilities that the server supports. + */ + experimental: z.optional(z.object({}).passthrough()), + /** + * Present if the server supports sending log messages to the client. + */ + logging: z.optional(z.object({}).passthrough()), + /** + * Present if the server offers any prompt templates. + */ + prompts: z.optional( + z + .object({ + /** + * Whether this server supports issuing notifications for changes to the prompt list. + */ + listChanged: z.optional(z.boolean()), + }) + .passthrough(), + ), + /** + * Present if the server offers any resources to read. + */ + resources: z.optional( + z + .object({ + /** + * Whether this server supports clients subscribing to resource updates. + */ + subscribe: z.optional(z.boolean()), + + /** + * Whether this server supports issuing notifications for changes to the resource list. + */ + listChanged: z.optional(z.boolean()), + }) + .passthrough(), + ), + /** + * Present if the server offers any tools to call. + */ + tools: z.optional( + z + .object({ + /** + * Whether this server supports issuing notifications for changes to the tool list. + */ + listChanged: z.optional(z.boolean()), + }) + .passthrough(), + ), + }) + .passthrough(); + +/** + * After receiving an initialize request from the client, the server sends this response. + */ +export const InitializeResultSchema = ResultSchema.extend({ + /** + * The version of the Model Context Protocol that the server wants to use. This may not match the version that the client requested. If the client cannot support this version, it MUST disconnect. 
+ */ + protocolVersion: z.string(), + capabilities: ServerCapabilitiesSchema, + serverInfo: ImplementationSchema, + /** + * Instructions describing how to use the server and its features. + * + * This can be used by clients to improve the LLM's understanding of available tools, resources, etc. It can be thought of like a "hint" to the model. For example, this information MAY be added to the system prompt. + */ + instructions: z.optional(z.string()), +}); + +/** + * This notification is sent from the client to the server after initialization has finished. + */ +export const InitializedNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/initialized"), +}); + +/* Ping */ +/** + * A ping, issued by either the server or the client, to check that the other party is still alive. The receiver must promptly respond, or else may be disconnected. + */ +export const PingRequestSchema = RequestSchema.extend({ + method: z.literal("ping"), +}); + +/* Progress notifications */ +export const ProgressSchema = z + .object({ + /** + * The progress thus far. This should increase every time progress is made, even if the total is unknown. + */ + progress: z.number(), + /** + * Total number of items to process (or total progress required), if known. + */ + total: z.optional(z.number()), + }) + .passthrough(); + +/** + * An out-of-band notification used to inform the receiver of a progress update for a long-running request. + */ +export const ProgressNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/progress"), + params: BaseNotificationParamsSchema.merge(ProgressSchema).extend({ + /** + * The progress token which was given in the initial request, used to associate this notification with the request that is proceeding. 
+ */ + progressToken: ProgressTokenSchema, + }), +}); + +/* Pagination */ +export const PaginatedRequestSchema = RequestSchema.extend({ + params: BaseRequestParamsSchema.extend({ + /** + * An opaque token representing the current pagination position. + * If provided, the server should return results starting after this cursor. + */ + cursor: z.optional(CursorSchema), + }).optional(), +}); + +export const PaginatedResultSchema = ResultSchema.extend({ + /** + * An opaque token representing the pagination position after the last returned result. + * If present, there may be more results available. + */ + nextCursor: z.optional(CursorSchema), +}); + +/* Resources */ +/** + * The contents of a specific resource or sub-resource. + */ +export const ResourceContentsSchema = z + .object({ + /** + * The URI of this resource. + */ + uri: z.string(), + /** + * The MIME type of this resource, if known. + */ + mimeType: z.optional(z.string()), + }) + .passthrough(); + +export const TextResourceContentsSchema = ResourceContentsSchema.extend({ + /** + * The text of the item. This must only be set if the item can actually be represented as text (not binary data). + */ + text: z.string(), +}); + +export const BlobResourceContentsSchema = ResourceContentsSchema.extend({ + /** + * A base64-encoded string representing the binary data of the item. + */ + blob: z.string().base64(), +}); + +/** + * A known resource that the server is capable of reading. + */ +export const ResourceSchema = z + .object({ + /** + * The URI of this resource. + */ + uri: z.string(), + + /** + * A human-readable name for this resource. + * + * This can be used by clients to populate UI elements. + */ + name: z.string(), + + /** + * A description of what this resource represents. + * + * This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model. 
+ */ + description: z.optional(z.string()), + + /** + * The MIME type of this resource, if known. + */ + mimeType: z.optional(z.string()), + }) + .passthrough(); + +/** + * A template description for resources available on the server. + */ +export const ResourceTemplateSchema = z + .object({ + /** + * A URI template (according to RFC 6570) that can be used to construct resource URIs. + */ + uriTemplate: z.string(), + + /** + * A human-readable name for the type of resource this template refers to. + * + * This can be used by clients to populate UI elements. + */ + name: z.string(), + + /** + * A description of what this template is for. + * + * This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model. + */ + description: z.optional(z.string()), + + /** + * The MIME type for all resources that match this template. This should only be included if all resources matching this template have the same type. + */ + mimeType: z.optional(z.string()), + }) + .passthrough(); + +/** + * Sent from the client to request a list of resources the server has. + */ +export const ListResourcesRequestSchema = PaginatedRequestSchema.extend({ + method: z.literal("resources/list"), +}); + +/** + * The server's response to a resources/list request from the client. + */ +export const ListResourcesResultSchema = PaginatedResultSchema.extend({ + resources: z.array(ResourceSchema), +}); + +/** + * Sent from the client to request a list of resource templates the server has. + */ +export const ListResourceTemplatesRequestSchema = PaginatedRequestSchema.extend( + { + method: z.literal("resources/templates/list"), + }, +); + +/** + * The server's response to a resources/templates/list request from the client. 
+ */ +export const ListResourceTemplatesResultSchema = PaginatedResultSchema.extend({ + resourceTemplates: z.array(ResourceTemplateSchema), +}); + +/** + * Sent from the client to the server, to read a specific resource URI. + */ +export const ReadResourceRequestSchema = RequestSchema.extend({ + method: z.literal("resources/read"), + params: BaseRequestParamsSchema.extend({ + /** + * The URI of the resource to read. The URI can use any protocol; it is up to the server how to interpret it. + */ + uri: z.string(), + }), +}); + +/** + * The server's response to a resources/read request from the client. + */ +export const ReadResourceResultSchema = ResultSchema.extend({ + contents: z.array( + z.union([TextResourceContentsSchema, BlobResourceContentsSchema]), + ), +}); + +/** + * An optional notification from the server to the client, informing it that the list of resources it can read from has changed. This may be issued by servers without any previous subscription from the client. + */ +export const ResourceListChangedNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/resources/list_changed"), +}); + +/** + * Sent from the client to request resources/updated notifications from the server whenever a particular resource changes. + */ +export const SubscribeRequestSchema = RequestSchema.extend({ + method: z.literal("resources/subscribe"), + params: BaseRequestParamsSchema.extend({ + /** + * The URI of the resource to subscribe to. The URI can use any protocol; it is up to the server how to interpret it. + */ + uri: z.string(), + }), +}); + +/** + * Sent from the client to request cancellation of resources/updated notifications from the server. This should follow a previous resources/subscribe request. + */ +export const UnsubscribeRequestSchema = RequestSchema.extend({ + method: z.literal("resources/unsubscribe"), + params: BaseRequestParamsSchema.extend({ + /** + * The URI of the resource to unsubscribe from. 
+ */ + uri: z.string(), + }), +}); + +/** + * A notification from the server to the client, informing it that a resource has changed and may need to be read again. This should only be sent if the client previously sent a resources/subscribe request. + */ +export const ResourceUpdatedNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/resources/updated"), + params: BaseNotificationParamsSchema.extend({ + /** + * The URI of the resource that has been updated. This might be a sub-resource of the one that the client actually subscribed to. + */ + uri: z.string(), + }), +}); + +/* Prompts */ +/** + * Describes an argument that a prompt can accept. + */ +export const PromptArgumentSchema = z + .object({ + /** + * The name of the argument. + */ + name: z.string(), + /** + * A human-readable description of the argument. + */ + description: z.optional(z.string()), + /** + * Whether this argument must be provided. + */ + required: z.optional(z.boolean()), + }) + .passthrough(); + +/** + * A prompt or prompt template that the server offers. + */ +export const PromptSchema = z + .object({ + /** + * The name of the prompt or prompt template. + */ + name: z.string(), + /** + * An optional description of what this prompt provides + */ + description: z.optional(z.string()), + /** + * A list of arguments to use for templating the prompt. + */ + arguments: z.optional(z.array(PromptArgumentSchema)), + }) + .passthrough(); + +/** + * Sent from the client to request a list of prompts and prompt templates the server has. + */ +export const ListPromptsRequestSchema = PaginatedRequestSchema.extend({ + method: z.literal("prompts/list"), +}); + +/** + * The server's response to a prompts/list request from the client. + */ +export const ListPromptsResultSchema = PaginatedResultSchema.extend({ + prompts: z.array(PromptSchema), +}); + +/** + * Used by the client to get a prompt provided by the server. 
+ */ +export const GetPromptRequestSchema = RequestSchema.extend({ + method: z.literal("prompts/get"), + params: BaseRequestParamsSchema.extend({ + /** + * The name of the prompt or prompt template. + */ + name: z.string(), + /** + * Arguments to use for templating the prompt. + */ + arguments: z.optional(z.record(z.string())), + }), +}); + +/** + * Text provided to or from an LLM. + */ +export const TextContentSchema = z + .object({ + type: z.literal("text"), + /** + * The text content of the message. + */ + text: z.string(), + }) + .passthrough(); + +/** + * An image provided to or from an LLM. + */ +export const ImageContentSchema = z + .object({ + type: z.literal("image"), + /** + * The base64-encoded image data. + */ + data: z.string().base64(), + /** + * The MIME type of the image. Different providers may support different image types. + */ + mimeType: z.string(), + }) + .passthrough(); + +/** + * The contents of a resource, embedded into a prompt or tool call result. + */ +export const EmbeddedResourceSchema = z + .object({ + type: z.literal("resource"), + resource: z.union([TextResourceContentsSchema, BlobResourceContentsSchema]), + }) + .passthrough(); + +/** + * Describes a message returned as part of a prompt. + */ +export const PromptMessageSchema = z + .object({ + role: z.enum(["user", "assistant"]), + content: z.union([ + TextContentSchema, + ImageContentSchema, + EmbeddedResourceSchema, + ]), + }) + .passthrough(); + +/** + * The server's response to a prompts/get request from the client. + */ +export const GetPromptResultSchema = ResultSchema.extend({ + /** + * An optional description for the prompt. + */ + description: z.optional(z.string()), + messages: z.array(PromptMessageSchema), +}); + +/** + * An optional notification from the server to the client, informing it that the list of prompts it offers has changed. This may be issued by servers without any previous subscription from the client. 
+ */ +export const PromptListChangedNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/prompts/list_changed"), +}); + +/* Tools */ +/** + * Definition for a tool the client can call. + */ +export const ToolSchema = z + .object({ + /** + * The name of the tool. + */ + name: z.string(), + /** + * A human-readable description of the tool. + */ + description: z.optional(z.string()), + /** + * A JSON Schema object defining the expected parameters for the tool. + */ + inputSchema: z + .object({ + type: z.literal("object"), + properties: z.optional(z.object({}).passthrough()), + }) + .passthrough(), + }) + .passthrough(); + +/** + * Sent from the client to request a list of tools the server has. + */ +export const ListToolsRequestSchema = PaginatedRequestSchema.extend({ + method: z.literal("tools/list"), +}); + +/** + * The server's response to a tools/list request from the client. + */ +export const ListToolsResultSchema = PaginatedResultSchema.extend({ + tools: z.array(ToolSchema), +}); + +/** + * The server's response to a tool call. + */ +export const CallToolResultSchema = ResultSchema.extend({ + content: z.array( + z.union([TextContentSchema, ImageContentSchema, EmbeddedResourceSchema]), + ), + isError: z.boolean().default(false).optional(), +}); + +/** + * CallToolResultSchema extended with backwards compatibility to protocol version 2024-10-07. + */ +export const CompatibilityCallToolResultSchema = CallToolResultSchema.or( + ResultSchema.extend({ + toolResult: z.unknown(), + }), +); + +/** + * Used by the client to invoke a tool provided by the server. + */ +export const CallToolRequestSchema = RequestSchema.extend({ + method: z.literal("tools/call"), + params: BaseRequestParamsSchema.extend({ + name: z.string(), + arguments: z.optional(z.record(z.unknown())), + }), +}); + +/** + * An optional notification from the server to the client, informing it that the list of tools it offers has changed. 
This may be issued by servers without any previous subscription from the client. + */ +export const ToolListChangedNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/tools/list_changed"), +}); + +/* Logging */ +/** + * The severity of a log message. + */ +export const LoggingLevelSchema = z.enum([ + "debug", + "info", + "notice", + "warning", + "error", + "critical", + "alert", + "emergency", +]); + +/** + * A request from the client to the server, to enable or adjust logging. + */ +export const SetLevelRequestSchema = RequestSchema.extend({ + method: z.literal("logging/setLevel"), + params: BaseRequestParamsSchema.extend({ + /** + * The level of logging that the client wants to receive from the server. The server should send all logs at this level and higher (i.e., more severe) to the client as notifications/logging/message. + */ + level: LoggingLevelSchema, + }), +}); + +/** + * Notification of a log message passed from server to client. If no logging/setLevel request has been sent from the client, the server MAY decide which messages to send automatically. + */ +export const LoggingMessageNotificationSchema = NotificationSchema.extend({ + method: z.literal("notifications/message"), + params: BaseNotificationParamsSchema.extend({ + /** + * The severity of this log message. + */ + level: LoggingLevelSchema, + /** + * An optional name of the logger issuing this message. + */ + logger: z.optional(z.string()), + /** + * The data to be logged, such as a string message or an object. Any JSON serializable type is allowed here. + */ + data: z.unknown(), + }), +}); + +/* Sampling */ +/** + * Hints to use for model selection. + */ +export const ModelHintSchema = z + .object({ + /** + * A hint for a model name. + */ + name: z.string().optional(), + }) + .passthrough(); + +/** + * The server's preferences for model selection, requested of the client during sampling. 
+ */ +export const ModelPreferencesSchema = z + .object({ + /** + * Optional hints to use for model selection. + */ + hints: z.optional(z.array(ModelHintSchema)), + /** + * How much to prioritize cost when selecting a model. + */ + costPriority: z.optional(z.number().min(0).max(1)), + /** + * How much to prioritize sampling speed (latency) when selecting a model. + */ + speedPriority: z.optional(z.number().min(0).max(1)), + /** + * How much to prioritize intelligence and capabilities when selecting a model. + */ + intelligencePriority: z.optional(z.number().min(0).max(1)), + }) + .passthrough(); + +/** + * Describes a message issued to or received from an LLM API. + */ +export const SamplingMessageSchema = z + .object({ + role: z.enum(["user", "assistant"]), + content: z.union([TextContentSchema, ImageContentSchema]), + }) + .passthrough(); + +/** + * A request from the server to sample an LLM via the client. The client has full discretion over which model to select. The client should also inform the user before beginning sampling, to allow them to inspect the request (human in the loop) and decide whether to approve it. + */ +export const CreateMessageRequestSchema = RequestSchema.extend({ + method: z.literal("sampling/createMessage"), + params: BaseRequestParamsSchema.extend({ + messages: z.array(SamplingMessageSchema), + /** + * An optional system prompt the server wants to use for sampling. The client MAY modify or omit this prompt. + */ + systemPrompt: z.optional(z.string()), + /** + * A request to include context from one or more MCP servers (including the caller), to be attached to the prompt. The client MAY ignore this request. + */ + includeContext: z.optional(z.enum(["none", "thisServer", "allServers"])), + temperature: z.optional(z.number()), + /** + * The maximum number of tokens to sample, as requested by the server. The client MAY choose to sample fewer tokens than requested. 
+ */ + maxTokens: z.number().int(), + stopSequences: z.optional(z.array(z.string())), + /** + * Optional metadata to pass through to the LLM provider. The format of this metadata is provider-specific. + */ + metadata: z.optional(z.object({}).passthrough()), + /** + * The server's preferences for which model to select. + */ + modelPreferences: z.optional(ModelPreferencesSchema), + }), +}); + +/** + * The client's response to a sampling/create_message request from the server. The client should inform the user before returning the sampled message, to allow them to inspect the response (human in the loop) and decide whether to allow the server to see it. + */ +export const CreateMessageResultSchema = ResultSchema.extend({ + /** + * The name of the model that generated the message. + */ + model: z.string(), + /** + * The reason why sampling stopped. + */ + stopReason: z.optional( + z.enum(["endTurn", "stopSequence", "maxTokens"]).or(z.string()), + ), + role: z.enum(["user", "assistant"]), + content: z.discriminatedUnion("type", [ + TextContentSchema, + ImageContentSchema, + ]), +}); + +/* Autocomplete */ +/** + * A reference to a resource or resource template definition. + */ +export const ResourceReferenceSchema = z + .object({ + type: z.literal("ref/resource"), + /** + * The URI or URI template of the resource. + */ + uri: z.string(), + }) + .passthrough(); + +/** + * Identifies a prompt. + */ +export const PromptReferenceSchema = z + .object({ + type: z.literal("ref/prompt"), + /** + * The name of the prompt or prompt template + */ + name: z.string(), + }) + .passthrough(); + +/** + * A request from the client to the server, to ask for completion options. 
+ */ +export const CompleteRequestSchema = RequestSchema.extend({ + method: z.literal("completion/complete"), + params: BaseRequestParamsSchema.extend({ + ref: z.union([PromptReferenceSchema, ResourceReferenceSchema]), + /** + * The argument's information + */ + argument: z + .object({ + /** + * The name of the argument + */ + name: z.string(), + /** + * The value of the argument to use for completion matching. + */ + value: z.string(), + }) + .passthrough(), + }), +}); + +/** + * The server's response to a completion/complete request + */ +export const CompleteResultSchema = ResultSchema.extend({ + completion: z + .object({ + /** + * An array of completion values. Must not exceed 100 items. + */ + values: z.array(z.string()).max(100), + /** + * The total number of completion options available. This can exceed the number of values actually sent in the response. + */ + total: z.optional(z.number().int()), + /** + * Indicates whether there are additional completion options beyond those provided in the current response, even if the exact total is unknown. + */ + hasMore: z.optional(z.boolean()), + }) + .passthrough(), +}); + +/* Roots */ +/** + * Represents a root directory or file that the server can operate on. + */ +export const RootSchema = z + .object({ + /** + * The URI identifying the root. This *must* start with file:// for now. + */ + uri: z.string().startsWith("file://"), + /** + * An optional name for the root. + */ + name: z.optional(z.string()), + }) + .passthrough(); + +/** + * Sent from the server to request a list of root URIs from the client. + */ +export const ListRootsRequestSchema = RequestSchema.extend({ + method: z.literal("roots/list"), +}); + +/** + * The client's response to a roots/list request from the server. + */ +export const ListRootsResultSchema = ResultSchema.extend({ + roots: z.array(RootSchema), +}); + +/** + * A notification from the client to the server, informing it that the list of roots has changed. 
+ */
+export const RootsListChangedNotificationSchema = NotificationSchema.extend({
+  method: z.literal("notifications/roots/list_changed"),
+});
+
+/* Client messages */
+export const ClientRequestSchema = z.union([
+  PingRequestSchema,
+  InitializeRequestSchema,
+  CompleteRequestSchema,
+  SetLevelRequestSchema,
+  GetPromptRequestSchema,
+  ListPromptsRequestSchema,
+  ListResourcesRequestSchema,
+  ListResourceTemplatesRequestSchema,
+  ReadResourceRequestSchema,
+  SubscribeRequestSchema,
+  UnsubscribeRequestSchema,
+  CallToolRequestSchema,
+  ListToolsRequestSchema,
+]);
+
+export const ClientNotificationSchema = z.union([
+  CancelledNotificationSchema,
+  ProgressNotificationSchema,
+  InitializedNotificationSchema,
+  RootsListChangedNotificationSchema,
+]);
+
+export const ClientResultSchema = z.union([
+  EmptyResultSchema,
+  CreateMessageResultSchema,
+  ListRootsResultSchema,
+]);
+
+/* Server messages */
+export const ServerRequestSchema = z.union([
+  PingRequestSchema,
+  CreateMessageRequestSchema,
+  ListRootsRequestSchema,
+]);
+
+export const ServerNotificationSchema = z.union([
+  CancelledNotificationSchema,
+  ProgressNotificationSchema,
+  LoggingMessageNotificationSchema,
+  ResourceUpdatedNotificationSchema,
+  ResourceListChangedNotificationSchema,
+  ToolListChangedNotificationSchema,
+  PromptListChangedNotificationSchema,
+]);
+
+export const ServerResultSchema = z.union([
+  EmptyResultSchema,
+  InitializeResultSchema,
+  CompleteResultSchema,
+  GetPromptResultSchema,
+  ListPromptsResultSchema,
+  ListResourcesResultSchema,
+  ListResourceTemplatesResultSchema,
+  ReadResourceResultSchema,
+  CallToolResultSchema,
+  ListToolsResultSchema,
+]);
+
+export class McpError extends Error {
+  constructor(
+    public readonly code: number,
+    message: string,
+    public readonly data?: unknown,
+  ) {
+    super(`MCP error ${code}: ${message}`);
+    this.name = "McpError";
+  }
+}
+
+type Primitive = string | number | boolean | bigint | null | undefined;
+type Flatten<T> = T extends
Primitive
+  ? T
+  : T extends Array<infer U>
+  ? Array<Flatten<U>>
+  : T extends Set<infer S>
+  ? Set<Flatten<S>>
+  : T extends Map<infer K, infer V>
+  ? Map<Flatten<K>, Flatten<V>>
+  : T extends object
+  ? { [K in keyof T]: Flatten<T[K]> }
+  : T;
+
+type Infer<Schema extends ZodTypeAny> = Flatten<z.infer<Schema>>;
+
+/* JSON-RPC types */
+export type ProgressToken = Infer<typeof ProgressTokenSchema>;
+export type Cursor = Infer<typeof CursorSchema>;
+export type Request = Infer<typeof RequestSchema>;
+export type Notification = Infer<typeof NotificationSchema>;
+export type Result = Infer<typeof ResultSchema>;
+export type RequestId = Infer<typeof RequestIdSchema>;
+export type JSONRPCRequest = Infer<typeof JSONRPCRequestSchema>;
+export type JSONRPCNotification = Infer<typeof JSONRPCNotificationSchema>;
+export type JSONRPCResponse = Infer<typeof JSONRPCResponseSchema>;
+export type JSONRPCError = Infer<typeof JSONRPCErrorSchema>;
+export type JSONRPCMessage = Infer<typeof JSONRPCMessageSchema>;
+
+/* Empty result */
+export type EmptyResult = Infer<typeof EmptyResultSchema>;
+
+/* Cancellation */
+export type CancelledNotification = Infer<typeof CancelledNotificationSchema>;
+
+/* Initialization */
+export type Implementation = Infer<typeof ImplementationSchema>;
+export type ClientCapabilities = Infer<typeof ClientCapabilitiesSchema>;
+export type InitializeRequest = Infer<typeof InitializeRequestSchema>;
+export type ServerCapabilities = Infer<typeof ServerCapabilitiesSchema>;
+export type InitializeResult = Infer<typeof InitializeResultSchema>;
+export type InitializedNotification = Infer<typeof InitializedNotificationSchema>;
+
+/* Ping */
+export type PingRequest = Infer<typeof PingRequestSchema>;
+
+/* Progress notifications */
+export type Progress = Infer<typeof ProgressSchema>;
+export type ProgressNotification = Infer<typeof ProgressNotificationSchema>;
+
+/* Pagination */
+export type PaginatedRequest = Infer<typeof PaginatedRequestSchema>;
+export type PaginatedResult = Infer<typeof PaginatedResultSchema>;
+
+/* Resources */
+export type ResourceContents = Infer<typeof ResourceContentsSchema>;
+export type TextResourceContents = Infer<typeof TextResourceContentsSchema>;
+export type BlobResourceContents = Infer<typeof BlobResourceContentsSchema>;
+export type Resource = Infer<typeof ResourceSchema>;
+export type ResourceTemplate = Infer<typeof ResourceTemplateSchema>;
+export type ListResourcesRequest = Infer<typeof ListResourcesRequestSchema>;
+export type ListResourcesResult = Infer<typeof ListResourcesResultSchema>;
+export type ListResourceTemplatesRequest = Infer<typeof ListResourceTemplatesRequestSchema>;
+export type ListResourceTemplatesResult = Infer<typeof ListResourceTemplatesResultSchema>;
+export type ReadResourceRequest = Infer<typeof ReadResourceRequestSchema>;
+export type ReadResourceResult = Infer<typeof ReadResourceResultSchema>;
+export type ResourceListChangedNotification = Infer<typeof ResourceListChangedNotificationSchema>;
+export type SubscribeRequest = Infer<typeof SubscribeRequestSchema>;
+export type UnsubscribeRequest = Infer<typeof UnsubscribeRequestSchema>;
+export type ResourceUpdatedNotification = Infer<typeof ResourceUpdatedNotificationSchema>;
+
+/* Prompts */
+export type PromptArgument = Infer<typeof PromptArgumentSchema>;
+export type Prompt = Infer<typeof PromptSchema>;
+export
type ListPromptsRequest = Infer<typeof ListPromptsRequestSchema>;
+export type ListPromptsResult = Infer<typeof ListPromptsResultSchema>;
+export type GetPromptRequest = Infer<typeof GetPromptRequestSchema>;
+export type TextContent = Infer<typeof TextContentSchema>;
+export type ImageContent = Infer<typeof ImageContentSchema>;
+export type EmbeddedResource = Infer<typeof EmbeddedResourceSchema>;
+export type PromptMessage = Infer<typeof PromptMessageSchema>;
+export type GetPromptResult = Infer<typeof GetPromptResultSchema>;
+export type PromptListChangedNotification = Infer<typeof PromptListChangedNotificationSchema>;
+
+/* Tools */
+export type Tool = Infer<typeof ToolSchema>;
+export type ListToolsRequest = Infer<typeof ListToolsRequestSchema>;
+export type ListToolsResult = Infer<typeof ListToolsResultSchema>;
+export type CallToolResult = Infer<typeof CallToolResultSchema>;
+export type CompatibilityCallToolResult = Infer<typeof CompatibilityCallToolResultSchema>;
+export type CallToolRequest = Infer<typeof CallToolRequestSchema>;
+export type ToolListChangedNotification = Infer<typeof ToolListChangedNotificationSchema>;
+
+/* Logging */
+export type LoggingLevel = Infer<typeof LoggingLevelSchema>;
+export type SetLevelRequest = Infer<typeof SetLevelRequestSchema>;
+export type LoggingMessageNotification = Infer<typeof LoggingMessageNotificationSchema>;
+
+/* Sampling */
+export type SamplingMessage = Infer<typeof SamplingMessageSchema>;
+export type CreateMessageRequest = Infer<typeof CreateMessageRequestSchema>;
+export type CreateMessageResult = Infer<typeof CreateMessageResultSchema>;
+
+/* Autocomplete */
+export type ResourceReference = Infer<typeof ResourceReferenceSchema>;
+export type PromptReference = Infer<typeof PromptReferenceSchema>;
+export type CompleteRequest = Infer<typeof CompleteRequestSchema>;
+export type CompleteResult = Infer<typeof CompleteResultSchema>;
+
+/* Roots */
+export type Root = Infer<typeof RootSchema>;
+export type ListRootsRequest = Infer<typeof ListRootsRequestSchema>;
+export type ListRootsResult = Infer<typeof ListRootsResultSchema>;
+export type RootsListChangedNotification = Infer<typeof RootsListChangedNotificationSchema>;
+
+/* Client messages */
+export type ClientRequest = Infer<typeof ClientRequestSchema>;
+export type ClientNotification = Infer<typeof ClientNotificationSchema>;
+export type ClientResult = Infer<typeof ClientResultSchema>;
+
+/* Server messages */
+export type ServerRequest = Infer<typeof ServerRequestSchema>;
+export type ServerNotification = Infer<typeof ServerNotificationSchema>;
+export type ServerResult = Infer<typeof ServerResultSchema>;
+
+
+
+---
+File: /CLAUDE.md
+---
+
+# MCP TypeScript SDK Guide
+
+## Build & Test Commands
+```
+npm run build                  # Build ESM and CJS versions
+npm run lint                   # Run ESLint
+npm test                       # Run all tests
+npx jest path/to/file.test.ts  # Run specific test file
+npx jest -t "test name"        # Run tests matching pattern
+```
+
+## Code Style Guidelines
+- **TypeScript**: Strict type checking, ES modules, explicit return types
+- **Naming**: PascalCase for classes/types,
camelCase for functions/variables +- **Files**: Lowercase with hyphens, test files with `.test.ts` suffix +- **Imports**: ES module style, include `.js` extension, group imports logically +- **Error Handling**: Use TypeScript's strict mode, explicit error checking in tests +- **Formatting**: 2-space indentation, semicolons required, single quotes preferred +- **Testing**: Co-locate tests with source files, use descriptive test names +- **Comments**: JSDoc for public APIs, inline comments for complex logic + +## Project Structure +- `/src`: Source code with client, server, and shared modules +- Tests alongside source files with `.test.ts` suffix +- Node.js >= 18 required + + +--- +File: /package.json +--- + +{ + "name": "@modelcontextprotocol/sdk", + "version": "1.7.0", + "description": "Model Context Protocol implementation for TypeScript", + "license": "MIT", + "author": "Anthropic, PBC (https://anthropic.com)", + "homepage": "https://modelcontextprotocol.io", + "bugs": "https://github.com/modelcontextprotocol/typescript-sdk/issues", + "type": "module", + "repository": { + "type": "git", + "url": "git+https://github.com/modelcontextprotocol/typescript-sdk.git" + }, + "engines": { + "node": ">=18" + }, + "keywords": [ + "modelcontextprotocol", + "mcp" + ], + "exports": { + "./*": { + "import": "./dist/esm/*", + "require": "./dist/cjs/*" + } + }, + "typesVersions": { + "*": { + "*": [ + "./dist/esm/*" + ] + } + }, + "files": [ + "dist" + ], + "scripts": { + "build": "npm run build:esm && npm run build:cjs", + "build:esm": "tsc -p tsconfig.prod.json && echo '{\"type\": \"module\"}' > dist/esm/package.json", + "build:cjs": "tsc -p tsconfig.cjs.json && echo '{\"type\": \"commonjs\"}' > dist/cjs/package.json", + "prepack": "npm run build:esm && npm run build:cjs", + "lint": "eslint src/", + "test": "jest", + "start": "npm run server", + "server": "tsx watch --clear-screen=false src/cli.ts server", + "client": "tsx src/cli.ts client" + }, + "dependencies": { + 
"content-type": "^1.0.5", + "cors": "^2.8.5", + "eventsource": "^3.0.2", + "express": "^5.0.1", + "express-rate-limit": "^7.5.0", + "pkce-challenge": "^4.1.0", + "raw-body": "^3.0.0", + "zod": "^3.23.8", + "zod-to-json-schema": "^3.24.1" + }, + "devDependencies": { + "@eslint/js": "^9.8.0", + "@jest-mock/express": "^3.0.0", + "@types/content-type": "^1.1.8", + "@types/cors": "^2.8.17", + "@types/eslint__js": "^8.42.3", + "@types/eventsource": "^1.1.15", + "@types/express": "^5.0.0", + "@types/jest": "^29.5.12", + "@types/node": "^22.0.2", + "@types/supertest": "^6.0.2", + "@types/ws": "^8.5.12", + "eslint": "^9.8.0", + "jest": "^29.7.0", + "supertest": "^7.0.0", + "ts-jest": "^29.2.4", + "tsx": "^4.16.5", + "typescript": "^5.5.4", + "typescript-eslint": "^8.0.0", + "ws": "^8.18.0" + }, + "resolutions": { + "strip-ansi": "6.0.1" + } +} + + + +--- +File: /README.md +--- + +# MCP TypeScript SDK ![NPM Version](https://img.shields.io/npm/v/%40modelcontextprotocol%2Fsdk) ![MIT licensed](https://img.shields.io/npm/l/%40modelcontextprotocol%2Fsdk) + +## Table of Contents +- [Overview](#overview) +- [Installation](#installation) +- [Quickstart](#quickstart) +- [What is MCP?](#what-is-mcp) +- [Core Concepts](#core-concepts) + - [Server](#server) + - [Resources](#resources) + - [Tools](#tools) + - [Prompts](#prompts) +- [Running Your Server](#running-your-server) + - [stdio](#stdio) + - [HTTP with SSE](#http-with-sse) + - [Testing and Debugging](#testing-and-debugging) +- [Examples](#examples) + - [Echo Server](#echo-server) + - [SQLite Explorer](#sqlite-explorer) +- [Advanced Usage](#advanced-usage) + - [Low-Level Server](#low-level-server) + - [Writing MCP Clients](#writing-mcp-clients) + - [Server Capabilities](#server-capabilities) + +## Overview + +The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. 
This TypeScript SDK implements the full MCP specification, making it easy to:
+
+- Build MCP clients that can connect to any MCP server
+- Create MCP servers that expose resources, prompts and tools
+- Use standard transports like stdio and SSE
+- Handle all MCP protocol messages and lifecycle events
+
+## Installation
+
+```bash
+npm install @modelcontextprotocol/sdk
+```
+
+## Quickstart
+
+Let's create a simple MCP server that exposes a calculator tool and some data:
+
+```typescript
+import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+import { z } from "zod";
+
+// Create an MCP server
+const server = new McpServer({
+  name: "Demo",
+  version: "1.0.0"
+});
+
+// Add an addition tool
+server.tool("add",
+  { a: z.number(), b: z.number() },
+  async ({ a, b }) => ({
+    content: [{ type: "text", text: String(a + b) }]
+  })
+);
+
+// Add a dynamic greeting resource
+server.resource(
+  "greeting",
+  new ResourceTemplate("greeting://{name}", { list: undefined }),
+  async (uri, { name }) => ({
+    contents: [{
+      uri: uri.href,
+      text: `Hello, ${name}!`
+    }]
+  })
+);
+
+// Start receiving messages on stdin and sending messages on stdout
+const transport = new StdioServerTransport();
+await server.connect(transport);
+```
+
+## What is MCP?
+
+The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. 
MCP servers can: + +- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context) +- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect) +- Define interaction patterns through **Prompts** (reusable templates for LLM interactions) +- And more! + +## Core Concepts + +### Server + +The McpServer is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing: + +```typescript +const server = new McpServer({ + name: "My App", + version: "1.0.0" +}); +``` + +### Resources + +Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects: + +```typescript +// Static resource +server.resource( + "config", + "config://app", + async (uri) => ({ + contents: [{ + uri: uri.href, + text: "App configuration here" + }] + }) +); + +// Dynamic resource with parameters +server.resource( + "user-profile", + new ResourceTemplate("users://{userId}/profile", { list: undefined }), + async (uri, { userId }) => ({ + contents: [{ + uri: uri.href, + text: `Profile data for user ${userId}` + }] + }) +); +``` + +### Tools + +Tools let LLMs take actions through your server. 
Unlike resources, tools are expected to perform computation and have side effects: + +```typescript +// Simple tool with parameters +server.tool( + "calculate-bmi", + { + weightKg: z.number(), + heightM: z.number() + }, + async ({ weightKg, heightM }) => ({ + content: [{ + type: "text", + text: String(weightKg / (heightM * heightM)) + }] + }) +); + +// Async tool with external API call +server.tool( + "fetch-weather", + { city: z.string() }, + async ({ city }) => { + const response = await fetch(`https://api.weather.com/${city}`); + const data = await response.text(); + return { + content: [{ type: "text", text: data }] + }; + } +); +``` + +### Prompts + +Prompts are reusable templates that help LLMs interact with your server effectively: + +```typescript +server.prompt( + "review-code", + { code: z.string() }, + ({ code }) => ({ + messages: [{ + role: "user", + content: { + type: "text", + text: `Please review this code:\n\n${code}` + } + }] + }) +); +``` + +## Running Your Server + +MCP servers in TypeScript need to be connected to a transport to communicate with clients. How you start the server depends on the choice of transport: + +### stdio + +For command-line tools and direct integrations: + +```typescript +import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; +import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; + +const server = new McpServer({ + name: "example-server", + version: "1.0.0" +}); + +// ... set up server resources, tools, and prompts ... 
+ +const transport = new StdioServerTransport(); +await server.connect(transport); +``` + +### HTTP with SSE + +For remote servers, start a web server with a Server-Sent Events (SSE) endpoint, and a separate endpoint for the client to send its messages to: + +```typescript +import express from "express"; +import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; +import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js"; + +const server = new McpServer({ + name: "example-server", + version: "1.0.0" +}); + +// ... set up server resources, tools, and prompts ... + +const app = express(); + +app.get("/sse", async (req, res) => { + const transport = new SSEServerTransport("/messages", res); + await server.connect(transport); +}); + +app.post("/messages", async (req, res) => { + // Note: to support multiple simultaneous connections, these messages will + // need to be routed to a specific matching transport. (This logic isn't + // implemented here, for simplicity.) + await transport.handlePostMessage(req, res); +}); + +app.listen(3001); +``` + +### Testing and Debugging + +To test your server, you can use the [MCP Inspector](https://github.com/modelcontextprotocol/inspector). See its README for more information. 
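
The SSE example above notes that routing messages to a specific matching transport is left unimplemented. A common way to fill that gap is a per-session registry keyed by session ID. The sketch below is illustrative, not SDK code: `SessionTransport` and `SessionRegistry` are hypothetical names, and it assumes only that each transport exposes the session identifier that `SSEServerTransport` generates for its connection.

```typescript
// Hypothetical session registry for routing POSTed client messages back
// to the SSE transport that owns the session. The transport type is kept
// generic so this sketch does not depend on SDK internals.
interface SessionTransport {
  sessionId: string;
  handlePostMessage(req: unknown, res: unknown): Promise<void>;
}

class SessionRegistry<T extends SessionTransport> {
  private transports = new Map<string, T>();

  // Register a transport when its SSE connection opens
  add(transport: T): void {
    this.transports.set(transport.sessionId, transport);
  }

  // Forget a transport when its SSE connection closes
  remove(sessionId: string): void {
    this.transports.delete(sessionId);
  }

  // Look up the transport for an incoming message; undefined if unknown
  get(sessionId: string | undefined): T | undefined {
    return sessionId === undefined ? undefined : this.transports.get(sessionId);
  }
}
```

With a registry like this, the `GET /sse` handler would call `add(transport)` after constructing the transport (and `remove(...)` when the response closes), and the `POST /messages` handler would resolve the owning transport via `get(...)` from the session identifier the client echoes back before calling `handlePostMessage`.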
+ +## Examples + +### Echo Server + +A simple server demonstrating resources, tools, and prompts: + +```typescript +import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js"; +import { z } from "zod"; + +const server = new McpServer({ + name: "Echo", + version: "1.0.0" +}); + +server.resource( + "echo", + new ResourceTemplate("echo://{message}", { list: undefined }), + async (uri, { message }) => ({ + contents: [{ + uri: uri.href, + text: `Resource echo: ${message}` + }] + }) +); + +server.tool( + "echo", + { message: z.string() }, + async ({ message }) => ({ + content: [{ type: "text", text: `Tool echo: ${message}` }] + }) +); + +server.prompt( + "echo", + { message: z.string() }, + ({ message }) => ({ + messages: [{ + role: "user", + content: { + type: "text", + text: `Please process this message: ${message}` + } + }] + }) +); +``` + +### SQLite Explorer + +A more complex example showing database integration: + +```typescript +import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; +import sqlite3 from "sqlite3"; +import { promisify } from "util"; +import { z } from "zod"; + +const server = new McpServer({ + name: "SQLite Explorer", + version: "1.0.0" +}); + +// Helper to create DB connection +const getDb = () => { + const db = new sqlite3.Database("database.db"); + return { + all: promisify(db.all.bind(db)), + close: promisify(db.close.bind(db)) + }; +}; + +server.resource( + "schema", + "schema://main", + async (uri) => { + const db = getDb(); + try { + const tables = await db.all( + "SELECT sql FROM sqlite_master WHERE type='table'" + ); + return { + contents: [{ + uri: uri.href, + text: tables.map((t: {sql: string}) => t.sql).join("\n") + }] + }; + } finally { + await db.close(); + } + } +); + +server.tool( + "query", + { sql: z.string() }, + async ({ sql }) => { + const db = getDb(); + try { + const results = await db.all(sql); + return { + content: [{ + type: "text", + text: JSON.stringify(results, null, 2) + }] + }; 
+ } catch (err: unknown) { + const error = err as Error; + return { + content: [{ + type: "text", + text: `Error: ${error.message}` + }], + isError: true + }; + } finally { + await db.close(); + } + } +); +``` + +## Advanced Usage + +### Low-Level Server + +For more control, you can use the low-level Server class directly: + +```typescript +import { Server } from "@modelcontextprotocol/sdk/server/index.js"; +import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; +import { + ListPromptsRequestSchema, + GetPromptRequestSchema +} from "@modelcontextprotocol/sdk/types.js"; + +const server = new Server( + { + name: "example-server", + version: "1.0.0" + }, + { + capabilities: { + prompts: {} + } + } +); + +server.setRequestHandler(ListPromptsRequestSchema, async () => { + return { + prompts: [{ + name: "example-prompt", + description: "An example prompt template", + arguments: [{ + name: "arg1", + description: "Example argument", + required: true + }] + }] + }; +}); + +server.setRequestHandler(GetPromptRequestSchema, async (request) => { + if (request.params.name !== "example-prompt") { + throw new Error("Unknown prompt"); + } + return { + description: "Example prompt", + messages: [{ + role: "user", + content: { + type: "text", + text: "Example prompt text" + } + }] + }; +}); + +const transport = new StdioServerTransport(); +await server.connect(transport); +``` + +### Writing MCP Clients + +The SDK provides a high-level client interface: + +```typescript +import { Client } from "@modelcontextprotocol/sdk/client/index.js"; +import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js"; + +const transport = new StdioClientTransport({ + command: "node", + args: ["server.js"] +}); + +const client = new Client( + { + name: "example-client", + version: "1.0.0" + }, + { + capabilities: { + prompts: {}, + resources: {}, + tools: {} + } + } +); + +await client.connect(transport); + +// List prompts +const prompts = await 
client.listPrompts(); + +// Get a prompt +const prompt = await client.getPrompt("example-prompt", { + arg1: "value" +}); + +// List resources +const resources = await client.listResources(); + +// Read a resource +const resource = await client.readResource("file:///example.txt"); + +// Call a tool +const result = await client.callTool({ + name: "example-tool", + arguments: { + arg1: "value" + } +}); +``` + +## Documentation + +- [Model Context Protocol documentation](https://modelcontextprotocol.io) +- [MCP Specification](https://spec.modelcontextprotocol.io) +- [Example Servers](https://github.com/modelcontextprotocol/servers) + +## Contributing + +Issues and pull requests are welcome on GitHub at https://github.com/modelcontextprotocol/typescript-sdk. + +## License + +This project is licensed under the MIT License—see the [LICENSE](LICENSE) file for details. + diff --git a/docs/mcp-protocol-docs.txt b/docs/mcp-protocol-docs.txt new file mode 100644 index 00000000..a03dc812 --- /dev/null +++ b/docs/mcp-protocol-docs.txt @@ -0,0 +1,6649 @@ +# Example Clients +Source: https://modelcontextprotocol.io/clients + +A list of applications that support MCP integrations + +This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers. + +## Feature support matrix + +| Client | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes | +| ------------------------------------ | ----------- | --------- | ------- | ---------- | ----- | ------------------------------------------------------------------ | +| [Claude Desktop App][Claude] | ✅ | ✅ | ✅ | ❌ | ❌ | Full support for all MCP features | +| [5ire][5ire] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | +| [BeeAI Framework][BeeAI Framework] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in agentic workflows. | +| [Cline][Cline] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. 
| +| [Continue][Continue] | ✅ | ✅ | ✅ | ❌ | ❌ | Full support for all MCP features | +| [Cursor][Cursor] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | +| [Emacs Mcp][Mcp.el] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in Emacs. | +| [Firebase Genkit][Genkit] | âš ī¸ | ✅ | ✅ | ❌ | ❌ | Supports resource list and lookup through tools. | +| [GenAIScript][GenAIScript] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | +| [Goose][Goose] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | +| [LibreChat][LibreChat] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Agents | +| [mcp-agent][mcp-agent] | ❌ | ❌ | ✅ | âš ī¸ | ❌ | Supports tools, server connection management, and agent workflows. | +| [Roo Code][Roo Code] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. | +| [Sourcegraph Cody][Cody] | ✅ | ❌ | ❌ | ❌ | ❌ | Supports resources through OpenCTX | +| [Superinterface][Superinterface] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools | +| [TheiaAI/TheiaIDE][TheiaAI/TheiaIDE] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Agents in Theia AI and the AI-powered Theia IDE | +| [Windsurf Editor][Windsurf] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools with AI Flow for collaborative development. 
| +| [Zed][Zed] | ❌ | ✅ | ❌ | ❌ | ❌ | Prompts appear as slash commands | +| [SpinAI][SpinAI] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Typescript AI Agents | +| [OpenSumi][OpenSumi] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in OpenSumi | +| [Daydreams Agents][Daydreams] | ✅ | ✅ | ✅ | ❌ | ❌ | Support for drop in Servers to Daydreams agents | + +[Claude]: https://claude.ai/download + +[Cursor]: https://cursor.com + +[Zed]: https://zed.dev + +[Cody]: https://sourcegraph.com/cody + +[Genkit]: https://github.com/firebase/genkit + +[Continue]: https://github.com/continuedev/continue + +[GenAIScript]: https://microsoft.github.io/genaiscript/reference/scripts/mcp-tools/ + +[Cline]: https://github.com/cline/cline + +[LibreChat]: https://github.com/danny-avila/LibreChat + +[TheiaAI/TheiaIDE]: https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/ + +[Superinterface]: https://superinterface.ai + +[5ire]: https://github.com/nanbingxyz/5ire + +[BeeAI Framework]: https://i-am-bee.github.io/beeai-framework + +[mcp-agent]: https://github.com/lastmile-ai/mcp-agent + +[Mcp.el]: https://github.com/lizqwerscott/mcp.el + +[Roo Code]: https://roocode.com + +[Goose]: https://block.github.io/goose/docs/goose-architecture/#interoperability-with-extensions + +[Windsurf]: https://codeium.com/windsurf + +[Daydreams]: https://github.com/daydreamsai/daydreams + +[SpinAI]: https://spinai.dev + +[OpenSumi]: https://github.com/opensumi/core + +[Resources]: https://modelcontextprotocol.io/docs/concepts/resources + +[Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts + +[Tools]: https://modelcontextprotocol.io/docs/concepts/tools + +[Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling + +## Client details + +### Claude Desktop App + +The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources. 
+ +**Key features:** + +* Full support for resources, allowing attachment of local files and data +* Support for prompt templates +* Tool integration for executing commands and scripts +* Local server connections for enhanced privacy and security + +> ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application. + +### 5ire + +[5ire](https://github.com/nanbingxyz/5ire) is an open source cross-platform desktop AI assistant that supports tools through MCP servers. + +**Key features:** + +* Built-in MCP servers can be quickly enabled and disabled. +* Users can add more servers by modifying the configuration file. +* It is open-source and user-friendly, suitable for beginners. +* Future support for MCP will be continuously improved. + +### BeeAI Framework + +[BeeAI Framework](https://i-am-bee.github.io/beeai-framework) is an open-source framework for building, deploying, and serving powerful agentic workflows at scale. The framework includes the **MCP Tool**, a native feature that simplifies the integration of MCP servers into agentic workflows. + +**Key features:** + +* Seamlessly incorporate MCP tools into agentic workflows. +* Quickly instantiate framework-native tools from connected MCP client(s). +* Planned future support for agentic MCP capabilities. + +**Learn more:** + +* [Example of using MCP tools in agentic workflow](https://i-am-bee.github.io/beeai-framework/#/typescript/tools?id=using-the-mcptool-class) + +### Cline + +[Cline](https://github.com/cline/cline) is an autonomous coding agent in VS Code that edits files, runs commands, uses a browser, and more–with your permission at each step. + +**Key features:** + +* Create and add tools through natural language (e.g. 
"add a tool that searches the web") +* Share custom MCP servers Cline creates with others via the `~/Documents/Cline/MCP` directory +* Displays configured MCP servers along with their tools, resources, and any error logs + +### Continue + +[Continue](https://github.com/continuedev/continue) is an open-source AI code assistant, with built-in support for all MCP features. + +**Key features** + +* Type "@" to mention MCP resources +* Prompt templates surface as slash commands +* Use both built-in and MCP tools directly in chat +* Supports VS Code and JetBrains IDEs, with any LLM + +### Cursor + +[Cursor](https://docs.cursor.com/advanced/model-context-protocol) is an AI code editor. + +**Key Features**: + +* Support for MCP tools in Cursor Composer +* Support for both STDIO and SSE + +### Emacs Mcp + +[Emacs Mcp](https://github.com/lizqwerscott/mcp.el) is an Emacs client designed to interface with MCP servers, enabling seamless connections and interactions. It provides MCP tool invocation support for AI plugins like [gptel](https://github.com/karthink/gptel) and [llm](https://github.com/ahyatt/llm), adhering to Emacs' standard tool invocation format. This integration enhances the functionality of AI tools within the Emacs ecosystem. + +**Key features:** + +* Provides MCP tool support for Emacs. + +### Firebase Genkit + +[Genkit](https://github.com/firebase/genkit) is Firebase's SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts. 
+ +**Key features:** + +* Client support for tools and prompts (resources partially supported) +* Rich discovery with support in Genkit's Dev UI playground +* Seamless interoperability with Genkit's existing tools and prompts +* Works across a wide variety of GenAI models from top providers + +### GenAIScript + +Programmatically assemble prompts for LLMs using [GenAIScript](https://microsoft.github.io/genaiscript/) (in JavaScript). Orchestrate LLMs, tools, and data in JavaScript. + +**Key features:** + +* JavaScript toolbox to work with prompts +* Abstraction to make it easy and productive +* Seamless Visual Studio Code integration + +### Goose + +[Goose](https://github.com/block/goose) is an open source AI agent that supercharges your software development by automating coding tasks. + +**Key features:** + +* Expose MCP functionality to Goose through tools. +* MCPs can be installed directly via the [extensions directory](https://block.github.io/goose/v1/extensions/), CLI, or UI. +* Goose allows you to extend its functionality by [building your own MCP servers](https://block.github.io/goose/docs/tutorials/custom-extensions). +* Includes built-in tools for development, web scraping, automation, memory, and integrations with JetBrains and Google Drive. + +### LibreChat + +[LibreChat](https://github.com/danny-avila/LibreChat) is an open-source, customizable AI chat UI that supports multiple AI providers, now including MCP integration. 
+ +**Key features:** + +* Extend current tool ecosystem, including [Code Interpreter](https://www.librechat.ai/docs/features/code_interpreter) and Image generation tools, through MCP servers +* Add tools to customizable [Agents](https://www.librechat.ai/docs/features/agents), using a variety of LLMs from top providers +* Open-source and self-hostable, with secure multi-user support +* Future roadmap includes expanded MCP feature support + +### mcp-agent + +[mcp-agent] is a simple, composable framework to build agents using Model Context Protocol. + +**Key features:** + +* Automatic connection management of MCP servers. +* Expose tools from multiple servers to an LLM. +* Implements every pattern defined in [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents). +* Supports workflow pause/resume signals, such as waiting for human feedback. + +### Roo Code + +[Roo Code](https://roocode.com) enables AI coding assistance via MCP. + +**Key features:** + +* Support for MCP tools and resources +* Integration with development workflows +* Extensible AI capabilities + +### Sourcegraph Cody + +[Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX. + +**Key features:** + +* Support for MCP resources +* Integration with Sourcegraph's code intelligence +* Uses OpenCTX as an abstraction layer +* Future support planned for additional MCP features + +### SpinAI + +[SpinAI](https://spinai.dev) is an open-source TypeScript framework for building observable AI agents. The framework provides native MCP compatibility, allowing agents to seamlessly integrate with MCP servers and tools. 
+ +**Key features:** + +* Built-in MCP compatibility for AI agents +* Open-source TypeScript framework +* Observable agent architecture +* Native support for MCP tools integration + +### Superinterface + +[Superinterface](https://superinterface.ai) is AI infrastructure and a developer platform to build in-app AI assistants with support for MCP, interactive components, client-side function calling and more. + +**Key features:** + +* Use tools from MCP servers in assistants embedded via React components or script tags +* SSE transport support +* Use any AI model from any AI provider (OpenAI, Anthropic, Ollama, others) + +### TheiaAI/TheiaIDE + +[Theia AI](https://eclipsesource.com/blogs/2024/10/07/introducing-theia-ai/) is a framework for building AI-enhanced tools and IDEs. The [AI-powered Theia IDE](https://eclipsesource.com/blogs/2024/10/08/introducting-ai-theia-ide/) is an open and flexible development environment built on Theia AI. + +**Key features:** + +* **Tool Integration**: Theia AI enables AI agents, including those in the Theia IDE, to utilize MCP servers for seamless tool interaction. +* **Customizable Prompts**: The Theia IDE allows users to define and adapt prompts, dynamically integrating MCP servers for tailored workflows. +* **Custom agents**: The Theia IDE supports creating custom agents that leverage MCP capabilities, enabling users to design dedicated workflows on the fly. + +Theia AI and Theia IDE's MCP integration provide users with flexibility, making them powerful platforms for exploring and adapting MCP. + +**Learn more:** + +* [Theia IDE and Theia AI MCP Announcement](https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/) +* [Download the AI-powered Theia IDE](https://theia-ide.org/) + +### Windsurf Editor + +[Windsurf Editor](https://codeium.com/windsurf) is an agentic IDE that combines AI assistance with developer workflows. 
It features an innovative AI Flow system that enables both collaborative and independent AI interactions while maintaining developer control.
+
+**Key features:**
+
+* Revolutionary AI Flow paradigm for human-AI collaboration
+* Intelligent code generation and understanding
+* Rich development tools with multi-model support
+
+### Zed
+
+[Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
+
+**Key features:**
+
+* Prompt templates surface as slash commands in the editor
+* Tool integration for enhanced coding workflows
+* Tight integration with editor features and workspace context
+* Does not support MCP resources
+
+### OpenSumi
+
+[OpenSumi](https://github.com/opensumi/core) is a framework that helps you quickly build AI Native IDE products.
+
+**Key features:**
+
+* Supports MCP tools in OpenSumi
+* Supports built-in IDE MCP servers and custom MCP servers
+
+### Daydreams
+
+[Daydreams](https://github.com/daydreamsai/daydreams) is a generative agent framework for executing anything onchain.
+
+**Key features:**
+
+* Supports MCP Servers in config
+* Exposes MCP Client
+
+## Adding MCP support to your application
+
+If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem. 
+ +Benefits of adding MCP support: + +* Enable users to bring their own context and tools +* Join a growing ecosystem of interoperable AI applications +* Provide users with flexible integration options +* Support local-first AI workflows + +To get started with implementing MCP in your application, check out our [Python](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK Documentation](https://github.com/modelcontextprotocol/typescript-sdk) + +## Updates and corrections + +This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/docs/issues). + + +# Contributing +Source: https://modelcontextprotocol.io/development/contributing + +How to participate in Model Context Protocol development + +We welcome contributions from the community! Please review our [contributing guidelines](https://github.com/modelcontextprotocol/.github/blob/main/CONTRIBUTING.md) for details on how to submit changes. + +All contributors must adhere to our [Code of Conduct](https://github.com/modelcontextprotocol/.github/blob/main/CODE_OF_CONDUCT.md). + +For questions and discussions, please use [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions). + + +# Roadmap +Source: https://modelcontextprotocol.io/development/roadmap + +Our plans for evolving Model Context Protocol (H1 2025) + +The Model Context Protocol is rapidly evolving. This page outlines our current thinking on key priorities and future direction for **the first half of 2025**, though these may change significantly as the project develops. + +The ideas presented here are not commitments—we may solve these challenges differently than described, or some may not materialize at all. This is also not an *exhaustive* list; we may incorporate work that isn't mentioned here. 
+ +We encourage community participation! Each section links to relevant discussions where you can learn more and contribute your thoughts. + +## Remote MCP Support + +Our top priority is enabling [remote MCP connections](https://github.com/modelcontextprotocol/specification/discussions/102), allowing clients to securely connect to MCP servers over the internet. Key initiatives include: + +* [**Authentication & Authorization**](https://github.com/modelcontextprotocol/specification/discussions/64): Adding standardized auth capabilities, particularly focused on OAuth 2.0 support. + +* [**Service Discovery**](https://github.com/modelcontextprotocol/specification/discussions/69): Defining how clients can discover and connect to remote MCP servers. + +* [**Stateless Operations**](https://github.com/modelcontextprotocol/specification/discussions/102): Thinking about whether MCP could encompass serverless environments too, where they will need to be mostly stateless. + +## Reference Implementations + +To help developers build with MCP, we want to offer documentation for: + +* **Client Examples**: Comprehensive reference client implementation(s), demonstrating all protocol features +* **Protocol Drafting**: Streamlined process for proposing and incorporating new protocol features + +## Distribution & Discovery + +Looking ahead, we're exploring ways to make MCP servers more accessible. 
Some areas we may investigate include:

* **Package Management**: Standardized packaging format for MCP servers
* **Installation Tools**: Simplified server installation across MCP clients
* **Sandboxing**: Improved security through server isolation
* **Server Registry**: A common directory for discovering available MCP servers

## Agent Support

We're expanding MCP's capabilities for [complex agentic workflows](https://github.com/modelcontextprotocol/specification/discussions/111), particularly focusing on:

* [**Hierarchical Agent Systems**](https://github.com/modelcontextprotocol/specification/discussions/94): Improved support for trees of agents through namespacing and topology awareness.

* [**Interactive Workflows**](https://github.com/modelcontextprotocol/specification/issues/97): Better handling of user permissions and information requests across agent hierarchies, and ways to send output to users instead of models.

* [**Streaming Results**](https://github.com/modelcontextprotocol/specification/issues/117): Real-time updates from long-running agent operations.

## Broader Ecosystem

We're also invested in:

* **Community-Led Standards Development**: Fostering a collaborative ecosystem where all AI providers can help shape MCP as an open standard through equal participation and shared governance, ensuring it meets the needs of diverse AI applications and use cases.
* [**Additional Modalities**](https://github.com/modelcontextprotocol/specification/discussions/88): Expanding beyond text to support audio, video, and other formats.
* **Standardization**: Considering standardization through a standardization body.

## Get Involved

We welcome community participation in shaping MCP's future. Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to join the conversation and contribute your ideas. 

# What's New
Source: https://modelcontextprotocol.io/development/updates

The latest updates and improvements to MCP

  * We're excited to announce that the Java SDK developed by Spring AI at VMware Tanzu is now
    the official [Java SDK](https://github.com/modelcontextprotocol/java-sdk) for MCP.
    This joins our existing Kotlin SDK in our growing list of supported languages.
    The Spring AI team will maintain the SDK as an integral part of the Model Context Protocol
    organization. We're thrilled to welcome them to the MCP community!

  * Version [1.2.1](https://github.com/modelcontextprotocol/python-sdk/releases/tag/v1.2.1) of the MCP Python SDK has been released,
    delivering important stability improvements and bug fixes.

  * Simplified, express-like API in the [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk)
  * Added 8 new clients to the [clients page](https://modelcontextprotocol.io/clients)

  * FastMCP API in the [Python SDK](https://github.com/modelcontextprotocol/python-sdk)
  * Dockerized MCP servers in the [servers repo](https://github.com/modelcontextprotocol/servers)

  * JetBrains released a Kotlin SDK for MCP!
  * For a sample MCP Kotlin server, check out [this repository](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/kotlin-mcp-server)


# Core architecture
Source: https://modelcontextprotocol.io/docs/concepts/architecture

Understand how MCP connects clients, servers, and LLMs

The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts. 

## Overview

MCP follows a client-server architecture where:

* **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections
* **Clients** maintain 1:1 connections with servers, inside the host application
* **Servers** provide context, tools, and prompts to clients

```mermaid
flowchart LR
    subgraph "Host"
        client1[MCP Client]
        client2[MCP Client]
    end
    subgraph serverA["Server Process"]
        server1[MCP Server]
    end
    subgraph serverB["Server Process"]
        server2[MCP Server]
    end

    client1 <-->|Transport Layer| server1
    client2 <-->|Transport Layer| server2
```

## Core components

### Protocol layer

The protocol layer handles message framing, request/response linking, and high-level communication patterns.



  ```typescript
  class Protocol<Request, Notification, Result> {
      // Handle incoming requests
      setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void

      // Handle incoming notifications
      setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void

      // Send requests and await responses
      request<T>(request: Request, schema: T, options?: RequestOptions): Promise<Result>

      // Send one-way notifications
      notification(notification: Notification): Promise<void>
  }
  ```



  ```python
  class Session(BaseSession[RequestT, NotificationT, ResultT]):
      async def send_request(
          self,
          request: RequestT,
          result_type: type[Result]
      ) -> Result:
          """
          Send request and wait for response. Raises McpError if response contains error. 
+ """ + # Request handling implementation + + async def send_notification( + self, + notification: NotificationT + ) -> None: + """Send one-way notification that doesn't expect response.""" + # Notification handling implementation + + async def _received_request( + self, + responder: RequestResponder[ReceiveRequestT, ResultT] + ) -> None: + """Handle incoming request from other side.""" + # Request handling implementation + + async def _received_notification( + self, + notification: ReceiveNotificationT + ) -> None: + """Handle incoming notification from other side.""" + # Notification handling implementation + ``` + + + +Key classes include: + +* `Protocol` +* `Client` +* `Server` + +### Transport layer + +The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms: + +1. **Stdio transport** + * Uses standard input/output for communication + * Ideal for local processes + +2. **HTTP with SSE transport** + * Uses Server-Sent Events for server-to-client messages + * HTTP POST for client-to-server messages + +All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](https://spec.modelcontextprotocol.io) for detailed information about the Model Context Protocol message format. + +### Message types + +MCP has these main types of messages: + +1. **Requests** expect a response from the other side: + ```typescript + interface Request { + method: string; + params?: { ... }; + } + ``` + +2. **Results** are successful responses to requests: + ```typescript + interface Result { + [key: string]: unknown; + } + ``` + +3. **Errors** indicate that a request failed: + ```typescript + interface Error { + code: number; + message: string; + data?: unknown; + } + ``` + +4. **Notifications** are one-way messages that don't expect a response: + ```typescript + interface Notification { + method: string; + params?: { ... }; + } + ``` + +## Connection lifecycle + +### 1. 
Initialization + +```mermaid +sequenceDiagram + participant Client + participant Server + + Client->>Server: initialize request + Server->>Client: initialize response + Client->>Server: initialized notification + + Note over Client,Server: Connection ready for use +``` + +1. Client sends `initialize` request with protocol version and capabilities +2. Server responds with its protocol version and capabilities +3. Client sends `initialized` notification as acknowledgment +4. Normal message exchange begins + +### 2. Message exchange + +After initialization, the following patterns are supported: + +* **Request-Response**: Client or server sends requests, the other responds +* **Notifications**: Either party sends one-way messages + +### 3. Termination + +Either party can terminate the connection: + +* Clean shutdown via `close()` +* Transport disconnection +* Error conditions + +## Error handling + +MCP defines these standard error codes: + +```typescript +enum ErrorCode { + // Standard JSON-RPC error codes + ParseError = -32700, + InvalidRequest = -32600, + MethodNotFound = -32601, + InvalidParams = -32602, + InternalError = -32603 +} +``` + +SDKs and applications can define their own error codes above -32000. 
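As a self-contained sketch of that convention, a server could carve out its own codes just above the reserved range; the `AppErrorCode` names and values below are hypothetical illustrations, not part of MCP or JSON-RPC:

```typescript
// JSON-RPC reserves -32768 through -32000; application codes go above that range.
// These particular names and values are illustrative only.
enum AppErrorCode {
  ResourceNotFound = -31001, // hypothetical: requested URI has no handler
  RateLimited = -31002,      // hypothetical: caller exceeded its request budget
}

interface JsonRpcError {
  code: number;
  message: string;
  data?: unknown;
}

// Build the `error` member of a JSON-RPC response object.
function makeError(code: AppErrorCode, message: string, data?: unknown): JsonRpcError {
  return { code, message, ...(data !== undefined ? { data } : {}) };
}

const err = makeError(AppErrorCode.ResourceNotFound, "Resource not found: file:///logs/app.log");
// err.code is -31001, safely outside the reserved JSON-RPC range
```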

Errors are propagated through:

* Error responses to requests
* Error events on transports
* Protocol-level error handlers

## Implementation example

Here's a basic example of implementing an MCP server:



  ```typescript
  import { Server } from "@modelcontextprotocol/sdk/server/index.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  import { ListResourcesRequestSchema } from "@modelcontextprotocol/sdk/types.js";

  const server = new Server({
    name: "example-server",
    version: "1.0.0"
  }, {
    capabilities: {
      resources: {}
    }
  });

  // Handle requests
  server.setRequestHandler(ListResourcesRequestSchema, async () => {
    return {
      resources: [
        {
          uri: "example://resource",
          name: "Example Resource"
        }
      ]
    };
  });

  // Connect transport
  const transport = new StdioServerTransport();
  await server.connect(transport);
  ```



  ```python
  import asyncio
  import mcp.types as types
  from mcp.server import Server
  from mcp.server.stdio import stdio_server

  app = Server("example-server")

  @app.list_resources()
  async def list_resources() -> list[types.Resource]:
      return [
          types.Resource(
              uri="example://resource",
              name="Example Resource"
          )
      ]

  async def main():
      async with stdio_server() as streams:
          await app.run(
              streams[0],
              streams[1],
              app.create_initialization_options()
          )

  if __name__ == "__main__":
      asyncio.run(main())
  ```



## Best practices

### Transport selection

1. **Local communication**
   * Use stdio transport for local processes
   * Efficient for same-machine communication
   * Simple process management

2. **Remote communication**
   * Use SSE for scenarios requiring HTTP compatibility
   * Consider security implications including authentication and authorization

### Message handling

1. **Request processing**
   * Validate inputs thoroughly
   * Use type-safe schemas
   * Handle errors gracefully
   * Implement timeouts

2. 
**Progress reporting** + * Use progress tokens for long operations + * Report progress incrementally + * Include total progress when known + +3. **Error management** + * Use appropriate error codes + * Include helpful error messages + * Clean up resources on errors + +## Security considerations + +1. **Transport security** + * Use TLS for remote connections + * Validate connection origins + * Implement authentication when needed + +2. **Message validation** + * Validate all incoming messages + * Sanitize inputs + * Check message size limits + * Verify JSON-RPC format + +3. **Resource protection** + * Implement access controls + * Validate resource paths + * Monitor resource usage + * Rate limit requests + +4. **Error handling** + * Don't leak sensitive information + * Log security-relevant errors + * Implement proper cleanup + * Handle DoS scenarios + +## Debugging and monitoring + +1. **Logging** + * Log protocol events + * Track message flow + * Monitor performance + * Record errors + +2. **Diagnostics** + * Implement health checks + * Monitor connection state + * Track resource usage + * Profile performance + +3. **Testing** + * Test different transports + * Verify error handling + * Check edge cases + * Load test servers + + +# Prompts +Source: https://modelcontextprotocol.io/docs/concepts/prompts + +Create reusable prompt templates and workflows + +Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions. + + + Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use. 
+ + +## Overview + +Prompts in MCP are predefined templates that can: + +* Accept dynamic arguments +* Include context from resources +* Chain multiple interactions +* Guide specific workflows +* Surface as UI elements (like slash commands) + +## Prompt structure + +Each prompt is defined with: + +```typescript +{ + name: string; // Unique identifier for the prompt + description?: string; // Human-readable description + arguments?: [ // Optional list of arguments + { + name: string; // Argument identifier + description?: string; // Argument description + required?: boolean; // Whether argument is required + } + ] +} +``` + +## Discovering prompts + +Clients can discover available prompts through the `prompts/list` endpoint: + +```typescript +// Request +{ + method: "prompts/list" +} + +// Response +{ + prompts: [ + { + name: "analyze-code", + description: "Analyze code for potential improvements", + arguments: [ + { + name: "language", + description: "Programming language", + required: true + } + ] + } + ] +} +``` + +## Using prompts + +To use a prompt, clients make a `prompts/get` request: + +````typescript +// Request +{ + method: "prompts/get", + params: { + name: "analyze-code", + arguments: { + language: "python" + } + } +} + +// Response +{ + description: "Analyze Python code for potential improvements", + messages: [ + { + role: "user", + content: { + type: "text", + text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```" + } + } + ] +} +```` + +## Dynamic prompts + +Prompts can be dynamic and include: + +### Embedded resource context + +```json +{ + "name": "analyze-project", + "description": "Analyze project logs and code", + "arguments": [ + { + "name": "timeframe", + "description": "Time period to analyze logs", + "required": true + }, + { + "name": "fileUri", 
+ "description": "URI of code file to review", + "required": true + } + ] +} +``` + +When handling the `prompts/get` request: + +```json +{ + "messages": [ + { + "role": "user", + "content": { + "type": "text", + "text": "Analyze these system logs and the code file for any issues:" + } + }, + { + "role": "user", + "content": { + "type": "resource", + "resource": { + "uri": "logs://recent?timeframe=1h", + "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded", + "mimeType": "text/plain" + } + } + }, + { + "role": "user", + "content": { + "type": "resource", + "resource": { + "uri": "file:///path/to/code.py", + "text": "def connect_to_service(timeout=30):\n retries = 3\n for attempt in range(retries):\n try:\n return establish_connection(timeout)\n except TimeoutError:\n if attempt == retries - 1:\n raise\n time.sleep(5)\n\ndef establish_connection(timeout):\n # Connection implementation\n pass", + "mimeType": "text/x-python" + } + } + } + ] +} +``` + +### Multi-step workflows + +```typescript +const debugWorkflow = { + name: "debug-error", + async getMessages(error: string) { + return [ + { + role: "user", + content: { + type: "text", + text: `Here's an error I'm seeing: ${error}` + } + }, + { + role: "assistant", + content: { + type: "text", + text: "I'll help analyze this error. What have you tried so far?" + } + }, + { + role: "user", + content: { + type: "text", + text: "I've tried restarting the service, but the error persists." 
+ } + } + ]; + } +}; +``` + +## Example implementation + +Here's a complete example of implementing prompts in an MCP server: + + + + ```typescript + import { Server } from "@modelcontextprotocol/sdk/server"; + import { + ListPromptsRequestSchema, + GetPromptRequestSchema + } from "@modelcontextprotocol/sdk/types"; + + const PROMPTS = { + "git-commit": { + name: "git-commit", + description: "Generate a Git commit message", + arguments: [ + { + name: "changes", + description: "Git diff or description of changes", + required: true + } + ] + }, + "explain-code": { + name: "explain-code", + description: "Explain how code works", + arguments: [ + { + name: "code", + description: "Code to explain", + required: true + }, + { + name: "language", + description: "Programming language", + required: false + } + ] + } + }; + + const server = new Server({ + name: "example-prompts-server", + version: "1.0.0" + }, { + capabilities: { + prompts: {} + } + }); + + // List available prompts + server.setRequestHandler(ListPromptsRequestSchema, async () => { + return { + prompts: Object.values(PROMPTS) + }; + }); + + // Get specific prompt + server.setRequestHandler(GetPromptRequestSchema, async (request) => { + const prompt = PROMPTS[request.params.name]; + if (!prompt) { + throw new Error(`Prompt not found: ${request.params.name}`); + } + + if (request.params.name === "git-commit") { + return { + messages: [ + { + role: "user", + content: { + type: "text", + text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}` + } + } + ] + }; + } + + if (request.params.name === "explain-code") { + const language = request.params.arguments?.language || "Unknown"; + return { + messages: [ + { + role: "user", + content: { + type: "text", + text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}` + } + } + ] + }; + } + + throw new Error("Prompt implementation not found"); + }); + ``` + + + + ```python + from 
mcp.server import Server + import mcp.types as types + + # Define available prompts + PROMPTS = { + "git-commit": types.Prompt( + name="git-commit", + description="Generate a Git commit message", + arguments=[ + types.PromptArgument( + name="changes", + description="Git diff or description of changes", + required=True + ) + ], + ), + "explain-code": types.Prompt( + name="explain-code", + description="Explain how code works", + arguments=[ + types.PromptArgument( + name="code", + description="Code to explain", + required=True + ), + types.PromptArgument( + name="language", + description="Programming language", + required=False + ) + ], + ) + } + + # Initialize server + app = Server("example-prompts-server") + + @app.list_prompts() + async def list_prompts() -> list[types.Prompt]: + return list(PROMPTS.values()) + + @app.get_prompt() + async def get_prompt( + name: str, arguments: dict[str, str] | None = None + ) -> types.GetPromptResult: + if name not in PROMPTS: + raise ValueError(f"Prompt not found: {name}") + + if name == "git-commit": + changes = arguments.get("changes") if arguments else "" + return types.GetPromptResult( + messages=[ + types.PromptMessage( + role="user", + content=types.TextContent( + type="text", + text=f"Generate a concise but descriptive commit message " + f"for these changes:\n\n{changes}" + ) + ) + ] + ) + + if name == "explain-code": + code = arguments.get("code") if arguments else "" + language = arguments.get("language", "Unknown") if arguments else "Unknown" + return types.GetPromptResult( + messages=[ + types.PromptMessage( + role="user", + content=types.TextContent( + type="text", + text=f"Explain how this {language} code works:\n\n{code}" + ) + ) + ] + ) + + raise ValueError("Prompt implementation not found") + ``` + + + +## Best practices + +When implementing prompts: + +1. Use clear, descriptive prompt names +2. Provide detailed descriptions for prompts and arguments +3. Validate all required arguments +4. 
Handle missing arguments gracefully +5. Consider versioning for prompt templates +6. Cache dynamic content when appropriate +7. Implement error handling +8. Document expected argument formats +9. Consider prompt composability +10. Test prompts with various inputs + +## UI integration + +Prompts can be surfaced in client UIs as: + +* Slash commands +* Quick actions +* Context menu items +* Command palette entries +* Guided workflows +* Interactive forms + +## Updates and changes + +Servers can notify clients about prompt changes: + +1. Server capability: `prompts.listChanged` +2. Notification: `notifications/prompts/list_changed` +3. Client re-fetches prompt list + +## Security considerations + +When implementing prompts: + +* Validate all arguments +* Sanitize user input +* Consider rate limiting +* Implement access controls +* Audit prompt usage +* Handle sensitive data appropriately +* Validate generated content +* Implement timeouts +* Consider prompt injection risks +* Document security requirements + + +# Resources +Source: https://modelcontextprotocol.io/docs/concepts/resources + +Expose data and content from your servers to LLMs + +Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions. + + + Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used. + Different MCP clients may handle resources differently. For example: + + * Claude Desktop currently requires users to explicitly select resources before they can be used + * Other clients might automatically select resources based on heuristics + * Some implementations may even allow the AI model itself to determine which resources to use + + Server authors should be prepared to handle any of these interaction patterns when implementing resource support. 
In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools). + + +## Overview + +Resources represent any kind of data that an MCP server wants to make available to clients. This can include: + +* File contents +* Database records +* API responses +* Live system data +* Screenshots and images +* Log files +* And more + +Each resource is identified by a unique URI and can contain either text or binary data. + +## Resource URIs + +Resources are identified using URIs that follow this format: + +``` +[protocol]://[host]/[path] +``` + +For example: + +* `file:///home/user/documents/report.pdf` +* `postgres://database/customers/schema` +* `screen://localhost/display1` + +The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes. + +## Resource types + +Resources can contain two types of content: + +### Text resources + +Text resources contain UTF-8 encoded text data. These are suitable for: + +* Source code +* Configuration files +* Log files +* JSON/XML data +* Plain text + +### Binary resources + +Binary resources contain raw binary data encoded in base64. These are suitable for: + +* Images +* PDFs +* Audio files +* Video files +* Other non-text formats + +## Resource discovery + +Clients can discover available resources through two main methods: + +### Direct resources + +Servers expose a list of concrete resources via the `resources/list` endpoint. 
Each resource includes: + +```typescript +{ + uri: string; // Unique identifier for the resource + name: string; // Human-readable name + description?: string; // Optional description + mimeType?: string; // Optional MIME type +} +``` + +### Resource templates + +For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs: + +```typescript +{ + uriTemplate: string; // URI template following RFC 6570 + name: string; // Human-readable name for this type + description?: string; // Optional description + mimeType?: string; // Optional MIME type for all matching resources +} +``` + +## Reading resources + +To read a resource, clients make a `resources/read` request with the resource URI. + +The server responds with a list of resource contents: + +```typescript +{ + contents: [ + { + uri: string; // The URI of the resource + mimeType?: string; // Optional MIME type + + // One of: + text?: string; // For text resources + blob?: string; // For binary resources (base64 encoded) + } + ] +} +``` + + + Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read. + + +## Resource updates + +MCP supports real-time updates for resources through two mechanisms: + +### List changes + +Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification. + +### Content changes + +Clients can subscribe to updates for specific resources: + +1. Client sends `resources/subscribe` with resource URI +2. Server sends `notifications/resources/updated` when the resource changes +3. Client can fetch latest content with `resources/read` +4. 
Client can unsubscribe with `resources/unsubscribe`

## Example implementation

Here's a simple example of implementing resource support in an MCP server:



  ```typescript
  const server = new Server({
    name: "example-server",
    version: "1.0.0"
  }, {
    capabilities: {
      resources: {}
    }
  });

  // List available resources
  server.setRequestHandler(ListResourcesRequestSchema, async () => {
    return {
      resources: [
        {
          uri: "file:///logs/app.log",
          name: "Application Logs",
          mimeType: "text/plain"
        }
      ]
    };
  });

  // Read resource contents
  server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
    const uri = request.params.uri;

    if (uri === "file:///logs/app.log") {
      const logContents = await readLogFile();
      return {
        contents: [
          {
            uri,
            mimeType: "text/plain",
            text: logContents
          }
        ]
      };
    }

    throw new Error("Resource not found");
  });
  ```



  ```python
  app = Server("example-server")

  @app.list_resources()
  async def list_resources() -> list[types.Resource]:
      return [
          types.Resource(
              uri="file:///logs/app.log",
              name="Application Logs",
              mimeType="text/plain"
          )
      ]

  @app.read_resource()
  async def read_resource(uri: AnyUrl) -> str:
      if str(uri) == "file:///logs/app.log":
          log_contents = await read_log_file()
          return log_contents

      raise ValueError("Resource not found")

  # Start server (inside an async entry point, since `async with`
  # cannot appear at module level)
  async def main():
      async with stdio_server() as streams:
          await app.run(
              streams[0],
              streams[1],
              app.create_initialization_options()
          )

  asyncio.run(main())
  ```



## Best practices

When implementing resource support:

1. Use clear, descriptive resource names and URIs
2. Include helpful descriptions to guide LLM understanding
3. Set appropriate MIME types when known
4. Implement resource templates for dynamic content
5. Use subscriptions for frequently changing resources
6. Handle errors gracefully with clear error messages
7. Consider pagination for large resource lists
8. 
Cache resource contents when appropriate
9. Validate URIs before processing
10. Document your custom URI schemes

## Security considerations

When exposing resources:

* Validate all resource URIs
* Implement appropriate access controls
* Sanitize file paths to prevent directory traversal
* Be cautious with binary data handling
* Consider rate limiting for resource reads
* Audit resource access
* Encrypt sensitive data in transit
* Validate MIME types
* Implement timeouts for long-running reads
* Handle resource cleanup appropriately


# Roots
Source: https://modelcontextprotocol.io/docs/concepts/roots

Understanding roots in MCP

Roots are a concept in MCP that define the boundaries where servers can operate. They provide a way for clients to inform servers about relevant resources and their locations.

## What are Roots?

A root is a URI that a client suggests a server should focus on. When a client connects to a server, it declares which roots the server should work with. While primarily used for filesystem paths, roots can be any valid URI including HTTP URLs.

For example, roots could be:

```
file:///home/user/projects/myapp
https://api.example.com/v1
```

## Why Use Roots?

Roots serve several important purposes:

1. **Guidance**: They inform servers about relevant resources and locations
2. **Clarity**: Roots make it clear which resources are part of your workspace
3. **Organization**: Multiple roots let you work with different resources simultaneously

## How Roots Work

When a client supports roots, it:

1. Declares the `roots` capability during connection
2. Provides a list of suggested roots to the server
3. Notifies the server when roots change (if supported)

While roots are informational and not strictly enforced, servers should:

1. Respect the provided roots
2. Use root URIs to locate and access resources
3. 
Prioritize operations within root boundaries + +## Common Use Cases + +Roots are commonly used to define: + +* Project directories +* Repository locations +* API endpoints +* Configuration locations +* Resource boundaries + +## Best Practices + +When working with roots: + +1. Only suggest necessary resources +2. Use clear, descriptive names for roots +3. Monitor root accessibility +4. Handle root changes gracefully + +## Example + +Here's how a typical MCP client might expose roots: + +```json +{ + "roots": [ + { + "uri": "file:///home/user/projects/frontend", + "name": "Frontend Repository" + }, + { + "uri": "https://api.example.com/v1", + "name": "API Endpoint" + } + ] +} +``` + +This configuration suggests the server focus on both a local repository and an API endpoint while keeping them logically separated. + + +# Sampling +Source: https://modelcontextprotocol.io/docs/concepts/sampling + +Let your servers request completions from LLMs + +Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy. + + + This feature of MCP is not yet supported in the Claude Desktop client. + + +## How sampling works + +The sampling flow follows these steps: + +1. Server sends a `sampling/createMessage` request to the client +2. Client reviews the request and can modify it +3. Client samples from an LLM +4. Client reviews the completion +5. Client returns the result to the server + +This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. 
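The steps above can be sketched from the client's side. Everything here is illustrative: the `reviewPrompt`/`reviewCompletion` hooks and the stubbed `sampleFromLlm` stand in for a real user-facing review UI and model call, and are not MCP SDK APIs:

```typescript
interface SamplingMessage {
  role: "user" | "assistant";
  content: { type: "text"; text: string };
}

interface CreateMessageRequest {
  messages: SamplingMessage[];
  systemPrompt?: string;
  maxTokens: number;
}

// Hypothetical client-side handler for a server's sampling/createMessage request.
async function handleCreateMessage(req: CreateMessageRequest): Promise<SamplingMessage> {
  // 2. Client reviews the request and can modify it.
  const approved = reviewPrompt(req);

  // 3. Client samples from an LLM (stubbed out for the sketch).
  const completion = await sampleFromLlm(approved);

  // 4. Client reviews the completion before anything reaches the server.
  // 5. The approved result is returned to the server.
  return reviewCompletion(completion);
}

function reviewPrompt(req: CreateMessageRequest): CreateMessageRequest {
  return req; // a real client would surface this to the user for approval
}

async function sampleFromLlm(req: CreateMessageRequest): Promise<SamplingMessage> {
  // Placeholder "model": echoes the request's token budget.
  return { role: "assistant", content: { type: "text", text: `(${req.maxTokens} tokens max)` } };
}

function reviewCompletion(msg: SamplingMessage): SamplingMessage {
  return msg; // a real client would let the user edit or reject the completion
}
```

The important design point is that the server never talks to the model directly: every prompt and completion passes through client-controlled hooks, which is what keeps the human in the loop.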

## Message format

Sampling requests use a standardized message format:

```typescript
{
  messages: [
    {
      role: "user" | "assistant",
      content: {
        type: "text" | "image",

        // For text:
        text?: string,

        // For images:
        data?: string,             // base64 encoded
        mimeType?: string
      }
    }
  ],
  modelPreferences?: {
    hints?: [{
      name?: string                // Suggested model name/family
    }],
    costPriority?: number,         // 0-1, importance of minimizing cost
    speedPriority?: number,        // 0-1, importance of low latency
    intelligencePriority?: number  // 0-1, importance of capabilities
  },
  systemPrompt?: string,
  includeContext?: "none" | "thisServer" | "allServers",
  temperature?: number,
  maxTokens: number,
  stopSequences?: string[],
  metadata?: Record<string, unknown>
}
```

## Request parameters

### Messages

The `messages` array contains the conversation history to send to the LLM. Each message has:

* `role`: Either "user" or "assistant"
* `content`: The message content, which can be:
  * Text content with a `text` field
  * Image content with `data` (base64) and `mimeType` fields

### Model preferences

The `modelPreferences` object allows servers to specify their model selection preferences:

* `hints`: Array of model name suggestions that clients can use to select an appropriate model:
  * `name`: String that can match full or partial model names (e.g. "claude-3", "sonnet")
  * Clients may map hints to equivalent models from different providers
  * Multiple hints are evaluated in preference order

* Priority values (0-1 normalized):
  * `costPriority`: Importance of minimizing costs
  * `speedPriority`: Importance of low latency response
  * `intelligencePriority`: Importance of advanced model capabilities

Clients make the final model selection based on these preferences and their available models.

### System prompt

An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this. 
+ +### Context inclusion + +The `includeContext` parameter specifies what MCP context to include: + +* `"none"`: No additional context +* `"thisServer"`: Include context from the requesting server +* `"allServers"`: Include context from all connected MCP servers + +The client controls what context is actually included. + +### Sampling parameters + +Fine-tune the LLM sampling with: + +* `temperature`: Controls randomness (0.0 to 1.0) +* `maxTokens`: Maximum tokens to generate +* `stopSequences`: Array of sequences that stop generation +* `metadata`: Additional provider-specific parameters + +## Response format + +The client returns a completion result: + +```typescript +{ + model: string, // Name of the model used + stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string, + role: "user" | "assistant", + content: { + type: "text" | "image", + text?: string, + data?: string, + mimeType?: string + } +} +``` + +## Example request + +Here's an example of requesting sampling from a client: + +```json +{ + "method": "sampling/createMessage", + "params": { + "messages": [ + { + "role": "user", + "content": { + "type": "text", + "text": "What files are in the current directory?" + } + } + ], + "systemPrompt": "You are a helpful file system assistant.", + "includeContext": "thisServer", + "maxTokens": 100 + } +} +``` + +## Best practices + +When implementing sampling: + +1. Always provide clear, well-structured prompts +2. Handle both text and image content appropriately +3. Set reasonable token limits +4. Include relevant context through `includeContext` +5. Validate responses before using them +6. Handle errors gracefully +7. Consider rate limiting sampling requests +8. Document expected sampling behavior +9. Test with various model parameters +10. 
Monitor sampling costs + +## Human in the loop controls + +Sampling is designed with human oversight in mind: + +### For prompts + +* Clients should show users the proposed prompt +* Users should be able to modify or reject prompts +* System prompts can be filtered or modified +* Context inclusion is controlled by the client + +### For completions + +* Clients should show users the completion +* Users should be able to modify or reject completions +* Clients can filter or modify completions +* Users control which model is used + +## Security considerations + +When implementing sampling: + +* Validate all message content +* Sanitize sensitive information +* Implement appropriate rate limits +* Monitor sampling usage +* Encrypt data in transit +* Handle user data privacy +* Audit sampling requests +* Control cost exposure +* Implement timeouts +* Handle model errors gracefully + +## Common patterns + +### Agentic workflows + +Sampling enables agentic patterns like: + +* Reading and analyzing resources +* Making decisions based on context +* Generating structured data +* Handling multi-step tasks +* Providing interactive assistance + +### Context management + +Best practices for context: + +* Request minimal necessary context +* Structure context clearly +* Handle context size limits +* Update context as needed +* Clean up stale context + +### Error handling + +Robust error handling should: + +* Catch sampling failures +* Handle timeout errors +* Manage rate limits +* Validate responses +* Provide fallback behaviors +* Log errors appropriately + +## Limitations + +Be aware of these limitations: + +* Sampling depends on client capabilities +* Users control sampling behavior +* Context size has limits +* Rate limits may apply +* Costs should be considered +* Model availability varies +* Response times vary +* Not all content types supported + + +# Tools +Source: https://modelcontextprotocol.io/docs/concepts/tools + +Enable LLMs to perform actions through your server + 
+Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world. + + + Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval). + + +## Overview + +Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include: + +* **Discovery**: Clients can list available tools through the `tools/list` endpoint +* **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results +* **Flexibility**: Tools can range from simple calculations to complex API interactions + +Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems. + +## Tool definition structure + +Each tool is defined with the following structure: + +```typescript +{ + name: string; // Unique identifier for the tool + description?: string; // Human-readable description + inputSchema: { // JSON Schema for the tool's parameters + type: "object", + properties: { ... 
} // Tool-specific parameters + } +} +``` + +## Implementing tools + +Here's an example of implementing a basic tool in an MCP server: + + + + ```typescript + const server = new Server({ + name: "example-server", + version: "1.0.0" + }, { + capabilities: { + tools: {} + } + }); + + // Define available tools + server.setRequestHandler(ListToolsRequestSchema, async () => { + return { + tools: [{ + name: "calculate_sum", + description: "Add two numbers together", + inputSchema: { + type: "object", + properties: { + a: { type: "number" }, + b: { type: "number" } + }, + required: ["a", "b"] + } + }] + }; + }); + + // Handle tool execution + server.setRequestHandler(CallToolRequestSchema, async (request) => { + if (request.params.name === "calculate_sum") { + const { a, b } = request.params.arguments; + return { + content: [ + { + type: "text", + text: String(a + b) + } + ] + }; + } + throw new Error("Tool not found"); + }); + ``` + + + + ```python + app = Server("example-server") + + @app.list_tools() + async def list_tools() -> list[types.Tool]: + return [ + types.Tool( + name="calculate_sum", + description="Add two numbers together", + inputSchema={ + "type": "object", + "properties": { + "a": {"type": "number"}, + "b": {"type": "number"} + }, + "required": ["a", "b"] + } + ) + ] + + @app.call_tool() + async def call_tool( + name: str, + arguments: dict + ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]: + if name == "calculate_sum": + a = arguments["a"] + b = arguments["b"] + result = a + b + return [types.TextContent(type="text", text=str(result))] + raise ValueError(f"Tool not found: {name}") + ``` + + + +## Example tool patterns + +Here are some examples of types of tools that a server could provide: + +### System operations + +Tools that interact with the local system: + +```typescript +{ + name: "execute_command", + description: "Run a shell command", + inputSchema: { + type: "object", + properties: { + command: { type: "string" }, + 
args: { type: "array", items: { type: "string" } } + } + } +} +``` + +### API integrations + +Tools that wrap external APIs: + +```typescript +{ + name: "github_create_issue", + description: "Create a GitHub issue", + inputSchema: { + type: "object", + properties: { + title: { type: "string" }, + body: { type: "string" }, + labels: { type: "array", items: { type: "string" } } + } + } +} +``` + +### Data processing + +Tools that transform or analyze data: + +```typescript +{ + name: "analyze_csv", + description: "Analyze a CSV file", + inputSchema: { + type: "object", + properties: { + filepath: { type: "string" }, + operations: { + type: "array", + items: { + enum: ["sum", "average", "count"] + } + } + } + } +} +``` + +## Best practices + +When implementing tools: + +1. Provide clear, descriptive names and descriptions +2. Use detailed JSON Schema definitions for parameters +3. Include examples in tool descriptions to demonstrate how the model should use them +4. Implement proper error handling and validation +5. Use progress reporting for long operations +6. Keep tool operations focused and atomic +7. Document expected return value structures +8. Implement proper timeouts +9. Consider rate limiting for resource-intensive operations +10. 
Log tool usage for debugging and monitoring + +## Security considerations + +When exposing tools: + +### Input validation + +* Validate all parameters against the schema +* Sanitize file paths and system commands +* Validate URLs and external identifiers +* Check parameter sizes and ranges +* Prevent command injection + +### Access control + +* Implement authentication where needed +* Use appropriate authorization checks +* Audit tool usage +* Rate limit requests +* Monitor for abuse + +### Error handling + +* Don't expose internal errors to clients +* Log security-relevant errors +* Handle timeouts appropriately +* Clean up resources after errors +* Validate return values + +## Tool discovery and updates + +MCP supports dynamic tool discovery: + +1. Clients can list available tools at any time +2. Servers can notify clients when tools change using `notifications/tools/list_changed` +3. Tools can be added or removed during runtime +4. Tool definitions can be updated (though this should be done carefully) + +## Error handling + +Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error: + +1. Set `isError` to `true` in the result +2. 
Include error details in the `content` array + +Here's an example of proper error handling for tools: + + + + ```typescript + try { + // Tool operation + const result = performOperation(); + return { + content: [ + { + type: "text", + text: `Operation successful: ${result}` + } + ] + }; + } catch (error) { + return { + isError: true, + content: [ + { + type: "text", + text: `Error: ${error.message}` + } + ] + }; + } + ``` + + + + ```python + try: + # Tool operation + result = perform_operation() + return types.CallToolResult( + content=[ + types.TextContent( + type="text", + text=f"Operation successful: {result}" + ) + ] + ) + except Exception as error: + return types.CallToolResult( + isError=True, + content=[ + types.TextContent( + type="text", + text=f"Error: {str(error)}" + ) + ] + ) + ``` + + + +This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention. + +## Testing tools + +A comprehensive testing strategy for MCP tools should cover: + +* **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately +* **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies +* **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting +* **Performance testing**: Check behavior under load, timeout handling, and resource cleanup +* **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources + + +# Transports +Source: https://modelcontextprotocol.io/docs/concepts/transports + +Learn about MCP's communication mechanisms + +Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received. + +## Message Format + +MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. 
The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages. + +There are three types of JSON-RPC messages used: + +### Requests + +```typescript +{ + jsonrpc: "2.0", + id: number | string, + method: string, + params?: object +} +``` + +### Responses + +```typescript +{ + jsonrpc: "2.0", + id: number | string, + result?: object, + error?: { + code: number, + message: string, + data?: unknown + } +} +``` + +### Notifications + +```typescript +{ + jsonrpc: "2.0", + method: string, + params?: object +} +``` + +## Built-in Transport Types + +MCP includes two standard transport implementations: + +### Standard Input/Output (stdio) + +The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools. + +Use stdio when: + +* Building command-line tools +* Implementing local integrations +* Needing simple process communication +* Working with shell scripts + + + + ```typescript + const server = new Server({ + name: "example-server", + version: "1.0.0" + }, { + capabilities: {} + }); + + const transport = new StdioServerTransport(); + await server.connect(transport); + ``` + + + + ```typescript + const client = new Client({ + name: "example-client", + version: "1.0.0" + }, { + capabilities: {} + }); + + const transport = new StdioClientTransport({ + command: "./server", + args: ["--option", "value"] + }); + await client.connect(transport); + ``` + + + + ```python + app = Server("example-server") + + async with stdio_server() as streams: + await app.run( + streams[0], + streams[1], + app.create_initialization_options() + ) + ``` + + + + ```python + params = StdioServerParameters( + command="./server", + args=["--option", "value"] + ) + + async with stdio_client(params) as streams: + async with ClientSession(streams[0], streams[1]) as session: + await 
session.initialize() + ``` + + + +### Server-Sent Events (SSE) + +SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication. + +Use SSE when: + +* Only server-to-client streaming is needed +* Working with restricted networks +* Implementing simple updates + + + + ```typescript + import express from "express"; + + const app = express(); + + const server = new Server({ + name: "example-server", + version: "1.0.0" + }, { + capabilities: {} + }); + + let transport: SSEServerTransport | null = null; + + app.get("/sse", (req, res) => { + transport = new SSEServerTransport("/messages", res); + server.connect(transport); + }); + + app.post("/messages", (req, res) => { + if (transport) { + transport.handlePostMessage(req, res); + } + }); + + app.listen(3000); + ``` + + + + ```typescript + const client = new Client({ + name: "example-client", + version: "1.0.0" + }, { + capabilities: {} + }); + + const transport = new SSEClientTransport( + new URL("http://localhost:3000/sse") + ); + await client.connect(transport); + ``` + + + + ```python + from mcp.server.sse import SseServerTransport + from starlette.applications import Starlette + from starlette.routing import Route + + app = Server("example-server") + sse = SseServerTransport("/messages") + + async def handle_sse(scope, receive, send): + async with sse.connect_sse(scope, receive, send) as streams: + await app.run(streams[0], streams[1], app.create_initialization_options()) + + async def handle_messages(scope, receive, send): + await sse.handle_post_message(scope, receive, send) + + starlette_app = Starlette( + routes=[ + Route("/sse", endpoint=handle_sse), + Route("/messages", endpoint=handle_messages, methods=["POST"]), + ] + ) + ``` + + + + ```python + async with sse_client("http://localhost:8000/sse") as streams: + async with ClientSession(streams[0], streams[1]) as session: + await session.initialize() + ``` + + + +## Custom Transports + +MCP makes it easy to 
implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface.
+
+You can implement custom transports for:
+
+* Custom network protocols
+* Specialized communication channels
+* Integration with existing systems
+* Performance optimization
+
+  ```typescript
+  interface Transport {
+    // Start processing messages
+    start(): Promise<void>;
+
+    // Send a JSON-RPC message
+    send(message: JSONRPCMessage): Promise<void>;
+
+    // Close the connection
+    close(): Promise<void>;
+
+    // Callbacks
+    onclose?: () => void;
+    onerror?: (error: Error) => void;
+    onmessage?: (message: JSONRPCMessage) => void;
+  }
+  ```
+
+  Note that while MCP Servers are often implemented with asyncio, we recommend
+  implementing low-level interfaces like transports with `anyio` for wider compatibility.
+
+  ```python
+  @asynccontextmanager
+  async def create_transport(
+      read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
+      write_stream: MemoryObjectSendStream[JSONRPCMessage]
+  ):
+      """
+      Transport interface for MCP.
+
+      Args:
+          read_stream: Stream to read incoming messages from
+          write_stream: Stream to write outgoing messages to
+      """
+      async with anyio.create_task_group() as tg:
+          try:
+              # Start processing messages
+              tg.start_soon(lambda: process_messages(read_stream))
+
+              # Send messages
+              async with write_stream:
+                  yield write_stream
+
+          except Exception as exc:
+              # Handle errors
+              raise exc
+          finally:
+              # Clean up
+              tg.cancel_scope.cancel()
+              await write_stream.aclose()
+              await read_stream.aclose()
+  ```
+
+## Error Handling
+
+Transport implementations should handle various error scenarios:
+
+1. Connection errors
+2. Message parsing errors
+3. Protocol errors
+4. Network timeouts
+5. 
Resource cleanup
+
+Example error handling:
+
+  ```typescript
+  class ExampleTransport implements Transport {
+    async start() {
+      try {
+        // Connection logic
+      } catch (error) {
+        this.onerror?.(new Error(`Failed to connect: ${error}`));
+        throw error;
+      }
+    }
+
+    async send(message: JSONRPCMessage) {
+      try {
+        // Sending logic
+      } catch (error) {
+        this.onerror?.(new Error(`Failed to send message: ${error}`));
+        throw error;
+      }
+    }
+  }
+  ```
+
+  Note that while MCP Servers are often implemented with asyncio, we recommend
+  implementing low-level interfaces like transports with `anyio` for wider compatibility.
+
+  ```python
+  @asynccontextmanager
+  async def example_transport(scope: Scope, receive: Receive, send: Send):
+      try:
+          # Create streams for bidirectional communication
+          read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
+          write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
+
+          async def message_handler():
+              try:
+                  async with read_stream_writer:
+                      # Message handling logic
+                      pass
+              except Exception as exc:
+                  logger.error(f"Failed to handle message: {exc}")
+                  raise exc
+
+          async with anyio.create_task_group() as tg:
+              tg.start_soon(message_handler)
+              try:
+                  # Yield streams for communication
+                  yield read_stream, write_stream
+              except Exception as exc:
+                  logger.error(f"Transport error: {exc}")
+                  raise exc
+              finally:
+                  tg.cancel_scope.cancel()
+                  await write_stream.aclose()
+                  await read_stream.aclose()
+      except Exception as exc:
+          logger.error(f"Failed to initialize transport: {exc}")
+          raise exc
+  ```
+
+## Best Practices
+
+When implementing or using MCP transport:
+
+1. Handle connection lifecycle properly
+2. Implement proper error handling
+3. Clean up resources on connection close
+4. Use appropriate timeouts
+5. Validate messages before sending
+6. Log transport events for debugging
+7. Implement reconnection logic when appropriate
+8. Handle backpressure in message queues
+9. Monitor connection health
+10. 
Implement proper security measures + +## Security Considerations + +When implementing transport: + +### Authentication and Authorization + +* Implement proper authentication mechanisms +* Validate client credentials +* Use secure token handling +* Implement authorization checks + +### Data Security + +* Use TLS for network transport +* Encrypt sensitive data +* Validate message integrity +* Implement message size limits +* Sanitize input data + +### Network Security + +* Implement rate limiting +* Use appropriate timeouts +* Handle denial of service scenarios +* Monitor for unusual patterns +* Implement proper firewall rules + +## Debugging Transport + +Tips for debugging transport issues: + +1. Enable debug logging +2. Monitor message flow +3. Check connection states +4. Validate message formats +5. Test error scenarios +6. Use network analysis tools +7. Implement health checks +8. Monitor resource usage +9. Test edge cases +10. Use proper error tracking + + +# Debugging +Source: https://modelcontextprotocol.io/docs/tools/debugging + +A comprehensive guide to debugging Model Context Protocol (MCP) integrations + +Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem. + + + This guide is for macOS. Guides for other platforms are coming soon. + + +## Debugging tools overview + +MCP provides several tools for debugging at different levels: + +1. **MCP Inspector** + * Interactive debugging interface + * Direct server testing + * See the [Inspector guide](/docs/tools/inspector) for details + +2. **Claude Desktop Developer Tools** + * Integration testing + * Log collection + * Chrome DevTools integration + +3. 
**Server Logging** + * Custom logging implementations + * Error tracking + * Performance monitoring + +## Debugging in Claude Desktop + +### Checking server status + +The Claude.app interface provides basic server status information: + +1. Click the icon to view: + * Connected servers + * Available prompts and resources + +2. Click the icon to view: + * Tools made available to the model + +### Viewing logs + +Review detailed MCP logs from Claude Desktop: + +```bash +# Follow logs in real-time +tail -n 20 -F ~/Library/Logs/Claude/mcp*.log +``` + +The logs capture: + +* Server connection events +* Configuration issues +* Runtime errors +* Message exchanges + +### Using Chrome DevTools + +Access Chrome's developer tools inside Claude Desktop to investigate client-side errors: + +1. Create a `developer_settings.json` file with `allowDevTools` set to true: + +```bash +echo '{"allowDevTools": true}' > ~/Library/Application\ Support/Claude/developer_settings.json +``` + +2. Open DevTools: `Command-Option-Shift-i` + +Note: You'll see two DevTools windows: + +* Main content window +* App title bar window + +Use the Console panel to inspect client-side errors. 
Use the Network panel to inspect:
+
+* Message payloads
+* Connection timing
+
+## Common issues
+
+### Working directory
+
+When using MCP servers with Claude Desktop:
+
+* The working directory for servers launched via `claude_desktop_config.json` may be undefined (like `/` on macOS) since Claude Desktop could be started from anywhere
+* Always use absolute paths in your configuration and `.env` files to ensure reliable operation
+* For testing servers directly via command line, the working directory will be where you run the command
+
+For example in `claude_desktop_config.json`, use:
+
+```json
+{
+  "command": "npx",
+  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/data"]
+}
+```
+
+Instead of relative paths like `./data`.
+
+### Environment variables
+
+MCP servers inherit only a subset of environment variables automatically, like `USER`, `HOME`, and `PATH`.
+
+To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`:
+
+```json
+{
+  "myserver": {
+    "command": "mcp-server-myapp",
+    "env": {
+      "MYAPP_API_KEY": "some_key"
+    }
+  }
+}
+```
+
+### Server initialization
+
+Common initialization problems:
+
+1. **Path Issues**
+   * Incorrect server executable path
+   * Missing required files
+   * Permission problems
+   * Try using an absolute path for `command`
+
+2. **Configuration Errors**
+   * Invalid JSON syntax
+   * Missing required fields
+   * Type mismatches
+
+3. **Environment Problems**
+   * Missing environment variables
+   * Incorrect variable values
+   * Permission restrictions
+
+### Connection problems
+
+When servers fail to connect:
+
+1. Check Claude Desktop logs
+2. Verify server process is running
+3. Test standalone with [Inspector](/docs/tools/inspector)
+4. 
Verify protocol compatibility
+
+## Implementing logging
+
+### Server-side logging
+
+When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically.
+
+  Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation.
+
+For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification:
+
+  ```python
+  await server.request_context.session.send_log_message(
+      level="info",
+      data="Server started successfully",
+  )
+  ```
+
+  ```typescript
+  server.sendLoggingMessage({
+    level: "info",
+    data: "Server started successfully",
+  });
+  ```
+
+Important events to log:
+
+* Initialization steps
+* Resource access
+* Tool execution
+* Error conditions
+* Performance metrics
+
+### Client-side logging
+
+In client applications:
+
+1. Enable debug logging
+2. Monitor network traffic
+3. Track message exchanges
+4. Record error states
+
+## Debugging workflow
+
+### Development cycle
+
+1. Initial Development
+   * Use [Inspector](/docs/tools/inspector) for basic testing
+   * Implement core functionality
+   * Add logging points
+
+2. Integration Testing
+   * Test in Claude Desktop
+   * Monitor logs
+   * Check error handling
+
+### Testing changes
+
+To test changes efficiently:
+
+* **Configuration changes**: Restart Claude Desktop
+* **Server code changes**: Use Command-R to reload
+* **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development
+
+## Best practices
+
+### Logging strategy
+
+1. **Structured Logging**
+   * Use consistent formats
+   * Include context
+   * Add timestamps
+   * Track request IDs
+
+2. **Error Handling**
+   * Log stack traces
+   * Include error context
+   * Track error patterns
+   * Monitor recovery
+
+3. 
**Performance Tracking**
+   * Log operation timing
+   * Monitor resource usage
+   * Track message sizes
+   * Measure latency
+
+### Security considerations
+
+When debugging:
+
+1. **Sensitive Data**
+   * Sanitize logs
+   * Protect credentials
+   * Mask personal information
+
+2. **Access Control**
+   * Verify permissions
+   * Check authentication
+   * Monitor access patterns
+
+## Getting help
+
+When encountering issues:
+
+1. **First Steps**
+   * Check server logs
+   * Test with [Inspector](/docs/tools/inspector)
+   * Review configuration
+   * Verify environment
+
+2. **Support Channels**
+   * GitHub issues
+   * GitHub discussions
+
+3. **Providing Information**
+   * Log excerpts
+   * Configuration files
+   * Steps to reproduce
+   * Environment details
+
+## Next steps
+
+  Learn to use the MCP Inspector
+
+# Inspector
+Source: https://modelcontextprotocol.io/docs/tools/inspector
+
+In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
+
+The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
+
+## Getting started
+
+### Installation and basic usage
+
+The Inspector runs directly through `npx` without requiring installation:
+
+```bash
+npx @modelcontextprotocol/inspector <command>
+```
+
+```bash
+npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
+```
+
+#### Inspecting servers from NPM or PyPI
+
+A common way to start server packages is directly from [NPM](https://npmjs.com) or [PyPI](https://pypi.org): 
+
+  ```bash
+  npx -y @modelcontextprotocol/inspector npx <package-name> <args>
+  # For example
+  npx -y @modelcontextprotocol/inspector npx server-postgres postgres://127.0.0.1/testdb
+  ```
+
+  ```bash
+  npx @modelcontextprotocol/inspector uvx <package-name> <args>
+  # For example
+  npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
+  ```
+
+#### Inspecting locally developed servers
+
+To inspect servers locally developed or downloaded as a repository, the most common
+way is:
+
+  ```bash
+  npx @modelcontextprotocol/inspector node path/to/server/index.js args...
+  ```
+
+  ```bash
+  npx @modelcontextprotocol/inspector \
+    uv \
+    --directory path/to/server \
+    run \
+    package-name \
+    args...
+  ```
+
+Please carefully read any attached README for the most accurate instructions.
+
+## Feature overview
+
+The Inspector provides several features for interacting with your MCP server:
+
+### Server connection pane
+
+* Allows selecting the [transport](/docs/concepts/transports) for connecting to the server
+* For local servers, supports customizing the command-line arguments and environment
+
+### Resources tab
+
+* Lists all available resources
+* Shows resource metadata (MIME types, descriptions)
+* Allows resource content inspection
+* Supports subscription testing
+
+### Prompts tab
+
+* Displays available prompt templates
+* Shows prompt arguments and descriptions
+* Enables prompt testing with custom arguments
+* Previews generated messages
+
+### Tools tab
+
+* Lists available tools
+* Shows tool schemas and descriptions
+* Enables tool testing with custom inputs
+* Displays tool execution results
+
+### Notifications pane
+
+* Presents all logs recorded from the server
+* Shows notifications received from the server
+
+## Best practices
+
+### Development workflow
+
+1. Start Development
+   * Launch Inspector with your server
+   * Verify basic connectivity
+   * Check capability negotiation
+
+2. 
Iterative testing + * Make server changes + * Rebuild the server + * Reconnect the Inspector + * Test affected features + * Monitor messages + +3. Test edge cases + * Invalid inputs + * Missing prompt arguments + * Concurrent operations + * Verify error handling and error responses + +## Next steps + + + + Check out the MCP Inspector source code + + + + Learn about broader debugging strategies + + + + +# Example Servers +Source: https://modelcontextprotocol.io/examples + +A list of example servers and implementations + +This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources. + +## Reference implementations + +These official reference servers demonstrate core MCP features and SDK usage: + +### Data and file systems + +* **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls +* **[PostgreSQL](https://github.com/modelcontextprotocol/servers/tree/main/src/postgres)** - Read-only database access with schema inspection capabilities +* **[SQLite](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite)** - Database interaction and business intelligence features +* **[Google Drive](https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive)** - File access and search capabilities for Google Drive + +### Development tools + +* **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories +* **[GitHub](https://github.com/modelcontextprotocol/servers/tree/main/src/github)** - Repository management, file operations, and GitHub API integration +* **[GitLab](https://github.com/modelcontextprotocol/servers/tree/main/src/gitlab)** - GitLab API integration enabling project management +* 
**[Sentry](https://github.com/modelcontextprotocol/servers/tree/main/src/sentry)** - Retrieving and analyzing issues from Sentry.io + +### Web and browser automation + +* **[Brave Search](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search)** - Web and local search using Brave's Search API +* **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion optimized for LLM usage +* **[Puppeteer](https://github.com/modelcontextprotocol/servers/tree/main/src/puppeteer)** - Browser automation and web scraping capabilities + +### Productivity and communication + +* **[Slack](https://github.com/modelcontextprotocol/servers/tree/main/src/slack)** - Channel management and messaging capabilities +* **[Google Maps](https://github.com/modelcontextprotocol/servers/tree/main/src/google-maps)** - Location services, directions, and place details +* **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system + +### AI and specialized tools + +* **[EverArt](https://github.com/modelcontextprotocol/servers/tree/main/src/everart)** - AI image generation using various models +* **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic problem-solving through thought sequences +* **[AWS KB Retrieval](https://github.com/modelcontextprotocol/servers/tree/main/src/aws-kb-retrieval-server)** - Retrieval from AWS Knowledge Base using Bedrock Agent Runtime + +## Official integrations + +These MCP servers are maintained by companies for their platforms: + +* **[Axiom](https://github.com/axiomhq/mcp-server-axiom)** - Query and analyze logs, traces, and event data using natural language +* **[Browserbase](https://github.com/browserbase/mcp-server-browserbase)** - Automate browser interactions in the cloud +* **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy 
and manage resources on the Cloudflare developer platform +* **[E2B](https://github.com/e2b-dev/mcp-server)** - Execute code in secure cloud sandboxes +* **[Neon](https://github.com/neondatabase/mcp-server-neon)** - Interact with the Neon serverless Postgres platform +* **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through Markdown notes in Obsidian vaults +* **[Qdrant](https://github.com/qdrant/mcp-server-qdrant/)** - Implement semantic memory using the Qdrant vector search engine +* **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Access crash reporting and monitoring data +* **[Search1API](https://github.com/fatwang2/search1api-mcp)** - Unified API for search, crawling, and sitemaps +* **[Stripe](https://github.com/stripe/agent-toolkit)** - Interact with the Stripe API +* **[Tinybird](https://github.com/tinybirdco/mcp-tinybird)** - Interface with the Tinybird serverless ClickHouse platform + +## Community highlights + +A growing ecosystem of community-developed servers extends MCP's capabilities: + +* **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Manage containers, images, volumes, and networks +* **[Kubernetes](https://github.com/Flux159/mcp-server-kubernetes)** - Manage pods, deployments, and services +* **[Linear](https://github.com/jerhadf/linear-mcp-server)** - Project management and issue tracking +* **[Snowflake](https://github.com/datawiz168/mcp-snowflake-service)** - Interact with Snowflake databases +* **[Spotify](https://github.com/varunneal/spotify-mcp)** - Control Spotify playback and manage playlists +* **[Todoist](https://github.com/abhiz123/todoist-mcp-server)** - Task management integration + +> **Note:** Community servers are untested and should be used at your own risk. They are not affiliated with or endorsed by Anthropic. + +For a complete list of community servers, visit the [MCP Servers Repository](https://github.com/modelcontextprotocol/servers). 
+ +## Getting started + +### Using reference servers + +TypeScript-based servers can be used directly with `npx`: + +```bash +npx -y @modelcontextprotocol/server-memory +``` + +Python-based servers can be used with `uvx` (recommended) or `pip`: + +```bash +# Using uvx +uvx mcp-server-git + +# Using pip +pip install mcp-server-git +python -m mcp_server_git +``` + +### Configuring with Claude + +To use an MCP server with Claude, add it to your configuration: + +```json +{ + "mcpServers": { + "memory": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-memory"] + }, + "filesystem": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"] + }, + "github": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-github"], + "env": { + "GITHUB_PERSONAL_ACCESS_TOKEN": "" + } + } + } +} +``` + +## Additional resources + +* [MCP Servers Repository](https://github.com/modelcontextprotocol/servers) - Complete collection of reference implementations and community servers +* [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP servers +* [MCP CLI](https://github.com/wong2/mcp-cli) - Command-line inspector for testing MCP servers +* [MCP Get](https://mcp-get.com) - Tool for installing and managing MCP servers +* [Supergateway](https://github.com/supercorp-ai/supergateway) - Run MCP stdio servers over SSE + +Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community. + + +# Introduction +Source: https://modelcontextprotocol.io/introduction + +Get started with the Model Context Protocol (MCP) + +Java SDK released! Check out [what else is new.](/development/updates) + +MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. 
Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools. + +## Why MCP? + +MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides: + +* A growing list of pre-built integrations that your LLM can directly plug into +* The flexibility to switch between LLM providers and vendors +* Best practices for securing your data within your infrastructure + +### General architecture + +At its core, MCP follows a client-server architecture where a host application can connect to multiple servers: + +```mermaid +flowchart LR + subgraph "Your Computer" + Host["Host with MCP Client\n(Claude, IDEs, Tools)"] + S1["MCP Server A"] + S2["MCP Server B"] + S3["MCP Server C"] + Host <-->|"MCP Protocol"| S1 + Host <-->|"MCP Protocol"| S2 + Host <-->|"MCP Protocol"| S3 + S1 <--> D1[("Local\nData Source A")] + S2 <--> D2[("Local\nData Source B")] + end + subgraph "Internet" + S3 <-->|"Web APIs"| D3[("Remote\nService C")] + end +``` + +* **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP +* **MCP Clients**: Protocol clients that maintain 1:1 connections with servers +* **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol +* **Local Data Sources**: Your computer's files, databases, and services that MCP servers can securely access +* **Remote Services**: External systems available over the internet (e.g., through APIs) that MCP servers can connect to + +## Get started + +Choose the path that best fits your needs: + +#### Quick Starts + + + + Get started building your own server to use in Claude for Desktop and other clients + + + + Get started building your own client that can integrate with all MCP servers + + + + Get started using pre-built servers in 
Claude for Desktop + + + +#### Examples + + + + Check out our gallery of official MCP servers and implementations + + + + View the list of clients that support MCP integrations + + + +## Tutorials + + + + Learn how to use LLMs like Claude to speed up your MCP development + + + + Learn how to effectively debug MCP servers and integrations + + + + Test and inspect your MCP servers with our interactive debugging tool + + + +## Explore MCP + +Dive deeper into MCP's core concepts and capabilities: + + + + Understand how MCP connects clients, servers, and LLMs + + + + Expose data and content from your servers to LLMs + + + + Create reusable prompt templates and workflows + + + + Enable LLMs to perform actions through your server + + + + Let your servers request completions from LLMs + + + + Learn about MCP's communication mechanism + + + +## Contributing + +Want to contribute? Check out our [Contributing Guide](/development/contributing) to learn how you can help improve MCP. + +## Support and Feedback + +Here's how to get help or provide feedback: + +* For bug reports and feature requests related to the MCP specification, SDKs, or documentation (open source), please [create a GitHub issue](https://github.com/modelcontextprotocol) +* For discussions or Q\&A about the MCP specification, use the [specification discussions](https://github.com/modelcontextprotocol/specification/discussions) +* For discussions or Q\&A about other MCP open source components, use the [organization discussions](https://github.com/orgs/modelcontextprotocol/discussions) +* For bug reports, feature requests, and questions related to Claude.app and claude.ai's MCP integration, please email [mcp-support@anthropic.com](mailto:mcp-support@anthropic.com) + + +# For Client Developers +Source: https://modelcontextprotocol.io/quickstart/client + +Get started building your own client that can integrate with all MCP servers. 
+
+In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers. It helps to have gone through the [Server quickstart](/quickstart/server), which guides you through the basics of building your first server.
+
+
+
+  [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-python)
+
+  ## System Requirements
+
+  Before starting, ensure your system meets these requirements:
+
+  * Mac or Windows computer
+  * Latest Python version installed
+  * Latest version of `uv` installed
+
+  ## Setting Up Your Environment
+
+  First, create a new Python project with `uv`:
+
+  ```bash
+  # Create project directory
+  uv init mcp-client
+  cd mcp-client
+
+  # Create virtual environment
+  uv venv
+
+  # Activate virtual environment
+  # On Windows:
+  .venv\Scripts\activate
+  # On Unix or MacOS:
+  source .venv/bin/activate
+
+  # Install required packages
+  uv add mcp anthropic python-dotenv
+
+  # Remove boilerplate files
+  rm hello.py
+
+  # Create our main file
+  touch client.py
+  ```
+
+  ## Setting Up Your API Key
+
+  You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
+
+  Create a `.env` file to store it:
+
+  ```bash
+  # Create .env file
+  touch .env
+  ```
+
+  Add your key to the `.env` file:
+
+  ```bash
+  ANTHROPIC_API_KEY=
+  ```
+
+  Add `.env` to your `.gitignore`:
+
+  ```bash
+  echo ".env" >> .gitignore
+  ```
+
+  
+    Make sure you keep your `ANTHROPIC_API_KEY` secure! 
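+
The tutorial installs `python-dotenv` and later calls `load_dotenv()` to pull the key out of `.env`. Conceptually, all that does is parse `KEY=VALUE` lines and fail fast if the key is missing. A minimal stdlib-only sketch of the idea (illustrative only — not the library's actual implementation; `parse_env` is a hypothetical helper):

```python
import os

def parse_env(text: str) -> dict[str, str]:
    """Parse .env-style KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip optional surrounding quotes around the value
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

# Fail fast if the key is missing, as the Anthropic client would at request time.
env = parse_env("# local secrets\nANTHROPIC_API_KEY=sk-example\n")
if not env.get("ANTHROPIC_API_KEY"):
    raise RuntimeError("ANTHROPIC_API_KEY is not set")
os.environ.setdefault("ANTHROPIC_API_KEY", env["ANTHROPIC_API_KEY"])
```

In practice you should keep using `load_dotenv()`; the sketch just shows why a missing or malformed `.env` surfaces as an authentication error later rather than at startup.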
+ + + ## Creating the Client + + ### Basic Client Structure + + First, let's set up our imports and create the basic client class: + + ```python + import asyncio + from typing import Optional + from contextlib import AsyncExitStack + + from mcp import ClientSession, StdioServerParameters + from mcp.client.stdio import stdio_client + + from anthropic import Anthropic + from dotenv import load_dotenv + + load_dotenv() # load environment variables from .env + + class MCPClient: + def __init__(self): + # Initialize session and client objects + self.session: Optional[ClientSession] = None + self.exit_stack = AsyncExitStack() + self.anthropic = Anthropic() + # methods will go here + ``` + + ### Server Connection Management + + Next, we'll implement the method to connect to an MCP server: + + ```python + async def connect_to_server(self, server_script_path: str): + """Connect to an MCP server + + Args: + server_script_path: Path to the server script (.py or .js) + """ + is_python = server_script_path.endswith('.py') + is_js = server_script_path.endswith('.js') + if not (is_python or is_js): + raise ValueError("Server script must be a .py or .js file") + + command = "python" if is_python else "node" + server_params = StdioServerParameters( + command=command, + args=[server_script_path], + env=None + ) + + stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params)) + self.stdio, self.write = stdio_transport + self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write)) + + await self.session.initialize() + + # List available tools + response = await self.session.list_tools() + tools = response.tools + print("\nConnected to server with tools:", [tool.name for tool in tools]) + ``` + + ### Query Processing Logic + + Now let's add the core functionality for processing queries and handling tool calls: + + ```python + async def process_query(self, query: str) -> str: + """Process a query using Claude and available 
tools""" + messages = [ + { + "role": "user", + "content": query + } + ] + + response = await self.session.list_tools() + available_tools = [{ + "name": tool.name, + "description": tool.description, + "input_schema": tool.inputSchema + } for tool in response.tools] + + # Initial Claude API call + response = self.anthropic.messages.create( + model="claude-3-5-sonnet-20241022", + max_tokens=1000, + messages=messages, + tools=available_tools + ) + + # Process response and handle tool calls + final_text = [] + + assistant_message_content = [] + for content in response.content: + if content.type == 'text': + final_text.append(content.text) + assistant_message_content.append(content) + elif content.type == 'tool_use': + tool_name = content.name + tool_args = content.input + + # Execute tool call + result = await self.session.call_tool(tool_name, tool_args) + final_text.append(f"[Calling tool {tool_name} with args {tool_args}]") + + assistant_message_content.append(content) + messages.append({ + "role": "assistant", + "content": assistant_message_content + }) + messages.append({ + "role": "user", + "content": [ + { + "type": "tool_result", + "tool_use_id": content.id, + "content": result.content + } + ] + }) + + # Get next response from Claude + response = self.anthropic.messages.create( + model="claude-3-5-sonnet-20241022", + max_tokens=1000, + messages=messages, + tools=available_tools + ) + + final_text.append(response.content[0].text) + + return "\n".join(final_text) + ``` + + ### Interactive Chat Interface + + Now we'll add the chat loop and cleanup functionality: + + ```python + async def chat_loop(self): + """Run an interactive chat loop""" + print("\nMCP Client Started!") + print("Type your queries or 'quit' to exit.") + + while True: + try: + query = input("\nQuery: ").strip() + + if query.lower() == 'quit': + break + + response = await self.process_query(query) + print("\n" + response) + + except Exception as e: + print(f"\nError: {str(e)}") + + async def 
cleanup(self): + """Clean up resources""" + await self.exit_stack.aclose() + ``` + + ### Main Entry Point + + Finally, we'll add the main execution logic: + + ```python + async def main(): + if len(sys.argv) < 2: + print("Usage: python client.py ") + sys.exit(1) + + client = MCPClient() + try: + await client.connect_to_server(sys.argv[1]) + await client.chat_loop() + finally: + await client.cleanup() + + if __name__ == "__main__": + import sys + asyncio.run(main()) + ``` + + You can find the complete `client.py` file [here.](https://gist.github.com/zckly/f3f28ea731e096e53b39b47bf0a2d4b1) + + ## Key Components Explained + + ### 1. Client Initialization + + * The `MCPClient` class initializes with session management and API clients + * Uses `AsyncExitStack` for proper resource management + * Configures the Anthropic client for Claude interactions + + ### 2. Server Connection + + * Supports both Python and Node.js servers + * Validates server script type + * Sets up proper communication channels + * Initializes the session and lists available tools + + ### 3. Query Processing + + * Maintains conversation context + * Handles Claude's responses and tool calls + * Manages the message flow between Claude and tools + * Combines results into a coherent response + + ### 4. Interactive Interface + + * Provides a simple command-line interface + * Handles user input and displays responses + * Includes basic error handling + * Allows graceful exit + + ### 5. Resource Management + + * Proper cleanup of resources + * Error handling for connection issues + * Graceful shutdown procedures + + ## Common Customization Points + + 1. **Tool Handling** + * Modify `process_query()` to handle specific tool types + * Add custom error handling for tool calls + * Implement tool-specific response formatting + + 2. **Response Processing** + * Customize how tool results are formatted + * Add response filtering or transformation + * Implement custom logging + + 3. 
**User Interface** + * Add a GUI or web interface + * Implement rich console output + * Add command history or auto-completion + + ## Running the Client + + To run your client with any MCP server: + + ```bash + uv run client.py path/to/server.py # python server + uv run client.py path/to/build/index.js # node server + ``` + + + If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `python client.py .../weather/src/weather/server.py` + + + The client will: + + 1. Connect to the specified server + 2. List available tools + 3. Start an interactive chat session where you can: + * Enter queries + * See tool executions + * Get responses from Claude + + Here's an example of what it should look like if connected to the weather server from the server quickstart: + + + + + + ## How It Works + + When you submit a query: + + 1. The client gets the list of available tools from the server + 2. Your query is sent to Claude along with tool descriptions + 3. Claude decides which tools (if any) to use + 4. The client executes any requested tool calls through the server + 5. Results are sent back to Claude + 6. Claude provides a natural language response + 7. The response is displayed to you + + ## Best practices + + 1. **Error Handling** + * Always wrap tool calls in try-catch blocks + * Provide meaningful error messages + * Gracefully handle connection issues + + 2. **Resource Management** + * Use `AsyncExitStack` for proper cleanup + * Close connections when done + * Handle server disconnections + + 3. 
**Security** + * Store API keys securely in `.env` + * Validate server responses + * Be cautious with tool permissions + + ## Troubleshooting + + ### Server Path Issues + + * Double-check the path to your server script is correct + * Use the absolute path if the relative path isn't working + * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path + * Verify the server file has the correct extension (.py for Python or .js for Node.js) + + Example of correct path usage: + + ```bash + # Relative path + uv run client.py ./server/weather.py + + # Absolute path + uv run client.py /Users/username/projects/mcp-server/weather.py + + # Windows path (either format works) + uv run client.py C:/projects/mcp-server/weather.py + uv run client.py C:\\projects\\mcp-server\\weather.py + ``` + + ### Response Timing + + * The first response might take up to 30 seconds to return + * This is normal and happens while: + * The server initializes + * Claude processes the query + * Tools are being executed + * Subsequent responses are typically faster + * Don't interrupt the process during this initial waiting period + + ### Common Error Messages + + If you see: + + * `FileNotFoundError`: Check your server path + * `Connection refused`: Ensure the server is running and the path is correct + * `Tool execution failed`: Verify the tool's required environment variables are set + * `Timeout error`: Consider increasing the timeout in your client configuration + + + + [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-typescript) + + ## System Requirements + + Before starting, ensure your system meets these requirements: + + * Mac or Windows computer + * Node.js 16 or higher installed + * Latest version of `npm` installed + * Anthropic API key (Claude) + + ## Setting Up Your Environment + + First, let's create and set up our project: + + + ```bash MacOS/Linux + # Create 
project directory + mkdir mcp-client-typescript + cd mcp-client-typescript + + # Initialize npm project + npm init -y + + # Install dependencies + npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv + + # Install dev dependencies + npm install -D @types/node typescript + + # Create source file + touch index.ts + ``` + + ```powershell Windows + # Create project directory + md mcp-client-typescript + cd mcp-client-typescript + + # Initialize npm project + npm init -y + + # Install dependencies + npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv + + # Install dev dependencies + npm install -D @types/node typescript + + # Create source file + new-item index.ts + ``` + + + Update your `package.json` to set `type: "module"` and a build script: + + ```json package.json + { + "type": "module", + "scripts": { + "build": "tsc && chmod 755 build/index.js" + } + } + ``` + + Create a `tsconfig.json` in the root of your project: + + ```json tsconfig.json + { + "compilerOptions": { + "target": "ES2022", + "module": "Node16", + "moduleResolution": "Node16", + "outDir": "./build", + "rootDir": "./", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + }, + "include": ["index.ts"], + "exclude": ["node_modules"] + } + ``` + + ## Setting Up Your API Key + + You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys). + + Create a `.env` file to store it: + + ```bash + echo "ANTHROPIC_API_KEY=" > .env + ``` + + Add `.env` to your `.gitignore`: + + ```bash + echo ".env" >> .gitignore + ``` + + + Make sure you keep your `ANTHROPIC_API_KEY` secure! 
+ + + ## Creating the Client + + ### Basic Client Structure + + First, let's set up our imports and create the basic client class in `index.ts`: + + ```typescript + import { Anthropic } from "@anthropic-ai/sdk"; + import { + MessageParam, + Tool, + } from "@anthropic-ai/sdk/resources/messages/messages.mjs"; + import { Client } from "@modelcontextprotocol/sdk/client/index.js"; + import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js"; + import readline from "readline/promises"; + import dotenv from "dotenv"; + + dotenv.config(); + + const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY; + if (!ANTHROPIC_API_KEY) { + throw new Error("ANTHROPIC_API_KEY is not set"); + } + + class MCPClient { + private mcp: Client; + private anthropic: Anthropic; + private transport: StdioClientTransport | null = null; + private tools: Tool[] = []; + + constructor() { + this.anthropic = new Anthropic({ + apiKey: ANTHROPIC_API_KEY, + }); + this.mcp = new Client({ name: "mcp-client-cli", version: "1.0.0" }); + } + // methods will go here + } + ``` + + ### Server Connection Management + + Next, we'll implement the method to connect to an MCP server: + + ```typescript + async connectToServer(serverScriptPath: string) { + try { + const isJs = serverScriptPath.endsWith(".js"); + const isPy = serverScriptPath.endsWith(".py"); + if (!isJs && !isPy) { + throw new Error("Server script must be a .js or .py file"); + } + const command = isPy + ? process.platform === "win32" + ? 
"python" + : "python3" + : process.execPath; + + this.transport = new StdioClientTransport({ + command, + args: [serverScriptPath], + }); + this.mcp.connect(this.transport); + + const toolsResult = await this.mcp.listTools(); + this.tools = toolsResult.tools.map((tool) => { + return { + name: tool.name, + description: tool.description, + input_schema: tool.inputSchema, + }; + }); + console.log( + "Connected to server with tools:", + this.tools.map(({ name }) => name) + ); + } catch (e) { + console.log("Failed to connect to MCP server: ", e); + throw e; + } + } + ``` + + ### Query Processing Logic + + Now let's add the core functionality for processing queries and handling tool calls: + + ```typescript + async processQuery(query: string) { + const messages: MessageParam[] = [ + { + role: "user", + content: query, + }, + ]; + + const response = await this.anthropic.messages.create({ + model: "claude-3-5-sonnet-20241022", + max_tokens: 1000, + messages, + tools: this.tools, + }); + + const finalText = []; + const toolResults = []; + + for (const content of response.content) { + if (content.type === "text") { + finalText.push(content.text); + } else if (content.type === "tool_use") { + const toolName = content.name; + const toolArgs = content.input as { [x: string]: unknown } | undefined; + + const result = await this.mcp.callTool({ + name: toolName, + arguments: toolArgs, + }); + toolResults.push(result); + finalText.push( + `[Calling tool ${toolName} with args ${JSON.stringify(toolArgs)}]` + ); + + messages.push({ + role: "user", + content: result.content as string, + }); + + const response = await this.anthropic.messages.create({ + model: "claude-3-5-sonnet-20241022", + max_tokens: 1000, + messages, + }); + + finalText.push( + response.content[0].type === "text" ? 
response.content[0].text : "" + ); + } + } + + return finalText.join("\n"); + } + ``` + + ### Interactive Chat Interface + + Now we'll add the chat loop and cleanup functionality: + + ```typescript + async chatLoop() { + const rl = readline.createInterface({ + input: process.stdin, + output: process.stdout, + }); + + try { + console.log("\nMCP Client Started!"); + console.log("Type your queries or 'quit' to exit."); + + while (true) { + const message = await rl.question("\nQuery: "); + if (message.toLowerCase() === "quit") { + break; + } + const response = await this.processQuery(message); + console.log("\n" + response); + } + } finally { + rl.close(); + } + } + + async cleanup() { + await this.mcp.close(); + } + ``` + + ### Main Entry Point + + Finally, we'll add the main execution logic: + + ```typescript + async function main() { + if (process.argv.length < 3) { + console.log("Usage: node index.ts "); + return; + } + const mcpClient = new MCPClient(); + try { + await mcpClient.connectToServer(process.argv[2]); + await mcpClient.chatLoop(); + } finally { + await mcpClient.cleanup(); + process.exit(0); + } + } + + main(); + ``` + + ## Running the Client + + To run your client with any MCP server: + + ```bash + # Build TypeScript + npm run build + + # Run the client + node build/index.js path/to/server.py # python server + node build/index.js path/to/build/index.js # node server + ``` + + + If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `node build/index.js .../quickstart-resources/weather-server-typescript/build/index.js` + + + **The client will:** + + 1. Connect to the specified server + 2. List available tools + 3. Start an interactive chat session where you can: + * Enter queries + * See tool executions + * Get responses from Claude + + ## How It Works + + When you submit a query: + + 1. The client gets the list of available tools from the server + 2. 
Your query is sent to Claude along with tool descriptions + 3. Claude decides which tools (if any) to use + 4. The client executes any requested tool calls through the server + 5. Results are sent back to Claude + 6. Claude provides a natural language response + 7. The response is displayed to you + + ## Best practices + + 1. **Error Handling** + * Use TypeScript's type system for better error detection + * Wrap tool calls in try-catch blocks + * Provide meaningful error messages + * Gracefully handle connection issues + + 2. **Security** + * Store API keys securely in `.env` + * Validate server responses + * Be cautious with tool permissions + + ## Troubleshooting + + ### Server Path Issues + + * Double-check the path to your server script is correct + * Use the absolute path if the relative path isn't working + * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path + * Verify the server file has the correct extension (.js for Node.js or .py for Python) + + Example of correct path usage: + + ```bash + # Relative path + node build/index.js ./server/build/index.js + + # Absolute path + node build/index.js /Users/username/projects/mcp-server/build/index.js + + # Windows path (either format works) + node build/index.js C:/projects/mcp-server/build/index.js + node build/index.js C:\\projects\\mcp-server\\build\\index.js + ``` + + ### Response Timing + + * The first response might take up to 30 seconds to return + * This is normal and happens while: + * The server initializes + * Claude processes the query + * Tools are being executed + * Subsequent responses are typically faster + * Don't interrupt the process during this initial waiting period + + ### Common Error Messages + + If you see: + + * `Error: Cannot find module`: Check your build folder and ensure TypeScript compilation succeeded + * `Connection refused`: Ensure the server is running and the path is correct + * `Tool execution failed`: Verify the tool's required 
environment variables are set + * `ANTHROPIC_API_KEY is not set`: Check your .env file and environment variables + * `TypeError`: Ensure you're using the correct types for tool arguments + + + + + This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters. + To learn how to create sync and async MCP Clients manually, consult the [Java SDK Client](/sdk/java/mcp-client) documentation + + + This example demonstrates how to build an interactive chatbot that combines Spring AI's Model Context Protocol (MCP) with the [Brave Search MCP Server](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search). The application creates a conversational interface powered by Anthropic's Claude AI model that can perform internet searches through Brave Search, enabling natural language interactions with real-time web data. + [You can find the complete code for this tutorial here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/web-search/brave-chatbot) + + ## System Requirements + + Before starting, ensure your system meets these requirements: + + * Java 17 or higher + * Maven 3.6+ + * npx package manager + * Anthropic API key (Claude) + * Brave Search API key + + ## Setting Up Your Environment + + 1. Install npx (Node Package eXecute): + First, make sure to install [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) + and then run: + ```bash + npm install -g npx + ``` + + 2. Clone the repository: + ```bash + git clone https://github.com/spring-projects/spring-ai-examples.git + cd model-context-protocol/brave-chatbot + ``` + + 3. Set up your API keys: + ```bash + export ANTHROPIC_API_KEY='your-anthropic-api-key-here' + export BRAVE_API_KEY='your-brave-api-key-here' + ``` + + 4. Build the application: + ```bash + ./mvnw clean install + ``` + + 5. 
Run the application using Maven:
+     ```bash
+     ./mvnw spring-boot:run
+     ```
+
+  
+    Make sure you keep your `ANTHROPIC_API_KEY` and `BRAVE_API_KEY` keys secure!
+  
+
+  ## How it Works
+
+  The application integrates Spring AI with the Brave Search MCP server through several components:
+
+  ### MCP Client Configuration
+
+  1. Required dependencies in pom.xml:
+
+  ```xml
+  <dependency>
+    <groupId>org.springframework.ai</groupId>
+    <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
+  </dependency>
+  <dependency>
+    <groupId>org.springframework.ai</groupId>
+    <artifactId>spring-ai-anthropic-spring-boot-starter</artifactId>
+  </dependency>
+  ```
+
+  2. Application properties (application.yml):
+
+  ```yml
+  spring:
+    ai:
+      mcp:
+        client:
+          enabled: true
+          name: brave-search-client
+          version: 1.0.0
+          type: SYNC
+          request-timeout: 20s
+          stdio:
+            root-change-notification: true
+            servers-configuration: classpath:/mcp-servers-config.json
+      anthropic:
+        api-key: ${ANTHROPIC_API_KEY}
+  ```
+
+  This activates the `spring-ai-mcp-client-spring-boot-starter` to create one or more `McpClient`s based on the provided server configuration.
+
+  3. 
MCP Server Configuration (`mcp-servers-config.json`): + + ```json + { + "mcpServers": { + "brave-search": { + "command": "npx", + "args": [ + "-y", + "@modelcontextprotocol/server-brave-search" + ], + "env": { + "BRAVE_API_KEY": "" + } + } + } + } + ``` + + ### Chat Implementation + + The chatbot is implemented using Spring AI's ChatClient with MCP tool integration: + + ```java + var chatClient = chatClientBuilder + .defaultSystem("You are useful assistant, expert in AI and Java.") + .defaultTools((Object[]) mcpToolAdapter.toolCallbacks()) + .defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory())) + .build(); + ``` + + Key features: + + * Uses Claude AI model for natural language understanding + * Integrates Brave Search through MCP for real-time web search capabilities + * Maintains conversation memory using InMemoryChatMemory + * Runs as an interactive command-line application + + ### Build and run + + ```bash + ./mvnw clean install + java -jar ./target/ai-mcp-brave-chatbot-0.0.1-SNAPSHOT.jar + ``` + + or + + ```bash + ./mvnw spring-boot:run + ``` + + The application will start an interactive chat session where you can ask questions. The chatbot will use Brave Search when it needs to find information from the internet to answer your queries. 
+
+  The chatbot can:
+
+  * Answer questions using its built-in knowledge
+  * Perform web searches when needed using Brave Search
+  * Remember context from previous messages in the conversation
+  * Combine information from multiple sources to provide comprehensive answers
+
+  ### Advanced Configuration
+
+  The MCP client supports additional configuration options:
+
+  * Client customization through `McpSyncClientCustomizer` or `McpAsyncClientCustomizer`
+  * Multiple clients with multiple transport types: `STDIO` and `SSE` (Server-Sent Events)
+  * Integration with Spring AI's tool execution framework
+  * Automatic client initialization and lifecycle management
+
+  For WebFlux-based applications, you can use the WebFlux starter instead:
+
+  ```xml
+  <dependency>
+    <groupId>org.springframework.ai</groupId>
+    <artifactId>spring-ai-mcp-client-webflux-spring-boot-starter</artifactId>
+  </dependency>
+  ```
+
+  This provides similar functionality but uses a WebFlux-based SSE transport implementation, recommended for production deployments.
+
+
+## Next steps
+
+
+  Check out our gallery of official MCP servers and implementations
+
+
+  View the list of clients that support MCP integrations
+
+
+  Learn how to use LLMs like Claude to speed up your MCP development
+
+
+  Understand how MCP connects clients, servers, and LLMs
+
+
+# For Server Developers
+Source: https://modelcontextprotocol.io/quickstart/server
+
+Get started building your own server to use in Claude for Desktop and other clients.
+
+In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop. We'll start with a basic setup, and then progress to more complex use cases.
+
+### What we'll be building
+
+Many LLMs (including Claude) do not currently have the ability to fetch the forecast and severe weather alerts. Let's use MCP to solve that!
+
+We'll build a server that exposes two tools: `get-alerts` and `get-forecast`. 
Then we'll connect the server to an MCP host (in this case, Claude for Desktop): + + + + + + + + + + + Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/quickstart/client) as well as a [list of other clients here](/clients). + + + + Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development. + + +### Core MCP Concepts + +MCP servers can provide three main types of capabilities: + +1. **Resources**: File-like data that can be read by clients (like API responses or file contents) +2. **Tools**: Functions that can be called by the LLM (with user approval) +3. **Prompts**: Pre-written templates that help users accomplish specific tasks + +This tutorial will primarily focus on tools. + + + + Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python) + + ### Prerequisite knowledge + + This quickstart assumes you have familiarity with: + + * Python + * LLMs like Claude + + ### System requirements + + * Python 3.10 or higher installed. + * You must use the Python MCP SDK 1.2.0 or higher. + + ### Set up your environment + + First, let's install `uv` and set up our Python project and environment: + + + ```bash MacOS/Linux + curl -LsSf https://astral.sh/uv/install.sh | sh + ``` + + ```powershell Windows + powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" + ``` + + + Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up. 
+ + Now, let's create and set up our project: + + + ```bash MacOS/Linux + # Create a new directory for our project + uv init weather + cd weather + + # Create virtual environment and activate it + uv venv + source .venv/bin/activate + + # Install dependencies + uv add "mcp[cli]" httpx + + # Create our server file + touch weather.py + ``` + + ```powershell Windows + # Create a new directory for our project + uv init weather + cd weather + + # Create virtual environment and activate it + uv venv + .venv\Scripts\activate + + # Install dependencies + uv add mcp[cli] httpx + + # Create our server file + new-item weather.py + ``` + + + Now let's dive into building your server. + + ## Building your server + + ### Importing packages and setting up the instance + + Add these to the top of your `weather.py`: + + ```python + from typing import Any + import httpx + from mcp.server.fastmcp import FastMCP + + # Initialize FastMCP server + mcp = FastMCP("weather") + + # Constants + NWS_API_BASE = "https://api.weather.gov" + USER_AGENT = "weather-app/1.0" + ``` + + The FastMCP class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools. 
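To see why the type hints and docstring matter, here is a small stdlib-only sketch of how a framework can derive a tool definition from an annotated function. This is illustrative only, not FastMCP's actual implementation; the `tool_schema` helper is hypothetical:

```python
import inspect
from typing import get_type_hints

def tool_schema(fn):
    """Hypothetical helper: derive a tool definition from a function's
    type hints and docstring, similar in spirit to what FastMCP automates."""
    hints = get_type_hints(fn)
    # Map each parameter name to its annotated type's name
    params = {name: hints[name].__name__
              for name in inspect.signature(fn).parameters}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

def get_alerts(state: str) -> str:
    """Get weather alerts for a US state."""
    ...

print(tool_schema(get_alerts))
# {'name': 'get_alerts', 'description': 'Get weather alerts for a US state.', 'parameters': {'state': 'str'}}
```

Because the schema comes from the function itself, renaming a parameter or editing the docstring updates the tool definition automatically.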
+ + ### Helper functions + + Next, let's add our helper functions for querying and formatting the data from the National Weather Service API: + + ```python + async def make_nws_request(url: str) -> dict[str, Any] | None: + """Make a request to the NWS API with proper error handling.""" + headers = { + "User-Agent": USER_AGENT, + "Accept": "application/geo+json" + } + async with httpx.AsyncClient() as client: + try: + response = await client.get(url, headers=headers, timeout=30.0) + response.raise_for_status() + return response.json() + except Exception: + return None + + def format_alert(feature: dict) -> str: + """Format an alert feature into a readable string.""" + props = feature["properties"] + return f""" + Event: {props.get('event', 'Unknown')} + Area: {props.get('areaDesc', 'Unknown')} + Severity: {props.get('severity', 'Unknown')} + Description: {props.get('description', 'No description available')} + Instructions: {props.get('instruction', 'No specific instructions provided')} + """ + ``` + + ### Implementing tool execution + + The tool execution handler is responsible for actually executing the logic of each tool. Let's add it: + + ```python + @mcp.tool() + async def get_alerts(state: str) -> str: + """Get weather alerts for a US state. + + Args: + state: Two-letter US state code (e.g. CA, NY) + """ + url = f"{NWS_API_BASE}/alerts/active/area/{state}" + data = await make_nws_request(url) + + if not data or "features" not in data: + return "Unable to fetch alerts or no alerts found." + + if not data["features"]: + return "No active alerts for this state." + + alerts = [format_alert(feature) for feature in data["features"]] + return "\n---\n".join(alerts) + + @mcp.tool() + async def get_forecast(latitude: float, longitude: float) -> str: + """Get weather forecast for a location. 
+ + Args: + latitude: Latitude of the location + longitude: Longitude of the location + """ + # First get the forecast grid endpoint + points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}" + points_data = await make_nws_request(points_url) + + if not points_data: + return "Unable to fetch forecast data for this location." + + # Get the forecast URL from the points response + forecast_url = points_data["properties"]["forecast"] + forecast_data = await make_nws_request(forecast_url) + + if not forecast_data: + return "Unable to fetch detailed forecast." + + # Format the periods into a readable forecast + periods = forecast_data["properties"]["periods"] + forecasts = [] + for period in periods[:5]: # Only show next 5 periods + forecast = f""" + {period['name']}: + Temperature: {period['temperature']}°{period['temperatureUnit']} + Wind: {period['windSpeed']} {period['windDirection']} + Forecast: {period['detailedForecast']} + """ + forecasts.append(forecast) + + return "\n---\n".join(forecasts) + ``` + + ### Running the server + + Finally, let's initialize and run the server: + + ```python + if __name__ == "__main__": + # Initialize and run the server + mcp.run(transport='stdio') + ``` + + Your server is complete! Run `uv run weather.py` to confirm that everything's working. + + Let's now test your server from an existing MCP host, Claude for Desktop. + + ## Testing your server with Claude for Desktop + + + Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built. + + + First, make sure you have Claude for Desktop installed. [You can install the latest version + here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** + + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. 
To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist. + + For example, if you have [VS Code](https://code.visualstudio.com/) installed: + + + + ```bash + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` + + + + ```powershell + code $env:AppData\Claude\claude_desktop_config.json + ``` + + + + You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured. + + In this case, we'll add our single weather server like so: + + + + ```json Python + { + "mcpServers": { + "weather": { + "command": "uv", + "args": [ + "--directory", + "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather", + "run", + "weather.py" + ] + } + } + } + ``` + + + + ```json Python + { + "mcpServers": { + "weather": { + "command": "uv", + "args": [ + "--directory", + "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather", + "run", + "weather.py" + ] + } + } + } + ``` + + + + + You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on MacOS/Linux or `where uv` on Windows. + + + + Make sure you pass in the absolute path to your server. + + + This tells Claude for Desktop: + + 1. There's an MCP server named "weather" + 2. To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py` + + Save the file, and restart **Claude for Desktop**. + + + + Let's get started with building our weather server! 
[You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript) + + ### Prerequisite knowledge + + This quickstart assumes you have familiarity with: + + * TypeScript + * LLMs like Claude + + ### System requirements + + For TypeScript, make sure you have the latest version of Node installed. + + ### Set up your environment + + First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/). + Verify your Node.js installation: + + ```bash + node --version + npm --version + ``` + + For this tutorial, you'll need Node.js version 16 or higher. + + Now, let's create and set up our project: + + + ```bash MacOS/Linux + # Create a new directory for our project + mkdir weather + cd weather + + # Initialize a new npm project + npm init -y + + # Install dependencies + npm install @modelcontextprotocol/sdk zod + npm install -D @types/node typescript + + # Create our files + mkdir src + touch src/index.ts + ``` + + ```powershell Windows + # Create a new directory for our project + md weather + cd weather + + # Initialize a new npm project + npm init -y + + # Install dependencies + npm install @modelcontextprotocol/sdk zod + npm install -D @types/node typescript + + # Create our files + md src + new-item src\index.ts + ``` + + + Update your package.json to add type: "module" and a build script: + + ```json package.json + { + "type": "module", + "bin": { + "weather": "./build/index.js" + }, + "scripts": { + "build": "tsc && chmod 755 build/index.js" + }, + "files": [ + "build" + ] + } + ``` + + Create a `tsconfig.json` in the root of your project: + + ```json tsconfig.json + { + "compilerOptions": { + "target": "ES2022", + "module": "Node16", + "moduleResolution": "Node16", + "outDir": "./build", + "rootDir": "./src", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames":
true + }, + "include": ["src/**/*"], + "exclude": ["node_modules"] + } + ``` + + Now let's dive into building your server. + + ## Building your server + + ### Importing packages and setting up the instance + + Add these to the top of your `src/index.ts`: + + ```typescript + import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; + import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; + import { z } from "zod"; + + const NWS_API_BASE = "https://api.weather.gov"; + const USER_AGENT = "weather-app/1.0"; + + // Create server instance + const server = new McpServer({ + name: "weather", + version: "1.0.0", + }); + ``` + + ### Helper functions + + Next, let's add our helper functions for querying and formatting the data from the National Weather Service API: + + ```typescript + // Helper function for making NWS API requests + async function makeNWSRequest<T>(url: string): Promise<T | null> { + const headers = { + "User-Agent": USER_AGENT, + Accept: "application/geo+json", + }; + + try { + const response = await fetch(url, { headers }); + if (!response.ok) { + throw new Error(`HTTP error!
status: ${response.status}`); + } + return (await response.json()) as T; + } catch (error) { + console.error("Error making NWS request:", error); + return null; + } + } + + interface AlertFeature { + properties: { + event?: string; + areaDesc?: string; + severity?: string; + status?: string; + headline?: string; + }; + } + + // Format alert data + function formatAlert(feature: AlertFeature): string { + const props = feature.properties; + return [ + `Event: ${props.event || "Unknown"}`, + `Area: ${props.areaDesc || "Unknown"}`, + `Severity: ${props.severity || "Unknown"}`, + `Status: ${props.status || "Unknown"}`, + `Headline: ${props.headline || "No headline"}`, + "---", + ].join("\n"); + } + + interface ForecastPeriod { + name?: string; + temperature?: number; + temperatureUnit?: string; + windSpeed?: string; + windDirection?: string; + shortForecast?: string; + } + + interface AlertsResponse { + features: AlertFeature[]; + } + + interface PointsResponse { + properties: { + forecast?: string; + }; + } + + interface ForecastResponse { + properties: { + periods: ForecastPeriod[]; + }; + } + ``` + + ### Implementing tool execution + + The tool execution handler is responsible for actually executing the logic of each tool. Let's add it: + + ```typescript + // Register weather tools + server.tool( + "get-alerts", + "Get weather alerts for a state", + { + state: z.string().length(2).describe("Two-letter state code (e.g. 
CA, NY)"), + }, + async ({ state }) => { + const stateCode = state.toUpperCase(); + const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`; + const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl); + + if (!alertsData) { + return { + content: [ + { + type: "text", + text: "Failed to retrieve alerts data", + }, + ], + }; + } + + const features = alertsData.features || []; + if (features.length === 0) { + return { + content: [ + { + type: "text", + text: `No active alerts for ${stateCode}`, + }, + ], + }; + } + + const formattedAlerts = features.map(formatAlert); + const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`; + + return { + content: [ + { + type: "text", + text: alertsText, + }, + ], + }; + }, + ); + + server.tool( + "get-forecast", + "Get weather forecast for a location", + { + latitude: z.number().min(-90).max(90).describe("Latitude of the location"), + longitude: z.number().min(-180).max(180).describe("Longitude of the location"), + }, + async ({ latitude, longitude }) => { + // Get grid point data + const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`; + const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl); + + if (!pointsData) { + return { + content: [ + { + type: "text", + text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}.
This location may not be supported by the NWS API (only US locations are supported).`, + }, + ], + }; + } + + const forecastUrl = pointsData.properties?.forecast; + if (!forecastUrl) { + return { + content: [ + { + type: "text", + text: "Failed to get forecast URL from grid point data", + }, + ], + }; + } + + // Get forecast data + const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl); + if (!forecastData) { + return { + content: [ + { + type: "text", + text: "Failed to retrieve forecast data", + }, + ], + }; + } + + const periods = forecastData.properties?.periods || []; + if (periods.length === 0) { + return { + content: [ + { + type: "text", + text: "No forecast periods available", + }, + ], + }; + } + + // Format forecast periods + const formattedForecast = periods.map((period: ForecastPeriod) => + [ + `${period.name || "Unknown"}:`, + `Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`, + `Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`, + `${period.shortForecast || "No forecast available"}`, + "---", + ].join("\n"), + ); + + const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`; + + return { + content: [ + { + type: "text", + text: forecastText, + }, + ], + }; + }, + ); + ``` + + ### Running the server + + Finally, implement the main function to run the server: + + ```typescript + async function main() { + const transport = new StdioServerTransport(); + await server.connect(transport); + console.error("Weather MCP Server running on stdio"); + } + + main().catch((error) => { + console.error("Fatal error in main():", error); + process.exit(1); + }); + ``` + + Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect. + + Let's now test your server from an existing MCP host, Claude for Desktop. + + ## Testing your server with Claude for Desktop + + + Claude for Desktop is not yet available on Linux.
Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built. + + + First, make sure you have Claude for Desktop installed. [You can install the latest version + here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** + + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist. + + For example, if you have [VS Code](https://code.visualstudio.com/) installed: + + + + ```bash + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` + + + + ```powershell + code $env:AppData\Claude\claude_desktop_config.json + ``` + + + + You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured. + + In this case, we'll add our single weather server like so: + + + + + ```json Node + { + "mcpServers": { + "weather": { + "command": "node", + "args": [ + "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js" + ] + } + } + } + ``` + + + + + + ```json Node + { + "mcpServers": { + "weather": { + "command": "node", + "args": [ + "C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js" + ] + } + } + } + ``` + + + + + This tells Claude for Desktop: + + 1. There's an MCP server named "weather" + 2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js` + + Save the file, and restart **Claude for Desktop**. + + + + + This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters. + To learn how to create sync and async MCP Servers, manually, consult the [Java SDK Server](/sdk/java/mcp-server) documentation. 
+ + + Let's get started with building our weather server! + [You can find the complete code for what we'll be building here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-stdio-server) + + For more information, see the [MCP Server Boot Starter](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html) reference documentation. + For manual MCP Server implementation, refer to the [MCP Server Java SDK documentation](/sdk/java/mcp-server). + + ### System requirements + + * Java 17 or higher installed. + * [Spring Boot 3.3.x](https://docs.spring.io/spring-boot/installing.html) or higher + + ### Set up your environment + + Use the [Spring Initializr](https://start.spring.io/) to bootstrap the project. + + You will need to add the following dependencies: + + + + ```xml + <dependencies> + <dependency> + <groupId>org.springframework.ai</groupId> + <artifactId>spring-ai-mcp-server-spring-boot-starter</artifactId> + </dependency> + <dependency> + <groupId>org.springframework</groupId> + <artifactId>spring-web</artifactId> + </dependency> + </dependencies> + ``` + + + + ```groovy + dependencies { + implementation "org.springframework.ai:spring-ai-mcp-server-spring-boot-starter" + implementation "org.springframework:spring-web" + } + ``` + + + + Then configure your application by setting the application properties: + + + ```bash application.properties + spring.main.banner-mode=off + logging.pattern.console= + ``` + + ```yaml application.yml + logging: + pattern: + console: + spring: + main: + banner-mode: off + ``` + + + The [Server Configuration Properties](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html#_configuration_properties) documents all available properties. + + Now let's dive into building your server.
+ + ## Building your server + + ### Weather Service + + Let's implement a [WeatherService.java](https://github.com/spring-projects/spring-ai-examples/blob/main/model-context-protocol/weather/starter-stdio-server/src/main/java/org/springframework/ai/mcp/sample/server/WeatherService.java) that uses a REST client to query the data from the National Weather Service API: + + ```java + @Service + public class WeatherService { + + private final RestClient restClient; + + public WeatherService() { + this.restClient = RestClient.builder() + .baseUrl("https://api.weather.gov") + .defaultHeader("Accept", "application/geo+json") + .defaultHeader("User-Agent", "WeatherApiClient/1.0 (your@email.com)") + .build(); + } + + @Tool(description = "Get weather forecast for a specific latitude/longitude") + public String getWeatherForecastByLocation( + double latitude, // Latitude coordinate + double longitude // Longitude coordinate + ) { + // Returns detailed forecast including: + // - Temperature and unit + // - Wind speed and direction + // - Detailed forecast description + } + + @Tool(description = "Get weather alerts for a US state") + public String getAlerts( + @ToolParam(description = "Two-letter US state code (e.g. CA, NY)") String state + ) { + // Returns active alerts including: + // - Event type + // - Affected area + // - Severity + // - Description + // - Safety instructions + } + + // ... + } + ``` + + The `@Service` annotation automatically registers the service in your application context, + and the Spring AI `@Tool` annotation makes it easy to create and maintain MCP tools. + + The auto-configuration will automatically register these tools with the MCP server.
+ + ### Create your Boot Application + + ```java + @SpringBootApplication + public class McpServerApplication { + + public static void main(String[] args) { + SpringApplication.run(McpServerApplication.class, args); + } + + @Bean + public ToolCallbackProvider weatherTools(WeatherService weatherService) { + return MethodToolCallbackProvider.builder().toolObjects(weatherService).build(); + } + } + ``` + + This uses the `MethodToolCallbackProvider` utility to convert the `@Tool`-annotated methods into actionable callbacks used by the MCP server. + + ### Running the server + + Finally, let's build the server: + + ```bash + ./mvnw clean install + ``` + + This will generate a `mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar` file within the `target` folder. + + Let's now test your server from an existing MCP host, Claude for Desktop. + + ## Testing your server with Claude for Desktop + + + Claude for Desktop is not yet available on Linux. + + + First, make sure you have Claude for Desktop installed. + [You can install the latest version here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** + + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. + To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. + Make sure to create the file if it doesn't exist. + + For example, if you have [VS Code](https://code.visualstudio.com/) installed: + + + + ```bash + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` + + + + ```powershell + code $env:AppData\Claude\claude_desktop_config.json + ``` + + + + You'll then add your servers in the `mcpServers` key. + The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
+ + In this case, we'll add our single weather server like so: + + + + ```json java + { + "mcpServers": { + "spring-ai-mcp-weather": { + "command": "java", + "args": [ + "-Dspring.ai.mcp.server.stdio=true", + "-jar", + "/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar" + ] + } + } + } + ``` + + + + ```json java + { + "mcpServers": { + "spring-ai-mcp-weather": { + "command": "java", + "args": [ + "-Dspring.ai.mcp.server.transport=STDIO", + "-jar", + "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather\\mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar" + ] + } + } + } + ``` + + + + + Make sure you pass in the absolute path to your server. + + + This tells Claude for Desktop: + + 1. There's an MCP server named "spring-ai-mcp-weather" + 2. To launch it by running `java -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar` + + Save the file, and restart **Claude for Desktop**. + + ## Testing your server with Java client + + ### Create an MCP client manually + + Use the `McpClient` to connect to the server: + + ```java + var stdioParams = ServerParameters.builder("java") + .args("-jar", "/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar") + .build(); + + var stdioTransport = new StdioClientTransport(stdioParams); + + var mcpClient = McpClient.sync(stdioTransport).build(); + + mcpClient.initialize(); + + ListToolsResult toolsList = mcpClient.listTools(); + + CallToolResult weather = mcpClient.callTool( + new CallToolRequest("getWeatherForecastByLocation", + Map.of("latitude", 47.6062, "longitude", -122.3321))); + + CallToolResult alert = mcpClient.callTool( + new CallToolRequest("getAlerts", Map.of("state", "NY"))); + + mcpClient.closeGracefully(); + ``` + + ### Use MCP Client Boot Starter + + Create a new boot starter application using the `spring-ai-mcp-client-spring-boot-starter` dependency: + + ```xml + <dependency> + <groupId>org.springframework.ai</groupId> + <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId> + </dependency> + ``` + + and set the
`spring.ai.mcp.client.stdio.servers-configuration` property to point to your `claude_desktop_config.json`. + You can re-use the existing Claude Desktop configuration: + + ```properties + spring.ai.mcp.client.stdio.servers-configuration=file:PATH/TO/claude_desktop_config.json + ``` + + When you start your client application, the auto-configuration will automatically create MCP clients from the claude\_desktop\_config.json. + + For more information, see the [MCP Client Boot Starters](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-client-docs.html) reference documentation. + + ## More Java MCP Server examples + + The [starter-webflux-server](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-webflux-server) demonstrates how to create an MCP server using SSE transport. + It showcases how to define and register MCP Tools, Resources, and Prompts, using Spring Boot's auto-configuration capabilities. + + + +### Test with commands + +Let's make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the hammer icon: + + + + + +After clicking on the hammer icon, you should see two tools listed: + + + + + +If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips. + +If the hammer icon has shown up, you can now test your server by running the following commands in Claude for Desktop: + +* What's the weather in Sacramento? +* What are the active weather alerts in Texas? + + + + + + + + + + + Since this is the US National Weather Service, the queries will only work for US locations. + + +## What's happening under the hood + +When you ask a question: + +1. The client sends your question to Claude +2. Claude analyzes the available tools and decides which one(s) to use +3. The client executes the chosen tool(s) through the MCP server +4.
The results are sent back to Claude +5. Claude formulates a natural language response +6. The response is displayed to you! + +## Troubleshooting + + + + **Getting logs from Claude for Desktop** + + Claude.app logging related to MCP is written to log files in `~/Library/Logs/Claude`: + + * `mcp.log` will contain general logging about MCP connections and connection failures. + * Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server. + + You can run the following command to list recent logs and follow along with any new ones: + + ```bash + # Check Claude's logs for errors + tail -n 20 -f ~/Library/Logs/Claude/mcp*.log + ``` + + **Server not showing up in Claude** + + 1. Check your `claude_desktop_config.json` file syntax + 2. Make sure the path to your project is absolute and not relative + 3. Restart Claude for Desktop completely + + **Tool calls failing silently** + + If Claude attempts to use the tools but they fail: + + 1. Check Claude's logs for errors + 2. Verify your server builds and runs without errors + 3. Try restarting Claude for Desktop + + **None of this is working. What do I do?** + + Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance. + + + + **Error: Failed to retrieve grid point data** + + This usually means either: + + 1. The coordinates are outside the US + 2. The NWS API is having issues + 3. You're being rate limited + + Fix: + + * Verify you're using US coordinates + * Add a small delay between requests + * Check the NWS API status page + + **Error: No active alerts for \[STATE]** + + This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather. 
+ + + + + For more advanced troubleshooting, check out our guide on [Debugging MCP](/docs/tools/debugging) + + +## Next steps + + + + Learn how to build your own MCP client that can connect to your server + + + + Check out our gallery of official MCP servers and implementations + + + + Learn how to effectively debug MCP servers and integrations + + + + Learn how to use LLMs like Claude to speed up your MCP development + + + + +# For Claude Desktop Users +Source: https://modelcontextprotocol.io/quickstart/user + +Get started using pre-built servers in Claude for Desktop. + +In this tutorial, you will extend [Claude for Desktop](https://claude.ai/download) so that it can read from your computer's file system, write new files, move files, and even search files. + + + + + +Don't worry — it will ask you for your permission before executing these actions! + +## 1. Download Claude for Desktop + +Start by downloading [Claude for Desktop](https://claude.ai/download), choosing either macOS or Windows. (Linux is not yet supported for Claude for Desktop.) + +Follow the installation instructions. + +If you already have Claude for Desktop, make sure it's on the latest version by clicking on the Claude menu on your computer and selecting "Check for Updates..." + + + Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development. + + +## 2. Add the Filesystem MCP Server + +To add this filesystem functionality, we will be installing a pre-built [Filesystem MCP Server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) to Claude for Desktop. This is one of dozens of [servers](https://github.com/modelcontextprotocol/servers/tree/main) created by Anthropic and the community. + +Get started by opening up the Claude menu on your computer and select "Settings..." Please note that these are not the Claude Account Settings found in the app window itself. 
+ +This is what it should look like on a Mac: + + + + + +Click on "Developer" in the lefthand bar of the Settings pane, and then click on "Edit Config": + + + + + +This will create a configuration file at: + +* macOS: `~/Library/Application Support/Claude/claude_desktop_config.json` +* Windows: `%APPDATA%\Claude\claude_desktop_config.json` + +if you don't already have one, and will display the file in your file system. + +Open up the configuration file in any text editor. Replace the file contents with this: + + + + ```json + { + "mcpServers": { + "filesystem": { + "command": "npx", + "args": [ + "-y", + "@modelcontextprotocol/server-filesystem", + "/Users/username/Desktop", + "/Users/username/Downloads" + ] + } + } + } + ``` + + + + ```json + { + "mcpServers": { + "filesystem": { + "command": "npx", + "args": [ + "-y", + "@modelcontextprotocol/server-filesystem", + "C:\\Users\\username\\Desktop", + "C:\\Users\\username\\Downloads" + ] + } + } + } + ``` + + + +Make sure to replace `username` with your computer's username. The paths should point to valid directories that you want Claude to be able to access and modify. It's set up to work for Desktop and Downloads, but you can add more paths as well. + +You will also need [Node.js](https://nodejs.org) on your computer for this to run properly. To verify you have Node installed, open the command line on your computer. + +* On macOS, open the Terminal from your Applications folder +* On Windows, press Windows + R, type "cmd", and press Enter + +Once in the command line, verify you have Node installed by entering in the following command: + +```bash +node --version +``` + +If you get an error saying "command not found" or "node is not recognized", download Node from [nodejs.org](https://nodejs.org/). + + + **How does the configuration file work?** + + This configuration file tells Claude for Desktop which MCP servers to start up every time you start the application. 
In this case, we have added one server called "filesystem" that will use the Node `npx` command to install and run `@modelcontextprotocol/server-filesystem`. This server, described [here](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), will let you access your file system in Claude for Desktop. + + + + **Command Privileges** + + Claude for Desktop will run the commands in the configuration file with the permissions of your user account, and access to your local files. Only add commands if you understand and trust the source. + + +## 3. Restart Claude + +After updating your configuration file, you need to restart Claude for Desktop. + +Upon restarting, you should see a hammer icon in the bottom right corner of the input box: + + + + + +After clicking on the hammer icon, you should see the tools that come with the Filesystem MCP Server: + + + + + +If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips. + +## 4. Try it out! + +You can now talk to Claude and ask it about your filesystem. It should know when to call the relevant tools. + +Things you might try asking Claude: + +* Can you write a poem and save it to my desktop? +* What are some work-related files in my downloads folder? +* Can you take all the images on my desktop and move them to a new folder called "Images"? + +As needed, Claude will call the relevant tools and seek your approval before taking an action: + + + + + +## Troubleshooting + + + + 1. Restart Claude for Desktop completely + 2. Check your `claude_desktop_config.json` file syntax + 3. Make sure the file paths included in `claude_desktop_config.json` are valid and that they are absolute and not relative + 4. Look at [logs](#getting-logs-from-claude-for-desktop) to see why the server is not connecting + 5. 
In your command line, try manually running the server (replacing `username` as you did in `claude_desktop_config.json`) to see if you get any errors: + + + + ```bash + npx -y @modelcontextprotocol/server-filesystem /Users/username/Desktop /Users/username/Downloads + ``` + + + + ```bash + npx -y @modelcontextprotocol/server-filesystem C:\Users\username\Desktop C:\Users\username\Downloads + ``` + + + + + + Claude.app logging related to MCP is written to log files in: + + * macOS: `~/Library/Logs/Claude` + + * Windows: `%APPDATA%\Claude\logs` + + * `mcp.log` will contain general logging about MCP connections and connection failures. + + * Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server. + + You can run the following command to list recent logs and follow along with any new ones (on Windows, it will only show recent logs): + + + + ```bash + # Check Claude's logs for errors + tail -n 20 -f ~/Library/Logs/Claude/mcp*.log + ``` + + + + ```bash + type "%APPDATA%\Claude\logs\mcp*.log" + ``` + + + + + + If Claude attempts to use the tools but they fail: + + 1. Check Claude's logs for errors + 2. Verify your server builds and runs without errors + 3. Try restarting Claude for Desktop + + + + Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance. + + + + If your configured server fails to load, and you see within its logs an error referring to `${APPDATA}` within a path, you may need to add the expanded value of `%APPDATA%` to your `env` key in `claude_desktop_config.json`: + + ```json + { + "brave-search": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-brave-search"], + "env": { + "APPDATA": "C:\\Users\\user\\AppData\\Roaming\\", + "BRAVE_API_KEY": "..." + } + } + } + ``` + + With this change in place, launch Claude Desktop once again. 
+ + + **NPM should be installed globally** + + The `npx` command may continue to fail if you have not installed NPM globally. If NPM is already installed globally, you will find `%APPDATA%\npm` exists on your system. If not, you can install NPM globally by running the following command: + + ```bash + npm install -g npm + ``` + + + + +## Next steps + + + + Check out our gallery of official MCP servers and implementations + + + + Now build your own custom server to use in Claude for Desktop and other clients + + + + +# MCP Client +Source: https://modelcontextprotocol.io/sdk/java/mcp-client + +Learn how to use the Model Context Protocol (MCP) client to interact with MCP servers + +# Model Context Protocol Client + +The MCP Client is a key component in the Model Context Protocol (MCP) architecture, responsible for establishing and managing connections with MCP servers. It implements the client-side of the protocol, handling: + +* Protocol version negotiation to ensure compatibility with servers +* Capability negotiation to determine available features +* Message transport and JSON-RPC communication +* Tool discovery and execution +* Resource access and management +* Prompt system interactions +* Optional features like roots management and sampling support + +The client provides both synchronous and asynchronous APIs for flexibility in different application contexts. 
+ + + + ```java + // Create a sync client with custom configuration + McpSyncClient client = McpClient.sync(transport) + .requestTimeout(Duration.ofSeconds(10)) + .capabilities(ClientCapabilities.builder() + .roots(true) // Enable roots capability + .sampling() // Enable sampling capability + .build()) + .sampling(request -> new CreateMessageResult(response)) + .build(); + + // Initialize connection + client.initialize(); + + // List available tools + ListToolsResult tools = client.listTools(); + + // Call a tool + CallToolResult result = client.callTool( + new CallToolRequest("calculator", + Map.of("operation", "add", "a", 2, "b", 3)) + ); + + // List and read resources + ListResourcesResult resources = client.listResources(); + ReadResourceResult resource = client.readResource( + new ReadResourceRequest("resource://uri") + ); + + // List and use prompts + ListPromptsResult prompts = client.listPrompts(); + GetPromptResult prompt = client.getPrompt( + new GetPromptRequest("greeting", Map.of("name", "Spring")) + ); + + // Add/remove roots + client.addRoot(new Root("file:///path", "description")); + client.removeRoot("file:///path"); + + // Close client + client.closeGracefully(); + ``` + + + + ```java + // Create an async client with custom configuration + McpAsyncClient client = McpClient.async(transport) + .requestTimeout(Duration.ofSeconds(10)) + .capabilities(ClientCapabilities.builder() + .roots(true) // Enable roots capability + .sampling() // Enable sampling capability + .build()) + .sampling(request -> Mono.just(new CreateMessageResult(response))) + .toolsChangeConsumer(tools -> Mono.fromRunnable(() -> { + logger.info("Tools updated: {}", tools); + })) + .resourcesChangeConsumer(resources -> Mono.fromRunnable(() -> { + logger.info("Resources updated: {}", resources); + })) + .promptsChangeConsumer(prompts -> Mono.fromRunnable(() -> { + logger.info("Prompts updated: {}", prompts); + })) + .build(); + + // Initialize connection and use features + 
client.initialize() + .flatMap(initResult -> client.listTools()) + .flatMap(tools -> { + return client.callTool(new CallToolRequest( + "calculator", + Map.of("operation", "add", "a", 2, "b", 3) + )); + }) + .flatMap(result -> { + return client.listResources() + .flatMap(resources -> + client.readResource(new ReadResourceRequest("resource://uri")) + ); + }) + .flatMap(resource -> { + return client.listPrompts() + .flatMap(prompts -> + client.getPrompt(new GetPromptRequest( + "greeting", + Map.of("name", "Spring") + )) + ); + }) + .flatMap(prompt -> { + return client.addRoot(new Root("file:///path", "description")) + .then(client.removeRoot("file:///path")); + }) + .doFinally(signalType -> { + client.closeGracefully().subscribe(); + }) + .subscribe(); + ``` + + + +## Client Transport + +The transport layer handles the communication between MCP clients and servers, providing different implementations for various use cases. The client transport manages message serialization, connection establishment, and protocol-specific communication patterns. + + + + Creates transport for in-process based communication + + ```java + ServerParameters params = ServerParameters.builder("npx") + .args("-y", "@modelcontextprotocol/server-everything", "dir") + .build(); + McpTransport transport = new StdioClientTransport(params); + ``` + + + + Creates a framework agnostic (pure Java API) SSE client transport. Included in the core mcp module. + + ```java + McpTransport transport = new HttpClientSseClientTransport("http://your-mcp-server"); + ``` + + + + Creates WebFlux-based SSE client transport. Requires the mcp-webflux-sse-transport dependency. 
+ + ```java + WebClient.Builder webClientBuilder = WebClient.builder() + .baseUrl("http://your-mcp-server"); + McpTransport transport = new WebFluxSseClientTransport(webClientBuilder); + ``` + + + +## Client Capabilities + +The client can be configured with various capabilities: + +```java +var capabilities = ClientCapabilities.builder() + .roots(true) // Enable filesystem roots support with list changes notifications + .sampling() // Enable LLM sampling support + .build(); +``` + +### Roots Support + +Roots define the boundaries of where servers can operate within the filesystem: + +```java +// Add a root dynamically +client.addRoot(new Root("file:///path", "description")); + +// Remove a root +client.removeRoot("file:///path"); + +// Notify server of roots changes +client.rootsListChangedNotification(); +``` + +The roots capability allows servers to: + +* Request the list of accessible filesystem roots +* Receive notifications when the roots list changes +* Understand which directories and files they have access to + +### Sampling Support + +Sampling enables servers to request LLM interactions ("completions" or "generations") through the client: + +```java +// Configure sampling handler +Function samplingHandler = request -> { + // Sampling implementation that interfaces with LLM + return new CreateMessageResult(response); +}; + +// Create client with sampling support +var client = McpClient.sync(transport) + .capabilities(ClientCapabilities.builder() + .sampling() + .build()) + .sampling(samplingHandler) + .build(); +``` + +This capability allows: + +* Servers to leverage AI capabilities without requiring API keys +* Clients to maintain control over model access and permissions +* Support for both text and image-based interactions +* Optional inclusion of MCP server context in prompts + +## Using MCP Clients + +### Tool Execution + +Tools are server-side functions that clients can discover and execute. 
The MCP client provides methods to list available tools and execute them with specific parameters. Each tool has a unique name and accepts a map of parameters. + + + + ```java + // List available tools and their names + var tools = client.listTools(); + tools.forEach(tool -> System.out.println(tool.getName())); + + // Execute a tool with parameters + var result = client.callTool("calculator", Map.of( + "operation", "add", + "a", 1, + "b", 2 + )); + ``` + + + + ```java + // List available tools asynchronously + client.listTools() + .doOnNext(tools -> tools.forEach(tool -> + System.out.println(tool.getName()))) + .subscribe(); + + // Execute a tool asynchronously + client.callTool("calculator", Map.of( + "operation", "add", + "a", 1, + "b", 2 + )) + .subscribe(); + ``` + + + +### Resource Access + +Resources represent server-side data sources that clients can access using URI templates. The MCP client provides methods to discover available resources and retrieve their contents through a standardized interface. + + + + ```java + // List available resources and their names + var resources = client.listResources(); + resources.forEach(resource -> System.out.println(resource.getName())); + + // Retrieve resource content using a URI template + var content = client.getResource("file", Map.of( + "path", "/path/to/file.txt" + )); + ``` + + + + ```java + // List available resources asynchronously + client.listResources() + .doOnNext(resources -> resources.forEach(resource -> + System.out.println(resource.getName()))) + .subscribe(); + + // Retrieve resource content asynchronously + client.getResource("file", Map.of( + "path", "/path/to/file.txt" + )) + .subscribe(); + ``` + + + +### Prompt System + +The prompt system enables interaction with server-side prompt templates. These templates can be discovered and executed with custom parameters, allowing for dynamic text generation based on predefined patterns. 
+ + + + ```java + // List available prompt templates + var prompts = client.listPrompts(); + prompts.forEach(prompt -> System.out.println(prompt.getName())); + + // Execute a prompt template with parameters + var response = client.executePrompt("echo", Map.of( + "text", "Hello, World!" + )); + ``` + + + + ```java + // List available prompt templates asynchronously + client.listPrompts() + .doOnNext(prompts -> prompts.forEach(prompt -> + System.out.println(prompt.getName()))) + .subscribe(); + + // Execute a prompt template asynchronously + client.executePrompt("echo", Map.of( + "text", "Hello, World!" + )) + .subscribe(); + ``` + + + + +# Overview +Source: https://modelcontextprotocol.io/sdk/java/mcp-overview + +Introduction to the Model Context Protocol (MCP) Java SDK + +Java SDK for the [Model Context Protocol](https://modelcontextprotocol.org/docs/concepts/architecture) +enables standardized integration between AI models and tools. + +## Features + +* MCP Client and MCP Server implementations supporting: + * Protocol [version compatibility negotiation](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/lifecycle/#initialization) + * [Tool](https://spec.modelcontextprotocol.io/specification/2024-11-05/server/tools/) discovery, execution, list change notifications + * [Resource](https://spec.modelcontextprotocol.io/specification/2024-11-05/server/resources/) management with URI templates + * [Roots](https://spec.modelcontextprotocol.io/specification/2024-11-05/client/roots/) list management and notifications + * [Prompt](https://spec.modelcontextprotocol.io/specification/2024-11-05/server/prompts/) handling and management + * [Sampling](https://spec.modelcontextprotocol.io/specification/2024-11-05/client/sampling/) support for AI model interactions +* Multiple transport implementations: + * Default transports: + * Stdio-based transport for process-based communication + * Java HttpClient-based SSE client transport for HTTP SSE Client-side streaming 
+ * Servlet-based SSE server transport for HTTP SSE Server streaming + * Spring-based transports: + * WebFlux SSE client and server transports for reactive HTTP streaming + * WebMVC SSE transport for servlet-based HTTP streaming +* Supports Synchronous and Asynchronous programming paradigms + +## Architecture + +The SDK follows a layered architecture with clear separation of concerns: + +![MCP Stack Architecture](https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/java/mcp-stack.svg) + +* **Client/Server Layer (McpClient/McpServer)**: Both use McpSession for sync/async operations, + with McpClient handling client-side protocol operations and McpServer managing server-side protocol operations. +* **Session Layer (McpSession)**: Manages communication patterns and state using DefaultMcpSession implementation. +* **Transport Layer (McpTransport)**: Handles JSON-RPC message serialization/deserialization via: + * StdioTransport (stdin/stdout) in the core module + * HTTP SSE transports in dedicated transport modules (Java HttpClient, Spring WebFlux, Spring WebMVC) + +The MCP Client is a key component in the Model Context Protocol (MCP) architecture, responsible for establishing and managing connections with MCP servers. +It implements the client-side of the protocol. + +![Java MCP Client Architecture](https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/java/java-mcp-client-architecture.jpg) + +The MCP Server is a foundational component in the Model Context Protocol (MCP) architecture that provides tools, resources, and capabilities to clients. +It implements the server-side of the protocol. + +![Java MCP Server Architecture](https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/java/java-mcp-server-architecture.jpg) + +Key Interactions: + +* **Client/Server Initialization**: Transport setup, protocol compatibility check, capability negotiation, and implementation details exchange. 
* **Message Flow**: JSON-RPC message handling with validation, type-safe response processing, and error handling.
* **Resource Management**: Resource discovery, URI template-based access, subscription system, and content retrieval.

## Dependencies

Add the following Maven dependency to your project:

The core MCP functionality:

```xml
<dependency>
    <groupId>io.modelcontextprotocol.sdk</groupId>
    <artifactId>mcp</artifactId>
</dependency>
```

For HTTP SSE transport implementations, add one of the following dependencies:

```xml
<!-- Spring WebFlux-based SSE client and server transport -->
<dependency>
    <groupId>io.modelcontextprotocol.sdk</groupId>
    <artifactId>mcp-spring-webflux</artifactId>
</dependency>

<!-- Spring WebMVC-based SSE server transport -->
<dependency>
    <groupId>io.modelcontextprotocol.sdk</groupId>
    <artifactId>mcp-spring-webmvc</artifactId>
</dependency>
```

For Gradle, the core MCP functionality:

```groovy
dependencies {
    implementation platform("io.modelcontextprotocol.sdk:mcp")
    //...
}
```

For HTTP SSE transport implementations, add one of the following dependencies:

```groovy
// Spring WebFlux-based SSE client and server transport
dependencies {
    implementation platform("io.modelcontextprotocol.sdk:mcp-spring-webflux")
}

// Spring WebMVC-based SSE server transport
dependencies {
    implementation platform("io.modelcontextprotocol.sdk:mcp-spring-webmvc")
}
```

### Bill of Materials (BOM)

The Bill of Materials (BOM) declares the recommended versions of all the dependencies used by a given release.
Using the BOM from your application's build script avoids the need for you to specify and maintain the dependency versions yourself.
Instead, the version of the BOM you're using determines the utilized dependency versions.
It also ensures that you're using supported and tested versions of the dependencies by default, unless you choose to override them.

Add the BOM to your project:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.modelcontextprotocol.sdk</groupId>
            <artifactId>mcp-bom</artifactId>
            <version>0.7.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```

For Gradle:

```groovy
dependencies {
    implementation platform("io.modelcontextprotocol.sdk:mcp-bom:0.7.0")
    //...
}
```

Gradle users can also use the MCP BOM by leveraging Gradle (5.0+) native support for declaring dependency constraints using a Maven BOM.
This is implemented by adding a 'platform' dependency handler method to the dependencies section of your Gradle build script.
As shown in the snippet above, this can then be followed by version-less declarations of the one or more MCP modules you wish to use, e.g. mcp-spring-webflux.

Replace the version number with the version of the BOM you want to use.

### Available Dependencies

The following dependencies are available and managed by the BOM:

* Core Dependencies
  * `io.modelcontextprotocol.sdk:mcp` - Core MCP library providing the base functionality and APIs for Model Context Protocol implementation.
* Transport Dependencies
  * `io.modelcontextprotocol.sdk:mcp-spring-webflux` - WebFlux-based Server-Sent Events (SSE) transport implementation for reactive applications.
  * `io.modelcontextprotocol.sdk:mcp-spring-webmvc` - WebMVC-based Server-Sent Events (SSE) transport implementation for servlet-based applications.
* Testing Dependencies
  * `io.modelcontextprotocol.sdk:mcp-test` - Testing utilities and support for MCP-based applications.


# MCP Server
Source: https://modelcontextprotocol.io/sdk/java/mcp-server

Learn how to implement and configure a Model Context Protocol (MCP) server

## Overview

The MCP Server is a foundational component in the Model Context Protocol (MCP) architecture that provides tools, resources, and capabilities to clients.
It implements the server-side of the protocol, responsible for: + +* Exposing tools that clients can discover and execute +* Managing resources with URI-based access patterns +* Providing prompt templates and handling prompt requests +* Supporting capability negotiation with clients +* Implementing server-side protocol operations +* Managing concurrent client connections +* Providing structured logging and notifications + +The server supports both synchronous and asynchronous APIs, allowing for flexible integration in different application contexts. + + + + ```java + // Create a server with custom configuration + McpSyncServer syncServer = McpServer.sync(transport) + .serverInfo("my-server", "1.0.0") + .capabilities(ServerCapabilities.builder() + .resources(true) // Enable resource support + .tools(true) // Enable tool support + .prompts(true) // Enable prompt support + .logging() // Enable logging support + .build()) + .build(); + + // Register tools, resources, and prompts + syncServer.addTool(syncToolRegistration); + syncServer.addResource(syncResourceRegistration); + syncServer.addPrompt(syncPromptRegistration); + + // Send logging notifications + syncServer.loggingNotification(LoggingMessageNotification.builder() + .level(LoggingLevel.INFO) + .logger("custom-logger") + .data("Server initialized") + .build()); + + // Close the server when done + syncServer.close(); + ``` + + + + ```java + // Create an async server with custom configuration + McpAsyncServer asyncServer = McpServer.async(transport) + .serverInfo("my-server", "1.0.0") + .capabilities(ServerCapabilities.builder() + .resources(true) // Enable resource support + .tools(true) // Enable tool support + .prompts(true) // Enable prompt support + .logging() // Enable logging support + .build()) + .build(); + + // Register tools, resources, and prompts + asyncServer.addTool(asyncToolRegistration) + .doOnSuccess(v -> logger.info("Tool registered")) + .subscribe(); + + 
asyncServer.addResource(asyncResourceRegistration) + .doOnSuccess(v -> logger.info("Resource registered")) + .subscribe(); + + asyncServer.addPrompt(asyncPromptRegistration) + .doOnSuccess(v -> logger.info("Prompt registered")) + .subscribe(); + + // Send logging notifications + asyncServer.loggingNotification(LoggingMessageNotification.builder() + .level(LoggingLevel.INFO) + .logger("custom-logger") + .data("Server initialized") + .build()); + + // Close the server when done + asyncServer.close() + .doOnSuccess(v -> logger.info("Server closed")) + .subscribe(); + ``` + + + +## Server Transport + +The transport layer in the MCP SDK is responsible for handling the communication between clients and servers. It provides different implementations to support various communication protocols and patterns. The SDK includes several built-in transport implementations: + + + + <> + Create in-process based transport: + + ```java + StdioServerTransport transport = new StdioServerTransport(new ObjectMapper()); + ``` + + Provides bidirectional JSON-RPC message handling over standard input/output streams with non-blocking message processing, serialization/deserialization, and graceful shutdown support. + + Key features: + +
* Bidirectional communication through stdin/stdout
* Process-based integration support
* Simple setup and configuration
* Lightweight implementation
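The key features above can be made concrete: over stdio, each JSON-RPC message travels as a single newline-delimited JSON object on the process's stdin/stdout. An `initialize` request arriving on stdin might look like this (the field values are illustrative):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "example-client", "version": "1.0.0"}}}
```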

Creates WebFlux-based SSE server transport.
Requires the mcp-spring-webflux dependency.

```java
@Configuration
class McpConfig {
    @Bean
    WebFluxSseServerTransport webFluxSseServerTransport(ObjectMapper mapper) {
        return new WebFluxSseServerTransport(mapper, "/mcp/message");
    }

    @Bean
    RouterFunction<?> mcpRouterFunction(WebFluxSseServerTransport transport) {
        return transport.getRouterFunction();
    }
}
```

Implements the MCP HTTP with SSE transport specification, providing:

* Reactive HTTP streaming with WebFlux
* Concurrent client connections through SSE endpoints
* Message routing and session management
* Graceful shutdown capabilities
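With this transport, the server streams events to clients over SSE while clients deliver their JSON-RPC requests as HTTP POSTs to the configured message endpoint (`/mcp/message` in the example above). A `tools/list` request posted by a client might look like this (illustrative):

```json
{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```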

Creates WebMvc-based SSE server transport.
Requires the mcp-spring-webmvc dependency.

```java
@Configuration
@EnableWebMvc
class McpConfig {
    @Bean
    WebMvcSseServerTransport webMvcSseServerTransport(ObjectMapper mapper) {
        return new WebMvcSseServerTransport(mapper, "/mcp/message");
    }

    @Bean
    RouterFunction<?> mcpRouterFunction(WebMvcSseServerTransport transport) {
        return transport.getRouterFunction();
    }
}
```

Implements the MCP HTTP with SSE transport specification, providing:

* Server-side event streaming
* Integration with Spring WebMVC
* Support for traditional web applications
* Synchronous operation handling

Creates a Servlet-based SSE server transport. It is included in the core mcp module.
The HttpServletSseServerTransport can be used with any Servlet container.
To use it with a Spring Web application, you can register it as a Servlet bean:

```java
@Configuration
@EnableWebMvc
public class McpServerConfig implements WebMvcConfigurer {

    @Bean
    public HttpServletSseServerTransport servletSseServerTransport() {
        return new HttpServletSseServerTransport(new ObjectMapper(), "/mcp/message");
    }

    @Bean
    public ServletRegistrationBean customServletBean(HttpServletSseServerTransport servlet) {
        return new ServletRegistrationBean(servlet);
    }
}
```

Implements the MCP HTTP with SSE transport specification using the traditional Servlet API, providing:

* Asynchronous message handling using Servlet 6.0 async support
* Session management for multiple client connections
* Two types of endpoints:
  * SSE endpoint (`/sse`) for server-to-client events
  * Message endpoint (configurable) for client-to-server requests
* Error handling and response formatting
* Graceful shutdown support
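As a rough sketch of the wire format (exact payloads depend on the SDK version), the first thing a client receives after connecting to the SSE endpoint is an `endpoint` event carrying the URL to POST subsequent messages to; the session ID below is a made-up example:

```
event: endpoint
data: /mcp/message?sessionId=abc123
```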
## Server Capabilities

The server can be configured with various capabilities:

```java
var capabilities = ServerCapabilities.builder()
    .resources(false, true)  // Resource support with list changes notifications
    .tools(true)             // Tool support with list changes notifications
    .prompts(true)           // Prompt support with list changes notifications
    .logging()               // Enable logging support (enabled by default with logging level INFO)
    .build();
```

### Logging Support

The server provides structured logging capabilities that allow sending log messages to clients with different severity levels:

```java
// Send a log message to clients
server.loggingNotification(LoggingMessageNotification.builder()
    .level(LoggingLevel.INFO)
    .logger("custom-logger")
    .data("Custom log message")
    .build());
```

Clients can control the minimum logging level they receive through the `mcpClient.setLoggingLevel(level)` request. Messages below the set level will be filtered out.
Supported logging levels (in order of increasing severity): DEBUG (0), INFO (1), NOTICE (2), WARNING (3), ERROR (4), CRITICAL (5), ALERT (6), EMERGENCY (7)

### Tool Registration

```java
// Sync tool registration
var schema = """
        {
          "type" : "object",
          "id" : "urn:jsonschema:Operation",
          "properties" : {
            "operation" : {
              "type" : "string"
            },
            "a" : {
              "type" : "number"
            },
            "b" : {
              "type" : "number"
            }
          }
        }
        """;
var syncToolRegistration = new McpServerFeatures.SyncToolRegistration(
    new Tool("calculator", "Basic calculator", schema),
    arguments -> {
        // Tool implementation
        return new CallToolResult(result, false);
    }
);
```

```java
// Async tool registration
var schema = """
        {
          "type" : "object",
          "id" : "urn:jsonschema:Operation",
          "properties" : {
            "operation" : {
              "type" : "string"
            },
            "a" : {
              "type" : "number"
            },
            "b" : {
              "type" : "number"
            }
          }
        }
        """;
var asyncToolRegistration = new
McpServerFeatures.AsyncToolRegistration( + new Tool("calculator", "Basic calculator", schema), + arguments -> { + // Tool implementation + return Mono.just(new CallToolResult(result, false)); + } + ); + ``` + + + +### Resource Registration + + + + ```java + // Sync resource registration + var syncResourceRegistration = new McpServerFeatures.SyncResourceRegistration( + new Resource("custom://resource", "name", "description", "mime-type", null), + request -> { + // Resource read implementation + return new ReadResourceResult(contents); + } + ); + ``` + + + + ```java + // Async resource registration + var asyncResourceRegistration = new McpServerFeatures.AsyncResourceRegistration( + new Resource("custom://resource", "name", "description", "mime-type", null), + request -> { + // Resource read implementation + return Mono.just(new ReadResourceResult(contents)); + } + ); + ``` + + + +### Prompt Registration + + + + ```java + // Sync prompt registration + var syncPromptRegistration = new McpServerFeatures.SyncPromptRegistration( + new Prompt("greeting", "description", List.of( + new PromptArgument("name", "description", true) + )), + request -> { + // Prompt implementation + return new GetPromptResult(description, messages); + } + ); + ``` + + + + ```java + // Async prompt registration + var asyncPromptRegistration = new McpServerFeatures.AsyncPromptRegistration( + new Prompt("greeting", "description", List.of( + new PromptArgument("name", "description", true) + )), + request -> { + // Prompt implementation + return Mono.just(new GetPromptResult(description, messages)); + } + ); + ``` + + + +## Error Handling + +The SDK provides comprehensive error handling through the McpError class, covering protocol compatibility, transport communication, JSON-RPC messaging, tool execution, resource management, prompt handling, timeouts, and connection issues. 
This unified error handling approach ensures consistent and reliable error management across both synchronous and asynchronous operations. + + +# Building MCP with LLMs +Source: https://modelcontextprotocol.io/tutorials/building-mcp-with-llms + +Speed up your MCP development using LLMs such as Claude! + +This guide will help you use LLMs to help you build custom Model Context Protocol (MCP) servers and clients. We'll be focusing on Claude for this tutorial, but you can do this with any frontier LLM. + +## Preparing the documentation + +Before starting, gather the necessary documentation to help Claude understand MCP: + +1. Visit [https://modelcontextprotocol.io/llms-full.txt](https://modelcontextprotocol.io/llms-full.txt) and copy the full documentation text +2. Navigate to either the [MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) or [Python SDK repository](https://github.com/modelcontextprotocol/python-sdk) +3. Copy the README files and other relevant documentation +4. Paste these documents into your conversation with Claude + +## Describing your server + +Once you've provided the documentation, clearly describe to Claude what kind of server you want to build. Be specific about: + +* What resources your server will expose +* What tools it will provide +* Any prompts it should offer +* What external systems it needs to interact with + +For example: + +``` +Build an MCP server that: +- Connects to my company's PostgreSQL database +- Exposes table schemas as resources +- Provides tools for running read-only SQL queries +- Includes prompts for common data analysis tasks +``` + +## Working with Claude + +When working with Claude on MCP servers: + +1. Start with the core functionality first, then iterate to add more features +2. Ask Claude to explain any parts of the code you don't understand +3. Request modifications or improvements as needed +4. 
Have Claude help you test the server and handle edge cases + +Claude can help implement all the key MCP features: + +* Resource management and exposure +* Tool definitions and implementations +* Prompt templates and handlers +* Error handling and logging +* Connection and transport setup + +## Best practices + +When building MCP servers with Claude: + +* Break down complex servers into smaller pieces +* Test each component thoroughly before moving on +* Keep security in mind - validate inputs and limit access appropriately +* Document your code well for future maintenance +* Follow MCP protocol specifications carefully + +## Next steps + +After Claude helps you build your server: + +1. Review the generated code carefully +2. Test the server with the MCP Inspector tool +3. Connect it to Claude.app or other MCP clients +4. Iterate based on real usage and feedback + +Remember that Claude can help you modify and improve your server as requirements change over time. + +Need more guidance? Just ask Claude specific questions about implementing MCP features or troubleshooting issues that arise. 
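One practical addition to the iteration loop above: when you request modifications, concrete and tightly scoped prompts tend to produce better results than broad ones. For example, building on the PostgreSQL server described earlier:

```
Update the read-only SQL query tool so that:
- Results are returned as JSON objects keyed by column name
- A row limit parameter is supported, defaulting to 100
- Query errors are reported back as readable messages, not stack traces
```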
+ diff --git a/scripts/modules/task-manager.js b/scripts/modules/task-manager.js index 97bb73b5..61f5948b 100644 --- a/scripts/modules/task-manager.js +++ b/scripts/modules/task-manager.js @@ -181,6 +181,16 @@ async function updateTasks(tasksPath, fromId, prompt, useResearch = false) { console.log(table.toString()); + // Display a message about how completed subtasks are handled + console.log(boxen( + chalk.cyan.bold('How Completed Subtasks Are Handled:') + '\n\n' + + chalk.white('â€ĸ Subtasks marked as "done" or "completed" will be preserved\n') + + chalk.white('â€ĸ New subtasks will build upon what has already been completed\n') + + chalk.white('â€ĸ If completed work needs revision, a new subtask will be created instead of modifying done items\n') + + chalk.white('â€ĸ This approach maintains a clear record of completed work and new requirements'), + { padding: 1, borderColor: 'blue', borderStyle: 'round', margin: { top: 1, bottom: 1 } } + )); + // Build the system prompt const systemPrompt = `You are an AI assistant helping to update software development tasks based on new context. You will be given a set of tasks and a prompt describing changes or new implementation details. @@ -192,6 +202,11 @@ Guidelines: 3. Do not change anything unnecessarily - just adapt what needs to change based on the prompt 4. You should return ALL the tasks in order, not just the modified ones 5. Return a complete valid JSON object with the updated tasks array +6. VERY IMPORTANT: Preserve all subtasks marked as "done" or "completed" - do not modify their content +7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything +8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly +9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced +10. 
Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted The changes described in the prompt should be applied to ALL tasks in the list.`; @@ -213,7 +228,7 @@ The changes described in the prompt should be applied to ALL tasks in the list.` messages: [ { role: "system", - content: `${systemPrompt}\n\nAdditionally, please research the latest best practices, implementation details, and considerations when updating these tasks. Use your online search capabilities to gather relevant information.` + content: `${systemPrompt}\n\nAdditionally, please research the latest best practices, implementation details, and considerations when updating these tasks. Use your online search capabilities to gather relevant information. Remember to strictly follow the guidelines about preserving completed subtasks and building upon what has already been done rather than modifying or replacing it.` }, { role: "user", @@ -223,6 +238,8 @@ ${taskData} Please update these tasks based on the following new context: ${prompt} +IMPORTANT: In the tasks JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items. + Return only the updated tasks as a valid JSON array.` } ], @@ -272,6 +289,8 @@ ${taskData} Please update these tasks based on the following new context: ${prompt} +IMPORTANT: In the tasks JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items. + Return only the updated tasks as a valid JSON array.` } ], From e4cff5e6710f57a74b9a2ddcc15176b17b44d234 Mon Sep 17 00:00:00 2001 From: Eyal Toledano Date: Thu, 27 Mar 2025 01:33:20 -0400 Subject: [PATCH 06/16] Implements updateTask command to update a single task instead of all tasks as of a certain one. Useful when iterating and R&D'ing bit by bit and needing more research after what has been done. 
--- scripts/modules/commands.js | 89 +++++++- scripts/modules/task-manager.js | 374 ++++++++++++++++++++++++++++++++ tasks/task_023.txt | 48 ++-- tasks/task_034.txt | 156 +++++++++++++ tasks/task_035.txt | 48 ++++ tasks/task_036.txt | 48 ++++ tasks/tasks.json | 98 +++++++++ tests/unit/commands.test.js | 242 ++++++++++++++++++++- tests/unit/task-manager.test.js | 320 ++++++++++++++++++++++++++- 9 files changed, 1395 insertions(+), 28 deletions(-) create mode 100644 tasks/task_034.txt create mode 100644 tasks/task_035.txt create mode 100644 tasks/task_036.txt diff --git a/scripts/modules/commands.js b/scripts/modules/commands.js index ca96d2d8..9c95f43c 100644 --- a/scripts/modules/commands.js +++ b/scripts/modules/commands.js @@ -22,7 +22,8 @@ import { addTask, addSubtask, removeSubtask, - analyzeTaskComplexity + analyzeTaskComplexity, + updateTaskById } from './task-manager.js'; import { @@ -135,6 +136,92 @@ function registerCommands(programInstance) { await updateTasks(tasksPath, fromId, prompt, useResearch); }); + // updateTask command + programInstance + .command('update-task') + .description('Update a single task by ID with new information') + .option('-f, --file ', 'Path to the tasks file', 'tasks/tasks.json') + .option('-i, --id ', 'Task ID to update (required)') + .option('-p, --prompt ', 'Prompt explaining the changes or new context (required)') + .option('-r, --research', 'Use Perplexity AI for research-backed task updates') + .action(async (options) => { + try { + const tasksPath = options.file; + + // Validate required parameters + if (!options.id) { + console.error(chalk.red('Error: --id parameter is required')); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + } + + // Parse the task ID and validate it's a number + const taskId = parseInt(options.id, 10); + if (isNaN(taskId) || taskId <= 0) { + console.error(chalk.red(`Error: Invalid task ID: ${options.id}. 
Task ID must be a positive integer.`)); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + } + + if (!options.prompt) { + console.error(chalk.red('Error: --prompt parameter is required. Please provide information about the changes.')); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + } + + const prompt = options.prompt; + const useResearch = options.research || false; + + // Validate tasks file exists + if (!fs.existsSync(tasksPath)) { + console.error(chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)); + if (tasksPath === 'tasks/tasks.json') { + console.log(chalk.yellow('Hint: Run task-master init or task-master parse-prd to create tasks.json first')); + } else { + console.log(chalk.yellow(`Hint: Check if the file path is correct: ${tasksPath}`)); + } + process.exit(1); + } + + console.log(chalk.blue(`Updating task ${taskId} with prompt: "${prompt}"`)); + console.log(chalk.blue(`Tasks file: ${tasksPath}`)); + + if (useResearch) { + // Verify Perplexity API key exists if using research + if (!process.env.PERPLEXITY_API_KEY) { + console.log(chalk.yellow('Warning: PERPLEXITY_API_KEY environment variable is missing. Research-backed updates will not be available.')); + console.log(chalk.yellow('Falling back to Claude AI for task update.')); + } else { + console.log(chalk.blue('Using Perplexity AI for research-backed task update')); + } + } + + const result = await updateTaskById(tasksPath, taskId, prompt, useResearch); + + // If the task wasn't updated (e.g., if it was already marked as done) + if (!result) { + console.log(chalk.yellow('\nTask update was not completed. 
Review the messages above for details.')); + } + } catch (error) { + console.error(chalk.red(`Error: ${error.message}`)); + + // Provide more helpful error messages for common issues + if (error.message.includes('task') && error.message.includes('not found')) { + console.log(chalk.yellow('\nTo fix this issue:')); + console.log(' 1. Run task-master list to see all available task IDs'); + console.log(' 2. Use a valid task ID with the --id parameter'); + } else if (error.message.includes('API key')) { + console.log(chalk.yellow('\nThis error is related to API keys. Check your environment variables.')); + } + + if (CONFIG.debug) { + console.error(error); + } + + process.exit(1); + } + }); + // generate command programInstance .command('generate') diff --git a/scripts/modules/task-manager.js b/scripts/modules/task-manager.js index 61f5948b..be2a95ca 100644 --- a/scripts/modules/task-manager.js +++ b/scripts/modules/task-manager.js @@ -358,6 +358,379 @@ Return only the updated tasks as a valid JSON array.` } } +/** + * Update a single task by ID + * @param {string} tasksPath - Path to the tasks.json file + * @param {number} taskId - Task ID to update + * @param {string} prompt - Prompt with new context + * @param {boolean} useResearch - Whether to use Perplexity AI for research + * @returns {Object} - Updated task data or null if task wasn't updated + */ +async function updateTaskById(tasksPath, taskId, prompt, useResearch = false) { + try { + log('info', `Updating single task ${taskId} with prompt: "${prompt}"`); + + // Validate task ID is a positive integer + if (!Number.isInteger(taskId) || taskId <= 0) { + throw new Error(`Invalid task ID: ${taskId}. Task ID must be a positive integer.`); + } + + // Validate prompt + if (!prompt || typeof prompt !== 'string' || prompt.trim() === '') { + throw new Error('Prompt cannot be empty. 
Please provide context for the task update.'); + } + + // Validate research flag + if (useResearch && (!perplexity || !process.env.PERPLEXITY_API_KEY)) { + log('warn', 'Perplexity AI is not available. Falling back to Claude AI.'); + console.log(chalk.yellow('Perplexity AI is not available (API key may be missing). Falling back to Claude AI.')); + useResearch = false; + } + + // Validate tasks file exists + if (!fs.existsSync(tasksPath)) { + throw new Error(`Tasks file not found at path: ${tasksPath}`); + } + + // Read the tasks file + const data = readJSON(tasksPath); + if (!data || !data.tasks) { + throw new Error(`No valid tasks found in ${tasksPath}. The file may be corrupted or have an invalid format.`); + } + + // Find the specific task to update + const taskToUpdate = data.tasks.find(task => task.id === taskId); + if (!taskToUpdate) { + throw new Error(`Task with ID ${taskId} not found. Please verify the task ID and try again.`); + } + + // Check if task is already completed + if (taskToUpdate.status === 'done' || taskToUpdate.status === 'completed') { + log('warn', `Task ${taskId} is already marked as done and cannot be updated`); + console.log(boxen( + chalk.yellow(`Task ${taskId} is already marked as ${taskToUpdate.status} and cannot be updated.`) + '\n\n' + + chalk.white('Completed tasks are locked to maintain consistency. To modify a completed task, you must first:') + '\n' + + chalk.white('1. Change its status to "pending" or "in-progress"') + '\n' + + chalk.white('2. 
Then run the update-task command'), + { padding: 1, borderColor: 'yellow', borderStyle: 'round' } + )); + return null; + } + + // Show the task that will be updated + const table = new Table({ + head: [ + chalk.cyan.bold('ID'), + chalk.cyan.bold('Title'), + chalk.cyan.bold('Status') + ], + colWidths: [5, 60, 10] + }); + + table.push([ + taskToUpdate.id, + truncate(taskToUpdate.title, 57), + getStatusWithColor(taskToUpdate.status) + ]); + + console.log(boxen( + chalk.white.bold(`Updating Task #${taskId}`), + { padding: 1, borderColor: 'blue', borderStyle: 'round', margin: { top: 1, bottom: 0 } } + )); + + console.log(table.toString()); + + // Display a message about how completed subtasks are handled + console.log(boxen( + chalk.cyan.bold('How Completed Subtasks Are Handled:') + '\n\n' + + chalk.white('â€ĸ Subtasks marked as "done" or "completed" will be preserved\n') + + chalk.white('â€ĸ New subtasks will build upon what has already been completed\n') + + chalk.white('â€ĸ If completed work needs revision, a new subtask will be created instead of modifying done items\n') + + chalk.white('â€ĸ This approach maintains a clear record of completed work and new requirements'), + { padding: 1, borderColor: 'blue', borderStyle: 'round', margin: { top: 1, bottom: 1 } } + )); + + // Build the system prompt + const systemPrompt = `You are an AI assistant helping to update a software development task based on new context. +You will be given a task and a prompt describing changes or new implementation details. +Your job is to update the task to reflect these changes, while preserving its basic structure. + +Guidelines: +1. Maintain the same ID, status, and dependencies unless specifically mentioned in the prompt +2. Update the title, description, details, and test strategy to reflect the new information +3. Do not change anything unnecessarily - just adapt what needs to change based on the prompt +4. Return a complete valid JSON object representing the updated task +5. 
VERY IMPORTANT: Preserve all subtasks marked as "done" or "completed" - do not modify their content +6. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything +7. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly +8. Instead, add a new subtask that clearly indicates what needs to be changed or replaced +9. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted +10. Ensure any new subtasks have unique IDs that don't conflict with existing ones + +The changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.`; + + const taskData = JSON.stringify(taskToUpdate, null, 2); + + let updatedTask; + const loadingIndicator = startLoadingIndicator(useResearch + ? 'Updating task with Perplexity AI research...' + : 'Updating task with Claude AI...'); + + try { + if (useResearch) { + log('info', 'Using Perplexity AI for research-backed task update'); + + // Verify Perplexity API key exists + if (!process.env.PERPLEXITY_API_KEY) { + throw new Error('PERPLEXITY_API_KEY environment variable is missing but --research flag was used.'); + } + + try { + // Call Perplexity AI + const perplexityModel = process.env.PERPLEXITY_MODEL || 'sonar-pro'; + const result = await perplexity.chat.completions.create({ + model: perplexityModel, + messages: [ + { + role: "system", + content: `${systemPrompt}\n\nAdditionally, please research the latest best practices, implementation details, and considerations when updating this task. Use your online search capabilities to gather relevant information. 
Remember to strictly follow the guidelines about preserving completed subtasks and building upon what has already been done rather than modifying or replacing it.` + }, + { + role: "user", + content: `Here is the task to update: +${taskData} + +Please update this task based on the following new context: +${prompt} + +IMPORTANT: In the task JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items. + +Return only the updated task as a valid JSON object.` + } + ], + temperature: parseFloat(process.env.TEMPERATURE || CONFIG.temperature), + max_tokens: parseInt(process.env.MAX_TOKENS || CONFIG.maxTokens), + }); + + const responseText = result.choices[0].message.content; + + // Extract JSON from response + const jsonStart = responseText.indexOf('{'); + const jsonEnd = responseText.lastIndexOf('}'); + + if (jsonStart === -1 || jsonEnd === -1) { + throw new Error("Could not find valid JSON object in Perplexity's response. The response may be malformed."); + } + + const jsonText = responseText.substring(jsonStart, jsonEnd + 1); + + try { + updatedTask = JSON.parse(jsonText); + } catch (parseError) { + throw new Error(`Failed to parse Perplexity response as JSON: ${parseError.message}\nResponse fragment: ${jsonText.substring(0, 100)}...`); + } + } catch (perplexityError) { + throw new Error(`Perplexity API error: ${perplexityError.message}`); + } + } else { + // Call Claude to update the task with streaming enabled + let responseText = ''; + let streamingInterval = null; + + try { + // Verify Anthropic API key exists + if (!process.env.ANTHROPIC_API_KEY) { + throw new Error('ANTHROPIC_API_KEY environment variable is missing. 
Required for task updates.'); + } + + // Update loading indicator to show streaming progress + let dotCount = 0; + const readline = await import('readline'); + streamingInterval = setInterval(() => { + readline.cursorTo(process.stdout, 0); + process.stdout.write(`Receiving streaming response from Claude${'.'.repeat(dotCount)}`); + dotCount = (dotCount + 1) % 4; + }, 500); + + // Use streaming API call + const stream = await anthropic.messages.create({ + model: CONFIG.model, + max_tokens: CONFIG.maxTokens, + temperature: CONFIG.temperature, + system: systemPrompt, + messages: [ + { + role: 'user', + content: `Here is the task to update: +${taskData} + +Please update this task based on the following new context: +${prompt} + +IMPORTANT: In the task JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items. + +Return only the updated task as a valid JSON object.` + } + ], + stream: true + }); + + // Process the stream + for await (const chunk of stream) { + if (chunk.type === 'content_block_delta' && chunk.delta.text) { + responseText += chunk.delta.text; + } + } + + if (streamingInterval) clearInterval(streamingInterval); + log('info', "Completed streaming response from Claude API!"); + + // Extract JSON from response + const jsonStart = responseText.indexOf('{'); + const jsonEnd = responseText.lastIndexOf('}'); + + if (jsonStart === -1 || jsonEnd === -1) { + throw new Error("Could not find valid JSON object in Claude's response. 
The response may be malformed."); + } + + const jsonText = responseText.substring(jsonStart, jsonEnd + 1); + + try { + updatedTask = JSON.parse(jsonText); + } catch (parseError) { + throw new Error(`Failed to parse Claude response as JSON: ${parseError.message}\nResponse fragment: ${jsonText.substring(0, 100)}...`); + } + } catch (claudeError) { + if (streamingInterval) clearInterval(streamingInterval); + throw new Error(`Claude API error: ${claudeError.message}`); + } + } + + // Validation of the updated task + if (!updatedTask || typeof updatedTask !== 'object') { + throw new Error('Received invalid task object from AI. The response did not contain a valid task.'); + } + + // Ensure critical fields exist + if (!updatedTask.title || !updatedTask.description) { + throw new Error('Updated task is missing required fields (title or description).'); + } + + // Ensure ID is preserved + if (updatedTask.id !== taskId) { + log('warn', `Task ID was modified in the AI response. Restoring original ID ${taskId}.`); + updatedTask.id = taskId; + } + + // Ensure status is preserved unless explicitly changed in prompt + if (updatedTask.status !== taskToUpdate.status && !prompt.toLowerCase().includes('status')) { + log('warn', `Task status was modified without explicit instruction. Restoring original status '${taskToUpdate.status}'.`); + updatedTask.status = taskToUpdate.status; + } + + // Ensure completed subtasks are preserved + if (taskToUpdate.subtasks && taskToUpdate.subtasks.length > 0) { + if (!updatedTask.subtasks) { + log('warn', 'Subtasks were removed in the AI response. 
Restoring original subtasks.'); + updatedTask.subtasks = taskToUpdate.subtasks; + } else { + // Check for each completed subtask + const completedSubtasks = taskToUpdate.subtasks.filter( + st => st.status === 'done' || st.status === 'completed' + ); + + for (const completedSubtask of completedSubtasks) { + const updatedSubtask = updatedTask.subtasks.find(st => st.id === completedSubtask.id); + + // If completed subtask is missing or modified, restore it + if (!updatedSubtask) { + log('warn', `Completed subtask ${completedSubtask.id} was removed. Restoring it.`); + updatedTask.subtasks.push(completedSubtask); + } else if ( + updatedSubtask.title !== completedSubtask.title || + updatedSubtask.description !== completedSubtask.description || + updatedSubtask.details !== completedSubtask.details || + updatedSubtask.status !== completedSubtask.status + ) { + log('warn', `Completed subtask ${completedSubtask.id} was modified. Restoring original.`); + // Find and replace the modified subtask + const index = updatedTask.subtasks.findIndex(st => st.id === completedSubtask.id); + if (index !== -1) { + updatedTask.subtasks[index] = completedSubtask; + } + } + } + + // Ensure no duplicate subtask IDs + const subtaskIds = new Set(); + const uniqueSubtasks = []; + + for (const subtask of updatedTask.subtasks) { + if (!subtaskIds.has(subtask.id)) { + subtaskIds.add(subtask.id); + uniqueSubtasks.push(subtask); + } else { + log('warn', `Duplicate subtask ID ${subtask.id} found. 
Removing duplicate.`); + } + } + + updatedTask.subtasks = uniqueSubtasks; + } + } + + // Update the task in the original data + const index = data.tasks.findIndex(t => t.id === taskId); + if (index !== -1) { + data.tasks[index] = updatedTask; + } else { + throw new Error(`Task with ID ${taskId} not found in tasks array.`); + } + + // Write the updated tasks to the file + writeJSON(tasksPath, data); + + log('success', `Successfully updated task ${taskId}`); + + // Generate individual task files + await generateTaskFiles(tasksPath, path.dirname(tasksPath)); + + console.log(boxen( + chalk.green(`Successfully updated task #${taskId}`) + '\n\n' + + chalk.white.bold('Updated Title:') + ' ' + updatedTask.title, + { padding: 1, borderColor: 'green', borderStyle: 'round' } + )); + + // Return the updated task for testing purposes + return updatedTask; + } finally { + stopLoadingIndicator(loadingIndicator); + } + } catch (error) { + log('error', `Error updating task: ${error.message}`); + console.error(chalk.red(`Error: ${error.message}`)); + + // Provide more helpful error messages for common issues + if (error.message.includes('ANTHROPIC_API_KEY')) { + console.log(chalk.yellow('\nTo fix this issue, set your Anthropic API key:')); + console.log(' export ANTHROPIC_API_KEY=your_api_key_here'); + } else if (error.message.includes('PERPLEXITY_API_KEY')) { + console.log(chalk.yellow('\nTo fix this issue:')); + console.log(' 1. Set your Perplexity API key: export PERPLEXITY_API_KEY=your_api_key_here'); + console.log(' 2. Or run without the research flag: task-master update-task --id= --prompt="..."'); + } else if (error.message.includes('Task with ID') && error.message.includes('not found')) { + console.log(chalk.yellow('\nTo fix this issue:')); + console.log(' 1. Run task-master list to see all available task IDs'); + console.log(' 2. 
Use a valid task ID with the --id parameter'); + } + + if (CONFIG.debug) { + console.error(error); + } + + return null; + } +} + /** * Generate individual task files from tasks.json * @param {string} tasksPath - Path to the tasks.json file @@ -2599,6 +2972,7 @@ async function removeSubtask(tasksPath, subtaskId, convertToTask = false, genera export { parsePRD, updateTasks, + updateTaskById, generateTaskFiles, setTaskStatus, updateSingleTaskStatus, diff --git a/tasks/task_023.txt b/tasks/task_023.txt index 35e721d4..daa7aa1c 100644 --- a/tasks/task_023.txt +++ b/tasks/task_023.txt @@ -5,38 +5,38 @@ # Priority: medium # Description: Extend Task Master to function as an MCP server by leveraging FastMCP's JavaScript/TypeScript implementation for efficient context management services. # Details: -This task involves implementing the Model Context Protocol server capabilities within Task Master using FastMCP. The implementation should: +This task involves implementing the Model Context Protocol server capabilities within Task Master. The implementation should: -1. Use FastMCP to create the MCP server module (`mcp-server.ts` or equivalent) -2. Implement the required MCP endpoints using FastMCP: +1. Create a new module `mcp-server.js` that implements the core MCP server functionality +2. Implement the required MCP endpoints: - `/context` - For retrieving and updating context - `/models` - For listing available models - `/execute` - For executing operations with context -3. Utilize FastMCP's built-in features for context management, including: - - Efficient context storage and retrieval - - Context windowing and truncation - - Metadata and tagging support -4. Add authentication and authorization mechanisms using FastMCP capabilities -5. Implement error handling and response formatting as per MCP specifications -6. Configure Task Master to enable/disable MCP server functionality via FastMCP settings -7. Add documentation on using Task Master as an MCP server with FastMCP -8. 
Ensure compatibility with existing MCP clients by adhering to FastMCP's compliance features -9. Optimize performance using FastMCP tools, especially for context retrieval operations -10. Add logging for MCP server operations using FastMCP's logging utilities +3. Develop a context management system that can: + - Store and retrieve context data efficiently + - Handle context windowing and truncation when limits are reached + - Support context metadata and tagging +4. Add authentication and authorization mechanisms for MCP clients +5. Implement proper error handling and response formatting according to MCP specifications +6. Create configuration options in Task Master to enable/disable the MCP server functionality +7. Add documentation for how to use Task Master as an MCP server +8. Ensure the implementation is compatible with existing MCP clients +9. Optimize for performance, especially for context retrieval operations +10. Add logging for MCP server operations -The implementation should follow RESTful API design principles and leverage FastMCP's concurrency handling for multiple client requests. Consider using TypeScript for better type safety and integration with FastMCP[1][2]. +The implementation should follow RESTful API design principles and should be able to handle concurrent requests from multiple clients. # Test Strategy: Testing for the MCP server functionality should include: 1. Unit tests: - - Test each MCP endpoint handler function independently using FastMCP - - Verify context storage and retrieval mechanisms provided by FastMCP + - Test each MCP endpoint handler function independently + - Verify context storage and retrieval mechanisms - Test authentication and authorization logic - Validate error handling for various failure scenarios 2. 
Integration tests: - - Set up a test MCP server instance using FastMCP + - Set up a test MCP server instance - Test complete request/response cycles for each endpoint - Verify context persistence across multiple requests - Test with various payload sizes and content types @@ -44,11 +44,11 @@ Testing for the MCP server functionality should include: 3. Compatibility tests: - Test with existing MCP client libraries - Verify compliance with the MCP specification - - Ensure backward compatibility with any MCP versions supported by FastMCP + - Ensure backward compatibility with any MCP versions supported 4. Performance tests: - Measure response times for context operations with various context sizes - - Test concurrent request handling using FastMCP's concurrency tools + - Test concurrent request handling - Verify memory usage remains within acceptable limits during extended operation 5. Security tests: @@ -79,7 +79,7 @@ Testing approach: - Test basic error handling with invalid requests ## 2. Implement Context Management System [done] -### Dependencies: 23.1 +### Dependencies: 23.1 ### Description: Develop a robust context management system that can efficiently store, retrieve, and manipulate context data according to the MCP specification. ### Details: Implementation steps: @@ -100,7 +100,7 @@ Testing approach: - Test persistence mechanisms with simulated failures ## 3. Implement MCP Endpoints and API Handlers [done] -### Dependencies: 23.1, 23.2 +### Dependencies: 23.1, 23.2 ### Description: Develop the complete API handlers for all required MCP endpoints, ensuring they follow the protocol specification and integrate with the context management system. ### Details: Implementation steps: @@ -126,7 +126,7 @@ Testing approach: - Benchmark endpoint performance ## 4. 
Implement Authentication and Authorization System [pending]
-### Dependencies: 23.1, 23.3
+### Dependencies: 23.1, 23.3
 ### Description: Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality.
 ### Details:
 Implementation steps:
@@ -148,7 +148,7 @@ Testing approach:
 - Verify audit logs contain appropriate information
 
 ## 5. Optimize Performance and Finalize Documentation [pending]
-### Dependencies: 23.1, 23.2, 23.3, 23.4
+### Dependencies: 23.1, 23.2, 23.3, 23.4
 ### Description: Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users.
 ### Details:
 Implementation steps:
diff --git a/tasks/task_034.txt b/tasks/task_034.txt
new file mode 100644
index 00000000..77da9a0a
--- /dev/null
+++ b/tasks/task_034.txt
@@ -0,0 +1,156 @@
+# Task ID: 34
+# Title: Implement updateTask Command for Single Task Updates
+# Status: in-progress
+# Dependencies: None
+# Priority: high
+# Description: Create a new command that allows updating a specific task by ID using AI-driven refinement while preserving completed subtasks and supporting all existing update command options.
+# Details:
+Implement a new command called 'updateTask' that focuses on updating a single task rather than all tasks from an ID onwards. The implementation should:
+
+1. Accept a single task ID as a required parameter
+2. Use the same AI-driven approach as the existing update command to refine the task
+3. Preserve the completion status of any subtasks that were previously marked as complete
+4. Support all options from the existing update command including:
+ - The research flag for Perplexity integration
+ - Any formatting or refinement options
+ - Task context options
+5. Update the CLI help documentation to include this new command
+6. Ensure the command follows the same pattern as other commands in the codebase
+7. Add appropriate error handling for cases where the specified task ID doesn't exist
+8. Implement the ability to update task title, description, and details separately if needed
+9. Ensure the command returns appropriate success/failure messages
+10. Optimize the implementation to only process the single task rather than scanning through all tasks
+
+The command should reuse existing AI prompt templates where possible but modify them to focus on refining a single task rather than multiple tasks.
+
+# Test Strategy:
+Testing should verify the following aspects:
+
+1. **Basic Functionality Test**: Verify that the command successfully updates a single task when given a valid task ID
+2. **Preservation Test**: Create a task with completed subtasks, update it, and verify the completion status remains intact
+3. **Research Flag Test**: Test the command with the research flag and verify it correctly integrates with Perplexity
+4. **Error Handling Tests**:
+ - Test with non-existent task ID and verify appropriate error message
+ - Test with invalid parameters and verify helpful error messages
+5. **Integration Test**: Run a complete workflow that creates a task, updates it with updateTask, and then verifies the changes are persisted
+6. **Comparison Test**: Compare the results of updating a single task with updateTask versus using the original update command on the same task to ensure consistent quality
+7. **Performance Test**: Measure execution time compared to the full update command to verify efficiency gains
+8. **CLI Help Test**: Verify the command appears correctly in help documentation with appropriate descriptions
+
+Create unit tests for the core functionality and integration tests for the complete workflow. Document any edge cases discovered during testing.
+
+# Subtasks:
+## 1. Create updateTaskById function in task-manager.js [done]
+### Dependencies: None
+### Description: Implement a new function in task-manager.js that focuses on updating a single task by ID using AI-driven refinement while preserving completed subtasks.
+### Details:
+Implementation steps:
+1. Create a new `updateTaskById` function in task-manager.js that accepts parameters: taskId, options object (containing research flag, formatting options, etc.)
+2. Implement logic to find a specific task by ID in the tasks array
+3. Add appropriate error handling for cases where the task ID doesn't exist (throw a custom error)
+4. Reuse existing AI prompt templates but modify them to focus on refining a single task
+5. Implement logic to preserve completion status of subtasks that were previously marked as complete
+6. Add support for updating task title, description, and details separately based on options
+7. Optimize the implementation to only process the single task rather than scanning through all tasks
+8. Return the updated task and appropriate success/failure messages
+
+Testing approach:
+- Unit test the function with various scenarios including:
+ - Valid task ID with different update options
+ - Non-existent task ID
+ - Task with completed subtasks to verify preservation
+ - Different combinations of update options
+
+## 2. Implement updateTask command in commands.js [done]
+### Dependencies: 34.1
+### Description: Create a new command called 'updateTask' in commands.js that leverages the updateTaskById function to update a specific task by ID.
+### Details:
+Implementation steps:
+1. Create a new command object for 'updateTask' in commands.js following the Command pattern
+2. Define command parameters including a required taskId parameter
+3. Support all options from the existing update command:
+ - Research flag for Perplexity integration
+ - Formatting and refinement options
+ - Task context options
+4. Implement the command handler function that calls the updateTaskById function from task-manager.js
+5. Add appropriate error handling to catch and display user-friendly error messages
+6. Ensure the command follows the same pattern as other commands in the codebase
+7. Implement proper validation of input parameters
+8. Format and return appropriate success/failure messages to the user
+
+Testing approach:
+- Unit test the command handler with various input combinations
+- Test error handling scenarios
+- Verify command options are correctly passed to the updateTaskById function
+
+## 3. Add comprehensive error handling and validation [done]
+### Dependencies: 34.1, 34.2
+### Description: Implement robust error handling and validation for the updateTask command to ensure proper user feedback and system stability.
+### Details:
+Implementation steps:
+1. Create custom error types for different failure scenarios (TaskNotFoundError, ValidationError, etc.)
+2. Implement input validation for the taskId parameter and all options
+3. Add proper error handling for AI service failures with appropriate fallback mechanisms
+4. Implement concurrency handling to prevent conflicts when multiple updates occur simultaneously
+5. Add comprehensive logging for debugging and auditing purposes
+6. Ensure all error messages are user-friendly and actionable
+7. Implement proper HTTP status codes for API responses if applicable
+8. Add validation to ensure the task exists before attempting updates
+
+Testing approach:
+- Test various error scenarios including invalid inputs, non-existent tasks, and API failures
+- Verify error messages are clear and helpful
+- Test concurrency scenarios with multiple simultaneous updates
+- Verify logging captures appropriate information for troubleshooting
+
+## 4. Write comprehensive tests for updateTask command [in-progress]
+### Dependencies: 34.1, 34.2, 34.3
+### Description: Create a comprehensive test suite for the updateTask command to ensure it works correctly in all scenarios and maintains backward compatibility.
+### Details:
+Implementation steps:
+1. Create unit tests for the updateTaskById function in task-manager.js
+ - Test finding and updating tasks with various IDs
+ - Test preservation of completed subtasks
+ - Test different update options combinations
+ - Test error handling for non-existent tasks
+2. Create unit tests for the updateTask command in commands.js
+ - Test command parameter parsing
+ - Test option handling
+ - Test error scenarios and messages
+3. Create integration tests that verify the end-to-end flow
+ - Test the command with actual AI service integration
+ - Test with mock AI responses for predictable testing
+4. Implement test fixtures and mocks for consistent testing
+5. Add performance tests to ensure the command is efficient
+6. Test edge cases such as empty tasks, tasks with many subtasks, etc.
+
+Testing approach:
+- Use Jest or similar testing framework
+- Implement mocks for external dependencies like AI services
+- Create test fixtures for consistent test data
+- Use snapshot testing for command output verification
+
+## 5. Update CLI documentation and help text [pending]
+### Dependencies: 34.2
+### Description: Update the CLI help documentation to include the new updateTask command and ensure users understand its purpose and options.
+### Details:
+Implementation steps:
+1. Add comprehensive help text for the updateTask command including:
+ - Command description
+ - Required and optional parameters
+ - Examples of usage
+ - Description of all supported options
+2. Update the main CLI help documentation to include the new command
+3. Add the command to any relevant command groups or categories
+4. Create usage examples that demonstrate common scenarios
+5. Update README.md and other documentation files to include information about the new command
+6. Add inline code comments explaining the implementation details
+7. Update any API documentation if applicable
+8. Create or update user guides with the new functionality
+
+Testing approach:
+- Verify help text is displayed correctly when running `--help`
+- Review documentation for clarity and completeness
+- Have team members review the documentation for usability
+- Test examples to ensure they work as documented
+
diff --git a/tasks/task_035.txt b/tasks/task_035.txt
new file mode 100644
index 00000000..6f7aca5d
--- /dev/null
+++ b/tasks/task_035.txt
@@ -0,0 +1,48 @@
+# Task ID: 35
+# Title: Integrate Grok3 API for Research Capabilities
+# Status: pending
+# Dependencies: None
+# Priority: medium
+# Description: Replace the current Perplexity API integration with Grok3 API for all research-related functionalities while maintaining existing feature parity.
+# Details:
+This task involves migrating from Perplexity to Grok3 API for research capabilities throughout the application. Implementation steps include:
+
+1. Create a new API client module for Grok3 in `src/api/grok3.ts` that handles authentication, request formatting, and response parsing
+2. Update the research service layer to use the new Grok3 client instead of Perplexity
+3. Modify the request payload structure to match Grok3's expected format (parameters like temperature, max_tokens, etc.)
+4. Update response handling to properly parse and extract Grok3's response format
+5. Implement proper error handling for Grok3-specific error codes and messages
+6. Update environment variables and configuration files to include Grok3 API keys and endpoints
+7. Ensure rate limiting and quota management are properly implemented according to Grok3's specifications
+8. Update any UI components that display research provider information to show Grok3 instead of Perplexity
+9. Maintain backward compatibility for any stored research results from Perplexity
+10. Document the new API integration in the developer documentation
+
+Grok3 API has different parameter requirements and response formats compared to Perplexity, so careful attention must be paid to these differences during implementation.
+
+# Test Strategy:
+Testing should verify that the Grok3 API integration works correctly and maintains feature parity with the previous Perplexity implementation:
+
+1. Unit tests:
+ - Test the Grok3 API client with mocked responses
+ - Verify proper error handling for various error scenarios (rate limits, authentication failures, etc.)
+ - Test the transformation of application requests to Grok3-compatible format
+
+2. Integration tests:
+ - Perform actual API calls to Grok3 with test credentials
+ - Verify that research results are correctly parsed and returned
+ - Test with various types of research queries to ensure broad compatibility
+
+3. End-to-end tests:
+ - Test the complete research flow from UI input to displayed results
+ - Verify that all existing research features work with the new API
+
+4. Performance tests:
+ - Compare response times between Perplexity and Grok3
+ - Ensure the application handles any differences in response time appropriately
+
+5. Regression tests:
+ - Verify that existing features dependent on research capabilities continue to work
+ - Test that stored research results from Perplexity are still accessible and displayed correctly
+
+Create a test environment with both APIs available to compare results and ensure quality before fully replacing Perplexity with Grok3.
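The migration steps in Task 35 all hang off one seam: the research service layer should talk to a provider-agnostic client, so that swapping Perplexity for Grok3 (or any later backend) touches only a factory and one new client class. A minimal JavaScript sketch of that seam follows — the `createResearchClient` factory, the `researchProvider` option name, the endpoint URL, and the stubbed `query()` bodies are illustrative assumptions, not the project's actual API or Grok3's real interface:

```javascript
// Hypothetical provider-agnostic research layer. Each backend implements the
// same query() contract, so the research service never sees provider-specific
// payloads and a migration only touches the factory plus one client class.
class PerplexityClient {
  async query(prompt, { temperature = 0.2 } = {}) {
    // A real implementation would call the Perplexity API here.
    return { provider: "perplexity", text: `[perplexity] ${prompt}` };
  }
}

class Grok3Client {
  constructor({ apiKey, endpoint = "https://grok3.example/api" } = {}) {
    this.apiKey = apiKey; // loaded from env/config, per step 6
    this.endpoint = endpoint; // placeholder URL, not Grok3's real endpoint
  }

  async query(prompt, { temperature = 0.2, maxTokens = 1024 } = {}) {
    // A real implementation would map {temperature, maxTokens, ...} onto
    // Grok3's payload format (step 3) and parse its response shape (step 4).
    return { provider: "grok3", text: `[grok3] ${prompt}` };
  }
}

// Factory: the rest of the codebase asks for "a research client" and config
// decides which backend it gets (the same idea Tasks 36-37 apply to Claude).
function createResearchClient(config = {}) {
  switch (config.researchProvider) {
    case "grok3":
      return new Grok3Client(config);
    case "perplexity":
    default:
      return new PerplexityClient();
  }
}

const client = createResearchClient({ researchProvider: "grok3", apiKey: "test-key" });
client.query("latest Node.js LTS?").then((r) => console.log(r.provider));
```

With a shape like this, step 2 of the task reduces to changing one config value, and steps 3–5 live entirely inside the new client's `query()` method.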
diff --git a/tasks/task_036.txt b/tasks/task_036.txt
new file mode 100644
index 00000000..02a1ffa2
--- /dev/null
+++ b/tasks/task_036.txt
@@ -0,0 +1,48 @@
+# Task ID: 36
+# Title: Add Ollama Support for AI Services as Claude Alternative
+# Status: pending
+# Dependencies: None
+# Priority: medium
+# Description: Implement Ollama integration as an alternative to Claude for all main AI services, allowing users to run local language models instead of relying on cloud-based Claude API.
+# Details:
+This task involves creating a comprehensive Ollama integration that can replace Claude across all main AI services in the application. Implementation should include:
+
+1. Create an OllamaService class that implements the same interface as the ClaudeService to ensure compatibility
+2. Add configuration options to specify Ollama endpoint URL (default: http://localhost:11434)
+3. Implement model selection functionality to allow users to choose which Ollama model to use (e.g., llama3, mistral, etc.)
+4. Handle prompt formatting specific to Ollama models, ensuring proper system/user message separation
+5. Implement proper error handling for cases where Ollama server is unavailable or returns errors
+6. Add fallback mechanism to Claude when Ollama fails or isn't configured
+7. Update the AI service factory to conditionally create either Claude or Ollama service based on configuration
+8. Ensure token counting and rate limiting are appropriately handled for Ollama models
+9. Add documentation for users explaining how to set up and use Ollama with the application
+10. Optimize prompt templates specifically for Ollama models if needed
+
+The implementation should be toggled through a configuration option (useOllama: true/false) and should maintain all existing functionality currently provided by Claude.
+
+# Test Strategy:
+Testing should verify that Ollama integration works correctly as a drop-in replacement for Claude:
+
+1. Unit tests:
+ - Test OllamaService class methods in isolation with mocked responses
+ - Verify proper error handling when Ollama server is unavailable
+ - Test fallback mechanism to Claude when configured
+
+2. Integration tests:
+ - Test with actual Ollama server running locally with at least two different models
+ - Verify all AI service functions work correctly with Ollama
+ - Compare outputs between Claude and Ollama for quality assessment
+
+3. Configuration tests:
+ - Verify toggling between Claude and Ollama works as expected
+ - Test with various model configurations
+
+4. Performance tests:
+ - Measure and compare response times between Claude and Ollama
+ - Test with different load scenarios
+
+5. Manual testing:
+ - Verify all main AI features work correctly with Ollama
+ - Test edge cases like very long inputs or specialized tasks
+
+Create a test document comparing output quality between Claude and various Ollama models to help users understand the tradeoffs.
diff --git a/tasks/tasks.json b/tasks/tasks.json
index ea4c7082..d3160a25 100644
--- a/tasks/tasks.json
+++ b/tasks/tasks.json
@@ -1731,6 +1731,104 @@
       "priority": "medium",
       "details": "This task involves creating a mechanism to generate a Windsurf-specific rules document by combining three existing MDC (Markdown Content) files that are currently used for Cursor Rules. The implementation should:\n\n1. Identify and locate the three primary .mdc files used for Cursor Rules\n2. Extract content from these files and merge them into a single document\n3. Refactor the content to make it Windsurf-specific, replacing Cursor-specific terminology and adapting guidelines as needed\n4. Create a function that generates a .windsurfrules document from this content\n5. Integrate this function into the initialization pipeline\n6. Implement logic to check if a .windsurfrules document already exists:\n - If it exists, append the new content to it\n - If it doesn't exist, create a new document\n7. Ensure proper error handling for file operations\n8. Add appropriate logging to track the generation and modification of the .windsurfrules document\n\nThe implementation should be modular and maintainable, with clear separation of concerns between content extraction, refactoring, and file operations.",
       "testStrategy": "Testing should verify both the content generation and the integration with the initialization pipeline:\n\n1. Unit Tests:\n - Test the content extraction function with mock .mdc files\n - Test the content refactoring function to ensure Cursor-specific terms are properly replaced\n - Test the file operation functions with mock filesystem\n\n2. Integration Tests:\n - Test the creation of a new .windsurfrules document when none exists\n - Test appending to an existing .windsurfrules document\n - Test the complete initialization pipeline with the new functionality\n\n3. Manual Verification:\n - Inspect the generated .windsurfrules document to ensure content is properly combined and refactored\n - Verify that Cursor-specific terminology has been replaced with Windsurf-specific terminology\n - Run the initialization process multiple times to verify idempotence (content isn't duplicated on multiple runs)\n\n4. Edge Cases:\n - Test with missing or corrupted .mdc files\n - Test with an existing but empty .windsurfrules document\n - Test with an existing .windsurfrules document that already contains some of the content"
+    },
+    {
+      "id": 34,
+      "title": "Implement updateTask Command for Single Task Updates",
+      "description": "Create a new command that allows updating a specific task by ID using AI-driven refinement while preserving completed subtasks and supporting all existing update command options.",
+      "status": "in-progress",
+      "dependencies": [],
+      "priority": "high",
+      "details": "Implement a new command called 'updateTask' that focuses on updating a single task rather than all tasks from an ID onwards. The implementation should:\n\n1. Accept a single task ID as a required parameter\n2. Use the same AI-driven approach as the existing update command to refine the task\n3. Preserve the completion status of any subtasks that were previously marked as complete\n4. Support all options from the existing update command including:\n - The research flag for Perplexity integration\n - Any formatting or refinement options\n - Task context options\n5. Update the CLI help documentation to include this new command\n6. Ensure the command follows the same pattern as other commands in the codebase\n7. Add appropriate error handling for cases where the specified task ID doesn't exist\n8. Implement the ability to update task title, description, and details separately if needed\n9. Ensure the command returns appropriate success/failure messages\n10. Optimize the implementation to only process the single task rather than scanning through all tasks\n\nThe command should reuse existing AI prompt templates where possible but modify them to focus on refining a single task rather than multiple tasks.",
+      "testStrategy": "Testing should verify the following aspects:\n\n1. **Basic Functionality Test**: Verify that the command successfully updates a single task when given a valid task ID\n2. **Preservation Test**: Create a task with completed subtasks, update it, and verify the completion status remains intact\n3. **Research Flag Test**: Test the command with the research flag and verify it correctly integrates with Perplexity\n4. **Error Handling Tests**:\n - Test with non-existent task ID and verify appropriate error message\n - Test with invalid parameters and verify helpful error messages\n5. **Integration Test**: Run a complete workflow that creates a task, updates it with updateTask, and then verifies the changes are persisted\n6. **Comparison Test**: Compare the results of updating a single task with updateTask versus using the original update command on the same task to ensure consistent quality\n7. **Performance Test**: Measure execution time compared to the full update command to verify efficiency gains\n8. **CLI Help Test**: Verify the command appears correctly in help documentation with appropriate descriptions\n\nCreate unit tests for the core functionality and integration tests for the complete workflow. Document any edge cases discovered during testing.",
+      "subtasks": [
+        {
+          "id": 1,
+          "title": "Create updateTaskById function in task-manager.js",
+          "description": "Implement a new function in task-manager.js that focuses on updating a single task by ID using AI-driven refinement while preserving completed subtasks.",
+          "dependencies": [],
+          "details": "Implementation steps:\n1. Create a new `updateTaskById` function in task-manager.js that accepts parameters: taskId, options object (containing research flag, formatting options, etc.)\n2. Implement logic to find a specific task by ID in the tasks array\n3. Add appropriate error handling for cases where the task ID doesn't exist (throw a custom error)\n4. Reuse existing AI prompt templates but modify them to focus on refining a single task\n5. Implement logic to preserve completion status of subtasks that were previously marked as complete\n6. Add support for updating task title, description, and details separately based on options\n7. Optimize the implementation to only process the single task rather than scanning through all tasks\n8. Return the updated task and appropriate success/failure messages\n\nTesting approach:\n- Unit test the function with various scenarios including:\n - Valid task ID with different update options\n - Non-existent task ID\n - Task with completed subtasks to verify preservation\n - Different combinations of update options",
+          "status": "done",
+          "parentTaskId": 34
+        },
+        {
+          "id": 2,
+          "title": "Implement updateTask command in commands.js",
+          "description": "Create a new command called 'updateTask' in commands.js that leverages the updateTaskById function to update a specific task by ID.",
+          "dependencies": [
+            1
+          ],
+          "details": "Implementation steps:\n1. Create a new command object for 'updateTask' in commands.js following the Command pattern\n2. Define command parameters including a required taskId parameter\n3. Support all options from the existing update command:\n - Research flag for Perplexity integration\n - Formatting and refinement options\n - Task context options\n4. Implement the command handler function that calls the updateTaskById function from task-manager.js\n5. Add appropriate error handling to catch and display user-friendly error messages\n6. Ensure the command follows the same pattern as other commands in the codebase\n7. Implement proper validation of input parameters\n8. Format and return appropriate success/failure messages to the user\n\nTesting approach:\n- Unit test the command handler with various input combinations\n- Test error handling scenarios\n- Verify command options are correctly passed to the updateTaskById function",
+          "status": "done",
+          "parentTaskId": 34
+        },
+        {
+          "id": 3,
+          "title": "Add comprehensive error handling and validation",
+          "description": "Implement robust error handling and validation for the updateTask command to ensure proper user feedback and system stability.",
+          "dependencies": [
+            1,
+            2
+          ],
+          "details": "Implementation steps:\n1. Create custom error types for different failure scenarios (TaskNotFoundError, ValidationError, etc.)\n2. Implement input validation for the taskId parameter and all options\n3. Add proper error handling for AI service failures with appropriate fallback mechanisms\n4. Implement concurrency handling to prevent conflicts when multiple updates occur simultaneously\n5. Add comprehensive logging for debugging and auditing purposes\n6. Ensure all error messages are user-friendly and actionable\n7. Implement proper HTTP status codes for API responses if applicable\n8. Add validation to ensure the task exists before attempting updates\n\nTesting approach:\n- Test various error scenarios including invalid inputs, non-existent tasks, and API failures\n- Verify error messages are clear and helpful\n- Test concurrency scenarios with multiple simultaneous updates\n- Verify logging captures appropriate information for troubleshooting",
+          "status": "done",
+          "parentTaskId": 34
+        },
+        {
+          "id": 4,
+          "title": "Write comprehensive tests for updateTask command",
+          "description": "Create a comprehensive test suite for the updateTask command to ensure it works correctly in all scenarios and maintains backward compatibility.",
+          "dependencies": [
+            1,
+            2,
+            3
+          ],
+          "details": "Implementation steps:\n1. Create unit tests for the updateTaskById function in task-manager.js\n - Test finding and updating tasks with various IDs\n - Test preservation of completed subtasks\n - Test different update options combinations\n - Test error handling for non-existent tasks\n2. Create unit tests for the updateTask command in commands.js\n - Test command parameter parsing\n - Test option handling\n - Test error scenarios and messages\n3. Create integration tests that verify the end-to-end flow\n - Test the command with actual AI service integration\n - Test with mock AI responses for predictable testing\n4. Implement test fixtures and mocks for consistent testing\n5. Add performance tests to ensure the command is efficient\n6. Test edge cases such as empty tasks, tasks with many subtasks, etc.\n\nTesting approach:\n- Use Jest or similar testing framework\n- Implement mocks for external dependencies like AI services\n- Create test fixtures for consistent test data\n- Use snapshot testing for command output verification",
+          "status": "in-progress",
+          "parentTaskId": 34
+        },
+        {
+          "id": 5,
+          "title": "Update CLI documentation and help text",
+          "description": "Update the CLI help documentation to include the new updateTask command and ensure users understand its purpose and options.",
+          "dependencies": [
+            2
+          ],
+          "details": "Implementation steps:\n1. Add comprehensive help text for the updateTask command including:\n - Command description\n - Required and optional parameters\n - Examples of usage\n - Description of all supported options\n2. Update the main CLI help documentation to include the new command\n3. Add the command to any relevant command groups or categories\n4. Create usage examples that demonstrate common scenarios\n5. Update README.md and other documentation files to include information about the new command\n6. Add inline code comments explaining the implementation details\n7. Update any API documentation if applicable\n8. Create or update user guides with the new functionality\n\nTesting approach:\n- Verify help text is displayed correctly when running `--help`\n- Review documentation for clarity and completeness\n- Have team members review the documentation for usability\n- Test examples to ensure they work as documented",
+          "status": "pending",
+          "parentTaskId": 34
+        }
+      ]
+    },
+    {
+      "id": 35,
+      "title": "Integrate Grok3 API for Research Capabilities",
+      "description": "Replace the current Perplexity API integration with Grok3 API for all research-related functionalities while maintaining existing feature parity.",
+      "status": "pending",
+      "dependencies": [],
+      "priority": "medium",
+      "details": "This task involves migrating from Perplexity to Grok3 API for research capabilities throughout the application. Implementation steps include:\n\n1. Create a new API client module for Grok3 in `src/api/grok3.ts` that handles authentication, request formatting, and response parsing\n2. Update the research service layer to use the new Grok3 client instead of Perplexity\n3. Modify the request payload structure to match Grok3's expected format (parameters like temperature, max_tokens, etc.)\n4. Update response handling to properly parse and extract Grok3's response format\n5. Implement proper error handling for Grok3-specific error codes and messages\n6. Update environment variables and configuration files to include Grok3 API keys and endpoints\n7. Ensure rate limiting and quota management are properly implemented according to Grok3's specifications\n8. Update any UI components that display research provider information to show Grok3 instead of Perplexity\n9. Maintain backward compatibility for any stored research results from Perplexity\n10. Document the new API integration in the developer documentation\n\nGrok3 API has different parameter requirements and response formats compared to Perplexity, so careful attention must be paid to these differences during implementation.",
+      "testStrategy": "Testing should verify that the Grok3 API integration works correctly and maintains feature parity with the previous Perplexity implementation:\n\n1. Unit tests:\n - Test the Grok3 API client with mocked responses\n - Verify proper error handling for various error scenarios (rate limits, authentication failures, etc.)\n - Test the transformation of application requests to Grok3-compatible format\n\n2. Integration tests:\n - Perform actual API calls to Grok3 with test credentials\n - Verify that research results are correctly parsed and returned\n - Test with various types of research queries to ensure broad compatibility\n\n3. End-to-end tests:\n - Test the complete research flow from UI input to displayed results\n - Verify that all existing research features work with the new API\n\n4. Performance tests:\n - Compare response times between Perplexity and Grok3\n - Ensure the application handles any differences in response time appropriately\n\n5. Regression tests:\n - Verify that existing features dependent on research capabilities continue to work\n - Test that stored research results from Perplexity are still accessible and displayed correctly\n\nCreate a test environment with both APIs available to compare results and ensure quality before fully replacing Perplexity with Grok3."
+    },
+    {
+      "id": 36,
+      "title": "Add Ollama Support for AI Services as Claude Alternative",
+      "description": "Implement Ollama integration as an alternative to Claude for all main AI services, allowing users to run local language models instead of relying on cloud-based Claude API.",
+      "status": "pending",
+      "dependencies": [],
+      "priority": "medium",
+      "details": "This task involves creating a comprehensive Ollama integration that can replace Claude across all main AI services in the application. Implementation should include:\n\n1. Create an OllamaService class that implements the same interface as the ClaudeService to ensure compatibility\n2. Add configuration options to specify Ollama endpoint URL (default: http://localhost:11434)\n3. Implement model selection functionality to allow users to choose which Ollama model to use (e.g., llama3, mistral, etc.)\n4. Handle prompt formatting specific to Ollama models, ensuring proper system/user message separation\n5. Implement proper error handling for cases where Ollama server is unavailable or returns errors\n6. Add fallback mechanism to Claude when Ollama fails or isn't configured\n7. Update the AI service factory to conditionally create either Claude or Ollama service based on configuration\n8. Ensure token counting and rate limiting are appropriately handled for Ollama models\n9. Add documentation for users explaining how to set up and use Ollama with the application\n10. Optimize prompt templates specifically for Ollama models if needed\n\nThe implementation should be toggled through a configuration option (useOllama: true/false) and should maintain all existing functionality currently provided by Claude.",
+      "testStrategy": "Testing should verify that Ollama integration works correctly as a drop-in replacement for Claude:\n\n1. Unit tests:\n - Test OllamaService class methods in isolation with mocked responses\n - Verify proper error handling when Ollama server is unavailable\n - Test fallback mechanism to Claude when configured\n\n2. Integration tests:\n - Test with actual Ollama server running locally with at least two different models\n - Verify all AI service functions work correctly with Ollama\n - Compare outputs between Claude and Ollama for quality assessment\n\n3. Configuration tests:\n - Verify toggling between Claude and Ollama works as expected\n - Test with various model configurations\n\n4. Performance tests:\n - Measure and compare response times between Claude and Ollama\n - Test with different load scenarios\n\n5. Manual testing:\n - Verify all main AI features work correctly with Ollama\n - Test edge cases like very long inputs or specialized tasks\n\nCreate a test document comparing output quality between Claude and various Ollama models to help users understand the tradeoffs."
+    },
+    {
+      "id": 37,
+      "title": "Add Gemini Support for Main AI Services as Claude Alternative",
+      "description": "Implement Google's Gemini API integration as an alternative to Claude for all main AI services, allowing users to switch between different LLM providers.",
+      "status": "pending",
+      "dependencies": [],
+      "priority": "medium",
+      "details": "This task involves integrating Google's Gemini API across all main AI services that currently use Claude:\n\n1. Create a new GeminiService class that implements the same interface as the existing ClaudeService\n2. Implement authentication and API key management for Gemini API\n3. Map our internal prompt formats to Gemini's expected input format\n4. Handle Gemini-specific parameters (temperature, top_p, etc.) and response parsing\n5. Update the AI service factory/provider to support selecting Gemini as an alternative\n6. Add configuration options in settings to allow users to select Gemini as their preferred provider\n7. Implement proper error handling for Gemini-specific API errors\n8. Ensure streaming responses are properly supported if Gemini offers this capability\n9. Update documentation to reflect the new Gemini option\n10. Consider implementing model selection if Gemini offers multiple models (e.g., Gemini Pro, Gemini Ultra)\n11. Ensure all existing AI capabilities (summarization, code generation, etc.) maintain feature parity when using Gemini\n\nThe implementation should follow the same pattern as the recent Ollama integration (Task #36) to maintain consistency in how alternative AI providers are supported.",
+      "testStrategy": "Testing should verify Gemini integration works correctly across all AI services:\n\n1. Unit tests:\n - Test GeminiService class methods with mocked API responses\n - Verify proper error handling for common API errors\n - Test configuration and model selection functionality\n\n2. Integration tests:\n - Verify authentication and API connection with valid credentials\n - Test each AI service with Gemini to ensure proper functionality\n - Compare outputs between Claude and Gemini for the same inputs to verify quality\n\n3. End-to-end tests:\n - Test the complete user flow of switching to Gemini and using various AI features\n - Verify streaming responses work correctly if supported\n\n4. Performance tests:\n - Measure and compare response times between Claude and Gemini\n - Test with various input lengths to verify handling of context limits\n\n5. Manual testing:\n - Verify the quality of Gemini responses across different use cases\n - Test edge cases like very long inputs or specialized domain knowledge\n\nAll tests should pass with Gemini selected as the provider, and the user experience should be consistent regardless of which provider is selected."
} ] } \ No newline at end of file diff --git a/tests/unit/commands.test.js b/tests/unit/commands.test.js index ea997a56..1e95cbac 100644 --- a/tests/unit/commands.test.js +++ b/tests/unit/commands.test.js @@ -6,6 +6,11 @@ import { jest } from '@jest/globals'; // Mock functions that need jest.fn methods const mockParsePRD = jest.fn().mockResolvedValue(undefined); +const mockUpdateTaskById = jest.fn().mockResolvedValue({ + id: 2, + title: 'Updated Task', + description: 'Updated description' +}); const mockDisplayBanner = jest.fn(); const mockDisplayHelp = jest.fn(); const mockLog = jest.fn(); @@ -37,7 +42,8 @@ jest.mock('../../scripts/modules/ui.js', () => ({ })); jest.mock('../../scripts/modules/task-manager.js', () => ({ - parsePRD: mockParsePRD + parsePRD: mockParsePRD, + updateTaskById: mockUpdateTaskById })); // Add this function before the mock of utils.js @@ -286,4 +292,238 @@ describe('Commands Module', () => { expect(mockParsePRD).toHaveBeenCalledWith(testFile, outputFile, numTasks); }); }); + + describe('updateTask command', () => { + // Since mocking Commander is complex, we'll test the action handler directly + // Recreate the action handler logic based on commands.js + async function updateTaskAction(options) { + try { + const tasksPath = options.file; + + // Validate required parameters + if (!options.id) { + console.error(chalk.red('Error: --id parameter is required')); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + return; // Add early return to prevent calling updateTaskById + } + + // Parse the task ID and validate it's a number + const taskId = parseInt(options.id, 10); + if (isNaN(taskId) || taskId <= 0) { + console.error(chalk.red(`Error: Invalid task ID: ${options.id}. 
Task ID must be a positive integer.`)); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + return; // Add early return to prevent calling updateTaskById + } + + if (!options.prompt) { + console.error(chalk.red('Error: --prompt parameter is required. Please provide information about the changes.')); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + return; // Add early return to prevent calling updateTaskById + } + + const prompt = options.prompt; + const useResearch = options.research || false; + + // Validate tasks file exists + if (!fs.existsSync(tasksPath)) { + console.error(chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)); + if (tasksPath === 'tasks/tasks.json') { + console.log(chalk.yellow('Hint: Run task-master init or task-master parse-prd to create tasks.json first')); + } else { + console.log(chalk.yellow(`Hint: Check if the file path is correct: ${tasksPath}`)); + } + process.exit(1); + return; // Add early return to prevent calling updateTaskById + } + + console.log(chalk.blue(`Updating task ${taskId} with prompt: "${prompt}"`)); + console.log(chalk.blue(`Tasks file: ${tasksPath}`)); + + if (useResearch) { + // Verify Perplexity API key exists if using research + if (!process.env.PERPLEXITY_API_KEY) { + console.log(chalk.yellow('Warning: PERPLEXITY_API_KEY environment variable is missing. Research-backed updates will not be available.')); + console.log(chalk.yellow('Falling back to Claude AI for task update.')); + } else { + console.log(chalk.blue('Using Perplexity AI for research-backed task update')); + } + } + + const result = await mockUpdateTaskById(tasksPath, taskId, prompt, useResearch); + + // If the task wasn't updated (e.g., if it was already marked as done) + if (!result) { + console.log(chalk.yellow('\nTask update was not completed. 
Review the messages above for details.')); + } + } catch (error) { + console.error(chalk.red(`Error: ${error.message}`)); + + // Provide more helpful error messages for common issues + if (error.message.includes('task') && error.message.includes('not found')) { + console.log(chalk.yellow('\nTo fix this issue:')); + console.log(' 1. Run task-master list to see all available task IDs'); + console.log(' 2. Use a valid task ID with the --id parameter'); + } else if (error.message.includes('API key')) { + console.log(chalk.yellow('\nThis error is related to API keys. Check your environment variables.')); + } + + if (true) { // CONFIG.debug + console.error(error); + } + + process.exit(1); + } + } + + beforeEach(() => { + // Reset all mocks + jest.clearAllMocks(); + + // Set up spy for existsSync (already mocked in the outer scope) + mockExistsSync.mockReturnValue(true); + }); + + test('should validate required parameters - missing ID', async () => { + // Set up the command options without ID + const options = { + file: 'test-tasks.json', + prompt: 'Update the task' + }; + + // Call the action directly + await updateTaskAction(options); + + // Verify validation error + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('--id parameter is required')); + expect(mockExit).toHaveBeenCalledWith(1); + expect(mockUpdateTaskById).not.toHaveBeenCalled(); + }); + + test('should validate required parameters - invalid ID', async () => { + // Set up the command options with invalid ID + const options = { + file: 'test-tasks.json', + id: 'not-a-number', + prompt: 'Update the task' + }; + + // Call the action directly + await updateTaskAction(options); + + // Verify validation error + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('Invalid task ID')); + expect(mockExit).toHaveBeenCalledWith(1); + expect(mockUpdateTaskById).not.toHaveBeenCalled(); + }); + + test('should validate required parameters - missing prompt', async () => { + // Set up the 
command options without prompt + const options = { + file: 'test-tasks.json', + id: '2' + }; + + // Call the action directly + await updateTaskAction(options); + + // Verify validation error + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('--prompt parameter is required')); + expect(mockExit).toHaveBeenCalledWith(1); + expect(mockUpdateTaskById).not.toHaveBeenCalled(); + }); + + test('should validate tasks file exists', async () => { + // Mock file not existing + mockExistsSync.mockReturnValue(false); + + // Set up the command options + const options = { + file: 'missing-tasks.json', + id: '2', + prompt: 'Update the task' + }; + + // Call the action directly + await updateTaskAction(options); + + // Verify validation error + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('Tasks file not found')); + expect(mockExit).toHaveBeenCalledWith(1); + expect(mockUpdateTaskById).not.toHaveBeenCalled(); + }); + + test('should call updateTaskById with correct parameters', async () => { + // Set up the command options + const options = { + file: 'test-tasks.json', + id: '2', + prompt: 'Update the task', + research: true + }; + + // Mock perplexity API key + process.env.PERPLEXITY_API_KEY = 'dummy-key'; + + // Call the action directly + await updateTaskAction(options); + + // Verify updateTaskById was called with correct parameters + expect(mockUpdateTaskById).toHaveBeenCalledWith( + 'test-tasks.json', + 2, + 'Update the task', + true + ); + + // Verify console output + expect(mockConsoleLog).toHaveBeenCalledWith(expect.stringContaining('Updating task 2')); + expect(mockConsoleLog).toHaveBeenCalledWith(expect.stringContaining('Using Perplexity AI')); + + // Clean up + delete process.env.PERPLEXITY_API_KEY; + }); + + test('should handle null result from updateTaskById', async () => { + // Mock updateTaskById returning null (e.g., task already completed) + mockUpdateTaskById.mockResolvedValueOnce(null); + + // Set up the command 
options + const options = { + file: 'test-tasks.json', + id: '2', + prompt: 'Update the task' + }; + + // Call the action directly + await updateTaskAction(options); + + // Verify updateTaskById was called + expect(mockUpdateTaskById).toHaveBeenCalled(); + + // Verify console output for null result + expect(mockConsoleLog).toHaveBeenCalledWith(expect.stringContaining('Task update was not completed')); + }); + + test('should handle errors from updateTaskById', async () => { + // Mock updateTaskById throwing an error + mockUpdateTaskById.mockRejectedValueOnce(new Error('Task update failed')); + + // Set up the command options + const options = { + file: 'test-tasks.json', + id: '2', + prompt: 'Update the task' + }; + + // Call the action directly + await updateTaskAction(options); + + // Verify error handling + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('Error: Task update failed')); + expect(mockExit).toHaveBeenCalledWith(1); + }); + }); }); \ No newline at end of file diff --git a/tests/unit/task-manager.test.js b/tests/unit/task-manager.test.js index f07d5fff..52f3b7cc 100644 --- a/tests/unit/task-manager.test.js +++ b/tests/unit/task-manager.test.js @@ -22,6 +22,8 @@ const mockValidateAndFixDependencies = jest.fn(); const mockReadJSON = jest.fn(); const mockLog = jest.fn(); const mockIsTaskDependentOn = jest.fn().mockReturnValue(false); +const mockCreate = jest.fn(); // Mock for Anthropic messages.create +const mockChatCompletionsCreate = jest.fn(); // Mock for Perplexity chat.completions.create // Mock fs module jest.mock('fs', () => ({ @@ -63,6 +65,30 @@ jest.mock('../../scripts/modules/ai-services.js', () => ({ callPerplexity: mockCallPerplexity })); +// Mock Anthropic SDK +jest.mock('@anthropic-ai/sdk', () => { + return { + Anthropic: jest.fn().mockImplementation(() => ({ + messages: { + create: mockCreate + } + })) + }; +}); + +// Mock Perplexity using OpenAI +jest.mock('openai', () => { + return { + default: 
jest.fn().mockImplementation(() => ({ + chat: { + completions: { + create: mockChatCompletionsCreate + } + } + })) + }; +}); + // Mock the task-manager module itself to control what gets imported jest.mock('../../scripts/modules/task-manager.js', () => { // Get the original module to preserve function implementations @@ -227,7 +253,7 @@ import { sampleClaudeResponse } from '../fixtures/sample-claude-response.js'; import { sampleTasks, emptySampleTasks } from '../fixtures/sample-tasks.js'; // Destructure the required functions for convenience -const { findNextTask, generateTaskFiles, clearSubtasks } = taskManager; +const { findNextTask, generateTaskFiles, clearSubtasks, updateTaskById } = taskManager; describe('Task Manager Module', () => { beforeEach(() => { @@ -1697,4 +1723,294 @@ const testRemoveSubtask = (tasksPath, subtaskId, convertToTask = false, generate } return convertedTask; -}; \ No newline at end of file +}; + +describe.skip('updateTaskById function', () => { + let mockConsoleLog; + let mockConsoleError; + let mockProcess; + + beforeEach(() => { + // Reset all mocks + jest.clearAllMocks(); + + // Set up default mock values + mockExistsSync.mockReturnValue(true); + mockWriteJSON.mockImplementation(() => {}); + mockGenerateTaskFiles.mockResolvedValue(undefined); + + // Create a deep copy of sample tasks for tests - use imported ES module instead of require + const sampleTasksDeepCopy = JSON.parse(JSON.stringify(sampleTasks)); + mockReadJSON.mockReturnValue(sampleTasksDeepCopy); + + // Mock console and process.exit + mockConsoleLog = jest.spyOn(console, 'log').mockImplementation(() => {}); + mockConsoleError = jest.spyOn(console, 'error').mockImplementation(() => {}); + mockProcess = jest.spyOn(process, 'exit').mockImplementation(() => {}); + }); + + afterEach(() => { + // Restore console and process.exit + mockConsoleLog.mockRestore(); + mockConsoleError.mockRestore(); + mockProcess.mockRestore(); + }); + + test('should update a task successfully', async 
() => { + // Mock the return value of messages.create and Anthropic + const mockTask = { + id: 2, + title: "Updated Core Functionality", + description: "Updated description", + status: "in-progress", + dependencies: [1], + priority: "high", + details: "Updated details", + testStrategy: "Updated test strategy" + }; + + // Mock streaming for successful response + const mockStream = { + [Symbol.asyncIterator]: jest.fn().mockImplementation(() => { + return { + next: jest.fn() + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '{"id": 2, "title": "Updated Core Functionality",' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '"description": "Updated description", "status": "in-progress",' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '"dependencies": [1], "priority": "high", "details": "Updated details",' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '"testStrategy": "Updated test strategy"}' } + } + }) + .mockResolvedValueOnce({ done: true }) + }; + }) + }; + + mockCreate.mockResolvedValue(mockStream); + + // Call the function + const result = await updateTaskById('test-tasks.json', 2, 'Update task 2 with new information'); + + // Verify the task was updated + expect(result).toBeDefined(); + expect(result.title).toBe("Updated Core Functionality"); + expect(result.description).toBe("Updated description"); + + // Verify the correct functions were called + expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json'); + expect(mockCreate).toHaveBeenCalled(); + expect(mockWriteJSON).toHaveBeenCalled(); + expect(mockGenerateTaskFiles).toHaveBeenCalled(); + + // Verify the task was updated in the tasks data + const tasksData = mockWriteJSON.mock.calls[0][1]; + const updatedTask = tasksData.tasks.find(task => task.id === 2); + 
expect(updatedTask).toEqual(mockTask); + }); + + test('should return null when task is already completed', async () => { + // Call the function with a completed task + const result = await updateTaskById('test-tasks.json', 1, 'Update task 1 with new information'); + + // Verify the result is null + expect(result).toBeNull(); + + // Verify the correct functions were called + expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json'); + expect(mockCreate).not.toHaveBeenCalled(); + expect(mockWriteJSON).not.toHaveBeenCalled(); + expect(mockGenerateTaskFiles).not.toHaveBeenCalled(); + }); + + test('should handle task not found error', async () => { + // Call the function with a non-existent task + const result = await updateTaskById('test-tasks.json', 999, 'Update non-existent task'); + + // Verify the result is null + expect(result).toBeNull(); + + // Verify the error was logged + expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('Task with ID 999 not found')); + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('Task with ID 999 not found')); + + // Verify the correct functions were called + expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json'); + expect(mockCreate).not.toHaveBeenCalled(); + expect(mockWriteJSON).not.toHaveBeenCalled(); + expect(mockGenerateTaskFiles).not.toHaveBeenCalled(); + }); + + test('should preserve completed subtasks', async () => { + // Modify the sample data to have a task with completed subtasks + const tasksData = mockReadJSON(); + const task = tasksData.tasks.find(t => t.id === 3); + if (task && task.subtasks && task.subtasks.length > 0) { + // Mark the first subtask as completed + task.subtasks[0].status = 'done'; + task.subtasks[0].title = 'Completed Header Component'; + mockReadJSON.mockReturnValue(tasksData); + } + + // Mock a response that tries to modify the completed subtask + const mockStream = { + [Symbol.asyncIterator]: jest.fn().mockImplementation(() => { + return { + next: 
jest.fn() + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '{"id": 3, "title": "Updated UI Components",' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '"description": "Updated description", "status": "pending",' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '"dependencies": [2], "priority": "medium", "subtasks": [' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '{"id": 1, "title": "Modified Header Component", "status": "pending"},' } + } + }) + .mockResolvedValueOnce({ + done: false, + value: { + type: 'content_block_delta', + delta: { text: '{"id": 2, "title": "Create Footer Component", "status": "pending"}]}' } + } + }) + .mockResolvedValueOnce({ done: true }) + }; + }) + }; + + mockCreate.mockResolvedValue(mockStream); + + // Call the function + const result = await updateTaskById('test-tasks.json', 3, 'Update UI components task'); + + // Verify the subtasks were preserved + expect(result).toBeDefined(); + expect(result.subtasks[0].title).toBe('Completed Header Component'); + expect(result.subtasks[0].status).toBe('done'); + + // Verify the correct functions were called + expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json'); + expect(mockCreate).toHaveBeenCalled(); + expect(mockWriteJSON).toHaveBeenCalled(); + expect(mockGenerateTaskFiles).toHaveBeenCalled(); + }); + + test('should handle missing tasks file', async () => { + // Mock file not existing + mockExistsSync.mockReturnValue(false); + + // Call the function + const result = await updateTaskById('missing-tasks.json', 2, 'Update task'); + + // Verify the result is null + expect(result).toBeNull(); + + // Verify the error was logged + expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('Tasks file not found')); + 
expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('Tasks file not found')); + + // Verify the correct functions were called + expect(mockReadJSON).not.toHaveBeenCalled(); + expect(mockCreate).not.toHaveBeenCalled(); + expect(mockWriteJSON).not.toHaveBeenCalled(); + expect(mockGenerateTaskFiles).not.toHaveBeenCalled(); + }); + + test('should handle API errors', async () => { + // Mock API error + mockCreate.mockRejectedValue(new Error('API error')); + + // Call the function + const result = await updateTaskById('test-tasks.json', 2, 'Update task'); + + // Verify the result is null + expect(result).toBeNull(); + + // Verify the error was logged + expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('API error')); + expect(mockConsoleError).toHaveBeenCalledWith(expect.stringContaining('API error')); + + // Verify the correct functions were called + expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json'); + expect(mockCreate).toHaveBeenCalled(); + expect(mockWriteJSON).not.toHaveBeenCalled(); // Should not write on error + expect(mockGenerateTaskFiles).not.toHaveBeenCalled(); // Should not generate on error + }); + + test('should use Perplexity AI when research flag is true', async () => { + // Mock Perplexity API response + const mockPerplexityResponse = { + choices: [ + { + message: { + content: '{"id": 2, "title": "Researched Core Functionality", "description": "Research-backed description", "status": "in-progress", "dependencies": [1], "priority": "high", "details": "Research-backed details", "testStrategy": "Research-backed test strategy"}' + } + } + ] + }; + + mockChatCompletionsCreate.mockResolvedValue(mockPerplexityResponse); + + // Set the Perplexity API key in environment + process.env.PERPLEXITY_API_KEY = 'dummy-key'; + + // Call the function with research flag + const result = await updateTaskById('test-tasks.json', 2, 'Update task with research', true); + + // Verify the task was updated with research-backed 
information + expect(result).toBeDefined(); + expect(result.title).toBe("Researched Core Functionality"); + expect(result.description).toBe("Research-backed description"); + + // Verify the Perplexity API was called + expect(mockChatCompletionsCreate).toHaveBeenCalled(); + expect(mockCreate).not.toHaveBeenCalled(); // Claude should not be called + + // Verify the correct functions were called + expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json'); + expect(mockWriteJSON).toHaveBeenCalled(); + expect(mockGenerateTaskFiles).toHaveBeenCalled(); + + // Clean up + delete process.env.PERPLEXITY_API_KEY; + }); +}); \ No newline at end of file From 2ce73c625edb72359463a00ca87d81e4d303c758 Mon Sep 17 00:00:00 2001 From: Eyal Toledano Date: Thu, 27 Mar 2025 01:46:13 -0400 Subject: [PATCH 07/16] Ensures that the updateTask (single task) doesn't change the title of the task. --- .cursor/mcp.json | 11 ++++ scripts/modules/task-manager.js | 21 ++++--- tasks/task_023.txt | 105 +++++++++++++++++--------------- tasks/task_037.txt | 49 +++++++++++++++ tasks/tasks.json | 23 +++++-- 5 files changed, 144 insertions(+), 65 deletions(-) create mode 100644 tasks/task_037.txt diff --git a/.cursor/mcp.json b/.cursor/mcp.json index e69de29b..e416c639 100644 --- a/.cursor/mcp.json +++ b/.cursor/mcp.json @@ -0,0 +1,11 @@ +{ + "mcpServers": { + "taskmaster-ai": { + "command": "npx", + "args": [ + "-y", + "bin/task-master-mcp-server.js" + ] + } + } +} \ No newline at end of file diff --git a/scripts/modules/task-manager.js b/scripts/modules/task-manager.js index be2a95ca..5788a068 100644 --- a/scripts/modules/task-manager.js +++ b/scripts/modules/task-manager.js @@ -456,16 +456,17 @@ You will be given a task and a prompt describing changes or new implementation d Your job is to update the task to reflect these changes, while preserving its basic structure. Guidelines: -1. Maintain the same ID, status, and dependencies unless specifically mentioned in the prompt -2. 
Update the title, description, details, and test strategy to reflect the new information -3. Do not change anything unnecessarily - just adapt what needs to change based on the prompt -4. Return a complete valid JSON object representing the updated task -5. VERY IMPORTANT: Preserve all subtasks marked as "done" or "completed" - do not modify their content -6. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything -7. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly -8. Instead, add a new subtask that clearly indicates what needs to be changed or replaced -9. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted -10. Ensure any new subtasks have unique IDs that don't conflict with existing ones +1. VERY IMPORTANT: NEVER change the title of the task - keep it exactly as is +2. Maintain the same ID, status, and dependencies unless specifically mentioned in the prompt +3. Update the description, details, and test strategy to reflect the new information +4. Do not change anything unnecessarily - just adapt what needs to change based on the prompt +5. Return a complete valid JSON object representing the updated task +6. VERY IMPORTANT: Preserve all subtasks marked as "done" or "completed" - do not modify their content +7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything +8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly +9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced +10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted +11. 
Ensure any new subtasks have unique IDs that don't conflict with existing ones The changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.`; diff --git a/tasks/task_023.txt b/tasks/task_023.txt index daa7aa1c..862d1dc7 100644 --- a/tasks/task_023.txt +++ b/tasks/task_023.txt @@ -1,61 +1,48 @@ # Task ID: 23 -# Title: Implement MCP Server Functionality for Task Master using FastMCP +# Title: Complete MCP Server Implementation for Task Master using FastMCP # Status: pending # Dependencies: 22 # Priority: medium -# Description: Extend Task Master to function as an MCP server by leveraging FastMCP's JavaScript/TypeScript implementation for efficient context management services. +# Description: Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. # Details: -This task involves implementing the Model Context Protocol server capabilities within Task Master. The implementation should: +This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include: -1. Create a new module `mcp-server.js` that implements the core MCP server functionality -2. Implement the required MCP endpoints: - - `/context` - For retrieving and updating context - - `/models` - For listing available models - - `/execute` - For executing operations with context -3. Develop a context management system that can: - - Store and retrieve context data efficiently - - Handle context windowing and truncation when limits are reached - - Support context metadata and tagging -4. Add authentication and authorization mechanisms for MCP clients -5. Implement proper error handling and response formatting according to MCP specifications -6. 
Create configuration options in Task Master to enable/disable the MCP server functionality -7. Add documentation for how to use Task Master as an MCP server -8. Ensure the implementation is compatible with existing MCP clients -9. Optimize for performance, especially for context retrieval operations -10. Add logging for MCP server operations +1. Transition from CLI-based execution to direct Task Master function imports for improved performance and reliability. +2. Enhance authentication and authorization mechanisms using FastMCP's built-in capabilities (e.g., API keys, OAuth, or JWT). +3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging. +4. Optimize server performance by leveraging FastMCP's efficient transport mechanisms (e.g., stdio or SSE) and implementing caching for frequently accessed contexts. +5. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration. +6. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides. -The implementation should follow RESTful API design principles and should be able to handle concurrent requests from multiple clients. +The implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling. # Test Strategy: -Testing for the MCP server functionality should include: +Testing for the updated MCP server functionality should include: 1. Unit tests: - - Test each MCP endpoint handler function independently - - Verify context storage and retrieval mechanisms - - Test authentication and authorization logic - - Validate error handling for various failure scenarios + - Validate direct function imports for Task Master tools. + - Test updated authentication and authorization mechanisms. 
+ - Verify context management operations (CRUD, metadata, windowing). 2. Integration tests: - - Set up a test MCP server instance - - Test complete request/response cycles for each endpoint - - Verify context persistence across multiple requests - - Test with various payload sizes and content types + - Test the MCP server with FastMCP's stdio and SSE transport modes. + - Verify end-to-end request/response cycles for each endpoint. + - Ensure compatibility with the ModelContextProtocol SDK. -3. Compatibility tests: - - Test with existing MCP client libraries - - Verify compliance with the MCP specification - - Ensure backward compatibility with any MCP versions supported +3. Performance tests: + - Benchmark response times for context operations with large datasets. + - Test caching mechanisms and concurrent request handling. + - Measure memory usage and server stability under load. -4. Performance tests: - - Measure response times for context operations with various context sizes - - Test concurrent request handling - - Verify memory usage remains within acceptable limits during extended operation +4. Security tests: + - Validate the robustness of authentication/authorization mechanisms. + - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access. -5. Security tests: - - Verify authentication mechanisms cannot be bypassed - - Test for common API vulnerabilities (injection, CSRF, etc.) +5. Documentation validation: + - Ensure all examples in the documentation are accurate and functional. + - Verify manual testing workflows using tools like curl or Postman. -All tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman. +All tests should be automated and integrated into the CI/CD pipeline to ensure consistent quality. # Subtasks: ## 1. 
Create Core MCP Server Module and Basic Structure [done] @@ -153,15 +140,17 @@ Testing approach: ### Details: Implementation steps: 1. Profile the MCP server to identify performance bottlenecks -2. Implement caching mechanisms for frequently accessed contexts -3. Optimize context serialization and deserialization -4. Add connection pooling for database operations (if applicable) -5. Implement request batching for bulk operations -6. Create comprehensive API documentation with examples -7. Add setup and configuration guides to the Task Master documentation -8. Create example client implementations -9. Add monitoring endpoints for server health and metrics -10. Implement graceful degradation under high load +2. Replace CLI-based execution with direct Task Master function imports +3. Implement caching mechanisms for frequently accessed contexts +4. Optimize context serialization and deserialization +5. Leverage FastMCP's efficient transport mechanisms (stdio or SSE) +6. Add connection pooling for database operations (if applicable) +7. Implement request batching for bulk operations +8. Create comprehensive API documentation with examples +9. Add setup and configuration guides to the Task Master documentation +10. Create example client implementations +11. Add monitoring endpoints for server health and metrics +12. Implement graceful degradation under high load Testing approach: - Load testing with simulated concurrent clients @@ -171,3 +160,19 @@ Testing approach: - Test monitoring endpoints - Perform stress testing to identify failure points +## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [pending] +### Dependencies: 23.1, 23.2, 23.3 +### Description: Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling. +### Details: +Implementation steps: +1. Replace manual tool registration with ModelContextProtocol SDK methods. +2. 
Use SDK utilities to simplify resource and template management. +3. Ensure compatibility with FastMCP's transport mechanisms. +4. Update server initialization to include SDK-based configurations. + +Testing approach: +- Verify SDK integration with all MCP endpoints. +- Test resource and template registration using SDK methods. +- Validate compatibility with existing MCP clients. +- Benchmark performance improvements from SDK integration. + diff --git a/tasks/task_037.txt b/tasks/task_037.txt new file mode 100644 index 00000000..5e88ea43 --- /dev/null +++ b/tasks/task_037.txt @@ -0,0 +1,49 @@ +# Task ID: 37 +# Title: Add Gemini Support for Main AI Services as Claude Alternative +# Status: pending +# Dependencies: None +# Priority: medium +# Description: Implement Google's Gemini API integration as an alternative to Claude for all main AI services, allowing users to switch between different LLM providers. +# Details: +This task involves integrating Google's Gemini API across all main AI services that currently use Claude: + +1. Create a new GeminiService class that implements the same interface as the existing ClaudeService +2. Implement authentication and API key management for Gemini API +3. Map our internal prompt formats to Gemini's expected input format +4. Handle Gemini-specific parameters (temperature, top_p, etc.) and response parsing +5. Update the AI service factory/provider to support selecting Gemini as an alternative +6. Add configuration options in settings to allow users to select Gemini as their preferred provider +7. Implement proper error handling for Gemini-specific API errors +8. Ensure streaming responses are properly supported if Gemini offers this capability +9. Update documentation to reflect the new Gemini option +10. Consider implementing model selection if Gemini offers multiple models (e.g., Gemini Pro, Gemini Ultra) +11. Ensure all existing AI capabilities (summarization, code generation, etc.) 
maintain feature parity when using Gemini + +The implementation should follow the same pattern as the recent Ollama integration (Task #36) to maintain consistency in how alternative AI providers are supported. + +# Test Strategy: +Testing should verify Gemini integration works correctly across all AI services: + +1. Unit tests: + - Test GeminiService class methods with mocked API responses + - Verify proper error handling for common API errors + - Test configuration and model selection functionality + +2. Integration tests: + - Verify authentication and API connection with valid credentials + - Test each AI service with Gemini to ensure proper functionality + - Compare outputs between Claude and Gemini for the same inputs to verify quality + +3. End-to-end tests: + - Test the complete user flow of switching to Gemini and using various AI features + - Verify streaming responses work correctly if supported + +4. Performance tests: + - Measure and compare response times between Claude and Gemini + - Test with various input lengths to verify handling of context limits + +5. Manual testing: + - Verify the quality of Gemini responses across different use cases + - Test edge cases like very long inputs or specialized domain knowledge + +All tests should pass with Gemini selected as the provider, and the user experience should be consistent regardless of which provider is selected. 
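Task 37's provider-swap design (a `GeminiService` implementing the same interface as `ClaudeService`, selected through a factory) can be sketched as below. This is a minimal illustration only: the class and factory names follow the task description, but the `complete()` method name, option shape, and stubbed bodies are assumptions, not the actual Task Master API — the real services would wrap the Anthropic and Gemini SDK clients.

```javascript
// Hypothetical sketch of the provider-selection pattern described in task 37.
// The concrete services are stubs so the shape of the abstraction is clear.

class ClaudeService {
  constructor(options = {}) {
    this.model = options.model || "claude-3-sonnet";
  }
  // A real implementation would call the Anthropic SDK here.
  async complete(prompt) {
    return `[claude:${this.model}] ${prompt}`;
  }
}

class GeminiService {
  constructor(options = {}) {
    this.model = options.model || "gemini-pro";
  }
  // Same signature as ClaudeService, so callers never branch on provider.
  async complete(prompt) {
    return `[gemini:${this.model}] ${prompt}`;
  }
}

// Factory: callers ask for "a provider" by name, never a concrete class,
// so adding a provider (as Task #36 did for Ollama) touches only this switch.
function createAIService(provider, options) {
  switch (provider) {
    case "claude":
      return new ClaudeService(options);
    case "gemini":
      return new GeminiService(options);
    default:
      throw new Error(`Unknown AI provider: ${provider}`);
  }
}

// Usage: switching providers becomes a one-line configuration change.
const svc = createAIService(process.env.AI_PROVIDER || "claude");
svc.complete("Summarize task 37").then(console.log);
```

The point of the factory is that every AI feature (summarization, code generation, etc.) depends only on the shared interface, which is what makes the "feature parity" requirement testable: the same test suite can run once per provider.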
diff --git a/tasks/tasks.json b/tasks/tasks.json index d3160a25..f7757d79 100644 --- a/tasks/tasks.json +++ b/tasks/tasks.json @@ -1336,15 +1336,15 @@ }, { "id": 23, - "title": "Implement MCP Server Functionality for Task Master using FastMCP", - "description": "Extend Task Master to function as an MCP server by leveraging FastMCP's JavaScript/TypeScript implementation for efficient context management services.", + "title": "Complete MCP Server Implementation for Task Master using FastMCP", + "description": "Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management.", "status": "pending", "dependencies": [ 22 ], "priority": "medium", - "details": "This task involves implementing the Model Context Protocol server capabilities within Task Master. The implementation should:\n\n1. Create a new module `mcp-server.js` that implements the core MCP server functionality\n2. Implement the required MCP endpoints:\n - `/context` - For retrieving and updating context\n - `/models` - For listing available models\n - `/execute` - For executing operations with context\n3. Develop a context management system that can:\n - Store and retrieve context data efficiently\n - Handle context windowing and truncation when limits are reached\n - Support context metadata and tagging\n4. Add authentication and authorization mechanisms for MCP clients\n5. Implement proper error handling and response formatting according to MCP specifications\n6. Create configuration options in Task Master to enable/disable the MCP server functionality\n7. Add documentation for how to use Task Master as an MCP server\n8. Ensure the implementation is compatible with existing MCP clients\n9. Optimize for performance, especially for context retrieval operations\n10. 
Add logging for MCP server operations\n\nThe implementation should follow RESTful API design principles and should be able to handle concurrent requests from multiple clients.", - "testStrategy": "Testing for the MCP server functionality should include:\n\n1. Unit tests:\n - Test each MCP endpoint handler function independently\n - Verify context storage and retrieval mechanisms\n - Test authentication and authorization logic\n - Validate error handling for various failure scenarios\n\n2. Integration tests:\n - Set up a test MCP server instance\n - Test complete request/response cycles for each endpoint\n - Verify context persistence across multiple requests\n - Test with various payload sizes and content types\n\n3. Compatibility tests:\n - Test with existing MCP client libraries\n - Verify compliance with the MCP specification\n - Ensure backward compatibility with any MCP versions supported\n\n4. Performance tests:\n - Measure response times for context operations with various context sizes\n - Test concurrent request handling\n - Verify memory usage remains within acceptable limits during extended operation\n\n5. Security tests:\n - Verify authentication mechanisms cannot be bypassed\n - Test for common API vulnerabilities (injection, CSRF, etc.)\n\nAll tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman.", + "details": "This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include:\n\n1. Transition from CLI-based execution to direct Task Master function imports for improved performance and reliability.\n2. Enhance authentication and authorization mechanisms using FastMCP's built-in capabilities (e.g., API keys, OAuth, or JWT).\n3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging.\n4. 
Optimize server performance by leveraging FastMCP's efficient transport mechanisms (e.g., stdio or SSE) and implementing caching for frequently accessed contexts.\n5. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration.\n6. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides.\n\nThe implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling.", + "testStrategy": "Testing for the updated MCP server functionality should include:\n\n1. Unit tests:\n - Validate direct function imports for Task Master tools.\n - Test updated authentication and authorization mechanisms.\n - Verify context management operations (CRUD, metadata, windowing).\n\n2. Integration tests:\n - Test the MCP server with FastMCP's stdio and SSE transport modes.\n - Verify end-to-end request/response cycles for each endpoint.\n - Ensure compatibility with the ModelContextProtocol SDK.\n\n3. Performance tests:\n - Benchmark response times for context operations with large datasets.\n - Test caching mechanisms and concurrent request handling.\n - Measure memory usage and server stability under load.\n\n4. Security tests:\n - Validate the robustness of authentication/authorization mechanisms.\n - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access.\n\n5. Documentation validation:\n - Ensure all examples in the documentation are accurate and functional.\n - Verify manual testing workflows using tools like curl or Postman.\n\nAll tests should be automated and integrated into the CI/CD pipeline to ensure consistent quality.", "subtasks": [ { "id": 1, @@ -1400,7 +1400,20 @@ 3, 4 ], - "details": "Implementation steps:\n1. Profile the MCP server to identify performance bottlenecks\n2. 
Implement caching mechanisms for frequently accessed contexts\n3. Optimize context serialization and deserialization\n4. Add connection pooling for database operations (if applicable)\n5. Implement request batching for bulk operations\n6. Create comprehensive API documentation with examples\n7. Add setup and configuration guides to the Task Master documentation\n8. Create example client implementations\n9. Add monitoring endpoints for server health and metrics\n10. Implement graceful degradation under high load\n\nTesting approach:\n- Load testing with simulated concurrent clients\n- Measure response times for various operations\n- Test with large context sizes to verify performance\n- Verify documentation accuracy with sample requests\n- Test monitoring endpoints\n- Perform stress testing to identify failure points", + "details": "Implementation steps:\n1. Profile the MCP server to identify performance bottlenecks\n2. Replace CLI-based execution with direct Task Master function imports\n3. Implement caching mechanisms for frequently accessed contexts\n4. Optimize context serialization and deserialization\n5. Leverage FastMCP's efficient transport mechanisms (stdio or SSE)\n6. Add connection pooling for database operations (if applicable)\n7. Implement request batching for bulk operations\n8. Create comprehensive API documentation with examples\n9. Add setup and configuration guides to the Task Master documentation\n10. Create example client implementations\n11. Add monitoring endpoints for server health and metrics\n12. 
Implement graceful degradation under high load\n\nTesting approach:\n- Load testing with simulated concurrent clients\n- Measure response times for various operations\n- Test with large context sizes to verify performance\n- Verify documentation accuracy with sample requests\n- Test monitoring endpoints\n- Perform stress testing to identify failure points", + "status": "pending", + "parentTaskId": 23 + }, + { + "id": 6, + "title": "Refactor MCP Server to Leverage ModelContextProtocol SDK", + "description": "Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling.", + "dependencies": [ + 1, + 2, + 3 + ], + "details": "Implementation steps:\n1. Replace manual tool registration with ModelContextProtocol SDK methods.\n2. Use SDK utilities to simplify resource and template management.\n3. Ensure compatibility with FastMCP's transport mechanisms.\n4. Update server initialization to include SDK-based configurations.\n\nTesting approach:\n- Verify SDK integration with all MCP endpoints.\n- Test resource and template registration using SDK methods.\n- Validate compatibility with existing MCP clients.\n- Benchmark performance improvements from SDK integration.", "status": "pending", "parentTaskId": 23 } From d65c76d4cc041af2dc5228816d1b03ccbfbd40e5 Mon Sep 17 00:00:00 2001 From: Eyal Toledano Date: Thu, 27 Mar 2025 13:32:56 -0400 Subject: [PATCH 08/16] git commit -m "fix: improve CLI error handling and standardize option flags This commit fixes several issues with command line interface error handling: 1. Fix inconsistent behavior between --no-generate and --skip-generate: - Standardized on --skip-generate across all commands - Updated bin/task-master.js to use --skip-generate instead of --no-generate - Modified add-subtask and remove-subtask commands to use --skip-generate 2. 
Enhance error handling for unknown options: - Removed .allowUnknownOption() from commands to properly detect unknown options - Added global error handler in bin/task-master.js for unknown commands/options - Added command-specific error handlers with helpful error messages 3. Improve user experience with better help messages: - Added helper functions to display formatted command help on errors - Created command-specific help displays for add-subtask and remove-subtask - Show available options when encountering unknown options 4. Update MCP server configuration: - Modified .cursor/mcp.json to use node ./mcp-server/server.js directly - Removed npx -y usage for more reliable execution 5. Other minor improvements: - Adjusted column width for task ID display in UI - Updated version number in package-lock.json to 0.9.30 This resolves issues where users would see confusing error messages like 'error: unknown option --generate' when using an incorrect flag." --- .cursor/mcp.json | 5 +- bin/task-master.js | 40 +++++++++-- package-lock.json | 6 +- scripts/modules/commands.js | 70 +++++++++++++++++-- scripts/modules/ui.js | 2 +- tasks/task_023.txt | 133 ++++++++++++++++++++---------------- tasks/tasks.json | 102 +++++++++++++++++++-------- 7 files changed, 255 insertions(+), 103 deletions(-) diff --git a/.cursor/mcp.json b/.cursor/mcp.json index e416c639..6b838029 100644 --- a/.cursor/mcp.json +++ b/.cursor/mcp.json @@ -1,10 +1,9 @@ { "mcpServers": { "taskmaster-ai": { - "command": "npx", + "command": "node", "args": [ - "-y", - "bin/task-master-mcp-server.js" + "./mcp-server/server.js" ] } } diff --git a/bin/task-master.js b/bin/task-master.js index cc0fffbc..28473f74 100755 --- a/bin/task-master.js +++ b/bin/task-master.js @@ -13,6 +13,7 @@ import { Command } from 'commander'; import { displayHelp, displayBanner } from '../scripts/modules/ui.js'; import { registerCommands } from '../scripts/modules/commands.js'; import { detectCamelCaseFlags } from 
'../scripts/modules/utils.js'; +import chalk from 'chalk'; const __filename = fileURLToPath(import.meta.url); const __dirname = dirname(__filename); @@ -167,7 +168,7 @@ function createDevScriptAction(commandName) { if (value === true) { args.push(`--${kebabKey}`); } else if (value === false && key === 'generate') { - args.push('--no-generate'); + args.push('--skip-generate'); } } else { // Always use kebab-case for option names @@ -253,7 +254,6 @@ registerInitCommand(program); program .command('dev') .description('Run the dev.js script') - .allowUnknownOption(true) .action(() => { const args = process.argv.slice(process.argv.indexOf('dev') + 1); runDevScript(args); @@ -273,8 +273,7 @@ tempProgram.commands.forEach(cmd => { // Create a new command with the same name and description const newCmd = program .command(cmd.name()) - .description(cmd.description()) - .allowUnknownOption(); // Allow any options, including camelCase ones + .description(cmd.description()); // Copy all options cmd.options.forEach(opt => { @@ -292,6 +291,39 @@ tempProgram.commands.forEach(cmd => { // Parse the command line arguments program.parse(process.argv); +// Add global error handling for unknown commands and options +process.on('uncaughtException', (err) => { + // Check if this is a commander.js unknown option error + if (err.code === 'commander.unknownOption') { + const option = err.message.match(/'([^']+)'/)?.[1]; + const commandArg = process.argv.find(arg => !arg.startsWith('-') && + arg !== 'task-master' && + !arg.includes('/') && + arg !== 'node'); + const command = commandArg || 'unknown'; + + console.error(chalk.red(`Error: Unknown option '${option}'`)); + console.error(chalk.yellow(`Run 'task-master ${command} --help' to see available options for this command`)); + process.exit(1); + } + + // Check if this is a commander.js unknown command error + if (err.code === 'commander.unknownCommand') { + const command = err.message.match(/'([^']+)'/)?.[1]; + + 
console.error(chalk.red(`Error: Unknown command '${command}'`)); + console.error(chalk.yellow(`Run 'task-master --help' to see available commands`)); + process.exit(1); + } + + // Handle other uncaught exceptions + console.error(chalk.red(`Error: ${err.message}`)); + if (process.env.DEBUG === '1') { + console.error(err); + } + process.exit(1); +}); + // Show help if no command was provided (just 'task-master' with no args) if (process.argv.length <= 2) { displayBanner(); diff --git a/package-lock.json b/package-lock.json index 42eee10f..2afa26e4 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "task-master-ai", - "version": "0.9.18", + "version": "0.9.30", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "task-master-ai", - "version": "0.9.18", + "version": "0.9.30", "license": "MIT", "dependencies": { "@anthropic-ai/sdk": "^0.39.0", @@ -29,7 +29,7 @@ "bin": { "task-master": "bin/task-master.js", "task-master-init": "bin/task-master-init.js", - "task-master-mcp": "mcp-server/server.js" + "task-master-mcp-server": "mcp-server/server.js" }, "devDependencies": { "@types/jest": "^29.5.14", diff --git a/scripts/modules/commands.js b/scripts/modules/commands.js index 9c95f43c..a354d17d 100644 --- a/scripts/modules/commands.js +++ b/scripts/modules/commands.js @@ -47,6 +47,14 @@ import { * @param {Object} program - Commander program instance */ function registerCommands(programInstance) { + // Add global error handler for unknown options + programInstance.on('option:unknown', function(unknownOption) { + const commandName = this._name || 'unknown'; + console.error(chalk.red(`Error: Unknown option '${unknownOption}'`)); + console.error(chalk.yellow(`Run 'task-master ${commandName} --help' to see available options`)); + process.exit(1); + }); + // Default help programInstance.on('--help', function() { displayHelp(); @@ -524,15 +532,16 @@ function registerCommands(programInstance) { .option('--details ', 'Implementation 
details for the new subtask') .option('--dependencies ', 'Comma-separated list of dependency IDs for the new subtask') .option('-s, --status ', 'Status for the new subtask', 'pending') - .option('--no-generate', 'Skip regenerating task files') + .option('--skip-generate', 'Skip regenerating task files') .action(async (options) => { const tasksPath = options.file; const parentId = options.parent; const existingTaskId = options.taskId; - const generateFiles = options.generate; + const generateFiles = !options.skipGenerate; if (!parentId) { console.error(chalk.red('Error: --parent parameter is required. Please provide a parent task ID.')); + showAddSubtaskHelp(); process.exit(1); } @@ -594,8 +603,36 @@ function registerCommands(programInstance) { console.error(chalk.red(`Error: ${error.message}`)); process.exit(1); } + }) + .on('error', function(err) { + console.error(chalk.red(`Error: ${err.message}`)); + showAddSubtaskHelp(); + process.exit(1); }); + // Helper function to show add-subtask command help + function showAddSubtaskHelp() { + console.log(boxen( + chalk.white.bold('Add Subtask Command Help') + '\n\n' + + chalk.cyan('Usage:') + '\n' + + ` task-master add-subtask --parent= [options]\n\n` + + chalk.cyan('Options:') + '\n' + + ' -p, --parent Parent task ID (required)\n' + + ' -i, --task-id Existing task ID to convert to subtask\n' + + ' -t, --title Title for the new subtask\n' + + ' -d, --description <text> Description for the new subtask\n' + + ' --details <text> Implementation details for the new subtask\n' + + ' --dependencies <ids> Comma-separated list of dependency IDs\n' + + ' -s, --status <status> Status for the new subtask (default: "pending")\n' + + ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' + + ' --skip-generate Skip regenerating task files\n\n' + + chalk.cyan('Examples:') + '\n' + + ' task-master add-subtask --parent=5 --task-id=8\n' + + ' task-master add-subtask -p 5 -t "Implement login UI" -d "Create the login 
form"', + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); + } + // remove-subtask command programInstance .command('remove-subtask') @@ -603,15 +640,16 @@ function registerCommands(programInstance) { .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json') .option('-i, --id <id>', 'Subtask ID to remove in format "parentId.subtaskId" (required)') .option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting it') - .option('--no-generate', 'Skip regenerating task files') + .option('--skip-generate', 'Skip regenerating task files') .action(async (options) => { const tasksPath = options.file; const subtaskId = options.id; const convertToTask = options.convert || false; - const generateFiles = options.generate; + const generateFiles = !options.skipGenerate; if (!subtaskId) { console.error(chalk.red('Error: --id parameter is required. Please provide a subtask ID in format "parentId.subtaskId".')); + showRemoveSubtaskHelp(); process.exit(1); } @@ -645,10 +683,34 @@ function registerCommands(programInstance) { } } catch (error) { console.error(chalk.red(`Error: ${error.message}`)); + showRemoveSubtaskHelp(); process.exit(1); } + }) + .on('error', function(err) { + console.error(chalk.red(`Error: ${err.message}`)); + showRemoveSubtaskHelp(); + process.exit(1); }); + // Helper function to show remove-subtask command help + function showRemoveSubtaskHelp() { + console.log(boxen( + chalk.white.bold('Remove Subtask Command Help') + '\n\n' + + chalk.cyan('Usage:') + '\n' + + ` task-master remove-subtask --id=<parentId.subtaskId> [options]\n\n` + + chalk.cyan('Options:') + '\n' + + ' -i, --id <id> Subtask ID to remove in format "parentId.subtaskId" (required)\n' + + ' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' + + ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' + + ' --skip-generate Skip regenerating task files\n\n' + + chalk.cyan('Examples:') + '\n' + + ' 
task-master remove-subtask --id=5.2\n' + + ' task-master remove-subtask --id=5.2 --convert', + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); + } + // init command (documentation only, implementation is in init.js) programInstance .command('init') diff --git a/scripts/modules/ui.js b/scripts/modules/ui.js index 62a32ef8..c541b2ff 100644 --- a/scripts/modules/ui.js +++ b/scripts/modules/ui.js @@ -760,7 +760,7 @@ async function displayTaskById(tasksPath, taskId) { const availableWidth = process.stdout.columns - 10 || 100; // Default to 100 if can't detect // Define percentage-based column widths - const idWidthPct = 8; + const idWidthPct = 10; const statusWidthPct = 15; const depsWidthPct = 25; const titleWidthPct = 100 - idWidthPct - statusWidthPct - depsWidthPct; diff --git a/tasks/task_023.txt b/tasks/task_023.txt index 862d1dc7..e674999a 100644 --- a/tasks/task_023.txt +++ b/tasks/task_023.txt @@ -1,18 +1,21 @@ # Task ID: 23 # Title: Complete MCP Server Implementation for Task Master using FastMCP -# Status: pending +# Status: in-progress # Dependencies: 22 # Priority: medium -# Description: Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. +# Description: Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. 
Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices. # Details: This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include: -1. Transition from CLI-based execution to direct Task Master function imports for improved performance and reliability. -2. Enhance authentication and authorization mechanisms using FastMCP's built-in capabilities (e.g., API keys, OAuth, or JWT). +1. Transition from CLI-based execution (currently using `child_process.spawnSync`) to direct Task Master function imports for improved performance and reliability. +2. Implement caching mechanisms for frequently accessed contexts to enhance performance, leveraging FastMCP's efficient transport mechanisms (e.g., stdio). 3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging. -4. Optimize server performance by leveraging FastMCP's efficient transport mechanisms (e.g., stdio or SSE) and implementing caching for frequently accessed contexts. -5. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration. -6. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides. +4. Refactor tool registration in `tools/index.js` to include clear descriptions and parameter definitions, leveraging FastMCP's decorator-based patterns for better integration. +5. Enhance transport type handling to ensure proper stdio communication and compatibility with FastMCP. +6. Ensure the MCP server can be instantiated and run correctly when installed globally via `npx` or `npm i -g`. +7. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration, ensuring compatibility with FastMCP's transport mechanisms. +8. 
Identify and address missing components or functionalities to meet FastMCP best practices, such as robust error handling, monitoring endpoints, and concurrency support. +9. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides. The implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling. @@ -20,14 +23,17 @@ The implementation must ensure compatibility with existing MCP clients and follo Testing for the updated MCP server functionality should include: 1. Unit tests: - - Validate direct function imports for Task Master tools. + - Validate direct function imports for Task Master tools, replacing CLI-based execution. - Test updated authentication and authorization mechanisms. - Verify context management operations (CRUD, metadata, windowing). + - Test caching mechanisms for frequently accessed contexts. + - Validate proper tool registration with descriptions and parameters. 2. Integration tests: - - Test the MCP server with FastMCP's stdio and SSE transport modes. + - Test the MCP server with FastMCP's stdio transport mode. - Verify end-to-end request/response cycles for each endpoint. - Ensure compatibility with the ModelContextProtocol SDK. + - Test the tool registration process in `tools/index.js` for correctness and efficiency. 3. Performance tests: - Benchmark response times for context operations with large datasets. @@ -38,7 +44,11 @@ Testing for the updated MCP server functionality should include: - Validate the robustness of authentication/authorization mechanisms. - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access. -5. Documentation validation: +5. Deployment tests: + - Verify proper server instantiation and operation when installed via `npx` or `npm i -g`. + - Test configuration loading from `mcp.json`. + +6. 
Documentation validation: - Ensure all examples in the documentation are accurate and functional. - Verify manual testing workflows using tools like curl or Postman. @@ -112,54 +122,6 @@ Testing approach: - Test error handling with invalid inputs - Benchmark endpoint performance -## 4. Implement Authentication and Authorization System [pending] -### Dependencies: 23.1, 23.3 -### Description: Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality. -### Details: -Implementation steps: -1. Design authentication scheme (API keys, OAuth, JWT, etc.) -2. Implement authentication middleware for all MCP endpoints -3. Create an API key management system for client applications -4. Develop role-based access control for different operations -5. Implement rate limiting to prevent abuse -6. Add secure token validation and handling -7. Create endpoints for managing client credentials -8. Implement audit logging for authentication events - -Testing approach: -- Security testing for authentication mechanisms -- Test access control with various permission levels -- Verify rate limiting functionality -- Test token validation with valid and invalid tokens -- Simulate unauthorized access attempts -- Verify audit logs contain appropriate information - -## 5. Optimize Performance and Finalize Documentation [pending] -### Dependencies: 23.1, 23.2, 23.3, 23.4 -### Description: Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users. -### Details: -Implementation steps: -1. Profile the MCP server to identify performance bottlenecks -2. Replace CLI-based execution with direct Task Master function imports -3. Implement caching mechanisms for frequently accessed contexts -4. Optimize context serialization and deserialization -5. Leverage FastMCP's efficient transport mechanisms (stdio or SSE) -6. 
Add connection pooling for database operations (if applicable) -7. Implement request batching for bulk operations -8. Create comprehensive API documentation with examples -9. Add setup and configuration guides to the Task Master documentation -10. Create example client implementations -11. Add monitoring endpoints for server health and metrics -12. Implement graceful degradation under high load - -Testing approach: -- Load testing with simulated concurrent clients -- Measure response times for various operations -- Test with large context sizes to verify performance -- Verify documentation accuracy with sample requests -- Test monitoring endpoints -- Perform stress testing to identify failure points - ## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [pending] ### Dependencies: 23.1, 23.2, 23.3 ### Description: Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling. @@ -176,3 +138,58 @@ Testing approach: - Validate compatibility with existing MCP clients. - Benchmark performance improvements from SDK integration. +## 8. Implement Direct Function Imports and Replace CLI-based Execution [pending] +### Dependencies: None +### Description: Refactor the MCP server implementation to use direct Task Master function imports instead of the current CLI-based execution using child_process.spawnSync. This will improve performance, reliability, and enable better error handling. +### Details: +1. Create a new module to import and expose Task Master core functions directly +2. Modify tools/utils.js to remove executeTaskMasterCommand and replace with direct function calls +3. Update each tool implementation (listTasks.js, showTask.js, etc.) to use the direct function imports +4. Implement proper error handling with try/catch blocks and FastMCP's MCPError +5. Add unit tests to verify the function imports work correctly +6. 
Test performance improvements by comparing response times between CLI and function import approaches + +## 9. Implement Context Management and Caching Mechanisms [pending] +### Dependencies: 23.1 +### Description: Enhance the MCP server with proper context management and caching to improve performance and user experience, especially for frequently accessed data and contexts. +### Details: +1. Implement a context manager class that leverages FastMCP's Context object +2. Add caching for frequently accessed task data with configurable TTL settings +3. Implement context tagging for better organization of context data +4. Add methods to efficiently handle large context windows +5. Create helper functions for storing and retrieving context data +6. Implement cache invalidation strategies for task updates +7. Add cache statistics for monitoring performance +8. Create unit tests for context management and caching functionality + +## 10. Enhance Tool Registration and Resource Management [pending] +### Dependencies: 23.1 +### Description: Refactor tool registration to follow FastMCP best practices, using decorators and improving the overall structure. Implement proper resource management for task templates and other shared resources. +### Details: +1. Update registerTaskMasterTools function to use FastMCP's decorator pattern +2. Implement @mcp.tool() decorators for all existing tools +3. Add proper type annotations and documentation for all tools +4. Create resource handlers for task templates using @mcp.resource() +5. Implement resource templates for common task patterns +6. Update the server initialization to properly register all tools and resources +7. Add validation for tool inputs using FastMCP's built-in validation +8. Create comprehensive tests for tool registration and resource access + +## 11. 
Implement Comprehensive Error Handling [pending]
+### Dependencies: 23.1, 23.3
+### Description: Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.
+### Details:
+1. Create custom error types extending MCPError for different categories (validation, auth, etc.)
+2. Implement standardized error responses following MCP protocol
+3. Add error handling middleware for all MCP endpoints
+4. Ensure proper error propagation from tools to client
+5. Add debug mode with detailed error information
+6. Document error types and handling patterns
+
+## 12. Implement Structured Logging System [pending]
+### Dependencies: 23.1, 23.3
+### Description: Implement a comprehensive logging system for the MCP server with different log levels, structured logging format, and request/response tracking.
+### Details:
+1. Design structured log format for consistent parsing
+2. Implement different log levels (debug, info, warn, error)
+3. Add request/response logging middleware
+4. Implement correlation IDs for request tracking
+5. Add performance metrics logging
+6. Configure log output destinations (console, file)
+7. Document logging patterns and usage
+
+## 13. Create Testing Framework and Test Suite [pending]
+### Dependencies: 23.1, 23.3, 23.8
+### Description: Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.
+### Details:
+1. Set up Jest testing framework with proper configuration
+2. Create MCPTestClient for testing FastMCP server interaction
+3. Implement unit tests for individual tool functions
+4. Create integration tests for end-to-end request/response cycles
+5. Set up test fixtures and mock data
+6. Implement test coverage reporting
+7. 
Document testing guidelines and examples + diff --git a/tasks/tasks.json b/tasks/tasks.json index f7757d79..73e407b0 100644 --- a/tasks/tasks.json +++ b/tasks/tasks.json @@ -1337,14 +1337,14 @@ { "id": 23, "title": "Complete MCP Server Implementation for Task Master using FastMCP", - "description": "Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management.", - "status": "pending", + "description": "Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices.", + "status": "in-progress", "dependencies": [ 22 ], "priority": "medium", - "details": "This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include:\n\n1. Transition from CLI-based execution to direct Task Master function imports for improved performance and reliability.\n2. Enhance authentication and authorization mechanisms using FastMCP's built-in capabilities (e.g., API keys, OAuth, or JWT).\n3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging.\n4. 
Optimize server performance by leveraging FastMCP's efficient transport mechanisms (e.g., stdio or SSE) and implementing caching for frequently accessed contexts.\n5. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration.\n6. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides.\n\nThe implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling.", - "testStrategy": "Testing for the updated MCP server functionality should include:\n\n1. Unit tests:\n - Validate direct function imports for Task Master tools.\n - Test updated authentication and authorization mechanisms.\n - Verify context management operations (CRUD, metadata, windowing).\n\n2. Integration tests:\n - Test the MCP server with FastMCP's stdio and SSE transport modes.\n - Verify end-to-end request/response cycles for each endpoint.\n - Ensure compatibility with the ModelContextProtocol SDK.\n\n3. Performance tests:\n - Benchmark response times for context operations with large datasets.\n - Test caching mechanisms and concurrent request handling.\n - Measure memory usage and server stability under load.\n\n4. Security tests:\n - Validate the robustness of authentication/authorization mechanisms.\n - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access.\n\n5. Documentation validation:\n - Ensure all examples in the documentation are accurate and functional.\n - Verify manual testing workflows using tools like curl or Postman.\n\nAll tests should be automated and integrated into the CI/CD pipeline to ensure consistent quality.", + "details": "This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include:\n\n1. 
Transition from CLI-based execution (currently using `child_process.spawnSync`) to direct Task Master function imports for improved performance and reliability.\n2. Implement caching mechanisms for frequently accessed contexts to enhance performance, leveraging FastMCP's efficient transport mechanisms (e.g., stdio).\n3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging.\n4. Refactor tool registration in `tools/index.js` to include clear descriptions and parameter definitions, leveraging FastMCP's decorator-based patterns for better integration.\n5. Enhance transport type handling to ensure proper stdio communication and compatibility with FastMCP.\n6. Ensure the MCP server can be instantiated and run correctly when installed globally via `npx` or `npm i -g`.\n7. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration, ensuring compatibility with FastMCP's transport mechanisms.\n8. Identify and address missing components or functionalities to meet FastMCP best practices, such as robust error handling, monitoring endpoints, and concurrency support.\n9. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides.\n\nThe implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling.", + "testStrategy": "Testing for the updated MCP server functionality should include:\n\n1. Unit tests:\n - Validate direct function imports for Task Master tools, replacing CLI-based execution.\n - Test updated authentication and authorization mechanisms.\n - Verify context management operations (CRUD, metadata, windowing).\n - Test caching mechanisms for frequently accessed contexts.\n - Validate proper tool registration with descriptions and parameters.\n\n2. 
Integration tests:\n - Test the MCP server with FastMCP's stdio transport mode.\n - Verify end-to-end request/response cycles for each endpoint.\n - Ensure compatibility with the ModelContextProtocol SDK.\n - Test the tool registration process in `tools/index.js` for correctness and efficiency.\n\n3. Performance tests:\n - Benchmark response times for context operations with large datasets.\n - Test caching mechanisms and concurrent request handling.\n - Measure memory usage and server stability under load.\n\n4. Security tests:\n - Validate the robustness of authentication/authorization mechanisms.\n - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access.\n\n5. Deployment tests:\n - Verify proper server instantiation and operation when installed via `npx` or `npm i -g`.\n - Test configuration loading from `mcp.json`.\n\n6. Documentation validation:\n - Ensure all examples in the documentation are accurate and functional.\n - Verify manual testing workflows using tools like curl or Postman.\n\nAll tests should be automated and integrated into the CI/CD pipeline to ensure consistent quality.", "subtasks": [ { "id": 1, @@ -1378,32 +1378,6 @@ "status": "done", "parentTaskId": 23 }, - { - "id": 4, - "title": "Implement Authentication and Authorization System", - "description": "Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality.", - "dependencies": [ - 1, - 3 - ], - "details": "Implementation steps:\n1. Design authentication scheme (API keys, OAuth, JWT, etc.)\n2. Implement authentication middleware for all MCP endpoints\n3. Create an API key management system for client applications\n4. Develop role-based access control for different operations\n5. Implement rate limiting to prevent abuse\n6. Add secure token validation and handling\n7. Create endpoints for managing client credentials\n8. 
Implement audit logging for authentication events\n\nTesting approach:\n- Security testing for authentication mechanisms\n- Test access control with various permission levels\n- Verify rate limiting functionality\n- Test token validation with valid and invalid tokens\n- Simulate unauthorized access attempts\n- Verify audit logs contain appropriate information", - "status": "pending", - "parentTaskId": 23 - }, - { - "id": 5, - "title": "Optimize Performance and Finalize Documentation", - "description": "Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users.", - "dependencies": [ - 1, - 2, - 3, - 4 - ], - "details": "Implementation steps:\n1. Profile the MCP server to identify performance bottlenecks\n2. Replace CLI-based execution with direct Task Master function imports\n3. Implement caching mechanisms for frequently accessed contexts\n4. Optimize context serialization and deserialization\n5. Leverage FastMCP's efficient transport mechanisms (stdio or SSE)\n6. Add connection pooling for database operations (if applicable)\n7. Implement request batching for bulk operations\n8. Create comprehensive API documentation with examples\n9. Add setup and configuration guides to the Task Master documentation\n10. Create example client implementations\n11. Add monitoring endpoints for server health and metrics\n12. Implement graceful degradation under high load\n\nTesting approach:\n- Load testing with simulated concurrent clients\n- Measure response times for various operations\n- Test with large context sizes to verify performance\n- Verify documentation accuracy with sample requests\n- Test monitoring endpoints\n- Perform stress testing to identify failure points", - "status": "pending", - "parentTaskId": 23 - }, { "id": 6, "title": "Refactor MCP Server to Leverage ModelContextProtocol SDK", @@ -1416,6 +1390,74 @@ "details": "Implementation steps:\n1. 
Replace manual tool registration with ModelContextProtocol SDK methods.\n2. Use SDK utilities to simplify resource and template management.\n3. Ensure compatibility with FastMCP's transport mechanisms.\n4. Update server initialization to include SDK-based configurations.\n\nTesting approach:\n- Verify SDK integration with all MCP endpoints.\n- Test resource and template registration using SDK methods.\n- Validate compatibility with existing MCP clients.\n- Benchmark performance improvements from SDK integration.", "status": "pending", "parentTaskId": 23 + }, + { + "id": 8, + "title": "Implement Direct Function Imports and Replace CLI-based Execution", + "description": "Refactor the MCP server implementation to use direct Task Master function imports instead of the current CLI-based execution using child_process.spawnSync. This will improve performance, reliability, and enable better error handling.", + "dependencies": [], + "details": "1. Create a new module to import and expose Task Master core functions directly\n2. Modify tools/utils.js to remove executeTaskMasterCommand and replace with direct function calls\n3. Update each tool implementation (listTasks.js, showTask.js, etc.) to use the direct function imports\n4. Implement proper error handling with try/catch blocks and FastMCP's MCPError\n5. Add unit tests to verify the function imports work correctly\n6. Test performance improvements by comparing response times between CLI and function import approaches", + "status": "pending", + "parentTaskId": 23 + }, + { + "id": 9, + "title": "Implement Context Management and Caching Mechanisms", + "description": "Enhance the MCP server with proper context management and caching to improve performance and user experience, especially for frequently accessed data and contexts.", + "dependencies": [ + 1 + ], + "details": "1. Implement a context manager class that leverages FastMCP's Context object\n2. 
Add caching for frequently accessed task data with configurable TTL settings\n3. Implement context tagging for better organization of context data\n4. Add methods to efficiently handle large context windows\n5. Create helper functions for storing and retrieving context data\n6. Implement cache invalidation strategies for task updates\n7. Add cache statistics for monitoring performance\n8. Create unit tests for context management and caching functionality",
+        "status": "pending",
+        "parentTaskId": 23
+      },
+      {
+        "id": 10,
+        "title": "Enhance Tool Registration and Resource Management",
+        "description": "Refactor tool registration to follow FastMCP best practices, using decorators and improving the overall structure. Implement proper resource management for task templates and other shared resources.",
+        "dependencies": [
+          1
+        ],
+        "details": "1. Update registerTaskMasterTools function to use FastMCP's decorator pattern\n2. Implement @mcp.tool() decorators for all existing tools\n3. Add proper type annotations and documentation for all tools\n4. Create resource handlers for task templates using @mcp.resource()\n5. Implement resource templates for common task patterns\n6. Update the server initialization to properly register all tools and resources\n7. Add validation for tool inputs using FastMCP's built-in validation\n8. Create comprehensive tests for tool registration and resource access",
+        "status": "pending",
+        "parentTaskId": 23
+      },
+      {
+        "id": 11,
+        "title": "Implement Comprehensive Error Handling",
+        "description": "Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.",
+        "details": "1. Create custom error types extending MCPError for different categories (validation, auth, etc.)\n2. Implement standardized error responses following MCP protocol\n3. Add error handling middleware for all MCP endpoints\n4. Ensure proper error propagation from tools to client\n5. Add debug mode with detailed error information\n6. Document error types and handling patterns",
+        "status": "pending",
+        "dependencies": [
+          "23.1",
+          "23.3"
+        ],
+        "parentTaskId": 23
+      },
+      {
+        "id": 12,
+        "title": "Implement Structured Logging System",
+        "description": "Implement a comprehensive logging system for the MCP server with different log levels, structured logging format, and request/response tracking.",
+        "details": "1. Design structured log format for consistent parsing\n2. Implement different log levels (debug, info, warn, error)\n3. Add request/response logging middleware\n4. Implement correlation IDs for request tracking\n5. Add performance metrics logging\n6. Configure log output destinations (console, file)\n7. Document logging patterns and usage",
+        "status": "pending",
+        "dependencies": [
+          "23.1",
+          "23.3"
+        ],
+        "parentTaskId": 23
+      },
+      {
+        "id": 13,
+        "title": "Create Testing Framework and Test Suite",
+        "description": "Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.",
+        "details": "1. Set up Jest testing framework with proper configuration\n2. Create MCPTestClient for testing FastMCP server interaction\n3. Implement unit tests for individual tool functions\n4. Create integration tests for end-to-end request/response cycles\n5. Set up test fixtures and mock data\n6. Implement test coverage reporting\n7. 
Document testing guidelines and examples", + "status": "pending", + "dependencies": [ + "23.1", + "23.3", + "23.8" + ], + "parentTaskId": 23 } ] }, From 05095c47456b9119e350662572abe98f8ebb028e Mon Sep 17 00:00:00 2001 From: Eyal Toledano <eyal@microangel.so> Date: Thu, 27 Mar 2025 16:14:12 -0400 Subject: [PATCH 09/16] feat: enhance commands with multi-subtask support, MCP integration, and update notifications - Add support for comma-separated subtask IDs in remove-subtask command - Implement MCP configuration in project initialization - Add package update notification system with version comparison - Improve command documentation with boolean flag conventions - Add comprehensive error handling for unknown options - Update help text with better examples and formatting - Implement proper validation for command inputs - Add global error handling patterns with helpful user messages --- .cursor/rules/commands.mdc | 75 +++++++++++ scripts/init.js | 81 ++++++++++++ scripts/modules/commands.js | 214 +++++++++++++++++++++++++----- tasks/task_023.txt | 154 +++++++++++++++++----- tasks/task_038.txt | 56 ++++++++ tasks/tasks.json | 43 +++++- test-version-check-full.js | 69 ++++++++++ test-version-check.js | 22 ++++ tests/unit/commands.test.js | 55 ++++++++ tests/unit/init.test.js | 251 ++++++++++++++++++++++++++++++++++++ 10 files changed, 956 insertions(+), 64 deletions(-) create mode 100644 tasks/task_038.txt create mode 100644 test-version-check-full.js create mode 100644 test-version-check.js diff --git a/.cursor/rules/commands.mdc b/.cursor/rules/commands.mdc index 4f80ac09..04dfec92 100644 --- a/.cursor/rules/commands.mdc +++ b/.cursor/rules/commands.mdc @@ -52,6 +52,28 @@ alwaysApply: false > **Note**: Although options are defined with kebab-case (`--num-tasks`), Commander.js stores them internally as camelCase properties. Access them in code as `options.numTasks`, not `options['num-tasks']`. 
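
As a quick illustration of this mapping, the following sketch mirrors the kebab-case to camelCase conversion Commander performs internally (the `camelcase` helper here is an illustrative stand-in, not part of Commander's public API):

```javascript
// Mirrors how Commander.js derives the camelCase property name
// from a kebab-case flag name (illustrative stand-in, not a public API).
function camelcase(flag) {
  return flag
    .split('-')
    .reduce((str, word) => str + word[0].toUpperCase() + word.slice(1));
}

// '--num-tasks' is accessed in code as options.numTasks
console.log(camelcase('num-tasks'));     // 'numTasks'
console.log(camelcase('skip-generate')); // 'skipGenerate'
```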
+- **Boolean Flag Conventions**: + - ✅ DO: Use positive flags with `--skip-` prefix for disabling behavior + - ❌ DON'T: Use negated boolean flags with `--no-` prefix + - ✅ DO: Use consistent flag handling across all commands + + ```javascript + // ✅ DO: Use positive flag with skip- prefix + .option('--skip-generate', 'Skip generating task files') + + // ❌ DON'T: Use --no- prefix + .option('--no-generate', 'Skip generating task files') + ``` + + > **Important**: When handling boolean flags in the code, make your intent clear: + ```javascript + // ✅ DO: Use clear variable naming that matches the flag's intent + const generateFiles = !options.skipGenerate; + + // ❌ DON'T: Use confusing double negatives + const dontSkipGenerate = !options.skipGenerate; + ``` + ## Input Validation - **Required Parameters**: @@ -143,6 +165,59 @@ alwaysApply: false } ``` +- **Unknown Options Handling**: + - ✅ DO: Provide clear error messages for unknown options + - ✅ DO: Show available options when an unknown option is used + - ✅ DO: Include command-specific help displays for common errors + - ❌ DON'T: Allow unknown options with `.allowUnknownOption()` + + ```javascript + // ✅ DO: Register global error handlers for unknown options + programInstance.on('option:unknown', function(unknownOption) { + const commandName = this._name || 'unknown'; + console.error(chalk.red(`Error: Unknown option '${unknownOption}'`)); + console.error(chalk.yellow(`Run 'task-master ${commandName} --help' to see available options`)); + process.exit(1); + }); + + // ✅ DO: Add command-specific help displays + function showCommandHelp() { + console.log(boxen( + chalk.white.bold('Command Help') + '\n\n' + + chalk.cyan('Usage:') + '\n' + + ` task-master command --option1=<value> [options]\n\n` + + chalk.cyan('Options:') + '\n' + + ' --option1 <value> Description of option1 (required)\n' + + ' --option2 <value> Description of option2\n\n' + + chalk.cyan('Examples:') + '\n' + + ' task-master command --option1=value 
--option2=value', + { padding: 1, borderColor: 'blue', borderStyle: 'round' } + )); + } + ``` + +- **Global Error Handling**: + - ✅ DO: Set up global error handlers for uncaught exceptions + - ✅ DO: Detect and format Commander-specific errors + - ✅ DO: Provide suitable guidance for fixing common errors + + ```javascript + // ✅ DO: Set up global error handlers with helpful messages + process.on('uncaughtException', (err) => { + // Handle Commander-specific errors + if (err.code === 'commander.unknownOption') { + const option = err.message.match(/'([^']+)'/)?.[1]; + console.error(chalk.red(`Error: Unknown option '${option}'`)); + console.error(chalk.yellow(`Run 'task-master <command> --help' to see available options`)); + process.exit(1); + } + + // Handle other error types... + console.error(chalk.red(`Error: ${err.message}`)); + process.exit(1); + }); + ``` + ## Integration with Other Modules - **Import Organization**: diff --git a/scripts/init.js b/scripts/init.js index 3ac1521f..2b55f8cb 100755 --- a/scripts/init.js +++ b/scripts/init.js @@ -549,6 +549,9 @@ function createProjectStructure(projectName, projectDescription, projectVersion, log('success', 'Created package.json'); } + // Setup MCP configuration for integration with Cursor + setupMCPConfiguration(targetDir, packageJson.name); + // Copy template files with replacements const replacements = { projectName, @@ -661,6 +664,84 @@ function createProjectStructure(projectName, projectDescription, projectVersion, )); } +// Function to setup MCP configuration for Cursor integration +function setupMCPConfiguration(targetDir, projectName) { + const mcpDirPath = path.join(targetDir, '.cursor'); + const mcpJsonPath = path.join(mcpDirPath, 'mcp.json'); + + log('info', 'Setting up MCP configuration for Cursor integration...'); + + // Create .cursor directory if it doesn't exist + ensureDirectoryExists(mcpDirPath); + + // New MCP config to be added - references the installed package + const newMCPServer = { + 
"task-master-ai": { + "command": "npx", + "args": [ + "task-master-ai", + "mcp-server" + ] + } + }; + + // Check if mcp.json already exists + if (fs.existsSync(mcpJsonPath)) { + log('info', 'MCP configuration file already exists, updating...'); + try { + // Read existing config + const mcpConfig = JSON.parse(fs.readFileSync(mcpJsonPath, 'utf8')); + + // Initialize mcpServers if it doesn't exist + if (!mcpConfig.mcpServers) { + mcpConfig.mcpServers = {}; + } + + // Add the task-master-ai server if it doesn't exist + if (!mcpConfig.mcpServers["task-master-ai"]) { + mcpConfig.mcpServers["task-master-ai"] = newMCPServer["task-master-ai"]; + log('info', 'Added task-master-ai server to existing MCP configuration'); + } else { + log('info', 'task-master-ai server already configured in mcp.json'); + } + + // Write the updated configuration + fs.writeFileSync( + mcpJsonPath, + JSON.stringify(mcpConfig, null, 4) + ); + log('success', 'Updated MCP configuration file'); + } catch (error) { + log('error', `Failed to update MCP configuration: ${error.message}`); + // Create a backup before potentially modifying + const backupPath = `${mcpJsonPath}.backup-${Date.now()}`; + if (fs.existsSync(mcpJsonPath)) { + fs.copyFileSync(mcpJsonPath, backupPath); + log('info', `Created backup of existing mcp.json at ${backupPath}`); + } + + // Create new configuration + const newMCPConfig = { + "mcpServers": newMCPServer + }; + + fs.writeFileSync(mcpJsonPath, JSON.stringify(newMCPConfig, null, 4)); + log('warn', 'Created new MCP configuration file (backup of original file was created if it existed)'); + } + } else { + // If mcp.json doesn't exist, create it + const newMCPConfig = { + "mcpServers": newMCPServer + }; + + fs.writeFileSync(mcpJsonPath, JSON.stringify(newMCPConfig, null, 4)); + log('success', 'Created MCP configuration file for Cursor integration'); + } + + // Add note to console about MCP integration + log('info', 'MCP server will use the installed task-master-ai package'); +} + 
// Run the initialization if this script is executed directly // The original check doesn't work with npx and global commands // if (process.argv[1] === fileURLToPath(import.meta.url)) { diff --git a/scripts/modules/commands.js b/scripts/modules/commands.js index a354d17d..3bb00cda 100644 --- a/scripts/modules/commands.js +++ b/scripts/modules/commands.js @@ -8,6 +8,7 @@ import path from 'path'; import chalk from 'chalk'; import boxen from 'boxen'; import fs from 'fs'; +import https from 'https'; import { CONFIG, log, readJSON } from './utils.js'; import { @@ -638,48 +639,60 @@ function registerCommands(programInstance) { .command('remove-subtask') .description('Remove a subtask from its parent task') .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json') - .option('-i, --id <id>', 'Subtask ID to remove in format "parentId.subtaskId" (required)') + .option('-i, --id <id>', 'Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated for multiple subtasks)') .option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting it') .option('--skip-generate', 'Skip regenerating task files') .action(async (options) => { const tasksPath = options.file; - const subtaskId = options.id; + const subtaskIds = options.id; const convertToTask = options.convert || false; const generateFiles = !options.skipGenerate; - if (!subtaskId) { - console.error(chalk.red('Error: --id parameter is required. Please provide a subtask ID in format "parentId.subtaskId".')); + if (!subtaskIds) { + console.error(chalk.red('Error: --id parameter is required. 
Please provide subtask ID(s) in format "parentId.subtaskId".')); showRemoveSubtaskHelp(); process.exit(1); } try { - console.log(chalk.blue(`Removing subtask ${subtaskId}...`)); - if (convertToTask) { - console.log(chalk.blue('The subtask will be converted to a standalone task')); - } + // Split by comma to support multiple subtask IDs + const subtaskIdArray = subtaskIds.split(',').map(id => id.trim()); - const result = await removeSubtask(tasksPath, subtaskId, convertToTask, generateFiles); - - if (convertToTask && result) { - // Display success message and next steps for converted task - console.log(boxen( - chalk.white.bold(`Subtask ${subtaskId} Converted to Task #${result.id}`) + '\n\n' + - chalk.white(`Title: ${result.title}`) + '\n' + - chalk.white(`Status: ${getStatusWithColor(result.status)}`) + '\n' + - chalk.white(`Dependencies: ${result.dependencies.join(', ')}`) + '\n\n' + - chalk.white.bold('Next Steps:') + '\n' + - chalk.cyan(`1. Run ${chalk.yellow(`task-master show ${result.id}`)} to see details of the new task`) + '\n' + - chalk.cyan(`2. 
Run ${chalk.yellow(`task-master set-status --id=${result.id} --status=in-progress`)} to start working on it`), - { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } } - )); - } else { - // Display success message for deleted subtask - console.log(boxen( - chalk.white.bold(`Subtask ${subtaskId} Removed`) + '\n\n' + - chalk.white('The subtask has been successfully deleted.'), - { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } } - )); + for (const subtaskId of subtaskIdArray) { + // Validate subtask ID format + if (!subtaskId.includes('.')) { + console.error(chalk.red(`Error: Subtask ID "${subtaskId}" must be in format "parentId.subtaskId"`)); + showRemoveSubtaskHelp(); + process.exit(1); + } + + console.log(chalk.blue(`Removing subtask ${subtaskId}...`)); + if (convertToTask) { + console.log(chalk.blue('The subtask will be converted to a standalone task')); + } + + const result = await removeSubtask(tasksPath, subtaskId, convertToTask, generateFiles); + + if (convertToTask && result) { + // Display success message and next steps for converted task + console.log(boxen( + chalk.white.bold(`Subtask ${subtaskId} Converted to Task #${result.id}`) + '\n\n' + + chalk.white(`Title: ${result.title}`) + '\n' + + chalk.white(`Status: ${getStatusWithColor(result.status)}`) + '\n' + + chalk.white(`Dependencies: ${result.dependencies.join(', ')}`) + '\n\n' + + chalk.white.bold('Next Steps:') + '\n' + + chalk.cyan(`1. Run ${chalk.yellow(`task-master show ${result.id}`)} to see details of the new task`) + '\n' + + chalk.cyan(`2. 
Run ${chalk.yellow(`task-master set-status --id=${result.id} --status=in-progress`)} to start working on it`), + { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } } + )); + } else { + // Display success message for deleted subtask + console.log(boxen( + chalk.white.bold(`Subtask ${subtaskId} Removed`) + '\n\n' + + chalk.white('The subtask has been successfully deleted.'), + { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } } + )); + } } } catch (error) { console.error(chalk.red(`Error: ${error.message}`)); @@ -700,12 +713,13 @@ function registerCommands(programInstance) { chalk.cyan('Usage:') + '\n' + ` task-master remove-subtask --id=<parentId.subtaskId> [options]\n\n` + chalk.cyan('Options:') + '\n' + - ' -i, --id <id> Subtask ID to remove in format "parentId.subtaskId" (required)\n' + + ' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' + ' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' + ' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' + ' --skip-generate Skip regenerating task files\n\n' + chalk.cyan('Examples:') + '\n' + ' task-master remove-subtask --id=5.2\n' + + ' task-master remove-subtask --id=5.2,6.3,7.1\n' + ' task-master remove-subtask --id=5.2 --convert', { padding: 1, borderColor: 'blue', borderStyle: 'round' } )); @@ -783,6 +797,132 @@ function setupCLI() { return programInstance; } +/** + * Check for newer version of task-master-ai + * @returns {Promise<{currentVersion: string, latestVersion: string, needsUpdate: boolean}>} + */ +async function checkForUpdate() { + // Get current version from package.json + let currentVersion = CONFIG.projectVersion; + try { + // Try to get the version from the installed package + const packageJsonPath = path.join(process.cwd(), 'node_modules', 'task-master-ai', 'package.json'); + if (fs.existsSync(packageJsonPath)) { + const packageJson = 
JSON.parse(fs.readFileSync(packageJsonPath, 'utf8')); + currentVersion = packageJson.version; + } + } catch (error) { + // Silently fail and use default + log('debug', `Error reading current package version: ${error.message}`); + } + + return new Promise((resolve) => { + // Get the latest version from npm registry + const options = { + hostname: 'registry.npmjs.org', + path: '/task-master-ai', + method: 'GET', + headers: { + 'Accept': 'application/vnd.npm.install-v1+json' // Lightweight response + } + }; + + const req = https.request(options, (res) => { + let data = ''; + + res.on('data', (chunk) => { + data += chunk; + }); + + res.on('end', () => { + try { + const npmData = JSON.parse(data); + const latestVersion = npmData['dist-tags']?.latest || currentVersion; + + // Compare versions + const needsUpdate = compareVersions(currentVersion, latestVersion) < 0; + + resolve({ + currentVersion, + latestVersion, + needsUpdate + }); + } catch (error) { + log('debug', `Error parsing npm response: ${error.message}`); + resolve({ + currentVersion, + latestVersion: currentVersion, + needsUpdate: false + }); + } + }); + }); + + req.on('error', (error) => { + log('debug', `Error checking for updates: ${error.message}`); + resolve({ + currentVersion, + latestVersion: currentVersion, + needsUpdate: false + }); + }); + + // Set a timeout to avoid hanging if npm is slow + req.setTimeout(3000, () => { + req.abort(); + log('debug', 'Update check timed out'); + resolve({ + currentVersion, + latestVersion: currentVersion, + needsUpdate: false + }); + }); + + req.end(); + }); +} + +/** + * Compare semantic versions + * @param {string} v1 - First version + * @param {string} v2 - Second version + * @returns {number} -1 if v1 < v2, 0 if v1 = v2, 1 if v1 > v2 + */ +function compareVersions(v1, v2) { + const v1Parts = v1.split('.').map(p => parseInt(p, 10)); + const v2Parts = v2.split('.').map(p => parseInt(p, 10)); + + for (let i = 0; i < Math.max(v1Parts.length, v2Parts.length); i++) { + 
const v1Part = v1Parts[i] || 0; + const v2Part = v2Parts[i] || 0; + + if (v1Part < v2Part) return -1; + if (v1Part > v2Part) return 1; + } + + return 0; +} + +/** + * Display upgrade notification message + * @param {string} currentVersion - Current version + * @param {string} latestVersion - Latest version + */ +function displayUpgradeNotification(currentVersion, latestVersion) { + const message = boxen( + `${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}\n\n` + + `Run ${chalk.cyan('npm i task-master-ai@latest -g')} to update to the latest version with new features and bug fixes.`, + { + padding: 1, + margin: { top: 1, bottom: 1 }, + borderColor: 'yellow', + borderStyle: 'round' + } + ); + + console.log(message); +} + /** * Parse arguments and run the CLI * @param {Array} argv - Command-line arguments @@ -800,9 +940,18 @@ async function runCLI(argv = process.argv) { process.exit(0); } + // Start the update check in the background - don't await yet + const updateCheckPromise = checkForUpdate(); + // Setup and parse const programInstance = setupCLI(); await programInstance.parseAsync(argv); + + // After command execution, check if an update is available + const updateInfo = await updateCheckPromise; + if (updateInfo.needsUpdate) { + displayUpgradeNotification(updateInfo.currentVersion, updateInfo.latestVersion); + } } catch (error) { console.error(chalk.red(`Error: ${error.message}`)); @@ -817,5 +966,8 @@ async function runCLI(argv = process.argv) { export { registerCommands, setupCLI, - runCLI + runCLI, + checkForUpdate, + compareVersions, + displayUpgradeNotification }; \ No newline at end of file diff --git a/tasks/task_023.txt b/tasks/task_023.txt index e674999a..fea63b4f 100644 --- a/tasks/task_023.txt +++ b/tasks/task_023.txt @@ -20,39 +20,123 @@ This task involves completing the Model Context Protocol (MCP) server implementa The implementation must ensure compatibility with existing MCP clients and follow 
RESTful API design principles, while supporting concurrent requests and maintaining robust error handling. # Test Strategy: -Testing for the updated MCP server functionality should include: +Testing for the MCP server implementation will follow a comprehensive approach based on our established testing guidelines: -1. Unit tests: - - Validate direct function imports for Task Master tools, replacing CLI-based execution. - - Test updated authentication and authorization mechanisms. - - Verify context management operations (CRUD, metadata, windowing). - - Test caching mechanisms for frequently accessed contexts. - - Validate proper tool registration with descriptions and parameters. +## Test Organization -2. Integration tests: - - Test the MCP server with FastMCP's stdio transport mode. - - Verify end-to-end request/response cycles for each endpoint. - - Ensure compatibility with the ModelContextProtocol SDK. - - Test the tool registration process in `tools/index.js` for correctness and efficiency. +1. **Unit Tests** (`tests/unit/mcp-server/`): + - Test individual MCP server components in isolation + - Mock all external dependencies including FastMCP SDK + - Test each tool implementation separately + - Verify direct function imports work correctly + - Test context management and caching mechanisms + - Example files: `context-manager.test.js`, `tool-registration.test.js`, `direct-imports.test.js` -3. Performance tests: - - Benchmark response times for context operations with large datasets. - - Test caching mechanisms and concurrent request handling. - - Measure memory usage and server stability under load. +2. **Integration Tests** (`tests/integration/mcp-server/`): + - Test interactions between MCP server components + - Verify proper tool registration with FastMCP + - Test context flow between components + - Validate error handling across module boundaries + - Example files: `server-tool-integration.test.js`, `context-flow.test.js` -4. 
Security tests: - - Validate the robustness of authentication/authorization mechanisms. - - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access. +3. **End-to-End Tests** (`tests/e2e/mcp-server/`): + - Test complete MCP server workflows + - Verify server instantiation via different methods (direct, npx, global install) + - Test actual stdio communication with mock clients + - Example files: `server-startup.e2e.test.js`, `client-communication.e2e.test.js` -5. Deployment tests: - - Verify proper server instantiation and operation when installed via `npx` or `npm i -g`. - - Test configuration loading from `mcp.json`. +4. **Test Fixtures** (`tests/fixtures/mcp-server/`): + - Sample context data + - Mock tool definitions + - Sample MCP requests and responses -6. Documentation validation: - - Ensure all examples in the documentation are accurate and functional. - - Verify manual testing workflows using tools like curl or Postman. +## Testing Approach -All tests should be automated and integrated into the CI/CD pipeline to ensure consistent quality. 
+### Module Mocking Strategy +```javascript +// Mock the FastMCP SDK +jest.mock('@model-context-protocol/sdk', () => ({ + MCPServer: jest.fn().mockImplementation(() => ({ + registerTool: jest.fn(), + registerResource: jest.fn(), + start: jest.fn().mockResolvedValue(undefined), + stop: jest.fn().mockResolvedValue(undefined) + })), + MCPError: jest.fn().mockImplementation(function(message, code) { + this.message = message; + this.code = code; + }) +})); + +// Import modules after mocks +import { MCPServer, MCPError } from '@model-context-protocol/sdk'; +import { initMCPServer } from '../../scripts/mcp-server.js'; +``` + +### Context Management Testing +- Test context creation, retrieval, and manipulation +- Verify caching mechanisms work correctly +- Test context windowing and metadata handling +- Validate context persistence across server restarts + +### Direct Function Import Testing +- Verify Task Master functions are imported correctly +- Test performance improvements compared to CLI execution +- Validate error handling with direct imports + +### Tool Registration Testing +- Verify tools are registered with proper descriptions and parameters +- Test decorator-based registration patterns +- Validate tool execution with different input types + +### Error Handling Testing +- Test all error paths with appropriate MCPError types +- Verify error propagation to clients +- Test recovery from various error conditions + +### Performance Testing +- Benchmark response times with and without caching +- Test memory usage under load +- Verify concurrent request handling + +## Test Quality Guidelines + +- Follow TDD approach when possible +- Maintain test independence and isolation +- Use descriptive test names explaining expected behavior +- Aim for 80%+ code coverage, with critical paths at 100% +- Follow the mock-first-then-import pattern for all Jest mocks +- Avoid testing implementation details that might change +- Ensure tests don't depend on execution order + +## Specific 
Test Cases + +1. **Server Initialization** + - Test server creation with various configuration options + - Verify proper tool and resource registration + - Test server startup and shutdown procedures + +2. **Context Operations** + - Test context creation, retrieval, update, and deletion + - Verify context windowing and truncation + - Test context metadata and tagging + +3. **Tool Execution** + - Test each tool with various input parameters + - Verify proper error handling for invalid inputs + - Test tool execution performance + +4. **MCP.json Integration** + - Test creation and updating of .cursor/mcp.json + - Verify proper server registration in mcp.json + - Test handling of existing mcp.json files + +5. **Transport Handling** + - Test stdio communication + - Verify proper message formatting + - Test error handling in transport layer + +All tests will be automated and integrated into the CI/CD pipeline to ensure consistent quality. # Subtasks: ## 1. Create Core MCP Server Module and Basic Structure [done] @@ -122,7 +206,7 @@ Testing approach: - Test error handling with invalid inputs - Benchmark endpoint performance -## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [pending] +## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [deferred] ### Dependencies: 23.1, 23.2, 23.3 ### Description: Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling. ### Details: @@ -138,7 +222,7 @@ Testing approach: - Validate compatibility with existing MCP clients. - Benchmark performance improvements from SDK integration. -## 8. Implement Direct Function Imports and Replace CLI-based Execution [pending] +## 8. 
Implement Direct Function Imports and Replace CLI-based Execution [in-progress] ### Dependencies: None ### Description: Refactor the MCP server implementation to use direct Task Master function imports instead of the current CLI-based execution using child_process.spawnSync. This will improve performance, reliability, and enable better error handling. ### Details: @@ -149,7 +233,7 @@ Testing approach: 5. Add unit tests to verify the function imports work correctly 6. Test performance improvements by comparing response times between CLI and function import approaches -## 9. Implement Context Management and Caching Mechanisms [pending] +## 9. Implement Context Management and Caching Mechanisms [deferred] ### Dependencies: 23.1 ### Description: Enhance the MCP server with proper context management and caching to improve performance and user experience, especially for frequently accessed data and contexts. ### Details: @@ -193,3 +277,15 @@ Testing approach: ### Details: 1. Set up Jest testing framework with proper configuration\n2. Create MCPTestClient for testing FastMCP server interaction\n3. Implement unit tests for individual tool functions\n4. Create integration tests for end-to-end request/response cycles\n5. Set up test fixtures and mock data\n6. Implement test coverage reporting\n7. Document testing guidelines and examples +## 14. Add MCP.json to the Init Workflow [done] +### Dependencies: 23.1, 23.3 +### Description: Implement functionality to create or update .cursor/mcp.json during project initialization, handling cases where: 1) If there's no mcp.json, create it with the appropriate configuration; 2) If there is an mcp.json, intelligently append to it without syntax errors like trailing commas +### Details: +1. Create functionality to detect if .cursor/mcp.json exists in the project\n2. Implement logic to create a new mcp.json file with proper structure if it doesn't exist\n3. Add functionality to read and parse existing mcp.json if it exists\n4. 
Create method to add a new taskmaster-ai server entry to the mcpServers object\n5. Implement intelligent JSON merging that avoids trailing commas and syntax errors\n6. Ensure proper formatting and indentation in the generated/updated JSON\n7. Add validation to verify the updated configuration is valid JSON\n8. Include this functionality in the init workflow\n9. Add error handling for file system operations and JSON parsing\n10. Document the mcp.json structure and integration process + +## 15. Implement SSE Support for Real-time Updates [deferred] +### Dependencies: 23.1, 23.3, 23.11 +### Description: Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients +### Details: +1. Research and implement SSE protocol for the MCP server\n2. Create dedicated SSE endpoints for event streaming\n3. Implement event emitter pattern for internal event management\n4. Add support for different event types (task status, logs, errors)\n5. Implement client connection management with proper keep-alive handling\n6. Add filtering capabilities to allow subscribing to specific event types\n7. Create in-memory event buffer for clients reconnecting\n8. Document SSE endpoint usage and client implementation examples\n9. Add robust error handling for dropped connections\n10. Implement rate limiting and backpressure mechanisms\n11. Add authentication for SSE connections + diff --git a/tasks/task_038.txt b/tasks/task_038.txt new file mode 100644 index 00000000..d4fcb4a5 --- /dev/null +++ b/tasks/task_038.txt @@ -0,0 +1,56 @@ +# Task ID: 38 +# Title: Implement Version Check System with Upgrade Notifications +# Status: done +# Dependencies: None +# Priority: high +# Description: Create a system that checks for newer package versions and displays upgrade notifications when users run any command, informing them to update to the latest version. 
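For illustration, the comparison step this task calls for could be sketched as follows. This is a hypothetical `compareSemver` helper, not part of this patch: it extends a plain numeric split on `.` to also cover the pre-release versions (alpha/beta) listed as an edge case in the test strategy, treating `1.0.0-beta` as older than `1.0.0`.

```javascript
// Hypothetical semver-aware comparison for the version check.
// Returns -1 if v1 < v2, 0 if equal, 1 if v1 > v2.
function compareSemver(v1, v2) {
  const parse = (v) => {
    // Split off a pre-release tag (e.g. "1.0.0-beta" -> core "1.0.0", pre "beta")
    const [core, pre] = v.split('-');
    return { parts: core.split('.').map((p) => parseInt(p, 10) || 0), pre };
  };
  const a = parse(v1);
  const b = parse(v2);

  // Compare major/minor/patch numerically, padding missing parts with 0
  for (let i = 0; i < Math.max(a.parts.length, b.parts.length); i++) {
    const diff = (a.parts[i] || 0) - (b.parts[i] || 0);
    if (diff !== 0) return diff < 0 ? -1 : 1;
  }

  // Same core version: a pre-release sorts before the corresponding release
  if (a.pre && !b.pre) return -1;
  if (!a.pre && b.pre) return 1;
  return 0;
}

console.log(compareSemver('0.9.30', '1.0.0'));     // -1: update available
console.log(compareSemver('1.0.0-beta', '1.0.0')); // -1: pre-release is older
console.log(compareSemver('1.0.0', '1.0.0'));      //  0: up to date
```

A production implementation would more likely lean on the `semver` package mentioned in the details below rather than hand-rolling pre-release ordering.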
+# Details: +Implement a version check mechanism that runs automatically with every command execution: + +1. Create a new module (e.g., `versionChecker.js`) that will: + - Fetch the latest version from npm registry using the npm registry API (https://registry.npmjs.org/task-master-ai/latest) + - Compare it with the current installed version (from package.json) + - Store the last check timestamp to avoid excessive API calls (check once per day) + - Cache the result to minimize network requests + +2. The notification should: + - Use colored text (e.g., yellow background with black text) to be noticeable + - Include the current version and latest version + - Show the exact upgrade command: 'npm i task-master-ai@latest' + - Be displayed at the beginning or end of command output, not interrupting the main content + - Include a small separator line to distinguish it from command output + +3. Implementation considerations: + - Handle network failures gracefully (don't block command execution if version check fails) + - Add a configuration option to disable update checks if needed + - Ensure the check is lightweight and doesn't significantly impact command performance + - Consider using a package like 'semver' for proper version comparison + - Implement a cooldown period (e.g., only check once per day) to avoid excessive API calls + +4. The version check should be integrated into the main command execution flow so it runs for all commands automatically. + +# Test Strategy: +1. Manual testing: + - Install an older version of the package + - Run various commands and verify the update notification appears + - Update to the latest version and confirm the notification no longer appears + - Test with network disconnected to ensure graceful handling of failures + +2. 
Unit tests: + - Mock the npm registry response to test different scenarios: + - When a newer version exists + - When using the latest version + - When the registry is unavailable + - Test the version comparison logic with various version strings + - Test the cooldown/caching mechanism works correctly + +3. Integration tests: + - Create a test that runs a command and verifies the notification appears in the expected format + - Test that the notification appears for all commands + - Verify the notification doesn't interfere with normal command output + +4. Edge cases to test: + - Pre-release versions (alpha/beta) + - Very old versions + - When package.json is missing or malformed + - When npm registry returns unexpected data diff --git a/tasks/tasks.json b/tasks/tasks.json index 73e407b0..92545f1d 100644 --- a/tasks/tasks.json +++ b/tasks/tasks.json @@ -1344,7 +1344,7 @@ ], "priority": "medium", "details": "This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include:\n\n1. Transition from CLI-based execution (currently using `child_process.spawnSync`) to direct Task Master function imports for improved performance and reliability.\n2. Implement caching mechanisms for frequently accessed contexts to enhance performance, leveraging FastMCP's efficient transport mechanisms (e.g., stdio).\n3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging.\n4. Refactor tool registration in `tools/index.js` to include clear descriptions and parameter definitions, leveraging FastMCP's decorator-based patterns for better integration.\n5. Enhance transport type handling to ensure proper stdio communication and compatibility with FastMCP.\n6. Ensure the MCP server can be instantiated and run correctly when installed globally via `npx` or `npm i -g`.\n7. 
Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration, ensuring compatibility with FastMCP's transport mechanisms.\n8. Identify and address missing components or functionalities to meet FastMCP best practices, such as robust error handling, monitoring endpoints, and concurrency support.\n9. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides.\n\nThe implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling.", - "testStrategy": "Testing for the updated MCP server functionality should include:\n\n1. Unit tests:\n - Validate direct function imports for Task Master tools, replacing CLI-based execution.\n - Test updated authentication and authorization mechanisms.\n - Verify context management operations (CRUD, metadata, windowing).\n - Test caching mechanisms for frequently accessed contexts.\n - Validate proper tool registration with descriptions and parameters.\n\n2. Integration tests:\n - Test the MCP server with FastMCP's stdio transport mode.\n - Verify end-to-end request/response cycles for each endpoint.\n - Ensure compatibility with the ModelContextProtocol SDK.\n - Test the tool registration process in `tools/index.js` for correctness and efficiency.\n\n3. Performance tests:\n - Benchmark response times for context operations with large datasets.\n - Test caching mechanisms and concurrent request handling.\n - Measure memory usage and server stability under load.\n\n4. Security tests:\n - Validate the robustness of authentication/authorization mechanisms.\n - Test for vulnerabilities such as injection attacks, CSRF, and unauthorized access.\n\n5. Deployment tests:\n - Verify proper server instantiation and operation when installed via `npx` or `npm i -g`.\n - Test configuration loading from `mcp.json`.\n\n6. 
Documentation validation:\n - Ensure all examples in the documentation are accurate and functional.\n - Verify manual testing workflows using tools like curl or Postman.\n\nAll tests should be automated and integrated into the CI/CD pipeline to ensure consistent quality.", + "testStrategy": "Testing for the MCP server implementation will follow a comprehensive approach based on our established testing guidelines:\n\n## Test Organization\n\n1. **Unit Tests** (`tests/unit/mcp-server/`):\n - Test individual MCP server components in isolation\n - Mock all external dependencies including FastMCP SDK\n - Test each tool implementation separately\n - Verify direct function imports work correctly\n - Test context management and caching mechanisms\n - Example files: `context-manager.test.js`, `tool-registration.test.js`, `direct-imports.test.js`\n\n2. **Integration Tests** (`tests/integration/mcp-server/`):\n - Test interactions between MCP server components\n - Verify proper tool registration with FastMCP\n - Test context flow between components\n - Validate error handling across module boundaries\n - Example files: `server-tool-integration.test.js`, `context-flow.test.js`\n\n3. **End-to-End Tests** (`tests/e2e/mcp-server/`):\n - Test complete MCP server workflows\n - Verify server instantiation via different methods (direct, npx, global install)\n - Test actual stdio communication with mock clients\n - Example files: `server-startup.e2e.test.js`, `client-communication.e2e.test.js`\n\n4. 
**Test Fixtures** (`tests/fixtures/mcp-server/`):\n - Sample context data\n - Mock tool definitions\n - Sample MCP requests and responses\n\n## Testing Approach\n\n### Module Mocking Strategy\n```javascript\n// Mock the FastMCP SDK\njest.mock('@model-context-protocol/sdk', () => ({\n MCPServer: jest.fn().mockImplementation(() => ({\n registerTool: jest.fn(),\n registerResource: jest.fn(),\n start: jest.fn().mockResolvedValue(undefined),\n stop: jest.fn().mockResolvedValue(undefined)\n })),\n MCPError: jest.fn().mockImplementation(function(message, code) {\n this.message = message;\n this.code = code;\n })\n}));\n\n// Import modules after mocks\nimport { MCPServer, MCPError } from '@model-context-protocol/sdk';\nimport { initMCPServer } from '../../scripts/mcp-server.js';\n```\n\n### Context Management Testing\n- Test context creation, retrieval, and manipulation\n- Verify caching mechanisms work correctly\n- Test context windowing and metadata handling\n- Validate context persistence across server restarts\n\n### Direct Function Import Testing\n- Verify Task Master functions are imported correctly\n- Test performance improvements compared to CLI execution\n- Validate error handling with direct imports\n\n### Tool Registration Testing\n- Verify tools are registered with proper descriptions and parameters\n- Test decorator-based registration patterns\n- Validate tool execution with different input types\n\n### Error Handling Testing\n- Test all error paths with appropriate MCPError types\n- Verify error propagation to clients\n- Test recovery from various error conditions\n\n### Performance Testing\n- Benchmark response times with and without caching\n- Test memory usage under load\n- Verify concurrent request handling\n\n## Test Quality Guidelines\n\n- Follow TDD approach when possible\n- Maintain test independence and isolation\n- Use descriptive test names explaining expected behavior\n- Aim for 80%+ code coverage, with critical paths at 100%\n- Follow the 
mock-first-then-import pattern for all Jest mocks\n- Avoid testing implementation details that might change\n- Ensure tests don't depend on execution order\n\n## Specific Test Cases\n\n1. **Server Initialization**\n - Test server creation with various configuration options\n - Verify proper tool and resource registration\n - Test server startup and shutdown procedures\n\n2. **Context Operations**\n - Test context creation, retrieval, update, and deletion\n - Verify context windowing and truncation\n - Test context metadata and tagging\n\n3. **Tool Execution**\n - Test each tool with various input parameters\n - Verify proper error handling for invalid inputs\n - Test tool execution performance\n\n4. **MCP.json Integration**\n - Test creation and updating of .cursor/mcp.json\n - Verify proper server registration in mcp.json\n - Test handling of existing mcp.json files\n\n5. **Transport Handling**\n - Test stdio communication\n - Verify proper message formatting\n - Test error handling in transport layer\n\nAll tests will be automated and integrated into the CI/CD pipeline to ensure consistent quality.", "subtasks": [ { "id": 1, @@ -1388,7 +1388,7 @@ 3 ], "details": "Implementation steps:\n1. Replace manual tool registration with ModelContextProtocol SDK methods.\n2. Use SDK utilities to simplify resource and template management.\n3. Ensure compatibility with FastMCP's transport mechanisms.\n4. Update server initialization to include SDK-based configurations.\n\nTesting approach:\n- Verify SDK integration with all MCP endpoints.\n- Test resource and template registration using SDK methods.\n- Validate compatibility with existing MCP clients.\n- Benchmark performance improvements from SDK integration.", - "status": "pending", + "status": "deferred", "parentTaskId": 23 }, { @@ -1397,7 +1397,7 @@ "description": "Refactor the MCP server implementation to use direct Task Master function imports instead of the current CLI-based execution using child_process.spawnSync. 
This will improve performance, reliability, and enable better error handling.", "dependencies": [], "details": "1. Create a new module to import and expose Task Master core functions directly\n2. Modify tools/utils.js to remove executeTaskMasterCommand and replace with direct function calls\n3. Update each tool implementation (listTasks.js, showTask.js, etc.) to use the direct function imports\n4. Implement proper error handling with try/catch blocks and FastMCP's MCPError\n5. Add unit tests to verify the function imports work correctly\n6. Test performance improvements by comparing response times between CLI and function import approaches", - "status": "pending", + "status": "in-progress", "parentTaskId": 23 }, { @@ -1408,7 +1408,7 @@ 1 ], "details": "1. Implement a context manager class that leverages FastMCP's Context object\n2. Add caching for frequently accessed task data with configurable TTL settings\n3. Implement context tagging for better organization of context data\n4. Add methods to efficiently handle large context windows\n5. Create helper functions for storing and retrieving context data\n6. Implement cache invalidation strategies for task updates\n7. Add cache statistics for monitoring performance\n8. Create unit tests for context management and caching functionality", - "status": "pending", + "status": "deferred", "parentTaskId": 23 }, { @@ -1458,6 +1458,31 @@ "23.8" ], "parentTaskId": 23 + }, + { + "id": 14, + "title": "Add MCP.json to the Init Workflow", + "description": "Implement functionality to create or update .cursor/mcp.json during project initialization, handling cases where: 1) If there's no mcp.json, create it with the appropriate configuration; 2) If there is an mcp.json, intelligently append to it without syntax errors like trailing commas", + "details": "1. Create functionality to detect if .cursor/mcp.json exists in the project\\n2. Implement logic to create a new mcp.json file with proper structure if it doesn't exist\\n3. 
Add functionality to read and parse existing mcp.json if it exists\\n4. Create method to add a new taskmaster-ai server entry to the mcpServers object\\n5. Implement intelligent JSON merging that avoids trailing commas and syntax errors\\n6. Ensure proper formatting and indentation in the generated/updated JSON\\n7. Add validation to verify the updated configuration is valid JSON\\n8. Include this functionality in the init workflow\\n9. Add error handling for file system operations and JSON parsing\\n10. Document the mcp.json structure and integration process", + "status": "done", + "dependencies": [ + "23.1", + "23.3" + ], + "parentTaskId": 23 + }, + { + "id": 15, + "title": "Implement SSE Support for Real-time Updates", + "description": "Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients", + "details": "1. Research and implement SSE protocol for the MCP server\\n2. Create dedicated SSE endpoints for event streaming\\n3. Implement event emitter pattern for internal event management\\n4. Add support for different event types (task status, logs, errors)\\n5. Implement client connection management with proper keep-alive handling\\n6. Add filtering capabilities to allow subscribing to specific event types\\n7. Create in-memory event buffer for clients reconnecting\\n8. Document SSE endpoint usage and client implementation examples\\n9. Add robust error handling for dropped connections\\n10. Implement rate limiting and backpressure mechanisms\\n11. Add authentication for SSE connections", + "status": "deferred", + "dependencies": [ + "23.1", + "23.3", + "23.11" + ], + "parentTaskId": 23 } ] }, @@ -1884,6 +1909,16 @@ "priority": "medium", "details": "This task involves integrating Google's Gemini API across all main AI services that currently use Claude:\n\n1. 
Create a new GeminiService class that implements the same interface as the existing ClaudeService\n2. Implement authentication and API key management for Gemini API\n3. Map our internal prompt formats to Gemini's expected input format\n4. Handle Gemini-specific parameters (temperature, top_p, etc.) and response parsing\n5. Update the AI service factory/provider to support selecting Gemini as an alternative\n6. Add configuration options in settings to allow users to select Gemini as their preferred provider\n7. Implement proper error handling for Gemini-specific API errors\n8. Ensure streaming responses are properly supported if Gemini offers this capability\n9. Update documentation to reflect the new Gemini option\n10. Consider implementing model selection if Gemini offers multiple models (e.g., Gemini Pro, Gemini Ultra)\n11. Ensure all existing AI capabilities (summarization, code generation, etc.) maintain feature parity when using Gemini\n\nThe implementation should follow the same pattern as the recent Ollama integration (Task #36) to maintain consistency in how alternative AI providers are supported.", "testStrategy": "Testing should verify Gemini integration works correctly across all AI services:\n\n1. Unit tests:\n - Test GeminiService class methods with mocked API responses\n - Verify proper error handling for common API errors\n - Test configuration and model selection functionality\n\n2. Integration tests:\n - Verify authentication and API connection with valid credentials\n - Test each AI service with Gemini to ensure proper functionality\n - Compare outputs between Claude and Gemini for the same inputs to verify quality\n\n3. End-to-end tests:\n - Test the complete user flow of switching to Gemini and using various AI features\n - Verify streaming responses work correctly if supported\n\n4. Performance tests:\n - Measure and compare response times between Claude and Gemini\n - Test with various input lengths to verify handling of context limits\n\n5. 
Manual testing:\n - Verify the quality of Gemini responses across different use cases\n - Test edge cases like very long inputs or specialized domain knowledge\n\nAll tests should pass with Gemini selected as the provider, and the user experience should be consistent regardless of which provider is selected." + }, + { + "id": 38, + "title": "Implement Version Check System with Upgrade Notifications", + "description": "Create a system that checks for newer package versions and displays upgrade notifications when users run any command, informing them to update to the latest version.", + "status": "done", + "dependencies": [], + "priority": "high", + "details": "Implement a version check mechanism that runs automatically with every command execution:\n\n1. Create a new module (e.g., `versionChecker.js`) that will:\n - Fetch the latest version from npm registry using the npm registry API (https://registry.npmjs.org/task-master-ai/latest)\n - Compare it with the current installed version (from package.json)\n - Store the last check timestamp to avoid excessive API calls (check once per day)\n - Cache the result to minimize network requests\n\n2. The notification should:\n - Use colored text (e.g., yellow background with black text) to be noticeable\n - Include the current version and latest version\n - Show the exact upgrade command: 'npm i task-master-ai@latest'\n - Be displayed at the beginning or end of command output, not interrupting the main content\n - Include a small separator line to distinguish it from command output\n\n3. 
Implementation considerations:\n - Handle network failures gracefully (don't block command execution if version check fails)\n - Add a configuration option to disable update checks if needed\n - Ensure the check is lightweight and doesn't significantly impact command performance\n - Consider using a package like 'semver' for proper version comparison\n - Implement a cooldown period (e.g., only check once per day) to avoid excessive API calls\n\n4. The version check should be integrated into the main command execution flow so it runs for all commands automatically.", + "testStrategy": "1. Manual testing:\n - Install an older version of the package\n - Run various commands and verify the update notification appears\n - Update to the latest version and confirm the notification no longer appears\n - Test with network disconnected to ensure graceful handling of failures\n\n2. Unit tests:\n - Mock the npm registry response to test different scenarios:\n - When a newer version exists\n - When using the latest version\n - When the registry is unavailable\n - Test the version comparison logic with various version strings\n - Test the cooldown/caching mechanism works correctly\n\n3. Integration tests:\n - Create a test that runs a command and verifies the notification appears in the expected format\n - Test that the notification appears for all commands\n - Verify the notification doesn't interfere with normal command output\n\n4. 
Edge cases to test:\n - Pre-release versions (alpha/beta)\n - Very old versions\n - When package.json is missing or malformed\n - When npm registry returns unexpected data" } ] } \ No newline at end of file diff --git a/test-version-check-full.js b/test-version-check-full.js new file mode 100644 index 00000000..da467790 --- /dev/null +++ b/test-version-check-full.js @@ -0,0 +1,69 @@ +import { checkForUpdate, displayUpgradeNotification, compareVersions } from './scripts/modules/commands.js'; +import fs from 'fs'; +import path from 'path'; + +// Force our current version for testing +process.env.FORCE_VERSION = '0.9.30'; + +// Create a mock package.json in memory for testing +const mockPackageJson = { + name: 'task-master-ai', + version: '0.9.30' +}; + +// Modified version of checkForUpdate that doesn't use HTTP for testing +async function testCheckForUpdate(simulatedLatestVersion) { + // Get current version - use our forced version + const currentVersion = process.env.FORCE_VERSION || '0.9.30'; + + console.log(`Using simulated current version: ${currentVersion}`); + console.log(`Using simulated latest version: ${simulatedLatestVersion}`); + + // Compare versions + const needsUpdate = compareVersions(currentVersion, simulatedLatestVersion) < 0; + + return { + currentVersion, + latestVersion: simulatedLatestVersion, + needsUpdate + }; +} + +// Test with current version older than latest (should show update notice) +async function runTest() { + console.log('=== Testing version check scenarios ===\n'); + + // Scenario 1: Update available + console.log('\n--- Scenario 1: Update available (Current: 0.9.30, Latest: 1.0.0) ---'); + const updateInfo1 = await testCheckForUpdate('1.0.0'); + console.log('Update check results:'); + console.log(`- Current version: ${updateInfo1.currentVersion}`); + console.log(`- Latest version: ${updateInfo1.latestVersion}`); + console.log(`- Update needed: ${updateInfo1.needsUpdate}`); + + if (updateInfo1.needsUpdate) { + 
console.log('\nDisplaying upgrade notification:'); + displayUpgradeNotification(updateInfo1.currentVersion, updateInfo1.latestVersion); + } + + // Scenario 2: No update needed (versions equal) + console.log('\n--- Scenario 2: No update needed (Current: 0.9.30, Latest: 0.9.30) ---'); + const updateInfo2 = await testCheckForUpdate('0.9.30'); + console.log('Update check results:'); + console.log(`- Current version: ${updateInfo2.currentVersion}`); + console.log(`- Latest version: ${updateInfo2.latestVersion}`); + console.log(`- Update needed: ${updateInfo2.needsUpdate}`); + + // Scenario 3: Development version (current newer than latest) + console.log('\n--- Scenario 3: Development version (Current: 0.9.30, Latest: 0.9.0) ---'); + const updateInfo3 = await testCheckForUpdate('0.9.0'); + console.log('Update check results:'); + console.log(`- Current version: ${updateInfo3.currentVersion}`); + console.log(`- Latest version: ${updateInfo3.latestVersion}`); + console.log(`- Update needed: ${updateInfo3.needsUpdate}`); + + console.log('\n=== Test complete ==='); +} + +// Run all tests +runTest(); \ No newline at end of file diff --git a/test-version-check.js b/test-version-check.js new file mode 100644 index 00000000..13dfe7a4 --- /dev/null +++ b/test-version-check.js @@ -0,0 +1,22 @@ +import { displayUpgradeNotification, compareVersions } from './scripts/modules/commands.js'; + +// Simulate different version scenarios +console.log('=== Simulating version check ===\n'); + +// 1. Current version is older than latest (should show update notice) +console.log('Scenario 1: Current version older than latest'); +displayUpgradeNotification('0.9.30', '1.0.0'); + +// 2. 
Current version same as latest (no update needed) +console.log('\nScenario 2: Current version same as latest (this would not normally show a notice)'); +console.log('Current: 1.0.0, Latest: 1.0.0'); +console.log('compareVersions result:', compareVersions('1.0.0', '1.0.0')); +console.log('Update needed:', compareVersions('1.0.0', '1.0.0') < 0 ? 'Yes' : 'No'); + +// 3. Current version newer than latest (e.g., development version, would not show notice) +console.log('\nScenario 3: Current version newer than latest (this would not normally show a notice)'); +console.log('Current: 1.1.0, Latest: 1.0.0'); +console.log('compareVersions result:', compareVersions('1.1.0', '1.0.0')); +console.log('Update needed:', compareVersions('1.1.0', '1.0.0') < 0 ? 'Yes' : 'No'); + +console.log('\n=== Test complete ==='); \ No newline at end of file diff --git a/tests/unit/commands.test.js b/tests/unit/commands.test.js index 1e95cbac..80d10f1d 100644 --- a/tests/unit/commands.test.js +++ b/tests/unit/commands.test.js @@ -526,4 +526,59 @@ describe('Commands Module', () => { expect(mockExit).toHaveBeenCalledWith(1); }); }); +}); + +// Test the version comparison utility +describe('Version comparison', () => { + // Use a dynamic import for the commands module + let compareVersions; + + beforeAll(async () => { + // Import the function we want to test dynamically + const commandsModule = await import('../../scripts/modules/commands.js'); + compareVersions = commandsModule.compareVersions; + }); + + test('compareVersions correctly compares semantic versions', () => { + expect(compareVersions('1.0.0', '1.0.0')).toBe(0); + expect(compareVersions('1.0.0', '1.0.1')).toBe(-1); + expect(compareVersions('1.0.1', '1.0.0')).toBe(1); + expect(compareVersions('1.0.0', '1.1.0')).toBe(-1); + expect(compareVersions('1.1.0', '1.0.0')).toBe(1); + expect(compareVersions('1.0.0', '2.0.0')).toBe(-1); + expect(compareVersions('2.0.0', '1.0.0')).toBe(1); + expect(compareVersions('1.0', '1.0.0')).toBe(0); + 
expect(compareVersions('1.0.0.0', '1.0.0')).toBe(0); + expect(compareVersions('1.0.0', '1.0.0.1')).toBe(-1); + }); +}); + +// Test the update check functionality +describe('Update check', () => { + let displayUpgradeNotification; + let consoleLogSpy; + + beforeAll(async () => { + // Import the function we want to test dynamically + const commandsModule = await import('../../scripts/modules/commands.js'); + displayUpgradeNotification = commandsModule.displayUpgradeNotification; + }); + + beforeEach(() => { + // Spy on console.log + consoleLogSpy = jest.spyOn(console, 'log').mockImplementation(() => {}); + }); + + afterEach(() => { + consoleLogSpy.mockRestore(); + }); + + test('displays upgrade notification when newer version is available', () => { + // Test displayUpgradeNotification function + displayUpgradeNotification('1.0.0', '1.1.0'); + expect(consoleLogSpy).toHaveBeenCalled(); + expect(consoleLogSpy.mock.calls[0][0]).toContain('Update Available!'); + expect(consoleLogSpy.mock.calls[0][0]).toContain('1.0.0'); + expect(consoleLogSpy.mock.calls[0][0]).toContain('1.1.0'); + }); }); \ No newline at end of file diff --git a/tests/unit/init.test.js b/tests/unit/init.test.js index c8ad777c..77497932 100644 --- a/tests/unit/init.test.js +++ b/tests/unit/init.test.js @@ -143,4 +143,255 @@ describe('Windsurf Rules File Handling', () => { expect.any(String) ); }); +}); + +// New test suite for MCP Configuration Handling +describe('MCP Configuration Handling', () => { + let tempDir; + + beforeEach(() => { + jest.clearAllMocks(); + + // Create a temporary directory for testing + tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-')); + + // Spy on fs methods + jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {}); + jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => { + if (filePath.toString().includes('mcp.json')) { + return JSON.stringify({ + "mcpServers": { + "existing-server": { + "command": "node", + "args": ["server.js"] + } + } + 
}); + } + return '{}'; + }); + jest.spyOn(fs, 'existsSync').mockImplementation((filePath) => { + // Return true for specific paths to test different scenarios + if (filePath.toString().includes('package.json')) { + return true; + } + // Default to false for other paths + return false; + }); + jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {}); + jest.spyOn(fs, 'copyFileSync').mockImplementation(() => {}); + }); + + afterEach(() => { + // Clean up the temporary directory + try { + fs.rmSync(tempDir, { recursive: true, force: true }); + } catch (err) { + console.error(`Error cleaning up: ${err.message}`); + } + }); + + // Test function that simulates the behavior of setupMCPConfiguration + function mockSetupMCPConfiguration(targetDir, projectName) { + const mcpDirPath = path.join(targetDir, '.cursor'); + const mcpJsonPath = path.join(mcpDirPath, 'mcp.json'); + + // Create .cursor directory if it doesn't exist + if (!fs.existsSync(mcpDirPath)) { + fs.mkdirSync(mcpDirPath, { recursive: true }); + } + + // New MCP config to be added - references the installed package + const newMCPServer = { + "task-master-ai": { + "command": "npx", + "args": [ + "task-master-ai", + "mcp-server" + ] + } + }; + + // Check if mcp.json already exists + if (fs.existsSync(mcpJsonPath)) { + try { + // Read existing config + const mcpConfig = JSON.parse(fs.readFileSync(mcpJsonPath, 'utf8')); + + // Initialize mcpServers if it doesn't exist + if (!mcpConfig.mcpServers) { + mcpConfig.mcpServers = {}; + } + + // Add the taskmaster-ai server if it doesn't exist + if (!mcpConfig.mcpServers["task-master-ai"]) { + mcpConfig.mcpServers["task-master-ai"] = newMCPServer["task-master-ai"]; + } + + // Write the updated configuration + fs.writeFileSync( + mcpJsonPath, + JSON.stringify(mcpConfig, null, 4) + ); + } catch (error) { + // Create new configuration on error + const newMCPConfig = { + "mcpServers": newMCPServer + }; + + fs.writeFileSync(mcpJsonPath, JSON.stringify(newMCPConfig, null, 4)); + 
} + } else { + // If mcp.json doesn't exist, create it + const newMCPConfig = { + "mcpServers": newMCPServer + }; + + fs.writeFileSync(mcpJsonPath, JSON.stringify(newMCPConfig, null, 4)); + } + } + + test('creates mcp.json when it does not exist', () => { + // Arrange + const mcpJsonPath = path.join(tempDir, '.cursor', 'mcp.json'); + + // Act + mockSetupMCPConfiguration(tempDir, 'test-project'); + + // Assert + expect(fs.writeFileSync).toHaveBeenCalledWith( + mcpJsonPath, + expect.stringContaining('task-master-ai') + ); + + // Should create a proper structure with mcpServers key + expect(fs.writeFileSync).toHaveBeenCalledWith( + mcpJsonPath, + expect.stringContaining('mcpServers') + ); + + // Should reference npx command + expect(fs.writeFileSync).toHaveBeenCalledWith( + mcpJsonPath, + expect.stringContaining('npx') + ); + }); + + test('updates existing mcp.json by adding new server', () => { + // Arrange + const mcpJsonPath = path.join(tempDir, '.cursor', 'mcp.json'); + + // Override the existsSync mock to simulate mcp.json exists + fs.existsSync.mockImplementation((filePath) => { + if (filePath.toString().includes('mcp.json')) { + return true; + } + return false; + }); + + // Act + mockSetupMCPConfiguration(tempDir, 'test-project'); + + // Assert + // Should preserve existing server + expect(fs.writeFileSync).toHaveBeenCalledWith( + mcpJsonPath, + expect.stringContaining('existing-server') + ); + + // Should add our new server + expect(fs.writeFileSync).toHaveBeenCalledWith( + mcpJsonPath, + expect.stringContaining('task-master-ai') + ); + }); + + test('handles JSON parsing errors by creating new mcp.json', () => { + // Arrange + const mcpJsonPath = path.join(tempDir, '.cursor', 'mcp.json'); + + // Override existsSync to say mcp.json exists + fs.existsSync.mockImplementation((filePath) => { + if (filePath.toString().includes('mcp.json')) { + return true; + } + return false; + }); + + // But make readFileSync return invalid JSON + 
fs.readFileSync.mockImplementation((filePath) => { + if (filePath.toString().includes('mcp.json')) { + return '{invalid json'; + } + return '{}'; + }); + + // Act + mockSetupMCPConfiguration(tempDir, 'test-project'); + + // Assert + // Should create a new valid JSON file with our server + expect(fs.writeFileSync).toHaveBeenCalledWith( + mcpJsonPath, + expect.stringContaining('task-master-ai') + ); + }); + + test('does not modify existing server configuration if it already exists', () => { + // Arrange + const mcpJsonPath = path.join(tempDir, '.cursor', 'mcp.json'); + + // Override existsSync to say mcp.json exists + fs.existsSync.mockImplementation((filePath) => { + if (filePath.toString().includes('mcp.json')) { + return true; + } + return false; + }); + + // Return JSON that already has task-master-ai + fs.readFileSync.mockImplementation((filePath) => { + if (filePath.toString().includes('mcp.json')) { + return JSON.stringify({ + "mcpServers": { + "existing-server": { + "command": "node", + "args": ["server.js"] + }, + "task-master-ai": { + "command": "custom", + "args": ["custom-args"] + } + } + }); + } + return '{}'; + }); + + // Spy to check what's written + const writeFileSyncSpy = jest.spyOn(fs, 'writeFileSync'); + + // Act + mockSetupMCPConfiguration(tempDir, 'test-project'); + + // Assert + // Verify the written data contains the original taskmaster configuration + const dataWritten = JSON.parse(writeFileSyncSpy.mock.calls[0][1]); + expect(dataWritten.mcpServers["task-master-ai"].command).toBe("custom"); + expect(dataWritten.mcpServers["task-master-ai"].args).toContain("custom-args"); + }); + + test('creates the .cursor directory if it doesnt exist', () => { + // Arrange + const cursorDirPath = path.join(tempDir, '.cursor'); + + // Make sure it looks like the directory doesn't exist + fs.existsSync.mockReturnValue(false); + + // Act + mockSetupMCPConfiguration(tempDir, 'test-project'); + + // Assert + 
expect(fs.mkdirSync).toHaveBeenCalledWith(cursorDirPath, { recursive: true }); + }); }); \ No newline at end of file From cc0646ac7280ee4a88e8b3c105107aae39fbfc26 Mon Sep 17 00:00:00 2001 From: Eyal Toledano <eyal@microangel.so> Date: Thu, 27 Mar 2025 23:40:13 -0400 Subject: [PATCH 10/16] chore: task management, adjust readmes, adjust cursor rules, add mcp_integration.md to docs --- .cursor/rules/commands.mdc | 187 +++++++++++++++++++++----- README.md | 19 +++ docs/MCP_INTEGRATION.md | 269 +++++++++++++++++++++++++++++++++++++ jest.config.js | 3 +- scripts/README.md | 117 +++++++++++++++- tasks/task_001.txt | 2 +- tasks/task_023.txt | 2 +- tasks/task_034.txt | 6 +- tasks/tasks.json | 13 +- 9 files changed, 575 insertions(+), 43 deletions(-) create mode 100644 docs/MCP_INTEGRATION.md diff --git a/.cursor/rules/commands.mdc b/.cursor/rules/commands.mdc index 04dfec92..beabe9c7 100644 --- a/.cursor/rules/commands.mdc +++ b/.cursor/rules/commands.mdc @@ -102,6 +102,38 @@ alwaysApply: false } ``` +- **Enhanced Input Validation**: + - ✅ DO: Validate file existence for critical file operations + - ✅ DO: Provide context-specific validation for identifiers + - ✅ DO: Check required API keys for features that depend on them + + ```javascript + // ✅ DO: Validate file existence + if (!fs.existsSync(tasksPath)) { + console.error(chalk.red(`Error: Tasks file not found at path: ${tasksPath}`)); + if (tasksPath === 'tasks/tasks.json') { + console.log(chalk.yellow('Hint: Run task-master init or task-master parse-prd to create tasks.json first')); + } else { + console.log(chalk.yellow(`Hint: Check if the file path is correct: ${tasksPath}`)); + } + process.exit(1); + } + + // ✅ DO: Validate task ID + const taskId = parseInt(options.id, 10); + if (isNaN(taskId) || taskId <= 0) { + console.error(chalk.red(`Error: Invalid task ID: ${options.id}. 
Task ID must be a positive integer.`)); + console.log(chalk.yellow('Usage example: task-master update-task --id=23 --prompt="Update with new information"')); + process.exit(1); + } + + // ✅ DO: Check for required API keys + if (useResearch && !process.env.PERPLEXITY_API_KEY) { + console.log(chalk.yellow('Warning: PERPLEXITY_API_KEY environment variable is missing. Research-backed updates will not be available.')); + console.log(chalk.yellow('Falling back to Claude AI for task update.')); + } + ``` + ## User Feedback - **Operation Status**: @@ -123,6 +155,26 @@ alwaysApply: false } ``` +- **Success Messages with Next Steps**: + - ✅ DO: Use boxen for important success messages with clear formatting + - ✅ DO: Provide suggested next steps after command completion + - ✅ DO: Include ready-to-use commands for follow-up actions + + ```javascript + // ✅ DO: Display success with next steps + console.log(boxen( + chalk.white.bold(`Subtask ${parentId}.${subtask.id} Added Successfully`) + '\n\n' + + chalk.white(`Title: ${subtask.title}`) + '\n' + + chalk.white(`Status: ${getStatusWithColor(subtask.status)}`) + '\n' + + (dependencies.length > 0 ? chalk.white(`Dependencies: ${dependencies.join(', ')}`) + '\n' : '') + + '\n' + + chalk.white.bold('Next Steps:') + '\n' + + chalk.cyan(`1. Run ${chalk.yellow(`task-master show ${parentId}`)} to see the parent task with all subtasks`) + '\n' + + chalk.cyan(`2. 
Run ${chalk.yellow(`task-master set-status --id=${parentId}.${subtask.id} --status=in-progress`)} to start working on it`), + { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1 } } + )); + ``` + ## Command Registration - **Command Grouping**: @@ -139,7 +191,10 @@ alwaysApply: false export { registerCommands, setupCLI, - runCLI + runCLI, + checkForUpdate, // Include version checking functions + compareVersions, + displayUpgradeNotification }; ``` @@ -218,6 +273,35 @@ alwaysApply: false }); ``` +- **Contextual Error Handling**: + - ✅ DO: Provide specific error handling for common issues + - ✅ DO: Include troubleshooting hints for each error type + - ✅ DO: Use consistent error formatting across all commands + + ```javascript + // ✅ DO: Provide specific error handling with guidance + try { + // Implementation + } catch (error) { + console.error(chalk.red(`Error: ${error.message}`)); + + // Provide more helpful error messages for common issues + if (error.message.includes('task') && error.message.includes('not found')) { + console.log(chalk.yellow('\nTo fix this issue:')); + console.log(' 1. Run task-master list to see all available task IDs'); + console.log(' 2. Use a valid task ID with the --id parameter'); + } else if (error.message.includes('API key')) { + console.log(chalk.yellow('\nThis error is related to API keys. 
Check your environment variables.')); + } + + if (CONFIG.debug) { + console.error(error); + } + + process.exit(1); + } + ``` + ## Integration with Other Modules - **Import Organization**: @@ -230,6 +314,7 @@ alwaysApply: false import { program } from 'commander'; import path from 'path'; import chalk from 'chalk'; + import https from 'https'; import { CONFIG, log, readJSON } from './utils.js'; import { displayBanner, displayHelp } from './ui.js'; @@ -247,30 +332,22 @@ alwaysApply: false .description('Add a new subtask to a parent task or convert an existing task to a subtask') .option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json') .option('-p, --parent <id>', 'ID of the parent task (required)') - .option('-e, --existing <id>', 'ID of an existing task to convert to a subtask') + .option('-i, --task-id <id>', 'Existing task ID to convert to subtask') .option('-t, --title <title>', 'Title for the new subtask (when not converting)') .option('-d, --description <description>', 'Description for the new subtask (when not converting)') .option('--details <details>', 'Implementation details for the new subtask (when not converting)') .option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on') .option('--status <status>', 'Initial status for the subtask', 'pending') + .option('--skip-generate', 'Skip regenerating task files') .action(async (options) => { // Validate required parameters if (!options.parent) { console.error(chalk.red('Error: --parent parameter is required')); + showAddSubtaskHelp(); // Show contextual help process.exit(1); } - // Validate that either existing task ID or title is provided - if (!options.existing && !options.title) { - console.error(chalk.red('Error: Either --existing or --title must be provided')); - process.exit(1); - } - - try { - // Implementation - } catch (error) { - // Error handling - } + // Implementation with detailed error handling }); ``` @@ -283,25 +360,75 @@ alwaysApply: false 
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json') .option('-i, --id <id>', 'ID of the subtask to remove in format "parentId.subtaskId" (required)') .option('-c, --convert', 'Convert the subtask to a standalone task') + .option('--skip-generate', 'Skip regenerating task files') .action(async (options) => { - // Validate required parameters - if (!options.id) { - console.error(chalk.red('Error: --id parameter is required')); - process.exit(1); - } - - // Validate subtask ID format - if (!options.id.includes('.')) { - console.error(chalk.red('Error: Subtask ID must be in format "parentId.subtaskId"')); - process.exit(1); - } - - try { - // Implementation - } catch (error) { - // Error handling - } + // Implementation with detailed error handling + }) + .on('error', function(err) { + console.error(chalk.red(`Error: ${err.message}`)); + showRemoveSubtaskHelp(); // Show contextual help + process.exit(1); }); ``` +## Version Checking and Updates + +- **Automatic Version Checking**: + - ✅ DO: Implement version checking to notify users of available updates + - ✅ DO: Use non-blocking version checks that don't delay command execution + - ✅ DO: Display update notifications after command completion + + ```javascript + // ✅ DO: Implement version checking function + async function checkForUpdate() { + // Implementation details... + return { currentVersion, latestVersion, needsUpdate }; + } + + // ✅ DO: Implement semantic version comparison + function compareVersions(v1, v2) { + const v1Parts = v1.split('.').map(p => parseInt(p, 10)); + const v2Parts = v2.split('.').map(p => parseInt(p, 10)); + + // Implementation details... 
+ return result; // -1, 0, or 1 + } + + // ✅ DO: Display attractive update notifications + function displayUpgradeNotification(currentVersion, latestVersion) { + const message = boxen( + `${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}\n\n` + + `Run ${chalk.cyan('npm i task-master-ai@latest -g')} to update to the latest version with new features and bug fixes.`, + { + padding: 1, + margin: { top: 1, bottom: 1 }, + borderColor: 'yellow', + borderStyle: 'round' + } + ); + + console.log(message); + } + + // ✅ DO: Integrate version checking in CLI run function + async function runCLI(argv = process.argv) { + try { + // Start the update check in the background - don't await yet + const updateCheckPromise = checkForUpdate(); + + // Setup and parse + const programInstance = setupCLI(); + await programInstance.parseAsync(argv); + + // After command execution, check if an update is available + const updateInfo = await updateCheckPromise; + if (updateInfo.needsUpdate) { + displayUpgradeNotification(updateInfo.currentVersion, updateInfo.latestVersion); + } + } catch (error) { + // Error handling... + } + } + ``` + Refer to [`commands.js`](mdc:scripts/modules/commands.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines. 
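The `compareVersions` rule above elides the comparison loop itself. For reference, a minimal self-contained sketch (hypothetical, not the committed implementation) that pads missing parts with zeros, so `'1.0'` compares equal to `'1.0.0'` as the unit tests added earlier in this patch expect:

```javascript
// Hypothetical standalone sketch of the semantic version comparison described
// in the rule above (not the committed implementation). Missing parts compare
// as 0, so '1.0' equals '1.0.0'; non-numeric parts coerce to 0 instead of throwing.
function compareVersions(v1, v2) {
  const v1Parts = v1.split('.').map((p) => parseInt(p, 10) || 0);
  const v2Parts = v2.split('.').map((p) => parseInt(p, 10) || 0);
  const length = Math.max(v1Parts.length, v2Parts.length);

  for (let i = 0; i < length; i++) {
    const a = v1Parts[i] ?? 0; // pad the shorter version with zeros
    const b = v2Parts[i] ?? 0;
    if (a < b) return -1;
    if (a > b) return 1;
  }
  return 0; // every part matched
}

console.log(compareVersions('0.9.30', '1.0.0')); // -1 (update available)
console.log(compareVersions('1.0', '1.0.0'));    // 0  (up to date)
console.log(compareVersions('1.1.0', '1.0.0'));  // 1  (dev version is ahead)
```

Because non-numeric parts coerce to 0 rather than throwing, a malformed version string degrades to "no update" instead of crashing the CLI, in line with the graceful-failure guidance above.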
\ No newline at end of file
diff --git a/README.md b/README.md
index b0803a99..83811f8c 100644
--- a/README.md
+++ b/README.md
@@ -362,6 +362,25 @@ task-master show 1.2
 task-master update --from=<id> --prompt="<prompt>"
 ```
+# Use Perplexity AI for research-backed updates
+task-master update --from=<id> --prompt="<prompt>" --research
+```
+
+### Update a Single Task
+
+```bash
+# Update a specific task with new information
+task-master update-task --id=<id> --prompt="<prompt>"
+
+# Use research-backed task updates
+task-master update-task --id=<id> --prompt="<prompt>" --research
+```
+
+The update-task command:
+- Updates a single specified task rather than multiple tasks
+- Provides detailed validation and helpful error messages
+- Falls back gracefully if research API is unavailable
+- Preserves tasks marked as "done"
 
 ### Generate Task Files
 
 ```bash
diff --git a/docs/MCP_INTEGRATION.md b/docs/MCP_INTEGRATION.md
new file mode 100644
index 00000000..e1212841
--- /dev/null
+++ b/docs/MCP_INTEGRATION.md
@@ -0,0 +1,269 @@
+# Task Master MCP Integration
+
+This document outlines how Task Master CLI functionality is integrated with the MCP (Model Context Protocol) architecture to provide both CLI and programmatic API access to features.
+
+## Architecture Overview
+
+The MCP integration uses a layered approach:
+
+1. **Core Functions** - In `scripts/modules/` contain the main business logic
+2. **Source Parameter** - Core functions check the `source` parameter to determine behavior
+3. **Task Master Core** - In `mcp-server/src/core/task-master-core.js` provides direct function imports
+4. 
**MCP Tools** - In `mcp-server/src/tools/` register the functions with the MCP server
+
+```
+┌─────────────────┐      ┌─────────────────┐
+│    CLI User     │      │    MCP User     │
+└────────┬────────┘      └────────┬────────┘
+         │                        │
+         ▼                        ▼
+┌────────────────┐      ┌────────────────────┐
+│  commands.js   │      │    MCP Tool API    │
+└────────┬───────┘      └──────────┬─────────┘
+         │                         │
+         │                         │
+         ▼                         ▼
+┌───────────────────────────────────────────────┐
+│                                               │
+│      Core Modules (task-manager.js, etc.)     │
+│                                               │
+└───────────────────────────────────────────────┘
+```
+
+## Core Function Pattern
+
+Core functions should follow this pattern to support both CLI and MCP use:
+
+```javascript
+/**
+ * Example function with source parameter support
+ * @param {Object} options - Additional options including source
+ * @returns {Object|undefined} - Returns data when source is 'mcp'
+ */
+function exampleFunction(param1, param2, options = {}) {
+  try {
+    // Skip UI for MCP
+    if (options.source !== 'mcp') {
+      displayBanner();
+      console.log(chalk.blue('Processing operation...'));
+    }
+
+    // Do the core business logic
+    const result = doSomething(param1, param2);
+
+    // For MCP, return structured data
+    if (options.source === 'mcp') {
+      return {
+        success: true,
+        data: result
+      };
+    }
+
+    // For CLI, display output
+    console.log(chalk.green('Operation completed successfully!'));
+  } catch (error) {
+    // Handle errors based on source
+    if (options.source === 'mcp') {
+      return {
+        success: false,
+        error: error.message
+      };
+    }
+
+    // CLI error handling
+    console.error(chalk.red(`Error: ${error.message}`));
+    process.exit(1);
+  }
+}
+```
+
+## Source-Adapter Utilities
+
+For convenience, you can use the source adapter helpers in `scripts/modules/source-adapter.js`:
+
+```javascript
+import { adaptForMcp, sourceSplitFunction } from './source-adapter.js';
+
+// Simple adaptation - just adds source parameter support
+export const simpleFunction = adaptForMcp(originalFunction);
+
+// Split implementation - completely different 
code paths for CLI vs MCP +export const complexFunction = sourceSplitFunction( + // CLI version with UI + function(param1, param2) { + displayBanner(); + console.log(`Processing ${param1}...`); + // ... CLI implementation + }, + // MCP version with structured return + function(param1, param2, options = {}) { + // ... MCP implementation + return { success: true, data }; + } +); +``` + +## Adding New Features + +When adding new features, follow these steps to ensure CLI and MCP compatibility: + +1. **Implement Core Logic** in the appropriate module file +2. **Add Source Parameter Support** using the pattern above +3. **Add to task-master-core.js** to make it available for direct import +4. **Update Command Map** in `mcp-server/src/tools/utils.js` +5. **Create Tool Implementation** in `mcp-server/src/tools/` +6. **Register the Tool** in `mcp-server/src/tools/index.js` + +### Core Function Implementation + +```javascript +// In scripts/modules/task-manager.js +export async function newFeature(param1, param2, options = {}) { + try { + // Source-specific UI + if (options.source !== 'mcp') { + displayBanner(); + console.log(chalk.blue('Running new feature...')); + } + + // Shared core logic + const result = processFeature(param1, param2); + + // Source-specific return handling + if (options.source === 'mcp') { + return { + success: true, + data: result + }; + } + + // CLI output + console.log(chalk.green('Feature completed successfully!')); + displayOutput(result); + } catch (error) { + // Error handling based on source + if (options.source === 'mcp') { + return { + success: false, + error: error.message + }; + } + + console.error(chalk.red(`Error: ${error.message}`)); + process.exit(1); + } +} +``` + +### Task Master Core Update + +```javascript +// In mcp-server/src/core/task-master-core.js +import { newFeature } from '../../../scripts/modules/task-manager.js'; + +// Add to exports +export default { + // ... 
existing functions + + async newFeature(args = {}, options = {}) { + const { param1, param2 } = args; + return executeFunction(newFeature, [param1, param2], options); + } +}; +``` + +### Command Map Update + +```javascript +// In mcp-server/src/tools/utils.js +const commandMap = { + // ... existing mappings + 'new-feature': 'newFeature' +}; +``` + +### Tool Implementation + +```javascript +// In mcp-server/src/tools/newFeature.js +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +export function registerNewFeatureTool(server) { + server.addTool({ + name: "newFeature", + description: "Run the new feature", + parameters: z.object({ + param1: z.string().describe("First parameter"), + param2: z.number().optional().describe("Second parameter"), + file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z.string().describe("Root directory of the project") + }), + execute: async (args, { log }) => { + try { + log.info(`Running new feature with args: ${JSON.stringify(args)}`); + + const cmdArgs = []; + if (args.param1) cmdArgs.push(`--param1=${args.param1}`); + if (args.param2) cmdArgs.push(`--param2=${args.param2}`); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const projectRoot = args.projectRoot; + + // Execute the command + const result = await executeTaskMasterCommand( + "new-feature", + log, + cmdArgs, + projectRoot + ); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error in new feature: ${error.message}`); + return createErrorResponse(`Error in new feature: ${error.message}`); + } + }, + }); +} +``` + +### Tool Registration + +```javascript +// In mcp-server/src/tools/index.js +import { registerNewFeatureTool } from "./newFeature.js"; + +export function registerTaskMasterTools(server) { + // ... 
existing registrations
+  registerNewFeatureTool(server);
+}
+```
+
+## Testing
+
+Always test your MCP-compatible features with both CLI and MCP interfaces:
+
+```bash
+# Test CLI usage
+node scripts/dev.js new-feature --param1=test --param2=123
+
+# Test MCP usage
+node mcp-server/tests/test-command.js newFeature
+```
+
+## Best Practices
+
+1. **Keep Core Logic DRY** - Share as much logic as possible between CLI and MCP
+2. **Structured Data for MCP** - Return clean JSON objects from MCP source functions
+3. **Consistent Error Handling** - Standardize error formats for both interfaces
+4. **Documentation** - Update MCP tool documentation when adding new features
+5. **Testing** - Test both CLI and MCP interfaces for any new or modified feature
\ No newline at end of file
diff --git a/jest.config.js b/jest.config.js
index 6c97f332..43929da5 100644
--- a/jest.config.js
+++ b/jest.config.js
@@ -17,7 +17,8 @@ export default {
   // The glob patterns Jest uses to detect test files
   testMatch: [
     '**/__tests__/**/*.js',
-    '**/?(*.)+(spec|test).js'
+    '**/?(*.)+(spec|test).js',
+    '**/tests/*.test.js'
   ],
 
   // Transform files
diff --git a/scripts/README.md b/scripts/README.md
index f4428b23..231bc8de 100644
--- a/scripts/README.md
+++ b/scripts/README.md
@@ -94,6 +94,9 @@ node scripts/dev.js update --from=4 --prompt="Refactor tasks from ID 4 onward to
 # Update all tasks (default from=1)
 node scripts/dev.js update --prompt="Add authentication to all relevant tasks"
 
+# With research-backed updates using Perplexity AI
+node scripts/dev.js update --from=4 --prompt="Integrate OAuth 2.0" --research
+
 # Specify a different tasks file
 node scripts/dev.js update --file=custom-tasks.json --from=5 --prompt="Change database from MongoDB to PostgreSQL"
 ```
@@ -102,6 +105,27 @@ Notes:
 - The `--prompt` parameter is required and should explain the changes or new context
 - Only tasks that aren't marked as 'done' will be updated
 - Tasks with ID >= the specified --from value will be updated 
+- The `--research` flag uses Perplexity AI for more informed updates when available + +## Updating a Single Task + +The `update-task` command allows you to update a specific task instead of multiple tasks: + +```bash +# Update a specific task with new information +node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication" + +# With research-backed updates using Perplexity AI +node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication" --research +``` + +This command: +- Updates only the specified task rather than a range of tasks +- Provides detailed validation with helpful error messages +- Checks for required API keys when using research mode +- Falls back gracefully if Perplexity API is unavailable +- Preserves tasks that are already marked as "done" +- Includes contextual error handling for common issues ## Setting Task Status @@ -426,4 +450,95 @@ This command: - Commands for working with subtasks - For subtasks, provides a link to view the parent task -This command is particularly useful when you need to examine a specific task in detail before implementing it or when you want to check the status and details of a particular task. \ No newline at end of file +This command is particularly useful when you need to examine a specific task in detail before implementing it or when you want to check the status and details of a particular task. + +## Enhanced Error Handling + +The script now includes improved error handling throughout all commands: + +1. **Detailed Validation**: + - Required parameters (like task IDs and prompts) are validated early + - File existence is checked with customized errors for common scenarios + - Parameter type conversion is handled with clear error messages + +2. **Contextual Error Messages**: + - Task not found errors include suggestions to run the list command + - API key errors include reminders to check environment variables + - Invalid ID format errors show the expected format + +3. 
**Command-Specific Help Displays**: + - When validation fails, detailed help for the specific command is shown + - Help displays include usage examples and parameter descriptions + - Formatted in clear, color-coded boxes with examples + +4. **Helpful Error Recovery**: + - Detailed troubleshooting steps for common errors + - Graceful fallbacks for missing optional dependencies + - Clear instructions for how to fix configuration issues + +## Version Checking + +The script now automatically checks for updates without slowing down execution: + +1. **Background Version Checking**: + - Non-blocking version checks run in the background while commands execute + - Actual command execution isn't delayed by version checking + - Update notifications appear after command completion + +2. **Update Notifications**: + - When a newer version is available, a notification is displayed + - Notifications include current version, latest version, and update command + - Formatted in an attention-grabbing box with clear instructions + +3. 
**Implementation Details**: + - Uses semantic versioning to compare current and latest versions + - Fetches version information from npm registry with a timeout + - Gracefully handles connection issues without affecting command execution + +## Subtask Management + +The script now includes enhanced commands for managing subtasks: + +### Adding Subtasks + +```bash +# Add a subtask to an existing task +node scripts/dev.js add-subtask --parent=5 --title="Implement login UI" --description="Create login form" + +# Convert an existing task to a subtask +node scripts/dev.js add-subtask --parent=5 --task-id=8 + +# Add a subtask with dependencies +node scripts/dev.js add-subtask --parent=5 --title="Authentication middleware" --dependencies=5.1,5.2 + +# Skip regenerating task files +node scripts/dev.js add-subtask --parent=5 --title="Login API route" --skip-generate +``` + +Key features: +- Create new subtasks with detailed properties or convert existing tasks +- Define dependencies between subtasks +- Set custom status for new subtasks +- Provides next-step suggestions after creation + +### Removing Subtasks + +```bash +# Remove a subtask +node scripts/dev.js remove-subtask --id=5.2 + +# Remove multiple subtasks +node scripts/dev.js remove-subtask --id=5.2,5.3,5.4 + +# Convert a subtask to a standalone task +node scripts/dev.js remove-subtask --id=5.2 --convert + +# Skip regenerating task files +node scripts/dev.js remove-subtask --id=5.2 --skip-generate +``` + +Key features: +- Remove subtasks individually or in batches +- Optionally convert subtasks to standalone tasks +- Control whether task files are regenerated +- Provides detailed success messages and next steps \ No newline at end of file diff --git a/tasks/task_001.txt b/tasks/task_001.txt index ee7d6196..b4869cd2 100644 --- a/tasks/task_001.txt +++ b/tasks/task_001.txt @@ -1,6 +1,6 @@ # Task ID: 1 # Title: Implement Task Data Structure -# Status: done +# Status: in-progress # Dependencies: None # Priority: high # 
Description: Design and implement the core tasks.json structure that will serve as the single source of truth for the system. diff --git a/tasks/task_023.txt b/tasks/task_023.txt index fea63b4f..53a83793 100644 --- a/tasks/task_023.txt +++ b/tasks/task_023.txt @@ -246,7 +246,7 @@ Testing approach: 7. Add cache statistics for monitoring performance 8. Create unit tests for context management and caching functionality -## 10. Enhance Tool Registration and Resource Management [pending] +## 10. Enhance Tool Registration and Resource Management [in-progress] ### Dependencies: 23.1 ### Description: Refactor tool registration to follow FastMCP best practices, using decorators and improving the overall structure. Implement proper resource management for task templates and other shared resources. ### Details: diff --git a/tasks/task_034.txt b/tasks/task_034.txt index 77da9a0a..7cf47ed4 100644 --- a/tasks/task_034.txt +++ b/tasks/task_034.txt @@ -1,6 +1,6 @@ # Task ID: 34 # Title: Implement updateTask Command for Single Task Updates -# Status: in-progress +# Status: done # Dependencies: None # Priority: high # Description: Create a new command that allows updating a specific task by ID using AI-driven refinement while preserving completed subtasks and supporting all existing update command options. @@ -103,7 +103,7 @@ Testing approach: - Test concurrency scenarios with multiple simultaneous updates - Verify logging captures appropriate information for troubleshooting -## 4. Write comprehensive tests for updateTask command [in-progress] +## 4. Write comprehensive tests for updateTask command [done] ### Dependencies: 34.1, 34.2, 34.3 ### Description: Create a comprehensive test suite for the updateTask command to ensure it works correctly in all scenarios and maintains backward compatibility. ### Details: @@ -130,7 +130,7 @@ Testing approach: - Create test fixtures for consistent test data - Use snapshot testing for command output verification -## 5. 
Update CLI documentation and help text [pending] +## 5. Update CLI documentation and help text [done] ### Dependencies: 34.2 ### Description: Update the CLI help documentation to include the new updateTask command and ensure users understand its purpose and options. ### Details: diff --git a/tasks/tasks.json b/tasks/tasks.json index 92545f1d..915d383c 100644 --- a/tasks/tasks.json +++ b/tasks/tasks.json @@ -12,12 +12,13 @@ "id": 1, "title": "Implement Task Data Structure", "description": "Design and implement the core tasks.json structure that will serve as the single source of truth for the system.", - "status": "done", + "status": "in-progress", "dependencies": [], "priority": "high", "details": "Create the foundational data structure including:\n- JSON schema for tasks.json\n- Task model with all required fields (id, title, description, status, dependencies, priority, details, testStrategy, subtasks)\n- Validation functions for the task model\n- Basic file system operations for reading/writing tasks.json\n- Error handling for file operations", "testStrategy": "Verify that the tasks.json structure can be created, read, and validated. Test with sample data to ensure all fields are properly handled and that validation correctly identifies invalid structures.", - "subtasks": [] + "subtasks": [], + "previousStatus": "in-progress" }, { "id": 2, @@ -1419,7 +1420,7 @@ 1 ], "details": "1. Update registerTaskMasterTools function to use FastMCP's decorator pattern\n2. Implement @mcp.tool() decorators for all existing tools\n3. Add proper type annotations and documentation for all tools\n4. Create resource handlers for task templates using @mcp.resource()\n5. Implement resource templates for common task patterns\n6. Update the server initialization to properly register all tools and resources\n7. Add validation for tool inputs using FastMCP's built-in validation\n8. 
Create comprehensive tests for tool registration and resource access", - "status": "pending", + "status": "in-progress", "parentTaskId": 23 }, { @@ -1816,7 +1817,7 @@ "id": 34, "title": "Implement updateTask Command for Single Task Updates", "description": "Create a new command that allows updating a specific task by ID using AI-driven refinement while preserving completed subtasks and supporting all existing update command options.", - "status": "in-progress", + "status": "done", "dependencies": [], "priority": "high", "details": "Implement a new command called 'updateTask' that focuses on updating a single task rather than all tasks from an ID onwards. The implementation should:\n\n1. Accept a single task ID as a required parameter\n2. Use the same AI-driven approach as the existing update command to refine the task\n3. Preserve the completion status of any subtasks that were previously marked as complete\n4. Support all options from the existing update command including:\n - The research flag for Perplexity integration\n - Any formatting or refinement options\n - Task context options\n5. Update the CLI help documentation to include this new command\n6. Ensure the command follows the same pattern as other commands in the codebase\n7. Add appropriate error handling for cases where the specified task ID doesn't exist\n8. Implement the ability to update task title, description, and details separately if needed\n9. Ensure the command returns appropriate success/failure messages\n10. Optimize the implementation to only process the single task rather than scanning through all tasks\n\nThe command should reuse existing AI prompt templates where possible but modify them to focus on refining a single task rather than multiple tasks.", @@ -1864,7 +1865,7 @@ 3 ], "details": "Implementation steps:\n1. 
Create unit tests for the updateTaskById function in task-manager.js\n - Test finding and updating tasks with various IDs\n - Test preservation of completed subtasks\n - Test different update options combinations\n - Test error handling for non-existent tasks\n2. Create unit tests for the updateTask command in commands.js\n - Test command parameter parsing\n - Test option handling\n - Test error scenarios and messages\n3. Create integration tests that verify the end-to-end flow\n - Test the command with actual AI service integration\n - Test with mock AI responses for predictable testing\n4. Implement test fixtures and mocks for consistent testing\n5. Add performance tests to ensure the command is efficient\n6. Test edge cases such as empty tasks, tasks with many subtasks, etc.\n\nTesting approach:\n- Use Jest or similar testing framework\n- Implement mocks for external dependencies like AI services\n- Create test fixtures for consistent test data\n- Use snapshot testing for command output verification", - "status": "in-progress", + "status": "done", "parentTaskId": 34 }, { @@ -1875,7 +1876,7 @@ 2 ], "details": "Implementation steps:\n1. Add comprehensive help text for the updateTask command including:\n - Command description\n - Required and optional parameters\n - Examples of usage\n - Description of all supported options\n2. Update the main CLI help documentation to include the new command\n3. Add the command to any relevant command groups or categories\n4. Create usage examples that demonstrate common scenarios\n5. Update README.md and other documentation files to include information about the new command\n6. Add inline code comments explaining the implementation details\n7. Update any API documentation if applicable\n8. 
Create or update user guides with the new functionality\n\nTesting approach:\n- Verify help text is displayed correctly when running `--help`\n- Review documentation for clarity and completeness\n- Have team members review the documentation for usability\n- Test examples to ensure they work as documented", - "status": "pending", + "status": "done", "parentTaskId": 34 } ] From b5f84909445066418e3e75ed133639ef2bbe28ea Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Mon, 24 Mar 2025 21:31:36 +0000 Subject: [PATCH 11/16] feat(wip): initial commits for sub-tasks 1,2,3 for task 23 --- mcp-server/README.md | 170 ++++ mcp-server/server.js | 44 + mcp-server/src/api-handlers.js | 970 ++++++++++++++++++ mcp-server/src/auth.js | 285 ++++++ mcp-server/src/context-manager.js | 873 ++++++++++++++++ mcp-server/src/index.js | 366 +++++++ package-lock.json | 1558 ++++++++++++++++++++++++++++- package.json | 18 +- tasks/task_023.txt | 115 +++ tasks/tasks.json | 64 +- 10 files changed, 4418 insertions(+), 45 deletions(-) create mode 100644 mcp-server/README.md create mode 100755 mcp-server/server.js create mode 100644 mcp-server/src/api-handlers.js create mode 100644 mcp-server/src/auth.js create mode 100644 mcp-server/src/context-manager.js create mode 100644 mcp-server/src/index.js diff --git a/mcp-server/README.md b/mcp-server/README.md new file mode 100644 index 00000000..9c8b1300 --- /dev/null +++ b/mcp-server/README.md @@ -0,0 +1,170 @@ +# Task Master MCP Server + +This module implements a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server for Task Master, allowing external applications to access Task Master functionality and context through a standardized API. 
+ +## Features + +- MCP-compliant server implementation using FastMCP +- RESTful API for context management +- Authentication and authorization for secure access +- Context storage and retrieval with metadata and tagging +- Context windowing and truncation for handling size limits +- Integration with Task Master for task management operations + +## Installation + +The MCP server is included with Task Master. Install Task Master globally to use the MCP server: + +```bash +npm install -g task-master-ai +``` + +Or use it locally: + +```bash +npm install task-master-ai +``` + +## Environment Configuration + +The MCP server can be configured using environment variables or a `.env` file: + +| Variable | Description | Default | +| -------------------- | ---------------------------------------- | ----------------------------- | +| `MCP_SERVER_PORT` | Port for the MCP server | 3000 | +| `MCP_SERVER_HOST` | Host for the MCP server | localhost | +| `MCP_CONTEXT_DIR` | Directory for context storage | ./mcp-server/contexts | +| `MCP_API_KEYS_FILE` | File for API key storage | ./mcp-server/api-keys.json | +| `MCP_JWT_SECRET` | Secret for JWT token generation | task-master-mcp-server-secret | +| `MCP_JWT_EXPIRATION` | JWT token expiration time | 24h | +| `LOG_LEVEL` | Logging level (debug, info, warn, error) | info | + +## Getting Started + +### Starting the Server + +Start the MCP server as a standalone process: + +```bash +npx task-master-mcp-server +``` + +Or start it programmatically: + +```javascript +import { TaskMasterMCPServer } from "task-master-ai/mcp-server"; + +const server = new TaskMasterMCPServer(); +await server.start({ port: 3000, host: "localhost" }); +``` + +### Authentication + +The MCP server uses API key authentication with JWT tokens for secure access. A default admin API key is generated on first startup and can be found in the `api-keys.json` file. 
+ +To get a JWT token: + +```bash +curl -X POST http://localhost:3000/auth/token \ + -H "x-api-key: YOUR_API_KEY" +``` + +Use the token for subsequent requests: + +```bash +curl http://localhost:3000/mcp/tools \ + -H "Authorization: Bearer YOUR_JWT_TOKEN" +``` + +### Creating a New API Key + +Admin users can create new API keys: + +```bash +curl -X POST http://localhost:3000/auth/api-keys \ + -H "Authorization: Bearer ADMIN_JWT_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"clientId": "user1", "role": "user"}' +``` + +## Available MCP Endpoints + +The MCP server implements the following MCP-compliant endpoints: + +### Context Management + +- `GET /mcp/context` - List all contexts +- `POST /mcp/context` - Create a new context +- `GET /mcp/context/{id}` - Get a specific context +- `PUT /mcp/context/{id}` - Update a context +- `DELETE /mcp/context/{id}` - Delete a context + +### Models + +- `GET /mcp/models` - List available models +- `GET /mcp/models/{id}` - Get model details + +### Execution + +- `POST /mcp/execute` - Execute an operation with context + +## Available MCP Tools + +The MCP server provides the following tools: + +### Context Tools + +- `createContext` - Create a new context +- `getContext` - Retrieve a context by ID +- `updateContext` - Update an existing context +- `deleteContext` - Delete a context +- `listContexts` - List available contexts +- `addTags` - Add tags to a context +- `truncateContext` - Truncate a context to a maximum size + +### Task Master Tools + +- `listTasks` - List tasks from Task Master +- `getTaskDetails` - Get detailed task information +- `executeWithContext` - Execute operations using context + +## Examples + +### Creating a Context + +```javascript +// Using the MCP client +const client = new MCPClient("http://localhost:3000"); +await client.authenticate("YOUR_API_KEY"); + +const context = await client.createContext("my-context", { + title: "My Project", + tasks: ["Implement feature X", "Fix bug Y"], +}); +``` + 
+### Executing an Operation with Context + +```javascript +// Using the MCP client +const result = await client.execute("generateTask", "my-context", { + title: "New Task", + description: "Create a new task based on context", +}); +``` + +## Integration with Other Tools + +The Task Master MCP server can be integrated with other MCP-compatible tools and clients: + +- LLM applications that support the MCP protocol +- Task management systems that support context-aware operations +- Development environments with MCP integration + +## Contributing + +Contributions are welcome! Please feel free to submit a Pull Request. + +## License + +This project is licensed under the MIT License - see the LICENSE file for details. diff --git a/mcp-server/server.js b/mcp-server/server.js new file mode 100755 index 00000000..ed5c3c69 --- /dev/null +++ b/mcp-server/server.js @@ -0,0 +1,44 @@ +#!/usr/bin/env node + +import TaskMasterMCPServer from "./src/index.js"; +import dotenv from "dotenv"; +import { logger } from "../scripts/modules/utils.js"; + +// Load environment variables +dotenv.config(); + +// Constants +const PORT = process.env.MCP_SERVER_PORT || 3000; +const HOST = process.env.MCP_SERVER_HOST || "localhost"; + +/** + * Start the MCP server + */ +async function startServer() { + const server = new TaskMasterMCPServer(); + + // Handle graceful shutdown + process.on("SIGINT", async () => { + logger.info("Received SIGINT, shutting down gracefully..."); + await server.stop(); + process.exit(0); + }); + + process.on("SIGTERM", async () => { + logger.info("Received SIGTERM, shutting down gracefully..."); + await server.stop(); + process.exit(0); + }); + + try { + await server.start({ port: PORT, host: HOST }); + logger.info(`MCP server running at http://${HOST}:${PORT}`); + logger.info("Press Ctrl+C to stop"); + } catch (error) { + logger.error(`Failed to start MCP server: ${error.message}`); + process.exit(1); + } +} + +// Start the server +startServer(); diff --git 
a/mcp-server/src/api-handlers.js b/mcp-server/src/api-handlers.js new file mode 100644 index 00000000..ead546f2 --- /dev/null +++ b/mcp-server/src/api-handlers.js @@ -0,0 +1,970 @@ +import { z } from "zod"; +import { logger } from "../../scripts/modules/utils.js"; +import ContextManager from "./context-manager.js"; + +/** + * MCP API Handlers class + * Implements handlers for the MCP API endpoints + */ +class MCPApiHandlers { + constructor(server) { + this.server = server; + this.contextManager = new ContextManager(); + this.logger = logger; + + // Bind methods + this.registerEndpoints = this.registerEndpoints.bind(this); + this.setupContextHandlers = this.setupContextHandlers.bind(this); + this.setupModelHandlers = this.setupModelHandlers.bind(this); + this.setupExecuteHandlers = this.setupExecuteHandlers.bind(this); + + // Register all handlers + this.registerEndpoints(); + } + + /** + * Register all MCP API endpoints + */ + registerEndpoints() { + this.setupContextHandlers(); + this.setupModelHandlers(); + this.setupExecuteHandlers(); + + this.logger.info("Registered all MCP API endpoint handlers"); + } + + /** + * Set up handlers for the /context endpoint + */ + setupContextHandlers() { + // Add a tool to create context + this.server.addTool({ + name: "createContext", + description: + "Create a new context with the given data and optional metadata", + parameters: z.object({ + contextId: z.string().describe("Unique identifier for the context"), + data: z.any().describe("The context data to store"), + metadata: z + .object({}) + .optional() + .describe("Optional metadata for the context"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.createContext( + args.contextId, + args.data, + args.metadata || {} + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error creating context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to get context 
+ this.server.addTool({ + name: "getContext", + description: + "Retrieve a context by its ID, optionally a specific version", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to retrieve"), + versionId: z + .string() + .optional() + .describe("Optional specific version ID to retrieve"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.getContext( + args.contextId, + args.versionId + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error retrieving context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to update context + this.server.addTool({ + name: "updateContext", + description: "Update an existing context with new data and/or metadata", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to update"), + data: z + .any() + .optional() + .describe("New data to update the context with"), + metadata: z + .object({}) + .optional() + .describe("New metadata to update the context with"), + createNewVersion: z + .boolean() + .optional() + .default(true) + .describe( + "Whether to create a new version (true) or update in place (false)" + ), + }), + execute: async (args) => { + try { + const context = await this.contextManager.updateContext( + args.contextId, + args.data || {}, + args.metadata || {}, + args.createNewVersion + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error updating context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to delete context + this.server.addTool({ + name: "deleteContext", + description: "Delete a context by its ID", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to delete"), + }), + execute: async (args) => { + try { + const result = await this.contextManager.deleteContext( + args.contextId + ); + return { success: result }; + } 
catch (error) { + this.logger.error(`Error deleting context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to list contexts with pagination and advanced filtering + this.server.addTool({ + name: "listContexts", + description: + "List available contexts with filtering, pagination and sorting", + parameters: z.object({ + // Filtering parameters + filters: z + .object({ + tag: z.string().optional().describe("Filter contexts by tag"), + metadataKey: z + .string() + .optional() + .describe("Filter contexts by metadata key"), + metadataValue: z + .string() + .optional() + .describe("Filter contexts by metadata value"), + createdAfter: z + .string() + .optional() + .describe("Filter contexts created after date (ISO format)"), + updatedAfter: z + .string() + .optional() + .describe("Filter contexts updated after date (ISO format)"), + }) + .optional() + .describe("Filters to apply to the context list"), + + // Pagination parameters + limit: z + .number() + .optional() + .default(100) + .describe("Maximum number of contexts to return"), + offset: z + .number() + .optional() + .default(0) + .describe("Number of contexts to skip"), + + // Sorting parameters + sortBy: z + .string() + .optional() + .default("updated") + .describe("Field to sort by (id, created, updated, size)"), + sortDirection: z + .enum(["asc", "desc"]) + .optional() + .default("desc") + .describe("Sort direction"), + + // Search query + query: z.string().optional().describe("Free text search query"), + }), + execute: async (args) => { + try { + const result = await this.contextManager.listContexts(args); + return { + success: true, + ...result, + }; + } catch (error) { + this.logger.error(`Error listing contexts: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to get context history + this.server.addTool({ + name: "getContextHistory", + description: "Get the version history of a context", + 
parameters: z.object({ + contextId: z + .string() + .describe("The ID of the context to get history for"), + }), + execute: async (args) => { + try { + const history = await this.contextManager.getContextHistory( + args.contextId + ); + return { + success: true, + history, + contextId: args.contextId, + }; + } catch (error) { + this.logger.error(`Error getting context history: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to merge contexts + this.server.addTool({ + name: "mergeContexts", + description: "Merge multiple contexts into a new context", + parameters: z.object({ + contextIds: z + .array(z.string()) + .describe("Array of context IDs to merge"), + newContextId: z.string().describe("ID for the new merged context"), + metadata: z + .object({}) + .optional() + .describe("Optional metadata for the new context"), + }), + execute: async (args) => { + try { + const mergedContext = await this.contextManager.mergeContexts( + args.contextIds, + args.newContextId, + args.metadata || {} + ); + return { + success: true, + context: mergedContext, + }; + } catch (error) { + this.logger.error(`Error merging contexts: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to add tags to a context + this.server.addTool({ + name: "addTags", + description: "Add tags to a context", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to tag"), + tags: z + .array(z.string()) + .describe("Array of tags to add to the context"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.addTags( + args.contextId, + args.tags + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error adding tags to context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to remove tags from a context + this.server.addTool({ + name: "removeTags", + 
description: "Remove tags from a context", + parameters: z.object({ + contextId: z + .string() + .describe("The ID of the context to remove tags from"), + tags: z + .array(z.string()) + .describe("Array of tags to remove from the context"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.removeTags( + args.contextId, + args.tags + ); + return { success: true, context }; + } catch (error) { + this.logger.error( + `Error removing tags from context: ${error.message}` + ); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to truncate context + this.server.addTool({ + name: "truncateContext", + description: "Truncate a context to a maximum size", + parameters: z.object({ + contextId: z.string().describe("The ID of the context to truncate"), + maxSize: z + .number() + .describe("Maximum size (in characters) for the context"), + strategy: z + .enum(["start", "end", "middle"]) + .default("end") + .describe("Truncation strategy: start, end, or middle"), + }), + execute: async (args) => { + try { + const context = await this.contextManager.truncateContext( + args.contextId, + args.maxSize, + args.strategy + ); + return { success: true, context }; + } catch (error) { + this.logger.error(`Error truncating context: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + this.logger.info("Registered context endpoint handlers"); + } + + /** + * Set up handlers for the /models endpoint + */ + setupModelHandlers() { + // Add a tool to list available models + this.server.addTool({ + name: "listModels", + description: "List all available models with their capabilities", + parameters: z.object({}), + execute: async () => { + // Here we could get models from a more dynamic source + // For now, returning static list of models supported by Task Master + const models = [ + { + id: "claude-3-opus-20240229", + provider: "anthropic", + capabilities: [ + "text-generation", + 
"embeddings", + "context-window-100k", + ], + }, + { + id: "claude-3-7-sonnet-20250219", + provider: "anthropic", + capabilities: [ + "text-generation", + "embeddings", + "context-window-200k", + ], + }, + { + id: "sonar-medium-online", + provider: "perplexity", + capabilities: ["text-generation", "web-search", "research"], + }, + ]; + + return { success: true, models }; + }, + }); + + // Add a tool to get model details + this.server.addTool({ + name: "getModelDetails", + description: "Get detailed information about a specific model", + parameters: z.object({ + modelId: z.string().describe("The ID of the model to get details for"), + }), + execute: async (args) => { + // Here we could get model details from a more dynamic source + // For now, returning static information + const modelsMap = { + "claude-3-opus-20240229": { + id: "claude-3-opus-20240229", + provider: "anthropic", + capabilities: [ + "text-generation", + "embeddings", + "context-window-100k", + ], + maxTokens: 100000, + temperature: { min: 0, max: 1, default: 0.7 }, + pricing: { input: 0.000015, output: 0.000075 }, + }, + "claude-3-7-sonnet-20250219": { + id: "claude-3-7-sonnet-20250219", + provider: "anthropic", + capabilities: [ + "text-generation", + "embeddings", + "context-window-200k", + ], + maxTokens: 200000, + temperature: { min: 0, max: 1, default: 0.7 }, + pricing: { input: 0.000003, output: 0.000015 }, + }, + "sonar-medium-online": { + id: "sonar-medium-online", + provider: "perplexity", + capabilities: ["text-generation", "web-search", "research"], + maxTokens: 4096, + temperature: { min: 0, max: 1, default: 0.7 }, + }, + }; + + const model = modelsMap[args.modelId]; + if (!model) { + return { + success: false, + error: `Model with ID ${args.modelId} not found`, + }; + } + + return { success: true, model }; + }, + }); + + this.logger.info("Registered models endpoint handlers"); + } + + /** + * Set up handlers for the /execute endpoint + */ + setupExecuteHandlers() { + // Add a tool to 
execute operations with context + this.server.addTool({ + name: "executeWithContext", + description: "Execute an operation with the provided context", + parameters: z.object({ + operation: z.string().describe("The operation to execute"), + contextId: z.string().describe("The ID of the context to use"), + parameters: z + .record(z.any()) + .optional() + .describe("Additional parameters for the operation"), + versionId: z + .string() + .optional() + .describe("Optional specific context version to use"), + }), + execute: async (args) => { + try { + // Get the context first, with version if specified + const context = await this.contextManager.getContext( + args.contextId, + args.versionId + ); + + // Execute different operations based on the operation name + switch (args.operation) { + case "generateTask": + return await this.executeGenerateTask(context, args.parameters); + case "expandTask": + return await this.executeExpandTask(context, args.parameters); + case "analyzeComplexity": + return await this.executeAnalyzeComplexity( + context, + args.parameters + ); + case "mergeContexts": + return await this.executeMergeContexts(context, args.parameters); + case "searchContexts": + return await this.executeSearchContexts(args.parameters); + case "extractInsights": + return await this.executeExtractInsights( + context, + args.parameters + ); + case "syncWithRepository": + return await this.executeSyncWithRepository( + context, + args.parameters + ); + default: + return { + success: false, + error: `Unknown operation: ${args.operation}`, + }; + } + } catch (error) { + this.logger.error(`Error executing operation: ${error.message}`); + return { + success: false, + error: error.message, + operation: args.operation, + contextId: args.contextId, + }; + } + }, + }); + + // Add tool for batch operations + this.server.addTool({ + name: "executeBatchOperations", + description: "Execute multiple operations in a single request", + parameters: z.object({ + operations: z + .array( + 
z.object({ + operation: z.string().describe("The operation to execute"), + contextId: z.string().describe("The ID of the context to use"), + parameters: z + .record(z.any()) + .optional() + .describe("Additional parameters"), + versionId: z + .string() + .optional() + .describe("Optional context version"), + }) + ) + .describe("Array of operations to execute in sequence"), + }), + execute: async (args) => { + const results = []; + let hasErrors = false; + + for (const op of args.operations) { + try { + const context = await this.contextManager.getContext( + op.contextId, + op.versionId + ); + + let result; + switch (op.operation) { + case "generateTask": + result = await this.executeGenerateTask(context, op.parameters); + break; + case "expandTask": + result = await this.executeExpandTask(context, op.parameters); + break; + case "analyzeComplexity": + result = await this.executeAnalyzeComplexity( + context, + op.parameters + ); + break; + case "mergeContexts": + result = await this.executeMergeContexts( + context, + op.parameters + ); + break; + case "searchContexts": + result = await this.executeSearchContexts(op.parameters); + break; + case "extractInsights": + result = await this.executeExtractInsights( + context, + op.parameters + ); + break; + case "syncWithRepository": + result = await this.executeSyncWithRepository( + context, + op.parameters + ); + break; + default: + result = { + success: false, + error: `Unknown operation: ${op.operation}`, + }; + hasErrors = true; + } + + results.push({ + operation: op.operation, + contextId: op.contextId, + result: result, + }); + + if (!result.success) { + hasErrors = true; + } + } catch (error) { + this.logger.error( + `Error in batch operation ${op.operation}: ${error.message}` + ); + results.push({ + operation: op.operation, + contextId: op.contextId, + result: { + success: false, + error: error.message, + }, + }); + hasErrors = true; + } + } + + return { + success: !hasErrors, + results: results, + }; + }, + }); + 
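The batch tool above deliberately aggregates per-operation outcomes instead of failing fast: a thrown error is converted into a `{ success: false }` entry, the loop continues, and the batch as a whole reports success only when every operation succeeded. A minimal standalone sketch of that contract (the `runOperation` dispatch callback here is hypothetical, standing in for the real per-operation `switch`):

```javascript
// Sketch of the batch aggregation contract: collect one result per
// operation, convert thrown errors into failed results, and succeed
// overall only if every operation succeeded.
function aggregateBatch(operations, runOperation) {
  const results = [];
  let hasErrors = false;

  for (const op of operations) {
    let result;
    try {
      result = runOperation(op);
    } catch (error) {
      // A throw does not abort the batch; it becomes a failed entry
      result = { success: false, error: error.message };
    }
    if (!result.success) hasErrors = true;
    results.push({ operation: op.operation, contextId: op.contextId, result });
  }

  return { success: !hasErrors, results };
}

// Example: one succeeding and one unknown operation
const batch = aggregateBatch(
  [
    { operation: "generateTask", contextId: "ctx-1" },
    { operation: "bogusOp", contextId: "ctx-1" },
  ],
  (op) =>
    op.operation === "generateTask"
      ? { success: true }
      : { success: false, error: `Unknown operation: ${op.operation}` }
);
// batch.success is false, but batch.results still holds one entry per operation
```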
+ this.logger.info("Registered execute endpoint handlers"); + } + + /** + * Execute the generateTask operation + * @param {object} context - The context to use + * @param {object} parameters - Additional parameters + * @returns {Promise<object>} The result of the operation + */ + async executeGenerateTask(context, parameters = {}) { + // This is a placeholder for actual task generation logic + // In a real implementation, this would use Task Master's task generation + + this.logger.info(`Generating task with context ${context.id}`); + + // Improved task generation with more detailed result + const task = { + id: Math.floor(Math.random() * 1000), + title: parameters.title || "New Task", + description: parameters.description || "Task generated from context", + status: "pending", + dependencies: parameters.dependencies || [], + priority: parameters.priority || "medium", + details: `This task was generated using context ${ + context.id + }.\n\n${JSON.stringify(context.data, null, 2)}`, + metadata: { + generatedAt: new Date().toISOString(), + generatedFrom: context.id, + contextVersion: context.metadata.version, + generatedBy: parameters.user || "system", + }, + }; + + return { + success: true, + task, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } + + /** + * Execute the expandTask operation + * @param {object} context - The context to use + * @param {object} parameters - Additional parameters + * @returns {Promise<object>} The result of the operation + */ + async executeExpandTask(context, parameters = {}) { + // This is a placeholder for actual task expansion logic + // In a real implementation, this would use Task Master's task expansion + + this.logger.info(`Expanding task with context ${context.id}`); + + // Enhanced task expansion with more configurable options + const numSubtasks = parameters.numSubtasks || 3; + const subtaskPrefix = parameters.subtaskPrefix || ""; + const subtasks = []; + + for (let i = 1; i <= 
numSubtasks; i++) {
+      subtasks.push({
+        id: `${subtaskPrefix}${i}`,
+        title: parameters.titleTemplate
+          ? parameters.titleTemplate.replace("{i}", i)
+          : `Subtask ${i}`,
+        description: parameters.descriptionTemplate
+          ? parameters.descriptionTemplate
+              .replace("{i}", i)
+              .replace("{taskId}", parameters.taskId || "unknown")
+          : `Subtask ${i} for ${parameters.taskId || "unknown task"}`,
+        // Dependencies reference the previous subtask's full ID, including the prefix
+        dependencies: i > 1 ? [`${subtaskPrefix}${i - 1}`] : [],
+        status: "pending",
+        metadata: {
+          expandedAt: new Date().toISOString(),
+          expandedFrom: context.id,
+          contextVersion: context.metadata.version,
+          expandedBy: parameters.user || "system",
+        },
+      });
+    }
+
+    return {
+      success: true,
+      taskId: parameters.taskId,
+      subtasks,
+      contextUsed: {
+        id: context.id,
+        version: context.metadata.version,
+      },
+    };
+  }
+
+  /**
+   * Execute the analyzeComplexity operation
+   * @param {object} context - The context to use
+   * @param {object} parameters - Additional parameters
+   * @returns {Promise<object>} The result of the operation
+   */
+  async executeAnalyzeComplexity(context, parameters = {}) {
+    // This is a placeholder for actual complexity analysis logic
+    // In a real implementation, this would use Task Master's complexity analysis
+
+    this.logger.info(`Analyzing complexity with context ${context.id}`);
+
+    // Enhanced complexity analysis with more detailed factors
+    const complexityScore = Math.floor(Math.random() * 10) + 1;
+    const recommendedSubtasks = Math.floor(complexityScore / 2) + 1;
+
+    // More detailed analysis with weighted factors
+    const factors = [
+      {
+        name: "Task scope breadth",
+        score: Math.floor(Math.random() * 10) + 1,
+        weight: 0.3,
+        description: "How broad is the scope of this task",
+      },
+      {
+        name: "Technical complexity",
+        score: Math.floor(Math.random() * 10) + 1,
+        weight: 0.4,
+        description: "How technically complex is the implementation",
+      },
+      {
+        name: "External dependencies",
+        score: Math.floor(Math.random() * 10) + 1,
+        weight: 0.2,
+        description: "How many external
dependencies does this task have", + }, + { + name: "Risk assessment", + score: Math.floor(Math.random() * 10) + 1, + weight: 0.1, + description: "What is the risk level of this task", + }, + ]; + + return { + success: true, + analysis: { + taskId: parameters.taskId || "unknown", + complexityScore, + recommendedSubtasks, + factors, + recommendedTimeEstimate: `${complexityScore * 2}-${ + complexityScore * 4 + } hours`, + metadata: { + analyzedAt: new Date().toISOString(), + analyzedUsing: context.id, + contextVersion: context.metadata.version, + analyzedBy: parameters.user || "system", + }, + }, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } + + /** + * Execute the mergeContexts operation + * @param {object} primaryContext - The primary context to use + * @param {object} parameters - Additional parameters + * @returns {Promise<object>} The result of the operation + */ + async executeMergeContexts(primaryContext, parameters = {}) { + this.logger.info( + `Merging contexts with primary context ${primaryContext.id}` + ); + + if ( + !parameters.contextIds || + !Array.isArray(parameters.contextIds) || + parameters.contextIds.length === 0 + ) { + return { + success: false, + error: "No context IDs provided for merging", + }; + } + + if (!parameters.newContextId) { + return { + success: false, + error: "New context ID is required for the merged context", + }; + } + + try { + // Add the primary context to the list if not already included + if (!parameters.contextIds.includes(primaryContext.id)) { + parameters.contextIds.unshift(primaryContext.id); + } + + const mergedContext = await this.contextManager.mergeContexts( + parameters.contextIds, + parameters.newContextId, + { + mergedAt: new Date().toISOString(), + mergedBy: parameters.user || "system", + mergeStrategy: parameters.strategy || "concatenate", + ...parameters.metadata, + } + ); + + return { + success: true, + mergedContext, + sourceContexts: parameters.contextIds, + }; + } 
catch (error) { + this.logger.error(`Error merging contexts: ${error.message}`); + return { + success: false, + error: error.message, + }; + } + } + + /** + * Execute the searchContexts operation + * @param {object} parameters - Search parameters + * @returns {Promise<object>} The result of the operation + */ + async executeSearchContexts(parameters = {}) { + this.logger.info( + `Searching contexts with query: ${parameters.query || ""}` + ); + + try { + const searchResults = await this.contextManager.listContexts({ + query: parameters.query || "", + filters: parameters.filters || {}, + limit: parameters.limit || 100, + offset: parameters.offset || 0, + sortBy: parameters.sortBy || "updated", + sortDirection: parameters.sortDirection || "desc", + }); + + return { + success: true, + ...searchResults, + }; + } catch (error) { + this.logger.error(`Error searching contexts: ${error.message}`); + return { + success: false, + error: error.message, + }; + } + } + + /** + * Execute the extractInsights operation + * @param {object} context - The context to analyze + * @param {object} parameters - Additional parameters + * @returns {Promise<object>} The result of the operation + */ + async executeExtractInsights(context, parameters = {}) { + this.logger.info(`Extracting insights from context ${context.id}`); + + // Placeholder for actual insight extraction + // In a real implementation, this would perform analysis on the context data + + const insights = [ + { + type: "summary", + content: `Summary of context ${context.id}`, + confidence: 0.85, + }, + { + type: "key_points", + content: ["First key point", "Second key point", "Third key point"], + confidence: 0.78, + }, + { + type: "recommendations", + content: ["First recommendation", "Second recommendation"], + confidence: 0.72, + }, + ]; + + return { + success: true, + insights, + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + metadata: { + extractedAt: new Date().toISOString(), + model: 
parameters.model || "default", + extractedBy: parameters.user || "system", + }, + }; + } + + /** + * Execute the syncWithRepository operation + * @param {object} context - The context to sync + * @param {object} parameters - Additional parameters + * @returns {Promise<object>} The result of the operation + */ + async executeSyncWithRepository(context, parameters = {}) { + this.logger.info(`Syncing context ${context.id} with repository`); + + // Placeholder for actual repository sync + // In a real implementation, this would sync the context with an external repository + + return { + success: true, + syncStatus: "complete", + syncedTo: parameters.repository || "default", + syncTimestamp: new Date().toISOString(), + contextUsed: { + id: context.id, + version: context.metadata.version, + }, + }; + } +} + +export default MCPApiHandlers; diff --git a/mcp-server/src/auth.js b/mcp-server/src/auth.js new file mode 100644 index 00000000..22c36973 --- /dev/null +++ b/mcp-server/src/auth.js @@ -0,0 +1,285 @@ +import jwt from "jsonwebtoken"; +import { logger } from "../../scripts/modules/utils.js"; +import crypto from "crypto"; +import fs from "fs/promises"; +import path from "path"; +import { fileURLToPath } from "url"; + +// Constants +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); +const API_KEYS_FILE = + process.env.MCP_API_KEYS_FILE || path.join(__dirname, "../api-keys.json"); +const JWT_SECRET = + process.env.MCP_JWT_SECRET || "task-master-mcp-server-secret"; +const JWT_EXPIRATION = process.env.MCP_JWT_EXPIRATION || "24h"; + +/** + * Authentication middleware and utilities for MCP server + */ +class MCPAuth { + constructor() { + this.apiKeys = new Map(); + this.logger = logger; + this.loadApiKeys(); + } + + /** + * Load API keys from disk + */ + async loadApiKeys() { + try { + // Create API keys file if it doesn't exist + try { + await fs.access(API_KEYS_FILE); + } catch (error) { + // File doesn't exist, create it with a 
default admin key + const defaultApiKey = this.generateApiKey(); + const defaultApiKeys = { + keys: [ + { + id: "admin", + key: defaultApiKey, + role: "admin", + created: new Date().toISOString(), + }, + ], + }; + + await fs.mkdir(path.dirname(API_KEYS_FILE), { recursive: true }); + await fs.writeFile( + API_KEYS_FILE, + JSON.stringify(defaultApiKeys, null, 2), + "utf8" + ); + + this.logger.info( + `Created default API keys file with admin key: ${defaultApiKey}` + ); + } + + // Load API keys + const data = await fs.readFile(API_KEYS_FILE, "utf8"); + const apiKeys = JSON.parse(data); + + apiKeys.keys.forEach((key) => { + this.apiKeys.set(key.key, { + id: key.id, + role: key.role, + created: key.created, + }); + }); + + this.logger.info(`Loaded ${this.apiKeys.size} API keys`); + } catch (error) { + this.logger.error(`Failed to load API keys: ${error.message}`); + throw error; + } + } + + /** + * Save API keys to disk + */ + async saveApiKeys() { + try { + const keys = []; + + this.apiKeys.forEach((value, key) => { + keys.push({ + id: value.id, + key, + role: value.role, + created: value.created, + }); + }); + + await fs.writeFile( + API_KEYS_FILE, + JSON.stringify({ keys }, null, 2), + "utf8" + ); + + this.logger.info(`Saved ${keys.length} API keys`); + } catch (error) { + this.logger.error(`Failed to save API keys: ${error.message}`); + throw error; + } + } + + /** + * Generate a new API key + * @returns {string} The generated API key + */ + generateApiKey() { + return crypto.randomBytes(32).toString("hex"); + } + + /** + * Create a new API key + * @param {string} id - Client identifier + * @param {string} role - Client role (admin, user) + * @returns {string} The generated API key + */ + async createApiKey(id, role = "user") { + const apiKey = this.generateApiKey(); + + this.apiKeys.set(apiKey, { + id, + role, + created: new Date().toISOString(), + }); + + await this.saveApiKeys(); + + this.logger.info(`Created new API key for ${id} with role ${role}`); + return 
apiKey; + } + + /** + * Revoke an API key + * @param {string} apiKey - The API key to revoke + * @returns {boolean} True if the key was revoked + */ + async revokeApiKey(apiKey) { + if (!this.apiKeys.has(apiKey)) { + return false; + } + + this.apiKeys.delete(apiKey); + await this.saveApiKeys(); + + this.logger.info(`Revoked API key`); + return true; + } + + /** + * Validate an API key + * @param {string} apiKey - The API key to validate + * @returns {object|null} The API key details if valid, null otherwise + */ + validateApiKey(apiKey) { + return this.apiKeys.get(apiKey) || null; + } + + /** + * Generate a JWT token for a client + * @param {string} clientId - Client identifier + * @param {string} role - Client role + * @returns {string} The JWT token + */ + generateToken(clientId, role) { + return jwt.sign({ clientId, role }, JWT_SECRET, { + expiresIn: JWT_EXPIRATION, + }); + } + + /** + * Verify a JWT token + * @param {string} token - The JWT token to verify + * @returns {object|null} The token payload if valid, null otherwise + */ + verifyToken(token) { + try { + return jwt.verify(token, JWT_SECRET); + } catch (error) { + this.logger.error(`Failed to verify token: ${error.message}`); + return null; + } + } + + /** + * Express middleware for API key authentication + * @param {object} req - Express request object + * @param {object} res - Express response object + * @param {function} next - Express next function + */ + authenticateApiKey(req, res, next) { + const apiKey = req.headers["x-api-key"]; + + if (!apiKey) { + return res.status(401).json({ + success: false, + error: "API key is required", + }); + } + + const keyDetails = this.validateApiKey(apiKey); + + if (!keyDetails) { + return res.status(401).json({ + success: false, + error: "Invalid API key", + }); + } + + // Attach client info to request + req.client = { + id: keyDetails.id, + role: keyDetails.role, + }; + + next(); + } + + /** + * Express middleware for JWT authentication + * @param {object} req - 
Express request object + * @param {object} res - Express response object + * @param {function} next - Express next function + */ + authenticateToken(req, res, next) { + const authHeader = req.headers["authorization"]; + const token = authHeader && authHeader.split(" ")[1]; + + if (!token) { + return res.status(401).json({ + success: false, + error: "Authentication token is required", + }); + } + + const payload = this.verifyToken(token); + + if (!payload) { + return res.status(401).json({ + success: false, + error: "Invalid or expired token", + }); + } + + // Attach client info to request + req.client = { + id: payload.clientId, + role: payload.role, + }; + + next(); + } + + /** + * Express middleware for role-based authorization + * @param {Array} roles - Array of allowed roles + * @returns {function} Express middleware + */ + authorizeRoles(roles) { + return (req, res, next) => { + if (!req.client || !req.client.role) { + return res.status(401).json({ + success: false, + error: "Unauthorized: Authentication required", + }); + } + + if (!roles.includes(req.client.role)) { + return res.status(403).json({ + success: false, + error: "Forbidden: Insufficient permissions", + }); + } + + next(); + }; + } +} + +export default MCPAuth; diff --git a/mcp-server/src/context-manager.js b/mcp-server/src/context-manager.js new file mode 100644 index 00000000..5b94b538 --- /dev/null +++ b/mcp-server/src/context-manager.js @@ -0,0 +1,873 @@ +import { logger } from "../../scripts/modules/utils.js"; +import fs from "fs/promises"; +import path from "path"; +import { fileURLToPath } from "url"; +import crypto from "crypto"; +import Fuse from "fuse.js"; + +// Constants +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); +const CONTEXT_DIR = + process.env.MCP_CONTEXT_DIR || path.join(__dirname, "../contexts"); +const MAX_CONTEXT_HISTORY = parseInt( + process.env.MCP_MAX_CONTEXT_HISTORY || "10", + 10 +); + +/** + * Context Manager for MCP 
server + * Handles storage, retrieval, and manipulation of context data + * Implements efficient indexing, versioning, and advanced context operations + */ +class ContextManager { + constructor() { + this.contexts = new Map(); + this.contextHistory = new Map(); // For version history + this.contextIndex = null; // For fuzzy search + this.logger = logger; + this.ensureContextDir(); + this.rebuildSearchIndex(); + } + + /** + * Ensure the contexts directory exists + */ + async ensureContextDir() { + try { + await fs.mkdir(CONTEXT_DIR, { recursive: true }); + this.logger.info(`Context directory ensured at ${CONTEXT_DIR}`); + + // Also create a versions subdirectory for history + await fs.mkdir(path.join(CONTEXT_DIR, "versions"), { recursive: true }); + } catch (error) { + this.logger.error(`Failed to create context directory: ${error.message}`); + throw error; + } + } + + /** + * Rebuild the search index for efficient context lookup + */ + async rebuildSearchIndex() { + await this.loadAllContextsFromDisk(); + + const contextsForIndex = Array.from(this.contexts.values()).map((ctx) => ({ + id: ctx.id, + content: + typeof ctx.data === "string" ? 
ctx.data : JSON.stringify(ctx.data), + tags: ctx.tags.join(" "), + metadata: Object.entries(ctx.metadata) + .map(([k, v]) => `${k}:${v}`) + .join(" "), + })); + + this.contextIndex = new Fuse(contextsForIndex, { + keys: ["id", "content", "tags", "metadata"], + includeScore: true, + threshold: 0.6, + }); + + this.logger.info( + `Rebuilt search index with ${contextsForIndex.length} contexts` + ); + } + + /** + * Create a new context + * @param {string} contextId - Unique identifier for the context + * @param {object|string} contextData - Initial context data + * @param {object} metadata - Optional metadata for the context + * @returns {object} The created context + */ + async createContext(contextId, contextData, metadata = {}) { + if (this.contexts.has(contextId)) { + throw new Error(`Context with ID ${contextId} already exists`); + } + + const timestamp = new Date().toISOString(); + const versionId = this.generateVersionId(); + + const context = { + id: contextId, + data: contextData, + metadata: { + created: timestamp, + updated: timestamp, + version: versionId, + ...metadata, + }, + tags: metadata.tags || [], + size: this.estimateSize(contextData), + }; + + this.contexts.set(contextId, context); + + // Initialize version history + this.contextHistory.set(contextId, [ + { + versionId, + timestamp, + data: JSON.parse(JSON.stringify(contextData)), // Deep clone + metadata: { ...context.metadata }, + }, + ]); + + await this.persistContext(contextId); + await this.persistContextVersion(contextId, versionId); + + // Update the search index + this.rebuildSearchIndex(); + + this.logger.info(`Created context: ${contextId} (version: ${versionId})`); + return context; + } + + /** + * Retrieve a context by ID + * @param {string} contextId - The context ID to retrieve + * @param {string} versionId - Optional specific version to retrieve + * @returns {object} The context object + */ + async getContext(contextId, versionId = null) { + // If specific version requested, try to 
get it from history + if (versionId) { + return this.getContextVersion(contextId, versionId); + } + + // Try to get from memory first + if (this.contexts.has(contextId)) { + return this.contexts.get(contextId); + } + + // Try to load from disk + try { + const context = await this.loadContextFromDisk(contextId); + if (context) { + this.contexts.set(contextId, context); + return context; + } + } catch (error) { + this.logger.error( + `Failed to load context ${contextId}: ${error.message}` + ); + } + + throw new Error(`Context with ID ${contextId} not found`); + } + + /** + * Get a specific version of a context + * @param {string} contextId - The context ID + * @param {string} versionId - The version ID + * @returns {object} The versioned context + */ + async getContextVersion(contextId, versionId) { + // Check if version history is in memory + if (this.contextHistory.has(contextId)) { + const history = this.contextHistory.get(contextId); + const version = history.find((v) => v.versionId === versionId); + if (version) { + return { + id: contextId, + data: version.data, + metadata: version.metadata, + tags: version.metadata.tags || [], + size: this.estimateSize(version.data), + versionId: version.versionId, + }; + } + } + + // Try to load from disk + try { + const versionPath = path.join( + CONTEXT_DIR, + "versions", + `${contextId}_${versionId}.json` + ); + const data = await fs.readFile(versionPath, "utf8"); + const version = JSON.parse(data); + + // Add to memory cache + if (!this.contextHistory.has(contextId)) { + this.contextHistory.set(contextId, []); + } + const history = this.contextHistory.get(contextId); + history.push(version); + + return { + id: contextId, + data: version.data, + metadata: version.metadata, + tags: version.metadata.tags || [], + size: this.estimateSize(version.data), + versionId: version.versionId, + }; + } catch (error) { + this.logger.error( + `Failed to load context version ${contextId}@${versionId}: ${error.message}` + ); + throw new 
Error( + `Context version ${versionId} for ${contextId} not found` + ); + } + } + + /** + * Update an existing context + * @param {string} contextId - The context ID to update + * @param {object|string} contextData - New context data + * @param {object} metadata - Optional metadata updates + * @param {boolean} createNewVersion - Whether to create a new version + * @returns {object} The updated context + */ + async updateContext( + contextId, + contextData, + metadata = {}, + createNewVersion = true + ) { + const context = await this.getContext(contextId); + const timestamp = new Date().toISOString(); + + // Generate a new version ID if requested + const versionId = createNewVersion + ? this.generateVersionId() + : context.metadata.version; + + // Create a backup of the current state for versioning + if (createNewVersion) { + // Store the current version in history + if (!this.contextHistory.has(contextId)) { + this.contextHistory.set(contextId, []); + } + + const history = this.contextHistory.get(contextId); + + // Add current state to history + history.push({ + versionId: context.metadata.version, + timestamp: context.metadata.updated, + data: JSON.parse(JSON.stringify(context.data)), // Deep clone + metadata: { ...context.metadata }, + }); + + // Trim history if it exceeds the maximum size + if (history.length > MAX_CONTEXT_HISTORY) { + const excessVersions = history.splice( + 0, + history.length - MAX_CONTEXT_HISTORY + ); + // Clean up excess versions from disk + for (const version of excessVersions) { + this.removeContextVersionFile(contextId, version.versionId).catch( + (err) => + this.logger.error( + `Failed to remove old version file: ${err.message}` + ) + ); + } + } + + // Persist version + await this.persistContextVersion(contextId, context.metadata.version); + } + + // Update the context + context.data = contextData; + context.metadata = { + ...context.metadata, + ...metadata, + updated: timestamp, + }; + + if (createNewVersion) { + 
context.metadata.previousVersion = context.metadata.version; // record prior version before overwriting it
+      context.metadata.version = versionId;
+    }
+
+    if (metadata.tags) {
+      context.tags = metadata.tags;
+    }
+
+    // Update size estimate
+    context.size = this.estimateSize(contextData);
+
+    this.contexts.set(contextId, context);
+    await this.persistContext(contextId);
+
+    // Update the search index
+    this.rebuildSearchIndex();
+
+    this.logger.info(`Updated context: ${contextId} (version: ${versionId})`);
+    return context;
+  }
+
+  /**
+   * Delete a context and all its versions
+   * @param {string} contextId - The context ID to delete
+   * @returns {boolean} True if deletion was successful
+   */
+  async deleteContext(contextId) {
+    if (!this.contexts.has(contextId)) {
+      const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
+      try {
+        await fs.access(contextPath);
+      } catch (error) {
+        throw new Error(`Context with ID ${contextId} not found`);
+      }
+    }
+
+    this.contexts.delete(contextId);
+
+    // Remove from history
+    const history = this.contextHistory.get(contextId) || [];
+    this.contextHistory.delete(contextId);
+
+    try {
+      // Delete main context file
+      const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
+      await fs.unlink(contextPath);
+
+      // Delete all version files
+      for (const version of history) {
+        await this.removeContextVersionFile(contextId, version.versionId);
+      }
+
+      // Update the search index
+      this.rebuildSearchIndex();
+
+      this.logger.info(`Deleted context: ${contextId}`);
+      return true;
+    } catch (error) {
+      this.logger.error(
+        `Failed to delete context files for ${contextId}: ${error.message}`
+      );
+      throw error;
+    }
+  }
+
+  /**
+   * List all available contexts with pagination and advanced filtering
+   * @param {object} options - Options for listing contexts
+   * @param {object} options.filters - Filters to apply
+   * @param {number} options.limit - Maximum number of contexts to return
+   * @param {number} options.offset - Number of contexts to skip
+   * @param {string}
options.sortBy - Field to sort by + * @param {string} options.sortDirection - Sort direction ('asc' or 'desc') + * @param {string} options.query - Free text search query + * @returns {Array} Array of context objects + */ + async listContexts(options = {}) { + // Load all contexts from disk first + await this.loadAllContextsFromDisk(); + + const { + filters = {}, + limit = 100, + offset = 0, + sortBy = "updated", + sortDirection = "desc", + query = "", + } = options; + + let contexts; + + // If there's a search query, use the search index + if (query && this.contextIndex) { + const searchResults = this.contextIndex.search(query); + contexts = searchResults.map((result) => + this.contexts.get(result.item.id) + ); + } else { + contexts = Array.from(this.contexts.values()); + } + + // Apply filters + if (filters.tag) { + contexts = contexts.filter( + (ctx) => ctx.tags && ctx.tags.includes(filters.tag) + ); + } + + if (filters.metadataKey && filters.metadataValue) { + contexts = contexts.filter( + (ctx) => + ctx.metadata && + ctx.metadata[filters.metadataKey] === filters.metadataValue + ); + } + + if (filters.createdAfter) { + const timestamp = new Date(filters.createdAfter); + contexts = contexts.filter( + (ctx) => new Date(ctx.metadata.created) >= timestamp + ); + } + + if (filters.updatedAfter) { + const timestamp = new Date(filters.updatedAfter); + contexts = contexts.filter( + (ctx) => new Date(ctx.metadata.updated) >= timestamp + ); + } + + // Apply sorting + contexts.sort((a, b) => { + let valueA, valueB; + + if (sortBy === "created" || sortBy === "updated") { + valueA = new Date(a.metadata[sortBy]).getTime(); + valueB = new Date(b.metadata[sortBy]).getTime(); + } else if (sortBy === "size") { + valueA = a.size || 0; + valueB = b.size || 0; + } else if (sortBy === "id") { + valueA = a.id; + valueB = b.id; + } else { + valueA = a.metadata[sortBy]; + valueB = b.metadata[sortBy]; + } + + if (valueA === valueB) return 0; + + const sortFactor = sortDirection === "asc" 
? 1 : -1; + return valueA < valueB ? -1 * sortFactor : 1 * sortFactor; + }); + + // Apply pagination + const paginatedContexts = contexts.slice(offset, offset + limit); + + return { + contexts: paginatedContexts, + total: contexts.length, + offset, + limit, + hasMore: offset + limit < contexts.length, + }; + } + + /** + * Get the version history of a context + * @param {string} contextId - The context ID + * @returns {Array} Array of version objects + */ + async getContextHistory(contextId) { + // Ensure context exists + await this.getContext(contextId); + + // Load history if not in memory + if (!this.contextHistory.has(contextId)) { + await this.loadContextHistoryFromDisk(contextId); + } + + const history = this.contextHistory.get(contextId) || []; + + // Return versions in reverse chronological order (newest first) + return history.sort((a, b) => { + const timeA = new Date(a.timestamp).getTime(); + const timeB = new Date(b.timestamp).getTime(); + return timeB - timeA; + }); + } + + /** + * Add tags to a context + * @param {string} contextId - The context ID + * @param {Array} tags - Array of tags to add + * @returns {object} The updated context + */ + async addTags(contextId, tags) { + const context = await this.getContext(contextId); + + const currentTags = context.tags || []; + const uniqueTags = [...new Set([...currentTags, ...tags])]; + + // Update context with new tags + return this.updateContext( + contextId, + context.data, + { + tags: uniqueTags, + }, + false + ); // Don't create a new version for tag updates + } + + /** + * Remove tags from a context + * @param {string} contextId - The context ID + * @param {Array} tags - Array of tags to remove + * @returns {object} The updated context + */ + async removeTags(contextId, tags) { + const context = await this.getContext(contextId); + + const currentTags = context.tags || []; + const newTags = currentTags.filter((tag) => !tags.includes(tag)); + + // Update context with new tags + return this.updateContext( 
+ contextId, + context.data, + { + tags: newTags, + }, + false + ); // Don't create a new version for tag updates + } + + /** + * Handle context windowing and truncation + * @param {string} contextId - The context ID + * @param {number} maxSize - Maximum size in tokens/chars + * @param {string} strategy - Truncation strategy ('start', 'end', 'middle') + * @returns {object} The truncated context + */ + async truncateContext(contextId, maxSize, strategy = "end") { + const context = await this.getContext(contextId); + const contextText = + typeof context.data === "string" + ? context.data + : JSON.stringify(context.data); + + if (contextText.length <= maxSize) { + return context; // No truncation needed + } + + let truncatedData; + + switch (strategy) { + case "start": + truncatedData = contextText.slice(contextText.length - maxSize); + break; + case "middle": { + // Braces give the declaration below its own block scope + const halfSize = Math.floor(maxSize / 2); + truncatedData = + contextText.slice(0, halfSize) + + "...[truncated]..." + + contextText.slice(contextText.length - halfSize); + break; + } + case "end": + default: + truncatedData = contextText.slice(0, maxSize); + break; + } + + // If the original data was an object, try to parse the truncated text + // back into an object; otherwise keep it as a string + let updatedData; + if (typeof context.data === "object" && context.data !== null) { + try { + // This may fail if truncation broke the JSON structure + updatedData = JSON.parse(truncatedData); + } catch (error) { + updatedData = truncatedData; + } + } else { + updatedData = truncatedData; + } + + // Update with truncated data + return this.updateContext( + contextId, + updatedData, + { + truncated: true, + truncation_strategy: strategy, + original_size: contextText.length, + truncated_size: truncatedData.length, + }, + true + ); // Create a new version for the truncated data + } + + /** + * Merge multiple contexts into a new context + * @param
{Array} contextIds - Array of context IDs to merge + * @param {string} newContextId - ID for the new merged context + * @param {object} metadata - Optional metadata for the new context + * @returns {object} The new merged context + */ + async mergeContexts(contextIds, newContextId, metadata = {}) { + if (contextIds.length === 0) { + throw new Error("At least one context ID must be provided for merging"); + } + + if (this.contexts.has(newContextId)) { + throw new Error(`Context with ID ${newContextId} already exists`); + } + + // Load all contexts to be merged + const contextsToMerge = []; + for (const id of contextIds) { + try { + const context = await this.getContext(id); + contextsToMerge.push(context); + } catch (error) { + this.logger.error( + `Could not load context ${id} for merging: ${error.message}` + ); + throw new Error(`Failed to merge contexts: ${error.message}`); + } + } + + // Check data types and decide how to merge + const allStrings = contextsToMerge.every((c) => typeof c.data === "string"); + const allObjects = contextsToMerge.every( + (c) => typeof c.data === "object" && c.data !== null + ); + + let mergedData; + + if (allStrings) { + // Merge strings with newlines between them + mergedData = contextsToMerge.map((c) => c.data).join("\n\n"); + } else if (allObjects) { + // Merge objects by combining their properties + mergedData = {}; + for (const context of contextsToMerge) { + mergedData = { ...mergedData, ...context.data }; + } + } else { + // Convert everything to strings and concatenate + mergedData = contextsToMerge + .map((c) => + typeof c.data === "string" ? 
c.data : JSON.stringify(c.data) + ) + .join("\n\n"); + } + + // Collect all tags from merged contexts + const allTags = new Set(); + for (const context of contextsToMerge) { + for (const tag of context.tags || []) { + allTags.add(tag); + } + } + + // Create merged metadata + const mergedMetadata = { + ...metadata, + tags: [...allTags], + merged_from: contextIds, + merged_at: new Date().toISOString(), + }; + + // Create the new merged context + return this.createContext(newContextId, mergedData, mergedMetadata); + } + + /** + * Persist a context to disk + * @param {string} contextId - The context ID to persist + * @returns {Promise<void>} + */ + async persistContext(contextId) { + const context = this.contexts.get(contextId); + if (!context) { + throw new Error(`Context with ID ${contextId} not found`); + } + + const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); + try { + await fs.writeFile(contextPath, JSON.stringify(context, null, 2), "utf8"); + this.logger.debug(`Persisted context ${contextId} to disk`); + } catch (error) { + this.logger.error( + `Failed to persist context ${contextId}: ${error.message}` + ); + throw error; + } + } + + /** + * Persist a context version to disk + * @param {string} contextId - The context ID + * @param {string} versionId - The version ID + * @returns {Promise<void>} + */ + async persistContextVersion(contextId, versionId) { + if (!this.contextHistory.has(contextId)) { + throw new Error(`Context history for ${contextId} not found`); + } + + const history = this.contextHistory.get(contextId); + const version = history.find((v) => v.versionId === versionId); + + if (!version) { + throw new Error(`Version ${versionId} of context ${contextId} not found`); + } + + const versionPath = path.join( + CONTEXT_DIR, + "versions", + `${contextId}_${versionId}.json` + ); + try { + await fs.writeFile(versionPath, JSON.stringify(version, null, 2), "utf8"); + this.logger.debug( + `Persisted context version ${contextId}@${versionId} to 
disk` + ); + } catch (error) { + this.logger.error( + `Failed to persist context version ${contextId}@${versionId}: ${error.message}` + ); + throw error; + } + } + + /** + * Remove a context version file from disk + * @param {string} contextId - The context ID + * @param {string} versionId - The version ID + * @returns {Promise<void>} + */ + async removeContextVersionFile(contextId, versionId) { + const versionPath = path.join( + CONTEXT_DIR, + "versions", + `${contextId}_${versionId}.json` + ); + try { + await fs.unlink(versionPath); + this.logger.debug( + `Removed context version file ${contextId}@${versionId}` + ); + } catch (error) { + if (error.code !== "ENOENT") { + this.logger.error( + `Failed to remove context version file ${contextId}@${versionId}: ${error.message}` + ); + throw error; + } + } + } + + /** + * Load a context from disk + * @param {string} contextId - The context ID to load + * @returns {Promise<object>} The loaded context + */ + async loadContextFromDisk(contextId) { + const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`); + try { + const data = await fs.readFile(contextPath, "utf8"); + const context = JSON.parse(data); + this.logger.debug(`Loaded context ${contextId} from disk`); + return context; + } catch (error) { + this.logger.error( + `Failed to load context ${contextId} from disk: ${error.message}` + ); + throw error; + } + } + + /** + * Load context history from disk + * @param {string} contextId - The context ID + * @returns {Promise<Array>} The loaded history + */ + async loadContextHistoryFromDisk(contextId) { + try { + const files = await fs.readdir(path.join(CONTEXT_DIR, "versions")); + const versionFiles = files.filter( + (file) => file.startsWith(`${contextId}_`) && file.endsWith(".json") + ); + + const history = []; + + for (const file of versionFiles) { + try { + const data = await fs.readFile( + path.join(CONTEXT_DIR, "versions", file), + "utf8" + ); + const version = JSON.parse(data); + history.push(version); + } 
catch (error) { + this.logger.error( + `Failed to load context version file ${file}: ${error.message}` + ); + } + } + + this.contextHistory.set(contextId, history); + this.logger.debug( + `Loaded ${history.length} versions for context ${contextId}` + ); + + return history; + } catch (error) { + this.logger.error( + `Failed to load context history for ${contextId}: ${error.message}` + ); + this.contextHistory.set(contextId, []); + return []; + } + } + + /** + * Load all contexts from disk + * @returns {Promise<void>} + */ + async loadAllContextsFromDisk() { + try { + const files = await fs.readdir(CONTEXT_DIR); + const contextFiles = files.filter((file) => file.endsWith(".json")); + + for (const file of contextFiles) { + const contextId = path.basename(file, ".json"); + if (!this.contexts.has(contextId)) { + try { + const context = await this.loadContextFromDisk(contextId); + this.contexts.set(contextId, context); + } catch (error) { + // Already logged in loadContextFromDisk + } + } + } + + this.logger.info(`Loaded ${this.contexts.size} contexts from disk`); + } catch (error) { + this.logger.error(`Failed to load contexts from disk: ${error.message}`); + throw error; + } + } + + /** + * Generate a unique version ID + * @returns {string} A unique version ID + */ + generateVersionId() { + return crypto.randomBytes(8).toString("hex"); + } + + /** + * Estimate the size of context data + * @param {object|string} data - The context data + * @returns {number} Estimated size in bytes + */ + estimateSize(data) { + if (typeof data === "string") { + return Buffer.byteLength(data, "utf8"); + } + + if (typeof data === "object" && data !== null) { + return Buffer.byteLength(JSON.stringify(data), "utf8"); + } + + return 0; + } +} + +export default ContextManager; diff --git a/mcp-server/src/index.js b/mcp-server/src/index.js new file mode 100644 index 00000000..eb820f95 --- /dev/null +++ b/mcp-server/src/index.js @@ -0,0 +1,366 @@ +import { FastMCP } from "fastmcp"; +import { z } 
from "zod"; +import path from "path"; +import fs from "fs/promises"; +import dotenv from "dotenv"; +import { fileURLToPath } from "url"; +import express from "express"; +import cors from "cors"; +import helmet from "helmet"; +import { logger } from "../../scripts/modules/utils.js"; +import MCPAuth from "./auth.js"; +import MCPApiHandlers from "./api-handlers.js"; +import ContextManager from "./context-manager.js"; + +// Load environment variables +dotenv.config(); + +// Constants +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); +const DEFAULT_PORT = process.env.MCP_SERVER_PORT || 3000; +const DEFAULT_HOST = process.env.MCP_SERVER_HOST || "localhost"; + +/** + * Main MCP server class that integrates with Task Master + */ +class TaskMasterMCPServer { + constructor(options = {}) { + this.options = { + name: "Task Master MCP Server", + version: process.env.PROJECT_VERSION || "1.0.0", + ...options, + }; + + this.server = new FastMCP(this.options); + this.expressApp = null; + this.initialized = false; + this.auth = new MCPAuth(); + this.contextManager = new ContextManager(); + + // Bind methods + this.init = this.init.bind(this); + this.start = this.start.bind(this); + this.stop = this.stop.bind(this); + + // Setup logging + this.logger = logger; + } + + /** + * Initialize the MCP server with necessary tools and routes + */ + async init() { + if (this.initialized) return; + + this.logger.info("Initializing Task Master MCP server..."); + + // Set up express for additional customization if needed + this.expressApp = express(); + this.expressApp.use(cors()); + this.expressApp.use(helmet()); + this.expressApp.use(express.json()); + + // Set up authentication middleware + this.setupAuthentication(); + + // Register API handlers + this.apiHandlers = new MCPApiHandlers(this.server); + + // Register additional task master specific tools + this.registerTaskMasterTools(); + + this.initialized = true; + this.logger.info("Task Master 
MCP server initialized successfully"); + + return this; + } + + /** + * Set up authentication for the MCP server + */ + setupAuthentication() { + // Add a health check endpoint that doesn't require authentication + this.expressApp.get("/health", (req, res) => { + res.status(200).json({ + status: "ok", + service: this.options.name, + version: this.options.version, + }); + }); + + // Add an authenticate endpoint to get a JWT token using an API key + this.expressApp.post("/auth/token", async (req, res) => { + const apiKey = req.headers["x-api-key"]; + + if (!apiKey) { + return res.status(401).json({ + success: false, + error: "API key is required", + }); + } + + const keyDetails = this.auth.validateApiKey(apiKey); + + if (!keyDetails) { + return res.status(401).json({ + success: false, + error: "Invalid API key", + }); + } + + const token = this.auth.generateToken(keyDetails.id, keyDetails.role); + + res.status(200).json({ + success: true, + token, + expiresIn: process.env.MCP_JWT_EXPIRATION || "24h", + clientId: keyDetails.id, + role: keyDetails.role, + }); + }); + + // Create authenticator middleware for FastMCP + this.server.setAuthenticator((request) => { + // Get token from Authorization header + const authHeader = request.headers?.authorization; + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return null; + } + + const token = authHeader.split(" ")[1]; + const payload = this.auth.verifyToken(token); + + if (!payload) { + return null; + } + + return { + clientId: payload.clientId, + role: payload.role, + }; + }); + + // Set up a protected route for API key management (admin only) + this.expressApp.post( + "/auth/api-keys", + (req, res, next) => { + this.auth.authenticateToken(req, res, next); + }, + (req, res, next) => { + this.auth.authorizeRoles(["admin"])(req, res, next); + }, + async (req, res) => { + const { clientId, role } = req.body; + + if (!clientId) { + return res.status(400).json({ + success: false, + error: "Client ID is required", + }); 
+ } + + try { + const apiKey = await this.auth.createApiKey(clientId, role || "user"); + + res.status(201).json({ + success: true, + apiKey, + clientId, + role: role || "user", + }); + } catch (error) { + this.logger.error(`Error creating API key: ${error.message}`); + + res.status(500).json({ + success: false, + error: "Failed to create API key", + }); + } + } + ); + + this.logger.info("Set up MCP authentication"); + } + + /** + * Register Task Master specific tools with the MCP server + */ + registerTaskMasterTools() { + // Add a tool to get tasks from Task Master + this.server.addTool({ + name: "listTasks", + description: "List all tasks from Task Master", + parameters: z.object({ + status: z.string().optional().describe("Filter tasks by status"), + withSubtasks: z + .boolean() + .optional() + .describe("Include subtasks in the response"), + }), + execute: async (args) => { + try { + // In a real implementation, this would use the Task Master API + // to fetch tasks. For now, returning mock data. 
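+ // (Hypothetical sketch, not wired up yet) A real implementation could + // read the repo's tasks/tasks.json instead of the mock data below, e.g.: + // const data = JSON.parse(await fs.readFile("tasks/tasks.json", "utf8")); + // const tasks = data.tasks;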
+ + this.logger.info( + `Listing tasks with filters: ${JSON.stringify(args)}` + ); + + // Mock task data + const tasks = [ + { + id: 1, + title: "Implement Task Data Structure", + status: "done", + dependencies: [], + priority: "high", + }, + { + id: 2, + title: "Develop Command Line Interface Foundation", + status: "done", + dependencies: [1], + priority: "high", + }, + { + id: 23, + title: "Implement MCP Server Functionality", + status: "in-progress", + dependencies: [22], + priority: "medium", + subtasks: [ + { + id: "23.1", + title: "Create Core MCP Server Module", + status: "in-progress", + dependencies: [], + }, + { + id: "23.2", + title: "Implement Context Management System", + status: "pending", + dependencies: ["23.1"], + }, + ], + }, + ]; + + // Apply status filter if provided + let filteredTasks = tasks; + if (args.status) { + filteredTasks = tasks.filter((task) => task.status === args.status); + } + + // Remove subtasks if not requested + if (!args.withSubtasks) { + filteredTasks = filteredTasks.map((task) => { + const { subtasks, ...taskWithoutSubtasks } = task; + return taskWithoutSubtasks; + }); + } + + return { success: true, tasks: filteredTasks }; + } catch (error) { + this.logger.error(`Error listing tasks: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + // Add a tool to get task details + this.server.addTool({ + name: "getTaskDetails", + description: "Get detailed information about a specific task", + parameters: z.object({ + taskId: z + .union([z.number(), z.string()]) + .describe("The ID of the task to get details for"), + }), + execute: async (args) => { + try { + // In a real implementation, this would use the Task Master API + // to fetch task details. For now, returning mock data. 
+ + this.logger.info(`Getting details for task ${args.taskId}`); + + // Mock task details + const taskDetails = { + id: 23, + title: "Implement MCP Server Functionality", + description: + "Extend Task Master to function as an MCP server, allowing it to provide context management services to other applications.", + status: "in-progress", + dependencies: [22], + priority: "medium", + details: + "This task involves implementing the Model Context Protocol server capabilities within Task Master.", + testStrategy: + "Testing should include unit tests, integration tests, and compatibility tests.", + subtasks: [ + { + id: "23.1", + title: "Create Core MCP Server Module", + status: "in-progress", + dependencies: [], + }, + { + id: "23.2", + title: "Implement Context Management System", + status: "pending", + dependencies: ["23.1"], + }, + ], + }; + + return { success: true, task: taskDetails }; + } catch (error) { + this.logger.error(`Error getting task details: ${error.message}`); + return { success: false, error: error.message }; + } + }, + }); + + this.logger.info("Registered Task Master specific tools"); + } + + /** + * Start the MCP server + */ + async start({ port = DEFAULT_PORT, host = DEFAULT_HOST } = {}) { + if (!this.initialized) { + await this.init(); + } + + this.logger.info( + `Starting Task Master MCP server on http://${host}:${port}` + ); + + // Start the FastMCP server + await this.server.start({ + port, + host, + transportType: "sse", + expressApp: this.expressApp, + }); + + this.logger.info( + `Task Master MCP server running at http://${host}:${port}` + ); + + return this; + } + + /** + * Stop the MCP server + */ + async stop() { + if (this.server) { + this.logger.info("Stopping Task Master MCP server..."); + await this.server.stop(); + this.logger.info("Task Master MCP server stopped"); + } + } +} + +export default TaskMasterMCPServer; diff --git a/package-lock.json b/package-lock.json index 9fe24aaf..198d4529 100644 --- a/package-lock.json +++ 
b/package-lock.json @@ -1,12 +1,12 @@ { "name": "task-master-ai", - "version": "0.9.18", + "version": "0.9.30", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "task-master-ai", - "version": "0.9.18", + "version": "0.9.30", "license": "MIT", "dependencies": { "@anthropic-ai/sdk": "^0.39.0", @@ -14,15 +14,22 @@ "chalk": "^4.1.2", "cli-table3": "^0.6.5", "commander": "^11.1.0", + "cors": "^2.8.5", "dotenv": "^16.3.1", + "express": "^4.21.2", + "fastmcp": "^1.20.5", "figlet": "^1.8.0", + "fuse.js": "^7.0.0", "gradient-string": "^3.0.0", + "helmet": "^8.1.0", + "jsonwebtoken": "^9.0.2", "openai": "^4.89.0", "ora": "^8.2.0" }, "bin": { "task-master": "bin/task-master.js", - "task-master-init": "bin/task-master-init.js" + "task-master-init": "bin/task-master-init.js", + "task-master-mcp-server": "mcp-server/server.js" }, "devDependencies": { "@changesets/changelog-github": "^0.5.1", @@ -1419,6 +1426,317 @@ "node": ">=6 <7 || >=8" } }, + "node_modules/@modelcontextprotocol/sdk": { + "version": "1.8.0", + "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.8.0.tgz", + "integrity": "sha512-e06W7SwrontJDHwCawNO5SGxG+nU9AAx+jpHHZqGl/WrDBdWOpvirC+s58VpJTB5QemI4jTRcjWT4Pt3Q1NPQQ==", + "license": "MIT", + "dependencies": { + "content-type": "^1.0.5", + "cors": "^2.8.5", + "cross-spawn": "^7.0.3", + "eventsource": "^3.0.2", + "express": "^5.0.1", + "express-rate-limit": "^7.5.0", + "pkce-challenge": "^4.1.0", + "raw-body": "^3.0.0", + "zod": "^3.23.8", + "zod-to-json-schema": "^3.24.1" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/accepts": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz", + "integrity": "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==", + "license": "MIT", + "dependencies": { + "mime-types": "^3.0.0", + "negotiator": "^1.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + 
"node_modules/@modelcontextprotocol/sdk/node_modules/body-parser": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.0.tgz", + "integrity": "sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg==", + "license": "MIT", + "dependencies": { + "bytes": "^3.1.2", + "content-type": "^1.0.5", + "debug": "^4.4.0", + "http-errors": "^2.0.0", + "iconv-lite": "^0.6.3", + "on-finished": "^2.4.1", + "qs": "^6.14.0", + "raw-body": "^3.0.0", + "type-is": "^2.0.0" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/content-disposition": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-1.0.0.tgz", + "integrity": "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg==", + "license": "MIT", + "dependencies": { + "safe-buffer": "5.2.1" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/cookie-signature": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.2.2.tgz", + "integrity": "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==", + "license": "MIT", + "engines": { + "node": ">=6.6.0" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/express/-/express-5.0.1.tgz", + "integrity": "sha512-ORF7g6qGnD+YtUG9yx4DFoqCShNMmUKiXuT5oWMHiOvt/4WFbHC6yCwQMTSBMno7AqntNCAzzcnnjowRkTL9eQ==", + "license": "MIT", + "dependencies": { + "accepts": "^2.0.0", + "body-parser": "^2.0.1", + "content-disposition": "^1.0.0", + "content-type": "~1.0.4", + "cookie": "0.7.1", + "cookie-signature": "^1.2.1", + "debug": "4.3.6", + "depd": "2.0.0", + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "finalhandler": "^2.0.0", + 
"fresh": "2.0.0", + "http-errors": "2.0.0", + "merge-descriptors": "^2.0.0", + "methods": "~1.1.2", + "mime-types": "^3.0.0", + "on-finished": "2.4.1", + "once": "1.4.0", + "parseurl": "~1.3.3", + "proxy-addr": "~2.0.7", + "qs": "6.13.0", + "range-parser": "~1.2.1", + "router": "^2.0.0", + "safe-buffer": "5.2.1", + "send": "^1.1.0", + "serve-static": "^2.1.0", + "setprototypeof": "1.2.0", + "statuses": "2.0.1", + "type-is": "^2.0.0", + "utils-merge": "1.0.1", + "vary": "~1.1.2" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express/node_modules/debug": { + "version": "4.3.6", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.6.tgz", + "integrity": "sha512-O/09Bd4Z1fBrU4VzkhFqVgpPzaGbw6Sm9FEkBT1A/YBXQFGuuSxa1dN2nxgxS34JmKXqYx8CZAwEVoJFImUXIg==", + "license": "MIT", + "dependencies": { + "ms": "2.1.2" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express/node_modules/ms": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", + "license": "MIT" + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/express/node_modules/qs": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", + "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/finalhandler": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.0.tgz", + "integrity": 
"sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "on-finished": "^2.4.1", + "parseurl": "^1.3.3", + "statuses": "^2.0.1" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz", + "integrity": "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/media-typer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-1.1.0.tgz", + "integrity": "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/merge-descriptors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-2.0.0.tgz", + "integrity": "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/mime-db": { + "version": "1.54.0", + "resolved": 
"https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz", + "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/mime-types": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.1.tgz", + "integrity": "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA==", + "license": "MIT", + "dependencies": { + "mime-db": "^1.54.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/negotiator": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-1.0.0.tgz", + "integrity": "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/raw-body": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-3.0.0.tgz", + "integrity": "sha512-RmkhL8CAyCRPXCE28MMH0z2PNWQBNk2Q09ZdxM9IOOXwxwZbN+qbWaatPkdkWIKL2ZVDImrN/pK5HTRz2PcS4g==", + "license": "MIT", + "dependencies": { + "bytes": "3.1.2", + "http-errors": "2.0.0", + "iconv-lite": "0.6.3", + "unpipe": "1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/send": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/send/-/send-1.2.0.tgz", + "integrity": "sha512-uaW0WwXKpL9blXE2o0bRhoL2EGXIrZxQ2ZQ4mgcfoBxdFmQold+qWsD2jLrfZ0trjKL6vOw0j//eAwcALFjKSw==", + "license": "MIT", + "dependencies": { + "debug": "^4.3.5", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "fresh": "^2.0.0", + "http-errors": "^2.0.0", + "mime-types": "^3.0.1", + "ms": "^2.1.3", + "on-finished": "^2.4.1", + "range-parser": "^1.2.1", + "statuses": 
"^2.0.1" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/serve-static": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.0.tgz", + "integrity": "sha512-61g9pCh0Vnh7IutZjtLGGpTA355+OPn2TyDv/6ivP2h/AdAVX9azsoxmg2/M6nZeQZNYBEwIcsne1mJd9oQItQ==", + "license": "MIT", + "dependencies": { + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "parseurl": "^1.3.3", + "send": "^1.2.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/type-is": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-2.0.1.tgz", + "integrity": "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw==", + "license": "MIT", + "dependencies": { + "content-type": "^1.0.5", + "media-typer": "^1.1.0", + "mime-types": "^3.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, "node_modules/@nodelib/fs.scandir": { "version": "2.1.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", @@ -1457,6 +1775,12 @@ "node": ">= 8" } }, + "node_modules/@sec-ant/readable-stream": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/@sec-ant/readable-stream/-/readable-stream-0.4.1.tgz", + "integrity": "sha512-831qok9r2t8AlxLko40y2ebgSDhenenCatLVeW/uBtnHPyhHOvG0C7TvfgecV+wHzIm5KUICgzmVpWS+IMEAeg==", + "license": "MIT" + }, "node_modules/@sinclair/typebox": { "version": "0.27.8", "resolved": "https://registry.npmjs.org/@sinclair/typebox/-/typebox-0.27.8.tgz", @@ -1464,6 +1788,18 @@ "dev": true, "license": "MIT" }, + "node_modules/@sindresorhus/merge-streams": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/@sindresorhus/merge-streams/-/merge-streams-4.0.0.tgz", + "integrity": "sha512-tlqY9xq5ukxTUZBmoOp+m61cqwQD5pHJtFY3Mn8CA8ps6yghLH/Hw8UPdqg4OLmFW3IFlcXnQNmo/dh8HzXYIQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + 
"funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/@sinonjs/commons": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/@sinonjs/commons/-/commons-3.0.1.tgz", @@ -1484,6 +1820,30 @@ "@sinonjs/commons": "^3.0.0" } }, + "node_modules/@tokenizer/inflate": { + "version": "0.2.7", + "resolved": "https://registry.npmjs.org/@tokenizer/inflate/-/inflate-0.2.7.tgz", + "integrity": "sha512-MADQgmZT1eKjp06jpI2yozxaU9uVs4GzzgSL+uEq7bVcJ9V1ZXQkeGNql1fsSI0gMy1vhvNTNbUqrx+pZfJVmg==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "fflate": "^0.8.2", + "token-types": "^6.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, + "node_modules/@tokenizer/token": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/@tokenizer/token/-/token-0.3.0.tgz", + "integrity": "sha512-OvjF+z51L3ov0OyAU0duzsYuvO01PH7x4t6DJx+guahgTnBHkhJdG7soQeTSFLWN3efnHyibZ4Z8l2EuWwJN3A==", + "license": "MIT" + }, "node_modules/@types/babel__core": { "version": "7.20.5", "resolved": "https://registry.npmjs.org/@types/babel__core/-/babel__core-7.20.5.tgz", @@ -1638,6 +1998,19 @@ "node": ">=6.5" } }, + "node_modules/accepts": { + "version": "1.3.8", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz", + "integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==", + "license": "MIT", + "dependencies": { + "mime-types": "~2.1.34", + "negotiator": "0.6.3" + }, + "engines": { + "node": ">= 0.6" + } + }, "node_modules/agentkeepalive": { "version": "4.6.0", "resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-4.6.0.tgz", @@ -1790,6 +2163,12 @@ "sprintf-js": "~1.0.2" } }, + "node_modules/array-flatten": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz", + "integrity": 
"sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==", + "license": "MIT" + }, "node_modules/array-union": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz", @@ -1949,6 +2328,60 @@ "node": ">=4" } }, + "node_modules/body-parser": { + "version": "1.20.3", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz", + "integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==", + "license": "MIT", + "dependencies": { + "bytes": "3.1.2", + "content-type": "~1.0.5", + "debug": "2.6.9", + "depd": "2.0.0", + "destroy": "1.2.0", + "http-errors": "2.0.0", + "iconv-lite": "0.4.24", + "on-finished": "2.4.1", + "qs": "6.13.0", + "raw-body": "2.5.2", + "type-is": "~1.6.18", + "unpipe": "1.0.0" + }, + "engines": { + "node": ">= 0.8", + "npm": "1.2.8000 || >= 1.4.16" + } + }, + "node_modules/body-parser/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/body-parser/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, + "node_modules/body-parser/node_modules/qs": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", + "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/boxen": { "version": 
"8.0.1", "resolved": "https://registry.npmjs.org/boxen/-/boxen-8.0.1.tgz", @@ -2050,6 +2483,12 @@ "node-int64": "^0.4.0" } }, + "node_modules/buffer-equal-constant-time": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz", + "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==", + "license": "BSD-3-Clause" + }, "node_modules/buffer-from": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", @@ -2057,6 +2496,15 @@ "dev": true, "license": "MIT" }, + "node_modules/bytes": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", + "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/call-bind-apply-helpers": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", @@ -2074,7 +2522,6 @@ "version": "1.0.4", "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", - "dev": true, "license": "MIT", "dependencies": { "call-bind-apply-helpers": "^1.0.2", @@ -2285,7 +2732,6 @@ "version": "8.0.1", "resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz", "integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==", - "dev": true, "license": "ISC", "dependencies": { "string-width": "^4.2.0", @@ -2300,7 +2746,6 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" 
@@ -2310,14 +2755,12 @@ "version": "8.0.0", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", - "dev": true, "license": "MIT" }, "node_modules/cliui/node_modules/string-width": { "version": "4.2.3", "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", - "dev": true, "license": "MIT", "dependencies": { "emoji-regex": "^8.0.0", @@ -2332,7 +2775,6 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -2345,7 +2787,6 @@ "version": "7.0.0", "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", - "dev": true, "license": "MIT", "dependencies": { "ansi-styles": "^4.0.0", @@ -2433,6 +2874,27 @@ "dev": true, "license": "MIT" }, + "node_modules/content-disposition": { + "version": "0.5.4", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz", + "integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==", + "license": "MIT", + "dependencies": { + "safe-buffer": "5.2.1" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/content-type": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", + "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, 
"node_modules/convert-source-map": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-2.0.0.tgz", @@ -2440,6 +2902,21 @@ "dev": true, "license": "MIT" }, + "node_modules/cookie": { + "version": "0.7.1", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz", + "integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie-signature": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", + "integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==", + "license": "MIT" + }, "node_modules/cookiejar": { "version": "2.1.4", "resolved": "https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.4.tgz", @@ -2447,6 +2924,19 @@ "dev": true, "license": "MIT" }, + "node_modules/cors": { + "version": "2.8.5", + "resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz", + "integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==", + "license": "MIT", + "dependencies": { + "object-assign": "^4", + "vary": "^1" + }, + "engines": { + "node": ">= 0.10" + } + }, "node_modules/create-jest": { "version": "29.7.0", "resolved": "https://registry.npmjs.org/create-jest/-/create-jest-29.7.0.tgz", @@ -2473,7 +2963,6 @@ "version": "7.0.6", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", - "dev": true, "license": "MIT", "dependencies": { "path-key": "^3.1.0", @@ -2504,7 +2993,6 @@ "version": "4.4.0", "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz", "integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==", - "dev": true, 
"license": "MIT", "dependencies": { "ms": "^2.1.3" @@ -2552,6 +3040,25 @@ "node": ">=0.4.0" } }, + "node_modules/depd": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", + "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/destroy": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", + "integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==", + "license": "MIT", + "engines": { + "node": ">= 0.8", + "npm": "1.2.8000 || >= 1.4.16" + } + }, "node_modules/detect-indent": { "version": "6.1.0", "resolved": "https://registry.npmjs.org/detect-indent/-/detect-indent-6.1.0.tgz", @@ -2632,6 +3139,21 @@ "node": ">= 0.4" } }, + "node_modules/ecdsa-sig-formatter": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz", + "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==", + "license": "Apache-2.0", + "dependencies": { + "safe-buffer": "^5.0.1" + } + }, + "node_modules/ee-first": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", + "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==", + "license": "MIT" + }, "node_modules/electron-to-chromium": { "version": "1.5.123", "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.123.tgz", @@ -2658,6 +3180,15 @@ "integrity": "sha512-EC+0oUMY1Rqm4O6LLrgjtYDvcVYTy7chDnM4Q7030tP4Kwj3u/pR6gP9ygnp2CJMK5Gq+9Q2oqmrFJAz01DXjw==", "license": "MIT" }, + "node_modules/encodeurl": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", + "integrity": 
"sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/enquirer": { "version": "2.4.1", "resolved": "https://registry.npmjs.org/enquirer/-/enquirer-2.4.1.tgz", @@ -2754,12 +3285,17 @@ "version": "3.2.0", "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", - "dev": true, "license": "MIT", "engines": { "node": ">=6" } }, + "node_modules/escape-html": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", + "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==", + "license": "MIT" + }, "node_modules/escape-string-regexp": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-2.0.0.tgz", @@ -2784,6 +3320,15 @@ "node": ">=4" } }, + "node_modules/etag": { + "version": "1.8.1", + "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", + "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, "node_modules/event-target-shim": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz", @@ -2793,6 +3338,27 @@ "node": ">=6" } }, + "node_modules/eventsource": { + "version": "3.0.6", + "resolved": "https://registry.npmjs.org/eventsource/-/eventsource-3.0.6.tgz", + "integrity": "sha512-l19WpE2m9hSuyP06+FbuUUf1G+R0SFLrtQfbRb9PRr+oimOfxQhgGCbVaXg5IvZyyTThJsxh6L/srkMiCeBPDA==", + "license": "MIT", + "dependencies": { + "eventsource-parser": "^3.0.1" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/eventsource-parser": { + "version": "3.0.1", + "resolved": 
"https://registry.npmjs.org/eventsource-parser/-/eventsource-parser-3.0.1.tgz", + "integrity": "sha512-VARTJ9CYeuQYb0pZEPbzi740OWFgpHe7AYJ2WFZVnUDUQp5Dk2yJUgF36YsZ81cOyxT0QxmXD2EQpapAouzWVA==", + "license": "MIT", + "engines": { + "node": ">=18.0.0" + } + }, "node_modules/execa": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/execa/-/execa-5.1.1.tgz", @@ -2866,6 +3432,97 @@ "node": "^14.15.0 || ^16.10.0 || >=18.0.0" } }, + "node_modules/express": { + "version": "4.21.2", + "resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz", + "integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==", + "license": "MIT", + "dependencies": { + "accepts": "~1.3.8", + "array-flatten": "1.1.1", + "body-parser": "1.20.3", + "content-disposition": "0.5.4", + "content-type": "~1.0.4", + "cookie": "0.7.1", + "cookie-signature": "1.0.6", + "debug": "2.6.9", + "depd": "2.0.0", + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "finalhandler": "1.3.1", + "fresh": "0.5.2", + "http-errors": "2.0.0", + "merge-descriptors": "1.0.3", + "methods": "~1.1.2", + "on-finished": "2.4.1", + "parseurl": "~1.3.3", + "path-to-regexp": "0.1.12", + "proxy-addr": "~2.0.7", + "qs": "6.13.0", + "range-parser": "~1.2.1", + "safe-buffer": "5.2.1", + "send": "0.19.0", + "serve-static": "1.16.2", + "setprototypeof": "1.2.0", + "statuses": "2.0.1", + "type-is": "~1.6.18", + "utils-merge": "1.0.1", + "vary": "~1.1.2" + }, + "engines": { + "node": ">= 0.10.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/express-rate-limit": { + "version": "7.5.0", + "resolved": "https://registry.npmjs.org/express-rate-limit/-/express-rate-limit-7.5.0.tgz", + "integrity": "sha512-eB5zbQh5h+VenMPM3fh+nw1YExi5nMr6HUCR62ELSP11huvxm/Uir1H1QEyTkk5QX6A58pX6NmaTMceKZ0Eodg==", + "license": "MIT", + "engines": { + "node": ">= 16" + }, + "funding": { + "url": 
"https://github.com/sponsors/express-rate-limit" + }, + "peerDependencies": { + "express": "^4.11 || 5 || ^5.0.0-beta.1" + } + }, + "node_modules/express/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/express/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, + "node_modules/express/node_modules/qs": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", + "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/extendable-error": { "version": "0.1.7", "resolved": "https://registry.npmjs.org/extendable-error/-/extendable-error-0.1.7.tgz", @@ -2919,6 +3576,131 @@ "dev": true, "license": "MIT" }, + "node_modules/fastmcp": { + "version": "1.20.5", + "resolved": "https://registry.npmjs.org/fastmcp/-/fastmcp-1.20.5.tgz", + "integrity": "sha512-jwcPgMF9bcE9qsEG82YMlAG26/n5CSYsr95e60ntqWWd+3kgTBbUIasB3HfpqHLTNaQuoX6/jl18fpDcybBjcQ==", + "license": "MIT", + "dependencies": { + "@modelcontextprotocol/sdk": "^1.6.0", + "execa": "^9.5.2", + "file-type": "^20.3.0", + "fuse.js": "^7.1.0", + "mcp-proxy": "^2.10.4", + "strict-event-emitter-types": "^2.0.0", + "undici": "^7.4.0", + "uri-templates": "^0.2.0", + "yargs": "^17.7.2", + "zod": "^3.24.2", + "zod-to-json-schema": "^3.24.3" + }, + "bin": { + "fastmcp": "dist/bin/fastmcp.js" + } + }, + 
"node_modules/fastmcp/node_modules/execa": { + "version": "9.5.2", + "resolved": "https://registry.npmjs.org/execa/-/execa-9.5.2.tgz", + "integrity": "sha512-EHlpxMCpHWSAh1dgS6bVeoLAXGnJNdR93aabr4QCGbzOM73o5XmRfM/e5FUqsw3aagP8S8XEWUWFAxnRBnAF0Q==", + "license": "MIT", + "dependencies": { + "@sindresorhus/merge-streams": "^4.0.0", + "cross-spawn": "^7.0.3", + "figures": "^6.1.0", + "get-stream": "^9.0.0", + "human-signals": "^8.0.0", + "is-plain-obj": "^4.1.0", + "is-stream": "^4.0.1", + "npm-run-path": "^6.0.0", + "pretty-ms": "^9.0.0", + "signal-exit": "^4.1.0", + "strip-final-newline": "^4.0.0", + "yoctocolors": "^2.0.0" + }, + "engines": { + "node": "^18.19.0 || >=20.5.0" + }, + "funding": { + "url": "https://github.com/sindresorhus/execa?sponsor=1" + } + }, + "node_modules/fastmcp/node_modules/get-stream": { + "version": "9.0.1", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-9.0.1.tgz", + "integrity": "sha512-kVCxPF3vQM/N0B1PmoqVUqgHP+EeVjmZSQn+1oCRPxd2P21P2F19lIgbR3HBosbB1PUhOAoctJnfEn2GbN2eZA==", + "license": "MIT", + "dependencies": { + "@sec-ant/readable-stream": "^0.4.1", + "is-stream": "^4.0.1" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/fastmcp/node_modules/human-signals": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-8.0.0.tgz", + "integrity": "sha512-/1/GPCpDUCCYwlERiYjxoczfP0zfvZMU/OWgQPMya9AbAE24vseigFdhAMObpc8Q4lc/kjutPfUddDYyAmejnA==", + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/fastmcp/node_modules/is-stream": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-4.0.1.tgz", + "integrity": "sha512-Dnz92NInDqYckGEUJv689RbRiTSEHCQ7wOVeALbkOz999YpqT46yMRIGtSNl2iCL1waAZSx40+h59NV/EwzV/A==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } 
+ }, + "node_modules/fastmcp/node_modules/npm-run-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-6.0.0.tgz", + "integrity": "sha512-9qny7Z9DsQU8Ou39ERsPU4OZQlSTP47ShQzuKZ6PRXpYLtIFgl/DEBYEXKlvcEa+9tHVcK8CF81Y2V72qaZhWA==", + "license": "MIT", + "dependencies": { + "path-key": "^4.0.0", + "unicorn-magic": "^0.3.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/fastmcp/node_modules/path-key": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-4.0.0.tgz", + "integrity": "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/fastmcp/node_modules/strip-final-newline": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-4.0.0.tgz", + "integrity": "sha512-aulFJcD6YK8V1G7iRB5tigAP4TsHBZZrOV8pjV++zdUwmeV8uzbY7yn6h9MswN62adStNZFuCIx4haBnRuMDaw==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/fastq": { "version": "1.19.1", "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.19.1.tgz", @@ -2971,6 +3753,12 @@ "node": ">= 8" } }, + "node_modules/fflate": { + "version": "0.8.2", + "resolved": "https://registry.npmjs.org/fflate/-/fflate-0.8.2.tgz", + "integrity": "sha512-cPJU47OaAoCbg0pBvzsgpTPhmhqI5eJjh/JIu8tPj5q+T7iLvW/JAYUqmE7KOB4R1ZyEhzBaIQpQpardBF5z8A==", + "license": "MIT" + }, "node_modules/figlet": { "version": "1.8.0", "resolved": "https://registry.npmjs.org/figlet/-/figlet-1.8.0.tgz", @@ -2983,6 +3771,39 @@ "node": ">= 0.4.0" } }, + "node_modules/figures": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/figures/-/figures-6.1.0.tgz", 
+ "integrity": "sha512-d+l3qxjSesT4V7v2fh+QnmFnUWv9lSpjarhShNTgBOfA0ttejbQUAlHLitbjkoRiDulW0OPoQPYIGhIC8ohejg==", + "license": "MIT", + "dependencies": { + "is-unicode-supported": "^2.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/file-type": { + "version": "20.4.1", + "resolved": "https://registry.npmjs.org/file-type/-/file-type-20.4.1.tgz", + "integrity": "sha512-hw9gNZXUfZ02Jo0uafWLaFVPter5/k2rfcrjFJJHX/77xtSDOfJuEFb6oKlFV86FLP1SuyHMW1PSk0U9M5tKkQ==", + "license": "MIT", + "dependencies": { + "@tokenizer/inflate": "^0.2.6", + "strtok3": "^10.2.0", + "token-types": "^6.0.0", + "uint8array-extras": "^1.4.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sindresorhus/file-type?sponsor=1" + } + }, "node_modules/fill-range": { "version": "7.1.1", "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz", @@ -2996,6 +3817,39 @@ "node": ">=8" } }, + "node_modules/finalhandler": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz", + "integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==", + "license": "MIT", + "dependencies": { + "debug": "2.6.9", + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "on-finished": "2.4.1", + "parseurl": "~1.3.3", + "statuses": "2.0.1", + "unpipe": "~1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/finalhandler/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/finalhandler/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": 
"sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, "node_modules/find-up": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", @@ -3071,6 +3925,24 @@ "url": "https://ko-fi.com/tunnckoCore/commissions" } }, + "node_modules/forwarded": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", + "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/fresh": { + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", + "integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, "node_modules/fs-extra": { "version": "7.0.1", "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-7.0.1.tgz", @@ -3117,6 +3989,15 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/fuse.js": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/fuse.js/-/fuse.js-7.1.0.tgz", + "integrity": "sha512-trLf4SzuuUxfusZADLINj+dE8clK1frKdmqiJNb1Es75fmI5oY6X2mxLVUciLLjxqw/xr72Dhy+lER6dGd02FQ==", + "license": "Apache-2.0", + "engines": { + "node": ">=10" + } + }, "node_modules/gensync": { "version": "1.0.0-beta.2", "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz", @@ -3131,7 +4012,6 @@ "version": "2.0.5", "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", - "dev": true, "license": "ISC", "engines": { "node": "6.* || 8.* || >= 10.*" @@ -3367,6 +4247,15 @@ "node": ">= 0.4" } }, + "node_modules/helmet": { + "version": "8.1.0", + "resolved": 
"https://registry.npmjs.org/helmet/-/helmet-8.1.0.tgz", + "integrity": "sha512-jOiHyAZsmnr8LqoPGmCjYAaiuWwjAPLgY8ZX2XrmHawt99/u1y6RgrZMTeoPfpUbV96HOalYgz1qzkRbw54Pmg==", + "license": "MIT", + "engines": { + "node": ">=18.0.0" + } + }, "node_modules/hexoid": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/hexoid/-/hexoid-2.0.0.tgz", @@ -3384,6 +4273,22 @@ "dev": true, "license": "MIT" }, + "node_modules/http-errors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", + "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", + "license": "MIT", + "dependencies": { + "depd": "2.0.0", + "inherits": "2.0.4", + "setprototypeof": "1.2.0", + "statuses": "2.0.1", + "toidentifier": "1.0.1" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/human-id": { "version": "4.1.1", "resolved": "https://registry.npmjs.org/human-id/-/human-id-4.1.1.tgz", @@ -3417,7 +4322,6 @@ "version": "0.4.24", "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", - "dev": true, "license": "MIT", "dependencies": { "safer-buffer": ">= 2.1.2 < 3" @@ -3426,6 +4330,26 @@ "node": ">=0.10.0" } }, + "node_modules/ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "BSD-3-Clause" + }, "node_modules/ignore": { "version": "5.3.2", "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz", @@ -3482,9 +4406,17 @@ "version": 
"2.0.4", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", - "dev": true, "license": "ISC" }, + "node_modules/ipaddr.js": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", + "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", + "license": "MIT", + "engines": { + "node": ">= 0.10" + } + }, "node_modules/is-arrayish": { "version": "0.2.1", "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz", @@ -3572,6 +4504,24 @@ "node": ">=0.12.0" } }, + "node_modules/is-plain-obj": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-4.1.0.tgz", + "integrity": "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-promise": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-4.0.0.tgz", + "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==", + "license": "MIT" + }, "node_modules/is-stream": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz", @@ -3624,7 +4574,6 @@ "version": "2.0.0", "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", - "dev": true, "license": "ISC" }, "node_modules/istanbul-lib-coverage": { @@ -4371,6 +5320,61 @@ "graceful-fs": "^4.1.6" } }, + "node_modules/jsonwebtoken": { + "version": "9.0.2", + "resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz", + "integrity": 
"sha512-PRp66vJ865SSqOlgqS8hujT5U4AOgMfhrwYIuIhfKaoSCZcirrmASQr8CX7cUg+RMih+hgznrjp99o+W4pJLHQ==", + "license": "MIT", + "dependencies": { + "jws": "^3.2.2", + "lodash.includes": "^4.3.0", + "lodash.isboolean": "^3.0.3", + "lodash.isinteger": "^4.0.4", + "lodash.isnumber": "^3.0.3", + "lodash.isplainobject": "^4.0.6", + "lodash.isstring": "^4.0.1", + "lodash.once": "^4.0.0", + "ms": "^2.1.1", + "semver": "^7.5.4" + }, + "engines": { + "node": ">=12", + "npm": ">=6" + } + }, + "node_modules/jsonwebtoken/node_modules/semver": { + "version": "7.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz", + "integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==", + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/jwa": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/jwa/-/jwa-1.4.1.tgz", + "integrity": "sha512-qiLX/xhEEFKUAJ6FiBMbes3w9ATzyk5W7Hvzpa/SLYdxNtng+gcurvrI7TbACjIXlsJyr05/S1oUhZrc63evQA==", + "license": "MIT", + "dependencies": { + "buffer-equal-constant-time": "1.0.1", + "ecdsa-sig-formatter": "1.0.11", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/jws": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/jws/-/jws-3.2.2.tgz", + "integrity": "sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==", + "license": "MIT", + "dependencies": { + "jwa": "^1.4.1", + "safe-buffer": "^5.0.1" + } + }, "node_modules/kleur": { "version": "3.0.3", "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz", @@ -4411,6 +5415,48 @@ "node": ">=8" } }, + "node_modules/lodash.includes": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz", + "integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==", + "license": "MIT" + }, + 
"node_modules/lodash.isboolean": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz", + "integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==", + "license": "MIT" + }, + "node_modules/lodash.isinteger": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz", + "integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==", + "license": "MIT" + }, + "node_modules/lodash.isnumber": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz", + "integrity": "sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==", + "license": "MIT" + }, + "node_modules/lodash.isplainobject": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz", + "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==", + "license": "MIT" + }, + "node_modules/lodash.isstring": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz", + "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==", + "license": "MIT" + }, + "node_modules/lodash.once": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz", + "integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==", + "license": "MIT" + }, "node_modules/lodash.startcase": { "version": "4.4.0", "resolved": "https://registry.npmjs.org/lodash.startcase/-/lodash.startcase-4.4.0.tgz", @@ -4516,6 +5562,38 @@ "node": ">= 0.4" } }, + "node_modules/mcp-proxy": { + "version": "2.12.0", + "resolved": 
"https://registry.npmjs.org/mcp-proxy/-/mcp-proxy-2.12.0.tgz", + "integrity": "sha512-hL2Y6EtK7vkgAOZxOQe9M4Z9g5xEnvR4ZYBKqFi/5tjhz/1jyNEz5NL87Uzv46k8iZQPVNEof/T6arEooBU5bQ==", + "license": "MIT", + "dependencies": { + "@modelcontextprotocol/sdk": "^1.6.0", + "eventsource": "^3.0.5", + "yargs": "^17.7.2" + }, + "bin": { + "mcp-proxy": "dist/bin/mcp-proxy.js" + } + }, + "node_modules/media-typer": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", + "integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/merge-descriptors": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz", + "integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/merge-stream": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", @@ -4537,7 +5615,6 @@ "version": "1.1.2", "resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz", "integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==", - "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" @@ -4659,6 +5736,15 @@ "dev": true, "license": "MIT" }, + "node_modules/negotiator": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz", + "integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, "node_modules/node-domexception": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz", @@ -4733,11 +5819,19 @@ 
"node": ">=8" } }, + "node_modules/object-assign": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", + "integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/object-inspect": { "version": "1.13.4", "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", - "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" @@ -4746,11 +5840,22 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/on-finished": { + "version": "2.4.1", + "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", + "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", + "license": "MIT", + "dependencies": { + "ee-first": "1.1.1" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/once": { "version": "1.4.0", "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", - "dev": true, "license": "ISC", "dependencies": { "wrappy": "1" @@ -4960,6 +6065,27 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/parse-ms": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/parse-ms/-/parse-ms-4.0.0.tgz", + "integrity": "sha512-TXfryirbmq34y8QBwgqCVLi+8oA3oWx2eAnSn62ITyEhEYaWRlVZ2DvMM9eZbMs/RfxPu/PK/aBLyGj4IrqMHw==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/parseurl": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", + "integrity": 
"sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", @@ -4984,7 +6110,6 @@ "version": "3.1.1", "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -4997,6 +6122,12 @@ "dev": true, "license": "MIT" }, + "node_modules/path-to-regexp": { + "version": "0.1.12", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz", + "integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==", + "license": "MIT" + }, "node_modules/path-type": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", @@ -5007,6 +6138,19 @@ "node": ">=8" } }, + "node_modules/peek-readable": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/peek-readable/-/peek-readable-7.0.0.tgz", + "integrity": "sha512-nri2TO5JE3/mRryik9LlHFT53cgHfRK0Lt0BAZQXku/AW3E6XLt2GaY8siWi7dvW/m1z0ecn+J+bpDa9ZN3IsQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, "node_modules/picocolors": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", @@ -5047,6 +6191,15 @@ "node": ">= 6" } }, + "node_modules/pkce-challenge": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/pkce-challenge/-/pkce-challenge-4.1.0.tgz", + "integrity": "sha512-ZBmhE1C9LcPoH9XZSdwiPtbPHZROwAnMy+kIFQVrnMCxY4Cudlz3gBOpzilgc0jOgRaiT3sIWfpMomW2ar2orQ==", + "license": "MIT", + "engines": { + "node": ">=16.20.0" + } + }, "node_modules/pkg-dir": { 
"version": "4.2.0", "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", @@ -5104,6 +6257,21 @@ "url": "https://github.com/chalk/ansi-styles?sponsor=1" } }, + "node_modules/pretty-ms": { + "version": "9.2.0", + "resolved": "https://registry.npmjs.org/pretty-ms/-/pretty-ms-9.2.0.tgz", + "integrity": "sha512-4yf0QO/sllf/1zbZWYnvWw3NxCQwLXKzIj0G849LSufP15BXKM0rbD2Z3wVnkMfjdn/CB0Dpp444gYAACdsplg==", + "license": "MIT", + "dependencies": { + "parse-ms": "^4.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/prompts": { "version": "2.4.2", "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz", @@ -5118,6 +6286,19 @@ "node": ">= 6" } }, + "node_modules/proxy-addr": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", + "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", + "license": "MIT", + "dependencies": { + "forwarded": "0.2.0", + "ipaddr.js": "1.9.1" + }, + "engines": { + "node": ">= 0.10" + } + }, "node_modules/pure-rand": { "version": "6.1.0", "resolved": "https://registry.npmjs.org/pure-rand/-/pure-rand-6.1.0.tgz", @@ -5139,7 +6320,6 @@ "version": "6.14.0", "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz", "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==", - "dev": true, "license": "BSD-3-Clause", "dependencies": { "side-channel": "^1.1.0" @@ -5189,6 +6369,30 @@ ], "license": "MIT" }, + "node_modules/range-parser": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", + "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/raw-body": { + "version": "2.5.2", + "resolved": 
"https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", + "integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", + "license": "MIT", + "dependencies": { + "bytes": "3.1.2", + "http-errors": "2.0.0", + "iconv-lite": "0.4.24", + "unpipe": "1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/react-is": { "version": "18.3.1", "resolved": "https://registry.npmjs.org/react-is/-/react-is-18.3.1.tgz", @@ -5233,7 +6437,6 @@ "version": "2.1.1", "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", - "dev": true, "license": "MIT", "engines": { "node": ">=0.10.0" @@ -5320,6 +6523,31 @@ "node": ">=0.10.0" } }, + "node_modules/router": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/router/-/router-2.2.0.tgz", + "integrity": "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "depd": "^2.0.0", + "is-promise": "^4.0.0", + "parseurl": "^1.3.3", + "path-to-regexp": "^8.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/router/node_modules/path-to-regexp": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-8.2.0.tgz", + "integrity": "sha512-TdrF7fW9Rphjq4RjrW0Kp2AW0Ahwu9sRGTkS6bvDi0SCwZlEZYmcfDbEsTz8RVk0EHIS/Vd1bv3JhG+1xZuAyQ==", + "license": "MIT", + "engines": { + "node": ">=16" + } + }, "node_modules/run-parallel": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", @@ -5344,11 +6572,30 @@ "queue-microtask": "^1.2.2" } }, + "node_modules/safe-buffer": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": 
"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT" + }, "node_modules/safer-buffer": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", - "dev": true, "license": "MIT" }, "node_modules/semver": { @@ -5361,11 +6608,91 @@ "semver": "bin/semver.js" } }, + "node_modules/send": { + "version": "0.19.0", + "resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz", + "integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==", + "license": "MIT", + "dependencies": { + "debug": "2.6.9", + "depd": "2.0.0", + "destroy": "1.2.0", + "encodeurl": "~1.0.2", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "fresh": "0.5.2", + "http-errors": "2.0.0", + "mime": "1.6.0", + "ms": "2.1.3", + "on-finished": "2.4.1", + "range-parser": "~1.2.1", + "statuses": "2.0.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/send/node_modules/debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", + "dependencies": { + "ms": "2.0.0" + } + }, + "node_modules/send/node_modules/debug/node_modules/ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" + }, + "node_modules/send/node_modules/encodeurl": { + "version": "1.0.2", + 
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", + "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/send/node_modules/mime": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", + "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "license": "MIT", + "bin": { + "mime": "cli.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/serve-static": { + "version": "1.16.2", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz", + "integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==", + "license": "MIT", + "dependencies": { + "encodeurl": "~2.0.0", + "escape-html": "~1.0.3", + "parseurl": "~1.3.3", + "send": "0.19.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/setprototypeof": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", + "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==", + "license": "ISC" + }, "node_modules/shebang-command": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", - "dev": true, "license": "MIT", "dependencies": { "shebang-regex": "^3.0.0" @@ -5378,7 +6705,6 @@ "version": "3.0.0", "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -5388,7 +6714,6 @@ "version": "1.1.0", "resolved": 
"https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", - "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", @@ -5408,7 +6733,6 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", - "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", @@ -5425,7 +6749,6 @@ "version": "1.0.1", "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", - "dev": true, "license": "MIT", "dependencies": { "call-bound": "^1.0.2", @@ -5444,7 +6767,6 @@ "version": "1.0.2", "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", - "dev": true, "license": "MIT", "dependencies": { "call-bound": "^1.0.2", @@ -5541,6 +6863,15 @@ "node": ">=10" } }, + "node_modules/statuses": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", + "integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/stdin-discarder": { "version": "0.2.2", "resolved": "https://registry.npmjs.org/stdin-discarder/-/stdin-discarder-0.2.2.tgz", @@ -5553,6 +6884,12 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/strict-event-emitter-types": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/strict-event-emitter-types/-/strict-event-emitter-types-2.0.0.tgz", + "integrity": 
"sha512-Nk/brWYpD85WlOgzw5h173aci0Teyv8YdIAEtV+N88nDB0dLlazZyJMIsN6eo1/AR61l+p6CJTG1JIyFaoNEEA==", + "license": "ISC" + }, "node_modules/string-length": { "version": "4.0.2", "resolved": "https://registry.npmjs.org/string-length/-/string-length-4.0.2.tgz", @@ -5655,6 +6992,23 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/strtok3": { + "version": "10.2.2", + "resolved": "https://registry.npmjs.org/strtok3/-/strtok3-10.2.2.tgz", + "integrity": "sha512-Xt18+h4s7Z8xyZ0tmBoRmzxcop97R4BAh+dXouUDCYn+Em+1P3qpkUfI5ueWLT8ynC5hZ+q4iPEmGG1urvQGBg==", + "license": "MIT", + "dependencies": { + "@tokenizer/token": "^0.3.0", + "peek-readable": "^7.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, "node_modules/superagent": { "version": "9.0.2", "resolved": "https://registry.npmjs.org/superagent/-/superagent-9.0.2.tgz", @@ -5792,6 +7146,32 @@ "node": ">=8.0" } }, + "node_modules/toidentifier": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", + "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", + "license": "MIT", + "engines": { + "node": ">=0.6" + } + }, + "node_modules/token-types": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/token-types/-/token-types-6.0.0.tgz", + "integrity": "sha512-lbDrTLVsHhOMljPscd0yitpozq7Ga2M5Cvez5AjGg8GASBjtt6iERCAJ93yommPmz62fb45oFIXHEZ3u9bfJEA==", + "license": "MIT", + "dependencies": { + "@tokenizer/token": "^0.3.0", + "ieee754": "^1.2.1" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/Borewit" + } + }, "node_modules/type-detect": { "version": "4.0.8", "resolved": "https://registry.npmjs.org/type-detect/-/type-detect-4.0.8.tgz", @@ -5814,12 +7194,58 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/type-is": 
{ + "version": "1.6.18", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", + "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", + "license": "MIT", + "dependencies": { + "media-typer": "0.3.0", + "mime-types": "~2.1.24" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/uint8array-extras": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/uint8array-extras/-/uint8array-extras-1.4.0.tgz", + "integrity": "sha512-ZPtzy0hu4cZjv3z5NW9gfKnNLjoz4y6uv4HlelAjDK7sY/xOkKZv9xK/WQpcsBB3jEybChz9DPC2U/+cusjJVQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/undici": { + "version": "7.6.0", + "resolved": "https://registry.npmjs.org/undici/-/undici-7.6.0.tgz", + "integrity": "sha512-gaFsbThjrDGvAaD670r81RZro/s6H2PVZF640Qn0p5kZK+/rim7/mmyfp2W7VB5vOMaFM8vuFBJUaMlaZTYHlA==", + "license": "MIT", + "engines": { + "node": ">=20.18.1" + } + }, "node_modules/undici-types": { "version": "5.26.5", "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz", "integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==", "license": "MIT" }, + "node_modules/unicorn-magic": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/unicorn-magic/-/unicorn-magic-0.3.0.tgz", + "integrity": "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/universalify": { "version": "0.1.2", "resolved": "https://registry.npmjs.org/universalify/-/universalify-0.1.2.tgz", @@ -5830,6 +7256,15 @@ "node": ">= 4.0.0" } }, + "node_modules/unpipe": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", 
+ "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/update-browserslist-db": { "version": "1.1.3", "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.3.tgz", @@ -5861,6 +7296,21 @@ "browserslist": ">= 4.21.0" } }, + "node_modules/uri-templates": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/uri-templates/-/uri-templates-0.2.0.tgz", + "integrity": "sha512-EWkjYEN0L6KOfEoOH6Wj4ghQqU7eBZMJqRHQnxQAq+dSEzRPClkWjf8557HkWQXF6BrAUoLSAyy9i3RVTliaNg==", + "license": "http://geraintluff.github.io/tv4/LICENSE.txt" + }, + "node_modules/utils-merge": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz", + "integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==", + "license": "MIT", + "engines": { + "node": ">= 0.4.0" + } + }, "node_modules/v8-to-istanbul": { "version": "9.3.0", "resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-9.3.0.tgz", @@ -5876,6 +7326,15 @@ "node": ">=10.12.0" } }, + "node_modules/vary": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", + "integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, "node_modules/walker": { "version": "1.0.8", "resolved": "https://registry.npmjs.org/walker/-/walker-1.0.8.tgz", @@ -5899,7 +7358,6 @@ "version": "2.0.2", "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", - "dev": true, "license": "ISC", "dependencies": { "isexe": "^2.0.0" @@ -5959,7 +7417,6 @@ "version": "1.0.2", "resolved": 
"https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", - "dev": true, "license": "ISC" }, "node_modules/write-file-atomic": { @@ -5987,7 +7444,6 @@ "version": "5.0.8", "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==", - "dev": true, "license": "ISC", "engines": { "node": ">=10" @@ -6004,7 +7460,6 @@ "version": "17.7.2", "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", "integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==", - "dev": true, "license": "MIT", "dependencies": { "cliui": "^8.0.1", @@ -6023,7 +7478,6 @@ "version": "21.1.1", "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz", "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==", - "dev": true, "license": "ISC", "engines": { "node": ">=12" @@ -6033,7 +7487,6 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -6043,14 +7496,12 @@ "version": "8.0.0", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", - "dev": true, "license": "MIT" }, "node_modules/yargs/node_modules/string-width": { "version": "4.2.3", "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", - "dev": true, "license": "MIT", "dependencies": { "emoji-regex": 
"^8.0.0", @@ -6065,7 +7516,6 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -6086,6 +7536,36 @@ "funding": { "url": "https://github.com/sponsors/sindresorhus" } + }, + "node_modules/yoctocolors": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/yoctocolors/-/yoctocolors-2.1.1.tgz", + "integrity": "sha512-GQHQqAopRhwU8Kt1DDM8NjibDXHC8eoh1erhGAJPEyveY9qqVeXvVikNKrDz69sHowPMorbPUrH/mx8c50eiBQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/zod": { + "version": "3.24.2", + "resolved": "https://registry.npmjs.org/zod/-/zod-3.24.2.tgz", + "integrity": "sha512-lY7CDW43ECgW9u1TcT3IoXHflywfVqDYze4waEz812jR/bZ8FHDsl7pFQoSZTz5N+2NqRXs8GBwnAwo3ZNxqhQ==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/colinhacks" + } + }, + "node_modules/zod-to-json-schema": { + "version": "3.24.5", + "resolved": "https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.24.5.tgz", + "integrity": "sha512-/AuWwMP+YqiPbsJx5D6TfgRTc4kTLjsh5SOcd4bLsfUg2RcEXrFMJl1DGgdHy2aCfsIA/cr/1JM0xcB2GZji8g==", + "license": "ISC", + "peerDependencies": { + "zod": "^3.24.1" + } } } } diff --git a/package.json b/package.json index 88db54cc..bf085c98 100644 --- a/package.json +++ b/package.json @@ -6,7 +6,8 @@ "type": "module", "bin": { "task-master": "bin/task-master.js", - "task-master-init": "bin/task-master-init.js" + "task-master-init": "bin/task-master-init.js", + "task-master-mcp-server": "mcp-server/server.js" }, "scripts": { "test": "node --experimental-vm-modules node_modules/.bin/jest", @@ -26,7 +27,9 @@ "development", "cursor", "anthropic", - "llm" + "llm", + "mcp", + "context" ], "author": "Eyal Toledano", 
   "license": "MIT",
@@ -36,11 +39,17 @@
     "chalk": "^4.1.2",
     "cli-table3": "^0.6.5",
     "commander": "^11.1.0",
+    "cors": "^2.8.5",
     "dotenv": "^16.3.1",
+    "express": "^4.21.2",
+    "fastmcp": "^1.20.5",
     "figlet": "^1.8.0",
     "gradient-string": "^3.0.0",
+    "helmet": "^8.1.0",
+    "jsonwebtoken": "^9.0.2",
     "openai": "^4.89.0",
-    "ora": "^8.2.0"
+    "ora": "^8.2.0",
+    "fuse.js": "^7.0.0"
   },
   "engines": {
     "node": ">=14.0.0"
@@ -61,7 +70,8 @@
     ".cursor/**",
     "README-task-master.md",
     "index.js",
-    "bin/**"
+    "bin/**",
+    "mcp-server/**"
   ],
   "overrides": {
     "node-fetch": "^3.3.2",
diff --git a/tasks/task_023.txt b/tasks/task_023.txt
index a34085a0..35e721d4 100644
--- a/tasks/task_023.txt
+++ b/tasks/task_023.txt
@@ -56,3 +56,118 @@ Testing for the MCP server functionality should include:
 - Test for common API vulnerabilities (injection, CSRF, etc.)
 
 All tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman.
+
+# Subtasks:
+## 1. Create Core MCP Server Module and Basic Structure [done]
+### Dependencies: None
+### Description: Create the foundation for the MCP server implementation by setting up the core module structure, configuration, and server initialization.
+### Details:
+Implementation steps:
+1. Create a new module `mcp-server.js` with the basic server structure
+2. Implement configuration options to enable/disable the MCP server
+3. Set up Express.js routes for the required MCP endpoints (/context, /models, /execute)
+4. Create middleware for request validation and response formatting
+5. Implement basic error handling according to MCP specifications
+6. Add logging infrastructure for MCP operations
+7. Create initialization and shutdown procedures for the MCP server
+8. Set up integration with the main Task Master application
+
+Testing approach:
+- Unit tests for configuration loading and validation
+- Test server initialization and shutdown procedures
+- Verify that routes are properly registered
+- Test basic error handling with invalid requests
+
+## 2. Implement Context Management System [done]
+### Dependencies: 23.1
+### Description: Develop a robust context management system that can efficiently store, retrieve, and manipulate context data according to the MCP specification.
+### Details:
+Implementation steps:
+1. Design and implement data structures for context storage
+2. Create methods for context creation, retrieval, updating, and deletion
+3. Implement context windowing and truncation algorithms for handling size limits
+4. Add support for context metadata and tagging
+5. Create utilities for context serialization and deserialization
+6. Implement efficient indexing for quick context lookups
+7. Add support for context versioning and history
+8. Develop mechanisms for context persistence (in-memory, disk-based, or database)
+
+Testing approach:
+- Unit tests for all context operations (CRUD)
+- Performance tests for context retrieval with various sizes
+- Test context windowing and truncation with edge cases
+- Verify metadata handling and tagging functionality
+- Test persistence mechanisms with simulated failures
+
+## 3. Implement MCP Endpoints and API Handlers [done]
+### Dependencies: 23.1, 23.2
+### Description: Develop the complete API handlers for all required MCP endpoints, ensuring they follow the protocol specification and integrate with the context management system.
+### Details:
+Implementation steps:
+1. Implement the `/context` endpoint for:
+   - GET: retrieving existing context
+   - POST: creating new context
+   - PUT: updating existing context
+   - DELETE: removing context
+2. Implement the `/models` endpoint to list available models
+3. Develop the `/execute` endpoint for performing operations with context
+4. Create request validators for each endpoint
+5. Implement response formatters according to MCP specifications
+6. Add detailed error handling for each endpoint
+7. Set up proper HTTP status codes for different scenarios
+8. Implement pagination for endpoints that return lists
+
+Testing approach:
+- Unit tests for each endpoint handler
+- Integration tests with mock context data
+- Test various request formats and edge cases
+- Verify response formats match MCP specifications
+- Test error handling with invalid inputs
+- Benchmark endpoint performance
+
+## 4. Implement Authentication and Authorization System [pending]
+### Dependencies: 23.1, 23.3
+### Description: Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality.
+### Details:
+Implementation steps:
+1. Design authentication scheme (API keys, OAuth, JWT, etc.)
+2. Implement authentication middleware for all MCP endpoints
+3. Create an API key management system for client applications
+4. Develop role-based access control for different operations
+5. Implement rate limiting to prevent abuse
+6. Add secure token validation and handling
+7. Create endpoints for managing client credentials
+8. Implement audit logging for authentication events
+
+Testing approach:
+- Security testing for authentication mechanisms
+- Test access control with various permission levels
+- Verify rate limiting functionality
+- Test token validation with valid and invalid tokens
+- Simulate unauthorized access attempts
+- Verify audit logs contain appropriate information
+
+## 5. Optimize Performance and Finalize Documentation [pending]
+### Dependencies: 23.1, 23.2, 23.3, 23.4
+### Description: Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users.
+### Details:
+Implementation steps:
+1. Profile the MCP server to identify performance bottlenecks
+2. Implement caching mechanisms for frequently accessed contexts
+3. Optimize context serialization and deserialization
+4. Add connection pooling for database operations (if applicable)
+5. Implement request batching for bulk operations
+6. Create comprehensive API documentation with examples
+7. Add setup and configuration guides to the Task Master documentation
+8. Create example client implementations
+9. Add monitoring endpoints for server health and metrics
+10. Implement graceful degradation under high load
+
+Testing approach:
+- Load testing with simulated concurrent clients
+- Measure response times for various operations
+- Test with large context sizes to verify performance
+- Verify documentation accuracy with sample requests
+- Test monitoring endpoints
+- Perform stress testing to identify failure points
+
diff --git a/tasks/tasks.json b/tasks/tasks.json
index a7d6c333..ea4c7082 100644
--- a/tasks/tasks.json
+++ b/tasks/tasks.json
@@ -1343,8 +1343,68 @@
         22
       ],
       "priority": "medium",
-      "details": "This task involves implementing the Model Context Protocol server capabilities within Task Master using FastMCP. The implementation should:\n\n1. Use FastMCP to create the MCP server module (`mcp-server.ts` or equivalent)\n2. Implement the required MCP endpoints using FastMCP:\n   - `/context` - For retrieving and updating context\n   - `/models` - For listing available models\n   - `/execute` - For executing operations with context\n3. Utilize FastMCP's built-in features for context management, including:\n   - Efficient context storage and retrieval\n   - Context windowing and truncation\n   - Metadata and tagging support\n4. Add authentication and authorization mechanisms using FastMCP capabilities\n5. Implement error handling and response formatting as per MCP specifications\n6. Configure Task Master to enable/disable MCP server functionality via FastMCP settings\n7. 
Add documentation on using Task Master as an MCP server with FastMCP\n8. Ensure compatibility with existing MCP clients by adhering to FastMCP's compliance features\n9. Optimize performance using FastMCP tools, especially for context retrieval operations\n10. Add logging for MCP server operations using FastMCP's logging utilities\n\nThe implementation should follow RESTful API design principles and leverage FastMCP's concurrency handling for multiple client requests. Consider using TypeScript for better type safety and integration with FastMCP[1][2].", - "testStrategy": "Testing for the MCP server functionality should include:\n\n1. Unit tests:\n - Test each MCP endpoint handler function independently using FastMCP\n - Verify context storage and retrieval mechanisms provided by FastMCP\n - Test authentication and authorization logic\n - Validate error handling for various failure scenarios\n\n2. Integration tests:\n - Set up a test MCP server instance using FastMCP\n - Test complete request/response cycles for each endpoint\n - Verify context persistence across multiple requests\n - Test with various payload sizes and content types\n\n3. Compatibility tests:\n - Test with existing MCP client libraries\n - Verify compliance with the MCP specification\n - Ensure backward compatibility with any MCP versions supported by FastMCP\n\n4. Performance tests:\n - Measure response times for context operations with various context sizes\n - Test concurrent request handling using FastMCP's concurrency tools\n - Verify memory usage remains within acceptable limits during extended operation\n\n5. Security tests:\n - Verify authentication mechanisms cannot be bypassed\n - Test for common API vulnerabilities (injection, CSRF, etc.)\n\nAll tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman." 
+ "details": "This task involves implementing the Model Context Protocol server capabilities within Task Master. The implementation should:\n\n1. Create a new module `mcp-server.js` that implements the core MCP server functionality\n2. Implement the required MCP endpoints:\n - `/context` - For retrieving and updating context\n - `/models` - For listing available models\n - `/execute` - For executing operations with context\n3. Develop a context management system that can:\n - Store and retrieve context data efficiently\n - Handle context windowing and truncation when limits are reached\n - Support context metadata and tagging\n4. Add authentication and authorization mechanisms for MCP clients\n5. Implement proper error handling and response formatting according to MCP specifications\n6. Create configuration options in Task Master to enable/disable the MCP server functionality\n7. Add documentation for how to use Task Master as an MCP server\n8. Ensure the implementation is compatible with existing MCP clients\n9. Optimize for performance, especially for context retrieval operations\n10. Add logging for MCP server operations\n\nThe implementation should follow RESTful API design principles and should be able to handle concurrent requests from multiple clients.", + "testStrategy": "Testing for the MCP server functionality should include:\n\n1. Unit tests:\n - Test each MCP endpoint handler function independently\n - Verify context storage and retrieval mechanisms\n - Test authentication and authorization logic\n - Validate error handling for various failure scenarios\n\n2. Integration tests:\n - Set up a test MCP server instance\n - Test complete request/response cycles for each endpoint\n - Verify context persistence across multiple requests\n - Test with various payload sizes and content types\n\n3. 
Compatibility tests:\n - Test with existing MCP client libraries\n - Verify compliance with the MCP specification\n - Ensure backward compatibility with any MCP versions supported\n\n4. Performance tests:\n - Measure response times for context operations with various context sizes\n - Test concurrent request handling\n - Verify memory usage remains within acceptable limits during extended operation\n\n5. Security tests:\n - Verify authentication mechanisms cannot be bypassed\n - Test for common API vulnerabilities (injection, CSRF, etc.)\n\nAll tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman.", + "subtasks": [ + { + "id": 1, + "title": "Create Core MCP Server Module and Basic Structure", + "description": "Create the foundation for the MCP server implementation by setting up the core module structure, configuration, and server initialization.", + "dependencies": [], + "details": "Implementation steps:\n1. Create a new module `mcp-server.js` with the basic server structure\n2. Implement configuration options to enable/disable the MCP server\n3. Set up Express.js routes for the required MCP endpoints (/context, /models, /execute)\n4. Create middleware for request validation and response formatting\n5. Implement basic error handling according to MCP specifications\n6. Add logging infrastructure for MCP operations\n7. Create initialization and shutdown procedures for the MCP server\n8. 
Set up integration with the main Task Master application\n\nTesting approach:\n- Unit tests for configuration loading and validation\n- Test server initialization and shutdown procedures\n- Verify that routes are properly registered\n- Test basic error handling with invalid requests", + "status": "done", + "parentTaskId": 23 + }, + { + "id": 2, + "title": "Implement Context Management System", + "description": "Develop a robust context management system that can efficiently store, retrieve, and manipulate context data according to the MCP specification.", + "dependencies": [ + 1 + ], + "details": "Implementation steps:\n1. Design and implement data structures for context storage\n2. Create methods for context creation, retrieval, updating, and deletion\n3. Implement context windowing and truncation algorithms for handling size limits\n4. Add support for context metadata and tagging\n5. Create utilities for context serialization and deserialization\n6. Implement efficient indexing for quick context lookups\n7. Add support for context versioning and history\n8. Develop mechanisms for context persistence (in-memory, disk-based, or database)\n\nTesting approach:\n- Unit tests for all context operations (CRUD)\n- Performance tests for context retrieval with various sizes\n- Test context windowing and truncation with edge cases\n- Verify metadata handling and tagging functionality\n- Test persistence mechanisms with simulated failures", + "status": "done", + "parentTaskId": 23 + }, + { + "id": 3, + "title": "Implement MCP Endpoints and API Handlers", + "description": "Develop the complete API handlers for all required MCP endpoints, ensuring they follow the protocol specification and integrate with the context management system.", + "dependencies": [ + 1, + 2 + ], + "details": "Implementation steps:\n1. 
Implement the `/context` endpoint for:\n - GET: retrieving existing context\n - POST: creating new context\n - PUT: updating existing context\n - DELETE: removing context\n2. Implement the `/models` endpoint to list available models\n3. Develop the `/execute` endpoint for performing operations with context\n4. Create request validators for each endpoint\n5. Implement response formatters according to MCP specifications\n6. Add detailed error handling for each endpoint\n7. Set up proper HTTP status codes for different scenarios\n8. Implement pagination for endpoints that return lists\n\nTesting approach:\n- Unit tests for each endpoint handler\n- Integration tests with mock context data\n- Test various request formats and edge cases\n- Verify response formats match MCP specifications\n- Test error handling with invalid inputs\n- Benchmark endpoint performance", + "status": "done", + "parentTaskId": 23 + }, + { + "id": 4, + "title": "Implement Authentication and Authorization System", + "description": "Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality.", + "dependencies": [ + 1, + 3 + ], + "details": "Implementation steps:\n1. Design authentication scheme (API keys, OAuth, JWT, etc.)\n2. Implement authentication middleware for all MCP endpoints\n3. Create an API key management system for client applications\n4. Develop role-based access control for different operations\n5. Implement rate limiting to prevent abuse\n6. Add secure token validation and handling\n7. Create endpoints for managing client credentials\n8. 
Implement audit logging for authentication events\n\nTesting approach:\n- Security testing for authentication mechanisms\n- Test access control with various permission levels\n- Verify rate limiting functionality\n- Test token validation with valid and invalid tokens\n- Simulate unauthorized access attempts\n- Verify audit logs contain appropriate information", + "status": "pending", + "parentTaskId": 23 + }, + { + "id": 5, + "title": "Optimize Performance and Finalize Documentation", + "description": "Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users.", + "dependencies": [ + 1, + 2, + 3, + 4 + ], + "details": "Implementation steps:\n1. Profile the MCP server to identify performance bottlenecks\n2. Implement caching mechanisms for frequently accessed contexts\n3. Optimize context serialization and deserialization\n4. Add connection pooling for database operations (if applicable)\n5. Implement request batching for bulk operations\n6. Create comprehensive API documentation with examples\n7. Add setup and configuration guides to the Task Master documentation\n8. Create example client implementations\n9. Add monitoring endpoints for server health and metrics\n10. 
Implement graceful degradation under high load\n\nTesting approach:\n- Load testing with simulated concurrent clients\n- Measure response times for various operations\n- Test with large context sizes to verify performance\n- Verify documentation accuracy with sample requests\n- Test monitoring endpoints\n- Perform stress testing to identify failure points", + "status": "pending", + "parentTaskId": 23 + } + ] }, { "id": 24, From e91e65479e69d81ba17645fedfbbd018c4550ce7 Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Tue, 25 Mar 2025 00:39:20 +0000 Subject: [PATCH 12/16] feat(wip): set up mcp server and tools, but mcp on cursor not working despite working in inspector --- .cursor/mcp.json | 8 + mcp-server/server.js | 12 +- mcp-server/src/api-handlers.js | 970 -------------------------- mcp-server/src/auth.js | 285 -------- mcp-server/src/context-manager.js | 873 ----------------------- mcp-server/src/index.js | 314 +-------- mcp-server/src/logger.js | 68 ++ mcp-server/src/tools/addTask.js | 56 ++ mcp-server/src/tools/expandTask.js | 66 ++ mcp-server/src/tools/index.js | 29 + mcp-server/src/tools/listTasks.js | 51 ++ mcp-server/src/tools/nextTask.js | 45 ++ mcp-server/src/tools/setTaskStatus.js | 52 ++ mcp-server/src/tools/showTask.js | 45 ++ mcp-server/src/tools/utils.js | 90 +++ 15 files changed, 529 insertions(+), 2435 deletions(-) create mode 100644 .cursor/mcp.json delete mode 100644 mcp-server/src/api-handlers.js delete mode 100644 mcp-server/src/auth.js delete mode 100644 mcp-server/src/context-manager.js create mode 100644 mcp-server/src/logger.js create mode 100644 mcp-server/src/tools/addTask.js create mode 100644 mcp-server/src/tools/expandTask.js create mode 100644 mcp-server/src/tools/index.js create mode 100644 mcp-server/src/tools/listTasks.js create mode 100644 mcp-server/src/tools/nextTask.js create mode 100644 mcp-server/src/tools/setTaskStatus.js create mode 100644 
mcp-server/src/tools/showTask.js create mode 100644 mcp-server/src/tools/utils.js diff --git a/.cursor/mcp.json b/.cursor/mcp.json new file mode 100644 index 00000000..3b7160ae --- /dev/null +++ b/.cursor/mcp.json @@ -0,0 +1,8 @@ +{ + "mcpServers": { + "taskMaster": { + "command": "node", + "args": ["mcp-server/server.js"] + } + } +} diff --git a/mcp-server/server.js b/mcp-server/server.js index ed5c3c69..dfca0f55 100755 --- a/mcp-server/server.js +++ b/mcp-server/server.js @@ -2,15 +2,11 @@ import TaskMasterMCPServer from "./src/index.js"; import dotenv from "dotenv"; -import { logger } from "../scripts/modules/utils.js"; +import logger from "./src/logger.js"; // Load environment variables dotenv.config(); -// Constants -const PORT = process.env.MCP_SERVER_PORT || 3000; -const HOST = process.env.MCP_SERVER_HOST || "localhost"; - /** * Start the MCP server */ @@ -19,21 +15,17 @@ async function startServer() { // Handle graceful shutdown process.on("SIGINT", async () => { - logger.info("Received SIGINT, shutting down gracefully..."); await server.stop(); process.exit(0); }); process.on("SIGTERM", async () => { - logger.info("Received SIGTERM, shutting down gracefully..."); await server.stop(); process.exit(0); }); try { - await server.start({ port: PORT, host: HOST }); - logger.info(`MCP server running at http://${HOST}:${PORT}`); - logger.info("Press Ctrl+C to stop"); + await server.start(); } catch (error) { logger.error(`Failed to start MCP server: ${error.message}`); process.exit(1); diff --git a/mcp-server/src/api-handlers.js b/mcp-server/src/api-handlers.js deleted file mode 100644 index ead546f2..00000000 --- a/mcp-server/src/api-handlers.js +++ /dev/null @@ -1,970 +0,0 @@ -import { z } from "zod"; -import { logger } from "../../scripts/modules/utils.js"; -import ContextManager from "./context-manager.js"; - -/** - * MCP API Handlers class - * Implements handlers for the MCP API endpoints - */ -class MCPApiHandlers { - constructor(server) { - this.server = 
server; - this.contextManager = new ContextManager(); - this.logger = logger; - - // Bind methods - this.registerEndpoints = this.registerEndpoints.bind(this); - this.setupContextHandlers = this.setupContextHandlers.bind(this); - this.setupModelHandlers = this.setupModelHandlers.bind(this); - this.setupExecuteHandlers = this.setupExecuteHandlers.bind(this); - - // Register all handlers - this.registerEndpoints(); - } - - /** - * Register all MCP API endpoints - */ - registerEndpoints() { - this.setupContextHandlers(); - this.setupModelHandlers(); - this.setupExecuteHandlers(); - - this.logger.info("Registered all MCP API endpoint handlers"); - } - - /** - * Set up handlers for the /context endpoint - */ - setupContextHandlers() { - // Add a tool to create context - this.server.addTool({ - name: "createContext", - description: - "Create a new context with the given data and optional metadata", - parameters: z.object({ - contextId: z.string().describe("Unique identifier for the context"), - data: z.any().describe("The context data to store"), - metadata: z - .object({}) - .optional() - .describe("Optional metadata for the context"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.createContext( - args.contextId, - args.data, - args.metadata || {} - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error creating context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to get context - this.server.addTool({ - name: "getContext", - description: - "Retrieve a context by its ID, optionally a specific version", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to retrieve"), - versionId: z - .string() - .optional() - .describe("Optional specific version ID to retrieve"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.getContext( - args.contextId, - args.versionId - ); - return { 
success: true, context }; - } catch (error) { - this.logger.error(`Error retrieving context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to update context - this.server.addTool({ - name: "updateContext", - description: "Update an existing context with new data and/or metadata", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to update"), - data: z - .any() - .optional() - .describe("New data to update the context with"), - metadata: z - .object({}) - .optional() - .describe("New metadata to update the context with"), - createNewVersion: z - .boolean() - .optional() - .default(true) - .describe( - "Whether to create a new version (true) or update in place (false)" - ), - }), - execute: async (args) => { - try { - const context = await this.contextManager.updateContext( - args.contextId, - args.data || {}, - args.metadata || {}, - args.createNewVersion - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error updating context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to delete context - this.server.addTool({ - name: "deleteContext", - description: "Delete a context by its ID", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to delete"), - }), - execute: async (args) => { - try { - const result = await this.contextManager.deleteContext( - args.contextId - ); - return { success: result }; - } catch (error) { - this.logger.error(`Error deleting context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to list contexts with pagination and advanced filtering - this.server.addTool({ - name: "listContexts", - description: - "List available contexts with filtering, pagination and sorting", - parameters: z.object({ - // Filtering parameters - filters: z - .object({ - tag: 
z.string().optional().describe("Filter contexts by tag"), - metadataKey: z - .string() - .optional() - .describe("Filter contexts by metadata key"), - metadataValue: z - .string() - .optional() - .describe("Filter contexts by metadata value"), - createdAfter: z - .string() - .optional() - .describe("Filter contexts created after date (ISO format)"), - updatedAfter: z - .string() - .optional() - .describe("Filter contexts updated after date (ISO format)"), - }) - .optional() - .describe("Filters to apply to the context list"), - - // Pagination parameters - limit: z - .number() - .optional() - .default(100) - .describe("Maximum number of contexts to return"), - offset: z - .number() - .optional() - .default(0) - .describe("Number of contexts to skip"), - - // Sorting parameters - sortBy: z - .string() - .optional() - .default("updated") - .describe("Field to sort by (id, created, updated, size)"), - sortDirection: z - .enum(["asc", "desc"]) - .optional() - .default("desc") - .describe("Sort direction"), - - // Search query - query: z.string().optional().describe("Free text search query"), - }), - execute: async (args) => { - try { - const result = await this.contextManager.listContexts(args); - return { - success: true, - ...result, - }; - } catch (error) { - this.logger.error(`Error listing contexts: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to get context history - this.server.addTool({ - name: "getContextHistory", - description: "Get the version history of a context", - parameters: z.object({ - contextId: z - .string() - .describe("The ID of the context to get history for"), - }), - execute: async (args) => { - try { - const history = await this.contextManager.getContextHistory( - args.contextId - ); - return { - success: true, - history, - contextId: args.contextId, - }; - } catch (error) { - this.logger.error(`Error getting context history: ${error.message}`); - return { success: false, error: 
error.message }; - } - }, - }); - - // Add a tool to merge contexts - this.server.addTool({ - name: "mergeContexts", - description: "Merge multiple contexts into a new context", - parameters: z.object({ - contextIds: z - .array(z.string()) - .describe("Array of context IDs to merge"), - newContextId: z.string().describe("ID for the new merged context"), - metadata: z - .object({}) - .optional() - .describe("Optional metadata for the new context"), - }), - execute: async (args) => { - try { - const mergedContext = await this.contextManager.mergeContexts( - args.contextIds, - args.newContextId, - args.metadata || {} - ); - return { - success: true, - context: mergedContext, - }; - } catch (error) { - this.logger.error(`Error merging contexts: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to add tags to a context - this.server.addTool({ - name: "addTags", - description: "Add tags to a context", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to tag"), - tags: z - .array(z.string()) - .describe("Array of tags to add to the context"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.addTags( - args.contextId, - args.tags - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error adding tags to context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to remove tags from a context - this.server.addTool({ - name: "removeTags", - description: "Remove tags from a context", - parameters: z.object({ - contextId: z - .string() - .describe("The ID of the context to remove tags from"), - tags: z - .array(z.string()) - .describe("Array of tags to remove from the context"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.removeTags( - args.contextId, - args.tags - ); - return { success: true, context }; - } catch (error) { - 
this.logger.error( - `Error removing tags from context: ${error.message}` - ); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to truncate context - this.server.addTool({ - name: "truncateContext", - description: "Truncate a context to a maximum size", - parameters: z.object({ - contextId: z.string().describe("The ID of the context to truncate"), - maxSize: z - .number() - .describe("Maximum size (in characters) for the context"), - strategy: z - .enum(["start", "end", "middle"]) - .default("end") - .describe("Truncation strategy: start, end, or middle"), - }), - execute: async (args) => { - try { - const context = await this.contextManager.truncateContext( - args.contextId, - args.maxSize, - args.strategy - ); - return { success: true, context }; - } catch (error) { - this.logger.error(`Error truncating context: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - this.logger.info("Registered context endpoint handlers"); - } - - /** - * Set up handlers for the /models endpoint - */ - setupModelHandlers() { - // Add a tool to list available models - this.server.addTool({ - name: "listModels", - description: "List all available models with their capabilities", - parameters: z.object({}), - execute: async () => { - // Here we could get models from a more dynamic source - // For now, returning static list of models supported by Task Master - const models = [ - { - id: "claude-3-opus-20240229", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-100k", - ], - }, - { - id: "claude-3-7-sonnet-20250219", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-200k", - ], - }, - { - id: "sonar-medium-online", - provider: "perplexity", - capabilities: ["text-generation", "web-search", "research"], - }, - ]; - - return { success: true, models }; - }, - }); - - // Add a tool to get model details - 
this.server.addTool({ - name: "getModelDetails", - description: "Get detailed information about a specific model", - parameters: z.object({ - modelId: z.string().describe("The ID of the model to get details for"), - }), - execute: async (args) => { - // Here we could get model details from a more dynamic source - // For now, returning static information - const modelsMap = { - "claude-3-opus-20240229": { - id: "claude-3-opus-20240229", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-100k", - ], - maxTokens: 100000, - temperature: { min: 0, max: 1, default: 0.7 }, - pricing: { input: 0.000015, output: 0.000075 }, - }, - "claude-3-7-sonnet-20250219": { - id: "claude-3-7-sonnet-20250219", - provider: "anthropic", - capabilities: [ - "text-generation", - "embeddings", - "context-window-200k", - ], - maxTokens: 200000, - temperature: { min: 0, max: 1, default: 0.7 }, - pricing: { input: 0.000003, output: 0.000015 }, - }, - "sonar-medium-online": { - id: "sonar-medium-online", - provider: "perplexity", - capabilities: ["text-generation", "web-search", "research"], - maxTokens: 4096, - temperature: { min: 0, max: 1, default: 0.7 }, - }, - }; - - const model = modelsMap[args.modelId]; - if (!model) { - return { - success: false, - error: `Model with ID ${args.modelId} not found`, - }; - } - - return { success: true, model }; - }, - }); - - this.logger.info("Registered models endpoint handlers"); - } - - /** - * Set up handlers for the /execute endpoint - */ - setupExecuteHandlers() { - // Add a tool to execute operations with context - this.server.addTool({ - name: "executeWithContext", - description: "Execute an operation with the provided context", - parameters: z.object({ - operation: z.string().describe("The operation to execute"), - contextId: z.string().describe("The ID of the context to use"), - parameters: z - .record(z.any()) - .optional() - .describe("Additional parameters for the operation"), - versionId: z - 
.string() - .optional() - .describe("Optional specific context version to use"), - }), - execute: async (args) => { - try { - // Get the context first, with version if specified - const context = await this.contextManager.getContext( - args.contextId, - args.versionId - ); - - // Execute different operations based on the operation name - switch (args.operation) { - case "generateTask": - return await this.executeGenerateTask(context, args.parameters); - case "expandTask": - return await this.executeExpandTask(context, args.parameters); - case "analyzeComplexity": - return await this.executeAnalyzeComplexity( - context, - args.parameters - ); - case "mergeContexts": - return await this.executeMergeContexts(context, args.parameters); - case "searchContexts": - return await this.executeSearchContexts(args.parameters); - case "extractInsights": - return await this.executeExtractInsights( - context, - args.parameters - ); - case "syncWithRepository": - return await this.executeSyncWithRepository( - context, - args.parameters - ); - default: - return { - success: false, - error: `Unknown operation: ${args.operation}`, - }; - } - } catch (error) { - this.logger.error(`Error executing operation: ${error.message}`); - return { - success: false, - error: error.message, - operation: args.operation, - contextId: args.contextId, - }; - } - }, - }); - - // Add tool for batch operations - this.server.addTool({ - name: "executeBatchOperations", - description: "Execute multiple operations in a single request", - parameters: z.object({ - operations: z - .array( - z.object({ - operation: z.string().describe("The operation to execute"), - contextId: z.string().describe("The ID of the context to use"), - parameters: z - .record(z.any()) - .optional() - .describe("Additional parameters"), - versionId: z - .string() - .optional() - .describe("Optional context version"), - }) - ) - .describe("Array of operations to execute in sequence"), - }), - execute: async (args) => { - const results 
= []; - let hasErrors = false; - - for (const op of args.operations) { - try { - const context = await this.contextManager.getContext( - op.contextId, - op.versionId - ); - - let result; - switch (op.operation) { - case "generateTask": - result = await this.executeGenerateTask(context, op.parameters); - break; - case "expandTask": - result = await this.executeExpandTask(context, op.parameters); - break; - case "analyzeComplexity": - result = await this.executeAnalyzeComplexity( - context, - op.parameters - ); - break; - case "mergeContexts": - result = await this.executeMergeContexts( - context, - op.parameters - ); - break; - case "searchContexts": - result = await this.executeSearchContexts(op.parameters); - break; - case "extractInsights": - result = await this.executeExtractInsights( - context, - op.parameters - ); - break; - case "syncWithRepository": - result = await this.executeSyncWithRepository( - context, - op.parameters - ); - break; - default: - result = { - success: false, - error: `Unknown operation: ${op.operation}`, - }; - hasErrors = true; - } - - results.push({ - operation: op.operation, - contextId: op.contextId, - result: result, - }); - - if (!result.success) { - hasErrors = true; - } - } catch (error) { - this.logger.error( - `Error in batch operation ${op.operation}: ${error.message}` - ); - results.push({ - operation: op.operation, - contextId: op.contextId, - result: { - success: false, - error: error.message, - }, - }); - hasErrors = true; - } - } - - return { - success: !hasErrors, - results: results, - }; - }, - }); - - this.logger.info("Registered execute endpoint handlers"); - } - - /** - * Execute the generateTask operation - * @param {object} context - The context to use - * @param {object} parameters - Additional parameters - * @returns {Promise<object>} The result of the operation - */ - async executeGenerateTask(context, parameters = {}) { - // This is a placeholder for actual task generation logic - // In a real implementation, 
this would use Task Master's task generation - - this.logger.info(`Generating task with context ${context.id}`); - - // Improved task generation with more detailed result - const task = { - id: Math.floor(Math.random() * 1000), - title: parameters.title || "New Task", - description: parameters.description || "Task generated from context", - status: "pending", - dependencies: parameters.dependencies || [], - priority: parameters.priority || "medium", - details: `This task was generated using context ${ - context.id - }.\n\n${JSON.stringify(context.data, null, 2)}`, - metadata: { - generatedAt: new Date().toISOString(), - generatedFrom: context.id, - contextVersion: context.metadata.version, - generatedBy: parameters.user || "system", - }, - }; - - return { - success: true, - task, - contextUsed: { - id: context.id, - version: context.metadata.version, - }, - }; - } - - /** - * Execute the expandTask operation - * @param {object} context - The context to use - * @param {object} parameters - Additional parameters - * @returns {Promise<object>} The result of the operation - */ - async executeExpandTask(context, parameters = {}) { - // This is a placeholder for actual task expansion logic - // In a real implementation, this would use Task Master's task expansion - - this.logger.info(`Expanding task with context ${context.id}`); - - // Enhanced task expansion with more configurable options - const numSubtasks = parameters.numSubtasks || 3; - const subtaskPrefix = parameters.subtaskPrefix || ""; - const subtasks = []; - - for (let i = 1; i <= numSubtasks; i++) { - subtasks.push({ - id: `${subtaskPrefix}${i}`, - title: parameters.titleTemplate - ? parameters.titleTemplate.replace("{i}", i) - : `Subtask ${i}`, - description: parameters.descriptionTemplate - ? parameters.descriptionTemplate - .replace("{i}", i) - .replace("{taskId}", parameters.taskId || "unknown") - : `Subtask ${i} for ${parameters.taskId || "unknown task"}`, - dependencies: i > 1 ? 
[i - 1] : [],
-        status: "pending",
-        metadata: {
-          expandedAt: new Date().toISOString(),
-          expandedFrom: context.id,
-          contextVersion: context.metadata.version,
-          expandedBy: parameters.user || "system",
-        },
-      });
-    }
-
-    return {
-      success: true,
-      taskId: parameters.taskId,
-      subtasks,
-      contextUsed: {
-        id: context.id,
-        version: context.metadata.version,
-      },
-    };
-  }
-
-  /**
-   * Execute the analyzeComplexity operation
-   * @param {object} context - The context to use
-   * @param {object} parameters - Additional parameters
-   * @returns {Promise<object>} The result of the operation
-   */
-  async executeAnalyzeComplexity(context, parameters = {}) {
-    // This is a placeholder for actual complexity analysis logic
-    // In a real implementation, this would use Task Master's complexity analysis
-
-    this.logger.info(`Analyzing complexity with context ${context.id}`);
-
-    // Enhanced complexity analysis with more detailed factors
-    const complexityScore = Math.floor(Math.random() * 10) + 1;
-    const recommendedSubtasks = Math.floor(complexityScore / 2) + 1;
-
-    // More detailed analysis with weighted factors
-    const factors = [
-      {
-        name: "Task scope breadth",
-        score: Math.floor(Math.random() * 10) + 1,
-        weight: 0.3,
-        description: "How broad is the scope of this task",
-      },
-      {
-        name: "Technical complexity",
-        score: Math.floor(Math.random() * 10) + 1,
-        weight: 0.4,
-        description: "How technically complex is the implementation",
-      },
-      {
-        name: "External dependencies",
-        score: Math.floor(Math.random() * 10) + 1,
-        weight: 0.2,
-        description: "How many external dependencies does this task have",
-      },
-      {
-        name: "Risk assessment",
-        score: Math.floor(Math.random() * 10) + 1,
-        weight: 0.1,
-        description: "What is the risk level of this task",
-      },
-    ];
-
-    return {
-      success: true,
-      analysis: {
-        taskId: parameters.taskId || "unknown",
-        complexityScore,
-        recommendedSubtasks,
-        factors,
-        recommendedTimeEstimate: `${complexityScore * 2}-${
-          complexityScore * 4
-        } hours`,
-        metadata: {
-          analyzedAt: new Date().toISOString(),
-          analyzedUsing: context.id,
-          contextVersion: context.metadata.version,
-          analyzedBy: parameters.user || "system",
-        },
-      },
-      contextUsed: {
-        id: context.id,
-        version: context.metadata.version,
-      },
-    };
-  }
-
-  /**
-   * Execute the mergeContexts operation
-   * @param {object} primaryContext - The primary context to use
-   * @param {object} parameters - Additional parameters
-   * @returns {Promise<object>} The result of the operation
-   */
-  async executeMergeContexts(primaryContext, parameters = {}) {
-    this.logger.info(
-      `Merging contexts with primary context ${primaryContext.id}`
-    );
-
-    if (
-      !parameters.contextIds ||
-      !Array.isArray(parameters.contextIds) ||
-      parameters.contextIds.length === 0
-    ) {
-      return {
-        success: false,
-        error: "No context IDs provided for merging",
-      };
-    }
-
-    if (!parameters.newContextId) {
-      return {
-        success: false,
-        error: "New context ID is required for the merged context",
-      };
-    }
-
-    try {
-      // Add the primary context to the list if not already included
-      if (!parameters.contextIds.includes(primaryContext.id)) {
-        parameters.contextIds.unshift(primaryContext.id);
-      }
-
-      const mergedContext = await this.contextManager.mergeContexts(
-        parameters.contextIds,
-        parameters.newContextId,
-        {
-          mergedAt: new Date().toISOString(),
-          mergedBy: parameters.user || "system",
-          mergeStrategy: parameters.strategy || "concatenate",
-          ...parameters.metadata,
-        }
-      );
-
-      return {
-        success: true,
-        mergedContext,
-        sourceContexts: parameters.contextIds,
-      };
-    } catch (error) {
-      this.logger.error(`Error merging contexts: ${error.message}`);
-      return {
-        success: false,
-        error: error.message,
-      };
-    }
-  }
-
-  /**
-   * Execute the searchContexts operation
-   * @param {object} parameters - Search parameters
-   * @returns {Promise<object>} The result of the operation
-   */
-  async executeSearchContexts(parameters = {}) {
-    this.logger.info(
-      `Searching contexts with query: ${parameters.query || ""}`
-    );
-
-    try {
-      const searchResults = await this.contextManager.listContexts({
-        query: parameters.query || "",
-        filters: parameters.filters || {},
-        limit: parameters.limit || 100,
-        offset: parameters.offset || 0,
-        sortBy: parameters.sortBy || "updated",
-        sortDirection: parameters.sortDirection || "desc",
-      });
-
-      return {
-        success: true,
-        ...searchResults,
-      };
-    } catch (error) {
-      this.logger.error(`Error searching contexts: ${error.message}`);
-      return {
-        success: false,
-        error: error.message,
-      };
-    }
-  }
-
-  /**
-   * Execute the extractInsights operation
-   * @param {object} context - The context to analyze
-   * @param {object} parameters - Additional parameters
-   * @returns {Promise<object>} The result of the operation
-   */
-  async executeExtractInsights(context, parameters = {}) {
-    this.logger.info(`Extracting insights from context ${context.id}`);
-
-    // Placeholder for actual insight extraction
-    // In a real implementation, this would perform analysis on the context data
-
-    const insights = [
-      {
-        type: "summary",
-        content: `Summary of context ${context.id}`,
-        confidence: 0.85,
-      },
-      {
-        type: "key_points",
-        content: ["First key point", "Second key point", "Third key point"],
-        confidence: 0.78,
-      },
-      {
-        type: "recommendations",
-        content: ["First recommendation", "Second recommendation"],
-        confidence: 0.72,
-      },
-    ];
-
-    return {
-      success: true,
-      insights,
-      contextUsed: {
-        id: context.id,
-        version: context.metadata.version,
-      },
-      metadata: {
-        extractedAt: new Date().toISOString(),
-        model: parameters.model || "default",
-        extractedBy: parameters.user || "system",
-      },
-    };
-  }
-
-  /**
-   * Execute the syncWithRepository operation
-   * @param {object} context - The context to sync
-   * @param {object} parameters - Additional parameters
-   * @returns {Promise<object>} The result of the operation
-   */
-  async executeSyncWithRepository(context, parameters = {}) {
-    this.logger.info(`Syncing context ${context.id} with repository`);
-
-    // Placeholder for actual repository sync
-    // In a real implementation, this would sync the context with an external repository
-
-    return {
-      success: true,
-      syncStatus: "complete",
-      syncedTo: parameters.repository || "default",
-      syncTimestamp: new Date().toISOString(),
-      contextUsed: {
-        id: context.id,
-        version: context.metadata.version,
-      },
-    };
-  }
-}
-
-export default MCPApiHandlers;
diff --git a/mcp-server/src/auth.js b/mcp-server/src/auth.js
deleted file mode 100644
index 22c36973..00000000
--- a/mcp-server/src/auth.js
+++ /dev/null
@@ -1,285 +0,0 @@
-import jwt from "jsonwebtoken";
-import { logger } from "../../scripts/modules/utils.js";
-import crypto from "crypto";
-import fs from "fs/promises";
-import path from "path";
-import { fileURLToPath } from "url";
-
-// Constants
-const __filename = fileURLToPath(import.meta.url);
-const __dirname = path.dirname(__filename);
-const API_KEYS_FILE =
-  process.env.MCP_API_KEYS_FILE || path.join(__dirname, "../api-keys.json");
-const JWT_SECRET =
-  process.env.MCP_JWT_SECRET || "task-master-mcp-server-secret";
-const JWT_EXPIRATION = process.env.MCP_JWT_EXPIRATION || "24h";
-
-/**
- * Authentication middleware and utilities for MCP server
- */
-class MCPAuth {
-  constructor() {
-    this.apiKeys = new Map();
-    this.logger = logger;
-    this.loadApiKeys();
-  }
-
-  /**
-   * Load API keys from disk
-   */
-  async loadApiKeys() {
-    try {
-      // Create API keys file if it doesn't exist
-      try {
-        await fs.access(API_KEYS_FILE);
-      } catch (error) {
-        // File doesn't exist, create it with a default admin key
-        const defaultApiKey = this.generateApiKey();
-        const defaultApiKeys = {
-          keys: [
-            {
-              id: "admin",
-              key: defaultApiKey,
-              role: "admin",
-              created: new Date().toISOString(),
-            },
-          ],
-        };
-
-        await fs.mkdir(path.dirname(API_KEYS_FILE), { recursive: true });
-        await fs.writeFile(
-          API_KEYS_FILE,
-          JSON.stringify(defaultApiKeys, null, 2),
-          "utf8"
-        );
-
-        this.logger.info(
-          `Created default API keys file with admin key: ${defaultApiKey}`
-        );
-      }
-
-      // Load API keys
-      const data = await fs.readFile(API_KEYS_FILE, "utf8");
-      const apiKeys = JSON.parse(data);
-
-      apiKeys.keys.forEach((key) => {
-        this.apiKeys.set(key.key, {
-          id: key.id,
-          role: key.role,
-          created: key.created,
-        });
-      });
-
-      this.logger.info(`Loaded ${this.apiKeys.size} API keys`);
-    } catch (error) {
-      this.logger.error(`Failed to load API keys: ${error.message}`);
-      throw error;
-    }
-  }
-
-  /**
-   * Save API keys to disk
-   */
-  async saveApiKeys() {
-    try {
-      const keys = [];
-
-      this.apiKeys.forEach((value, key) => {
-        keys.push({
-          id: value.id,
-          key,
-          role: value.role,
-          created: value.created,
-        });
-      });
-
-      await fs.writeFile(
-        API_KEYS_FILE,
-        JSON.stringify({ keys }, null, 2),
-        "utf8"
-      );
-
-      this.logger.info(`Saved ${keys.length} API keys`);
-    } catch (error) {
-      this.logger.error(`Failed to save API keys: ${error.message}`);
-      throw error;
-    }
-  }
-
-  /**
-   * Generate a new API key
-   * @returns {string} The generated API key
-   */
-  generateApiKey() {
-    return crypto.randomBytes(32).toString("hex");
-  }
-
-  /**
-   * Create a new API key
-   * @param {string} id - Client identifier
-   * @param {string} role - Client role (admin, user)
-   * @returns {string} The generated API key
-   */
-  async createApiKey(id, role = "user") {
-    const apiKey = this.generateApiKey();
-
-    this.apiKeys.set(apiKey, {
-      id,
-      role,
-      created: new Date().toISOString(),
-    });
-
-    await this.saveApiKeys();
-
-    this.logger.info(`Created new API key for ${id} with role ${role}`);
-    return apiKey;
-  }
-
-  /**
-   * Revoke an API key
-   * @param {string} apiKey - The API key to revoke
-   * @returns {boolean} True if the key was revoked
-   */
-  async revokeApiKey(apiKey) {
-    if (!this.apiKeys.has(apiKey)) {
-      return false;
-    }
-
-    this.apiKeys.delete(apiKey);
-    await this.saveApiKeys();
-
-    this.logger.info(`Revoked API key`);
-    return true;
-  }
-
-  /**
-   * Validate an API key
-   * @param {string} apiKey - The API key to validate
-   * @returns {object|null} The API key details if valid, null otherwise
-   */
-  validateApiKey(apiKey) {
-    return this.apiKeys.get(apiKey) || null;
-  }
-
-  /**
-   * Generate a JWT token for a client
-   * @param {string} clientId - Client identifier
-   * @param {string} role - Client role
-   * @returns {string} The JWT token
-   */
-  generateToken(clientId, role) {
-    return jwt.sign({ clientId, role }, JWT_SECRET, {
-      expiresIn: JWT_EXPIRATION,
-    });
-  }
-
-  /**
-   * Verify a JWT token
-   * @param {string} token - The JWT token to verify
-   * @returns {object|null} The token payload if valid, null otherwise
-   */
-  verifyToken(token) {
-    try {
-      return jwt.verify(token, JWT_SECRET);
-    } catch (error) {
-      this.logger.error(`Failed to verify token: ${error.message}`);
-      return null;
-    }
-  }
-
-  /**
-   * Express middleware for API key authentication
-   * @param {object} req - Express request object
-   * @param {object} res - Express response object
-   * @param {function} next - Express next function
-   */
-  authenticateApiKey(req, res, next) {
-    const apiKey = req.headers["x-api-key"];
-
-    if (!apiKey) {
-      return res.status(401).json({
-        success: false,
-        error: "API key is required",
-      });
-    }
-
-    const keyDetails = this.validateApiKey(apiKey);
-
-    if (!keyDetails) {
-      return res.status(401).json({
-        success: false,
-        error: "Invalid API key",
-      });
-    }
-
-    // Attach client info to request
-    req.client = {
-      id: keyDetails.id,
-      role: keyDetails.role,
-    };
-
-    next();
-  }
-
-  /**
-   * Express middleware for JWT authentication
-   * @param {object} req - Express request object
-   * @param {object} res - Express response object
-   * @param {function} next - Express next function
-   */
-  authenticateToken(req, res, next) {
-    const authHeader = req.headers["authorization"];
-    const token = authHeader && authHeader.split(" ")[1];
-
-    if (!token) {
-      return res.status(401).json({
-        success: false,
-        error: "Authentication token is required",
-      });
-    }
-
-    const payload = this.verifyToken(token);
-
-    if (!payload) {
-      return res.status(401).json({
-        success: false,
-        error: "Invalid or expired token",
-      });
-    }
-
-    // Attach client info to request
-    req.client = {
-      id: payload.clientId,
-      role: payload.role,
-    };
-
-    next();
-  }
-
-  /**
-   * Express middleware for role-based authorization
-   * @param {Array} roles - Array of allowed roles
-   * @returns {function} Express middleware
-   */
-  authorizeRoles(roles) {
-    return (req, res, next) => {
-      if (!req.client || !req.client.role) {
-        return res.status(401).json({
-          success: false,
-          error: "Unauthorized: Authentication required",
-        });
-      }
-
-      if (!roles.includes(req.client.role)) {
-        return res.status(403).json({
-          success: false,
-          error: "Forbidden: Insufficient permissions",
-        });
-      }
-
-      next();
-    };
-  }
-}
-
-export default MCPAuth;
diff --git a/mcp-server/src/context-manager.js b/mcp-server/src/context-manager.js
deleted file mode 100644
index 5b94b538..00000000
--- a/mcp-server/src/context-manager.js
+++ /dev/null
@@ -1,873 +0,0 @@
-import { logger } from "../../scripts/modules/utils.js";
-import fs from "fs/promises";
-import path from "path";
-import { fileURLToPath } from "url";
-import crypto from "crypto";
-import Fuse from "fuse.js";
-
-// Constants
-const __filename = fileURLToPath(import.meta.url);
-const __dirname = path.dirname(__filename);
-const CONTEXT_DIR =
-  process.env.MCP_CONTEXT_DIR || path.join(__dirname, "../contexts");
-const MAX_CONTEXT_HISTORY = parseInt(
-  process.env.MCP_MAX_CONTEXT_HISTORY || "10",
-  10
-);
-
-/**
- * Context Manager for MCP server
- * Handles storage, retrieval, and manipulation of context data
- * Implements efficient indexing, versioning, and advanced context operations
- */
-class ContextManager {
-  constructor() {
-    this.contexts = new Map();
-    this.contextHistory = new Map(); // For version history
-    this.contextIndex = null; // For fuzzy search
-    this.logger = logger;
-    this.ensureContextDir();
-    this.rebuildSearchIndex();
-  }
-
-  /**
-   * Ensure the contexts directory exists
-   */
-  async ensureContextDir() {
-    try {
-      await fs.mkdir(CONTEXT_DIR, { recursive: true });
-      this.logger.info(`Context directory ensured at ${CONTEXT_DIR}`);
-
-      // Also create a versions subdirectory for history
-      await fs.mkdir(path.join(CONTEXT_DIR, "versions"), { recursive: true });
-    } catch (error) {
-      this.logger.error(`Failed to create context directory: ${error.message}`);
-      throw error;
-    }
-  }
-
-  /**
-   * Rebuild the search index for efficient context lookup
-   */
-  async rebuildSearchIndex() {
-    await this.loadAllContextsFromDisk();
-
-    const contextsForIndex = Array.from(this.contexts.values()).map((ctx) => ({
-      id: ctx.id,
-      content:
-        typeof ctx.data === "string" ? ctx.data : JSON.stringify(ctx.data),
-      tags: ctx.tags.join(" "),
-      metadata: Object.entries(ctx.metadata)
-        .map(([k, v]) => `${k}:${v}`)
-        .join(" "),
-    }));
-
-    this.contextIndex = new Fuse(contextsForIndex, {
-      keys: ["id", "content", "tags", "metadata"],
-      includeScore: true,
-      threshold: 0.6,
-    });
-
-    this.logger.info(
-      `Rebuilt search index with ${contextsForIndex.length} contexts`
-    );
-  }
-
-  /**
-   * Create a new context
-   * @param {string} contextId - Unique identifier for the context
-   * @param {object|string} contextData - Initial context data
-   * @param {object} metadata - Optional metadata for the context
-   * @returns {object} The created context
-   */
-  async createContext(contextId, contextData, metadata = {}) {
-    if (this.contexts.has(contextId)) {
-      throw new Error(`Context with ID ${contextId} already exists`);
-    }
-
-    const timestamp = new Date().toISOString();
-    const versionId = this.generateVersionId();
-
-    const context = {
-      id: contextId,
-      data: contextData,
-      metadata: {
-        created: timestamp,
-        updated: timestamp,
-        version: versionId,
-        ...metadata,
-      },
-      tags: metadata.tags || [],
-      size: this.estimateSize(contextData),
-    };
-
-    this.contexts.set(contextId, context);
-
-    // Initialize version history
-    this.contextHistory.set(contextId, [
-      {
-        versionId,
-        timestamp,
-        data: JSON.parse(JSON.stringify(contextData)), // Deep clone
-        metadata: { ...context.metadata },
-      },
-    ]);
-
-    await this.persistContext(contextId);
-    await this.persistContextVersion(contextId, versionId);
-
-    // Update the search index
-    this.rebuildSearchIndex();
-
-    this.logger.info(`Created context: ${contextId} (version: ${versionId})`);
-    return context;
-  }
-
-  /**
-   * Retrieve a context by ID
-   * @param {string} contextId - The context ID to retrieve
-   * @param {string} versionId - Optional specific version to retrieve
-   * @returns {object} The context object
-   */
-  async getContext(contextId, versionId = null) {
-    // If specific version requested, try to get it from history
-    if (versionId) {
-      return this.getContextVersion(contextId, versionId);
-    }
-
-    // Try to get from memory first
-    if (this.contexts.has(contextId)) {
-      return this.contexts.get(contextId);
-    }
-
-    // Try to load from disk
-    try {
-      const context = await this.loadContextFromDisk(contextId);
-      if (context) {
-        this.contexts.set(contextId, context);
-        return context;
-      }
-    } catch (error) {
-      this.logger.error(
-        `Failed to load context ${contextId}: ${error.message}`
-      );
-    }
-
-    throw new Error(`Context with ID ${contextId} not found`);
-  }
-
-  /**
-   * Get a specific version of a context
-   * @param {string} contextId - The context ID
-   * @param {string} versionId - The version ID
-   * @returns {object} The versioned context
-   */
-  async getContextVersion(contextId, versionId) {
-    // Check if version history is in memory
-    if (this.contextHistory.has(contextId)) {
-      const history = this.contextHistory.get(contextId);
-      const version = history.find((v) => v.versionId === versionId);
-      if (version) {
-        return {
-          id: contextId,
-          data: version.data,
-          metadata: version.metadata,
-          tags: version.metadata.tags || [],
-          size: this.estimateSize(version.data),
-          versionId: version.versionId,
-        };
-      }
-    }
-
-    // Try to load from disk
-    try {
-      const versionPath = path.join(
-        CONTEXT_DIR,
-        "versions",
-        `${contextId}_${versionId}.json`
-      );
-      const data = await fs.readFile(versionPath, "utf8");
-      const version = JSON.parse(data);
-
-      // Add to memory cache
-      if (!this.contextHistory.has(contextId)) {
-        this.contextHistory.set(contextId, []);
-      }
-      const history = this.contextHistory.get(contextId);
-      history.push(version);
-
-      return {
-        id: contextId,
-        data: version.data,
-        metadata: version.metadata,
-        tags: version.metadata.tags || [],
-        size: this.estimateSize(version.data),
-        versionId: version.versionId,
-      };
-    } catch (error) {
-      this.logger.error(
-        `Failed to load context version ${contextId}@${versionId}: ${error.message}`
-      );
-      throw new Error(
-        `Context version ${versionId} for ${contextId} not found`
-      );
-    }
-  }
-
-  /**
-   * Update an existing context
-   * @param {string} contextId - The context ID to update
-   * @param {object|string} contextData - New context data
-   * @param {object} metadata - Optional metadata updates
-   * @param {boolean} createNewVersion - Whether to create a new version
-   * @returns {object} The updated context
-   */
-  async updateContext(
-    contextId,
-    contextData,
-    metadata = {},
-    createNewVersion = true
-  ) {
-    const context = await this.getContext(contextId);
-    const timestamp = new Date().toISOString();
-
-    // Generate a new version ID if requested
-    const versionId = createNewVersion
-      ? this.generateVersionId()
-      : context.metadata.version;
-
-    // Create a backup of the current state for versioning
-    if (createNewVersion) {
-      // Store the current version in history
-      if (!this.contextHistory.has(contextId)) {
-        this.contextHistory.set(contextId, []);
-      }
-
-      const history = this.contextHistory.get(contextId);
-
-      // Add current state to history
-      history.push({
-        versionId: context.metadata.version,
-        timestamp: context.metadata.updated,
-        data: JSON.parse(JSON.stringify(context.data)), // Deep clone
-        metadata: { ...context.metadata },
-      });
-
-      // Trim history if it exceeds the maximum size
-      if (history.length > MAX_CONTEXT_HISTORY) {
-        const excessVersions = history.splice(
-          0,
-          history.length - MAX_CONTEXT_HISTORY
-        );
-        // Clean up excess versions from disk
-        for (const version of excessVersions) {
-          this.removeContextVersionFile(contextId, version.versionId).catch(
-            (err) =>
-              this.logger.error(
-                `Failed to remove old version file: ${err.message}`
-              )
-          );
-        }
-      }
-
-      // Persist version
-      await this.persistContextVersion(contextId, context.metadata.version);
-    }
-
-    // Update the context
-    context.data = contextData;
-    context.metadata = {
-      ...context.metadata,
-      ...metadata,
-      updated: timestamp,
-    };
-
-    if (createNewVersion) {
-      context.metadata.version = versionId;
-      context.metadata.previousVersion = context.metadata.version;
-    }
-
-    if (metadata.tags) {
-      context.tags = metadata.tags;
-    }
-
-    // Update size estimate
-    context.size = this.estimateSize(contextData);
-
-    this.contexts.set(contextId, context);
-    await this.persistContext(contextId);
-
-    // Update the search index
-    this.rebuildSearchIndex();
-
-    this.logger.info(`Updated context: ${contextId} (version: ${versionId})`);
-    return context;
-  }
-
-  /**
-   * Delete a context and all its versions
-   * @param {string} contextId - The context ID to delete
-   * @returns {boolean} True if deletion was successful
-   */
-  async deleteContext(contextId) {
-    if (!this.contexts.has(contextId)) {
-      const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
-      try {
-        await fs.access(contextPath);
-      } catch (error) {
-        throw new Error(`Context with ID ${contextId} not found`);
-      }
-    }
-
-    this.contexts.delete(contextId);
-
-    // Remove from history
-    const history = this.contextHistory.get(contextId) || [];
-    this.contextHistory.delete(contextId);
-
-    try {
-      // Delete main context file
-      const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
-      await fs.unlink(contextPath);
-
-      // Delete all version files
-      for (const version of history) {
-        await this.removeContextVersionFile(contextId, version.versionId);
-      }
-
-      // Update the search index
-      this.rebuildSearchIndex();
-
-      this.logger.info(`Deleted context: ${contextId}`);
-      return true;
-    } catch (error) {
-      this.logger.error(
-        `Failed to delete context files for ${contextId}: ${error.message}`
-      );
-      throw error;
-    }
-  }
-
-  /**
-   * List all available contexts with pagination and advanced filtering
-   * @param {object} options - Options for listing contexts
-   * @param {object} options.filters - Filters to apply
-   * @param {number} options.limit - Maximum number of contexts to return
-   * @param {number} options.offset - Number of contexts to skip
-   * @param {string} options.sortBy - Field to sort by
-   * @param {string} options.sortDirection - Sort direction ('asc' or 'desc')
-   * @param {string} options.query - Free text search query
-   * @returns {Array} Array of context objects
-   */
-  async listContexts(options = {}) {
-    // Load all contexts from disk first
-    await this.loadAllContextsFromDisk();
-
-    const {
-      filters = {},
-      limit = 100,
-      offset = 0,
-      sortBy = "updated",
-      sortDirection = "desc",
-      query = "",
-    } = options;
-
-    let contexts;
-
-    // If there's a search query, use the search index
-    if (query && this.contextIndex) {
-      const searchResults = this.contextIndex.search(query);
-      contexts = searchResults.map((result) =>
-        this.contexts.get(result.item.id)
-      );
-    } else {
-      contexts = Array.from(this.contexts.values());
-    }
-
-    // Apply filters
-    if (filters.tag) {
-      contexts = contexts.filter(
-        (ctx) => ctx.tags && ctx.tags.includes(filters.tag)
-      );
-    }
-
-    if (filters.metadataKey && filters.metadataValue) {
-      contexts = contexts.filter(
-        (ctx) =>
-          ctx.metadata &&
-          ctx.metadata[filters.metadataKey] === filters.metadataValue
-      );
-    }
-
-    if (filters.createdAfter) {
-      const timestamp = new Date(filters.createdAfter);
-      contexts = contexts.filter(
-        (ctx) => new Date(ctx.metadata.created) >= timestamp
-      );
-    }
-
-    if (filters.updatedAfter) {
-      const timestamp = new Date(filters.updatedAfter);
-      contexts = contexts.filter(
-        (ctx) => new Date(ctx.metadata.updated) >= timestamp
-      );
-    }
-
-    // Apply sorting
-    contexts.sort((a, b) => {
-      let valueA, valueB;
-
-      if (sortBy === "created" || sortBy === "updated") {
-        valueA = new Date(a.metadata[sortBy]).getTime();
-        valueB = new Date(b.metadata[sortBy]).getTime();
-      } else if (sortBy === "size") {
-        valueA = a.size || 0;
-        valueB = b.size || 0;
-      } else if (sortBy === "id") {
-        valueA = a.id;
-        valueB = b.id;
-      } else {
-        valueA = a.metadata[sortBy];
-        valueB = b.metadata[sortBy];
-      }
-
-      if (valueA === valueB) return 0;
-
-      const sortFactor = sortDirection === "asc" ? 1 : -1;
-      return valueA < valueB ? -1 * sortFactor : 1 * sortFactor;
-    });
-
-    // Apply pagination
-    const paginatedContexts = contexts.slice(offset, offset + limit);
-
-    return {
-      contexts: paginatedContexts,
-      total: contexts.length,
-      offset,
-      limit,
-      hasMore: offset + limit < contexts.length,
-    };
-  }
-
-  /**
-   * Get the version history of a context
-   * @param {string} contextId - The context ID
-   * @returns {Array} Array of version objects
-   */
-  async getContextHistory(contextId) {
-    // Ensure context exists
-    await this.getContext(contextId);
-
-    // Load history if not in memory
-    if (!this.contextHistory.has(contextId)) {
-      await this.loadContextHistoryFromDisk(contextId);
-    }
-
-    const history = this.contextHistory.get(contextId) || [];
-
-    // Return versions in reverse chronological order (newest first)
-    return history.sort((a, b) => {
-      const timeA = new Date(a.timestamp).getTime();
-      const timeB = new Date(b.timestamp).getTime();
-      return timeB - timeA;
-    });
-  }
-
-  /**
-   * Add tags to a context
-   * @param {string} contextId - The context ID
-   * @param {Array} tags - Array of tags to add
-   * @returns {object} The updated context
-   */
-  async addTags(contextId, tags) {
-    const context = await this.getContext(contextId);
-
-    const currentTags = context.tags || [];
-    const uniqueTags = [...new Set([...currentTags, ...tags])];
-
-    // Update context with new tags
-    return this.updateContext(
-      contextId,
-      context.data,
-      {
-        tags: uniqueTags,
-      },
-      false
-    ); // Don't create a new version for tag updates
-  }
-
-  /**
-   * Remove tags from a context
-   * @param {string} contextId - The context ID
-   * @param {Array} tags - Array of tags to remove
-   * @returns {object} The updated context
-   */
-  async removeTags(contextId, tags) {
-    const context = await this.getContext(contextId);
-
-    const currentTags = context.tags || [];
-    const newTags = currentTags.filter((tag) => !tags.includes(tag));
-
-    // Update context with new tags
-    return this.updateContext(
-      contextId,
-      context.data,
-      {
-        tags: newTags,
-      },
-      false
-    ); // Don't create a new version for tag updates
-  }
-
-  /**
-   * Handle context windowing and truncation
-   * @param {string} contextId - The context ID
-   * @param {number} maxSize - Maximum size in tokens/chars
-   * @param {string} strategy - Truncation strategy ('start', 'end', 'middle')
-   * @returns {object} The truncated context
-   */
-  async truncateContext(contextId, maxSize, strategy = "end") {
-    const context = await this.getContext(contextId);
-    const contextText =
-      typeof context.data === "string"
-        ? context.data
-        : JSON.stringify(context.data);
-
-    if (contextText.length <= maxSize) {
-      return context; // No truncation needed
-    }
-
-    let truncatedData;
-
-    switch (strategy) {
-      case "start":
-        truncatedData = contextText.slice(contextText.length - maxSize);
-        break;
-      case "middle":
-        const halfSize = Math.floor(maxSize / 2);
-        truncatedData =
-          contextText.slice(0, halfSize) +
-          "...[truncated]..." +
-          contextText.slice(contextText.length - halfSize);
-        break;
-      case "end":
-      default:
-        truncatedData = contextText.slice(0, maxSize);
-        break;
-    }
-
-    // If original data was an object, try to parse the truncated data
-    // Otherwise use it as a string
-    let updatedData;
-    if (typeof context.data === "object") {
-      try {
-        // This may fail if truncation broke JSON structure
-        updatedData = {
-          ...context.data,
-          truncated: true,
-          truncation_strategy: strategy,
-          original_size: contextText.length,
-          truncated_size: truncatedData.length,
-        };
-      } catch (error) {
-        updatedData = truncatedData;
-      }
-    } else {
-      updatedData = truncatedData;
-    }
-
-    // Update with truncated data
-    return this.updateContext(
-      contextId,
-      updatedData,
-      {
-        truncated: true,
-        truncation_strategy: strategy,
-        original_size: contextText.length,
-        truncated_size: truncatedData.length,
-      },
-      true
-    ); // Create a new version for the truncated data
-  }
-
-  /**
-   * Merge multiple contexts into a new context
-   * @param {Array} contextIds - Array of context IDs to merge
-   * @param {string} newContextId - ID for the new merged context
-   * @param {object} metadata - Optional metadata for the new context
-   * @returns {object} The new merged context
-   */
-  async mergeContexts(contextIds, newContextId, metadata = {}) {
-    if (contextIds.length === 0) {
-      throw new Error("At least one context ID must be provided for merging");
-    }
-
-    if (this.contexts.has(newContextId)) {
-      throw new Error(`Context with ID ${newContextId} already exists`);
-    }
-
-    // Load all contexts to be merged
-    const contextsToMerge = [];
-    for (const id of contextIds) {
-      try {
-        const context = await this.getContext(id);
-        contextsToMerge.push(context);
-      } catch (error) {
-        this.logger.error(
-          `Could not load context ${id} for merging: ${error.message}`
-        );
-        throw new Error(`Failed to merge contexts: ${error.message}`);
-      }
-    }
-
-    // Check data types and decide how to merge
-    const allStrings = contextsToMerge.every((c) => typeof c.data === "string");
-    const allObjects = contextsToMerge.every(
-      (c) => typeof c.data === "object" && c.data !== null
-    );
-
-    let mergedData;
-
-    if (allStrings) {
-      // Merge strings with newlines between them
-      mergedData = contextsToMerge.map((c) => c.data).join("\n\n");
-    } else if (allObjects) {
-      // Merge objects by combining their properties
-      mergedData = {};
-      for (const context of contextsToMerge) {
-        mergedData = { ...mergedData, ...context.data };
-      }
-    } else {
-      // Convert everything to strings and concatenate
-      mergedData = contextsToMerge
-        .map((c) =>
-          typeof c.data === "string" ? c.data : JSON.stringify(c.data)
-        )
-        .join("\n\n");
-    }
-
-    // Collect all tags from merged contexts
-    const allTags = new Set();
-    for (const context of contextsToMerge) {
-      for (const tag of context.tags || []) {
-        allTags.add(tag);
-      }
-    }
-
-    // Create merged metadata
-    const mergedMetadata = {
-      ...metadata,
-      tags: [...allTags],
-      merged_from: contextIds,
-      merged_at: new Date().toISOString(),
-    };
-
-    // Create the new merged context
-    return this.createContext(newContextId, mergedData, mergedMetadata);
-  }
-
-  /**
-   * Persist a context to disk
-   * @param {string} contextId - The context ID to persist
-   * @returns {Promise<void>}
-   */
-  async persistContext(contextId) {
-    const context = this.contexts.get(contextId);
-    if (!context) {
-      throw new Error(`Context with ID ${contextId} not found`);
-    }
-
-    const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
-    try {
-      await fs.writeFile(contextPath, JSON.stringify(context, null, 2), "utf8");
-      this.logger.debug(`Persisted context ${contextId} to disk`);
-    } catch (error) {
-      this.logger.error(
-        `Failed to persist context ${contextId}: ${error.message}`
-      );
-      throw error;
-    }
-  }
-
-  /**
-   * Persist a context version to disk
-   * @param {string} contextId - The context ID
-   * @param {string} versionId - The version ID
-   * @returns {Promise<void>}
-   */
-  async persistContextVersion(contextId, versionId) {
-    if (!this.contextHistory.has(contextId)) {
-      throw new Error(`Context history for ${contextId} not found`);
-    }
-
-    const history = this.contextHistory.get(contextId);
-    const version = history.find((v) => v.versionId === versionId);
-
-    if (!version) {
-      throw new Error(`Version ${versionId} of context ${contextId} not found`);
-    }
-
-    const versionPath = path.join(
-      CONTEXT_DIR,
-      "versions",
-      `${contextId}_${versionId}.json`
-    );
-    try {
-      await fs.writeFile(versionPath, JSON.stringify(version, null, 2), "utf8");
-      this.logger.debug(
-        `Persisted context version ${contextId}@${versionId} to disk`
-      );
-    } catch (error) {
-      this.logger.error(
-        `Failed to persist context version ${contextId}@${versionId}: ${error.message}`
-      );
-      throw error;
-    }
-  }
-
-  /**
-   * Remove a context version file from disk
-   * @param {string} contextId - The context ID
-   * @param {string} versionId - The version ID
-   * @returns {Promise<void>}
-   */
-  async removeContextVersionFile(contextId, versionId) {
-    const versionPath = path.join(
-      CONTEXT_DIR,
-      "versions",
-      `${contextId}_${versionId}.json`
-    );
-    try {
-      await fs.unlink(versionPath);
-      this.logger.debug(
-        `Removed context version file ${contextId}@${versionId}`
-      );
-    } catch (error) {
-      if (error.code !== "ENOENT") {
-        this.logger.error(
-          `Failed to remove context version file ${contextId}@${versionId}: ${error.message}`
-        );
-        throw error;
-      }
-    }
-  }
-
-  /**
-   * Load a context from disk
-   * @param {string} contextId - The context ID to load
-   * @returns {Promise<object>} The loaded context
-   */
-  async loadContextFromDisk(contextId) {
-    const contextPath = path.join(CONTEXT_DIR, `${contextId}.json`);
-    try {
-      const data = await fs.readFile(contextPath, "utf8");
-      const context = JSON.parse(data);
-      this.logger.debug(`Loaded context ${contextId} from disk`);
-      return context;
-    } catch (error) {
-      this.logger.error(
-        `Failed to load context ${contextId} from disk: ${error.message}`
-      );
-      throw error;
-    }
-  }
-
-  /**
-   * Load context history from disk
-   * @param {string} contextId - The context ID
-   * @returns {Promise<Array>} The loaded history
-   */
-  async loadContextHistoryFromDisk(contextId) {
-    try {
-      const files = await fs.readdir(path.join(CONTEXT_DIR, "versions"));
-      const versionFiles = files.filter(
-        (file) => file.startsWith(`${contextId}_`) && file.endsWith(".json")
-      );
-
-      const history = [];
-
-      for (const file of versionFiles) {
-        try {
-          const data = await fs.readFile(
-            path.join(CONTEXT_DIR, "versions", file),
-            "utf8"
-          );
-          const version = JSON.parse(data);
-          history.push(version);
-        } catch (error) {
-          this.logger.error(
-            `Failed to load context version file ${file}: ${error.message}`
-          );
-        }
-      }
-
-      this.contextHistory.set(contextId, history);
-      this.logger.debug(
-        `Loaded ${history.length} versions for context ${contextId}`
-      );
-
-      return history;
-    } catch (error) {
-      this.logger.error(
-        `Failed to load context history for ${contextId}: ${error.message}`
-      );
-      this.contextHistory.set(contextId, []);
-      return [];
-    }
-  }
-
-  /**
-   * Load all contexts from disk
-   * @returns {Promise<void>}
-   */
-  async loadAllContextsFromDisk() {
-    try {
-      const files = await fs.readdir(CONTEXT_DIR);
-      const contextFiles = files.filter((file) => file.endsWith(".json"));
-
-      for (const file of contextFiles) {
-        const contextId = path.basename(file, ".json");
-        if (!this.contexts.has(contextId)) {
-          try {
-            const context = await this.loadContextFromDisk(contextId);
-            this.contexts.set(contextId, context);
-          } catch (error) {
-            // Already logged in loadContextFromDisk
-          }
-        }
-      }
-
-      this.logger.info(`Loaded ${this.contexts.size} contexts from disk`);
-    } catch (error) {
-      this.logger.error(`Failed to load contexts from disk: ${error.message}`);
-      throw error;
-    }
-  }
-
-  /**
-   * Generate a unique version ID
-   * @returns {string} A unique version ID
-   */
-  generateVersionId() {
-    return crypto.randomBytes(8).toString("hex");
-  }
-
-  /**
-   * Estimate the size of context data
-   * @param {object|string} data - The context data
-   * @returns {number} Estimated size in bytes
-   */
-  estimateSize(data) {
-    if (typeof data === "string") {
-      return Buffer.byteLength(data, "utf8");
-    }
-
-    if (typeof data === "object" && data !== null) {
-      return Buffer.byteLength(JSON.stringify(data), "utf8");
-    }
-
-    return 0;
-  }
-}
-
-export default ContextManager;
diff --git a/mcp-server/src/index.js b/mcp-server/src/index.js
index eb820f95..3fe17b58 100644
--- a/mcp-server/src/index.js
+++ b/mcp-server/src/index.js
@@ -1,16 +1,10 @@
 import { FastMCP } from "fastmcp";
-import { z
} from "zod"; import path from "path"; -import fs from "fs/promises"; import dotenv from "dotenv"; import { fileURLToPath } from "url"; -import express from "express"; -import cors from "cors"; -import helmet from "helmet"; -import { logger } from "../../scripts/modules/utils.js"; -import MCPAuth from "./auth.js"; -import MCPApiHandlers from "./api-handlers.js"; -import ContextManager from "./context-manager.js"; +import fs from "fs"; +import logger from "./logger.js"; +import { registerTaskMasterTools } from "./tools/index.js"; // Load environment variables dotenv.config(); @@ -18,25 +12,27 @@ dotenv.config(); // Constants const __filename = fileURLToPath(import.meta.url); const __dirname = path.dirname(__filename); -const DEFAULT_PORT = process.env.MCP_SERVER_PORT || 3000; -const DEFAULT_HOST = process.env.MCP_SERVER_HOST || "localhost"; /** * Main MCP server class that integrates with Task Master */ class TaskMasterMCPServer { - constructor(options = {}) { + constructor() { + // Get version from package.json using synchronous fs + const packagePath = path.join(__dirname, "../../package.json"); + const packageJson = JSON.parse(fs.readFileSync(packagePath, "utf8")); + this.options = { name: "Task Master MCP Server", - version: process.env.PROJECT_VERSION || "1.0.0", - ...options, + version: packageJson.version, }; this.server = new FastMCP(this.options); - this.expressApp = null; this.initialized = false; - this.auth = new MCPAuth(); - this.contextManager = new ContextManager(); + + // this.server.addResource({}); + + // this.server.addResourceTemplate({}); // Bind methods this.init = this.init.bind(this); @@ -53,301 +49,27 @@ class TaskMasterMCPServer { async init() { if (this.initialized) return; - this.logger.info("Initializing Task Master MCP server..."); - - // Set up express for additional customization if needed - this.expressApp = express(); - this.expressApp.use(cors()); - this.expressApp.use(helmet()); - this.expressApp.use(express.json()); - - // Set up 
authentication middleware - this.setupAuthentication(); - - // Register API handlers - this.apiHandlers = new MCPApiHandlers(this.server); - - // Register additional task master specific tools - this.registerTaskMasterTools(); + // Register Task Master tools + registerTaskMasterTools(this.server); this.initialized = true; - this.logger.info("Task Master MCP server initialized successfully"); return this; } - /** - * Set up authentication for the MCP server - */ - setupAuthentication() { - // Add a health check endpoint that doesn't require authentication - this.expressApp.get("/health", (req, res) => { - res.status(200).json({ - status: "ok", - service: this.options.name, - version: this.options.version, - }); - }); - - // Add an authenticate endpoint to get a JWT token using an API key - this.expressApp.post("/auth/token", async (req, res) => { - const apiKey = req.headers["x-api-key"]; - - if (!apiKey) { - return res.status(401).json({ - success: false, - error: "API key is required", - }); - } - - const keyDetails = this.auth.validateApiKey(apiKey); - - if (!keyDetails) { - return res.status(401).json({ - success: false, - error: "Invalid API key", - }); - } - - const token = this.auth.generateToken(keyDetails.id, keyDetails.role); - - res.status(200).json({ - success: true, - token, - expiresIn: process.env.MCP_JWT_EXPIRATION || "24h", - clientId: keyDetails.id, - role: keyDetails.role, - }); - }); - - // Create authenticator middleware for FastMCP - this.server.setAuthenticator((request) => { - // Get token from Authorization header - const authHeader = request.headers?.authorization; - if (!authHeader || !authHeader.startsWith("Bearer ")) { - return null; - } - - const token = authHeader.split(" ")[1]; - const payload = this.auth.verifyToken(token); - - if (!payload) { - return null; - } - - return { - clientId: payload.clientId, - role: payload.role, - }; - }); - - // Set up a protected route for API key management (admin only) - this.expressApp.post( - 
"/auth/api-keys", - (req, res, next) => { - this.auth.authenticateToken(req, res, next); - }, - (req, res, next) => { - this.auth.authorizeRoles(["admin"])(req, res, next); - }, - async (req, res) => { - const { clientId, role } = req.body; - - if (!clientId) { - return res.status(400).json({ - success: false, - error: "Client ID is required", - }); - } - - try { - const apiKey = await this.auth.createApiKey(clientId, role || "user"); - - res.status(201).json({ - success: true, - apiKey, - clientId, - role: role || "user", - }); - } catch (error) { - this.logger.error(`Error creating API key: ${error.message}`); - - res.status(500).json({ - success: false, - error: "Failed to create API key", - }); - } - } - ); - - this.logger.info("Set up MCP authentication"); - } - - /** - * Register Task Master specific tools with the MCP server - */ - registerTaskMasterTools() { - // Add a tool to get tasks from Task Master - this.server.addTool({ - name: "listTasks", - description: "List all tasks from Task Master", - parameters: z.object({ - status: z.string().optional().describe("Filter tasks by status"), - withSubtasks: z - .boolean() - .optional() - .describe("Include subtasks in the response"), - }), - execute: async (args) => { - try { - // In a real implementation, this would use the Task Master API - // to fetch tasks. For now, returning mock data. 
- - this.logger.info( - `Listing tasks with filters: ${JSON.stringify(args)}` - ); - - // Mock task data - const tasks = [ - { - id: 1, - title: "Implement Task Data Structure", - status: "done", - dependencies: [], - priority: "high", - }, - { - id: 2, - title: "Develop Command Line Interface Foundation", - status: "done", - dependencies: [1], - priority: "high", - }, - { - id: 23, - title: "Implement MCP Server Functionality", - status: "in-progress", - dependencies: [22], - priority: "medium", - subtasks: [ - { - id: "23.1", - title: "Create Core MCP Server Module", - status: "in-progress", - dependencies: [], - }, - { - id: "23.2", - title: "Implement Context Management System", - status: "pending", - dependencies: ["23.1"], - }, - ], - }, - ]; - - // Apply status filter if provided - let filteredTasks = tasks; - if (args.status) { - filteredTasks = tasks.filter((task) => task.status === args.status); - } - - // Remove subtasks if not requested - if (!args.withSubtasks) { - filteredTasks = filteredTasks.map((task) => { - const { subtasks, ...taskWithoutSubtasks } = task; - return taskWithoutSubtasks; - }); - } - - return { success: true, tasks: filteredTasks }; - } catch (error) { - this.logger.error(`Error listing tasks: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - // Add a tool to get task details - this.server.addTool({ - name: "getTaskDetails", - description: "Get detailed information about a specific task", - parameters: z.object({ - taskId: z - .union([z.number(), z.string()]) - .describe("The ID of the task to get details for"), - }), - execute: async (args) => { - try { - // In a real implementation, this would use the Task Master API - // to fetch task details. For now, returning mock data. 
- - this.logger.info(`Getting details for task ${args.taskId}`); - - // Mock task details - const taskDetails = { - id: 23, - title: "Implement MCP Server Functionality", - description: - "Extend Task Master to function as an MCP server, allowing it to provide context management services to other applications.", - status: "in-progress", - dependencies: [22], - priority: "medium", - details: - "This task involves implementing the Model Context Protocol server capabilities within Task Master.", - testStrategy: - "Testing should include unit tests, integration tests, and compatibility tests.", - subtasks: [ - { - id: "23.1", - title: "Create Core MCP Server Module", - status: "in-progress", - dependencies: [], - }, - { - id: "23.2", - title: "Implement Context Management System", - status: "pending", - dependencies: ["23.1"], - }, - ], - }; - - return { success: true, task: taskDetails }; - } catch (error) { - this.logger.error(`Error getting task details: ${error.message}`); - return { success: false, error: error.message }; - } - }, - }); - - this.logger.info("Registered Task Master specific tools"); - } - /** * Start the MCP server */ - async start({ port = DEFAULT_PORT, host = DEFAULT_HOST } = {}) { + async start() { if (!this.initialized) { await this.init(); } - this.logger.info( - `Starting Task Master MCP server on http://${host}:${port}` - ); - // Start the FastMCP server await this.server.start({ - port, - host, - transportType: "sse", - expressApp: this.expressApp, + transportType: "stdio", }); - this.logger.info( - `Task Master MCP server running at http://${host}:${port}` - ); - return this; } @@ -356,9 +78,7 @@ class TaskMasterMCPServer { */ async stop() { if (this.server) { - this.logger.info("Stopping Task Master MCP server..."); await this.server.stop(); - this.logger.info("Task Master MCP server stopped"); } } } diff --git a/mcp-server/src/logger.js b/mcp-server/src/logger.js new file mode 100644 index 00000000..80c0e55c --- /dev/null +++ 
b/mcp-server/src/logger.js @@ -0,0 +1,68 @@ +import chalk from "chalk"; + +// Define log levels +const LOG_LEVELS = { + debug: 0, + info: 1, + warn: 2, + error: 3, + success: 4, +}; + +// Get log level from environment, falling back to info for unset or unknown values +const LOG_LEVEL = process.env.LOG_LEVEL + ? (LOG_LEVELS[process.env.LOG_LEVEL.toLowerCase()] ?? LOG_LEVELS.info) + : LOG_LEVELS.info; + +/** + * Logs a message with the specified level + * @param {string} level - The log level (debug, info, warn, error, success) + * @param {...any} args - Arguments to log + */ +function log(level, ...args) { + const icons = { + debug: chalk.gray("🔍"), + info: chalk.blue("â„šī¸"), + warn: chalk.yellow("âš ī¸"), + error: chalk.red("❌"), + success: chalk.green("✅"), + }; + + if (LOG_LEVELS[level] >= LOG_LEVEL) { + const icon = icons[level] || ""; + + if (level === "error") { + console.error(icon, chalk.red(...args)); + } else if (level === "warn") { + console.warn(icon, chalk.yellow(...args)); + } else if (level === "success") { + console.log(icon, chalk.green(...args)); + } else if (level === "info") { + console.log(icon, chalk.blue(...args)); + } else { + console.log(icon, ...args); + } + } +} + +/** + * Create a logger object with methods for different log levels + * Can be used as a drop-in replacement for existing logger initialization + * @returns {Object} Logger object with info, error, debug, warn, and success methods + */ +export function createLogger() { + return { + debug: (message) => log("debug", message), + info: (message) => log("info", message), + warn: (message) => log("warn", message), + error: (message) => log("error", message), + success: (message) => log("success", message), + log: log, // Also expose the raw log function + }; +} + +// Export a default logger instance +const logger = createLogger(); + +export default logger; +export { log, LOG_LEVELS }; diff --git a/mcp-server/src/tools/addTask.js b/mcp-server/src/tools/addTask.js new file mode 100644 index 00000000..0622d0e8 --- /dev/null +++
b/mcp-server/src/tools/addTask.js @@ -0,0 +1,56 @@ +/** + * tools/addTask.js + * Tool to add a new task using AI + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the addTask tool with the MCP server + * @param {FastMCP} server - FastMCP server instance + */ +export function registerAddTaskTool(server) { + server.addTool({ + name: "addTask", + description: "Add a new task using AI", + parameters: z.object({ + prompt: z.string().describe("Description of the task to add"), + dependencies: z + .string() + .optional() + .describe("Comma-separated list of task IDs this task depends on"), + priority: z + .string() + .optional() + .describe("Task priority (high, medium, low)"), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Adding new task: ${args.prompt}`); + + const cmdArgs = [`--prompt=${args.prompt}`]; + if (args.dependencies) + cmdArgs.push(`--dependencies=${args.dependencies}`); + if (args.priority) cmdArgs.push(`--priority=${args.priority}`); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("add-task", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error adding task: ${error.message}`); + return createErrorResponse(`Error adding task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/expandTask.js b/mcp-server/src/tools/expandTask.js new file mode 100644 index 00000000..b94d00d4 --- /dev/null +++ b/mcp-server/src/tools/expandTask.js @@ -0,0 +1,66 @@ +/** + * tools/expandTask.js + * Tool to break down a task into detailed subtasks + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the
expandTask tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerExpandTaskTool(server) { + server.addTool({ + name: "expandTask", + description: "Break down a task into detailed subtasks", + parameters: z.object({ + id: z.union([z.string(), z.number()]).describe("Task ID to expand"), + num: z.number().optional().describe("Number of subtasks to generate"), + research: z + .boolean() + .optional() + .describe( + "Enable Perplexity AI for research-backed subtask generation" + ), + prompt: z + .string() + .optional() + .describe("Additional context to guide subtask generation"), + force: z + .boolean() + .optional() + .describe( + "Force regeneration of subtasks for tasks that already have them" + ), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Expanding task ${args.id}`); + + const cmdArgs = [`--id=${args.id}`]; + if (args.num) cmdArgs.push(`--num=${args.num}`); + if (args.research) cmdArgs.push("--research"); + if (args.prompt) cmdArgs.push(`--prompt=${args.prompt}`); + if (args.force) cmdArgs.push("--force"); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("expand", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error expanding task: ${error.message}`); + return createErrorResponse(`Error expanding task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/index.js b/mcp-server/src/tools/index.js new file mode 100644 index 00000000..97d47438 --- /dev/null +++ b/mcp-server/src/tools/index.js @@ -0,0 +1,29 @@ +/** + * tools/index.js + * Export all Task Master CLI tools for MCP server + */ +import logger from "../logger.js"; +import { registerListTasksTool } from "./listTasks.js"; +import { registerShowTaskTool } from "./showTask.js"; +import {
registerSetTaskStatusTool } from "./setTaskStatus.js"; +import { registerExpandTaskTool } from "./expandTask.js"; +import { registerNextTaskTool } from "./nextTask.js"; +import { registerAddTaskTool } from "./addTask.js"; + +/** + * Register all Task Master tools with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerTaskMasterTools(server) { + registerListTasksTool(server); + registerShowTaskTool(server); + registerSetTaskStatusTool(server); + registerExpandTaskTool(server); + registerNextTaskTool(server); + registerAddTaskTool(server); +} + +export default { + registerTaskMasterTools, +}; diff --git a/mcp-server/src/tools/listTasks.js b/mcp-server/src/tools/listTasks.js new file mode 100644 index 00000000..7da65692 --- /dev/null +++ b/mcp-server/src/tools/listTasks.js @@ -0,0 +1,51 @@ +/** + * tools/listTasks.js + * Tool to list all tasks from Task Master + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the listTasks tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerListTasksTool(server) { + server.addTool({ + name: "listTasks", + description: "List all tasks from Task Master", + parameters: z.object({ + status: z.string().optional().describe("Filter tasks by status"), + withSubtasks: z + .boolean() + .optional() + .describe("Include subtasks in the response"), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Listing tasks with filters: ${JSON.stringify(args)}`); + + const cmdArgs = []; + if (args.status) cmdArgs.push(`--status=${args.status}`); + if (args.withSubtasks) cmdArgs.push("--with-subtasks"); + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("list", log, cmdArgs); + + if (!result.success) { + throw new 
Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error listing tasks: ${error.message}`); + return createErrorResponse(`Error listing tasks: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/nextTask.js b/mcp-server/src/tools/nextTask.js new file mode 100644 index 00000000..4003ce04 --- /dev/null +++ b/mcp-server/src/tools/nextTask.js @@ -0,0 +1,45 @@ +/** + * tools/nextTask.js + * Tool to show the next task to work on based on dependencies and status + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the nextTask tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerNextTaskTool(server) { + server.addTool({ + name: "nextTask", + description: + "Show the next task to work on based on dependencies and status", + parameters: z.object({ + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Finding next task to work on`); + + const cmdArgs = []; + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("next", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error finding next task: ${error.message}`); + return createErrorResponse(`Error finding next task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/setTaskStatus.js b/mcp-server/src/tools/setTaskStatus.js new file mode 100644 index 00000000..5681dd7b --- /dev/null +++ b/mcp-server/src/tools/setTaskStatus.js @@ -0,0 +1,52 @@ +/** + * tools/setTaskStatus.js + * Tool to set the status of a task + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from 
"./utils.js"; + +/** + * Register the setTaskStatus tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerSetTaskStatusTool(server) { + server.addTool({ + name: "setTaskStatus", + description: "Set the status of a task", + parameters: z.object({ + id: z + .union([z.string(), z.number()]) + .describe("Task ID (can be comma-separated for multiple tasks)"), + status: z + .string() + .describe("New status (todo, in-progress, review, done)"), + file: z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Setting status of task(s) ${args.id} to: ${args.status}`); + + const cmdArgs = [`--id=${args.id}`, `--status=${args.status}`]; + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("set-status", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error setting task status: ${error.message}`); + return createErrorResponse( + `Error setting task status: ${error.message}` + ); + } + }, + }); +} diff --git a/mcp-server/src/tools/showTask.js b/mcp-server/src/tools/showTask.js new file mode 100644 index 00000000..c44d9463 --- /dev/null +++ b/mcp-server/src/tools/showTask.js @@ -0,0 +1,45 @@ +/** + * tools/showTask.js + * Tool to show detailed information about a specific task + */ + +import { z } from "zod"; +import { + executeTaskMasterCommand, + createContentResponse, + createErrorResponse, +} from "./utils.js"; + +/** + * Register the showTask tool with the MCP server + * @param {Object} server - FastMCP server instance + */ +export function registerShowTaskTool(server) { + server.addTool({ + name: "showTask", + description: "Show detailed information about a specific task", + parameters: z.object({ + id: z.union([z.string(), z.number()]).describe("Task ID to show"), + file: 
z.string().optional().describe("Path to the tasks file"), + }), + execute: async (args, { log }) => { + try { + log.info(`Showing task details for ID: ${args.id}`); + + const cmdArgs = [args.id]; + if (args.file) cmdArgs.push(`--file=${args.file}`); + + const result = executeTaskMasterCommand("show", log, cmdArgs); + + if (!result.success) { + throw new Error(result.error); + } + + return createContentResponse(result.stdout); + } catch (error) { + log.error(`Error showing task: ${error.message}`); + return createErrorResponse(`Error showing task: ${error.message}`); + } + }, + }); +} diff --git a/mcp-server/src/tools/utils.js b/mcp-server/src/tools/utils.js new file mode 100644 index 00000000..24745d2e --- /dev/null +++ b/mcp-server/src/tools/utils.js @@ -0,0 +1,90 @@ +/** + * tools/utils.js + * Utility functions for Task Master CLI integration + */ + +import { spawnSync } from "child_process"; + +/** + * Execute a Task Master CLI command using child_process + * @param {string} command - The command to execute + * @param {Object} log - The logger object from FastMCP + * @param {Array} args - Arguments for the command + * @returns {Object} - The result of the command execution + */ +export function executeTaskMasterCommand(command, log, args = []) { + try { + log.info( + `Executing task-master ${command} with args: ${JSON.stringify(args)}` + ); + + // Prepare full arguments array + const fullArgs = [command, ...args]; + + // Execute the command using the global task-master CLI or local script + // Try the global CLI first + let result = spawnSync("task-master", fullArgs, { encoding: "utf8" }); + + // If global CLI is not available, try fallback to the local script + if (result.error && result.error.code === "ENOENT") { + log.info("Global task-master not found, falling back to local script"); + result = spawnSync("node", ["scripts/dev.js", ...fullArgs], { + encoding: "utf8", + }); + } + + if (result.error) { + throw new Error(`Command execution error: 
${result.error.message}`); + } + + if (result.status !== 0) { + throw new Error( + `Command failed with exit code ${result.status}: ${result.stderr}` + ); + } + + return { + success: true, + stdout: result.stdout, + stderr: result.stderr, + }; + } catch (error) { + log.error(`Error executing task-master command: ${error.message}`); + return { + success: false, + error: error.message, + }; + } +} + +/** + * Creates standard content response for tools + * @param {string} text - Text content to include in response + * @returns {Object} - Content response object + */ +export function createContentResponse(text) { + return { + content: [ + { + text, + type: "text", + }, + ], + }; +} + +/** + * Creates error response for tools + * @param {string} errorMessage - Error message to include in response + * @returns {Object} - Error content response object + */ +export function createErrorResponse(errorMessage) { + return { + content: [ + { + text: errorMessage, + type: "text", + }, + ], + }; +} From bde34223410758a673d45f9c5550bfed4a6b7930 Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Tue, 25 Mar 2025 19:00:00 +0000 Subject: [PATCH 13/16] fix(mcp): get everything working, cleanup, and test all tools --- .cursor/mcp.json | 8 -- README-task-master.md | 70 ++++++++++- README.md | 70 ++++++++++- mcp-server/README.md | 170 -------------------------- mcp-server/src/tools/addTask.js | 12 +- mcp-server/src/tools/expandTask.js | 16 ++- mcp-server/src/tools/listTasks.js | 16 ++- mcp-server/src/tools/nextTask.js | 14 ++- mcp-server/src/tools/setTaskStatus.js | 16 ++- mcp-server/src/tools/showTask.js | 18 ++- mcp-server/src/tools/utils.js | 32 +++-- 11 files changed, 241 insertions(+), 201 deletions(-) delete mode 100644 mcp-server/README.md diff --git a/.cursor/mcp.json b/.cursor/mcp.json index 3b7160ae..e69de29b 100644 --- a/.cursor/mcp.json +++ b/.cursor/mcp.json @@ -1,8 +0,0 @@ -{ - "mcpServers": { - "taskMaster": { - 
"command": "node", - "args": ["mcp-server/server.js"] - } - } -} diff --git a/README-task-master.md b/README-task-master.md index cf46772c..26cce92b 100644 --- a/README-task-master.md +++ b/README-task-master.md @@ -1,4 +1,5 @@ # Task Master + ### by [@eyaltoledano](https://x.com/eyaltoledano) A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI. @@ -15,9 +16,11 @@ A task management system for AI-driven development with Claude, designed to work The script can be configured through environment variables in a `.env` file at the root of the project: ### Required Configuration + - `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude ### Optional Configuration + - `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219") - `MAX_TOKENS`: Maximum tokens for model responses (default: 4000) - `TEMPERATURE`: Temperature for model responses (default: 0.7) @@ -123,6 +126,21 @@ Claude Task Master is designed to work seamlessly with [Cursor AI](https://www.c 3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`) 4. Open Cursor's AI chat and switch to Agent mode +### Setting up MCP in Cursor + +To enable enhanced task management capabilities directly within Cursor using the Model Control Protocol (MCP): + +1. Go to Cursor settings +2. Navigate to the MCP section +3. Click on "Add New MCP Server" +4. Configure with the following details: + - Name: "Task Master" + - Type: "Command" + - Command: "npx -y task-master-mcp" +5. Save the settings + +Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience. + ### Initial Task Generation In Cursor's AI chat, instruct the agent to generate tasks from your PRD: @@ -132,11 +150,13 @@ Please use the task-master parse-prd command to generate tasks from my PRD. 
The ``` The agent will execute: + ```bash task-master parse-prd scripts/prd.txt ``` This will: + - Parse your PRD document - Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies - The agent will understand this process due to the Cursor rules @@ -150,6 +170,7 @@ Please generate individual task files from tasks.json ``` The agent will execute: + ```bash task-master generate ``` @@ -169,6 +190,7 @@ What tasks are available to work on next? ``` The agent will: + - Run `task-master list` to see all tasks - Run `task-master next` to determine the next task to work on - Analyze dependencies to determine which tasks are ready to be worked on @@ -178,12 +200,14 @@ The agent will: ### 2. Task Implementation When implementing a task, the agent will: + - Reference the task's details section for implementation specifics - Consider dependencies on previous tasks - Follow the project's coding standards - Create appropriate tests based on the task's testStrategy You can ask: + ``` Let's implement task 3. What does it involve? ``` @@ -191,6 +215,7 @@ Let's implement task 3. What does it involve? ### 3. Task Verification Before marking a task as complete, verify it according to: + - The task's specified testStrategy - Any automated tests in the codebase - Manual verification if required @@ -204,6 +229,7 @@ Task 3 is now complete. Please update its status. ``` The agent will execute: + ```bash task-master set-status --id=3 --status=done ``` @@ -211,16 +237,19 @@ task-master set-status --id=3 --status=done ### 5. Handling Implementation Drift If during implementation, you discover that: + - The current approach differs significantly from what was planned - Future tasks need to be modified due to current implementation choices - New dependencies or requirements have emerged Tell the agent: + ``` We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change. 
``` The agent will execute: + ```bash task-master update --from=4 --prompt="Now we are using Express instead of Fastify." ``` @@ -236,36 +265,43 @@ Task 5 seems complex. Can you break it down into subtasks? ``` The agent will execute: + ```bash task-master expand --id=5 --num=3 ``` You can provide additional context: + ``` Please break down task 5 with a focus on security considerations. ``` The agent will execute: + ```bash task-master expand --id=5 --prompt="Focus on security aspects" ``` You can also expand all pending tasks: + ``` Please break down all pending tasks into subtasks. ``` The agent will execute: + ```bash task-master expand --all ``` For research-backed subtask generation using Perplexity AI: + ``` Please break down task 5 using research-backed generation. ``` The agent will execute: + ```bash task-master expand --id=5 --research ``` @@ -275,6 +311,7 @@ task-master expand --id=5 --research Here's a comprehensive reference of all available commands: ### Parse PRD + ```bash # Parse a PRD file and generate tasks task-master parse-prd <prd-file.txt> @@ -284,6 +321,7 @@ task-master parse-prd <prd-file.txt> --num-tasks=10 ``` ### List Tasks + ```bash # List all tasks task-master list @@ -299,12 +337,14 @@ task-master list --status=<status> --with-subtasks ``` ### Show Next Task + ```bash # Show the next task to work on based on dependencies and status task-master next ``` ### Show Specific Task + ```bash # Show details of a specific task task-master show <id> @@ -316,18 +356,21 @@ task-master show 1.2 ``` ### Update Tasks + ```bash # Update tasks from a specific ID and provide context task-master update --from=<id> --prompt="<prompt>" ``` ### Generate Task Files + ```bash # Generate individual task files from tasks.json task-master generate ``` ### Set Task Status + ```bash # Set status of a single task task-master set-status --id=<id> --status=<status> @@ -342,6 +385,7 @@ task-master set-status --id=1.1,1.2 --status=<status> When marking a task as 
"done", all of its subtasks will automatically be marked as "done" as well. ### Expand Tasks + ```bash # Expand a specific task with subtasks task-master expand --id=<id> --num=<number> @@ -363,6 +407,7 @@ task-master expand --all --research ``` ### Clear Subtasks + ```bash # Clear subtasks from a specific task task-master clear-subtasks --id=<id> @@ -375,6 +420,7 @@ task-master clear-subtasks --all ``` ### Analyze Task Complexity + ```bash # Analyze complexity of all tasks task-master analyze-complexity @@ -396,6 +442,7 @@ task-master analyze-complexity --research ``` ### View Complexity Report + ```bash # Display the task complexity analysis report task-master complexity-report @@ -405,6 +452,7 @@ task-master complexity-report --file=my-report.json ``` ### Managing Task Dependencies + ```bash # Add a dependency to a task task-master add-dependency --id=<id> --depends-on=<id> @@ -420,6 +468,7 @@ task-master fix-dependencies ``` ### Add a New Task + ```bash # Add a new task using AI task-master add-task --prompt="Description of the new task" @@ -436,6 +485,7 @@ task-master add-task --prompt="Description" --priority=high ### Analyzing Task Complexity The `analyze-complexity` command: + - Analyzes each task using AI to assess its complexity on a scale of 1-10 - Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS - Generates tailored prompts for expanding each task @@ -443,6 +493,7 @@ The `analyze-complexity` command: - Saves the report to scripts/task-complexity-report.json by default The generated report contains: + - Complexity analysis for each task (scored 1-10) - Recommended number of subtasks based on complexity - AI-generated expansion prompts customized for each task @@ -451,6 +502,7 @@ The generated report contains: ### Viewing Complexity Report The `complexity-report` command: + - Displays a formatted, easy-to-read version of the complexity analysis report - Shows tasks organized by complexity score (highest to lowest) - Provides 
complexity distribution statistics (low, medium, high) @@ -463,12 +515,14 @@ The `complexity-report` command: The `expand` command automatically checks for and uses the complexity report: When a complexity report exists: + - Tasks are automatically expanded using the recommended subtask count and prompts - When expanding all tasks, they're processed in order of complexity (highest first) - Research-backed generation is preserved from the complexity analysis - You can still override recommendations with explicit command-line options Example workflow: + ```bash # Generate the complexity analysis report with research capabilities task-master analyze-complexity --research @@ -485,6 +539,7 @@ task-master expand --all ### Finding the Next Task The `next` command: + - Identifies tasks that are pending/in-progress and have all dependencies satisfied - Prioritizes tasks by priority level, dependency count, and task ID - Displays comprehensive information about the selected task: @@ -499,6 +554,7 @@ The `next` command: ### Viewing Specific Task Details The `show` command: + - Displays comprehensive details about a specific task or subtask - Shows task status, priority, dependencies, and detailed implementation notes - For parent tasks, displays all subtasks and their status @@ -529,43 +585,51 @@ The `show` command: ## Example Cursor AI Interactions ### Starting a new project + ``` -I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. +I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. Can you help me parse it and set up the initial tasks? ``` ### Working on tasks + ``` What's the next task I should work on? Please consider dependencies and priorities. ``` ### Implementing a specific task + ``` I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it? 
``` ### Managing subtasks + ``` I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them? ``` ### Handling changes + ``` We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change? ``` ### Completing work + ``` -I've finished implementing the authentication system described in task 2. All tests are passing. +I've finished implementing the authentication system described in task 2. All tests are passing. Please mark it as complete and tell me what I should work on next. ``` ### Analyzing complexity + ``` Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further? ``` ### Viewing complexity report + ``` Can you show me the complexity report in a more readable format? -``` \ No newline at end of file +``` diff --git a/README.md b/README.md index 6e24c651..b0803a99 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,5 @@ # Task Master + ### by [@eyaltoledano](https://x.com/eyaltoledano) A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI. @@ -15,9 +16,11 @@ A task management system for AI-driven development with Claude, designed to work The script can be configured through environment variables in a `.env` file at the root of the project: ### Required Configuration + - `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude ### Optional Configuration + - `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219") - `MAX_TOKENS`: Maximum tokens for model responses (default: 4000) - `TEMPERATURE`: Temperature for model responses (default: 0.7) @@ -123,6 +126,21 @@ Claude Task Master is designed to work seamlessly with [Cursor AI](https://www.c 3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`) 4. 
Open Cursor's AI chat and switch to Agent mode +### Setting up MCP in Cursor + +To enable enhanced task management capabilities directly within Cursor using the Model Context Protocol (MCP): + +1. Go to Cursor settings +2. Navigate to the MCP section +3. Click on "Add New MCP Server" +4. Configure with the following details: + - Name: "Task Master" + - Type: "Command" + - Command: "npx -y task-master-mcp" +5. Save the settings + +Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience. + ### Initial Task Generation In Cursor's AI chat, instruct the agent to generate tasks from your PRD: @@ -132,11 +150,13 @@ Please use the task-master parse-prd command to generate tasks from my PRD. The ``` The agent will execute: + ```bash task-master parse-prd scripts/prd.txt ``` This will: + - Parse your PRD document - Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies - The agent will understand this process due to the Cursor rules @@ -150,6 +170,7 @@ Please generate individual task files from tasks.json ``` The agent will execute: + ```bash task-master generate ``` @@ -169,6 +190,7 @@ What tasks are available to work on next? ``` The agent will: + - Run `task-master list` to see all tasks - Run `task-master next` to determine the next task to work on - Analyze dependencies to determine which tasks are ready to be worked on @@ -178,12 +200,14 @@ The agent will: ### 2. Task Implementation When implementing a task, the agent will: + - Reference the task's details section for implementation specifics - Consider dependencies on previous tasks - Follow the project's coding standards - Create appropriate tests based on the task's testStrategy You can ask: + ``` Let's implement task 3. What does it involve? ``` @@ -191,6 +215,7 @@ Let's implement task 3. What does it involve? ### 3. 
Task Verification Before marking a task as complete, verify it according to: + - The task's specified testStrategy - Any automated tests in the codebase - Manual verification if required @@ -204,6 +229,7 @@ Task 3 is now complete. Please update its status. ``` The agent will execute: + ```bash task-master set-status --id=3 --status=done ``` @@ -211,16 +237,19 @@ task-master set-status --id=3 --status=done ### 5. Handling Implementation Drift If during implementation, you discover that: + - The current approach differs significantly from what was planned - Future tasks need to be modified due to current implementation choices - New dependencies or requirements have emerged Tell the agent: + ``` We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change. ``` The agent will execute: + ```bash task-master update --from=4 --prompt="Now we are using Express instead of Fastify." ``` @@ -236,36 +265,43 @@ Task 5 seems complex. Can you break it down into subtasks? ``` The agent will execute: + ```bash task-master expand --id=5 --num=3 ``` You can provide additional context: + ``` Please break down task 5 with a focus on security considerations. ``` The agent will execute: + ```bash task-master expand --id=5 --prompt="Focus on security aspects" ``` You can also expand all pending tasks: + ``` Please break down all pending tasks into subtasks. ``` The agent will execute: + ```bash task-master expand --all ``` For research-backed subtask generation using Perplexity AI: + ``` Please break down task 5 using research-backed generation. 
``` The agent will execute: + ```bash task-master expand --id=5 --research ``` @@ -275,6 +311,7 @@ task-master expand --id=5 --research Here's a comprehensive reference of all available commands: ### Parse PRD + ```bash # Parse a PRD file and generate tasks task-master parse-prd <prd-file.txt> @@ -284,6 +321,7 @@ task-master parse-prd <prd-file.txt> --num-tasks=10 ``` ### List Tasks + ```bash # List all tasks task-master list @@ -299,12 +337,14 @@ task-master list --status=<status> --with-subtasks ``` ### Show Next Task + ```bash # Show the next task to work on based on dependencies and status task-master next ``` ### Show Specific Task + ```bash # Show details of a specific task task-master show <id> @@ -316,18 +356,21 @@ task-master show 1.2 ``` ### Update Tasks + ```bash # Update tasks from a specific ID and provide context task-master update --from=<id> --prompt="<prompt>" ``` ### Generate Task Files + ```bash # Generate individual task files from tasks.json task-master generate ``` ### Set Task Status + ```bash # Set status of a single task task-master set-status --id=<id> --status=<status> @@ -342,6 +385,7 @@ task-master set-status --id=1.1,1.2 --status=<status> When marking a task as "done", all of its subtasks will automatically be marked as "done" as well. 
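The "done" cascade described above can be sketched as a small helper. This is a minimal illustration of the behavior, assuming tasks are shaped like the entries in `tasks.json` (`{ id, status, subtasks }`); it is not Task Master's actual implementation.

```javascript
// Minimal sketch of the "done" cascade: marking a parent task done also
// marks all of its subtasks done. Illustrative only; Task Master's real
// logic lives in its own scripts.
function setTaskStatus(tasks, id, status) {
  const task = tasks.find((t) => String(t.id) === String(id));
  if (!task) throw new Error(`Task ${id} not found`);
  task.status = status;
  // Cascade: completing a parent completes its subtasks as well.
  if (status === "done" && Array.isArray(task.subtasks)) {
    for (const subtask of task.subtasks) {
      subtask.status = "done";
    }
  }
  return task;
}

// Example: completing task 3 also completes subtasks 3.1 and 3.2.
const demoTasks = [
  {
    id: 3,
    status: "in-progress",
    subtasks: [
      { id: "3.1", status: "pending" },
      { id: "3.2", status: "in-progress" },
    ],
  },
];
setTaskStatus(demoTasks, 3, "done");
```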
### Expand Tasks + ```bash # Expand a specific task with subtasks task-master expand --id=<id> --num=<number> @@ -363,6 +407,7 @@ task-master expand --all --research ``` ### Clear Subtasks + ```bash # Clear subtasks from a specific task task-master clear-subtasks --id=<id> @@ -375,6 +420,7 @@ task-master clear-subtasks --all ``` ### Analyze Task Complexity + ```bash # Analyze complexity of all tasks task-master analyze-complexity @@ -396,6 +442,7 @@ task-master analyze-complexity --research ``` ### View Complexity Report + ```bash # Display the task complexity analysis report task-master complexity-report @@ -405,6 +452,7 @@ task-master complexity-report --file=my-report.json ``` ### Managing Task Dependencies + ```bash # Add a dependency to a task task-master add-dependency --id=<id> --depends-on=<id> @@ -420,6 +468,7 @@ task-master fix-dependencies ``` ### Add a New Task + ```bash # Add a new task using AI task-master add-task --prompt="Description of the new task" @@ -866,6 +915,7 @@ task-master add-task --prompt="Description" --priority=high ### Analyzing Task Complexity The `analyze-complexity` command: + - Analyzes each task using AI to assess its complexity on a scale of 1-10 - Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS - Generates tailored prompts for expanding each task @@ -873,6 +923,7 @@ The `analyze-complexity` command: - Saves the report to scripts/task-complexity-report.json by default The generated report contains: + - Complexity analysis for each task (scored 1-10) - Recommended number of subtasks based on complexity - AI-generated expansion prompts customized for each task @@ -881,6 +932,7 @@ The generated report contains: ### Viewing Complexity Report The `complexity-report` command: + - Displays a formatted, easy-to-read version of the complexity analysis report - Shows tasks organized by complexity score (highest to lowest) - Provides complexity distribution statistics (low, medium, high) @@ -893,12 +945,14 @@ The 
`complexity-report` command: The `expand` command automatically checks for and uses the complexity report: When a complexity report exists: + - Tasks are automatically expanded using the recommended subtask count and prompts - When expanding all tasks, they're processed in order of complexity (highest first) - Research-backed generation is preserved from the complexity analysis - You can still override recommendations with explicit command-line options Example workflow: + ```bash # Generate the complexity analysis report with research capabilities task-master analyze-complexity --research @@ -915,6 +969,7 @@ task-master expand --all ### Finding the Next Task The `next` command: + - Identifies tasks that are pending/in-progress and have all dependencies satisfied - Prioritizes tasks by priority level, dependency count, and task ID - Displays comprehensive information about the selected task: @@ -929,6 +984,7 @@ The `next` command: ### Viewing Specific Task Details The `show` command: + - Displays comprehensive details about a specific task or subtask - Shows task status, priority, dependencies, and detailed implementation notes - For parent tasks, displays all subtasks and their status @@ -959,43 +1015,51 @@ The `show` command: ## Example Cursor AI Interactions ### Starting a new project + ``` -I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. +I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. Can you help me parse it and set up the initial tasks? ``` ### Working on tasks + ``` What's the next task I should work on? Please consider dependencies and priorities. ``` ### Implementing a specific task + ``` I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it? ``` ### Managing subtasks + ``` I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them? 
``` ### Handling changes + ``` We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change? ``` ### Completing work + ``` -I've finished implementing the authentication system described in task 2. All tests are passing. +I've finished implementing the authentication system described in task 2. All tests are passing. Please mark it as complete and tell me what I should work on next. ``` ### Analyzing complexity + ``` Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further? ``` ### Viewing complexity report + ``` Can you show me the complexity report in a more readable format? -``` \ No newline at end of file +``` diff --git a/mcp-server/README.md b/mcp-server/README.md deleted file mode 100644 index 9c8b1300..00000000 --- a/mcp-server/README.md +++ /dev/null @@ -1,170 +0,0 @@ -# Task Master MCP Server - -This module implements a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server for Task Master, allowing external applications to access Task Master functionality and context through a standardized API. - -## Features - -- MCP-compliant server implementation using FastMCP -- RESTful API for context management -- Authentication and authorization for secure access -- Context storage and retrieval with metadata and tagging -- Context windowing and truncation for handling size limits -- Integration with Task Master for task management operations - -## Installation - -The MCP server is included with Task Master. 
Install Task Master globally to use the MCP server: - -```bash -npm install -g task-master-ai -``` - -Or use it locally: - -```bash -npm install task-master-ai -``` - -## Environment Configuration - -The MCP server can be configured using environment variables or a `.env` file: - -| Variable | Description | Default | -| -------------------- | ---------------------------------------- | ----------------------------- | -| `MCP_SERVER_PORT` | Port for the MCP server | 3000 | -| `MCP_SERVER_HOST` | Host for the MCP server | localhost | -| `MCP_CONTEXT_DIR` | Directory for context storage | ./mcp-server/contexts | -| `MCP_API_KEYS_FILE` | File for API key storage | ./mcp-server/api-keys.json | -| `MCP_JWT_SECRET` | Secret for JWT token generation | task-master-mcp-server-secret | -| `MCP_JWT_EXPIRATION` | JWT token expiration time | 24h | -| `LOG_LEVEL` | Logging level (debug, info, warn, error) | info | - -## Getting Started - -### Starting the Server - -Start the MCP server as a standalone process: - -```bash -npx task-master-mcp-server -``` - -Or start it programmatically: - -```javascript -import { TaskMasterMCPServer } from "task-master-ai/mcp-server"; - -const server = new TaskMasterMCPServer(); -await server.start({ port: 3000, host: "localhost" }); -``` - -### Authentication - -The MCP server uses API key authentication with JWT tokens for secure access. A default admin API key is generated on first startup and can be found in the `api-keys.json` file. 
- -To get a JWT token: - -```bash -curl -X POST http://localhost:3000/auth/token \ - -H "x-api-key: YOUR_API_KEY" -``` - -Use the token for subsequent requests: - -```bash -curl http://localhost:3000/mcp/tools \ - -H "Authorization: Bearer YOUR_JWT_TOKEN" -``` - -### Creating a New API Key - -Admin users can create new API keys: - -```bash -curl -X POST http://localhost:3000/auth/api-keys \ - -H "Authorization: Bearer ADMIN_JWT_TOKEN" \ - -H "Content-Type: application/json" \ - -d '{"clientId": "user1", "role": "user"}' -``` - -## Available MCP Endpoints - -The MCP server implements the following MCP-compliant endpoints: - -### Context Management - -- `GET /mcp/context` - List all contexts -- `POST /mcp/context` - Create a new context -- `GET /mcp/context/{id}` - Get a specific context -- `PUT /mcp/context/{id}` - Update a context -- `DELETE /mcp/context/{id}` - Delete a context - -### Models - -- `GET /mcp/models` - List available models -- `GET /mcp/models/{id}` - Get model details - -### Execution - -- `POST /mcp/execute` - Execute an operation with context - -## Available MCP Tools - -The MCP server provides the following tools: - -### Context Tools - -- `createContext` - Create a new context -- `getContext` - Retrieve a context by ID -- `updateContext` - Update an existing context -- `deleteContext` - Delete a context -- `listContexts` - List available contexts -- `addTags` - Add tags to a context -- `truncateContext` - Truncate a context to a maximum size - -### Task Master Tools - -- `listTasks` - List tasks from Task Master -- `getTaskDetails` - Get detailed task information -- `executeWithContext` - Execute operations using context - -## Examples - -### Creating a Context - -```javascript -// Using the MCP client -const client = new MCPClient("http://localhost:3000"); -await client.authenticate("YOUR_API_KEY"); - -const context = await client.createContext("my-context", { - title: "My Project", - tasks: ["Implement feature X", "Fix bug Y"], -}); -``` - 
-### Executing an Operation with Context - -```javascript -// Using the MCP client -const result = await client.execute("generateTask", "my-context", { - title: "New Task", - description: "Create a new task based on context", -}); -``` - -## Integration with Other Tools - -The Task Master MCP server can be integrated with other MCP-compatible tools and clients: - -- LLM applications that support the MCP protocol -- Task management systems that support context-aware operations -- Development environments with MCP integration - -## Contributing - -Contributions are welcome! Please feel free to submit a Pull Request. - -## License - -This project is licensed under the MIT License - see the LICENSE file for details. diff --git a/mcp-server/src/tools/addTask.js b/mcp-server/src/tools/addTask.js index 0622d0e8..0b12d9fc 100644 --- a/mcp-server/src/tools/addTask.js +++ b/mcp-server/src/tools/addTask.js @@ -29,6 +29,11 @@ export function registerAddTaskTool(server) { .optional() .describe("Task priority (high, medium, low)"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -40,7 +45,14 @@ export function registerAddTaskTool(server) { if (args.priority) cmdArgs.push(`--priority=${args.priority}`); if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("add-task", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "add-task", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/expandTask.js b/mcp-server/src/tools/expandTask.js index b94d00d4..ae0b4550 100644 --- a/mcp-server/src/tools/expandTask.js +++ b/mcp-server/src/tools/expandTask.js @@ -19,7 +19,7 @@ export function registerExpandTaskTool(server) { name: "expandTask", description: "Break down a task into detailed 
subtasks", parameters: z.object({ - id: z.union([z.string(), z.number()]).describe("Task ID to expand"), + id: z.string().describe("Task ID to expand"), num: z.number().optional().describe("Number of subtasks to generate"), research: z .boolean() @@ -38,6 +38,11 @@ export function registerExpandTaskTool(server) { "Force regeneration of subtasks for tasks that already have them" ), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -50,7 +55,14 @@ export function registerExpandTaskTool(server) { if (args.force) cmdArgs.push("--force"); if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("expand", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "expand", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/listTasks.js b/mcp-server/src/tools/listTasks.js index 7da65692..af6f4844 100644 --- a/mcp-server/src/tools/listTasks.js +++ b/mcp-server/src/tools/listTasks.js @@ -25,6 +25,11 @@ export function registerListTasksTool(server) { .optional() .describe("Include subtasks in the response"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -35,12 +40,21 @@ export function registerListTasksTool(server) { if (args.withSubtasks) cmdArgs.push("--with-subtasks"); if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("list", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "list", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new 
Error(result.error); } + log.info(`Listing tasks result: ${result.stdout}`, result.stdout); + return createContentResponse(result.stdout); } catch (error) { log.error(`Error listing tasks: ${error.message}`); diff --git a/mcp-server/src/tools/nextTask.js b/mcp-server/src/tools/nextTask.js index 4003ce04..729c5fec 100644 --- a/mcp-server/src/tools/nextTask.js +++ b/mcp-server/src/tools/nextTask.js @@ -21,6 +21,11 @@ export function registerNextTaskTool(server) { "Show the next task to work on based on dependencies and status", parameters: z.object({ file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -29,7 +34,14 @@ export function registerNextTaskTool(server) { const cmdArgs = []; if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("next", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "next", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/setTaskStatus.js b/mcp-server/src/tools/setTaskStatus.js index 5681dd7b..d2c0b2c1 100644 --- a/mcp-server/src/tools/setTaskStatus.js +++ b/mcp-server/src/tools/setTaskStatus.js @@ -20,12 +20,17 @@ export function registerSetTaskStatusTool(server) { description: "Set the status of a task", parameters: z.object({ id: z - .union([z.string(), z.number()]) + .string() .describe("Task ID (can be comma-separated for multiple tasks)"), status: z .string() .describe("New status (todo, in-progress, review, done)"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { @@ -34,7 +39,14 @@ export function 
registerSetTaskStatusTool(server) { const cmdArgs = [`--id=${args.id}`, `--status=${args.status}`]; if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("set-status", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "set-status", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/showTask.js b/mcp-server/src/tools/showTask.js index c44d9463..86130570 100644 --- a/mcp-server/src/tools/showTask.js +++ b/mcp-server/src/tools/showTask.js @@ -19,17 +19,29 @@ export function registerShowTaskTool(server) { name: "showTask", description: "Show detailed information about a specific task", parameters: z.object({ - id: z.union([z.string(), z.number()]).describe("Task ID to show"), + id: z.string().describe("Task ID to show"), file: z.string().optional().describe("Path to the tasks file"), + projectRoot: z + .string() + .describe( + "Root directory of the project (default: current working directory)" + ), }), execute: async (args, { log }) => { try { log.info(`Showing task details for ID: ${args.id}`); - const cmdArgs = [args.id]; + const cmdArgs = [`--id=${args.id}`]; if (args.file) cmdArgs.push(`--file=${args.file}`); - const result = executeTaskMasterCommand("show", log, cmdArgs); + const projectRoot = args.projectRoot; + + const result = executeTaskMasterCommand( + "show", + log, + cmdArgs, + projectRoot + ); if (!result.success) { throw new Error(result.error); diff --git a/mcp-server/src/tools/utils.js b/mcp-server/src/tools/utils.js index 24745d2e..872363e0 100644 --- a/mcp-server/src/tools/utils.js +++ b/mcp-server/src/tools/utils.js @@ -10,27 +10,39 @@ import { spawnSync } from "child_process"; * @param {string} command - The command to execute * @param {Object} log - The logger object from FastMCP * @param {Array} args - Arguments for the command + * @param {string} cwd - Working directory for command 
execution (defaults to current project root) * @returns {Object} - The result of the command execution */ -export function executeTaskMasterCommand(command, log, args = []) { +export function executeTaskMasterCommand( + command, + log, + args = [], + cwd = process.cwd() +) { try { log.info( - `Executing task-master ${command} with args: ${JSON.stringify(args)}` + `Executing task-master ${command} with args: ${JSON.stringify( + args + )} in directory: ${cwd}` ); // Prepare full arguments array const fullArgs = [command, ...args]; + // Common options for spawn + const spawnOptions = { + encoding: "utf8", + cwd: cwd, + }; + // Execute the command using the global task-master CLI or local script // Try the global CLI first - let result = spawnSync("task-master", fullArgs, { encoding: "utf8" }); + let result = spawnSync("task-master", fullArgs, spawnOptions); // If global CLI is not available, try fallback to the local script if (result.error && result.error.code === "ENOENT") { log.info("Global task-master not found, falling back to local script"); - result = spawnSync("node", ["scripts/dev.js", ...fullArgs], { - encoding: "utf8", - }); + result = spawnSync("node", ["scripts/dev.js", ...fullArgs], spawnOptions); } if (result.error) { @@ -38,8 +50,14 @@ export function executeTaskMasterCommand(command, log, args = []) { } if (result.status !== 0) { + // Improve error handling by combining stderr and stdout if stderr is empty + const errorOutput = result.stderr + ? result.stderr.trim() + : result.stdout + ? 
result.stdout.trim() + : "Unknown error"; throw new Error( - `Command failed with exit code ${result.status}: ${result.stderr}` + `Command failed with exit code ${result.status}: ${errorOutput}` ); } From ad3a58ba3e1d40c8f82647729719d43d045b44e3 Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Thu, 27 Mar 2025 18:44:17 +0100 Subject: [PATCH 14/16] chore: cleanup --- .cursor/mcp.json | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 .cursor/mcp.json diff --git a/.cursor/mcp.json b/.cursor/mcp.json deleted file mode 100644 index e69de29b..00000000 From 71fe603e03c59ec1147bbbba0f53abb479dc4217 Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Fri, 28 Mar 2025 15:25:31 +0100 Subject: [PATCH 15/16] fix: apply @rtuin suggestions --- README-task-master.md | 2 +- README.md | 37 ++++++++++++++++++++++++++++++++++--- 2 files changed, 35 insertions(+), 4 deletions(-) diff --git a/README-task-master.md b/README-task-master.md index 26cce92b..d6485936 100644 --- a/README-task-master.md +++ b/README-task-master.md @@ -136,7 +136,7 @@ To enable enhanced task management capabilities directly within Cursor using the 4. Configure with the following details: - Name: "Task Master" - Type: "Command" - - Command: "npx -y task-master-mcp" + - Command: "npx -y --package task-master-ai task-master-mcp" 5. Save the settings Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience. diff --git a/README.md b/README.md index b0803a99..ddcdd4dd 100644 --- a/README.md +++ b/README.md @@ -136,7 +136,7 @@ To enable enhanced task management capabilities directly within Cursor using the 4. Configure with the following details: - Name: "Task Master" - Type: "Command" - - Command: "npx -y task-master-mcp" + - Command: "npx -y --package task-master-ai task-master-mcp" 5. 
Save the settings Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience. @@ -469,7 +469,7 @@ task-master fix-dependencies ### Add a New Task -```bash +````bash # Add a new task using AI task-master add-task --prompt="Description of the new task" @@ -517,7 +517,7 @@ npm install -g task-master-ai # OR install locally within your project npm install task-master-ai -``` +```` ### Initialize a new project @@ -611,11 +611,13 @@ Please use the task-master parse-prd command to generate tasks from my PRD. The ``` The agent will execute: + ```bash task-master parse-prd scripts/prd.txt ``` This will: + - Parse your PRD document - Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies - The agent will understand this process due to the Cursor rules @@ -629,6 +631,7 @@ Please generate individual task files from tasks.json ``` The agent will execute: + ```bash task-master generate ``` @@ -648,6 +651,7 @@ What tasks are available to work on next? ``` The agent will: + - Run `task-master list` to see all tasks - Run `task-master next` to determine the next task to work on - Analyze dependencies to determine which tasks are ready to be worked on @@ -657,12 +661,14 @@ The agent will: ### 2. Task Implementation When implementing a task, the agent will: + - Reference the task's details section for implementation specifics - Consider dependencies on previous tasks - Follow the project's coding standards - Create appropriate tests based on the task's testStrategy You can ask: + ``` Let's implement task 3. What does it involve? ``` @@ -670,6 +676,7 @@ Let's implement task 3. What does it involve? ### 3. Task Verification Before marking a task as complete, verify it according to: + - The task's specified testStrategy - Any automated tests in the codebase - Manual verification if required @@ -683,6 +690,7 @@ Task 3 is now complete. 
Please update its status. ``` The agent will execute: + ```bash task-master set-status --id=3 --status=done ``` @@ -690,16 +698,19 @@ task-master set-status --id=3 --status=done ### 5. Handling Implementation Drift If during implementation, you discover that: + - The current approach differs significantly from what was planned - Future tasks need to be modified due to current implementation choices - New dependencies or requirements have emerged Tell the agent: + ``` We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change. ``` The agent will execute: + ```bash task-master update --from=4 --prompt="Now we are using Express instead of Fastify." ``` @@ -715,36 +726,43 @@ Task 5 seems complex. Can you break it down into subtasks? ``` The agent will execute: + ```bash task-master expand --id=5 --num=3 ``` You can provide additional context: + ``` Please break down task 5 with a focus on security considerations. ``` The agent will execute: + ```bash task-master expand --id=5 --prompt="Focus on security aspects" ``` You can also expand all pending tasks: + ``` Please break down all pending tasks into subtasks. ``` The agent will execute: + ```bash task-master expand --all ``` For research-backed subtask generation using Perplexity AI: + ``` Please break down task 5 using research-backed generation. 
``` The agent will execute: + ```bash task-master expand --id=5 --research ``` @@ -754,6 +772,7 @@ task-master expand --id=5 --research Here's a comprehensive reference of all available commands: ### Parse PRD + ```bash # Parse a PRD file and generate tasks task-master parse-prd <prd-file.txt> @@ -763,6 +782,7 @@ task-master parse-prd <prd-file.txt> --num-tasks=10 ``` ### List Tasks + ```bash # List all tasks task-master list @@ -778,12 +798,14 @@ task-master list --status=<status> --with-subtasks ``` ### Show Next Task + ```bash # Show the next task to work on based on dependencies and status task-master next ``` ### Show Specific Task + ```bash # Show details of a specific task task-master show <id> @@ -795,18 +817,21 @@ task-master show 1.2 ``` ### Update Tasks + ```bash # Update tasks from a specific ID and provide context task-master update --from=<id> --prompt="<prompt>" ``` ### Generate Task Files + ```bash # Generate individual task files from tasks.json task-master generate ``` ### Set Task Status + ```bash # Set status of a single task task-master set-status --id=<id> --status=<status> @@ -821,6 +846,7 @@ task-master set-status --id=1.1,1.2 --status=<status> When marking a task as "done", all of its subtasks will automatically be marked as "done" as well. 
### Expand Tasks + ```bash # Expand a specific task with subtasks task-master expand --id=<id> --num=<number> @@ -842,6 +868,7 @@ task-master expand --all --research ``` ### Clear Subtasks + ```bash # Clear subtasks from a specific task task-master clear-subtasks --id=<id> @@ -854,6 +881,7 @@ task-master clear-subtasks --all ``` ### Analyze Task Complexity + ```bash # Analyze complexity of all tasks task-master analyze-complexity @@ -875,6 +903,7 @@ task-master analyze-complexity --research ``` ### View Complexity Report + ```bash # Display the task complexity analysis report task-master complexity-report @@ -884,6 +913,7 @@ task-master complexity-report --file=my-report.json ``` ### Managing Task Dependencies + ```bash # Add a dependency to a task task-master add-dependency --id=<id> --depends-on=<id> @@ -899,6 +929,7 @@ task-master fix-dependencies ``` ### Add a New Task + ```bash # Add a new task using AI task-master add-task --prompt="Description of the new task" From 2faa5755f7c647e048d73af12eb1b92b81728e58 Mon Sep 17 00:00:00 2001 From: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Date: Fri, 28 Mar 2025 20:38:37 +0100 Subject: [PATCH 16/16] chore: add changeset for PR --- .changeset/odd-weeks-melt.md | 5 +++++ 1 file changed, 5 insertions(+) create mode 100644 .changeset/odd-weeks-melt.md diff --git a/.changeset/odd-weeks-melt.md b/.changeset/odd-weeks-melt.md new file mode 100644 index 00000000..840d4756 --- /dev/null +++ b/.changeset/odd-weeks-melt.md @@ -0,0 +1,5 @@ +--- +"task-master-ai": minor +--- + +Implement MCP server for all commands using tools.
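
Note on the final patch: the `.changeset/odd-weeks-melt.md` file it adds drives automated versioning. A minimal sketch of how such a file is produced and consumed, assuming the standard `@changesets/cli` workflow (the repo's actual release pipeline is not shown in this patch series):

```shell
# A changeset is normally created interactively, prompting for the
# affected packages and bump type:
#   npx changeset

# The resulting markdown file has YAML front matter naming the package
# and semver bump, followed by the changelog entry:
cat > /tmp/odd-weeks-melt.md <<'EOF'
---
"task-master-ai": minor
---

Implement MCP server for all commands using tools.
EOF

# At release time, pending changesets are applied to bump versions and
# update changelogs, then the packages are published:
#   npx changeset version
#   npx changeset publish

# Inspect the file we just wrote:
grep '"task-master-ai": minor' /tmp/odd-weeks-melt.md
```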