refactor(mcp): apply withNormalizedProjectRoot HOF to update tool

Problem: The `update` MCP tool previously handled project root acquisition and path resolution within its `execute` method, leading to potential inconsistencies and repetition.

Solution: Refactored the `update` tool to utilize the new `withNormalizedProjectRoot` Higher-Order Function (HOF) exported from the tools' shared `utils.js`.

Specific Changes:
- Imported the `withNormalizedProjectRoot` HOF alongside the existing `handleApiResult` and `createErrorResponse` imports.
- Updated the Zod schema for the `projectRoot` parameter to be optional, as the HOF handles deriving it from the session if not provided.
- Wrapped the entire `execute` function body with the `withNormalizedProjectRoot` HOF.
- Removed the manual project root acquisition and `path.isAbsolute` validation from within the `execute` function body.
- Destructured `projectRoot` from the `args` object received by the wrapped `execute` function, ensuring it's the normalized path provided by the HOF.
- Used the normalized `projectRoot` variable when calling `findTasksJsonPath` and when passing arguments to `updateTasksDirect`.

This change standardizes project root handling for the `update` tool, simplifies its `execute` method, and ensures consistent path normalization. It serves as the pattern for refactoring the other MCP tools.
Author: Eyal Toledano
Date: 2025-05-02 02:14:32 -04:00
Commit: 9d437f8594 (parent: ad89253e31)

4 changed files with 111 additions and 36 deletions
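The HOF itself is not shown in the diff below. For reference, here is a minimal sketch of the shape such a wrapper might take; the helper implementations and the session shape are illustrative assumptions, not the actual `utils.js` code:

```javascript
import path from 'path';

// Illustrative stand-ins only; the real utils.js presumably does more
// (the session shape and error format here are assumptions).
const getProjectRootFromSession = (session) => session?.roots?.[0]?.uri ?? null;
const normalizeProjectRoot = (root) =>
	path.resolve(String(root).replace(/^file:\/\//, ''));
const createErrorResponse = (msg) => ({
	success: false,
	error: { message: msg }
});

// The HOF pattern: resolve and normalize projectRoot once, then inject it
// into args before invoking the wrapped execute function.
export function withNormalizedProjectRoot(executeFn) {
	return async (args, context) => {
		const { session, log } = context;
		// Prefer an explicit projectRoot argument; otherwise derive one from the session.
		const rawRoot = args.projectRoot || getProjectRootFromSession(session);
		if (!rawRoot) {
			return createErrorResponse(
				'Could not determine project root from args or session.'
			);
		}
		const projectRoot = normalizeProjectRoot(rawRoot);
		log?.info?.(`Normalized project root: ${projectRoot}`);
		return executeFn({ ...args, projectRoot }, context);
	};
}
```

Under this scheme each tool's `execute` receives an already-normalized `projectRoot` and can drop its own validation boilerplate.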

mcp-server/src/tools/update.js

@@ -4,10 +4,13 @@
 */
 import { z } from 'zod';
-import { handleApiResult, createErrorResponse } from './utils.js';
+import {
+	handleApiResult,
+	createErrorResponse,
+	withNormalizedProjectRoot
+} from './utils.js';
 import { updateTasksDirect } from '../core/task-master-core.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
-import path from 'path';

 /**
  * Register the update tool with the MCP server
@@ -31,59 +34,61 @@ export function registerUpdateTool(server) {
 				.boolean()
 				.optional()
 				.describe('Use Perplexity AI for research-backed updates'),
-			file: z.string().optional().describe('Absolute path to the tasks file'),
+			file: z
+				.string()
+				.optional()
+				.describe('Path to the tasks file relative to project root'),
 			projectRoot: z
 				.string()
-				.describe('The directory of the project. Must be an absolute path.')
+				.optional()
+				.describe(
+					'The directory of the project. (Optional, usually from session)'
+				)
 		}),
-		execute: async (args, { log, session }) => {
+		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
+			const toolName = 'update';
+			const { from, prompt, research, file, projectRoot } = args;
+
 			try {
-				log.info(`Executing update tool with args: ${JSON.stringify(args)}`);
+				log.info(
+					`Executing ${toolName} tool with normalized root: ${projectRoot}`
+				);

-				// 1. Get Project Root
-				const rootFolder = args.projectRoot;
-				if (!rootFolder || !path.isAbsolute(rootFolder)) {
-					return createErrorResponse(
-						'projectRoot is required and must be absolute.'
-					);
-				}
-				log.info(`Project root: ${rootFolder}`);
-
-				// 2. Resolve Path
 				let tasksJsonPath;
 				try {
-					tasksJsonPath = findTasksJsonPath(
-						{ projectRoot: rootFolder, file: args.file },
-						log
-					);
-					log.info(`Resolved tasks path: ${tasksJsonPath}`);
+					tasksJsonPath = findTasksJsonPath({ projectRoot, file }, log);
+					log.info(`${toolName}: Resolved tasks path: ${tasksJsonPath}`);
 				} catch (error) {
-					log.error(`Error finding tasks.json: ${error.message}`);
+					log.error(`${toolName}: Error finding tasks.json: ${error.message}`);
 					return createErrorResponse(
-						`Failed to find tasks.json: ${error.message}`
+						`Failed to find tasks.json within project root '${projectRoot}': ${error.message}`
 					);
 				}

-				// 3. Call Direct Function
 				const result = await updateTasksDirect(
 					{
 						tasksJsonPath: tasksJsonPath,
-						from: args.from,
-						prompt: args.prompt,
-						research: args.research,
-						projectRoot: rootFolder
+						from: from,
+						prompt: prompt,
+						research: research,
+						projectRoot: projectRoot
 					},
 					log,
 					{ session }
 				);

-				// 4. Handle Result
-				log.info(`updateTasksDirect result: success=${result.success}`);
+				log.info(
+					`${toolName}: Direct function result: success=${result.success}`
+				);
 				return handleApiResult(result, log, 'Error updating tasks');
 			} catch (error) {
-				log.error(`Critical error in update tool execute: ${error.message}`);
-				return createErrorResponse(`Internal tool error: ${error.message}`);
+				log.error(
+					`Critical error in ${toolName} tool execute: ${error.message}`
+				);
+				return createErrorResponse(
+					`Internal tool error (${toolName}): ${error.message}`
+				);
 			}
-		}
+		})
 	});
 }

tasks/task_075.txt

@@ -5,7 +5,7 @@
# Priority: medium
# Description: Update the AI service layer to enable Google Search Grounding specifically when a Google model is used in the 'research' role.
# Details:
-**Goal:** Conditionally enable Google Search Grounding based on the AI role.\n\n**Implementation Plan:**\n\n1. **Modify `ai-services-unified.js`:** Update `generateTextService`, `streamTextService`, and `generateObjectService`.\n2. **Conditional Logic:** Inside these functions, check if `providerName === 'google'` AND `role === 'research'`.\n3. **Construct `providerOptions`:** If the condition is met, create an options object:\n ```javascript\n let providerSpecificOptions = {};\n if (providerName === 'google' && role === 'research') {\n log('info', 'Enabling Google Search Grounding for research role.');\n providerSpecificOptions = {\n google: {\n useSearchGrounding: true,\n // Optional: Add dynamic retrieval for compatible models\n // dynamicRetrievalConfig: { mode: 'MODE_DYNAMIC' } \n }\n };\n }\n ```\n4. **Pass Options to SDK:** Pass `providerSpecificOptions` to the Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`) via the `providerOptions` parameter:\n ```javascript\n const { text, ... } = await generateText({\n // ... other params\n providerOptions: providerSpecificOptions \n });\n ```\n5. **Update `supported-models.json`:** Ensure Google models intended for research (e.g., `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`) include `'research'` in their `allowed_roles` array.\n\n**Rationale:** This approach maintains the clear separation between 'main' and 'research' roles, ensuring grounding is only activated when explicitly requested via the `--research` flag or when the research model is invoked.
+**Goal:** Conditionally enable Google Search Grounding based on the AI role.\n\n**Implementation Plan:**\n\n1. **Modify `ai-services-unified.js`:** Update `generateTextService`, `streamTextService`, and `generateObjectService`.\n2. **Conditional Logic:** Inside these functions, check if `providerName === 'google'` AND `role === 'research'`.\n3. **Construct `providerOptions`:** If the condition is met, create an options object:\n ```javascript\n let providerSpecificOptions = {};\n if (providerName === 'google' && role === 'research') {\n log('info', 'Enabling Google Search Grounding for research role.');\n providerSpecificOptions = {\n google: {\n useSearchGrounding: true,\n // Optional: Add dynamic retrieval for compatible models\n // dynamicRetrievalConfig: { mode: 'MODE_DYNAMIC' } \n }\n };\n }\n ```\n4. **Pass Options to SDK:** Pass `providerSpecificOptions` to the Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`) via the `providerOptions` parameter:\n ```javascript\n const { text, ... } = await generateText({\n // ... other params\n providerOptions: providerSpecificOptions \n });\n ```\n5. **Update `supported-models.json`:** Ensure Google models intended for research (e.g., `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`) include `'research'` in their `allowed_roles` array.\n\n**Rationale:** This approach maintains the clear separation between 'main' and 'research' roles, ensuring grounding is only activated when explicitly requested via the `--research` flag or when the research model is invoked.\n\n**Clarification:** The Search Grounding feature is specifically designed to provide up-to-date information from the web when using Google models. This implementation ensures that grounding is only activated in research contexts where current information is needed, while preserving normal operation for standard tasks. The `useSearchGrounding: true` flag instructs the Google API to augment the model's knowledge with recent web search results relevant to the query.
# Test Strategy:
1. Configure a Google model (e.g., gemini-1.5-flash-latest) as the 'research' model in `.taskmasterconfig`.\n2. Run a command with the `--research` flag (e.g., `task-master add-task --prompt='Latest news on AI SDK 4.2' --research`).\n3. Verify logs show 'Enabling Google Search Grounding'.\n4. Check if the task output incorporates recent information.\n5. Configure the same Google model as the 'main' model.\n6. Run a command *without* the `--research` flag.\n7. Verify logs *do not* show grounding being enabled.\n8. Add unit tests to `ai-services-unified.test.js` to verify the conditional logic for adding `providerOptions`. Ensure mocks correctly simulate different roles and providers.
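For readability, here is the plan's embedded snippet restated as a self-contained sketch; the `providerOptions` shape and the model id follow the plan text above rather than a verified SDK contract:

```javascript
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

// Conditionally enable grounding only for the research role, per the plan;
// the providerOptions shape mirrors the snippet above (an assumption).
async function generateResearchText(prompt, providerName, role) {
	let providerSpecificOptions = {};
	if (providerName === 'google' && role === 'research') {
		providerSpecificOptions = {
			google: { useSearchGrounding: true }
		};
	}
	const { text } = await generateText({
		model: google('gemini-1.5-flash-latest'),
		prompt,
		providerOptions: providerSpecificOptions
	});
	return text;
}
```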

tasks/task_076.txt (new file, +59 lines)

@@ -0,0 +1,59 @@
# Task ID: 76
# Title: Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)
# Status: pending
# Dependencies: None
# Priority: high
# Description: Design and implement an end-to-end (E2E) test framework for the Taskmaster MCP server, enabling programmatic interaction with the FastMCP server over stdio by sending and receiving JSON tool request/response messages.
# Details:
Research existing E2E testing approaches for MCP servers, referencing examples such as the MCP Server E2E Testing Example. Architect a test harness (preferably in Python or Node.js) that can launch the FastMCP server as a subprocess, establish stdio communication, and send well-formed JSON tool request messages.
Implementation details:
1. Use `subprocess.Popen` (Python) or `child_process.spawn` (Node.js) to launch the FastMCP server with appropriate stdin/stdout pipes
2. Implement a message protocol handler that formats JSON requests with proper line endings and message boundaries
3. Create a buffered reader for stdout that correctly handles chunked responses and reconstructs complete JSON objects
4. Develop a request/response correlation mechanism using unique IDs for each request
5. Implement timeout handling for requests that don't receive responses
Implement robust parsing of JSON responses, including error handling for malformed or unexpected output. The framework should support defining test cases as scripts or data files, allowing for easy addition of new scenarios.
Test case structure should include:
- Setup phase for environment preparation
- Sequence of tool requests with expected responses
- Validation functions for response verification
- Teardown phase for cleanup
Ensure the framework can assert on both the structure and content of responses, and provide clear logging for debugging. Document setup, usage, and extension instructions. Consider cross-platform compatibility and CI integration.
**Clarification:** The E2E test framework should focus on testing the FastMCP server's ability to correctly process tool requests and return appropriate responses. This includes verifying that the server properly handles different types of tool calls (e.g., file operations, web requests, task management), validates input parameters, and returns well-structured responses. The framework should be designed to be extensible, allowing new test cases to be added as the server's capabilities evolve. Tests should cover both happy paths and error conditions to ensure robust server behavior under various scenarios.
# Test Strategy:
Verify the framework by implementing a suite of representative E2E tests that cover typical tool requests and edge cases. Specific test cases should include:
1. Basic tool request/response validation
- Send a simple file_read request and verify response structure
- Test with valid and invalid file paths
- Verify error handling for non-existent files
2. Concurrent request handling
- Send multiple requests in rapid succession
- Verify all responses are received and correlated correctly
3. Large payload testing
- Test with large file contents (>1MB)
- Verify correct handling of chunked responses
4. Error condition testing
- Malformed JSON requests
- Invalid tool names
- Missing required parameters
- Server crash recovery
Confirm that tests can start and stop the FastMCP server, send requests, and accurately parse and validate responses. Implement specific assertions for response timing, structure validation using JSON schema, and content verification. Intentionally introduce malformed requests and simulate server errors to ensure robust error handling.
Implement detailed logging with different verbosity levels:
- ERROR: Failed tests and critical issues
- WARNING: Unexpected but non-fatal conditions
- INFO: Test progress and results
- DEBUG: Raw request/response data
Run the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.
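For illustration, a minimal sketch of the harness described above, assuming the server speaks newline-delimited JSON-RPC over stdio; the entry point path and message framing are assumptions:

```javascript
import { spawn } from 'child_process';
import readline from 'readline';

// Launch the FastMCP server as a subprocess; the entry point path is hypothetical.
const server = spawn('node', ['mcp-server/server.js'], {
	stdio: ['pipe', 'pipe', 'inherit']
});

// Buffered line reader: reconstructs complete JSON messages from stdout chunks.
const rl = readline.createInterface({ input: server.stdout });

const pending = new Map(); // request id -> { resolve, reject, timer }
let nextId = 1;

rl.on('line', (line) => {
	let msg;
	try {
		msg = JSON.parse(line); // tolerate malformed or non-JSON output
	} catch {
		return;
	}
	const entry = pending.get(msg.id);
	if (entry) {
		clearTimeout(entry.timer);
		pending.delete(msg.id);
		entry.resolve(msg);
	}
});

// Correlate each request with its response via a unique id, with a timeout.
function sendRequest(method, params, timeoutMs = 10000) {
	const id = nextId++;
	return new Promise((resolve, reject) => {
		const timer = setTimeout(() => {
			pending.delete(id);
			reject(new Error(`Request ${id} (${method}) timed out after ${timeoutMs}ms`));
		}, timeoutMs);
		pending.set(id, { resolve, reject, timer });
		server.stdin.write(JSON.stringify({ jsonrpc: '2.0', id, method, params }) + '\n');
	});
}
```

A test case then reduces to a sequence of `sendRequest` calls with assertions on the resolved responses, bracketed by setup that spawns the subprocess and teardown that kills it.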

tasks/tasks.json

@@ -3943,11 +3943,22 @@
"id": 75,
"title": "Integrate Google Search Grounding for Research Role",
"description": "Update the AI service layer to enable Google Search Grounding specifically when a Google model is used in the 'research' role.",
"details": "**Goal:** Conditionally enable Google Search Grounding based on the AI role.\\n\\n**Implementation Plan:**\\n\\n1. **Modify `ai-services-unified.js`:** Update `generateTextService`, `streamTextService`, and `generateObjectService`.\\n2. **Conditional Logic:** Inside these functions, check if `providerName === 'google'` AND `role === 'research'`.\\n3. **Construct `providerOptions`:** If the condition is met, create an options object:\\n ```javascript\\n let providerSpecificOptions = {};\\n if (providerName === 'google' && role === 'research') {\\n log('info', 'Enabling Google Search Grounding for research role.');\\n providerSpecificOptions = {\\n google: {\\n useSearchGrounding: true,\\n // Optional: Add dynamic retrieval for compatible models\\n // dynamicRetrievalConfig: { mode: 'MODE_DYNAMIC' } \\n }\\n };\\n }\\n ```\\n4. **Pass Options to SDK:** Pass `providerSpecificOptions` to the Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`) via the `providerOptions` parameter:\\n ```javascript\\n const { text, ... } = await generateText({\\n // ... other params\\n providerOptions: providerSpecificOptions \\n });\\n ```\\n5. **Update `supported-models.json`:** Ensure Google models intended for research (e.g., `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`) include `'research'` in their `allowed_roles` array.\\n\\n**Rationale:** This approach maintains the clear separation between 'main' and 'research' roles, ensuring grounding is only activated when explicitly requested via the `--research` flag or when the research model is invoked.",
"testStrategy": "1. Configure a Google model (e.g., gemini-1.5-flash-latest) as the 'research' model in `.taskmasterconfig`.\\n2. Run a command with the `--research` flag (e.g., `task-master add-task --prompt='Latest news on AI SDK 4.2' --research`).\\n3. Verify logs show 'Enabling Google Search Grounding'.\\n4. Check if the task output incorporates recent information.\\n5. Configure the same Google model as the 'main' model.\\n6. Run a command *without* the `--research` flag.\\n7. Verify logs *do not* show grounding being enabled.\\n8. Add unit tests to `ai-services-unified.test.js` to verify the conditional logic for adding `providerOptions`. Ensure mocks correctly simulate different roles and providers.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "**Goal:** Conditionally enable Google Search Grounding based on the AI role.\\n\\n**Implementation Plan:**\\n\\n1. **Modify `ai-services-unified.js`:** Update `generateTextService`, `streamTextService`, and `generateObjectService`.\\n2. **Conditional Logic:** Inside these functions, check if `providerName === 'google'` AND `role === 'research'`.\\n3. **Construct `providerOptions`:** If the condition is met, create an options object:\\n ```javascript\\n let providerSpecificOptions = {};\\n if (providerName === 'google' && role === 'research') {\\n log('info', 'Enabling Google Search Grounding for research role.');\\n providerSpecificOptions = {\\n google: {\\n useSearchGrounding: true,\\n // Optional: Add dynamic retrieval for compatible models\\n // dynamicRetrievalConfig: { mode: 'MODE_DYNAMIC' } \\n }\\n };\\n }\\n ```\\n4. **Pass Options to SDK:** Pass `providerSpecificOptions` to the Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`) via the `providerOptions` parameter:\\n ```javascript\\n const { text, ... } = await generateText({\\n // ... other params\\n providerOptions: providerSpecificOptions \\n });\\n ```\\n5. **Update `supported-models.json`:** Ensure Google models intended for research (e.g., `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`) include `'research'` in their `allowed_roles` array.\\n\\n**Rationale:** This approach maintains the clear separation between 'main' and 'research' roles, ensuring grounding is only activated when explicitly requested via the `--research` flag or when the research model is invoked.\\n\\n**Clarification:** The Search Grounding feature is specifically designed to provide up-to-date information from the web when using Google models. This implementation ensures that grounding is only activated in research contexts where current information is needed, while preserving normal operation for standard tasks. The `useSearchGrounding: true` flag instructs the Google API to augment the model's knowledge with recent web search results relevant to the query.",
"testStrategy": "1. Configure a Google model (e.g., gemini-1.5-flash-latest) as the 'research' model in `.taskmasterconfig`.\\n2. Run a command with the `--research` flag (e.g., `task-master add-task --prompt='Latest news on AI SDK 4.2' --research`).\\n3. Verify logs show 'Enabling Google Search Grounding'.\\n4. Check if the task output incorporates recent information.\\n5. Configure the same Google model as the 'main' model.\\n6. Run a command *without* the `--research` flag.\\n7. Verify logs *do not* show grounding being enabled.\\n8. Add unit tests to `ai-services-unified.test.js` to verify the conditional logic for adding `providerOptions`. Ensure mocks correctly simulate different roles and providers.",
"subtasks": []
},
+{
+"id": 76,
+"title": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
+"description": "Design and implement an end-to-end (E2E) test framework for the Taskmaster MCP server, enabling programmatic interaction with the FastMCP server over stdio by sending and receiving JSON tool request/response messages.",
+"status": "pending",
+"dependencies": [],
+"priority": "high",
+"details": "Research existing E2E testing approaches for MCP servers, referencing examples such as the MCP Server E2E Testing Example. Architect a test harness (preferably in Python or Node.js) that can launch the FastMCP server as a subprocess, establish stdio communication, and send well-formed JSON tool request messages. \n\nImplementation details:\n1. Use `subprocess.Popen` (Python) or `child_process.spawn` (Node.js) to launch the FastMCP server with appropriate stdin/stdout pipes\n2. Implement a message protocol handler that formats JSON requests with proper line endings and message boundaries\n3. Create a buffered reader for stdout that correctly handles chunked responses and reconstructs complete JSON objects\n4. Develop a request/response correlation mechanism using unique IDs for each request\n5. Implement timeout handling for requests that don't receive responses\n\nImplement robust parsing of JSON responses, including error handling for malformed or unexpected output. The framework should support defining test cases as scripts or data files, allowing for easy addition of new scenarios. \n\nTest case structure should include:\n- Setup phase for environment preparation\n- Sequence of tool requests with expected responses\n- Validation functions for response verification\n- Teardown phase for cleanup\n\nEnsure the framework can assert on both the structure and content of responses, and provide clear logging for debugging. Document setup, usage, and extension instructions. Consider cross-platform compatibility and CI integration.\n\n**Clarification:** The E2E test framework should focus on testing the FastMCP server's ability to correctly process tool requests and return appropriate responses. This includes verifying that the server properly handles different types of tool calls (e.g., file operations, web requests, task management), validates input parameters, and returns well-structured responses. The framework should be designed to be extensible, allowing new test cases to be added as the server's capabilities evolve. Tests should cover both happy paths and error conditions to ensure robust server behavior under various scenarios.",
+"testStrategy": "Verify the framework by implementing a suite of representative E2E tests that cover typical tool requests and edge cases. Specific test cases should include:\n\n1. Basic tool request/response validation\n - Send a simple file_read request and verify response structure\n - Test with valid and invalid file paths\n - Verify error handling for non-existent files\n\n2. Concurrent request handling\n - Send multiple requests in rapid succession\n - Verify all responses are received and correlated correctly\n\n3. Large payload testing\n - Test with large file contents (>1MB)\n - Verify correct handling of chunked responses\n\n4. Error condition testing\n - Malformed JSON requests\n - Invalid tool names\n - Missing required parameters\n - Server crash recovery\n\nConfirm that tests can start and stop the FastMCP server, send requests, and accurately parse and validate responses. Implement specific assertions for response timing, structure validation using JSON schema, and content verification. Intentionally introduce malformed requests and simulate server errors to ensure robust error handling. \n\nImplement detailed logging with different verbosity levels:\n- ERROR: Failed tests and critical issues\n- WARNING: Unexpected but non-fatal conditions\n- INFO: Test progress and results\n- DEBUG: Raw request/response data\n\nRun the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.",
+"subtasks": []
+}
]