@@ -2094,7 +2094,7 @@
"id": 32,
"title": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
"description": "Create a new \"learn\" command that analyzes Cursor's chat history and code changes to automatically generate or update rule files in the .cursor/rules directory, following the cursor_rules.mdc template format. This command will help Cursor autonomously improve its ability to follow development standards by learning from successful implementations.",
-"status": "pending",
+"status": "deferred",
"dependencies": [],
"priority": "high",
"details": "Implement a new command in the task-master CLI that enables Cursor to learn from successful coding patterns and chat interactions:\n\nKey Components:\n1. Cursor Data Analysis\n - Access and parse Cursor's chat history from ~/Library/Application Support/Cursor/User/History\n - Extract relevant patterns, corrections, and successful implementations\n - Track file changes and their associated chat context\n\n2. Rule Management\n - Use cursor_rules.mdc as the template for all rule file formatting\n - Manage rule files in .cursor/rules directory\n - Support both creation and updates of rule files\n - Categorize rules based on context (testing, components, API, etc.)\n\n3. AI Integration\n - Utilize ai-services.js to interact with Claude\n - Provide comprehensive context including:\n * Relevant chat history showing the evolution of solutions\n * Code changes and their outcomes\n * Existing rules and template structure\n - Generate or update rules while maintaining template consistency\n\n4. Implementation Requirements:\n - Automatic triggering after task completion (configurable)\n - Manual triggering via CLI command\n - Proper error handling for missing or corrupt files\n - Validation against cursor_rules.mdc template\n - Performance optimization for large histories\n - Clear logging and progress indication\n\n5. Key Files:\n - commands/learn.js: Main command implementation\n - rules/cursor-rules-manager.js: Rule file management\n - utils/chat-history-analyzer.js: Cursor chat analysis\n - index.js: Command registration\n\n6. Security Considerations:\n - Safe file system operations\n - Proper error handling for inaccessible files\n - Validation of generated rules\n - Backup of existing rules before updates",
@@ -2607,7 +2607,7 @@
"id": 43,
"title": "Add Research Flag to Add-Task Command",
"description": "Implement a '--research' flag for the add-task command that enables users to automatically generate research-related subtasks when creating a new task.",
-"status": "pending",
+"status": "done",
"dependencies": [],
"priority": "medium",
"details": "Modify the add-task command to accept a new optional flag '--research'. When this flag is provided, the system should automatically generate and attach a set of research-oriented subtasks to the newly created task. These subtasks should follow a standard research methodology structure:\n\n1. Background Investigation: Research existing solutions and approaches\n2. Requirements Analysis: Define specific requirements and constraints\n3. Technology/Tool Evaluation: Compare potential technologies or tools for implementation\n4. Proof of Concept: Create a minimal implementation to validate approach\n5. Documentation: Document findings and recommendations\n\nThe implementation should:\n- Update the command-line argument parser to recognize the new flag\n- Create a dedicated function to generate the research subtasks with appropriate descriptions\n- Ensure subtasks are properly linked to the parent task\n- Update help documentation to explain the new flag\n- Maintain backward compatibility with existing add-task functionality\n\nThe research subtasks should be customized based on the main task's title and description when possible, rather than using generic templates.",
@@ -3026,7 +3026,7 @@
"description": "Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.",
"details": "### Proposed Solution\nImplement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:\n\n- `task-master models`: Lists currently configured models for main operations and research.\n- `task-master models --set-main=\"<model_name>\" --set-research=\"<model_name>\"`: Sets the desired models for main operations and research tasks respectively.\n\nSupported AI Models:\n- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter\n- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok\n\nIf a user specifies an invalid model, the CLI lists available models clearly.\n\n### Example CLI Usage\n\nList current models:\n```shell\ntask-master models\n```\nOutput example:\n```\nCurrent AI Model Configuration:\n- Main Operations: Claude\n- Research Operations: Perplexity\n```\n\nSet new models:\n```shell\ntask-master models --set-main=\"gemini\" --set-research=\"grok\"\n```\n\nAttempt invalid model:\n```shell\ntask-master models --set-main=\"invalidModel\"\n```\nOutput example:\n```\nError: \"invalidModel\" is not a valid model.\n\nAvailable models for Main Operations:\n- claude\n- openai\n- ollama\n- gemini\n- openrouter\n```\n\n### High-Level Workflow\n1. Update CLI parsing logic to handle new `models` command and associated flags.\n2. Consolidate all AI calls into `ai-services.js` for centralized management.\n3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:\n - Claude (existing)\n - Perplexity (existing)\n - OpenAI\n - Ollama\n - Gemini\n - OpenRouter\n - Grok\n4. Update environment variables and provide clear documentation in `.env_example`:\n```env\n# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter\nMAIN_MODEL=claude\n\n# RESEARCH_MODEL options: perplexity, openai, ollama, grok\nRESEARCH_MODEL=perplexity\n```\n5. Ensure dynamic model switching via environment variables or configuration management.\n6. 
Provide clear CLI feedback and validation of model names.\n\n### Vercel AI SDK Integration\n- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.\n- Implement a configuration layer to map model names to their respective Vercel SDK integrations.\n- Example pattern for integration:\n```javascript\nimport { createClient } from '@vercel/ai';\n\nconst clients = {\n claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),\n openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),\n ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),\n gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),\n openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),\n perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),\n grok: createClient({ provider: 'xai', apiKey: process.env.XAI_API_KEY })\n};\n\nexport function getClient(model) {\n if (!clients[model]) {\n throw new Error(`Invalid model: ${model}`);\n }\n return clients[model];\n}\n```\n- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.\n- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.\n\n### Key Elements\n- Enhanced model visibility and intuitive management commands.\n- Centralized and robust handling of AI API integrations via Vercel AI SDK.\n- Clear CLI responses with detailed validation feedback.\n- Flexible, easy-to-understand environment configuration.\n\n### Implementation Considerations\n- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).\n- Ensure comprehensive error handling for invalid model selections.\n- Clearly document environment variable options and their purposes.\n- Validate model names rigorously to prevent runtime errors.\n\n### Out of Scope (Future Considerations)\n- Automatic benchmarking or model performance comparison.\n- Dynamic runtime switching of models based on task type or complexity.",
"testStrategy": "### Test Strategy\n1. **Unit Tests**:\n - Test CLI commands for listing, setting, and validating models.\n - Mock Vercel AI SDK calls to ensure proper integration and error handling.\n\n2. **Integration Tests**:\n - Validate end-to-end functionality of model management commands.\n - Test dynamic switching of models via environment variables.\n\n3. **Error Handling Tests**:\n - Simulate invalid model names and verify error messages.\n - Test API failures for each model provider and ensure graceful degradation.\n\n4. **Documentation Validation**:\n - Verify that `.env_example` and CLI usage examples are accurate and comprehensive.\n\n5. **Performance Tests**:\n - Measure response times for API calls through Vercel AI SDK.\n - Ensure no significant latency is introduced by model switching.\n\n6. **SDK-Specific Tests**:\n - Validate the behavior of `generateText` and `streamText` functions for supported models.\n - Test compatibility with serverless and edge deployments.",
-"status": "in-progress",
+"status": "done",
"dependencies": [],
"priority": "high",
"subtasks": [
@@ -3121,7 +3121,7 @@
"61.18"
],
"details": "1. Update task processing functions to use the centralized AI services\n2. Implement dynamic model selection based on configuration\n3. Add error handling for model-specific failures\n4. Implement graceful degradation when preferred models are unavailable\n5. Update prompts to be model-agnostic where possible\n6. Add telemetry for model performance monitoring\n7. Implement response validation to ensure quality across different models\n8. Testing approach: Create integration tests that verify task processing with different model configurations\n\n<info added on 2025-04-20T03:55:56.310Z>\nWhen updating the main task processing logic, implement the following changes to align with the new configuration system:\n\n1. Replace direct environment variable access with calls to the configuration manager:\n ```javascript\n // Before\n const apiKey = process.env.OPENAI_API_KEY;\n const modelId = process.env.MAIN_MODEL || \"gpt-4\";\n \n // After\n import { getMainProvider, getMainModelId, getMainMaxTokens, getMainTemperature } from './config-manager.js';\n \n const provider = getMainProvider();\n const modelId = getMainModelId();\n const maxTokens = getMainMaxTokens();\n const temperature = getMainTemperature();\n ```\n\n2. Implement model fallback logic using the configuration hierarchy:\n ```javascript\n async function processTaskWithFallback(task) {\n try {\n return await processWithModel(task, getMainModelId());\n } catch (error) {\n logger.warn(`Primary model failed: ${error.message}`);\n const fallbackModel = getMainFallbackModelId();\n if (fallbackModel) {\n return await processWithModel(task, fallbackModel);\n }\n throw error;\n }\n }\n ```\n\n3. Add configuration-aware telemetry points to track model usage and performance:\n ```javascript\n function trackModelPerformance(modelId, startTime, success) {\n const duration = Date.now() - startTime;\n telemetry.trackEvent('model_usage', {\n modelId,\n provider: getMainProvider(),\n duration,\n success,\n configVersion: getConfigVersion()\n });\n }\n ```\n\n4. Ensure all prompt templates are loaded through the configuration system rather than hardcoded:\n ```javascript\n const promptTemplate = getPromptTemplate('task_processing');\n const prompt = formatPrompt(promptTemplate, { task: taskData });\n ```\n</info added on 2025-04-20T03:55:56.310Z>",
-"status": "deferred",
+"status": "done",
"parentTaskId": 61
},
{
@@ -3135,7 +3135,7 @@
"61.18"
],
"details": "1. Update research functions to use the centralized AI services\n2. Implement dynamic model selection for research operations\n3. Add specialized error handling for research-specific issues\n4. Optimize prompts for research-focused models\n5. Implement result caching for research operations\n6. Add support for model-specific research parameters\n7. Create fallback mechanisms for research operations\n8. Testing approach: Create integration tests that verify research functionality with different model configurations\n\n<info added on 2025-04-20T03:55:39.633Z>\nWhen implementing the refactored research processing logic, ensure the following:\n\n1. Replace direct environment variable access with the new configuration system:\n ```javascript\n // Old approach\n const apiKey = process.env.OPENAI_API_KEY;\n const model = \"gpt-4\";\n \n // New approach\n import { getResearchProvider, getResearchModelId, getResearchMaxTokens, \n getResearchTemperature } from './config-manager.js';\n \n const provider = getResearchProvider();\n const modelId = getResearchModelId();\n const maxTokens = getResearchMaxTokens();\n const temperature = getResearchTemperature();\n ```\n\n2. Implement model fallback chains using the configuration system:\n ```javascript\n async function performResearch(query) {\n try {\n return await callAIService({\n provider: getResearchProvider(),\n modelId: getResearchModelId(),\n maxTokens: getResearchMaxTokens(),\n temperature: getResearchTemperature()\n });\n } catch (error) {\n logger.warn(`Primary research model failed: ${error.message}`);\n return await callAIService({\n provider: getResearchProvider('fallback'),\n modelId: getResearchModelId('fallback'),\n maxTokens: getResearchMaxTokens('fallback'),\n temperature: getResearchTemperature('fallback')\n });\n }\n }\n ```\n\n3. Add support for dynamic parameter adjustment based on research type:\n ```javascript\n function getResearchParameters(researchType) {\n // Get base parameters\n const baseParams = {\n provider: getResearchProvider(),\n modelId: getResearchModelId(),\n maxTokens: getResearchMaxTokens(),\n temperature: getResearchTemperature()\n };\n \n // Adjust based on research type\n switch(researchType) {\n case 'deep':\n return {...baseParams, maxTokens: baseParams.maxTokens * 1.5};\n case 'creative':\n return {...baseParams, temperature: Math.min(baseParams.temperature + 0.2, 1.0)};\n case 'factual':\n return {...baseParams, temperature: Math.max(baseParams.temperature - 0.2, 0)};\n default:\n return baseParams;\n }\n }\n ```\n\n4. Ensure the caching mechanism uses configuration-based TTL settings:\n ```javascript\n const researchCache = new Cache({\n ttl: getResearchCacheTTL(),\n maxSize: getResearchCacheMaxSize()\n });\n ```\n</info added on 2025-04-20T03:55:39.633Z>",
-"status": "deferred",
+"status": "done",
"parentTaskId": 61
},
{
@@ -3168,7 +3168,7 @@
"title": "Refactor Basic Subtask Generation to use generateObjectService",
"description": "Update the `generateSubtasks` function in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the subtask array.",
"details": "\n\n<info added on 2025-04-20T03:54:45.542Z>\nThe refactoring should leverage the new configuration system:\n\n1. Replace direct model references with calls to config-manager.js getters:\n ```javascript\n const { getModelForRole, getModelParams } = require('./config-manager');\n \n // Instead of hardcoded models/parameters:\n const model = getModelForRole('subtask-generator');\n const modelParams = getModelParams('subtask-generator');\n ```\n\n2. Update API key handling to use the resolveEnvVariable pattern:\n ```javascript\n const { resolveEnvVariable } = require('./utils');\n const apiKey = resolveEnvVariable('OPENAI_API_KEY');\n ```\n\n3. When calling generateObjectService, pass the configuration parameters:\n ```javascript\n const result = await generateObjectService({\n schema: subtasksArraySchema,\n prompt: subtaskPrompt,\n model: model,\n temperature: modelParams.temperature,\n maxTokens: modelParams.maxTokens,\n // Other parameters from config\n });\n ```\n\n4. Add error handling that respects logging configuration:\n ```javascript\n const { isLoggingEnabled } = require('./config-manager');\n \n try {\n // Generation code\n } catch (error) {\n if (isLoggingEnabled('errors')) {\n console.error('Subtask generation error:', error);\n }\n throw error;\n }\n ```\n</info added on 2025-04-20T03:54:45.542Z>",
-"status": "cancelled",
+"status": "done",
"dependencies": [
"61.23"
],
@@ -3179,7 +3179,7 @@
"title": "Refactor Research Subtask Generation to use generateObjectService",
"description": "Update the `generateSubtasksWithPerplexity` function in `ai-services.js` to first perform research (potentially keeping the Perplexity call separate or adapting it) and then use `generateObjectService` from `ai-services-unified.js` with research results included in the prompt.",
"details": "\n\n<info added on 2025-04-20T03:54:26.882Z>\nThe refactoring should align with the new configuration system by:\n\n1. Replace direct environment variable access with `resolveEnvVariable` for API keys\n2. Use the config-manager.js getters to retrieve model parameters:\n - Replace hardcoded model names with `getModelForRole('research')`\n - Use `getParametersForRole('research')` to get temperature, maxTokens, etc.\n3. Implement proper error handling that respects the `getLoggingConfig()` settings\n4. Example implementation pattern:\n```javascript\nconst { getModelForRole, getParametersForRole, getLoggingConfig } = require('./config-manager');\nconst { resolveEnvVariable } = require('./environment-utils');\n\n// In the refactored function:\nconst researchModel = getModelForRole('research');\nconst { temperature, maxTokens } = getParametersForRole('research');\nconst apiKey = resolveEnvVariable('PERPLEXITY_API_KEY');\nconst { verbose } = getLoggingConfig();\n\n// Then use these variables in the API call configuration\n```\n5. Ensure the transition to generateObjectService maintains all existing functionality while leveraging the new configuration system\n</info added on 2025-04-20T03:54:26.882Z>",
-"status": "cancelled",
+"status": "done",
"dependencies": [
"61.23"
],
@@ -3190,7 +3190,7 @@
"title": "Refactor Research Task Description Generation to use generateObjectService",
"description": "Update the `generateTaskDescriptionWithPerplexity` function in `ai-services.js` to first perform research and then use `generateObjectService` from `ai-services-unified.js` to generate the structured task description.",
"details": "\n\n<info added on 2025-04-20T03:54:04.420Z>\nThe refactoring should incorporate the new configuration management system:\n\n1. Update imports to include the config-manager:\n```javascript\nconst { getModelForRole, getParametersForRole } = require('./config-manager');\n```\n\n2. Replace any hardcoded model selections or parameters with config-manager calls:\n```javascript\n// Replace direct model references like:\n// const model = \"perplexity-model-7b-online\" \n// With:\nconst model = getModelForRole('research');\nconst parameters = getParametersForRole('research');\n```\n\n3. For API key handling, use the resolveEnvVariable pattern:\n```javascript\nconst apiKey = resolveEnvVariable('PERPLEXITY_API_KEY');\n```\n\n4. When calling generateObjectService, pass the configuration-derived parameters:\n```javascript\nreturn generateObjectService({\n prompt: researchResults,\n schema: taskDescriptionSchema,\n role: 'taskDescription',\n // Config-driven parameters will be applied within generateObjectService\n});\n```\n\n5. Remove any hardcoded configuration values, ensuring all settings are retrieved from the centralized configuration system.\n</info added on 2025-04-20T03:54:04.420Z>",
-"status": "cancelled",
+"status": "done",
"dependencies": [
"61.23"
],
@@ -3201,7 +3201,7 @@
"title": "Refactor Complexity Analysis AI Call to use generateObjectService",
"description": "Update the logic that calls the AI after using `generateComplexityAnalysisPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the complexity report.",
"details": "\n\n<info added on 2025-04-20T03:53:46.120Z>\nThe complexity analysis AI call should be updated to align with the new configuration system architecture. When refactoring to use `generateObjectService`, implement the following changes:\n\n1. Replace direct model references with calls to the appropriate config getter:\n ```javascript\n const modelName = getComplexityAnalysisModel(); // Use the specific getter from config-manager.js\n ```\n\n2. Retrieve AI parameters from the config system:\n ```javascript\n const temperature = getAITemperature('complexityAnalysis');\n const maxTokens = getAIMaxTokens('complexityAnalysis');\n ```\n\n3. When constructing the call to `generateObjectService`, pass these configuration values:\n ```javascript\n const result = await generateObjectService({\n prompt,\n schema: complexityReportSchema,\n modelName,\n temperature,\n maxTokens,\n sessionEnv: session?.env\n });\n ```\n\n4. Ensure API key resolution uses the `resolveEnvVariable` helper:\n ```javascript\n // Don't hardcode API keys or directly access process.env\n // The generateObjectService should handle this internally with resolveEnvVariable\n ```\n\n5. Add logging configuration based on settings:\n ```javascript\n const enableLogging = getAILoggingEnabled('complexityAnalysis');\n if (enableLogging) {\n // Use the logging mechanism defined in the configuration\n }\n ```\n</info added on 2025-04-20T03:53:46.120Z>",
-"status": "cancelled",
+"status": "done",
"dependencies": [
"61.23"
],
@@ -3212,7 +3212,7 @@
"title": "Refactor Task Addition AI Call to use generateObjectService",
"description": "Update the logic that calls the AI after using `_buildAddTaskPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the single task object.",
"details": "\n\n<info added on 2025-04-20T03:53:27.455Z>\nTo implement this refactoring, you'll need to:\n\n1. Replace direct AI calls with the new `generateObjectService` approach:\n ```javascript\n // OLD approach\n const aiResponse = await callLLM(prompt, modelName, temperature, maxTokens);\n const task = parseAIResponseToTask(aiResponse);\n \n // NEW approach using generateObjectService with config-manager\n import { generateObjectService } from '../services/ai-services-unified.js';\n import { getAIModelForRole, getAITemperature, getAIMaxTokens } from '../config/config-manager.js';\n import { taskSchema } from '../schemas/task-schema.js'; // Create this Zod schema for a single task\n \n const modelName = getAIModelForRole('taskCreation');\n const temperature = getAITemperature('taskCreation');\n const maxTokens = getAIMaxTokens('taskCreation');\n \n const task = await generateObjectService({\n prompt: _buildAddTaskPrompt(...),\n schema: taskSchema,\n modelName,\n temperature,\n maxTokens\n });\n ```\n\n2. Create a Zod schema for the task object in a new file `schemas/task-schema.js` that defines the expected structure.\n\n3. Ensure API key resolution uses the new pattern:\n ```javascript\n // This happens inside generateObjectService, but verify it uses:\n import { resolveEnvVariable } from '../config/config-manager.js';\n // Instead of direct process.env access\n ```\n\n4. Update any error handling to match the new service's error patterns.\n</info added on 2025-04-20T03:53:27.455Z>",
-"status": "cancelled",
+"status": "done",
"dependencies": [
"61.23"
],
@@ -3223,7 +3223,7 @@
"title": "Refactor General Chat/Update AI Calls",
"description": "Refactor functions like `sendChatWithContext` (and potentially related task update functions in `task-manager.js` if they make direct AI calls) to use `streamTextService` or `generateTextService` from `ai-services-unified.js`.",
"details": "\n\n<info added on 2025-04-20T03:53:03.709Z>\nWhen refactoring `sendChatWithContext` and related functions, ensure they align with the new configuration system:\n\n1. Replace direct model references with config getter calls:\n ```javascript\n // Before\n const model = \"gpt-4\";\n \n // After\n import { getModelForRole } from './config-manager.js';\n const model = getModelForRole('chat'); // or appropriate role\n ```\n\n2. Extract AI parameters from config rather than hardcoding:\n ```javascript\n import { getAIParameters } from './config-manager.js';\n const { temperature, maxTokens } = getAIParameters('chat');\n ```\n\n3. When calling `streamTextService` or `generateTextService`, pass parameters from config:\n ```javascript\n await streamTextService({\n messages,\n model: getModelForRole('chat'),\n temperature: getAIParameters('chat').temperature,\n // other parameters as needed\n });\n ```\n\n4. For logging control, check config settings:\n ```javascript\n import { isLoggingEnabled } from './config-manager.js';\n \n if (isLoggingEnabled('aiCalls')) {\n console.log('AI request:', messages);\n }\n ```\n\n5. Ensure any default behaviors respect configuration defaults rather than hardcoded values.\n</info added on 2025-04-20T03:53:03.709Z>",
-"status": "deferred",
+"status": "done",
"dependencies": [
"61.23"
],
@@ -3234,7 +3234,7 @@
"title": "Refactor Callers of AI Parsing Utilities",
"description": "Update the code that calls `parseSubtasksFromText`, `parseTaskJsonResponse`, and `parseTasksFromCompletion` to instead directly handle the structured JSON output provided by `generateObjectService` (as the refactored AI calls will now use it).",
"details": "\n\n<info added on 2025-04-20T03:52:45.518Z>\nThe refactoring of callers to AI parsing utilities should align with the new configuration system. When updating these callers:\n\n1. Replace direct API key references with calls to the configuration system using `resolveEnvVariable` for sensitive credentials.\n\n2. Update model selection logic to use the centralized configuration from `.taskmasterconfig` via the getter functions in `config-manager.js`. For example:\n ```javascript\n // Old approach\n const model = \"gpt-4\";\n \n // New approach\n import { getModelForRole } from './config-manager';\n const model = getModelForRole('parsing'); // or appropriate role\n ```\n\n3. Similarly, replace hardcoded parameters with configuration-based values:\n ```javascript\n // Old approach\n const maxTokens = 2000;\n const temperature = 0.2;\n \n // New approach\n import { getAIParameterValue } from './config-manager';\n const maxTokens = getAIParameterValue('maxTokens', 'parsing');\n const temperature = getAIParameterValue('temperature', 'parsing');\n ```\n\n4. Ensure logging behavior respects the centralized logging configuration settings.\n\n5. When calling `generateObjectService`, pass the appropriate configuration context to ensure it uses the correct settings from the centralized configuration system.\n</info added on 2025-04-20T03:52:45.518Z>",
-"status": "deferred",
+"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3299,7 +3299,7 @@
"title": "Implement `ollama.js` Provider Module",
"description": "Create and implement the `ollama.js` module within `src/ai-providers/`. This module should contain functions to interact with local Ollama models using the **`ollama-ai-provider` library**, adhering to the standardized input/output format defined for `ai-services-unified.js`. Note the specific library used.",
"details": "",
-"status": "pending",
+"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3308,7 +3308,7 @@
"title": "Implement `mistral.js` Provider Module using Vercel AI SDK",
"description": "Create and implement the `mistral.js` module within `src/ai-providers/`. This module should contain functions to interact with Mistral AI models using the **Vercel AI SDK (`@ai-sdk/mistral`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.",
"details": "",
-"status": "pending",
+"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3317,7 +3317,7 @@
"title": "Implement `azure.js` Provider Module using Vercel AI SDK",
"description": "Create and implement the `azure.js` module within `src/ai-providers/`. This module should contain functions to interact with Azure OpenAI models using the **Vercel AI SDK (`@ai-sdk/azure`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.",
"details": "",
-"status": "pending",
+"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3476,7 +3476,7 @@
"title": "Add setters for temperature, max tokens on per role basis.",
"description": "NOT per model/provider basis though we could probably just define those in the .taskmasterconfig file but then they would be hard-coded. if we let users define them on a per role basis, they will define incorrect values. maybe a good middle ground is to do both - we enforce maximum using known max tokens for input and output at the .taskmasterconfig level but then we also give setters to adjust temp/input tokens/output tokens for each of the 3 roles.",
"details": "",
-"status": "pending",
+"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3485,7 +3485,7 @@
"title": "Add support for Bedrock provider with ai sdk and unified service",
"description": "",
"details": "\n\n<info added on 2025-04-25T19:03:42.584Z>\n- Install the Bedrock provider for the AI SDK using your package manager (e.g., npm i @ai-sdk/amazon-bedrock) and ensure the core AI SDK is present[3][4].\n\n- To integrate with your existing config manager, externalize all Bedrock-specific configuration (such as region, model name, and credential provider) into your config management system. For example, store values like region (\"us-east-1\") and model identifier (\"meta.llama3-8b-instruct-v1:0\") in your config files or environment variables, and load them at runtime.\n\n- For credentials, leverage the AWS SDK credential provider chain to avoid hardcoding secrets. Use the @aws-sdk/credential-providers package and pass a credentialProvider (e.g., fromNodeProviderChain()) to the Bedrock provider. This allows your config manager to control credential sourcing via environment, profiles, or IAM roles, consistent with other AWS integrations[1].\n\n- Example integration with config manager:\n ```js\n import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';\n import { fromNodeProviderChain } from '@aws-sdk/credential-providers';\n\n // Assume configManager.get returns your config values\n const region = configManager.get('bedrock.region');\n const model = configManager.get('bedrock.model');\n\n const bedrock = createAmazonBedrock({\n region,\n credentialProvider: fromNodeProviderChain(),\n });\n\n // Use with AI SDK methods\n const { text } = await generateText({\n model: bedrock(model),\n prompt: 'Your prompt here',\n });\n ```\n\n- If your config manager supports dynamic provider selection, you can abstract the provider initialization so switching between Bedrock and other providers (like OpenAI or Anthropic) is seamless.\n\n- Be aware that Bedrock exposes multiple models from different vendors, each with potentially different API behaviors. Your config should allow specifying the exact model string, and your integration should handle any model-specific options or response formats[5].\n\n- For unified service integration, ensure your service layer can route requests to Bedrock using the configured provider instance, and normalize responses if you support multiple AI backends.\n</info added on 2025-04-25T19:03:42.584Z>",
-"status": "pending",
+"status": "done",
"dependencies": [],
"parentTaskId": 61
}
@@ -3824,7 +3824,7 @@
"description": "Enhance the 'show' command to accept a status parameter that filters subtasks by their current status, allowing users to view only subtasks matching a specific status.",
"details": "This task involves modifying the existing 'show' command functionality to support status-based filtering of subtasks. Implementation details include:\n\n1. Update the command parser to accept a new '--status' or '-s' flag followed by a status value (e.g., 'task-master show --status=in-progress' or 'task-master show -s completed').\n\n2. Modify the show command handler in the appropriate module (likely in scripts/modules/) to:\n - Parse and validate the status parameter\n - Filter the subtasks collection based on the provided status before displaying results\n - Handle invalid status values gracefully with appropriate error messages\n - Support standard status values (e.g., 'not-started', 'in-progress', 'completed', 'blocked')\n - Consider supporting multiple status values (comma-separated or multiple flags)\n\n3. Update the help documentation to include information about the new status filtering option.\n\n4. Ensure backward compatibility - the show command should function as before when no status parameter is provided.\n\n5. Consider adding a '--status-list' option to display all available status values for reference.\n\n6. Update any relevant unit tests to cover the new functionality.\n\n7. If the application uses a database or persistent storage, ensure the filtering happens at the query level for performance when possible.\n\n8. Maintain consistent formatting and styling of output regardless of filtering.",
"testStrategy": "Testing for this feature should include:\n\n1. Unit tests:\n - Test parsing of the status parameter in various formats (--status=value, -s value)\n - Test filtering logic with different status values\n - Test error handling for invalid status values\n - Test backward compatibility (no status parameter)\n - Test edge cases (empty status, case sensitivity, etc.)\n\n2. Integration tests:\n - Verify that the command correctly filters subtasks when a valid status is provided\n - Verify that all subtasks are shown when no status filter is applied\n - Test with a project containing subtasks of various statuses\n\n3. Manual testing:\n - Create a test project with multiple subtasks having different statuses\n - Run the show command with different status filters and verify results\n - Test with both long-form (--status) and short-form (-s) parameters\n - Verify help documentation correctly explains the new parameter\n\n4. Edge case testing:\n - Test with non-existent status values\n - Test with empty project (no subtasks)\n - Test with a project where all subtasks have the same status\n\n5. Documentation verification:\n - Ensure the README or help documentation is updated to include the new parameter\n - Verify examples in documentation work as expected\n\nAll tests should pass before considering this task complete.",
-"status": "pending",
+"status": "done",
"dependencies": [],
"priority": "medium",
"subtasks": []
@@ -3953,7 +3953,7 @@
"description": "Allow users to specify custom model IDs for Ollama and OpenRouter providers via CLI flag and interactive setup, with appropriate validation and warnings.",
"details": "**CLI (`task-master models --set-<role> <id> --custom`):**\n- Modify `scripts/modules/task-manager/models.js`: `setModel` function.\n- Check internal `available_models.json` first.\n- If not found and `--custom` is provided:\n - Fetch `https://openrouter.ai/api/v1/models`. (Need to add `https` import).\n - If ID found in OpenRouter list: Set `provider: 'openrouter'`, `modelId: <id>`. Warn user about lack of official validation.\n - If ID not found in OpenRouter: Assume Ollama. Set `provider: 'ollama'`, `modelId: <id>`. Warn user strongly (model must be pulled, compatibility not guaranteed).\n- If not found and `--custom` is *not* provided: Fail with error message guiding user to use `--custom`.\n\n**Interactive Setup (`task-master models --setup`):**\n- Modify `scripts/modules/commands.js`: `runInteractiveSetup` function.\n- Add options to `inquirer` choices for each role: `OpenRouter (Enter Custom ID)` and `Ollama (Enter Custom ID)`.\n- If `__CUSTOM_OPENROUTER__` selected:\n - Prompt for custom ID.\n - Fetch OpenRouter list and validate ID exists. Fail setup for that role if not found.\n - Update config and show warning if found.\n- If `__CUSTOM_OLLAMA__` selected:\n - Prompt for custom ID.\n - Update config directly (no live validation).\n - Show strong Ollama warning.",
"testStrategy": "**Unit Tests:**\n- Test `setModel` logic for internal models, custom OpenRouter (valid/invalid), custom Ollama, missing `--custom` flag.\n- Test `runInteractiveSetup` for new custom options flow, including OpenRouter validation success/failure.\n\n**Integration Tests:**\n- Test the `task-master models` command with `--custom` flag variations.\n- Test the `task-master models --setup` interactive flow for custom options.\n\n**Manual Testing:**\n- Run `task-master models --setup` and select custom options.\n- Run `task-master models --set-main <valid_openrouter_id> --custom`. Verify config and warning.\n- Run `task-master models --set-main <invalid_openrouter_id> --custom`. Verify error.\n- Run `task-master models --set-main <ollama_model_id> --custom`. Verify config and warning.\n- Run `task-master models --set-main <custom_id>` (without `--custom`). Verify error.\n- Check `getModelConfiguration` output reflects custom models correctly.",
-"status": "in-progress",
+"status": "done",
"dependencies": [],
"priority": "medium",
"subtasks": []
@@ -4000,6 +4000,145 @@
"details": "Research existing E2E testing approaches for MCP servers, referencing examples such as the MCP Server E2E Testing Example. Architect a test harness (preferably in Python or Node.js) that can launch the FastMCP server as a subprocess, establish stdio communication, and send well-formed JSON tool request messages. \n\nImplementation details:\n1. Use `subprocess.Popen` (Python) or `child_process.spawn` (Node.js) to launch the FastMCP server with appropriate stdin/stdout pipes\n2. Implement a message protocol handler that formats JSON requests with proper line endings and message boundaries\n3. Create a buffered reader for stdout that correctly handles chunked responses and reconstructs complete JSON objects\n4. Develop a request/response correlation mechanism using unique IDs for each request\n5. Implement timeout handling for requests that don't receive responses\n\nImplement robust parsing of JSON responses, including error handling for malformed or unexpected output. The framework should support defining test cases as scripts or data files, allowing for easy addition of new scenarios. \n\nTest case structure should include:\n- Setup phase for environment preparation\n- Sequence of tool requests with expected responses\n- Validation functions for response verification\n- Teardown phase for cleanup\n\nEnsure the framework can assert on both the structure and content of responses, and provide clear logging for debugging. Document setup, usage, and extension instructions. Consider cross-platform compatibility and CI integration.\n\n**Clarification:** The E2E test framework should focus on testing the FastMCP server's ability to correctly process tool requests and return appropriate responses. This includes verifying that the server properly handles different types of tool calls (e.g., file operations, web requests, task management), validates input parameters, and returns well-structured responses. The framework should be designed to be extensible, allowing new test cases to be added as the server's capabilities evolve. Tests should cover both happy paths and error conditions to ensure robust server behavior under various scenarios.",
"testStrategy": "Verify the framework by implementing a suite of representative E2E tests that cover typical tool requests and edge cases. Specific test cases should include:\n\n1. Basic tool request/response validation\n - Send a simple file_read request and verify response structure\n - Test with valid and invalid file paths\n - Verify error handling for non-existent files\n\n2. Concurrent request handling\n - Send multiple requests in rapid succession\n - Verify all responses are received and correlated correctly\n\n3. Large payload testing\n - Test with large file contents (>1MB)\n - Verify correct handling of chunked responses\n\n4. Error condition testing\n - Malformed JSON requests\n - Invalid tool names\n - Missing required parameters\n - Server crash recovery\n\nConfirm that tests can start and stop the FastMCP server, send requests, and accurately parse and validate responses. Implement specific assertions for response timing, structure validation using JSON schema, and content verification. Intentionally introduce malformed requests and simulate server errors to ensure robust error handling. \n\nImplement detailed logging with different verbosity levels:\n- ERROR: Failed tests and critical issues\n- WARNING: Unexpected but non-fatal conditions\n- INFO: Test progress and results\n- DEBUG: Raw request/response data\n\nRun the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.",
"subtasks": []
},
{
"id": 77,
"title": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
"description": "Capture detailed AI usage data (tokens, costs, models, commands) within Taskmaster and send this telemetry to an external, closed-source analytics backend for usage analysis, profitability measurement, and pricing optimization.",
"details": "* Add a telemetry utility (`logAiUsage`) within `ai-services.js` to track AI usage.\n* Collected telemetry data fields must include:\n * `timestamp`: Current date/time in ISO 8601.\n * `userId`: Unique user identifier generated at setup (stored in `.taskmasterconfig`).\n * `commandName`: Taskmaster command invoked (`expand`, `parse-prd`, `research`, etc.).\n * `modelUsed`: Name/ID of the AI model invoked.\n * `inputTokens`: Count of input tokens used.\n * `outputTokens`: Count of output tokens generated.\n * `totalTokens`: Sum of input and output tokens.\n * `totalCost`: Monetary cost calculated using pricing from `supported_models.json`.\n* Send telemetry payload securely via HTTPS POST request from user's Taskmaster installation directly to the closed-source analytics API (Express/Supabase backend).\n* Introduce a privacy notice and explicit user consent prompt upon initial installation/setup to enable telemetry.\n* Provide a graceful fallback if telemetry request fails (e.g., no internet connectivity).\n* Optionally display a usage summary directly in Taskmaster CLI output for user transparency.",
"testStrategy": "",
"status": "in-progress",
"dependencies": [],
"priority": "medium",
"subtasks": [
{
"id": 1,
"title": "Implement telemetry utility and data collection",
"description": "Create the logAiUsage utility in ai-services.js that captures all required telemetry data fields",
"dependencies": [],
"details": "Develop the logAiUsage function that collects timestamp, userId, commandName, modelUsed, inputTokens, outputTokens, totalTokens, and totalCost. Implement token counting logic and cost calculation using pricing from supported_models.json. Ensure proper error handling and data validation.\n<info added on 2025-05-05T21:08:51.413Z>\nDevelop the logAiUsage function that collects timestamp, userId, commandName, modelUsed, inputTokens, outputTokens, totalTokens, and totalCost. Implement token counting logic and cost calculation using pricing from supported_models.json. Ensure proper error handling and data validation.\n\nImplementation Plan:\n1. Define `logAiUsage` function in `ai-services-unified.js` that accepts parameters: userId, commandName, providerName, modelId, inputTokens, and outputTokens.\n\n2. Implement data collection and calculation logic:\n - Generate timestamp using `new Date().toISOString()`\n - Calculate totalTokens by adding inputTokens and outputTokens\n - Create a helper function `_getCostForModel(providerName, modelId)` that:\n - Loads pricing data from supported-models.json\n - Finds the appropriate provider/model entry\n - Returns inputCost and outputCost rates or defaults if not found\n - Calculate totalCost using the formula: ((inputTokens/1,000,000) * inputCost) + ((outputTokens/1,000,000) * outputCost)\n - Assemble complete telemetryData object with all required fields\n\n3. Add initial logging functionality:\n - Use existing log utility to record telemetry data at 'info' level\n - Implement proper error handling with try/catch blocks\n\n4. Integrate with `_unifiedServiceRunner`:\n - Modify to accept commandName and userId parameters\n - After successful API calls, extract usage data from results\n - Call logAiUsage with the appropriate parameters\n\n5. Update provider functions in src/ai-providers/*.js:\n - Ensure all provider functions return both the primary result and usage statistics\n - Standardize the return format to include a usage object with inputTokens and outputTokens\n</info added on 2025-05-05T21:08:51.413Z>\n<info added on 2025-05-07T17:28:57.361Z>\nTo implement the AI usage telemetry effectively, we need to update each command across our different stacks. Let's create a structured approach for this implementation:\n\nCommand Integration Plan:\n1. Core Function Commands:\n - Identify all AI-utilizing commands in the core function library\n - For each command, modify to pass commandName and userId to _unifiedServiceRunner\n - Update return handling to process and forward usage statistics\n\n2. Direct Function Commands:\n - Map all direct function commands that leverage AI capabilities\n - Implement telemetry collection at the appropriate execution points\n - Ensure consistent error handling and telemetry reporting\n\n3. MCP Tool Stack Commands:\n - Inventory all MCP commands with AI dependencies\n - Standardize the telemetry collection approach across the tool stack\n - Add telemetry hooks that maintain backward compatibility\n\nFor each command category, we'll need to:\n- Document current implementation details\n- Define specific code changes required\n- Create tests to verify telemetry is being properly collected\n- Establish validation procedures to ensure data accuracy\n</info added on 2025-05-07T17:28:57.361Z>",
"status": "in-progress",
"testStrategy": "Unit test the utility with mock AI usage data to verify all fields are correctly captured and calculated"
},
{
"id": 2,
"title": "Implement secure telemetry transmission",
"description": "Create a secure mechanism to transmit telemetry data to the external analytics endpoint",
"dependencies": [
1
],
"details": "Implement HTTPS POST request functionality to securely send the telemetry payload to the closed-source analytics API. Include proper encryption in transit using TLS. Implement retry logic and graceful fallback mechanisms for handling transmission failures due to connectivity issues.",
"status": "pending",
"testStrategy": "Test with mock endpoints to verify secure transmission and proper handling of various response scenarios"
},
{
"id": 3,
"title": "Develop user consent and privacy notice system",
"description": "Create a privacy notice and explicit consent mechanism during Taskmaster setup",
"dependencies": [],
"details": "Design and implement a clear privacy notice explaining what data is collected and how it's used. Create a user consent prompt during initial installation/setup that requires explicit opt-in. Store the consent status in the .taskmasterconfig file and respect this setting throughout the application.",
"status": "pending",
"testStrategy": "Test the consent flow to ensure users can opt in/out and that their preference is properly stored and respected"
},
{
"id": 4,
"title": "Integrate telemetry into Taskmaster commands",
"description": "Integrate the telemetry utility across all relevant Taskmaster commands",
"dependencies": [
1,
3
],
"details": "Modify each Taskmaster command (expand, parse-prd, research, etc.) to call the logAiUsage utility after AI interactions. Ensure telemetry is only sent if user has provided consent. Implement the integration in a way that doesn't impact command performance or user experience.\n<info added on 2025-05-06T17:57:13.980Z>\nModify each Taskmaster command (expand, parse-prd, research, etc.) to call the logAiUsage utility after AI interactions. Ensure telemetry is only sent if user has provided consent. Implement the integration in a way that doesn't impact command performance or user experience.\n\nSuccessfully integrated telemetry calls into `addTask` (core) and `addTaskDirect` (MCP) functions by passing `commandName` and `outputType` parameters to the telemetry system. The `ai-services-unified.js` module now logs basic telemetry data, including calculated cost information, whenever the `add-task` command or tool is invoked. This integration respects user consent settings and maintains performance standards.\n</info added on 2025-05-06T17:57:13.980Z>",
"status": "in-progress",
"testStrategy": "Integration tests to verify telemetry is correctly triggered across different commands with proper data"
},
{
"id": 5,
"title": "Implement usage summary display",
"description": "Create an optional feature to display AI usage summary in the CLI output",
"dependencies": [
1,
4
],
"details": "Develop functionality to display a concise summary of AI usage (tokens used, estimated cost) directly in the CLI output after command execution. Make this feature configurable through Taskmaster settings. Ensure the display is formatted clearly and doesn't clutter the main command output.",
"status": "pending",
"testStrategy": "User acceptance testing to verify the summary display is clear, accurate, and properly configurable"
},
{
"id": 6,
"title": "Telemetry Integration for parse-prd",
"description": "Integrate AI usage telemetry capture and propagation for the parse-prd functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/parse-prd.js`):**\n * Modify AI service call to include `commandName: \\'parse-prd\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/parse-prd.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/parse-prd.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 7,
"title": "Telemetry Integration for expand-task",
"description": "Integrate AI usage telemetry capture and propagation for the expand-task functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/expand-task.js`):**\n * Modify AI service call to include `commandName: \\'expand-task\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/expand-task.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/expand-task.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 8,
"title": "Telemetry Integration for expand-all-tasks",
"description": "Integrate AI usage telemetry capture and propagation for the expand-all-tasks functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/expand-all-tasks.js`):**\n * Modify AI service call (likely within a loop or called by a helper) to include `commandName: \\'expand-all-tasks\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Aggregate or handle `telemetryData` appropriately if multiple AI calls are made.\n * Return object including aggregated/relevant `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/expand-all-tasks.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/expand-all.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 9,
"title": "Telemetry Integration for update-tasks",
"description": "Integrate AI usage telemetry capture and propagation for the update-tasks (bulk update) functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-tasks.js`):**\n * Modify AI service call (likely within a loop) to include `commandName: \\'update-tasks\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }` for each AI call.\n * Aggregate or handle `telemetryData` appropriately for multiple calls.\n * Return object including aggregated/relevant `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-tasks.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 10,
"title": "Telemetry Integration for update-task-by-id",
"description": "Integrate AI usage telemetry capture and propagation for the update-task-by-id functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-task-by-id.js`):**\n * Modify AI service call to include `commandName: \\'update-task\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-task-by-id.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update-task.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 11,
"title": "Telemetry Integration for update-subtask-by-id",
"description": "Integrate AI usage telemetry capture and propagation for the update-subtask-by-id functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-subtask-by-id.js`):**\n * Verify if this function *actually* calls an AI service. If it only appends text, telemetry integration might not apply directly here, but ensure its callers handle telemetry if they use AI.\n * *If it calls AI:* Modify AI service call to include `commandName: \\'update-subtask\\'` and `outputType`.\n * *If it calls AI:* Receive `{ mainResult, telemetryData }`.\n * *If it calls AI:* Return object including `telemetryData`.\n * *If it calls AI:* Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-subtask-by-id.js`):**\n * *If core calls AI:* Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * *If core calls AI:* Pass `outputFormat: \\'json\\'` if applicable.\n * *If core calls AI:* Receive `{ ..., telemetryData }` from core.\n * *If core calls AI:* Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update-subtask.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through (if present).\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 12,
"title": "Telemetry Integration for analyze-task-complexity",
"description": "Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/analyze-task-complexity.js`):**\n * Modify AI service call to include `commandName: \\'analyze-complexity\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData` (perhaps alongside the complexity report data).\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/analyze.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
}
]
},
{
"id": 80,
"title": "Implement Unique User ID Generation and Storage During Installation",
"description": "Generate a unique user identifier during npm installation and store it in the .taskmasterconfig globals to enable anonymous usage tracking and telemetry without requiring user registration.",
"details": "This task involves implementing a mechanism to generate and store a unique user identifier during the npm installation process of Taskmaster. The implementation should:\n\n1. Create a post-install script that runs automatically after npm install completes\n2. Generate a cryptographically secure random UUID v4 as the unique user identifier\n3. Check if a user ID already exists in the .taskmasterconfig file before generating a new one\n4. Add the generated user ID to the globals section of the .taskmasterconfig file\n5. Ensure the user ID persists across updates but is regenerated on fresh installations\n6. Handle edge cases such as failed installations, manual deletions of the config file, or permission issues\n7. Add appropriate logging to notify users that an anonymous ID is being generated (with clear privacy messaging)\n8. Document the purpose of this ID in the codebase and user documentation\n9. Ensure the ID generation is compatible with all supported operating systems\n10. Make the ID accessible to the telemetry system implemented in Task #77\n\nThe implementation should respect user privacy by:\n- Not collecting any personally identifiable information\n- Making it clear in documentation how users can opt out of telemetry\n- Ensuring the ID cannot be traced back to specific users or installations\n\nThis user ID will serve as the foundation for anonymous usage tracking, helping to understand how Taskmaster is used without compromising user privacy.",
"testStrategy": "Testing for this feature should include:\n\n1. **Unit Tests**:\n - Verify the UUID generation produces valid UUIDs\n - Test the config file reading and writing functionality\n - Ensure proper error handling for file system operations\n - Verify the ID remains consistent across multiple reads\n\n2. **Integration Tests**:\n - Run a complete npm installation in a clean environment and verify a new ID is generated\n - Simulate an update installation and verify the existing ID is preserved\n - Test the interaction between the ID generation and the telemetry system\n - Verify the ID is correctly stored in the expected location in .taskmasterconfig\n\n3. **Manual Testing**:\n - Perform fresh installations on different operating systems (Windows, macOS, Linux)\n - Verify the installation process completes without errors\n - Check that the .taskmasterconfig file contains the generated ID\n - Test scenarios where the config file is manually deleted or corrupted\n\n4. **Edge Case Testing**:\n - Test behavior when the installation is run without sufficient permissions\n - Verify handling of network disconnections during installation\n - Test with various npm versions to ensure compatibility\n - Verify behavior when .taskmasterconfig already exists but doesn't contain a user ID section\n\n5. **Validation**:\n - Create a simple script to extract and analyze generated IDs to ensure uniqueness\n - Verify the ID format meets UUID v4 specifications\n - Confirm the ID is accessible to the telemetry system from Task #77\n\nThe test plan should include documentation of all test cases, expected results, and actual outcomes. A successful implementation will generate unique IDs for each installation while maintaining that ID across updates.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"subtasks": []
}
]
}
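
The 77.x subtasks above all apply the same three-layer telemetry pattern (core module → MCP direct function → MCP tool), varying only the `commandName` and file paths. The sketch below illustrates that flow for `expand-task`; it is a minimal illustration, not the project's actual code. `generateTextService` is a stand-in stub whose name and signature are assumed here, and `displayAiUsageSummary` is the display helper the task details reference.

```js
// Stand-ins for the project's helpers; names and signatures are assumed
// for illustration only (the real unified AI layer may differ).
async function generateTextService({ prompt, commandName, outputType }) {
  // A real implementation would call the configured AI provider and derive
  // token counts and cost from the provider response.
  return {
    mainResult: `AI output for: ${prompt}`,
    telemetryData: { commandName, outputType, inputTokens: 0, outputTokens: 0, totalCost: 0 }
  };
}

function displayAiUsageSummary(telemetryData) {
  console.log('AI usage:', telemetryData);
}

// Core layer (pattern described for scripts/modules/task-manager/expand-task.js).
async function expandTask(taskId, prompt, { outputType = 'cli' } = {}) {
  const { mainResult, telemetryData } = await generateTextService({
    prompt,
    commandName: 'expand-task',
    outputType
  });

  // CLI callers get the usage summary printed; MCP callers receive raw data.
  if (outputType === 'cli') {
    displayAiUsageSummary(telemetryData);
  }

  // Propagate telemetry to whoever called us (CLI command or direct function).
  return { taskId, subtasks: mainResult, telemetryData };
}

// Direct-function layer (pattern described for mcp-server/src/core/direct-functions/*).
async function expandTaskDirect(args) {
  try {
    const data = await expandTask(args.id, args.prompt, { outputType: 'mcp' });
    // telemetryData rides along in `data` so the MCP tool's handleApiResult
    // can pass it through unchanged.
    return { success: true, data };
  } catch (error) {
    return { success: false, error: { message: error.message } };
  }
}

// Example usage:
// expandTaskDirect({ id: 7, prompt: 'Break this task into subtasks' })
//   .then((result) => console.log(JSON.stringify(result, null, 2)));
```

The same shape applies to `update-task`, `analyze-complexity`, and the other 77.x subtasks; for loop-based commands such as `expand-all-tasks` and `update-tasks`, the core layer would aggregate the per-call `telemetryData` before returning it.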
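
For Task 80, npm can run an id-generation script through its standard `postinstall` lifecycle hook (e.g. `"postinstall": "node scripts/postinstall.js"` in `package.json`). Below is a minimal sketch assuming the id is stored under a `globals.userId` key in a `.taskmasterconfig` file in the user's home directory; the path, key name, and script location are illustrative assumptions, not the project's confirmed layout.

```js
// Sketch of a postinstall hook (e.g. scripts/postinstall.js) for Task 80.
// The config path (~/.taskmasterconfig) and the `globals.userId` key are
// assumptions made for this illustration.
import { randomUUID } from 'node:crypto';
import { existsSync, readFileSync, writeFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const configPath = join(homedir(), '.taskmasterconfig');

function ensureUserId() {
  let config = {};
  if (existsSync(configPath)) {
    try {
      config = JSON.parse(readFileSync(configPath, 'utf8'));
    } catch {
      // Corrupt config: fall back to a fresh object instead of failing the install.
      config = {};
    }
  }

  config.globals = config.globals ?? {};
  if (!config.globals.userId) {
    // UUID v4 via Node's crypto module: anonymous and not derived from any PII.
    config.globals.userId = randomUUID();
    console.log(
      'Taskmaster: generated an anonymous usage id (see the docs for how to opt out of telemetry).'
    );
  }

  try {
    writeFileSync(configPath, JSON.stringify(config, null, 2));
  } catch (err) {
    // Never fail `npm install` over telemetry bookkeeping.
    console.warn(`Taskmaster: could not write ${configPath}: ${err.message}`);
  }

  return config.globals.userId;
}

ensureUserId();
```

Because the script only adds `globals.userId` when it is missing, the id survives package updates but is regenerated after a fresh install or a deleted config file, which matches the persistence behaviour the task describes.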