refactor(analyze): Align complexity analysis with unified AI service

Refactored the `analyze-complexity` feature and related components (CLI command, MCP tool, direct function) to integrate with the unified AI service layer (`ai-services-unified.js`).

Initially, the refactor used `generateObjectService` to leverage structured output generation. However, this approach encountered persistent errors:
- Perplexity provider returned internal server errors.
- Anthropic provider failed with schema type and model errors.

Due to the unreliability of `generateObjectService` for this specific use case, the core AI interaction within `analyzeTaskComplexity` was reverted to use `generateTextService`. Basic manual JSON parsing and cleanup logic for the text response were reintroduced.
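A minimal sketch of the reverted flow (the import path, role names, and helper shape here are assumptions for illustration, not the exact committed code):

```js
// Sketch only: calls the unified service in text mode, then parses the JSON manually.
// `generateTextService` is the unified-service entry point; the import path and the
// 'research'/'main' role names are assumptions in this sketch.
import { generateTextService } from '../ai-services-unified.js';

async function runComplexityAnalysis(prompt, { session, useResearch }) {
	const responseText = await generateTextService({
		role: useResearch ? 'research' : 'main', // role selects the configured model
		session,
		prompt
	});

	// Manual cleanup: slice from the first '{' to the last '}' so Markdown fences
	// or prose surrounding the JSON payload are discarded before parsing.
	const start = responseText.indexOf('{');
	const end = responseText.lastIndexOf('}');
	if (start === -1 || end === -1) {
		throw new Error('No JSON object found in AI response');
	}
	return JSON.parse(responseText.slice(start, end + 1));
}
```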

Key changes include:
- Removed direct AI client initialization (Anthropic, Perplexity).
- Removed direct fetching of AI model configuration parameters.
- Removed manual AI retry/fallback/streaming logic.
- Replaced direct AI calls with a call to `generateTextService`.
- Updated the `analyzeTaskComplexityDirect` wrapper to pass session context correctly.
- Updated the `analyze_project_complexity` MCP tool for correct path resolution and argument passing.
- Updated the `analyze-complexity` CLI command for correct path resolution.
- Preserved core functionality: task loading/filtering, report generation, CLI summary display.

For reference, one of the failures captured during testing, as it appeared in the CLI:

[INFO] Initialized Perplexity client with OpenAI compatibility layer
Analyzing task complexity from: tasks/tasks.json
Output report will be saved to: scripts/task-complexity-report.json
Analyzing task complexity and generating expansion recommendations...
[INFO] Reading tasks from tasks/tasks.json...
[INFO] Found 62 total tasks in the task file.
[INFO] Skipping 31 tasks marked as done/cancelled/deferred. Analyzing 31 active tasks.
[INFO] Claude API attempt 1/2
[ERROR] Error in Claude API call: 400 {"type":"error","error":{"type":"invalid_request_error","message":"max_tokens: 100000 > 64000, which is the maximum allowed number of output tokens for claude-3-7-sonnet-20250219"}}
[ERROR] Error analyzing task complexity: 400 {"type":"error","error":{"type":"invalid_request_error","message":"max_tokens: 100000 > 64000, which is the maximum allowed number of output tokens for claude-3-7-sonnet-20250219"}}

Both the CLI command (`analyze-complexity`) and the MCP tool (`analyze_project_complexity`) have been verified to work correctly with this revised approach.
Author: Eyal Toledano
Date: 2025-04-24 22:33:33 -04:00
Parent: bec989dcc9
Commit: ad361f482f
7 changed files with 973 additions and 1086 deletions

mcp-server/src/core/direct-functions/analyze-task-complexity.js

@@ -2,12 +2,11 @@
  * Direct function wrapper for analyzeTaskComplexity
  */
-import { analyzeTaskComplexity } from '../../../../scripts/modules/task-manager.js';
+import analyzeTaskComplexity from '../../../../scripts/modules/task-manager/analyze-task-complexity.js';
 import {
   enableSilentMode,
   disableSilentMode,
-  isSilentMode,
-  readJSON
+  isSilentMode
 } from '../../../../scripts/modules/utils.js';
 import fs from 'fs';
 import path from 'path';
@@ -17,22 +16,23 @@ import path from 'path';
  * @param {Object} args - Function arguments
  * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
  * @param {string} args.outputPath - Explicit absolute path to save the report.
- * @param {string} [args.model] - LLM model to use for analysis
+ * @param {string} [args.model] - Deprecated: LLM model to use for analysis (ignored)
  * @param {string|number} [args.threshold] - Minimum complexity score to recommend expansion (1-10)
  * @param {boolean} [args.research] - Use Perplexity AI for research-backed complexity analysis
  * @param {Object} log - Logger object
  * @param {Object} [context={}] - Context object containing session data
+ * @param {Object} [context.session] - MCP session object
  * @returns {Promise<{success: boolean, data?: Object, error?: {code: string, message: string}}>}
  */
 export async function analyzeTaskComplexityDirect(args, log, context = {}) {
-  const { session } = context; // Only extract session, not reportProgress
+  const { session } = context; // Extract session
   // Destructure expected args
-  const { tasksJsonPath, outputPath, model, threshold, research } = args;
+  const { tasksJsonPath, outputPath, model, threshold, research } = args; // Model is ignored by core function now
+  // --- Initial Checks (remain the same) ---
   try {
     log.info(`Analyzing task complexity with args: ${JSON.stringify(args)}`);
-    // Check if required paths were provided
     if (!tasksJsonPath) {
       log.error('analyzeTaskComplexityDirect called without tasksJsonPath');
       return {
@@ -51,7 +51,6 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
       };
     }
-    // Use the provided paths
     const tasksPath = tasksJsonPath;
     const resolvedOutputPath = outputPath;
@@ -59,25 +58,25 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
     log.info(`Output report will be saved to: ${resolvedOutputPath}`);
     if (research) {
-      log.info('Using Perplexity AI for research-backed complexity analysis');
+      log.info('Using research role for complexity analysis');
     }
-    // Create options object for analyzeTaskComplexity using provided paths
+    // Prepare options for the core function
     const options = {
       file: tasksPath,
       output: resolvedOutputPath,
-      model: model,
+      // model: model, // No longer needed
       threshold: threshold,
-      research: research === true
+      research: research === true // Ensure boolean
     };
+    // --- End Initial Checks ---
-    // Enable silent mode to prevent console logs from interfering with JSON response
+    // --- Silent Mode and Logger Wrapper (remain the same) ---
     const wasSilent = isSilentMode();
     if (!wasSilent) {
       enableSilentMode();
     }
-    // Create a logWrapper that matches the expected mcpLog interface as specified in utilities.mdc
     const logWrapper = {
       info: (message, ...args) => log.info(message, ...args),
       warn: (message, ...args) => log.warn(message, ...args),
@@ -85,52 +84,71 @@
       debug: (message, ...args) => log.debug && log.debug(message, ...args),
       success: (message, ...args) => log.info(message, ...args) // Map success to info
     };
+    // --- End Silent Mode and Logger Wrapper ---
+    let report; // To store the result from the core function
     try {
-      // Call the core function with session and logWrapper as mcpLog
-      await analyzeTaskComplexity(options, {
-        session,
-        mcpLog: logWrapper // Use the wrapper instead of passing log directly
+      // --- Call Core Function (Updated Context Passing) ---
+      // Call the core function, passing options and the context object { session, mcpLog }
+      report = await analyzeTaskComplexity(options, {
+        session, // Pass the session object
+        mcpLog: logWrapper // Pass the logger wrapper
       });
+      // --- End Core Function Call ---
     } catch (error) {
-      log.error(`Error in analyzeTaskComplexity: ${error.message}`);
+      log.error(
+        `Error in analyzeTaskComplexity core function: ${error.message}`
+      );
+      // Restore logging if we changed it
+      if (!wasSilent && isSilentMode()) {
+        disableSilentMode();
+      }
       return {
         success: false,
         error: {
-          code: 'ANALYZE_ERROR',
-          message: `Error running complexity analysis: ${error.message}`
+          code: 'ANALYZE_CORE_ERROR', // More specific error code
+          message: `Error running core complexity analysis: ${error.message}`
        }
      };
    } finally {
-      // Always restore normal logging in finally block, but only if we enabled it
-      if (!wasSilent) {
+      // Always restore normal logging in finally block if we enabled silent mode
+      if (!wasSilent && isSilentMode()) {
        disableSilentMode();
      }
    }
-    // Verify the report file was created
+    // --- Result Handling (remains largely the same) ---
+    // Verify the report file was created (core function writes it)
    if (!fs.existsSync(resolvedOutputPath)) {
      return {
        success: false,
        error: {
-          code: 'ANALYZE_ERROR',
-          message: 'Analysis completed but no report file was created'
+          code: 'ANALYZE_REPORT_MISSING', // Specific code
+          message:
+            'Analysis completed but no report file was created at the expected path.'
+        }
+      };
+    }
+    // The core function now returns the report object directly
+    if (!report || !report.complexityAnalysis) {
+      log.error(
+        'Core analyzeTaskComplexity function did not return a valid report object.'
+      );
+      return {
+        success: false,
+        error: {
+          code: 'INVALID_CORE_RESPONSE',
+          message: 'Core analysis function returned an invalid response.'
        }
      };
    }
-    // Read the report file
-    let report;
    try {
-      report = JSON.parse(fs.readFileSync(resolvedOutputPath, 'utf8'));
-      // Important: Handle different report formats
-      // The core function might return an array or an object with a complexityAnalysis property
-      const analysisArray = Array.isArray(report)
-        ? report
-        : report.complexityAnalysis || [];
-      // Count tasks by complexity
+      const analysisArray = report.complexityAnalysis; // Already an array
+      // Count tasks by complexity (remains the same)
      const highComplexityTasks = analysisArray.filter(
        (t) => t.complexityScore >= 8
      ).length;
@@ -152,29 +170,33 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
          mediumComplexityTasks,
          lowComplexityTasks
        }
+        // Include the full report data if needed by the client
+        // fullReport: report
      }
    };
  } catch (parseError) {
-    log.error(`Error parsing report file: ${parseError.message}`);
+    // Should not happen if core function returns object, but good safety check
+    log.error(`Internal error processing report data: ${parseError.message}`);
    return {
      success: false,
      error: {
-        code: 'REPORT_PARSE_ERROR',
-        message: `Error parsing complexity report: ${parseError.message}`
+        code: 'REPORT_PROCESS_ERROR',
+        message: `Internal error processing complexity report: ${parseError.message}`
      }
    };
  }
+  // --- End Result Handling ---
 } catch (error) {
-   // Make sure to restore normal logging even if there's an error
+   // Catch errors from initial checks or path resolution
+   // Make sure to restore normal logging if silent mode was enabled
   if (isSilentMode()) {
     disableSilentMode();
   }
-  log.error(`Error in analyzeTaskComplexityDirect: ${error.message}`);
+  log.error(`Error in analyzeTaskComplexityDirect setup: ${error.message}`);
   return {
     success: false,
     error: {
-      code: 'CORE_FUNCTION_ERROR',
+      code: 'DIRECT_FUNCTION_SETUP_ERROR',
       message: error.message
     }
   };
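For orientation, a usage sketch of the updated direct function (paths and logger shape are illustrative placeholders, not values from this commit):

```js
// Illustrative invocation, mirroring how the MCP tool calls the direct function.
// `log` must expose info/warn/error (and optionally debug); paths are hypothetical.
const result = await analyzeTaskComplexityDirect(
	{
		tasksJsonPath: '/abs/project/tasks/tasks.json',
		outputPath: '/abs/project/scripts/task-complexity-report.json',
		threshold: 5,
		research: false
	},
	log,
	{ session } // session is forwarded to the unified AI service layer
);
if (!result.success) {
	log.error(`${result.error.code}: ${result.error.message}`);
}
```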

mcp-server/src/tools/analyze.js

@@ -9,9 +9,10 @@ import {
   createErrorResponse,
   getProjectRootFromSession
 } from './utils.js';
-import { analyzeTaskComplexityDirect } from '../core/task-master-core.js';
+import { analyzeTaskComplexityDirect } from '../core/direct-functions/analyze-task-complexity.js';
 import { findTasksJsonPath } from '../core/utils/path-utils.js';
 import path from 'path';
+import fs from 'fs';
 /**
  * Register the analyze tool with the MCP server
@@ -27,13 +28,13 @@ export function registerAnalyzeTool(server) {
         .string()
         .optional()
         .describe(
-          'Output file path for the report (default: scripts/task-complexity-report.json)'
+          'Output file path relative to project root (default: scripts/task-complexity-report.json)'
         ),
       model: z
         .string()
         .optional()
         .describe(
-          'LLM model to use for analysis (defaults to configured model)'
+          'Deprecated: LLM model override (model is determined by configured role)'
         ),
       threshold: z.coerce
         .number()
@@ -47,12 +48,13 @@ export function registerAnalyzeTool(server) {
         .string()
         .optional()
         .describe(
-          'Absolute path to the tasks file (default: tasks/tasks.json)'
+          'Path to the tasks file relative to project root (default: tasks/tasks.json)'
         ),
       research: z
         .boolean()
         .optional()
-        .describe('Use Perplexity AI for research-backed complexity analysis'),
+        .default(false)
+        .describe('Use research role for complexity analysis'),
       projectRoot: z
         .string()
         .describe('The directory of the project. Must be an absolute path.')
@@ -60,17 +62,15 @@
     execute: async (args, { log, session }) => {
       try {
         log.info(
-          `Analyzing task complexity with args: ${JSON.stringify(args)}`
+          `Executing analyze_project_complexity tool with args: ${JSON.stringify(args)}`
         );
-        // Get project root from args or session
-        const rootFolder =
-          args.projectRoot || getProjectRootFromSession(session, log);
+        const rootFolder = args.projectRoot;
         if (!rootFolder) {
-          return createErrorResponse(
-            'Could not determine project root. Please provide it explicitly or ensure your session contains valid root information.'
-          );
+          return createErrorResponse('projectRoot is required.');
+        }
+        if (!path.isAbsolute(rootFolder)) {
+          return createErrorResponse('projectRoot must be an absolute path.');
         }
         let tasksJsonPath;
@@ -82,7 +82,7 @@ export function registerAnalyzeTool(server) {
         } catch (error) {
           log.error(`Error finding tasks.json: ${error.message}`);
           return createErrorResponse(
-            `Failed to find tasks.json: ${error.message}`
+            `Failed to find tasks.json within project root '${rootFolder}': ${error.message}`
           );
         }
@@ -90,11 +90,25 @@ export function registerAnalyzeTool(server) {
           ? path.resolve(rootFolder, args.output)
           : path.resolve(rootFolder, 'scripts', 'task-complexity-report.json');
+        const outputDir = path.dirname(outputPath);
+        try {
+          if (!fs.existsSync(outputDir)) {
+            fs.mkdirSync(outputDir, { recursive: true });
+            log.info(`Created output directory: ${outputDir}`);
+          }
+        } catch (dirError) {
+          log.error(
+            `Failed to create output directory ${outputDir}: ${dirError.message}`
+          );
+          return createErrorResponse(
+            `Failed to create output directory: ${dirError.message}`
+          );
+        }
         const result = await analyzeTaskComplexityDirect(
           {
             tasksJsonPath: tasksJsonPath,
             outputPath: outputPath,
-            model: args.model,
             threshold: args.threshold,
             research: args.research
           },
@@ -103,20 +117,17 @@
         );
         if (result.success) {
-          log.info(`Task complexity analysis complete: ${result.data.message}`);
-          log.info(
-            `Report summary: ${JSON.stringify(result.data.reportSummary)}`
-          );
+          log.info(`Tool analyze_project_complexity finished successfully.`);
         } else {
           log.error(
-            `Failed to analyze task complexity: ${result.error.message}`
+            `Tool analyze_project_complexity failed: ${result.error?.message || 'Unknown error'}`
           );
         }
         return handleApiResult(result, log, 'Error analyzing task complexity');
       } catch (error) {
-        log.error(`Error in analyze tool: ${error.message}`);
-        return createErrorResponse(error.message);
+        log.error(`Critical error in analyze tool execute: ${error.message}`);
+        return createErrorResponse(`Internal tool error: ${error.message}`);
       }
     }
   });
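To illustrate the updated schema, a hedged example of the arguments this tool now accepts (values hypothetical; only parameters visible in the hunks above are shown):

```js
// Hypothetical args for the analyze_project_complexity tool after this change.
const exampleArgs = {
	projectRoot: '/home/user/my-project', // must be an absolute path
	output: 'scripts/task-complexity-report.json', // now resolved relative to projectRoot
	threshold: 5,
	research: false // defaults to false under the new schema
};
```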

File diff suppressed because it is too large.

scripts/task-complexity-report.json

@@ -1,203 +1,259 @@
{ {
"meta": { "meta": {
"generatedAt": "2025-03-24T20:01:35.986Z", "generatedAt": "2025-04-25T02:29:42.258Z",
"tasksAnalyzed": 24, "tasksAnalyzed": 31,
"thresholdScore": 5, "thresholdScore": 5,
"projectName": "Your Project Name", "projectName": "Task Master",
"usedResearch": false "usedResearch": false
}, },
"complexityAnalysis": [ "complexityAnalysis": [
{ {
"taskId": 1, "taskId": 24,
"taskTitle": "Implement Task Data Structure", "taskTitle": "Implement AI-Powered Test Generation Command",
"complexityScore": 7, "complexityScore": 9,
"recommendedSubtasks": 5, "recommendedSubtasks": 10,
"expansionPrompt": "Break down the implementation of the core tasks.json data structure into subtasks that cover schema design, model implementation, validation, file operations, and error handling. For each subtask, include specific technical requirements and acceptance criteria.", "expansionPrompt": "Break down the implementation of an AI-powered test generation command into granular steps, covering CLI integration, task retrieval, AI prompt construction, API integration, test file formatting, error handling, documentation, and comprehensive testing (unit, integration, error cases, and manual verification).",
"reasoning": "This task requires designing a foundational data structure that will be used throughout the system. It involves schema design, validation logic, and file system operations, which together represent moderate to high complexity. The task is critical as many other tasks depend on it." "reasoning": "This task involves advanced CLI development, deep integration with external AI APIs, dynamic prompt engineering, file system operations, error handling, and extensive testing. It requires orchestrating multiple subsystems and ensuring robust, user-friendly output. The cognitive and technical demands are high, justifying a high complexity score and a need for further decomposition into at least 10 subtasks to manage risk and ensure quality.[1][3][4][5]"
}, },
{ {
"taskId": 2, "taskId": 26,
"taskTitle": "Develop Command Line Interface Foundation", "taskTitle": "Implement Context Foundation for AI Operations",
"complexityScore": 6, "complexityScore": 7,
"recommendedSubtasks": 4, "recommendedSubtasks": 8,
"expansionPrompt": "Divide the CLI foundation implementation into subtasks covering Commander.js setup, help documentation creation, console output formatting, and global options handling. Each subtask should specify implementation details and how it integrates with the overall CLI structure.", "expansionPrompt": "Expand the context foundation implementation into detailed subtasks for CLI flag integration, file reading utilities, error handling, context formatting, command handler updates, documentation, and comprehensive testing for both functionality and error scenarios.",
"reasoning": "Setting up the CLI foundation requires integrating Commander.js, implementing various command-line options, and establishing the output formatting system. The complexity is moderate as it involves creating the interface layer that users will interact with." "reasoning": "This task introduces foundational context management across multiple commands, requiring careful CLI design, file I/O, error handling, and integration with AI prompt construction. While less complex than full AI-powered features, it still spans several modules and requires robust validation, suggesting a moderate-to-high complexity and a need for further breakdown.[1][3][4]"
}, },
{ {
"taskId": 3, "taskId": 27,
"taskTitle": "Implement Basic Task Operations", "taskTitle": "Implement Context Enhancements for AI Operations",
"complexityScore": 8, "complexityScore": 8,
"recommendedSubtasks": 5, "recommendedSubtasks": 10,
"expansionPrompt": "Break down the implementation of basic task operations into subtasks covering CRUD operations, status management, dependency handling, and priority management. Each subtask should detail the specific operations, validation requirements, and error cases to handle.", "expansionPrompt": "Decompose the context enhancement task into subtasks for code context extraction, task history integration, PRD summarization, context formatting, token optimization, error handling, and comprehensive testing for each new context type.",
"reasoning": "This task encompasses multiple operations (create, read, update, delete) along with status changes, dependency management, and priority handling. It represents high complexity due to the breadth of functionality and the need to ensure data integrity across operations." "reasoning": "This phase builds on the foundation to add sophisticated context extraction (code, history, PRD), requiring advanced parsing, summarization, and prompt engineering. The need to optimize for token limits and maintain performance across large codebases increases both technical and cognitive complexity, warranting a high score and further subtask expansion.[1][3][4][5]"
}, },
{ {
"taskId": 4, "taskId": 28,
"taskTitle": "Create Task File Generation System", "taskTitle": "Implement Advanced ContextManager System",
"complexityScore": 7, "complexityScore": 10,
"recommendedSubtasks": 4, "recommendedSubtasks": 12,
"expansionPrompt": "Divide the task file generation system into subtasks covering template creation, file generation logic, bi-directional synchronization, and file organization. Each subtask should specify the technical approach, edge cases to handle, and integration points with the task data structure.", "expansionPrompt": "Expand the ContextManager implementation into subtasks for class design, context source integration, optimization algorithms, caching, token management, command interface updates, AI service integration, performance monitoring, logging, and comprehensive testing (unit, integration, performance, and user experience).",
"reasoning": "Implementing file generation with bi-directional synchronization presents significant complexity due to the need to maintain consistency between individual files and the central tasks.json. The system must handle updates in either direction and resolve potential conflicts." "reasoning": "This is a highly complex architectural task involving advanced class design, optimization algorithms, dynamic context prioritization, caching, and integration with multiple AI services. It requires deep system knowledge, careful performance considerations, and robust error handling, making it one of the most complex tasks in the set and justifying a large number of subtasks.[1][3][4][5]"
}, },
{ {
"taskId": 5, "taskId": 32,
"taskTitle": "Integrate Anthropic Claude API", "taskTitle": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
"complexityScore": 6, "complexityScore": 9,
"recommendedSubtasks": 4, "recommendedSubtasks": 15,
"expansionPrompt": "Break down the Claude API integration into subtasks covering authentication setup, prompt template creation, response handling, and error management with retries. Each subtask should detail the specific implementation approach, including security considerations and performance optimizations.", "expansionPrompt": "Break down the 'learn' command implementation into subtasks for file structure setup, path utilities, chat history analysis, rule management, AI integration, error handling, performance optimization, CLI integration, logging, and comprehensive testing.",
"reasoning": "Integrating with the Claude API involves setting up authentication, creating effective prompts, and handling responses and errors. The complexity is moderate, focusing on establishing a reliable connection to the external service with proper error handling and retry logic." "reasoning": "This task requires orchestrating file system operations, parsing complex chat and code histories, managing rule templates, integrating with AI for pattern extraction, and ensuring robust error handling and performance. The breadth and depth of required functionality, along with the need for both automatic and manual triggers, make this a highly complex task needing extensive decomposition.[1][3][4][5]"
}, },
{ {
"taskId": 6, "taskId": 35,
"taskTitle": "Build PRD Parsing System", "taskTitle": "Integrate Grok3 API for Research Capabilities",
"complexityScore": 8, "complexityScore": 7,
"recommendedSubtasks": 5, "recommendedSubtasks": 8,
"expansionPrompt": "Divide the PRD parsing system into subtasks covering file reading, prompt engineering, content-to-task conversion, dependency inference, priority assignment, and handling large documents. Each subtask should specify the AI interaction approach, data transformation steps, and validation requirements.", "expansionPrompt": "Expand the Grok3 API integration into subtasks for API client development, service layer updates, payload/response adaptation, error handling, configuration management, UI updates, backward compatibility, and documentation/testing.",
"reasoning": "Parsing PRDs into structured tasks requires sophisticated prompt engineering and intelligent processing of unstructured text. The complexity is high due to the need to accurately extract tasks, infer dependencies, and handle potentially large documents with varying formats." "reasoning": "This migration task involves replacing a core external API, adapting to new request/response formats, updating configuration and UI, and ensuring backward compatibility. While not as cognitively complex as some AI tasks, the risk and breadth of impact across the system justify a moderate-to-high complexity and further breakdown.[1][3][4]"
}, },
{ {
"taskId": 7, "taskId": 36,
"taskTitle": "Implement Task Expansion with Claude", "taskTitle": "Add Ollama Support for AI Services as Claude Alternative",
"complexityScore": 7, "complexityScore": 7,
"recommendedSubtasks": 4, "recommendedSubtasks": 8,
"expansionPrompt": "Break down the task expansion functionality into subtasks covering prompt creation for subtask generation, expansion workflow implementation, parent-child relationship management, and regeneration mechanisms. Each subtask should detail the AI interaction patterns, data structures, and user experience considerations.", "expansionPrompt": "Decompose the Ollama integration into subtasks for service class implementation, configuration, model selection, prompt formatting, error handling, fallback logic, documentation, and comprehensive testing.",
"reasoning": "Task expansion involves complex AI interactions to generate meaningful subtasks and manage their relationships with parent tasks. The complexity comes from creating effective prompts that produce useful subtasks and implementing a smooth workflow for users to generate and refine these subtasks." "reasoning": "Adding a local AI provider requires interface compatibility, configuration management, error handling, and fallback logic, as well as user documentation. The technical complexity is moderate-to-high, especially in ensuring seamless switching and robust error handling, warranting further subtasking.[1][3][4]"
}, },
{ {
"taskId": 8, "taskId": 37,
"taskTitle": "Develop Implementation Drift Handling", "taskTitle": "Add Gemini Support for Main AI Services as Claude Alternative",
"complexityScore": 9, "complexityScore": 7,
"recommendedSubtasks": 5, "recommendedSubtasks": 8,
"expansionPrompt": "Divide the implementation drift handling into subtasks covering change detection, task rewriting based on new context, dependency chain updates, work preservation, and update suggestion analysis. Each subtask should specify the algorithms, heuristics, and AI prompts needed to effectively manage implementation changes.", "expansionPrompt": "Expand Gemini integration into subtasks for service class creation, authentication, prompt/response mapping, configuration, error handling, streaming support, documentation, and comprehensive testing.",
"reasoning": "This task involves the complex challenge of updating future tasks based on changes in implementation. It requires sophisticated analysis of completed work, understanding how it affects pending tasks, and intelligently updating those tasks while preserving dependencies. This represents high complexity due to the need for context-aware AI reasoning." "reasoning": "Integrating a new cloud AI provider involves authentication, API adaptation, configuration, and ensuring feature parity. The complexity is similar to other provider integrations, requiring careful planning and multiple subtasks for robust implementation and testing.[1][3][4]"
}, },
{ {
"taskId": 9, "taskId": 40,
"taskTitle": "Integrate Perplexity API", "taskTitle": "Implement 'plan' Command for Task Implementation Planning",
"complexityScore": 5, "complexityScore": 6,
"recommendedSubtasks": 3, "recommendedSubtasks": 6,
"expansionPrompt": "Break down the Perplexity API integration into subtasks covering authentication setup, research-oriented prompt creation, response handling, and fallback mechanisms. Each subtask should detail the implementation approach, integration with existing systems, and quality comparison metrics.", "expansionPrompt": "Break down the 'plan' command implementation into subtasks for CLI integration, task/subtask retrieval, AI prompt construction, plan formatting, error handling, and testing.",
"reasoning": "Similar to the Claude integration but slightly less complex, this task focuses on connecting to the Perplexity API for research capabilities. The complexity is moderate, involving API authentication, prompt templates, and response handling with fallback mechanisms to Claude." "reasoning": "This task involves AI prompt engineering, CLI integration, and content formatting, but is more focused and less technically demanding than full AI service or context management features. It still requires careful error handling and testing, suggesting a moderate complexity and a handful of subtasks.[1][3][4]"
}, },
{ {
"taskId": 10, "taskId": 41,
"taskTitle": "Create Research-Backed Subtask Generation", "taskTitle": "Implement Visual Task Dependency Graph in Terminal",
"complexityScore": 7, "complexityScore": 8,
"recommendedSubtasks": 4, "recommendedSubtasks": 10,
"expansionPrompt": "Divide the research-backed subtask generation into subtasks covering domain-specific prompt creation, context enrichment from research, knowledge incorporation, and detailed subtask generation. Each subtask should specify the approach for leveraging research data and integrating it into the generation process.", "expansionPrompt": "Expand the visual dependency graph implementation into subtasks for CLI command setup, graph layout algorithms, ASCII/Unicode rendering, color coding, circular dependency detection, filtering, accessibility, performance optimization, documentation, and testing.",
"reasoning": "This task builds on previous work to enhance subtask generation with research capabilities. The complexity comes from effectively incorporating research results into the generation process and creating domain-specific prompts that produce high-quality, detailed subtasks with best practices." "reasoning": "Rendering complex dependency graphs in the terminal with color coding, layout optimization, and accessibility features is technically challenging and requires careful algorithm design and robust error handling. The need for performance optimization and user-friendly output increases the complexity, justifying a high score and further subtasking.[1][3][4][5]"
}, },
{ {
"taskId": 11, "taskId": 42,
"taskTitle": "Implement Batch Operations", "taskTitle": "Implement MCP-to-MCP Communication Protocol",
"complexityScore": 6, "complexityScore": 10,
"recommendedSubtasks": 4, "recommendedSubtasks": 12,
"expansionPrompt": "Break down the batch operations functionality into subtasks covering multi-task status updates, bulk subtask generation, task filtering/querying, and batch prioritization. Each subtask should detail the command interface, implementation approach, and performance considerations for handling multiple tasks.", "expansionPrompt": "Break down the MCP-to-MCP protocol implementation into subtasks for protocol definition, adapter pattern, client module, reference integration, mode support, core module updates, configuration, documentation, error handling, security, and comprehensive testing.",
"reasoning": "Implementing batch operations requires extending existing functionality to work with multiple tasks simultaneously. The complexity is moderate, focusing on efficient processing of task sets, filtering capabilities, and maintaining data consistency across bulk operations." "reasoning": "Designing and implementing a standardized communication protocol with dynamic mode switching, adapter patterns, and robust error handling is architecturally complex. It requires deep system understanding, security considerations, and extensive testing, making it one of the most complex tasks and requiring significant decomposition.[1][3][4][5]"
}, },
{ {
"taskId": 12, "taskId": 43,
"taskTitle": "Develop Project Initialization System", "taskTitle": "Add Research Flag to Add-Task Command",
"complexityScore": 6, "complexityScore": 5,
"recommendedSubtasks": 4, "recommendedSubtasks": 5,
"expansionPrompt": "Divide the project initialization system into subtasks covering project templating, interactive setup wizard, environment configuration, directory structure creation, and example generation. Each subtask should specify the user interaction flow, template design, and integration with existing components.", "expansionPrompt": "Expand the research flag implementation into subtasks for CLI parser updates, subtask generation logic, parent linking, help documentation, and testing.",
"reasoning": "Creating a project initialization system involves setting up templates, an interactive wizard, and generating initial files and directories. The complexity is moderate, focusing on providing a smooth setup experience for new projects with appropriate defaults and configuration." "reasoning": "This is a focused feature addition involving CLI parsing, subtask generation, and documentation. While it requires some integration with AI or templating logic, the scope is well-defined and less complex than architectural or multi-module tasks, suggesting a moderate complexity and a handful of subtasks.[1][3][4]"
}, },
{ {
"taskId": 13, "taskId": 44,
"taskTitle": "Create Cursor Rules Implementation", "taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
"complexityScore": 5, "complexityScore": 9,
"recommendedSubtasks": 3, "recommendedSubtasks": 10,
"expansionPrompt": "Break down the Cursor rules implementation into subtasks covering documentation creation (dev_workflow.mdc, cursor_rules.mdc, self_improve.mdc), directory structure setup, and integration documentation. Each subtask should detail the specific content to include and how it enables effective AI interaction.", "expansionPrompt": "Decompose the webhook and event trigger system into subtasks for event system design, webhook registration, trigger definition, incoming/outgoing webhook handling, authentication, rate limiting, CLI management, payload templating, logging, and comprehensive testing.",
"reasoning": "This task focuses on creating documentation and rules for Cursor AI integration. The complexity is moderate, involving the creation of structured documentation files that define how AI should interact with the system and setting up the appropriate directory structure." "reasoning": "Building a robust automation system with webhooks and event triggers involves designing an event system, secure webhook handling, trigger logic, CLI management, and error handling. The breadth and integration requirements make this a highly complex task needing extensive breakdown.[1][3][4][5]"
}, },
{ {
"taskId": 14, "taskId": 45,
"taskTitle": "Develop Agent Workflow Guidelines", "taskTitle": "Implement GitHub Issue Import Feature",
"complexityScore": 5, "complexityScore": 7,
"recommendedSubtasks": 3, "recommendedSubtasks": 8,
"expansionPrompt": "Divide the agent workflow guidelines into subtasks covering task discovery documentation, selection guidelines, implementation guidance, verification procedures, and prioritization rules. Each subtask should specify the specific guidance to provide and how it enables effective agent workflows.", "expansionPrompt": "Expand the GitHub issue import feature into subtasks for CLI flag parsing, URL extraction, API integration, data mapping, authentication, error handling, override logic, documentation, and testing.",
"reasoning": "Creating comprehensive guidelines for AI agents involves documenting workflows, selection criteria, and implementation guidance. The complexity is moderate, focusing on clear documentation that helps agents interact effectively with the task system." "reasoning": "This task involves external API integration, data mapping, authentication, error handling, and user override logic. While not as complex as architectural changes, it still requires careful planning and multiple subtasks for robust implementation and testing.[1][3][4]"
}, },
{ {
"taskId": 15, "taskId": 46,
"taskTitle": "Optimize Agent Integration with Cursor and dev.js Commands", "taskTitle": "Implement ICE Analysis Command for Task Prioritization",
"complexityScore": 6, "complexityScore": 7,
"recommendedSubtasks": 4, "recommendedSubtasks": 8,
"expansionPrompt": "Break down the agent integration optimization into subtasks covering existing pattern documentation, Cursor-dev.js command integration enhancement, workflow documentation improvement, and feature additions. Each subtask should specify the specific improvements to make and how they enhance agent interaction.", "expansionPrompt": "Break down the ICE analysis command into subtasks for scoring algorithm development, LLM prompt engineering, report generation, CLI rendering, integration with complexity reports, sorting/filtering, error handling, and testing.",
"reasoning": "This task involves enhancing and documenting existing agent interaction patterns with Cursor and dev.js commands. The complexity is moderate, focusing on improving integration between different components and ensuring agents can effectively utilize the system's capabilities." "reasoning": "Implementing a prioritization command with LLM-based scoring, report generation, and CLI rendering involves moderate technical and cognitive complexity, especially in ensuring accurate and actionable outputs. It requires several subtasks for robust implementation and validation.[1][3][4]"
}, },
{ {
"taskId": 16, "taskId": 47,
"taskTitle": "Create Configuration Management System", "taskTitle": "Enhance Task Suggestion Actions Card Workflow",
"complexityScore": 6, "complexityScore": 7,
"recommendedSubtasks": 4, "recommendedSubtasks": 8,
"expansionPrompt": "Divide the configuration management system into subtasks covering environment variable handling, .env file support, configuration validation, defaults with overrides, and secure API key handling. Each subtask should specify the implementation approach, security considerations, and user experience for configuration.", "expansionPrompt": "Expand the workflow enhancement into subtasks for UI redesign, phase management logic, interactive elements, progress tracking, context addition, task management integration, accessibility, and comprehensive testing.",
"reasoning": "Implementing robust configuration management involves handling environment variables, .env files, validation, and secure storage of sensitive information. The complexity is moderate, focusing on creating a flexible system that works across different environments with appropriate security measures." "reasoning": "Redesigning a multi-phase workflow with interactive UI elements, progress tracking, and context management involves both UI/UX and logic complexity. The need for seamless transitions and robust state management increases the complexity, warranting further breakdown.[1][3][4]"
}, },
{ {
"taskId": 17, "taskId": 48,
"taskTitle": "Implement Comprehensive Logging System", "taskTitle": "Refactor Prompts into Centralized Structure",
"complexityScore": 5, "complexityScore": 6,
"recommendedSubtasks": 3, "recommendedSubtasks": 6,
"expansionPrompt": "Break down the logging system implementation into subtasks covering log level configuration, output destination management, specialized logging (commands, APIs, errors), and performance metrics. Each subtask should detail the implementation approach, configuration options, and integration with existing components.", "expansionPrompt": "Break down the prompt refactoring into subtasks for directory setup, prompt extraction, import updates, naming conventions, documentation, and regression testing.",
"reasoning": "Creating a comprehensive logging system involves implementing multiple log levels, configurable destinations, and specialized logging for different components. The complexity is moderate, focusing on providing useful information for debugging and monitoring while maintaining performance." "reasoning": "This is a codebase refactoring task focused on maintainability and organization. While it touches many files, the technical complexity is moderate, but careful planning and testing are needed to avoid regressions, suggesting a moderate complexity and several subtasks.[1][3][4]"
}, },
{ {
"taskId": 18, "taskId": 49,
"taskTitle": "Create Comprehensive User Documentation", "taskTitle": "Implement Code Quality Analysis Command",
"complexityScore": 7, "complexityScore": 8,
"recommendedSubtasks": 5, "recommendedSubtasks": 10,
"expansionPrompt": "Divide the user documentation creation into subtasks covering README with installation instructions, command reference, configuration guide, example workflows, troubleshooting guides, and advanced usage. Each subtask should specify the content to include, format, and organization to ensure comprehensive coverage.", "expansionPrompt": "Expand the code quality analysis command into subtasks for pattern recognition, best practice verification, AI integration, recommendation generation, task integration, CLI development, configuration, error handling, documentation, and comprehensive testing.",
"reasoning": "Creating comprehensive documentation requires covering installation, usage, configuration, examples, and troubleshooting across multiple components. The complexity is moderate to high due to the breadth of functionality to document and the need to make it accessible to different user levels." "reasoning": "This task involves static code analysis, AI integration for best practice checks, recommendation generation, and task creation workflows. The technical and cognitive demands are high, requiring robust validation and integration, justifying a high complexity and multiple subtasks.[1][3][4][5]"
}, },
{ {
"taskId": 19, "taskId": 50,
"taskTitle": "Implement Error Handling and Recovery", "taskTitle": "Implement Test Coverage Tracking System by Task",
"complexityScore": 8, "complexityScore": 9,
"recommendedSubtasks": 5, "recommendedSubtasks": 12,
"expansionPrompt": "Break down the error handling implementation into subtasks covering consistent error formatting, helpful error messages, API error handling with retries, file system error recovery, validation errors, and system state recovery. Each subtask should detail the specific error types to handle, recovery strategies, and user communication approach.", "expansionPrompt": "Break down the test coverage tracking system into subtasks for data structure design, coverage parsing, mapping algorithms, CLI commands, LLM-powered test generation, MCP integration, visualization, workflow integration, error handling, documentation, and comprehensive testing.",
"reasoning": "Implementing robust error handling across the entire system represents high complexity due to the variety of error types, the need for meaningful messages, and the implementation of recovery mechanisms. This task is critical for system reliability and user experience." "reasoning": "Mapping test coverage to tasks, integrating with coverage tools, generating targeted tests, and visualizing coverage requires advanced data modeling, parsing, AI integration, and workflow design. The breadth and depth of this system make it highly complex and in need of extensive decomposition.[1][3][4][5]"
}, },
{ {
"taskId": 20, "taskId": 51,
"taskTitle": "Create Token Usage Tracking and Cost Management", "taskTitle": "Implement Perplexity Research Command",
"complexityScore": 7, "complexityScore": 7,
"recommendedSubtasks": 4, "recommendedSubtasks": 8,
"expansionPrompt": "Divide the token tracking and cost management into subtasks covering usage tracking implementation, configurable limits, reporting features, cost estimation, caching for optimization, and usage alerts. Each subtask should specify the implementation approach, data storage, and user interface for monitoring and managing usage.", "expansionPrompt": "Expand the Perplexity research command into subtasks for API client development, context extraction, CLI interface, result formatting, caching, error handling, documentation, and comprehensive testing.",
"reasoning": "Implementing token usage tracking involves monitoring API calls, calculating costs, implementing limits, and optimizing usage through caching. The complexity is moderate to high, focusing on providing users with visibility into their API consumption and tools to manage costs." "reasoning": "This task involves external API integration, context extraction, CLI development, result formatting, caching, and error handling. The technical complexity is moderate-to-high, especially in ensuring robust and user-friendly output, suggesting multiple subtasks.[1][3][4]"
}, },
{ {
"taskId": 21, "taskId": 52,
"taskTitle": "Refactor dev.js into Modular Components", "taskTitle": "Implement Task Suggestion Command for CLI",
"complexityScore": 8, "complexityScore": 6,
"recommendedSubtasks": 5, "recommendedSubtasks": 6,
"expansionPrompt": "Break down the refactoring of dev.js into subtasks covering module design (commands.js, ai-services.js, task-manager.js, ui.js, utils.js), entry point restructuring, dependency management, error handling standardization, and documentation. Each subtask should detail the specific code to extract, interfaces to define, and integration points between modules.", "expansionPrompt": "Break down the task suggestion command into subtasks for task snapshot collection, context extraction, AI suggestion generation, interactive CLI interface, error handling, and testing.",
"reasoning": "Refactoring a monolithic file into modular components represents high complexity due to the need to identify appropriate boundaries, manage dependencies between modules, and ensure all functionality is preserved. This requires deep understanding of the existing codebase and careful restructuring." "reasoning": "This is a focused feature involving AI suggestion generation and interactive CLI elements. While it requires careful context management and error handling, the scope is well-defined and less complex than architectural or multi-module tasks, suggesting a moderate complexity and several subtasks.[1][3][4]"
}, },
{ {
"taskId": 22, "taskId": 53,
"taskTitle": "Create Comprehensive Test Suite for Task Master CLI", "taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
"complexityScore": 9, "complexityScore": 6,
"recommendedSubtasks": 5, "recommendedSubtasks": 6,
"expansionPrompt": "Divide the test suite creation into subtasks covering unit test implementation, integration test development, end-to-end test creation, mocking setup, and CI integration. Each subtask should specify the testing approach, coverage goals, test data preparation, and specific functionality to test.", "expansionPrompt": "Expand the subtask suggestion feature into subtasks for parent task validation, context gathering, AI suggestion logic, interactive CLI interface, subtask linking, and testing.",
"reasoning": "Developing a comprehensive test suite represents high complexity due to the need to cover unit, integration, and end-to-end tests across all functionality, implement appropriate mocking, and ensure good test coverage. This requires significant test engineering and understanding of the entire system." "reasoning": "Similar to the task suggestion command, this feature is focused but requires robust context management, AI integration, and interactive CLI handling. The complexity is moderate, warranting several subtasks for a robust implementation.[1][3][4]"
}, },
{ {
"taskId": 23, "taskId": 54,
"taskTitle": "Implement MCP (Model Context Protocol) Server Functionality for Task Master", "taskTitle": "Add Research Flag to Add-Task Command",
"complexityScore": 9, "complexityScore": 5,
"recommendedSubtasks": 5, "recommendedSubtasks": 5,
"expansionPrompt": "Break down the MCP server implementation into subtasks covering core server module creation, endpoint implementation (/context, /models, /execute), context management system, authentication mechanisms, and performance optimization. Each subtask should detail the API design, data structures, and integration with existing Task Master functionality.", "expansionPrompt": "Break down the research flag enhancement into subtasks for CLI parser updates, research invocation, user interaction, task creation flow integration, and testing.",
"reasoning": "Implementing an MCP server represents high complexity due to the need to create a RESTful API with multiple endpoints, manage context data efficiently, handle authentication, and ensure compatibility with the MCP specification. This requires significant API design and server-side development work." "reasoning": "This is a focused enhancement involving CLI parsing, research invocation, and user interaction. The technical complexity is moderate, with a clear scope and integration points, suggesting a handful of subtasks.[1][3][4]"
}, },
{ {
"taskId": 24, "taskId": 55,
"taskTitle": "Implement AI-Powered Test Generation Command", "taskTitle": "Implement Positional Arguments Support for CLI Commands",
"complexityScore": 7, "complexityScore": 6,
"recommendedSubtasks": 4, "recommendedSubtasks": 6,
"expansionPrompt": "Divide the test generation command implementation into subtasks covering command structure and parameter handling, task analysis logic, AI prompt construction, and test file generation. Each subtask should specify the implementation approach, AI interaction pattern, and output formatting requirements.", "expansionPrompt": "Expand positional argument support into subtasks for parser updates, argument mapping, help documentation, error handling, backward compatibility, and comprehensive testing.",
"reasoning": "Creating an AI-powered test generation command involves analyzing tasks, constructing effective prompts, and generating well-formatted test files. The complexity is moderate to high, focusing on leveraging AI to produce useful tests based on task descriptions and subtasks." "reasoning": "Upgrading CLI parsing to support positional arguments requires careful mapping, error handling, documentation, and regression testing to maintain backward compatibility. The complexity is moderate, suggesting several subtasks.[1][3][4]"
} },
] {
"taskId": 56,
"taskTitle": "Refactor Task-Master Files into Node Module Structure",
"complexityScore": 8,
"recommendedSubtasks": 10,
"expansionPrompt": "Break down the refactoring into subtasks for directory setup, file migration, import path updates, build script adjustments, compatibility checks, documentation, regression testing, and rollback planning.",
"reasoning": "This is a high-risk, broad refactoring affecting many files and build processes. It requires careful planning, incremental changes, and extensive testing to avoid regressions, justifying a high complexity and multiple subtasks.[1][3][4][5]"
},
{
"taskId": 57,
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
"complexityScore": 7,
"recommendedSubtasks": 8,
"expansionPrompt": "Expand the CLI UX enhancement into subtasks for log management, visual design, interactive elements, output formatting, help/documentation, accessibility, performance optimization, and comprehensive testing.",
"reasoning": "Improving CLI UX involves log management, visual enhancements, interactive elements, and accessibility, requiring both technical and design skills. The breadth of improvements and need for robust testing increase the complexity, suggesting multiple subtasks.[1][3][4]"
},
{
"taskId": 58,
"taskTitle": "Implement Elegant Package Update Mechanism for Task-Master",
"complexityScore": 7,
"recommendedSubtasks": 8,
"expansionPrompt": "Break down the update mechanism into subtasks for version detection, update command implementation, file management, configuration migration, notification system, rollback logic, documentation, and comprehensive testing.",
"reasoning": "Implementing a robust update mechanism involves version management, file operations, configuration migration, rollback planning, and user communication. The technical and operational complexity is moderate-to-high, requiring multiple subtasks.[1][3][4]"
},
{
"taskId": 59,
"taskTitle": "Remove Manual Package.json Modifications and Implement Automatic Dependency Management",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "Expand the dependency management refactor into subtasks for code audit, removal of manual modifications, npm dependency updates, initialization command updates, documentation, and regression testing.",
"reasoning": "This is a focused refactoring to align with npm best practices. While it touches installation and configuration logic, the technical complexity is moderate, with a clear scope and manageable risk, suggesting several subtasks.[1][3][4]"
},
{
"taskId": 60,
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
"complexityScore": 9,
"recommendedSubtasks": 12,
"expansionPrompt": "Break down the mentor system implementation into subtasks for mentor management, round-table simulation, CLI integration, AI personality simulation, task integration, output formatting, error handling, documentation, and comprehensive testing.",
"reasoning": "This task involves designing a new system for mentor management, simulating multi-personality AI discussions, integrating with tasks, and ensuring robust CLI and output handling. The breadth and novelty of the feature, along with the need for robust simulation and integration, make it highly complex and in need of extensive decomposition.[1][3][4][5]"
},
{
"taskId": 61,
"taskTitle": "Implement Flexible AI Model Management",
"complexityScore": 10,
"recommendedSubtasks": 15,
"expansionPrompt": "Expand the AI model management implementation into subtasks for configuration management, CLI command parsing, provider module development, unified service abstraction, environment variable handling, documentation, integration testing, migration planning, and cleanup of legacy code.",
"reasoning": "This is a major architectural overhaul involving configuration management, CLI design, multi-provider integration, abstraction layers, environment variable handling, documentation, and migration. The technical and organizational complexity is extremely high, requiring extensive decomposition and careful coordination.[1][3][4][5]"
},
{
"taskId": 62,
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the --simple flag implementation into subtasks for CLI parser updates, update logic modification, timestamp formatting, display logic, documentation, and testing.",
"reasoning": "This is a focused feature addition involving CLI parsing, conditional logic, timestamp formatting, and display updates. The technical complexity is moderate, with a clear scope and manageable risk, suggesting a handful of subtasks.[1][3][4]"
}
]
}


@@ -1819,39 +1819,333 @@ This piecemeal approach aims to establish the refactoring pattern before tacklin
### Details:
## 36. Refactor analyze-task-complexity.js for Unified AI Service & Config [in-progress]
### Dependencies: None
### Description: Replace direct AI calls with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep config getters needed for report metadata (`getProjectName`, `getDefaultSubtasks`).
### Details:
<info added on 2025-04-24T17:45:51.956Z>
## Additional Implementation Notes for Refactoring
**General Guidance**
- Ensure all AI-related logic in `analyze-task-complexity.js` is abstracted behind the `generateObjectService` interface. The function should only specify *what* to generate (schema, prompt, and parameters), not *how* the AI call is made or which model/config is used.
- Remove any code that directly fetches AI model parameters or credentials from configuration files. All such details must be handled by the unified service layer.
**1. Core Logic Function (analyze-task-complexity.js)**
- Refactor the function signature to accept a `session` object and a `role` parameter, in addition to the existing arguments.
- When preparing the service call, construct a payload object containing:
- The Zod schema for expected output.
- The prompt or input for the AI.
- The `role` (e.g., "researcher" or "default") based on the `useResearch` flag.
- The `session` context for downstream configuration and authentication.
- Example service call:
```js
const result = await generateObjectService({
schema: complexitySchema,
prompt: buildPrompt(task, options),
role,
session,
});
```
- Remove all references to direct AI client instantiation or configuration fetching.
**2. CLI Command Action Handler (commands.js)**
- Ensure the CLI handler for `analyze-complexity`:
- Accepts and parses the `--use-research` flag (or equivalent).
- Passes the `useResearch` flag and the current session context to the core function.
- Handles errors from the unified service gracefully, providing user-friendly feedback.
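A minimal sketch of such a handler, assuming a commander-style CLI; the import path, option name, and session shape are illustrative, not the actual codebase:
```js
// Hypothetical wiring for the analyze-complexity CLI command.
import { program } from 'commander';
import analyzeTaskComplexity from './task-manager/analyze-task-complexity.js';

program
  .command('analyze-complexity')
  .option('--use-research', 'use the research AI role for analysis')
  .action(async (options) => {
    try {
      await analyzeTaskComplexity({
        useResearch: options.useResearch === true,
        session: { env: process.env } // placeholder; the real CLI may build the session differently
      });
    } catch (err) {
      // Surface unified-service errors as user-friendly feedback
      console.error(`Complexity analysis failed: ${err.message}`);
      process.exit(1);
    }
  });
```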
**3. MCP Tool Definition (mcp-server/src/tools/analyze.js)**
- Align the Zod schema for CLI options with the parameters expected by the core function, including `useResearch` and any new required fields.
- Use `getMCPProjectRoot` to resolve the project path before invoking the core function.
- Add status logging before and after the analysis, e.g., "Analyzing task complexity..." and "Analysis complete."
- Ensure the tool calls the core function with all required parameters, including session and resolved paths.
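A rough sketch of the tool definition, assuming a FastMCP-style `addTool` API with a `server` instance in scope; the tool name, the direct-function name, and the `getMCPProjectRoot` signature are illustrative:
```js
// Hypothetical MCP tool registration; names and signatures are illustrative.
import { z } from 'zod';

server.addTool({
  name: 'analyze_project_complexity',
  description: 'Analyze task complexity and generate expansion recommendations',
  parameters: z.object({
    useResearch: z.boolean().optional().describe('Use the research AI role'),
    output: z.string().optional().describe('Path for the complexity report')
  }),
  execute: async (args, { log, session }) => {
    log.info('Analyzing task complexity...');
    const projectRoot = getMCPProjectRoot(session); // assumed helper for path resolution
    const result = await analyzeTaskComplexityDirect({ ...args, projectRoot }, log, { session });
    log.info('Analysis complete.');
    return result;
  }
});
```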
**4. MCP Direct Function Wrapper (mcp-server/src/core/direct-functions/analyze-complexity-direct.js)**
- Remove any direct AI client or config usage.
- Implement a logger wrapper that standardizes log output for this function (e.g., `logger.info`, `logger.error`).
- Pass the session context through to the core function to ensure all environment/config access is centralized.
- Return a standardized response object, e.g.:
```js
return {
success: true,
data: analysisResult,
message: "Task complexity analysis completed.",
};
```
**Testing and Validation**
- After refactoring, add or update tests to ensure:
- The function does not break if AI service configuration changes.
- The correct role and session are always passed to the unified service.
- Errors from the unified service are handled and surfaced appropriately.
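A sketch of the first two checks, assuming the core function accepts injected service dependencies for testing (see the dependency-injection note under Best Practices below); the import path and session shape are hypothetical:
```js
import { jest } from '@jest/globals';
// Path illustrative; points at the module under test.
import analyzeTaskComplexity from './task-manager/analyze-task-complexity.js';

test('passes role and session through to the unified service', async () => {
  const fakeService = jest.fn().mockResolvedValue({ complexityAnalysis: [] });
  const session = { env: {} }; // hypothetical session shape

  // Second argument is an assumed injection point for tests
  await analyzeTaskComplexity(
    { useResearch: true, session },
    { generateObjectService: fakeService }
  );

  expect(fakeService).toHaveBeenCalledWith(
    expect.objectContaining({ role: 'researcher', session })
  );
});
```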
**Best Practices**
- Keep the core logic function pure and focused on orchestration, not implementation details.
- Use dependency injection for session/context to facilitate testing and future extensibility.
- Document the expected structure of the session and role parameters for maintainability.
These enhancements will ensure the refactored code is modular, maintainable, and fully decoupled from AI implementation details, aligning with modern refactoring best practices[1][3][5].
</info added on 2025-04-24T17:45:51.956Z>
## 37. Refactor expand-task.js for Unified AI Service & Config [pending]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers like `generateSubtasksWithPerplexity`) with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep `getDefaultSubtasks` usage.
### Details:
<info added on 2025-04-24T17:46:51.286Z>
- In expand-task.js, ensure that all AI parameter configuration (such as model, temperature, max tokens) is passed via the unified generateObjectService interface, not fetched directly from config files or environment variables. This centralizes AI config management and supports future service changes without further refactoring.
- When preparing the service call, construct the payload to include both the prompt and any schema or validation requirements expected by generateObjectService. For example, if subtasks must conform to a Zod schema, pass the schema definition or reference as part of the call.
- For the CLI handler, ensure that the --research flag is mapped to the useResearch boolean and that this is explicitly passed to the core expand-task logic. Also, propagate any session or user context from CLI options to the core function for downstream auditing or personalization.
- In the MCP tool definition, validate that all CLI-exposed parameters are reflected in the Zod schema, including optional ones like prompt overrides or force regeneration. This ensures strict input validation and prevents runtime errors.
- In the direct function wrapper, implement a try/catch block around the core expandTask invocation. On error, log the error with context (task id, session id) and return a standardized error response object with error code and message fields. A sketch follows at the end of this list.
- Add unit tests or integration tests to verify that expand-task.js no longer imports or uses any direct AI client or config getter, and that all AI calls are routed through ai-services-unified.js.
- Document the expected shape of the session object and any required fields for downstream service calls, so future maintainers know what context must be provided.
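A minimal sketch of the wrapper described above; the error code, logger interface, and `expandTask` import are illustrative:
```js
// Hypothetical direct function wrapper with standardized error handling.
export async function expandTaskDirect(args, log, { session } = {}) {
  try {
    const result = await expandTask({ ...args, session });
    return { success: true, data: result };
  } catch (err) {
    // Log with context so failures can be traced to a task and session
    log.error(`expandTask failed for task ${args.id}: ${err.message}`);
    return {
      success: false,
      error: { code: 'EXPAND_TASK_ERROR', message: err.message }
    };
  }
}
```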
</info added on 2025-04-24T17:46:51.286Z>
## 38. Refactor expand-all-tasks.js for Unified AI Helpers & Config [pending]
### Dependencies: None
### Description: Ensure this file correctly calls the refactored `getSubtasksFromAI` helper. Update config usage to only use `getDefaultSubtasks` from `config-manager.js` directly. AI interaction itself is handled by the helper.
### Details:
<info added on 2025-04-24T17:48:09.354Z>
## Additional Implementation Notes for Refactoring expand-all-tasks.js
- Replace any direct imports of AI clients (e.g., OpenAI, Anthropic) and configuration getters with a single import of `expandTask` from `expand-task.js`, which now encapsulates all AI and config logic.
- Ensure that the orchestration logic in `expand-all-tasks.js`:
- Iterates over all pending tasks, checking for existing subtasks before invoking expansion.
- For each task, calls `expandTask` and passes both the `useResearch` flag and the current `session` object as received from upstream callers.
- Does not contain any logic for AI prompt construction, API calls, or config file reading—these are now delegated to the unified helpers.
- Maintain progress reporting by emitting status updates (e.g., via events or logging) before and after each task expansion, and ensure that errors from `expandTask` are caught and reported with sufficient context (task ID, error message).
- Example code snippet for calling the refactored helper:
```js
// Pseudocode for orchestration loop
for (const task of pendingTasks) {
try {
reportProgress(`Expanding task ${task.id}...`);
await expandTask({
task,
useResearch,
session,
});
reportProgress(`Task ${task.id} expanded.`);
} catch (err) {
reportError(`Failed to expand task ${task.id}: ${err.message}`);
}
}
```
- Remove any fallback or legacy code paths that previously handled AI or config logic directly within this file.
- Ensure that all configuration defaults are accessed exclusively via `getDefaultSubtasks` from `config-manager.js` and only within the unified helper, not in `expand-all-tasks.js`.
- Add or update JSDoc comments to clarify that this module is now a pure orchestrator and does not perform AI or config operations directly.
</info added on 2025-04-24T17:48:09.354Z>
## 39. Refactor get-subtasks-from-ai.js for Unified AI Service & Config [pending]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead.
### Details:
<info added on 2025-04-24T17:48:35.005Z>
**Additional Implementation Notes for Refactoring get-subtasks-from-ai.js**
- **Zod Schema Definition**:
Define a Zod schema that precisely matches the expected subtask object structure. For example, if a subtask should have an id (string), title (string), and status (string), use:
```js
import { z } from 'zod';
const SubtaskSchema = z.object({
id: z.string(),
title: z.string(),
status: z.string(),
// Add other fields as needed
});
const SubtasksArraySchema = z.array(SubtaskSchema);
```
This ensures robust runtime validation and clear error reporting if the AI response does not match expectations[5][1][3].
- **Unified Service Invocation**:
Replace all direct AI client and config usage with:
```js
import { generateObjectService } from './ai-services-unified';
// Example usage:
const subtasks = await generateObjectService({
schema: SubtasksArraySchema,
prompt,
role,
session,
});
```
This centralizes AI invocation and parameter management, ensuring consistency and easier maintenance.
- **Role Determination**:
Use the `useResearch` flag to select the AI role:
```js
const role = useResearch ? 'researcher' : 'default';
```
- **Error Handling**:
Implement structured error handling:
```js
try {
// AI service call
} catch (err) {
if (err.name === 'ServiceUnavailableError') {
// Handle AI service unavailability
} else if (err.name === 'ZodError') {
// Handle schema validation errors
// err.errors contains detailed validation issues
} else if (err.name === 'PromptConstructionError') {
// Handle prompt construction issues
} else {
// Handle unexpected errors
}
throw err; // or wrap and rethrow as needed
}
```
This pattern ensures that consumers can distinguish between different failure modes and respond appropriately.
- **Consumer Contract**:
Update the function signature to require both `useResearch` and `session` parameters, and document this in JSDoc/type annotations for clarity.
- **Prompt Construction**:
Move all prompt construction logic outside the core function if possible, or encapsulate it so that errors can be caught and reported as `PromptConstructionError`.
- **No AI Implementation Details**:
The refactored function should not expose or depend on any AI implementation specifics—only the unified service interface and schema validation.
- **Testing**:
Add or update tests to cover:
- Successful subtask generation
- Schema validation failures (invalid AI output)
- Service unavailability scenarios
- Prompt construction errors
These enhancements ensure the refactored file is robust, maintainable, and aligned with the unified AI service architecture, leveraging Zod for strict runtime validation and clear error boundaries[5][1][3].
</info added on 2025-04-24T17:48:35.005Z>
## 40. Refactor update-task-by-id.js for Unified AI Service & Config [pending]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.
### Details:
<info added on 2025-04-24T17:48:58.133Z>
- When defining the Zod schema for task update validation, consider using Zod's function schemas to validate both the input parameters and the expected output of the update function. This approach helps separate validation logic from business logic and ensures type safety throughout the update process[1][2].
- For the core logic, use Zod's `.implement()` method to wrap the update function, so that all inputs (such as task ID, prompt, and options) are validated before execution, and outputs are type-checked. This reduces runtime errors and enforces contract compliance between layers[1][2].
- In the MCP tool definition, ensure that the Zod schema explicitly validates all required parameters (e.g., `id` as a string, `prompt` as a string, `research` as a boolean or optional flag). This guarantees that only well-formed requests reach the core logic, improving reliability and error reporting[3][5]. A sketch follows at the end of this list.
- When preparing the unified AI service call, pass the validated and sanitized data from the Zod schema directly to `generateObjectService`, ensuring that no unvalidated data is sent to the AI layer.
- For output formatting, leverage Zod's ability to define and enforce the shape of the returned object, ensuring that the response structure (including success/failure status and updated task data) is always consistent and predictable[1][2][3].
- If you need to validate or transform nested objects (such as task metadata or options), use Zod's object and nested schema capabilities to define these structures precisely, catching errors early and simplifying downstream logic[3][5].
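A sketch of the MCP parameter schema mentioned above, using only the fields named in these notes; the real tool may expose more options:
```js
import { z } from 'zod';

// Illustrative MCP parameter schema for update-task
const updateTaskParams = z.object({
  id: z.string().describe('Task ID to update'),
  prompt: z.string().describe('Update instructions for the AI'),
  research: z.boolean().optional().describe('Use the research AI role')
});

// .parse() throws a ZodError with per-field details on malformed input
const args = updateTaskParams.parse({ id: '40', prompt: 'Tighten error handling' });
```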
</info added on 2025-04-24T17:48:58.133Z>
## 41. Refactor update-tasks.js for Unified AI Service & Config [pending]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.
### Details:
<info added on 2025-04-24T17:49:25.126Z>
## Additional Implementation Notes for Refactoring update-tasks.js
- **Zod Schema for Batch Updates**:
Define a Zod schema to validate the structure of the batch update payload. For example, if updating tasks requires an array of task objects with specific fields, use:
```typescript
import { z } from "zod";
const TaskUpdateSchema = z.object({
id: z.number(),
status: z.string(),
// add other fields as needed
});
const BatchUpdateSchema = z.object({
tasks: z.array(TaskUpdateSchema),
from: z.number(),
prompt: z.string().optional(),
useResearch: z.boolean().optional(),
});
```
This ensures all incoming data for batch updates is validated at runtime, catching malformed input early and providing clear error messages[4][5].
- **Function Schema Validation**:
If exposing the update logic as a callable function (e.g., for CLI or API), consider using Zod's function schema to validate both input and output:
```typescript
const updateTasksFunction = z
.function()
.args(BatchUpdateSchema, z.object({ session: z.any() }))
.returns(z.promise(z.object({ success: z.boolean(), updated: z.number() })))
.implement(async (input, { session }) => {
// implementation here
});
```
This pattern enforces correct usage and output shape, improving reliability[1].
- **Error Handling and Reporting**:
Use Zod's `.safeParse()` or `.parse()` methods to validate input. On validation failure, return or throw a formatted error to the caller (CLI, API, etc.), ensuring actionable feedback for users[5].
- **Consistent JSON Output**:
When invoking the core update function from wrappers (CLI, MCP), ensure the output is always serialized as JSON. This is critical for downstream consumers and for automated tooling.
- **Logger Wrapper Example**:
Implement a logger utility that can be toggled for silent mode:
```typescript
function createLogger(silent: boolean) {
return {
log: (...args: any[]) => { if (!silent) console.log(...args); },
error: (...args: any[]) => { if (!silent) console.error(...args); }
};
}
```
Pass this logger to the core logic for consistent, suppressible output.
- **Session Context Usage**:
Ensure all AI service calls and config access are routed through the provided session context, not global config getters. This supports multi-user and multi-session environments.
- **Task Filtering Logic**:
Before invoking the AI service, filter the tasks array to only include those with `id >= from` and `status === "pending"`. This preserves the intended batch update semantics.
- **Preserve File Regeneration**:
After updating tasks, ensure any logic that regenerates or writes task files is retained and invoked as before.
- **CLI and API Parameter Validation**:
Use the same Zod schemas to validate CLI arguments and API payloads, ensuring consistency across all entry points[5].
- **Example: Validating CLI Arguments**
```typescript
const cliArgsSchema = z.object({
from: z.string().regex(/^\d+$/).transform(Number),
research: z.boolean().optional(),
session: z.any(),
});
const parsedArgs = cliArgsSchema.parse(cliArgs);
```
These enhancements ensure robust validation, unified service usage, and maintainable, predictable batch update behavior.
</info added on 2025-04-24T17:49:25.126Z>

tasks/task_062.txt Normal file

@@ -0,0 +1,40 @@
# Task ID: 62
# Title: Add --simple Flag to Update Commands for Direct Text Input
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement a --simple flag for update-task and update-subtask commands that allows users to add timestamped notes without AI processing, directly using the text from the prompt.
# Details:
This task involves modifying the update-task and update-subtask commands to accept a new --simple flag option. When this flag is present, the system should bypass the AI processing pipeline and directly use the text provided by the user as the update content. The implementation should:
1. Update the command parsers for both update-task and update-subtask to recognize the --simple flag
2. Modify the update logic to check for this flag and conditionally skip AI processing
3. When the flag is present, format the user's input text with a timestamp in the same format as AI-processed updates (see the sketch after this list)
4. Ensure the update is properly saved to the task or subtask's history
5. Update the help documentation to include information about this new flag
6. The timestamp format should match the existing format used for AI-generated updates
7. The simple update should be visually distinguishable from AI updates in the display (consider adding a 'manual update' indicator)
8. Maintain all existing functionality when the flag is not used
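A minimal sketch of the conditional described in items 2, 3, and 6, with hypothetical helper and field names:
```js
// Hypothetical --simple branch: skip the AI pipeline and append a timestamped note.
function applySimpleUpdate(task, text) {
  const timestamp = new Date().toISOString(); // matches the ISO format of existing update notes
  const note = `<info added on ${timestamp}>\n${text}\n</info added on ${timestamp}>`;
  task.details = task.details ? `${task.details}\n\n${note}` : note;
  return task;
}

// In the update command handler:
// if (options.simple) { applySimpleUpdate(task, userText); } else { /* existing AI pipeline */ }
```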
# Test Strategy:
Testing should verify both the functionality and user experience of the new feature:
1. Unit tests:
- Test that the command parser correctly recognizes the --simple flag
- Verify that AI processing is bypassed when the flag is present
- Ensure timestamps are correctly formatted and added
2. Integration tests:
- Update a task with --simple flag and verify the exact text is saved
- Update a subtask with --simple flag and verify the exact text is saved
- Compare the output format with AI-processed updates to ensure consistency
3. User experience tests:
- Verify help documentation correctly explains the new flag
- Test with various input lengths to ensure proper formatting
- Ensure the update appears correctly when viewing task history
4. Edge cases:
- Test with empty input text
- Test with very long input text
- Test with special characters and formatting in the input


@@ -3125,8 +3125,8 @@
"id": 36, "id": 36,
"title": "Refactor analyze-task-complexity.js for Unified AI Service & Config", "title": "Refactor analyze-task-complexity.js for Unified AI Service & Config",
"description": "Replace direct AI calls with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep config getters needed for report metadata (`getProjectName`, `getDefaultSubtasks`).", "description": "Replace direct AI calls with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep config getters needed for report metadata (`getProjectName`, `getDefaultSubtasks`).",
"details": "", "details": "\n\n<info added on 2025-04-24T17:45:51.956Z>\n## Additional Implementation Notes for Refactoring\n\n**General Guidance**\n\n- Ensure all AI-related logic in `analyze-task-complexity.js` is abstracted behind the `generateObjectService` interface. The function should only specify *what* to generate (schema, prompt, and parameters), not *how* the AI call is made or which model/config is used.\n- Remove any code that directly fetches AI model parameters or credentials from configuration files. All such details must be handled by the unified service layer.\n\n**1. Core Logic Function (analyze-task-complexity.js)**\n\n- Refactor the function signature to accept a `session` object and a `role` parameter, in addition to the existing arguments.\n- When preparing the service call, construct a payload object containing:\n - The Zod schema for expected output.\n - The prompt or input for the AI.\n - The `role` (e.g., \"researcher\" or \"default\") based on the `useResearch` flag.\n - The `session` context for downstream configuration and authentication.\n- Example service call:\n ```js\n const result = await generateObjectService({\n schema: complexitySchema,\n prompt: buildPrompt(task, options),\n role,\n session,\n });\n ```\n- Remove all references to direct AI client instantiation or configuration fetching.\n\n**2. CLI Command Action Handler (commands.js)**\n\n- Ensure the CLI handler for `analyze-complexity`:\n - Accepts and parses the `--use-research` flag (or equivalent).\n - Passes the `useResearch` flag and the current session context to the core function.\n - Handles errors from the unified service gracefully, providing user-friendly feedback.\n\n**3. MCP Tool Definition (mcp-server/src/tools/analyze.js)**\n\n- Align the Zod schema for CLI options with the parameters expected by the core function, including `useResearch` and any new required fields.\n- Use `getMCPProjectRoot` to resolve the project path before invoking the core function.\n- Add status logging before and after the analysis, e.g., \"Analyzing task complexity...\" and \"Analysis complete.\"\n- Ensure the tool calls the core function with all required parameters, including session and resolved paths.\n\n**4. 
MCP Direct Function Wrapper (mcp-server/src/core/direct-functions/analyze-complexity-direct.js)**\n\n- Remove any direct AI client or config usage.\n- Implement a logger wrapper that standardizes log output for this function (e.g., `logger.info`, `logger.error`).\n- Pass the session context through to the core function to ensure all environment/config access is centralized.\n- Return a standardized response object, e.g.:\n ```js\n return {\n success: true,\n data: analysisResult,\n message: \"Task complexity analysis completed.\",\n };\n ```\n\n**Testing and Validation**\n\n- After refactoring, add or update tests to ensure:\n - The function does not break if AI service configuration changes.\n - The correct role and session are always passed to the unified service.\n - Errors from the unified service are handled and surfaced appropriately.\n\n**Best Practices**\n\n- Keep the core logic function pure and focused on orchestration, not implementation details.\n- Use dependency injection for session/context to facilitate testing and future extensibility.\n- Document the expected structure of the session and role parameters for maintainability.\n\nThese enhancements will ensure the refactored code is modular, maintainable, and fully decoupled from AI implementation details, aligning with modern refactoring best practices[1][3][5].\n</info added on 2025-04-24T17:45:51.956Z>",
"status": "pending", "status": "in-progress",
"dependencies": [], "dependencies": [],
"parentTaskId": 61 "parentTaskId": 61
}, },
@@ -3134,7 +3134,7 @@
"id": 37, "id": 37,
"title": "Refactor expand-task.js for Unified AI Service & Config", "title": "Refactor expand-task.js for Unified AI Service & Config",
"description": "Replace direct AI calls (old `ai-services.js` helpers like `generateSubtasksWithPerplexity`) with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep `getDefaultSubtasks` usage.", "description": "Replace direct AI calls (old `ai-services.js` helpers like `generateSubtasksWithPerplexity`) with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep `getDefaultSubtasks` usage.",
"details": "", "details": "\n\n<info added on 2025-04-24T17:46:51.286Z>\n- In expand-task.js, ensure that all AI parameter configuration (such as model, temperature, max tokens) is passed via the unified generateObjectService interface, not fetched directly from config files or environment variables. This centralizes AI config management and supports future service changes without further refactoring.\n\n- When preparing the service call, construct the payload to include both the prompt and any schema or validation requirements expected by generateObjectService. For example, if subtasks must conform to a Zod schema, pass the schema definition or reference as part of the call.\n\n- For the CLI handler, ensure that the --research flag is mapped to the useResearch boolean and that this is explicitly passed to the core expand-task logic. Also, propagate any session or user context from CLI options to the core function for downstream auditing or personalization.\n\n- In the MCP tool definition, validate that all CLI-exposed parameters are reflected in the Zod schema, including optional ones like prompt overrides or force regeneration. This ensures strict input validation and prevents runtime errors.\n\n- In the direct function wrapper, implement a try/catch block around the core expandTask invocation. On error, log the error with context (task id, session id) and return a standardized error response object with error code and message fields.\n\n- Add unit tests or integration tests to verify that expand-task.js no longer imports or uses any direct AI client or config getter, and that all AI calls are routed through ai-services-unified.js.\n\n- Document the expected shape of the session object and any required fields for downstream service calls, so future maintainers know what context must be provided.\n</info added on 2025-04-24T17:46:51.286Z>",
"status": "pending", "status": "pending",
"dependencies": [], "dependencies": [],
"parentTaskId": 61 "parentTaskId": 61
@@ -3143,7 +3143,7 @@
"id": 38, "id": 38,
"title": "Refactor expand-all-tasks.js for Unified AI Helpers & Config", "title": "Refactor expand-all-tasks.js for Unified AI Helpers & Config",
"description": "Ensure this file correctly calls the refactored `getSubtasksFromAI` helper. Update config usage to only use `getDefaultSubtasks` from `config-manager.js` directly. AI interaction itself is handled by the helper.", "description": "Ensure this file correctly calls the refactored `getSubtasksFromAI` helper. Update config usage to only use `getDefaultSubtasks` from `config-manager.js` directly. AI interaction itself is handled by the helper.",
"details": "", "details": "\n\n<info added on 2025-04-24T17:48:09.354Z>\n## Additional Implementation Notes for Refactoring expand-all-tasks.js\n\n- Replace any direct imports of AI clients (e.g., OpenAI, Anthropic) and configuration getters with a single import of `expandTask` from `expand-task.js`, which now encapsulates all AI and config logic.\n- Ensure that the orchestration logic in `expand-all-tasks.js`:\n - Iterates over all pending tasks, checking for existing subtasks before invoking expansion.\n - For each task, calls `expandTask` and passes both the `useResearch` flag and the current `session` object as received from upstream callers.\n - Does not contain any logic for AI prompt construction, API calls, or config file reading—these are now delegated to the unified helpers.\n- Maintain progress reporting by emitting status updates (e.g., via events or logging) before and after each task expansion, and ensure that errors from `expandTask` are caught and reported with sufficient context (task ID, error message).\n- Example code snippet for calling the refactored helper:\n\n```js\n// Pseudocode for orchestration loop\nfor (const task of pendingTasks) {\n try {\n reportProgress(`Expanding task ${task.id}...`);\n await expandTask({\n task,\n useResearch,\n session,\n });\n reportProgress(`Task ${task.id} expanded.`);\n } catch (err) {\n reportError(`Failed to expand task ${task.id}: ${err.message}`);\n }\n}\n```\n\n- Remove any fallback or legacy code paths that previously handled AI or config logic directly within this file.\n- Ensure that all configuration defaults are accessed exclusively via `getDefaultSubtasks` from `config-manager.js` and only within the unified helper, not in `expand-all-tasks.js`.\n- Add or update JSDoc comments to clarify that this module is now a pure orchestrator and does not perform AI or config operations directly.\n</info added on 2025-04-24T17:48:09.354Z>",
"status": "pending", "status": "pending",
"dependencies": [], "dependencies": [],
"parentTaskId": 61 "parentTaskId": 61
@@ -3152,7 +3152,7 @@
"id": 39, "id": 39,
"title": "Refactor get-subtasks-from-ai.js for Unified AI Service & Config", "title": "Refactor get-subtasks-from-ai.js for Unified AI Service & Config",
"description": "Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead.", "description": "Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead.",
"details": "", "details": "\n\n<info added on 2025-04-24T17:48:35.005Z>\n**Additional Implementation Notes for Refactoring get-subtasks-from-ai.js**\n\n- **Zod Schema Definition**: \n Define a Zod schema that precisely matches the expected subtask object structure. For example, if a subtask should have an id (string), title (string), and status (string), use:\n ```js\n import { z } from 'zod';\n\n const SubtaskSchema = z.object({\n id: z.string(),\n title: z.string(),\n status: z.string(),\n // Add other fields as needed\n });\n\n const SubtasksArraySchema = z.array(SubtaskSchema);\n ```\n This ensures robust runtime validation and clear error reporting if the AI response does not match expectations[5][1][3].\n\n- **Unified Service Invocation**: \n Replace all direct AI client and config usage with:\n ```js\n import { generateObjectService } from './ai-services-unified';\n\n // Example usage:\n const subtasks = await generateObjectService({\n schema: SubtasksArraySchema,\n prompt,\n role,\n session,\n });\n ```\n This centralizes AI invocation and parameter management, ensuring consistency and easier maintenance.\n\n- **Role Determination**: \n Use the `useResearch` flag to select the AI role:\n ```js\n const role = useResearch ? 'researcher' : 'default';\n ```\n\n- **Error Handling**: \n Implement structured error handling:\n ```js\n try {\n // AI service call\n } catch (err) {\n if (err.name === 'ServiceUnavailableError') {\n // Handle AI service unavailability\n } else if (err.name === 'ZodError') {\n // Handle schema validation errors\n // err.errors contains detailed validation issues\n } else if (err.name === 'PromptConstructionError') {\n // Handle prompt construction issues\n } else {\n // Handle unexpected errors\n }\n throw err; // or wrap and rethrow as needed\n }\n ```\n This pattern ensures that consumers can distinguish between different failure modes and respond appropriately.\n\n- **Consumer Contract**: \n Update the function signature to require both `useResearch` and `session` parameters, and document this in JSDoc/type annotations for clarity.\n\n- **Prompt Construction**: \n Move all prompt construction logic outside the core function if possible, or encapsulate it so that errors can be caught and reported as `PromptConstructionError`.\n\n- **No AI Implementation Details**: \n The refactored function should not expose or depend on any AI implementation specifics—only the unified service interface and schema validation.\n\n- **Testing**: \n Add or update tests to cover:\n - Successful subtask generation\n - Schema validation failures (invalid AI output)\n - Service unavailability scenarios\n - Prompt construction errors\n\nThese enhancements ensure the refactored file is robust, maintainable, and aligned with the unified AI service architecture, leveraging Zod for strict runtime validation and clear error boundaries[5][1][3].\n</info added on 2025-04-24T17:48:35.005Z>",
"status": "pending", "status": "pending",
"dependencies": [], "dependencies": [],
"parentTaskId": 61 "parentTaskId": 61
@@ -3161,7 +3161,7 @@
"id": 40, "id": 40,
"title": "Refactor update-task-by-id.js for Unified AI Service & Config", "title": "Refactor update-task-by-id.js for Unified AI Service & Config",
"description": "Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.", "description": "Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.",
"details": "", "details": "\n\n<info added on 2025-04-24T17:48:58.133Z>\n- When defining the Zod schema for task update validation, consider using Zod's function schemas to validate both the input parameters and the expected output of the update function. This approach helps separate validation logic from business logic and ensures type safety throughout the update process[1][2].\n\n- For the core logic, use Zod's `.implement()` method to wrap the update function, so that all inputs (such as task ID, prompt, and options) are validated before execution, and outputs are type-checked. This reduces runtime errors and enforces contract compliance between layers[1][2].\n\n- In the MCP tool definition, ensure that the Zod schema explicitly validates all required parameters (e.g., `id` as a string, `prompt` as a string, `research` as a boolean or optional flag). This guarantees that only well-formed requests reach the core logic, improving reliability and error reporting[3][5].\n\n- When preparing the unified AI service call, pass the validated and sanitized data from the Zod schema directly to `generateObjectService`, ensuring that no unvalidated data is sent to the AI layer.\n\n- For output formatting, leverage Zod's ability to define and enforce the shape of the returned object, ensuring that the response structure (including success/failure status and updated task data) is always consistent and predictable[1][2][3].\n\n- If you need to validate or transform nested objects (such as task metadata or options), use Zod's object and nested schema capabilities to define these structures precisely, catching errors early and simplifying downstream logic[3][5].\n</info added on 2025-04-24T17:48:58.133Z>",
"status": "pending", "status": "pending",
"dependencies": [], "dependencies": [],
"parentTaskId": 61 "parentTaskId": 61
@@ -3170,12 +3170,22 @@
"id": 41, "id": 41,
"title": "Refactor update-tasks.js for Unified AI Service & Config", "title": "Refactor update-tasks.js for Unified AI Service & Config",
"description": "Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.", "description": "Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.",
"details": "", "details": "\n\n<info added on 2025-04-24T17:49:25.126Z>\n## Additional Implementation Notes for Refactoring update-tasks.js\n\n- **Zod Schema for Batch Updates**: \n Define a Zod schema to validate the structure of the batch update payload. For example, if updating tasks requires an array of task objects with specific fields, use:\n ```typescript\n import { z } from \"zod\";\n\n const TaskUpdateSchema = z.object({\n id: z.number(),\n status: z.string(),\n // add other fields as needed\n });\n\n const BatchUpdateSchema = z.object({\n tasks: z.array(TaskUpdateSchema),\n from: z.number(),\n prompt: z.string().optional(),\n useResearch: z.boolean().optional(),\n });\n ```\n This ensures all incoming data for batch updates is validated at runtime, catching malformed input early and providing clear error messages[4][5].\n\n- **Function Schema Validation**: \n If exposing the update logic as a callable function (e.g., for CLI or API), consider using Zod's function schema to validate both input and output:\n ```typescript\n const updateTasksFunction = z\n .function()\n .args(BatchUpdateSchema, z.object({ session: z.any() }))\n .returns(z.promise(z.object({ success: z.boolean(), updated: z.number() })))\n .implement(async (input, { session }) => {\n // implementation here\n });\n ```\n This pattern enforces correct usage and output shape, improving reliability[1].\n\n- **Error Handling and Reporting**: \n Use Zod's `.safeParse()` or `.parse()` methods to validate input. On validation failure, return or throw a formatted error to the caller (CLI, API, etc.), ensuring actionable feedback for users[5].\n\n- **Consistent JSON Output**: \n When invoking the core update function from wrappers (CLI, MCP), ensure the output is always serialized as JSON. This is critical for downstream consumers and for automated tooling.\n\n- **Logger Wrapper Example**: \n Implement a logger utility that can be toggled for silent mode:\n ```typescript\n function createLogger(silent: boolean) {\n return {\n log: (...args: any[]) => { if (!silent) console.log(...args); },\n error: (...args: any[]) => { if (!silent) console.error(...args); }\n };\n }\n ```\n Pass this logger to the core logic for consistent, suppressible output.\n\n- **Session Context Usage**: \n Ensure all AI service calls and config access are routed through the provided session context, not global config getters. This supports multi-user and multi-session environments.\n\n- **Task Filtering Logic**: \n Before invoking the AI service, filter the tasks array to only include those with `id >= from` and `status === \"pending\"`. This preserves the intended batch update semantics.\n\n- **Preserve File Regeneration**: \n After updating tasks, ensure any logic that regenerates or writes task files is retained and invoked as before.\n\n- **CLI and API Parameter Validation**: \n Use the same Zod schemas to validate CLI arguments and API payloads, ensuring consistency across all entry points[5].\n\n- **Example: Validating CLI Arguments**\n ```typescript\n const cliArgsSchema = z.object({\n from: z.string().regex(/^\\d+$/).transform(Number),\n research: z.boolean().optional(),\n session: z.any(),\n });\n\n const parsedArgs = cliArgsSchema.parse(cliArgs);\n ```\n\nThese enhancements ensure robust validation, unified service usage, and maintainable, predictable batch update behavior.\n</info added on 2025-04-24T17:49:25.126Z>",
"status": "pending", "status": "pending",
"dependencies": [], "dependencies": [],
"parentTaskId": 61 "parentTaskId": 61
} }
] ]
},
{
"id": 62,
"title": "Add --simple Flag to Update Commands for Direct Text Input",
"description": "Implement a --simple flag for update-task and update-subtask commands that allows users to add timestamped notes without AI processing, directly using the text from the prompt.",
"details": "This task involves modifying the update-task and update-subtask commands to accept a new --simple flag option. When this flag is present, the system should bypass the AI processing pipeline and directly use the text provided by the user as the update content. The implementation should:\n\n1. Update the command parsers for both update-task and update-subtask to recognize the --simple flag\n2. Modify the update logic to check for this flag and conditionally skip AI processing\n3. When the flag is present, format the user's input text with a timestamp in the same format as AI-processed updates\n4. Ensure the update is properly saved to the task or subtask's history\n5. Update the help documentation to include information about this new flag\n6. The timestamp format should match the existing format used for AI-generated updates\n7. The simple update should be visually distinguishable from AI updates in the display (consider adding a 'manual update' indicator)\n8. Maintain all existing functionality when the flag is not used",
"testStrategy": "Testing should verify both the functionality and user experience of the new feature:\n\n1. Unit tests:\n - Test that the command parser correctly recognizes the --simple flag\n - Verify that AI processing is bypassed when the flag is present\n - Ensure timestamps are correctly formatted and added\n\n2. Integration tests:\n - Update a task with --simple flag and verify the exact text is saved\n - Update a subtask with --simple flag and verify the exact text is saved\n - Compare the output format with AI-processed updates to ensure consistency\n\n3. User experience tests:\n - Verify help documentation correctly explains the new flag\n - Test with various input lengths to ensure proper formatting\n - Ensure the update appears correctly when viewing task history\n\n4. Edge cases:\n - Test with empty input text\n - Test with very long input text\n - Test with special characters and formatting in the input",
"status": "pending",
"dependencies": [],
"priority": "medium"
} }
]
}