Enhance analyze-complexity to support analyzing specific tasks by ID or range:
- Add --id option for comma-separated task IDs
- Add --from/--to options for analyzing tasks within a range
- Implement intelligent merging of new results into existing reports (see the sketch after this list)
- Update CLI, MCP tools, and direct functions for consistent support
- Add changeset documenting the feature
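A minimal sketch of the merge behavior, assuming the report is shaped like { meta, complexityAnalysis: [{ taskId, ... }] } (field names are illustrative, not the exact schema): re-analyzed tasks replace their old entries, and entries for untouched tasks are preserved.

```js
// Hypothetical merge helper; taskId and complexityAnalysis are assumptions
// about the report shape, not the exact production schema.
function mergeComplexityReports(existingReport, newAnalysis) {
	const byId = new Map(
		existingReport.complexityAnalysis.map((entry) => [entry.taskId, entry])
	);
	for (const entry of newAnalysis.complexityAnalysis) {
		byId.set(entry.taskId, entry); // fresh analysis wins for the same task ID
	}
	return {
		...existingReport,
		meta: { ...existingReport.meta, updatedAt: new Date().toISOString() },
		complexityAnalysis: [...byId.values()].sort((a, b) => a.taskId - b.taskId)
	};
}
```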
This commit applies the standard telemetry pattern to the analyze-task-complexity command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/analyze-task-complexity.js):
- The call to generateTextService now includes commandName: 'analyze-complexity' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for parsing the complexity report JSON.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { report: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/analyze-task-complexity.js):
- The call to the core analyzeTaskComplexity function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client (both changes are sketched below).
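A condensed sketch of both changes; signatures are simplified, the import path is illustrative, and buildComplexityPrompt is a hypothetical helper standing in for the real prompt construction:

```js
import { generateTextService } from '../ai-services-unified.js'; // path illustrative

// Core function (simplified): pass telemetry context to the AI service,
// capture telemetryData, and show the usage summary only in CLI mode.
async function analyzeTaskComplexity(options, context = {}) {
	const { outputFormat = 'text' } = options;
	const { mainResult, telemetryData } = await generateTextService({
		prompt: buildComplexityPrompt(options), // hypothetical prompt builder
		commandName: 'analyze-complexity',
		outputType: outputFormat === 'text' ? 'cli' : 'mcp',
		session: context.session
	});
	const report = parseComplexityReport(mainResult); // JSON parsing, sketched later
	if (outputFormat === 'text') displayAiUsageSummary(telemetryData);
	return { report, telemetryData };
}

// Direct function (simplified): forward telemetryData to the MCP client.
async function analyzeTaskComplexityDirect(args, context) {
	const coreResult = await analyzeTaskComplexity(
		{ ...args, outputFormat: 'json' },
		context
	);
	return {
		success: true,
		data: { report: coreResult.report, telemetryData: coreResult.telemetryData }
	};
}
```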
Modified the analyze-task-complexity.js core function, the direct function, and the analyze.js tool to pass projectRoot correctly. Fixed an import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. Together, these changes enable the .env API key fallback for analyze_project_complexity.
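A sketch of the key-resolution order that the new debug logging makes visible; the dotenv-based .env parsing and the exact precedence shown here are assumptions:

```js
import fs from 'fs';
import path from 'path';
import dotenv from 'dotenv';

// Assumed resolution order: MCP session env, then the project's .env file,
// then the process environment. The debug line is the newly added logging.
function _resolveApiKey(keyName, session, projectRoot) {
	const fromSession = session?.env?.[keyName];
	if (fromSession) return fromSession;
	const envPath = path.join(projectRoot, '.env');
	if (fs.existsSync(envPath)) {
		const parsed = dotenv.parse(fs.readFileSync(envPath));
		if (parsed[keyName]) {
			console.debug(`_resolveApiKey: found ${keyName} in ${envPath}`);
			return parsed[keyName];
		}
	}
	return process.env[keyName];
}
```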
Refactored the analyze-task-complexity feature and related components (CLI command, MCP tool, direct function) to integrate with the unified AI service layer (ai-services-unified.js).
Initially, the command was implemented to leverage structured output generation. However, this approach encountered persistent errors:
- Perplexity provider returned internal server errors.
- Anthropic provider failed with schema type and model errors.
Due to the unreliability of structured output generation for this specific use case, the core AI interaction was reverted to use generateTextService. Basic manual JSON parsing and cleanup logic for the text response were reintroduced (sketched below).
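A sketch of the reintroduced cleanup: strip any markdown fences the model may emit, then parse the JSON payload between the first opening and last closing bracket (error handling simplified):

```js
function parseComplexityReport(text) {
	// Strip markdown code fences the model may have wrapped around the JSON.
	const fence = new RegExp('`{3}(?:json)?', 'g');
	const cleaned = text.replace(fence, '').trim();
	// Extract everything between the first opening and last closing bracket.
	const start = cleaned.search(/[[{]/);
	const end = Math.max(cleaned.lastIndexOf(']'), cleaned.lastIndexOf('}'));
	if (start === -1 || end <= start) {
		throw new Error('No JSON payload found in AI response');
	}
	return JSON.parse(cleaned.slice(start, end + 1));
}
```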
Key changes include:
- Removed direct AI client initialization (Anthropic, Perplexity).
- Removed direct fetching of AI model configuration parameters.
- Removed manual AI retry/fallback/streaming logic.
- Replaced direct AI calls with a call to generateTextService.
- Updated the direct-function wrapper to pass session context correctly.
- Updated MCP tool for correct path resolution and argument passing.
- Updated CLI command for correct path resolution.
- Preserved core functionality: task loading/filtering, report generation, CLI summary display.
Both the CLI command (analyze-complexity) and the MCP tool (analyze_project_complexity) have been verified to work correctly with this revised approach.
This commit resolves several issues with the task expansion system to
ensure higher quality subtasks and better synchronization:
1. Task File Generation
- Add automatic regeneration of task files after expanding tasks
- Ensure individual task text files stay in sync with tasks.json
- Avoid manual regeneration steps after task expansion (see the sketch below)
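In code terms, the expansion path now ends with a regeneration step, roughly as below; writeJSON, generateTaskFiles, and log are the project's existing helpers, and the call site shown is simplified:

```js
// After writing the expanded tasks back to tasks.json, immediately regenerate
// the individual task text files so they cannot drift out of sync.
writeJSON(tasksPath, data);
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
log('info', 'Regenerated task files after expansion.');
```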
2. Perplexity API Integration
- Fix 'researchPrompt is not defined' error in Perplexity integration
- Add specialized research-oriented prompt template
- Improve system message for better context and instruction
- Improve fallback to Claude when Perplexity is unavailable (sketched below)
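A sketch of the fixed flow; callPerplexity and callClaude are hypothetical wrappers standing in for the real client calls, and the prompt text is abbreviated:

```js
// researchPrompt is now defined before use, with a research-oriented template.
const researchPrompt = `Research current best practices for implementing the
task below, then break it into ${numSubtasks} well-scoped subtasks.\n\n${taskText}`;

let responseText;
try {
	responseText = await callPerplexity(researchPrompt, systemMessage);
} catch (err) {
	// Hypothetical fallback: degrade gracefully to Claude when Perplexity fails.
	log('warn', `Perplexity unavailable (${err.message}); falling back to Claude.`);
	responseText = await callClaude(researchPrompt, systemMessage);
}
```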
3. Subtask Parsing Improvements
- Enhance regex pattern to handle more formatting variations
- Implement multiple parsing strategies for different response formats (see the sketch after this section):
* Improved section detection with flexible headings
* Added support for numbered and bulleted lists
* Implemented heuristic-based title and description extraction
- Create more meaningful dummy subtasks with relevant titles and descriptions
instead of generic placeholders
- Ensure minimal descriptions are always provided
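An illustrative version of the layered parsing; the patterns and the fallback heuristic are simplified from the description above:

```js
// Try numbered items first, then bullets; fall back to a line heuristic so a
// minimal title/description is always produced instead of a generic placeholder.
function parseSubtasks(text, expectedCount) {
	const strategies = [
		/^\s*\d+[.)]\s+(.+)$/gm, // "1. Title" or "2) Title"
		/^\s*[-*]\s+(.+)$/gm     // "- Title" or "* Title"
	];
	for (const pattern of strategies) {
		const titles = [...text.matchAll(pattern)].map((m) => m[1].trim());
		if (titles.length >= expectedCount) {
			return titles.slice(0, expectedCount).map((title, i) => ({
				id: i + 1,
				title,
				description: title // minimal description, never empty
			}));
		}
	}
	// Heuristic fallback: take the first non-empty lines as subtask titles.
	return text
		.split('\n')
		.map((line) => line.trim())
		.filter(Boolean)
		.slice(0, expectedCount)
		.map((title, i) => ({ id: i + 1, title, description: title }));
}
```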
4. Quality Verification and Retry System
- Add post-expansion verification to identify low-quality subtask sets
- Detect tasks with too many generic/placeholder subtasks
- Implement interactive retry mechanism with enhanced prompts
- Use adjusted settings for retries (research mode, subtask count)
- Clear existing subtasks before retry to prevent duplicates
- Provide detailed reporting of the verification and retry process (see the sketch below)
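A sketch of the verification-and-retry step; the generic-title test, the 50% threshold, and the expandTask signature are illustrative:

```js
// Flag subtask sets where most titles look like placeholders, then retry once
// with research mode enabled, clearing old subtasks to avoid duplicates.
const isGeneric = (st) => /^(subtask|step|task)\s*\d*$/i.test(st.title.trim());

async function verifyAndMaybeRetry(task, options) {
	const genericCount = task.subtasks.filter(isGeneric).length;
	if (genericCount <= task.subtasks.length / 2) return task;
	log('warn', `Task ${task.id}: ${genericCount}/${task.subtasks.length} generic subtasks; retrying.`);
	task.subtasks = []; // prevent duplicates on retry
	return expandTask(task, { ...options, useResearch: true }); // adjusted settings
}
```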
These changes significantly improve the quality of generated subtasks
and reduce the need for manual intervention when subtask generation
produces suboptimal results.