claude-task-master/tasks/task_049.txt
Commit 21c3cb8cda by Eyal Toledano (2025-05-08 18:22:00 -04:00)
feat(telemetry): Integrate telemetry for expand-all, aggregate results
This commit implements AI usage telemetry for the `expand-all-tasks` command/tool and refactors its CLI output for clarity and consistency.

Key Changes:

1.  **Telemetry Integration for `expand-all-tasks` (Subtask 77.8):**
    -   The `expandAllTasks` core logic (`scripts/modules/task-manager/expand-all-tasks.js`) now calls the `expandTask` function for each eligible task and collects the individual `telemetryData` returned.
    -   A new helper function, `_aggregateTelemetry` (in `utils.js`), sums the token counts and costs from all individual expansions into a single `telemetryData` object for the entire `expand-all` operation.
    -   The `expandAllTasksDirect` wrapper (`mcp-server/src/core/direct-functions/expand-all-tasks.js`) now receives this aggregated `telemetryData` and passes it in the MCP response.
    -   For CLI usage, `displayAiUsageSummary` is called once with the aggregated telemetry.

2.  **Improved CLI Output for `expand-all`:**
    -   The `expandAllTasks` core function now displays a final "Expansion Summary" box (showing Attempted, Expanded, Skipped, and Failed counts) directly after the aggregated telemetry summary.
    -   This consolidates all summary output within the core function for better flow and removes redundant logging from the command action in `scripts/modules/commands.js`.
    -   The summary box border is green on success and red if any expansions failed.

3.  **Code Refinements:**
    -   Ensured `chalk` and `boxen` are imported in `expand-all-tasks.js` for the new summary box.
    -   Minor adjustments to logging messages for clarity.
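
The aggregation step described in item 1 might look roughly like the following. This is a minimal sketch only; the field names (`inputTokens`, `outputTokens`, `totalCost`, etc.) are assumptions and the real `_aggregateTelemetry` in `utils.js` may use a different shape and handle more edge cases.

```javascript
// Sketch: sum per-expansion telemetry records into one aggregate object.
// Field names are assumptions, not the canonical utils.js shape.
function _aggregateTelemetry(telemetryItems) {
  const valid = (telemetryItems || []).filter(Boolean);
  if (valid.length === 0) return null;

  return valid.reduce(
    (acc, item) => ({
      ...acc,
      inputTokens: acc.inputTokens + (item.inputTokens || 0),
      outputTokens: acc.outputTokens + (item.outputTokens || 0),
      totalTokens: acc.totalTokens + (item.totalTokens || 0),
      totalCost: acc.totalCost + (item.totalCost || 0)
    }),
    {
      commandName: 'expand-all-tasks',
      inputTokens: 0,
      outputTokens: 0,
      totalTokens: 0,
      totalCost: 0
    }
  );
}
```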

# Task ID: 49
# Title: Implement Code Quality Analysis Command
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a command that analyzes the codebase to identify patterns and verify functions against current best practices, generating improvement recommendations and potential refactoring tasks.
# Details:
Develop a new command called `analyze-code-quality` that performs the following functions:
1. **Pattern Recognition**:
- Scan the codebase to identify recurring patterns in code structure, function design, and architecture
- Categorize patterns by frequency and impact on maintainability
- Generate a report of common patterns with examples from the codebase
2. **Best Practice Verification**:
- For each function in specified files, extract its purpose, parameters, and implementation details
- Create a verification checklist for each function that includes:
  - Function naming conventions
  - Parameter handling
  - Error handling
  - Return value consistency
  - Documentation quality
  - Complexity metrics
- Use an API integration with Perplexity or similar AI service to evaluate each function against current best practices
3. **Improvement Recommendations**:
- Generate specific refactoring suggestions for functions that don't align with best practices
- Include code examples of the recommended improvements
- Estimate the effort required for each refactoring suggestion
4. **Task Integration**:
- Create a mechanism to convert high-value improvement recommendations into Taskmaster tasks
- Allow users to select which recommendations to convert to tasks
- Generate properly formatted task descriptions that include the current implementation, recommended changes, and justification
The command should accept parameters for targeting specific directories or files, setting the depth of analysis, and filtering by improvement impact level.
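As a rough sketch of how these parameters might be exposed, assuming a Commander-based CLI like the existing Taskmaster commands (option names and defaults below are illustrative, not final):

```javascript
import { Command } from 'commander';

// Illustrative only: option names, defaults, and allowed values are assumptions.
const program = new Command();

program
  .command('analyze-code-quality')
  .description('Analyze the codebase for patterns and best-practice compliance')
  .option('-d, --dir <path>', 'directory or file to analyze', '.')
  .option('--depth <level>', 'analysis depth: shallow | normal | deep', 'normal')
  .option('--impact <level>', 'only report findings at or above this impact: low | medium | high', 'low')
  .action(async (options) => {
    // The action would hand the parsed options to the core analysis module.
    console.log('Analyzing', options.dir, 'at depth', options.depth);
  });

program.parse(process.argv);
```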
# Test Strategy:
Testing should verify all aspects of the code analysis command:
1. **Functionality Testing**:
- Create a test codebase with known patterns and anti-patterns
- Verify the command correctly identifies all patterns in the test codebase
- Check that function verification correctly flags issues in deliberately non-compliant functions
- Confirm recommendations are relevant and implementable
2. **Integration Testing**:
- Test the AI service integration with mock responses to ensure proper handling of API calls (see the sketch after this list)
- Verify the task creation workflow correctly generates well-formed tasks
- Test integration with existing Taskmaster commands and workflows
3. **Performance Testing**:
- Measure execution time on codebases of various sizes
- Ensure memory usage remains reasonable even on large codebases
- Test with rate limiting on API calls to ensure graceful handling
4. **User Experience Testing**:
- Have developers use the command on real projects and provide feedback
- Verify the output is actionable and clear
- Test the command with different parameter combinations
5. **Validation Criteria**:
- Command successfully analyzes at least 95% of functions in the codebase
- Generated recommendations are specific and actionable
- Created tasks follow the project's task format standards
- Analysis results are consistent across multiple runs on the same codebase
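A minimal sketch of the mock-based integration test mentioned in item 2, assuming a Jest test setup. The `verifyFunction` helper and the response shape are hypothetical stand-ins, not the real implementation:

```javascript
// Hypothetical verifier that takes an injected AI client, so tests can
// substitute a mock instead of calling a real API.
async function verifyFunction(source, aiClient) {
  const { issues } = await aiClient.evaluate(source);
  return { compliant: issues.length === 0, issues };
}

describe('analyze-code-quality AI integration', () => {
  it('handles a mocked AI response without hitting the real API', async () => {
    const calls = [];
    const mockClient = {
      evaluate: async (source) => {
        calls.push(source);
        return { issues: ['missing error handling'] };
      }
    };

    const result = await verifyFunction('function f(p) { return JSON.parse(p); }', mockClient);

    expect(calls).toHaveLength(1);
    expect(result.compliant).toBe(false);
    expect(result.issues).toContain('missing error handling');
  });
});
```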
# Subtasks:
## 1. Design pattern recognition algorithm [pending]
### Dependencies: None
### Description: Create an algorithm to identify common code patterns and anti-patterns in the codebase
### Details:
Develop a system that can scan code files and identify common design patterns (Factory, Singleton, etc.) and anti-patterns (God objects, excessive coupling, etc.). Include detection for language-specific patterns and create a classification system for identified patterns.
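One possible shape for the rule-driven part of this detection is sketched below. The regex heuristics are deliberately crude stand-ins; a real implementation would work on an AST:

```javascript
import { readFileSync } from 'node:fs';

// Crude heuristic detectors keyed by pattern name; placeholders for
// proper AST-based analysis.
const DETECTORS = {
  singleton: /getInstance\s*\(|static\s+instance/,
  godObject: /class\s+\w+[\s\S]{4000,}?^\}/m, // very large class body
  callbackNesting: /\)\s*=>\s*\{[\s\S]*?\)\s*=>\s*\{[\s\S]*?\)\s*=>\s*\{/ // 3+ nested callbacks
};

export function detectPatterns(filePaths) {
  const findings = [];
  for (const filePath of filePaths) {
    const source = readFileSync(filePath, 'utf8');
    for (const [pattern, regex] of Object.entries(DETECTORS)) {
      if (regex.test(source)) {
        findings.push({ filePath, pattern });
      }
    }
  }
  return findings;
}
```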
## 2. Implement best practice verification [pending]
### Dependencies: 49.1
### Description: Build verification checks against established coding standards and best practices
### Details:
Create a framework to compare code against established best practices for the specific language/framework. Include checks for naming conventions, function length, complexity metrics, comment coverage, and other industry-standard quality indicators.
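A sketch of what the rule-based check layer could look like; the check names and thresholds are placeholders, not agreed standards:

```javascript
// Each check returns null when the function passes, or a short issue description.
// Thresholds are illustrative placeholders.
const CHECKS = [
  {
    name: 'naming-convention',
    run: (fn) => (/^[a-z][a-zA-Z0-9]*$/.test(fn.name) ? null : `name "${fn.name}" is not camelCase`)
  },
  {
    name: 'max-length',
    run: (fn) => (fn.lineCount <= 60 ? null : `function is ${fn.lineCount} lines long (limit 60)`)
  },
  {
    name: 'has-doc-comment',
    run: (fn) => (fn.hasJsDoc ? null : 'missing JSDoc comment')
  }
];

// fn is a lightweight descriptor produced by an earlier parsing step,
// e.g. { name, lineCount, hasJsDoc }.
export function runChecks(fn) {
  return CHECKS.map((check) => ({ check: check.name, issue: check.run(fn) })).filter(
    (result) => result.issue !== null
  );
}
```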
## 3. Develop AI integration for code analysis [pending]
### Dependencies: 49.1, 49.2
### Description: Integrate AI capabilities to enhance code analysis and provide intelligent recommendations
### Details:
Connect to AI services (like OpenAI) to analyze code beyond rule-based checks. Configure the AI to understand context, project-specific patterns, and provide nuanced analysis that rule-based systems might miss.
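A sketch of the provider call, assuming an OpenAI-compatible chat completions endpoint. The base URL, environment variables, model name, and response handling would all need to match whichever provider (Perplexity, OpenAI, etc.) is actually configured:

```javascript
// Assumes an OpenAI-compatible /chat/completions endpoint; env vars are placeholders.
export async function evaluateFunctionWithAi(functionSource) {
  const response = await fetch(`${process.env.AI_BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.AI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: process.env.AI_MODEL,
      messages: [
        {
          role: 'system',
          content:
            'You review JavaScript functions. Respond with JSON: ' +
            '{ "issues": string[], "suggestion": string }.'
        },
        { role: 'user', content: functionSource }
      ]
    })
  });

  if (!response.ok) {
    throw new Error(`AI request failed: ${response.status}`);
  }
  const data = await response.json();
  // Assumes an OpenAI-style choices array and that the model returned valid JSON.
  return JSON.parse(data.choices[0].message.content);
}
```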
## 4. Create recommendation generation system [pending]
### Dependencies: 49.2, 49.3
### Description: Build a system to generate actionable improvement recommendations based on analysis results
### Details:
Develop algorithms to transform analysis results into specific, actionable recommendations. Include priority levels, effort estimates, and potential impact assessments for each recommendation.
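One way the transformation could be sketched; the weights and effort buckets are invented for illustration:

```javascript
// Turns raw findings into recommendation objects with an impact score and a
// rough effort bucket. Weights and thresholds are illustrative only.
const PATTERN_WEIGHTS = { godObject: 3, callbackNesting: 2, singleton: 1 };

export function buildRecommendations(findings) {
  return findings
    .map((finding) => {
      const impact = PATTERN_WEIGHTS[finding.pattern] || 1;
      return {
        filePath: finding.filePath,
        pattern: finding.pattern,
        impact, // higher = more valuable to fix
        effort: impact >= 3 ? 'large' : impact === 2 ? 'medium' : 'small',
        summary: `Refactor ${finding.filePath} to address ${finding.pattern}`
      };
    })
    .sort((a, b) => b.impact - a.impact);
}
```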
## 5. Implement task creation functionality [pending]
### Dependencies: 49.4
### Description: Add capability to automatically create tasks from code quality recommendations
### Details:
Build functionality to convert recommendations into tasks in the project management system. Include appropriate metadata, assignee suggestions based on code ownership, and integration with existing workflow systems.
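A sketch of the conversion step; the task object fields approximate the general shape of a Taskmaster task but are not the canonical schema:

```javascript
// Converts selected recommendations into task objects ready to append to
// tasks.json. Field names are approximations, not the exact Taskmaster schema.
export function recommendationsToTasks(recommendations, nextTaskId) {
  return recommendations.map((rec, index) => ({
    id: nextTaskId + index,
    title: `Refactor: ${rec.pattern} in ${rec.filePath}`,
    status: 'pending',
    priority: rec.impact >= 3 ? 'high' : 'medium',
    dependencies: [],
    description: rec.summary,
    details:
      `Current implementation exhibits the "${rec.pattern}" pattern.\n` +
      `Recommended change: ${rec.summary}.\n` +
      `Estimated effort: ${rec.effort}.`
  }));
}
```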
## 6. Create comprehensive reporting interface [pending]
### Dependencies: 49.4, 49.5
### Description: Develop a user interface to display analysis results and recommendations
### Details:
Build a dashboard showing code quality metrics, identified patterns, recommendations, and created tasks. Include filtering options, trend analysis over time, and the ability to drill down into specific issues with code snippets and explanations.
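For a first CLI-based pass at this reporting, a summary box in the style of the project's other output could be sketched as follows (chalk and boxen are already used elsewhere in the codebase; the fields shown are illustrative):

```javascript
import chalk from 'chalk';
import boxen from 'boxen';

// Renders a simple terminal summary; a richer dashboard could build on this.
export function renderQualityReport({ analyzedCount, findings, recommendations }) {
  const lines = [
    `${chalk.bold('Functions analyzed:')} ${analyzedCount}`,
    `${chalk.bold('Patterns found:')} ${findings.length}`,
    `${chalk.bold('Recommendations:')} ${recommendations.length}`
  ];
  const borderColor = findings.length > 0 ? 'yellow' : 'green';
  console.log(
    boxen(lines.join('\n'), {
      title: 'Code Quality Summary',
      padding: 1,
      borderColor
    })
  );
}
```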