# Task ID: 46
# Title: Implement ICE Analysis Command for Task Prioritization
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a new command that analyzes and ranks tasks using the Impact, Confidence, and Ease (ICE) scoring methodology, generating a comprehensive prioritization report.
# Details:
Develop a new command called `analyze-ice` that evaluates non-completed tasks (excluding those marked as done, cancelled, or deferred) and ranks them according to the ICE methodology:

1. Core functionality:
- Calculate an Impact score (how much value the task will deliver)
- Calculate a Confidence score (how certain we are about that impact)
- Calculate an Ease score (how easy the task is to implement)
- Compute a total ICE score from the three components (classic ICE multiplies Impact x Confidence x Ease; a sum also works, but pick one formula and document it)
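
A minimal sketch of this scoring shape and the total-score calculation, in TypeScript. The type and field names are illustrative only, not the project's actual data model:

```typescript
// Illustrative scoring shape for ICE analysis (names are placeholders).
interface IceScore {
  impact: number;     // 1-10: how much value the task delivers
  confidence: number; // 1-10: how certain we are about that impact
  ease: number;       // 1-10: how easy the task is to implement
}

interface ScoredTask {
  id: number;
  title: string;
  ice: IceScore;
}

// Classic ICE multiplies the three components; switch to a sum if a flatter
// distribution is preferred, but use one formula consistently.
function totalIceScore({ impact, confidence, ease }: IceScore): number {
  return impact * confidence * ease;
}

// Rank tasks by total ICE score, highest first.
function rankByIce(tasks: ScoredTask[]): ScoredTask[] {
  return [...tasks].sort((a, b) => totalIceScore(b.ice) - totalIceScore(a.ice));
}
```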

2. Implementation details:
- Reuse the filtering logic from `analyze-complexity` to select relevant tasks
- Use the LLM to generate a score for each dimension on a scale of 1-10
- For each task, prompt the LLM to evaluate and justify each score based on the task's description and details
- Create an `ice_report.md` file similar to the complexity report (a writing sketch appears at the end of this section)
- Sort tasks by total ICE score in descending order
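
A sketch of the selection and prompting flow described above, assuming a minimal `Task` shape; the real command should reuse the project's existing task-loading and AI-call helpers, mirroring `analyze-complexity`:

```typescript
// Minimal task shape assumed for illustration.
interface Task {
  id: number;
  title: string;
  description: string;
  details?: string;
  status: string;
}

// Same filtering rule as analyze-complexity: skip completed/abandoned tasks.
const EXCLUDED_STATUSES = new Set(['done', 'cancelled', 'deferred']);

function selectTasksForIce(tasks: Task[]): Task[] {
  return tasks.filter((t) => !EXCLUDED_STATUSES.has(t.status));
}

// One prompt per task, asking for 1-10 scores plus a short justification for
// each dimension, returned as JSON so the response is easy to parse.
function buildIcePrompt(task: Task): string {
  return [
    'Evaluate the following task for ICE prioritization.',
    `Title: ${task.title}`,
    `Description: ${task.description}`,
    `Details: ${task.details ?? 'n/a'}`,
    '',
    'Score impact, confidence, and ease from 1 to 10 and justify each score.',
    'Respond with JSON: {"impact": n, "confidence": n, "ease": n,',
    '"impactReason": "...", "confidenceReason": "...", "easeReason": "..."}',
  ].join('\n');
}
```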

3. CLI rendering:
- Implement a sister command `show-ice-report` that displays the report in the terminal
- Format the output with colorized scores and rankings
- Include options to sort by an individual component (impact, confidence, or ease)
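
A rendering sketch for `show-ice-report`, using plain ANSI escape codes to stay dependency-free; the real command can use whatever color/table libraries the CLI already depends on. The row shape and sort handling are illustrative:

```typescript
type SortKey = 'total' | 'impact' | 'confidence' | 'ease';

interface ReportRow {
  id: number;
  title: string;
  impact: number;
  confidence: number;
  ease: number;
  total: number;
}

// Color a 1-10 score: high = green, middling = yellow, low = red.
function colorScore(n: number): string {
  const code = n >= 7 ? 32 : n >= 4 ? 33 : 31;
  return `\x1b[${code}m${n}\x1b[0m`;
}

// Print rows ranked by the requested component (default: total ICE score).
function renderIceReport(rows: ReportRow[], sortBy: SortKey = 'total'): void {
  const sorted = [...rows].sort((a, b) => b[sortBy] - a[sortBy]);
  sorted.forEach((row, i) => {
    console.log(
      `${i + 1}. [${row.id}] ${row.title} ` +
        `I:${colorScore(row.impact)} C:${colorScore(row.confidence)} ` +
        `E:${colorScore(row.ease)} total:${row.total}`
    );
  });
}
```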

4. Integration:
- If a complexity report exists, reference it in the ICE report for additional context
- Consider adding a combined view that shows both complexity and ICE scores
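
A sketch of cross-referencing an existing complexity report for the combined view. The complexity-report shape below is assumed for illustration; the real code should read whatever structure `analyze-complexity` actually writes:

```typescript
// Assumed shape of one complexity-report entry (field names are guesses).
interface ComplexityEntry {
  taskId: number;
  complexityScore: number;
}

interface CombinedRow {
  taskId: number;
  iceTotal: number;
  complexity?: number; // undefined when no complexity report exists
}

// Merge ICE totals with complexity scores, tolerating a missing report.
function combineWithComplexity(
  iceRows: { taskId: number; iceTotal: number }[],
  complexity: ComplexityEntry[] | null
): CombinedRow[] {
  const byId = new Map(
    (complexity ?? []).map((c) => [c.taskId, c.complexityScore] as [number, number])
  );
  return iceRows.map((r) => ({ ...r, complexity: byId.get(r.taskId) }));
}
```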

The command should follow the same design patterns as `analyze-complexity` for consistency and code reuse.
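
A sketch of writing `ice_report.md`; the layout below is illustrative and should be adjusted to mirror the existing complexity report:

```typescript
import { writeFileSync } from 'node:fs';

// One report entry per analyzed task (illustrative shape).
interface ReportEntry {
  id: number;
  title: string;
  impact: number;
  confidence: number;
  ease: number;
  total: number;
  rationale: string; // LLM justification for the three scores
}

function writeIceReport(entries: ReportEntry[], path = 'ice_report.md'): void {
  const sorted = [...entries].sort((a, b) => b.total - a.total);
  const lines = [
    '# ICE Analysis Report',
    '',
    '| Rank | Task | Impact | Confidence | Ease | Total |',
    '| ---- | ---- | ------ | ---------- | ---- | ----- |',
    ...sorted.map(
      (e, i) =>
        `| ${i + 1} | ${e.id}: ${e.title} | ${e.impact} | ${e.confidence} | ${e.ease} | ${e.total} |`
    ),
    '',
    ...sorted.map((e) => `## ${e.id}: ${e.title}\n\n${e.rationale}\n`),
  ];
  writeFileSync(path, lines.join('\n'));
}
```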

# Test Strategy:
1. Unit tests:
- Test the ICE scoring algorithm with various mock task inputs
- Verify correct filtering of tasks based on status
- Test the sorting functionality with different ranking criteria
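
A unit-test sketch using Node's built-in test runner; adapt it to the project's actual test framework (e.g., Jest). The imported helpers and module path are the illustrative ones sketched in the Details section, not existing project code:

```typescript
import test from 'node:test';
import assert from 'node:assert/strict';
// Hypothetical module exporting the helpers sketched above.
import { totalIceScore, selectTasksForIce, rankByIce } from './ice-analysis';

test('total ICE score multiplies the three components', () => {
  assert.equal(totalIceScore({ impact: 8, confidence: 5, ease: 2 }), 80);
});

test('done, cancelled, and deferred tasks are excluded', () => {
  const tasks = [
    { id: 1, title: 'a', description: '', status: 'pending' },
    { id: 2, title: 'b', description: '', status: 'done' },
    { id: 3, title: 'c', description: '', status: 'deferred' },
  ];
  assert.deepEqual(selectTasksForIce(tasks).map((t) => t.id), [1]);
});

test('ranking sorts by total ICE score, highest first', () => {
  const ranked = rankByIce([
    { id: 1, title: 'low', ice: { impact: 2, confidence: 2, ease: 2 } },
    { id: 2, title: 'high', ice: { impact: 9, confidence: 8, ease: 7 } },
  ]);
  assert.equal(ranked[0].id, 2);
});
```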

2. Integration tests:
- Create a test project with diverse tasks and verify the generated ICE report
- Test the integration with an existing complexity report
- Verify that changes to task statuses correctly update the ICE analysis

3. CLI tests:
- Verify that the `analyze-ice` command generates the expected report file
- Verify that the `show-ice-report` command renders correctly in the terminal
- Test various flag combinations and sorting options
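
A CLI-test sketch that runs the command against a fixture project and checks that the report file appears. The binary name, fixture path, and invocation are placeholders; use the project's real CLI entry point and end-to-end test conventions:

```typescript
import test from 'node:test';
import assert from 'node:assert/strict';
import { execSync } from 'node:child_process';
import { existsSync } from 'node:fs';
import { join } from 'node:path';

test('analyze-ice writes ice_report.md', () => {
  const projectDir = join('tests', 'fixtures', 'sample-project'); // hypothetical fixture
  execSync('task-master analyze-ice', { cwd: projectDir, stdio: 'inherit' }); // assumed invocation
  assert.ok(existsSync(join(projectDir, 'ice_report.md')));
});
```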

4. Validation criteria:
- ICE scores should be plausible and reasonably consistent across repeated runs on the same tasks
- The report should clearly explain the rationale behind each score
- The ranking should prioritize high-impact, high-confidence, easy-to-implement tasks
- Performance should remain acceptable for projects with a large number of tasks
- The command should handle edge cases gracefully (empty projects, tasks with missing data)