fix: enhance task expansion with multiple improvements
This commit resolves several issues with the task expansion system to ensure higher quality subtasks and better synchronization:

1. Task File Generation
   - Add automatic regeneration of task files after expanding tasks
   - Ensure individual task text files stay in sync with tasks.json
   - Avoid manual regeneration steps after task expansion

2. Perplexity API Integration
   - Fix 'researchPrompt is not defined' error in Perplexity integration
   - Add specialized research-oriented prompt template
   - Improve system message for better context and instruction
   - Better fallback to Claude when Perplexity is unavailable

3. Subtask Parsing Improvements
   - Enhance regex pattern to handle more formatting variations
   - Implement multiple parsing strategies for different response formats:
     * Improved section detection with flexible headings
     * Added support for numbered and bulleted lists
     * Implemented heuristic-based title and description extraction
   - Create more meaningful dummy subtasks with relevant titles and descriptions instead of generic placeholders
   - Ensure minimal descriptions are always provided

4. Quality Verification and Retry System
   - Add post-expansion verification to identify low-quality subtask sets
   - Detect tasks with too many generic/placeholder subtasks (see the sketch below)
   - Implement interactive retry mechanism with enhanced prompts
   - Use adjusted settings for retries (research mode, subtask count)
   - Clear existing subtasks before retry to prevent duplicates
   - Provide detailed reporting of the verification and retry process

These changes significantly improve the quality of generated subtasks and reduce the need for manual intervention when subtask generation produces suboptimal results.
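To make the verification idea in item 4 concrete, a detection heuristic of the kind described might look like the following minimal sketch; the function name, title patterns, and 50% threshold are illustrative assumptions, not the actual implementation:

```javascript
// Illustrative sketch of a post-expansion quality check: flag a subtask set
// when too many entries look like generic placeholders. Names and the 50%
// threshold are hypothetical, not taken from the actual implementation.
const GENERIC_TITLE = /^(subtask|task|step|todo|placeholder)\b/i;

function subtasksLookLowQuality(subtasks) {
  if (!Array.isArray(subtasks) || subtasks.length === 0) return true;
  const generic = subtasks.filter(
    (st) => GENERIC_TITLE.test(st.title || '') || !(st.description || '').trim()
  );
  return generic.length / subtasks.length > 0.5;
}
```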
@@ -12,6 +12,7 @@ In an AI-driven development process—particularly with tools like [Cursor](http
4. **Generate** individual task files (e.g., `task_001.txt`) for easy reference or to feed into an AI coding workflow.
5. **Set task status**—mark tasks as `done`, `pending`, or `deferred` based on progress.
6. **Expand** tasks with subtasks—break down complex tasks into smaller, more manageable subtasks.
7. **Research-backed subtask generation**—use Perplexity AI to generate more informed and contextually relevant subtasks.

## Configuration

@@ -24,6 +25,8 @@ The script can be configured through environment variables in a `.env` file at t
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
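For reference, a `.env` using these settings might look like the following; every value shown is either the documented default or an obvious placeholder:

```bash
# Example .env (values are the documented defaults or placeholders)
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7
PERPLEXITY_API_KEY=pplx-your-key-here
PERPLEXITY_MODEL=sonar-medium-online
DEBUG=false
LOG_LEVEL=info
DEFAULT_SUBTASKS=3
```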
@@ -56,6 +59,44 @@ The script can be configured through environment variables in a `.env` file at t
Run `node scripts/dev.js` without arguments to see detailed usage information.

## Listing Tasks

The `list` command allows you to view all tasks and their status:

```bash
# List all tasks
node scripts/dev.js list

# List tasks with a specific status
node scripts/dev.js list --status=pending

# List tasks and include their subtasks
node scripts/dev.js list --with-subtasks

# List tasks with a specific status and include their subtasks
node scripts/dev.js list --status=pending --with-subtasks
```

## Updating Tasks

The `update` command allows you to update tasks based on new information or implementation changes:

```bash
# Update tasks starting from ID 4 with a new prompt
node scripts/dev.js update --from=4 --prompt="Refactor tasks from ID 4 onward to use Express instead of Fastify"

# Update all tasks (default from=1)
node scripts/dev.js update --prompt="Add authentication to all relevant tasks"

# Specify a different tasks file
node scripts/dev.js update --file=custom-tasks.json --from=5 --prompt="Change database from MongoDB to PostgreSQL"
```

Notes:
- The `--prompt` parameter is required and should explain the changes or new context
- Only tasks that aren't marked as 'done' will be updated
- Tasks with ID >= the specified `--from` value will be updated

## Setting Task Status

The `set-status` command allows you to change a task's status:

@@ -89,7 +130,7 @@ The `expand` command allows you to break down tasks into subtasks for more detai
node scripts/dev.js expand --id=3

# Expand a specific task with 5 subtasks
node scripts/dev.js expand --id=3 --subtasks=5
node scripts/dev.js expand --id=3 --num=5

# Expand a task with additional context
node scripts/dev.js expand --id=3 --prompt="Focus on security aspects"

@@ -99,12 +140,35 @@ node scripts/dev.js expand --all

# Force regeneration of subtasks for all pending tasks
node scripts/dev.js expand --all --force

# Use Perplexity AI for research-backed subtask generation
node scripts/dev.js expand --id=3 --research

# Use Perplexity AI for research-backed generation on all pending tasks
node scripts/dev.js expand --all --research
```

Notes:
- Tasks marked as 'done' or 'completed' are always skipped
- By default, tasks that already have subtasks are skipped unless `--force` is used
- Subtasks include title, description, dependencies, and acceptance criteria
- The `--research` flag uses Perplexity AI to generate more informed and contextually relevant subtasks
- If the Perplexity API is unavailable, the script will fall back to using Anthropic's Claude

## AI Integration

The script integrates with two AI services:

1. **Anthropic Claude**: Used for parsing PRDs, generating tasks, and creating subtasks.
2. **Perplexity AI**: Used for research-backed subtask generation when the `--research` flag is specified.

The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude.

To use the Perplexity integration:
1. Obtain a Perplexity API key
2. Add `PERPLEXITY_API_KEY` to your `.env` file
3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online")
4. Use the `--research` flag with the `expand` command
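As a rough sketch of what such an integration typically looks like (this is not the script's actual code, and the Claude fallback helper is hypothetical):

```javascript
// Rough sketch only: point the OpenAI client at Perplexity's OpenAI-compatible
// API, and fall back to Claude on any error. Not the script's actual code.
import OpenAI from 'openai';

const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: 'https://api.perplexity.ai',
});

async function researchSubtasks(prompt) {
  try {
    const response = await perplexity.chat.completions.create({
      model: process.env.PERPLEXITY_MODEL || 'sonar-medium-online',
      messages: [{ role: 'user', content: prompt }],
    });
    return response.choices[0].message.content;
  } catch (err) {
    // Perplexity unavailable or errored: fall back to Claude.
    return generateWithClaude(prompt); // hypothetical fallback helper
  }
}
```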

## Logging

@@ -115,3 +179,79 @@ The script supports different logging levels controlled by the `LOG_LEVEL` envir
- `error`: Error messages that might prevent execution

When `DEBUG=true` is set, debug logs are also written to a `dev-debug.log` file in the project root.

## Analyzing Task Complexity

The `analyze-complexity` command allows you to automatically assess task complexity and generate expansion recommendations:

```bash
# Analyze all tasks and generate expansion recommendations
node scripts/dev.js analyze-complexity

# Specify a custom output file
node scripts/dev.js analyze-complexity --output=custom-report.json

# Override the model used for analysis
node scripts/dev.js analyze-complexity --model=claude-3-opus-20240229

# Set a custom complexity threshold (1-10)
node scripts/dev.js analyze-complexity --threshold=6

# Use Perplexity AI for research-backed complexity analysis
node scripts/dev.js analyze-complexity --research
```

Notes:
- The command uses Claude to analyze each task's complexity (or Perplexity with the `--research` flag)
- Tasks are scored on a scale of 1-10
- Each task receives a recommended number of subtasks based on the `DEFAULT_SUBTASKS` configuration
- The default output path is `scripts/task-complexity-report.json`
- Each task in the analysis includes a ready-to-use `expansionCommand` that can be copied directly to the terminal or executed programmatically
- Tasks with complexity scores below the threshold (default: 5) may not need expansion
- The research flag provides more contextual and informed complexity assessments

### Integration with Expand Command

The `expand` command automatically checks for and uses complexity analysis if available:

```bash
# Expand a task, using complexity report recommendations if available
node scripts/dev.js expand --id=8

# Expand all tasks, prioritizing by complexity score if a report exists
node scripts/dev.js expand --all

# Override recommendations with explicit values
node scripts/dev.js expand --id=8 --num=5 --prompt="Custom prompt"
```

When a complexity report exists:
- The `expand` command will use the recommended subtask count from the report (unless overridden)
- It will use the tailored expansion prompt from the report (unless a custom prompt is provided)
- When using `--all`, tasks are sorted by complexity score (highest first)
- The `--research` flag is preserved from the complexity analysis to expansion

The output report structure is:

```json
{
  "meta": {
    "generatedAt": "2023-06-15T12:34:56.789Z",
    "tasksAnalyzed": 20,
    "thresholdScore": 5,
    "projectName": "Your Project Name",
    "usedResearch": true
  },
  "complexityAnalysis": [
    {
      "taskId": 8,
      "taskTitle": "Develop Implementation Drift Handling",
      "complexityScore": 9.5,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Create subtasks that handle detecting...",
      "reasoning": "This task requires sophisticated logic...",
      "expansionCommand": "node scripts/dev.js expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
    },
    // More tasks sorted by complexity score (highest first)
  ]
}
```
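Because each entry includes a ready-to-run `expansionCommand`, the report is easy to consume programmatically. For example, a small helper script along these lines (illustrative, not part of the repository) could print the commands for every task that meets the threshold:

```javascript
// Illustrative helper (not part of the repository): print the expansion
// command for every analyzed task whose score meets the report's threshold.
import { readFileSync } from 'node:fs';

const report = JSON.parse(
  readFileSync('scripts/task-complexity-report.json', 'utf8')
);

for (const task of report.complexityAnalysis) {
  if (task.complexityScore >= report.meta.thresholdScore) {
    console.log(task.expansionCommand);
  }
}
```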