fix: enhance task expansion with multiple improvements
This commit resolves several issues with the task expansion system to ensure higher quality subtasks and better synchronization:

1. Task File Generation
   - Add automatic regeneration of task files after expanding tasks
   - Ensure individual task text files stay in sync with tasks.json
   - Avoid manual regeneration steps after task expansion

2. Perplexity API Integration
   - Fix 'researchPrompt is not defined' error in Perplexity integration
   - Add specialized research-oriented prompt template
   - Improve system message for better context and instruction
   - Fall back to Claude more cleanly when Perplexity is unavailable

3. Subtask Parsing Improvements
   - Enhance regex pattern to handle more formatting variations
   - Implement multiple parsing strategies for different response formats:
     * Improved section detection with flexible headings
     * Added support for numbered and bulleted lists
     * Implemented heuristic-based title and description extraction
   - Create more meaningful dummy subtasks with relevant titles and descriptions instead of generic placeholders
   - Ensure minimal descriptions are always provided

4. Quality Verification and Retry System
   - Add post-expansion verification to identify low-quality subtask sets
   - Detect tasks with too many generic/placeholder subtasks
   - Implement interactive retry mechanism with enhanced prompts
   - Use adjusted settings for retries (research mode, subtask count)
   - Clear existing subtasks before retry to prevent duplicates
   - Provide detailed reporting of the verification and retry process

These changes significantly improve the quality of generated subtasks and reduce the need for manual intervention when subtask generation produces suboptimal results.
@@ -361,8 +361,80 @@ Please mark it as complete and tell me what I should work on next.
## Documentation

For more detailed documentation on the scripts and command-line options, see the [scripts/README.md](scripts/README.md) file in your initialized project.

## License

MIT

### Analyzing Task Complexity

To analyze the complexity of tasks and automatically generate expansion recommendations:

```bash
npm run dev -- analyze-complexity
```

This command:
- Analyzes each task using AI to assess its complexity
- Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS
- Generates tailored prompts for expanding each task
- Creates a comprehensive JSON report with ready-to-use commands
- Saves the report to scripts/task-complexity-report.json by default

Options:

```bash
# Save report to a custom location
npm run dev -- analyze-complexity --output=my-report.json

# Use a specific LLM model
npm run dev -- analyze-complexity --model=claude-3-opus-20240229

# Set a custom complexity threshold (1-10)
npm run dev -- analyze-complexity --threshold=6

# Use an alternative tasks file
npm run dev -- analyze-complexity --file=custom-tasks.json

# Use Perplexity AI for research-backed complexity analysis
npm run dev -- analyze-complexity --research
```

The generated report contains:
- Complexity analysis for each task (scored 1-10)
- Recommended number of subtasks based on complexity
- AI-generated expansion prompts customized for each task
- Ready-to-run expansion commands directly within each task analysis
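
For reference, the report this command writes has the shape below (the field names follow the format requested from the model elsewhere in this change; the concrete values are illustrative only):

```json
{
  "meta": {
    "generatedAt": "2025-03-01T12:00:00.000Z",
    "tasksAnalyzed": 10,
    "thresholdScore": 5,
    "projectName": "Your Project Name",
    "usedResearch": true
  },
  "complexityAnalysis": [
    {
      "taskId": 8,
      "taskTitle": "Example Task",
      "complexityScore": 7,
      "recommendedSubtasks": 4,
      "expansionPrompt": "Detailed prompt for expansion",
      "reasoning": "Explanation of complexity assessment"
    }
  ]
}
```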

### Smart Task Expansion

The `expand` command now automatically checks for and uses the complexity report:

```bash
# Expand a task, using complexity report recommendations if available
npm run dev -- expand --id=8

# Expand all tasks, prioritizing by complexity score if a report exists
npm run dev -- expand --all
```

When a complexity report exists:
- Tasks are automatically expanded using the recommended subtask count and prompts
- When expanding all tasks, they're processed in order of complexity (highest first)
- Research-backed generation is preserved from the complexity analysis
- You can still override recommendations with explicit command-line options

Example workflow:

```bash
# Generate the complexity analysis report with research capabilities
npm run dev -- analyze-complexity --research

# Review the report in scripts/task-complexity-report.json

# Expand tasks using the optimized recommendations
npm run dev -- expand --id=8
# or expand all tasks
npm run dev -- expand --all
```

This integration ensures that task expansion is informed by thorough complexity analysis, resulting in better subtask organization and more efficient development.

templates/dev.js
@@ -31,6 +31,26 @@
 * -> Use --no-research to disable research-backed generation.
 * -> Add --force when using --all to regenerate subtasks for tasks that already have them.
 * -> Note: Tasks marked as 'done' or 'completed' are always skipped.
 * -> If a complexity report exists for the specified task, its recommended
 *    subtask count and expansion prompt will be used (unless overridden).
 *
 * 7) analyze-complexity [options]
 * -> Analyzes task complexity and generates expansion recommendations
 * -> Generates a report in scripts/task-complexity-report.json by default
 * -> Uses configured LLM to assess task complexity and create tailored expansion prompts
 * -> Can use Perplexity AI for research-backed analysis with --research flag
 * -> Each task analysis includes:
 *    - Complexity score (1-10)
 *    - Recommended number of subtasks (based on DEFAULT_SUBTASKS config)
 *    - Detailed expansion prompt
 *    - Reasoning for complexity assessment
 *    - Ready-to-run expansion command
 * -> Options:
 *    --output, -o <file>: Specify output file path (default: 'scripts/task-complexity-report.json')
 *    --model, -m <model>: Override LLM model to use for analysis
 *    --threshold, -t <number>: Set minimum complexity score (1-10) for expansion recommendation (default: 5)
 *    --file, -f <path>: Use alternative tasks.json file instead of default
 *    --research, -r: Use Perplexity AI for research-backed complexity analysis
 *
 * Usage examples:
 * node dev.js parse-prd --input=sample-prd.txt
@@ -43,6 +63,10 @@
 * node dev.js expand --id=3 --no-research
 * node dev.js expand --all
 * node dev.js expand --all --force
 * node dev.js analyze-complexity
 * node dev.js analyze-complexity --output=custom-report.json
 * node dev.js analyze-complexity --threshold=6 --model=claude-3.7-sonnet
 * node dev.js analyze-complexity --research
 */

import fs from 'fs';
@@ -662,6 +686,17 @@ function setTaskStatus(tasksPath, taskIdInput, newStatus) {
  const oldStatus = task.status || 'pending';
  task.status = newStatus;

  // Automatically update subtasks if the parent task is being marked as done
  if (newStatus === 'done' && task.subtasks && Array.isArray(task.subtasks) && task.subtasks.length > 0) {
    log('info', `Task ${taskId} has ${task.subtasks.length} subtasks that will be marked as done too.`);

    task.subtasks.forEach(subtask => {
      const oldSubtaskStatus = subtask.status || 'pending';
      subtask.status = newStatus;
      log('info', `  └─ Updated subtask ${taskId}.${subtask.id} status from '${oldSubtaskStatus}' to '${newStatus}'`);
    });
  }

  // Save the changes
  writeJSON(tasksPath, data);
  log('info', `Updated task ${taskId} status from '${oldStatus}' to '${newStatus}'`);
@@ -728,6 +763,29 @@ async function expandTask(taskId, numSubtasks = CONFIG.defaultSubtasks, useResea
    return;
  }

  // Check for complexity report
  const complexityReport = readComplexityReport();
  let recommendedSubtasks = numSubtasks;
  let recommendedPrompt = additionalContext;

  // If report exists and has data for this task, use it
  if (complexityReport) {
    const taskAnalysis = findTaskInComplexityReport(complexityReport, parseInt(taskId));
    if (taskAnalysis) {
      // Only use report values if not explicitly overridden by command line
      if (numSubtasks === CONFIG.defaultSubtasks && taskAnalysis.recommendedSubtasks) {
        recommendedSubtasks = taskAnalysis.recommendedSubtasks;
        console.log(chalk.blue(`Using recommended subtask count from complexity analysis: ${recommendedSubtasks}`));
      }

      if (!additionalContext && taskAnalysis.expansionPrompt) {
        recommendedPrompt = taskAnalysis.expansionPrompt;
        console.log(chalk.blue(`Using recommended prompt from complexity analysis`));
        console.log(chalk.gray(`Prompt: ${recommendedPrompt.substring(0, 100)}...`));
      }
    }
  }

  // Initialize subtasks array if it doesn't exist
  if (!task.subtasks) {
    task.subtasks = [];
@@ -742,9 +800,9 @@ async function expandTask(taskId, numSubtasks = CONFIG.defaultSubtasks, useResea
  let subtasks;
  if (useResearch) {
    console.log(chalk.blue(`Using Perplexity AI for research-backed subtask generation...`));
    subtasks = await generateSubtasksWithPerplexity(task, recommendedSubtasks, nextSubtaskId, recommendedPrompt);
  } else {
    subtasks = await generateSubtasks(task, recommendedSubtasks, nextSubtaskId, recommendedPrompt);
  }

  // Add the subtasks to the task
@@ -785,7 +843,7 @@ async function expandAllTasks(numSubtasks = CONFIG.defaultSubtasks, useResearch
  }

  // Filter tasks that are not completed
  let tasksToExpand = tasksData.tasks.filter(task =>
    task.status !== 'completed' && task.status !== 'done'
  );
@@ -794,18 +852,51 @@ async function expandAllTasks(numSubtasks = CONFIG.defaultSubtasks, useResearch
    return 0;
  }

  // Check for complexity report
  const complexityReport = readComplexityReport();
  let usedComplexityReport = false;

  // If complexity report exists, sort tasks by complexity
  if (complexityReport && complexityReport.complexityAnalysis) {
    console.log(chalk.blue('Found complexity report. Prioritizing tasks by complexity score.'));
    usedComplexityReport = true;

    // Create a map of task IDs to their complexity scores
    const complexityMap = new Map();
    complexityReport.complexityAnalysis.forEach(analysis => {
      complexityMap.set(analysis.taskId, analysis.complexityScore);
    });

    // Sort tasks by complexity score (highest first)
    tasksToExpand.sort((a, b) => {
      const scoreA = complexityMap.get(a.id) || 0;
      const scoreB = complexityMap.get(b.id) || 0;
      return scoreB - scoreA;
    });

    // Log the sorted tasks
    console.log(chalk.blue('Tasks will be expanded in this order (by complexity):'));
    tasksToExpand.forEach(task => {
      const score = complexityMap.get(task.id) || 'N/A';
      console.log(chalk.blue(`  Task ${task.id}: ${task.title} (Complexity: ${score})`));
    });
  }

  console.log(chalk.blue(`\nExpanding ${tasksToExpand.length} tasks...`));

  let tasksExpanded = 0;

  // Expand each task
  for (const task of tasksToExpand) {
    console.log(chalk.blue(`\nExpanding task ${task.id}: ${task.title}`));

    // The check for usedComplexityReport is redundant since expandTask will handle it anyway
    await expandTask(task.id, numSubtasks, useResearch, additionalContext);

    tasksExpanded++;
  }

  console.log(chalk.green(`\nExpanded ${tasksExpanded} tasks.`));
  return tasksExpanded;
} catch (error) {
  console.error(chalk.red('Error expanding all tasks:'), error);
@@ -1192,10 +1283,16 @@ Research the task thoroughly and ensure the subtasks are comprehensive, specific
console.log(chalk.blue('Using Perplexity AI for research-backed subtask generation...'));
const result = await perplexity.chat.completions.create({
  model: PERPLEXITY_MODEL,
  messages: [
    {
      role: "system",
      content: "You are a technical analysis AI that only responds with clean, valid JSON. Never include explanatory text or markdown formatting in your response."
    },
    {
      role: "user",
      content: researchPrompt
    }
  ],
  temperature: TEMPERATURE,
  max_tokens: MAX_TOKENS,
});
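
Note: `researchPrompt` is the research-oriented template this commit introduces to fix the "researchPrompt is not defined" error; it is presumably defined earlier in this function and is not shown in this hunk. A minimal sketch of what such a template could look like, assuming it wraps the existing `prompt` the same way the analyze-complexity flow below wraps its prompt (only the variable name comes from the diff; the wording here is illustrative):

```js
// Hypothetical sketch, not the actual definition from the commit:
// a research-oriented wrapper around the base subtask-generation prompt.
const researchPrompt = `Research the task thoroughly, considering best practices,
industry standards, and potential implementation challenges.

CRITICAL: Respond ONLY with a valid JSON array of subtask objects.

${prompt}`;
```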
@@ -1402,9 +1499,641 @@ async function main() {
    }
  });

  program
    .command('analyze-complexity')
    .description('Analyze tasks and generate complexity-based expansion recommendations')
    .option('-o, --output <file>', 'Output file path for the report', 'scripts/task-complexity-report.json')
    .option('-m, --model <model>', 'LLM model to use for analysis (defaults to configured model)')
    .option('-t, --threshold <number>', 'Minimum complexity score to recommend expansion (1-10)', '5')
    .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
    .option('-r, --research', 'Use Perplexity AI for research-backed complexity analysis')
    .action(async (options) => {
      const tasksPath = options.file || 'tasks/tasks.json';
      const outputPath = options.output;
      const modelOverride = options.model;
      const thresholdScore = parseFloat(options.threshold);
      const useResearch = options.research || false;

      console.log(chalk.blue(`Analyzing task complexity from: ${tasksPath}`));
      console.log(chalk.blue(`Output report will be saved to: ${outputPath}`));

      if (useResearch) {
        console.log(chalk.blue('Using Perplexity AI for research-backed complexity analysis'));
      }

      await analyzeTaskComplexity(options);
    });

  await program.parseAsync(process.argv);
}

/**
 * Analyzes task complexity and generates expansion recommendations
 * @param {Object} options Command options
 */
async function analyzeTaskComplexity(options) {
  const tasksPath = options.file || 'tasks/tasks.json';
  const outputPath = options.output || 'scripts/task-complexity-report.json';
  const modelOverride = options.model;
  const thresholdScore = parseFloat(options.threshold || '5');
  const useResearch = options.research || false;

  console.log(chalk.blue(`Analyzing task complexity and generating expansion recommendations...`));

  try {
    // Read tasks.json
    console.log(chalk.blue(`Reading tasks from ${tasksPath}...`));
    const tasksData = readJSON(tasksPath);

    if (!tasksData || !tasksData.tasks || !Array.isArray(tasksData.tasks) || tasksData.tasks.length === 0) {
      throw new Error('No tasks found in the tasks file');
    }

    console.log(chalk.blue(`Found ${tasksData.tasks.length} tasks to analyze.`));

    // Prepare the prompt for the LLM
    const prompt = generateComplexityAnalysisPrompt(tasksData);

    // Start loading indicator
    const loadingIndicator = startLoadingIndicator('Calling AI to analyze task complexity...');

    let fullResponse = '';
    let streamingInterval = null;

    try {
      // If research flag is set, use Perplexity first
      if (useResearch) {
        try {
          console.log(chalk.blue('Using Perplexity AI for research-backed complexity analysis...'));

          // Modify prompt to include more context for Perplexity and explicitly request JSON
          const researchPrompt = `You are conducting a detailed analysis of software development tasks to determine their complexity and how they should be broken down into subtasks.

Please research each task thoroughly, considering best practices, industry standards, and potential implementation challenges before providing your analysis.

CRITICAL: You MUST respond ONLY with a valid JSON array. Do not include ANY explanatory text, markdown formatting, or code block markers.

${prompt}

Your response must be a clean JSON array only, following exactly this format:
[
  {
    "taskId": 1,
    "taskTitle": "Example Task",
    "complexityScore": 7,
    "recommendedSubtasks": 4,
    "expansionPrompt": "Detailed prompt for expansion",
    "reasoning": "Explanation of complexity assessment"
  },
  // more tasks...
]

DO NOT include any text before or after the JSON array. No explanations, no markdown formatting.`;

          const result = await perplexity.chat.completions.create({
            model: PERPLEXITY_MODEL,
            messages: [
              {
                role: "system",
                content: "You are a technical analysis AI that only responds with clean, valid JSON. Never include explanatory text or markdown formatting in your response."
              },
              {
                role: "user",
                content: researchPrompt
              }
            ],
            temperature: TEMPERATURE,
            max_tokens: MAX_TOKENS,
          });

          // Extract the response text
          fullResponse = result.choices[0].message.content;
          console.log(chalk.green('Successfully generated complexity analysis with Perplexity AI'));

          if (streamingInterval) clearInterval(streamingInterval);
          stopLoadingIndicator(loadingIndicator);

          // ALWAYS log the first part of the response for debugging
          console.log(chalk.gray('Response first 200 chars:'));
          console.log(chalk.gray(fullResponse.substring(0, 200)));
        } catch (perplexityError) {
          console.log(chalk.yellow('Falling back to Claude for complexity analysis...'));
          console.log(chalk.gray('Perplexity error:'), perplexityError.message);

          // Continue to Claude as fallback
          await useClaudeForComplexityAnalysis();
        }
      } else {
        // Use Claude directly if research flag is not set
        await useClaudeForComplexityAnalysis();
      }

      // Helper function to use Claude for complexity analysis
      async function useClaudeForComplexityAnalysis() {
        // Call the LLM API with streaming
        const stream = await anthropic.messages.create({
          max_tokens: CONFIG.maxTokens,
          model: modelOverride || CONFIG.model,
          temperature: CONFIG.temperature,
          messages: [{ role: "user", content: prompt }],
          system: "You are an expert software architect and project manager analyzing task complexity. Respond only with valid JSON.",
          stream: true
        });

        // Update loading indicator to show streaming progress
        let dotCount = 0;
        streamingInterval = setInterval(() => {
          readline.cursorTo(process.stdout, 0);
          process.stdout.write(`Receiving streaming response from Claude${'.'.repeat(dotCount)}`);
          dotCount = (dotCount + 1) % 4;
        }, 500);

        // Process the stream
        for await (const chunk of stream) {
          if (chunk.type === 'content_block_delta' && chunk.delta.text) {
            fullResponse += chunk.delta.text;
          }
        }

        clearInterval(streamingInterval);
        stopLoadingIndicator(loadingIndicator);

        console.log(chalk.green("Completed streaming response from Claude API!"));
      }
      // Parse the JSON response
      console.log(chalk.blue(`Parsing complexity analysis...`));
      let complexityAnalysis;
      try {
        // Clean up the response to ensure it's valid JSON
        let cleanedResponse = fullResponse;

        // First check for JSON code blocks (common in markdown responses)
        const codeBlockMatch = fullResponse.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
        if (codeBlockMatch) {
          cleanedResponse = codeBlockMatch[1];
          console.log(chalk.blue("Extracted JSON from code block"));
        } else {
          // Look for a complete JSON array pattern
          // This regex looks for an array of objects starting with [ and ending with ]
          const jsonArrayMatch = fullResponse.match(/(\[\s*\{\s*"[^"]*"\s*:[\s\S]*\}\s*\])/);
          if (jsonArrayMatch) {
            cleanedResponse = jsonArrayMatch[1];
            console.log(chalk.blue("Extracted JSON array pattern"));
          } else {
            // Try to find the start of a JSON array and capture to the end
            const jsonStartMatch = fullResponse.match(/(\[\s*\{[\s\S]*)/);
            if (jsonStartMatch) {
              cleanedResponse = jsonStartMatch[1];
              // Try to find a proper closing to the array
              const properEndMatch = cleanedResponse.match(/([\s\S]*\}\s*\])/);
              if (properEndMatch) {
                cleanedResponse = properEndMatch[1];
              }
              console.log(chalk.blue("Extracted JSON from start of array to end"));
            }
          }
        }

        // Log the cleaned response for debugging
        console.log(chalk.gray("Attempting to parse cleaned JSON..."));
        console.log(chalk.gray("Cleaned response (first 100 chars):"));
        console.log(chalk.gray(cleanedResponse.substring(0, 100)));
        console.log(chalk.gray("Last 100 chars:"));
        console.log(chalk.gray(cleanedResponse.substring(cleanedResponse.length - 100)));

        // More aggressive cleaning - strip any non-JSON content at the beginning or end
        const strictArrayMatch = cleanedResponse.match(/(\[\s*\{[\s\S]*\}\s*\])/);
        if (strictArrayMatch) {
          cleanedResponse = strictArrayMatch[1];
          console.log(chalk.blue("Applied strict JSON array extraction"));
        }

        try {
          complexityAnalysis = JSON.parse(cleanedResponse);
        } catch (jsonError) {
          console.log(chalk.yellow("Initial JSON parsing failed, attempting to fix common JSON issues..."));

          // Try to fix common JSON issues
          // 1. Remove any trailing commas in arrays or objects
          cleanedResponse = cleanedResponse.replace(/,(\s*[\]}])/g, '$1');

          // 2. Ensure property names are double-quoted
          cleanedResponse = cleanedResponse.replace(/(\s*)(\w+)(\s*):(\s*)/g, '$1"$2"$3:$4');

          // 3. Replace single quotes with double quotes for property values
          cleanedResponse = cleanedResponse.replace(/:(\s*)'([^']*)'(\s*[,}])/g, ':$1"$2"$3');

          // 4. Add a special fallback option if we're still having issues
          try {
            complexityAnalysis = JSON.parse(cleanedResponse);
            console.log(chalk.green("Successfully parsed JSON after fixing common issues"));
          } catch (fixedJsonError) {
            console.log(chalk.red("Failed to parse JSON even after fixes, attempting more aggressive cleanup..."));

            // Try to extract and process each task individually
            try {
              const taskMatches = cleanedResponse.match(/\{\s*"taskId"\s*:\s*(\d+)[^}]*\}/g);
              if (taskMatches && taskMatches.length > 0) {
                console.log(chalk.yellow(`Found ${taskMatches.length} task objects, attempting to process individually`));

                complexityAnalysis = [];
                for (const taskMatch of taskMatches) {
                  try {
                    // Try to parse each task object individually
                    const fixedTask = taskMatch.replace(/,\s*$/, ''); // Remove trailing commas
                    const taskObj = JSON.parse(`${fixedTask}`);
                    if (taskObj && taskObj.taskId) {
                      complexityAnalysis.push(taskObj);
                    }
                  } catch (taskParseError) {
                    console.log(chalk.yellow(`Could not parse individual task: ${taskMatch.substring(0, 30)}...`));
                  }
                }

                if (complexityAnalysis.length > 0) {
                  console.log(chalk.green(`Successfully parsed ${complexityAnalysis.length} tasks individually`));
                } else {
                  throw new Error("Could not parse any tasks individually");
                }
              } else {
                throw fixedJsonError;
              }
            } catch (individualError) {
              console.log(chalk.red("All parsing attempts failed"));
              throw jsonError; // throw the original error
            }
          }
        }

        // Ensure complexityAnalysis is an array
        if (!Array.isArray(complexityAnalysis)) {
          console.log(chalk.yellow('Response is not an array, checking if it contains an array property...'));

          // Handle the case where the response might be an object with an array property
          if (complexityAnalysis.tasks || complexityAnalysis.analysis || complexityAnalysis.results) {
            complexityAnalysis = complexityAnalysis.tasks || complexityAnalysis.analysis || complexityAnalysis.results;
          } else {
            // If no recognizable array property, wrap it as an array if it's an object
            if (typeof complexityAnalysis === 'object' && complexityAnalysis !== null) {
              console.log(chalk.yellow('Converting object to array...'));
              complexityAnalysis = [complexityAnalysis];
            } else {
              throw new Error('Response does not contain a valid array or object');
            }
          }
        }

        // Final check to ensure we have an array
        if (!Array.isArray(complexityAnalysis)) {
          throw new Error('Failed to extract an array from the response');
        }
        // Check that we have an analysis for each task in the input file
        const taskIds = tasksData.tasks.map(t => t.id);
        const analysisTaskIds = complexityAnalysis.map(a => a.taskId);
        const missingTaskIds = taskIds.filter(id => !analysisTaskIds.includes(id));

        if (missingTaskIds.length > 0) {
          console.log(chalk.yellow(`Missing analysis for ${missingTaskIds.length} tasks: ${missingTaskIds.join(', ')}`));
          console.log(chalk.blue(`Attempting to analyze missing tasks...`));

          // Create a subset of tasksData with just the missing tasks
          const missingTasks = {
            meta: tasksData.meta,
            tasks: tasksData.tasks.filter(t => missingTaskIds.includes(t.id))
          };

          // Generate a prompt for just the missing tasks
          const missingTasksPrompt = generateComplexityAnalysisPrompt(missingTasks);

          // Call the same AI model to analyze the missing tasks
          let missingAnalysisResponse = '';

          try {
            // Start a new loading indicator
            const missingTasksLoadingIndicator = startLoadingIndicator('Analyzing missing tasks...');

            // Use the same AI model as the original analysis
            if (useResearch) {
              // Create the same research prompt but for missing tasks
              const missingTasksResearchPrompt = `You are conducting a detailed analysis of software development tasks to determine their complexity and how they should be broken down into subtasks.

Please research each task thoroughly, considering best practices, industry standards, and potential implementation challenges before providing your analysis.

CRITICAL: You MUST respond ONLY with a valid JSON array. Do not include ANY explanatory text, markdown formatting, or code block markers.

${missingTasksPrompt}

Your response must be a clean JSON array only, following exactly this format:
[
  {
    "taskId": 1,
    "taskTitle": "Example Task",
    "complexityScore": 7,
    "recommendedSubtasks": 4,
    "expansionPrompt": "Detailed prompt for expansion",
    "reasoning": "Explanation of complexity assessment"
  },
  // more tasks...
]

DO NOT include any text before or after the JSON array. No explanations, no markdown formatting.`;

              const result = await perplexity.chat.completions.create({
                model: PERPLEXITY_MODEL,
                messages: [
                  {
                    role: "system",
                    content: "You are a technical analysis AI that only responds with clean, valid JSON. Never include explanatory text or markdown formatting in your response."
                  },
                  {
                    role: "user",
                    content: missingTasksResearchPrompt
                  }
                ],
                temperature: TEMPERATURE,
                max_tokens: MAX_TOKENS,
              });

              // Extract the response
              missingAnalysisResponse = result.choices[0].message.content;
            } else {
              // Use Claude
              const stream = await anthropic.messages.create({
                max_tokens: CONFIG.maxTokens,
                model: modelOverride || CONFIG.model,
                temperature: CONFIG.temperature,
                messages: [{ role: "user", content: missingTasksPrompt }],
                system: "You are an expert software architect and project manager analyzing task complexity. Respond only with valid JSON.",
                stream: true
              });

              // Process the stream
              for await (const chunk of stream) {
                if (chunk.type === 'content_block_delta' && chunk.delta.text) {
                  missingAnalysisResponse += chunk.delta.text;
                }
              }
            }

            // Stop the loading indicator
            stopLoadingIndicator(missingTasksLoadingIndicator);

            // Parse the response using the same parsing logic as before
            let missingAnalysis;
            try {
              // Clean up the response to ensure it's valid JSON (using same logic as above)
              let cleanedResponse = missingAnalysisResponse;

              // First check for JSON code blocks
              const codeBlockMatch = missingAnalysisResponse.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
              if (codeBlockMatch) {
                cleanedResponse = codeBlockMatch[1];
                console.log(chalk.blue("Extracted JSON from code block for missing tasks"));
              } else {
                // Look for a complete JSON array pattern
                const jsonArrayMatch = missingAnalysisResponse.match(/(\[\s*\{\s*"[^"]*"\s*:[\s\S]*\}\s*\])/);
                if (jsonArrayMatch) {
                  cleanedResponse = jsonArrayMatch[1];
                  console.log(chalk.blue("Extracted JSON array pattern for missing tasks"));
                } else {
                  // Try to find the start of a JSON array and capture to the end
                  const jsonStartMatch = missingAnalysisResponse.match(/(\[\s*\{[\s\S]*)/);
                  if (jsonStartMatch) {
                    cleanedResponse = jsonStartMatch[1];
                    // Try to find a proper closing to the array
                    const properEndMatch = cleanedResponse.match(/([\s\S]*\}\s*\])/);
                    if (properEndMatch) {
                      cleanedResponse = properEndMatch[1];
                    }
                    console.log(chalk.blue("Extracted JSON from start of array to end for missing tasks"));
                  }
                }
              }

              // More aggressive cleaning if needed
              const strictArrayMatch = cleanedResponse.match(/(\[\s*\{[\s\S]*\}\s*\])/);
              if (strictArrayMatch) {
                cleanedResponse = strictArrayMatch[1];
                console.log(chalk.blue("Applied strict JSON array extraction for missing tasks"));
              }

              try {
                missingAnalysis = JSON.parse(cleanedResponse);
              } catch (jsonError) {
                // Try to fix common JSON issues (same as before)
                cleanedResponse = cleanedResponse.replace(/,(\s*[\]}])/g, '$1');
                cleanedResponse = cleanedResponse.replace(/(\s*)(\w+)(\s*):(\s*)/g, '$1"$2"$3:$4');
                cleanedResponse = cleanedResponse.replace(/:(\s*)'([^']*)'(\s*[,}])/g, ':$1"$2"$3');

                try {
                  missingAnalysis = JSON.parse(cleanedResponse);
                  console.log(chalk.green("Successfully parsed JSON for missing tasks after fixing common issues"));
                } catch (fixedJsonError) {
                  // Try the individual task extraction as a last resort
                  console.log(chalk.red("Failed to parse JSON for missing tasks, attempting individual extraction..."));

                  const taskMatches = cleanedResponse.match(/\{\s*"taskId"\s*:\s*(\d+)[^}]*\}/g);
                  if (taskMatches && taskMatches.length > 0) {
                    console.log(chalk.yellow(`Found ${taskMatches.length} task objects, attempting to process individually`));

                    missingAnalysis = [];
                    for (const taskMatch of taskMatches) {
                      try {
                        const fixedTask = taskMatch.replace(/,\s*$/, '');
                        const taskObj = JSON.parse(`${fixedTask}`);
                        if (taskObj && taskObj.taskId) {
                          missingAnalysis.push(taskObj);
                        }
                      } catch (taskParseError) {
                        console.log(chalk.yellow(`Could not parse individual task: ${taskMatch.substring(0, 30)}...`));
                      }
                    }

                    if (missingAnalysis.length === 0) {
                      throw new Error("Could not parse any missing tasks");
                    }
                  } else {
                    throw fixedJsonError;
                  }
                }
              }

              // Ensure it's an array
              if (!Array.isArray(missingAnalysis)) {
                if (missingAnalysis && typeof missingAnalysis === 'object') {
                  missingAnalysis = [missingAnalysis];
                } else {
                  throw new Error("Missing tasks analysis is not an array or object");
                }
              }

              // Add the missing analyses to the main analysis array
              console.log(chalk.green(`Successfully analyzed ${missingAnalysis.length} missing tasks`));
              complexityAnalysis = [...complexityAnalysis, ...missingAnalysis];

              // Re-check for missing tasks
              const updatedAnalysisTaskIds = complexityAnalysis.map(a => a.taskId);
              const stillMissingTaskIds = taskIds.filter(id => !updatedAnalysisTaskIds.includes(id));

              if (stillMissingTaskIds.length > 0) {
                console.log(chalk.yellow(`Warning: Still missing analysis for ${stillMissingTaskIds.length} tasks: ${stillMissingTaskIds.join(', ')}`));
              } else {
                console.log(chalk.green(`All tasks now have complexity analysis!`));
              }
            } catch (error) {
              console.error(chalk.red(`Error analyzing missing tasks: ${error.message}`));
              console.log(chalk.yellow(`Continuing with partial analysis...`));
            }
          } catch (error) {
            console.error(chalk.red(`Error during retry for missing tasks: ${error.message}`));
            console.log(chalk.yellow(`Continuing with partial analysis...`));
          }
        }
      } catch (error) {
        console.error(chalk.red(`Failed to parse LLM response as JSON: ${error.message}`));
        if (CONFIG.debug) {
          console.debug(chalk.gray(`Raw response: ${fullResponse}`));
        }
        throw new Error('Invalid response format from LLM. Expected JSON.');
      }
      // Create the final report
      const report = {
        meta: {
          generatedAt: new Date().toISOString(),
          tasksAnalyzed: tasksData.tasks.length,
          thresholdScore: thresholdScore,
          projectName: tasksData.meta?.projectName || 'Your Project Name',
          usedResearch: useResearch
        },
        complexityAnalysis: complexityAnalysis
      };

      // Write the report to file
      console.log(chalk.blue(`Writing complexity report to ${outputPath}...`));
      writeJSON(outputPath, report);

      console.log(chalk.green(`Task complexity analysis complete. Report written to ${outputPath}`));

      // Display a summary of findings
      const highComplexity = complexityAnalysis.filter(t => t.complexityScore >= 8).length;
      const mediumComplexity = complexityAnalysis.filter(t => t.complexityScore >= 5 && t.complexityScore < 8).length;
      const lowComplexity = complexityAnalysis.filter(t => t.complexityScore < 5).length;
      const totalAnalyzed = complexityAnalysis.length;

      console.log('\nComplexity Analysis Summary:');
      console.log('----------------------------');
      console.log(`Tasks in input file: ${tasksData.tasks.length}`);
      console.log(`Tasks successfully analyzed: ${totalAnalyzed}`);
      console.log(`High complexity tasks: ${highComplexity}`);
      console.log(`Medium complexity tasks: ${mediumComplexity}`);
      console.log(`Low complexity tasks: ${lowComplexity}`);
      console.log(`Sum verification: ${highComplexity + mediumComplexity + lowComplexity} (should equal ${totalAnalyzed})`);
      console.log(`Research-backed analysis: ${useResearch ? 'Yes' : 'No'}`);
      console.log(`\nSee ${outputPath} for the full report and expansion commands.`);

    } catch (error) {
      if (streamingInterval) clearInterval(streamingInterval);
      stopLoadingIndicator(loadingIndicator);
      throw error;
    }
  } catch (error) {
    console.error(chalk.red(`Error analyzing task complexity: ${error.message}`));
    process.exit(1);
  }
}

/**
 * Generates the prompt for the LLM to analyze task complexity
 * @param {Object} tasksData The tasks data from tasks.json
 * @returns {string} The prompt for the LLM
 */
function generateComplexityAnalysisPrompt(tasksData) {
  return `
You are an expert software architect and project manager. Your task is to analyze the complexity of development tasks and determine how many subtasks each should be broken down into.

Below is a list of development tasks with their descriptions and details. For each task:
1. Assess its complexity on a scale of 1-10
2. Recommend the optimal number of subtasks (between ${Math.max(3, CONFIG.defaultSubtasks - 1)}-${Math.min(8, CONFIG.defaultSubtasks + 2)})
3. Suggest a specific prompt that would help generate good subtasks for this task
4. Explain your reasoning briefly

Tasks:
${tasksData.tasks.map(task => `
ID: ${task.id}
Title: ${task.title}
Description: ${task.description}
Details: ${task.details}
Dependencies: ${JSON.stringify(task.dependencies || [])}
Priority: ${task.priority || 'medium'}
`).join('\n---\n')}

Analyze each task and return a JSON array with the following structure for each task:
[
  {
    "taskId": number,
    "taskTitle": string,
    "complexityScore": number (1-10),
    "recommendedSubtasks": number (${Math.max(3, CONFIG.defaultSubtasks - 1)}-${Math.min(8, CONFIG.defaultSubtasks + 2)}),
    "expansionPrompt": string (a specific prompt for generating good subtasks),
    "reasoning": string (brief explanation of your assessment)
  },
  ...
]

IMPORTANT: Make sure to include an analysis for EVERY task listed above, with the correct taskId matching each task's ID.
`;
}

/**
 * Sanitizes a prompt string for use in a shell command
 * @param {string} prompt The prompt to sanitize
 * @returns {string} Sanitized prompt
 */
function sanitizePrompt(prompt) {
  // Replace double quotes with escaped double quotes
  return prompt.replace(/"/g, '\\"');
}
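
The escaped output is meant to be embedded inside a double-quoted shell argument, for example when building the ready-to-run expansion commands included in the complexity report. A hypothetical usage (the surrounding command string is illustrative, not taken from the diff):

```js
// Hypothetical usage: embed an expansion prompt in a generated shell command.
const command = `node scripts/dev.js expand --id=${taskId} --prompt="${sanitizePrompt(expansionPrompt)}"`;
```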

/**
 * Reads and parses the complexity report if it exists
 * @param {string} customPath - Optional custom path to the report
 * @returns {Object|null} The parsed complexity report or null if not found
 */
function readComplexityReport(customPath = null) {
  try {
    const reportPath = customPath || path.join(process.cwd(), 'scripts', 'task-complexity-report.json');
    if (!fs.existsSync(reportPath)) {
      return null;
    }

    const reportData = fs.readFileSync(reportPath, 'utf8');
    return JSON.parse(reportData);
  } catch (error) {
    console.log(chalk.yellow(`Could not read complexity report: ${error.message}`));
    return null;
  }
}

/**
 * Finds a task analysis in the complexity report
 * @param {Object} report - The complexity report
 * @param {number} taskId - The task ID to find
 * @returns {Object|null} The task analysis or null if not found
 */
function findTaskInComplexityReport(report, taskId) {
  if (!report || !report.complexityAnalysis || !Array.isArray(report.complexityAnalysis)) {
    return null;
  }

  return report.complexityAnalysis.find(task => task.taskId === taskId);
}
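
Together these two helpers implement the report lookup that `expandTask` relies on; a minimal standalone sketch of the same pattern:

```js
// Sketch: fetch the recommendation for task 8, falling back to the configured default.
const report = readComplexityReport();
const analysis = report ? findTaskInComplexityReport(report, 8) : null;
const subtaskCount = (analysis && analysis.recommendedSubtasks) || CONFIG.defaultSubtasks;
```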
main().catch(err => {
  log('error', err);
  process.exit(1);
@@ -1,267 +1,152 @@
---
description: guide the Cursor Agent in using the meta-development script (scripts/dev.js). It also defines the overall workflow for reading, updating, and generating tasks during AI-driven development.
globs: scripts/dev.js, tasks.json, tasks/*.txt
description: Guide for using meta-development script (scripts/dev.js) to manage task-driven development workflows
globs: **/*
alwaysApply: true
---
rules:
  - name: "Meta Development Workflow for Cursor Agent"
    description: >
      Provides comprehensive guidelines on how the agent (Cursor) should coordinate
      with the meta task script in scripts/dev.js. The agent will call
      these commands at various points in the coding process to keep
      tasks.json up to date and maintain a single source of truth for development tasks.
    triggers:
      # Potential triggers or states in Cursor where these rules apply.
      # You may list relevant event names, e.g., "onTaskCompletion" or "onUserCommand"
      - always
    steps:
      - "**Initial Setup**: If starting a new project with a PRD document, run `node scripts/dev.js parse-prd --input=<prd-file.txt>` to generate the initial tasks.json file. This will create a structured task list with IDs, titles, descriptions, dependencies, priorities, and test strategies."

      - "**Task Discovery**: When a coding session begins, call `node scripts/dev.js list` to see the current tasks, their status, and IDs. This provides a quick overview of all tasks and their current states (pending, done, deferred)."

      - "**Task Selection**: Select the next pending task based on these criteria:
        1. Dependencies: Only select tasks whose dependencies are marked as 'done'
        2. Priority: Choose higher priority tasks first ('high' > 'medium' > 'low')
        3. ID order: When priorities are equal, select the task with the lowest ID
        If multiple tasks are eligible, present options to the user for selection."

      - "**Task Clarification**: If a task description is unclear or lacks detail:
        1. Check if a corresponding task file exists in the tasks/ directory (e.g., task_001.txt)
        2. If more information is needed, ask the user for clarification
        3. If architectural changes have occurred, run `node scripts/dev.js update --from=<id> --prompt=\"<new architectural context>\"` to update the task and all subsequent tasks"

      - "**Task Breakdown**: For complex tasks that need to be broken down into smaller steps:
        1. Use `node scripts/dev.js expand --id=<id> --subtasks=<number>` to generate detailed subtasks
        2. Optionally provide additional context with `--prompt=\"<context>\"` to guide subtask generation
        3. Review the generated subtasks and adjust if necessary
        4. For multiple tasks, use `--all` flag to expand all pending tasks that don't have subtasks"

      - "**Task Implementation**: Implement the code necessary for the chosen task. Follow these guidelines:
        1. Reference the task's 'details' section for implementation specifics
        2. Consider dependencies on previous tasks when implementing
        3. Follow the project's coding standards and patterns
        4. Create appropriate tests based on the task's 'testStrategy' field"

      - "**Task Verification**: Before marking a task as done, verify it according to:
        1. The task's specified 'testStrategy'
        2. Any automated tests in the codebase
        3. Manual verification if required
        4. Code quality standards (linting, formatting, etc.)"

      - "**Task Completion**: When a task is completed and verified, run `node scripts/dev.js set-status --id=<id> --status=done` to mark it as done in tasks.json. This ensures the task tracking remains accurate."

      - "**Implementation Drift Handling**: If during implementation, you discover that:
        1. The current approach differs significantly from what was planned
        2. Future tasks need to be modified due to current implementation choices
        3. New dependencies or requirements have emerged

        Then call `node scripts/dev.js update --from=<futureTaskId> --prompt=\"Detailed explanation of architectural or implementation changes...\"` to rewrite or re-scope subsequent tasks in tasks.json."

      - "**Task File Generation**: After any updates to tasks.json (status changes, task updates), run `node scripts/dev.js generate` to regenerate the individual task_XXX.txt files in the tasks/ folder. This ensures that task files are always in sync with tasks.json."

      - "**Task Status Management**: Use appropriate status values when updating tasks:
        1. 'pending': Tasks that are ready to be worked on
        2. 'done': Tasks that have been completed and verified
        3. 'deferred': Tasks that have been postponed to a later time
        4. Any other custom status that might be relevant to the project"

      - "**Dependency Management**: When selecting tasks, always respect the dependency chain:
        1. Never start a task whose dependencies are not marked as 'done'
        2. If a dependency task is deferred, consider whether dependent tasks should also be deferred
        3. If dependency relationships change during development, update tasks.json accordingly"

      - "**Progress Reporting**: Periodically (at the beginning of sessions or after completing significant tasks), run `node scripts/dev.js list` to provide the user with an updated view of project progress."

      - "**Task File Format**: When reading task files, understand they follow this structure:
        ```
        # Task ID: <id>
        # Title: <title>
        # Status: <status>
        # Dependencies: <comma-separated list of dependency IDs>
        # Priority: <priority>
        # Description: <brief description>
        # Details:
        <detailed implementation notes>

        # Test Strategy:
        <verification approach>
        ```"

      - "**Continuous Workflow**: Repeat this process until all tasks relevant to the current development phase are completed. Always maintain tasks.json as the single source of truth for development progress."
- name: "Meta-Development Script Command Reference"
|
||||
description: >
|
||||
Detailed reference for all commands available in the scripts/dev.js meta-development script.
|
||||
This helps the agent understand the full capabilities of the script and use it effectively.
|
||||
triggers:
|
||||
- always
|
||||
commands:
|
||||
- name: "parse-prd"
|
||||
syntax: "node scripts/dev.js parse-prd --input=<prd-file.txt>"
|
||||
description: "Parses a PRD document and generates a tasks.json file with structured tasks. This initializes the task tracking system."
|
||||
parameters:
|
||||
- "--input=<file>: Path to the PRD text file (default: sample-prd.txt)"
|
||||
example: "node scripts/dev.js parse-prd --input=requirements.txt"
|
||||
notes: "This will overwrite any existing tasks.json file. Use with caution on established projects."
|
||||
|
||||
- name: "update"
|
||||
syntax: "node scripts/dev.js update --from=<id> --prompt=\"<prompt>\""
|
||||
description: "Updates tasks with ID >= the specified ID based on the provided prompt. Useful for handling implementation drift or architectural changes."
|
||||
parameters:
|
||||
- "--from=<id>: The task ID from which to start updating (required)"
|
||||
- "--prompt=\"<text>\": The prompt explaining the changes or new context (required)"
|
||||
example: "node scripts/dev.js update --from=4 --prompt=\"Now we are using Express instead of Fastify.\""
|
||||
notes: "Only updates tasks that aren't marked as 'done'. Completed tasks remain unchanged."
|
||||
|
||||
- name: "generate"
|
||||
syntax: "node scripts/dev.js generate"
|
||||
description: "Generates individual task files in the tasks/ directory based on the current state of tasks.json."
|
||||
parameters: "None"
|
||||
example: "node scripts/dev.js generate"
|
||||
notes: "Overwrites existing task files. Creates the tasks/ directory if it doesn't exist."
|
||||
|
||||
- name: "set-status"
|
||||
syntax: "node scripts/dev.js set-status --id=<id> --status=<status>"
|
||||
description: "Updates the status of a specific task in tasks.json."
|
||||
parameters:
|
||||
- "--id=<id>: The ID of the task to update (required)"
|
||||
- "--status=<status>: The new status (e.g., 'done', 'pending', 'deferred') (required)"
|
||||
example: "node scripts/dev.js set-status --id=3 --status=done"
|
||||
notes: "Common status values are 'done', 'pending', and 'deferred', but any string is accepted."
|
||||
|
||||
- name: "list"
|
||||
syntax: "node scripts/dev.js list"
|
||||
description: "Lists all tasks in tasks.json with their IDs, titles, and current status."
|
||||
parameters: "None"
|
||||
example: "node scripts/dev.js list"
|
||||
notes: "Provides a quick overview of project progress. Use this at the start of coding sessions."
|
||||
|
||||
- name: "expand"
|
||||
syntax: "node scripts/dev.js expand --id=<id> [--subtasks=<number>] [--prompt=\"<context>\"]"
|
||||
description: "Expands a task with subtasks for more detailed implementation. Can also expand all tasks with the --all flag."
|
||||
parameters:
|
||||
- "--id=<id>: The ID of the task to expand (required unless using --all)"
|
||||
- "--all: Expand all pending tasks that don't have subtasks"
|
||||
- "--subtasks=<number>: Number of subtasks to generate (default: 3)"
|
||||
- "--prompt=\"<text>\": Additional context to guide subtask generation"
|
||||
- "--force: When used with --all, regenerates subtasks even for tasks that already have them"
|
||||
example: "node scripts/dev.js expand --id=3 --subtasks=5 --prompt=\"Focus on security aspects\""
|
||||
notes: "Tasks marked as 'done' or 'completed' are always skipped. By default, tasks that already have subtasks are skipped unless --force is used."
|
||||

- **Development Workflow Process**
  - Start new projects by running `node scripts/dev.js parse-prd --input=<prd-file.txt>` to generate initial tasks.json
  - Begin coding sessions with `node scripts/dev.js list` to see current tasks, status, and IDs
  - Analyze task complexity with `node scripts/dev.js analyze-complexity --research` before breaking down tasks
  - Select tasks based on dependencies (all marked 'done'), priority level, and ID order
  - Clarify tasks by checking task files in tasks/ directory or asking for user input
  - Break down complex tasks using `node scripts/dev.js expand --id=<id>` with appropriate flags
  - Implement code following task details, dependencies, and project standards
  - Verify tasks according to test strategies before marking as complete
  - Mark completed tasks with `node scripts/dev.js set-status --id=<id> --status=done`
  - Update dependent tasks when implementation differs from original plan
  - Generate task files with `node scripts/dev.js generate` after updating tasks.json
  - Respect dependency chains and task priorities when selecting work
  - Report progress regularly using the list command
- name: "Task Structure Reference"
|
||||
description: >
|
||||
Details the structure of tasks in tasks.json to help the agent understand
|
||||
and work with the task data effectively.
|
||||
triggers:
|
||||
- always
|
||||
task_fields:
|
||||
- name: "id"
|
||||
type: "number"
|
||||
description: "Unique identifier for the task. Used in commands and for tracking dependencies."
|
||||
example: "1"
|
||||
|
||||
- name: "title"
|
||||
type: "string"
|
||||
description: "Brief, descriptive title of the task."
|
||||
example: "Initialize Repo"
|
||||
|
||||
- name: "description"
|
||||
type: "string"
|
||||
description: "Concise description of what the task involves."
|
||||
example: "Create a new repository, set up initial structure."
|
||||
|
||||
- name: "status"
|
||||
type: "string"
|
||||
description: "Current state of the task. Common values: 'pending', 'done', 'deferred'."
|
||||
example: "pending"
|
||||
|
||||
- name: "dependencies"
|
||||
type: "array of numbers"
|
||||
description: "IDs of tasks that must be completed before this task can be started."
|
||||
example: "[1, 2]"
|
||||
|
||||
- name: "priority"
|
||||
type: "string"
|
||||
description: "Importance level of the task. Common values: 'high', 'medium', 'low'."
|
||||
example: "high"
|
||||
|
||||
- name: "details"
|
||||
type: "string"
|
||||
description: "In-depth instructions, references, or context for implementing the task."
|
||||
example: "Use GitHub client ID/secret, handle callback, set session token."
|
||||
|
||||
- name: "testStrategy"
|
||||
type: "string"
|
||||
description: "Approach for verifying the task has been completed correctly."
|
||||
example: "Deploy and call endpoint to confirm 'Hello World' response."
|
||||
|
||||
- name: "subtasks"
|
||||
type: "array of objects"
|
||||
description: "List of smaller, more specific tasks that make up the main task."
|
||||
example: "[{\"id\": 1, \"title\": \"Configure OAuth\", \"description\": \"...\", \"status\": \"pending\", \"dependencies\": [], \"acceptanceCriteria\": \"...\"}]"
|
||||
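
Put together, a single task entry in tasks.json looks roughly like this (all values are taken from the field examples above and are illustrative):

```json
{
  "id": 1,
  "title": "Initialize Repo",
  "description": "Create a new repository, set up initial structure.",
  "status": "pending",
  "dependencies": [],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Deploy and call endpoint to confirm 'Hello World' response.",
  "subtasks": [
    {
      "id": 1,
      "title": "Configure OAuth",
      "description": "...",
      "status": "pending",
      "dependencies": [],
      "acceptanceCriteria": "..."
    }
  ]
}
```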

- **Task Complexity Analysis**
  - Run `node scripts/dev.js analyze-complexity --research` for comprehensive analysis
  - Review complexity report in scripts/task-complexity-report.json
  - Focus on tasks with highest complexity scores (8-10) for detailed breakdown
  - Use analysis results to determine appropriate subtask allocation
  - Note that reports are automatically used by the expand command
- name: "Environment Variables Reference"
|
||||
description: >
|
||||
Details the environment variables that can be used to configure the dev.js script.
|
||||
These variables should be set in a .env file at the root of the project.
|
||||
triggers:
|
||||
- always
|
||||
variables:
|
||||
- name: "ANTHROPIC_API_KEY"
|
||||
required: true
|
||||
description: "Your Anthropic API key for Claude. Required for task generation and expansion."
|
||||
example: "ANTHROPIC_API_KEY=sk-ant-api03-..."
|
||||
|
||||
- name: "MODEL"
|
||||
required: false
|
||||
default: "claude-3-7-sonnet-20250219"
|
||||
description: "Specify which Claude model to use for task generation and expansion."
|
||||
example: "MODEL=claude-3-opus-20240229"
|
||||
|
||||
- name: "MAX_TOKENS"
|
||||
required: false
|
||||
default: "4000"
|
||||
description: "Maximum tokens for model responses. Higher values allow for more detailed task generation."
|
||||
example: "MAX_TOKENS=8000"
|
||||
|
||||
- name: "TEMPERATURE"
|
||||
required: false
|
||||
default: "0.7"
|
||||
description: "Temperature for model responses. Higher values (0.0-1.0) increase creativity but may reduce consistency."
|
||||
example: "TEMPERATURE=0.5"
|
||||
|
||||
- name: "DEBUG"
|
||||
required: false
|
||||
default: "false"
|
||||
description: "Enable debug logging. When true, detailed logs are written to dev-debug.log."
|
||||
example: "DEBUG=true"
|
||||
|
||||
- name: "LOG_LEVEL"
|
||||
required: false
|
||||
default: "info"
|
||||
description: "Log level for console output. Options: debug, info, warn, error."
|
||||
example: "LOG_LEVEL=debug"
|
||||
|
||||
- name: "DEFAULT_SUBTASKS"
|
||||
required: false
|
||||
default: "3"
|
||||
description: "Default number of subtasks when expanding a task."
|
||||
example: "DEFAULT_SUBTASKS=5"
|
||||
|
||||
- name: "DEFAULT_PRIORITY"
|
||||
required: false
|
||||
default: "medium"
|
||||
description: "Default priority for generated tasks. Options: high, medium, low."
|
||||
example: "DEFAULT_PRIORITY=high"
|
||||
|
||||
- name: "PROJECT_NAME"
|
||||
required: false
|
||||
default: "MCP SaaS MVP"
|
||||
description: "Override default project name in tasks.json metadata."
|
||||
example: "PROJECT_NAME=My Awesome Project"
|
||||
|
||||
- name: "PROJECT_VERSION"
|
||||
required: false
|
||||
default: "1.0.0"
|
||||
description: "Override default version in tasks.json metadata."
|
||||
example: "PROJECT_VERSION=2.1.0"
|
||||
- **Task Breakdown Process**
  - For tasks with complexity analysis, use `node scripts/dev.js expand --id=<id>`
  - Otherwise use `node scripts/dev.js expand --id=<id> --subtasks=<number>`
  - Add `--research` flag to leverage Perplexity AI for research-backed expansion
  - Use `--prompt="<context>"` to provide additional context when needed
  - Review and adjust generated subtasks as necessary
  - Use `--all` flag to expand multiple pending tasks at once

- **Implementation Drift Handling**
  - When implementation differs significantly from planned approach
  - When future tasks need modification due to current implementation choices
  - When new dependencies or requirements emerge
  - Call `node scripts/dev.js update --from=<futureTaskId> --prompt="<explanation>"` to update tasks.json

- **Task Status Management**
  - Use 'pending' for tasks ready to be worked on
  - Use 'done' for completed and verified tasks
  - Use 'deferred' for postponed tasks
  - Add custom status values as needed for project-specific workflows

- **Task File Format Reference**
  ```
  # Task ID: <id>
  # Title: <title>
  # Status: <status>
  # Dependencies: <comma-separated list of dependency IDs>
  # Priority: <priority>
  # Description: <brief description>
  # Details:
  <detailed implementation notes>

  # Test Strategy:
  <verification approach>
  ```
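
  For illustration, a filled-in task file in this format might look like the sketch below (the task content is invented, not from a real project):

  ```
  # Task ID: 3
  # Title: Implement GitHub OAuth Login
  # Status: pending
  # Dependencies: 1, 2
  # Priority: high
  # Description: Add GitHub OAuth so users can sign in.
  # Details:
  Use the GitHub client ID/secret, handle the callback route, and set a session token on success.

  # Test Strategy:
  Walk through the OAuth flow locally and confirm a session token is issued.
  ```
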
- **Command Reference: parse-prd**
  - Syntax: `node scripts/dev.js parse-prd --input=<prd-file.txt>`
  - Description: Parses a PRD document and generates a tasks.json file with structured tasks
  - Parameters:
    - `--input=<file>`: Path to the PRD text file (default: sample-prd.txt)
  - Example: `node scripts/dev.js parse-prd --input=requirements.txt`
  - Notes: Will overwrite existing tasks.json file. Use with caution.

- **Command Reference: update**
  - Syntax: `node scripts/dev.js update --from=<id> --prompt="<prompt>"`
  - Description: Updates tasks with ID >= specified ID based on the provided prompt
  - Parameters:
    - `--from=<id>`: Task ID from which to start updating (required)
    - `--prompt="<text>"`: Explanation of changes or new context (required)
  - Example: `node scripts/dev.js update --from=4 --prompt="Now we are using Express instead of Fastify."`
  - Notes: Only updates tasks not marked as 'done'. Completed tasks remain unchanged.

- **Command Reference: generate**
  - Syntax: `node scripts/dev.js generate`
  - Description: Generates individual task files in tasks/ directory based on tasks.json
  - Parameters: None
  - Example: `node scripts/dev.js generate`
  - Notes: Overwrites existing task files. Creates tasks/ directory if needed.

- **Command Reference: set-status**
  - Syntax: `node scripts/dev.js set-status --id=<id> --status=<status>`
  - Description: Updates the status of a specific task in tasks.json
  - Parameters:
    - `--id=<id>`: ID of the task to update (required)
    - `--status=<status>`: New status value (required)
  - Example: `node scripts/dev.js set-status --id=3 --status=done`
  - Notes: Common values are 'done', 'pending', and 'deferred', but any string is accepted.

- **Command Reference: list**
  - Syntax: `node scripts/dev.js list`
  - Description: Lists all tasks in tasks.json with IDs, titles, and status
  - Parameters: None
  - Example: `node scripts/dev.js list`
  - Notes: Provides quick overview of project progress. Use at start of sessions.

- **Command Reference: expand**
  - Syntax: `node scripts/dev.js expand --id=<id> [--num=<number>] [--research] [--prompt="<context>"]`
  - Description: Expands a task with subtasks for detailed implementation
  - Parameters:
    - `--id=<id>`: ID of task to expand (required unless using --all)
    - `--all`: Expand all pending tasks, prioritized by complexity
    - `--num=<number>`: Number of subtasks to generate (default: from complexity report)
    - `--research`: Use Perplexity AI for research-backed generation
    - `--prompt="<text>"`: Additional context for subtask generation
    - `--force`: Regenerate subtasks even for tasks that already have them
  - Example: `node scripts/dev.js expand --id=3 --num=5 --research --prompt="Focus on security aspects"`
  - Notes: Uses complexity report recommendations if available.

- **Command Reference: analyze-complexity**
  - Syntax: `node scripts/dev.js analyze-complexity [options]`
  - Description: Analyzes task complexity and generates expansion recommendations
  - Parameters:
    - `--output=<file>, -o`: Output file path (default: scripts/task-complexity-report.json)
    - `--model=<model>, -m`: Override LLM model to use
    - `--threshold=<number>, -t`: Minimum score for expansion recommendation (default: 5)
    - `--file=<path>, -f`: Use alternative tasks.json file
    - `--research, -r`: Use Perplexity AI for research-backed analysis
  - Example: `node scripts/dev.js analyze-complexity --research`
  - Notes: Report includes complexity scores, recommended subtasks, and tailored prompts.

- **Task Structure Fields**
  - **id**: Unique identifier for the task (Example: `1`)
  - **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
  - **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
  - **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
  - **dependencies**: IDs of prerequisite tasks (Example: `[1, 2]`)
  - **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
  - **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
  - **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
  - **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
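
  Put together, a single entry in tasks.json using these fields might look like the following sketch (all values are illustrative):

  ```json
  {
    "id": 3,
    "title": "Implement GitHub OAuth Login",
    "description": "Add GitHub OAuth so users can sign in.",
    "status": "pending",
    "dependencies": [1, 2],
    "priority": "high",
    "details": "Use GitHub client ID/secret, handle callback, set session token.",
    "testStrategy": "Walk through the OAuth flow and confirm a session token is issued.",
    "subtasks": [
      {
        "id": 1,
        "title": "Configure OAuth",
        "description": "Register the app and wire up client credentials.",
        "status": "pending",
        "dependencies": [],
        "acceptanceCriteria": "OAuth app configured; credentials loaded from .env"
      }
    ]
  }
  ```
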
- **Environment Variables Configuration**
  - **ANTHROPIC_API_KEY** (Required): Your Anthropic API key for Claude (Example: `ANTHROPIC_API_KEY=sk-ant-api03-...`)
  - **MODEL** (Default: `"claude-3-7-sonnet-20250219"`): Claude model to use (Example: `MODEL=claude-3-opus-20240229`)
  - **MAX_TOKENS** (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`)
  - **TEMPERATURE** (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`)
  - **DEBUG** (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`)
  - **LOG_LEVEL** (Default: `"info"`): Console output level (Example: `LOG_LEVEL=debug`)
  - **DEFAULT_SUBTASKS** (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`)
  - **DEFAULT_PRIORITY** (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`)
  - **PROJECT_NAME** (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`)
  - **PROJECT_VERSION** (Default: `"1.0.0"`): Version in metadata (Example: `PROJECT_VERSION=2.1.0`)
  - **PERPLEXITY_API_KEY**: For research-backed features (Example: `PERPLEXITY_API_KEY=pplx-...`)
  - **PERPLEXITY_MODEL** (Default: `"sonar-medium-online"`): Perplexity model (Example: `PERPLEXITY_MODEL=sonar-large-online`)
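
  Assembled into one file, a representative .env built from these variables might look like this (the API keys are placeholders):

  ```
  ANTHROPIC_API_KEY=sk-ant-api03-...
  PERPLEXITY_API_KEY=pplx-...
  MODEL=claude-3-7-sonnet-20250219
  PERPLEXITY_MODEL=sonar-medium-online
  MAX_TOKENS=4000
  TEMPERATURE=0.7
  DEBUG=false
  LOG_LEVEL=info
  DEFAULT_SUBTASKS=3
  DEFAULT_PRIORITY=medium
  PROJECT_NAME=My Awesome Project
  PROJECT_VERSION=1.0.0
  ```
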
@@ -13,7 +13,8 @@
- User personas
- Key user flows
- UI/UX considerations]

</context>
<PRD>
# Technical Architecture
[Outline the technical implementation details:
- System components
@@ -25,23 +26,22 @@
[Break down the development process into phases:
- MVP requirements
- Future enhancements
- Timeline estimates]
- Do not think about timelines whatsoever -- all that matters is scope and detailing exactly what needs to be built in each phase so it can later be cut up into tasks]

# Success Metrics
[Define how success will be measured:
- Key performance indicators
- User adoption metrics
- Business goals]
# Logical Dependency Chain
[Define the logical order of development:
- Which features need to be built first (foundation)
- Getting as quickly as possible to a usable/visible front end that works
- Properly pacing and scoping each feature so it is atomic but can also be built upon and improved as development progresses]

# Risks and Mitigations
[Identify potential risks and how they'll be addressed:
- Technical challenges
- Market risks
- Figuring out the MVP that we can build upon
- Resource constraints]

# Appendix
[Include any additional information:
- Research findings
- Competitive analysis
- Technical specifications]
</context>
</PRD>
@@ -12,6 +12,7 @@ In an AI-driven development process—particularly with tools like [Cursor](http
4. **Generate** individual task files (e.g., `task_001.txt`) for easy reference or to feed into an AI coding workflow.
5. **Set task status**—mark tasks as `done`, `pending`, or `deferred` based on progress.
6. **Expand** tasks with subtasks—break down complex tasks into smaller, more manageable subtasks.
7. **Research-backed subtask generation**—use Perplexity AI to generate more informed and contextually relevant subtasks.

## Configuration

@@ -24,6 +25,8 @@ The script can be configured through environment variables in a `.env` file at t
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
@@ -56,6 +59,44 @@ The script can be configured through environment variables in a `.env` file at t

Run `node scripts/dev.js` without arguments to see detailed usage information.

## Listing Tasks

The `list` command allows you to view all tasks and their status:

```bash
# List all tasks
node scripts/dev.js list

# List tasks with a specific status
node scripts/dev.js list --status=pending

# List tasks and include their subtasks
node scripts/dev.js list --with-subtasks

# List tasks with a specific status and include their subtasks
node scripts/dev.js list --status=pending --with-subtasks
```

## Updating Tasks

The `update` command allows you to update tasks based on new information or implementation changes:

```bash
# Update tasks starting from ID 4 with a new prompt
node scripts/dev.js update --from=4 --prompt="Refactor tasks from ID 4 onward to use Express instead of Fastify"

# Update all tasks (default from=1)
node scripts/dev.js update --prompt="Add authentication to all relevant tasks"

# Specify a different tasks file
node scripts/dev.js update --file=custom-tasks.json --from=5 --prompt="Change database from MongoDB to PostgreSQL"
```

Notes:
- The `--prompt` parameter is required and should explain the changes or new context
- Only tasks that aren't marked as 'done' will be updated
- Tasks with ID >= the specified --from value will be updated

## Setting Task Status

The `set-status` command allows you to change a task's status:
@@ -89,7 +130,7 @@ The `expand` command allows you to break down tasks into subtasks for more detai
node scripts/dev.js expand --id=3

# Expand a specific task with 5 subtasks
node scripts/dev.js expand --id=3 --subtasks=5
node scripts/dev.js expand --id=3 --num=5

# Expand a task with additional context
node scripts/dev.js expand --id=3 --prompt="Focus on security aspects"
@@ -99,12 +140,35 @@ node scripts/dev.js expand --all

# Force regeneration of subtasks for all pending tasks
node scripts/dev.js expand --all --force

# Use Perplexity AI for research-backed subtask generation
node scripts/dev.js expand --id=3 --research

# Use Perplexity AI for research-backed generation on all pending tasks
node scripts/dev.js expand --all --research
```

Notes:
- Tasks marked as 'done' or 'completed' are always skipped
- By default, tasks that already have subtasks are skipped unless `--force` is used
- Subtasks include title, description, dependencies, and acceptance criteria
- The `--research` flag uses Perplexity AI to generate more informed and contextually relevant subtasks
- If the Perplexity API is unavailable, the script will fall back to using Anthropic's Claude

## AI Integration

The script integrates with two AI services:

1. **Anthropic Claude**: Used for parsing PRDs, generating tasks, and creating subtasks.
2. **Perplexity AI**: Used for research-backed subtask generation when the `--research` flag is specified.

The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude.
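
As a rough illustration of that wiring (this is not the script's actual code, and the helper name is invented), the OpenAI client can be pointed at Perplexity's endpoint, with Claude as the fallback:

```javascript
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

// Point the OpenAI-compatible client at Perplexity's API.
const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: "https://api.perplexity.ai",
});

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Hypothetical helper: try Perplexity first, fall back to Claude on any error.
async function generateSubtasks(prompt) {
  try {
    const res = await perplexity.chat.completions.create({
      model: process.env.PERPLEXITY_MODEL || "sonar-medium-online",
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0].message.content;
  } catch (err) {
    console.warn("Perplexity unavailable, falling back to Claude:", err.message);
    const res = await anthropic.messages.create({
      model: process.env.MODEL || "claude-3-7-sonnet-20250219",
      max_tokens: Number(process.env.MAX_TOKENS) || 4000,
      messages: [{ role: "user", content: prompt }],
    });
    return res.content[0].text;
  }
}
```
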
To use the Perplexity integration:
1. Obtain a Perplexity API key
2. Add `PERPLEXITY_API_KEY` to your `.env` file
3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online")
4. Use the `--research` flag with the `expand` command

## Logging

@@ -115,3 +179,79 @@ The script supports different logging levels controlled by the `LOG_LEVEL` envir
- `error`: Error messages that might prevent execution

When `DEBUG=true` is set, debug logs are also written to a `dev-debug.log` file in the project root.

## Analyzing Task Complexity

The `analyze-complexity` command allows you to automatically assess task complexity and generate expansion recommendations:

```bash
# Analyze all tasks and generate expansion recommendations
node scripts/dev.js analyze-complexity

# Specify a custom output file
node scripts/dev.js analyze-complexity --output=custom-report.json

# Override the model used for analysis
node scripts/dev.js analyze-complexity --model=claude-3-opus-20240229

# Set a custom complexity threshold (1-10)
node scripts/dev.js analyze-complexity --threshold=6

# Use Perplexity AI for research-backed complexity analysis
node scripts/dev.js analyze-complexity --research
```

Notes:
- The command uses Claude to analyze each task's complexity (or Perplexity with the --research flag)
- Tasks are scored on a scale of 1-10
- Each task receives a recommended number of subtasks based on the DEFAULT_SUBTASKS configuration
- The default output path is `scripts/task-complexity-report.json`
- Each task in the analysis includes a ready-to-use `expansionCommand` that can be copied directly to the terminal or executed programmatically (see the sketch below)
- Tasks with complexity scores below the threshold (default: 5) may not need expansion
- The research flag provides more contextual and informed complexity assessments
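
For the programmatic route, a small hypothetical Node script (not part of dev.js) could read the report, whose structure is shown further below, and execute each recommended command:

```javascript
import { readFileSync } from "fs";
import { execSync } from "child_process";

// Load the complexity report generated by analyze-complexity.
const report = JSON.parse(
  readFileSync("scripts/task-complexity-report.json", "utf8")
);

// Run the ready-to-use expansion command for each task at or above the threshold.
for (const task of report.complexityAnalysis) {
  if (task.complexityScore >= report.meta.thresholdScore) {
    console.log(`Expanding task ${task.taskId}: ${task.taskTitle}`);
    execSync(task.expansionCommand, { stdio: "inherit" });
  }
}
```
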
### Integration with Expand Command

The `expand` command automatically checks for and uses complexity analysis if available:

```bash
# Expand a task, using complexity report recommendations if available
node scripts/dev.js expand --id=8

# Expand all tasks, prioritizing by complexity score if a report exists
node scripts/dev.js expand --all

# Override recommendations with explicit values
node scripts/dev.js expand --id=8 --num=5 --prompt="Custom prompt"
```

When a complexity report exists:
- The `expand` command will use the recommended subtask count from the report (unless overridden)
- It will use the tailored expansion prompt from the report (unless a custom prompt is provided)
- When using `--all`, tasks are sorted by complexity score (highest first)
- The `--research` flag is preserved from the complexity analysis to expansion

The output report structure is:
```json
{
  "meta": {
    "generatedAt": "2023-06-15T12:34:56.789Z",
    "tasksAnalyzed": 20,
    "thresholdScore": 5,
    "projectName": "Your Project Name",
    "usedResearch": true
  },
  "complexityAnalysis": [
    {
      "taskId": 8,
      "taskTitle": "Develop Implementation Drift Handling",
      "complexityScore": 9.5,
      "recommendedSubtasks": 6,
      "expansionPrompt": "Create subtasks that handle detecting...",
      "reasoning": "This task requires sophisticated logic...",
      "expansionCommand": "node scripts/dev.js expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
    },
    // More tasks sorted by complexity score (highest first)
  ]
}
```