New update-subtask command.

This commit is contained in:
Eyal Toledano
2025-03-29 19:14:44 -04:00
parent f3e8ff315d
commit 31d6bd59e8
6 changed files with 877 additions and 454 deletions

View File

@@ -97,6 +97,32 @@ alwaysApply: true
- Example: `task-master update --from=4 --prompt="Now we are using Express instead of Fastify."`
- Notes: Only updates tasks not marked as 'done'. Completed tasks remain unchanged.
- **Command Reference: update-task**
- Legacy Syntax: `node scripts/dev.js update-task --id=<id> --prompt="<prompt>"`
- CLI Syntax: `task-master update-task --id=<id> --prompt="<prompt>"`
- Description: Updates a single task by ID with new information
- Parameters:
- `--id=<id>`: ID of the task to update (required)
- `--prompt="<text>"`: New information or context to update the task (required)
- `--research`: Use Perplexity AI for research-backed updates
- Example: `task-master update-task --id=5 --prompt="Use JWT for authentication instead of sessions."`
- Notes: Only updates tasks not marked as 'done'. Preserves completed subtasks.
- **Command Reference: update-subtask**
- Legacy Syntax: `node scripts/dev.js update-subtask --id=<id> --prompt="<prompt>"`
- CLI Syntax: `task-master update-subtask --id=<id> --prompt="<prompt>"`
- Description: Appends additional information to a specific subtask without replacing existing content
- Parameters:
- `--id=<id>`: ID of the subtask to update in format "parentId.subtaskId" (required)
- `--prompt="<text>"`: Information to add to the subtask (required)
- `--research`: Use Perplexity AI for research-backed updates
- Example: `task-master update-subtask --id=5.2 --prompt="Add details about API rate limiting."`
- Notes:
- Appends new information to subtask details with timestamp
- Does not replace existing content, only adds to it
- Uses XML-like tags to clearly mark added information
- Will not update subtasks marked as 'done' or 'completed'
- **Command Reference: generate**
- Legacy Syntax: `node scripts/dev.js generate`
- CLI Syntax: `task-master generate`

464
README.md
View File

@@ -362,466 +362,30 @@ task-master show 1.2
task-master update --from=<id> --prompt="<prompt>"
```
### Generate Task Files
### Update a Specific Task
```bash
# Generate individual task files from tasks.json
task-master generate
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"
# Use research-backed updates with Perplexity AI
task-master update-task --id=<id> --prompt="<prompt>" --research
```
### Set Task Status
### Update a Subtask
```bash
# Set status of a single task
task-master set-status --id=<id> --status=<status>
# Append additional information to a specific subtask
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
# Set status for multiple tasks
task-master set-status --id=1,2,3 --status=<status>
# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
# Set status for subtasks
task-master set-status --id=1.1,1.2 --status=<status>
# Use research-backed updates with Perplexity AI
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
### Expand Tasks
```bash
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"
# Expand all pending tasks
task-master expand --all
# Force regeneration of subtasks for tasks that already have them
task-master expand --all --force
# Research-backed subtask generation for a specific task
task-master expand --id=<id> --research
# Research-backed generation for all tasks
task-master expand --all --research
```
### Clear Subtasks
```bash
# Clear subtasks from a specific task
task-master clear-subtasks --id=<id>
# Clear subtasks from multiple tasks
task-master clear-subtasks --id=1,2,3
# Clear subtasks from all tasks
task-master clear-subtasks --all
```
### Analyze Task Complexity
```bash
# Analyze complexity of all tasks
task-master analyze-complexity
# Save report to a custom location
task-master analyze-complexity --output=my-report.json
# Use a specific LLM model
task-master analyze-complexity --model=claude-3-opus-20240229
# Set a custom complexity threshold (1-10)
task-master analyze-complexity --threshold=6
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json
# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
### View Complexity Report
```bash
# Display the task complexity analysis report
task-master complexity-report
# View a report at a custom location
task-master complexity-report --file=my-report.json
```
### Managing Task Dependencies
```bash
# Add a dependency to a task
task-master add-dependency --id=<id> --depends-on=<id>
# Remove a dependency from a task
task-master remove-dependency --id=<id> --depends-on=<id>
# Validate dependencies without fixing them
task-master validate-dependencies
# Find and fix invalid dependencies automatically
task-master fix-dependencies
```
### Add a New Task
````bash
# Add a new task using AI
task-master add-task --prompt="Description of the new task"
# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3
# Add a task with priority
# Task Master
### by [@eyaltoledano](https://x.com/eyaltoledano)
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
## Requirements
- Node.js 14.0.0 or higher
- Anthropic API key (Claude API)
- Anthropic SDK version 0.39.0 or higher
- OpenAI SDK (for Perplexity API integration, optional)
## Configuration
The script can be configured through environment variables in a `.env` file at the root of the project:
### Required Configuration
- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude
### Optional Configuration
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
- `PROJECT_NAME`: Override default project name in tasks.json
- `PROJECT_VERSION`: Override default version in tasks.json
## Installation
```bash
# Install globally
npm install -g task-master-ai
# OR install locally within your project
npm install task-master-ai
````
### Initialize a new project
```bash
# If installed globally
task-master init
# If installed locally
npx task-master-init
```
This will prompt you for project details and set up a new project with the necessary files and structure.
### Important Notes
1. This package uses ES modules. Your package.json should include `"type": "module"`.
2. The Anthropic SDK version should be 0.39.0 or higher.
## Quick Start with Global Commands
After installing the package globally, you can use these CLI commands from any directory:
```bash
# Initialize a new project
task-master init
# Parse a PRD and generate tasks
task-master parse-prd your-prd.txt
# List all tasks
task-master list
# Show the next task to work on
task-master next
# Generate task files
task-master generate
```
## Troubleshooting
### If `task-master init` doesn't respond:
Try running it with Node directly:
```bash
node node_modules/claude-task-master/scripts/init.js
```
Or clone the repository and run:
```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```
## Task Structure
Tasks in tasks.json have the following structure:
- `id`: Unique identifier for the task (Example: `1`)
- `title`: Brief, descriptive title of the task (Example: `"Initialize Repo"`)
- `description`: Concise description of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- `status`: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- `dependencies`: IDs of tasks that must be completed before this task (Example: `[1, 2]`)
- Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
- This helps quickly identify which prerequisite tasks are blocking work
- `priority`: Importance level of the task (Example: `"high"`, `"medium"`, `"low"`)
- `details`: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- `testStrategy`: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- `subtasks`: List of smaller, more specific tasks that make up the main task (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
## Integrating with Cursor AI
Claude Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development.
### Setup with Cursor
1. After initializing your project, open it in Cursor
2. The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
4. Open Cursor's AI chat and switch to Agent mode
### Initial Task Generation
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```
The agent will execute:
```bash
task-master parse-prd scripts/prd.txt
```
This will:
- Parse your PRD document
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies
- The agent will understand this process due to the Cursor rules
### Generate Individual Task Files
Next, ask the agent to generate individual task files:
```
Please generate individual task files from tasks.json
```
The agent will execute:
```bash
task-master generate
```
This creates individual task files in the `tasks/` directory (e.g., `task_001.txt`, `task_002.txt`), making it easier to reference specific tasks.
## AI-Driven Development Workflow
The Cursor agent is pre-configured (via the rules file) to follow this workflow:
### 1. Task Discovery and Selection
Ask the agent to list available tasks:
```
What tasks are available to work on next?
```
The agent will:
- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
### 2. Task Implementation
When implementing a task, the agent will:
- Reference the task's details section for implementation specifics
- Consider dependencies on previous tasks
- Follow the project's coding standards
- Create appropriate tests based on the task's testStrategy
You can ask:
```
Let's implement task 3. What does it involve?
```
### 3. Task Verification
Before marking a task as complete, verify it according to:
- The task's specified testStrategy
- Any automated tests in the codebase
- Manual verification if required
### 4. Task Completion
When a task is completed, tell the agent:
```
Task 3 is now complete. Please update its status.
```
The agent will execute:
```bash
task-master set-status --id=3 --status=done
```
### 5. Handling Implementation Drift
If during implementation, you discover that:
- The current approach differs significantly from what was planned
- Future tasks need to be modified due to current implementation choices
- New dependencies or requirements have emerged
Tell the agent:
```
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
```
The agent will execute:
```bash
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
```
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
### 6. Breaking Down Complex Tasks
For complex tasks that need more granularity:
```
Task 5 seems complex. Can you break it down into subtasks?
```
The agent will execute:
```bash
task-master expand --id=5 --num=3
```
You can provide additional context:
```
Please break down task 5 with a focus on security considerations.
```
The agent will execute:
```bash
task-master expand --id=5 --prompt="Focus on security aspects"
```
You can also expand all pending tasks:
```
Please break down all pending tasks into subtasks.
```
The agent will execute:
```bash
task-master expand --all
```
For research-backed subtask generation using Perplexity AI:
```
Please break down task 5 using research-backed generation.
```
The agent will execute:
```bash
task-master expand --id=5 --research
```
## Command Reference
Here's a comprehensive reference of all available commands:
### Parse PRD
```bash
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
```
### List Tasks
```bash
# List all tasks
task-master list
# List tasks with a specific status
task-master list --status=<status>
# List tasks with subtasks
task-master list --with-subtasks
# List tasks with a specific status and include subtasks
task-master list --status=<status> --with-subtasks
```
### Show Next Task
```bash
# Show the next task to work on based on dependencies and status
task-master next
```
### Show Specific Task
```bash
# Show details of a specific task
task-master show <id>
# or
task-master show --id=<id>
# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
```
### Update Tasks
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
```
Unlike the `update-task` command which replaces task information, the `update-subtask` command *appends* new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
### Generate Task Files

View File

@@ -44,6 +44,56 @@ function getPerplexityClient() {
return perplexity;
}
/**
* Get the best available AI model for a given operation
* @param {Object} options - Options for model selection
* @param {boolean} options.claudeOverloaded - Whether Claude is currently overloaded
* @param {boolean} options.requiresResearch - Whether the operation requires research capabilities
* @returns {Object} Selected model info with type and client
*/
function getAvailableAIModel(options = {}) {
const { claudeOverloaded = false, requiresResearch = false } = options;
// First choice: Perplexity if research is required and it's available
if (requiresResearch && process.env.PERPLEXITY_API_KEY) {
try {
const client = getPerplexityClient();
return { type: 'perplexity', client };
} catch (error) {
log('warn', `Perplexity not available: ${error.message}`);
// Fall through to Claude
}
}
// Second choice: Claude if not overloaded
if (!claudeOverloaded && process.env.ANTHROPIC_API_KEY) {
return { type: 'claude', client: anthropic };
}
// Third choice: Perplexity as Claude fallback (even if research not required)
if (process.env.PERPLEXITY_API_KEY) {
try {
const client = getPerplexityClient();
log('info', 'Claude is overloaded, falling back to Perplexity');
return { type: 'perplexity', client };
} catch (error) {
log('warn', `Perplexity fallback not available: ${error.message}`);
// Fall through to Claude anyway with warning
}
}
// Last resort: Use Claude even if overloaded (might fail)
if (process.env.ANTHROPIC_API_KEY) {
if (claudeOverloaded) {
log('warn', 'Claude is overloaded but no alternatives are available. Proceeding with Claude anyway.');
}
return { type: 'claude', client: anthropic };
}
// No models available
throw new Error('No AI models available. Please set ANTHROPIC_API_KEY and/or PERPLEXITY_API_KEY.');
}
/**
* Handle Claude API errors with user-friendly messages
* @param {Error} error - The error from Claude API
@@ -54,6 +104,10 @@ function handleClaudeError(error) {
if (error.type === 'error' && error.error) {
switch (error.error.type) {
case 'overloaded_error':
// Check if we can use Perplexity as a fallback
if (process.env.PERPLEXITY_API_KEY) {
return 'Claude is currently overloaded. Trying to fall back to Perplexity AI.';
}
return 'Claude is currently experiencing high demand and is overloaded. Please wait a few minutes and try again.';
case 'rate_limit_error':
return 'You have exceeded the rate limit. Please wait a few minutes before making more requests.';
@@ -676,5 +730,6 @@ export {
generateSubtasksWithPerplexity,
parseSubtasksFromText,
generateComplexityAnalysisPrompt,
handleClaudeError
handleClaudeError,
getAvailableAIModel
};

View File

@@ -24,7 +24,8 @@ import {
addSubtask,
removeSubtask,
analyzeTaskComplexity,
updateTaskById
updateTaskById,
updateSubtaskById
} from './task-manager.js';
import {
@@ -145,7 +146,7 @@ function registerCommands(programInstance) {
await updateTasks(tasksPath, fromId, prompt, useResearch);
});
// updateTask command
// update-task command
programInstance
.command('update-task')
.description('Update a single task by ID with new information')
@@ -231,6 +232,91 @@ function registerCommands(programInstance) {
}
});
// update-subtask command
programInstance
.command('update-subtask')
.description('Update a subtask by appending additional timestamped information')
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-i, --id <id>', 'Subtask ID to update in format "parentId.subtaskId" (required)')
.option('-p, --prompt <text>', 'Prompt explaining what information to add (required)')
.option('-r, --research', 'Use Perplexity AI for research-backed updates')
.action(async (options) => {
try {
const tasksPath = options.file;
// Validate required parameters
if (!options.id) {
console.error(chalk.red('Error: --id parameter is required'));
console.log(chalk.yellow('Usage example: task-master update-subtask --id=5.2 --prompt="Add more details about the API endpoint"'));
process.exit(1);
}
// Validate subtask ID format (should contain a dot)
const subtaskId = options.id;
if (!subtaskId.includes('.')) {
console.error(chalk.red(`Error: Invalid subtask ID format: ${subtaskId}. Subtask ID must be in format "parentId.subtaskId"`));
console.log(chalk.yellow('Usage example: task-master update-subtask --id=5.2 --prompt="Add more details about the API endpoint"'));
process.exit(1);
}
if (!options.prompt) {
console.error(chalk.red('Error: --prompt parameter is required. Please provide information to add to the subtask.'));
console.log(chalk.yellow('Usage example: task-master update-subtask --id=5.2 --prompt="Add more details about the API endpoint"'));
process.exit(1);
}
const prompt = options.prompt;
const useResearch = options.research || false;
// Validate tasks file exists
if (!fs.existsSync(tasksPath)) {
console.error(chalk.red(`Error: Tasks file not found at path: ${tasksPath}`));
if (tasksPath === 'tasks/tasks.json') {
console.log(chalk.yellow('Hint: Run task-master init or task-master parse-prd to create tasks.json first'));
} else {
console.log(chalk.yellow(`Hint: Check if the file path is correct: ${tasksPath}`));
}
process.exit(1);
}
console.log(chalk.blue(`Updating subtask ${subtaskId} with prompt: "${prompt}"`));
console.log(chalk.blue(`Tasks file: ${tasksPath}`));
if (useResearch) {
// Verify Perplexity API key exists if using research
if (!process.env.PERPLEXITY_API_KEY) {
console.log(chalk.yellow('Warning: PERPLEXITY_API_KEY environment variable is missing. Research-backed updates will not be available.'));
console.log(chalk.yellow('Falling back to Claude AI for subtask update.'));
} else {
console.log(chalk.blue('Using Perplexity AI for research-backed subtask update'));
}
}
const result = await updateSubtaskById(tasksPath, subtaskId, prompt, useResearch);
if (!result) {
console.log(chalk.yellow('\nSubtask update was not completed. Review the messages above for details.'));
}
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
// Provide more helpful error messages for common issues
if (error.message.includes('subtask') && error.message.includes('not found')) {
console.log(chalk.yellow('\nTo fix this issue:'));
console.log(' 1. Run task-master list --with-subtasks to see all available subtask IDs');
console.log(' 2. Use a valid subtask ID with the --id parameter in format "parentId.subtaskId"');
} else if (error.message.includes('API key')) {
console.log(chalk.yellow('\nThis error is related to API keys. Check your environment variables.'));
}
if (CONFIG.debug) {
console.error(error);
}
process.exit(1);
}
});
// generate command
programInstance
.command('generate')

View File

@@ -2969,11 +2969,319 @@ async function removeSubtask(tasksPath, subtaskId, convertToTask = false, genera
}
}
/**
* Update a subtask by appending additional information to its description and details
* @param {string} tasksPath - Path to the tasks.json file
* @param {string} subtaskId - ID of the subtask to update in format "parentId.subtaskId"
* @param {string} prompt - Prompt for generating additional information
* @param {boolean} useResearch - Whether to use Perplexity AI for research-backed updates
* @returns {Object|null} - The updated subtask or null if update failed
*/
async function updateSubtaskById(tasksPath, subtaskId, prompt, useResearch = false) {
try {
log('info', `Updating subtask ${subtaskId} with prompt: "${prompt}"`);
// Validate subtask ID format
if (!subtaskId || typeof subtaskId !== 'string' || !subtaskId.includes('.')) {
throw new Error(`Invalid subtask ID format: ${subtaskId}. Subtask ID must be in format "parentId.subtaskId"`);
}
// Validate prompt
if (!prompt || typeof prompt !== 'string' || prompt.trim() === '') {
throw new Error('Prompt cannot be empty. Please provide context for the subtask update.');
}
// Validate research flag
if (useResearch && (!perplexity || !process.env.PERPLEXITY_API_KEY)) {
log('warn', 'Perplexity AI is not available. Falling back to Claude AI.');
console.log(chalk.yellow('Perplexity AI is not available (API key may be missing). Falling back to Claude AI.'));
useResearch = false;
}
// Validate tasks file exists
if (!fs.existsSync(tasksPath)) {
throw new Error(`Tasks file not found at path: ${tasksPath}`);
}
// Read the tasks file
const data = readJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}. The file may be corrupted or have an invalid format.`);
}
// Parse parent and subtask IDs
const [parentIdStr, subtaskIdStr] = subtaskId.split('.');
const parentId = parseInt(parentIdStr, 10);
const subtaskIdNum = parseInt(subtaskIdStr, 10);
if (isNaN(parentId) || parentId <= 0 || isNaN(subtaskIdNum) || subtaskIdNum <= 0) {
throw new Error(`Invalid subtask ID format: ${subtaskId}. Both parent ID and subtask ID must be positive integers.`);
}
// Find the parent task
const parentTask = data.tasks.find(task => task.id === parentId);
if (!parentTask) {
throw new Error(`Parent task with ID ${parentId} not found. Please verify the task ID and try again.`);
}
// Find the subtask
if (!parentTask.subtasks || !Array.isArray(parentTask.subtasks)) {
throw new Error(`Parent task ${parentId} has no subtasks.`);
}
const subtask = parentTask.subtasks.find(st => st.id === subtaskIdNum);
if (!subtask) {
throw new Error(`Subtask with ID ${subtaskId} not found. Please verify the subtask ID and try again.`);
}
// Check if subtask is already completed
if (subtask.status === 'done' || subtask.status === 'completed') {
log('warn', `Subtask ${subtaskId} is already marked as done and cannot be updated`);
console.log(boxen(
chalk.yellow(`Subtask ${subtaskId} is already marked as ${subtask.status} and cannot be updated.`) + '\n\n' +
chalk.white('Completed subtasks are locked to maintain consistency. To modify a completed subtask, you must first:') + '\n' +
chalk.white('1. Change its status to "pending" or "in-progress"') + '\n' +
chalk.white('2. Then run the update-subtask command'),
{ padding: 1, borderColor: 'yellow', borderStyle: 'round' }
));
return null;
}
// Show the subtask that will be updated
const table = new Table({
head: [
chalk.cyan.bold('ID'),
chalk.cyan.bold('Title'),
chalk.cyan.bold('Status')
],
colWidths: [10, 55, 10]
});
table.push([
subtaskId,
truncate(subtask.title, 52),
getStatusWithColor(subtask.status)
]);
console.log(boxen(
chalk.white.bold(`Updating Subtask #${subtaskId}`),
{ padding: 1, borderColor: 'blue', borderStyle: 'round', margin: { top: 1, bottom: 0 } }
));
console.log(table.toString());
// Build the system prompt
const systemPrompt = `You are an AI assistant helping to enhance a software development subtask with additional information.
You will be given a subtask and a prompt requesting specific details or clarification.
Your job is to generate concise, technically precise information that addresses the prompt.
Guidelines:
1. Focus ONLY on generating the additional information requested in the prompt
2. Be specific, technical, and actionable in your response
3. Keep your response as low level as possible, the goal is to provide the most detailed information possible to complete the task.
4. Format your response to be easily readable when appended to existing text
5. Include code snippets, links to documentation, or technical details when appropriate
6. Do NOT include any preamble, conclusion or meta-commentary
7. Return ONLY the new information to be added - do not repeat or summarize existing content`;
const subtaskData = JSON.stringify(subtask, null, 2);
let additionalInformation;
const loadingIndicator = startLoadingIndicator(useResearch
? 'Generating additional information with Perplexity AI research...'
: 'Generating additional information with Claude AI...');
try {
if (useResearch) {
log('info', 'Using Perplexity AI for research-backed subtask update');
// Verify Perplexity API key exists
if (!process.env.PERPLEXITY_API_KEY) {
throw new Error('PERPLEXITY_API_KEY environment variable is missing but --research flag was used.');
}
try {
// Call Perplexity AI
const perplexityModel = process.env.PERPLEXITY_MODEL || 'sonar-pro';
const result = await perplexity.chat.completions.create({
model: perplexityModel,
messages: [
{
role: "system",
content: `${systemPrompt}\n\nUse your online search capabilities to research up-to-date information about the technologies and concepts mentioned in the subtask. Look for best practices, common issues, and implementation details that would be helpful.`
},
{
role: "user",
content: `Here is the subtask to enhance:
${subtaskData}
Please provide additional information addressing this request:
${prompt}
Return ONLY the new information to add - do not repeat existing content.`
}
],
temperature: parseFloat(process.env.TEMPERATURE || CONFIG.temperature),
max_tokens: parseInt(process.env.MAX_TOKENS || CONFIG.maxTokens),
});
additionalInformation = result.choices[0].message.content.trim();
} catch (perplexityError) {
throw new Error(`Perplexity API error: ${perplexityError.message}`);
}
} else {
// Call Claude to generate additional information
try {
// Verify Anthropic API key exists
if (!process.env.ANTHROPIC_API_KEY) {
throw new Error('ANTHROPIC_API_KEY environment variable is missing. Required for subtask updates.');
}
// Use streaming API call
let responseText = '';
let streamingInterval = null;
// Update loading indicator to show streaming progress
let dotCount = 0;
const readline = await import('readline');
streamingInterval = setInterval(() => {
readline.cursorTo(process.stdout, 0);
process.stdout.write(`Receiving streaming response from Claude${'.'.repeat(dotCount)}`);
dotCount = (dotCount + 1) % 4;
}, 500);
// Use streaming API call
const stream = await anthropic.messages.create({
model: CONFIG.model,
max_tokens: CONFIG.maxTokens,
temperature: CONFIG.temperature,
system: systemPrompt,
messages: [
{
role: 'user',
content: `Here is the subtask to enhance:
${subtaskData}
Please provide additional information addressing this request:
${prompt}
Return ONLY the new information to add - do not repeat existing content.`
}
],
stream: true
});
// Process the stream
for await (const chunk of stream) {
if (chunk.type === 'content_block_delta' && chunk.delta.text) {
responseText += chunk.delta.text;
}
}
if (streamingInterval) clearInterval(streamingInterval);
log('info', "Completed streaming response from Claude API!");
additionalInformation = responseText.trim();
} catch (claudeError) {
throw new Error(`Claude API error: ${claudeError.message}`);
}
}
// Validate the generated information
if (!additionalInformation || additionalInformation.trim() === '') {
throw new Error('Received empty response from AI. Unable to generate additional information.');
}
// Create timestamp
const currentDate = new Date();
const timestamp = currentDate.toISOString();
// Format the additional information with timestamp
const formattedInformation = `\n\n<info added on ${timestamp}>\n${additionalInformation}\n</info added on ${timestamp}>`;
// Append to subtask details and description
if (subtask.details) {
subtask.details += formattedInformation;
} else {
subtask.details = `${formattedInformation}`;
}
if (subtask.description) {
// Only append to description if it makes sense (for shorter updates)
if (additionalInformation.length < 200) {
subtask.description += ` [Updated: ${currentDate.toLocaleDateString()}]`;
}
}
// Update the subtask in the parent task
const subtaskIndex = parentTask.subtasks.findIndex(st => st.id === subtaskIdNum);
if (subtaskIndex !== -1) {
parentTask.subtasks[subtaskIndex] = subtask;
} else {
throw new Error(`Subtask with ID ${subtaskId} not found in parent task's subtasks array.`);
}
// Update the parent task in the original data
const parentIndex = data.tasks.findIndex(t => t.id === parentId);
if (parentIndex !== -1) {
data.tasks[parentIndex] = parentTask;
} else {
throw new Error(`Parent task with ID ${parentId} not found in tasks array.`);
}
// Write the updated tasks to the file
writeJSON(tasksPath, data);
log('success', `Successfully updated subtask ${subtaskId}`);
// Generate individual task files
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
console.log(boxen(
chalk.green(`Successfully updated subtask #${subtaskId}`) + '\n\n' +
chalk.white.bold('Title:') + ' ' + subtask.title + '\n\n' +
chalk.white.bold('Information Added:') + '\n' +
chalk.white(truncate(additionalInformation, 300, true)),
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
));
// Return the updated subtask for testing purposes
return subtask;
} finally {
stopLoadingIndicator(loadingIndicator);
}
} catch (error) {
log('error', `Error updating subtask: ${error.message}`);
console.error(chalk.red(`Error: ${error.message}`));
// Provide more helpful error messages for common issues
if (error.message.includes('ANTHROPIC_API_KEY')) {
console.log(chalk.yellow('\nTo fix this issue, set your Anthropic API key:'));
console.log(' export ANTHROPIC_API_KEY=your_api_key_here');
} else if (error.message.includes('PERPLEXITY_API_KEY')) {
console.log(chalk.yellow('\nTo fix this issue:'));
console.log(' 1. Set your Perplexity API key: export PERPLEXITY_API_KEY=your_api_key_here');
console.log(' 2. Or run without the research flag: task-master update-subtask --id=<id> --prompt="..."');
} else if (error.message.includes('not found')) {
console.log(chalk.yellow('\nTo fix this issue:'));
console.log(' 1. Run task-master list --with-subtasks to see all available subtask IDs');
console.log(' 2. Use a valid subtask ID with the --id parameter in format "parentId.subtaskId"');
}
if (CONFIG.debug) {
console.error(error);
}
return null;
}
}
// Export task manager functions
export {
parsePRD,
updateTasks,
updateTaskById,
updateSubtaskById,
generateTaskFiles,
setTaskStatus,
updateSingleTaskStatus,

View File

@@ -1651,7 +1651,7 @@ const testRemoveSubtask = (tasksPath, subtaskId, convertToTask = false, generate
// Parse the subtask ID (format: "parentId.subtaskId")
if (!subtaskId.includes('.')) {
throw new Error(`Invalid subtask ID format: ${subtaskId}. Expected format: "parentId.subtaskId"`);
throw new Error(`Invalid subtask ID format: ${subtaskId}`);
}
const [parentIdStr, subtaskIdStr] = subtaskId.split('.');
@@ -2013,4 +2013,388 @@ describe.skip('updateTaskById function', () => {
// Clean up
delete process.env.PERPLEXITY_API_KEY;
});
});
// Mock implementation of updateSubtaskById for testing
const testUpdateSubtaskById = async (tasksPath, subtaskId, prompt, useResearch = false) => {
try {
// Parse parent and subtask IDs
if (!subtaskId || typeof subtaskId !== 'string' || !subtaskId.includes('.')) {
throw new Error(`Invalid subtask ID format: ${subtaskId}`);
}
const [parentIdStr, subtaskIdStr] = subtaskId.split('.');
const parentId = parseInt(parentIdStr, 10);
const subtaskIdNum = parseInt(subtaskIdStr, 10);
if (isNaN(parentId) || parentId <= 0 || isNaN(subtaskIdNum) || subtaskIdNum <= 0) {
throw new Error(`Invalid subtask ID format: ${subtaskId}`);
}
// Validate prompt
if (!prompt || typeof prompt !== 'string' || prompt.trim() === '') {
throw new Error('Prompt cannot be empty');
}
// Check if tasks file exists
if (!mockExistsSync(tasksPath)) {
throw new Error(`Tasks file not found at path: ${tasksPath}`);
}
// Read the tasks file
const data = mockReadJSON(tasksPath);
if (!data || !data.tasks) {
throw new Error(`No valid tasks found in ${tasksPath}`);
}
// Find the parent task
const parentTask = data.tasks.find(t => t.id === parentId);
if (!parentTask) {
throw new Error(`Parent task with ID ${parentId} not found`);
}
// Find the subtask
if (!parentTask.subtasks || !Array.isArray(parentTask.subtasks)) {
throw new Error(`Parent task ${parentId} has no subtasks`);
}
const subtask = parentTask.subtasks.find(st => st.id === subtaskIdNum);
if (!subtask) {
throw new Error(`Subtask with ID ${subtaskId} not found`);
}
// Check if subtask is already completed
if (subtask.status === 'done' || subtask.status === 'completed') {
return null;
}
// Generate additional information
let additionalInformation;
if (useResearch) {
const result = await mockChatCompletionsCreate();
additionalInformation = result.choices[0].message.content;
} else {
const mockStream = {
[Symbol.asyncIterator]: jest.fn().mockImplementation(() => {
return {
next: jest.fn()
.mockResolvedValueOnce({
done: false,
value: {
type: 'content_block_delta',
delta: { text: 'Additional information about' }
}
})
.mockResolvedValueOnce({
done: false,
value: {
type: 'content_block_delta',
delta: { text: ' the subtask implementation.' }
}
})
.mockResolvedValueOnce({ done: true })
};
})
};
const stream = await mockCreate();
additionalInformation = 'Additional information about the subtask implementation.';
}
// Create timestamp
const timestamp = new Date().toISOString();
// Format the additional information with timestamp
const formattedInformation = `\n\n<info added on ${timestamp}>\n${additionalInformation}\n</info added on ${timestamp}>`;
// Append to subtask details
if (subtask.details) {
subtask.details += formattedInformation;
} else {
subtask.details = formattedInformation;
}
// Update description with update marker for shorter updates
if (subtask.description && additionalInformation.length < 200) {
subtask.description += ` [Updated: ${new Date().toLocaleDateString()}]`;
}
// Write the updated tasks to the file
mockWriteJSON(tasksPath, data);
// Generate individual task files
await mockGenerateTaskFiles(tasksPath, path.dirname(tasksPath));
return subtask;
} catch (error) {
mockLog('error', `Error updating subtask: ${error.message}`);
return null;
}
};
describe.skip('updateSubtaskById function', () => {
let mockConsoleLog;
let mockConsoleError;
let mockProcess;
beforeEach(() => {
// Reset all mocks
jest.clearAllMocks();
// Set up default mock values
mockExistsSync.mockReturnValue(true);
mockWriteJSON.mockImplementation(() => {});
mockGenerateTaskFiles.mockResolvedValue(undefined);
// Create a deep copy of sample tasks for tests - use imported ES module instead of require
const sampleTasksDeepCopy = JSON.parse(JSON.stringify(sampleTasks));
// Ensure the sample tasks has a task with subtasks for testing
// Task 3 should have subtasks
if (sampleTasksDeepCopy.tasks && sampleTasksDeepCopy.tasks.length > 2) {
const task3 = sampleTasksDeepCopy.tasks.find(t => t.id === 3);
if (task3 && (!task3.subtasks || task3.subtasks.length === 0)) {
task3.subtasks = [
{
id: 1,
title: 'Create Header Component',
description: 'Create a reusable header component',
status: 'pending'
},
{
id: 2,
title: 'Create Footer Component',
description: 'Create a reusable footer component',
status: 'pending'
}
];
}
}
mockReadJSON.mockReturnValue(sampleTasksDeepCopy);
// Mock console and process.exit
mockConsoleLog = jest.spyOn(console, 'log').mockImplementation(() => {});
mockConsoleError = jest.spyOn(console, 'error').mockImplementation(() => {});
mockProcess = jest.spyOn(process, 'exit').mockImplementation(() => {});
});
afterEach(() => {
// Restore console and process.exit
mockConsoleLog.mockRestore();
mockConsoleError.mockRestore();
mockProcess.mockRestore();
});
test('should update a subtask successfully', async () => {
// Mock streaming for successful response
const mockStream = {
[Symbol.asyncIterator]: jest.fn().mockImplementation(() => {
return {
next: jest.fn()
.mockResolvedValueOnce({
done: false,
value: {
type: 'content_block_delta',
delta: { text: 'Additional information about the subtask implementation.' }
}
})
.mockResolvedValueOnce({ done: true })
};
})
};
mockCreate.mockResolvedValue(mockStream);
// Call the function
const result = await testUpdateSubtaskById('test-tasks.json', '3.1', 'Add details about API endpoints');
// Verify the subtask was updated
expect(result).toBeDefined();
expect(result.details).toContain('<info added on');
expect(result.details).toContain('Additional information about the subtask implementation');
expect(result.details).toContain('</info added on');
// Verify the correct functions were called
expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json');
expect(mockCreate).toHaveBeenCalled();
expect(mockWriteJSON).toHaveBeenCalled();
expect(mockGenerateTaskFiles).toHaveBeenCalled();
// Verify the subtask was updated in the tasks data
const tasksData = mockWriteJSON.mock.calls[0][1];
const parentTask = tasksData.tasks.find(task => task.id === 3);
const updatedSubtask = parentTask.subtasks.find(st => st.id === 1);
expect(updatedSubtask.details).toContain('Additional information about the subtask implementation');
});
test('should return null when subtask is already completed', async () => {
// Modify the sample data to have a completed subtask
const tasksData = mockReadJSON();
const task = tasksData.tasks.find(t => t.id === 3);
if (task && task.subtasks && task.subtasks.length > 0) {
// Mark the first subtask as completed
task.subtasks[0].status = 'done';
mockReadJSON.mockReturnValue(tasksData);
}
// Call the function with a completed subtask
const result = await testUpdateSubtaskById('test-tasks.json', '3.1', 'Update completed subtask');
// Verify the result is null
expect(result).toBeNull();
// Verify the correct functions were called
expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json');
expect(mockCreate).not.toHaveBeenCalled();
expect(mockWriteJSON).not.toHaveBeenCalled();
expect(mockGenerateTaskFiles).not.toHaveBeenCalled();
});
test('should handle subtask not found error', async () => {
// Call the function with a non-existent subtask
const result = await testUpdateSubtaskById('test-tasks.json', '3.999', 'Update non-existent subtask');
// Verify the result is null
expect(result).toBeNull();
// Verify the error was logged
expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('Subtask with ID 3.999 not found'));
// Verify the correct functions were called
expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json');
expect(mockCreate).not.toHaveBeenCalled();
expect(mockWriteJSON).not.toHaveBeenCalled();
expect(mockGenerateTaskFiles).not.toHaveBeenCalled();
});
test('should handle invalid subtask ID format', async () => {
// Call the function with an invalid subtask ID
const result = await testUpdateSubtaskById('test-tasks.json', 'invalid-id', 'Update subtask with invalid ID');
// Verify the result is null
expect(result).toBeNull();
// Verify the error was logged
expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('Invalid subtask ID format'));
// Verify the correct functions were called
expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json');
expect(mockCreate).not.toHaveBeenCalled();
expect(mockWriteJSON).not.toHaveBeenCalled();
expect(mockGenerateTaskFiles).not.toHaveBeenCalled();
});
test('should handle missing tasks file', async () => {
// Mock file not existing
mockExistsSync.mockReturnValue(false);
// Call the function
const result = await testUpdateSubtaskById('missing-tasks.json', '3.1', 'Update subtask');
// Verify the result is null
expect(result).toBeNull();
// Verify the error was logged
expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('Tasks file not found'));
// Verify the correct functions were called
expect(mockReadJSON).not.toHaveBeenCalled();
expect(mockCreate).not.toHaveBeenCalled();
expect(mockWriteJSON).not.toHaveBeenCalled();
expect(mockGenerateTaskFiles).not.toHaveBeenCalled();
});
test('should handle empty prompt', async () => {
// Call the function with an empty prompt
const result = await testUpdateSubtaskById('test-tasks.json', '3.1', '');
// Verify the result is null
expect(result).toBeNull();
// Verify the error was logged
expect(mockLog).toHaveBeenCalledWith('error', expect.stringContaining('Prompt cannot be empty'));
// Verify the correct functions were called
expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json');
expect(mockCreate).not.toHaveBeenCalled();
expect(mockWriteJSON).not.toHaveBeenCalled();
expect(mockGenerateTaskFiles).not.toHaveBeenCalled();
});
test('should use Perplexity AI when research flag is true', async () => {
// Mock Perplexity API response
const mockPerplexityResponse = {
choices: [
{
message: {
content: 'Research-backed information about the subtask implementation.'
}
}
]
};
mockChatCompletionsCreate.mockResolvedValue(mockPerplexityResponse);
// Set the Perplexity API key in environment
process.env.PERPLEXITY_API_KEY = 'dummy-key';
// Call the function with research flag
const result = await testUpdateSubtaskById('test-tasks.json', '3.1', 'Add research-backed details', true);
// Verify the subtask was updated with research-backed information
expect(result).toBeDefined();
expect(result.details).toContain('<info added on');
expect(result.details).toContain('Research-backed information about the subtask implementation');
expect(result.details).toContain('</info added on');
// Verify the Perplexity API was called
expect(mockChatCompletionsCreate).toHaveBeenCalled();
expect(mockCreate).not.toHaveBeenCalled(); // Claude should not be called
// Verify the correct functions were called
expect(mockReadJSON).toHaveBeenCalledWith('test-tasks.json');
expect(mockWriteJSON).toHaveBeenCalled();
expect(mockGenerateTaskFiles).toHaveBeenCalled();
// Clean up
delete process.env.PERPLEXITY_API_KEY;
});
test('should append timestamp correctly in XML-like format', async () => {
// Mock streaming for successful response
const mockStream = {
[Symbol.asyncIterator]: jest.fn().mockImplementation(() => {
return {
next: jest.fn()
.mockResolvedValueOnce({
done: false,
value: {
type: 'content_block_delta',
delta: { text: 'Additional information about the subtask implementation.' }
}
})
.mockResolvedValueOnce({ done: true })
};
})
};
mockCreate.mockResolvedValue(mockStream);
// Call the function
const result = await testUpdateSubtaskById('test-tasks.json', '3.1', 'Add details about API endpoints');
// Verify the XML-like format with timestamp
expect(result).toBeDefined();
expect(result.details).toMatch(/<info added on [0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z>/);
expect(result.details).toMatch(/<\/info added on [0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z>/);
// Verify the same timestamp is used in both opening and closing tags
const openingMatch = result.details.match(/<info added on ([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z)>/);
const closingMatch = result.details.match(/<\/info added on ([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z)>/);
expect(openingMatch).toBeTruthy();
expect(closingMatch).toBeTruthy();
expect(openingMatch[1]).toBe(closingMatch[1]);
});
});