feat(show): add comma-separated ID support for multi-task viewing

- Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations.
- New features include multiple task retrieval, smart display logic, an interactive action menu with batch operations, an MCP array response for AI-agent efficiency, and support for mixing parent tasks and subtasks.
- Implementation updates the CLI show command, enhances the MCP get_task tool, and modifies the showTaskDirect function, all while maintaining full backward compatibility.
- Documentation updated across all relevant files.

Benefits include faster context gathering for AI agents, improved workflow with interactive batch operations, better UX with responsive layout, and enhanced API efficiency.
Author: Eyal Toledano
Date: 2025-05-25 19:39:23 -04:00
Parent: 325f5a2aa3
Commit: 1e020023ed
12 changed files with 299 additions and 141 deletions

View File

@@ -0,0 +1,19 @@
---
'task-master-ai': minor
---
Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations
**New Features:**
- **Multiple Task Retrieval**: Pass comma-separated IDs to get/show multiple tasks at once (e.g., `task-master show 1,3,5` or MCP `get_task` with `id: "1,3,5"`)
- **Smart Display Logic**: Single ID shows detailed view, multiple IDs show compact summary table with interactive options
- **Batch Action Menu**: Interactive menu for multiple tasks with copy-paste ready commands for common operations (mark as done/in-progress, expand all, view dependencies, etc.)
- **MCP Array Response**: MCP tool returns structured array of task objects for efficient AI agent context gathering
**Benefits:**
- **Faster Context Gathering**: AI agents can collect multiple tasks/subtasks in one call instead of iterating
- **Improved Workflow**: Interactive batch operations reduce repetitive command execution
- **Better UX**: Responsive layout adapts to terminal width, maintains consistency with existing UI patterns
- **API Efficiency**: RESTful array responses in MCP format enable more sophisticated integrations
This enhancement maintains full backward compatibility while significantly improving efficiency for both human users and AI agents working with multiple tasks.
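For reference, the multi-task payload assembled by `showTaskDirect` (and returned through the MCP `get_task` tool) has the shape sketched below. The field names come from this commit's implementation; the task entries themselves are illustrative placeholders:

```json
{
  "tasks": [
    { "id": 1, "title": "Example task", "status": "pending" },
    { "id": 3, "title": "Another task", "status": "done" }
  ],
  "requestedIds": ["1", "3", "5"],
  "foundCount": 2,
  "notFoundIds": ["5"],
  "isMultiple": true
}
```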

View File

@@ -130,6 +130,7 @@ Use your AI assistant to:
- Parse requirements: `Can you parse my PRD at scripts/prd.txt?`
- Plan next step: `What's the next task I should work on?`
- Implement a task: `Can you help me implement task 3?`
- View multiple tasks: `Can you show me tasks 1, 3, and 5?`
- Expand a task: `Can you help me expand task 4?`
[More examples on how to use Task Master in chat](docs/examples.md)
@@ -173,6 +174,9 @@ task-master list
# Show the next task to work on
task-master next
# Show specific task(s) - supports comma-separated IDs
task-master show 1,3,5
# Generate task files
task-master generate
```

View File

@@ -9,7 +9,7 @@ Welcome to the Task Master documentation. Use the links below to navigate to the
## Reference
- [Command Reference](command-reference.md) - Complete list of all available commands (including new multi-task viewing)
- [Task Structure](task-structure.md) - Understanding the task format and features
## Examples & Licensing

View File

@@ -43,10 +43,28 @@ task-master show <id>
# or
task-master show --id=<id>
# View multiple tasks with comma-separated IDs
task-master show 1,3,5
task-master show 44,55
# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
# Mix parent tasks and subtasks
task-master show 44,44.1,55,55.2
```
**Multiple Task Display:**
- **Single ID**: Shows detailed task view with full implementation details
- **Multiple IDs**: Shows compact summary table with interactive action menu
- **Action Menu**: Provides copy-paste ready commands for batch operations:
- Mark all as in-progress/done
- Show next available task
- Expand all tasks (generate subtasks)
- View dependency relationships
- Generate task files
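For example, after `task-master show 1,3,5`, the action menu prints ready-to-run commands along these lines (mirroring the menu's handlers; the IDs are illustrative):
```bash
task-master set-status --id=1,3,5 --status=in-progress  # mark all as in-progress
task-master set-status --id=1,3,5 --status=done         # mark all as done
task-master next                                        # show next available task
task-master expand --id=1,3,5 --research                # expand all into subtasks
task-master generate                                    # generate task files
```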
## Update Tasks
```bash

View File

@@ -21,6 +21,20 @@ What's the next task I should work on? Please consider dependencies and prioriti
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```
## Viewing multiple tasks
```
Can you show me tasks 1, 3, and 5 so I can understand their relationship?
```
```
I need to see the status of tasks 44, 55, and their subtasks. Can you show me those?
```
```
Show me tasks 10, 12, and 15 and give me some batch actions I can perform on them.
```
## Managing subtasks
```

View File

@@ -198,10 +198,15 @@ Ask the agent to list available tasks:
What tasks are available to work on next?
```
```
Can you show me tasks 1, 3, and 5 to understand their current status?
```
The agent will:
- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Run `task-master show 1,3,5` to display multiple tasks with interactive options
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
@@ -221,6 +226,21 @@ You can ask:
Let's implement task 3. What does it involve?
```
### 2.1. Viewing Multiple Tasks
For efficient context gathering and batch operations:
```
Show me tasks 5, 7, and 9 so I can plan my implementation approach.
```
The agent will:
- Run `task-master show 5,7,9` to display a compact summary table
- Show task status, priority, and progress indicators
- Provide an interactive action menu with batch operations
- Allow you to perform group actions like marking multiple tasks as in-progress
### 3. Task Verification
Before marking a task as complete, verify it according to:

View File

@@ -66,9 +66,27 @@ export async function showTaskDirect(args, log) {
const complexityReport = readComplexityReport(reportPath);
// Parse comma-separated IDs
const taskIds = id
.split(',')
.map((taskId) => taskId.trim())
.filter((taskId) => taskId.length > 0);
if (taskIds.length === 0) {
return {
success: false,
error: {
code: 'INVALID_TASK_ID',
message: 'No valid task IDs provided'
}
};
}
// Handle single task ID (existing behavior)
if (taskIds.length === 1) {
const { task, originalSubtaskCount } = findTaskById(
tasksData.tasks,
taskIds[0],
complexityReport,
status
);
@@ -78,12 +96,12 @@ export async function showTaskDirect(args, log) {
success: false,
error: {
code: 'TASK_NOT_FOUND',
message: `Task or subtask with ID ${taskIds[0]} not found`
}
};
}
log.info(`Successfully retrieved task ${taskIds[0]}.`);
const returnData = { ...task };
if (originalSubtaskCount !== null) {
@@ -92,6 +110,47 @@ export async function showTaskDirect(args, log) {
}
return { success: true, data: returnData };
}
// Handle multiple task IDs
const foundTasks = [];
const notFoundIds = [];
taskIds.forEach((taskId) => {
const { task, originalSubtaskCount } = findTaskById(
tasksData.tasks,
taskId,
complexityReport,
status
);
if (task) {
const taskData = { ...task };
if (originalSubtaskCount !== null) {
taskData._originalSubtaskCount = originalSubtaskCount;
taskData._subtaskFilter = status;
}
foundTasks.push(taskData);
} else {
notFoundIds.push(taskId);
}
});
log.info(
`Successfully retrieved ${foundTasks.length} of ${taskIds.length} requested tasks.`
);
// Return multiple tasks with metadata
return {
success: true,
data: {
tasks: foundTasks,
requestedIds: taskIds,
foundCount: foundTasks.length,
notFoundIds: notFoundIds,
isMultiple: true
}
};
} catch (error) {
log.error(`Error showing task ${id}: ${error.message}`);
return {

View File

@@ -44,7 +44,11 @@ export function registerShowTaskTool(server) {
name: 'get_task',
description: 'Get detailed information about a specific task',
parameters: z.object({
id: z
.string()
.describe(
'Task ID(s) to get (can be comma-separated for multiple tasks)'
),
status: z
.string()
.optional()
@@ -66,7 +70,7 @@ export function registerShowTaskTool(server) {
'Absolute path to the project root directory (Optional, usually from session)'
)
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
const { id, file, status, projectRoot } = args;
try {
@@ -110,7 +114,8 @@ export function registerShowTaskTool(server) {
status: status,
projectRoot: projectRoot
},
log,
{ session }
);
if (result.success) {
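For context, a minimal sketch of how an agent might invoke the updated tool, assuming the standard MCP `tools/call` argument envelope (the exact wire format depends on the client):

```json
{
  "name": "get_task",
  "arguments": {
    "id": "1,3,5",
    "status": "pending"
  }
}
```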

View File

@@ -66,7 +66,8 @@ import {
displayModelConfiguration,
displayAvailableModels,
displayApiKeyStatus,
displayAiUsageSummary,
displayMultipleTasksSummary
} from './ui.js';
import { initializeProject } from '../init.js';
@@ -1785,11 +1786,14 @@ ${result.result}
programInstance
.command('show')
.description(
`Display detailed information about one or more tasks${chalk.reset('')}`
)
.argument('[id]', 'Task ID(s) to show (comma-separated for multiple)')
.option(
'-i, --id <id>',
'Task ID(s) to show (comma-separated for multiple)'
)
.option('-s, --status <status>', 'Filter subtasks by status')
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
.option(
'-r, --report <report>',
@@ -1798,7 +1802,7 @@ ${result.result}
)
.action(async (taskId, options) => {
const idArg = taskId || options.id;
const statusFilter = options.status;
if (!idArg) {
console.error(chalk.red('Error: Please provide a task ID'));
@@ -1807,8 +1811,25 @@ ${result.result}
const tasksPath = options.file;
const reportPath = options.report;
// Check if multiple IDs are provided (comma-separated)
const taskIds = idArg
.split(',')
.map((id) => id.trim())
.filter((id) => id.length > 0);
if (taskIds.length > 1) {
// Multiple tasks - use compact summary view with interactive drill-down
await displayMultipleTasksSummary(
tasksPath,
taskIds,
reportPath,
statusFilter
);
} else {
// Single task - use detailed view
await displayTaskById(tasksPath, taskIds[0], reportPath, statusFilter);
}
});
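To illustrate the routing above: a single ID (including dotted subtask IDs) takes the detailed-view path, while a comma-separated list takes the summary path.
```bash
# One ID → detailed view via displayTaskById
task-master show 1.2

# Comma-separated IDs → compact summary via displayMultipleTasksSummary
task-master show 44,44.1,55
```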
// add-dependency command

View File

@@ -2262,7 +2262,7 @@ async function displayMultipleTasksSummary(
boxen(
chalk.white.bold('Interactive Options:') +
'\n' +
chalk.cyan('• Press Enter to view available actions for all tasks') +
'\n' +
chalk.cyan(
'• Type a task ID (e.g., "3" or "3.2") to view that specific task'
@@ -2293,39 +2293,122 @@ async function displayMultipleTasksSummary(
if (choice.toLowerCase() === 'q') {
return;
} else if (choice.trim() === '') {
// Show action menu for selected tasks
console.log(
boxen(
chalk.white.bold('Available Actions for Selected Tasks:') +
'\n' +
chalk.cyan('1.') +
' Mark all as in-progress' +
'\n' +
chalk.cyan('2.') +
' Mark all as done' +
'\n' +
chalk.cyan('3.') +
' Show next available task' +
'\n' +
chalk.cyan('4.') +
' Expand all tasks (generate subtasks)' +
'\n' +
chalk.cyan('5.') +
' View dependency relationships' +
'\n' +
chalk.cyan('6.') +
' Generate task files' +
'\n' +
chalk.gray('Or type a task ID to view details'),
{
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
const rl2 = readline.createInterface({
input: process.stdin,
output: process.stdout
});
const actionChoice = await new Promise((resolve) => {
rl2.question(chalk.cyan('Choose action (1-6): '), resolve);
});
rl2.close();
const taskIdList = foundTasks.map((t) => t.id).join(',');
switch (actionChoice.trim()) {
case '1':
console.log(
chalk.blue(
`\n→ Command: task-master set-status --id=${taskIdList} --status=in-progress`
)
);
console.log(
chalk.green(
'✓ Copy and run this command to mark all tasks as in-progress'
)
);
break;
case '2':
console.log(
chalk.blue(
`\n→ Command: task-master set-status --id=${taskIdList} --status=done`
)
);
console.log(
chalk.green('✓ Copy and run this command to mark all tasks as done')
);
break;
case '3':
console.log(chalk.blue(`\n→ Command: task-master next`));
console.log(
chalk.green(
'✓ Copy and run this command to see the next available task'
)
);
break;
case '4':
console.log(
chalk.blue(
`\n→ Command: task-master expand --id=${taskIdList} --research`
)
);
console.log(
chalk.green(
'✓ Copy and run this command to expand all selected tasks into subtasks'
)
);
break;
case '5':
// Show dependency visualization
console.log(chalk.white.bold('\nDependency Relationships:'));
let hasDependencies = false;
foundTasks.forEach((task) => {
if (task.dependencies && task.dependencies.length > 0) {
console.log(
chalk.cyan(
`Task ${task.id} depends on: ${task.dependencies.join(', ')}`
)
);
hasDependencies = true;
}
});
if (!hasDependencies) {
console.log(chalk.gray('No dependencies found for selected tasks'));
}
break;
case '6':
console.log(chalk.blue(`\n→ Command: task-master generate`));
console.log(
chalk.green('✓ Copy and run this command to generate task files')
);
break;
default:
if (actionChoice.trim().length > 0) {
console.log(chalk.yellow(`Invalid choice: ${actionChoice.trim()}`));
console.log(chalk.gray('Please choose 1-6 or type a task ID'));
}
}
} else {

View File

@@ -1,61 +0,0 @@
# Task ID: 86
# Title: Implement Separate Context Window and Output Token Limits
# Status: pending
# Dependencies: None
# Priority: high
# Description: Replace the ambiguous MAX_TOKENS configuration with separate contextWindowTokens and maxOutputTokens fields to properly handle model token limits and enable dynamic token allocation.
# Details:
Currently, the MAX_TOKENS configuration entry is ambiguous and doesn't properly differentiate between:
1. Context window tokens (total input + output capacity)
2. Maximum output tokens (generation limit)
This causes issues where:
- The system can't properly validate prompt lengths against model capabilities
- Output token allocation is not optimized based on input length
- Different models with different token architectures are handled inconsistently
This epic will implement a comprehensive solution that:
- Updates supported-models.json with accurate contextWindowTokens and maxOutputTokens for each model
- Modifies config-manager.js to use separate maxInputTokens and maxOutputTokens in role configurations
- Implements a token counting utility for accurate prompt measurement
- Updates ai-services-unified.js to dynamically calculate available output tokens
- Provides migration guidance and validation for existing configurations
- Adds comprehensive error handling and validation throughout the system
The end result will be more precise token management, better cost control, and reduced likelihood of hitting model context limits.
# Test Strategy:
1. Verify all models have accurate token limit data from official documentation
2. Test dynamic token allocation with various prompt lengths
3. Ensure backward compatibility with existing .taskmasterconfig files
4. Validate error messages are clear and actionable
5. Test with multiple AI providers to ensure consistent behavior
6. Performance test token counting utility with large prompts
# Subtasks:
## 1. Update supported-models.json with token limit fields [pending]
### Dependencies: None
### Description: Modify the supported-models.json file to include contextWindowTokens and maxOutputTokens fields for each model, replacing the ambiguous max_tokens field.
### Details:
For each model entry in supported-models.json:
1. Add `contextWindowTokens` field representing the total context window (input + output tokens)
2. Add `maxOutputTokens` field representing the maximum tokens the model can generate
3. Remove or deprecate the ambiguous `max_tokens` field if present
Research and populate accurate values for each model from official documentation:
- For OpenAI models (e.g., gpt-4o): contextWindowTokens=128000, maxOutputTokens=16384
- For Anthropic models (e.g., Claude 3.7): contextWindowTokens=200000, maxOutputTokens=8192
- For other providers, find official documentation or use reasonable defaults
Example entry:
```json
{
"id": "claude-3-7-sonnet-20250219",
"swe_score": 0.623,
"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
"allowed_roles": ["main", "fallback"],
"contextWindowTokens": 200000,
"maxOutputTokens": 8192
}
```

View File

@@ -5585,30 +5585,6 @@
"parentTaskId": 85
}
]
},
{
"id": 86,
"title": "Implement Separate Context Window and Output Token Limits",
"description": "Replace the ambiguous MAX_TOKENS configuration with separate contextWindowTokens and maxOutputTokens fields to properly handle model token limits and enable dynamic token allocation.",
"details": "Currently, the MAX_TOKENS configuration entry is ambiguous and doesn't properly differentiate between:\n1. Context window tokens (total input + output capacity)\n2. Maximum output tokens (generation limit)\n\nThis causes issues where:\n- The system can't properly validate prompt lengths against model capabilities\n- Output token allocation is not optimized based on input length\n- Different models with different token architectures are handled inconsistently\n\nThis epic will implement a comprehensive solution that:\n- Updates supported-models.json with accurate contextWindowTokens and maxOutputTokens for each model\n- Modifies config-manager.js to use separate maxInputTokens and maxOutputTokens in role configurations\n- Implements a token counting utility for accurate prompt measurement\n- Updates ai-services-unified.js to dynamically calculate available output tokens\n- Provides migration guidance and validation for existing configurations\n- Adds comprehensive error handling and validation throughout the system\n\nThe end result will be more precise token management, better cost control, and reduced likelihood of hitting model context limits.",
"testStrategy": "1. Verify all models have accurate token limit data from official documentation\n2. Test dynamic token allocation with various prompt lengths\n3. Ensure backward compatibility with existing .taskmasterconfig files\n4. Validate error messages are clear and actionable\n5. Test with multiple AI providers to ensure consistent behavior\n6. Performance test token counting utility with large prompts",
"status": "pending",
"dependencies": [],
"priority": "high",
"subtasks": [
{
"id": 1,
"title": "Update supported-models.json with token limit fields",
"description": "Modify the supported-models.json file to include contextWindowTokens and maxOutputTokens fields for each model, replacing the ambiguous max_tokens field.",
"details": "For each model entry in supported-models.json:\n1. Add `contextWindowTokens` field representing the total context window (input + output tokens)\n2. Add `maxOutputTokens` field representing the maximum tokens the model can generate\n3. Remove or deprecate the ambiguous `max_tokens` field if present\n\nResearch and populate accurate values for each model from official documentation:\n- For OpenAI models (e.g., gpt-4o): contextWindowTokens=128000, maxOutputTokens=16384\n- For Anthropic models (e.g., Claude 3.7): contextWindowTokens=200000, maxOutputTokens=8192\n- For other providers, find official documentation or use reasonable defaults\n\nExample entry:\n```json\n{\n \"id\": \"claude-3-7-sonnet-20250219\",\n \"swe_score\": 0.623,\n \"cost_per_1m_tokens\": { \"input\": 3.0, \"output\": 15.0 },\n \"allowed_roles\": [\"main\", \"fallback\"],\n \"contextWindowTokens\": 200000,\n \"maxOutputTokens\": 8192\n}\n```",
"testStrategy": "1. Validate JSON syntax after changes\n2. Verify all models have the new fields with reasonable values\n3. Check that the values align with official documentation from each provider\n4. Ensure backward compatibility by maintaining any fields other systems might depend on",
"priority": "high",
"dependencies": [],
"status": "pending",
"subtasks": [],
"parentTaskId": 86
}
]
}
]
}