Task Master Prompt Management System
This directory contains the centralized prompt templates for all AI-powered features in Task Master.
Overview
The prompt management system provides:
- Centralized Storage: All prompts in one location (/src/prompts)
- JSON Schema Validation: Comprehensive validation using AJV with detailed error reporting
- Version Control: Track changes to prompts over time
- Variant Support: Different prompts for different contexts (research mode, complexity levels, etc.)
- Template Variables: Dynamic prompt generation with variable substitution
- IDE Integration: VS Code IntelliSense and validation support
Directory Structure
src/prompts/
├── README.md # This file
├── schemas/ # JSON schemas for validation
│ ├── README.md # Schema documentation
│ ├── prompt-template.schema.json # Main template schema
│ ├── parameter.schema.json # Parameter validation schema
│ └── variant.schema.json # Prompt variant schema
├── parse-prd.json # PRD parsing prompts
├── expand-task.json # Task expansion prompts
├── add-task.json # Task creation prompts
├── update-tasks.json # Bulk task update prompts
├── update-task.json # Single task update prompts
├── update-subtask.json # Subtask update prompts
├── analyze-complexity.json # Complexity analysis prompts
└── research.json # Research query prompts
Schema Validation
All prompt templates are validated against JSON schemas located in /src/prompts/schemas/. The validation system provides:
- Structural Validation: Ensures required fields and proper nesting
- Parameter Type Checking: Validates parameter types, patterns, and ranges
- Template Syntax: Validates Handlebars syntax and variable references
- Semantic Versioning: Enforces proper version format
- Cross-Reference Validation: Ensures parameters match template variables
Validation Features
- Required Fields: id, version, description, prompts.default
- Type Safety: String, number, boolean, array, object validation
- Pattern Matching: Regex validation for string parameters
- Range Validation: Min/max values for numeric parameters
- Enum Constraints: Restricted value sets for categorical parameters
Development Workflow
Setting Up Development Environment
- VS Code Integration: Schemas are automatically configured for IntelliSense
- Dependencies: ajv and ajv-formats are required for validation (see the validation sketch below)
- File Watching: Changes to templates trigger automatic validation
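As an illustration of how the ajv and ajv-formats dependencies fit together, the following standalone sketch validates one template file against the main schema. The file paths and AJV options here are assumptions; the real validation wiring lives in the PromptManager.
import Ajv from 'ajv';
import addFormats from 'ajv-formats';
import { readFileSync } from 'fs';

// Load the main template schema and a template to check (paths assumed from the layout above).
const schema = JSON.parse(readFileSync('src/prompts/schemas/prompt-template.schema.json', 'utf8'));
const template = JSON.parse(readFileSync('src/prompts/add-task.json', 'utf8'));

const ajv = new Ajv({ allErrors: true });
addFormats(ajv);

const validate = ajv.compile(schema);
if (!validate(template)) {
  console.error(validate.errors); // field-level error details
}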
Creating New Prompts
- Create a new .json file in /src/prompts/
- Follow the schema structure (see Template Structure section)
- Define parameters with proper types and validation
- Create system and user prompts with template variables
- Test with the PromptManager before committing
Modifying Existing Prompts
- Update the version field following semantic versioning
- Maintain backward compatibility when possible
- Test with existing code that uses the prompt
- Update documentation if parameters change
Prompt Template Reference
1. parse-prd.json
Purpose: Parse a Product Requirements Document into structured tasks
Variants: default, research (when research mode is enabled)
Required Parameters:
- numTasks (number): Target number of tasks to generate
- nextId (number): Starting ID for tasks
- prdContent (string): Content of the PRD file
- prdPath (string): Path to the PRD file
- defaultTaskPriority (string): Default priority for generated tasks
Optional Parameters:
- research (boolean): Enable research mode for latest best practices (default: false)
Usage: Used by task-master parse-prd command to convert PRD documents into actionable task lists.
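For example, loading this template might look like the sketch below; the PRD path and count values are placeholders, not a prescribed invocation.
import { readFileSync } from 'fs';
import { getPromptManager } from '../prompt-manager.js';

const promptManager = getPromptManager();
const prdPath = 'docs/prd.txt'; // placeholder path
const { systemPrompt, userPrompt } = promptManager.loadPrompt('parse-prd', {
  numTasks: 10,
  nextId: 1,
  prdContent: readFileSync(prdPath, 'utf8'),
  prdPath,
  defaultTaskPriority: 'medium',
  research: false // true would select the research variant
});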
2. add-task.json
Purpose: Generate a new task based on user description
Variants: default, research (when research mode is enabled)
Required Parameters:
- prompt (string): User's task description
- newTaskId (number): ID for the new task
Optional Parameters:
- existingTasks (array): List of existing tasks for context
- gatheredContext (string): Context gathered from codebase analysis
- contextFromArgs (string): Additional context from manual args
- priority (string): Task priority (high/medium/low, default: medium)
- dependencies (array): Task dependency IDs
- useResearch (boolean): Use research mode (default: false)
Usage: Used by task-master add-task command to create new tasks with AI assistance.
3. expand-task.json
Purpose: Break down a task into detailed subtasks with three sophisticated strategies
Variants: complexity-report (when expansionPrompt exists), research (when research mode is enabled), default (standard case)
Required Parameters:
- subtaskCount (number): Number of subtasks to generate
- task (object): The task to expand
- nextSubtaskId (number): Starting ID for new subtasks
Optional Parameters:
- additionalContext (string): Additional context for expansion (default: "")
- complexityReasoningContext (string): Complexity analysis reasoning context (default: "")
- gatheredContext (string): Gathered project context (default: "")
- useResearch (boolean): Use research mode (default: false)
- expansionPrompt (string): Expansion prompt from complexity report
Variant Selection Strategy:
- complexity-report: Used when expansionPrompt exists (highest priority)
- research: Used when useResearch === true && !expansionPrompt
- default: Standard fallback strategy
Usage: Used by task-master expand command to break complex tasks into manageable subtasks using the most appropriate strategy based on available context and complexity analysis.
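A sketch of how the complexity-report variant is selected in practice; the task object and expansionPrompt text below are made-up placeholders.
import { getPromptManager } from '../prompt-manager.js';

const promptManager = getPromptManager();

// Placeholder task and complexity-report guidance, for illustration only.
const task = { id: 7, title: 'Set up Docker containerization', description: 'Containerize the API service' };
const expansionPrompt = 'Break this into image build, compose configuration, and CI integration steps.';

const { systemPrompt, userPrompt } = promptManager.loadPrompt('expand-task', {
  task,
  subtaskCount: 4,
  nextSubtaskId: 1,
  expansionPrompt // presence of this field triggers the complexity-report variant
});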
4. update-task.json
Purpose: Update a single task with new information, supporting full updates and append mode
Variants: default, append (when appendMode is true), research (when research mode is enabled)
Required Parameters:
- task (object): The task to update
- taskJson (string): JSON string representation of the task
- updatePrompt (string): Description of changes to apply
Optional Parameters:
- appendMode (boolean): Whether to append to details or do full update (default: false)
- useResearch (boolean): Use research mode (default: false)
- currentDetails (string): Current task details for context (default: "(No existing details)")
- gatheredContext (string): Additional project context
Usage: Used by task-master update-task command to modify existing tasks.
5. update-tasks.json
Purpose: Update multiple tasks based on new context or changes
Variants: default, research (when research mode is enabled)
Required Parameters:
- tasks (array): Array of tasks to update
- updatePrompt (string): Description of changes to apply
Optional Parameters:
- useResearch (boolean): Use research mode (default: false)
- projectContext (string): Additional project context
Usage: Used by task-master update command to bulk update multiple tasks.
6. update-subtask.json
Purpose: Append information to a subtask by generating only new content
Variants: default, research (when research mode is enabled)
Required Parameters:
- parentTask (object): The parent task context
- currentDetails (string): Current subtask details (default: "(No existing details)")
- updatePrompt (string): User request for what to add
Optional Parameters:
- prevSubtask (object): The previous subtask if any
- nextSubtask (object): The next subtask if any
- useResearch (boolean): Use research mode (default: false)
- gatheredContext (string): Additional project context
Usage: Used by task-master update-subtask command to log progress and findings on subtasks.
7. analyze-complexity.json
Purpose: Analyze task complexity and generate expansion recommendations
Variants: default, research (when research mode is enabled), batch (when analyzing >10 tasks)
Required Parameters:
- tasks (array): Array of tasks to analyze
Optional Parameters:
- gatheredContext (string): Additional project context
- threshold (number): Complexity threshold for expansion recommendation (1-10, default: 5)
- useResearch (boolean): Use research mode for deeper analysis (default: false)
Usage: Used by task-master analyze-complexity command to determine which tasks need breakdown.
8. research.json
Purpose: Perform AI-powered research with project context
Variants: default, low (concise responses), medium (balanced), high (detailed)
Required Parameters:
- query (string): Research query
Optional Parameters:
- gatheredContext (string): Gathered project context
- detailLevel (string): Level of detail (low/medium/high, default: medium)
- projectInfo (object): Project information with properties:
  - root (string): Project root path
  - taskCount (number): Number of related tasks
  - fileCount (number): Number of related files
Usage: Used by task-master research command to get contextual information and guidance.
Template Structure
Each prompt template is a JSON file with the following structure:
{
"id": "unique-identifier",
"version": "1.0.0",
"description": "What this prompt does",
"metadata": {
"author": "system",
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z",
"tags": ["category", "feature"],
"category": "task"
},
"parameters": {
"paramName": {
"type": "string|number|boolean|array|object",
"required": true|false,
"default": "default value",
"description": "Parameter description",
"enum": ["option1", "option2"],
"pattern": "^[a-z]+$",
"minimum": 1,
"maximum": 100
}
},
"prompts": {
"default": {
"system": "System prompt template",
"user": "User prompt template"
},
"variant-name": {
"condition": "JavaScript expression",
"system": "Variant system prompt",
"user": "Variant user prompt",
"metadata": {
"description": "When to use this variant"
}
}
}
}
Template Features
Variable Substitution
Use {{variableName}} to inject dynamic values:
"user": "Analyze these {{tasks.length}} tasks with threshold {{threshold}}"
Conditionals
Use {{#if variable}}...{{/if}} for conditional content:
"user": "{{#if useResearch}}Research and {{/if}}create a task"
Helper Functions
Equality Helper
Use {{#if (eq variable "value")}}...{{/if}} for string comparisons:
"user": "{{#if (eq detailLevel \"low\")}}Provide a brief summary{{/if}}"
"user": "{{#if (eq priority \"high\")}}URGENT: {{/if}}{{taskTitle}}"
The eq helper enables clean conditional logic based on parameter values:
- Compare strings: (eq detailLevel "medium")
- Compare with enum values: (eq status "pending")
- Multiple conditions: {{#if (eq level "1")}}First{{/if}}{{#if (eq level "2")}}Second{{/if}}
Negation Helper
Use {{#if (not variable)}}...{{/if}} for negation conditions:
"user": "{{#if (not useResearch)}}Use basic analysis{{/if}}"
"user": "{{#if (not hasSubtasks)}}This task has no subtasks{{/if}}"
The not helper enables clean negative conditional logic:
- Negate boolean values: (not useResearch)
- Negate truthy/falsy values: (not emptyArray)
- Cleaner than separate boolean parameters: No need for notUseResearch flags
Numeric Comparison Helpers
Use {{#if (gt variable number)}}...{{/if}} for greater than comparisons:
"user": "generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks"
"user": "{{#if (gt complexity 5)}}This is a complex task{{/if}}"
"system": "create {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks"
Use {{#if (gte variable number)}}...{{/if}} for greater than or equal comparisons:
"user": "{{#if (gte priority 8)}}HIGH PRIORITY{{/if}}"
"user": "{{#if (gte threshold 1)}}Analysis enabled{{/if}}"
"system": "{{#if (gte complexityScore 8)}}Use detailed breakdown approach{{/if}}"
The numeric comparison helpers enable sophisticated conditional logic:
- Dynamic counting: {{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}}
- Threshold-based behavior: (gte complexityScore 8) for high-complexity handling
- Zero checks: (gt subtaskCount 0) for conditional content generation
- Decimal support: (gt score 7.5) for fractional comparisons
- Enhanced prompt sophistication: Enables parse-prd and expand-task logic matching GitHub specifications
Loops
Use {{#each array}}...{{/each}} to iterate over arrays:
"user": "Tasks:\n{{#each tasks}}- {{id}}: {{title}}\n{{/each}}"
Special Loop Variables
Inside {{#each}} blocks, you have access to:
- {{@index}}: Current array index (0-based)
- {{@first}}: Boolean, true for first item
- {{@last}}: Boolean, true for last item
"user": "{{#each tasks}}{{@index}}. {{title}}{{#unless @last}}\n{{/unless}}{{/each}}"
JSON Serialization
Use {{{json variable}}} (triple braces) to serialize objects/arrays to JSON:
"user": "Analyze these tasks: {{{json tasks}}}"
Nested Properties
Access nested properties with dot notation:
"user": "Project: {{context.projectName}}"
Prompt Variants
Variants allow different prompts based on conditions:
{
"prompts": {
"default": {
"system": "Default system prompt",
"user": "Default user prompt"
},
"research": {
"condition": "useResearch === true",
"system": "Research-focused system prompt",
"user": "Research-focused user prompt"
},
"high-complexity": {
"condition": "complexityScore >= 8",
"system": "Complex task handling prompt",
"user": "Detailed breakdown request"
}
}
}
Condition Evaluation
Conditions are JavaScript expressions evaluated with parameter values as context:
- Simple comparisons: useResearch === true
- Numeric comparisons: threshold >= 5
- String matching: priority === 'high'
- Complex logic: useResearch && threshold > 7
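As a rough sketch of what this evaluation could look like (the actual PromptManager implementation may differ), a condition string can be compiled into a function whose arguments are the parameter names:
// Illustrative only; not the actual PromptManager implementation.
function evaluateCondition(condition, params) {
  const names = Object.keys(params);
  const values = Object.values(params);
  // The expression can reference parameters directly, e.g. "useResearch === true".
  const fn = new Function(...names, `return (${condition});`);
  return Boolean(fn(...values));
}

evaluateCondition('useResearch === true', { useResearch: true, threshold: 5 }); // => true
evaluateCondition('useResearch && threshold > 7', { useResearch: true, threshold: 5 }); // => false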
PromptManager Module
The PromptManager is implemented in scripts/modules/prompt-manager.js and provides:
- Template loading and caching: Templates are loaded once and cached for performance
- Schema validation: Comprehensive validation using AJV with detailed error reporting
- Variable substitution: Handlebars-like syntax for dynamic content
- Variant selection: Automatic selection based on conditions
- Error handling: Graceful fallbacks and detailed error messages
- Singleton pattern: One instance per project root for efficiency
Validation Behavior
- Schema Available: Full validation with detailed error messages
- Schema Missing: Falls back to basic structural validation
- Invalid Templates: Throws descriptive errors with field-level details
- Parameter Validation: Type checking, pattern matching, range validation
Usage in Code
Basic Usage
import { getPromptManager } from '../prompt-manager.js';
const promptManager = getPromptManager();
const { systemPrompt, userPrompt, metadata } = promptManager.loadPrompt('add-task', {
// Parameters matching the template's parameter definitions
prompt: 'Create a user authentication system',
newTaskId: 5,
priority: 'high',
useResearch: false
});
// Use with AI service
const result = await generateObjectService({
systemPrompt,
prompt: userPrompt,
// ... other AI parameters
});
With Variants
// Research variant will be selected automatically
const { systemPrompt, userPrompt } = promptManager.loadPrompt('expand-task', {
useResearch: true, // Triggers research variant
task: taskObject,
subtaskCount: 5
});
Error Handling
try {
const result = promptManager.loadPrompt('invalid-template', {});
} catch (error) {
if (error.message.includes('Schema validation failed')) {
console.error('Template validation error:', error.message);
} else if (error.message.includes('not found')) {
console.error('Template not found:', error.message);
}
}
Adding New Prompts
- Create the JSON file following the template structure
- Define parameters with proper types, validation, and descriptions
- Create prompts with clear system and user templates
- Use template variables for dynamic content
- Add variants if needed for different contexts
- Test thoroughly with the PromptManager
- Update this documentation with the new prompt details
Example New Prompt
{
"id": "new-feature",
"version": "1.0.0",
"description": "Generate code for a new feature",
"parameters": {
"featureName": {
"type": "string",
"required": true,
"pattern": "^[a-zA-Z][a-zA-Z0-9-]*$",
"description": "Name of the feature to implement"
},
"complexity": {
"type": "string",
"required": false,
"enum": ["simple", "medium", "complex"],
"default": "medium",
"description": "Feature complexity level"
}
},
"prompts": {
"default": {
"system": "You are a senior software engineer.",
"user": "Create a {{complexity}} {{featureName}} feature."
}
}
}
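Loading this example template would follow the same pattern as the built-in prompts; the values below are placeholders, and the new-feature template itself is only illustrative.
import { getPromptManager } from '../prompt-manager.js';

const promptManager = getPromptManager();
const { systemPrompt, userPrompt } = promptManager.loadPrompt('new-feature', {
  featureName: 'audit-logging', // must match the ^[a-zA-Z][a-zA-Z0-9-]*$ pattern
  complexity: 'simple'          // one of: simple, medium, complex
});
// userPrompt => "Create a simple audit-logging feature."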
Best Practices
Template Design
- Clear IDs: Use kebab-case, descriptive identifiers
- Semantic Versioning: Follow semver for version management
- Comprehensive Parameters: Define all required and optional parameters
- Type Safety: Use proper parameter types and validation
- Clear Descriptions: Document what each prompt and parameter does
Variable Usage
- Meaningful Names: Use descriptive variable names
- Consistent Patterns: Follow established naming conventions
- Safe Defaults: Provide sensible default values
- Validation: Use patterns, enums, and ranges for validation
Variant Strategy
- Simple Conditions: Keep variant conditions easy to understand
- Clear Purpose: Each variant should have a distinct use case
- Fallback Logic: Always provide a default variant
- Documentation: Explain when each variant is used
Performance
- Caching: Templates are cached automatically
- Lazy Loading: Templates load only when needed
- Minimal Variants: Don't create unnecessary variants
- Efficient Conditions: Keep condition evaluation fast
Testing Prompts
Validation Testing
// Test schema validation
const promptManager = getPromptManager();
const results = promptManager.validateAllPrompts();
console.log(`Valid: ${results.valid.length}, Errors: ${results.errors.length}`);
Integration Testing
When modifying prompts, make sure to test the following (an example test sketch follows this list):
- Variable substitution works with actual data structures
- Variant selection triggers correctly based on conditions
- AI responses remain consistent with expected behavior
- All parameters are properly validated
- Error handling works for invalid inputs
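A hypothetical Jest-style test for the variant-selection case; the import path, task fields, and assertion are assumptions to adapt to the project's actual test setup.
import { getPromptManager } from '../../scripts/modules/prompt-manager.js'; // path is an assumption

describe('expand-task prompt', () => {
  it('uses the task context when expansionPrompt is provided', () => {
    const promptManager = getPromptManager();
    const { userPrompt } = promptManager.loadPrompt('expand-task', {
      task: { id: 1, title: 'Provision AWS infrastructure' },
      subtaskCount: 3,
      nextSubtaskId: 1,
      expansionPrompt: 'Split into VPC, IAM, and deployment subtasks.'
    });
    // The rendered prompt should mention the actual task, not generic content.
    expect(userPrompt).toContain('Provision AWS infrastructure');
  });
});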
Quick Testing
// Test prompt loading and variable substitution
const promptManager = getPromptManager();
const result = promptManager.loadPrompt('research', {
query: 'What are the latest React best practices?',
detailLevel: 'medium',
gatheredContext: 'React project with TypeScript'
});
console.log('System:', result.systemPrompt);
console.log('User:', result.userPrompt);
console.log('Metadata:', result.metadata);
Testing Checklist
- Template validates against schema
- All required parameters are defined
- Variable substitution works correctly
- Variants trigger under correct conditions
- Error messages are clear and helpful
- Performance is acceptable for repeated usage
Troubleshooting
Common Issues
Schema Validation Errors:
- Check required fields are present
- Verify parameter types match schema
- Ensure version follows semantic versioning
- Validate JSON syntax
Variable Substitution Problems:
- Check variable names match parameter names
- Verify nested property access syntax
- Ensure array iteration syntax is correct
- Test with actual data structures
Variant Selection Issues:
- Verify condition syntax is valid JavaScript
- Check parameter values match condition expectations
- Ensure default variant exists
- Test condition evaluation with debug logging
Performance Issues:
- Check for circular references in templates
- Verify caching is working correctly
- Monitor template loading frequency
- Consider simplifying complex conditions