feat: Centralize AI prompts into JSON templates (#882)
* centralize prompt management
* add changeset
* add variant key to determine prompt version
* update tests and add prompt manager test
* determine internal path, don't use projectRoot
* add promptManager mock
* detailed prompt docs
* add schemas and validator packages
* add validate prompts command
* add schema validation
* update tests
* move schemas to src/prompts/schemas
* use this.promptsDir for better semantics
* add prompt schemas
* version schema files & update links
* remove validate command
* expect dependencies
* update docs
* fix test
* remove suggestmode to ensure clean keys
* remove default variant from research and update schema
* now handled by prompt manager
* add manual test to verify prompts
* remove incorrect batch variant
* consolidate variants
* consolidate analyze-complexity to just default variant
* consolidate parse-prd variants
* add eq handler for handlebars
* consolidate research prompt variants
* use brevity
* consolidate variants for update subtask
* add not handler
* consolidate variants for update-task
* consolidate update-tasks variants
* add conditional content to prompt when research used
* update prompt tests
* show correct research variant
* make variant names link to below
* remove changset
* restore gitignore
* Merge branch 'next' of https://github.com/eyaltoledano/claude-task-master into joedanz/centralize-prompts
  # Conflicts:
  # package-lock.json
  # scripts/modules/task-manager/expand-task.js
  # scripts/modules/task-manager/parse-prd.js
* remove unused
* add else
* update tests
* update biome optional dependencies
* responsive html output for mobile

src/prompts/README.md (new file, 572 lines)

# Task Master Prompt Management System

This directory contains the centralized prompt templates for all AI-powered features in Task Master.

## Overview

The prompt management system provides:
- **Centralized Storage**: All prompts in one location (`/src/prompts`)
- **JSON Schema Validation**: Comprehensive validation using AJV with detailed error reporting
- **Version Control**: Track changes to prompts over time
- **Variant Support**: Different prompts for different contexts (research mode, complexity levels, etc.)
- **Template Variables**: Dynamic prompt generation with variable substitution
- **IDE Integration**: VS Code IntelliSense and validation support

## Directory Structure

```
src/prompts/
├── README.md                        # This file
├── schemas/                         # JSON schemas for validation
│   ├── README.md                    # Schema documentation
│   ├── prompt-template.schema.json  # Main template schema
│   ├── parameter.schema.json        # Parameter validation schema
│   └── variant.schema.json          # Prompt variant schema
├── parse-prd.json                   # PRD parsing prompts
├── expand-task.json                 # Task expansion prompts
├── add-task.json                    # Task creation prompts
├── update-tasks.json                # Bulk task update prompts
├── update-task.json                 # Single task update prompts
├── update-subtask.json              # Subtask update prompts
├── analyze-complexity.json          # Complexity analysis prompts
└── research.json                    # Research query prompts
```

## Schema Validation

All prompt templates are validated against JSON schemas located in `/src/prompts/schemas/`. The validation system:

- **Structural Validation**: Ensures required fields and proper nesting
- **Parameter Type Checking**: Validates parameter types, patterns, and ranges
- **Template Syntax**: Validates Handlebars syntax and variable references
- **Semantic Versioning**: Enforces proper version format
- **Cross-Reference Validation**: Ensures parameters match template variables

### Validation Features
- **Required Fields**: `id`, `version`, `description`, `prompts.default`
- **Type Safety**: String, number, boolean, array, object validation
- **Pattern Matching**: Regex validation for string parameters
- **Range Validation**: Min/max values for numeric parameters
- **Enum Constraints**: Restricted value sets for categorical parameters

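As a rough illustration, a single template can be checked against the main schema with AJV directly; the PromptManager performs an equivalent validation internally, so treat the exact wiring below as an assumption rather than the project's actual code:

```javascript
// Standalone validation sketch (illustrative only)
import Ajv from 'ajv';
import addFormats from 'ajv-formats';
import { readFileSync } from 'fs';

const ajv = new Ajv({ allErrors: true });
addFormats(ajv);

// If the main schema $refs the parameter/variant schemas, register those with
// ajv.addSchema(...) before compiling.
const schema = JSON.parse(
  readFileSync('src/prompts/schemas/prompt-template.schema.json', 'utf8')
);
const template = JSON.parse(readFileSync('src/prompts/add-task.json', 'utf8'));

const validate = ajv.compile(schema);
if (!validate(template)) {
  // Each error lists the failing instance path and a readable message
  console.error(validate.errors);
}
```
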
## Development Workflow

### Setting Up Development Environment
1. **VS Code Integration**: Schemas are automatically configured for IntelliSense
2. **Dependencies**: `ajv` and `ajv-formats` are required for validation
3. **File Watching**: Changes to templates trigger automatic validation

### Creating New Prompts
1. Create a new `.json` file in `/src/prompts/`
2. Follow the schema structure (see Template Structure section)
3. Define parameters with proper types and validation
4. Create system and user prompts with template variables
5. Test with the PromptManager before committing

### Modifying Existing Prompts
1. Update the `version` field following semantic versioning
2. Maintain backward compatibility when possible
3. Test with existing code that uses the prompt
4. Update documentation if parameters change

## Prompt Template Reference

### 1. parse-prd.json
**Purpose**: Parse a Product Requirements Document into structured tasks
**Variants**: `default` (research behavior is handled by the `research` parameter inside the default prompt)

**Required Parameters**:
- `numTasks` (number): Target number of tasks to generate
- `nextId` (number): Starting ID for tasks
- `prdContent` (string): Content of the PRD file
- `prdPath` (string): Path to the PRD file
- `defaultTaskPriority` (string): Default priority for generated tasks

**Optional Parameters**:
- `research` (boolean): Enable research mode for latest best practices (default: false)

**Usage**: Used by the `task-master parse-prd` command to convert PRD documents into actionable task lists.

### 2. add-task.json
**Purpose**: Generate a new task based on user description
**Variants**: `default` (research behavior is handled by the `useResearch` parameter inside the default prompt)

**Required Parameters**:
- `prompt` (string): User's task description
- `newTaskId` (number): ID for the new task

**Optional Parameters**:
- `existingTasks` (array): List of existing tasks for context
- `gatheredContext` (string): Context gathered from codebase analysis
- `contextFromArgs` (string): Additional context from manual args
- `priority` (string): Task priority (high/medium/low, default: medium)
- `dependencies` (array): Task dependency IDs
- `useResearch` (boolean): Use research mode (default: false)

**Usage**: Used by the `task-master add-task` command to create new tasks with AI assistance.

### 3. expand-task.json
**Purpose**: Break down a task into detailed subtasks using one of three expansion strategies
**Variants**: `complexity-report` (when expansionPrompt exists), `research` (when research mode is enabled), `default` (standard case)

**Required Parameters**:
- `subtaskCount` (number): Number of subtasks to generate
- `task` (object): The task to expand
- `nextSubtaskId` (number): Starting ID for new subtasks

**Optional Parameters**:
- `additionalContext` (string): Additional context for expansion (default: "")
- `complexityReasoningContext` (string): Complexity analysis reasoning context (default: "")
- `gatheredContext` (string): Gathered project context (default: "")
- `useResearch` (boolean): Use research mode (default: false)
- `expansionPrompt` (string): Expansion prompt from complexity report

**Variant Selection Strategy** (see the sketch below):
1. **complexity-report**: Used when `expansionPrompt` exists (highest priority)
2. **research**: Used when `useResearch === true && !expansionPrompt`
3. **default**: Standard fallback strategy

**Usage**: Used by the `task-master expand` command to break complex tasks into manageable subtasks using the most appropriate strategy based on available context and complexity analysis.

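That precedence can be pictured as a simple cascade. The sketch below is illustrative only; the real selection happens inside the PromptManager by evaluating each variant's `condition` field:

```javascript
// Illustrative sketch of the expand-task variant precedence (not the actual implementation)
function pickExpandTaskVariant({ expansionPrompt, useResearch }) {
  if (expansionPrompt) return 'complexity-report'; // highest priority
  if (useResearch) return 'research';              // research mode without a report
  return 'default';                                // standard fallback
}

pickExpandTaskVariant({ expansionPrompt: 'Split into API and UI work', useResearch: true });
// -> 'complexity-report'
```
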
### 4. update-task.json
**Purpose**: Update a single task with new information, supporting full updates and append mode
**Variants**: `default`, `append` (when appendMode is true), `research` (when research mode is enabled)

**Required Parameters**:
- `task` (object): The task to update
- `taskJson` (string): JSON string representation of the task
- `updatePrompt` (string): Description of changes to apply

**Optional Parameters**:
- `appendMode` (boolean): Whether to append to details or do full update (default: false)
- `useResearch` (boolean): Use research mode (default: false)
- `currentDetails` (string): Current task details for context (default: "(No existing details)")
- `gatheredContext` (string): Additional project context

**Usage**: Used by the `task-master update-task` command to modify existing tasks.

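For example, an append-mode update might be requested like this (hypothetical call site; the parameter names follow the table above and `getPromptManager` is imported as shown in the Usage in Code section below):

```javascript
// Hypothetical example: appendMode being true selects the append variant
const promptManager = getPromptManager();
const task = { id: 12, title: 'Implement login API', details: 'Initial notes' };

const { systemPrompt, userPrompt } = promptManager.loadPrompt('update-task', {
  task,
  taskJson: JSON.stringify(task, null, 2),
  updatePrompt: 'Note that the endpoint now requires an Authorization header',
  appendMode: true,
  useResearch: false
});
```
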
### 5. update-tasks.json
**Purpose**: Update multiple tasks based on new context or changes
**Variants**: `default`, `research` (when research mode is enabled)

**Required Parameters**:
- `tasks` (array): Array of tasks to update
- `updatePrompt` (string): Description of changes to apply

**Optional Parameters**:
- `useResearch` (boolean): Use research mode (default: false)
- `projectContext` (string): Additional project context

**Usage**: Used by the `task-master update` command to bulk update multiple tasks.

### 6. update-subtask.json
**Purpose**: Append information to a subtask by generating only new content
**Variants**: `default`, `research` (when research mode is enabled)

**Required Parameters**:
- `parentTask` (object): The parent task context
- `currentDetails` (string): Current subtask details (default: "(No existing details)")
- `updatePrompt` (string): User request for what to add

**Optional Parameters**:
- `prevSubtask` (object): The previous subtask if any
- `nextSubtask` (object): The next subtask if any
- `useResearch` (boolean): Use research mode (default: false)
- `gatheredContext` (string): Additional project context

**Usage**: Used by the `task-master update-subtask` command to log progress and findings on subtasks.

### 7. analyze-complexity.json
**Purpose**: Analyze task complexity and generate expansion recommendations
**Variants**: `default` (research behavior is handled by the `useResearch` parameter inside the default prompt)

**Required Parameters**:
- `tasks` (array): Array of tasks to analyze

**Optional Parameters**:
- `gatheredContext` (string): Additional project context
- `threshold` (number): Complexity threshold for expansion recommendation (1-10, default: 5)
- `useResearch` (boolean): Use research mode for deeper analysis (default: false)

**Usage**: Used by the `task-master analyze-complexity` command to determine which tasks need breakdown.

### 8. research.json
**Purpose**: Perform AI-powered research with project context
**Variants**: `default` (the `detailLevel` parameter selects concise, balanced, or detailed response styles inside the default prompt)

**Required Parameters**:
- `query` (string): Research query

**Optional Parameters**:
- `gatheredContext` (string): Gathered project context
- `detailLevel` (string): Level of detail (low/medium/high, default: medium)
- `projectInfo` (object): Project information with properties:
  - `root` (string): Project root path
  - `taskCount` (number): Number of related tasks
  - `fileCount` (number): Number of related files

**Usage**: Used by the `task-master research` command to get contextual information and guidance.

## Template Structure

Each prompt template is a JSON file with the following structure:

```json
{
  "id": "unique-identifier",
  "version": "1.0.0",
  "description": "What this prompt does",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["category", "feature"],
    "category": "task"
  },
  "parameters": {
    "paramName": {
      "type": "string|number|boolean|array|object",
      "required": true|false,
      "default": "default value",
      "description": "Parameter description",
      "enum": ["option1", "option2"],
      "pattern": "^[a-z]+$",
      "minimum": 1,
      "maximum": 100
    }
  },
  "prompts": {
    "default": {
      "system": "System prompt template",
      "user": "User prompt template"
    },
    "variant-name": {
      "condition": "JavaScript expression",
      "system": "Variant system prompt",
      "user": "Variant user prompt",
      "metadata": {
        "description": "When to use this variant"
      }
    }
  }
}
```

## Template Features

### Variable Substitution
Use `{{variableName}}` to inject dynamic values:
```
"user": "Analyze these {{tasks.length}} tasks with threshold {{threshold}}"
```

### Conditionals
Use `{{#if variable}}...{{/if}}` for conditional content:
```
"user": "{{#if useResearch}}Research and {{/if}}create a task"
```

### Helper Functions

#### Equality Helper
Use `{{#if (eq variable "value")}}...{{/if}}` for string comparisons:
```
"user": "{{#if (eq detailLevel \"low\")}}Provide a brief summary{{/if}}"
"user": "{{#if (eq priority \"high\")}}URGENT: {{/if}}{{taskTitle}}"
```

The `eq` helper enables clean conditional logic based on parameter values:
- Compare strings: `(eq detailLevel "medium")`
- Compare with enum values: `(eq status "pending")`
- Multiple conditions: `{{#if (eq level "1")}}First{{/if}}{{#if (eq level "2")}}Second{{/if}}`

#### Negation Helper
Use `{{#if (not variable)}}...{{/if}}` for negation conditions:
```
"user": "{{#if (not useResearch)}}Use basic analysis{{/if}}"
"user": "{{#if (not hasSubtasks)}}This task has no subtasks{{/if}}"
```

The `not` helper enables clean negative conditional logic:
- Negate boolean values: `(not useResearch)`
- Negate truthy/falsy values: `(not emptyArray)`
- Cleaner than separate boolean parameters: No need for `notUseResearch` flags

#### Numeric Comparison Helpers
Use `{{#if (gt variable number)}}...{{/if}}` for greater than comparisons:
```
"user": "generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks"
"user": "{{#if (gt complexity 5)}}This is a complex task{{/if}}"
"system": "create {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks"
```

Use `{{#if (gte variable number)}}...{{/if}}` for greater than or equal comparisons:
```
"user": "{{#if (gte priority 8)}}HIGH PRIORITY{{/if}}"
"user": "{{#if (gte threshold 1)}}Analysis enabled{{/if}}"
"system": "{{#if (gte complexityScore 8)}}Use detailed breakdown approach{{/if}}"
```

The numeric comparison helpers enable sophisticated conditional logic:
- **Dynamic counting**: `{{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}}`
- **Threshold-based behavior**: `(gte complexityScore 8)` for high-complexity handling
- **Zero checks**: `(gt subtaskCount 0)` for conditional content generation
- **Decimal support**: `(gt score 7.5)` for fractional comparisons
- **Enhanced prompt sophistication**: powers the conditional task-count logic used by the parse-prd and expand-task prompts

### Loops
Use `{{#each array}}...{{/each}}` to iterate over arrays:
```
"user": "Tasks:\n{{#each tasks}}- {{id}}: {{title}}\n{{/each}}"
```

### Special Loop Variables
Inside `{{#each}}` blocks, you have access to:
- `{{@index}}`: Current array index (0-based)
- `{{@first}}`: Boolean, true for first item
- `{{@last}}`: Boolean, true for last item

```
"user": "{{#each tasks}}{{@index}}. {{title}}{{#unless @last}}\n{{/unless}}{{/each}}"
```

### JSON Serialization
Use `{{{json variable}}}` (triple braces) to serialize objects/arrays to JSON:
```
"user": "Analyze these tasks: {{{json tasks}}}"
```

### Nested Properties
Access nested properties with dot notation:
```
"user": "Project: {{context.projectName}}"
```

## Prompt Variants

Variants allow different prompts based on conditions:

```json
{
  "prompts": {
    "default": {
      "system": "Default system prompt",
      "user": "Default user prompt"
    },
    "research": {
      "condition": "useResearch === true",
      "system": "Research-focused system prompt",
      "user": "Research-focused user prompt"
    },
    "high-complexity": {
      "condition": "complexityScore >= 8",
      "system": "Complex task handling prompt",
      "user": "Detailed breakdown request"
    }
  }
}
```

### Condition Evaluation
Conditions are JavaScript expressions evaluated with parameter values as context:
- Simple comparisons: `useResearch === true`
- Numeric comparisons: `threshold >= 5`
- String matching: `priority === 'high'`
- Complex logic: `useResearch && threshold > 7`

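A minimal way to picture this evaluation (a sketch that assumes conditions only reference declared parameters; the actual PromptManager implementation may differ):

```javascript
// Sketch: evaluate a variant condition with the parameter values in scope
function evaluateCondition(condition, params) {
  const names = Object.keys(params);
  const values = Object.values(params);
  // The condition string references parameters directly, e.g. "useResearch === true"
  const fn = new Function(...names, `return (${condition});`);
  return Boolean(fn(...values));
}

evaluateCondition('useResearch && threshold > 7', { useResearch: true, threshold: 9 }); // true
```
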
## PromptManager Module

The PromptManager is implemented in `scripts/modules/prompt-manager.js` and provides:
- **Template loading and caching**: Templates are loaded once and cached for performance
- **Schema validation**: Comprehensive validation using AJV with detailed error reporting
- **Variable substitution**: Handlebars-like syntax for dynamic content
- **Variant selection**: Automatic selection based on conditions
- **Error handling**: Graceful fallbacks and detailed error messages
- **Singleton pattern**: One instance per project root for efficiency

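The loading and caching behavior can be sketched roughly as follows. This is an assumed shape for illustration only; the actual class lives in `scripts/modules/prompt-manager.js`:

```javascript
// Rough sketch of template caching (not the actual source)
import { readFileSync } from 'fs';
import path from 'path';

class PromptManagerSketch {
  constructor(promptsDir) {
    this.promptsDir = promptsDir;
    this.cache = new Map(); // templateId -> parsed template, loaded once
  }

  getTemplate(id) {
    if (!this.cache.has(id)) {
      const file = path.join(this.promptsDir, `${id}.json`);
      this.cache.set(id, JSON.parse(readFileSync(file, 'utf8')));
    }
    return this.cache.get(id);
  }
}
```
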
### Validation Behavior
- **Schema Available**: Full validation with detailed error messages
- **Schema Missing**: Falls back to basic structural validation
- **Invalid Templates**: Throws descriptive errors with field-level details
- **Parameter Validation**: Type checking, pattern matching, range validation

## Usage in Code

### Basic Usage
```javascript
import { getPromptManager } from '../prompt-manager.js';

const promptManager = getPromptManager();
const { systemPrompt, userPrompt, metadata } = promptManager.loadPrompt('add-task', {
  // Parameters matching the template's parameter definitions
  prompt: 'Create a user authentication system',
  newTaskId: 5,
  priority: 'high',
  useResearch: false
});

// Use with AI service
const result = await generateObjectService({
  systemPrompt,
  prompt: userPrompt,
  // ... other AI parameters
});
```

### With Variants
```javascript
// Research variant will be selected automatically
const { systemPrompt, userPrompt } = promptManager.loadPrompt('expand-task', {
  useResearch: true, // Triggers research variant
  task: taskObject,
  subtaskCount: 5
});
```

### Error Handling
```javascript
try {
  const result = promptManager.loadPrompt('invalid-template', {});
} catch (error) {
  if (error.message.includes('Schema validation failed')) {
    console.error('Template validation error:', error.message);
  } else if (error.message.includes('not found')) {
    console.error('Template not found:', error.message);
  }
}
```

## Adding New Prompts

1. **Create the JSON file** following the template structure
2. **Define parameters** with proper types, validation, and descriptions
3. **Create prompts** with clear system and user templates
4. **Use template variables** for dynamic content
5. **Add variants** if needed for different contexts
6. **Test thoroughly** with the PromptManager
7. **Update this documentation** with the new prompt details

### Example New Prompt
```json
{
  "id": "new-feature",
  "version": "1.0.0",
  "description": "Generate code for a new feature",
  "parameters": {
    "featureName": {
      "type": "string",
      "required": true,
      "pattern": "^[a-zA-Z][a-zA-Z0-9-]*$",
      "description": "Name of the feature to implement"
    },
    "complexity": {
      "type": "string",
      "required": false,
      "enum": ["simple", "medium", "complex"],
      "default": "medium",
      "description": "Feature complexity level"
    }
  },
  "prompts": {
    "default": {
      "system": "You are a senior software engineer.",
      "user": "Create a {{complexity}} {{featureName}} feature."
    }
  }
}
```

## Best Practices

### Template Design
1. **Clear IDs**: Use kebab-case, descriptive identifiers
2. **Semantic Versioning**: Follow semver for version management
3. **Comprehensive Parameters**: Define all required and optional parameters
4. **Type Safety**: Use proper parameter types and validation
5. **Clear Descriptions**: Document what each prompt and parameter does

### Variable Usage
1. **Meaningful Names**: Use descriptive variable names
2. **Consistent Patterns**: Follow established naming conventions
3. **Safe Defaults**: Provide sensible default values
4. **Validation**: Use patterns, enums, and ranges for validation

### Variant Strategy
1. **Simple Conditions**: Keep variant conditions easy to understand
2. **Clear Purpose**: Each variant should have a distinct use case
3. **Fallback Logic**: Always provide a default variant
4. **Documentation**: Explain when each variant is used

### Performance
1. **Caching**: Templates are cached automatically
2. **Lazy Loading**: Templates load only when needed
3. **Minimal Variants**: Don't create unnecessary variants
4. **Efficient Conditions**: Keep condition evaluation fast

## Testing Prompts

### Validation Testing
```javascript
// Test schema validation
const promptManager = getPromptManager();
const results = promptManager.validateAllPrompts();
console.log(`Valid: ${results.valid.length}, Errors: ${results.errors.length}`);
```

### Integration Testing
When modifying prompts, make sure to test that:
- Variable substitution works with actual data structures
- Variant selection triggers correctly based on conditions
- AI responses remain consistent with expected behavior
- All parameters are properly validated
- Error handling works for invalid inputs

### Quick Testing
```javascript
// Test prompt loading and variable substitution
const promptManager = getPromptManager();
const result = promptManager.loadPrompt('research', {
  query: 'What are the latest React best practices?',
  detailLevel: 'medium',
  gatheredContext: 'React project with TypeScript'
});
console.log('System:', result.systemPrompt);
console.log('User:', result.userPrompt);
console.log('Metadata:', result.metadata);
```

### Testing Checklist
- [ ] Template validates against schema
- [ ] All required parameters are defined
- [ ] Variable substitution works correctly
- [ ] Variants trigger under correct conditions
- [ ] Error messages are clear and helpful
- [ ] Performance is acceptable for repeated usage

## Troubleshooting

### Common Issues

**Schema Validation Errors**:
- Check required fields are present
- Verify parameter types match schema
- Ensure version follows semantic versioning
- Validate JSON syntax

**Variable Substitution Problems**:
- Check variable names match parameter names
- Verify nested property access syntax
- Ensure array iteration syntax is correct
- Test with actual data structures

**Variant Selection Issues**:
- Verify condition syntax is valid JavaScript
- Check parameter values match condition expectations
- Ensure default variant exists
- Test condition evaluation with debug logging

**Performance Issues**:
- Check for circular references in templates
- Verify caching is working correctly
- Monitor template loading frequency
- Consider simplifying complex conditions

src/prompts/add-task.json (new file, 56 lines)

{
  "id": "add-task",
  "version": "1.0.0",
  "description": "Generate a new task based on description",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["task-creation", "generation"]
  },
  "parameters": {
    "prompt": {
      "type": "string",
      "required": true,
      "description": "User's task description"
    },
    "newTaskId": {
      "type": "number",
      "required": true,
      "description": "ID for the new task"
    },
    "existingTasks": {
      "type": "array",
      "description": "List of existing tasks for context"
    },
    "gatheredContext": {
      "type": "string",
      "description": "Context gathered from codebase analysis"
    },
    "contextFromArgs": {
      "type": "string",
      "description": "Additional context from manual args"
    },
    "priority": {
      "type": "string",
      "default": "medium",
      "enum": ["high", "medium", "low"],
      "description": "Task priority"
    },
    "dependencies": {
      "type": "array",
      "description": "Task dependency IDs"
    },
    "useResearch": {
      "type": "boolean",
      "default": false,
      "description": "Use research mode"
    }
  },
  "prompts": {
    "default": {
      "system": "You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema. Pay special attention to dependencies between tasks, ensuring the new task correctly references any tasks it depends on.\n\nWhen determining dependencies for a new task, follow these principles:\n1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n2. Prioritize task dependencies that are semantically related to the functionality being built.\n3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n6. Pay special attention to foundation tasks (1-5) but don't automatically include them without reason.\n7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\nThe dependencies array should contain task IDs (numbers) of prerequisite tasks.{{#if useResearch}}\n\nResearch current best practices and technologies relevant to this task.{{/if}}",
      "user": "You are generating the details for Task #{{newTaskId}}. Based on the user's request: \"{{prompt}}\", create a comprehensive new task for a software development project.\n \n {{gatheredContext}}\n \n {{#if useResearch}}Research current best practices, technologies, and implementation patterns relevant to this task. {{/if}}Based on the information about existing tasks provided above, include appropriate dependencies in the \"dependencies\" array. Only include task IDs that this new task directly depends on.\n \n Return your answer as a single JSON object matching the schema precisely:\n \n {\n \"title\": \"Task title goes here\",\n \"description\": \"A concise one or two sentence description of what the task involves\",\n \"details\": \"Detailed implementation steps, considerations, code examples, or technical approach\",\n \"testStrategy\": \"Specific steps to verify correct implementation and functionality\",\n \"dependencies\": [1, 3] // Example: IDs of tasks that must be completed before this task\n }\n \n Make sure the details and test strategy are comprehensive and specific{{#if useResearch}}, incorporating current best practices from your research{{/if}}. DO NOT include the task ID in the title.\n {{#if contextFromArgs}}{{contextFromArgs}}{{/if}}"
    }
  }
}

src/prompts/analyze-complexity.json (new file, 41 lines)

{
  "id": "analyze-complexity",
  "version": "1.0.0",
  "description": "Analyze task complexity and generate expansion recommendations",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["analysis", "complexity", "expansion", "recommendations"]
  },
  "parameters": {
    "tasks": {
      "type": "array",
      "required": true,
      "description": "Array of tasks to analyze"
    },
    "gatheredContext": {
      "type": "string",
      "default": "",
      "description": "Additional project context"
    },
    "threshold": {
      "type": "number",
      "default": 5,
      "min": 1,
      "max": 10,
      "description": "Complexity threshold for expansion recommendation"
    },
    "useResearch": {
      "type": "boolean",
      "default": false,
      "description": "Use research mode for deeper analysis"
    }
  },
  "prompts": {
    "default": {
      "system": "You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.",
      "user": "Analyze the following tasks to determine their complexity (1-10 scale) and recommend the number of subtasks for expansion. Provide a brief reasoning and an initial expansion prompt for each.{{#if useResearch}} Consider current best practices, common implementation patterns, and industry standards in your analysis.{{/if}}\n\nTasks:\n{{{json tasks}}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\nRespond ONLY with a valid JSON array matching the schema:\n[\n {\n \"taskId\": <number>,\n \"taskTitle\": \"<string>\",\n \"complexityScore\": <number 1-10>,\n \"recommendedSubtasks\": <number>,\n \"expansionPrompt\": \"<string>\",\n \"reasoning\": \"<string>\"\n },\n ...\n]\n\nDo not include any explanatory text, markdown formatting, or code block markers before or after the JSON array."
    }
  }
}

src/prompts/expand-task.json (new file, 72 lines)

{
  "id": "expand-task",
  "version": "1.0.0",
  "description": "Break down a task into detailed subtasks",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["expansion", "subtasks", "breakdown"]
  },
  "parameters": {
    "subtaskCount": {
      "type": "number",
      "required": true,
      "description": "Number of subtasks to generate"
    },
    "task": {
      "type": "object",
      "required": true,
      "description": "The task to expand"
    },
    "nextSubtaskId": {
      "type": "number",
      "required": true,
      "description": "Starting ID for new subtasks"
    },
    "useResearch": {
      "type": "boolean",
      "default": false,
      "description": "Use research mode"
    },
    "expansionPrompt": {
      "type": "string",
      "required": false,
      "description": "Expansion prompt from complexity report"
    },
    "additionalContext": {
      "type": "string",
      "required": false,
      "default": "",
      "description": "Additional context for task expansion"
    },
    "complexityReasoningContext": {
      "type": "string",
      "required": false,
      "default": "",
      "description": "Complexity analysis reasoning context"
    },
    "gatheredContext": {
      "type": "string",
      "required": false,
      "default": "",
      "description": "Gathered project context"
    }
  },
  "prompts": {
    "complexity-report": {
      "condition": "expansionPrompt",
      "system": "You are an AI assistant helping with task breakdown. Generate {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks based on the provided prompt and context.\nRespond ONLY with a valid JSON object containing a single key \"subtasks\" whose value is an array of the generated subtask objects.\nEach subtask object in the array must have keys: \"id\", \"title\", \"description\", \"dependencies\", \"details\", \"status\".\nEnsure the 'id' starts from {{nextSubtaskId}} and is sequential.\nEnsure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from {{nextSubtaskId}}).\nEnsure 'status' is 'pending'.\nDo not include any other text or explanation.",
      "user": "{{expansionPrompt}}{{#if additionalContext}}\n\n{{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\n\n{{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}"
    },
    "research": {
      "condition": "useResearch === true && !expansionPrompt",
      "system": "You are an AI assistant that responds ONLY with valid JSON objects as requested. The object should contain a 'subtasks' array.",
      "user": "Analyze the following task and break it down into {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks using your research capabilities. Assign sequential IDs starting from {{nextSubtaskId}}.\n\nParent Task:\nID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}{{#if additionalContext}}\nConsider this context: {{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\nComplexity Analysis Reasoning: {{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nCRITICAL: Respond ONLY with a valid JSON object containing a single key \"subtasks\". The value must be an array of the generated subtasks, strictly matching this structure:\n\n{\n \"subtasks\": [\n {\n \"id\": <number>, // Sequential ID starting from {{nextSubtaskId}}\n \"title\": \"<string>\",\n \"description\": \"<string>\",\n \"dependencies\": [<number>], // e.g., [{{nextSubtaskId}} + 1]. If no dependencies, use an empty array [].\n \"details\": \"<string>\",\n \"testStrategy\": \"<string>\" // Optional\n },\n // ... (repeat for {{#if (gt subtaskCount 0)}}{{subtaskCount}}{{else}}appropriate number of{{/if}} subtasks)\n ]\n}\n\nImportant: For the 'dependencies' field, if a subtask has no dependencies, you MUST use an empty array, for example: \"dependencies\": []. Do not use null or omit the field.\n\nDo not include ANY explanatory text, markdown, or code block markers. Just the JSON object."
    },
    "default": {
      "system": "You are an AI assistant helping with task breakdown for software development.\nYou need to break down a high-level task into {{#if (gt subtaskCount 0)}}{{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks that can be implemented one by one.\n\nSubtasks should:\n1. Be specific and actionable implementation steps\n2. Follow a logical sequence\n3. Each handle a distinct part of the parent task\n4. Include clear guidance on implementation approach\n5. Have appropriate dependency chains between subtasks (using the new sequential IDs)\n6. Collectively cover all aspects of the parent task\n\nFor each subtask, provide:\n- id: Sequential integer starting from the provided nextSubtaskId\n- title: Clear, specific title\n- description: Detailed description\n- dependencies: Array of prerequisite subtask IDs (use the new sequential IDs)\n- details: Implementation details, the output should be in string\n- testStrategy: Optional testing approach\n\nRespond ONLY with a valid JSON object containing a single key \"subtasks\" whose value is an array matching the structure described. Do not include any explanatory text, markdown formatting, or code block markers.",
      "user": "Break down this task into {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} specific subtasks:\n\nTask ID: {{task.id}}\nTitle: {{task.title}}\nDescription: {{task.description}}\nCurrent details: {{#if task.details}}{{task.details}}{{else}}None{{/if}}{{#if additionalContext}}\nAdditional context: {{additionalContext}}{{/if}}{{#if complexityReasoningContext}}\nComplexity Analysis Reasoning: {{complexityReasoningContext}}{{/if}}{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}{{/if}}\n\nReturn ONLY the JSON object containing the \"subtasks\" array, matching this structure:\n\n{\n \"subtasks\": [\n {\n \"id\": {{nextSubtaskId}}, // First subtask ID\n \"title\": \"Specific subtask title\",\n \"description\": \"Detailed description\",\n \"dependencies\": [], // e.g., [{{nextSubtaskId}} + 1] if it depends on the next\n \"details\": \"Implementation guidance\",\n \"testStrategy\": \"Optional testing approach\"\n },\n // ... (repeat for {{#if (gt subtaskCount 0)}}a total of {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks with sequential IDs)\n ]\n}"
    }
  }
}

src/prompts/parse-prd.json (new file, 51 lines)

{
  "id": "parse-prd",
  "version": "1.0.0",
  "description": "Parse a Product Requirements Document into structured tasks",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["prd", "parsing", "initialization"]
  },
  "parameters": {
    "numTasks": {
      "type": "number",
      "required": true,
      "description": "Target number of tasks to generate"
    },
    "nextId": {
      "type": "number",
      "required": true,
      "description": "Starting ID for tasks"
    },
    "research": {
      "type": "boolean",
      "default": false,
      "description": "Enable research mode for latest best practices"
    },
    "prdContent": {
      "type": "string",
      "required": true,
      "description": "Content of the PRD file"
    },
    "prdPath": {
      "type": "string",
      "required": true,
      "description": "Path to the PRD file"
    },
    "defaultTaskPriority": {
      "type": "string",
      "required": false,
      "default": "medium",
      "enum": ["high", "medium", "low"],
      "description": "Default priority for generated tasks"
    }
  },
  "prompts": {
    "default": {
      "system": "You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.{{#if research}}\nBefore breaking down the PRD into tasks, you will:\n1. Research and analyze the latest technologies, libraries, frameworks, and best practices that would be appropriate for this project\n2. Identify any potential technical challenges, security concerns, or scalability issues not explicitly mentioned in the PRD without discarding any explicit requirements or going overboard with complexity -- always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches\n3. Consider current industry standards and evolving trends relevant to this project (this step aims to solve LLM hallucinations and out of date information due to training data cutoff dates)\n4. Evaluate alternative implementation approaches and recommend the most efficient path\n5. Include specific library versions, helpful APIs, and concrete implementation guidance based on your research\n6. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches\n\nYour task breakdown should incorporate this research, resulting in more detailed implementation guidance, more accurate dependency mapping, and more precise technology recommendations than would be possible from the PRD text alone, while maintaining all explicit requirements and best practices and all details and nuances of the PRD.{{/if}}\n\nAnalyze the provided PRD content and generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD\nEach task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.\nAssign sequential IDs starting from {{nextId}}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.\nSet status to 'pending', dependencies to an empty array [], and priority to '{{defaultTaskPriority}}' initially for all tasks.\nRespond ONLY with a valid JSON object containing a single key \"tasks\", where the value is an array of task objects adhering to the provided Zod schema. Do not include any explanation or markdown formatting.\n\nEach task should follow this JSON structure:\n{\n\t\"id\": number,\n\t\"title\": string,\n\t\"description\": string,\n\t\"status\": \"pending\",\n\t\"dependencies\": number[] (IDs of tasks this depends on),\n\t\"priority\": \"high\" | \"medium\" | \"low\",\n\t\"details\": string (implementation details),\n\t\"testStrategy\": string (validation approach)\n}\n\nGuidelines:\n1. {{#if (gt numTasks 0)}}Unless complexity warrants otherwise{{else}}Depending on the complexity{{/if}}, create {{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, numbered sequentially starting from {{nextId}}\n2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards\n3. Order tasks logically - consider dependencies and implementation sequence\n4. Early tasks should focus on setup, core functionality first, then advanced features\n5. Include clear validation/testing approach for each task\n6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than {{nextId}} if applicable)\n7. Assign priority (high/medium/low) based on criticality and dependency order\n8. Include detailed implementation guidance in the \"details\" field{{#if research}}, with specific libraries and version recommendations based on your research{{/if}}\n9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance\n10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements\n11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches{{#if research}}\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research{{/if}}",
      "user": "Here's the Product Requirements Document (PRD) to break down into {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, starting IDs from {{nextId}}:{{#if research}}\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.{{/if}}\n\n{{prdContent}}\n\n\n\t\tReturn your response in this format:\n{\n \"tasks\": [\n {\n \"id\": 1,\n \"title\": \"Setup Project Repository\",\n \"description\": \"...\",\n ...\n },\n ...\n ],\n \"metadata\": {\n \"projectName\": \"PRD Implementation\",\n \"totalTasks\": {{#if (gt numTasks 0)}}{{numTasks}}{{else}}{number of tasks}{{/if}},\n \"sourceFile\": \"{{prdPath}}\",\n \"generatedAt\": \"YYYY-MM-DD\"\n }\n}"
    }
  }
}

src/prompts/research.json (new file, 53 lines)

{
  "id": "research",
  "version": "1.0.0",
  "description": "Perform AI-powered research with project context",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["research", "context-aware", "information-gathering"]
  },
  "parameters": {
    "query": {
      "type": "string",
      "required": true,
      "description": "Research query"
    },
    "gatheredContext": {
      "type": "string",
      "default": "",
      "description": "Gathered project context"
    },
    "detailLevel": {
      "type": "string",
      "enum": ["low", "medium", "high"],
      "default": "medium",
      "description": "Level of detail for the response"
    },
    "projectInfo": {
      "type": "object",
      "description": "Project information",
      "properties": {
        "root": {
          "type": "string",
          "description": "Project root path"
        },
        "taskCount": {
          "type": "number",
          "description": "Number of related tasks"
        },
        "fileCount": {
          "type": "number",
          "description": "Number of related files"
        }
      }
    }
  },
  "prompts": {
    "default": {
      "system": "You are an expert AI research assistant helping with a software development project. You have access to project context including tasks, files, and project structure.\n\nYour role is to provide comprehensive, accurate, and actionable research responses based on the user's query and the provided project context.\n{{#if (eq detailLevel \"low\")}}\n**Response Style: Concise & Direct**\n- Provide brief, focused answers (2-4 paragraphs maximum)\n- Focus on the most essential information\n- Use bullet points for key takeaways\n- Avoid lengthy explanations unless critical\n- Skip pleasantries, introductions, and conclusions\n- No phrases like \"Based on your project context\" or \"I'll provide guidance\"\n- No summary outros or alignment statements\n- Get straight to the actionable information\n- Use simple, direct language - users want info, not explanation{{/if}}{{#if (eq detailLevel \"medium\")}}\n**Response Style: Balanced & Comprehensive**\n- Provide thorough but well-structured responses (4-8 paragraphs)\n- Include relevant examples and explanations\n- Balance depth with readability\n- Use headings and bullet points for organization{{/if}}{{#if (eq detailLevel \"high\")}}\n**Response Style: Detailed & Exhaustive**\n- Provide comprehensive, in-depth analysis (8+ paragraphs)\n- Include multiple perspectives and approaches\n- Provide detailed examples, code snippets, and step-by-step guidance\n- Cover edge cases and potential pitfalls\n- Use clear structure with headings, subheadings, and lists{{/if}}\n\n**Guidelines:**\n- Always consider the project context when formulating responses\n- Reference specific tasks, files, or project elements when relevant\n- Provide actionable insights that can be applied to the project\n- If the query relates to existing project tasks, suggest how the research applies to those tasks\n- Use markdown formatting for better readability\n- Be precise and avoid speculation unless clearly marked as such\n{{#if (eq detailLevel \"low\")}}\n**For LOW detail level specifically:**\n- Start immediately with the core information\n- No introductory phrases or context acknowledgments\n- No concluding summaries or project alignment statements\n- Focus purely on facts, steps, and actionable items{{/if}}",
      "user": "# Research Query\n\n{{query}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\n# Instructions\n\nPlease research and provide a {{detailLevel}}-detail response to the query above. Consider the project context provided and make your response as relevant and actionable as possible for this specific project."
    }
  }
}

src/prompts/schemas/README.md (new file, 402 lines)

# Task Master JSON Schemas

This directory contains JSON schemas for validating Task Master prompt templates. These schemas provide IDE support, validation, and better developer experience when working with prompt templates.

## Overview

The schema system provides:
- **Structural Validation**: Ensures all required fields and proper JSON structure
- **Type Safety**: Validates parameter types and value constraints
- **IDE Integration**: IntelliSense and auto-completion in VS Code
- **Development Safety**: Catches errors before runtime
- **Documentation**: Self-documenting templates through schema definitions

## Schema Files

### `prompt-template.schema.json` (Main Schema)
**Version**: 1.0.0
**Purpose**: Main schema for Task Master prompt template files

**Validates**:
- Template metadata (id, version, description)
- Parameter definitions with comprehensive type validation
- Prompt variants with conditional logic
- Cross-references between parameters and template variables
- Semantic versioning compliance
- Handlebars template syntax

**Required Fields**:
- `id`: Unique template identifier (kebab-case)
- `version`: Semantic version (e.g., "1.0.0")
- `description`: Human-readable description
- `prompts.default`: Default prompt variant

**Optional Fields**:
- `metadata`: Additional template information
- `parameters`: Parameter definitions for template variables
- `prompts.*`: Additional prompt variants

### `parameter.schema.json` (Parameter Schema)
**Version**: 1.0.0
**Purpose**: Reusable schema for individual prompt parameters

**Supports**:
- **Type Validation**: `string`, `number`, `boolean`, `array`, `object`
- **Constraints**: Required/optional parameters, default values
- **String Validation**: Pattern matching (regex), enum constraints
- **Numeric Validation**: Minimum/maximum values, integer constraints
- **Array Validation**: Item types, minimum/maximum length
- **Object Validation**: Property definitions and required fields

**Parameter Properties**:
```json
{
  "type": "string|number|boolean|array|object",
  "required": true|false,
  "default": "any value matching type",
  "description": "Parameter documentation",
  "enum": ["option1", "option2"],
  "pattern": "^regex$",
  "minimum": 0,
  "maximum": 100,
  "minLength": 1,
  "maxLength": 255,
  "items": { "type": "string" },
  "properties": { "key": { "type": "string" } }
}
```

### `variant.schema.json` (Variant Schema)
**Version**: 1.0.0
**Purpose**: Schema for prompt template variants

**Validates**:
- System and user prompt templates
- Conditional expressions for variant selection
- Variable placeholders using Handlebars syntax
- Variant metadata and descriptions

**Variant Structure**:
```json
{
  "condition": "JavaScript expression",
  "system": "System prompt template",
  "user": "User prompt template",
  "metadata": {
    "description": "When to use this variant"
  }
}
```

## Schema Validation Rules

### Template ID Validation
- **Pattern**: `^[a-z][a-z0-9-]*[a-z0-9]$`
- **Format**: Kebab-case, alphanumeric with hyphens
- **Examples**:
  - ✅ `add-task`, `parse-prd`, `analyze-complexity`
  - ❌ `AddTask`, `add_task`, `-invalid-`, `task-`

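These rules are easy to try out in isolation; the quick sketch below just exercises the pattern shown above:

```javascript
// Quick check of the template ID pattern (sketch)
const idPattern = /^[a-z][a-z0-9-]*[a-z0-9]$/;

['add-task', 'parse-prd', 'analyze-complexity'].every((id) => idPattern.test(id)); // true
['AddTask', 'add_task', '-invalid-', 'task-'].some((id) => idPattern.test(id));    // false
```
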
### Version Validation
|
||||
- **Pattern**: Semantic versioning (semver)
|
||||
- **Format**: `MAJOR.MINOR.PATCH`
|
||||
- **Examples**:
|
||||
- ✅ `1.0.0`, `2.1.3`, `10.0.0`
|
||||
- ❌ `1.0`, `v1.0.0`, `1.0.0-beta`
|
||||
|
||||
### Parameter Type Validation
|
||||
- **String**: Text values with optional pattern/enum constraints
|
||||
- **Number**: Numeric values with optional min/max constraints
|
||||
- **Boolean**: True/false values
|
||||
- **Array**: Lists with optional item type validation
|
||||
- **Object**: Complex structures with property definitions
|
||||
|
||||
### Template Variable Validation
|
||||
- **Handlebars Syntax**: `{{variable}}`, `{{#if condition}}`, `{{#each array}}`
|
||||
- **Parameter References**: All template variables must have corresponding parameters
|
||||
- **Nested Access**: Support for `{{object.property}}` notation
|
||||
- **Special Variables**: `{{@index}}`, `{{@first}}`, `{{@last}}` in loops
|
||||
|
||||
## IDE Integration
|
||||
|
||||
### VS Code Setup
|
||||
The VS Code profile automatically configures schema validation:
|
||||
|
||||
```json
|
||||
{
|
||||
"json.schemas": [
|
||||
{
|
||||
"fileMatch": [
|
||||
"src/prompts/**/*.json",
|
||||
".taskmaster/prompts/**/*.json",
|
||||
"prompts/**/*.json"
|
||||
],
|
||||
"url": "./src/prompts/schemas/prompt-template.schema.json"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Features Provided**:
|
||||
- **Auto-completion**: IntelliSense for all schema properties
|
||||
- **Real-time Validation**: Immediate error highlighting
|
||||
- **Hover Documentation**: Parameter descriptions on hover
|
||||
- **Error Messages**: Detailed validation error explanations
|
||||
|
||||
### Other IDEs
|
||||
For other development environments:
|
||||
|
||||
**Schema URLs**:
|
||||
- **Local Development**: `./src/prompts/schemas/prompt-template.schema.json`
|
||||
- **GitHub Reference**: `https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/prompt-template.schema.json`
|
||||
|
||||
**File Patterns**:
|
||||
- `src/prompts/**/*.json`
|
||||
- `.taskmaster/prompts/**/*.json`
|
||||
- `prompts/**/*.json`
|
||||
|
||||
## Validation Examples

### Valid Template Example
```json
{
  "id": "example-prompt",
  "version": "1.0.0",
  "description": "Example prompt template with comprehensive validation",
  "metadata": {
    "author": "Task Master Team",
    "category": "task",
    "tags": ["example", "validation"]
  },
  "parameters": {
    "taskDescription": {
      "type": "string",
      "description": "Description of the task to perform",
      "required": true,
      "minLength": 5,
      "maxLength": 500
    },
    "priority": {
      "type": "string",
      "description": "Task priority level",
      "required": false,
      "enum": ["high", "medium", "low"],
      "default": "medium"
    },
    "maxTokens": {
      "type": "number",
      "description": "Maximum tokens for response",
      "required": false,
      "minimum": 100,
      "maximum": 4000,
      "default": 1000
    },
    "useResearch": {
      "type": "boolean",
      "description": "Whether to include research context",
      "required": false,
      "default": false
    },
    "tags": {
      "type": "array",
      "description": "Task tags for categorization",
      "required": false,
      "items": {
        "type": "string",
        "pattern": "^[a-z][a-z0-9-]*$"
      }
    }
  },
  "prompts": {
    "default": {
      "system": "You are a helpful AI assistant that creates tasks with {{priority}} priority.",
      "user": "Create a task: {{taskDescription}}{{#if tags}}\nTags: {{#each tags}}{{this}}{{#unless @last}}, {{/unless}}{{/each}}{{/if}}"
    },
    "research": {
      "condition": "useResearch === true",
      "system": "You are a research-focused AI assistant with access to current information.",
      "user": "Research and create a task: {{taskDescription}}"
    }
  }
}
```

### Common Validation Errors

**Missing Required Fields**:
```json
// ❌ Error: Missing required 'id' field
{
  "version": "1.0.0",
  "description": "Missing ID"
}
```

**Invalid ID Format**:
```json
// ❌ Error: ID must be kebab-case
{
  "id": "InvalidID_Format",
  "version": "1.0.0"
}
```

**Parameter Type Mismatch**:
```json
// ❌ Error: Parameter type doesn't match usage
{
  "parameters": {
    "count": { "type": "string" }
  },
  "prompts": {
    "default": {
      "user": "Process {{count}} items" // Should be number for counting
    }
  }
}
```

**Invalid Condition Syntax**:
```json
// ❌ Error: Invalid JavaScript in condition
{
  "prompts": {
    "variant": {
      "condition": "useResearch = true", // Should be ===
      "user": "Research prompt"
    }
  }
}
```

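Errors like the ones above are reported by AJV when a template is checked against the main schema. The following standalone sketch shows one way to run that check by hand; it assumes AJV v8 with `ajv-formats`, and the loading code is illustrative rather than the project's built-in validator.

```javascript
import fs from 'node:fs';
import Ajv from 'ajv';
import addFormats from 'ajv-formats';

// Illustrative manual validation run against the main template schema.
const schema = JSON.parse(
  fs.readFileSync('src/prompts/schemas/prompt-template.schema.json', 'utf8')
);
const template = JSON.parse(fs.readFileSync('src/prompts/add-task.json', 'utf8'));

// strict: false tolerates the non-standard top-level "version" keyword in the schema files.
const ajv = new Ajv({ allErrors: true, strict: false });
addFormats(ajv); // needed for the "date-time" format used in metadata

const validate = ajv.compile(schema);
if (!validate(template)) {
  // Each error carries instancePath, schemaPath, keyword, params, and message.
  console.error(validate.errors);
} else {
  console.log('Template is valid');
}
```
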
## Development Workflow

### Creating New Templates
1. **Start with Schema**: Use VS Code with schema validation enabled
2. **Define Structure**: Begin with the required fields (id, version, description)
3. **Add Parameters**: Define all template variables with proper types
4. **Create Prompts**: Write system and user prompts with template variables
5. **Test Validation**: Ensure the template validates without errors
6. **Add Variants**: Create additional variants if needed
7. **Document Usage**: Update the main README with template details

### Modifying Existing Templates
1. **Check Current Version**: Note the current version number
2. **Assess Changes**: Determine if changes are breaking or non-breaking
3. **Update Version**: Increment the version following semantic versioning
4. **Maintain Compatibility**: Avoid breaking existing parameter contracts
5. **Test Thoroughly**: Verify all existing code still works
6. **Update Documentation**: Reflect changes in README files

### Schema Evolution
When updating the schemas themselves:

1. **Backward Compatibility**: Ensure existing templates remain valid
2. **Version Increment**: Update the schema version in the `$id` and `version` fields
3. **Test Migration**: Validate all existing templates against the new schema
4. **Document Changes**: Update this README with schema changes
5. **Coordinate Release**: Ensure schema and template changes are synchronized

## Advanced Validation Features

### Cross-Reference Validation
The validation layer checks that:
- All template variables have corresponding parameters
- Parameter types match their usage in templates
- Variant conditions reference valid parameters
- Nested property access is properly defined

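A much-simplified version of the variable-to-parameter cross-check might look like the sketch below. It only recognizes plain `{{variable}}` and `{{object.property}}` references and ignores helpers, block expressions, and `@`-variables, so treat it as an illustration of the idea rather than the project's actual rule set.

```javascript
// Illustrative cross-reference check: every referenced variable should be a declared parameter.
function findUndeclaredVariables(template) {
  const declared = new Set(Object.keys(template.parameters ?? {}));
  const undeclared = new Set();

  for (const variant of Object.values(template.prompts ?? {})) {
    const text = `${variant.system ?? ''} ${variant.user ?? ''}`;
    // Matches {{variable}} and {{object.property}}; helpers and block expressions are skipped.
    for (const match of text.matchAll(/\{\{\s*([A-Za-z_]\w*)(?:\.\w+)*\s*\}\}/g)) {
      const name = match[1];
      if (name === 'this') continue; // loop contexts such as {{this}}
      if (!declared.has(name)) undeclared.add(name);
    }
  }
  return [...undeclared];
}

// An empty array means every referenced variable is declared in "parameters".
```
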
### Conditional Validation
- **Dynamic Schemas**: Different validation rules based on parameter values
- **Variant Conditions**: JavaScript expression validation
- **Template Syntax**: Handlebars syntax validation
- **Parameter Dependencies**: Required parameters based on other parameters

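For variant conditions specifically, one lightweight approach is to compile the expression without executing it, which catches outright syntax errors; a separate heuristic is needed for the assignment typo shown earlier (`useResearch = true`), since that is syntactically valid JavaScript. Both checks below are sketches of the idea, not necessarily how the validator implements it.

```javascript
// Illustrative checks for variant condition strings.
function compiles(condition, parameterNames = []) {
  try {
    // Compiling (without calling) the function catches syntax errors.
    new Function(...parameterNames, `return (${condition});`);
    return true;
  } catch {
    return false;
  }
}

// Heuristic: a bare "=" that is not part of ==, ===, !=, <= or >= is almost
// certainly a typo for a comparison operator.
function looksLikeAssignment(condition) {
  return /[^=!<>]=(?!=)/.test(condition);
}

console.log(compiles('useResearch ==== true', ['useResearch'])); // false (syntax error)
console.log(looksLikeAssignment('useResearch = true'));          // true  (should be ===)
console.log(looksLikeAssignment('useResearch === true'));        // false
```
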
### Custom Validation Rules
In addition to the structural rules above, validation includes custom checks for:
- **Semantic Versioning**: Proper version format validation
- **Template Variables**: Handlebars syntax and parameter references
- **Condition Expressions**: JavaScript expression syntax validation
- **File Patterns**: Consistent naming conventions

## Performance Considerations

### Schema Loading
- **Caching**: Schemas are loaded once and cached
- **Lazy Loading**: Validation only occurs when templates are accessed
- **Memory Efficiency**: Shared schema instances across templates

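In practice, "loaded once and cached" usually means each schema is compiled a single time and the compiled validator is reused. The sketch below shows that pattern in isolation; `getValidator` is a hypothetical helper, not the PromptManager's actual code.

```javascript
import fs from 'node:fs';
import Ajv from 'ajv';
import addFormats from 'ajv-formats';

// Hypothetical helper: compile each schema once and reuse the compiled validator.
const ajv = new Ajv({ allErrors: true, strict: false });
addFormats(ajv);
const validatorCache = new Map();

function getValidator(schemaPath) {
  if (!validatorCache.has(schemaPath)) {
    const schema = JSON.parse(fs.readFileSync(schemaPath, 'utf8'));
    validatorCache.set(schemaPath, ajv.compile(schema)); // compiled once, then cached
  }
  return validatorCache.get(schemaPath);
}

// Later calls for the same path return the cached validator instead of recompiling.
const validate = getValidator('src/prompts/schemas/prompt-template.schema.json');
```
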
### Validation Performance
- **Fast Validation**: AJV provides optimized validation
- **Error Batching**: Multiple errors reported in a single validation pass
- **Minimal Overhead**: Validation adds minimal runtime cost

### Development Impact
- **IDE Responsiveness**: Real-time validation without noticeable performance impact
- **Build Time**: Schema validation runs during development, not in production
- **Testing Speed**: Fast validation during test execution

## Troubleshooting

### Common Schema Issues

**Schema Not Loading**:
- Check file paths in VS Code settings
- Verify schema files exist and are valid JSON
- Restart VS Code if changes aren't recognized

**Validation Not Working**:
- Ensure `ajv` and `ajv-formats` dependencies are installed
- Check for JSON syntax errors in templates
- Verify schema file paths are correct

**Performance Issues**:
- Check for circular references in schemas
- Verify schema caching is working
- Monitor validation frequency in development

### Debugging Validation Errors

**Understanding Error Messages**:
```javascript
// Example error output
{
  "instancePath": "/parameters/priority/type",
  "schemaPath": "#/properties/parameters/additionalProperties/properties/type/enum",
  "keyword": "enum",
  "params": { "allowedValues": ["string", "number", "boolean", "array", "object"] },
  "message": "must be equal to one of the allowed values"
}
```

**Common Error Patterns**:
- `instancePath`: Shows where in the template the error occurred
- `schemaPath`: Shows which schema rule was violated
- `keyword`: Indicates the type of validation that failed
- `params`: Provides additional context about the validation rule
- `message`: Human-readable description of the error

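When a template produces a long list of such errors, collapsing each AJV error object into one readable line makes the output easier to scan; a small formatter along these lines (illustrative, not part of the project) is usually enough.

```javascript
// Illustrative formatter for AJV error objects like the one shown above.
function formatAjvErrors(errors = []) {
  return errors
    .map((e) => `${e.instancePath || '(root)'}: ${e.message} (${e.keyword})`)
    .join('\n');
}

// For the example error above this prints:
// /parameters/priority/type: must be equal to one of the allowed values (enum)
```
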
### Getting Help

**Internal Resources**:
- Main prompt README: `src/prompts/README.md`
- Schema files: `src/prompts/schemas/*.json`
- PromptManager code: `scripts/modules/prompt-manager.js`

**External Resources**:
- JSON Schema documentation: https://json-schema.org/
- AJV validation library: https://ajv.js.org/
- Handlebars template syntax: https://handlebarsjs.com/

## Schema URLs and References

### Current Schema Locations
- **Local Development**: `./src/prompts/schemas/prompt-template.schema.json`
- **GitHub Blob**: `https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/prompt-template.schema.json`
- **Schema ID**: Used for internal references and validation

### URL Usage Guidelines
- **`$id` Field**: Use GitHub blob URLs for stable schema identification
- **Local References**: Use relative paths for development and testing
- **External Tools**: GitHub blob URLs provide stable, version-controlled access
- **Documentation**: Link to GitHub for public schema access

48
src/prompts/schemas/parameter.schema.json
Normal file
@@ -0,0 +1,48 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/parameter.schema.json",
  "version": "1.0.0",
  "title": "Task Master Prompt Parameter",
  "description": "Schema for individual prompt template parameters",
  "type": "object",
  "required": ["type", "description"],
  "properties": {
    "type": {
      "type": "string",
      "enum": ["string", "number", "boolean", "array", "object"],
      "description": "The expected data type for this parameter"
    },
    "description": {
      "type": "string",
      "minLength": 1,
      "description": "Human-readable description of the parameter"
    },
    "required": {
      "type": "boolean",
      "default": false,
      "description": "Whether this parameter is required"
    },
    "default": {
      "description": "Default value for optional parameters"
    },
    "enum": {
      "type": "array",
      "description": "Valid values for string parameters",
      "items": {
        "type": "string"
      }
    },
    "pattern": {
      "type": "string",
      "description": "Regular expression pattern for string validation"
    },
    "minimum": {
      "type": "number",
      "description": "Minimum value for number parameters"
    },
    "maximum": {
      "type": "number",
      "description": "Maximum value for number parameters"
    }
  }
}
136
src/prompts/schemas/prompt-template.schema.json
Normal file
@@ -0,0 +1,136 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/prompt-template.schema.json",
  "version": "1.0.0",
  "title": "Task Master Prompt Template",
  "description": "Schema for Task Master AI prompt template files",
  "type": "object",
  "required": ["id", "version", "description", "prompts"],
  "properties": {
    "id": {
      "type": "string",
      "pattern": "^[a-z0-9-]+$",
      "description": "Unique identifier for the prompt template"
    },
    "version": {
      "type": "string",
      "pattern": "^\\d+\\.\\d+\\.\\d+$",
      "description": "Semantic version of the prompt template"
    },
    "description": {
      "type": "string",
      "minLength": 1,
      "description": "Brief description of what this prompt does"
    },
    "metadata": {
      "$ref": "#/definitions/metadata"
    },
    "parameters": {
      "type": "object",
      "additionalProperties": {
        "$ref": "#/definitions/parameter"
      }
    },
    "prompts": {
      "type": "object",
      "properties": {
        "default": {
          "$ref": "#/definitions/promptVariant"
        }
      },
      "additionalProperties": {
        "$ref": "#/definitions/conditionalPromptVariant"
      }
    }
  },
  "definitions": {
    "parameter": {
      "type": "object",
      "required": ["type", "description"],
      "properties": {
        "type": {
          "type": "string",
          "enum": ["string", "number", "boolean", "array", "object"]
        },
        "description": {
          "type": "string",
          "minLength": 1
        },
        "required": {
          "type": "boolean",
          "default": false
        },
        "default": {
          "description": "Default value for optional parameters"
        },
        "enum": {
          "type": "array",
          "description": "Valid values for string parameters"
        },
        "pattern": {
          "type": "string",
          "description": "Regular expression pattern for string validation"
        },
        "minimum": {
          "type": "number",
          "description": "Minimum value for number parameters"
        },
        "maximum": {
          "type": "number",
          "description": "Maximum value for number parameters"
        }
      }
    },
    "promptVariant": {
      "type": "object",
      "required": ["system", "user"],
      "properties": {
        "system": {
          "type": "string",
          "minLength": 1
        },
        "user": {
          "type": "string",
          "minLength": 1
        }
      }
    },
    "conditionalPromptVariant": {
      "allOf": [
        { "$ref": "#/definitions/promptVariant" },
        {
          "type": "object",
          "properties": {
            "condition": {
              "type": "string",
              "description": "JavaScript expression for variant selection"
            }
          }
        }
      ]
    },
    "metadata": {
      "type": "object",
      "properties": {
        "author": { "type": "string" },
        "created": { "type": "string", "format": "date-time" },
        "updated": { "type": "string", "format": "date-time" },
        "tags": {
          "type": "array",
          "items": { "type": "string" }
        },
        "category": {
          "type": "string",
          "enum": [
            "task",
            "analysis",
            "research",
            "parsing",
            "update",
            "expansion"
          ]
        }
      }
    }
  }
}
39
src/prompts/schemas/variant.schema.json
Normal file
@@ -0,0 +1,39 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/variant.schema.json",
  "version": "1.0.0",
  "title": "Task Master Prompt Variant",
  "description": "Schema for prompt template variants",
  "type": "object",
  "required": ["system", "user"],
  "properties": {
    "system": {
      "type": "string",
      "minLength": 1,
      "description": "System prompt template with variable placeholders"
    },
    "user": {
      "type": "string",
      "minLength": 1,
      "description": "User prompt template with variable placeholders"
    },
    "condition": {
      "type": "string",
      "description": "JavaScript expression for variant selection (optional, only for non-default variants)"
    },
    "metadata": {
      "type": "object",
      "properties": {
        "description": {
          "type": "string",
          "description": "Description of when this variant should be used"
        },
        "tags": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Tags for categorizing this variant"
        }
      }
    }
  }
}
55
src/prompts/update-subtask.json
Normal file
@@ -0,0 +1,55 @@
{
  "id": "update-subtask",
  "version": "1.0.0",
  "description": "Append information to a subtask by generating only new content",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["update", "subtask", "append", "logging"]
  },
  "parameters": {
    "parentTask": {
      "type": "object",
      "required": true,
      "description": "The parent task context"
    },
    "prevSubtask": {
      "type": "object",
      "required": false,
      "description": "The previous subtask if any"
    },
    "nextSubtask": {
      "type": "object",
      "required": false,
      "description": "The next subtask if any"
    },
    "currentDetails": {
      "type": "string",
      "required": true,
      "default": "(No existing details)",
      "description": "Current subtask details"
    },
    "updatePrompt": {
      "type": "string",
      "required": true,
      "description": "User request for what to add"
    },
    "useResearch": {
      "type": "boolean",
      "default": false,
      "description": "Use research mode"
    },
    "gatheredContext": {
      "type": "string",
      "default": "",
      "description": "Additional project context"
    }
  },
  "prompts": {
    "default": {
      "system": "You are an AI assistant helping to update a subtask. You will be provided with the subtask's existing details, context about its parent and sibling tasks, and a user request string.{{#if useResearch}} You have access to current best practices and latest technical information to provide research-backed updates.{{/if}}\n\nYour Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the subtask's details.\nFocus *only* on generating the substance of the update.\n\nOutput Requirements:\n1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.\n2. Your string response should NOT include any of the subtask's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.\n3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.\n4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with \"Okay, here's the update...\").{{#if useResearch}}\n5. Include specific libraries, versions, and current best practices relevant to the subtask implementation.\n6. Provide research-backed technical recommendations and proven approaches.{{/if}}",
      "user": "Task Context:\n\nParent Task: {{{json parentTask}}}\n{{#if prevSubtask}}Previous Subtask: {{{json prevSubtask}}}\n{{/if}}{{#if nextSubtask}}Next Subtask: {{{json nextSubtask}}}\n{{/if}}Current Subtask Details (for context only):\n{{currentDetails}}\n\nUser Request: \"{{updatePrompt}}\"\n\n{{#if useResearch}}Research and incorporate current best practices, latest stable versions, and proven approaches into your update. {{/if}}Based on the User Request and all the Task Context (including current subtask details provided above), what is the new information or text that should be appended to this subtask's details? Return ONLY this new text as a plain string.{{#if useResearch}} Include specific technical recommendations based on current industry standards.{{/if}}\n{{#if gatheredContext}}\n\n# Additional Project Context\n\n{{gatheredContext}}\n{{/if}}"
    }
  }
}
59
src/prompts/update-task.json
Normal file
@@ -0,0 +1,59 @@
{
  "id": "update-task",
  "version": "1.0.0",
  "description": "Update a single task with new information, supporting full updates and append mode",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["update", "single-task", "modification", "append"]
  },
  "parameters": {
    "task": {
      "type": "object",
      "required": true,
      "description": "The task to update"
    },
    "taskJson": {
      "type": "string",
      "required": true,
      "description": "JSON string representation of the task"
    },
    "updatePrompt": {
      "type": "string",
      "required": true,
      "description": "Description of changes to apply"
    },
    "appendMode": {
      "type": "boolean",
      "default": false,
      "description": "Whether to append to details or do full update"
    },
    "useResearch": {
      "type": "boolean",
      "default": false,
      "description": "Use research mode"
    },
    "currentDetails": {
      "type": "string",
      "default": "(No existing details)",
      "description": "Current task details for context"
    },
    "gatheredContext": {
      "type": "string",
      "default": "",
      "description": "Additional project context"
    }
  },
  "prompts": {
    "default": {
      "system": "You are an AI assistant helping to update a software development task based on new context.{{#if useResearch}} You have access to current best practices and latest technical information to provide research-backed updates.{{/if}}\nYou will be given a task and a prompt describing changes or new implementation details.\nYour job is to update the task to reflect these changes, while preserving its basic structure.\n\nGuidelines:\n1. VERY IMPORTANT: NEVER change the title of the task - keep it exactly as is\n2. Maintain the same ID, status, and dependencies unless specifically mentioned in the prompt{{#if useResearch}}\n3. Research and update the description, details, and test strategy with current best practices\n4. Include specific versions, libraries, and approaches that are current and well-tested{{/if}}{{#if (not useResearch)}}\n3. Update the description, details, and test strategy to reflect the new information\n4. Do not change anything unnecessarily - just adapt what needs to change based on the prompt{{/if}}\n5. Return a complete valid JSON object representing the updated task\n6. VERY IMPORTANT: Preserve all subtasks marked as \"done\" or \"completed\" - do not modify their content\n7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything\n8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly\n9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced\n10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted\n11. Ensure any new subtasks have unique IDs that don't conflict with existing ones\n12. CRITICAL: For subtask IDs, use ONLY numeric values (1, 2, 3, etc.) NOT strings (\"1\", \"2\", \"3\")\n13. CRITICAL: Subtask IDs should start from 1 and increment sequentially (1, 2, 3...) - do NOT use parent task ID as prefix{{#if useResearch}}\n14. Include links to documentation or resources where helpful\n15. Focus on practical, implementable solutions using current technologies{{/if}}\n\nThe changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.",
      "user": "Here is the task to update{{#if useResearch}} with research-backed information{{/if}}:\n{{{taskJson}}}\n\nPlease {{#if useResearch}}research and {{/if}}update this task based on the following {{#if useResearch}}context:\n{{updatePrompt}}\n\nIncorporate current best practices, latest stable versions, and proven approaches.{{/if}}{{#if (not useResearch)}}new context:\n{{updatePrompt}}{{/if}}\n\nIMPORTANT: {{#if useResearch}}Preserve any subtasks marked as \"done\" or \"completed\".{{/if}}{{#if (not useResearch)}}In the task JSON above, any subtasks with \"status\": \"done\" or \"status\": \"completed\" should be preserved exactly as is. Build your changes around these completed items.{{/if}}\n{{#if gatheredContext}}\n\n# Project Context\n\n{{gatheredContext}}\n{{/if}}\n\nReturn only the updated task as a valid JSON object{{#if useResearch}} with research-backed improvements{{/if}}."
    },
    "append": {
      "condition": "appendMode === true",
      "system": "You are an AI assistant helping to append additional information to a software development task. You will be provided with the task's existing details, context, and a user request string.\n\nYour Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the task's details.\nFocus *only* on generating the substance of the update.\n\nOutput Requirements:\n1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.\n2. Your string response should NOT include any of the task's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.\n3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.\n4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with \"Okay, here's the update...\").",
      "user": "Task Context:\n\nTask: {{{json task}}}\nCurrent Task Details (for context only):\n{{currentDetails}}\n\nUser Request: \"{{updatePrompt}}\"\n\nBased on the User Request and all the Task Context (including current task details provided above), what is the new information or text that should be appended to this task's details? Return ONLY this new text as a plain string.\n{{#if gatheredContext}}\n\n# Additional Project Context\n\n{{gatheredContext}}\n{{/if}}"
    }
  }
}
38
src/prompts/update-tasks.json
Normal file
@@ -0,0 +1,38 @@
{
  "id": "update-tasks",
  "version": "1.0.0",
  "description": "Update multiple tasks based on new context or changes",
  "metadata": {
    "author": "system",
    "created": "2024-01-01T00:00:00Z",
    "updated": "2024-01-01T00:00:00Z",
    "tags": ["update", "bulk", "context-change"]
  },
  "parameters": {
    "tasks": {
      "type": "array",
      "required": true,
      "description": "Array of tasks to update"
    },
    "updatePrompt": {
      "type": "string",
      "required": true,
      "description": "Description of changes to apply"
    },
    "useResearch": {
      "type": "boolean",
      "default": false,
      "description": "Use research mode"
    },
    "projectContext": {
      "type": "string",
      "description": "Additional project context"
    }
  },
  "prompts": {
    "default": {
      "system": "You are an AI assistant helping to update software development tasks based on new context.\nYou will be given a set of tasks and a prompt describing changes or new implementation details.\nYour job is to update the tasks to reflect these changes, while preserving their basic structure.\n\nGuidelines:\n1. Maintain the same IDs, statuses, and dependencies unless specifically mentioned in the prompt\n2. Update titles, descriptions, details, and test strategies to reflect the new information\n3. Do not change anything unnecessarily - just adapt what needs to change based on the prompt\n4. You should return ALL the tasks in order, not just the modified ones\n5. Return a complete valid JSON object with the updated tasks array\n6. VERY IMPORTANT: Preserve all subtasks marked as \"done\" or \"completed\" - do not modify their content\n7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything\n8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly\n9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced\n10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted\n\nThe changes described in the prompt should be applied to ALL tasks in the list.",
      "user": "Here are the tasks to update:\n{{{json tasks}}}\n\nPlease update these tasks based on the following new context:\n{{updatePrompt}}\n\nIMPORTANT: In the tasks JSON above, any subtasks with \"status\": \"done\" or \"status\": \"completed\" should be preserved exactly as is. Build your changes around these completed items.{{#if projectContext}}\n\n# Project Context\n\n{{projectContext}}{{/if}}\n\nReturn only the updated tasks as a valid JSON array."
    }
  }
}