chore: task management
@@ -1,7 +1,7 @@
# Task ID: 45
# Title: Implement GitHub Issue Import Feature
# Status: pending
# Dependencies: None
# Dependencies: 97
# Priority: medium
# Description: Implement a comprehensive LLM-powered 'import_task' command that can intelligently import tasks from GitHub Issues and Discussions. The system uses our existing ContextGatherer.js infrastructure to analyze the full context of GitHub content and automatically generate well-structured tasks with appropriate subtasks, priorities, and implementation details. This feature works in conjunction with the GitHub export feature (Task #97) to provide bidirectional linking between Task Master tasks and GitHub issues.
# Details:

@@ -1,525 +0,0 @@
# Task ID: 81
# Title: Implement Separate Context Window and Output Token Limits
# Status: pending
# Dependencies: None
# Priority: high
# Description: Replace the ambiguous MAX_TOKENS configuration with separate contextWindowTokens and maxOutputTokens fields to properly handle model token limits and enable dynamic token allocation.
# Details:
Currently, the MAX_TOKENS configuration entry is ambiguous and doesn't properly differentiate between:
1. Context window tokens (total input + output capacity)
2. Maximum output tokens (generation limit)

This causes issues where:
- The system can't properly validate prompt lengths against model capabilities
- Output token allocation is not optimized based on input length
- Different models with different token architectures are handled inconsistently

This epic will implement a comprehensive solution that:
- Updates supported-models.json with accurate contextWindowTokens and maxOutputTokens for each model
- Modifies config-manager.js to use separate maxInputTokens and maxOutputTokens in role configurations
- Implements a token counting utility for accurate prompt measurement
- Updates ai-services-unified.js to dynamically calculate available output tokens
- Provides migration guidance and validation for existing configurations
- Adds comprehensive error handling and validation throughout the system

The end result will be more precise token management, better cost control, and reduced likelihood of hitting model context limits.
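The dynamic allocation described above reduces to a small calculation; a minimal sketch (the function name `calcAvailableOutputTokens` is illustrative, not an existing API):

```javascript
// Given the configured role limit and the model's hard limits, the output
// budget is whatever fits in the context window after the prompt, capped by
// both the role's maxOutputTokens and the model's maxOutputTokens.
function calcAvailableOutputTokens(promptTokens, roleMaxOutput, contextWindow, modelMaxOutput) {
  let available = roleMaxOutput;
  if (contextWindow) {
    available = Math.min(available, contextWindow - promptTokens);
  }
  if (modelMaxOutput) {
    available = Math.min(available, modelMaxOutput);
  }
  return Math.max(available, 0);
}
```

For example, a 195,000-token prompt against a 200,000-token context window leaves only 5,000 output tokens, no matter how large the configured maxOutputTokens is.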

# Test Strategy:
1. Verify all models have accurate token limit data from official documentation
2. Test dynamic token allocation with various prompt lengths
3. Ensure backward compatibility with existing .taskmasterconfig files
4. Validate error messages are clear and actionable
5. Test with multiple AI providers to ensure consistent behavior
6. Performance test token counting utility with large prompts

# Subtasks:
## 1. Update supported-models.json with token limit fields [pending]
### Dependencies: None
### Description: Modify the supported-models.json file to include contextWindowTokens and maxOutputTokens fields for each model, replacing the ambiguous max_tokens field.
### Details:
For each model entry in supported-models.json:
1. Add `contextWindowTokens` field representing the total context window (input + output tokens)
2. Add `maxOutputTokens` field representing the maximum tokens the model can generate
3. Remove or deprecate the ambiguous `max_tokens` field if present

Research and populate accurate values for each model from official documentation:
- For OpenAI models (e.g., gpt-4o): contextWindowTokens=128000, maxOutputTokens=16384
- For Anthropic models (e.g., Claude 3.7): contextWindowTokens=200000, maxOutputTokens=8192
- For other providers, find official documentation or use reasonable defaults

Example entry:
```json
{
  "id": "claude-3-7-sonnet-20250219",
  "swe_score": 0.623,
  "cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
  "allowed_roles": ["main", "fallback"],
  "contextWindowTokens": 200000,
  "maxOutputTokens": 8192
}
```

## 2. Update config-manager.js defaults and getters [pending]
### Dependencies: None
### Description: Modify the config-manager.js module to replace maxTokens with maxInputTokens and maxOutputTokens in the DEFAULTS object and update related getter functions.
### Details:
1. Update the `DEFAULTS` object in config-manager.js:
```javascript
const DEFAULTS = {
  // ... existing defaults
  main: {
    // Replace maxTokens with these two fields
    maxInputTokens: 16000, // Example default
    maxOutputTokens: 4000, // Example default
    temperature: 0.7
    // ... other fields
  },
  research: {
    maxInputTokens: 16000,
    maxOutputTokens: 4000,
    temperature: 0.7
    // ... other fields
  },
  fallback: {
    maxInputTokens: 8000,
    maxOutputTokens: 2000,
    temperature: 0.7
    // ... other fields
  }
  // ... rest of DEFAULTS
};
```

2. Update `getParametersForRole` function to return the new fields:
```javascript
function getParametersForRole(role, explicitRoot = null) {
  const config = _getConfig(explicitRoot);
  return {
    maxInputTokens: config[role]?.maxInputTokens,
    maxOutputTokens: config[role]?.maxOutputTokens,
    temperature: config[role]?.temperature
    // ... any other parameters
  };
}
```

3. Add a new function to get model capabilities:
```javascript
function getModelCapabilities(providerName, modelId) {
  const models = MODEL_MAP[providerName?.toLowerCase()];
  const model = models?.find(m => m.id === modelId);
  return {
    contextWindowTokens: model?.contextWindowTokens,
    maxOutputTokens: model?.maxOutputTokens
  };
}
```

4. Deprecate or update the role-specific maxTokens getters:
```javascript
// Either remove these or update them to return maxInputTokens
function getMainMaxTokens(explicitRoot = null) {
  console.warn('getMainMaxTokens is deprecated. Use getParametersForRole("main") instead.');
  return getParametersForRole("main", explicitRoot).maxInputTokens;
}
// Same for getResearchMaxTokens and getFallbackMaxTokens
```

5. Export the new functions:
```javascript
module.exports = {
  // ... existing exports
  getParametersForRole,
  getModelCapabilities
};
```

## 3. Implement token counting utility [pending]
### Dependencies: None
### Description: Create a utility function to count tokens for prompts based on the model being used, primarily using tiktoken for OpenAI and Anthropic models with character-based fallbacks for other providers.
### Details:
1. Install the tiktoken package:
```bash
npm install tiktoken
```

2. Create a new file `scripts/modules/token-counter.js`:
```javascript
const tiktoken = require('tiktoken');

/**
 * Count tokens for a given text and model
 * @param {string} text - The text to count tokens for
 * @param {string} provider - The AI provider (e.g., 'openai', 'anthropic')
 * @param {string} modelId - The model ID
 * @returns {number} - Estimated token count
 */
function countTokens(text, provider, modelId) {
  if (!text) return 0;

  // Convert to lowercase for case-insensitive matching
  const providerLower = provider?.toLowerCase();

  try {
    // OpenAI models
    if (providerLower === 'openai') {
      // Most OpenAI chat models use cl100k_base encoding.
      // encoding_for_model throws for unrecognized model IDs, so fall
      // back to cl100k_base explicitly rather than relying on a falsy return.
      let encoding;
      try {
        encoding = tiktoken.encoding_for_model(modelId);
      } catch (e) {
        encoding = tiktoken.get_encoding('cl100k_base');
      }
      return encoding.encode(text).length;
    }

    // Anthropic models - can use cl100k_base as an approximation
    // or follow Anthropic's guidance
    if (providerLower === 'anthropic') {
      try {
        // Try to use cl100k_base as a reasonable approximation
        const encoding = tiktoken.get_encoding('cl100k_base');
        return encoding.encode(text).length;
      } catch (e) {
        // Fallback to Anthropic's character-based estimation
        return Math.ceil(text.length / 3.5); // ~3.5 chars per token for English
      }
    }

    // For other providers, use character-based estimation as fallback
    // Different providers may have different tokenization schemes
    return Math.ceil(text.length / 4); // General fallback estimate
  } catch (error) {
    console.warn(`Token counting error: ${error.message}. Using character-based estimate.`);
    return Math.ceil(text.length / 4); // Fallback if tiktoken fails
  }
}

module.exports = { countTokens };
```

3. Add tests for the token counter in `tests/token-counter.test.js`:
```javascript
const { countTokens } = require('../scripts/modules/token-counter');

describe('Token Counter', () => {
  test('counts tokens for OpenAI models', () => {
    const text = 'Hello, world! This is a test.';
    const count = countTokens(text, 'openai', 'gpt-4');
    expect(count).toBeGreaterThan(0);
    expect(typeof count).toBe('number');
  });

  test('counts tokens for Anthropic models', () => {
    const text = 'Hello, world! This is a test.';
    const count = countTokens(text, 'anthropic', 'claude-3-7-sonnet-20250219');
    expect(count).toBeGreaterThan(0);
    expect(typeof count).toBe('number');
  });

  test('handles empty text', () => {
    expect(countTokens('', 'openai', 'gpt-4')).toBe(0);
    expect(countTokens(null, 'openai', 'gpt-4')).toBe(0);
  });
});
```

## 4. Update ai-services-unified.js for dynamic token limits [pending]
### Dependencies: None
### Description: Modify the _unifiedServiceRunner function in ai-services-unified.js to use the new token counting utility and dynamically adjust output token limits based on input length.
### Details:
1. Import the token counter in `ai-services-unified.js`:
```javascript
const { countTokens } = require('./token-counter');
const { getParametersForRole, getModelCapabilities } = require('./config-manager');
```

2. Update the `_unifiedServiceRunner` function to implement dynamic token limit adjustment:
```javascript
async function _unifiedServiceRunner({
  serviceType,
  provider,
  modelId,
  systemPrompt,
  prompt,
  temperature,
  currentRole,
  effectiveProjectRoot,
  // ... other parameters
}) {
  // Get role parameters with new token limits
  const roleParams = getParametersForRole(currentRole, effectiveProjectRoot);

  // Get model capabilities
  const modelCapabilities = getModelCapabilities(provider, modelId);

  // Count tokens in the prompts
  const systemPromptTokens = countTokens(systemPrompt, provider, modelId);
  const userPromptTokens = countTokens(prompt, provider, modelId);
  const totalPromptTokens = systemPromptTokens + userPromptTokens;

  // Validate against input token limits
  if (totalPromptTokens > roleParams.maxInputTokens) {
    throw new Error(
      `Prompt (${totalPromptTokens} tokens) exceeds configured max input tokens (${roleParams.maxInputTokens}) for role '${currentRole}'.`
    );
  }

  // Validate against model's absolute context window
  if (modelCapabilities.contextWindowTokens && totalPromptTokens > modelCapabilities.contextWindowTokens) {
    throw new Error(
      `Prompt (${totalPromptTokens} tokens) exceeds model's context window (${modelCapabilities.contextWindowTokens}) for ${modelId}.`
    );
  }

  // Calculate available output tokens.
  // If the model has a combined context window, we need to subtract input tokens.
  let availableOutputTokens = roleParams.maxOutputTokens;

  // If model has a context window constraint, ensure we don't exceed it
  if (modelCapabilities.contextWindowTokens) {
    const remainingContextTokens = modelCapabilities.contextWindowTokens - totalPromptTokens;
    availableOutputTokens = Math.min(availableOutputTokens, remainingContextTokens);
  }

  // Also respect the model's absolute max output limit
  if (modelCapabilities.maxOutputTokens) {
    availableOutputTokens = Math.min(availableOutputTokens, modelCapabilities.maxOutputTokens);
  }

  // Prepare API call parameters
  // (apiKey, messages, baseUrl, schema, objectName, and restApiParams are
  // resolved earlier in the existing function; omitted here for brevity)
  const callParams = {
    apiKey,
    modelId,
    maxTokens: availableOutputTokens, // Use dynamically calculated output limit
    temperature: roleParams.temperature,
    messages,
    baseUrl,
    ...(serviceType === 'generateObject' && { schema, objectName }),
    ...restApiParams
  };

  // Log token usage information
  console.debug(`Token usage: ${totalPromptTokens} input tokens, ${availableOutputTokens} max output tokens`);

  // Rest of the function remains the same...
}
```

3. Update the error handling to provide clear messages about token limits:
```javascript
try {
  // Existing code...
} catch (error) {
  if (error.message.includes('tokens')) {
    // Token-related errors should be clearly identified
    console.error(`Token limit error: ${error.message}`);
  }
  throw error;
}
```

## 5. Update .taskmasterconfig schema and user guide [pending]
### Dependencies: None
### Description: Create a migration guide for users to update their .taskmasterconfig files and document the new token limit configuration options.
### Details:
1. Create a migration script or guide for users to update their existing `.taskmasterconfig` files:

```javascript
// Example migration snippet for .taskmasterconfig
{
  "main": {
    // Before:
    // "maxTokens": 16000,

    // After:
    "maxInputTokens": 16000,
    "maxOutputTokens": 4000,
    "temperature": 0.7
  },
  "research": {
    "maxInputTokens": 16000,
    "maxOutputTokens": 4000,
    "temperature": 0.7
  },
  "fallback": {
    "maxInputTokens": 8000,
    "maxOutputTokens": 2000,
    "temperature": 0.7
  }
}
```

2. Update the user documentation to explain the new token limit fields:

````markdown
# Token Limit Configuration

Task Master now provides more granular control over token limits with separate settings for input and output tokens:

- `maxInputTokens`: Maximum number of tokens allowed in the input prompt (system prompt + user prompt)
- `maxOutputTokens`: Maximum number of tokens the model should generate in its response

## Benefits

- More precise control over token usage
- Better cost management
- Reduced likelihood of hitting model context limits
- Dynamic adjustment to maximize output space based on input length

## Migration from Previous Versions

If you're upgrading from a previous version, you'll need to update your `.taskmasterconfig` file:

1. Replace the single `maxTokens` field with separate `maxInputTokens` and `maxOutputTokens` fields
2. Recommended starting values:
   - Set `maxInputTokens` to your previous `maxTokens` value
   - Set `maxOutputTokens` to approximately 1/4 of your model's context window

## Example Configuration

```json
{
  "main": {
    "maxInputTokens": 16000,
    "maxOutputTokens": 4000,
    "temperature": 0.7
  }
}
```
````

3. Update the schema validation in `config-manager.js` to validate the new fields:

```javascript
function _validateConfig(config) {
  // ... existing validation

  // Validate token limits for each role
  ['main', 'research', 'fallback'].forEach(role => {
    if (config[role]) {
      // Check if old maxTokens is present and warn about migration
      if (config[role].maxTokens !== undefined) {
        console.warn(`Warning: 'maxTokens' in ${role} role is deprecated. Please use 'maxInputTokens' and 'maxOutputTokens' instead.`);
      }

      // Validate new token limit fields
      if (config[role].maxInputTokens !== undefined && (!Number.isInteger(config[role].maxInputTokens) || config[role].maxInputTokens <= 0)) {
        throw new Error(`Invalid maxInputTokens for ${role} role: must be a positive integer`);
      }

      if (config[role].maxOutputTokens !== undefined && (!Number.isInteger(config[role].maxOutputTokens) || config[role].maxOutputTokens <= 0)) {
        throw new Error(`Invalid maxOutputTokens for ${role} role: must be a positive integer`);
      }
    }
  });

  return config;
}
```

## 6. Implement validation and error handling [pending]
### Dependencies: None
### Description: Add comprehensive validation and error handling for token limits throughout the system, including helpful error messages and graceful fallbacks.
### Details:
1. Add validation when loading models in `config-manager.js`:
```javascript
function _validateModelMap(modelMap) {
  // Validate each provider's models
  Object.entries(modelMap).forEach(([provider, models]) => {
    models.forEach(model => {
      // Check for required token limit fields
      if (!model.contextWindowTokens) {
        console.warn(`Warning: Model ${model.id} from ${provider} is missing contextWindowTokens field`);
      }
      if (!model.maxOutputTokens) {
        console.warn(`Warning: Model ${model.id} from ${provider} is missing maxOutputTokens field`);
      }
    });
  });
  return modelMap;
}
```

2. Add validation when setting up a model in the CLI:
```javascript
function validateModelConfig(modelConfig, modelCapabilities) {
  const issues = [];

  // Check if input tokens exceed model's context window
  if (modelConfig.maxInputTokens > modelCapabilities.contextWindowTokens) {
    issues.push(`maxInputTokens (${modelConfig.maxInputTokens}) exceeds model's context window (${modelCapabilities.contextWindowTokens})`);
  }

  // Check if output tokens exceed model's maximum
  if (modelConfig.maxOutputTokens > modelCapabilities.maxOutputTokens) {
    issues.push(`maxOutputTokens (${modelConfig.maxOutputTokens}) exceeds model's maximum output tokens (${modelCapabilities.maxOutputTokens})`);
  }

  // Check if combined tokens exceed context window
  if (modelConfig.maxInputTokens + modelConfig.maxOutputTokens > modelCapabilities.contextWindowTokens) {
    issues.push(`Combined maxInputTokens and maxOutputTokens (${modelConfig.maxInputTokens + modelConfig.maxOutputTokens}) exceeds model's context window (${modelCapabilities.contextWindowTokens})`);
  }

  return issues;
}
```

3. Add graceful fallbacks in `ai-services-unified.js`:
```javascript
// Fallback for missing token limits
if (!roleParams.maxInputTokens) {
  console.warn(`Warning: maxInputTokens not specified for role '${currentRole}'. Using default value.`);
  roleParams.maxInputTokens = 8000; // Reasonable default
}

if (!roleParams.maxOutputTokens) {
  console.warn(`Warning: maxOutputTokens not specified for role '${currentRole}'. Using default value.`);
  roleParams.maxOutputTokens = 2000; // Reasonable default
}

// Fallback for missing model capabilities
if (!modelCapabilities.contextWindowTokens) {
  console.warn(`Warning: contextWindowTokens not specified for model ${modelId}. Using conservative estimate.`);
  modelCapabilities.contextWindowTokens = roleParams.maxInputTokens + roleParams.maxOutputTokens;
}

if (!modelCapabilities.maxOutputTokens) {
  console.warn(`Warning: maxOutputTokens not specified for model ${modelId}. Using role configuration.`);
  modelCapabilities.maxOutputTokens = roleParams.maxOutputTokens;
}
```

4. Add detailed logging for token usage:
```javascript
// calculateTokenCost is assumed to exist elsewhere (cost lookup based on
// supported-models.json pricing); availableOutputTokens is passed in as a
// parameter rather than read from an outer scope.
function logTokenUsage(provider, modelId, inputTokens, outputTokens, availableOutputTokens, role) {
  const inputCost = calculateTokenCost(provider, modelId, 'input', inputTokens);
  const outputCost = calculateTokenCost(provider, modelId, 'output', outputTokens);

  console.info(`Token usage for ${role} role with ${provider}/${modelId}:`);
  console.info(`- Input: ${inputTokens.toLocaleString()} tokens ($${inputCost.toFixed(6)})`);
  console.info(`- Output: ${outputTokens.toLocaleString()} tokens ($${outputCost.toFixed(6)})`);
  console.info(`- Total cost: $${(inputCost + outputCost).toFixed(6)}`);
  console.info(`- Available output tokens: ${availableOutputTokens.toLocaleString()}`);
}
```

5. Add a helper function to suggest configuration improvements:
```javascript
function suggestTokenConfigImprovements(roleParams, modelCapabilities, promptTokens) {
  const suggestions = [];

  // If prompt is using less than 50% of allowed input
  if (promptTokens < roleParams.maxInputTokens * 0.5) {
    suggestions.push(`Consider reducing maxInputTokens from ${roleParams.maxInputTokens} to save on potential costs`);
  }

  // If output tokens are very limited due to large input
  const availableOutput = Math.min(
    roleParams.maxOutputTokens,
    modelCapabilities.contextWindowTokens - promptTokens
  );

  if (availableOutput < roleParams.maxOutputTokens * 0.5) {
    suggestions.push(`Available output tokens (${availableOutput}) are significantly less than configured maxOutputTokens (${roleParams.maxOutputTokens}) due to large input`);
  }

  return suggestions;
}
```

@@ -1,23 +1,34 @@
# Task ID: 82
# Title: Introduce Prioritize Command with Enhanced Priority Levels
# Title: Update supported-models.json with token limit fields
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement a prioritize command with --up/--down/--priority/--id flags and shorthand equivalents (-u/-d/-p/-i). Add 'lowest' and 'highest' priority levels, updating CLI output accordingly.
# Priority: high
# Description: Modify the supported-models.json file to include contextWindowTokens and maxOutputTokens fields for each model, replacing the ambiguous max_tokens field.
# Details:
The new prioritize command should allow users to adjust task priorities using the specified flags. The --up and --down flags will modify the priority relative to the current level, while --priority sets an absolute priority. The --id flag specifies which task to prioritize. Shorthand equivalents (-u/-d/-p/-i) should be supported for user convenience.
For each model entry in supported-models.json:
1. Add `contextWindowTokens` field representing the total context window (input + output tokens)
2. Add `maxOutputTokens` field representing the maximum tokens the model can generate
3. Remove or deprecate the ambiguous `max_tokens` field if present

The priority levels should now include 'lowest', 'low', 'medium', 'high', and 'highest'. The CLI output should be updated to reflect these new priority levels accurately.
Research and populate accurate values for each model from official documentation:
- For OpenAI models (e.g., gpt-4o): contextWindowTokens=128000, maxOutputTokens=16384
- For Anthropic models (e.g., Claude 3.7): contextWindowTokens=200000, maxOutputTokens=8192
- For other providers, find official documentation or use reasonable defaults

Considerations:
- Ensure backward compatibility with existing commands and configurations.
- Update the help documentation to include the new command and its usage.
- Implement proper error handling for invalid priority levels or missing flags.
Example entry:
```json
{
  "id": "claude-3-7-sonnet-20250219",
  "swe_score": 0.623,
  "cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
  "allowed_roles": ["main", "fallback"],
  "contextWindowTokens": 200000,
  "maxOutputTokens": 8192
}
```

# Test Strategy:
To verify task completion, perform the following tests:
1. Test each flag (--up, --down, --priority, --id) individually and in combination to ensure they function as expected.
2. Verify that shorthand equivalents (-u, -d, -p, -i) work correctly.
3. Check that the new priority levels ('lowest' and 'highest') are recognized and displayed properly in CLI output.
4. Test error handling for invalid inputs (e.g., non-existent task IDs, invalid priority levels).
5. Ensure that the help command displays accurate information about the new prioritize command.
1. Validate JSON syntax after changes
2. Verify all models have the new fields with reasonable values
3. Check that the values align with official documentation from each provider
4. Ensure backward compatibility by maintaining any fields other systems might depend on
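The first two checks in this strategy are easy to script; a minimal sketch (the helper name `findTokenFieldIssues` is hypothetical, assuming the supported-models.json structure shown above, keyed by provider):

```javascript
// Verify every model entry carries positive integer token-limit fields.
// Returns a list of human-readable problems; an empty list means the map passes.
function findTokenFieldIssues(modelMap) {
  const issues = [];
  for (const [provider, models] of Object.entries(modelMap)) {
    for (const model of models) {
      for (const field of ['contextWindowTokens', 'maxOutputTokens']) {
        const value = model[field];
        if (!Number.isInteger(value) || value <= 0) {
          issues.push(`${provider}/${model.id}: missing or invalid ${field}`);
        }
      }
    }
  }
  return issues;
}
```

Parsing the file with `JSON.parse` before calling this covers the syntax check as well, since malformed JSON throws at parse time.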
@@ -1,288 +1,95 @@
# Task ID: 83
# Title: Implement Git Workflow Integration
# Title: Update config-manager.js defaults and getters
# Status: pending
# Dependencies: None
# Dependencies: 82
# Priority: high
# Description: Add `task-master git` command suite to automate git workflows based on established patterns from Task 4, eliminating manual overhead and ensuring 100% consistency
# Description: Modify the config-manager.js module to replace maxTokens with maxInputTokens and maxOutputTokens in the DEFAULTS object and update related getter functions.
# Details:
Create a comprehensive git workflow automation system that integrates deeply with TaskMaster's task management. The feature will:
1. Update the `DEFAULTS` object in config-manager.js:
```javascript
const DEFAULTS = {
  // ... existing defaults
  main: {
    // Replace maxTokens with these two fields
    maxInputTokens: 16000, // Example default
    maxOutputTokens: 4000, // Example default
    temperature: 0.7
    // ... other fields
  },
  research: {
    maxInputTokens: 16000,
    maxOutputTokens: 4000,
    temperature: 0.7
    // ... other fields
  },
  fallback: {
    maxInputTokens: 8000,
    maxOutputTokens: 2000,
    temperature: 0.7
    // ... other fields
  }
  // ... rest of DEFAULTS
};
```

1. **Automated Branch Management**:
   - Create branches following `task-{id}` naming convention
   - Validate branch names and prevent conflicts
   - Handle branch switching with uncommitted changes
   - Clean up local and remote branches post-merge
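The `task-{id}` convention in the branch-management bullets can be sketched in a few lines (illustrative only; these function names are hypothetical, not part of the planned API):

```javascript
// Build and validate branch names of the form task-{id}, e.g. task-83.
function branchNameForTask(taskId) {
  if (!Number.isInteger(taskId) || taskId <= 0) {
    throw new Error(`Invalid task id: ${taskId}`);
  }
  return `task-${taskId}`;
}

function isTaskBranch(branchName) {
  return /^task-\d+$/.test(branchName);
}
```

Validating against a strict pattern like this is what lets post-merge cleanup safely distinguish task branches from long-lived branches such as `main`.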
2. Update `getParametersForRole` function to return the new fields:
```javascript
function getParametersForRole(role, explicitRoot = null) {
  const config = _getConfig(explicitRoot);
  return {
    maxInputTokens: config[role]?.maxInputTokens,
    maxOutputTokens: config[role]?.maxOutputTokens,
    temperature: config[role]?.temperature
    // ... any other parameters
  };
}
```

2. **Intelligent Commit Generation**:
   - Auto-detect commit type (feat/fix/test/refactor/docs) from file changes
   - Generate standardized commit messages with task context
   - Support subtask-specific commits with proper references
   - Include coverage delta in test commits
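The auto-detection bullet could work roughly like this (a sketch in the spirit of the planned `commit-analyzer.js`, assuming classification purely by file path):

```javascript
// Guess a conventional-commit type from the set of changed file paths.
// Precedence: test-only changes, then docs-only changes, then a default.
function detectCommitType(changedFiles) {
  if (!changedFiles || changedFiles.length === 0) {
    return 'feat'; // nothing to classify; fall back to the default type
  }
  if (changedFiles.every(f => /\.(test|spec)\.js$/.test(f) || f.startsWith('tests/'))) {
    return 'test';
  }
  if (changedFiles.every(f => f.endsWith('.md') || f.startsWith('docs/'))) {
    return 'docs';
  }
  // Telling feat/fix/refactor apart needs task context (e.g. the task's type),
  // so default to 'feat' here.
  return 'feat';
}
```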
3. Add a new function to get model capabilities:
```javascript
function getModelCapabilities(providerName, modelId) {
  const models = MODEL_MAP[providerName?.toLowerCase()];
  const model = models?.find(m => m.id === modelId);
  return {
    contextWindowTokens: model?.contextWindowTokens,
    maxOutputTokens: model?.maxOutputTokens
  };
}
```

3. **PR Automation**:
   - Generate comprehensive PR descriptions from task/subtask data
   - Include implementation details, test coverage, breaking changes
   - Format using GitHub markdown with task hierarchy
   - Auto-populate PR template with relevant metadata
4. Deprecate or update the role-specific maxTokens getters:
```javascript
// Either remove these or update them to return maxInputTokens
function getMainMaxTokens(explicitRoot = null) {
  console.warn('getMainMaxTokens is deprecated. Use getParametersForRole("main") instead.');
  return getParametersForRole("main", explicitRoot).maxInputTokens;
}
// Same for getResearchMaxTokens and getFallbackMaxTokens
```

4. **Workflow State Management**:
   - Track current task branch and status
   - Validate task readiness before PR creation
   - Ensure all subtasks completed before finishing
   - Handle merge conflicts gracefully

5. **Integration Points**:
   - Seamless integration with existing task commands
   - MCP server support for IDE integrations
   - GitHub CLI (`gh`) authentication support
   - Coverage report parsing and display

**Technical Architecture**:
- Modular command structure in `scripts/modules/task-manager/git-*`
- Git operations wrapper using simple-git or native child_process
- Template engine for commit/PR generation in `scripts/modules/`
- State persistence in `.taskmaster/git-state.json`
- Error recovery and rollback mechanisms

**Key Files to Create**:
- `scripts/modules/task-manager/git-start.js` - Branch creation and task status update
- `scripts/modules/task-manager/git-commit.js` - Intelligent commit message generation
- `scripts/modules/task-manager/git-pr.js` - PR creation with auto-generated description
- `scripts/modules/task-manager/git-finish.js` - Post-merge cleanup and status update
- `scripts/modules/task-manager/git-status.js` - Current git workflow state display
- `scripts/modules/git-operations.js` - Core git functionality wrapper
- `scripts/modules/commit-analyzer.js` - File change analysis for commit types
- `scripts/modules/pr-description-generator.js` - PR description template generator

**MCP Integration Files**:
- `mcp-server/src/core/direct-functions/git-start.js`
- `mcp-server/src/core/direct-functions/git-commit.js`
- `mcp-server/src/core/direct-functions/git-pr.js`
- `mcp-server/src/core/direct-functions/git-finish.js`
- `mcp-server/src/core/direct-functions/git-status.js`
- `mcp-server/src/tools/git-start.js`
- `mcp-server/src/tools/git-commit.js`
- `mcp-server/src/tools/git-pr.js`
- `mcp-server/src/tools/git-finish.js`
- `mcp-server/src/tools/git-status.js`

**Configuration**:
- Add git workflow settings to `.taskmasterconfig`
- Support for custom commit prefixes and PR templates
- Branch naming pattern customization
- Remote repository detection and validation
5. Export the new functions:
```javascript
module.exports = {
  // ... existing exports
  getParametersForRole,
  getModelCapabilities
};
```

# Test Strategy:
|
||||
Implement comprehensive test suite following Task 4's TDD approach:
|
||||
|
||||
1. **Unit Tests** (target: 95%+ coverage):
|
||||
- Git operations wrapper with mocked git commands
|
||||
- Commit type detection with various file change scenarios
|
||||
- PR description generation with different task structures
|
||||
- Branch name validation and generation
|
||||
- State management and persistence
|
||||
|
||||
2. **Integration Tests**:
|
||||
- Full workflow simulation in test repository
|
||||
- Error handling for git conflicts and failures
|
||||
- Multi-task workflow scenarios
|
||||
- Coverage integration with real test runs
|
||||
- GitHub API interaction (mocked)
|
||||
|
||||
3. **E2E Tests**:
|
||||
- Complete task lifecycle from start to finish
|
||||
- Multiple developer workflow simulation
|
||||
- Merge conflict resolution scenarios
|
||||
- Branch protection and validation
|
||||
|
||||
4. **Test Implementation Details**:
|
||||
- Use Jest with git repository fixtures
|
||||
- Mock simple-git for isolated unit tests
|
||||
- Create test tasks.json scenarios
|
||||
- Validate all error messages and edge cases
|
||||
- Test rollback and recovery mechanisms
|
||||
|
||||
5. **Coverage Requirements**:
|
||||
- Minimum 90% overall coverage
|
||||
- 100% coverage for critical paths (branch creation, PR generation)
|
||||
- All error scenarios must be tested
|
||||
- Performance tests for large task hierarchies
|
||||
1. Unit test the updated getParametersForRole function with various configurations
|
||||
2. Verify the new getModelCapabilities function returns correct values
|
||||
3. Test with both default and custom configurations
|
||||
4. Ensure backward compatibility by checking that existing code using the old getters still works (with warnings)
|
||||
|
||||
# Subtasks:
|
||||
## 1. Design and implement core git operations wrapper [pending]
|
||||
## 1. Update config-manager.js with specific token limit fields [pending]
|
||||
### Dependencies: None
|
||||
### Description: Create a robust git operations layer that handles all git commands with proper error handling and state management
|
||||
### Description: Modify the DEFAULTS object in config-manager.js to replace maxTokens with more specific token limit fields (maxInputTokens, maxOutputTokens, maxTotalTokens) and update related getter functions while maintaining backward compatibility.
|
||||
### Details:
|
||||
Create `scripts/modules/git-operations.js` with methods for:
|
||||
- Branch creation/deletion (local and remote)
|
||||
- Commit operations with message formatting
|
||||
- Status checking and conflict detection
|
||||
- Remote operations (fetch, push, pull)
|
||||
- Repository validation and setup
|
||||
|
||||
Use simple-git library or child_process for git commands. Implement comprehensive error handling with specific error types for different git failures. Include retry logic for network operations.
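The status-checking and conflict-detection piece could be prototyped as a pure parser over `git status --porcelain` output (the `parseStatus` name and return shape here are illustrative, not existing code):

```javascript
// Sketch: classify `git status --porcelain` output for conflict detection.
// The two-letter XY codes are defined by git; 'UU', 'AA', 'DD', etc. mark unmerged paths.
function parseStatus(porcelainOutput) {
  const conflictCodes = new Set(['DD', 'AU', 'UD', 'UA', 'DU', 'AA', 'UU']);
  const result = { clean: true, conflicted: [], modified: [], untracked: [] };
  for (const line of porcelainOutput.split('\n')) {
    if (!line.trim()) continue;
    result.clean = false;
    const code = line.slice(0, 2); // XY status code
    const file = line.slice(3);    // path starts after "XY "
    if (conflictCodes.has(code)) result.conflicted.push(file);
    else if (code === '??') result.untracked.push(file);
    else result.modified.push(file);
  }
  return result;
}
```

Because the porcelain format is stable by contract, a parser like this can be unit tested without a real repository, which fits the mocked-git test strategy above.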

## 2. Implement git start command [pending]
### Dependencies: None
### Description: Create the entry point for task-based git workflows with automated branch creation and task status updates
### Details:
Implement `scripts/modules/task-manager/git-start.js` with functionality to:
- Validate task exists and is ready to start
- Check for clean working directory
- Create branch with `task-{id}` naming
- Update task status to 'in-progress'
- Store workflow state in `.taskmaster/git-state.json`
- Handle existing branch scenarios
- Support --force flag for branch recreation

Integrate with existing task-master commands and ensure MCP compatibility.

## 3. Build intelligent commit analyzer and generator [pending]
### Dependencies: None
### Description: Create a system that analyzes file changes to auto-detect commit types and generate standardized commit messages
### Details:
Develop `scripts/modules/commit-analyzer.js` with:
- File change detection and categorization
- Commit type inference rules:
  - feat: new files in scripts/, new functions
  - fix: changes to existing logic
  - test: changes in tests/ directory
  - docs: markdown and comment changes
  - refactor: file moves, renames, cleanup
- Smart message generation with task context
- Support for custom commit templates
- Subtask reference inclusion

Create `scripts/modules/task-manager/git-commit.js` that uses the analyzer to generate commits with proper formatting.
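The inference rules above could be prototyped as a simple path-based classifier (the rule priority and the `inferCommitType` name are assumptions; the real analyzer would also inspect diff contents):

```javascript
// Sketch: infer a conventional-commit type from a list of changed files.
// `changes` is assumed to be [{ path, status }] with status in
// 'added' | 'modified' | 'renamed'. Rules mirror the list above.
function inferCommitType(changes) {
  const paths = changes.map((c) => c.path);
  if (paths.every((p) => p.startsWith('tests/'))) return 'test';
  if (paths.every((p) => p.endsWith('.md'))) return 'docs';
  if (changes.every((c) => c.status === 'renamed')) return 'refactor';
  if (changes.some((c) => c.status === 'added' && c.path.startsWith('scripts/'))) return 'feat';
  return 'fix'; // default: changes to existing logic
}

// Sketch of the message format with task context.
function formatCommitMessage(type, taskId, summary) {
  return `${type}(task-${taskId}): ${summary}`;
}
```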

## 4. Create PR description generator and command [pending]
### Dependencies: None
### Description: Build a comprehensive PR description generator that creates detailed, formatted descriptions from task data
### Details:
Implement `scripts/modules/pr-description-generator.js` to generate:
- Task overview with full context
- Subtask completion checklist
- Implementation details summary
- Test coverage metrics integration
- Breaking changes section
- Related tasks and dependencies

Create `scripts/modules/task-manager/git-pr.js` to:
- Validate all subtasks are complete
- Generate PR title and description
- Use GitHub CLI for PR creation
- Handle draft PR scenarios
- Support custom PR templates
- Include labels based on task metadata
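The overview-plus-checklist portion could look like this (task shape assumed from tasks.json; the template itself is a placeholder for the configurable PR templates mentioned above):

```javascript
// Sketch: render a PR description from a task object.
function generatePrDescription(task) {
  const checklist = (task.subtasks || [])
    .map((st) => `- [${st.status === 'done' ? 'x' : ' '}] ${st.title}`)
    .join('\n');
  return [
    `## Task ${task.id}: ${task.title}`,
    '',
    task.description,
    '',
    '### Subtasks',
    checklist,
  ].join('\n');
}

// Validation gate used before PR creation.
function allSubtasksDone(task) {
  return (task.subtasks || []).every((st) => st.status === 'done');
}
```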

## 5. Implement git finish command with cleanup [pending]
### Dependencies: None
### Description: Create the workflow completion command that handles post-merge cleanup and task status updates
### Details:
Build `scripts/modules/task-manager/git-finish.js` with:
- PR merge verification via GitHub API
- Local branch cleanup
- Remote branch deletion (with confirmation)
- Task status update to 'done'
- Workflow state cleanup
- Switch back to main branch
- Pull latest changes

Handle scenarios where the PR isn't merged yet or the merge failed. Include a --skip-cleanup flag for manual branch management.

## 6. Add git status command for workflow visibility [pending]
### Dependencies: None
### Description: Create a status command that shows current git workflow state with task context
### Details:
Implement `scripts/modules/task-manager/git-status.js` to display:
- Current task and branch information
- Subtask completion status
- Uncommitted changes summary
- PR status if exists
- Coverage metrics comparison
- Suggested next actions

Integrate with existing task status displays and provide actionable guidance based on workflow state.
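The "suggested next actions" line could be derived from the workflow state with a small decision function (the `state` shape here is hypothetical, sketched from the state fields discussed above):

```javascript
// Sketch: map workflow state to the next suggested task-master command.
function suggestNextAction(state) {
  if (!state.branch) return 'Run `task-master git start <taskId>` to begin a task.';
  if (state.uncommittedChanges > 0) return 'Run `task-master git commit` to commit your changes.';
  if (!state.prOpen && state.subtasksDone < state.subtasksTotal) {
    return `Finish remaining subtasks (${state.subtasksDone}/${state.subtasksTotal} done).`;
  }
  if (!state.prOpen) return 'All subtasks done - run `task-master git pr` to open a pull request.';
  if (state.prMerged) return 'PR merged - run `task-master git finish` to clean up.';
  return 'Waiting for PR review.';
}
```

Keeping the guidance logic pure makes it trivial to cover every branch in unit tests, per the coverage targets above.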

## 7. Integrate with Commander.js and add command routing [pending]
### Dependencies: None
### Description: Add the git command suite to TaskMaster's CLI with proper help text and option handling
### Details:
Update `scripts/modules/commands.js` to:
- Add 'git' command with subcommands
- Implement option parsing for all git commands
- Add comprehensive help text
- Ensure proper error handling and display
- Validate command prerequisites

Create proper command structure:
- `task-master git start [taskId] [options]`
- `task-master git commit [options]`
- `task-master git pr [options]`
- `task-master git finish [options]`
- `task-master git status [options]`

## 8. Add MCP server integration for git commands [pending]
### Dependencies: None
### Description: Implement MCP tools and direct functions for git workflow commands to enable IDE integration
### Details:
Create MCP integration in:
- `mcp-server/src/core/direct-functions/git-start.js`
- `mcp-server/src/core/direct-functions/git-commit.js`
- `mcp-server/src/core/direct-functions/git-pr.js`
- `mcp-server/src/core/direct-functions/git-finish.js`
- `mcp-server/src/core/direct-functions/git-status.js`
- `mcp-server/src/tools/git-start.js`
- `mcp-server/src/tools/git-commit.js`
- `mcp-server/src/tools/git-pr.js`
- `mcp-server/src/tools/git-finish.js`
- `mcp-server/src/tools/git-status.js`

Implement tools for:
- git_start_task
- git_commit_task
- git_create_pr
- git_finish_task
- git_workflow_status

Ensure proper error handling, logging, and response formatting. Include telemetry data for git operations.

## 9. Create comprehensive test suite [pending]
### Dependencies: None
### Description: Implement full test coverage following Task 4's high standards with unit, integration, and E2E tests
### Details:
Create test files:
- `tests/unit/git/` - Unit tests for all git components
- `tests/integration/git-workflow.test.js` - Full workflow tests
- `tests/e2e/git-automation.test.js` - End-to-end scenarios

Implement:
- Git repository fixtures and mocks
- Coverage tracking and reporting
- Performance benchmarks
- Error scenario coverage
- Multi-developer workflow simulations

Target 95%+ coverage with focus on critical paths.

## 10. Add configuration and documentation [pending]
### Dependencies: None
### Description: Create configuration options and comprehensive documentation for the git workflow feature
### Details:
Configuration tasks:
- Add git workflow settings to `.taskmasterconfig`
- Support environment variables for GitHub tokens
- Create default PR and commit templates
- Add branch naming customization

Documentation tasks:
- Update README with git workflow section
- Create `docs/git-workflow.md` guide
- Add examples for common scenarios
- Document configuration options
- Create troubleshooting guide

Update rule files:
- Create `.cursor/rules/git_workflow.mdc`
- Update existing workflow rules
1. Replace maxTokens in the DEFAULTS object with maxInputTokens, maxOutputTokens, and maxTotalTokens
2. Update any getter functions that reference maxTokens to handle both old and new configurations
3. Ensure backward compatibility so existing code using maxTokens continues to work
4. Update any related documentation or comments to reflect the new token limit fields
5. Test the changes to verify both new specific token limits and legacy maxTokens usage work correctly

@@ -1,639 +1,93 @@
# Task ID: 84
# Title: Enhance Parse-PRD with Intelligent Task Expansion and Detail Preservation
# Title: Implement token counting utility
# Status: pending
# Dependencies: None
# Dependencies: 82
# Priority: high
# Description: Transform parse-prd from a simple task generator into an intelligent system that preserves PRD detail resolution through context-aware task expansion. This addresses the critical issue where highly detailed PRDs lose their specificity when parsed into too few top-level tasks, and ensures that task expansions are grounded in actual PRD content rather than generic AI assumptions.
# Description: Create a utility function to count tokens for prompts based on the model being used, primarily using tiktoken for OpenAI and Anthropic models with character-based fallbacks for other providers.
# Details:
## Core Problem Statement

The current parse-prd implementation suffers from a fundamental resolution loss problem:

1. **Detail Compression**: Complex, detailed PRDs get compressed into a fixed number of top-level tasks (default 10), losing critical specificity
2. **Orphaned Expansions**: When tasks are later expanded via expand-task, the AI lacks the original PRD context, resulting in generic subtasks that don't reflect the PRD's specific requirements
3. **Binary Approach**: The system either creates too few high-level tasks OR requires manual expansion that loses PRD context

## Solution Architecture

### Phase 1: Enhanced PRD Analysis Engine
- Implement intelligent PRD segmentation that identifies natural task boundaries based on content structure
- Create a PRD context preservation system that maintains detailed mappings between PRD sections and generated tasks
- Develop adaptive task count determination based on PRD complexity metrics (length, technical depth, feature count)

### Phase 2: Context-Aware Task Generation
- Modify generateTasksFromPRD to create tasks with embedded PRD context references
- Implement a PRD section mapping system that links each task to its source PRD content
- Add metadata fields to tasks that preserve original PRD language and specifications

### Phase 3: Intelligent In-Flight Expansion
- Add optional `--expand-tasks` flag to parse-prd that triggers immediate expansion after initial task generation
- Implement context-aware expansion that uses the original PRD content for each task's expansion
- Create a two-pass system: first pass generates tasks with PRD context, second pass expands using that context

### Phase 4: PRD-Grounded Expansion Logic
- Enhance the expansion prompt generation to include relevant PRD excerpts for each task being expanded
- Implement smart context windowing that includes related PRD sections when expanding tasks
- Add validation to ensure expanded subtasks maintain fidelity to original PRD specifications

## Technical Implementation Details

### File Modifications Required:
1. **scripts/modules/task-manager/parse-prd.js**
   - Add PRD analysis functions for intelligent segmentation
   - Implement context preservation during task generation
   - Add optional expansion pipeline integration
   - Create PRD-to-task mapping system

2. **scripts/modules/task-manager/expand-task.js**
   - Enhance to accept PRD context as additional input
   - Modify expansion prompts to include relevant PRD excerpts
   - Add PRD-grounded validation for generated subtasks

3. **scripts/modules/ai-services-unified.js**
   - Add support for context-aware prompting with PRD excerpts
   - Implement intelligent context windowing for large PRDs
   - Add PRD analysis capabilities for complexity assessment

### New Data Structures:
```javascript
// Enhanced task structure with PRD context
{
  id: "1",
  title: "User Authentication System",
  description: "...",
  prdContext: {
    sourceSection: "Authentication Requirements (Lines 45-78)",
    originalText: "The system must implement OAuth 2.0...",
    relatedSections: ["Security Requirements", "User Management"],
    contextWindow: "Full PRD excerpt relevant to this task"
  },
  // ... existing fields
}

// PRD analysis metadata
{
  prdAnalysis: {
    totalComplexity: 8.5,
    naturalTaskBoundaries: [...],
    recommendedTaskCount: 15,
    sectionMappings: {...}
  }
}
```
1. Install the tiktoken package:
```bash
npm install tiktoken
```

### New CLI Options:
- `--expand-tasks`: Automatically expand generated tasks using PRD context
- `--preserve-detail`: Maximum detail preservation mode
- `--adaptive-count`: Let AI determine optimal task count based on PRD complexity
- `--context-window-size`: Control how much PRD context to include in expansions

## Implementation Strategy

### Step 1: PRD Analysis Enhancement
- Create PRD parsing utilities that identify natural section boundaries
- Implement complexity scoring for different PRD sections
- Build context extraction functions that preserve relevant details

### Step 2: Context-Aware Task Generation
- Modify the task generation prompt to include section-specific context
- Implement task-to-PRD mapping during generation
- Add metadata fields to preserve PRD relationships

### Step 3: Intelligent Expansion Pipeline
- Create expansion logic that uses preserved PRD context
- Implement smart prompt engineering that includes relevant PRD excerpts
- Add validation to ensure subtask fidelity to original requirements

### Step 4: Integration and Testing
- Integrate new functionality with existing parse-prd workflow
- Add comprehensive testing with various PRD types and complexities
- Implement telemetry for tracking detail preservation effectiveness

## Success Metrics
- PRD detail preservation rate (measured by semantic similarity between PRD and generated tasks)
- Reduction in manual task refinement needed post-parsing
- Improved accuracy of expanded subtasks compared to PRD specifications
- User satisfaction with task granularity and detail accuracy

## Edge Cases and Considerations
- Very large PRDs that exceed context windows
- PRDs with conflicting or ambiguous requirements
- Integration with existing task expansion workflows
- Performance impact of enhanced analysis
- Backward compatibility with existing parse-prd usage

# Test Strategy:


# Subtasks:
## 1. Implement PRD Analysis and Segmentation Engine [pending]
### Dependencies: None
### Description: Create intelligent PRD parsing that identifies natural task boundaries and complexity metrics
### Details:
## Implementation Requirements

### Core Functions to Implement:
1. **analyzePRDStructure(prdContent)**
   - Parse PRD into logical sections using headers, bullet points, and semantic breaks
   - Identify feature boundaries, technical requirements, and implementation sections
   - Return structured analysis with section metadata

2. **calculatePRDComplexity(prdContent)**
   - Analyze technical depth, feature count, integration requirements
   - Score complexity on 1-10 scale for different aspects
   - Return recommended task count based on complexity

3. **extractTaskBoundaries(prdAnalysis)**
   - Identify natural breaking points for task creation
   - Group related requirements into logical task units
   - Preserve context relationships between sections

### Technical Approach:
- Use regex patterns and NLP techniques to identify section headers
- Implement keyword analysis for technical complexity assessment
- Create semantic grouping algorithms for related requirements
- Build context preservation mappings

### Output Structure:
2. Create a new file `scripts/modules/token-counter.js`:
```javascript
{
  sections: [
    {
      title: "User Authentication",
      content: "...",
      startLine: 45,
      endLine: 78,
      complexity: 7,
      relatedSections: ["Security", "User Management"]
const tiktoken = require('tiktoken');

/**
 * Count tokens for a given text and model
 * @param {string} text - The text to count tokens for
 * @param {string} provider - The AI provider (e.g., 'openai', 'anthropic')
 * @param {string} modelId - The model ID
 * @returns {number} - Estimated token count
 */
function countTokens(text, provider, modelId) {
  if (!text) return 0;

  // Convert to lowercase for case-insensitive matching
  const providerLower = provider?.toLowerCase();

  try {
    // OpenAI models
    if (providerLower === 'openai') {
      // Most OpenAI chat models use cl100k_base encoding
      const encoding = tiktoken.encoding_for_model(modelId) || tiktoken.get_encoding('cl100k_base');
      return encoding.encode(text).length;
    }
    },
  ],
  overallComplexity: 8.5,
  recommendedTaskCount: 15,
  naturalBoundaries: [...],
  contextMappings: {...}
}
```

### Integration Points:
- Called at the beginning of parse-prd process
- Results used to inform task generation strategy
- Analysis stored for later use in expansion phase

## 2. Enhance Task Generation with PRD Context Preservation [pending]
### Dependencies: None
### Description: Modify generateTasksFromPRD to embed PRD context and maintain source mappings
### Details:
## Implementation Requirements

### Core Modifications to generateTasksFromPRD:
1. **Add PRD Context Embedding**
   - Modify task generation prompt to include relevant PRD excerpts
   - Ensure each generated task includes source section references
   - Preserve original PRD language and specifications in task metadata

2. **Implement Context Windowing**
   - For large PRDs, implement intelligent context windowing
   - Include relevant sections for each task being generated
   - Maintain context relationships between related tasks
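The windowing step could be sketched as a budgeted concatenation of the primary section plus related sections (the default budget mirrors the proposed `--context-window-size` of 2000 characters; `buildContextWindow`'s exact signature is an assumption):

```javascript
// Sketch: assemble a context window for one task from its primary PRD section
// plus related sections, trimmed to a character budget.
function buildContextWindow(primary, relatedSections, maxChars = 2000) {
  let window = primary.content;
  for (const section of relatedSections) {
    const excerpt = `\n[${section.title}]\n${section.content}`;
    if (window.length + excerpt.length > maxChars) {
      window += excerpt.slice(0, Math.max(0, maxChars - window.length));
      break;
    }
    window += excerpt;
  }
  return window;
}
```

A real implementation would budget in tokens (via the token counter) rather than characters, but the truncation contract is the same.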

3. **Enhanced Task Structure**
   - Add prdContext field to task objects
   - Include sourceSection, originalText, and relatedSections
   - Store contextWindow for later use in expansions

### Technical Implementation:
```javascript
// Enhanced task generation with context
const generateTaskWithContext = async (prdSection, relatedSections, fullPRD) => {
  const contextWindow = buildContextWindow(prdSection, relatedSections, fullPRD);
  const prompt = `
Generate a task based on this PRD section:

PRIMARY SECTION:
${prdSection.content}

RELATED CONTEXT:
${contextWindow}

Ensure the task preserves all specific requirements and technical details.
`;

  // Generate task with embedded context
  const task = await generateTask(prompt);
  task.prdContext = {
    sourceSection: prdSection.title,
    originalText: prdSection.content,
    relatedSections: relatedSections.map(s => s.title),
    contextWindow: contextWindow
  };

  return task;
};
```

### Context Preservation Strategy:
- Map each task to its source PRD sections
- Preserve technical specifications and requirements language
- Maintain relationships between interdependent features
- Store context for later use in expansion phase

### Integration with Existing Flow:
- Modify existing generateTasksFromPRD function
- Maintain backward compatibility with simple PRDs
- Add new metadata fields without breaking existing structure
- Ensure context is available for subsequent operations

## 3. Implement In-Flight Task Expansion Pipeline [pending]
### Dependencies: None
### Description: Add optional --expand-tasks flag and intelligent expansion using preserved PRD context
### Details:
## Implementation Requirements

### Core Features:
1. **Add --expand-tasks CLI Flag**
   - Optional flag for parse-prd command
   - Triggers automatic expansion after initial task generation
   - Configurable expansion depth and strategy

2. **Two-Pass Processing System**
   - First pass: Generate tasks with PRD context preservation
   - Second pass: Expand tasks using their embedded PRD context
   - Maintain context fidelity throughout the process

3. **Context-Aware Expansion Logic**
   - Use preserved PRD context for each task's expansion
   - Include relevant PRD excerpts in expansion prompts
   - Ensure subtasks maintain fidelity to original specifications

### Technical Implementation:
```javascript
// Enhanced parse-prd with expansion pipeline
const parsePRDWithExpansion = async (prdContent, options) => {
  // Phase 1: Analyze and generate tasks with context
  const prdAnalysis = await analyzePRDStructure(prdContent);
  const tasksWithContext = await generateTasksWithContext(prdAnalysis);

  // Phase 2: Expand tasks if requested
  if (options.expandTasks) {
    for (const task of tasksWithContext) {
      if (shouldExpandTask(task, prdAnalysis)) {
        const expandedSubtasks = await expandTaskWithPRDContext(task);
        task.subtasks = expandedSubtasks;
    // Anthropic models - can use cl100k_base as an approximation
    // or follow Anthropic's guidance
    if (providerLower === 'anthropic') {
      try {
        // Try to use cl100k_base as a reasonable approximation
        const encoding = tiktoken.get_encoding('cl100k_base');
        return encoding.encode(text).length;
      } catch (e) {
        // Fallback to Anthropic's character-based estimation
        return Math.ceil(text.length / 3.5); // ~3.5 chars per token for English
      }
    }

    // For other providers, use character-based estimation as fallback
    // Different providers may have different tokenization schemes
    return Math.ceil(text.length / 4); // General fallback estimate
  } catch (error) {
    console.warn(`Token counting error: ${error.message}. Using character-based estimate.`);
    return Math.ceil(text.length / 4); // Fallback if tiktoken fails
  }

  return tasksWithContext;
};

// Context-aware task expansion
const expandTaskWithPRDContext = async (task) => {
  const { prdContext } = task;
  const expansionPrompt = `
Expand this task into detailed subtasks using the original PRD context:

TASK: ${task.title}
DESCRIPTION: ${task.description}

ORIGINAL PRD CONTEXT:
${prdContext.originalText}

RELATED SECTIONS:
${prdContext.contextWindow}

Generate subtasks that preserve all technical details and requirements from the PRD.
`;

  return await generateSubtasks(expansionPrompt);
};
```

### CLI Integration:
- Add --expand-tasks flag to parse-prd command
- Add --expansion-depth option for controlling subtask levels
- Add --preserve-detail flag for maximum context preservation
- Maintain backward compatibility with existing parse-prd usage

### Expansion Strategy:
- Determine which tasks should be expanded based on complexity
- Use PRD context to generate accurate, detailed subtasks
- Preserve technical specifications and implementation details
- Validate subtask accuracy against original PRD content

### Performance Considerations:
- Implement batching for large numbers of tasks
- Add progress indicators for long-running expansions
- Optimize context window sizes for efficiency
- Cache PRD analysis results for reuse

## 4. Enhance Expand-Task with PRD Context Integration [pending]
### Dependencies: None
### Description: Modify existing expand-task functionality to leverage preserved PRD context for more accurate expansions
### Details:
## Implementation Requirements

### Core Enhancements to expand-task.js:
1. **PRD Context Detection**
   - Check if task has embedded prdContext metadata
   - Extract relevant PRD sections for expansion
   - Fall back to existing expansion logic if no PRD context

2. **Context-Enhanced Expansion Prompts**
   - Include original PRD excerpts in expansion prompts
   - Add related section context for comprehensive understanding
   - Preserve technical specifications and requirements language

3. **Validation and Quality Assurance**
   - Validate generated subtasks against original PRD content
   - Ensure technical accuracy and requirement compliance
   - Flag potential discrepancies for review

### Technical Implementation:
```javascript
// Enhanced expand-task with PRD context
const expandTaskWithContext = async (taskId, options, context) => {
  const task = await getTask(taskId);

  // Check for PRD context
  if (task.prdContext) {
    return await expandWithPRDContext(task, options);
  } else {
    // Fall back to existing expansion logic
    return await expandTaskStandard(task, options);
  }
};

const expandWithPRDContext = async (task, options) => {
  const { prdContext } = task;

  const enhancedPrompt = `
Expand this task into detailed subtasks using the original PRD context:

TASK DETAILS:
Title: ${task.title}
Description: ${task.description}
Current Details: ${task.details}

ORIGINAL PRD CONTEXT:
Source Section: ${prdContext.sourceSection}
Original Requirements:
${prdContext.originalText}

RELATED CONTEXT:
${prdContext.contextWindow}

EXPANSION REQUIREMENTS:
- Preserve all technical specifications from the PRD
- Maintain requirement accuracy and completeness
- Generate ${options.num || 'appropriate number of'} subtasks
- Include implementation details that reflect PRD specifics

Generate subtasks that are grounded in the original PRD content.
`;

  const subtasks = await generateSubtasks(enhancedPrompt, options);

  // Add PRD context inheritance to subtasks
  subtasks.forEach(subtask => {
    subtask.prdContext = {
      inheritedFrom: task.id,
      sourceSection: prdContext.sourceSection,
      relevantExcerpt: extractRelevantExcerpt(prdContext, subtask)
    };
  });

  return subtasks;
};
```
|
||||
|
||||
### Integration Points:
|
||||
1. **Modify existing expand-task.js**
|
||||
- Add PRD context detection logic
|
||||
- Enhance prompt generation with context
|
||||
- Maintain backward compatibility
|
||||
|
||||
2. **Update expansion validation**
|
||||
- Add PRD compliance checking
|
||||
- Implement quality scoring for context fidelity
|
||||
- Flag potential accuracy issues
|
||||
|
||||
3. **CLI and MCP Integration**
|
||||
- Update expand-task command to leverage PRD context
|
||||
- Add options for context-aware expansion
|
||||
- Maintain existing command interface
|
||||
|
||||
### Context Inheritance Strategy:
- Pass relevant PRD context to generated subtasks
- Create context inheritance chain for nested expansions
- Preserve source traceability throughout expansion tree
- Enable future re-expansion with maintained context
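The `extractRelevantExcerpt` helper referenced in the expansion code is not defined in this task. A minimal sketch, assuming a context object with an `originalText` field and a subtask with a `title` (both shapes are assumptions), could rank PRD sentences by keyword overlap:

```javascript
// Hypothetical sketch of extractRelevantExcerpt: naive keyword overlap between
// the subtask title and PRD sentences. A production version would likely use
// semantic similarity instead of string matching.
function extractRelevantExcerpt(prdContext, subtask, maxChars = 500) {
  // Keywords: words longer than 3 characters from the subtask title
  const keywords = subtask.title.toLowerCase().split(/\W+/).filter(w => w.length > 3);
  // Split the PRD text into rough sentences
  const sentences = prdContext.originalText.split(/(?<=[.!?])\s+/);
  // Score each sentence by keyword hits, keep matches, best first
  const scored = sentences
    .map(s => ({ s, hits: keywords.filter(k => s.toLowerCase().includes(k)).length }))
    .filter(x => x.hits > 0)
    .sort((a, b) => b.hits - a.hits);
  return scored.map(x => x.s).join(' ').slice(0, maxChars);
}
```

This keeps inherited context small and traceable; tuning `maxChars` trades excerpt richness against prompt size.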
### Quality Assurance Features:
- Semantic similarity checking between subtasks and PRD
- Technical requirement compliance validation
- Automated flagging of potential context drift
- User feedback integration for continuous improvement
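The semantic similarity check listed above could start as a simple bag-of-words cosine comparison before graduating to embeddings. The function names here are illustrative, not the project's API:

```javascript
// Term-frequency map for a text (lowercased, word-split)
function termFrequency(text) {
  const tf = new Map();
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    tf.set(w, (tf.get(w) || 0) + 1);
  }
  return tf;
}

// Cosine similarity between two texts in [0, 1]
function cosineSimilarity(a, b) {
  const tfA = termFrequency(a);
  const tfB = termFrequency(b);
  let dot = 0, nA = 0, nB = 0;
  for (const [w, c] of tfA) { dot += c * (tfB.get(w) || 0); nA += c * c; }
  for (const [, c] of tfB) nB += c * c;
  return nA && nB ? dot / Math.sqrt(nA * nB) : 0;
}
```

A low score between a subtask and its source section would be a candidate signal for the context-drift flagging mentioned above.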
## 5. Add New CLI Options and MCP Parameters [pending]
### Dependencies: None
### Description: Implement new command-line flags and MCP tool parameters for enhanced PRD parsing
### Details:
## Implementation Requirements
### New CLI Options for parse-prd:
1. **--expand-tasks**
   - Automatically expand generated tasks using PRD context
   - Boolean flag, default false
   - Triggers in-flight expansion pipeline

2. **--preserve-detail**
   - Maximum detail preservation mode
   - Boolean flag, default false
   - Ensures highest fidelity to PRD content

3. **--adaptive-count**
   - Let AI determine optimal task count based on PRD complexity
   - Boolean flag, default false
   - Overrides --num-tasks when enabled

4. **--context-window-size**
   - Control how much PRD context to include in expansions
   - Integer value, default 2000 characters
   - Balances context richness with performance

5. **--expansion-depth**
   - Control how many levels deep to expand tasks
   - Integer value, default 1
   - Prevents excessive nesting
### MCP Tool Parameter Updates:
```javascript
// Enhanced parse_prd MCP tool parameters
{
  input: "Path to PRD file",
  output: "Output path for tasks.json",
  numTasks: "Number of top-level tasks (overridden by adaptiveCount)",
  expandTasks: "Boolean - automatically expand tasks with PRD context",
  preserveDetail: "Boolean - maximum detail preservation mode",
  adaptiveCount: "Boolean - AI determines optimal task count",
  contextWindowSize: "Integer - context size for expansions",
  expansionDepth: "Integer - levels of expansion to perform",
  research: "Boolean - use research model for enhanced analysis",
  force: "Boolean - overwrite existing files"
}
```
### CLI Command Updates:
```bash
# Enhanced parse-prd command examples
task-master parse-prd prd.txt --expand-tasks --preserve-detail
task-master parse-prd prd.txt --adaptive-count --expansion-depth=2
task-master parse-prd prd.txt --context-window-size=3000 --research
```
### Implementation Details:
1. **Update commands.js**
   - Add new option definitions
   - Update parse-prd command handler
   - Maintain backward compatibility

2. **Update MCP tool definition**
   - Add new parameter schemas
   - Update tool description and examples
   - Ensure parameter validation

3. **Parameter Processing Logic**
   - Validate parameter combinations
   - Set appropriate defaults
   - Handle conflicting options gracefully
### Validation Rules:
- expansion-depth must be a positive integer ≤ 3
- context-window-size must be between 500 and 5000 characters
- adaptive-count overrides num-tasks when both are specified
- expand-tasks requires either adaptive-count or num-tasks > 5
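The rules above can be enforced with a small validator. The function name and the camelCase option-object shape are assumptions for illustration, not the project's actual API:

```javascript
// Sketch: validate parse-prd option combinations per the rules listed above.
// Returns an array of human-readable error strings (empty when valid).
function validateParsePrdOptions(opts) {
  const errors = [];
  if (opts.expansionDepth !== undefined &&
      (!Number.isInteger(opts.expansionDepth) || opts.expansionDepth < 1 || opts.expansionDepth > 3)) {
    errors.push('expansion-depth must be a positive integer <= 3');
  }
  if (opts.contextWindowSize !== undefined &&
      (opts.contextWindowSize < 500 || opts.contextWindowSize > 5000)) {
    errors.push('context-window-size must be between 500 and 5000 characters');
  }
  if (opts.expandTasks && !opts.adaptiveCount && !(opts.numTasks > 5)) {
    errors.push('expand-tasks requires either adaptive-count or num-tasks > 5');
  }
  return errors;
}
```

Collecting all errors rather than throwing on the first lets the CLI print every problem in one pass.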
### Help Documentation Updates:
- Update command help text with new options
- Add usage examples for different scenarios
- Document parameter interactions and constraints
- Include performance considerations for large PRDs
## 6. Implement Comprehensive Testing and Validation [pending]
### Dependencies: None
### Description: Create test suite for PRD analysis, context preservation, and expansion accuracy
### Details:
## Implementation Requirements
### Test Categories:
1. **PRD Analysis Testing**
   - Test section identification with various PRD formats
   - Validate complexity scoring accuracy
   - Test boundary detection for different document structures
   - Verify context mapping correctness

2. **Context Preservation Testing**
   - Validate PRD context embedding in generated tasks
   - Test context window generation and sizing
   - Verify source section mapping accuracy
   - Test context inheritance in subtasks

3. **Expansion Accuracy Testing**
   - Compare PRD-grounded vs standard expansions
   - Measure semantic similarity between PRD and subtasks
   - Test technical requirement preservation
   - Validate expansion depth and quality

4. **Integration Testing**
   - Test full parse-prd pipeline with expansion
   - Validate CLI option combinations
   - Test MCP tool parameter handling
   - Verify backward compatibility
### Test Data Requirements:
```javascript
// Test PRD samples
const testPRDs = {
  simple: "Basic PRD with minimal technical details",
  complex: "Detailed PRD with extensive technical specifications",
  structured: "Well-organized PRD with clear sections",
  unstructured: "Free-form PRD with mixed content",
  technical: "Highly technical PRD with specific requirements",
  large: "Very large PRD testing context window limits"
};
```

Token-counter tests belong in `tests/token-counter.test.js` and import the counter with `const { countTokens } = require('../scripts/modules/token-counter');`.
### Validation Metrics:
1. **Detail Preservation Score**
   - Semantic similarity between PRD and generated tasks
   - Technical requirement coverage percentage
   - Specification accuracy rating

2. **Context Fidelity Score**
   - Accuracy of source section mapping
   - Relevance of included context windows
   - Quality of context inheritance

3. **Expansion Quality Score**
   - Subtask relevance to parent task and PRD
   - Technical accuracy of implementation details
   - Completeness of requirement coverage
### Test Implementation:
```javascript
// Example test structure
const { countTokens } = require('../scripts/modules/token-counter');

describe('Enhanced Parse-PRD', () => {
  describe('PRD Analysis', () => {
    test('should identify sections correctly', async () => {
      const analysis = await analyzePRDStructure(testPRDs.structured);
      expect(analysis.sections).toHaveLength(expectedSectionCount);
      expect(analysis.overallComplexity).toBeGreaterThan(0);
    });

    test('should calculate appropriate task count', async () => {
      const analysis = await analyzePRDStructure(testPRDs.complex);
      expect(analysis.recommendedTaskCount).toBeGreaterThan(10);
    });
  });

  describe('Context Preservation', () => {
    test('should embed PRD context in tasks', async () => {
      const tasks = await generateTasksWithContext(testPRDs.technical);
      tasks.forEach(task => {
        expect(task.prdContext).toBeDefined();
        expect(task.prdContext.sourceSection).toBeTruthy();
        expect(task.prdContext.originalText).toBeTruthy();
      });
    });
  });

  describe('Expansion Accuracy', () => {
    test('should generate relevant subtasks from PRD context', async () => {
      const task = createTestTaskWithPRDContext();
      const subtasks = await expandTaskWithPRDContext(task);

      const relevanceScore = calculateRelevanceScore(subtasks, task.prdContext);
      expect(relevanceScore).toBeGreaterThan(0.8);
    });
  });
});

describe('Token Counter', () => {
  test('counts tokens for OpenAI models', () => {
    const text = 'Hello, world! This is a test.';
    const count = countTokens(text, 'openai', 'gpt-4');
    expect(count).toBeGreaterThan(0);
    expect(typeof count).toBe('number');
  });

  test('counts tokens for Anthropic models', () => {
    const text = 'Hello, world! This is a test.';
    const count = countTokens(text, 'anthropic', 'claude-3-7-sonnet-20250219');
    expect(count).toBeGreaterThan(0);
    expect(typeof count).toBe('number');
  });

  test('handles empty text', () => {
    expect(countTokens('', 'openai', 'gpt-4')).toBe(0);
    expect(countTokens(null, 'openai', 'gpt-4')).toBe(0);
  });
});
```
### Performance Testing:
- Test with large PRDs (>10,000 words)
- Measure processing time for different complexity levels
- Test memory usage with extensive context preservation
- Validate timeout handling for long-running operations
### Quality Assurance Tools:
- Automated semantic similarity checking
- Technical requirement compliance validation
- Context drift detection algorithms
- User acceptance testing framework
### Continuous Integration:
- Add tests to existing CI pipeline
- Set up performance benchmarking
- Implement quality gates for PRD processing
- Create regression testing for context preservation
# Test Strategy:
1. Unit test the countTokens function with various inputs and models
2. Compare token counts with known examples from OpenAI and Anthropic documentation
3. Test edge cases: empty strings, very long texts, non-English texts
4. Test fallback behavior when tiktoken fails or is not applicable
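The strategy above assumes a `countTokens(text, provider, modelId)` function with a tiktoken-backed path and a heuristic fallback. A hedged sketch, where the tiktoken call and the rough four-characters-per-token estimate are both assumptions rather than guarantees:

```javascript
// Sketch of countTokens with graceful fallback. If tiktoken (or the model
// encoding) is unavailable, fall back to a character-count heuristic.
function countTokens(text, provider, modelId) {
  if (!text) return 0;
  try {
    if (provider === 'openai') {
      // Illustrative tiktoken usage; the exact API depends on the package version.
      const { encoding_for_model } = require('tiktoken');
      const enc = encoding_for_model(modelId);
      const count = enc.encode(text).length;
      enc.free();
      return count;
    }
  } catch (err) {
    // tiktoken missing or model unsupported: fall through to the heuristic
  }
  // Rough heuristic: ~4 characters per token for English text
  return Math.ceil(text.length / 4);
}
```

Because the fallback never throws, callers can always budget prompts, at the cost of some accuracy for non-OpenAI providers.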
@@ -1,161 +1,107 @@
# Task ID: 86
# Title: Implement GitHub Issue Export Feature
# Title: Update .taskmasterconfig schema and user guide
# Status: pending
# Dependencies: 45
# Priority: high
# Description: Create a comprehensive 'export_task' command that enables exporting Task Master tasks to GitHub Issues, providing bidirectional integration with the existing import functionality.
# Dependencies: 83
# Priority: medium
# Description: Create a migration guide for users to update their .taskmasterconfig files and document the new token limit configuration options.
# Details:
Implement a robust 'export_task' command with the following components:

1. **Command Structure**:
   - Create a new 'export_task' command with destination-specific subcommands
   - Initial implementation should focus on GitHub integration
   - Command syntax: `taskmaster export_task github [options] <task_id>`
   - Support options for repository selection, issue type, and export configuration

2. **GitHub Issue Creation**:
   - Convert Task Master tasks into properly formatted GitHub issues
   - Map task title and description to GitHub issue fields
   - Convert implementation details and test strategy into well-structured issue body sections
   - Transform subtasks into GitHub task lists or optionally create separate linked issues
   - Map Task Master priorities, tags, and assignees to GitHub labels and assignees
   - Add Task Master metadata as hidden comments for bidirectional linking

3. **GitHub API Integration**:
   - Implement GitHub API client for issue creation and management
   - Support authentication via GITHUB_API_KEY environment variable
   - Handle repository access for both public and private repositories
   - Implement proper error handling for API failures
   - Add rate limiting support to prevent API abuse
   - Support milestone assignment if applicable

4. **Bidirectional Linking**:
   - Store GitHub issue URL and ID in task metadata
   - Use consistent metadata schema compatible with the import feature
   - Implement checks to prevent duplicate exports
   - Support updating existing GitHub issues if task has been modified
   - Enable round-trip workflows (export → modify in GitHub → re-import)

5. **Extensible Architecture**:
   - Design the export system to be platform-agnostic
   - Create adapter interfaces for different export destinations
   - Implement the GitHub adapter as the first concrete implementation
   - Allow for custom export templates and formatting rules
   - Document extension points for future platforms (GitLab, Linear, Jira, etc.)

6. **Content Formatting**:
   - Implement smart content conversion from Task Master format to GitHub-optimized format
   - Handle markdown conversion appropriately
   - Format code blocks, tables, and other structured content
   - Add appropriate GitHub-specific references and formatting
   - Ensure proper rendering of task relationships and dependencies

7. **Configuration and Settings**:
   - Add export-related configuration to Task Master settings
   - Support default repositories and export preferences
   - Allow customization of export templates and formatting
   - Implement export history tracking

8. **Documentation**:
   - Create comprehensive documentation for the export feature
   - Include examples and best practices
   - Document the bidirectional workflow with import feature

1. Create a migration script or guide for users to update their existing `.taskmasterconfig` files:

```javascript
// Example migration snippet for .taskmasterconfig
{
  "main": {
    // Before:
    // "maxTokens": 16000,
    // After:
    "maxInputTokens": 16000,
    "maxOutputTokens": 4000,
    "temperature": 0.7
  },
  "research": {
    "maxInputTokens": 16000,
    "maxOutputTokens": 4000,
    "temperature": 0.7
  },
  "fallback": {
    "maxInputTokens": 8000,
    "maxOutputTokens": 2000,
    "temperature": 0.7
  }
}
```

2. Update the user documentation to explain the new token limit fields:

```markdown
# Token Limit Configuration

Task Master now provides more granular control over token limits with separate settings for input and output tokens:

- `maxInputTokens`: Maximum number of tokens allowed in the input prompt (system prompt + user prompt)
- `maxOutputTokens`: Maximum number of tokens the model should generate in its response

## Benefits

- More precise control over token usage
- Better cost management
- Reduced likelihood of hitting model context limits
- Dynamic adjustment to maximize output space based on input length

## Migration from Previous Versions

If you're upgrading from a previous version, you'll need to update your `.taskmasterconfig` file:

1. Replace the single `maxTokens` field with separate `maxInputTokens` and `maxOutputTokens` fields
2. Recommended starting values:
   - Set `maxInputTokens` to your previous `maxTokens` value
   - Set `maxOutputTokens` to approximately 1/4 of your model's context window

## Example Configuration

```json
{
  "main": {
    "maxInputTokens": 16000,
    "maxOutputTokens": 4000,
    "temperature": 0.7
  }
}
```
```

3. Update the schema validation in `config-manager.js` to validate the new fields:

```javascript
function _validateConfig(config) {
  // ... existing validation

  // Validate token limits for each role
  ['main', 'research', 'fallback'].forEach(role => {
    if (config[role]) {
      // Check if old maxTokens is present and warn about migration
      if (config[role].maxTokens !== undefined) {
        console.warn(`Warning: 'maxTokens' in ${role} role is deprecated. Please use 'maxInputTokens' and 'maxOutputTokens' instead.`);
      }

      // Validate new token limit fields
      if (config[role].maxInputTokens !== undefined && (!Number.isInteger(config[role].maxInputTokens) || config[role].maxInputTokens <= 0)) {
        throw new Error(`Invalid maxInputTokens for ${role} role: must be a positive integer`);
      }

      if (config[role].maxOutputTokens !== undefined && (!Number.isInteger(config[role].maxOutputTokens) || config[role].maxOutputTokens <= 0)) {
        throw new Error(`Invalid maxOutputTokens for ${role} role: must be a positive integer`);
      }
    }
  });

  return config;
}
```
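The migration steps in the guide can be sketched as a one-shot helper. The function name is hypothetical, and the quarter-of-context default comes from the guide's own recommendation:

```javascript
// Hypothetical migration helper: maps a role's deprecated maxTokens to the new
// split fields, using the model's context window to derive maxOutputTokens.
function migrateRoleConfig(role, contextWindowTokens) {
  if (role.maxTokens === undefined) return role; // already migrated
  const { maxTokens, ...rest } = role;
  return {
    ...rest,
    // Guide recommendation: carry maxTokens over as the input budget
    maxInputTokens: maxTokens,
    // Guide recommendation: ~1/4 of the model's context window for output
    maxOutputTokens: Math.floor(contextWindowTokens / 4)
  };
}
```

Running this over each role (`main`, `research`, `fallback`) before validation lets old config files load with a deprecation warning instead of a hard failure.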
# Test Strategy:
1. **Unit Tests**:
   - Create unit tests for each component of the export system
   - Test GitHub API client with mock responses
   - Verify correct task-to-issue conversion logic
   - Test bidirectional linking metadata handling
   - Validate error handling and edge cases

2. **Integration Tests**:
   - Test end-to-end export workflow with test GitHub repository
   - Verify created GitHub issues match expected format and content
   - Test round-trip workflow (export → import) to ensure data integrity
   - Validate behavior with various task types and structures
   - Test with both simple and complex tasks with subtasks

3. **Manual Testing Checklist**:
   - Export a simple task and verify all fields are correctly mapped
   - Export a complex task with subtasks and verify correct representation
   - Test exporting to different repositories and with different user permissions
   - Verify error messages are clear and helpful
   - Test updating an already-exported task
   - Verify bidirectional linking works correctly
   - Test the round-trip workflow with modifications in GitHub

4. **Edge Case Testing**:
   - Test with missing GitHub credentials
   - Test with invalid repository names
   - Test with rate-limited API responses
   - Test with very large tasks and content
   - Test with special characters and formatting in task content
   - Verify behavior when GitHub is unreachable

5. **Performance Testing**:
   - Measure export time for different task sizes
   - Test batch export of multiple tasks
   - Verify system handles GitHub API rate limits appropriately
# Subtasks:
## 1. Design CLI Command Structure [pending]
### Dependencies: None
### Description: Define the command-line interface structure for the GitHub Issue Export Feature
### Details:
Create a comprehensive CLI design including command syntax, argument parsing, help documentation, and user feedback mechanisms. Define flags for filtering issues by state, labels, assignees, and date ranges. Include options for output format selection (JSON, CSV, XLSX) and destination path configuration.

## 2. Develop GitHub API Client [pending]
### Dependencies: None
### Description: Create a robust client for interacting with GitHub's REST and GraphQL APIs
### Details:
Implement a client library that handles API rate limiting, pagination, and response parsing. Support both REST and GraphQL endpoints for optimal performance. Include methods for fetching issues, comments, labels, milestones, and user data with appropriate caching mechanisms to minimize API calls.

## 3. Implement Authentication System [pending]
### Dependencies: 86.2
### Description: Build a secure authentication system for GitHub API access
### Details:
Develop authentication flows supporting personal access tokens, OAuth, and GitHub Apps. Implement secure credential storage with appropriate encryption. Create comprehensive error handling for authentication failures, token expiration, and permission issues with clear user feedback.

## 4. Create Task-to-Issue Mapping Logic [pending]
### Dependencies: 86.2, 86.3
### Description: Develop the core logic for mapping GitHub issues to task structures
### Details:
Implement data models and transformation logic to convert GitHub issues into structured task objects. Handle relationships between issues including parent-child relationships, dependencies, and linked issues. Support task lists within issue bodies and map them to subtasks with appropriate status tracking.

## 5. Build Content Formatting Engine [pending]
### Dependencies: 86.4
### Description: Create a system for formatting and converting issue content
### Details:
Develop a markdown processing engine that handles GitHub Flavored Markdown. Implement converters for transforming content to various formats (plain text, HTML, etc.). Create utilities for handling embedded images, code blocks, and other rich content elements while preserving formatting integrity.

## 6. Implement Bidirectional Linking System [pending]
### Dependencies: 86.4, 86.5
### Description: Develop mechanisms for maintaining bidirectional links between exported data and GitHub
### Details:
Create a reference system that maintains links between exported tasks and their source GitHub issues. Implement metadata preservation to enable round-trip workflows. Design a change tracking system to support future synchronization capabilities between exported data and GitHub.

## 7. Design Extensible Architecture [pending]
### Dependencies: 86.4, 86.5, 86.6
### Description: Create an adapter-based architecture for supporting multiple export formats and destinations
### Details:
Implement a plugin architecture with adapter interfaces for different output formats (JSON, CSV, XLSX) and destinations (file system, cloud storage, third-party tools). Create a registry system for dynamically loading adapters. Design clean separation between core logic and format-specific implementations.

## 8. Develop Configuration Management [pending]
### Dependencies: 86.1, 86.7
### Description: Build a robust system for managing user configurations and preferences
### Details:
Implement configuration file handling with support for multiple locations (global, project-specific). Create a settings management system with validation and defaults. Support environment variable overrides and command-line parameter precedence. Include migration paths for configuration format changes.

## 9. Create Comprehensive Documentation [pending]
### Dependencies: 86.1, 86.7, 86.8
### Description: Develop detailed documentation for users and contributors
### Details:
Write user-facing documentation including installation guides, command references, and usage examples. Create developer documentation covering architecture, extension points, and contribution guidelines. Implement automated documentation generation from code comments. Prepare tutorials for common use cases and integration scenarios.

## 10. Implement Testing Framework [pending]
### Dependencies: 86.1, 86.2, 86.3, 86.4, 86.5, 86.6, 86.7, 86.8
### Description: Develop a comprehensive testing strategy and implementation
### Details:
Create unit tests for all core components with high coverage targets. Implement integration tests for GitHub API interactions using mocks and fixtures. Design end-to-end tests for complete workflows. Develop performance tests for large repositories and stress testing. Create a test suite for edge cases including rate limiting, network failures, and malformed data.
1. Verify documentation is clear and provides migration steps
2. Test the validation logic with various config formats
3. Test backward compatibility with old config format
4. Ensure error messages are helpful when validation fails
@@ -1,73 +1,119 @@
# Task ID: 87
# Title: Task Master Gateway Integration
# Title: Implement validation and error handling
# Status: pending
# Dependencies: None
# Priority: high
# Description: Integrate Task Master with premium gateway services for enhanced testing and git workflow capabilities
# Dependencies: 85
# Priority: low
# Description: Add comprehensive validation and error handling for token limits throughout the system, including helpful error messages and graceful fallbacks.
# Details:
Add gateway integration to Task Master (open source) that enables users to access premium AI-powered test generation, TDD orchestration, and smart git workflows through API key authentication. Maintains local file operations while leveraging remote AI intelligence.

1. Add validation when loading models in `config-manager.js`:
```javascript
function _validateModelMap(modelMap) {
  // Validate each provider's models
  Object.entries(modelMap).forEach(([provider, models]) => {
    models.forEach(model => {
      // Check for required token limit fields
      if (!model.contextWindowTokens) {
        console.warn(`Warning: Model ${model.id} from ${provider} is missing contextWindowTokens field`);
      }
      if (!model.maxOutputTokens) {
        console.warn(`Warning: Model ${model.id} from ${provider} is missing maxOutputTokens field`);
      }
    });
  });
  return modelMap;
}
```

2. Add validation when setting up a model in the CLI:
```javascript
function validateModelConfig(modelConfig, modelCapabilities) {
  const issues = [];

  // Check if input tokens exceed model's context window
  if (modelConfig.maxInputTokens > modelCapabilities.contextWindowTokens) {
    issues.push(`maxInputTokens (${modelConfig.maxInputTokens}) exceeds model's context window (${modelCapabilities.contextWindowTokens})`);
  }

  // Check if output tokens exceed model's maximum
  if (modelConfig.maxOutputTokens > modelCapabilities.maxOutputTokens) {
    issues.push(`maxOutputTokens (${modelConfig.maxOutputTokens}) exceeds model's maximum output tokens (${modelCapabilities.maxOutputTokens})`);
  }

  // Check if combined tokens exceed context window
  if (modelConfig.maxInputTokens + modelConfig.maxOutputTokens > modelCapabilities.contextWindowTokens) {
    issues.push(`Combined maxInputTokens and maxOutputTokens (${modelConfig.maxInputTokens + modelConfig.maxOutputTokens}) exceeds model's context window (${modelCapabilities.contextWindowTokens})`);
  }

  return issues;
}
```

3. Add graceful fallbacks in `ai-services-unified.js`:
```javascript
// Fallback for missing token limits
if (!roleParams.maxInputTokens) {
  console.warn(`Warning: maxInputTokens not specified for role '${currentRole}'. Using default value.`);
  roleParams.maxInputTokens = 8000; // Reasonable default
}

if (!roleParams.maxOutputTokens) {
  console.warn(`Warning: maxOutputTokens not specified for role '${currentRole}'. Using default value.`);
  roleParams.maxOutputTokens = 2000; // Reasonable default
}

// Fallback for missing model capabilities
if (!modelCapabilities.contextWindowTokens) {
  console.warn(`Warning: contextWindowTokens not specified for model ${modelId}. Using conservative estimate.`);
  modelCapabilities.contextWindowTokens = roleParams.maxInputTokens + roleParams.maxOutputTokens;
}

if (!modelCapabilities.maxOutputTokens) {
  console.warn(`Warning: maxOutputTokens not specified for model ${modelId}. Using role configuration.`);
  modelCapabilities.maxOutputTokens = roleParams.maxOutputTokens;
}
```

4. Add detailed logging for token usage:
```javascript
// availableOutputTokens is passed in so the log can report the dynamically
// allocated output budget alongside the actual usage.
function logTokenUsage(provider, modelId, inputTokens, outputTokens, availableOutputTokens, role) {
  const inputCost = calculateTokenCost(provider, modelId, 'input', inputTokens);
  const outputCost = calculateTokenCost(provider, modelId, 'output', outputTokens);

  console.info(`Token usage for ${role} role with ${provider}/${modelId}:`);
  console.info(`- Input: ${inputTokens.toLocaleString()} tokens ($${inputCost.toFixed(6)})`);
  console.info(`- Output: ${outputTokens.toLocaleString()} tokens ($${outputCost.toFixed(6)})`);
  console.info(`- Total cost: $${(inputCost + outputCost).toFixed(6)}`);
  console.info(`- Available output tokens: ${availableOutputTokens.toLocaleString()}`);
}
```

5. Add a helper function to suggest configuration improvements:
```javascript
function suggestTokenConfigImprovements(roleParams, modelCapabilities, promptTokens) {
  const suggestions = [];

  // If prompt is using less than 50% of allowed input
  if (promptTokens < roleParams.maxInputTokens * 0.5) {
    suggestions.push(`Consider reducing maxInputTokens from ${roleParams.maxInputTokens} to save on potential costs`);
  }

  // If output tokens are very limited due to large input
  const availableOutput = Math.min(
    roleParams.maxOutputTokens,
    modelCapabilities.contextWindowTokens - promptTokens
  );

  if (availableOutput < roleParams.maxOutputTokens * 0.5) {
    suggestions.push(`Available output tokens (${availableOutput}) are significantly less than configured maxOutputTokens (${roleParams.maxOutputTokens}) due to large input`);
  }

  return suggestions;
}
```
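The `availableOutput` computation shown in the suggestion helper generalizes to a small allocation function that keeps input plus output within the context window. The function name is illustrative:

```javascript
// Dynamically size the output allocation: never exceed the configured
// maxOutputTokens, and never let input + output overflow the context window.
function resolveOutputTokenBudget(promptTokens, maxOutputTokens, contextWindowTokens) {
  const remaining = contextWindowTokens - promptTokens;
  if (remaining <= 0) {
    throw new Error(`Prompt (${promptTokens} tokens) exceeds context window (${contextWindowTokens})`);
  }
  return Math.min(maxOutputTokens, remaining);
}
```

Calling this just before each request implements the "dynamic adjustment to maximize output space based on input length" benefit described for the new configuration fields.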
# Test Strategy:
# Subtasks:
|
||||
## 1. Add gateway integration foundation [pending]
|
||||
### Dependencies: None
|
||||
### Description: Create base infrastructure for connecting to premium gateway services
|
||||
### Details:
|
||||
Implement configuration management for API keys, endpoint URLs, and feature flags. Create HTTP client wrapper with authentication, error handling, and retry logic.
|
||||
|
||||
## 2. Implement test-gen command [pending]
|
||||
### Dependencies: None
|
||||
### Description: Add test generation command that uses gateway API
|
||||
### Details:
|
||||
Create command that gathers local context (code, tasks, patterns), sends to gateway API for intelligent test generation, then writes generated tests to local filesystem with proper structure.
|
||||
|
||||
## 3. Create TDD workflow command [pending]
|
||||
### Dependencies: None
|
||||
### Description: Implement TDD orchestration for red-green-refactor cycle
|
||||
### Details:
|
||||
Build TDD state machine that manages test phases, integrates with test watchers, and provides real-time feedback during development cycles.
|
||||
|
||||
## 4. Add git-flow command [pending]
|
||||
### Dependencies: None
|
||||
### Description: Implement automated git workflow with smart commits
|
||||
### Details:
|
||||
Create git workflow automation including branch management, smart commit message generation via gateway API, and PR creation with comprehensive descriptions.
|
||||
|
||||
## 5. Enhance task structure for testing metadata [pending]
|
||||
### Dependencies: None
|
||||
### Description: Extend task schema to support test and git information
|
||||
### Details:
|
||||
Add fields for test files, coverage data, git branches, commit history, and TDD phase tracking to task structure.
|
||||
|
||||
## 6. Add MCP tools for test-gen and TDD commands [pending]
### Dependencies: None
### Description: Create MCP tool interfaces for IDE integration
### Details:
Implement MCP tools that expose test generation and TDD workflow commands to IDEs like Cursor, enabling seamless integration with development environment.

## 7. Create test pattern detection for existing codebase [pending]
### Dependencies: None
### Description: Analyze existing tests to learn project patterns
### Details:
Implement pattern detection that analyzes existing test files to understand project conventions, naming patterns, and testing approaches for consistency.

## 8. Add coverage analysis integration [pending]
### Dependencies: None
### Description: Integrate with coverage tools and provide insights
### Details:
Connect with Jest, NYC, and other coverage tools to analyze test coverage, identify gaps, and suggest improvements through gateway API.

## 9. Implement test watcher with phase transitions [pending]
### Dependencies: None
### Description: Create intelligent test watcher for TDD automation
### Details:
Build test watcher that monitors test results and automatically transitions between TDD phases (red/green/refactor) based on test outcomes.

## 10. Add fallback mode when gateway is unavailable [pending]
### Dependencies: None
### Description: Ensure Task Master works without gateway access
### Details:
Implement graceful degradation when gateway API is unavailable, falling back to local AI models or basic functionality while maintaining core Task Master features.
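The graceful-degradation idea can be sketched as a try/fallback wrapper; `generateTests` and the provider objects are hypothetical names, not the actual interfaces:

```javascript
// Hypothetical fallback sketch; provider interfaces are assumptions.
async function generateTestsWithFallback(context, { gateway, localProvider, log = console } = {}) {
  try {
    // Preferred path: the premium gateway.
    return await gateway.generateTests(context);
  } catch (err) {
    // Gateway unreachable or erroring: degrade to a local AI provider
    // so core Task Master features keep working.
    log.warn(`Gateway unavailable (${err.message}); falling back to local provider`);
    return localProvider.generateTests(context);
  }
}
```

A real implementation would likely distinguish transient network errors (worth retrying) from auth errors (worth surfacing) before falling back.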
1. Test validation functions with valid and invalid configurations
2. Verify fallback behavior works correctly when configuration is missing
3. Test error messages are clear and actionable
4. Test logging functions provide useful information
5. Verify suggestion logic provides helpful recommendations
@@ -1,55 +1,57 @@
# Task ID: 88
# Title: Implement Google Vertex AI Provider Integration
# Status: pending
# Dependencies: 19, 89
# Title: Enhance Add-Task Functionality to Consider All Task Dependencies
# Status: done
# Dependencies: None
# Priority: medium
# Description: Develop a dedicated Google Vertex AI provider in the codebase, enabling users to leverage Vertex AI models with enterprise-grade configuration and authentication.
# Description: Improve the add-task feature to accurately account for all dependencies among tasks, ensuring proper task ordering and execution.
# Details:
1. Create a new provider class in `src/ai-providers/google-vertex.js` that extends the existing BaseAIProvider, following the established structure used by other providers (e.g., google.js, openai.js).
2. Integrate the Vercel AI SDK's `@ai-sdk/google-vertex` package. Use the default `vertex` provider for standard usage, and allow for custom configuration via `createVertex` for advanced scenarios (e.g., specifying project ID, location, and credentials).
3. Implement all required interface methods (such as `getClient`, `generateText`, etc.) to ensure compatibility with the provider system. Reference the implementation patterns from other providers for consistency.
4. Handle Vertex AI-specific configuration, including project ID, location, and Google Cloud authentication. Support both environment-based authentication and explicit service account credentials via `googleAuthOptions`.
5. Implement robust error handling for Vertex-specific issues, including authentication failures and API errors, leveraging the system-wide error handling patterns.
6. Update `src/ai-providers/index.js` to export the new provider, and add the 'vertex' entry to the PROVIDERS object in `scripts/modules/ai-services-unified.js`.
7. Update documentation to provide clear setup instructions for Google Vertex AI, including required environment variables, service account setup, and configuration examples.
8. Ensure the implementation is modular and maintainable, supporting future expansion for additional Vertex AI features or models.
1. Review current implementation of add-task functionality.
2. Identify existing mechanisms for handling task dependencies.
3. Modify add-task to recursively analyze and incorporate all dependencies.
4. Ensure that dependencies are resolved in the correct order during task execution.
5. Update documentation to reflect changes in dependency handling.
6. Consider edge cases such as circular dependencies and handle them appropriately.
7. Optimize performance to ensure efficient dependency resolution, especially for projects with a large number of tasks.
8. Integrate with existing validation and error handling mechanisms (from Task 87) to provide clear feedback if dependencies cannot be resolved.
9. Test thoroughly with various dependency scenarios to ensure robustness.
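The recursive dependency analysis with circular-dependency handling (steps 3 and 6 above) can be sketched as a depth-first walk; the task shape `{ id, dependencies }` matches the task files, but the function itself is illustrative:

```javascript
// Illustrative sketch: depth-first dependency collection with cycle detection.
// Assumes tasks of shape { id, dependencies: [ids] } keyed in a Map.
function collectDependencies(taskId, tasksById, seen = new Set(), stack = new Set()) {
  if (stack.has(taskId)) {
    // The current DFS path revisits a node: a circular dependency.
    throw new Error(`Circular dependency detected involving task ${taskId}`);
  }
  if (seen.has(taskId)) return []; // already resolved on another path
  const task = tasksById.get(taskId);
  if (!task) throw new Error(`Unknown task id: ${taskId}`);
  stack.add(taskId);
  const resolved = [];
  for (const depId of task.dependencies ?? []) {
    resolved.push(...collectDependencies(depId, tasksById, seen, stack));
  }
  stack.delete(taskId);
  seen.add(taskId);
  resolved.push(taskId); // post-order: dependencies come before the task itself
  return resolved;
}
```

The post-order emission means the returned list is already a valid execution order for the task's dependency closure.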
# Test Strategy:
- Write unit tests for the new provider class, covering all interface methods and configuration scenarios (default, custom, error cases).
- Verify that the provider can successfully authenticate using both environment-based and explicit service account credentials.
- Test integration with the provider system by selecting 'vertex' as the provider and generating text using supported Vertex AI models (e.g., Gemini).
- Simulate authentication and API errors to confirm robust error handling and user feedback.
- Confirm that the provider is correctly exported and available in the PROVIDERS object.
- Review and validate the updated documentation for accuracy and completeness.
1. Create test cases with simple linear dependencies to verify correct ordering.
2. Develop test cases with complex, nested dependencies to ensure recursive resolution works correctly.
3. Include tests for edge cases such as circular dependencies, verifying appropriate error messages are displayed.
4. Measure performance with large sets of tasks and dependencies to ensure efficiency.
5. Conduct integration testing with other components that rely on task dependencies.
6. Perform manual code reviews to validate implementation against requirements.
7. Execute automated tests to verify no regressions in existing functionality.

# Subtasks:
## 1. Create Google Vertex AI Provider Class [pending]
## 1. Review Current Add-Task Implementation and Identify Dependency Mechanisms [done]
### Dependencies: None
### Description: Develop a new provider class in `src/ai-providers/google-vertex.js` that extends the BaseAIProvider, following the structure of existing providers.
### Description: Examine the existing add-task functionality to understand how task dependencies are currently handled.
### Details:
Ensure the new class is consistent with the architecture of other providers such as google.js and openai.js, and is ready to integrate with the AI SDK.
Conduct a code review of the add-task feature. Document any existing mechanisms for handling task dependencies.

## 2. Integrate Vercel AI SDK Google Vertex Package [pending]
## 2. Modify Add-Task to Recursively Analyze Dependencies [done]
### Dependencies: 88.1
### Description: Integrate the `@ai-sdk/google-vertex` package, supporting both the default provider and custom configuration via `createVertex`.
### Description: Update the add-task functionality to recursively analyze and incorporate all task dependencies.
### Details:
Allow for standard usage with the default `vertex` provider and advanced scenarios using `createVertex` for custom project ID, location, and credentials as per SDK documentation.
Implement a recursive algorithm that identifies and incorporates all dependencies for a given task. Ensure it handles nested dependencies correctly.

## 3. Implement Provider Interface Methods [pending]
## 3. Ensure Correct Order of Dependency Resolution [done]
### Dependencies: 88.2
### Description: Implement all required interface methods (e.g., `getClient`, `generateText`) to ensure compatibility with the provider system.
### Description: Modify the add-task functionality to ensure that dependencies are resolved in the correct order during task execution.
### Details:
Reference implementation patterns from other providers to maintain consistency and ensure all required methods are present and functional.
Implement logic to sort and execute tasks based on their dependency order. Handle cases where multiple tasks depend on each other.
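Sorting a full task list by dependency order, as described above, is a topological sort; a Kahn's-algorithm sketch (assuming all referenced ids exist in the list):

```javascript
// Illustrative Kahn's algorithm; assumes every dependency id appears in tasks.
function executionOrder(tasks) {
  const indegree = new Map(tasks.map((t) => [t.id, 0]));
  const dependents = new Map(tasks.map((t) => [t.id, []]));
  for (const t of tasks) {
    for (const dep of t.dependencies ?? []) {
      indegree.set(t.id, indegree.get(t.id) + 1); // t waits on dep
      dependents.get(dep).push(t.id);
    }
  }
  // Start with tasks that depend on nothing.
  const queue = tasks.filter((t) => indegree.get(t.id) === 0).map((t) => t.id);
  const order = [];
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  if (order.length !== tasks.length) {
    // Leftover tasks all sit on a cycle.
    throw new Error('Circular dependency: no valid execution order exists');
  }
  return order;
}
```

Unlike a per-task recursive walk, this orders the whole list at once, which suits the "multiple tasks depend on each other" case.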
## 4. Handle Vertex AI Configuration and Authentication [pending]
## 4. Integrate with Existing Validation and Error Handling [done]
### Dependencies: 88.3
### Description: Implement support for Vertex AI-specific configuration, including project ID, location, and authentication via environment variables or explicit service account credentials.
### Description: Update the add-task functionality to integrate with existing validation and error handling mechanisms (from Task 87).
### Details:
Support both environment-based authentication and explicit credentials using `googleAuthOptions`, following Google Cloud and Vertex AI setup best practices.
Modify the code to provide clear feedback if dependencies cannot be resolved. Ensure that circular dependencies are detected and handled appropriately.

## 5. Update Exports, Documentation, and Error Handling [pending]
## 5. Optimize Performance for Large Projects [done]
### Dependencies: 88.4
### Description: Export the new provider, update the PROVIDERS object, and document setup instructions, including robust error handling for Vertex-specific issues.
### Description: Optimize the add-task functionality to ensure efficient dependency resolution, especially for projects with a large number of tasks.
### Details:
Update `src/ai-providers/index.js` and `scripts/modules/ai-services-unified.js`, and provide clear documentation for setup, configuration, and error handling patterns.
Profile and optimize the recursive dependency analysis algorithm. Implement caching or other performance improvements as needed.
@@ -1,103 +1,23 @@
# Task ID: 89
# Title: Implement Azure OpenAI Provider Integration
# Status: done
# Dependencies: 19, 26
# Title: Introduce Prioritize Command with Enhanced Priority Levels
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a comprehensive Azure OpenAI provider implementation that integrates with the existing AI provider system, enabling users to leverage Azure-hosted OpenAI models through proper authentication and configuration.
# Description: Implement a prioritize command with --up/--down/--priority/--id flags and shorthand equivalents (-u/-d/-p/-i). Add 'lowest' and 'highest' priority levels, updating CLI output accordingly.
# Details:
Implement the Azure OpenAI provider following the established provider pattern:
The new prioritize command should allow users to adjust task priorities using the specified flags. The --up and --down flags will modify the priority relative to the current level, while --priority sets an absolute priority. The --id flag specifies which task to prioritize. Shorthand equivalents (-u/-d/-p/-i) should be supported for user convenience.
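The relative/absolute priority logic for the prioritize command can be sketched as a pure function over the five-level scale; the function name is illustrative, the level list matches the task description:

```javascript
// Illustrative sketch of --up/--down/--priority resolution.
const PRIORITY_LEVELS = ['lowest', 'low', 'medium', 'high', 'highest'];

function adjustPriority(current, { up = false, down = false, priority } = {}) {
  if (priority !== undefined) {
    // --priority / -p sets an absolute level.
    if (!PRIORITY_LEVELS.includes(priority)) {
      throw new Error(`Invalid priority: ${priority}`);
    }
    return priority;
  }
  const idx = PRIORITY_LEVELS.indexOf(current);
  if (idx === -1) throw new Error(`Unknown current priority: ${current}`);
  // --up / --down move one step, clamped at the scale's ends.
  if (up) return PRIORITY_LEVELS[Math.min(idx + 1, PRIORITY_LEVELS.length - 1)];
  if (down) return PRIORITY_LEVELS[Math.max(idx - 1, 0)];
  return current;
}
```

Clamping at `lowest`/`highest` keeps repeated `-u`/`-d` invocations from erroring at the ends of the scale.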
1. **Create Azure Provider Class** (`src/ai-providers/azure.js`):
- Extend BaseAIProvider class following the same pattern as openai.js and google.js
- Import and use `createAzureOpenAI` from `@ai-sdk/azure` package
- Implement required interface methods: `getClient()`, `validateConfig()`, and any other abstract methods
- Handle Azure-specific configuration: endpoint URL, API key, and deployment name
- Add proper error handling for missing or invalid Azure configuration
The priority levels should now include 'lowest', 'low', 'medium', 'high', and 'highest'. The CLI output should be updated to reflect these new priority levels accurately.

2. **Configuration Management**:
- Support environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT
- Validate that both endpoint and API key are provided
- Provide clear error messages for configuration issues
- Follow the same configuration pattern as other providers
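The environment-based validation above can be sketched as follows; the variable names come from the task description, while the function name and error wording are assumptions:

```javascript
// Illustrative config validation; error wording is an assumption.
function validateAzureConfig(env = process.env) {
  // Both endpoint and API key are required; collect whichever are missing.
  const missing = ['AZURE_OPENAI_ENDPOINT', 'AZURE_OPENAI_API_KEY'].filter(
    (name) => !env[name]
  );
  if (missing.length) {
    throw new Error(
      `Azure OpenAI configuration incomplete; set: ${missing.join(', ')}`
    );
  }
  return {
    endpoint: env.AZURE_OPENAI_ENDPOINT,
    apiKey: env.AZURE_OPENAI_API_KEY,
    // Deployment may also be supplied per request, so it is optional here.
    deployment: env.AZURE_OPENAI_DEPLOYMENT ?? null
  };
}
```

Reporting all missing variables in one message saves the user from fixing them one at a time.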
3. **Integration Updates**:
- Update `src/ai-providers/index.js` to export the new AzureProvider
- Add 'azure' entry to the PROVIDERS object in `scripts/modules/ai-services-unified.js`
- Ensure the provider is properly registered and accessible through the unified AI services

4. **Error Handling**:
- Implement Azure-specific error handling for authentication failures
- Handle endpoint connectivity issues with helpful error messages
- Validate deployment name and provide guidance for common configuration mistakes
- Follow the established error handling patterns from Task 19

5. **Documentation Updates**:
- Update any provider documentation to include Azure OpenAI setup instructions
- Add configuration examples for Azure OpenAI environment variables
- Include troubleshooting guidance for common Azure-specific issues

The implementation should maintain consistency with existing provider implementations while handling Azure's unique authentication and endpoint requirements.
Considerations:
- Ensure backward compatibility with existing commands and configurations.
- Update the help documentation to include the new command and its usage.
- Implement proper error handling for invalid priority levels or missing flags.
# Test Strategy:
Verify the Azure OpenAI provider implementation through comprehensive testing:

1. **Unit Testing**:
- Test provider class instantiation and configuration validation
- Verify getClient() method returns properly configured Azure OpenAI client
- Test error handling for missing/invalid configuration parameters
- Validate that the provider correctly extends BaseAIProvider

2. **Integration Testing**:
- Test provider registration in the unified AI services system
- Verify the provider appears in the PROVIDERS object and is accessible
- Test end-to-end functionality with valid Azure OpenAI credentials
- Validate that the provider works with existing AI operation workflows

3. **Configuration Testing**:
- Test with various environment variable combinations
- Verify proper error messages for missing endpoint or API key
- Test with invalid endpoint URLs and ensure graceful error handling
- Validate deployment name handling and error reporting

4. **Manual Verification**:
- Set up test Azure OpenAI credentials and verify successful connection
- Test actual AI operations (like task expansion) using the Azure provider
- Verify that the provider selection works correctly in the CLI
- Confirm that error messages are helpful and actionable for users

5. **Documentation Verification**:
- Ensure all configuration examples work as documented
- Verify that setup instructions are complete and accurate
- Test troubleshooting guidance with common error scenarios
# Subtasks:
## 1. Create Azure Provider Class [done]
### Dependencies: None
### Description: Implement the AzureProvider class that extends BaseAIProvider to handle Azure OpenAI integration
### Details:
Create the AzureProvider class in src/ai-providers/azure.js that extends BaseAIProvider. Import createAzureOpenAI from @ai-sdk/azure package. Implement required interface methods including getClient() and validateConfig(). Handle Azure-specific configuration parameters: endpoint URL, API key, and deployment name. Follow the established pattern in openai.js and google.js. Ensure proper error handling for missing or invalid configuration.

## 2. Implement Configuration Management [done]
### Dependencies: 89.1
### Description: Add support for Azure OpenAI environment variables and configuration validation
### Details:
Implement configuration management for Azure OpenAI provider that supports environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_DEPLOYMENT. Add validation logic to ensure both endpoint and API key are provided. Create clear error messages for configuration issues. Follow the same configuration pattern as implemented in other providers. Ensure the validateConfig() method properly checks all required Azure configuration parameters.

## 3. Update Provider Integration [done]
### Dependencies: 89.1, 89.2
### Description: Integrate the Azure provider into the existing AI provider system
### Details:
Update src/ai-providers/index.js to export the new AzureProvider class. Add 'azure' entry to the PROVIDERS object in scripts/modules/ai-services-unified.js. Ensure the provider is properly registered and accessible through the unified AI services. Test that the provider can be instantiated and used through the provider selection mechanism. Follow the same integration pattern used for existing providers.

## 4. Implement Azure-Specific Error Handling [done]
### Dependencies: 89.1, 89.2
### Description: Add specialized error handling for Azure OpenAI-specific issues
### Details:
Implement Azure-specific error handling for authentication failures, endpoint connectivity issues, and deployment name validation. Provide helpful error messages that guide users to resolve common configuration mistakes. Follow the established error handling patterns from Task 19. Create custom error classes if needed for Azure-specific errors. Ensure errors are properly propagated and formatted for user display.

## 5. Update Documentation [done]
### Dependencies: 89.1, 89.2, 89.3, 89.4
### Description: Create comprehensive documentation for the Azure OpenAI provider integration
### Details:
Update provider documentation to include Azure OpenAI setup instructions. Add configuration examples for Azure OpenAI environment variables. Include troubleshooting guidance for common Azure-specific issues. Document the required Azure resource creation process with references to Microsoft's documentation. Provide examples of valid configuration settings and explain each required parameter. Include information about Azure OpenAI model deployment requirements.

To verify task completion, perform the following tests:
1. Test each flag (--up, --down, --priority, --id) individually and in combination to ensure they function as expected.
2. Verify that shorthand equivalents (-u, -d, -p, -i) work correctly.
3. Check that the new priority levels ('lowest' and 'highest') are recognized and displayed properly in CLI output.
4. Test error handling for invalid inputs (e.g., non-existent task IDs, invalid priority levels).
5. Ensure that the help command displays accurate information about the new prioritize command.
.taskmaster/tasks/task_091.txt (new file, 49 lines)
@@ -0,0 +1,49 @@
# Task ID: 91
# Title: Implement Move Command for Tasks and Subtasks
# Status: done
# Dependencies: 1, 3
# Priority: medium
# Description: Introduce a 'move' command to enable moving tasks or subtasks to a different id, facilitating conflict resolution by allowing teams to assign new ids as needed.
# Details:
The move command will consist of three core components: 1) Core Logic Function in scripts/modules/task-manager/move-task.js, 2) Direct Function Wrapper in mcp-server/src/core/direct-functions/move-task.js, and 3) MCP Tool in mcp-server/src/tools/move-task.js. The command will accept source and destination IDs, handling various scenarios including moving tasks to become subtasks, subtasks to become tasks, and subtasks between different parents. The implementation will handle edge cases such as invalid ids, non-existent parents, circular dependencies, and will properly update all dependencies.

# Test Strategy:
Testing will follow a three-tier approach: 1) Unit tests for core functionality including moving tasks to subtasks, subtasks to tasks, subtasks between parents, dependency handling, and validation error cases; 2) Integration tests for the direct function with mock MCP environment and task file regeneration; 3) End-to-end tests for the full MCP tool call path. This will verify all scenarios including moving a task to a new id, moving a subtask under a different parent while preserving its hierarchy, and handling errors for invalid operations.

# Subtasks:
## 1. Design and implement core move logic [done]
### Dependencies: None
### Description: Create the fundamental logic for moving tasks and subtasks within the task management system hierarchy
### Details:
Implement the core logic function in scripts/modules/task-manager/move-task.js with the signature that accepts tasksPath, sourceId, destinationId, and generateFiles parameters. Develop functions to handle all movement operations including task-to-subtask, subtask-to-task, and subtask-to-subtask conversions. Implement validation for source and destination IDs, and ensure proper updating of parent-child relationships and dependencies.

## 2. Implement edge case handling [done]
### Dependencies: 91.1
### Description: Develop robust error handling for all potential edge cases in the move operation
### Details:
Create validation functions to detect invalid task IDs, non-existent parent tasks, and circular dependencies. Handle special cases such as moving a task to become the first/last subtask, reordering within the same parent, preventing moving a task to itself, and preventing moving a parent to its own subtask. Implement proper error messages and status codes for each edge case, and ensure system stability if a move operation fails.
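The "task to itself" and "parent to its own subtask" checks above amount to an ancestor walk from the destination; a sketch, assuming a `parentId` field (the function and field names are illustrative):

```javascript
// Illustrative move validation; assumes tasks keyed in a Map with a parentId field.
function validateMove(sourceId, destParentId, tasksById) {
  if (sourceId === destParentId) {
    throw new Error('Cannot move a task to itself');
  }
  // Walk up from the destination; if we reach the source, the source is an
  // ancestor of the destination and the move would create a hierarchy cycle.
  let cursor = destParentId;
  while (cursor != null) {
    if (cursor === sourceId) {
      throw new Error(`Cannot move task ${sourceId} under its own subtask`);
    }
    cursor = tasksById.get(cursor)?.parentId ?? null;
  }
  if (destParentId != null && !tasksById.has(destParentId)) {
    throw new Error(`Destination parent ${destParentId} does not exist`);
  }
}
```

Because the walk terminates at a task with no parent, it is linear in the depth of the hierarchy rather than the number of tasks.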
## 3. Update CLI interface for move commands [done]
### Dependencies: 91.1
### Description: Extend the command-line interface to support the new move functionality with appropriate flags and options
### Details:
Create the Direct Function Wrapper in mcp-server/src/core/direct-functions/move-task.js to adapt the core logic for MCP, handling path resolution and parameter validation. Implement silent mode to prevent console output interfering with JSON responses. Create the MCP Tool in mcp-server/src/tools/move-task.js that exposes the functionality to Cursor, handles project root resolution, and includes proper Zod parameter definitions. Update MCP tool definition in .cursor/mcp.json and register the tool in mcp-server/src/tools/index.js.

## 4. Ensure data integrity during moves [done]
### Dependencies: 91.1, 91.2
### Description: Implement safeguards to maintain data consistency and update all relationships during move operations
### Details:
Implement dependency handling logic to update dependencies when converting between task/subtask, add appropriate parent dependencies when needed, and validate no circular dependencies are created. Create transaction-like operations to ensure atomic moves that either complete fully or roll back. Implement functions to update all affected task relationships after a move, and add verification steps to confirm data integrity post-move.

## 5. Create comprehensive test suite [done]
### Dependencies: 91.1, 91.2, 91.3, 91.4
### Description: Develop and execute tests covering all move scenarios and edge cases
### Details:
Create unit tests for core functionality including moving tasks to subtasks, subtasks to tasks, subtasks between parents, dependency handling, and validation error cases. Implement integration tests for the direct function with mock MCP environment and task file regeneration. Develop end-to-end tests for the full MCP tool call path. Ensure tests cover all identified edge cases and potential failure points, and verify data integrity after moves.

## 6. Export and integrate the move function [done]
### Dependencies: 91.1
### Description: Ensure the move function is properly exported and integrated with existing code
### Details:
Export the move function in scripts/modules/task-manager.js. Update task-master-core.js to include the direct function. Reuse validation logic from add-subtask.js and remove-subtask.js where appropriate. Follow silent mode implementation pattern from other direct functions and match parameter naming conventions in MCP tools.
.taskmaster/tasks/task_097.txt (new file, 288 lines)
@@ -0,0 +1,288 @@
# Task ID: 97
# Title: Implement Git Workflow Integration
# Status: pending
# Dependencies: None
# Priority: high
# Description: Add `task-master git` command suite to automate git workflows based on established patterns from Task 4, eliminating manual overhead and ensuring 100% consistency
# Details:
Create a comprehensive git workflow automation system that integrates deeply with TaskMaster's task management. The feature will:

1. **Automated Branch Management**:
- Create branches following `task-{id}` naming convention
- Validate branch names and prevent conflicts
- Handle branch switching with uncommitted changes
- Clean up local and remote branches post-merge

2. **Intelligent Commit Generation**:
- Auto-detect commit type (feat/fix/test/refactor/docs) from file changes
- Generate standardized commit messages with task context
- Support subtask-specific commits with proper references
- Include coverage delta in test commits

3. **PR Automation**:
- Generate comprehensive PR descriptions from task/subtask data
- Include implementation details, test coverage, breaking changes
- Format using GitHub markdown with task hierarchy
- Auto-populate PR template with relevant metadata

4. **Workflow State Management**:
- Track current task branch and status
- Validate task readiness before PR creation
- Ensure all subtasks completed before finishing
- Handle merge conflicts gracefully

5. **Integration Points**:
- Seamless integration with existing task commands
- MCP server support for IDE integrations
- GitHub CLI (`gh`) authentication support
- Coverage report parsing and display
**Technical Architecture**:
- Modular command structure in `scripts/modules/task-manager/git-*`
- Git operations wrapper using simple-git or native child_process
- Template engine for commit/PR generation in `scripts/modules/`
- State persistence in `.taskmaster/git-state.json`
- Error recovery and rollback mechanisms

**Key Files to Create**:
- `scripts/modules/task-manager/git-start.js` - Branch creation and task status update
- `scripts/modules/task-manager/git-commit.js` - Intelligent commit message generation
- `scripts/modules/task-manager/git-pr.js` - PR creation with auto-generated description
- `scripts/modules/task-manager/git-finish.js` - Post-merge cleanup and status update
- `scripts/modules/task-manager/git-status.js` - Current git workflow state display
- `scripts/modules/git-operations.js` - Core git functionality wrapper
- `scripts/modules/commit-analyzer.js` - File change analysis for commit types
- `scripts/modules/pr-description-generator.js` - PR description template generator

**MCP Integration Files**:
- `mcp-server/src/core/direct-functions/git-start.js`
- `mcp-server/src/core/direct-functions/git-commit.js`
- `mcp-server/src/core/direct-functions/git-pr.js`
- `mcp-server/src/core/direct-functions/git-finish.js`
- `mcp-server/src/core/direct-functions/git-status.js`
- `mcp-server/src/tools/git-start.js`
- `mcp-server/src/tools/git-commit.js`
- `mcp-server/src/tools/git-pr.js`
- `mcp-server/src/tools/git-finish.js`
- `mcp-server/src/tools/git-status.js`

**Configuration**:
- Add git workflow settings to `.taskmasterconfig`
- Support for custom commit prefixes and PR templates
- Branch naming pattern customization
- Remote repository detection and validation
# Test Strategy:
Implement comprehensive test suite following Task 4's TDD approach:

1. **Unit Tests** (target: 95%+ coverage):
- Git operations wrapper with mocked git commands
- Commit type detection with various file change scenarios
- PR description generation with different task structures
- Branch name validation and generation
- State management and persistence

2. **Integration Tests**:
- Full workflow simulation in test repository
- Error handling for git conflicts and failures
- Multi-task workflow scenarios
- Coverage integration with real test runs
- GitHub API interaction (mocked)

3. **E2E Tests**:
- Complete task lifecycle from start to finish
- Multiple developer workflow simulation
- Merge conflict resolution scenarios
- Branch protection and validation

4. **Test Implementation Details**:
- Use Jest with git repository fixtures
- Mock simple-git for isolated unit tests
- Create test tasks.json scenarios
- Validate all error messages and edge cases
- Test rollback and recovery mechanisms

5. **Coverage Requirements**:
- Minimum 90% overall coverage
- 100% coverage for critical paths (branch creation, PR generation)
- All error scenarios must be tested
- Performance tests for large task hierarchies
# Subtasks:
## 1. Design and implement core git operations wrapper [pending]
### Dependencies: None
### Description: Create a robust git operations layer that handles all git commands with proper error handling and state management
### Details:
Create `scripts/modules/git-operations.js` with methods for:
- Branch creation/deletion (local and remote)
- Commit operations with message formatting
- Status checking and conflict detection
- Remote operations (fetch, push, pull)
- Repository validation and setup

Use simple-git library or child_process for git commands. Implement comprehensive error handling with specific error types for different git failures. Include retry logic for network operations.
## 2. Implement git start command [pending]
### Dependencies: None
### Description: Create the entry point for task-based git workflows with automated branch creation and task status updates
### Details:
Implement `scripts/modules/task-manager/git-start.js` with functionality to:
- Validate task exists and is ready to start
- Check for clean working directory
- Create branch with `task-{id}` naming
- Update task status to 'in-progress'
- Store workflow state in `.taskmaster/git-state.json`
- Handle existing branch scenarios
- Support --force flag for branch recreation

Integrate with existing task-master commands and ensure MCP compatibility.

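The `task-{id}` branch naming rule above can be captured in a small helper. A minimal sketch; the validation pattern and the flattening of dotted subtask ids are illustrative assumptions, not the project's actual logic:

```javascript
// Hypothetical helper for the `task-{id}` branch naming convention.
// The id validation and the '.'-to-'-' flattening are assumptions.
function taskBranchName(taskId) {
  const id = String(taskId).trim();
  if (!/^\d+(\.\d+)*$/.test(id)) {
    throw new Error(`Invalid task id: ${taskId}`);
  }
  // Flatten subtask ids like "94.2" into ref-safe names.
  return `task-${id.replace(/\./g, '-')}`;
}
```

Keeping this in one place makes the naming customization mentioned in subtask 10 a single-function change.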
## 3. Build intelligent commit analyzer and generator [pending]
### Dependencies: None
### Description: Create a system that analyzes file changes to auto-detect commit types and generate standardized commit messages
### Details:
Develop `scripts/modules/commit-analyzer.js` with:
- File change detection and categorization
- Commit type inference rules:
  - feat: new files in scripts/, new functions
  - fix: changes to existing logic
  - test: changes in tests/ directory
  - docs: markdown and comment changes
  - refactor: file moves, renames, cleanup
- Smart message generation with task context
- Support for custom commit templates
- Subtask reference inclusion

Create `scripts/modules/task-manager/git-commit.js` that uses the analyzer to generate commits with proper formatting.

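The inference rules above can be expressed as an ordered set of predicates over the changed files. A sketch, assuming a simple `{ path, status }` shape for change records and a particular rule precedence (both are assumptions, not the analyzer's actual design):

```javascript
// Sketch of the commit-type inference rules listed above.
// Rule ordering and the change-record shape are assumptions.
function inferCommitType(changedFiles) {
  // changedFiles: [{ path: 'scripts/foo.js', status: 'added'|'modified'|'renamed' }]
  const all = (pred) => changedFiles.every(pred);
  const any = (pred) => changedFiles.some(pred);

  if (all((f) => f.path.startsWith('tests/'))) return 'test';
  if (all((f) => f.path.endsWith('.md'))) return 'docs';
  if (any((f) => f.status === 'renamed')) return 'refactor';
  if (any((f) => f.status === 'added' && f.path.startsWith('scripts/'))) return 'feat';
  return 'fix'; // default: changes to existing logic
}
```

Ordering matters here: a renamed test file should still read as `test`, so the narrower `all(...)` rules run before the broader `any(...)` ones.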
## 4. Create PR description generator and command [pending]
### Dependencies: None
### Description: Build a comprehensive PR description generator that creates detailed, formatted descriptions from task data
### Details:
Implement `scripts/modules/pr-description-generator.js` to generate:
- Task overview with full context
- Subtask completion checklist
- Implementation details summary
- Test coverage metrics integration
- Breaking changes section
- Related tasks and dependencies

Create `scripts/modules/task-manager/git-pr.js` to:
- Validate all subtasks are complete
- Generate PR title and description
- Use GitHub CLI for PR creation
- Handle draft PR scenarios
- Support custom PR templates
- Include labels based on task metadata

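The subtask completion checklist piece of the generator is mechanical enough to sketch directly. The task shape mirrors tasks.json entries; exact field names are assumptions for illustration:

```javascript
// Sketch of the "subtask completion checklist" section of a PR description.
// The { id, title, status } subtask fields are assumed, not confirmed.
function subtaskChecklist(task) {
  const items = (task.subtasks || []).map(
    (st) => `- [${st.status === 'done' ? 'x' : ' '}] ${st.id}. ${st.title}`
  );
  return ['## Subtasks', ...items].join('\n');
}
```

A checklist rendered this way shows up as interactive checkboxes in the GitHub PR body.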
## 5. Implement git finish command with cleanup [pending]
### Dependencies: None
### Description: Create the workflow completion command that handles post-merge cleanup and task status updates
### Details:
Build `scripts/modules/task-manager/git-finish.js` with:
- PR merge verification via GitHub API
- Local branch cleanup
- Remote branch deletion (with confirmation)
- Task status update to 'done'
- Workflow state cleanup
- Switch back to main branch
- Pull latest changes

Handle scenarios where the PR isn't merged yet or the merge failed. Include a --skip-cleanup flag for manual branch management.

## 6. Add git status command for workflow visibility [pending]
### Dependencies: None
### Description: Create a status command that shows current git workflow state with task context
### Details:
Implement `scripts/modules/task-manager/git-status.js` to display:
- Current task and branch information
- Subtask completion status
- Uncommitted changes summary
- PR status if exists
- Coverage metrics comparison
- Suggested next actions

Integrate with existing task status displays and provide actionable guidance based on workflow state.

## 7. Integrate with Commander.js and add command routing [pending]
### Dependencies: None
### Description: Add the git command suite to TaskMaster's CLI with proper help text and option handling
### Details:
Update `scripts/modules/commands.js` to:
- Add 'git' command with subcommands
- Implement option parsing for all git commands
- Add comprehensive help text
- Ensure proper error handling and display
- Validate command prerequisites

Create proper command structure:
- `task-master git start [taskId] [options]`
- `task-master git commit [options]`
- `task-master git pr [options]`
- `task-master git finish [options]`
- `task-master git status [options]`

## 8. Add MCP server integration for git commands [pending]
### Dependencies: None
### Description: Implement MCP tools and direct functions for git workflow commands to enable IDE integration
### Details:
Create MCP integration in:
- `mcp-server/src/core/direct-functions/git-start.js`
- `mcp-server/src/core/direct-functions/git-commit.js`
- `mcp-server/src/core/direct-functions/git-pr.js`
- `mcp-server/src/core/direct-functions/git-finish.js`
- `mcp-server/src/core/direct-functions/git-status.js`
- `mcp-server/src/tools/git-start.js`
- `mcp-server/src/tools/git-commit.js`
- `mcp-server/src/tools/git-pr.js`
- `mcp-server/src/tools/git-finish.js`
- `mcp-server/src/tools/git-status.js`

Implement tools for:
- git_start_task
- git_commit_task
- git_create_pr
- git_finish_task
- git_workflow_status

Ensure proper error handling, logging, and response formatting. Include telemetry data for git operations.

## 9. Create comprehensive test suite [pending]
### Dependencies: None
### Description: Implement full test coverage following Task 4's high standards with unit, integration, and E2E tests
### Details:
Create test files:
- `tests/unit/git/` - Unit tests for all git components
- `tests/integration/git-workflow.test.js` - Full workflow tests
- `tests/e2e/git-automation.test.js` - End-to-end scenarios

Implement:
- Git repository fixtures and mocks
- Coverage tracking and reporting
- Performance benchmarks
- Error scenario coverage
- Multi-developer workflow simulations

Target 95%+ coverage with focus on critical paths.

## 10. Add configuration and documentation [pending]
### Dependencies: None
### Description: Create configuration options and comprehensive documentation for the git workflow feature
### Details:
Configuration tasks:
- Add git workflow settings to `.taskmasterconfig`
- Support environment variables for GitHub tokens
- Create default PR and commit templates
- Add branch naming customization

Documentation tasks:
- Update README with git workflow section
- Create `docs/git-workflow.md` guide
- Add examples for common scenarios
- Document configuration options
- Create troubleshooting guide

Update rule files:
- Create `.cursor/rules/git_workflow.mdc`
- Update existing workflow rules

@@ -33,7 +33,7 @@
Integrate the new command into the CLI's command registry. Ensure it is discoverable via the CLI's help system and follows established naming and grouping conventions.

## 2. Parameter and Flag Handling [done]
### Dependencies: 94.1
### Dependencies: 98.1
### Description: Define and implement parsing for all arguments, flags, and options accepted by the 'research' command, including validation and default values.
### Details:
Use a command-line parsing framework to handle parameters. Ensure support for optional and required arguments, order-independence, and clear error messages for invalid input.
@@ -83,7 +83,7 @@ The parameter validation is now production-ready and follows the same patterns u
</info added on 2025-05-25T06:00:42.350Z>

## 3. Context Gathering [done]
### Dependencies: 94.2
### Dependencies: 98.2
### Description: Implement logic to gather necessary context for the research operation, such as reading from files, stdin, or other sources as specified by the user.
### Details:
Support reading input from files or stdin using '-' as a convention. Validate and preprocess the gathered context to ensure it is suitable for AI processing.
@@ -191,7 +191,7 @@ The ContextGatherer utility is now ready for integration into the core research
</info added on 2025-05-25T06:13:19.991Z>

## 4. Core Function Implementation [done]
### Dependencies: 94.2, 94.3
### Dependencies: 98.2, 98.3
### Description: Implement the core research function in scripts/modules/task-manager/ following the add-task.js pattern
### Details:
Create a new core function (e.g., research.js) in scripts/modules/task-manager/ that:
@@ -220,7 +220,7 @@ The research command now provides the same polished user experience as other AI-
</info added on 2025-05-25T06:29:01.194Z>

## 5. Direct Function Implementation [pending]
### Dependencies: 94.4
### Dependencies: 98.4
### Description: Create the MCP direct function wrapper in mcp-server/src/core/direct-functions/ following the add-task pattern
### Details:
Create a new direct function (e.g., research.js) in mcp-server/src/core/direct-functions/ that:
@@ -234,7 +234,7 @@ Create a new direct function (e.g., research.js) in mcp-server/src/core/direct-f
- Export and register in task-master-core.js

## 6. MCP Tool Implementation [pending]
### Dependencies: 94.5
### Dependencies: 98.5
### Description: Create the MCP tool in mcp-server/src/tools/ following the add-task tool pattern
### Details:
Create a new MCP tool (e.g., research.js) in mcp-server/src/tools/ that:
@@ -268,7 +268,7 @@ File structure:
- Add metadata header with query details and context sources

## 8. Add research-to-task linking functionality [pending]
### Dependencies: 94.7
### Dependencies: None
### Description: Implement functionality to link saved research to specific tasks with interactive task selection
### Details:
Add capability to link research results to specific tasks by updating task details with research references. For CLI mode, use inquirer to prompt user if they want to link research to tasks and provide task selection. For MCP mode, accept linkToTasks parameter.

639
.taskmaster/tasks/task_099.txt
Normal file
@@ -0,0 +1,639 @@
# Task ID: 99
# Title: Enhance Parse-PRD with Intelligent Task Expansion and Detail Preservation
# Status: pending
# Dependencies: None
# Priority: high
# Description: Transform parse-prd from a simple task generator into an intelligent system that preserves PRD detail resolution through context-aware task expansion. This addresses the critical issue where highly detailed PRDs lose their specificity when parsed into too few top-level tasks, and ensures that task expansions are grounded in actual PRD content rather than generic AI assumptions.
# Details:
## Core Problem Statement

The current parse-prd implementation suffers from a fundamental resolution loss problem:

1. **Detail Compression**: Complex, detailed PRDs get compressed into a fixed number of top-level tasks (default 10), losing critical specificity
2. **Orphaned Expansions**: When tasks are later expanded via expand-task, the AI lacks the original PRD context, resulting in generic subtasks that don't reflect the PRD's specific requirements
3. **Binary Approach**: The system either creates too few high-level tasks OR requires manual expansion that loses PRD context

## Solution Architecture

### Phase 1: Enhanced PRD Analysis Engine
- Implement intelligent PRD segmentation that identifies natural task boundaries based on content structure
- Create a PRD context preservation system that maintains detailed mappings between PRD sections and generated tasks
- Develop adaptive task count determination based on PRD complexity metrics (length, technical depth, feature count)

### Phase 2: Context-Aware Task Generation
- Modify generateTasksFromPRD to create tasks with embedded PRD context references
- Implement a PRD section mapping system that links each task to its source PRD content
- Add metadata fields to tasks that preserve original PRD language and specifications

### Phase 3: Intelligent In-Flight Expansion
- Add optional `--expand-tasks` flag to parse-prd that triggers immediate expansion after initial task generation
- Implement context-aware expansion that uses the original PRD content for each task's expansion
- Create a two-pass system: first pass generates tasks with PRD context, second pass expands using that context

### Phase 4: PRD-Grounded Expansion Logic
- Enhance the expansion prompt generation to include relevant PRD excerpts for each task being expanded
- Implement smart context windowing that includes related PRD sections when expanding tasks
- Add validation to ensure expanded subtasks maintain fidelity to original PRD specifications

## Technical Implementation Details

### File Modifications Required:
1. **scripts/modules/task-manager/parse-prd.js**
- Add PRD analysis functions for intelligent segmentation
- Implement context preservation during task generation
- Add optional expansion pipeline integration
- Create PRD-to-task mapping system

2. **scripts/modules/task-manager/expand-task.js**
- Enhance to accept PRD context as additional input
- Modify expansion prompts to include relevant PRD excerpts
- Add PRD-grounded validation for generated subtasks

3. **scripts/modules/ai-services-unified.js**
- Add support for context-aware prompting with PRD excerpts
- Implement intelligent context windowing for large PRDs
- Add PRD analysis capabilities for complexity assessment

### New Data Structures:
```javascript
// Enhanced task structure with PRD context
{
  id: "1",
  title: "User Authentication System",
  description: "...",
  prdContext: {
    sourceSection: "Authentication Requirements (Lines 45-78)",
    originalText: "The system must implement OAuth 2.0...",
    relatedSections: ["Security Requirements", "User Management"],
    contextWindow: "Full PRD excerpt relevant to this task"
  },
  // ... existing fields
}

// PRD analysis metadata
{
  prdAnalysis: {
    totalComplexity: 8.5,
    naturalTaskBoundaries: [...],
    recommendedTaskCount: 15,
    sectionMappings: {...}
  }
}
```

### New CLI Options:
- `--expand-tasks`: Automatically expand generated tasks using PRD context
- `--preserve-detail`: Maximum detail preservation mode
- `--adaptive-count`: Let AI determine optimal task count based on PRD complexity
- `--context-window-size`: Control how much PRD context to include in expansions

## Implementation Strategy

### Step 1: PRD Analysis Enhancement
- Create PRD parsing utilities that identify natural section boundaries
- Implement complexity scoring for different PRD sections
- Build context extraction functions that preserve relevant details

### Step 2: Context-Aware Task Generation
- Modify the task generation prompt to include section-specific context
- Implement task-to-PRD mapping during generation
- Add metadata fields to preserve PRD relationships

### Step 3: Intelligent Expansion Pipeline
- Create expansion logic that uses preserved PRD context
- Implement smart prompt engineering that includes relevant PRD excerpts
- Add validation to ensure subtask fidelity to original requirements

### Step 4: Integration and Testing
- Integrate new functionality with existing parse-prd workflow
- Add comprehensive testing with various PRD types and complexities
- Implement telemetry for tracking detail preservation effectiveness

## Success Metrics
- PRD detail preservation rate (measured by semantic similarity between PRD and generated tasks)
- Reduction in manual task refinement needed post-parsing
- Improved accuracy of expanded subtasks compared to PRD specifications
- User satisfaction with task granularity and detail accuracy

## Edge Cases and Considerations
- Very large PRDs that exceed context windows
- PRDs with conflicting or ambiguous requirements
- Integration with existing task expansion workflows
- Performance impact of enhanced analysis
- Backward compatibility with existing parse-prd usage

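The adaptive task count idea above can be sketched with cheap structural metrics of the PRD text. Everything here is an illustrative assumption: the metrics, the weights, and the 5-30 bounds are placeholders, not a specification:

```javascript
// Illustrative sketch of deriving a recommended top-level task count
// from simple PRD complexity metrics. Weights and bounds are assumptions.
function recommendedTaskCount(prdText) {
  const words = prdText.split(/\s+/).filter(Boolean).length;
  const sections = (prdText.match(/^#{1,3}\s/gm) || []).length;
  const bullets = (prdText.match(/^\s*[-*]\s/gm) || []).length;
  // Base of 5 tasks, growing with document size and structure, capped at 30.
  const estimate = 5 + Math.round(words / 500) + Math.round((sections + bullets) / 10);
  return Math.min(30, Math.max(5, estimate));
}
```

In practice an LLM-based estimate would replace or refine this heuristic; the value of a deterministic floor like this is that it bounds the AI's answer and gives tests something stable to assert against.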
# Test Strategy:

# Subtasks:
## 1. Implement PRD Analysis and Segmentation Engine [pending]
### Dependencies: None
### Description: Create intelligent PRD parsing that identifies natural task boundaries and complexity metrics
### Details:
## Implementation Requirements

### Core Functions to Implement:
1. **analyzePRDStructure(prdContent)**
- Parse PRD into logical sections using headers, bullet points, and semantic breaks
- Identify feature boundaries, technical requirements, and implementation sections
- Return structured analysis with section metadata

2. **calculatePRDComplexity(prdContent)**
- Analyze technical depth, feature count, integration requirements
- Score complexity on 1-10 scale for different aspects
- Return recommended task count based on complexity

3. **extractTaskBoundaries(prdAnalysis)**
- Identify natural breaking points for task creation
- Group related requirements into logical task units
- Preserve context relationships between sections

### Technical Approach:
- Use regex patterns and NLP techniques to identify section headers
- Implement keyword analysis for technical complexity assessment
- Create semantic grouping algorithms for related requirements
- Build context preservation mappings

### Output Structure:
```javascript
{
  sections: [
    {
      title: "User Authentication",
      content: "...",
      startLine: 45,
      endLine: 78,
      complexity: 7,
      relatedSections: ["Security", "User Management"]
    }
  ],
  overallComplexity: 8.5,
  recommendedTaskCount: 15,
  naturalBoundaries: [...],
  contextMappings: {...}
}
```

### Integration Points:
- Called at the beginning of parse-prd process
- Results used to inform task generation strategy
- Analysis stored for later use in expansion phase

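The regex-based part of the section splitting described above is straightforward to sketch. This is a minimal version that only handles markdown-style headers; the semantic-break and NLP detection mentioned in the technical approach would layer on top:

```javascript
// Minimal sketch of analyzePRDStructure's header-based section splitting.
// Only markdown '#' headers are detected; semantic breaks are out of scope here.
function splitIntoSections(prdContent) {
  const lines = prdContent.split('\n');
  const sections = [];
  let current = null;
  lines.forEach((line, i) => {
    const header = line.match(/^#{1,3}\s+(.*)/);
    if (header) {
      if (current) current.endLine = i; // close the previous section
      current = { title: header[1], startLine: i + 1, endLine: lines.length, content: '' };
      sections.push(current);
    } else if (current) {
      current.content += line + '\n';
    }
  });
  return sections;
}
```

The start/end line numbers this produces are what make `sourceSection` references like "Authentication Requirements (Lines 45-78)" possible later.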
## 2. Enhance Task Generation with PRD Context Preservation [pending]
### Dependencies: 99.1
### Description: Modify generateTasksFromPRD to embed PRD context and maintain source mappings
### Details:
## Implementation Requirements

### Core Modifications to generateTasksFromPRD:
1. **Add PRD Context Embedding**
- Modify task generation prompt to include relevant PRD excerpts
- Ensure each generated task includes source section references
- Preserve original PRD language and specifications in task metadata

2. **Implement Context Windowing**
- For large PRDs, implement intelligent context windowing
- Include relevant sections for each task being generated
- Maintain context relationships between related tasks

3. **Enhanced Task Structure**
- Add prdContext field to task objects
- Include sourceSection, originalText, and relatedSections
- Store contextWindow for later use in expansions

### Technical Implementation:
```javascript
// Enhanced task generation with context
const generateTaskWithContext = async (prdSection, relatedSections, fullPRD) => {
  const contextWindow = buildContextWindow(prdSection, relatedSections, fullPRD);
  const prompt = `
Generate a task based on this PRD section:

PRIMARY SECTION:
${prdSection.content}

RELATED CONTEXT:
${contextWindow}

Ensure the task preserves all specific requirements and technical details.
`;

  // Generate task with embedded context
  const task = await generateTask(prompt);
  task.prdContext = {
    sourceSection: prdSection.title,
    originalText: prdSection.content,
    relatedSections: relatedSections.map(s => s.title),
    contextWindow: contextWindow
  };

  return task;
};
```

### Context Preservation Strategy:
- Map each task to its source PRD sections
- Preserve technical specifications and requirements language
- Maintain relationships between interdependent features
- Store context for later use in expansion phase

### Integration with Existing Flow:
- Modify existing generateTasksFromPRD function
- Maintain backward compatibility with simple PRDs
- Add new metadata fields without breaking existing structure
- Ensure context is available for subsequent operations

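The `buildContextWindow` helper is referenced in the code above but never defined. A simplified sketch of what it might do, taking a character budget instead of the full PRD (the signature and truncation strategy are assumptions):

```javascript
// Hypothetical buildContextWindow: joins the primary section with related
// sections, truncated to a character budget. Signature is an assumption.
function buildContextWindow(prdSection, relatedSections, maxChars = 2000) {
  const parts = [prdSection.content, ...relatedSections.map((s) => s.content)];
  let window = '';
  for (const part of parts) {
    if (window.length + part.length > maxChars) {
      window += part.slice(0, maxChars - window.length);
      break;
    }
    window += part + '\n';
  }
  return window;
}
```

Putting the primary section first means truncation always sacrifices related context before it touches the task's own source text, which matches the detail-preservation goal.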
## 3. Implement In-Flight Task Expansion Pipeline [pending]
### Dependencies: 99.2
### Description: Add optional --expand-tasks flag and intelligent expansion using preserved PRD context
### Details:
## Implementation Requirements

### Core Features:
1. **Add --expand-tasks CLI Flag**
- Optional flag for parse-prd command
- Triggers automatic expansion after initial task generation
- Configurable expansion depth and strategy

2. **Two-Pass Processing System**
- First pass: Generate tasks with PRD context preservation
- Second pass: Expand tasks using their embedded PRD context
- Maintain context fidelity throughout the process

3. **Context-Aware Expansion Logic**
- Use preserved PRD context for each task's expansion
- Include relevant PRD excerpts in expansion prompts
- Ensure subtasks maintain fidelity to original specifications

### Technical Implementation:
```javascript
// Enhanced parse-prd with expansion pipeline
const parsePRDWithExpansion = async (prdContent, options) => {
  // Phase 1: Analyze and generate tasks with context
  const prdAnalysis = await analyzePRDStructure(prdContent);
  const tasksWithContext = await generateTasksWithContext(prdAnalysis);

  // Phase 2: Expand tasks if requested
  if (options.expandTasks) {
    for (const task of tasksWithContext) {
      if (shouldExpandTask(task, prdAnalysis)) {
        const expandedSubtasks = await expandTaskWithPRDContext(task);
        task.subtasks = expandedSubtasks;
      }
    }
  }

  return tasksWithContext;
};

// Context-aware task expansion
const expandTaskWithPRDContext = async (task) => {
  const { prdContext } = task;
  const expansionPrompt = `
Expand this task into detailed subtasks using the original PRD context:

TASK: ${task.title}
DESCRIPTION: ${task.description}

ORIGINAL PRD CONTEXT:
${prdContext.originalText}

RELATED SECTIONS:
${prdContext.contextWindow}

Generate subtasks that preserve all technical details and requirements from the PRD.
`;

  return await generateSubtasks(expansionPrompt);
};
```

### CLI Integration:
- Add --expand-tasks flag to parse-prd command
- Add --expansion-depth option for controlling subtask levels
- Add --preserve-detail flag for maximum context preservation
- Maintain backward compatibility with existing parse-prd usage

### Expansion Strategy:
- Determine which tasks should be expanded based on complexity
- Use PRD context to generate accurate, detailed subtasks
- Preserve technical specifications and implementation details
- Validate subtask accuracy against original PRD content

### Performance Considerations:
- Implement batching for large numbers of tasks
- Add progress indicators for long-running expansions
- Optimize context window sizes for efficiency
- Cache PRD analysis results for reuse

## 4. Enhance Expand-Task with PRD Context Integration [pending]
### Dependencies: 99.2
### Description: Modify existing expand-task functionality to leverage preserved PRD context for more accurate expansions
### Details:
## Implementation Requirements

### Core Enhancements to expand-task.js:
1. **PRD Context Detection**
- Check if task has embedded prdContext metadata
- Extract relevant PRD sections for expansion
- Fall back to existing expansion logic if no PRD context

2. **Context-Enhanced Expansion Prompts**
- Include original PRD excerpts in expansion prompts
- Add related section context for comprehensive understanding
- Preserve technical specifications and requirements language

3. **Validation and Quality Assurance**
- Validate generated subtasks against original PRD content
- Ensure technical accuracy and requirement compliance
- Flag potential discrepancies for review

### Technical Implementation:
```javascript
// Enhanced expand-task with PRD context
const expandTaskWithContext = async (taskId, options, context) => {
  const task = await getTask(taskId);

  // Check for PRD context
  if (task.prdContext) {
    return await expandWithPRDContext(task, options);
  } else {
    // Fall back to existing expansion logic
    return await expandTaskStandard(task, options);
  }
};

const expandWithPRDContext = async (task, options) => {
  const { prdContext } = task;

  const enhancedPrompt = `
Expand this task into detailed subtasks using the original PRD context:

TASK DETAILS:
Title: ${task.title}
Description: ${task.description}
Current Details: ${task.details}

ORIGINAL PRD CONTEXT:
Source Section: ${prdContext.sourceSection}
Original Requirements:
${prdContext.originalText}

RELATED CONTEXT:
${prdContext.contextWindow}

EXPANSION REQUIREMENTS:
- Preserve all technical specifications from the PRD
- Maintain requirement accuracy and completeness
- Generate ${options.num || 'appropriate number of'} subtasks
- Include implementation details that reflect PRD specifics

Generate subtasks that are grounded in the original PRD content.
`;

  const subtasks = await generateSubtasks(enhancedPrompt, options);

  // Add PRD context inheritance to subtasks
  subtasks.forEach(subtask => {
    subtask.prdContext = {
      inheritedFrom: task.id,
      sourceSection: prdContext.sourceSection,
      relevantExcerpt: extractRelevantExcerpt(prdContext, subtask)
    };
  });

  return subtasks;
};
```

### Integration Points:
1. **Modify existing expand-task.js**
- Add PRD context detection logic
- Enhance prompt generation with context
- Maintain backward compatibility

2. **Update expansion validation**
- Add PRD compliance checking
- Implement quality scoring for context fidelity
- Flag potential accuracy issues

3. **CLI and MCP Integration**
- Update expand-task command to leverage PRD context
- Add options for context-aware expansion
- Maintain existing command interface

### Context Inheritance Strategy:
- Pass relevant PRD context to generated subtasks
- Create context inheritance chain for nested expansions
- Preserve source traceability throughout expansion tree
- Enable future re-expansion with maintained context

### Quality Assurance Features:
- Semantic similarity checking between subtasks and PRD
- Technical requirement compliance validation
- Automated flagging of potential context drift
- User feedback integration for continuous improvement

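The "semantic similarity checking" feature above could start from something far cheaper than embeddings: token overlap between a subtask and its source PRD section. A rough sketch using the Jaccard index as a stand-in; the metric choice and any threshold applied to it are assumptions:

```javascript
// Cheap stand-in for semantic similarity: Jaccard index over word tokens.
// A low score between a subtask and its PRD section could flag context drift.
function tokenSimilarity(a, b) {
  const tokens = (s) => new Set(s.toLowerCase().match(/[a-z0-9]+/g) || []);
  const ta = tokens(a);
  const tb = tokens(b);
  const intersection = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : intersection / union;
}
```

Token overlap misses paraphrase, so it only works as a first-pass drift detector; anything it flags would still need embedding-based or human review.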
## 5. Add New CLI Options and MCP Parameters [pending]
|
||||
### Dependencies: 99.3
|
||||
### Description: Implement new command-line flags and MCP tool parameters for enhanced PRD parsing
|
||||
### Details:
|
||||
## Implementation Requirements
|
||||
|
||||
### New CLI Options for parse-prd:
|
||||
1. **--expand-tasks**
|
||||
- Automatically expand generated tasks using PRD context
|
||||
- Boolean flag, default false
|
||||
- Triggers in-flight expansion pipeline
|
||||
|
||||
2. **--preserve-detail**
|
||||
- Maximum detail preservation mode
|
||||
- Boolean flag, default false
|
||||
- Ensures highest fidelity to PRD content
|
||||
|
||||
3. **--adaptive-count**
|
||||
- Let AI determine optimal task count based on PRD complexity
|
||||
- Boolean flag, default false
|
||||
- Overrides --num-tasks when enabled
|
||||
|
||||
4. **--context-window-size**
|
||||
- Control how much PRD context to include in expansions
|
||||
- Integer value, default 2000 characters
|
||||
- Balances context richness with performance
|
||||
|
||||
5. **--expansion-depth**
|
||||
- Control how many levels deep to expand tasks
|
||||
- Integer value, default 1
|
||||
- Prevents excessive nesting
|
||||
|
||||
### MCP Tool Parameter Updates:
|
||||
```javascript
|
||||
// Enhanced parse_prd MCP tool parameters
|
||||
{
|
||||
input: "Path to PRD file",
|
||||
output: "Output path for tasks.json",
|
||||
numTasks: "Number of top-level tasks (overridden by adaptiveCount)",
|
||||
expandTasks: "Boolean - automatically expand tasks with PRD context",
|
||||
preserveDetail: "Boolean - maximum detail preservation mode",
|
||||
adaptiveCount: "Boolean - AI determines optimal task count",
|
||||
contextWindowSize: "Integer - context size for expansions",
|
||||
expansionDepth: "Integer - levels of expansion to perform",
|
||||
research: "Boolean - use research model for enhanced analysis",
|
||||
force: "Boolean - overwrite existing files"
|
||||
}
|
||||
```
|
||||
|
||||
### CLI Command Updates:
|
||||
```bash
|
||||
# Enhanced parse-prd command examples
|
||||
task-master parse-prd prd.txt --expand-tasks --preserve-detail
|
||||
task-master parse-prd prd.txt --adaptive-count --expansion-depth=2
|
||||
task-master parse-prd prd.txt --context-window-size=3000 --research
|
||||
```
|
||||
|
||||
### Implementation Details:
1. **Update commands.js**
   - Add new option definitions
   - Update the parse-prd command handler
   - Maintain backward compatibility

2. **Update MCP tool definition**
   - Add new parameter schemas
   - Update tool description and examples
   - Ensure parameter validation

3. **Parameter Processing Logic**
   - Validate parameter combinations
   - Set appropriate defaults
   - Handle conflicting options gracefully
### Validation Rules:
- expansion-depth must be a positive integer ≤ 3
- context-window-size must be between 500 and 5000 characters
- adaptive-count overrides num-tasks when both are specified
- expand-tasks requires either adaptive-count or num-tasks > 5
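The rules above can be sketched as a pure validation function (a minimal sketch; the function name, error strings, and return shape are illustrative, not the shipped API):

```javascript
// Sketch: validate the parse-prd option combinations described above.
function validateParsePrdOptions(opts) {
  const errors = [];
  if (!Number.isInteger(opts.expansionDepth) || opts.expansionDepth < 1 || opts.expansionDepth > 3) {
    errors.push('expansion-depth must be a positive integer <= 3');
  }
  if (opts.contextWindowSize < 500 || opts.contextWindowSize > 5000) {
    errors.push('context-window-size must be between 500 and 5000 characters');
  }
  if (opts.expandTasks && !opts.adaptiveCount && !(opts.numTasks > 5)) {
    errors.push('expand-tasks requires either adaptive-count or num-tasks > 5');
  }
  // adaptive-count wins over num-tasks: resolve the conflict rather than error.
  const resolved = { ...opts };
  if (opts.adaptiveCount) resolved.numTasks = undefined;
  return { errors, resolved };
}
```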
### Help Documentation Updates:
- Update command help text with new options
- Add usage examples for different scenarios
- Document parameter interactions and constraints
- Include performance considerations for large PRDs
## 6. Implement Comprehensive Testing and Validation [pending]
### Dependencies: 99.4, 99.5
### Description: Create test suite for PRD analysis, context preservation, and expansion accuracy
### Details:
## Implementation Requirements
### Test Categories:
1. **PRD Analysis Testing**
   - Test section identification with various PRD formats
   - Validate complexity scoring accuracy
   - Test boundary detection for different document structures
   - Verify context mapping correctness

2. **Context Preservation Testing**
   - Validate PRD context embedding in generated tasks
   - Test context window generation and sizing
   - Verify source section mapping accuracy
   - Test context inheritance in subtasks

3. **Expansion Accuracy Testing**
   - Compare PRD-grounded vs standard expansions
   - Measure semantic similarity between PRD and subtasks
   - Test technical requirement preservation
   - Validate expansion depth and quality

4. **Integration Testing**
   - Test full parse-prd pipeline with expansion
   - Validate CLI option combinations
   - Test MCP tool parameter handling
   - Verify backward compatibility
### Test Data Requirements:
```javascript
// Test PRD samples
const testPRDs = {
  simple: "Basic PRD with minimal technical details",
  complex: "Detailed PRD with extensive technical specifications",
  structured: "Well-organized PRD with clear sections",
  unstructured: "Free-form PRD with mixed content",
  technical: "Highly technical PRD with specific requirements",
  large: "Very large PRD testing context window limits"
};
```
### Validation Metrics:
1. **Detail Preservation Score**
   - Semantic similarity between PRD and generated tasks
   - Technical requirement coverage percentage
   - Specification accuracy rating

2. **Context Fidelity Score**
   - Accuracy of source section mapping
   - Relevance of included context windows
   - Quality of context inheritance

3. **Expansion Quality Score**
   - Subtask relevance to parent task and PRD
   - Technical accuracy of implementation details
   - Completeness of requirement coverage
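One cheap way to approximate the relevance metrics above is token overlap between subtask text and the PRD context; a real scorer would likely use embeddings. This is an illustrative sketch, and `calculateRelevanceScore` here is an assumption, not an existing helper:

```javascript
// Sketch: approximate subtask-to-PRD relevance via token overlap.
function tokenize(text) {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) || []);
}

function calculateRelevanceScore(subtaskTexts, prdContextText) {
  const prdTokens = tokenize(prdContextText);
  if (prdTokens.size === 0 || subtaskTexts.length === 0) return 0;
  // Average, over subtasks, of the fraction of subtask tokens found in the PRD context.
  const scores = subtaskTexts.map(text => {
    const tokens = tokenize(text);
    if (tokens.size === 0) return 0;
    let hits = 0;
    for (const t of tokens) if (prdTokens.has(t)) hits++;
    return hits / tokens.size;
  });
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}
```

A score of 1 means every subtask term appears in the PRD context; values near 0 indicate context drift.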
### Test Implementation:
```javascript
// Example test structure
describe('Enhanced Parse-PRD', () => {
  describe('PRD Analysis', () => {
    test('should identify sections correctly', async () => {
      const analysis = await analyzePRDStructure(testPRDs.structured);
      expect(analysis.sections).toHaveLength(expectedSectionCount);
      expect(analysis.overallComplexity).toBeGreaterThan(0);
    });

    test('should calculate appropriate task count', async () => {
      const analysis = await analyzePRDStructure(testPRDs.complex);
      expect(analysis.recommendedTaskCount).toBeGreaterThan(10);
    });
  });

  describe('Context Preservation', () => {
    test('should embed PRD context in tasks', async () => {
      const tasks = await generateTasksWithContext(testPRDs.technical);
      tasks.forEach(task => {
        expect(task.prdContext).toBeDefined();
        expect(task.prdContext.sourceSection).toBeTruthy();
        expect(task.prdContext.originalText).toBeTruthy();
      });
    });
  });

  describe('Expansion Accuracy', () => {
    test('should generate relevant subtasks from PRD context', async () => {
      const task = createTestTaskWithPRDContext();
      const subtasks = await expandTaskWithPRDContext(task);

      const relevanceScore = calculateRelevanceScore(subtasks, task.prdContext);
      expect(relevanceScore).toBeGreaterThan(0.8);
    });
  });
});
```
### Performance Testing:
- Test with large PRDs (>10,000 words)
- Measure processing time for different complexity levels
- Test memory usage with extensive context preservation
- Validate timeout handling for long-running operations
### Quality Assurance Tools:
- Automated semantic similarity checking
- Technical requirement compliance validation
- Context drift detection algorithms
- User acceptance testing framework
### Continuous Integration:
- Add tests to existing CI pipeline
- Set up performance benchmarking
- Implement quality gates for PRD processing
- Create regression testing for context preservation
1304  .taskmaster/tasks/task_100.txt  (new file; diff suppressed because it is too large)
1915  .taskmaster/tasks/task_101.txt  (new file; diff suppressed because it is too large)
73    .taskmaster/tasks/task_102.txt  (new file)
@@ -0,0 +1,73 @@
# Task ID: 102
# Title: Task Master Gateway Integration
# Status: pending
# Dependencies: None
# Priority: high
# Description: Integrate Task Master with premium gateway services for enhanced testing and git workflow capabilities
# Details:
Add gateway integration to Task Master (open source) that enables users to access premium AI-powered test generation, TDD orchestration, and smart git workflows through API key authentication. Maintains local file operations while leveraging remote AI intelligence.

# Test Strategy:

# Subtasks:
## 1. Add gateway integration foundation [pending]
### Dependencies: None
### Description: Create base infrastructure for connecting to premium gateway services
### Details:
Implement configuration management for API keys, endpoint URLs, and feature flags. Create an HTTP client wrapper with authentication, error handling, and retry logic.
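The client wrapper could look roughly like this minimal sketch, with the transport injected so the retry logic is testable offline. The endpoint path, header names, and retry policy are assumptions, not the final gateway contract:

```javascript
// Sketch: gateway HTTP client with API-key auth and simple retry on 5xx/network errors.
async function gatewayRequest(path, body, { apiKey, baseUrl, fetchFn = fetch, retries = 2 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetchFn(`${baseUrl}${path}`, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`, // API-key authentication
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(body)
      });
      if (res.status >= 500) throw new Error(`gateway error ${res.status}`); // retry server errors
      return res; // 2xx-4xx responses are returned to the caller to handle
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Injecting `fetchFn` also gives the fallback mode (subtask 10) a single seam to intercept.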
## 2. Implement test-gen command [pending]
### Dependencies: None
### Description: Add test generation command that uses gateway API
### Details:
Create a command that gathers local context (code, tasks, patterns), sends it to the gateway API for intelligent test generation, then writes the generated tests to the local filesystem with proper structure.

## 3. Create TDD workflow command [pending]
### Dependencies: None
### Description: Implement TDD orchestration for red-green-refactor cycle
### Details:
Build a TDD state machine that manages test phases, integrates with test watchers, and provides real-time feedback during development cycles.
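The core of such a state machine can be sketched as a transition function; the phase names follow the red-green-refactor description, but the exact transition rules are an assumption about the intended design:

```javascript
// Sketch: TDD phase transitions driven by the latest test-run outcome.
function nextTddPhase(phase, testsPass) {
  switch (phase) {
    case 'red':      return testsPass ? 'red' : 'green';      // stay until a failing test exists
    case 'green':    return testsPass ? 'refactor' : 'green'; // implement until tests pass
    case 'refactor': return testsPass ? 'red' : 'green';      // clean up, then start the next cycle
    default: throw new Error(`unknown TDD phase: ${phase}`);
  }
}
```

A test watcher (subtask 9) would feed each run's pass/fail result into this function and surface the resulting phase to the user.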
## 4. Add git-flow command [pending]
### Dependencies: None
### Description: Implement automated git workflow with smart commits
### Details:
Create git workflow automation including branch management, smart commit message generation via the gateway API, and PR creation with comprehensive descriptions.

## 5. Enhance task structure for testing metadata [pending]
### Dependencies: None
### Description: Extend task schema to support test and git information
### Details:
Add fields for test files, coverage data, git branches, commit history, and TDD phase tracking to the task structure.
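An extended task record might look like the following; the field names and nesting are assumptions based on the subtask description, not a finalized schema:

```javascript
// Sketch: task record extended with testing and git metadata.
const exampleTask = {
  id: 102,
  title: 'Task Master Gateway Integration',
  status: 'pending',
  // New testing metadata:
  testing: {
    testFiles: ['tests/gateway-client.test.js'],
    coverage: { lines: 0, branches: 0 }, // populated by coverage analysis (subtask 8)
    tddPhase: 'red'                      // red | green | refactor (subtask 3)
  },
  // New git metadata:
  git: {
    branch: null,   // set by the git-flow command (subtask 4)
    commits: [],    // smart-commit SHAs recorded as work progresses
    pullRequest: null
  }
};
```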
## 6. Add MCP tools for test-gen and TDD commands [pending]
### Dependencies: None
### Description: Create MCP tool interfaces for IDE integration
### Details:
Implement MCP tools that expose the test generation and TDD workflow commands to IDEs like Cursor, enabling seamless integration with the development environment.

## 7. Create test pattern detection for existing codebase [pending]
### Dependencies: None
### Description: Analyze existing tests to learn project patterns
### Details:
Implement pattern detection that analyzes existing test files to understand project conventions, naming patterns, and testing approaches for consistency.
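One simple slice of this, sketched below, is inferring the dominant test-file naming convention from existing filenames; the pattern labels and function name are illustrative:

```javascript
// Sketch: detect the project's dominant test-file naming pattern.
function detectTestNamingPattern(testFiles) {
  const counts = { 'dot-test': 0, 'dot-spec': 0, 'underscore-test': 0, other: 0 };
  for (const f of testFiles) {
    if (/\.test\.[jt]s$/.test(f)) counts['dot-test']++;        // foo.test.js / foo.test.ts
    else if (/\.spec\.[jt]s$/.test(f)) counts['dot-spec']++;   // foo.spec.js
    else if (/_test\.[jt]s$/.test(f)) counts['underscore-test']++; // foo_test.js
    else counts.other++;
  }
  // Return the most common pattern so generated tests follow it.
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
}
```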
## 8. Add coverage analysis integration [pending]
### Dependencies: None
### Description: Integrate with coverage tools and provide insights
### Details:
Connect with Jest, NYC, and other coverage tools to analyze test coverage, identify gaps, and suggest improvements through the gateway API.

## 9. Implement test watcher with phase transitions [pending]
### Dependencies: None
### Description: Create intelligent test watcher for TDD automation
### Details:
Build a test watcher that monitors test results and automatically transitions between TDD phases (red/green/refactor) based on test outcomes.
## 10. Add fallback mode when gateway is unavailable [pending]
### Dependencies: None
### Description: Ensure Task Master works without gateway access
### Details:
Implement graceful degradation when the gateway API is unavailable, falling back to local AI models or basic functionality while maintaining core Task Master features.
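The degradation pattern can be sketched as a small wrapper that tries the gateway first and falls back to a local implementation on any failure; the function names here are hypothetical:

```javascript
// Sketch: run a gateway-backed operation with a local fallback.
async function withGatewayFallback(gatewayFn, localFn) {
  try {
    return { source: 'gateway', result: await gatewayFn() };
  } catch (err) {
    // Gateway unreachable (network error, auth failure, 5xx): degrade to the
    // local path rather than failing the whole command.
    return { source: 'local', result: await localFn() };
  }
}
```

Tagging the result with its `source` lets commands tell the user when they received degraded, locally generated output.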