Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ralph Khreish | c47e2cf0fe | fix: issues | 2025-07-10 10:29:51 +03:00 |
| Ralph Khreish | 033cab34b6 | refactor(commands): Update tasks path retrieval to use taskMaster.getTasksPath() for consistency | 2025-07-10 10:26:54 +03:00 |
136 changed files with 4710 additions and 10071 deletions

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Recover from `@anthropic-ai/claude-code` JSON truncation bug that caused Task Master to crash when handling large (>8 kB) structured responses. The CLI/SDK still truncates, but Task Master now detects the error, preserves buffered text, and returns a usable response instead of throwing.
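A minimal sketch of this style of recovery (illustrative only; the helper name and repair strategy are assumptions, not Task Master's actual implementation):

```javascript
// Hypothetical sketch: instead of throwing on a truncated JSON payload,
// close the unbalanced strings/arrays/objects implied by the buffered
// text and retry the parse.
function recoverTruncatedJson(buffered) {
  try {
    return JSON.parse(buffered); // fast path: payload was complete
  } catch {
    let inString = false;
    const closers = [];
    for (let i = 0; i < buffered.length; i++) {
      const ch = buffered[i];
      if (inString) {
        if (ch === '\\') i++; // skip the escaped character
        else if (ch === '"') inString = false;
      } else if (ch === '"') inString = true;
      else if (ch === '{') closers.push('}');
      else if (ch === '[') closers.push(']');
      else if (ch === '}' || ch === ']') closers.pop();
    }
    let repaired = buffered;
    if (inString) repaired += '"'; // terminate a cut-off string
    repaired = repaired.replace(/,\s*$/, ''); // drop a dangling comma
    repaired += closers.reverse().join('');
    return JSON.parse(repaired); // may still throw if unrecoverable
  }
}
```

This preserves the buffered text and returns a usable object for the common case where truncation cuts off closing delimiters.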

View File

@@ -1,12 +0,0 @@
---
"task-master-ai": patch
---
Prevent CLAUDE.md overwrite by using Claude Code's import feature
- Task Master now creates its instructions in `.taskmaster/CLAUDE.md` instead of overwriting the user's `CLAUDE.md`
- Adds an import section to the user's CLAUDE.md that references the Task Master instructions
- Preserves existing user content in CLAUDE.md files
- Provides clean uninstall that only removes Task Master's additions
**Breaking Change**: Task Master instructions for Claude Code are now stored in `.taskmaster/CLAUDE.md` and imported into the main CLAUDE.md file. Users who previously had Task Master content directly in their CLAUDE.md will need to run `task-master rules remove claude` followed by `task-master rules add claude` to migrate to the new structure.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Update dependency ai-sdk-provider-gemini-cli to 0.0.4 to address a breaking change Google made to the Gemini CLI, and add support for 'api-key' in addition to 'gemini-api-key' for better AI SDK compatibility.

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fix: show command no longer requires complexity report file to exist
The `tm show` command was incorrectly requiring the complexity report file to exist even when not needed. Now it only validates the complexity report path when a custom report file is explicitly provided via the -r/--report option.

View File

@@ -0,0 +1,9 @@
---
"task-master-ai": minor
---
Add support for xAI Grok 4 model
- Add grok-4 model to xAI provider with $3/$15 per 1M token pricing
- Enable main, fallback, and research roles for grok-4
- Max tokens set to 131,072 (matching other xAI models)

View File

@@ -1,10 +0,0 @@
---
"task-master-ai": minor
---
Complete Groq provider integration and add MoonshotAI Kimi K2 model support
- Fixed Groq provider registration
- Added Groq API key validation
- Added GROQ_API_KEY to .env.example
- Added moonshotai/kimi-k2-instruct model with $1/$3 per 1M token pricing and 16k max output

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---
feat: Add Zed editor rule profile with agent rules and MCP config
- Resolves #637

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Add Amp rule profile with AGENT.md and MCP config

View File

@@ -0,0 +1,8 @@
---
"task-master-ai": minor
---
Add stricter validation and clearer feedback for task priority when adding new tasks
- If a task priority is invalid, it now defaults to medium
- Made task priority case-insensitive, so HIGH and high are treated as the same value
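The validation described above can be sketched as follows (names are illustrative, not Task Master's internal API):

```javascript
// Hypothetical sketch of stricter, case-insensitive priority handling
// with a clear fallback to the default.
const VALID_PRIORITIES = ['high', 'medium', 'low'];
const DEFAULT_PRIORITY = 'medium';

function normalizePriority(input) {
  const p = String(input ?? '').trim().toLowerCase(); // HIGH === high
  if (VALID_PRIORITIES.includes(p)) return p;
  console.warn(`Invalid priority "${input}", defaulting to "${DEFAULT_PRIORITY}"`);
  return DEFAULT_PRIORITY;
}
```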

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add support for MCP Sampling as an AI provider; it requires no API key and uses the client's LLM provider

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Unify and streamline profile system architecture for improved maintainability

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Add MCP configuration support to Claude Code rules

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fixed the comprehensive Task Master system integration via custom slash commands with proper syntax
- Provides Claude Code with a complete set of commands that can trigger Task Master events directly within Claude Code

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Added Groq provider support

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Correct MCP server name and use 'Add to Cursor' button with updated placeholder keys.

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---
Add OpenCode profile with AGENTS.md and MCP config
- Resolves #965

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Add missing API keys to .env.example and README.md

View File

@@ -0,0 +1,130 @@
# Task Master Command Reference
Comprehensive command structure for Task Master integration with Claude Code.
## Command Organization
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
## Project Setup & Configuration
### `/project:tm/init`
- `index` - Initialize new project (handles PRD files intelligently)
- `quick` - Quick setup with auto-confirmation (-y flag)
### `/project:tm/models`
- `index` - View current AI model configuration
- `setup` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model
## Task Generation
### `/project:tm/parse-prd`
- `index` - Generate tasks from PRD document
- `with-research` - Enhanced parsing with research mode
### `/project:tm/generate`
- Create individual task files from tasks.json
## Task Management
### `/project:tm/list`
- `index` - Smart listing with natural language filters
- `with-subtasks` - Include subtasks in hierarchical view
- `by-status` - Filter by specific status
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/project:tm/sync-readme`
- Export tasks to README.md with formatting
### `/project:tm/update`
- `index` - Update tasks with natural language
- `from-id` - Update multiple tasks from a starting point
- `single` - Update specific task
### `/project:tm/add-task`
- `index` - Add new task with AI assistance
### `/project:tm/remove-task`
- `index` - Remove task with confirmation
## Subtask Management
### `/project:tm/add-subtask`
- `index` - Add new subtask to parent
- `from-task` - Convert existing task to subtask
### `/project:tm/remove-subtask`
- Remove subtask (with optional conversion)
### `/project:tm/clear-subtasks`
- `index` - Clear subtasks from specific task
- `all` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/project:tm/analyze-complexity`
- Analyze and generate expansion recommendations
### `/project:tm/complexity-report`
- Display complexity analysis report
### `/project:tm/expand`
- `index` - Break down specific task
- `all` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/project:tm/next`
- Intelligent next task recommendation
### `/project:tm/show`
- Display detailed task information
### `/project:tm/status`
- Comprehensive project dashboard
## Dependency Management
### `/project:tm/add-dependency`
- Add task dependency
### `/project:tm/remove-dependency`
- Remove task dependency
### `/project:tm/validate-dependencies`
- Check for dependency issues
### `/project:tm/fix-dependencies`
- Automatically fix dependency problems
## Usage Patterns
### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.

View File

@@ -1,146 +0,0 @@
# Task Master Command Reference
Comprehensive command structure for Task Master integration with Claude Code.
## Command Organization
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
## Project Setup & Configuration
### `/project:tm/init`
- `init-project` - Initialize new project (handles PRD files intelligently)
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)
### `/project:tm/models`
- `view-models` - View current AI model configuration
- `setup-models` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model
## Task Generation
### `/project:tm/parse-prd`
- `parse-prd` - Generate tasks from PRD document
- `parse-prd-with-research` - Enhanced parsing with research mode
### `/project:tm/generate`
- `generate-tasks` - Create individual task files from tasks.json
## Task Management
### `/project:tm/list`
- `list-tasks` - Smart listing with natural language filters
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
- `list-tasks-by-status` - Filter by specific status
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/project:tm/sync-readme`
- `sync-readme` - Export tasks to README.md with formatting
### `/project:tm/update`
- `update-task` - Update tasks with natural language
- `update-tasks-from-id` - Update multiple tasks from a starting point
- `update-single-task` - Update specific task
### `/project:tm/add-task`
- `add-task` - Add new task with AI assistance
### `/project:tm/remove-task`
- `remove-task` - Remove task with confirmation
## Subtask Management
### `/project:tm/add-subtask`
- `add-subtask` - Add new subtask to parent
- `convert-task-to-subtask` - Convert existing task to subtask
### `/project:tm/remove-subtask`
- `remove-subtask` - Remove subtask (with optional conversion)
### `/project:tm/clear-subtasks`
- `clear-subtasks` - Clear subtasks from specific task
- `clear-all-subtasks` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/project:tm/analyze-complexity`
- `analyze-complexity` - Analyze and generate expansion recommendations
### `/project:tm/complexity-report`
- `complexity-report` - Display complexity analysis report
### `/project:tm/expand`
- `expand-task` - Break down specific task
- `expand-all-tasks` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/project:tm/next`
- `next-task` - Intelligent next task recommendation
### `/project:tm/show`
- `show-task` - Display detailed task information
### `/project:tm/status`
- `project-status` - Comprehensive project dashboard
## Dependency Management
### `/project:tm/add-dependency`
- `add-dependency` - Add task dependency
### `/project:tm/remove-dependency`
- `remove-dependency` - Remove task dependency
### `/project:tm/validate-dependencies`
- `validate-dependencies` - Check for dependency issues
### `/project:tm/fix-dependencies`
- `fix-dependencies` - Automatically fix dependency problems
## Workflows & Automation
### `/project:tm/workflows`
- `smart-workflow` - Context-aware intelligent workflow execution
- `command-pipeline` - Chain multiple commands together
- `auto-implement-tasks` - Advanced auto-implementation with code generation
## Utilities
### `/project:tm/utils`
- `analyze-project` - Deep project analysis and insights
### `/project:tm/setup`
- `install-taskmaster` - Comprehensive installation guide
- `quick-install-taskmaster` - One-line global installation
## Usage Patterns
### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.

View File

@@ -1,10 +0,0 @@
reviews:
profile: assertive
poem: false
auto_review:
base_branches:
- rc
- beta
- alpha
- production
- next

View File

@@ -8,7 +8,6 @@ GROQ_API_KEY=YOUR_GROQ_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE
# Google Vertex AI Configuration
VERTEX_PROJECT_ID=your-gcp-project-id

View File

@@ -1,21 +1,21 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"provider": "groq",
"modelId": "llama-3.1-8b-instant",
"maxTokens": 131072,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar",
"maxTokens": 8700,
"provider": "groq",
"modelId": "llama-3.3-70b-versatile",
"maxTokens": 32768,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 8192,
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 128000,
"temperature": 0.2
}
},

View File

@@ -0,0 +1,23 @@
# Task ID: 1
# Title: Implement TTS Flag for Taskmaster Commands
# Status: pending
# Dependencies: 16 (Not found)
# Priority: medium
# Description: Add text-to-speech functionality to taskmaster commands with configurable voice options and audio output settings.
# Details:
Implement TTS functionality including:
- Add --tts flag to all relevant taskmaster commands (list, show, generate, etc.)
- Integrate with system TTS engines (Windows SAPI, macOS say command, Linux espeak/festival)
- Create TTS configuration options in the configuration management system
- Add voice selection options (male/female, different languages if available)
- Implement audio output settings (volume, speed, pitch)
- Add TTS-specific error handling for cases where TTS is unavailable
- Create fallback behavior when TTS fails (silent failure or text output)
- Support for reading task titles, descriptions, and status updates aloud
- Add option to read entire task lists or individual task details
- Implement TTS for command confirmations and error messages
- Create TTS output formatting to make spoken text more natural (removing markdown, formatting numbers/dates appropriately)
- Add configuration option to enable/disable TTS globally
# Test Strategy:
Test TTS functionality across different operating systems (Windows, macOS, Linux). Verify that the --tts flag works with all major commands. Test voice configuration options and ensure audio output settings are properly applied. Test error handling when TTS services are unavailable. Verify that text formatting for speech is natural and understandable. Test with various task content types including special characters, code snippets, and long descriptions. Ensure TTS can be disabled and enabled through configuration.
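The "TTS output formatting" item above could be sketched like this (a hypothetical helper covering the markdown-stripping part only; number/date formatting is omitted):

```javascript
// Hypothetical sketch: make task text sound natural when spoken by
// removing markdown syntax before it reaches the TTS engine.
function formatForSpeech(text) {
  return text
    .replace(/```[\s\S]*?```/g, ' code block omitted ') // drop fenced code
    .replace(/[*_`#>]+/g, '')                            // markdown symbols
    .replace(/\[([^\]]+)\]\([^)]*\)/g, '$1')             // links -> label only
    .replace(/\s+/g, ' ')
    .trim();
}
```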

File diff suppressed because one or more lines are too long

14
.vscode/settings.json vendored
View File

@@ -1,14 +0,0 @@
{
"json.schemas": [
{
"fileMatch": ["src/prompts/*.json"],
"url": "./src/prompts/schemas/prompt-template.schema.json"
}
],
"files.associations": {
"src/prompts/*.json": "json"
},
"json.format.enable": true,
"json.validate.enable": true
}

View File

@@ -1,55 +1,5 @@
# task-master-ai
## 0.20.0
### Minor Changes
- [#950](https://github.com/eyaltoledano/claude-task-master/pull/950) [`699e9ee`](https://github.com/eyaltoledano/claude-task-master/commit/699e9eefb5d687b256e9402d686bdd5e3a358b4a) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add support for xAI Grok 4 model
- Add grok-4 model to xAI provider with $3/$15 per 1M token pricing
- Enable main, fallback, and research roles for grok-4
- Max tokens set to 131,072 (matching other xAI models)
- [#946](https://github.com/eyaltoledano/claude-task-master/pull/946) [`5f009a5`](https://github.com/eyaltoledano/claude-task-master/commit/5f009a5e1fc10e37be26f5135df4b7f44a9c5320) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add stricter validation and clearer feedback for task priority when adding new tasks
- if a task priority is invalid, it will default to medium
- made task priority case-insensitive, essentially making HIGH and high the same value
- [#863](https://github.com/eyaltoledano/claude-task-master/pull/863) [`b530657`](https://github.com/eyaltoledano/claude-task-master/commit/b53065713c8da0ae6f18eb2655397aa975004923) Thanks [@OrenMe](https://github.com/OrenMe)! - Add support for MCP Sampling as AI provider, requires no API key, uses the client LLM provider
- [#930](https://github.com/eyaltoledano/claude-task-master/pull/930) [`98d1c97`](https://github.com/eyaltoledano/claude-task-master/commit/98d1c974361a56ddbeb772b1272986b9d3913459) Thanks [@OmarElKadri](https://github.com/OmarElKadri)! - Added Groq provider support
### Patch Changes
- [#958](https://github.com/eyaltoledano/claude-task-master/pull/958) [`6c88a4a`](https://github.com/eyaltoledano/claude-task-master/commit/6c88a4a749083e3bd2d073a9240799771774495a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Recover from `@anthropic-ai/claude-code` JSON truncation bug that caused Task Master to crash when handling large (>8 kB) structured responses. The CLI/SDK still truncates, but Task Master now detects the error, preserves buffered text, and returns a usable response instead of throwing.
- [#958](https://github.com/eyaltoledano/claude-task-master/pull/958) [`3334e40`](https://github.com/eyaltoledano/claude-task-master/commit/3334e409ae659d5223bb136ae23fd22c5e219073) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Updating dependency ai-sdk-provider-gemini-cli to 0.0.4 to address breaking change Google made to Gemini CLI and add better 'api-key' in addition to 'gemini-api-key' AI-SDK compatibility.
- [#853](https://github.com/eyaltoledano/claude-task-master/pull/853) [`95c299d`](https://github.com/eyaltoledano/claude-task-master/commit/95c299df642bd8e6d75f8fa5110ac705bcc72edf) Thanks [@joedanz](https://github.com/joedanz)! - Unify and streamline profile system architecture for improved maintainability
## 0.20.0-rc.0
### Minor Changes
- [#950](https://github.com/eyaltoledano/claude-task-master/pull/950) [`699e9ee`](https://github.com/eyaltoledano/claude-task-master/commit/699e9eefb5d687b256e9402d686bdd5e3a358b4a) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add support for xAI Grok 4 model
- Add grok-4 model to xAI provider with $3/$15 per 1M token pricing
- Enable main, fallback, and research roles for grok-4
- Max tokens set to 131,072 (matching other xAI models)
- [#946](https://github.com/eyaltoledano/claude-task-master/pull/946) [`5f009a5`](https://github.com/eyaltoledano/claude-task-master/commit/5f009a5e1fc10e37be26f5135df4b7f44a9c5320) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add stricter validation and clearer feedback for task priority when adding new tasks
- if a task priority is invalid, it will default to medium
- made task priority case-insensitive, essentially making HIGH and high the same value
- [#863](https://github.com/eyaltoledano/claude-task-master/pull/863) [`b530657`](https://github.com/eyaltoledano/claude-task-master/commit/b53065713c8da0ae6f18eb2655397aa975004923) Thanks [@OrenMe](https://github.com/OrenMe)! - Add support for MCP Sampling as AI provider, requires no API key, uses the client LLM provider
- [#930](https://github.com/eyaltoledano/claude-task-master/pull/930) [`98d1c97`](https://github.com/eyaltoledano/claude-task-master/commit/98d1c974361a56ddbeb772b1272986b9d3913459) Thanks [@OmarElKadri](https://github.com/OmarElKadri)! - Added Groq provider support
### Patch Changes
- [#916](https://github.com/eyaltoledano/claude-task-master/pull/916) [`6c88a4a`](https://github.com/eyaltoledano/claude-task-master/commit/6c88a4a749083e3bd2d073a9240799771774495a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Recover from `@anthropic-ai/claude-code` JSON truncation bug that caused Task Master to crash when handling large (>8 kB) structured responses. The CLI/SDK still truncates, but Task Master now detects the error, preserves buffered text, and returns a usable response instead of throwing.
- [#916](https://github.com/eyaltoledano/claude-task-master/pull/916) [`3334e40`](https://github.com/eyaltoledano/claude-task-master/commit/3334e409ae659d5223bb136ae23fd22c5e219073) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Updating dependency ai-sdk-provider-gemini-cli to 0.0.4 to address breaking change Google made to Gemini CLI and add better 'api-key' in addition to 'gemini-api-key' AI-SDK compatibility.
- [#853](https://github.com/eyaltoledano/claude-task-master/pull/853) [`95c299d`](https://github.com/eyaltoledano/claude-task-master/commit/95c299df642bd8e6d75f8fa5110ac705bcc72edf) Thanks [@joedanz](https://github.com/joedanz)! - Unify and streamline profile system architecture for improved maintainability
## 0.19.0
### Minor Changes

View File

@@ -25,7 +25,11 @@ For more detailed information, check out the documentation in the `docs` directo
#### Quick Install for Cursor 1.0+ (One-Click)
[![Add task-master-ai MCP server to Cursor](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D)
📋 Click the copy button (top-right of code block) then paste into your browser:
```text
cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQo=
```
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
@@ -69,7 +73,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
```json
{
"mcpServers": {
"task-master-ai": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
@@ -78,7 +82,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"GROQ_API_KEY": "YOUR_GROQ_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
@@ -98,7 +101,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
```json
{
"servers": {
"task-master-ai": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
@@ -107,11 +110,9 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"GROQ_API_KEY": "YOUR_GROQ_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
},
"type": "stdio"
}

View File

@@ -1,15 +0,0 @@
{
"name": "extension",
"version": "0.20.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": "",
"devDependencies": {
"typescript": "^5.8.3"
}
}

View File

@@ -1 +0,0 @@
console.log('hello world');

View File

@@ -1,113 +0,0 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig to read more about this file */
/* Projects */
// "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "es2016" /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */,
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
// "jsx": "preserve", /* Specify what JSX code is generated. */
// "libReplacement": true, /* Enable lib replacement. */
// "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
// "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
// "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
/* Modules */
"module": "commonjs" /* Specify what module code is generated. */,
// "rootDir": "./", /* Specify the root folder within your source files. */
// "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
// "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
// "rewriteRelativeImportExtensions": true, /* Rewrite '.ts', '.tsx', '.mts', and '.cts' file extensions in relative import paths to their JavaScript equivalent in output files. */
// "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */
// "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */
// "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
// "noUncheckedSideEffectImports": true, /* Check side effect imports. */
// "resolveJsonModule": true, /* Enable importing .json files. */
// "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */
// "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
/* Emit */
// "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
// "declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
// "sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
// "outDir": "./", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
// "isolatedDeclarations": true, /* Require sufficient annotation on exports so other tools can trivially generate declaration files. */
// "erasableSyntaxOnly": true, /* Do not allow runtime constructs that are not part of ECMAScript. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */,
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true /* Ensure that casing is correct in imports. */,
/* Type Checking */
"strict": true /* Enable all strict type-checking options. */,
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "strictBuiltinIteratorReturn": true, /* Built-in iterators are instantiated with a 'TReturn' type of 'undefined' instead of 'any'. */
// "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
// "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true /* Skip type checking all .d.ts files. */
}
}

View File

@@ -1,4 +1,4 @@
# Task Master AI - Agent Integration Guide
# Task Master AI - Claude Code Integration Guide
## Essential Commands

View File

@@ -1,12 +1,10 @@
# API Keys (Required to enable respective provider)
ANTHROPIC_API_KEY="your_anthropic_api_key_here" # Required: Format: sk-ant-api03-...
PERPLEXITY_API_KEY="your_perplexity_api_key_here" # Optional: Format: pplx-...
OPENAI_API_KEY="your_openai_api_key_here" # Optional, for OpenAI models. Format: sk-proj-...
OPENAI_API_KEY="your_openai_api_key_here" # Optional, for OpenAI/OpenRouter models. Format: sk-proj-...
GOOGLE_API_KEY="your_google_api_key_here" # Optional, for Google Gemini models.
MISTRAL_API_KEY="your_mistral_key_here" # Optional, for Mistral AI models.
XAI_API_KEY="YOUR_XAI_KEY_HERE" # Optional, for xAI AI models.
GROQ_API_KEY="YOUR_GROQ_KEY_HERE" # Optional, for Groq models.
OPENROUTER_API_KEY="YOUR_OPENROUTER_KEY_HERE" # Optional, for OpenRouter models.
AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmaster/config.json).
OLLAMA_API_KEY="your_ollama_api_key_here" # Optional: For remote Ollama servers that require authentication.
GITHUB_API_KEY="your_github_api_key_here" # Optional: For GitHub import/export features. Format: ghp_... or github_pat_...
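The keys above are read from the environment (or an MCP session) at runtime. A minimal sketch of that lookup, with the fallback order assumed from how `resolveEnvVariable` and `isApiKeySet` are used elsewhere in this changeset (the real signatures may differ):

```javascript
// Hedged sketch: prefer process.env, then an MCP session's env, else null.
function resolveEnvVariable(key, session = null) {
	return process.env[key] ?? session?.env?.[key] ?? null;
}

// A provider counts as enabled when its key resolves to a non-empty string.
function isApiKeySet(key, session = null) {
	const value = resolveEnvVariable(key, session);
	return typeof value === 'string' && value.length > 0;
}
```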

View File

@@ -4,7 +4,30 @@ Taskmaster uses two primary methods for configuration:
1. **`.taskmaster/config.json` File (Recommended - New Structure)**
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
5. **Usage Requirements**:
- Must be running in an MCP context (session must be available)
- Session must provide `clientCapabilities.sampling` capability
6. **Best Practices**:
- Always configure a non-MCP fallback provider
- Use `mcp` for main/research roles when in MCP environments
- Test sampling capability before production use
7. **Setup Commands**:
```bash
# Set MCP provider for main role
task-master models set-main --provider mcp --model claude-3-5-sonnet-20241022
# Set MCP provider for research role
task-master models set-research --provider mcp --model claude-3-opus-20240229
# Verify configuration
task-master models list
```
8. **Troubleshooting**:
- "MCP provider requires session context" → Ensure running in MCP environment
- See the [MCP Provider Guide](./mcp-provider-guide.md) for detailed troubleshooting
- **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
- **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
@@ -50,7 +73,6 @@ Taskmaster uses two primary methods for configuration:
}
```
> For MCP-specific setup and troubleshooting, see [Provider-Specific Configuration](#provider-specific-configuration).
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
@@ -176,6 +198,8 @@ node scripts/init.js
### MCP (Model Context Protocol) Provider
The MCP provider enables Task Master to use MCP servers as AI providers. This is particularly useful when running Task Master within MCP-compatible development environments like Claude Desktop or Cursor.
1. **Prerequisites**:
- An active MCP session with sampling capability
- MCP client with sampling support (e.g. VS Code)
@@ -214,24 +238,12 @@ node scripts/init.js
- Must be running in an MCP context (session must be available)
- Session must provide `clientCapabilities.sampling` capability
6. **Best Practices**:
5. **Best Practices**:
- Always configure a non-MCP fallback provider
- Use `mcp` for main/research roles when in MCP environments
- Test sampling capability before production use
7. **Setup Commands**:
```bash
# Set MCP provider for main role
task-master models set-main --provider mcp --model claude-3-5-sonnet-20241022
# Set MCP provider for research role
task-master models set-research --provider mcp --model claude-3-opus-20240229
# Verify configuration
task-master models list
```
8. **Troubleshooting**:
6. **Troubleshooting**:
- "MCP provider requires session context" → Ensure running in MCP environment
- See the [MCP Provider Guide](./mcp-provider-guide.md) for detailed troubleshooting
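The usage requirements above reduce to a capability check before selecting the MCP provider. A hedged sketch (the field path is taken from the requirement wording, not from the actual source):

```javascript
// Returns true only when a session exists and advertises sampling support.
function canUseMcpProvider(session) {
	return Boolean(session?.clientCapabilities?.sampling);
}

// Pick mcp when available, otherwise fall back to a configured provider,
// matching the "always configure a non-MCP fallback" best practice.
function pickProvider(session, fallback) {
	return canUseMcpProvider(session) ? 'mcp' : fallback;
}
```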

View File

@@ -1,18 +1,24 @@
# Available Models as of July 16, 2025
# Available Models as of July 10, 2025
## Main Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
| azure | gpt-4o | 0.332 | 2.5 | 10 |
| azure | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| azure | gpt-4-1 | — | 2 | 10 |
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o1 | 0.489 | 15 | 60 |
| openai | o3 | 0.5 | 2 | 8 |
@@ -29,22 +35,19 @@
| google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
| google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
| google | gemini-2.0-flash-lite | — | — | — |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
| xai | grok-4 | — | 3 | 15 |
| groq | moonshotai/kimi-k2-instruct | 0.66 | 1 | 3 |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-3.1-8b-instant | 0.32 | 0.05 | 0.08 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | mixtral-8x7b-32768 | 0.35 | 0.24 | 0.24 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| groq | gemma2-9b-it | 0.3 | 0.2 | 0.2 |
| groq | whisper-large-v3 | — | 0.11 | 0 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
| xai | grok-4 | — | 3 | 15 |
| ollama | devstral:latest | — | 0 | 0 |
| ollama | qwen3:latest | — | 0 | 0 |
| ollama | qwen3:14b | — | 0 | 0 |
| ollama | qwen3:32b | — | 0 | 0 |
| ollama | mistral-small3.1:latest | — | 0 | 0 |
| ollama | llama3.3:latest | — | 0 | 0 |
| ollama | phi4:latest | — | 0 | 0 |
| openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
| openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
| openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
@@ -70,36 +73,39 @@
| openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
| ollama | devstral:latest | | 0 | 0 |
| ollama | qwen3:latest | | 0 | 0 |
| ollama | qwen3:14b | | 0 | 0 |
| ollama | qwen3:32b | | 0 | 0 |
| ollama | mistral-small3.1:latest | | 0 | 0 |
| ollama | llama3.3:latest | | 0 | 0 |
| ollama | phi4:latest | | 0 | 0 |
| azure | gpt-4o | 0.332 | 2.5 | 10 |
| azure | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| azure | gpt-4-1 | — | 2 | 10 |
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
## Research Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | -------------------------------------------- | --------- | ---------- | ----------- |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-3.1-8b-instant | 0.32 | 0.05 | 0.08 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | mixtral-8x7b-32768 | 0.35 | 0.24 | 0.24 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| groq | gemma2-9b-it | 0.3 | 0.2 | 0.2 |
| groq | whisper-large-v3 | — | 0.11 | 0 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
## Research Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | -------------------------------------------- | --------- | ---------- | ----------- |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
| xai | grok-4 | — | 3 | 15 |
@@ -108,32 +114,31 @@
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
## Fallback Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
## Fallback Models
| Provider | Model Name | SWE Score | Input Cost | Output Cost |
| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
| anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
| anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| azure | gpt-4o | 0.332 | 2.5 | 10 |
| azure | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| azure | gpt-4-1 | — | 2 | 10 |
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o3 | 0.5 | 2 | 8 |
| openai | o4-mini | 0.45 | 1.1 | 4.4 |
@@ -142,19 +147,18 @@
| google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
| google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
| google | gemini-2.0-flash-lite | — | — | — |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |
| xai | grok-4 | — | 3 | 15 |
| groq | moonshotai/kimi-k2-instruct | 0.66 | 1 | 3 |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-3.1-8b-instant | 0.32 | 0.05 | 0.08 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | mixtral-8x7b-32768 | 0.35 | 0.24 | 0.24 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | gemma2-9b-it | 0.3 | 0.2 | 0.2 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| ollama | devstral:latest | — | 0 | 0 |
| ollama | qwen3:latest | | 0 | 0 |
| ollama | qwen3:14b | | 0 | 0 |
| ollama | qwen3:32b | | 0 | 0 |
| ollama | mistral-small3.1:latest | — | 0 | 0 |
| ollama | llama3.3:latest | | 0 | 0 |
| ollama | phi4:latest | | 0 | 0 |
| openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
| openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
| openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
@@ -178,21 +182,15 @@
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
| ollama | devstral:latest | | 0 | 0 |
| ollama | qwen3:latest | | 0 | 0 |
| ollama | qwen3:14b | | 0 | 0 |
| ollama | qwen3:32b | | 0 | 0 |
| ollama | mistral-small3.1:latest | | 0 | 0 |
| ollama | llama3.3:latest | | 0 | 0 |
| ollama | phi4:latest | | 0 | 0 |
| azure | gpt-4o | 0.332 | 2.5 | 10 |
| azure | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| azure | gpt-4-1 | — | 2 | 10 |
| bedrock | us.anthropic.claude-3-haiku-20240307-v1:0 | 0.4 | 0.25 | 1.25 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
| groq | llama-3.3-70b-versatile | 0.55 | 0.59 | 0.79 |
| groq | llama-3.1-8b-instant | 0.32 | 0.05 | 0.08 |
| groq | llama-4-scout | 0.45 | 0.11 | 0.34 |
| groq | llama-4-maverick | 0.52 | 0.5 | 0.77 |
| groq | mixtral-8x7b-32768 | 0.35 | 0.24 | 0.24 |
| groq | qwen-qwq-32b-preview | 0.4 | 0.18 | 0.18 |
| groq | gemma2-9b-it | 0.3 | 0.2 | 0.2 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
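The Input/Output Cost columns are dollar prices per token volume; assuming the common per-million-token unit (an assumption, since the tables do not state it), a request cost can be estimated as:

```javascript
// Cost in USD for one request, given token counts and table prices
// (prices assumed to be USD per 1M tokens).
function estimateCost(inputTokens, outputTokens, inputPricePerM, outputPricePerM) {
	return (inputTokens / 1e6) * inputPricePerM + (outputTokens / 1e6) * outputPricePerM;
}
```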

View File

@@ -125,7 +125,8 @@ export async function addTaskDirect(args, log, context = {}) {
},
'json', // outputFormat
manualTaskData, // Pass the manual task data
false // research flag is false for manual creation
false, // research flag is false for manual creation
projectRoot // Pass projectRoot
);
newTaskId = result.newTaskId;
telemetryData = result.telemetryData;

package-lock.json (generated) — 2977 lines changed; file diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.20.0",
"version": "0.19.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",
@@ -9,7 +9,6 @@
"task-master-mcp": "mcp-server/server.js",
"task-master-ai": "mcp-server/server.js"
},
"workspaces": ["apps/*", "."],
"scripts": {
"test": "node --experimental-vm-modules node_modules/.bin/jest",
"test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
@@ -55,8 +54,6 @@
"@inquirer/search": "^3.0.15",
"@openrouter/ai-sdk-provider": "^0.4.5",
"ai": "^4.3.10",
"ajv": "^8.17.1",
"ajv-formats": "^3.0.1",
"boxen": "^8.0.1",
"chalk": "^5.4.1",
"cli-highlight": "^2.1.11",

View File

@@ -8,48 +8,47 @@
// --- Core Dependencies ---
import {
MODEL_MAP,
getAzureBaseURL,
getBaseUrlForRole,
getBedrockBaseURL,
getDebugFlag,
getFallbackModelId,
getFallbackProvider,
getMainModelId,
getMainProvider,
getOllamaBaseURL,
getParametersForRole,
getResearchModelId,
getMainModelId,
getResearchProvider,
getResearchModelId,
getFallbackProvider,
getFallbackModelId,
getParametersForRole,
getResponseLanguage,
getUserId,
getVertexLocation,
getVertexProjectId,
MODEL_MAP,
getDebugFlag,
getBaseUrlForRole,
isApiKeySet,
getOllamaBaseURL,
getAzureBaseURL,
getBedrockBaseURL,
getVertexProjectId,
getVertexLocation,
providersWithoutApiKeys
} from './config-manager.js';
import {
findProjectRoot,
getCurrentTag,
log,
resolveEnvVariable
findProjectRoot,
resolveEnvVariable,
getCurrentTag
} from './utils.js';
// Import provider classes
import {
AnthropicAIProvider,
AzureProvider,
BedrockAIProvider,
ClaudeCodeProvider,
GeminiCliProvider,
GoogleAIProvider,
GroqProvider,
OllamaAIProvider,
OpenAIProvider,
OpenRouterAIProvider,
PerplexityAIProvider,
GoogleAIProvider,
OpenAIProvider,
XAIProvider,
OpenRouterAIProvider,
OllamaAIProvider,
BedrockAIProvider,
AzureProvider,
VertexAIProvider,
XAIProvider
ClaudeCodeProvider,
GeminiCliProvider
} from '../../src/ai-providers/index.js';
// Import the provider registry
@@ -62,7 +61,6 @@ const PROVIDERS = {
google: new GoogleAIProvider(),
openai: new OpenAIProvider(),
xai: new XAIProvider(),
groq: new GroqProvider(),
openrouter: new OpenRouterAIProvider(),
ollama: new OllamaAIProvider(),
bedrock: new BedrockAIProvider(),

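The import and `PROVIDERS` hunks above show a simple registry pattern: one instance per provider name, looked up by the name from config. A minimal illustration (the class and error text are illustrative, not from the source):

```javascript
// Toy provider standing in for AnthropicAIProvider, GroqProvider, etc.
class EchoProvider {
	generateText(prompt) {
		return `echo:${prompt}`;
	}
}

const PROVIDERS = { echo: new EchoProvider() };

function getProvider(name) {
	const provider = PROVIDERS[name];
	if (!provider) {
		throw new Error(`Unsupported provider: ${name}`);
	}
	return provider;
}
```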
View File

@@ -805,7 +805,7 @@ function registerCommands(programInstance) {
'-i, --input <file>',
'Path to the PRD file (alternative to positional argument)'
)
.option('-o, --output <file>', 'Output file path')
.option('-o, --output <file>', 'Output file path', TASKMASTER_TASKS_FILE)
.option(
'-n, --num-tasks <number>',
'Number of tasks to generate',
@@ -825,18 +825,14 @@ function registerCommands(programInstance) {
// Initialize TaskMaster
let taskMaster;
try {
const initOptions = {
prdPath: file || options.input || true
};
// Only include tasksPath if output is explicitly specified
if (options.output) {
initOptions.tasksPath = options.output;
}
taskMaster = initTaskMaster(initOptions);
taskMaster = initTaskMaster({
prdPath: file || options.input || true,
tasksPath: options.output || true
});
} catch (error) {
console.log(
boxen(
`${chalk.white.bold('Parse PRD Help')}\n\n${chalk.cyan('Usage:')}\n task-master parse-prd <prd-file.txt> [options]\n\n${chalk.cyan('Options:')}\n -i, --input <file> Path to the PRD file (alternative to positional argument)\n -o, --output <file> Output file path (default: .taskmaster/tasks/tasks.json)\n -n, --num-tasks <number> Number of tasks to generate (default: 10)\n -f, --force Skip confirmation when overwriting existing tasks\n --append Append new tasks to existing tasks.json instead of overwriting\n -r, --research Use Perplexity AI for research-backed task generation\n\n${chalk.cyan('Example:')}\n task-master parse-prd requirements.txt --num-tasks 15\n task-master parse-prd --input=requirements.txt\n task-master parse-prd --force\n task-master parse-prd requirements_v2.txt --append\n task-master parse-prd requirements.txt --research\n\n${chalk.yellow('Note: This command will:')}\n 1. Look for a PRD file at ${TASKMASTER_DOCS_DIR}/PRD.md by default\n 2. Use the file specified by --input or positional argument if provided\n 3. Generate tasks from the PRD and either:\n - Overwrite any existing tasks.json file (default)\n - Append to existing tasks.json if --append is used`,
`${chalk.white.bold('Parse PRD Help')}\n\n${chalk.cyan('Usage:')}\n task-master parse-prd <prd-file.txt> [options]\n\n${chalk.cyan('Options:')}\n -i, --input <file> Path to the PRD file (alternative to positional argument)\n -o, --output <file> Output file path (default: "${TASKMASTER_TASKS_FILE}")\n -n, --num-tasks <number> Number of tasks to generate (default: 10)\n -f, --force Skip confirmation when overwriting existing tasks\n --append Append new tasks to existing tasks.json instead of overwriting\n -r, --research Use Perplexity AI for research-backed task generation\n\n${chalk.cyan('Example:')}\n task-master parse-prd requirements.txt --num-tasks 15\n task-master parse-prd --input=requirements.txt\n task-master parse-prd --force\n task-master parse-prd requirements_v2.txt --append\n task-master parse-prd requirements.txt --research\n\n${chalk.yellow('Note: This command will:')}\n 1. Look for a PRD file at ${TASKMASTER_DOCS_DIR}/PRD.md by default\n 2. Use the file specified by --input or positional argument if provided\n 3. Generate tasks from the PRD and either:\n - Overwrite any existing tasks.json file (default)\n - Append to existing tasks.json if --append is used`,
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
)
);
@@ -916,17 +912,18 @@ function registerCommands(programInstance) {
}
spinner = ora('Parsing PRD and generating tasks...\n').start();
// Handle case where getTasksPath() returns null
const outputPath =
taskMaster.getTasksPath() ||
path.join(taskMaster.getProjectRoot(), TASKMASTER_TASKS_FILE);
await parsePRD(taskMaster.getPrdPath(), outputPath, numTasks, {
await parsePRD(
taskMaster.getPrdPath(),
taskMaster.getTasksPath(),
numTasks,
{
append: useAppend,
force: useForce,
research: research,
projectRoot: taskMaster.getProjectRoot(),
tag: tag
});
}
);
spinner.succeed('Tasks generated successfully!');
} catch (error) {
if (spinner) {
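The hunk above replaces a null-check fallback with `tasksPath: options.output || true`; the dropped fallback amounts to a small helper (the default path value is taken from the command's help text):

```javascript
// Sketch of the old fallback the diff removes: use the explicit output
// path when given, otherwise derive the default under the project root.
const TASKMASTER_TASKS_FILE = '.taskmaster/tasks/tasks.json'; // per the help text
function resolveTasksPath(explicitPath, projectRoot) {
	return explicitPath ?? `${projectRoot}/${TASKMASTER_TASKS_FILE}`;
}
```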
@@ -1019,7 +1016,7 @@ function registerCommands(programInstance) {
`Updating tasks from ID >= ${fromId} with prompt: "${prompt}"`
)
);
console.log(chalk.blue(`Tasks file: ${tasksPath}`));
console.log(chalk.blue(`Tasks file: ${taskMaster.getTasksPath()}`));
if (useResearch) {
console.log(
@@ -1500,16 +1497,10 @@ function registerCommands(programInstance) {
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
// Initialize TaskMaster
const initOptions = {
tasksPath: options.file || true
};
// Only pass complexityReportPath if user provided a custom path
if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
initOptions.complexityReportPath = options.report;
}
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
complexityReportPath: options.report || false
});
const statusFilter = options.status;
const withSubtasks = options.withSubtasks || false;
@@ -1640,7 +1631,11 @@ function registerCommands(programInstance) {
.description(
`Analyze tasks and generate expansion recommendations${chalk.reset('')}`
)
.option('-o, --output <file>', 'Output file path for the report')
.option(
'-o, --output <file>',
'Output file path for the report',
COMPLEXITY_REPORT_FILE
)
.option(
'-m, --model <model>',
'LLM model to use for analysis (defaults to configured model)'
@@ -1668,14 +1663,10 @@ function registerCommands(programInstance) {
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
// Initialize TaskMaster
const initOptions = {
tasksPath: options.file || true // Tasks file is required to analyze
};
// Only include complexityReportPath if output is explicitly specified
if (options.output) {
initOptions.complexityReportPath = options.output;
}
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
complexityReportPath: options.output || true
});
const tag = options.tag;
const modelOverride = options.model;
@@ -1690,13 +1681,11 @@ function registerCommands(programInstance) {
displayCurrentTagIndicator(targetTag);
// Tag-aware output file naming: master -> task-complexity-report.json, other tags -> task-complexity-report_tagname.json
const baseOutputPath =
taskMaster.getComplexityReportPath() ||
path.join(taskMaster.getProjectRoot(), COMPLEXITY_REPORT_FILE);
const baseOutputPath = taskMaster.getComplexityReportPath();
const outputPath =
options.output === COMPLEXITY_REPORT_FILE && targetTag !== 'master'
? baseOutputPath.replace('.json', `_${targetTag}.json`)
: options.output || baseOutputPath;
: baseOutputPath;
console.log(
chalk.blue(
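The tag-aware naming rule in the comment above (master → task-complexity-report.json, other tags → task-complexity-report_tagname.json) can be expressed as a small helper:

```javascript
// Appends _<tag> before the .json extension for non-master tags.
function tagAwareReportPath(basePath, tag) {
	return tag === 'master' ? basePath : basePath.replace('.json', `_${tag}.json`);
}
```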
@@ -1776,11 +1765,6 @@ function registerCommands(programInstance) {
)
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (prompt, options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true
});
// Parameter validation
if (!prompt || typeof prompt !== 'string' || prompt.trim().length === 0) {
console.error(
@@ -2222,8 +2206,6 @@ ${result.result}
tasksPath: options.file || true
});
const projectRoot = taskMaster.getProjectRoot();
// Show current tag context
displayCurrentTagIndicator(
options.tag || getCurrentTag(taskMaster.getProjectRoot()) || 'master'
@@ -2353,14 +2335,10 @@ ${result.result}
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (taskId, options) => {
// Initialize TaskMaster
const initOptions = {
tasksPath: options.file || true
};
// Only pass complexityReportPath if user provided a custom path
if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
initOptions.complexityReportPath = options.report;
}
const taskMaster = initTaskMaster(initOptions);
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
complexityReportPath: options.report || false
});
const idArg = taskId || options.id;
const statusFilter = options.status;
@@ -3477,11 +3455,8 @@ Examples:
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || false
tasksPath: options.file || true
});
const projectRoot = taskMaster.getProjectRoot();
// Validate flags: cannot use multiple provider flags simultaneously
const providerFlags = [
options.openrouter,
@@ -3510,7 +3485,7 @@ Examples:
// Action 1: Run Interactive Setup
console.log(chalk.blue('Starting interactive model setup...')); // Added feedback
try {
await runInteractiveSetup(taskMaster.getProjectRoot());
await runInteractiveSetup(projectRoot);
// runInteractiveSetup logs its own completion/error messages
} catch (setupError) {
console.error(

View File

@@ -1,21 +1,18 @@
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import chalk from 'chalk';
import { z } from 'zod';
import { AI_COMMAND_NAMES } from '../../src/constants/commands.js';
import { fileURLToPath } from 'url';
import { log, findProjectRoot, resolveEnvVariable, isEmpty } from './utils.js';
import { LEGACY_CONFIG_FILE } from '../../src/constants/paths.js';
import { findConfigPath } from '../../src/utils/path-utils.js';
import {
LEGACY_CONFIG_FILE,
TASKMASTER_DIR
} from '../../src/constants/paths.js';
import {
ALL_PROVIDERS,
VALIDATED_PROVIDERS,
CUSTOM_PROVIDERS,
CUSTOM_PROVIDERS_ARRAY,
VALIDATED_PROVIDERS
ALL_PROVIDERS
} from '../../src/constants/providers.js';
import { findConfigPath } from '../../src/utils/path-utils.js';
import { findProjectRoot, isEmpty, log, resolveEnvVariable } from './utils.js';
import { AI_COMMAND_NAMES } from '../../src/constants/commands.js';
// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
@@ -102,30 +99,17 @@ function _loadAndValidateConfig(explicitRoot = null) {
if (rootToUse) {
configSource = `found root (${rootToUse})`;
} else {
// No root found, use current working directory as fallback
// This prevents infinite loops during initialization
rootToUse = process.cwd();
configSource = `current directory (${rootToUse}) - no project markers found`;
// No root found, return defaults immediately
return defaults;
}
}
// ---> End find project root logic <---
// --- Find configuration file ---
let configPath = null;
// --- Find configuration file using centralized path utility ---
const configPath = findConfigPath(null, { projectRoot: rootToUse });
let config = { ...defaults }; // Start with a deep copy of defaults
let configExists = false;
// During initialization (no project markers), skip config file search entirely
const hasProjectMarkers =
fs.existsSync(path.join(rootToUse, TASKMASTER_DIR)) ||
fs.existsSync(path.join(rootToUse, LEGACY_CONFIG_FILE));
if (hasProjectMarkers) {
// Only try to find config if we have project markers
// This prevents the repeated warnings during init
configPath = findConfigPath(null, { projectRoot: rootToUse });
}
if (configPath) {
configExists = true;
const isLegacy = configPath.endsWith(LEGACY_CONFIG_FILE);
@@ -215,23 +199,12 @@ function _loadAndValidateConfig(explicitRoot = null) {
)
);
} else {
// Don't warn about missing config during initialization
// Only warn if this looks like an existing project (has .taskmaster dir or legacy config marker)
const hasTaskmasterDir = fs.existsSync(
path.join(rootToUse, TASKMASTER_DIR)
);
const hasLegacyMarker = fs.existsSync(
path.join(rootToUse, LEGACY_CONFIG_FILE)
);
if (hasTaskmasterDir || hasLegacyMarker) {
console.warn(
chalk.yellow(
`Warning: Configuration file not found at derived root (${rootToUse}). Using defaults.`
)
);
}
}
// Keep config as defaults
config = { ...defaults };
configSource = `defaults (no config file found at ${rootToUse})`;
@@ -641,7 +614,6 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
groq: 'GROQ_API_KEY',
vertex: 'GOOGLE_API_KEY', // Vertex uses the same key as Google
'claude-code': 'CLAUDE_CODE_API_KEY', // Not actually used, but included for consistency
bedrock: 'AWS_ACCESS_KEY_ID' // Bedrock uses AWS credentials
@@ -727,10 +699,6 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
apiKeyToCheck = mcpEnv.XAI_API_KEY;
placeholderValue = 'YOUR_XAI_API_KEY_HERE';
break;
case 'groq':
apiKeyToCheck = mcpEnv.GROQ_API_KEY;
placeholderValue = 'YOUR_GROQ_API_KEY_HERE';
break;
case 'ollama':
return true; // No key needed
case 'claude-code':

View File

@@ -4,8 +4,7 @@
*/
// Export all modules
export * from './ui.js';
export * from './utils.js';
export * from './commands.js';
export * from './ui.js';
export * from './task-manager.js';
export * from './prompt-manager.js';
export * from './commands.js';

View File

@@ -1,509 +0,0 @@
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { log } from './utils.js';
import Ajv from 'ajv';
import addFormats from 'ajv-formats';
/**
* Manages prompt templates for AI interactions
*/
export class PromptManager {
constructor() {
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
this.promptsDir = path.join(__dirname, '..', '..', 'src', 'prompts');
this.cache = new Map();
this.setupValidation();
}
/**
* Set up JSON schema validation
* @private
*/
setupValidation() {
this.ajv = new Ajv({ allErrors: true, strict: false });
addFormats(this.ajv);
try {
// Load schema from src/prompts/schemas
const schemaPath = path.join(
this.promptsDir,
'schemas',
'prompt-template.schema.json'
);
const schemaContent = fs.readFileSync(schemaPath, 'utf-8');
const schema = JSON.parse(schemaContent);
this.validatePrompt = this.ajv.compile(schema);
log('info', '✓ JSON schema validation enabled');
} catch (error) {
log('warn', `⚠ Schema validation disabled: ${error.message}`);
this.validatePrompt = () => true; // Fallback to no validation
}
}
/**
* Load a prompt template and render it with variables
* @param {string} promptId - The prompt template ID
* @param {Object} variables - Variables to inject into the template
* @param {string} [variantKey] - Optional specific variant to use
* @returns {{systemPrompt: string, userPrompt: string, metadata: Object}}
*/
loadPrompt(promptId, variables = {}, variantKey = null) {
try {
// Check cache first
const cacheKey = `${promptId}-${JSON.stringify(variables)}-${variantKey}`;
if (this.cache.has(cacheKey)) {
return this.cache.get(cacheKey);
}
// Load template
const template = this.loadTemplate(promptId);
// Validate parameters if schema validation is available
if (this.validatePrompt && this.validatePrompt !== true) {
this.validateParameters(template, variables);
}
// Select the variant - use specified key or select based on conditions
const variant = variantKey
? { ...template.prompts[variantKey], name: variantKey }
: this.selectVariant(template, variables);
// Render the prompts with variables
const rendered = {
systemPrompt: this.renderTemplate(variant.system, variables),
userPrompt: this.renderTemplate(variant.user, variables),
metadata: {
templateId: template.id,
version: template.version,
variant: variant.name || 'default',
parameters: variables
}
};
// Cache the result
this.cache.set(cacheKey, rendered);
return rendered;
} catch (error) {
log('error', `Failed to load prompt ${promptId}: ${error.message}`);
throw error;
}
}
/**
* Load a prompt template from disk
* @private
*/
loadTemplate(promptId) {
const templatePath = path.join(this.promptsDir, `${promptId}.json`);
try {
const content = fs.readFileSync(templatePath, 'utf-8');
const template = JSON.parse(content);
// Schema validation if available (do this first for detailed errors)
if (this.validatePrompt && this.validatePrompt !== true) {
const valid = this.validatePrompt(template);
if (!valid) {
const errors = this.validatePrompt.errors
.map((err) => `${err.instancePath || 'root'}: ${err.message}`)
.join(', ');
throw new Error(`Schema validation failed: ${errors}`);
}
} else {
// Fallback basic validation if no schema validation available
if (!template.id || !template.prompts || !template.prompts.default) {
throw new Error(
'Invalid template structure: missing required fields (id, prompts.default)'
);
}
}
return template;
} catch (error) {
if (error.code === 'ENOENT') {
throw new Error(`Prompt template '${promptId}' not found`);
}
throw error;
}
}
/**
* Validate parameters against template schema
* @private
*/
validateParameters(template, variables) {
if (!template.parameters) return;
const errors = [];
for (const [paramName, paramConfig] of Object.entries(
template.parameters
)) {
const value = variables[paramName];
// Check required parameters
if (paramConfig.required && value === undefined) {
errors.push(`Required parameter '${paramName}' missing`);
continue;
}
// Skip validation for undefined optional parameters
if (value === undefined) continue;
// Type validation
if (!this.validateParameterType(value, paramConfig.type)) {
errors.push(
`Parameter '${paramName}' expected ${paramConfig.type}, got ${typeof value}`
);
}
// Enum validation
if (paramConfig.enum && !paramConfig.enum.includes(value)) {
errors.push(
`Parameter '${paramName}' must be one of: ${paramConfig.enum.join(', ')}`
);
}
// Pattern validation for strings
if (paramConfig.pattern && typeof value === 'string') {
const regex = new RegExp(paramConfig.pattern);
if (!regex.test(value)) {
errors.push(
`Parameter '${paramName}' does not match required pattern: ${paramConfig.pattern}`
);
}
}
// Range validation for numbers
if (typeof value === 'number') {
if (paramConfig.minimum !== undefined && value < paramConfig.minimum) {
errors.push(
`Parameter '${paramName}' must be >= ${paramConfig.minimum}`
);
}
if (paramConfig.maximum !== undefined && value > paramConfig.maximum) {
errors.push(
`Parameter '${paramName}' must be <= ${paramConfig.maximum}`
);
}
}
}
if (errors.length > 0) {
throw new Error(`Parameter validation failed: ${errors.join('; ')}`);
}
}
/**
* Validate parameter type
* @private
*/
validateParameterType(value, expectedType) {
switch (expectedType) {
case 'string':
return typeof value === 'string';
case 'number':
return typeof value === 'number';
case 'boolean':
return typeof value === 'boolean';
case 'array':
return Array.isArray(value);
case 'object':
return (
typeof value === 'object' && value !== null && !Array.isArray(value)
);
default:
return true;
}
}
/**
* Select the best variant based on conditions
* @private
*/
selectVariant(template, variables) {
// Check each variant's condition
for (const [name, variant] of Object.entries(template.prompts)) {
if (name === 'default') continue;
if (
variant.condition &&
this.evaluateCondition(variant.condition, variables)
) {
return { ...variant, name };
}
}
// Fall back to default
return { ...template.prompts.default, name: 'default' };
}
/**
* Evaluate a condition string
* @private
*/
evaluateCondition(condition, variables) {
try {
// Create a safe evaluation context
const context = { ...variables };
// Simple condition evaluation (can be enhanced)
// For now, supports basic comparisons
const func = new Function(...Object.keys(context), `return ${condition}`);
return func(...Object.values(context));
} catch (error) {
log('warn', `Failed to evaluate condition: ${condition}`);
return false;
}
}
/**
* Render a template string with variables
* @private
*/
renderTemplate(template, variables) {
let rendered = template;
// Handle helper functions like (eq variable "value")
rendered = rendered.replace(
/\(eq\s+(\w+(?:\.\w+)*)\s+"([^"]+)"\)/g,
(match, path, compareValue) => {
const value = this.getNestedValue(variables, path);
return value === compareValue ? 'true' : 'false';
}
);
// Handle not helper function like (not variable)
rendered = rendered.replace(/\(not\s+(\w+(?:\.\w+)*)\)/g, (match, path) => {
const value = this.getNestedValue(variables, path);
return !value ? 'true' : 'false';
});
// Handle gt (greater than) helper function like (gt variable 0)
rendered = rendered.replace(
/\(gt\s+(\w+(?:\.\w+)*)\s+(\d+(?:\.\d+)?)\)/g,
(match, path, compareValue) => {
const value = this.getNestedValue(variables, path);
const numValue = parseFloat(compareValue);
return typeof value === 'number' && value > numValue ? 'true' : 'false';
}
);
// Handle gte (greater than or equal) helper function like (gte variable 0)
rendered = rendered.replace(
/\(gte\s+(\w+(?:\.\w+)*)\s+(\d+(?:\.\d+)?)\)/g,
(match, path, compareValue) => {
const value = this.getNestedValue(variables, path);
const numValue = parseFloat(compareValue);
return typeof value === 'number' && value >= numValue
? 'true'
: 'false';
}
);
// Handle conditionals with else {{#if variable}}...{{else}}...{{/if}}
rendered = rendered.replace(
/\{\{#if\s+([^}]+)\}\}([\s\S]*?)(?:\{\{else\}\}([\s\S]*?))?\{\{\/if\}\}/g,
(match, condition, trueContent, falseContent = '') => {
// Handle boolean values and helper function results
let value;
if (condition === 'true') {
value = true;
} else if (condition === 'false') {
value = false;
} else {
value = this.getNestedValue(variables, condition);
}
return value ? trueContent : falseContent;
}
);
// Handle each loops {{#each array}}...{{/each}}
rendered = rendered.replace(
/\{\{#each\s+(\w+(?:\.\w+)*)\}\}([\s\S]*?)\{\{\/each\}\}/g,
(match, path, content) => {
const array = this.getNestedValue(variables, path);
if (!Array.isArray(array)) return '';
return array
.map((item, index) => {
// Create a context with item properties and special variables
const itemContext = {
...variables,
...item,
'@index': index,
'@first': index === 0,
'@last': index === array.length - 1
};
// Recursively render the content with item context
return this.renderTemplate(content, itemContext);
})
.join('');
}
);
// Handle json helper {{{json variable}}} (triple braces for raw output)
rendered = rendered.replace(
/\{\{\{json\s+(\w+(?:\.\w+)*)\}\}\}/g,
(match, path) => {
const value = this.getNestedValue(variables, path);
return value !== undefined ? JSON.stringify(value, null, 2) : '';
}
);
// Handle variable substitution {{variable}}
rendered = rendered.replace(/\{\{(\w+(?:\.\w+)*)\}\}/g, (match, path) => {
const value = this.getNestedValue(variables, path);
return value !== undefined ? value : '';
});
return rendered;
}
/**
* Get nested value from object using dot notation
* @private
*/
getNestedValue(obj, path) {
return path
.split('.')
.reduce(
(current, key) =>
current && current[key] !== undefined ? current[key] : undefined,
obj
);
}
/**
* Validate all prompt templates
*/
validateAllPrompts() {
const results = { total: 0, errors: [], valid: [] };
try {
const files = fs.readdirSync(this.promptsDir);
const promptFiles = files.filter((file) => file.endsWith('.json'));
for (const file of promptFiles) {
const promptId = file.replace('.json', '');
results.total++;
try {
this.loadTemplate(promptId);
results.valid.push(promptId);
} catch (error) {
results.errors.push(`${promptId}: ${error.message}`);
}
}
} catch (error) {
results.errors.push(
`Failed to read templates directory: ${error.message}`
);
}
return results;
}
/**
* List all available prompt templates
*/
listPrompts() {
try {
const files = fs.readdirSync(this.promptsDir);
const prompts = [];
for (const file of files) {
if (!file.endsWith('.json')) continue;
const promptId = file.replace('.json', '');
try {
const template = this.loadTemplate(promptId);
prompts.push({
id: template.id,
description: template.description,
version: template.version,
parameters: template.parameters,
tags: template.metadata?.tags || []
});
} catch (error) {
log('warn', `Failed to load template ${promptId}: ${error.message}`);
}
}
return prompts;
} catch (error) {
if (error.code === 'ENOENT') {
// Templates directory doesn't exist yet
return [];
}
throw error;
}
}
/**
* Validate template structure
*/
validateTemplate(templatePath) {
try {
const content = fs.readFileSync(templatePath, 'utf-8');
const template = JSON.parse(content);
// Check required fields
const required = ['id', 'version', 'description', 'prompts'];
for (const field of required) {
if (!template[field]) {
return { valid: false, error: `Missing required field: ${field}` };
}
}
// Check default prompt exists
if (!template.prompts.default) {
return { valid: false, error: 'Missing default prompt variant' };
}
// Check each variant has required fields
for (const [name, variant] of Object.entries(template.prompts)) {
if (!variant.system || !variant.user) {
return {
valid: false,
error: `Variant '${name}' missing system or user prompt`
};
}
}
// Schema validation if available
if (this.validatePrompt && this.validatePrompt !== true) {
const valid = this.validatePrompt(template);
if (!valid) {
const errors = this.validatePrompt.errors
.map((err) => `${err.instancePath || 'root'}: ${err.message}`)
.join(', ');
return { valid: false, error: `Schema validation failed: ${errors}` };
}
}
return { valid: true };
} catch (error) {
return { valid: false, error: error.message };
}
}
}
// Singleton instance
let promptManager = null;
/**
* Get or create the prompt manager instance
* @returns {PromptManager}
*/
export function getPromptManager() {
if (!promptManager) {
promptManager = new PromptManager();
}
return promptManager;
}
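The deleted `renderTemplate` method above implements a small Handlebars-like engine. A minimal self-contained sketch of its two most common substitutions — `{{variable}}` with dot-notation lookup and `{{#if}}…{{else}}…{{/if}}` — illustrates the behavior (a simplified illustration only; the full helper set with `eq`, `gt`, `each`, and `json` is not reproduced here):

```javascript
// Minimal sketch of the Handlebars-style rendering removed above:
// {{#if cond}}...{{else}}...{{/if}} blocks plus {{path.to.value}}
// substitution, using the same dot-notation lookup as getNestedValue().
function getNestedValue(obj, path) {
	return path
		.split('.')
		.reduce(
			(cur, key) => (cur && cur[key] !== undefined ? cur[key] : undefined),
			obj
		);
}

function renderTemplate(template, variables) {
	let rendered = template;
	// Resolve conditionals first so their inner {{...}} tags are handled next.
	rendered = rendered.replace(
		/\{\{#if\s+([^}]+)\}\}([\s\S]*?)(?:\{\{else\}\}([\s\S]*?))?\{\{\/if\}\}/g,
		(match, cond, trueContent, falseContent = '') =>
			getNestedValue(variables, cond) ? trueContent : falseContent
	);
	// Plain variable substitution; undefined values render as empty strings.
	rendered = rendered.replace(/\{\{(\w+(?:\.\w+)*)\}\}/g, (match, path) => {
		const value = getNestedValue(variables, path);
		return value !== undefined ? value : '';
	});
	return rendered;
}

const out = renderTemplate(
	'Task {{task.id}}: {{#if useResearch}}research mode{{else}}standard mode{{/if}}',
	{ task: { id: 7 }, useResearch: false }
);
console.log(out); // Task 7: standard mode
```

Note that the production class also caches rendered results per `(promptId, variables, variant)` key and validates templates against a JSON schema; this sketch covers only the string rewriting.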

View File

@@ -1,4 +1,89 @@
{
"bedrock": [
{
"id": "us.anthropic.claude-3-haiku-20240307-v1:0",
"swe_score": 0.4,
"cost_per_1m_tokens": {
"input": 0.25,
"output": 1.25
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-3-opus-20240229-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": {
"input": 15,
"output": 75
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
"swe_score": 0.49,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
"swe_score": 0.49,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"swe_score": 0.623,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
},
{
"id": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
"swe_score": 0.4,
"cost_per_1m_tokens": {
"input": 0.8,
"output": 4
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-opus-4-20250514-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": {
"input": 15,
"output": 75
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
"swe_score": 0.727,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.deepseek.r1-v1:0",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 1.35,
"output": 5.4
},
"allowed_roles": ["research"],
"max_tokens": 65536
}
],
"anthropic": [
{
"id": "claude-sonnet-4-20250514",
@@ -41,60 +126,36 @@
"max_tokens": 8192
}
],
"claude-code": [
"azure": [
{
"id": "opus",
"swe_score": 0.725,
"id": "gpt-4o",
"swe_score": 0.332,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
"input": 2.5,
"output": 10.0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32000
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
},
{
"id": "sonnet",
"swe_score": 0.727,
"id": "gpt-4o-mini",
"swe_score": 0.3,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
"input": 0.15,
"output": 0.6
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 64000
}
],
"mcp": [
{
"id": "mcp-sampling",
"swe_score": null,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 100000
}
],
"gemini-cli": [
{
"id": "gemini-2.5-pro",
"swe_score": 0.72,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
},
{
"id": "gemini-2.5-flash",
"swe_score": 0.71,
"id": "gpt-4-1",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
"input": 2.0,
"output": 10.0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
}
],
"openai": [
@@ -259,143 +320,6 @@
"max_tokens": 1048000
}
],
"xai": [
{
"id": "grok-3",
"name": "Grok 3",
"swe_score": null,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-3-fast",
"name": "Grok 3 Fast",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 5,
"output": 25
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-4",
"name": "Grok 4",
"swe_score": null,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
}
],
"groq": [
{
"id": "moonshotai/kimi-k2-instruct",
"swe_score": 0.66,
"cost_per_1m_tokens": {
"input": 1.0,
"output": 3.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
},
{
"id": "llama-3.3-70b-versatile",
"swe_score": 0.55,
"cost_per_1m_tokens": {
"input": 0.59,
"output": 0.79
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "llama-3.1-8b-instant",
"swe_score": 0.32,
"cost_per_1m_tokens": {
"input": 0.05,
"output": 0.08
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 131072
},
{
"id": "llama-4-scout",
"swe_score": 0.45,
"cost_per_1m_tokens": {
"input": 0.11,
"output": 0.34
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "llama-4-maverick",
"swe_score": 0.52,
"cost_per_1m_tokens": {
"input": 0.5,
"output": 0.77
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "mixtral-8x7b-32768",
"swe_score": 0.35,
"cost_per_1m_tokens": {
"input": 0.24,
"output": 0.24
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 32768
},
{
"id": "qwen-qwq-32b-preview",
"swe_score": 0.4,
"cost_per_1m_tokens": {
"input": 0.18,
"output": 0.18
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "deepseek-r1-distill-llama-70b",
"swe_score": 0.52,
"cost_per_1m_tokens": {
"input": 0.75,
"output": 0.99
},
"allowed_roles": ["main", "research"],
"max_tokens": 8192
},
{
"id": "gemma2-9b-it",
"swe_score": 0.3,
"cost_per_1m_tokens": {
"input": 0.2,
"output": 0.2
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 8192
},
{
"id": "whisper-large-v3",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0.11,
"output": 0
},
"allowed_roles": ["main"],
"max_tokens": 0
}
],
"perplexity": [
{
"id": "sonar-pro",
@@ -448,6 +372,106 @@
"max_tokens": 8700
}
],
"xai": [
{
"id": "grok-3",
"name": "Grok 3",
"swe_score": null,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-3-fast",
"name": "Grok 3 Fast",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 5,
"output": 25
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
},
{
"id": "grok-4",
"name": "Grok 4",
"swe_score": null,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072
}
],
"ollama": [
{
"id": "devstral:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "qwen3:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "qwen3:14b",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "qwen3:32b",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "mistral-small3.1:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "llama3.3:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "phi4:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
}
],
"openrouter": [
{
"id": "google/gemini-2.5-flash-preview-05-20",
@@ -700,185 +724,151 @@
"max_tokens": 32768
}
],
"ollama": [
"groq": [
{
"id": "devstral:latest",
"swe_score": 0,
"id": "llama-3.3-70b-versatile",
"swe_score": 0.55,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
"input": 0.59,
"output": 0.79
},
"allowed_roles": ["main", "fallback"]
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "qwen3:latest",
"swe_score": 0,
"id": "llama-3.1-8b-instant",
"swe_score": 0.32,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "qwen3:14b",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "qwen3:32b",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "mistral-small3.1:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "llama3.3:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "phi4:latest",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
}
],
"azure": [
{
"id": "gpt-4o",
"swe_score": 0.332,
"cost_per_1m_tokens": {
"input": 2.5,
"output": 10.0
"input": 0.05,
"output": 0.08
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
"max_tokens": 131072
},
{
"id": "gpt-4o-mini",
"swe_score": 0.3,
"id": "llama-4-scout",
"swe_score": 0.45,
"cost_per_1m_tokens": {
"input": 0.15,
"output": 0.6
"input": 0.11,
"output": 0.34
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "llama-4-maverick",
"swe_score": 0.52,
"cost_per_1m_tokens": {
"input": 0.5,
"output": 0.77
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "mixtral-8x7b-32768",
"swe_score": 0.35,
"cost_per_1m_tokens": {
"input": 0.24,
"output": 0.24
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
"max_tokens": 32768
},
{
"id": "gpt-4-1",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 2.0,
"output": 10.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
}
],
"bedrock": [
{
"id": "us.anthropic.claude-3-haiku-20240307-v1:0",
"id": "qwen-qwq-32b-preview",
"swe_score": 0.4,
"cost_per_1m_tokens": {
"input": 0.25,
"output": 1.25
"input": 0.18,
"output": 0.18
},
"allowed_roles": ["main", "fallback"]
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768
},
{
"id": "us.anthropic.claude-3-opus-20240229-v1:0",
"id": "deepseek-r1-distill-llama-70b",
"swe_score": 0.52,
"cost_per_1m_tokens": {
"input": 0.75,
"output": 0.99
},
"allowed_roles": ["main", "research"],
"max_tokens": 8192
},
{
"id": "gemma2-9b-it",
"swe_score": 0.3,
"cost_per_1m_tokens": {
"input": 0.2,
"output": 0.2
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 8192
},
{
"id": "whisper-large-v3",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 0.11,
"output": 0
},
"allowed_roles": ["main"],
"max_tokens": 0
}
],
"claude-code": [
{
"id": "opus",
"swe_score": 0.725,
"cost_per_1m_tokens": {
"input": 15,
"output": 75
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"]
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32000
},
{
"id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
"swe_score": 0.49,
"id": "sonnet",
"swe_score": 0.727,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"]
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 64000
}
],
"mcp": [
{
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
"swe_score": 0.49,
"id": "mcp-sampling",
"swe_score": null,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"]
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 100000
}
],
"gemini-cli": [
{
"id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"swe_score": 0.623,
"id": "gemini-2.5-pro",
"swe_score": 0.72,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
},
{
"id": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
"swe_score": 0.4,
"id": "gemini-2.5-flash",
"swe_score": 0.71,
"cost_per_1m_tokens": {
"input": 0.8,
"output": 4
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"]
},
{
"id": "us.anthropic.claude-opus-4-20250514-v1:0",
"swe_score": 0.725,
"cost_per_1m_tokens": {
"input": 15,
"output": 75
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
"swe_score": 0.727,
"cost_per_1m_tokens": {
"input": 3,
"output": 15
},
"allowed_roles": ["main", "fallback", "research"]
},
{
"id": "us.deepseek.r1-v1:0",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 1.35,
"output": 5.4
},
"allowed_roles": ["research"],
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536
}
]

View File

@@ -27,7 +27,6 @@ import {
} from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import ContextGatherer from '../utils/contextGatherer.js';
import generateTaskFiles from './generate-task-files.js';
import {
@@ -404,6 +403,30 @@ async function addTask(
displayContextAnalysis(analysisData, prompt, gatheredContext.length);
}
// System Prompt - Enhanced for dependency awareness
const systemPrompt =
"You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema. Pay special attention to dependencies between tasks, ensuring the new task correctly references any tasks it depends on.\n\n" +
'When determining dependencies for a new task, follow these principles:\n' +
'1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n' +
'2. Prioritize task dependencies that are semantically related to the functionality being built.\n' +
'3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n' +
'4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n' +
'5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n' +
"6. Pay special attention to foundation tasks (1-5) but don't automatically include them without reason.\n" +
'7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\n' +
'The dependencies array should contain task IDs (numbers) of prerequisite tasks.\n';
// Task Structure Description (for user prompt)
const taskStructureDesc = `
{
"title": "Task title goes here",
"description": "A concise one or two sentence description of what the task involves",
"details": "Detailed implementation steps, considerations, code examples, or technical approach",
"testStrategy": "Specific steps to verify correct implementation and functionality",
"dependencies": [1, 3] // Example: IDs of tasks that must be completed before this task
}
`;
// Add any manually provided details to the prompt for context
let contextFromArgs = '';
if (manualTaskData?.title)
@@ -415,21 +438,18 @@ async function addTask(
if (manualTaskData?.testStrategy)
contextFromArgs += `\n- Additional Test Strategy Context: "${manualTaskData.testStrategy}"`;
// Load prompts using PromptManager
const promptManager = getPromptManager();
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
'add-task',
{
prompt,
newTaskId,
existingTasks: allTasks,
gatheredContext,
contextFromArgs,
useResearch,
priority: effectivePriority,
dependencies: numericDependencies
}
);
// User Prompt
const userPrompt = `You are generating the details for Task #${newTaskId}. Based on the user's request: "${prompt}", create a comprehensive new task for a software development project.
${gatheredContext}
Based on the information about existing tasks provided above, include appropriate dependencies in the "dependencies" array. Only include task IDs that this new task directly depends on.
Return your answer as a single JSON object matching the schema precisely:
${taskStructureDesc}
Make sure the details and test strategy are comprehensive and specific. DO NOT include the task ID in the title.
`;
// Start the loading indicator - only for text mode
if (outputFormat === 'text') {
@@ -561,6 +581,16 @@ async function addTask(
writeJSON(tasksPath, rawData, projectRoot, targetTag);
report('DEBUG: tasks.json written.', 'debug');
// Generate markdown task files
report('Generating task files...', 'info');
report('DEBUG: Calling generateTaskFiles...', 'debug');
// Pass mcpLog if available to generateTaskFiles
await generateTaskFiles(tasksPath, path.dirname(tasksPath), {
projectRoot,
tag: targetTag
});
report('DEBUG: generateTaskFiles finished.', 'debug');
// Show success message - only for text output (CLI)
if (outputFormat === 'text') {
const table = new Table({

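The hunk above replaces PromptManager-driven prompt loading with inline template literals. A hedged sketch of that assembly pattern (the function name and abbreviated wording below are illustrative, not the repository's exact strings):

```javascript
// Illustrative sketch of the inline user-prompt assembly in this hunk:
// the JSON structure description and gathered context are interpolated
// directly into a template literal instead of going through PromptManager.
const taskStructureDesc = `
{
  "title": "Task title goes here",
  "description": "A concise one or two sentence description",
  "details": "Detailed implementation steps or technical approach",
  "testStrategy": "Specific steps to verify correct implementation",
  "dependencies": [1, 3]
}
`;

function buildUserPrompt(newTaskId, prompt, gatheredContext) {
	return `You are generating the details for Task #${newTaskId}. Based on the user's request: "${prompt}", create a comprehensive new task.

${gatheredContext}

Return your answer as a single JSON object matching the schema precisely:
${taskStructureDesc}`;
}

const userPrompt = buildUserPrompt(12, 'Add OAuth login', '(no extra context)');
console.log(userPrompt.includes('Task #12')); // true
```

The trade-off is plain: inlining removes the template cache, schema validation, and variant selection that PromptManager provided, in exchange for prompts that are visible at the call site.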
View File

@@ -14,7 +14,6 @@ import {
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag, getProjectName } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import {
COMPLEXITY_REPORT_FILE,
LEGACY_TASKS_FILE
@@ -240,7 +239,7 @@ async function analyzeTaskComplexity(options, context = {}) {
tasks: relevantTaskIds,
format: 'research'
});
gatheredContext = contextResult.context || '';
gatheredContext = contextResult;
}
} catch (contextError) {
reportLog(
@@ -397,20 +396,12 @@ async function analyzeTaskComplexity(options, context = {}) {
}
// Continue with regular analysis path
// Load prompts using PromptManager
const promptManager = getPromptManager();
const promptParams = {
tasks: tasksData.tasks,
gatheredContext: gatheredContext || '',
useResearch: useResearch
};
const { systemPrompt, userPrompt: prompt } = await promptManager.loadPrompt(
'analyze-complexity',
promptParams,
'default'
const prompt = generateInternalComplexityAnalysisPrompt(
tasksData,
gatheredContext
);
const systemPrompt =
'You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.';
let loadingIndicator = null;
if (outputFormat === 'text') {

View File
@@ -19,7 +19,6 @@ import {
import { generateTextService } from '../ai-services-unified.js';
import { getDefaultSubtasks, getDebugFlag } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
@@ -61,6 +60,128 @@ const subtaskWrapperSchema = z.object({
});
// --- End Zod Schemas ---
/**
* Generates the system prompt for the main AI role (e.g., Claude).
* @param {number} subtaskCount - The target number of subtasks.
* @returns {string} The system prompt.
*/
function generateMainSystemPrompt(subtaskCount) {
return `You are an AI assistant helping with task breakdown for software development.
You need to break down a high-level task into ${subtaskCount > 0 ? subtaskCount : 'an appropriate number of'} specific subtasks that can be implemented one by one.
Subtasks should:
1. Be specific and actionable implementation steps
2. Follow a logical sequence
3. Each handle a distinct part of the parent task
4. Include clear guidance on implementation approach
5. Have appropriate dependency chains between subtasks (using the new sequential IDs)
6. Collectively cover all aspects of the parent task
For each subtask, provide:
- id: Sequential integer starting from the provided nextSubtaskId
- title: Clear, specific title
- description: Detailed description
- dependencies: Array of prerequisite subtask IDs (use the new sequential IDs)
- details: Implementation details; the output should be a string
- testStrategy: Optional testing approach
Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array matching the structure described. Do not include any explanatory text, markdown formatting, or code block markers.`;
}
/**
* Generates the user prompt for the main AI role (e.g., Claude).
* @param {Object} task - The parent task object.
* @param {number} subtaskCount - The target number of subtasks.
* @param {string} additionalContext - Optional additional context.
* @param {number} nextSubtaskId - The starting ID for the new subtasks.
* @returns {string} The user prompt.
*/
function generateMainUserPrompt(
task,
subtaskCount,
additionalContext,
nextSubtaskId
) {
const contextPrompt = additionalContext
? `\n\nAdditional context: ${additionalContext}`
: '';
const schemaDescription = `
{
"subtasks": [
{
"id": ${nextSubtaskId}, // First subtask ID
"title": "Specific subtask title",
"description": "Detailed description",
"dependencies": [], // e.g., [${nextSubtaskId + 1}] if it depends on the next
"details": "Implementation guidance",
"testStrategy": "Optional testing approach"
},
// ... (repeat for ${subtaskCount ? 'a total of ' + subtaskCount : 'each of the'} subtasks with sequential IDs)
]
}`;
return `Break down this task into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks:
Task ID: ${task.id}
Title: ${task.title}
Description: ${task.description}
Current details: ${task.details || 'None'}
${contextPrompt}
Return ONLY the JSON object containing the "subtasks" array, matching this structure:
${schemaDescription}`;
}
/**
* Generates the user prompt for the research AI role (e.g., Perplexity).
* @param {Object} task - The parent task object.
* @param {number} subtaskCount - The target number of subtasks.
* @param {string} additionalContext - Optional additional context.
* @param {number} nextSubtaskId - The starting ID for the new subtasks.
* @returns {string} The user prompt.
*/
function generateResearchUserPrompt(
task,
subtaskCount,
additionalContext,
nextSubtaskId
) {
const contextPrompt = additionalContext
? `\n\nConsider this context: ${additionalContext}`
: '';
const schemaDescription = `
{
"subtasks": [
{
"id": <number>, // Sequential ID starting from ${nextSubtaskId}
"title": "<string>",
"description": "<string>",
"dependencies": [<number>], // e.g., [${nextSubtaskId + 1}]. If no dependencies, use an empty array [].
"details": "<string>",
"testStrategy": "<string>" // Optional
},
// ... (repeat for ${subtaskCount} subtasks)
]
}`;
return `Analyze the following task and break it down into ${subtaskCount > 0 ? 'exactly ' + subtaskCount : 'an appropriate number of'} specific subtasks using your research capabilities. Assign sequential IDs starting from ${nextSubtaskId}.
Parent Task:
ID: ${task.id}
Title: ${task.title}
Description: ${task.description}
Current details: ${task.details || 'None'}
${contextPrompt}
CRITICAL: Respond ONLY with a valid JSON object containing a single key "subtasks". The value must be an array of the generated subtasks, strictly matching this structure:
${schemaDescription}
Important: For the 'dependencies' field, if a subtask has no dependencies, you MUST use an empty array, for example: "dependencies": []. Do not use null or omit the field.
Do not include ANY explanatory text, markdown, or code block markers. Just the JSON object.`;
}
/**
* Parse subtasks from AI's text response. Includes basic cleanup.
* @param {string} text - Response text from AI.
@@ -369,7 +490,7 @@ async function expandTask(
tasks: finalTaskIds,
format: 'research'
});
gatheredContext = contextResult.context || '';
gatheredContext = contextResult;
}
} catch (contextError) {
logger.warn(`Could not gather context: ${contextError.message}`);
@@ -378,7 +499,9 @@ async function expandTask(
// --- Complexity Report Integration ---
let finalSubtaskCount;
let promptContent = '';
let complexityReasoningContext = '';
let systemPrompt; // Declare systemPrompt here
// Use tag-aware complexity report path
const complexityReportPath = getTagAwareFilePath(
@@ -447,71 +570,52 @@ async function expandTask(
// Determine prompt content AND system prompt
const nextSubtaskId = (task.subtasks?.length || 0) + 1;
// Load prompts using PromptManager
const promptManager = getPromptManager();
// Combine all context sources into a single additionalContext parameter
let combinedAdditionalContext = '';
if (additionalContext || complexityReasoningContext) {
combinedAdditionalContext =
`\n\n${additionalContext}${complexityReasoningContext}`.trim();
if (taskAnalysis?.expansionPrompt) {
// Use prompt from complexity report
promptContent = taskAnalysis.expansionPrompt;
// Append additional context and reasoning
promptContent += `\n\n${additionalContext}`.trim();
promptContent += `${complexityReasoningContext}`.trim();
if (gatheredContext) {
promptContent += `\n\n# Project Context\n\n${gatheredContext}`;
}
// --- Use Simplified System Prompt for Report Prompts ---
systemPrompt = `You are an AI assistant helping with task breakdown. Generate ${finalSubtaskCount > 0 ? 'exactly ' + finalSubtaskCount : 'an appropriate number of'} subtasks based on the provided prompt and context. Respond ONLY with a valid JSON object containing a single key "subtasks" whose value is an array of the generated subtask objects. Each subtask object in the array must have keys: "id", "title", "description", "dependencies", "details", "status". Ensure the 'id' starts from ${nextSubtaskId} and is sequential. Ensure 'dependencies' only reference valid prior subtask IDs generated in this response (starting from ${nextSubtaskId}). Ensure 'status' is 'pending'. Do not include any other text or explanation.`;
logger.info(
`Using expansion prompt from complexity report and simplified system prompt for task ${task.id}.`
);
// --- End Simplified System Prompt ---
} else {
// Use standard prompt generation
let combinedAdditionalContext =
`${additionalContext}${complexityReasoningContext}`.trim();
if (gatheredContext) {
combinedAdditionalContext =
`${combinedAdditionalContext}\n\n# Project Context\n\n${gatheredContext}`.trim();
}
// Ensure expansionPrompt is a string (handle both string and object formats)
let expansionPromptText = undefined;
if (taskAnalysis?.expansionPrompt) {
if (typeof taskAnalysis.expansionPrompt === 'string') {
expansionPromptText = taskAnalysis.expansionPrompt;
} else if (
typeof taskAnalysis.expansionPrompt === 'object' &&
taskAnalysis.expansionPrompt.text
) {
expansionPromptText = taskAnalysis.expansionPrompt.text;
}
}
// Ensure gatheredContext is a string (handle both string and object formats)
let gatheredContextText = gatheredContext;
if (typeof gatheredContext === 'object' && gatheredContext !== null) {
if (gatheredContext.data) {
gatheredContextText = gatheredContext.data;
} else if (gatheredContext.text) {
gatheredContextText = gatheredContext.text;
} else {
gatheredContextText = JSON.stringify(gatheredContext);
}
}
const promptParams = {
task: task,
subtaskCount: finalSubtaskCount,
nextSubtaskId: nextSubtaskId,
additionalContext: additionalContext,
complexityReasoningContext: complexityReasoningContext,
gatheredContext: gatheredContextText || '',
useResearch: useResearch,
expansionPrompt: expansionPromptText || undefined
};
let variantKey = 'default';
if (expansionPromptText) {
variantKey = 'complexity-report';
logger.info(
`Using expansion prompt from complexity report for task ${task.id}.`
if (useResearch) {
promptContent = generateResearchUserPrompt(
task,
finalSubtaskCount,
combinedAdditionalContext,
nextSubtaskId
);
} else if (useResearch) {
variantKey = 'research';
logger.info(`Using research variant for task ${task.id}.`);
// Use the specific research system prompt if needed, or a standard one
systemPrompt = `You are an AI assistant that responds ONLY with valid JSON objects as requested. The object should contain a 'subtasks' array.`; // Or keep generateResearchSystemPrompt if it exists
} else {
promptContent = generateMainUserPrompt(
task,
finalSubtaskCount,
combinedAdditionalContext,
nextSubtaskId
);
// Use the original detailed system prompt for standard generation
systemPrompt = generateMainSystemPrompt(finalSubtaskCount);
}
logger.info(`Using standard prompt generation for task ${task.id}.`);
}
const { systemPrompt, userPrompt: promptContent } =
await promptManager.loadPrompt('expand-task', promptParams, variantKey);
// --- End Complexity Report / Prompt Logic ---
// --- AI Subtask Generation using generateTextService ---

View File
@@ -864,54 +864,64 @@ function generateMarkdownOutput(data, filteredTasks, stats) {
return '█'.repeat(filled) + '░'.repeat(empty);
};
const taskProgressBar = createMarkdownProgressBar(completionPercentage, 20);
const subtaskProgressBar = createMarkdownProgressBar(
subtaskCompletionPercentage,
20
);
// Dashboard section
// markdown += '```\n';
markdown += '| Project Dashboard | |\n';
markdown += '| :- |:-|\n';
markdown += `| Task Progress | ${taskProgressBar} ${Math.round(completionPercentage)}% |\n`;
markdown += `| Done | ${doneCount} |\n`;
markdown += `| In Progress | ${inProgressCount} |\n`;
markdown += `| Pending | ${pendingCount} |\n`;
markdown += `| Deferred | ${deferredCount} |\n`;
markdown += `| Cancelled | ${cancelledCount} |\n`;
markdown += `|-|-|\n`;
markdown += `| Subtask Progress | ${subtaskProgressBar} ${Math.round(subtaskCompletionPercentage)}% |\n`;
markdown += `| Completed | ${completedSubtasks} |\n`;
markdown += `| In Progress | ${inProgressSubtasks} |\n`;
markdown += `| Pending | ${pendingSubtasks} |\n`;
markdown += '```\n';
markdown +=
'╭─────────────────────────────────────────────────────────╮╭─────────────────────────────────────────────────────────╮\n';
markdown +=
'│ ││ │\n';
markdown +=
'│ Project Dashboard ││ Dependency Status & Next Task │\n';
markdown += `│ Tasks Progress: ${createMarkdownProgressBar(completionPercentage, 20)} ${Math.round(completionPercentage)}% ││ Dependency Metrics: │\n`;
markdown += `${Math.round(completionPercentage)}% ││ • Tasks with no dependencies: ${tasksWithNoDeps}\n`;
markdown += `│ Done: ${doneCount} In Progress: ${inProgressCount} Pending: ${pendingCount} Blocked: ${blockedCount} ││ • Tasks ready to work on: ${tasksReadyToWork}\n`;
markdown += `│ Deferred: ${deferredCount} Cancelled: ${cancelledCount} ││ • Tasks blocked by dependencies: ${tasksWithUnsatisfiedDeps}\n`;
markdown += `│ ││ • Most depended-on task: #${mostDependedOnTaskId} (${maxDependents} dependents) │\n`;
markdown += `│ Subtasks Progress: ${createMarkdownProgressBar(subtaskCompletionPercentage, 20)} ││ • Avg dependencies per task: ${avgDependenciesPerTask.toFixed(1)}\n`;
markdown += `${Math.round(subtaskCompletionPercentage)}% ${Math.round(subtaskCompletionPercentage)}% ││ │\n`;
markdown += `│ Completed: ${completedSubtasks}/${totalSubtasks} In Progress: ${inProgressSubtasks} Pending: ${pendingSubtasks} ││ Next Task to Work On: │\n`;
markdown += '\n\n';
const nextTaskTitle = nextItem
? nextItem.title.length > 40
? nextItem.title.substring(0, 37) + '...'
: nextItem.title
: 'No task available';
markdown += `│ Blocked: ${blockedSubtasks} Deferred: ${deferredSubtasks} Cancelled: ${cancelledSubtasks} ││ ID: ${nextItem ? nextItem.id : 'N/A'} - ${nextTaskTitle}\n`;
markdown += `│ ││ Priority: ${nextItem ? nextItem.priority || 'medium' : ''} Dependencies: ${nextItem && nextItem.dependencies && nextItem.dependencies.length > 0 ? 'Some' : 'None'}\n`;
markdown += `│ Priority Breakdown: ││ Complexity: ${nextItem && nextItem.complexityScore ? '● ' + nextItem.complexityScore : 'N/A'}\n`;
markdown += `│ • High priority: ${data.tasks.filter((t) => t.priority === 'high').length} │╰─────────────────────────────────────────────────────────╯\n`;
markdown += `│ • Medium priority: ${data.tasks.filter((t) => t.priority === 'medium').length}\n`;
markdown += `│ • Low priority: ${data.tasks.filter((t) => t.priority === 'low').length}\n`;
markdown += '│ │\n';
markdown += '╰─────────────────────────────────────────────────────────╯\n';
// Tasks table
markdown +=
'| ID | Title | Status | Priority | Dependencies | Complexity |\n';
'┌───────────┬──────────────────────────────────────┬─────────────────┬──────────────┬───────────────────────┬───────────┐\n';
markdown +=
'| :- | :- | :- | :- | :- | :- |\n';
'│ ID │ Title │ Status │ Priority │ Dependencies │ Complexi… │\n';
markdown +=
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n';
// Helper function to format status with symbols
const getStatusSymbol = (status) => {
switch (status) {
case 'done':
case 'completed':
return '✓&nbsp;done';
return '✓ done';
case 'in-progress':
return '►&nbsp;in-progress';
return '► in-progress';
case 'pending':
return '○&nbsp;pending';
return '○ pending';
case 'blocked':
return '⭕&nbsp;blocked';
return '⭕ blocked';
case 'deferred':
return 'x&nbsp;deferred';
return 'x deferred';
case 'cancelled':
return 'x&nbsp;cancelled';
return 'x cancelled';
case 'review':
return '?&nbsp;review';
return '? review';
default:
return status || 'pending';
}
@@ -938,12 +948,12 @@ function generateMarkdownOutput(data, filteredTasks, stats) {
? `${task.complexityScore}`
: 'N/A';
markdown += `| ${task.id} | ${taskTitle} | ${statusSymbol} | ${priority} | ${deps} | ${complexity} |\n`;
markdown += ` ${task.id.toString().padEnd(9)} ${taskTitle.substring(0, 36).padEnd(36)} ${statusSymbol.padEnd(15)} ${priority.padEnd(12)} ${deps.substring(0, 21).padEnd(21)} ${complexity.padEnd(9)} \n`;
// Add subtasks if requested
if (withSubtasks && task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
const subtaskTitle = `${subtask.title}`; // No truncation
const subtaskTitle = `└─ ${subtask.title}`; // No truncation
const subtaskStatus = getStatusSymbol(subtask.status);
const subtaskDeps = formatDependenciesForMarkdown(
subtask.dependencies,
@@ -953,11 +963,85 @@ function generateMarkdownOutput(data, filteredTasks, stats) {
? subtask.complexityScore.toString()
: 'N/A';
markdown += `| ${task.id}.${subtask.id} | ${subtaskTitle} | ${subtaskStatus} | - | ${subtaskDeps} | ${subtaskComplexity} |\n`;
markdown +=
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n';
markdown += `${task.id}.${subtask.id}${' '.padEnd(6)}${subtaskTitle.substring(0, 36).padEnd(36)}${subtaskStatus.padEnd(15)} │ - │ ${subtaskDeps.substring(0, 21).padEnd(21)}${subtaskComplexity.padEnd(9)}\n`;
});
}
markdown +=
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n';
});
// Close the table
markdown = markdown.slice(
0,
-1 *
'├───────────┼──────────────────────────────────────┼─────────────────┼──────────────┼───────────────────────┼───────────┤\n'
.length
);
markdown +=
'└───────────┴──────────────────────────────────────┴─────────────────┴──────────────┴───────────────────────┴───────────┘\n';
markdown += '```\n\n';
// Next task recommendation
if (nextItem) {
markdown +=
'╭────────────────────────────────────────────── ⚡ RECOMMENDED NEXT TASK ⚡ ──────────────────────────────────────────────╮\n';
markdown +=
'│ │\n';
markdown += `│ 🔥 Next Task to Work On: #${nextItem.id} - ${nextItem.title}\n`;
markdown +=
'│ │\n';
markdown += `│ Priority: ${nextItem.priority || 'medium'} Status: ${getStatusSymbol(nextItem.status)}\n`;
markdown += `│ Dependencies: ${nextItem.dependencies && nextItem.dependencies.length > 0 ? formatDependenciesForMarkdown(nextItem.dependencies, data.tasks) : 'None'}\n`;
markdown +=
'│ │\n';
markdown += `│ Description: ${getWorkItemDescription(nextItem, data.tasks)}\n`;
markdown +=
'│ │\n';
// Add subtasks if they exist
const parentTask = data.tasks.find((t) => t.id === nextItem.id);
if (parentTask && parentTask.subtasks && parentTask.subtasks.length > 0) {
markdown +=
'│ Subtasks: │\n';
parentTask.subtasks.forEach((subtask) => {
markdown += `${nextItem.id}.${subtask.id} [${subtask.status || 'pending'}] ${subtask.title}\n`;
});
markdown +=
'│ │\n';
}
markdown += `│ Start working: task-master set-status --id=${nextItem.id} --status=in-progress │\n`;
markdown += `│ View details: task-master show ${nextItem.id}\n`;
markdown +=
'│ │\n';
markdown +=
'╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n\n';
}
// Suggested next steps
markdown += '\n';
markdown +=
'╭──────────────────────────────────────────────────────────────────────────────────────╮\n';
markdown +=
'│ │\n';
markdown +=
'│ Suggested Next Steps: │\n';
markdown +=
'│ │\n';
markdown +=
'│ 1. Run task-master next to see what to work on next │\n';
markdown +=
'│ 2. Run task-master expand --id=<id> to break down a task into subtasks │\n';
markdown +=
'│ 3. Run task-master set-status --id=<id> --status=done to mark a task as complete │\n';
markdown +=
'│ │\n';
markdown +=
'╰──────────────────────────────────────────────────────────────────────────────────────╯\n';
return markdown;
}

View File
@@ -18,7 +18,6 @@ import {
import { generateObjectService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { displayAiUsageSummary } from '../ui.js';
@@ -148,8 +147,10 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
report(overwriteError.message, 'error');
if (outputFormat === 'text') {
console.error(chalk.red(overwriteError.message));
}
process.exit(1);
} else {
throw overwriteError;
}
} else {
// Force overwrite is true
report(
@@ -171,24 +172,74 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
throw new Error(`Input file ${prdPath} is empty or could not be read.`);
}
// Load prompts using PromptManager
const promptManager = getPromptManager();
// Research-specific enhancements to the system prompt
const researchPromptAddition = research
? `\nBefore breaking down the PRD into tasks, you will:
1. Research and analyze the latest technologies, libraries, frameworks, and best practices that would be appropriate for this project
2. Identify any potential technical challenges, security concerns, or scalability issues not explicitly mentioned in the PRD without discarding any explicit requirements or going overboard with complexity -- always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches
3. Consider current industry standards and evolving trends relevant to this project (this step aims to solve LLM hallucinations and out of date information due to training data cutoff dates)
4. Evaluate alternative implementation approaches and recommend the most efficient path
5. Include specific library versions, helpful APIs, and concrete implementation guidance based on your research
6. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches
// Get defaultTaskPriority from config
const { getDefaultPriority } = await import('../config-manager.js');
const defaultTaskPriority = getDefaultPriority(projectRoot) || 'medium';
Your task breakdown should incorporate this research, resulting in more detailed implementation guidance, more accurate dependency mapping, and more precise technology recommendations than would be possible from the PRD text alone, while maintaining all explicit requirements and best practices and all details and nuances of the PRD.`
: '';
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
'parse-prd',
// Base system prompt for PRD parsing
const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.${researchPromptAddition}
Analyze the provided PRD content and generate ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD.
Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
Set status to 'pending', dependencies to an empty array [], and priority to 'medium' initially for all tasks.
Respond ONLY with a valid JSON object containing a single key "tasks", where the value is an array of task objects adhering to the provided Zod schema. Do not include any explanation or markdown formatting.
Each task should follow this JSON structure:
{
research,
numTasks,
nextId,
prdContent,
prdPath,
defaultTaskPriority
"id": number,
"title": string,
"description": string,
"status": "pending",
"dependencies": number[] (IDs of tasks this depends on),
"priority": "high" | "medium" | "low",
"details": string (implementation details),
"testStrategy": string (validation approach)
}
);
Guidelines:
1. ${numTasks > 0 ? 'Unless complexity warrants otherwise' : 'Depending on the complexity'}, create ${numTasks > 0 ? 'exactly ' + numTasks : 'an appropriate number of'} tasks, numbered sequentially starting from ${nextId}
2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards
3. Order tasks logically - consider dependencies and implementation sequence
4. Early tasks should focus on setup, core functionality first, then advanced features
5. Include clear validation/testing approach for each task
6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than ${nextId} if applicable)
7. Assign priority (high/medium/low) based on criticality and dependency order
8. Include detailed implementation guidance in the "details" field${research ? ', with specific libraries and version recommendations based on your research' : ''}
9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance
10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements
11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches${research ? '\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research' : ''}`;
// Build user prompt with PRD content
const userPrompt = `Here's the Product Requirements Document (PRD) to break down into ${numTasks > 0 ? 'approximately ' + numTasks : 'an appropriate number of'} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
Return your response in this format:
{
"tasks": [
{
"id": 1,
"title": "Setup Project Repository",
"description": "...",
...
},
...
],
"metadata": {
"projectName": "PRD Implementation",
"totalTasks": {number of tasks},
"sourceFile": "${prdPath}",
"generatedAt": "YYYY-MM-DD"
}
}`;
// Call the unified AI service
report(
@@ -369,9 +420,11 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
// Use projectRoot for debug flag check
console.error(error);
}
}
throw error; // Always re-throw for proper error handling
process.exit(1);
} else {
throw error; // Re-throw for JSON output
}
}
}

View File
@@ -12,7 +12,6 @@ import { highlight } from 'cli-highlight';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
import { generateTextService } from '../ai-services-unified.js';
import { getPromptManager } from '../prompt-manager.js';
import {
log as consoleLog,
findProjectRoot,
@@ -191,24 +190,14 @@ async function performResearch(
const gatheredContext = contextResult.context;
const tokenBreakdown = contextResult.tokenBreakdown;
// Load prompts using PromptManager
const promptManager = getPromptManager();
// Build system prompt based on detail level
const systemPrompt = buildResearchSystemPrompt(detailLevel, projectRoot);
const promptParams = {
query: query,
gatheredContext: gatheredContext || '',
detailLevel: detailLevel,
projectInfo: {
root: projectRoot,
taskCount: finalTaskIds.length,
fileCount: filePaths.length
}
};
// Load prompts - the research template handles detail level internally
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
'research',
promptParams
// Build user prompt with context
const userPrompt = buildResearchUserPrompt(
query,
gatheredContext,
detailLevel
);
// Count tokens for system and user prompts
@@ -360,6 +349,94 @@ async function performResearch(
}
}
/**
* Build system prompt for research based on detail level
* @param {string} detailLevel - Detail level: 'low', 'medium', 'high'
* @param {string} projectRoot - Project root for context
* @returns {string} System prompt
*/
function buildResearchSystemPrompt(detailLevel, projectRoot) {
const basePrompt = `You are an expert AI research assistant helping with a software development project. You have access to project context including tasks, files, and project structure.
Your role is to provide comprehensive, accurate, and actionable research responses based on the user's query and the provided project context.`;
const detailInstructions = {
low: `
**Response Style: Concise & Direct**
- Provide brief, focused answers (2-4 paragraphs maximum)
- Focus on the most essential information
- Use bullet points for key takeaways
- Avoid lengthy explanations unless critical
- Skip pleasantries, introductions, and conclusions
- No phrases like "Based on your project context" or "I'll provide guidance"
- No summary outros or alignment statements
- Get straight to the actionable information
- Use simple, direct language - users want info, not explanation`,
medium: `
**Response Style: Balanced & Comprehensive**
- Provide thorough but well-structured responses (4-8 paragraphs)
- Include relevant examples and explanations
- Balance depth with readability
- Use headings and bullet points for organization`,
high: `
**Response Style: Detailed & Exhaustive**
- Provide comprehensive, in-depth analysis (8+ paragraphs)
- Include multiple perspectives and approaches
- Provide detailed examples, code snippets, and step-by-step guidance
- Cover edge cases and potential pitfalls
- Use clear structure with headings, subheadings, and lists`
};
return `${basePrompt}
${detailInstructions[detailLevel]}
**Guidelines:**
- Always consider the project context when formulating responses
- Reference specific tasks, files, or project elements when relevant
- Provide actionable insights that can be applied to the project
- If the query relates to existing project tasks, suggest how the research applies to those tasks
- Use markdown formatting for better readability
- Be precise and avoid speculation unless clearly marked as such
**For LOW detail level specifically:**
- Start immediately with the core information
- No introductory phrases or context acknowledgments
- No concluding summaries or project alignment statements
- Focus purely on facts, steps, and actionable items`;
}
/**
* Build user prompt with query and context
* @param {string} query - User's research query
* @param {string} gatheredContext - Gathered project context
* @param {string} detailLevel - Detail level for response guidance
* @returns {string} Complete user prompt
*/
function buildResearchUserPrompt(query, gatheredContext, detailLevel) {
let prompt = `# Research Query
${query}`;
if (gatheredContext && gatheredContext.trim()) {
prompt += `
# Project Context
${gatheredContext}`;
}
prompt += `
# Instructions
Please research and provide a ${detailLevel}-detail response to the query above. Consider the project context provided and make your response as relevant and actionable as possible for this specific project.`;
return prompt;
}
/**
* Display detailed token breakdown for context and prompts
* @param {Object} tokenBreakdown - Token breakdown from context gatherer

View File
@@ -22,7 +22,6 @@ import {
} from '../utils.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
@@ -161,7 +160,7 @@ async function updateSubtaskById(
tasks: finalTaskIds,
format: 'research'
});
gatheredContext = contextResult.context || '';
gatheredContext = contextResult;
}
} catch (contextError) {
report('warn', `Could not gather context: ${contextError.message}`);
@@ -214,7 +213,7 @@ async function updateSubtaskById(
title: parentTask.subtasks[subtaskIndex - 1].title,
status: parentTask.subtasks[subtaskIndex - 1].status
}
: undefined;
: null;
const nextSubtask =
subtaskIndex < parentTask.subtasks.length - 1
? {
@@ -222,27 +221,32 @@ async function updateSubtaskById(
title: parentTask.subtasks[subtaskIndex + 1].title,
status: parentTask.subtasks[subtaskIndex + 1].status
}
: undefined;
: null;
// Build prompts using PromptManager
const promptManager = getPromptManager();
const contextString = `
Parent Task: ${JSON.stringify(parentContext)}
${prevSubtask ? `Previous Subtask: ${JSON.stringify(prevSubtask)}` : ''}
${nextSubtask ? `Next Subtask: ${JSON.stringify(nextSubtask)}` : ''}
Current Subtask Details (for context only):\n${subtask.details || '(No existing details)'}
`;
const promptParams = {
parentTask: parentContext,
prevSubtask: prevSubtask,
nextSubtask: nextSubtask,
currentDetails: subtask.details || '(No existing details)',
updatePrompt: prompt,
useResearch: useResearch,
gatheredContext: gatheredContext || ''
};
const systemPrompt = `You are an AI assistant helping to update a subtask. You will be provided with the subtask's existing details, context about its parent and sibling tasks, and a user request string.
const variantKey = useResearch ? 'research' : 'default';
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
'update-subtask',
promptParams,
variantKey
);
Your Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the subtask's details.
Focus *only* on generating the substance of the update.
Output Requirements:
1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.
2. Your string response should NOT include any of the subtask's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.
3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.
4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with "Okay, here's the update...").`;
// Pass the existing subtask.details in the user prompt for the AI's context.
let userPrompt = `Task Context:\n${contextString}\n\nUser Request: "${prompt}"\n\nBased on the User Request and all the Task Context (including current subtask details provided above), what is the new information or text that should be appended to this subtask's details? Return ONLY this new text as a plain string.`;
if (gatheredContext) {
userPrompt += `\n\n# Additional Project Context\n\n${gatheredContext}`;
}
const role = useResearch ? 'research' : 'main';
report('info', `Using AI text service with role: ${role}`);


@@ -25,7 +25,6 @@ import {
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag, isApiKeySet } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
@@ -190,45 +189,8 @@ function parseUpdatedTaskFromText(text, expectedTaskId, logFn, isMCP) {
throw new Error('Parsed AI response is not a valid JSON object.');
}
// Preprocess the task to ensure subtasks have proper structure
const preprocessedTask = {
...parsedTask,
status: parsedTask.status || 'pending',
dependencies: Array.isArray(parsedTask.dependencies)
? parsedTask.dependencies
: [],
details:
typeof parsedTask.details === 'string'
? parsedTask.details
: String(parsedTask.details || ''),
testStrategy:
typeof parsedTask.testStrategy === 'string'
? parsedTask.testStrategy
: String(parsedTask.testStrategy || ''),
// Ensure subtasks is an array and each subtask has required fields
subtasks: Array.isArray(parsedTask.subtasks)
? parsedTask.subtasks.map((subtask) => ({
...subtask,
title: subtask.title || '',
description: subtask.description || '',
status: subtask.status || 'pending',
dependencies: Array.isArray(subtask.dependencies)
? subtask.dependencies
: [],
details:
typeof subtask.details === 'string'
? subtask.details
: String(subtask.details || ''),
testStrategy:
typeof subtask.testStrategy === 'string'
? subtask.testStrategy
: String(subtask.testStrategy || '')
}))
: []
};
// Validate the parsed task object using Zod
const validationResult = updatedTaskSchema.safeParse(preprocessedTask);
const validationResult = updatedTaskSchema.safeParse(parsedTask);
if (!validationResult.success) {
report('error', 'Parsed task object failed Zod validation.');
validationResult.error.errors.forEach((err) => {
@@ -383,7 +345,7 @@ async function updateTaskById(
tasks: finalTaskIds,
format: 'research'
});
gatheredContext = contextResult.context || '';
gatheredContext = contextResult;
}
} catch (contextError) {
report('warn', `Could not gather context: ${contextError.message}`);
@@ -446,61 +408,69 @@ async function updateTaskById(
);
}
// --- Build Prompts using PromptManager ---
const promptManager = getPromptManager();
const promptParams = {
task: taskToUpdate,
taskJson: JSON.stringify(taskToUpdate, null, 2),
updatePrompt: prompt,
appendMode: appendMode,
useResearch: useResearch,
currentDetails: taskToUpdate.details || '(No existing details)',
gatheredContext: gatheredContext || ''
};
const variantKey = appendMode
? 'append'
: useResearch
? 'research'
: 'default';
report(
'info',
`Loading prompt template with variant: ${variantKey}, appendMode: ${appendMode}, useResearch: ${useResearch}`
);
// --- Build Prompts (Different for append vs full update) ---
let systemPrompt;
let userPrompt;
try {
const promptResult = await promptManager.loadPrompt(
'update-task',
promptParams,
variantKey
);
report(
'info',
`Prompt result type: ${typeof promptResult}, keys: ${promptResult ? Object.keys(promptResult).join(', ') : 'null'}`
);
// Extract prompts - loadPrompt returns { systemPrompt, userPrompt, metadata }
systemPrompt = promptResult.systemPrompt;
userPrompt = promptResult.userPrompt;
if (appendMode) {
// Append mode: generate new content to add to task details
systemPrompt = `You are an AI assistant helping to append additional information to a software development task. You will be provided with the task's existing details, context, and a user request string.
report(
'info',
`Loaded prompts - systemPrompt length: ${systemPrompt?.length}, userPrompt length: ${userPrompt?.length}`
);
} catch (error) {
report('error', `Failed to load prompt template: ${error.message}`);
throw new Error(`Failed to load prompt template: ${error.message}`);
Your Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the task's details.
Focus *only* on generating the substance of the update.
Output Requirements:
1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.
2. Your string response should NOT include any of the task's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.
3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.
4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with "Okay, here's the update...").`;
const taskContext = `
Task: ${JSON.stringify({
id: taskToUpdate.id,
title: taskToUpdate.title,
description: taskToUpdate.description,
status: taskToUpdate.status
})}
Current Task Details (for context only):\n${taskToUpdate.details || '(No existing details)'}
`;
userPrompt = `Task Context:\n${taskContext}\n\nUser Request: "${prompt}"\n\nBased on the User Request and all the Task Context (including current task details provided above), what is the new information or text that should be appended to this task's details? Return ONLY this new text as a plain string.`;
if (gatheredContext) {
userPrompt += `\n\n# Additional Project Context\n\n${gatheredContext}`;
}
} else {
// Full update mode: use original prompts
systemPrompt = `You are an AI assistant helping to update a software development task based on new context.
You will be given a task and a prompt describing changes or new implementation details.
Your job is to update the task to reflect these changes, while preserving its basic structure.
Guidelines:
1. VERY IMPORTANT: NEVER change the title of the task - keep it exactly as is
2. Maintain the same ID, status, and dependencies unless specifically mentioned in the prompt
3. Update the description, details, and test strategy to reflect the new information
4. Do not change anything unnecessarily - just adapt what needs to change based on the prompt
5. Return a complete valid JSON object representing the updated task
6. VERY IMPORTANT: Preserve all subtasks marked as "done" or "completed" - do not modify their content
7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything
8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly
9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced
10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted
11. Ensure any new subtasks have unique IDs that don't conflict with existing ones
12. CRITICAL: For subtask IDs, use ONLY numeric values (1, 2, 3, etc.) NOT strings ("1", "2", "3")
13. CRITICAL: Subtask IDs should start from 1 and increment sequentially (1, 2, 3...) - do NOT use parent task ID as prefix
The changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.`;
const taskDataString = JSON.stringify(taskToUpdate, null, 2);
userPrompt = `Here is the task to update:\n${taskDataString}\n\nPlease update this task based on the following new context:\n${prompt}\n\nIMPORTANT: In the task JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items.`;
if (gatheredContext) {
userPrompt += `\n\n# Project Context\n\n${gatheredContext}`;
}
// If prompts are still not set, throw an error
if (!systemPrompt || !userPrompt) {
throw new Error(
`Failed to load prompts: systemPrompt=${!!systemPrompt}, userPrompt=${!!userPrompt}`
);
userPrompt += `\n\nReturn only the updated task as a valid JSON object.`;
}
// --- End Build Prompts ---


@@ -21,7 +21,6 @@ import {
} from '../ui.js';
import { getDebugFlag } from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { generateTextService } from '../ai-services-unified.js';
import { getModelConfiguration } from './models.js';
@@ -196,18 +195,7 @@ function parseUpdatedTasksFromText(text, expectedCount, logFn, isMCP) {
);
}
// Preprocess tasks to ensure required fields have proper defaults
const preprocessedTasks = parsedTasks.map((task) => ({
...task,
// Ensure subtasks is always an array (not null or undefined)
subtasks: Array.isArray(task.subtasks) ? task.subtasks : [],
// Ensure status has a default value if missing
status: task.status || 'pending',
// Ensure dependencies is always an array
dependencies: Array.isArray(task.dependencies) ? task.dependencies : []
}));
const validationResult = updatedTaskArraySchema.safeParse(preprocessedTasks);
const validationResult = updatedTaskArraySchema.safeParse(parsedTasks);
if (!validationResult.success) {
report('error', 'Parsed task array failed Zod validation.');
validationResult.error.errors.forEach((err) => {
@@ -311,7 +299,7 @@ async function updateTasks(
tasks: finalTaskIds,
format: 'research'
});
gatheredContext = contextResult.context || '';
gatheredContext = contextResult; // contextResult is a string
}
} catch (contextError) {
logFn(
@@ -380,18 +368,35 @@ async function updateTasks(
}
// --- End Display Tasks ---
// --- Build Prompts (Using PromptManager) ---
// Load prompts using PromptManager
const promptManager = getPromptManager();
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
'update-tasks',
{
tasks: tasksToUpdate,
updatePrompt: prompt,
useResearch,
projectContext: gatheredContext
// --- Build Prompts (Unchanged Core Logic) ---
// Keep the original system prompt logic
const systemPrompt = `You are an AI assistant helping to update software development tasks based on new context.
You will be given a set of tasks and a prompt describing changes or new implementation details.
Your job is to update the tasks to reflect these changes, while preserving their basic structure.
Guidelines:
1. Maintain the same IDs, statuses, and dependencies unless specifically mentioned in the prompt
2. Update titles, descriptions, details, and test strategies to reflect the new information
3. Do not change anything unnecessarily - just adapt what needs to change based on the prompt
4. You should return ALL the tasks in order, not just the modified ones
5. Return a complete valid JSON object with the updated tasks array
6. VERY IMPORTANT: Preserve all subtasks marked as "done" or "completed" - do not modify their content
7. For tasks with completed subtasks, build upon what has already been done rather than rewriting everything
8. If an existing completed subtask needs to be changed/undone based on the new context, DO NOT modify it directly
9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced
10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted
The changes described in the prompt should be applied to ALL tasks in the list.`;
// Keep the original user prompt logic
const taskDataString = JSON.stringify(tasksToUpdate, null, 2);
let userPrompt = `Here are the tasks to update:\n${taskDataString}\n\nPlease update these tasks based on the following new context:\n${prompt}\n\nIMPORTANT: In the tasks JSON above, any subtasks with "status": "done" or "status": "completed" should be preserved exactly as is. Build your changes around these completed items.`;
if (gatheredContext) {
userPrompt += `\n\n# Project Context\n\n${gatheredContext}`;
}
);
userPrompt += `\n\nReturn only the updated tasks as a valid JSON array.`;
// --- End Build Prompts ---
// --- AI Call ---
@@ -453,17 +458,7 @@ async function updateTasks(
data.tasks.forEach((task, index) => {
if (updatedTasksMap.has(task.id)) {
// Only update if the task was part of the set sent to AI
const updatedTask = updatedTasksMap.get(task.id);
// Merge the updated task with the existing one to preserve fields like subtasks
data.tasks[index] = {
...task, // Keep all existing fields
...updatedTask, // Override with updated fields
// Ensure subtasks field is preserved if not provided by AI
subtasks:
updatedTask.subtasks !== undefined
? updatedTask.subtasks
: task.subtasks
};
data.tasks[index] = updatedTasksMap.get(task.id);
actualUpdateCount++;
}
});


@@ -14,14 +14,6 @@ export class GroqProvider extends BaseAIProvider {
this.name = 'Groq';
}
/**
* Returns the environment variable name required for this provider's API key.
* @returns {string} The environment variable name for the Groq API key
*/
getRequiredApiKeyName() {
return 'GROQ_API_KEY';
}
/**
* Creates and returns a Groq client instance.
* @param {object} params - Parameters for client initialization


@@ -1,5 +1,5 @@
/**
* @typedef {'amp' | 'claude' | 'cline' | 'codex' | 'cursor' | 'gemini' | 'opencode' | 'roo' | 'trae' | 'windsurf' | 'vscode' | 'zed'} RulesProfile
* @typedef {'claude' | 'cline' | 'codex' | 'cursor' | 'gemini' | 'roo' | 'trae' | 'windsurf' | 'vscode'} RulesProfile
*/
/**
@@ -10,18 +10,15 @@
*
* @type {RulesProfile[]}
* @description Defines possible rule profile sets:
* - amp: Amp Code integration
* - claude: Claude Code integration
* - cline: Cline IDE rules
* - codex: Codex integration
* - cursor: Cursor IDE rules
* - gemini: Gemini integration
* - opencode: OpenCode integration
* - roo: Roo Code IDE rules
* - trae: Trae IDE rules
* - vscode: VS Code with GitHub Copilot integration
* - windsurf: Windsurf IDE rules
* - zed: Zed IDE rules
*
* To add a new rule profile:
* 1. Add the profile name to this array
@@ -29,18 +26,15 @@
* 3. Export it as {profile}Profile in src/profiles/index.js
*/
export const RULE_PROFILES = [
'amp',
'claude',
'cline',
'codex',
'cursor',
'gemini',
'opencode',
'roo',
'trae',
'vscode',
'windsurf',
'zed'
'windsurf'
];
/**


@@ -1,277 +0,0 @@
// Amp profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile } from './base-profile.js';
/**
* Transform standard MCP config format to Amp format
* @param {Object} mcpConfig - Standard MCP configuration object
* @returns {Object} - Transformed Amp configuration object
*/
function transformToAmpFormat(mcpConfig) {
const ampConfig = {};
// Transform mcpServers to amp.mcpServers
if (mcpConfig.mcpServers) {
ampConfig['amp.mcpServers'] = mcpConfig.mcpServers;
}
// Preserve any other existing settings
for (const [key, value] of Object.entries(mcpConfig)) {
if (key !== 'mcpServers') {
ampConfig[key] = value;
}
}
return ampConfig;
}
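As a rough sanity check, the transform above can be exercised standalone. This is a sketch assuming the config shape used in this file; `editor.tabSize` is a hypothetical pre-existing user setting, not something from the source:

```javascript
// Sketch mirroring transformToAmpFormat above: mcpServers is renamed to
// 'amp.mcpServers' and every other top-level key is carried over untouched.
function transformToAmpFormat(mcpConfig) {
	const ampConfig = {};
	if (mcpConfig.mcpServers) {
		ampConfig['amp.mcpServers'] = mcpConfig.mcpServers;
	}
	for (const [key, value] of Object.entries(mcpConfig)) {
		if (key !== 'mcpServers') {
			ampConfig[key] = value;
		}
	}
	return ampConfig;
}

const input = {
	mcpServers: {
		'task-master-ai': { command: 'npx', args: ['-y', 'task-master-ai'] }
	},
	'editor.tabSize': 2 // hypothetical existing setting, preserved as-is
};
const out = transformToAmpFormat(input);
console.log(Object.keys(out)); // [ 'amp.mcpServers', 'editor.tabSize' ]
```

This relies on plain-object insertion order: `amp.mcpServers` is written first, then the remaining keys in their original order.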
// Lifecycle functions for Amp profile
function onAddRulesProfile(targetDir, assetsDir) {
// Handle AGENT.md import for non-destructive integration (Amp uses AGENT.md, copies from AGENTS.md)
const sourceFile = path.join(assetsDir, 'AGENTS.md');
const userAgentFile = path.join(targetDir, 'AGENT.md');
const taskMasterAgentFile = path.join(targetDir, '.taskmaster', 'AGENT.md');
const importLine = '@./.taskmaster/AGENT.md';
const importSection = `\n## Task Master AI Instructions\n**Import Task Master's development workflow commands and guidelines, treat as if import is in the main AGENT.md file.**\n${importLine}`;
if (fs.existsSync(sourceFile)) {
try {
// Ensure .taskmaster directory exists
const taskMasterDir = path.join(targetDir, '.taskmaster');
if (!fs.existsSync(taskMasterDir)) {
fs.mkdirSync(taskMasterDir, { recursive: true });
}
// Copy Task Master instructions to .taskmaster/AGENT.md
fs.copyFileSync(sourceFile, taskMasterAgentFile);
log(
'debug',
`[Amp] Created Task Master instructions at ${taskMasterAgentFile}`
);
// Handle user's AGENT.md
if (fs.existsSync(userAgentFile)) {
// Check if import already exists
const content = fs.readFileSync(userAgentFile, 'utf8');
if (!content.includes(importLine)) {
// Append import section at the end
const updatedContent = content.trim() + '\n' + importSection + '\n';
fs.writeFileSync(userAgentFile, updatedContent);
log(
'info',
`[Amp] Added Task Master import to existing ${userAgentFile}`
);
} else {
log(
'info',
`[Amp] Task Master import already present in ${userAgentFile}`
);
}
} else {
// Create minimal AGENT.md with the import section
const minimalContent = `# Amp Instructions\n${importSection}\n`;
fs.writeFileSync(userAgentFile, minimalContent);
log('info', `[Amp] Created ${userAgentFile} with Task Master import`);
}
} catch (err) {
log('error', `[Amp] Failed to set up Amp instructions: ${err.message}`);
}
}
// MCP transformation will be handled in onPostConvertRulesProfile
}
function onRemoveRulesProfile(targetDir) {
// Clean up AGENT.md import (Amp uses AGENT.md, not AGENTS.md)
const userAgentFile = path.join(targetDir, 'AGENT.md');
const taskMasterAgentFile = path.join(targetDir, '.taskmaster', 'AGENT.md');
const importLine = '@./.taskmaster/AGENT.md';
try {
// Remove Task Master AGENT.md from .taskmaster
if (fs.existsSync(taskMasterAgentFile)) {
fs.rmSync(taskMasterAgentFile, { force: true });
log('debug', `[Amp] Removed ${taskMasterAgentFile}`);
}
// Clean up import from user's AGENT.md
if (fs.existsSync(userAgentFile)) {
const content = fs.readFileSync(userAgentFile, 'utf8');
const lines = content.split('\n');
const filteredLines = [];
let skipNextLines = 0;
// Remove the Task Master section
for (let i = 0; i < lines.length; i++) {
if (skipNextLines > 0) {
skipNextLines--;
continue;
}
// Check if this is the start of our Task Master section
if (lines[i].includes('## Task Master AI Instructions')) {
// Skip this line and the next two lines (bold text and import)
skipNextLines = 2;
continue;
}
// Also remove standalone import lines (for backward compatibility)
if (lines[i].trim() === importLine) {
continue;
}
filteredLines.push(lines[i]);
}
// Join back and clean up excessive newlines
let updatedContent = filteredLines
.join('\n')
.replace(/\n{3,}/g, '\n\n')
.trim();
// Check if file only contained our minimal template
if (updatedContent === '# Amp Instructions' || updatedContent === '') {
// File only contained our import, remove it
fs.rmSync(userAgentFile, { force: true });
log('debug', `[Amp] Removed empty ${userAgentFile}`);
} else {
// Write back without the import
fs.writeFileSync(userAgentFile, updatedContent + '\n');
log('debug', `[Amp] Removed Task Master import from ${userAgentFile}`);
}
}
} catch (err) {
log('error', `[Amp] Failed to remove Amp instructions: ${err.message}`);
}
// MCP Removal: Remove amp.mcpServers section
const mcpConfigPath = path.join(targetDir, '.vscode', 'settings.json');
if (!fs.existsSync(mcpConfigPath)) {
log('debug', '[Amp] No .vscode/settings.json found to clean up');
return;
}
try {
// Read the current config
const configContent = fs.readFileSync(mcpConfigPath, 'utf8');
const config = JSON.parse(configContent);
// Check if it has the amp.mcpServers section and task-master-ai server
if (
config['amp.mcpServers'] &&
config['amp.mcpServers']['task-master-ai']
) {
// Remove task-master-ai server
delete config['amp.mcpServers']['task-master-ai'];
// Check if there are other MCP servers in amp.mcpServers
const remainingServers = Object.keys(config['amp.mcpServers']);
if (remainingServers.length === 0) {
// No other servers, remove entire amp.mcpServers section
delete config['amp.mcpServers'];
log('debug', '[Amp] Removed empty amp.mcpServers section');
}
// Check if config is now empty
const remainingKeys = Object.keys(config);
if (remainingKeys.length === 0) {
// Config is empty, remove entire file
fs.rmSync(mcpConfigPath, { force: true });
log('info', '[Amp] Removed empty settings.json file');
// Check if .vscode directory is empty
const vscodeDirPath = path.join(targetDir, '.vscode');
if (fs.existsSync(vscodeDirPath)) {
const remainingContents = fs.readdirSync(vscodeDirPath);
if (remainingContents.length === 0) {
fs.rmSync(vscodeDirPath, { recursive: true, force: true });
log('debug', '[Amp] Removed empty .vscode directory');
}
}
} else {
// Write back the modified config
fs.writeFileSync(
mcpConfigPath,
JSON.stringify(config, null, '\t') + '\n'
);
log(
'info',
'[Amp] Removed TaskMaster from settings.json, preserved other configurations'
);
}
} else {
log('debug', '[Amp] TaskMaster not found in amp.mcpServers');
}
} catch (error) {
log('error', `[Amp] Failed to clean up settings.json: ${error.message}`);
}
}
function onPostConvertRulesProfile(targetDir, assetsDir) {
// Handle AGENT.md setup (same as onAddRulesProfile)
onAddRulesProfile(targetDir, assetsDir);
// Transform MCP config to Amp format
const mcpConfigPath = path.join(targetDir, '.vscode', 'settings.json');
if (!fs.existsSync(mcpConfigPath)) {
log('debug', '[Amp] No .vscode/settings.json found to transform');
return;
}
try {
// Read the generated standard MCP config
const mcpConfigContent = fs.readFileSync(mcpConfigPath, 'utf8');
const mcpConfig = JSON.parse(mcpConfigContent);
// Check if it's already in Amp format (has amp.mcpServers)
if (mcpConfig['amp.mcpServers']) {
log(
'info',
'[Amp] settings.json already in Amp format, skipping transformation'
);
return;
}
// Transform to Amp format
const ampConfig = transformToAmpFormat(mcpConfig);
// Write back the transformed config with proper formatting
fs.writeFileSync(
mcpConfigPath,
JSON.stringify(ampConfig, null, '\t') + '\n'
);
log('info', '[Amp] Transformed settings.json to Amp format');
log('debug', '[Amp] Renamed mcpServers to amp.mcpServers');
} catch (error) {
log('error', `[Amp] Failed to transform settings.json: ${error.message}`);
}
}
// Create and export amp profile using the base factory
export const ampProfile = createProfile({
name: 'amp',
displayName: 'Amp',
url: 'ampcode.com',
docsUrl: 'ampcode.com/manual',
profileDir: '.vscode',
rulesDir: '.',
mcpConfig: true,
mcpConfigName: 'settings.json',
includeDefaultRules: false,
fileMap: {
'AGENTS.md': '.taskmaster/AGENT.md'
},
onAdd: onAddRulesProfile,
onRemove: onRemoveRulesProfile,
onPostConvert: onPostConvertRulesProfile
});
// Export lifecycle functions separately to avoid naming conflicts
export { onAddRulesProfile, onRemoveRulesProfile, onPostConvertRulesProfile };


@@ -46,9 +46,7 @@ export function createProfile(editorConfig) {
onPostConvert
} = editorConfig;
const mcpConfigPath = mcpConfigName
? path.join(profileDir, mcpConfigName)
: null;
const mcpConfigPath = mcpConfigName ? `${profileDir}/${mcpConfigName}` : null;
// Standard file mapping with custom overrides
// Use taskmaster subdirectory only if profile supports it


@@ -59,63 +59,6 @@ function onAddRulesProfile(targetDir, assetsDir) {
`[Claude] An error occurred during directory copy: ${err.message}`
);
}
// Handle CLAUDE.md import for non-destructive integration
const sourceFile = path.join(assetsDir, 'AGENTS.md');
const userClaudeFile = path.join(targetDir, 'CLAUDE.md');
const taskMasterClaudeFile = path.join(targetDir, '.taskmaster', 'CLAUDE.md');
const importLine = '@./.taskmaster/CLAUDE.md';
const importSection = `\n## Task Master AI Instructions\n**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**\n${importLine}`;
if (fs.existsSync(sourceFile)) {
try {
// Ensure .taskmaster directory exists
const taskMasterDir = path.join(targetDir, '.taskmaster');
if (!fs.existsSync(taskMasterDir)) {
fs.mkdirSync(taskMasterDir, { recursive: true });
}
// Copy Task Master instructions to .taskmaster/CLAUDE.md
fs.copyFileSync(sourceFile, taskMasterClaudeFile);
log(
'debug',
`[Claude] Created Task Master instructions at ${taskMasterClaudeFile}`
);
// Handle user's CLAUDE.md
if (fs.existsSync(userClaudeFile)) {
// Check if import already exists
const content = fs.readFileSync(userClaudeFile, 'utf8');
if (!content.includes(importLine)) {
// Append import section at the end
const updatedContent = content.trim() + '\n' + importSection + '\n';
fs.writeFileSync(userClaudeFile, updatedContent);
log(
'info',
`[Claude] Added Task Master import to existing ${userClaudeFile}`
);
} else {
log(
'info',
`[Claude] Task Master import already present in ${userClaudeFile}`
);
}
} else {
// Create minimal CLAUDE.md with the import section
const minimalContent = `# Claude Code Instructions\n${importSection}\n`;
fs.writeFileSync(userClaudeFile, minimalContent);
log(
'info',
`[Claude] Created ${userClaudeFile} with Task Master import`
);
}
} catch (err) {
log(
'error',
`[Claude] Failed to set up Claude instructions: ${err.message}`
);
}
}
}
function onRemoveRulesProfile(targetDir) {
@@ -124,146 +67,11 @@ function onRemoveRulesProfile(targetDir) {
if (removeDirectoryRecursive(claudeDir)) {
log('debug', `[Claude] Removed .claude directory from ${claudeDir}`);
}
// Clean up CLAUDE.md import
const userClaudeFile = path.join(targetDir, 'CLAUDE.md');
const taskMasterClaudeFile = path.join(targetDir, '.taskmaster', 'CLAUDE.md');
const importLine = '@./.taskmaster/CLAUDE.md';
try {
// Remove Task Master CLAUDE.md from .taskmaster
if (fs.existsSync(taskMasterClaudeFile)) {
fs.rmSync(taskMasterClaudeFile, { force: true });
log('debug', `[Claude] Removed ${taskMasterClaudeFile}`);
}
// Clean up import from user's CLAUDE.md
if (fs.existsSync(userClaudeFile)) {
const content = fs.readFileSync(userClaudeFile, 'utf8');
const lines = content.split('\n');
const filteredLines = [];
let skipNextLines = 0;
// Remove the Task Master section
for (let i = 0; i < lines.length; i++) {
if (skipNextLines > 0) {
skipNextLines--;
continue;
}
// Check if this is the start of our Task Master section
if (lines[i].includes('## Task Master AI Instructions')) {
// Skip this line and the next two lines (bold text and import)
skipNextLines = 2;
continue;
}
// Also remove standalone import lines (for backward compatibility)
if (lines[i].trim() === importLine) {
continue;
}
filteredLines.push(lines[i]);
}
// Join back and clean up excessive newlines
let updatedContent = filteredLines
.join('\n')
.replace(/\n{3,}/g, '\n\n')
.trim();
// Check if file only contained our minimal template
if (
updatedContent === '# Claude Code Instructions' ||
updatedContent === ''
) {
// File only contained our import, remove it
fs.rmSync(userClaudeFile, { force: true });
log('debug', `[Claude] Removed empty ${userClaudeFile}`);
} else {
// Write back without the import
fs.writeFileSync(userClaudeFile, updatedContent + '\n');
log(
'debug',
`[Claude] Removed Task Master import from ${userClaudeFile}`
);
}
}
} catch (err) {
log(
'error',
`[Claude] Failed to remove Claude instructions: ${err.message}`
);
}
}
/**
* Transform standard MCP config format to Claude format
* @param {Object} mcpConfig - Standard MCP configuration object
* @returns {Object} - Transformed Claude configuration object
*/
function transformToClaudeFormat(mcpConfig) {
const claudeConfig = {};
// Transform mcpServers to servers (keeping the same structure but adding type)
if (mcpConfig.mcpServers) {
claudeConfig.mcpServers = {};
for (const [serverName, serverConfig] of Object.entries(
mcpConfig.mcpServers
)) {
// Transform server configuration with type as first key
const reorderedServer = {};
// Add type: "stdio" as the first key
reorderedServer.type = 'stdio';
// Then add the rest of the properties in order
if (serverConfig.command) reorderedServer.command = serverConfig.command;
if (serverConfig.args) reorderedServer.args = serverConfig.args;
if (serverConfig.env) reorderedServer.env = serverConfig.env;
// Add any other properties that might exist
Object.keys(serverConfig).forEach((key) => {
if (!['command', 'args', 'env', 'type'].includes(key)) {
reorderedServer[key] = serverConfig[key];
}
});
claudeConfig.mcpServers[serverName] = reorderedServer;
}
}
return claudeConfig;
}
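A standalone sketch of the same reordering, showing that `type: 'stdio'` lands as the first key of each server entry even when the input lists `env` first (the server name and values here are illustrative, not from the source):

```javascript
// Sketch of transformToClaudeFormat above: each server entry is rebuilt with
// type: 'stdio' first, then command/args/env, then any remaining extras.
function transformToClaudeFormat(mcpConfig) {
	const claudeConfig = {};
	if (mcpConfig.mcpServers) {
		claudeConfig.mcpServers = {};
		for (const [serverName, serverConfig] of Object.entries(
			mcpConfig.mcpServers
		)) {
			const reorderedServer = { type: 'stdio' };
			if (serverConfig.command) reorderedServer.command = serverConfig.command;
			if (serverConfig.args) reorderedServer.args = serverConfig.args;
			if (serverConfig.env) reorderedServer.env = serverConfig.env;
			for (const key of Object.keys(serverConfig)) {
				if (!['command', 'args', 'env', 'type'].includes(key)) {
					reorderedServer[key] = serverConfig[key];
				}
			}
			claudeConfig.mcpServers[serverName] = reorderedServer;
		}
	}
	return claudeConfig;
}

const server = transformToClaudeFormat({
	mcpServers: {
		'task-master-ai': {
			env: { ANTHROPIC_API_KEY: 'example-key' }, // illustrative value
			command: 'npx',
			args: ['-y', 'task-master-ai']
		}
	}
}).mcpServers['task-master-ai'];
console.log(Object.keys(server)); // [ 'type', 'command', 'args', 'env' ]
```

Key order matters here only cosmetically (Claude reads the config by key, not position), but the reorder keeps the generated `.mcp.json` readable.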
function onPostConvertRulesProfile(targetDir, assetsDir) {
// For Claude, post-convert is the same as add since we don't transform rules
onAddRulesProfile(targetDir, assetsDir);
// Transform MCP configuration to Claude format
const mcpConfigPath = path.join(targetDir, '.mcp.json');
if (fs.existsSync(mcpConfigPath)) {
try {
const mcpConfig = JSON.parse(fs.readFileSync(mcpConfigPath, 'utf8'));
const claudeConfig = transformToClaudeFormat(mcpConfig);
// Write back the transformed configuration
fs.writeFileSync(
mcpConfigPath,
JSON.stringify(claudeConfig, null, '\t') + '\n'
);
log(
'debug',
`[Claude] Transformed MCP configuration to Claude format at ${mcpConfigPath}`
);
} catch (err) {
log(
'error',
`[Claude] Failed to transform MCP configuration: ${err.message}`
);
}
}
}
// Create and export claude profile using the base factory
@@ -274,10 +82,11 @@ export const claudeProfile = createProfile({
docsUrl: 'docs.anthropic.com/en/docs/claude-code',
profileDir: '.', // Root directory
rulesDir: '.', // No specific rules directory needed
mcpConfigName: '.mcp.json', // Place MCP config in project root
mcpConfig: false,
mcpConfigName: null,
includeDefaultRules: false,
fileMap: {
'AGENTS.md': '.taskmaster/CLAUDE.md'
'AGENTS.md': 'CLAUDE.md'
},
onAdd: onAddRulesProfile,
onRemove: onRemoveRulesProfile,


@@ -1,13 +1,10 @@
// Profile exports for centralized importing
export { ampProfile } from './amp.js';
export { claudeProfile } from './claude.js';
export { clineProfile } from './cline.js';
export { codexProfile } from './codex.js';
export { cursorProfile } from './cursor.js';
export { geminiProfile } from './gemini.js';
export { opencodeProfile } from './opencode.js';
export { rooProfile } from './roo.js';
export { traeProfile } from './trae.js';
export { vscodeProfile } from './vscode.js';
export { windsurfProfile } from './windsurf.js';
export { zedProfile } from './zed.js';


@@ -1,183 +0,0 @@
// Opencode profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { log } from '../../scripts/modules/utils.js';
import { createProfile } from './base-profile.js';
/**
* Transform standard MCP config format to OpenCode format
* @param {Object} mcpConfig - Standard MCP configuration object
* @returns {Object} - Transformed OpenCode configuration object
*/
function transformToOpenCodeFormat(mcpConfig) {
const openCodeConfig = {
$schema: 'https://opencode.ai/config.json'
};
// Transform mcpServers to mcp
if (mcpConfig.mcpServers) {
openCodeConfig.mcp = {};
for (const [serverName, serverConfig] of Object.entries(
mcpConfig.mcpServers
)) {
// Transform server configuration
const transformedServer = {
type: 'local'
};
// Combine command and args into single command array
if (serverConfig.command && serverConfig.args) {
transformedServer.command = [
serverConfig.command,
...serverConfig.args
];
} else if (serverConfig.command) {
transformedServer.command = [serverConfig.command];
}
// Add enabled flag
transformedServer.enabled = true;
// Transform env to environment
if (serverConfig.env) {
transformedServer.environment = serverConfig.env;
}
      // Store the transformed server config
openCodeConfig.mcp[serverName] = transformedServer;
}
}
return openCodeConfig;
}
/**
* Lifecycle function called after MCP config generation to transform to OpenCode format
* @param {string} targetDir - Target project directory
* @param {string} assetsDir - Assets directory (unused for OpenCode)
*/
function onPostConvertRulesProfile(targetDir, assetsDir) {
const openCodeConfigPath = path.join(targetDir, 'opencode.json');
if (!fs.existsSync(openCodeConfigPath)) {
log('debug', '[OpenCode] No opencode.json found to transform');
return;
}
try {
// Read the generated standard MCP config
const mcpConfigContent = fs.readFileSync(openCodeConfigPath, 'utf8');
const mcpConfig = JSON.parse(mcpConfigContent);
// Check if it's already in OpenCode format (has $schema)
if (mcpConfig.$schema) {
log(
'info',
'[OpenCode] opencode.json already in OpenCode format, skipping transformation'
);
return;
}
// Transform to OpenCode format
const openCodeConfig = transformToOpenCodeFormat(mcpConfig);
// Write back the transformed config with proper formatting
fs.writeFileSync(
openCodeConfigPath,
JSON.stringify(openCodeConfig, null, 2) + '\n'
);
log('info', '[OpenCode] Transformed opencode.json to OpenCode format');
log(
'debug',
`[OpenCode] Added schema, renamed mcpServers->mcp, combined command+args, added type/enabled, renamed env->environment`
);
} catch (error) {
log(
'error',
`[OpenCode] Failed to transform opencode.json: ${error.message}`
);
}
}
/**
* Lifecycle function called when removing OpenCode profile
* @param {string} targetDir - Target project directory
*/
function onRemoveRulesProfile(targetDir) {
const openCodeConfigPath = path.join(targetDir, 'opencode.json');
if (!fs.existsSync(openCodeConfigPath)) {
log('debug', '[OpenCode] No opencode.json found to clean up');
return;
}
try {
// Read the current config
const configContent = fs.readFileSync(openCodeConfigPath, 'utf8');
const config = JSON.parse(configContent);
// Check if it has the mcp section and taskmaster-ai server
if (config.mcp && config.mcp['taskmaster-ai']) {
// Remove taskmaster-ai server
delete config.mcp['taskmaster-ai'];
// Check if there are other MCP servers
const remainingServers = Object.keys(config.mcp);
if (remainingServers.length === 0) {
// No other servers, remove entire mcp section
delete config.mcp;
}
// Check if config is now empty (only has $schema)
const remainingKeys = Object.keys(config).filter(
(key) => key !== '$schema'
);
if (remainingKeys.length === 0) {
// Config only has schema left, remove entire file
fs.rmSync(openCodeConfigPath, { force: true });
log('info', '[OpenCode] Removed empty opencode.json file');
} else {
// Write back the modified config
fs.writeFileSync(
openCodeConfigPath,
JSON.stringify(config, null, 2) + '\n'
);
log(
'info',
'[OpenCode] Removed TaskMaster from opencode.json, preserved other configurations'
);
}
} else {
log('debug', '[OpenCode] TaskMaster not found in opencode.json');
}
} catch (error) {
log(
'error',
`[OpenCode] Failed to clean up opencode.json: ${error.message}`
);
}
}
// Create and export opencode profile using the base factory
export const opencodeProfile = createProfile({
name: 'opencode',
displayName: 'OpenCode',
url: 'opencode.ai',
docsUrl: 'opencode.ai/docs/',
profileDir: '.', // Root directory
rulesDir: '.', // Root directory for AGENTS.md
mcpConfigName: 'opencode.json', // Override default 'mcp.json'
includeDefaultRules: false,
fileMap: {
'AGENTS.md': 'AGENTS.md'
},
onPostConvert: onPostConvertRulesProfile,
onRemove: onRemoveRulesProfile
});
// Export lifecycle functions separately to avoid naming conflicts
export { onPostConvertRulesProfile, onRemoveRulesProfile };


@@ -1,178 +0,0 @@
// Zed profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile } from './base-profile.js';
/**
* Transform standard MCP config format to Zed format
* @param {Object} mcpConfig - Standard MCP configuration object
* @returns {Object} - Transformed Zed configuration object
*/
function transformToZedFormat(mcpConfig) {
const zedConfig = {};
// Transform mcpServers to context_servers
if (mcpConfig.mcpServers) {
zedConfig['context_servers'] = mcpConfig.mcpServers;
}
// Preserve any other existing settings
for (const [key, value] of Object.entries(mcpConfig)) {
if (key !== 'mcpServers') {
zedConfig[key] = value;
}
}
return zedConfig;
}
// Lifecycle functions for Zed profile
function onAddRulesProfile(targetDir, assetsDir) {
// MCP transformation will be handled in onPostConvertRulesProfile
// File copying is handled by the base profile via fileMap
}
function onRemoveRulesProfile(targetDir) {
// Clean up .rules (Zed uses .rules directly in root)
const userRulesFile = path.join(targetDir, '.rules');
try {
// Remove Task Master .rules
if (fs.existsSync(userRulesFile)) {
fs.rmSync(userRulesFile, { force: true });
log('debug', `[Zed] Removed ${userRulesFile}`);
}
} catch (err) {
log('error', `[Zed] Failed to remove Zed instructions: ${err.message}`);
}
// MCP Removal: Remove context_servers section
const mcpConfigPath = path.join(targetDir, '.zed', 'settings.json');
if (!fs.existsSync(mcpConfigPath)) {
log('debug', '[Zed] No .zed/settings.json found to clean up');
return;
}
try {
// Read the current config
const configContent = fs.readFileSync(mcpConfigPath, 'utf8');
const config = JSON.parse(configContent);
// Check if it has the context_servers section and task-master-ai server
if (
config['context_servers'] &&
config['context_servers']['task-master-ai']
) {
// Remove task-master-ai server
delete config['context_servers']['task-master-ai'];
// Check if there are other MCP servers in context_servers
const remainingServers = Object.keys(config['context_servers']);
if (remainingServers.length === 0) {
// No other servers, remove entire context_servers section
delete config['context_servers'];
log('debug', '[Zed] Removed empty context_servers section');
}
// Check if config is now empty
const remainingKeys = Object.keys(config);
if (remainingKeys.length === 0) {
// Config is empty, remove entire file
fs.rmSync(mcpConfigPath, { force: true });
log('info', '[Zed] Removed empty settings.json file');
// Check if .zed directory is empty
const zedDirPath = path.join(targetDir, '.zed');
if (fs.existsSync(zedDirPath)) {
const remainingContents = fs.readdirSync(zedDirPath);
if (remainingContents.length === 0) {
fs.rmSync(zedDirPath, { recursive: true, force: true });
log('debug', '[Zed] Removed empty .zed directory');
}
}
} else {
// Write back the modified config
fs.writeFileSync(
mcpConfigPath,
JSON.stringify(config, null, '\t') + '\n'
);
log(
'info',
'[Zed] Removed TaskMaster from settings.json, preserved other configurations'
);
}
} else {
log('debug', '[Zed] TaskMaster not found in context_servers');
}
} catch (error) {
log('error', `[Zed] Failed to clean up settings.json: ${error.message}`);
}
}
function onPostConvertRulesProfile(targetDir, assetsDir) {
// Handle .rules setup (same as onAddRulesProfile)
onAddRulesProfile(targetDir, assetsDir);
// Transform MCP config to Zed format
const mcpConfigPath = path.join(targetDir, '.zed', 'settings.json');
if (!fs.existsSync(mcpConfigPath)) {
log('debug', '[Zed] No .zed/settings.json found to transform');
return;
}
try {
// Read the generated standard MCP config
const mcpConfigContent = fs.readFileSync(mcpConfigPath, 'utf8');
const mcpConfig = JSON.parse(mcpConfigContent);
// Check if it's already in Zed format (has context_servers)
if (mcpConfig['context_servers']) {
log(
'info',
'[Zed] settings.json already in Zed format, skipping transformation'
);
return;
}
// Transform to Zed format
const zedConfig = transformToZedFormat(mcpConfig);
// Write back the transformed config with proper formatting
fs.writeFileSync(
mcpConfigPath,
JSON.stringify(zedConfig, null, '\t') + '\n'
);
log('info', '[Zed] Transformed settings.json to Zed format');
log('debug', '[Zed] Renamed mcpServers to context_servers');
} catch (error) {
log('error', `[Zed] Failed to transform settings.json: ${error.message}`);
}
}
// Create and export zed profile using the base factory
export const zedProfile = createProfile({
name: 'zed',
displayName: 'Zed',
url: 'zed.dev',
docsUrl: 'zed.dev/docs',
profileDir: '.zed',
rulesDir: '.',
mcpConfig: true,
mcpConfigName: 'settings.json',
includeDefaultRules: false,
fileMap: {
'AGENTS.md': '.rules'
},
onAdd: onAddRulesProfile,
onRemove: onRemoveRulesProfile,
onPostConvert: onPostConvertRulesProfile
});
// Export lifecycle functions separately to avoid naming conflicts
export { onAddRulesProfile, onRemoveRulesProfile, onPostConvertRulesProfile };


@@ -1,572 +0,0 @@
# Task Master Prompt Management System
This directory contains the centralized prompt templates for all AI-powered features in Task Master.
## Overview
The prompt management system provides:
- **Centralized Storage**: All prompts in one location (`/src/prompts`)
- **JSON Schema Validation**: Comprehensive validation using AJV with detailed error reporting
- **Version Control**: Track changes to prompts over time
- **Variant Support**: Different prompts for different contexts (research mode, complexity levels, etc.)
- **Template Variables**: Dynamic prompt generation with variable substitution
- **IDE Integration**: VS Code IntelliSense and validation support
## Directory Structure
```
src/prompts/
├── README.md # This file
├── schemas/ # JSON schemas for validation
│ ├── README.md # Schema documentation
│ ├── prompt-template.schema.json # Main template schema
│ ├── parameter.schema.json # Parameter validation schema
│ └── variant.schema.json # Prompt variant schema
├── parse-prd.json # PRD parsing prompts
├── expand-task.json # Task expansion prompts
├── add-task.json # Task creation prompts
├── update-tasks.json # Bulk task update prompts
├── update-task.json # Single task update prompts
├── update-subtask.json # Subtask update prompts
├── analyze-complexity.json # Complexity analysis prompts
└── research.json # Research query prompts
```
## Schema Validation
All prompt templates are validated against JSON schemas located in `/src/prompts/schemas/`. The validation system:
- **Structural Validation**: Ensures required fields and proper nesting
- **Parameter Type Checking**: Validates parameter types, patterns, and ranges
- **Template Syntax**: Validates Handlebars syntax and variable references
- **Semantic Versioning**: Enforces proper version format
- **Cross-Reference Validation**: Ensures parameters match template variables
### Validation Features
- **Required Fields**: `id`, `version`, `description`, `prompts.default`
- **Type Safety**: String, number, boolean, array, object validation
- **Pattern Matching**: Regex validation for string parameters
- **Range Validation**: Min/max values for numeric parameters
- **Enum Constraints**: Restricted value sets for categorical parameters
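For illustration, the basic structural fallback (used when no schema file is available) might look like the sketch below. The function name and exact checks are hypothetical, but the required fields and semantic-versioning rule follow the lists above; the real PromptManager performs full validation with AJV.

```javascript
// Minimal structural check mirroring the "Required Fields" list above.
// Hypothetical sketch — not the actual PromptManager implementation.
function validateTemplateStructure(template) {
	const errors = [];
	for (const field of ['id', 'version', 'description']) {
		if (typeof template[field] !== 'string' || template[field].length === 0) {
			errors.push(`Missing required field: ${field}`);
		}
	}
	if (!template.prompts || !template.prompts.default) {
		errors.push('Missing required field: prompts.default');
	}
	// Enforce semantic versioning (MAJOR.MINOR.PATCH)
	if (template.version && !/^\d+\.\d+\.\d+$/.test(template.version)) {
		errors.push(`Invalid semver: ${template.version}`);
	}
	return { valid: errors.length === 0, errors };
}
```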
## Development Workflow
### Setting Up Development Environment
1. **VS Code Integration**: Schemas are automatically configured for IntelliSense
2. **Dependencies**: `ajv` and `ajv-formats` are required for validation
3. **File Watching**: Changes to templates trigger automatic validation
### Creating New Prompts
1. Create a new `.json` file in `/src/prompts/`
2. Follow the schema structure (see Template Structure section)
3. Define parameters with proper types and validation
4. Create system and user prompts with template variables
5. Test with the PromptManager before committing
### Modifying Existing Prompts
1. Update the `version` field following semantic versioning
2. Maintain backward compatibility when possible
3. Test with existing code that uses the prompt
4. Update documentation if parameters change
## Prompt Template Reference
### 1. parse-prd.json
**Purpose**: Parse a Product Requirements Document into structured tasks
**Variants**: `default`, `research` (when research mode is enabled)
**Required Parameters**:
- `numTasks` (number): Target number of tasks to generate
- `nextId` (number): Starting ID for tasks
- `prdContent` (string): Content of the PRD file
- `prdPath` (string): Path to the PRD file
- `defaultTaskPriority` (string): Default priority for generated tasks
**Optional Parameters**:
- `research` (boolean): Enable research mode for latest best practices (default: false)
**Usage**: Used by `task-master parse-prd` command to convert PRD documents into actionable task lists.
### 2. add-task.json
**Purpose**: Generate a new task based on user description
**Variants**: `default`, `research` (when research mode is enabled)
**Required Parameters**:
- `prompt` (string): User's task description
- `newTaskId` (number): ID for the new task
**Optional Parameters**:
- `existingTasks` (array): List of existing tasks for context
- `gatheredContext` (string): Context gathered from codebase analysis
- `contextFromArgs` (string): Additional context from manual args
- `priority` (string): Task priority (high/medium/low, default: medium)
- `dependencies` (array): Task dependency IDs
- `useResearch` (boolean): Use research mode (default: false)
**Usage**: Used by `task-master add-task` command to create new tasks with AI assistance.
### 3. expand-task.json
**Purpose**: Break down a task into detailed subtasks with three sophisticated strategies
**Variants**: `complexity-report` (when expansionPrompt exists), `research` (when research mode is enabled), `default` (standard case)
**Required Parameters**:
- `subtaskCount` (number): Number of subtasks to generate
- `task` (object): The task to expand
- `nextSubtaskId` (number): Starting ID for new subtasks
**Optional Parameters**:
- `additionalContext` (string): Additional context for expansion (default: "")
- `complexityReasoningContext` (string): Complexity analysis reasoning context (default: "")
- `gatheredContext` (string): Gathered project context (default: "")
- `useResearch` (boolean): Use research mode (default: false)
- `expansionPrompt` (string): Expansion prompt from complexity report
**Variant Selection Strategy**:
1. **complexity-report**: Used when `expansionPrompt` exists (highest priority)
2. **research**: Used when `useResearch === true && !expansionPrompt`
3. **default**: Standard fallback strategy
**Usage**: Used by `task-master expand` command to break complex tasks into manageable subtasks using the most appropriate strategy based on available context and complexity analysis.
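The variant selection strategy above can be sketched as a small decision function. This is illustrative only; the real PromptManager selects variants by evaluating each variant's `condition` expression rather than hard-coding the order.

```javascript
// Sketch of the expand-task variant priority described above:
// complexity-report > research > default.
function selectExpandVariant({ expansionPrompt, useResearch }) {
	if (expansionPrompt) return 'complexity-report'; // highest priority
	if (useResearch) return 'research';
	return 'default'; // standard fallback
}
```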
### 4. update-task.json
**Purpose**: Update a single task with new information, supporting full updates and append mode
**Variants**: `default`, `append` (when appendMode is true), `research` (when research mode is enabled)
**Required Parameters**:
- `task` (object): The task to update
- `taskJson` (string): JSON string representation of the task
- `updatePrompt` (string): Description of changes to apply
**Optional Parameters**:
- `appendMode` (boolean): Whether to append to details or do full update (default: false)
- `useResearch` (boolean): Use research mode (default: false)
- `currentDetails` (string): Current task details for context (default: "(No existing details)")
- `gatheredContext` (string): Additional project context
**Usage**: Used by `task-master update-task` command to modify existing tasks.
### 5. update-tasks.json
**Purpose**: Update multiple tasks based on new context or changes
**Variants**: `default`, `research` (when research mode is enabled)
**Required Parameters**:
- `tasks` (array): Array of tasks to update
- `updatePrompt` (string): Description of changes to apply
**Optional Parameters**:
- `useResearch` (boolean): Use research mode (default: false)
- `projectContext` (string): Additional project context
**Usage**: Used by `task-master update` command to bulk update multiple tasks.
### 6. update-subtask.json
**Purpose**: Append information to a subtask by generating only new content
**Variants**: `default`, `research` (when research mode is enabled)
**Required Parameters**:
- `parentTask` (object): The parent task context
- `currentDetails` (string): Current subtask details (default: "(No existing details)")
- `updatePrompt` (string): User request for what to add
**Optional Parameters**:
- `prevSubtask` (object): The previous subtask if any
- `nextSubtask` (object): The next subtask if any
- `useResearch` (boolean): Use research mode (default: false)
- `gatheredContext` (string): Additional project context
**Usage**: Used by `task-master update-subtask` command to log progress and findings on subtasks.
### 7. analyze-complexity.json
**Purpose**: Analyze task complexity and generate expansion recommendations
**Variants**: `default`, `research` (when research mode is enabled), `batch` (when analyzing >10 tasks)
**Required Parameters**:
- `tasks` (array): Array of tasks to analyze
**Optional Parameters**:
- `gatheredContext` (string): Additional project context
- `threshold` (number): Complexity threshold for expansion recommendation (1-10, default: 5)
- `useResearch` (boolean): Use research mode for deeper analysis (default: false)
**Usage**: Used by `task-master analyze-complexity` command to determine which tasks need breakdown.
### 8. research.json
**Purpose**: Perform AI-powered research with project context
**Variants**: `default`, `low` (concise responses), `medium` (balanced), `high` (detailed)
**Required Parameters**:
- `query` (string): Research query
**Optional Parameters**:
- `gatheredContext` (string): Gathered project context
- `detailLevel` (string): Level of detail (low/medium/high, default: medium)
- `projectInfo` (object): Project information with properties:
- `root` (string): Project root path
- `taskCount` (number): Number of related tasks
- `fileCount` (number): Number of related files
**Usage**: Used by `task-master research` command to get contextual information and guidance.
## Template Structure
Each prompt template is a JSON file with the following structure:
```json
{
"id": "unique-identifier",
"version": "1.0.0",
"description": "What this prompt does",
"metadata": {
"author": "system",
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z",
"tags": ["category", "feature"],
"category": "task"
},
"parameters": {
"paramName": {
"type": "string|number|boolean|array|object",
"required": true|false,
"default": "default value",
"description": "Parameter description",
"enum": ["option1", "option2"],
"pattern": "^[a-z]+$",
"minimum": 1,
"maximum": 100
}
},
"prompts": {
"default": {
"system": "System prompt template",
"user": "User prompt template"
},
"variant-name": {
"condition": "JavaScript expression",
"system": "Variant system prompt",
"user": "Variant user prompt",
"metadata": {
"description": "When to use this variant"
}
}
}
}
```
## Template Features
### Variable Substitution
Use `{{variableName}}` to inject dynamic values:
```
"user": "Analyze these {{tasks.length}} tasks with threshold {{threshold}}"
```
### Conditionals
Use `{{#if variable}}...{{/if}}` for conditional content:
```
"user": "{{#if useResearch}}Research and {{/if}}create a task"
```
### Helper Functions
#### Equality Helper
Use `{{#if (eq variable "value")}}...{{/if}}` for string comparisons:
```
"user": "{{#if (eq detailLevel \"low\")}}Provide a brief summary{{/if}}"
"user": "{{#if (eq priority \"high\")}}URGENT: {{/if}}{{taskTitle}}"
```
The `eq` helper enables clean conditional logic based on parameter values:
- Compare strings: `(eq detailLevel "medium")`
- Compare with enum values: `(eq status "pending")`
- Multiple conditions: `{{#if (eq level "1")}}First{{/if}}{{#if (eq level "2")}}Second{{/if}}`
#### Negation Helper
Use `{{#if (not variable)}}...{{/if}}` for negation conditions:
```
"user": "{{#if (not useResearch)}}Use basic analysis{{/if}}"
"user": "{{#if (not hasSubtasks)}}This task has no subtasks{{/if}}"
```
The `not` helper enables clean negative conditional logic:
- Negate boolean values: `(not useResearch)`
- Negate truthy/falsy values: `(not emptyArray)`
- Cleaner than separate boolean parameters: No need for `notUseResearch` flags
#### Numeric Comparison Helpers
Use `{{#if (gt variable number)}}...{{/if}}` for greater than comparisons:
```
"user": "generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks"
"user": "{{#if (gt complexity 5)}}This is a complex task{{/if}}"
"system": "create {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks"
```
Use `{{#if (gte variable number)}}...{{/if}}` for greater than or equal comparisons:
```
"user": "{{#if (gte priority 8)}}HIGH PRIORITY{{/if}}"
"user": "{{#if (gte threshold 1)}}Analysis enabled{{/if}}"
"system": "{{#if (gte complexityScore 8)}}Use detailed breakdown approach{{/if}}"
```
The numeric comparison helpers enable sophisticated conditional logic:
- **Dynamic counting**: `{{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}}`
- **Threshold-based behavior**: `(gte complexityScore 8)` for high-complexity handling
- **Zero checks**: `(gt subtaskCount 0)` for conditional content generation
- **Decimal support**: `(gt score 7.5)` for fractional comparisons
- **Enhanced prompt sophistication**: Enables the `parse-prd` and `expand-task` conditional logic to match the GitHub specifications
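Taken together, the helpers behave like the plain functions below. This is a behavioral sketch; how they are actually registered depends on the template engine.

```javascript
// Behavioral equivalents of the template helpers documented above.
const helpers = {
	eq: (a, b) => a === b, // string/value equality
	not: (v) => !v, // negation of truthy/falsy values
	gt: (a, b) => Number(a) > Number(b), // greater than, supports decimals
	gte: (a, b) => Number(a) >= Number(b) // greater than or equal
};
```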
### Loops
Use `{{#each array}}...{{/each}}` to iterate over arrays:
```
"user": "Tasks:\n{{#each tasks}}- {{id}}: {{title}}\n{{/each}}"
```
### Special Loop Variables
Inside `{{#each}}` blocks, you have access to:
- `{{@index}}`: Current array index (0-based)
- `{{@first}}`: Boolean, true for first item
- `{{@last}}`: Boolean, true for last item

(`{{#unless}}`, shown in the example below, is the inverse of `{{#if}}`.)
```
"user": "{{#each tasks}}{{@index}}. {{title}}{{#unless @last}}\n{{/unless}}{{/each}}"
```
### JSON Serialization
Use `{{{json variable}}}` (triple braces) to serialize objects/arrays to JSON:
```
"user": "Analyze these tasks: {{{json tasks}}}"
```
### Nested Properties
Access nested properties with dot notation:
```
"user": "Project: {{context.projectName}}"
```
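Dot-notation lookup can be sketched as a reduce over the path segments. This is illustrative; the actual engine may handle missing intermediate keys differently.

```javascript
// Resolve a dotted path like "context.projectName" against a context object.
// Hypothetical helper for illustration only.
function resolvePath(context, path) {
	return path
		.split('.')
		.reduce((value, key) => (value == null ? undefined : value[key]), context);
}
```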
## Prompt Variants
Variants allow different prompts based on conditions:
```json
{
"prompts": {
"default": {
"system": "Default system prompt",
"user": "Default user prompt"
},
"research": {
"condition": "useResearch === true",
"system": "Research-focused system prompt",
"user": "Research-focused user prompt"
},
"high-complexity": {
"condition": "complexityScore >= 8",
"system": "Complex task handling prompt",
"user": "Detailed breakdown request"
}
}
}
```
### Condition Evaluation
Conditions are JavaScript expressions evaluated with parameter values as context:
- Simple comparisons: `useResearch === true`
- Numeric comparisons: `threshold >= 5`
- String matching: `priority === 'high'`
- Complex logic: `useResearch && threshold > 7`
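Semantically, evaluating a condition amounts to running the expression with the parameter values bound as local names. The sketch below uses the `Function` constructor purely for illustration; the real implementation may evaluate conditions differently.

```javascript
// Evaluate a variant condition string with parameters in scope.
// Illustrative only — untrusted templates should not be evaluated this way.
function evaluateCondition(condition, params) {
	const names = Object.keys(params);
	const values = Object.values(params);
	// Build a function whose argument names are the parameter names
	return new Function(...names, `return (${condition});`)(...values);
}
```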
## PromptManager Module
The PromptManager is implemented in `scripts/modules/prompt-manager.js` and provides:
- **Template loading and caching**: Templates are loaded once and cached for performance
- **Schema validation**: Comprehensive validation using AJV with detailed error reporting
- **Variable substitution**: Handlebars-like syntax for dynamic content
- **Variant selection**: Automatic selection based on conditions
- **Error handling**: Graceful fallbacks and detailed error messages
- **Singleton pattern**: One instance per project root for efficiency
### Validation Behavior
- **Schema Available**: Full validation with detailed error messages
- **Schema Missing**: Falls back to basic structural validation
- **Invalid Templates**: Throws descriptive errors with field-level details
- **Parameter Validation**: Type checking, pattern matching, range validation
## Usage in Code
### Basic Usage
```javascript
import { getPromptManager } from '../prompt-manager.js';
const promptManager = getPromptManager();
const { systemPrompt, userPrompt, metadata } = promptManager.loadPrompt('add-task', {
// Parameters matching the template's parameter definitions
prompt: 'Create a user authentication system',
newTaskId: 5,
priority: 'high',
useResearch: false
});
// Use with AI service
const result = await generateObjectService({
systemPrompt,
prompt: userPrompt,
// ... other AI parameters
});
```
### With Variants
```javascript
// Research variant will be selected automatically
const { systemPrompt, userPrompt } = promptManager.loadPrompt('expand-task', {
useResearch: true, // Triggers research variant
task: taskObject,
subtaskCount: 5
});
```
### Error Handling
```javascript
try {
const result = promptManager.loadPrompt('invalid-template', {});
} catch (error) {
if (error.message.includes('Schema validation failed')) {
console.error('Template validation error:', error.message);
} else if (error.message.includes('not found')) {
console.error('Template not found:', error.message);
}
}
```
## Adding New Prompts
1. **Create the JSON file** following the template structure
2. **Define parameters** with proper types, validation, and descriptions
3. **Create prompts** with clear system and user templates
4. **Use template variables** for dynamic content
5. **Add variants** if needed for different contexts
6. **Test thoroughly** with the PromptManager
7. **Update this documentation** with the new prompt details
### Example New Prompt
```json
{
"id": "new-feature",
"version": "1.0.0",
"description": "Generate code for a new feature",
"parameters": {
"featureName": {
"type": "string",
"required": true,
"pattern": "^[a-zA-Z][a-zA-Z0-9-]*$",
"description": "Name of the feature to implement"
},
"complexity": {
"type": "string",
"required": false,
"enum": ["simple", "medium", "complex"],
"default": "medium",
"description": "Feature complexity level"
}
},
"prompts": {
"default": {
"system": "You are a senior software engineer.",
"user": "Create a {{complexity}} {{featureName}} feature."
}
}
}
```
## Best Practices
### Template Design
1. **Clear IDs**: Use kebab-case, descriptive identifiers
2. **Semantic Versioning**: Follow semver for version management
3. **Comprehensive Parameters**: Define all required and optional parameters
4. **Type Safety**: Use proper parameter types and validation
5. **Clear Descriptions**: Document what each prompt and parameter does
### Variable Usage
1. **Meaningful Names**: Use descriptive variable names
2. **Consistent Patterns**: Follow established naming conventions
3. **Safe Defaults**: Provide sensible default values
4. **Validation**: Use patterns, enums, and ranges for validation
### Variant Strategy
1. **Simple Conditions**: Keep variant conditions easy to understand
2. **Clear Purpose**: Each variant should have a distinct use case
3. **Fallback Logic**: Always provide a default variant
4. **Documentation**: Explain when each variant is used
### Performance
1. **Caching**: Templates are cached automatically
2. **Lazy Loading**: Templates load only when needed
3. **Minimal Variants**: Don't create unnecessary variants
4. **Efficient Conditions**: Keep condition evaluation fast
## Testing Prompts
### Validation Testing
```javascript
// Test schema validation
const promptManager = getPromptManager();
const results = promptManager.validateAllPrompts();
console.log(`Valid: ${results.valid.length}, Errors: ${results.errors.length}`);
```
### Integration Testing
When modifying prompts, ensure to test:
- Variable substitution works with actual data structures
- Variant selection triggers correctly based on conditions
- AI responses remain consistent with expected behavior
- All parameters are properly validated
- Error handling works for invalid inputs
### Quick Testing
```javascript
// Test prompt loading and variable substitution
const promptManager = getPromptManager();
const result = promptManager.loadPrompt('research', {
query: 'What are the latest React best practices?',
detailLevel: 'medium',
gatheredContext: 'React project with TypeScript'
});
console.log('System:', result.systemPrompt);
console.log('User:', result.userPrompt);
console.log('Metadata:', result.metadata);
```
### Testing Checklist
- [ ] Template validates against schema
- [ ] All required parameters are defined
- [ ] Variable substitution works correctly
- [ ] Variants trigger under correct conditions
- [ ] Error messages are clear and helpful
- [ ] Performance is acceptable for repeated usage
## Troubleshooting
### Common Issues
**Schema Validation Errors**:
- Check required fields are present
- Verify parameter types match schema
- Ensure version follows semantic versioning
- Validate JSON syntax
**Variable Substitution Problems**:
- Check variable names match parameter names
- Verify nested property access syntax
- Ensure array iteration syntax is correct
- Test with actual data structures
**Variant Selection Issues**:
- Verify condition syntax is valid JavaScript
- Check parameter values match condition expectations
- Ensure default variant exists
- Test condition evaluation with debug logging
**Performance Issues**:
- Check for circular references in templates
- Verify caching is working correctly
- Monitor template loading frequency
- Consider simplifying complex conditions


@@ -1,56 +0,0 @@
{
"id": "add-task",
"version": "1.0.0",
"description": "Generate a new task based on description",
"metadata": {
"author": "system",
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z",
"tags": ["task-creation", "generation"]
},
"parameters": {
"prompt": {
"type": "string",
"required": true,
"description": "User's task description"
},
"newTaskId": {
"type": "number",
"required": true,
"description": "ID for the new task"
},
"existingTasks": {
"type": "array",
"description": "List of existing tasks for context"
},
"gatheredContext": {
"type": "string",
"description": "Context gathered from codebase analysis"
},
"contextFromArgs": {
"type": "string",
"description": "Additional context from manual args"
},
"priority": {
"type": "string",
"default": "medium",
"enum": ["high", "medium", "low"],
"description": "Task priority"
},
"dependencies": {
"type": "array",
"description": "Task dependency IDs"
},
"useResearch": {
"type": "boolean",
"default": false,
"description": "Use research mode"
}
},
"prompts": {
"default": {
"system": "You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema. Pay special attention to dependencies between tasks, ensuring the new task correctly references any tasks it depends on.\n\nWhen determining dependencies for a new task, follow these principles:\n1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n2. Prioritize task dependencies that are semantically related to the functionality being built.\n3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n6. Pay special attention to foundation tasks (1-5) but don't automatically include them without reason.\n7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\nThe dependencies array should contain task IDs (numbers) of prerequisite tasks.{{#if useResearch}}\n\nResearch current best practices and technologies relevant to this task.{{/if}}",
"user": "You are generating the details for Task #{{newTaskId}}. Based on the user's request: \"{{prompt}}\", create a comprehensive new task for a software development project.\n \n {{gatheredContext}}\n \n {{#if useResearch}}Research current best practices, technologies, and implementation patterns relevant to this task. {{/if}}Based on the information about existing tasks provided above, include appropriate dependencies in the \"dependencies\" array. Only include task IDs that this new task directly depends on.\n \n Return your answer as a single JSON object matching the schema precisely:\n \n {\n \"title\": \"Task title goes here\",\n \"description\": \"A concise one or two sentence description of what the task involves\",\n \"details\": \"Detailed implementation steps, considerations, code examples, or technical approach\",\n \"testStrategy\": \"Specific steps to verify correct implementation and functionality\",\n \"dependencies\": [1, 3] // Example: IDs of tasks that must be completed before this task\n }\n \n Make sure the details and test strategy are comprehensive and specific{{#if useResearch}}, incorporating current best practices from your research{{/if}}. DO NOT include the task ID in the title.\n {{#if contextFromArgs}}{{contextFromArgs}}{{/if}}"
}
}
}

Some files were not shown because too many files have changed in this diff.