Compare commits
14 Commits
fix/metric ... docs/auto-

| Author | SHA1 | Date |
|---|---|---|
| | 8d1d82c897 | |
| | aaacc3dae3 | |
| | 46cd5dc186 | |
| | 49a31be416 | |
| | 2b69936ee7 | |
| | b5fe723f8e | |
| | d67b81d25d | |
| | 66c05053c0 | |
| | d7ab4609aa | |
| | 05f6242f7e | |
| | a58719cf50 | |
| | 674d1f6de7 | |
| | f106fb8e0b | |
| | fd9dd43ee0 | |
@@ -58,9 +58,9 @@ Examples:
```md
# Good

Added new `--research` flag to the `expand` command that uses Perplexity AI
to provide research-backed task expansions. Requires PERPLEXITY_API_KEY
environment variable.
Added new `--research` flag to the `expand` command that uses your configured research model
to provide research-backed task expansions. Requires appropriate API key
for your research model.

# Not Good
5 .changeset/chore-fix-docs.md (Normal file)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improve `analyze-complexity` cli docs and `--research` flag documentation
@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---

No longer need --package=task-master-ai in mcp server

- A lot of users were having issues with Taskmaster and usually a simple fix was to remove --package from your mcp.json
- we now bundle our whole package, so we no longer need the --package
@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---

Add new `task-master start` command for automated task execution with Claude Code

- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect next-task when no ID is provided.
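For illustration, a quick usage sketch of the new command described in this changeset (the task ID shown is a placeholder):

```bash
# Launch Claude Code for a specific task
task-master start 42

# Or omit the ID and let Task Master pick the next task
task-master start
```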
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript
13 .changeset/mcp-timeout-configuration.md (Normal file)
@@ -0,0 +1,13 @@
---
"task-master-ai": minor
---

Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.

**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles

**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
5 .changeset/petite-ideas-grab.md (Normal file)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix Claude Code settings validation for pathToClaudeCodeExecutable
@@ -1,19 +0,0 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.26.0",
    "@tm/cli": "0.26.0",
    "docs": "0.0.2",
    "extension": "0.24.2",
    "@tm/build-config": "1.0.0",
    "@tm/core": "0.26.0"
  },
  "changesets": [
    "easy-deer-heal",
    "moody-oranges-slide",
    "odd-otters-tan",
    "shiny-regions-teach",
    "wild-ears-look"
  ]
}
@@ -1,36 +0,0 @@
---
"task-master-ai": minor
---

Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.

## Setup Instructions

1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```

## Key Features
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure

## Available Models
- `grok-beta` - Latest Grok model with codebase context
- `grok-vision-beta` - Grok with vision capabilities and codebase context

The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.

## Credits
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---

Improve taskmaster ai provider defaults

- moving from main anthropic 3.7 to anthropic sonnet 4
- moving from fallback anthropic 3.5 to anthropic 3.7
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

@tm/cli: add auto-update functionality to every command
@@ -1,7 +0,0 @@
---
"extension": minor
---

Add "Start Task" button to VS Code extension for seamless Claude Code integration

You can now click a "Start Task" button directly in the Task Master extension which will open a new terminal and automatically execute the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.
5 .changeset/silly-pandas-find.md (Normal file)
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Fix sonar deep research model failing, should be called `sonar-deep-research`
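If you were affected by this, a hedged sketch of pointing the research role at the corrected model ID (assuming Perplexity is your research provider; the `--set-research` flag appears elsewhere in this diff):

```bash
task-master models --set-research sonar-deep-research
```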
@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---

Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
@@ -1,5 +0,0 @@
---
"extension": minor
---

Added a Start Build button to the VSCODE Task Properties Right Panel
@@ -16,7 +16,7 @@ task-master analyze-complexity [--research] [--threshold=5]

## Analysis Parameters

- `--research` → Use research AI for deeper analysis
- `--research` → Use configured research model for deeper analysis
- `--threshold=5` → Only flag tasks above complexity 5
- Default: Analyze all pending tasks
@@ -2,7 +2,7 @@
	"mcpServers": {
		"task-master-ai": {
			"command": "node",
			"args": ["./mcp-server/server.js"],
			"args": ["./dist/mcp-server.js"],
			"env": {
				"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
@@ -231,7 +231,7 @@ Taskmaster offers two primary ways to interact:

- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--research` flag to leverage your configured research model for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
@@ -229,7 +229,7 @@ Taskmaster offers two primary ways to interact:

- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--research` flag to leverage your configured research model for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
@@ -12,7 +12,7 @@ In an AI-driven development process—particularly with tools like [Cursor](http
4. **Generate** individual task files (e.g., `task_001.txt`) for easy reference or to feed into an AI coding workflow.
5. **Set task status**—mark tasks as `done`, `pending`, or `deferred` based on progress.
6. **Expand** tasks with subtasks—break down complex tasks into smaller, more manageable subtasks.
7. **Research-backed subtask generation**—use Perplexity AI to generate more informed and contextually relevant subtasks.
7. **Research-backed subtask generation**—use your configured research model to generate more informed and contextually relevant subtasks.
8. **Clear subtasks**—remove subtasks from specified tasks to allow regeneration or restructuring.
9. **Show task details**—display detailed information about a specific task and its subtasks.
@@ -29,7 +29,7 @@ The script can be configured through environment variables in a `.env` file at t
- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation (if using Perplexity as your research model)
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `TASKMASTER_LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
@@ -97,7 +97,7 @@ node scripts/dev.js update --from=4 --prompt="Refactor tasks from ID 4 onward to
# Update all tasks (default from=1)
node scripts/dev.js update --prompt="Add authentication to all relevant tasks"

# With research-backed updates using Perplexity AI
# With research-backed updates using your configured research model
node scripts/dev.js update --from=4 --prompt="Integrate OAuth 2.0" --research

# Specify a different tasks file
@@ -109,7 +109,7 @@ Notes:
- The `--prompt` parameter is required and should explain the changes or new context
- Only tasks that aren't marked as 'done' will be updated
- Tasks with ID >= the specified --from value will be updated
- The `--research` flag uses Perplexity AI for more informed updates when available
- The `--research` flag uses your configured research model for more informed updates when available

## Updating a Single Task
@@ -119,7 +119,7 @@ The `update-task` command allows you to update a specific task instead of multip
# Update a specific task with new information
node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication"

# With research-backed updates using Perplexity AI
# With research-backed updates using your configured research model
node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication" --research
```
@@ -178,10 +178,10 @@ node scripts/dev.js expand --all
# Force regeneration of subtasks for all pending tasks
node scripts/dev.js expand --all --force

# Use Perplexity AI for research-backed subtask generation
# Use your configured research model for research-backed subtask generation
node scripts/dev.js expand --id=3 --research

# Use Perplexity AI for research-backed generation on all pending tasks
# Use your configured research model for research-backed generation on all pending tasks
node scripts/dev.js expand --all --research
```
@@ -211,17 +211,16 @@ Notes:

The script integrates with two AI services:

1. **Anthropic Claude**: Used for parsing PRDs, generating tasks, and creating subtasks.
2. **Perplexity AI**: Used for research-backed subtask generation when the `--research` flag is specified.
1. **Main AI Model**: Used for parsing PRDs, generating tasks, and creating subtasks (typically Anthropic Claude).
2. **Research Model**: Used for research-backed subtask generation when the `--research` flag is specified.

The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude.
The research integration provides enhanced research capabilities for generating more informed subtasks. You can configure different models for research (like Perplexity, which has access to current information) vs. main tasks. If the research model is unavailable or encounters an error, the script will automatically fall back to using your main model.

To use the Perplexity integration:
To use research-backed features:

1. Obtain a Perplexity API key
2. Add `PERPLEXITY_API_KEY` to your `.env` file
3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online")
4. Use the `--research` flag with the `expand` command
1. Configure your research model using `task-master models --setup`
2. Ensure you have the appropriate API key in your `.env` file (e.g., `PERPLEXITY_API_KEY` if using Perplexity)
3. Use the `--research` flag with supported commands

## Logging
@@ -342,13 +341,13 @@ node scripts/dev.js analyze-complexity --model=claude-3-opus-20240229
# Set a custom complexity threshold (1-10)
node scripts/dev.js analyze-complexity --threshold=6

# Use Perplexity AI for research-backed complexity analysis
# Use your configured research model for research-backed complexity analysis
node scripts/dev.js analyze-complexity --research
```

Notes:

- The command uses Claude to analyze each task's complexity (or Perplexity with --research flag)
- The command uses your main model to analyze each task's complexity (or your configured research model with --research flag)
- Tasks are scored on a scale of 1-10
- Each task receives a recommended number of subtasks based on DEFAULT_SUBTASKS configuration
- The default output path is `scripts/task-complexity-report.json`
55 CHANGELOG.md
@@ -1,5 +1,60 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.27.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need --package=task-master-ai in mcp server
|
||||
- A lot of users were having issues with Taskmaster and usually a simple fix was to remove --package from your mcp.json
|
||||
- we now bundle our whole package, so we no longer need the --package
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
|
||||
- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
|
||||
- `task-master start` will automatically detect next-task when no ID is provided.
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
|
||||
|
||||
## Setup Instructions
|
||||
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
|
||||
2. **Set the environment variable**:
|
||||
```bash
|
||||
export GROK_CLI_API_KEY="your-api-key-here"
|
||||
```
|
||||
3. **Configure Task Master to use Grok**:
|
||||
```bash
|
||||
task-master models --set-main grok-beta
|
||||
# or
|
||||
task-master models --set-research grok-beta
|
||||
# or
|
||||
task-master models --set-fallback grok-beta
|
||||
```
|
||||
|
||||
## Key Features
|
||||
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
|
||||
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
|
||||
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
|
||||
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
|
||||
|
||||
## Available Models
|
||||
- `grok-beta` - Latest Grok model with codebase context
|
||||
- `grok-vision-beta` - Grok with vision capabilities and codebase context
|
||||
|
||||
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
|
||||
|
||||
## Credits
|
||||
|
||||
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
|
||||
|
||||
- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve taskmaster ai provider defaults
|
||||
- moving from main anthropic 3.7 to anthropic sonnet 4
|
||||
- moving from fallback anthropic 3.5 to anthropic 3.7
|
||||
|
||||
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
|
||||
|
||||
## 0.27.0-rc.2
|
||||
|
||||
### Minor Changes
|
||||
|
||||
@@ -301,7 +301,7 @@ The agent will execute:
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
For research-backed subtask generation using Perplexity AI:
|
||||
For research-backed subtask generation using your configured research model:
|
||||
|
||||
```
|
||||
Please break down task 5 using research-backed generation.
|
||||
@@ -450,7 +450,7 @@ task-master analyze-complexity --threshold=6
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
# Use your configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
|
||||
|
||||
13 README.md
@@ -60,6 +60,19 @@ The following documentation is also available in the `docs` directory:

> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.

#### Claude Code Quick Install

For Claude Code users:

```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```

Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
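For example, a minimal `.env` sketch (placeholder values; add only the keys that your configured models actually need — both key names appear elsewhere in this diff):

```bash
# .env at the project root
ANTHROPIC_API_KEY="your-anthropic-api-key"
PERPLEXITY_API_KEY="your-perplexity-api-key"
```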
## Requirements

Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
@@ -1,5 +1,12 @@
|
||||
# @tm/cli
|
||||
|
||||
## 0.27.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@0.26.1
|
||||
|
||||
## 0.27.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@tm/cli",
|
||||
"version": "0.27.0-rc.0",
|
||||
"version": "0.27.0",
|
||||
"description": "Task Master CLI - Command line interface for task management",
|
||||
"type": "module",
|
||||
"private": true,
|
||||
|
||||
@@ -1,5 +1,7 @@
|
||||
# docs
|
||||
|
||||
## 0.0.3
|
||||
|
||||
## 0.0.2
|
||||
|
||||
## 0.0.1
|
||||
|
||||
@@ -118,7 +118,7 @@ description: "Learn how Task Master and Cursor AI work together to streamline yo
|
||||
task-master expand --all
|
||||
```
|
||||
|
||||
For research-backed subtask generation using Perplexity AI:
|
||||
For research-backed subtask generation using configured research model:
|
||||
|
||||
```
|
||||
Please break down task 5 using research-backed generation.
|
||||
|
||||
@@ -61,7 +61,7 @@ description: "A comprehensive reference of all available Task Master commands"
|
||||
# Update a single task by ID with new information
|
||||
task-master update-task --id=<id> --prompt="<prompt>"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
# Use research-backed updates with configured research model
|
||||
task-master update-task --id=<id> --prompt="<prompt>" --research
|
||||
```
|
||||
</Accordion>
|
||||
@@ -74,7 +74,7 @@ description: "A comprehensive reference of all available Task Master commands"
|
||||
# Example: Add details about API rate limiting to subtask 2 of task 5
|
||||
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
|
||||
|
||||
# Use research-backed updates with Perplexity AI
|
||||
# Use research-backed updates with configured research model
|
||||
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
|
||||
```
|
||||
|
||||
@@ -155,7 +155,7 @@ description: "A comprehensive reference of all available Task Master commands"
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
# Use configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
@@ -156,7 +156,7 @@ sidebarTitle: "CLI Commands"
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
# Use your configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
@@ -18,8 +18,8 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "node",
|
||||
"args": ["./mcp-server/server.js"],
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
|
||||
|
||||
@@ -30,6 +30,19 @@ cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21
|
||||
```
|
||||
|
||||
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
|
||||
|
||||
### Claude Code Quick Install
|
||||
|
||||
For Claude Code users:
|
||||
|
||||
```bash
|
||||
claude mcp add taskmaster-ai -- npx -y task-master-ai
|
||||
```
|
||||
|
||||
Don't forget to add your API keys to the configuration:
|
||||
- in the root .env of your Project
|
||||
- in the "env" section of your mcp config for taskmaster-ai
|
||||
|
||||
</Accordion>
|
||||
## Installation Options
|
||||
|
||||
|
||||
@@ -61,9 +61,25 @@ Task Master can provide a complexity report which can be helpful to read before
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```

The agent will use the `analyze_project_complexity` MCP tool, or you can run it directly with the CLI command:
```bash
task-master analyze-complexity
```

For more comprehensive analysis using your configured research model, you can use:
```bash
task-master analyze-complexity --research
```

<Tip>
The `--research` flag uses whatever research model you have configured in `.taskmaster/config.json` (configurable via `task-master models --setup`) for research-backed complexity analysis, providing more informed recommendations.
</Tip>
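As a sketch of that configuration step (the interactive setup prompts for each role; `sonar-pro` is just one example taken from the Perplexity models listed later in this diff):

```bash
# Interactive model configuration, including the research role
task-master models --setup

# Or set the research role directly
task-master models --set-research sonar-pro
```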
You can view the report in a friendly table using:
```
Can you show me the complexity report in a more readable format?
```

For more detailed CLI options, see the [Analyze Task Complexity](/docs/capabilities/cli-root-commands#analyze-task-complexity) section.

<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "docs",
|
||||
"version": "0.0.2",
|
||||
"version": "0.0.3",
|
||||
"private": true,
|
||||
"description": "Task Master documentation powered by Mintlify",
|
||||
"scripts": {
|
||||
|
||||
@@ -1,5 +1,22 @@
|
||||
# Change Log
|
||||
|
||||
## 0.25.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add "Start Task" button to VS Code extension for seamless Claude Code integration
|
||||
|
||||
You can now click a "Start Task" button directly in the Task Master extension which will open a new terminal and automatically execute the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.
|
||||
|
||||
- [#1201](https://github.com/eyaltoledano/claude-task-master/pull/1201) [`83af314`](https://github.com/eyaltoledano/claude-task-master/commit/83af314879fc0e563581161c60d2bd089899313e) Thanks [@losolosol](https://github.com/losolosol)! - Added a Start Build button to the VSCODE Task Properties Right Panel
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1229](https://github.com/eyaltoledano/claude-task-master/pull/1229) [`674d1f6`](https://github.com/eyaltoledano/claude-task-master/commit/674d1f6de7ea98116b61bdae6198bafe6c4e7c1a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP not connecting to new Taskmaster version
|
||||
|
||||
- Updated dependencies [[`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869), [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142)]:
|
||||
- task-master-ai@0.27.0
|
||||
|
||||
## 0.25.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
"private": true,
|
||||
"displayName": "TaskMaster",
|
||||
"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
|
||||
"version": "0.25.0-rc.0",
|
||||
"version": "0.25.0",
|
||||
"publisher": "Hamster",
|
||||
"icon": "assets/icon.png",
|
||||
"engines": {
|
||||
|
||||
@@ -408,7 +408,7 @@ export function createMCPConfigFromSettings(): MCPConfig {
|
||||
const taskMasterPath = require.resolve('task-master-ai');
|
||||
const mcpServerPath = path.resolve(
|
||||
path.dirname(taskMasterPath),
|
||||
'mcp-server/server.js'
|
||||
'./dist/mcp-server.js'
|
||||
);
|
||||
|
||||
// Verify the server file exists
|
||||
|
||||
@@ -181,7 +181,7 @@ task-master analyze-complexity --threshold=6
|
||||
# Use an alternative tasks file
|
||||
task-master analyze-complexity --file=custom-tasks.json
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
# Use your configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
|
||||
|
||||
@@ -235,6 +235,60 @@ node scripts/init.js
- "MCP provider requires session context" → Ensure running in MCP environment
- See the [MCP Provider Guide](./mcp-provider-guide.md) for detailed troubleshooting

### MCP Timeout Configuration

Long-running AI operations in taskmaster-ai can exceed the default 60-second MCP timeout. Operations like `parse_prd`, `expand_task`, `research`, and `analyze_project_complexity` may take 2-5 minutes to complete.

#### Adding Timeout Configuration

Add a `timeout` parameter to your MCP configuration to extend the timeout limit. The timeout configuration works identically across MCP clients including Cursor, Windsurf, and RooCode:

```json
{
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"timeout": 300,
			"env": {
				"ANTHROPIC_API_KEY": "your-anthropic-api-key"
			}
		}
	}
}
```

**Configuration Details:**
- **`timeout: 300`** - Sets timeout to 300 seconds (5 minutes)
- **Value range**: 1-3600 seconds (1 second to 1 hour)
- **Recommended**: 300 seconds provides sufficient time for most AI operations
- **Format**: Integer value in seconds (not milliseconds)

#### Automatic Setup

When adding taskmaster rules for supported editors, the timeout configuration is automatically included:

```bash
# Automatically includes timeout configuration
task-master rules add cursor
task-master rules add roo
task-master rules add windsurf
task-master rules add vscode
```

#### Troubleshooting Timeouts

If you're still experiencing timeout errors:

1. **Verify configuration**: Check that `timeout: 300` is present in your MCP config
2. **Restart editor**: Restart your editor after making configuration changes
3. **Increase timeout**: For very complex operations, try `timeout: 600` (10 minutes)
4. **Check API keys**: Ensure required API keys are properly configured

**Expected behavior:**
- **Before fix**: Operations fail after 60 seconds with `MCP request timed out after 60000ms`
- **After fix**: Operations complete successfully within the configured timeout limit

### Google Vertex AI Configuration

Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
@@ -451,8 +451,8 @@ When using Task Master in VS Code with MCP support:
{
	"servers": {
		"task-master-dev": {
			"command": "node",
			"args": ["mcp-server/server.js"],
			"command": "npx",
			"args": ["-y", "task-master-ai"],
			"cwd": "/path/to/your/task-master-project",
			"env": {
				"NODE_ENV": "development",
@@ -1,4 +1,4 @@
# Available Models as of September 19, 2025
# Available Models as of September 23, 2025

## Main Models

@@ -119,7 +119,7 @@
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
172 output.txt (Normal file)
File diff suppressed because one or more lines are too long
14 package-lock.json (generated)
@@ -1,12 +1,12 @@
|
||||
{
|
||||
"name": "task-master-ai",
|
||||
"version": "0.27.0-rc.2",
|
||||
"version": "0.27.0",
|
||||
"lockfileVersion": 3,
|
||||
"requires": true,
|
||||
"packages": {
|
||||
"": {
|
||||
"name": "task-master-ai",
|
||||
"version": "0.27.0-rc.2",
|
||||
"version": "0.27.0",
|
||||
"license": "MIT WITH Commons-Clause",
|
||||
"workspaces": [
|
||||
"apps/*",
|
||||
@@ -99,7 +99,7 @@
|
||||
},
|
||||
"apps/cli": {
|
||||
"name": "@tm/cli",
|
||||
"version": "0.27.0-rc.0",
|
||||
"version": "0.27.0",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@tm/core": "*",
|
||||
@@ -359,13 +359,13 @@
|
||||
}
|
||||
},
|
||||
"apps/docs": {
|
||||
"version": "0.0.2",
|
||||
"version": "0.0.3",
|
||||
"devDependencies": {
|
||||
"mintlify": "^4.2.111"
|
||||
}
|
||||
},
|
||||
"apps/extension": {
|
||||
"version": "0.25.0-rc.0",
|
||||
"version": "0.25.0",
|
||||
"dependencies": {
|
||||
"task-master-ai": "*"
|
||||
},
|
||||
@@ -31873,7 +31873,7 @@
|
||||
},
|
||||
"packages/build-config": {
|
||||
"name": "@tm/build-config",
|
||||
"version": "1.0.0",
|
||||
"version": "1.0.1",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"tsup": "^8.5.0"
|
||||
@@ -31885,7 +31885,7 @@
|
||||
},
|
||||
"packages/tm-core": {
|
||||
"name": "@tm/core",
|
||||
"version": "0.26.0",
|
||||
"version": "0.26.1",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@supabase/supabase-js": "^2.57.4",
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "task-master-ai",
|
||||
"version": "0.27.0-rc.2",
|
||||
"version": "0.27.0",
|
||||
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
|
||||
"main": "index.js",
|
||||
"type": "module",
|
||||
|
||||
3 packages/build-config/CHANGELOG.md (Normal file)
@@ -0,0 +1,3 @@
|
||||
# @tm/build-config
|
||||
|
||||
## 1.0.1
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@tm/build-config",
|
||||
"version": "1.0.0",
|
||||
"version": "1.0.1",
|
||||
"description": "Shared build configuration for Task Master monorepo",
|
||||
"type": "module",
|
||||
"private": true,
|
||||
|
||||
@@ -1,5 +1,7 @@
|
||||
# Changelog
|
||||
|
||||
## 0.26.1
|
||||
|
||||
All notable changes to the @task-master/tm-core package will be documented in this file.
|
||||
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
||||
@@ -8,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
|
||||
## [Unreleased]
|
||||
|
||||
### Added
|
||||
|
||||
- Initial package structure and configuration
|
||||
- TypeScript support with strict mode
|
||||
- Dual ESM/CJS build system with tsup
|
||||
@@ -18,6 +21,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
|
||||
- Comprehensive documentation and README
|
||||
|
||||
### Development Infrastructure
|
||||
|
||||
- tsup configuration for dual format builds
|
||||
- Jest configuration with ESM support
|
||||
- ESLint configuration with TypeScript rules
|
||||
@@ -27,6 +31,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
|
||||
- .gitignore for development files
|
||||
|
||||
### Package Structure
|
||||
|
||||
- `src/types/` - TypeScript type definitions (placeholder)
|
||||
- `src/providers/` - AI provider implementations (placeholder)
|
||||
- `src/storage/` - Storage layer abstractions (placeholder)
|
||||
@@ -38,6 +43,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
|
||||
## [1.0.0] - TBD
|
||||
|
||||
### Planned Features
|
||||
|
||||
- Complete TypeScript type system
|
||||
- AI provider implementations
|
||||
- Storage adapters
|
||||
@@ -52,9 +58,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
|
||||
## Release Notes
|
||||
|
||||
### Version 1.0.0 (Coming Soon)
|
||||
|
||||
This will be the first stable release of tm-core with complete implementations of all modules. Currently, all modules contain placeholder implementations to establish the package structure and enable development of dependent packages.
|
||||
|
||||
### Development Status
|
||||
|
||||
- ✅ Package structure and configuration
|
||||
- ✅ Build and test infrastructure
|
||||
- ✅ Development tooling setup
|
||||
@@ -67,4 +75,4 @@ This will be the first stable release of tm-core with complete implementations o
|
||||
- 🚧 Configuration system (Task 122)
|
||||
- 🚧 Testing infrastructure (Task 123)
|
||||
- 🚧 Documentation (Task 124)
|
||||
- 🚧 Package finalization (Task 125)
|
||||
- 🚧 Package finalization (Task 125)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@tm/core",
|
||||
"version": "0.26.0",
|
||||
"version": "0.26.1",
|
||||
"private": true,
|
||||
"description": "Core library for Task Master - TypeScript task management system",
|
||||
"type": "module",
|
||||
|
||||
@@ -60,7 +60,7 @@ alwaysApply: true
|
||||
|
||||
- For tasks with complexity analysis, use `node scripts/dev.js expand --id=<id>`
|
||||
- Otherwise use `node scripts/dev.js expand --id=<id> --subtasks=<number>`
|
||||
- Add `--research` flag to leverage Perplexity AI for research-backed expansion
|
||||
- Add `--research` flag to leverage your configured research model for research-backed expansion
|
||||
- Use `--prompt="<context>"` to provide additional context when needed
|
||||
- Review and adjust generated subtasks as necessary
|
||||
- Use `--all` flag to expand multiple pending tasks at once
|
||||
@@ -160,7 +160,7 @@ alwaysApply: true
|
||||
- `--id=<id>`: ID of task to expand (required unless using --all)
|
||||
- `--all`: Expand all pending tasks, prioritized by complexity
|
||||
- `--num=<number>`: Number of subtasks to generate (default: from complexity report)
|
||||
- `--research`: Use Perplexity AI for research-backed generation
|
||||
- `--research`: Use your configured research model for research-backed generation
|
||||
- `--prompt="<text>"`: Additional context for subtask generation
|
||||
- `--force`: Regenerate subtasks even for tasks that already have them
|
||||
- Example: `task-master expand --id=3 --num=5 --research --prompt="Focus on security aspects"`
|
||||
@@ -176,7 +176,7 @@ alwaysApply: true
|
||||
- `--model=<model>, -m`: Override LLM model to use
|
||||
- `--threshold=<number>, -t`: Minimum score for expansion recommendation (default: 5)
|
||||
- `--file=<path>, -f`: Use alternative tasks.json file
|
||||
- `--research, -r`: Use Perplexity AI for research-backed analysis
|
||||
- `--research, -r`: Use your configured research model for research-backed analysis
|
||||
- Example: `task-master analyze-complexity --research`
|
||||
- Notes: Report includes complexity scores, recommended subtasks, and tailored prompts.
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@ task-master analyze-complexity [--research] [--threshold=5]
|
||||
|
||||
## Analysis Parameters
|
||||
|
||||
- `--research` → Use research AI for deeper analysis
|
||||
- `--research` → Use configured research model for deeper analysis
|
||||
- `--threshold=5` → Only flag tasks above complexity 5
|
||||
- Default: Analyze all pending tasks
|
||||
|
||||
|
||||
@@ -231,7 +231,7 @@ Taskmaster offers two primary ways to interact:
|
||||
|
||||
- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks.
|
||||
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
|
||||
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
|
||||
- Add `--research` flag to leverage your configured research model for research-backed expansion.
|
||||
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
|
||||
- Use `--prompt="<context>"` to provide additional context when needed.
|
||||
- Review and adjust generated subtasks as necessary.
|
||||
|
||||
@@ -12,7 +12,7 @@ In an AI-driven development process—particularly with tools like [Cursor](http
|
||||
4. **Generate** individual task files (e.g., `task_001.txt`) for easy reference or to feed into an AI coding workflow.
|
||||
5. **Set task status**—mark tasks as `done`, `pending`, or `deferred` based on progress.
|
||||
6. **Expand** tasks with subtasks—break down complex tasks into smaller, more manageable subtasks.
|
||||
7. **Research-backed subtask generation**—use Perplexity AI to generate more informed and contextually relevant subtasks.
|
||||
7. **Research-backed subtask generation**—use your configured research model to generate more informed and contextually relevant subtasks.
|
||||
8. **Clear subtasks**—remove subtasks from specified tasks to allow regeneration or restructuring.
|
||||
9. **Show task details**—display detailed information about a specific task and its subtasks.
|
||||
|
||||
@@ -27,7 +27,7 @@ Task Master configuration is now managed through two primary methods:
|
||||
- This is the main configuration file for most settings.
|
||||
|
||||
2. **Environment Variables (`.env` File - API Keys Only)**
|
||||
- Used **only** for sensitive **API Keys** (e.g., `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`).
|
||||
- Used **only** for sensitive **API Keys** (e.g., `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`, etc.).
|
||||
- Create a `.env` file in your project root for CLI usage.
|
||||
- See `assets/env.example` for required key names.
|
||||
|
||||
@@ -160,10 +160,10 @@ task-master expand --all
|
||||
# Force regeneration of subtasks for all pending tasks
|
||||
task-master expand --all --force
|
||||
|
||||
# Use Perplexity AI for research-backed subtask generation
|
||||
# Use your configured research model for research-backed subtask generation
|
||||
task-master expand --id=3 --research
|
||||
|
||||
# Use Perplexity AI for research-backed generation on all pending tasks
|
||||
# Use your configured research model for research-backed generation on all pending tasks
|
||||
task-master expand --all --research
|
||||
```
|
||||
|
||||
@@ -192,10 +192,10 @@ Notes:
|
||||
## AI Integration (Updated)
|
||||
|
||||
- The script now uses a unified AI service layer (`ai-services-unified.js`).
|
||||
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmaster/config.json` based on the requested `role` (`main` or `research`).
|
||||
- Model selection (e.g., Claude vs. research models for `--research`) is determined by the configuration in `.taskmaster/config.json` based on the requested `role` (`main` or `research`).
|
||||
- API keys are automatically resolved from your `.env` file (for CLI) or MCP session environment.
|
||||
- To use the research capabilities (e.g., `expand --research`), ensure you have:
|
||||
1. Configured a model for the `research` role using `task-master models --setup` (Perplexity models are recommended).
|
||||
1. Configured a model for the `research` role using `task-master models --setup` (research-capable models like Perplexity are recommended).
|
||||
2. Added the corresponding API key (e.g., `PERPLEXITY_API_KEY`) to your `.env` file.
|
||||
|
||||
## Logging
|
||||
@@ -317,13 +317,13 @@ task-master analyze-complexity --model=claude-3-opus-20240229
|
||||
# Set a custom complexity threshold (1-10)
|
||||
task-master analyze-complexity --threshold=6
|
||||
|
||||
# Use Perplexity AI for research-backed complexity analysis
|
||||
# Use your configured research model for research-backed complexity analysis
|
||||
task-master analyze-complexity --research
|
||||
```
|
||||
|
||||
Notes:
|
||||
|
||||
- The command uses Claude to analyze each task's complexity (or Perplexity with --research flag)
|
||||
- The command uses your main model to analyze each task's complexity (or your configured research model with --research flag)
|
||||
- Tasks are scored on a scale of 1-10
|
||||
- Each task receives a recommended number of subtasks based on DEFAULT_SUBTASKS configuration
|
||||
- The default output path is `scripts/task-complexity-report.json`
|
||||
|
||||
@@ -1847,7 +1847,7 @@ function registerCommands(programInstance) {
|
||||
)
|
||||
.option(
|
||||
'-r, --research',
|
||||
'Use Perplexity AI for research-backed complexity analysis'
|
||||
'Use configured research model for research-backed complexity analysis'
|
||||
)
|
||||
.option(
|
||||
'-i, --id <ids>',
|
||||
|
||||
@@ -310,6 +310,7 @@ function validateProviderModelCombination(providerName, modelId) {
|
||||
function validateClaudeCodeSettings(settings) {
|
||||
// Define the base settings schema without commandSpecific first
|
||||
const BaseSettingsSchema = z.object({
|
||||
pathToClaudeCodeExecutable: z.string().optional(),
|
||||
maxTurns: z.number().int().positive().optional(),
|
||||
customSystemPrompt: z.string().optional(),
|
||||
appendSystemPrompt: z.string().optional(),
|
||||
|
||||
@@ -522,7 +522,7 @@
	"supported": true
},
{
	"id": "deep-research",
	"id": "sonar-deep-research",
	"swe_score": 0.211,
	"cost_per_1m_tokens": {
		"input": 2,
@@ -5,6 +5,40 @@ import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';
import { ROO_MODES } from '../constants/profiles.js';

// Import the shared MCP configuration helper
import { formatJSONWithTabs } from '../utils/create-mcp-config.js';

// Roo-specific MCP configuration enhancements
function enhanceRooMCPConfiguration(mcpPath) {
	if (!fs.existsSync(mcpPath)) {
		log('warn', `[Roo] MCP configuration file not found at ${mcpPath}`);
		return;
	}

	try {
		// Read the existing configuration
		const mcpConfig = JSON.parse(fs.readFileSync(mcpPath, 'utf8'));

		if (mcpConfig.mcpServers && mcpConfig.mcpServers['task-master-ai']) {
			const server = mcpConfig.mcpServers['task-master-ai'];

			// Add Roo-specific timeout enhancement for long-running AI operations
			server.timeout = 300;

			// Write the enhanced configuration back
			fs.writeFileSync(mcpPath, formatJSONWithTabs(mcpConfig) + '\n');
			log(
				'debug',
				`[Roo] Enhanced MCP configuration with timeout at ${mcpPath}`
			);
		} else {
			log('warn', `[Roo] task-master-ai server not found in MCP configuration`);
		}
	} catch (error) {
		log('error', `[Roo] Failed to enhance MCP configuration: ${error.message}`);
	}
}

// Lifecycle functions for Roo profile
function onAddRulesProfile(targetDir, assetsDir) {
	// Use the provided assets directory to find the roocode directory
@@ -32,6 +66,9 @@ function onAddRulesProfile(targetDir, assetsDir) {
		}
	}

	// Note: MCP configuration is now handled by the base profile system
	// The base profile will call setupMCPConfiguration, and we enhance it in onPostConvert

	for (const mode of ROO_MODES) {
		const src = path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`);
		const dest = path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`);
@@ -78,6 +115,15 @@ function onRemoveRulesProfile(targetDir) {

	const rooDir = path.join(targetDir, '.roo');
	if (fs.existsSync(rooDir)) {
		// Remove MCP configuration
		const mcpPath = path.join(rooDir, 'mcp.json');
		try {
			fs.rmSync(mcpPath, { force: true });
			log('debug', `[Roo] Removed MCP configuration from ${mcpPath}`);
		} catch (err) {
			log('error', `[Roo] Failed to remove MCP configuration: ${err.message}`);
		}

		fs.readdirSync(rooDir).forEach((entry) => {
			if (entry.startsWith('rules-')) {
				const modeDir = path.join(rooDir, entry);
@@ -101,7 +147,13 @@ function onRemoveRulesProfile(targetDir) {
}

function onPostConvertRulesProfile(targetDir, assetsDir) {
	onAddRulesProfile(targetDir, assetsDir);
	// Enhance the MCP configuration with Roo-specific features after base setup
	const mcpPath = path.join(targetDir, '.roo', 'mcp.json');
	try {
		enhanceRooMCPConfiguration(mcpPath);
	} catch (err) {
		log('error', `[Roo] Failed to enhance MCP configuration: ${err.message}`);
	}
}

// Create and export roo profile using the base factory
@@ -111,6 +163,7 @@ export const rooProfile = createProfile({
	url: 'roocode.com',
	docsUrl: 'docs.roocode.com',
	toolMappings: COMMON_TOOL_MAPPINGS.ROO_STYLE,
	mcpConfig: true, // Enable MCP config - we enhance it with Roo-specific features
	onAdd: onAddRulesProfile,
	onRemove: onRemoveRulesProfile,
	onPostConvert: onPostConvertRulesProfile

@@ -262,3 +262,6 @@ export function removeTaskMasterMCPConfiguration(projectRoot, mcpConfigPath) {

	return result;
}

// Export the formatting function for use by other modules
export { formatJSONWithTabs };
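As a usage note, re-adding the Roo profile in an existing project (a command shown earlier in this diff) should regenerate `.roo/mcp.json` through this code path and apply the 300-second timeout:

```bash
task-master rules add roo
```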
@@ -26,7 +26,7 @@ describe('Roo Profile Initialization Functionality', () => {
|
||||
expect(rooProfile.displayName).toBe('Roo Code');
|
||||
expect(rooProfile.profileDir).toBe('.roo'); // default
|
||||
expect(rooProfile.rulesDir).toBe('.roo/rules'); // default
|
||||
expect(rooProfile.mcpConfig).toBe(true); // default
|
||||
expect(rooProfile.mcpConfig).toBe(true); // now uses standard MCP configuration with Roo enhancements
|
||||
});
|
||||
|
||||
test('roo.js uses custom ROO_STYLE tool mappings', () => {
|
||||
|
||||
@@ -266,10 +266,10 @@ describe('MCP Configuration Validation', () => {
|
||||
expect(mcpEnabledProfiles).toContain('cursor');
|
||||
expect(mcpEnabledProfiles).toContain('gemini');
|
||||
expect(mcpEnabledProfiles).toContain('opencode');
|
||||
expect(mcpEnabledProfiles).toContain('roo');
|
||||
expect(mcpEnabledProfiles).toContain('vscode');
|
||||
expect(mcpEnabledProfiles).toContain('windsurf');
|
||||
expect(mcpEnabledProfiles).toContain('zed');
|
||||
expect(mcpEnabledProfiles).toContain('roo');
|
||||
expect(mcpEnabledProfiles).not.toContain('cline');
|
||||
expect(mcpEnabledProfiles).not.toContain('codex');
|
||||
expect(mcpEnabledProfiles).not.toContain('trae');
|
||||
@@ -384,6 +384,7 @@ describe('MCP Configuration Validation', () => {
|
||||
'claude',
|
||||
'cursor',
|
||||
'gemini',
|
||||
'kiro',
|
||||
'opencode',
|
||||
'roo',
|
||||
'windsurf',
|
||||
|
||||