Compare commits


1 Commit

Author SHA1 Message Date
github-actions[bot]
8801091ffb docs: auto-update documentation based on changes in next branch
This PR was automatically generated to update documentation based on recent changes.

  Original commit: feat: update tm models defaults (#1225)

  Co-authored-by: Claude <claude-assistant@anthropic.com>
2025-09-19 23:15:28 +00:00
45 changed files with 317 additions and 480 deletions

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Improve `analyze-complexity` CLI docs and `--research` flag documentation

View File

@@ -0,0 +1,8 @@
---
"task-master-ai": minor
---
No longer need --package=task-master-ai in mcp server
- Many users were running into issues with Taskmaster, and a common fix was to remove `--package` from their `mcp.json`.
- We now bundle the whole package, so the `--package` flag is no longer needed (see the example below).
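For illustration, a minimal `mcp.json` entry after this change might look like the following sketch (the API key value is a placeholder, and other keys from the fuller examples in this document are omitted):
```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE"
      }
    }
  }
}
```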

View File

@@ -0,0 +1,8 @@
---
"task-master-ai": minor
---
Add new `task-master start` command for automated task execution with Claude Code
- You can now start working on tasks directly by running `task-master start <task-id>`, which automatically launches Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect the next task when no ID is provided (see the usage sketch below).
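A quick usage sketch (the task ID below is hypothetical):
```bash
# Start a specific task; Claude Code is launched with the task's full context
task-master start 42

# With no ID, the next available task is detected and started automatically
task-master start
```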

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Move from JavaScript to TypeScript. This is not a full refactor, but we now have a TypeScript environment and are gradually migrating our JavaScript commands to TypeScript.

View File

@@ -1,13 +0,0 @@
---
"task-master-ai": minor
---
Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
**What's New:**
- 300-second timeout for MCP operations (up from the default 60 seconds); see the sketch below
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
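As an illustration, the timeout entry the Roo profile writes into `.roo/mcp.json` looks roughly like this minimal sketch (other server keys omitted; the full MCP configuration format appears later in this document):
```json
{
  "mcpServers": {
    "task-master-ai": {
      "timeout": 300
    }
  }
}
```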

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix Claude Code settings validation for pathToClaudeCodeExecutable

19
.changeset/pre.json Normal file
View File

@@ -0,0 +1,19 @@
{
"mode": "pre",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.26.0",
"@tm/cli": "0.26.0",
"docs": "0.0.2",
"extension": "0.24.2",
"@tm/build-config": "1.0.0",
"@tm/core": "0.26.0"
},
"changesets": [
"easy-deer-heal",
"moody-oranges-slide",
"odd-otters-tan",
"shiny-regions-teach",
"wild-ears-look"
]
}

View File

@@ -0,0 +1,36 @@
---
"task-master-ai": minor
---
Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
## Setup Instructions
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```
## Key Features
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
## Available Models
- `grok-beta` - Latest Grok model with codebase context
- `grok-vision-beta` - Grok with vision capabilities and codebase context
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
## Credits
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.

View File

@@ -0,0 +1,8 @@
---
"task-master-ai": minor
---
Improve taskmaster ai provider defaults
- Main model: moving from Claude 3.7 Sonnet to Claude Sonnet 4
- Fallback model: moving from Claude 3.5 Sonnet to Claude 3.7 Sonnet (see the sketch below)
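A sketch of the updated defaults in `.taskmaster/config.json`, using the model IDs shown in the configuration docs later in this page (other fields omitted):
```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-sonnet-4-20250514"
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219"
    }
  }
}
```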

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
@tm/cli: add auto-update functionality to every command

View File

@@ -0,0 +1,7 @@
---
"extension": minor
---
Add "Start Task" button to VS Code extension for seamless Claude Code integration
You can now click a "Start Task" button directly in the Task Master extension, which opens a new terminal and automatically executes the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.

View File

@@ -0,0 +1,5 @@
---
"extension": minor
---
Added a "Start Build" button to the VS Code Task Properties right panel

View File

@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./dist/mcp-server.js"],
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",

View File

@@ -1,157 +0,0 @@
#!/usr/bin/env node
import { readFileSync, existsSync, writeFileSync } from 'fs';
function parseMetricsTable(content, metricName) {
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim();
// Match a markdown table row like: | Metric Name | value | ...
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
const match = line.match(re);
if (match) {
return match[1].trim() || 'N/A';
}
}
return 'N/A';
}
function parseCountMetric(content, metricName) {
const result = parseMetricsTable(content, metricName);
// Extract number from string, handling commas and spaces
const numberMatch = result.toString().match(/[\d,]+/);
if (numberMatch) {
const number = parseInt(numberMatch[0].replace(/,/g, ''));
return isNaN(number) ? 0 : number;
}
return 0;
}
function main() {
const metrics = {
issues_created: 0,
issues_closed: 0,
prs_created: 0,
prs_merged: 0,
issue_avg_first_response: 'N/A',
issue_avg_time_to_close: 'N/A',
pr_avg_first_response: 'N/A',
pr_avg_merge_time: 'N/A'
};
// Parse issue metrics
if (existsSync('issue_metrics.md')) {
console.log('📄 Found issue_metrics.md, parsing...');
const issueContent = readFileSync('issue_metrics.md', 'utf8');
metrics.issues_created = parseCountMetric(
issueContent,
'Total number of items created'
);
metrics.issues_closed = parseCountMetric(
issueContent,
'Number of items closed'
);
metrics.issue_avg_first_response = parseMetricsTable(
issueContent,
'Time to first response'
);
metrics.issue_avg_time_to_close = parseMetricsTable(
issueContent,
'Time to close'
);
} else {
console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
}
// Parse PR created metrics
if (existsSync('pr_created_metrics.md')) {
console.log('📄 Found pr_created_metrics.md, parsing...');
const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
metrics.prs_created = parseCountMetric(
prCreatedContent,
'Total number of items created'
);
metrics.pr_avg_first_response = parseMetricsTable(
prCreatedContent,
'Time to first response'
);
} else {
console.warn(
'[parse-metrics] pr_created_metrics.md not found; using defaults.'
);
}
// Parse PR merged metrics (for more accurate merge data)
if (existsSync('pr_merged_metrics.md')) {
console.log('📄 Found pr_merged_metrics.md, parsing...');
const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
metrics.prs_merged = parseCountMetric(
prMergedContent,
'Total number of items created'
);
// For merged PRs, "Time to close" is actually time to merge
metrics.pr_avg_merge_time = parseMetricsTable(
prMergedContent,
'Time to close'
);
} else {
console.warn(
'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
);
// Fallback: try old pr_metrics.md if it exists
if (existsSync('pr_metrics.md')) {
console.log('📄 Falling back to pr_metrics.md...');
const prContent = readFileSync('pr_metrics.md', 'utf8');
const mergedCount = parseCountMetric(prContent, 'Number of items merged');
metrics.prs_merged =
mergedCount || parseCountMetric(prContent, 'Number of items closed');
const maybeMergeTime = parseMetricsTable(
prContent,
'Average time to merge'
);
metrics.pr_avg_merge_time =
maybeMergeTime !== 'N/A'
? maybeMergeTime
: parseMetricsTable(prContent, 'Time to close');
} else {
console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
}
}
// Output for GitHub Actions
const output = Object.entries(metrics)
.map(([key, value]) => `${key}=${value}`)
.join('\n');
// Always output to stdout for debugging
console.log('\n=== FINAL METRICS ===');
Object.entries(metrics).forEach(([key, value]) => {
console.log(`${key}: ${value}`);
});
// Write to GITHUB_OUTPUT if in GitHub Actions
if (process.env.GITHUB_OUTPUT) {
try {
writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
console.log(
`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
);
} catch (error) {
console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
process.exit(1);
}
} else {
console.log(
'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
);
}
}
main();

View File

@@ -8,7 +8,7 @@ on:
permissions:
contents: read
issues: read
issues: write
pull-requests: read
jobs:
@@ -17,25 +17,15 @@ jobs:
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Get dates for last 14 days
- name: Get dates for last week
run: |
set -Eeuo pipefail
# Last 14 days
first_day=$(date -d "14 days ago" +%Y-%m-%d)
# Last 7 days
first_day=$(date -d "7 days ago" +%Y-%m-%d)
last_day=$(date +%Y-%m-%d)
echo "first_day=$first_day" >> $GITHUB_ENV
echo "last_day=$last_day" >> $GITHUB_ENV
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
- name: Generate issue metrics
uses: github/issue-metrics@v3
@@ -44,39 +34,40 @@ jobs:
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
HIDE_TIME_TO_ANSWER: true
HIDE_LABEL_METRICS: false
OUTPUT_FILE: issue_metrics.md
- name: Generate PR created metrics
- name: Generate PR metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_created_metrics.md
- name: Generate PR merged metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_merged_metrics.md
- name: Debug generated metrics
run: |
set -Eeuo pipefail
echo "Listing markdown files in workspace:"
ls -la *.md || true
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
if [ -f "$f" ]; then
echo "== $f (first 10 lines) =="
head -n 10 "$f"
else
echo "Missing $f"
fi
done
OUTPUT_FILE: pr_metrics.md
- name: Parse metrics
id: metrics
run: node .github/scripts/parse-metrics.mjs
run: |
# Parse the metrics from the generated markdown files
if [ -f "issue_metrics.md" ]; then
# Extract key metrics using grep/awk
AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
fi
if [ -f "pr_metrics.md" ]; then
PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
fi
# Set outputs for Discord action
echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT
- name: Send to Discord
uses: sarisia/actions-status-discord@v1
@@ -87,22 +78,19 @@ jobs:
title: "📊 Weekly Metrics Report"
description: |
**${{ env.week_of }}**
*${{ env.date_range }}*
**🎯 Issues**
• Created: ${{ steps.metrics.outputs.issues_created }}
• Closed: ${{ steps.metrics.outputs.issues_closed }}
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
**🔀 Pull Requests**
• Created: ${{ steps.metrics.outputs.prs_created }}
• Merged: ${{ steps.metrics.outputs.prs_merged }}
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
**📈 Visual Analytics**
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
**⏱️ Response Times**
• First Response: ${{ steps.metrics.outputs.avg_first_response }}
• Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
• PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
color: 0x58AFFF
username: Task Master Metrics Bot
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png

View File

@@ -1,60 +1,5 @@
# task-master-ai
## 0.27.0
### Minor Changes
- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need --package=task-master-ai in mcp server
- Many users were running into issues with Taskmaster, and a common fix was to remove `--package` from their `mcp.json`.
- We now bundle the whole package, so the `--package` flag is no longer needed.
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
- You can now start working on tasks directly by running `task-master start <task-id>`, which automatically launches Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect the next task when no ID is provided.
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from JavaScript to TypeScript. This is not a full refactor, but we now have a TypeScript environment and are gradually migrating our JavaScript commands to TypeScript.
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
## Setup Instructions
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```
## Key Features
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
## Available Models
- `grok-beta` - Latest Grok model with codebase context
- `grok-vision-beta` - Grok with vision capabilities and codebase context
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
## Credits
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve taskmaster ai provider defaults
- Main model: moving from Claude 3.7 Sonnet to Claude Sonnet 4
- Fallback model: moving from Claude 3.5 Sonnet to Claude 3.7 Sonnet
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
## 0.27.0-rc.2
### Minor Changes

View File

@@ -1,12 +1,5 @@
# @tm/cli
## 0.27.0
### Patch Changes
- Updated dependencies []:
- @tm/core@0.26.1
## 0.27.0-rc.0
### Minor Changes

View File

@@ -1,6 +1,6 @@
{
"name": "@tm/cli",
"version": "0.27.0",
"version": "0.27.0-rc.0",
"description": "Task Master CLI - Command line interface for task management",
"type": "module",
"private": true,

View File

@@ -1,7 +1,5 @@
# docs
## 0.0.3
## 0.0.2
## 0.0.1

View File

@@ -23,7 +23,7 @@ description: "This guide walks you through setting up Task Master in your develo
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"MODEL": "claude-sonnet-4-20250514",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 128000,
"TEMPERATURE": 0.2,

View File

@@ -19,7 +19,7 @@ description: "Configure Task Master through environment variables in a .env file
| Variable | Default Value | Description | Example |
| --- | --- | --- | --- |
| `MODEL` | `"claude-3-7-sonnet-20250219"` | Claude model to use | `MODEL=claude-3-opus-20240229` |
| `MODEL` | `"claude-sonnet-4-20250514"` | Claude model to use | `MODEL=claude-3-opus-20240229` |
| `MAX_TOKENS` | `"4000"` | Maximum tokens for responses | `MAX_TOKENS=8000` |
| `TEMPERATURE` | `"0.7"` | Temperature for model responses | `TEMPERATURE=0.5` |
| `DEBUG` | `"false"` | Enable debug logging | `DEBUG=true` |
@@ -38,7 +38,7 @@ description: "Configure Task Master through environment variables in a .env file
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
# Optional - Claude Configuration
MODEL=claude-3-7-sonnet-20250219
MODEL=claude-sonnet-4-20250514
MAX_TOKENS=4000
TEMPERATURE=0.7

View File

@@ -18,7 +18,7 @@ Taskmaster uses two primary methods for configuration:
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"temperature": 0.2,
"baseURL": "https://api.anthropic.com/v1"
@@ -32,7 +32,7 @@ Taskmaster uses two primary methods for configuration:
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2
}
@@ -75,6 +75,7 @@ Taskmaster uses two primary methods for configuration:
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.
- `GROK_CLI_API_KEY`: Your Grok API key for grok-cli provider.
- **Optional Endpoint Overrides:**
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
@@ -137,6 +138,7 @@ PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# GROK_CLI_API_KEY=your-grok-api-key-here
# etc.
# Optional Endpoint Overrides
@@ -317,3 +319,61 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
- Verify the deployment name matches your configuration exactly (case-sensitive)
- Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
- Ensure you're not getting rate limited by `maxTokens`; maintain an appropriate Tokens per Minute (TPM) rate limit in your deployment.
### Grok CLI Configuration
The Grok CLI provider integrates with xAI's Grok models and provides full codebase context support for enhanced task generation and analysis.
1. **Prerequisites**:
- A Grok API key from [console.x.ai](https://console.x.ai)
- The `grok-cli` package will be automatically used when this provider is configured
2. **Authentication**:
- Set the `GROK_CLI_API_KEY` environment variable with your Grok API key
3. **Configuration**:
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "grok-cli",
"modelId": "grok-beta",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
"provider": "grok-cli",
"modelId": "grok-vision-beta",
"maxTokens": 8700,
"temperature": 0.1
}
},
"grokCli": {
"timeout": 120000,
"workingDirectory": null,
"defaultModel": "grok-4-latest"
}
}
```
4. **Available Models**:
- `grok-beta`: Latest Grok model with codebase context
- `grok-vision-beta`: Grok with vision capabilities and codebase context
- `grok-2`, `grok-3`, `grok-4`: Standard Grok models
5. **Key Features**:
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
6. **Environment Variables**:
```bash
# In .env file
GROK_CLI_API_KEY=your-grok-api-key-here
```
7. **Configuration Options**:
- `timeout`: Request timeout in milliseconds (default: 120000)
- `workingDirectory`: Override working directory for grok-cli (default: null, uses current directory)
- `defaultModel`: Default Grok model to use (default: "grok-4-latest")

View File

@@ -156,7 +156,7 @@ sidebarTitle: "CLI Commands"
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json
# Use your configured research model for research-backed complexity analysis
# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>

View File

@@ -18,8 +18,8 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"command": "node",
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
@@ -30,6 +30,7 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE",
"GROK_CLI_API_KEY": "GROK_CLI_API_KEY_HERE",
"GITHUB_API_KEY": "GITHUB_API_KEY_HERE"
}
}
@@ -50,6 +51,7 @@ PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# GROK_CLI_API_KEY=your-grok-api-key-here
# etc.
# Optional Endpoint Overrides

View File

@@ -61,25 +61,9 @@ Task Master can provide a complexity report which can be helpful to read before
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
The agent will use the `analyze_project_complexity` MCP tool, or you can run it directly with the CLI command:
```bash
task-master analyze-complexity
```
For more comprehensive analysis using your configured research model, you can use:
```bash
task-master analyze-complexity --research
```
<Tip>
The `--research` flag uses whatever research model you have configured in `.taskmaster/config.json` (configurable via `task-master models --setup`) for research-backed complexity analysis, providing more informed recommendations.
</Tip>
You can view the report in a friendly table using:
```
Can you show me the complexity report in a more readable format?
```
For more detailed CLI options, see the [Analyze Task Complexity](/docs/capabilities/cli-root-commands#analyze-task-complexity) section.
<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>

View File

@@ -1,6 +1,6 @@
{
"name": "docs",
"version": "0.0.3",
"version": "0.0.2",
"private": true,
"description": "Task Master documentation powered by Mintlify",
"scripts": {

View File

@@ -1,22 +1,5 @@
# Change Log
## 0.25.0
### Minor Changes
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add "Start Task" button to VS Code extension for seamless Claude Code integration
You can now click a "Start Task" button directly in the Task Master extension, which opens a new terminal and automatically executes the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.
- [#1201](https://github.com/eyaltoledano/claude-task-master/pull/1201) [`83af314`](https://github.com/eyaltoledano/claude-task-master/commit/83af314879fc0e563581161c60d2bd089899313e) Thanks [@losolosol](https://github.com/losolosol)! - Added a "Start Build" button to the VS Code Task Properties right panel
### Patch Changes
- [#1229](https://github.com/eyaltoledano/claude-task-master/pull/1229) [`674d1f6`](https://github.com/eyaltoledano/claude-task-master/commit/674d1f6de7ea98116b61bdae6198bafe6c4e7c1a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP not connecting to new Taskmaster version
- Updated dependencies [[`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869), [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142)]:
- task-master-ai@0.27.0
## 0.25.0-rc.0
### Minor Changes

View File

@@ -3,7 +3,7 @@
"private": true,
"displayName": "TaskMaster",
"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
"version": "0.25.0",
"version": "0.25.0-rc.0",
"publisher": "Hamster",
"icon": "assets/icon.png",
"engines": {

View File

@@ -408,7 +408,7 @@ export function createMCPConfigFromSettings(): MCPConfig {
const taskMasterPath = require.resolve('task-master-ai');
const mcpServerPath = path.resolve(
path.dirname(taskMasterPath),
'./dist/mcp-server.js'
'mcp-server/server.js'
);
// Verify the server file exists

View File

@@ -235,60 +235,6 @@ node scripts/init.js
- "MCP provider requires session context" → Ensure running in MCP environment
- See the [MCP Provider Guide](./mcp-provider-guide.md) for detailed troubleshooting
### MCP Timeout Configuration
Long-running AI operations in taskmaster-ai can exceed the default 60-second MCP timeout. Operations like `parse_prd`, `expand_task`, `research`, and `analyze_project_complexity` may take 2-5 minutes to complete.
#### Adding Timeout Configuration
Add a `timeout` parameter to your MCP configuration to extend the timeout limit. The timeout configuration works identically across MCP clients including Cursor, Windsurf, and RooCode:
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"timeout": 300,
"env": {
"ANTHROPIC_API_KEY": "your-anthropic-api-key"
}
}
}
}
```
**Configuration Details:**
- **`timeout: 300`** - Sets timeout to 300 seconds (5 minutes)
- **Value range**: 1-3600 seconds (1 second to 1 hour)
- **Recommended**: 300 seconds provides sufficient time for most AI operations
- **Format**: Integer value in seconds (not milliseconds)
#### Automatic Setup
When adding taskmaster rules for supported editors, the timeout configuration is automatically included:
```bash
# Automatically includes timeout configuration
task-master rules add cursor
task-master rules add roo
task-master rules add windsurf
task-master rules add vscode
```
#### Troubleshooting Timeouts
If you're still experiencing timeout errors:
1. **Verify configuration**: Check that `timeout: 300` is present in your MCP config
2. **Restart editor**: Restart your editor after making configuration changes
3. **Increase timeout**: For very complex operations, try `timeout: 600` (10 minutes)
4. **Check API keys**: Ensure required API keys are properly configured
**Expected behavior:**
- **Before fix**: Operations fail after 60 seconds with `MCP request timed out after 60000ms`
- **After fix**: Operations complete successfully within the configured timeout limit
### Google Vertex AI Configuration
Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:

View File

@@ -451,8 +451,8 @@ When using Task Master in VS Code with MCP support:
{
"servers": {
"task-master-dev": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"command": "node",
"args": ["mcp-server/server.js"],
"cwd": "/path/to/your/task-master-project",
"env": {
"NODE_ENV": "development",

81
output.txt Normal file

File diff suppressed because one or more lines are too long

14
package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "task-master-ai",
"version": "0.27.0",
"version": "0.27.0-rc.2",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "task-master-ai",
"version": "0.27.0",
"version": "0.27.0-rc.2",
"license": "MIT WITH Commons-Clause",
"workspaces": [
"apps/*",
@@ -99,7 +99,7 @@
},
"apps/cli": {
"name": "@tm/cli",
"version": "0.27.0",
"version": "0.27.0-rc.0",
"license": "MIT",
"dependencies": {
"@tm/core": "*",
@@ -359,13 +359,13 @@
}
},
"apps/docs": {
"version": "0.0.3",
"version": "0.0.2",
"devDependencies": {
"mintlify": "^4.2.111"
}
},
"apps/extension": {
"version": "0.25.0",
"version": "0.25.0-rc.0",
"dependencies": {
"task-master-ai": "*"
},
@@ -31873,7 +31873,7 @@
},
"packages/build-config": {
"name": "@tm/build-config",
"version": "1.0.1",
"version": "1.0.0",
"license": "MIT",
"dependencies": {
"tsup": "^8.5.0"
@@ -31885,7 +31885,7 @@
},
"packages/tm-core": {
"name": "@tm/core",
"version": "0.26.1",
"version": "0.26.0",
"license": "MIT",
"dependencies": {
"@supabase/supabase-js": "^2.57.4",

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.27.0",
"version": "0.27.0-rc.2",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",

View File

@@ -1,3 +0,0 @@
# @tm/build-config
## 1.0.1

View File

@@ -1,6 +1,6 @@
{
"name": "@tm/build-config",
"version": "1.0.1",
"version": "1.0.0",
"description": "Shared build configuration for Task Master monorepo",
"type": "module",
"private": true,

View File

@@ -1,7 +1,5 @@
# Changelog
## 0.26.1
All notable changes to the @task-master/tm-core package will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
@@ -10,7 +8,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Added
- Initial package structure and configuration
- TypeScript support with strict mode
- Dual ESM/CJS build system with tsup
@@ -21,7 +18,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Comprehensive documentation and README
### Development Infrastructure
- tsup configuration for dual format builds
- Jest configuration with ESM support
- ESLint configuration with TypeScript rules
@@ -31,7 +27,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- .gitignore for development files
### Package Structure
- `src/types/` - TypeScript type definitions (placeholder)
- `src/providers/` - AI provider implementations (placeholder)
- `src/storage/` - Storage layer abstractions (placeholder)
@@ -43,7 +38,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [1.0.0] - TBD
### Planned Features
- Complete TypeScript type system
- AI provider implementations
- Storage adapters
@@ -58,11 +52,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## Release Notes
### Version 1.0.0 (Coming Soon)
This will be the first stable release of tm-core with complete implementations of all modules. Currently, all modules contain placeholder implementations to establish the package structure and enable development of dependent packages.
### Development Status
- ✅ Package structure and configuration
- ✅ Build and test infrastructure
- ✅ Development tooling setup

View File

@@ -1,6 +1,6 @@
{
"name": "@tm/core",
"version": "0.26.1",
"version": "0.26.0",
"private": true,
"description": "Core library for Task Master - TypeScript task management system",
"type": "module",

View File

@@ -1847,7 +1847,7 @@ function registerCommands(programInstance) {
)
.option(
'-r, --research',
'Use configured research model for research-backed complexity analysis'
'Use Perplexity AI for research-backed complexity analysis'
)
.option(
'-i, --id <ids>',

View File

@@ -310,7 +310,6 @@ function validateProviderModelCombination(providerName, modelId) {
function validateClaudeCodeSettings(settings) {
// Define the base settings schema without commandSpecific first
const BaseSettingsSchema = z.object({
pathToClaudeCodeExecutable: z.string().optional(),
maxTurns: z.number().int().positive().optional(),
customSystemPrompt: z.string().optional(),
appendSystemPrompt: z.string().optional(),

View File

@@ -5,40 +5,6 @@ import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';
import { ROO_MODES } from '../constants/profiles.js';
// Import the shared MCP configuration helper
import { formatJSONWithTabs } from '../utils/create-mcp-config.js';
// Roo-specific MCP configuration enhancements
function enhanceRooMCPConfiguration(mcpPath) {
if (!fs.existsSync(mcpPath)) {
log('warn', `[Roo] MCP configuration file not found at ${mcpPath}`);
return;
}
try {
// Read the existing configuration
const mcpConfig = JSON.parse(fs.readFileSync(mcpPath, 'utf8'));
if (mcpConfig.mcpServers && mcpConfig.mcpServers['task-master-ai']) {
const server = mcpConfig.mcpServers['task-master-ai'];
// Add Roo-specific timeout enhancement for long-running AI operations
server.timeout = 300;
// Write the enhanced configuration back
fs.writeFileSync(mcpPath, formatJSONWithTabs(mcpConfig) + '\n');
log(
'debug',
`[Roo] Enhanced MCP configuration with timeout at ${mcpPath}`
);
} else {
log('warn', `[Roo] task-master-ai server not found in MCP configuration`);
}
} catch (error) {
log('error', `[Roo] Failed to enhance MCP configuration: ${error.message}`);
}
}
// Lifecycle functions for Roo profile
function onAddRulesProfile(targetDir, assetsDir) {
// Use the provided assets directory to find the roocode directory
@@ -66,9 +32,6 @@ function onAddRulesProfile(targetDir, assetsDir) {
}
}
// Note: MCP configuration is now handled by the base profile system
// The base profile will call setupMCPConfiguration, and we enhance it in onPostConvert
for (const mode of ROO_MODES) {
const src = path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`);
const dest = path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`);
@@ -115,15 +78,6 @@ function onRemoveRulesProfile(targetDir) {
const rooDir = path.join(targetDir, '.roo');
if (fs.existsSync(rooDir)) {
// Remove MCP configuration
const mcpPath = path.join(rooDir, 'mcp.json');
try {
fs.rmSync(mcpPath, { force: true });
log('debug', `[Roo] Removed MCP configuration from ${mcpPath}`);
} catch (err) {
log('error', `[Roo] Failed to remove MCP configuration: ${err.message}`);
}
fs.readdirSync(rooDir).forEach((entry) => {
if (entry.startsWith('rules-')) {
const modeDir = path.join(rooDir, entry);
@@ -147,13 +101,7 @@ function onRemoveRulesProfile(targetDir) {
}
function onPostConvertRulesProfile(targetDir, assetsDir) {
// Enhance the MCP configuration with Roo-specific features after base setup
const mcpPath = path.join(targetDir, '.roo', 'mcp.json');
try {
enhanceRooMCPConfiguration(mcpPath);
} catch (err) {
log('error', `[Roo] Failed to enhance MCP configuration: ${err.message}`);
}
onAddRulesProfile(targetDir, assetsDir);
}
// Create and export roo profile using the base factory
@@ -163,7 +111,6 @@ export const rooProfile = createProfile({
url: 'roocode.com',
docsUrl: 'docs.roocode.com',
toolMappings: COMMON_TOOL_MAPPINGS.ROO_STYLE,
mcpConfig: true, // Enable MCP config - we enhance it with Roo-specific features
onAdd: onAddRulesProfile,
onRemove: onRemoveRulesProfile,
onPostConvert: onPostConvertRulesProfile

View File

@@ -262,6 +262,3 @@ export function removeTaskMasterMCPConfiguration(projectRoot, mcpConfigPath) {
return result;
}
// Export the formatting function for use by other modules
export { formatJSONWithTabs };

View File

@@ -26,7 +26,7 @@ describe('Roo Profile Initialization Functionality', () => {
expect(rooProfile.displayName).toBe('Roo Code');
expect(rooProfile.profileDir).toBe('.roo'); // default
expect(rooProfile.rulesDir).toBe('.roo/rules'); // default
expect(rooProfile.mcpConfig).toBe(true); // now uses standard MCP configuration with Roo enhancements
expect(rooProfile.mcpConfig).toBe(true); // default
});
test('roo.js uses custom ROO_STYLE tool mappings', () => {

View File

@@ -266,10 +266,10 @@ describe('MCP Configuration Validation', () => {
expect(mcpEnabledProfiles).toContain('cursor');
expect(mcpEnabledProfiles).toContain('gemini');
expect(mcpEnabledProfiles).toContain('opencode');
expect(mcpEnabledProfiles).toContain('roo');
expect(mcpEnabledProfiles).toContain('vscode');
expect(mcpEnabledProfiles).toContain('windsurf');
expect(mcpEnabledProfiles).toContain('zed');
expect(mcpEnabledProfiles).toContain('roo');
expect(mcpEnabledProfiles).not.toContain('cline');
expect(mcpEnabledProfiles).not.toContain('codex');
expect(mcpEnabledProfiles).not.toContain('trae');
@@ -384,7 +384,6 @@ describe('MCP Configuration Validation', () => {
'claude',
'cursor',
'gemini',
'kiro',
'opencode',
'roo',
'windsurf',