Compare commits

..

4 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Ralph Khreish | cd90b4d65f | chore: fix changeset config | 2025-09-17 22:53:25 +02:00 |
| Ralph Khreish | 33259cc4f8 | docs(changeset): Test out the RC | 2025-09-17 22:24:22 +02:00 |
| Ralph Khreish | 1917e6a01a | chore: fix privating certain packages | 2025-09-17 22:19:41 +02:00 |
| Ralph Khreish | 35f3e71d2d | chore: fix pre-release CI | 2025-09-17 22:19:41 +02:00 |
307 changed files with 6148 additions and 15983 deletions

View File

@@ -1,11 +0,0 @@
---
"task-master-ai": minor
---
Add Codex CLI provider with OAuth authentication
- Added codex-cli provider for GPT-5 and GPT-5-Codex models (272K input / 128K output)
- OAuth-first authentication via `codex login` - no API key required
- Optional OPENAI_CODEX_API_KEY support
- Codebase analysis capabilities automatically enabled
- Command-specific settings and approval/sandbox modes

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Improve `analyze-complexity` cli docs and `--research` flag documentation

View File

@@ -8,8 +8,12 @@
],
"commit": false,
"fixed": [],
"linked": [
["task-master-ai", "@tm/cli", "@tm/core"]
],
"access": "public",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": [
"docs"
]

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---
Add Cursor IDE custom slash command support
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Change parent task back to "pending" when all subtasks are in "pending" state

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Do a quick fix on build

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix MCP connection errors caused by deprecated generateTaskFiles calls. Resolves "Cannot read properties of null (reading 'toString')" errors when using MCP tools for task management operations.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix MCP server error when file parameter not provided - now properly constructs default tasks.json path instead of failing with 'tasksJsonPath is required' error.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Added api keys page on docs website: docs.task-master.dev/getting-started/api-keys

View File

@@ -1,10 +0,0 @@
---
"task-master-ai": minor
---
Move to AI SDK v5:
- Works better with claude-code and gemini-cli as ai providers
- Improved openai model family compatibility
- Migrate ollama provider to v2
- Closes #1223, #1013, #1161, #1174

View File

@@ -1,30 +0,0 @@
---
"task-master-ai": minor
---
Migrate AI services to use generateObject for structured data generation
This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
### Key Changes:
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
### Technical Improvements:
- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures
### Bug Fixes:
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format
This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
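As a rough, hypothetical sketch of the pattern this changeset describes (not Task Master's actual service code), a `generateObject` call with a Zod schema in the Vercel AI SDK looks roughly like the following; the schema, model choice, and prompt are illustrative assumptions:
```typescript
// Minimal sketch of structured output via generateObject (Vercel AI SDK).
// The schema, model choice, and prompt are illustrative, not Task Master's own.
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// Hypothetical schema enforcing sequential subtask IDs (1, 2, 3, ...)
const subtaskSchema = z.object({
  subtasks: z.array(
    z.object({
      id: z.number().int().positive(),
      title: z.string(),
      description: z.string()
    })
  )
});

const { object } = await generateObject({
  model: anthropic('claude-3-7-sonnet-20250219'),
  schema: subtaskSchema,
  prompt: 'Expand task 5 into three subtasks with sequential ids 1, 2, 3.'
});

// Schema validation has already run; object.subtasks is typed and structured.
console.log(object.subtasks.map((s) => s.id)); // e.g. [1, 2, 3]
```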

View File

@@ -1,13 +0,0 @@
---
"task-master-ai": minor
---
Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
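Purely as an illustration of the programmatic generation mentioned above (the real Roo profile code is not shown in this diff), writing an MCP config with a longer timeout might look like the sketch below; the `.roo/mcp.json` path and the `timeout` key name are assumptions:
```typescript
// Hypothetical sketch only: the output path and the "timeout" field name are
// assumptions for illustration, not confirmed details of the Roo profile.
import { mkdirSync, writeFileSync } from 'fs';
import { join } from 'path';

function writeRooMcpConfig(projectRoot: string): void {
  const config = {
    mcpServers: {
      'task-master-ai': {
        command: 'npx',
        args: ['-y', '--package=task-master-ai', 'task-master-ai'],
        timeout: 300 // seconds; up from the 60-second default
      }
    }
  };
  const dir = join(projectRoot, '.roo');
  mkdirSync(dir, { recursive: true });
  writeFileSync(join(dir, 'mcp.json'), JSON.stringify(config, null, 2) + '\n');
}
```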

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Test out the RC

View File

@@ -0,0 +1,5 @@
---
"@tm/cli": minor
---
testing this stuff out to see how the release candidate works with monorepo

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix Claude Code settings validation for pathToClaudeCodeExecutable

View File

@@ -1,23 +0,0 @@
{
"mode": "pre",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.27.3",
"docs": "0.0.4",
"extension": "0.25.4"
},
"changesets": [
"chore-fix-docs",
"cursor-slash-commands",
"curvy-weeks-flow",
"easy-spiders-wave",
"flat-cities-say",
"forty-tables-invite",
"gentle-cats-dance",
"mcp-timeout-configuration",
"petite-ideas-grab",
"silly-pandas-find",
"sweet-maps-rule",
"whole-pigs-say"
]
}

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix sonar deep research model failing, should be called `sonar-deep-research`

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Upgrade grok-cli ai provider to ai sdk v5

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---
Fix complexity score not showing for `task-master show` and `task-master list`
- Added complexity score on "next task" when running `task-master list`
- Added colors to complexity to reflect complexity (easy, medium, hard)

View File

@@ -0,0 +1,5 @@
---
"extension": minor
---
Added a Start Build button to the VS Code Task Properties Right Panel

View File

@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./dist/mcp-server.js"],
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",

View File

@@ -1,157 +0,0 @@
#!/usr/bin/env node
import { readFileSync, existsSync, writeFileSync } from 'fs';
function parseMetricsTable(content, metricName) {
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim();
// Match a markdown table row like: | Metric Name | value | ...
const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
const match = line.match(re);
if (match) {
return match[1].trim() || 'N/A';
}
}
return 'N/A';
}
function parseCountMetric(content, metricName) {
const result = parseMetricsTable(content, metricName);
// Extract number from string, handling commas and spaces
const numberMatch = result.toString().match(/[\d,]+/);
if (numberMatch) {
const number = parseInt(numberMatch[0].replace(/,/g, ''));
return isNaN(number) ? 0 : number;
}
return 0;
}
function main() {
const metrics = {
issues_created: 0,
issues_closed: 0,
prs_created: 0,
prs_merged: 0,
issue_avg_first_response: 'N/A',
issue_avg_time_to_close: 'N/A',
pr_avg_first_response: 'N/A',
pr_avg_merge_time: 'N/A'
};
// Parse issue metrics
if (existsSync('issue_metrics.md')) {
console.log('📄 Found issue_metrics.md, parsing...');
const issueContent = readFileSync('issue_metrics.md', 'utf8');
metrics.issues_created = parseCountMetric(
issueContent,
'Total number of items created'
);
metrics.issues_closed = parseCountMetric(
issueContent,
'Number of items closed'
);
metrics.issue_avg_first_response = parseMetricsTable(
issueContent,
'Time to first response'
);
metrics.issue_avg_time_to_close = parseMetricsTable(
issueContent,
'Time to close'
);
} else {
console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
}
// Parse PR created metrics
if (existsSync('pr_created_metrics.md')) {
console.log('📄 Found pr_created_metrics.md, parsing...');
const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
metrics.prs_created = parseCountMetric(
prCreatedContent,
'Total number of items created'
);
metrics.pr_avg_first_response = parseMetricsTable(
prCreatedContent,
'Time to first response'
);
} else {
console.warn(
'[parse-metrics] pr_created_metrics.md not found; using defaults.'
);
}
// Parse PR merged metrics (for more accurate merge data)
if (existsSync('pr_merged_metrics.md')) {
console.log('📄 Found pr_merged_metrics.md, parsing...');
const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
metrics.prs_merged = parseCountMetric(
prMergedContent,
'Total number of items created'
);
// For merged PRs, "Time to close" is actually time to merge
metrics.pr_avg_merge_time = parseMetricsTable(
prMergedContent,
'Time to close'
);
} else {
console.warn(
'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
);
// Fallback: try old pr_metrics.md if it exists
if (existsSync('pr_metrics.md')) {
console.log('📄 Falling back to pr_metrics.md...');
const prContent = readFileSync('pr_metrics.md', 'utf8');
const mergedCount = parseCountMetric(prContent, 'Number of items merged');
metrics.prs_merged =
mergedCount || parseCountMetric(prContent, 'Number of items closed');
const maybeMergeTime = parseMetricsTable(
prContent,
'Average time to merge'
);
metrics.pr_avg_merge_time =
maybeMergeTime !== 'N/A'
? maybeMergeTime
: parseMetricsTable(prContent, 'Time to close');
} else {
console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
}
}
// Output for GitHub Actions
const output = Object.entries(metrics)
.map(([key, value]) => `${key}=${value}`)
.join('\n');
// Always output to stdout for debugging
console.log('\n=== FINAL METRICS ===');
Object.entries(metrics).forEach(([key, value]) => {
console.log(`${key}: ${value}`);
});
// Write to GITHUB_OUTPUT if in GitHub Actions
if (process.env.GITHUB_OUTPUT) {
try {
writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
console.log(
`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
);
} catch (error) {
console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
process.exit(1);
}
} else {
console.log(
'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
);
}
}
main();
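For reference, a hypothetical input shows what the table parsers above expect from the github/issue-metrics markdown output (assuming the two functions were exported or pasted alongside this snippet):
```typescript
// Hypothetical sample of the markdown tables emitted by github/issue-metrics.
const sample = [
  '| Metric | Average | Median |',
  '| --- | --- | --- |',
  '| Time to first response | 4:30:00 | 2:10:00 |',
  '| Total number of items created | 12 | |'
].join('\n');

parseMetricsTable(sample, 'Time to first response'); // => '4:30:00'
parseCountMetric(sample, 'Total number of items created'); // => 12
```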

View File

@@ -6,6 +6,9 @@ on:
- main
- next
pull_request:
branches:
- main
- next
workflow_dispatch:
concurrency:
@@ -89,9 +92,6 @@ jobs:
env:
NODE_ENV: production
FORCE_COLOR: 1
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
- name: Upload build artifacts
uses: actions/upload-artifact@v4

View File

@@ -41,7 +41,8 @@ jobs:
restore-keys: |
${{ runner.os }}-node-
- name: Install Monorepo Dependencies
- name: Install Extension Dependencies
working-directory: apps/extension
run: npm ci
timeout-minutes: 5
@@ -67,6 +68,7 @@ jobs:
${{ runner.os }}-node-
- name: Install if cache miss
working-directory: apps/extension
run: npm ci
timeout-minutes: 3
@@ -98,6 +100,7 @@ jobs:
${{ runner.os }}-node-
- name: Install if cache miss
working-directory: apps/extension
run: npm ci
timeout-minutes: 3

View File

@@ -31,7 +31,8 @@ jobs:
restore-keys: |
${{ runner.os }}-node-
- name: Install Monorepo Dependencies
- name: Install Extension Dependencies
working-directory: apps/extension
run: npm ci
timeout-minutes: 5

View File

@@ -65,19 +65,11 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
- name: Run format
run: npm run format
env:
FORCE_COLOR: 1
- name: Build packages
run: npm run turbo:build
env:
NODE_ENV: production
FORCE_COLOR: 1
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
- name: Create Release Candidate Pull Request or Publish Release Candidate to npm
uses: changesets/action@v1

View File

@@ -22,7 +22,7 @@ jobs:
- uses: actions/setup-node@v4
with:
node-version: 20
cache: "npm"
cache: 'npm'
- name: Cache node_modules
uses: actions/cache@v4
@@ -46,9 +46,6 @@ jobs:
env:
NODE_ENV: production
FORCE_COLOR: 1
TM_PUBLIC_BASE_DOMAIN: ${{ secrets.TM_PUBLIC_BASE_DOMAIN }}
TM_PUBLIC_SUPABASE_URL: ${{ secrets.TM_PUBLIC_SUPABASE_URL }}
TM_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.TM_PUBLIC_SUPABASE_ANON_KEY }}
- name: Create Release Pull Request or Publish to npm
uses: changesets/action@v1

View File

@@ -8,7 +8,7 @@ on:
permissions:
contents: read
issues: read
issues: write
pull-requests: read
jobs:
@@ -17,25 +17,15 @@ jobs:
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Get dates for last 14 days
- name: Get dates for last week
run: |
set -Eeuo pipefail
# Last 14 days
first_day=$(date -d "14 days ago" +%Y-%m-%d)
# Last 7 days
first_day=$(date -d "7 days ago" +%Y-%m-%d)
last_day=$(date +%Y-%m-%d)
echo "first_day=$first_day" >> $GITHUB_ENV
echo "last_day=$last_day" >> $GITHUB_ENV
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
- name: Generate issue metrics
uses: github/issue-metrics@v3
@@ -44,39 +34,40 @@ jobs:
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
HIDE_TIME_TO_ANSWER: true
HIDE_LABEL_METRICS: false
OUTPUT_FILE: issue_metrics.md
- name: Generate PR created metrics
- name: Generate PR metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_created_metrics.md
- name: Generate PR merged metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_merged_metrics.md
- name: Debug generated metrics
run: |
set -Eeuo pipefail
echo "Listing markdown files in workspace:"
ls -la *.md || true
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
if [ -f "$f" ]; then
echo "== $f (first 10 lines) =="
head -n 10 "$f"
else
echo "Missing $f"
fi
done
OUTPUT_FILE: pr_metrics.md
- name: Parse metrics
id: metrics
run: node .github/scripts/parse-metrics.mjs
run: |
# Parse the metrics from the generated markdown files
if [ -f "issue_metrics.md" ]; then
# Extract key metrics using grep/awk
AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
fi
if [ -f "pr_metrics.md" ]; then
PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
fi
# Set outputs for Discord action
echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT
- name: Send to Discord
uses: sarisia/actions-status-discord@v1
@@ -87,22 +78,19 @@ jobs:
title: "📊 Weekly Metrics Report"
description: |
**${{ env.week_of }}**
*${{ env.date_range }}*
**🎯 Issues**
• Created: ${{ steps.metrics.outputs.issues_created }}
• Closed: ${{ steps.metrics.outputs.issues_closed }}
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
**🔀 Pull Requests**
• Created: ${{ steps.metrics.outputs.prs_created }}
• Merged: ${{ steps.metrics.outputs.prs_merged }}
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
**📈 Visual Analytics**
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
**⏱️ Response Times**
• First Response: ${{ steps.metrics.outputs.avg_first_response }}
• Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
• PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
color: 0x58AFFF
username: Task Master Metrics Bot
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png

View File

@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",

View File

@@ -1,6 +0,0 @@
{
"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
"defaultBranch": "main",
"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
}

View File

@@ -85,7 +85,7 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",

View File

@@ -2,8 +2,8 @@
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
@@ -14,8 +14,8 @@
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 8192,
"temperature": 0.2
}
},
@@ -29,15 +29,9 @@
"ollamaBaseURL": "http://localhost:11434/api",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
"responseLanguage": "English",
"enableCodebaseAnalysis": true,
"userId": "1234567890",
"azureBaseURL": "https://your-endpoint.azure.com/",
"defaultTag": "master"
},
"claudeCode": {},
"grokCli": {
"timeout": 120000,
"workingDirectory": null,
"defaultModel": "grok-4-latest"
}
"claudeCode": {}
}

View File

@@ -1,91 +0,0 @@
<context>
# Overview
Add a new CLI command: `task-master start <task_id>` (alias: `tm start <task_id>`). This command hard-codes `claude-code` as the executor, fetches task details, builds a standardized prompt, runs claude-code, shows the result, checks for git changes, and auto-marks the task as done if successful.
We follow the Commander class pattern and reuse task retrieval from the `show` command flow. The scope is kept extremely minimal to fit a 1-hour hackathon timeline.
# Core Features
- `start` command (Commander class style)
- Hard-coded executor: `claude-code`
- Standardized prompt designed for minimal changes following existing patterns
- Shows claude-code output (no streaming)
- Git status check for success detection
- Auto-mark task done if successful
# User Experience
```
task-master start 12
```
1) Fetches Task #12 details
2) Builds standardized prompt with task context
3) Runs claude-code with the prompt
4) Shows output
5) Checks git status for changes
6) Auto-marks task done if changes detected
</context>
<PRD>
# Technical Architecture
- Command pattern:
- Create `apps/cli/src/commands/start.command.ts` modeled on [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) and task lookup from [show.command.ts](mdc:apps/cli/src/commands/show.command.ts)
- Task retrieval:
- Use `@tm/core` via `createTaskMasterCore` to get task by ID
- Extract: id, title, description, details
- Executor (ultra-simple approach):
- Execute `claude "full prompt here"` command directly
- The prompt tells Claude to first run `tm show <task_id>` to get task details
- Then tells Claude to implement the code changes
- This opens Claude CLI interface naturally in the current terminal
- No subprocess management needed - just execute the command
- Execution flow:
1) Validate `<task_id>` exists; exit with error if not
2) Build standardized prompt that includes instructions to run `tm show <task_id>`
3) Execute `claude "prompt"` command directly in terminal
4) Claude CLI opens, runs `tm show`, then implements changes
5) After Claude session ends, run `git status --porcelain` to detect changes
6) If changes detected, auto-run `task-master set-status --id=<task_id> --status=done`
- Success criteria:
- Success = exit code 0 AND git shows modified/created files
- Print changed file paths; warn if no changes detected
# Development Roadmap
MVP (ship in ~1 hour):
1) Implement `start.command.ts` (Commander class), parse `<task_id>`
2) Validate task exists via tm-core
3) Build prompt that tells Claude to run `tm show <task_id>` then implement
4) Execute `claude "prompt"` command, then check git status and auto-mark done
# Risks and Mitigations
- Executor availability: Error clearly if `claude-code` provider fails
- False success: Git-change heuristic acceptable for hackathon MVP
# Appendix
**Standardized Prompt Template:**
```
You are an AI coding assistant with access to this repository's codebase.
First, run this command to get the task details:
tm show <task_id>
Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>
Begin by running tm show <task_id> to understand what needs to be implemented.
```
**Key References:**
- [list.command.ts](mdc:apps/cli/src/commands/list.command.ts) - Command structure
- [show.command.ts](mdc:apps/cli/src/commands/show.command.ts) - Task validation
- Node.js `child_process.exec()` - For executing `claude "prompt"` command
</PRD>

View File

@@ -1,6 +1,6 @@
{
"currentTag": "master",
"lastSwitched": "2025-09-12T22:25:27.535Z",
"lastSwitched": "2025-08-27T21:03:20.550Z",
"branchTagMapping": {
"v017-adds": "v017-adds",
"next": "next"

View File

@@ -1,34 +0,0 @@
# Task ID: 1
# Title: Create start command class structure
# Status: pending
# Dependencies: None
# Priority: high
# Description: Create the basic structure for the start command following the Commander class pattern
# Details:
Create a new file `apps/cli/src/commands/start.command.ts` based on the existing list.command.ts pattern. Implement the command class with proper command registration, description, and argument handling for the task_id parameter. The class should extend the base Command class and implement the required methods.
Example structure:
```typescript
import { Command } from 'commander';
import { BaseCommand } from './base.command';
export class StartCommand extends BaseCommand {
public register(program: Command): void {
program
.command('start')
.alias('tm start')
.description('Start implementing a task using claude-code')
.argument('<task_id>', 'ID of the task to start')
.action(async (taskId: string) => {
await this.execute(taskId);
});
}
public async execute(taskId: string): Promise<void> {
// Implementation will be added in subsequent tasks
}
}
```
# Test Strategy:
Verify the command registers correctly by running the CLI with --help and checking that the start command appears with proper description and arguments. Test the basic structure by ensuring the command can be invoked without errors.

View File

@@ -1,26 +0,0 @@
# Task ID: 2
# Title: Register start command in CLI
# Status: pending
# Dependencies: 7
# Priority: high
# Description: Register the start command in the CLI application
# Details:
Update the CLI application to register the new start command. This involves importing the StartCommand class and adding it to the commands array in the CLI initialization.
In `apps/cli/src/index.ts` or the appropriate file where commands are registered:
```typescript
import { StartCommand } from './commands/start.command';
// Add StartCommand to the commands array
const commands = [
// ... existing commands
new StartCommand(),
];
// Register all commands
commands.forEach(command => command.register(program));
```
# Test Strategy:
Verify the command is correctly registered by running the CLI with --help and checking that the start command appears in the list of available commands.

View File

@@ -1,32 +0,0 @@
# Task ID: 3
# Title: Create standardized prompt builder
# Status: pending
# Dependencies: 1
# Priority: medium
# Description: Implement a function to build the standardized prompt for claude-code based on the task details
# Details:
Create a function in the StartCommand class that builds the standardized prompt according to the template provided in the PRD. The prompt should include instructions for Claude to first run `tm show <task_id>` to get task details, and then implement the required changes.
```typescript
private buildPrompt(taskId: string): string {
return `You are an AI coding assistant with access to this repository's codebase.
First, run this command to get the task details:
tm show ${taskId}
Then implement the task with these requirements:
- Make the SMALLEST number of code changes possible
- Follow ALL existing patterns in the codebase (you have access to analyze the code)
- Do NOT over-engineer the solution
- Use existing files/functions/patterns wherever possible
- When complete, print: COMPLETED: <brief summary of changes>
Begin by running tm show ${taskId} to understand what needs to be implemented.`;
}
```
<info added on 2025-09-12T02:40:01.812Z>
The prompt builder function will handle task context retrieval by instructing Claude to use the task-master show command. This approach ensures Claude has access to all necessary task details before implementation begins. The command syntax "tm show ${taskId}" embedded in the prompt will direct Claude to first gather the complete task context, including description, requirements, and any existing implementation details, before proceeding with code changes.
</info added on 2025-09-12T02:40:01.812Z>
# Test Strategy:
Verify the prompt is correctly formatted by calling the function with a sample task ID and checking that the output matches the expected template with the task ID properly inserted.

View File

@@ -1,36 +0,0 @@
# Task ID: 4
# Title: Implement claude-code executor
# Status: pending
# Dependencies: 3
# Priority: high
# Description: Add functionality to execute the claude-code command with the built prompt
# Details:
Implement the functionality to execute the claude command with the built prompt. This should use Node.js child_process.exec() to run the command directly in the terminal.
```typescript
import { execSync } from 'child_process';
// Inside execute method, after task validation
private async executeClaude(prompt: string): Promise<void> {
console.log('Starting claude-code to implement the task...');
try {
// Execute claude with the prompt
const claudeCommand = `claude "${prompt.replace(/"/g, '\\"')}"`;
// Use execSync to wait for the command to complete
execSync(claudeCommand, { stdio: 'inherit' });
console.log('Claude session completed.');
} catch (error) {
console.error('Error executing claude-code:', error.message);
process.exit(1);
}
}
```
Then call this method from the execute method after building the prompt.
# Test Strategy:
Test by running the command with a valid task ID and verifying that the claude command is executed with the correct prompt. Check that the command handles errors appropriately if claude-code is not available.

View File

@@ -1,49 +0,0 @@
# Task ID: 7
# Title: Integrate execution flow in start command
# Status: pending
# Dependencies: 3, 4
# Priority: high
# Description: Connect all the components to implement the complete execution flow for the start command
# Details:
Update the execute method in the StartCommand class to integrate all the components and implement the complete execution flow as described in the PRD:
1. Validate task exists
2. Build standardized prompt
3. Execute claude-code
4. Check git status for changes
5. Auto-mark task as done if changes detected
```typescript
public async execute(taskId: string): Promise<void> {
// Validate task exists
const core = await createTaskMasterCore();
const task = await core.tasks.getById(parseInt(taskId, 10));
if (!task) {
console.error(`Task with ID ${taskId} not found`);
process.exit(1);
}
// Build prompt
const prompt = this.buildPrompt(taskId);
// Execute claude-code
await this.executeClaude(prompt);
// Check git status
const changedFiles = await this.checkGitChanges();
if (changedFiles.length > 0) {
console.log('\nChanges detected in the following files:');
changedFiles.forEach(file => console.log(`- ${file}`));
// Auto-mark task as done
await this.markTaskAsDone(taskId);
console.log(`\nTask ${taskId} completed successfully and marked as done.`);
} else {
console.warn('\nNo changes detected after claude-code execution. Task not marked as done.');
}
}
```
# Test Strategy:
Test the complete execution flow by running the start command with a valid task ID and verifying that all steps are executed correctly. Test with both scenarios: when changes are detected and when no changes are detected.
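The `checkGitChanges` and `markTaskAsDone` helpers referenced in the snippet above are not spelled out in these task files. A minimal standalone sketch, assuming `git status --porcelain` for change detection and the `task-master set-status` CLI call from the PRD, might look like:
```typescript
import { execSync } from 'child_process';

// List files reported by `git status --porcelain` (one "XY <path>" line per change).
// Hypothetical helper matching the PRD's success heuristic, not confirmed code.
export function checkGitChanges(): string[] {
  const output = execSync('git status --porcelain', { encoding: 'utf8' });
  return output
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => line.slice(3)); // drop the 2-char status code and the space
}

// Mark the task done via the CLI, as the PRD specifies.
export function markTaskAsDone(taskId: string): void {
  execSync(`task-master set-status --id=${taskId} --status=done`, {
    stdio: 'inherit'
  });
}
```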

File diff suppressed because one or more lines are too long

View File

@@ -1,191 +1,5 @@
# task-master-ai
## 0.28.0-rc.1
### Patch Changes
- [#1274](https://github.com/eyaltoledano/claude-task-master/pull/1274) [`4f984f8`](https://github.com/eyaltoledano/claude-task-master/commit/4f984f8a6965da9f9c7edd60ddfd6560ac022917) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Do a quick fix on build
## 0.28.0-rc.0
### Minor Changes
- [#1215](https://github.com/eyaltoledano/claude-task-master/pull/1215) [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d) Thanks [@joedanz](https://github.com/joedanz)! - Add Cursor IDE custom slash command support
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Added api keys page on docs website: docs.task-master.dev/getting-started/api-keys
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Move to AI SDK v5:
- Works better with claude-code and gemini-cli as ai providers
- Improved openai model family compatibility
- Migrate ollama provider to v2
- Closes #1223, #1013, #1161, #1174
- [#1262](https://github.com/eyaltoledano/claude-task-master/pull/1262) [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Migrate AI services to use generateObject for structured data generation
This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
### Key Changes:
- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
### Technical Improvements:
- Centralized provider configuration in `ai-providers-unified.js`
- Added `generateObject` support detection for each provider
- Implemented proper error handling for schema validation failures
- Maintained backward compatibility with existing prompt structures
### Bug Fixes:
- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
- Enhanced prompt instructions to enforce proper ID generation patterns
- Ensured subtasks display correctly as X.1, X.2, X.3 format
This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
- [#1112](https://github.com/eyaltoledano/claude-task-master/pull/1112) [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541) Thanks [@olssonsten](https://github.com/olssonsten)! - Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
- [#1246](https://github.com/eyaltoledano/claude-task-master/pull/1246) [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Upgrade grok-cli ai provider to ai sdk v5
### Patch Changes
- [#1235](https://github.com/eyaltoledano/claude-task-master/pull/1235) [`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve `analyze-complexity` cli docs and `--research` flag documentation
- [#1251](https://github.com/eyaltoledano/claude-task-master/pull/1251) [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Change parent task back to "pending" when all subtasks are in "pending" state
- [#1172](https://github.com/eyaltoledano/claude-task-master/pull/1172) [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d) Thanks [@jujax](https://github.com/jujax)! - Fix Claude Code settings validation for pathToClaudeCodeExecutable
- [#1192](https://github.com/eyaltoledano/claude-task-master/pull/1192) [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a) Thanks [@nukunga](https://github.com/nukunga)! - Fix sonar deep research model failing, should be called `sonar-deep-research`
- [#1270](https://github.com/eyaltoledano/claude-task-master/pull/1270) [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix complexity score not showing for `task-master show` and `task-master list`
- Added complexity score on "next task" when running `task-master list`
- Added colors to complexity to reflect complexity (easy, medium, hard)
## 0.27.3
### Patch Changes
- [#1254](https://github.com/eyaltoledano/claude-task-master/pull/1254) [`af53525`](https://github.com/eyaltoledano/claude-task-master/commit/af53525cbc660a595b67d4bb90d906911c71f45d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed issue where `tm show` command could not find subtasks using dotted notation IDs (e.g., '8.1').
- The command now properly searches within parent task subtasks and returns the correct subtask information.
## 0.27.2
### Patch Changes
- [#1248](https://github.com/eyaltoledano/claude-task-master/pull/1248) [`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix set-status for subtasks:
- Parent tasks are now set as `done` when subtasks are all `done`
- Parent tasks are now set as `in-progress` when at least one subtask is `in-progress` or `done`
## 0.27.1
### Patch Changes
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix Zed MCP configuration by adding required "source" property
- Add "source": "custom" property to task-master-ai server in Zed settings.json
## 0.27.1-rc.1
### Patch Changes
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - One last testing final final
## 0.27.1-rc.0
### Patch Changes
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
## 0.27.0
### Minor Changes
- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need --package=task-master-ai in mcp server
- A lot of users were having issues with Taskmaster and usually a simple fix was to remove --package from your mcp.json
- we now bundle our whole package, so we no longer need the --package
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect next-task when no ID is provided.
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
## Setup Instructions
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```
## Key Features
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
## Available Models
- `grok-beta` - Latest Grok model with codebase context
- `grok-vision-beta` - Grok with vision capabilities and codebase context
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
## Credits
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve taskmaster ai provider defaults
- moving from main anthropic 3.7 to anthropic sonnet 4
- moving from fallback anthropic 3.5 to anthropic 3.7
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
## 0.27.0-rc.2
### Minor Changes
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
## 0.27.0-rc.1
### Minor Changes
- [`255b9f0`](https://github.com/eyaltoledano/claude-task-master/commit/255b9f0334555b0063280abde701445cd62fa11b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Testing one more pre-release iteration
## 0.27.0-rc.0
### Minor Changes
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Test out the RC
### Patch Changes
- Updated dependencies [[`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee)]:
- @tm/cli@0.27.0-rc.0
## 0.26.0
### Minor Changes

View File

@@ -4,28 +4,6 @@
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md
## Test Guidelines
### Synchronous Tests
- **NEVER use async/await in test functions** unless testing actual asynchronous operations
- Use synchronous top-level imports instead of dynamic `await import()`
- Test bodies should be synchronous whenever possible
- Example:
```javascript
// ✅ CORRECT - Synchronous imports
import { MyClass } from '../src/my-class.js';
it('should verify behavior', () => {
expect(new MyClass().property).toBe(value);
});
// ❌ INCORRECT - Async imports
it('should verify behavior', async () => {
const { MyClass } = await import('../src/my-class.js');
expect(new MyClass().property).toBe(value);
});
```
## Changeset Guidelines
- When creating changesets, remember that they are user-facing: rather than getting into code specifics, describe what the end user gains or what is fixed by the change.

View File

@@ -60,19 +60,6 @@ The following documentation is also available in the `docs` directory:
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
#### Claude Code Quick Install
For Claude Code users:
```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```
Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
## Requirements
Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
@@ -88,9 +75,8 @@ At least one (1) of the following is required:
- xAI API Key (for research or main model)
- OpenRouter API Key (for research or main model)
- Claude Code (no API key required - requires Claude Code CLI)
- Codex CLI (OAuth via ChatGPT subscription - requires Codex CLI)
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code or Codex CLI with OAuth). Adding all API keys enables you to seamlessly switch between model providers at will.
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code). Adding all API keys enables you to seamlessly switch between model providers at will.
## Quick Start
@@ -106,18 +92,17 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
| **Q CLI** | Global | `~/.aws/amazonq/mcp.json` | | `mcpServers` |
##### Manual Configuration
###### Cursor & Windsurf & Q Developer CLI (`mcpServers`)
###### Cursor & Windsurf (`mcpServers`)
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
@@ -137,7 +122,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
###### VSCode (`servers` + `type`)
@@ -146,7 +131,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"servers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",

View File

@@ -1,27 +0,0 @@
# @tm/cli
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## 0.27.0
### Patch Changes
- Updated dependencies []:
- @tm/core@0.26.1
## 0.27.0-rc.0
### Minor Changes
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo
## 1.1.0-rc.0
### Minor Changes
- [#1213](https://github.com/eyaltoledano/claude-task-master/pull/1213) [`cd90b4d`](https://github.com/eyaltoledano/claude-task-master/commit/cd90b4d65fc2f04bdad9fb73aba320b58a124240) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - testing this stuff out to see how the release candidate works with monorepo

View File

@@ -1,5 +1,6 @@
{
"name": "@tm/cli",
"version": "1.0.0",
"description": "Task Master CLI - Command line interface for task management",
"type": "module",
"private": true,
@@ -23,19 +24,19 @@
},
"dependencies": {
"@tm/core": "*",
"boxen": "^8.0.1",
"boxen": "^7.1.1",
"chalk": "5.6.2",
"cli-table3": "^0.6.5",
"commander": "^12.1.0",
"inquirer": "^12.5.0",
"ora": "^8.2.0"
"inquirer": "^9.2.10",
"ora": "^8.1.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@types/inquirer": "^9.0.3",
"@types/node": "^22.10.5",
"tsx": "^4.20.4",
"typescript": "^5.9.2",
"typescript": "^5.7.3",
"vitest": "^2.1.8"
},
"engines": {

View File

@@ -6,7 +6,7 @@
import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { Ora } from 'ora';
import ora from 'ora';
import {
AuthManager,
AuthenticationError,
@@ -49,15 +49,8 @@ export class ContextCommand extends Command {
this.addClearCommand();
this.addSetCommand();
// Accept optional positional argument for brief ID or Hamster URL
this.argument('[briefOrUrl]', 'Brief ID or Hamster brief URL');
// Default action: if an argument is provided, resolve and set context; else show
this.action(async (briefOrUrl?: string) => {
if (briefOrUrl && briefOrUrl.trim().length > 0) {
await this.executeSetFromBriefInput(briefOrUrl.trim());
return;
}
// Default action shows current context
this.action(async () => {
await this.executeShow();
});
}
@@ -333,7 +326,7 @@ export class ContextCommand extends Command {
choices: [
{ name: '(No brief - organization level)', value: null },
...briefs.map((brief) => ({
name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
name: `Brief ${brief.id.slice(0, 8)} (${new Date(brief.createdAt).toLocaleDateString()})`,
value: brief
}))
]
@@ -448,142 +441,6 @@ export class ContextCommand extends Command {
}
}
/**
* Execute setting context from a brief ID or Hamster URL
*/
private async executeSetFromBriefInput(briefOrUrl: string): Promise<void> {
let spinner: Ora | undefined;
try {
// Check authentication
if (!this.authManager.isAuthenticated()) {
ui.displayError('Not authenticated. Run "tm auth login" first.');
process.exit(1);
}
spinner = ora('Resolving brief...');
spinner.start();
// Extract brief ID
const briefId = this.extractBriefId(briefOrUrl);
if (!briefId) {
spinner.fail('Could not extract a brief ID from the provided input');
ui.displayError(
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
);
process.exit(1);
}
// Fetch brief and resolve its organization
const brief = await this.authManager.getBrief(briefId);
if (!brief) {
spinner.fail('Brief not found or you do not have access');
process.exit(1);
}
// Fetch org to get a friendly name (optional)
let orgName: string | undefined;
try {
const org = await this.authManager.getOrganization(brief.accountId);
orgName = org?.name;
} catch {
// Non-fatal if org lookup fails
}
// Update context: set org and brief
const briefName = `Brief ${brief.id.slice(0, 8)}`;
await this.authManager.updateContext({
orgId: brief.accountId,
orgName,
briefId: brief.id,
briefName
});
spinner.succeed('Context set from brief');
console.log(
chalk.gray(
` Organization: ${orgName || brief.accountId}\n Brief: ${briefName}`
)
);
this.setLastResult({
success: true,
action: 'set',
context: this.authManager.getContext() || undefined,
message: 'Context set from brief'
});
} catch (error: any) {
try {
if (spinner?.isSpinning) spinner.stop();
} catch {}
this.handleError(error);
process.exit(1);
}
}
/**
* Extract a brief ID from raw input (ID or Hamster URL)
*/
private extractBriefId(input: string): string | null {
const raw = input?.trim() ?? '';
if (!raw) return null;
const parseUrl = (s: string): URL | null => {
try {
return new URL(s);
} catch {}
try {
return new URL(`https://${s}`);
} catch {}
return null;
};
const fromParts = (path: string): string | null => {
const parts = path.split('/').filter(Boolean);
const briefsIdx = parts.lastIndexOf('briefs');
const candidate =
briefsIdx >= 0 && parts.length > briefsIdx + 1
? parts[briefsIdx + 1]
: parts[parts.length - 1];
return candidate?.trim() || null;
};
// 1) URL (absolute or schemeless)
const url = parseUrl(raw);
if (url) {
const qId = url.searchParams.get('id') || url.searchParams.get('briefId');
const candidate = (qId || fromParts(url.pathname)) ?? null;
if (candidate) {
// Light sanity check; let API be the final validator
if (this.isLikelyId(candidate) || candidate.length >= 8)
return candidate;
}
}
// 2) Looks like a path without scheme
if (raw.includes('/')) {
const candidate = fromParts(raw);
if (candidate && (this.isLikelyId(candidate) || candidate.length >= 8)) {
return candidate;
}
}
// 3) Fallback: raw token
return raw;
}
/**
* Heuristic to check if a string looks like a brief ID (UUID-like)
*/
private isLikelyId(value: string): boolean {
const uuidRegex =
/^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;
const ulidRegex = /^[0-9A-HJKMNP-TV-Z]{26}$/i; // ULID
const slugRegex = /^[A-Za-z0-9_-]{16,}$/; // general token
return (
uuidRegex.test(value) || ulidRegex.test(value) || slugRegex.test(value)
);
}
/**
* Set context directly from options
*/

View File

@@ -281,14 +281,9 @@ export class ListTasksCommand extends Command {
const priorityBreakdown = getPriorityBreakdown(tasks);
// Find next task following the same logic as findNextTask
const nextTaskInfo = this.findNextTask(tasks);
const nextTask = this.findNextTask(tasks);
// Get the full task object with complexity data already included
const nextTask = nextTaskInfo
? tasks.find((t) => String(t.id) === String(nextTaskInfo.id))
: undefined;
// Display dashboard boxes (nextTask already has complexity from storage enrichment)
// Display dashboard boxes
displayDashboards(
taskStats,
subtaskStats,
@@ -308,16 +303,14 @@ export class ListTasksCommand extends Command {
// Display recommended next task section immediately after table
if (nextTask) {
const description = getTaskDescription(nextTask);
// Find the full task object to get description
const fullTask = tasks.find((t) => String(t.id) === String(nextTask.id));
const description = fullTask ? getTaskDescription(fullTask) : undefined;
displayRecommendedNextTask({
id: nextTask.id,
title: nextTask.title,
priority: nextTask.priority,
status: nextTask.status,
dependencies: nextTask.dependencies,
description,
complexity: nextTask.complexity as number | undefined
...nextTask,
status: 'pending', // Next task is typically pending
description
});
} else {
displayRecommendedNextTask(undefined);

View File

@@ -1,318 +0,0 @@
/**
* @fileoverview SetStatusCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import {
createTaskMasterCore,
type TaskMasterCore,
type TaskStatus
} from '@tm/core';
import type { StorageType } from '@tm/core/types';
/**
* Valid task status values for validation
*/
const VALID_TASK_STATUSES: TaskStatus[] = [
'pending',
'in-progress',
'done',
'deferred',
'cancelled',
'blocked',
'review'
];
/**
* Options interface for the set-status command
*/
export interface SetStatusCommandOptions {
id?: string;
status?: TaskStatus;
format?: 'text' | 'json';
silent?: boolean;
project?: string;
}
/**
* Result type from set-status command
*/
export interface SetStatusResult {
success: boolean;
updatedTasks: Array<{
taskId: string;
oldStatus: TaskStatus;
newStatus: TaskStatus;
}>;
storageType: Exclude<StorageType, 'auto'>;
}
/**
* SetStatusCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class SetStatusCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: SetStatusResult;
constructor(name?: string) {
super(name || 'set-status');
// Configure the command
this.description('Update the status of one or more tasks')
.requiredOption(
'-i, --id <id>',
'Task ID(s) to update (comma-separated for multiple, supports subtasks like 5.2)'
)
.requiredOption(
'-s, --status <status>',
`New status (${VALID_TASK_STATUSES.join(', ')})`
)
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(async (options: SetStatusCommandOptions) => {
await this.executeCommand(options);
});
}
/**
* Execute the set-status command
*/
private async executeCommand(
options: SetStatusCommandOptions
): Promise<void> {
try {
// Validate required options
if (!options.id) {
console.error(chalk.red('Error: Task ID is required. Use -i or --id'));
process.exit(1);
}
if (!options.status) {
console.error(
chalk.red('Error: Status is required. Use -s or --status')
);
process.exit(1);
}
// Validate status
if (!VALID_TASK_STATUSES.includes(options.status)) {
console.error(
chalk.red(
`Error: Invalid status "${options.status}". Valid options: ${VALID_TASK_STATUSES.join(', ')}`
)
);
process.exit(1);
}
// Initialize TaskMaster core
this.tmCore = await createTaskMasterCore({
projectPath: options.project || process.cwd()
});
// Parse task IDs (handle comma-separated values)
const taskIds = options.id.split(',').map((id) => id.trim());
// Update each task
const updatedTasks: Array<{
taskId: string;
oldStatus: TaskStatus;
newStatus: TaskStatus;
}> = [];
for (const taskId of taskIds) {
try {
const result = await this.tmCore.updateTaskStatus(
taskId,
options.status
);
updatedTasks.push({
taskId: result.taskId,
oldStatus: result.oldStatus,
newStatus: result.newStatus
});
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : String(error);
if (!options.silent) {
console.error(
chalk.red(`Failed to update task ${taskId}: ${errorMessage}`)
);
}
if (options.format === 'json') {
console.log(
JSON.stringify({
success: false,
error: errorMessage,
taskId,
timestamp: new Date().toISOString()
})
);
}
process.exit(1);
}
}
// Store result for potential reuse
this.lastResult = {
success: true,
updatedTasks,
storageType: this.tmCore.getStorageType() as Exclude<
StorageType,
'auto'
>
};
// Display results
this.displayResults(this.lastResult, options);
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : 'Unknown error occurred';
if (!options.silent) {
console.error(chalk.red(`Error: ${errorMessage}`));
}
if (options.format === 'json') {
console.log(JSON.stringify({ success: false, error: errorMessage }));
}
process.exit(1);
} finally {
// Clean up resources
if (this.tmCore) {
await this.tmCore.close();
}
}
}
/**
* Display results based on format
*/
private displayResults(
result: SetStatusResult,
options: SetStatusCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
console.log(JSON.stringify(result, null, 2));
break;
case 'text':
default:
if (!options.silent) {
this.displayTextResults(result);
}
break;
}
}
/**
* Display results in text format
*/
private displayTextResults(result: SetStatusResult): void {
if (result.updatedTasks.length === 1) {
// Single task update
const update = result.updatedTasks[0];
console.log(
boxen(
chalk.white.bold(`✅ Successfully updated task ${update.taskId}`) +
'\n\n' +
`${chalk.blue('From:')} ${this.getStatusDisplay(update.oldStatus)}\n` +
`${chalk.blue('To:')} ${this.getStatusDisplay(update.newStatus)}`,
{
padding: 1,
borderColor: 'green',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
} else {
// Multiple task updates
console.log(
boxen(
chalk.white.bold(
`✅ Successfully updated ${result.updatedTasks.length} tasks`
) +
'\n\n' +
result.updatedTasks
.map(
(update) =>
`${chalk.cyan(update.taskId)}: ${this.getStatusDisplay(update.oldStatus)} → ${this.getStatusDisplay(update.newStatus)}`
)
.join('\n'),
{
padding: 1,
borderColor: 'green',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
}
// Show storage info
console.log(chalk.gray(`\nUsing ${result.storageType} storage`));
}
/**
* Get colored status display
*/
private getStatusDisplay(status: TaskStatus): string {
const statusColors: Record<TaskStatus, (text: string) => string> = {
pending: chalk.yellow,
'in-progress': chalk.blue,
done: chalk.green,
deferred: chalk.gray,
cancelled: chalk.red,
blocked: chalk.red,
review: chalk.magenta,
completed: chalk.green
};
const colorFn = statusColors[status] || chalk.white;
return colorFn(status);
}
/**
* Get the last command result (useful for testing or chaining)
*/
getLastResult(): SetStatusResult | undefined {
return this.lastResult;
}
/**
* Static method to register this command on an existing program
* This is for gradual migration - allows commands.js to use this
*/
static registerOn(program: Command): Command {
const setStatusCommand = new SetStatusCommand();
program.addCommand(setStatusCommand);
return setStatusCommand;
}
/**
* Alternative registration that returns the command for chaining
* Can also configure the command name if needed
*/
static register(program: Command, name?: string): SetStatusCommand {
const setStatusCommand = new SetStatusCommand(name);
program.addCommand(setStatusCommand);
return setStatusCommand;
}
}
/**
* Factory function to create and configure the set-status command
*/
export function createSetStatusCommand(): SetStatusCommand {
return new SetStatusCommand();
}
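
As a rough sketch of how this removed command was wired up and invoked programmatically (the import path follows the index re-export shown below; the task IDs are made up):

```typescript
import { Command } from 'commander';
import { SetStatusCommand } from './commands/set-status.command.js'; // path as re-exported below

const program = new Command('task-master');
SetStatusCommand.register(program); // adds "set-status" with -i/--id, -s/--status, -f/--format, -p/--project

// Hypothetical invocation: mark task 5 and subtask 5.2 as done, with JSON output.
await program.parseAsync([
	'node',
	'task-master',
	'set-status',
	'--id=5,5.2',
	'--status=done',
	'--format=json'
]);
```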

View File

@@ -9,7 +9,14 @@ import boxen from 'boxen';
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
import type { StorageType } from '@tm/core/types';
import * as ui from '../utils/ui.js';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
import {
displayTaskHeader,
displayTaskProperties,
displayImplementationDetails,
displayTestStrategy,
displaySubtasks,
displaySuggestedActions
} from '../ui/components/task-detail.component.js';
/**
* Options interface for the show command
@@ -257,11 +264,44 @@ export class ShowCommand extends Command {
return;
}
// Use the global task details display function
displayTaskDetails(result.task, {
statusFilter: options.status,
showSuggestedActions: true
});
const task = result.task;
// Display header with tag
displayTaskHeader(task.id, task.title);
// Display task properties in table format
displayTaskProperties(task);
// Display implementation details if available
if (task.details) {
console.log(); // Empty line for spacing
displayImplementationDetails(task.details);
}
// Display test strategy if available
if ('testStrategy' in task && task.testStrategy) {
console.log(); // Empty line for spacing
displayTestStrategy(task.testStrategy as string);
}
// Display subtasks if available
if (task.subtasks && task.subtasks.length > 0) {
// Filter subtasks by status if provided
const filteredSubtasks = options.status
? task.subtasks.filter((sub) => sub.status === options.status)
: task.subtasks;
if (filteredSubtasks.length === 0 && options.status) {
console.log(
chalk.gray(` No subtasks with status '${options.status}'`)
);
} else {
displaySubtasks(filteredSubtasks, task.id);
}
}
// Display suggested actions
displaySuggestedActions(task.id);
}
/**

View File

@@ -1,512 +0,0 @@
/**
* @fileoverview StartCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
* This is a thin presentation layer over @tm/core's TaskExecutionService
*/
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import ora, { type Ora } from 'ora';
import { spawn } from 'child_process';
import {
createTaskMasterCore,
type TaskMasterCore,
type StartTaskResult as CoreStartTaskResult
} from '@tm/core';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
import * as ui from '../utils/ui.js';
/**
* CLI-specific options interface for the start command
*/
export interface StartCommandOptions {
id?: string;
format?: 'text' | 'json';
project?: string;
dryRun?: boolean;
force?: boolean;
noStatusUpdate?: boolean;
}
/**
* CLI-specific result type from start command
* Extends the core result with CLI-specific display information
*/
export interface StartCommandResult extends CoreStartTaskResult {
storageType?: string;
}
/**
* StartCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core's TaskExecutionService
*/
export class StartCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: StartCommandResult;
constructor(name?: string) {
super(name || 'start');
// Configure the command
this.description(
'Start working on a task by launching claude-code with context'
)
.argument('[id]', 'Task ID to start working on')
.option('-i, --id <id>', 'Task ID to start working on')
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.option(
'--dry-run',
'Show what would be executed without launching claude-code'
)
.option(
'--force',
'Force start even if another task is already in-progress'
)
.option(
'--no-status-update',
'Do not automatically update task status to in-progress'
)
.action(
async (taskId: string | undefined, options: StartCommandOptions) => {
await this.executeCommand(taskId, options);
}
);
}
/**
* Execute the start command
*/
private async executeCommand(
taskId: string | undefined,
options: StartCommandOptions
): Promise<void> {
let spinner: Ora | null = null;
try {
// Validate options
if (!this.validateOptions(options)) {
process.exit(1);
}
// Initialize tm-core with spinner
spinner = ora('Initializing Task Master...').start();
await this.initializeCore(options.project || process.cwd());
spinner.succeed('Task Master initialized');
// Get the task ID from argument or option, or find next available task
const idArg = taskId || options.id || null;
let targetTaskId = idArg;
if (!targetTaskId) {
spinner = ora('Finding next available task...').start();
targetTaskId = await this.performGetNextTask();
if (targetTaskId) {
spinner.succeed(`Found next task: #${targetTaskId}`);
} else {
spinner.fail('No available tasks found');
}
}
if (!targetTaskId) {
ui.displayError('No task ID provided and no available tasks found');
process.exit(1);
}
// Show pre-launch message (no spinner needed, it's just display)
if (!options.dryRun) {
await this.showPreLaunchMessage(targetTaskId);
}
// Use tm-core's startTask method with spinner
spinner = ora('Preparing task execution...').start();
const coreResult = await this.performStartTask(targetTaskId, options);
if (coreResult.started) {
spinner.succeed(
options.dryRun
? 'Dry run completed'
: 'Task prepared - launching Claude...'
);
} else {
spinner.fail('Task execution failed');
}
// Execute command if we have one and it's not a dry run
if (!options.dryRun && coreResult.command) {
// Stop any remaining spinners before launching Claude
if (spinner && !spinner.isSpinning) {
// Clear the line to make room for Claude
console.log();
}
await this.executeChildProcess(coreResult.command);
}
// Convert core result to CLI result with storage type
const result: StartCommandResult = {
...coreResult,
storageType: this.tmCore?.getStorageType()
};
// Store result for programmatic access
this.setLastResult(result);
// Display results (only for dry run or if execution failed)
if (options.dryRun || !coreResult.started) {
this.displayResults(result, options);
}
} catch (error: any) {
if (spinner) {
spinner.fail('Operation failed');
}
this.handleError(error);
process.exit(1);
}
}
/**
* Validate command options
*/
private validateOptions(options: StartCommandOptions): boolean {
// Validate format
if (options.format && !['text', 'json'].includes(options.format)) {
console.error(chalk.red(`Invalid format: ${options.format}`));
console.error(chalk.gray(`Valid formats: text, json`));
return false;
}
return true;
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
}
}
/**
* Get the next available task using tm-core
*/
private async performGetNextTask(): Promise<string | null> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
return this.tmCore.getNextAvailableTask();
}
/**
* Show pre-launch message using tm-core data
*/
private async showPreLaunchMessage(targetTaskId: string): Promise<void> {
if (!this.tmCore) return;
const { task, subtask, subtaskId } =
await this.tmCore.getTaskWithSubtask(targetTaskId);
if (task) {
const workItemText = subtask
? `Subtask #${task.id}.${subtaskId} - ${subtask.title}`
: `Task #${task.id} - ${task.title}`;
console.log(
chalk.green('🚀 Starting: ') + chalk.white.bold(workItemText)
);
console.log(chalk.gray('Launching Claude Code...'));
console.log(); // Empty line
}
}
/**
* Perform start task using tm-core business logic
*/
private async performStartTask(
targetTaskId: string,
options: StartCommandOptions
): Promise<CoreStartTaskResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Show spinner for status update if enabled
let statusSpinner: Ora | null = null;
if (!options.noStatusUpdate && !options.dryRun) {
statusSpinner = ora('Updating task status to in-progress...').start();
}
// Get execution command from tm-core (instead of executing directly)
const result = await this.tmCore.startTask(targetTaskId, {
dryRun: options.dryRun,
force: options.force,
updateStatus: !options.noStatusUpdate
});
if (statusSpinner) {
if (result.started) {
statusSpinner.succeed('Task status updated');
} else {
statusSpinner.warn('Task status update skipped');
}
}
if (!result) {
throw new Error('Failed to start task - core result is undefined');
}
// Don't execute here - let the main executeCommand method handle it
return result;
}
/**
* Execute the child process directly in the main thread for better process control
*/
private async executeChildProcess(command: {
executable: string;
args: string[];
cwd: string;
}): Promise<void> {
return new Promise((resolve, reject) => {
// Don't show the full command with args as it can be very long
console.log(chalk.green('🚀 Launching Claude Code...'));
console.log(); // Add space before Claude takes over
const childProcess = spawn(command.executable, command.args, {
cwd: command.cwd,
stdio: 'inherit', // Inherit stdio from parent process
shell: false
});
childProcess.on('close', (code) => {
if (code === 0) {
resolve();
} else {
reject(new Error(`Process exited with code ${code}`));
}
});
childProcess.on('error', (error) => {
reject(new Error(`Failed to spawn process: ${error.message}`));
});
// Handle process termination signals gracefully
const cleanup = () => {
if (childProcess && !childProcess.killed) {
childProcess.kill('SIGTERM');
}
};
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);
process.on('exit', cleanup);
});
}
/**
* Display results based on format
*/
private displayResults(
result: StartCommandResult,
options: StartCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'text':
default:
this.displayTextResult(result, options);
break;
}
}
/**
* Display in JSON format
*/
private displayJson(result: StartCommandResult): void {
console.log(JSON.stringify(result, null, 2));
}
/**
* Display result in text format
*/
private displayTextResult(
result: StartCommandResult,
options: StartCommandOptions
): void {
if (!result.found || !result.task) {
console.log(
boxen(chalk.yellow(`Task not found!`), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'yellow',
borderStyle: 'round',
margin: { top: 1 }
})
);
return;
}
const task = result.task;
if (options.dryRun) {
// For dry run, show full details since Claude Code won't be launched
let headerText = `Dry Run: Starting Task #${task.id} - ${task.title}`;
// If working on a specific subtask, highlight it in the header
if (result.subtask && result.subtaskId) {
headerText = `Dry Run: Starting Subtask #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
}
displayTaskDetails(task, {
customHeader: headerText,
headerColor: 'yellow'
});
// Show claude-code prompt
if (result.executionOutput) {
console.log(); // Empty line for spacing
console.log(
boxen(
chalk.white.bold('Claude-Code Prompt:') +
'\n\n' +
result.executionOutput,
{
padding: 1,
borderStyle: 'round',
borderColor: 'cyan',
width: process.stdout.columns * 0.95 || 100
}
)
);
}
console.log(); // Empty line for spacing
console.log(
boxen(
chalk.yellow(
'🔍 Dry run - claude-code would be launched with the above prompt'
),
{
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'yellow',
borderStyle: 'round'
}
)
);
} else {
// For actual execution, show minimal info since Claude Code will clear the terminal
if (result.started) {
// Determine what was worked on - task or subtask
let workItemText = `Task: #${task.id} - ${task.title}`;
let statusTarget = task.id;
if (result.subtask && result.subtaskId) {
workItemText = `Subtask: #${task.id}.${result.subtaskId} - ${result.subtask.title}`;
statusTarget = `${task.id}.${result.subtaskId}`;
}
// Post-execution message (shown after Claude Code exits)
console.log(
boxen(
chalk.green.bold('🎉 Task Session Complete!') +
'\n\n' +
chalk.white(workItemText) +
'\n\n' +
chalk.cyan('Next steps:') +
'\n' +
`• Run ${chalk.yellow('tm show ' + task.id)} to review task details\n` +
`• Run ${chalk.yellow('tm set-status --id=' + statusTarget + ' --status=done')} when complete\n` +
`• Run ${chalk.yellow('tm next')} to find the next available task\n` +
`• Run ${chalk.yellow('tm start')} to begin the next task`,
{
padding: 1,
borderStyle: 'round',
borderColor: 'green',
width: process.stdout.columns * 0.95 || 100,
margin: { top: 1 }
}
)
);
} else {
// Error case
console.log(
boxen(
chalk.red(
'❌ Failed to launch claude-code' +
(result.error ? `\nError: ${result.error}` : '')
),
{
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: 'red',
borderStyle: 'round'
}
)
);
}
}
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
}
/**
* Handle general errors
*/
private handleError(error: any): void {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
// Show stack trace in development mode or when DEBUG is set
const isDevelopment = process.env.NODE_ENV !== 'production';
if ((isDevelopment || process.env.DEBUG) && error.stack) {
console.error(chalk.gray(error.stack));
}
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: StartCommandResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): StartCommandResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Static method to register this command on an existing program
*/
static registerOn(program: Command): Command {
const startCommand = new StartCommand();
program.addCommand(startCommand);
return startCommand;
}
/**
* Alternative registration that returns the command for chaining
*/
static register(program: Command, name?: string): StartCommand {
const startCommand = new StartCommand(name);
program.addCommand(startCommand);
return startCommand;
}
}
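
A similarly hedged sketch of driving the removed start command in dry-run mode; nothing is spawned, and the claude-code prompt that would have been used is printed instead:

```typescript
import { Command } from 'commander';
import { StartCommand } from './commands/start.command.js'; // path as re-exported below

const program = new Command('task-master');
StartCommand.register(program);

// No ID given, so the command resolves the next available task itself.
await program.parseAsync(['node', 'task-master', 'start', '--dry-run']);
```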

View File

@@ -8,20 +8,10 @@ export { ListTasksCommand } from './commands/list.command.js';
export { ShowCommand } from './commands/show.command.js';
export { AuthCommand } from './commands/auth.command.js';
export { ContextCommand } from './commands/context.command.js';
export { StartCommand } from './commands/start.command.js';
export { SetStatusCommand } from './commands/set-status.command.js';
// UI utilities (for other commands to use)
export * as ui from './utils/ui.js';
// Auto-update utilities
export {
checkForUpdate,
performAutoUpdate,
displayUpgradeNotification,
compareVersions
} from './utils/auto-update.js';
// Re-export commonly used types from tm-core
export type {
Task,

View File

@@ -6,7 +6,6 @@
import chalk from 'chalk';
import boxen from 'boxen';
import type { Task, TaskPriority } from '@tm/core/types';
import { getComplexityWithColor } from '../../utils/ui.js';
/**
* Statistics for task collection
@@ -480,7 +479,7 @@ export function displayDependencyDashboard(
? chalk.cyan(nextTask.dependencies.join(', '))
: chalk.gray('None')
}\n` +
`Complexity: ${nextTask?.complexity !== undefined ? getComplexityWithColor(nextTask.complexity) : chalk.gray('N/A')}`;
`Complexity: ${nextTask?.complexity || chalk.gray('N/A')}`;
return content;
}

View File

@@ -6,7 +6,6 @@
import chalk from 'chalk';
import boxen from 'boxen';
import type { Task } from '@tm/core/types';
import { getComplexityWithColor } from '../../utils/ui.js';
/**
* Next task display options
@@ -18,7 +17,6 @@ export interface NextTaskDisplayOptions {
status?: string;
dependencies?: (string | number)[];
description?: string;
complexity?: number;
}
/**
@@ -84,11 +82,6 @@ export function displayRecommendedNextTask(
: chalk.cyan(task.dependencies.join(', '));
content.push(`Dependencies: ${depsDisplay}`);
// Complexity with color and label
if (typeof task.complexity === 'number') {
content.push(`Complexity: ${getComplexityWithColor(task.complexity)}`);
}
// Description if available
if (task.description) {
content.push('');

View File

@@ -9,11 +9,7 @@ import Table from 'cli-table3';
import { marked, MarkedExtension } from 'marked';
import { markedTerminal } from 'marked-terminal';
import type { Task } from '@tm/core/types';
import {
getStatusWithColor,
getPriorityWithColor,
getComplexityWithColor
} from '../../utils/ui.js';
import { getStatusWithColor, getPriorityWithColor } from '../../utils/ui.js';
// Configure marked to use terminal renderer with subtle colors
marked.use(
@@ -112,9 +108,7 @@ export function displayTaskProperties(task: Task): void {
getStatusWithColor(task.status),
getPriorityWithColor(task.priority),
deps,
typeof task.complexity === 'number'
? getComplexityWithColor(task.complexity)
: chalk.gray('N/A'),
'N/A',
task.description || ''
].join('\n');
@@ -268,74 +262,3 @@ export function displaySuggestedActions(taskId: string | number): void {
)
);
}
/**
* Display complete task details - used by both show and start commands
*/
export function displayTaskDetails(
task: Task,
options?: {
statusFilter?: string;
showSuggestedActions?: boolean;
customHeader?: string;
headerColor?: string;
}
): void {
const {
statusFilter,
showSuggestedActions = false,
customHeader,
headerColor = 'blue'
} = options || {};
// Display header - either custom or default
if (customHeader) {
console.log(
boxen(chalk.white.bold(customHeader), {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
borderColor: headerColor,
borderStyle: 'round',
margin: { top: 1 }
})
);
} else {
displayTaskHeader(task.id, task.title);
}
// Display task properties in table format
displayTaskProperties(task);
// Display implementation details if available
if (task.details) {
console.log(); // Empty line for spacing
displayImplementationDetails(task.details);
}
// Display test strategy if available
if ('testStrategy' in task && task.testStrategy) {
console.log(); // Empty line for spacing
displayTestStrategy(task.testStrategy as string);
}
// Display subtasks if available
if (task.subtasks && task.subtasks.length > 0) {
// Filter subtasks by status if provided
const filteredSubtasks = statusFilter
? task.subtasks.filter((sub) => sub.status === statusFilter)
: task.subtasks;
if (filteredSubtasks.length === 0 && statusFilter) {
console.log(); // Empty line for spacing
console.log(chalk.gray(` No subtasks with status '${statusFilter}'`));
} else if (filteredSubtasks.length > 0) {
console.log(); // Empty line for spacing
displaySubtasks(filteredSubtasks, task.id);
}
}
// Display suggested actions if requested
if (showSuggestedActions) {
console.log(); // Empty line for spacing
displaySuggestedActions(task.id);
}
}
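
For reference, a minimal call of the removed `displayTaskDetails` helper as the show and start commands used it; the task object here is a made-up stub cast to the core `Task` type:

```typescript
import type { Task } from '@tm/core/types';
import { displayTaskDetails } from '../ui/components/task-detail.component.js'; // path as imported by the commands above

// Stub for illustration only; real Task objects come from @tm/core storage.
const task = {
	id: 7,
	title: 'Example task',
	status: 'pending',
	priority: 'high',
	dependencies: [],
	description: 'Stub for illustration only',
	subtasks: []
} as unknown as Task;

// Matches the show command call site: filter subtasks and append suggested actions.
displayTaskDetails(task, { statusFilter: 'pending', showSuggestedActions: true });

// Matches the start command's dry-run call site: custom boxed header.
displayTaskDetails(task, {
	customHeader: `Dry Run: Starting Task #${task.id} - ${task.title}`,
	headerColor: 'yellow'
});
```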

View File

@@ -1,248 +0,0 @@
/**
* @fileoverview Auto-update utilities for task-master-ai CLI
*/
import { spawn } from 'child_process';
import https from 'https';
import chalk from 'chalk';
import ora from 'ora';
import boxen from 'boxen';
export interface UpdateInfo {
currentVersion: string;
latestVersion: string;
needsUpdate: boolean;
}
/**
* Get current version from build-time injected environment variable
*/
function getCurrentVersion(): string {
// Version is injected at build time via TM_PUBLIC_VERSION
const version = process.env.TM_PUBLIC_VERSION;
if (version && version !== 'unknown') {
return version;
}
// Fallback for development or if injection failed
console.warn('Could not read version from TM_PUBLIC_VERSION, using fallback');
return '0.0.0';
}
/**
* Compare semantic versions with proper pre-release handling
* @param v1 - First version
* @param v2 - Second version
* @returns -1 if v1 < v2, 0 if v1 = v2, 1 if v1 > v2
*/
export function compareVersions(v1: string, v2: string): number {
const toParts = (v: string) => {
const [core, pre = ''] = v.split('-', 2);
const nums = core.split('.').map((n) => Number.parseInt(n, 10) || 0);
return { nums, pre };
};
const a = toParts(v1);
const b = toParts(v2);
const len = Math.max(a.nums.length, b.nums.length);
// Compare numeric parts
for (let i = 0; i < len; i++) {
const d = (a.nums[i] || 0) - (b.nums[i] || 0);
if (d !== 0) return d < 0 ? -1 : 1;
}
// Handle pre-release comparison
if (a.pre && !b.pre) return -1; // prerelease < release
if (!a.pre && b.pre) return 1; // release > prerelease
if (a.pre === b.pre) return 0; // same or both empty
return a.pre < b.pre ? -1 : 1; // basic prerelease tie-break
}
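
A few illustrative results for the function as defined above (not in the original file; expected values in comments):

```typescript
// Quick sanity checks for compareVersions, assuming it is in scope or imported.
console.log(compareVersions('0.27.0', '0.28.0-rc.0')); // -1: numeric parts decide first
console.log(compareVersions('0.28.0-rc.0', '0.28.0')); // -1: a prerelease sorts before its release
console.log(compareVersions('0.28.0', '0.28.0-rc.0')); //  1: a release sorts after its prerelease
console.log(compareVersions('1.2.3', '1.2.3'));        //  0: identical versions
```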
/**
* Check for newer version of task-master-ai
*/
export async function checkForUpdate(
currentVersionOverride?: string
): Promise<UpdateInfo> {
const currentVersion = currentVersionOverride || getCurrentVersion();
return new Promise((resolve) => {
const options = {
hostname: 'registry.npmjs.org',
path: '/task-master-ai',
method: 'GET',
headers: {
Accept: 'application/vnd.npm.install-v1+json',
'User-Agent': `task-master-ai/${currentVersion}`
}
};
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
try {
if (res.statusCode !== 200)
throw new Error(`npm registry status ${res.statusCode}`);
const npmData = JSON.parse(data);
const latestVersion = npmData['dist-tags']?.latest || currentVersion;
const needsUpdate =
compareVersions(currentVersion, latestVersion) < 0;
resolve({
currentVersion,
latestVersion,
needsUpdate
});
} catch (error) {
resolve({
currentVersion,
latestVersion: currentVersion,
needsUpdate: false
});
}
});
});
req.on('error', () => {
resolve({
currentVersion,
latestVersion: currentVersion,
needsUpdate: false
});
});
req.setTimeout(3000, () => {
req.destroy();
resolve({
currentVersion,
latestVersion: currentVersion,
needsUpdate: false
});
});
req.end();
});
}
/**
* Display upgrade notification message
*/
export function displayUpgradeNotification(
currentVersion: string,
latestVersion: string
) {
const message = boxen(
`${chalk.blue.bold('Update Available!')} ${chalk.dim(currentVersion)} → ${chalk.green(latestVersion)}\n\n` +
`Auto-updating to the latest version with new features and bug fixes...`,
{
padding: 1,
margin: { top: 1, bottom: 1 },
borderColor: 'yellow',
borderStyle: 'round'
}
);
console.log(message);
}
/**
* Automatically update task-master-ai to the latest version
*/
export async function performAutoUpdate(
latestVersion: string
): Promise<boolean> {
if (
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1' ||
process.env.CI ||
process.env.NODE_ENV === 'test'
) {
const reason =
process.env.TASKMASTER_SKIP_AUTO_UPDATE === '1'
? 'TASKMASTER_SKIP_AUTO_UPDATE=1'
: process.env.CI
? 'CI environment'
: 'NODE_ENV=test';
console.log(chalk.dim(`Skipping auto-update (${reason})`));
return false;
}
const spinner = ora({
text: chalk.blue(
`Updating task-master-ai to version ${chalk.green(latestVersion)}`
),
spinner: 'dots',
color: 'blue'
}).start();
return new Promise((resolve) => {
const updateProcess = spawn(
'npm',
[
'install',
'-g',
`task-master-ai@${latestVersion}`,
'--no-fund',
'--no-audit',
'--loglevel=warn'
],
{
stdio: ['ignore', 'pipe', 'pipe']
}
);
let errorOutput = '';
updateProcess.stdout.on('data', () => {
// Update spinner text with progress
spinner.text = chalk.blue(
`Installing task-master-ai@${latestVersion}...`
);
});
updateProcess.stderr.on('data', (data) => {
errorOutput += data.toString();
});
updateProcess.on('close', (code) => {
if (code === 0) {
spinner.succeed(
chalk.green(
`Successfully updated to version ${chalk.bold(latestVersion)}`
)
);
console.log(
chalk.dim('Please restart your command to use the new version.')
);
resolve(true);
} else {
spinner.fail(chalk.red('Auto-update failed'));
console.log(
chalk.cyan(
`Please run manually: npm install -g task-master-ai@${latestVersion}`
)
);
if (errorOutput) {
console.log(chalk.dim(`Error: ${errorOutput.trim()}`));
}
resolve(false);
}
});
updateProcess.on('error', (error) => {
spinner.fail(chalk.red('Auto-update failed'));
console.log(chalk.red('Error:'), error.message);
console.log(
chalk.cyan(
`Please run manually: npm install -g task-master-ai@${latestVersion}`
)
);
resolve(false);
});
});
}
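
Putting the removed pieces together, the auto-update flow looked roughly like this (import path taken from the index re-export above; error handling elided):

```typescript
import {
	checkForUpdate,
	displayUpgradeNotification,
	performAutoUpdate
} from './utils/auto-update.js';

const { currentVersion, latestVersion, needsUpdate } = await checkForUpdate();
if (needsUpdate) {
	displayUpgradeNotification(currentVersion, latestVersion);
	// Skipped automatically when TASKMASTER_SKIP_AUTO_UPDATE=1, CI, or NODE_ENV=test (see above).
	await performAutoUpdate(latestVersion);
}
```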

View File

@@ -84,23 +84,7 @@ export function getPriorityWithColor(priority: TaskPriority): string {
}
/**
* Get complexity color and label based on score thresholds
*/
function getComplexityLevel(score: number): {
color: (text: string) => string;
label: string;
} {
if (score >= 7) {
return { color: chalk.hex('#CC0000'), label: 'High' };
} else if (score >= 4) {
return { color: chalk.hex('#FF8800'), label: 'Medium' };
} else {
return { color: chalk.green, label: 'Low' };
}
}
/**
* Get colored complexity display with dot indicator (simple format)
* Get colored complexity display
*/
export function getComplexityWithColor(complexity: number | string): string {
const score =
@@ -110,20 +94,13 @@ export function getComplexityWithColor(complexity: number | string): string {
return chalk.gray('N/A');
}
const { color } = getComplexityLevel(score);
return color(`${score}`);
if (score >= 8) {
return chalk.red.bold(`${score} (High)`);
} else if (score >= 5) {
return chalk.yellow(`${score} (Medium)`);
} else {
return chalk.green(`${score} (Low)`);
}
/**
* Get colored complexity display with /10 format (for dashboards)
*/
export function getComplexityWithScore(complexity: number | undefined): string {
if (typeof complexity !== 'number') {
return chalk.gray('N/A');
}
const { color, label } = getComplexityLevel(complexity);
return color(`${complexity}/10 (${label})`);
}
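
Illustrative outputs of the two helpers on the removed side of this hunk (ANSI colors omitted; import path as used by the UI components above):

```typescript
import { getComplexityWithColor, getComplexityWithScore } from '../../utils/ui.js'; // path assumed

console.log(getComplexityWithColor(8));         // "8", rendered in the High (red) color
console.log(getComplexityWithScore(5));         // "5/10 (Medium)"
console.log(getComplexityWithScore(undefined)); // "N/A" in gray
```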
/**
@@ -346,13 +323,9 @@ export function createTaskTable(
}
if (showComplexity) {
// Show complexity score from report if available
if (typeof task.complexity === 'number') {
row.push(getComplexityWithColor(task.complexity));
} else {
// Show N/A if no complexity score
row.push(chalk.gray('N/A'));
}
}
table.push(row);

View File

@@ -1,9 +1,5 @@
# docs
## 0.0.4
## 0.0.3
## 0.0.2
## 0.0.1

View File

@@ -1,24 +1,22 @@
# Task Master Documentation
Welcome to the Task Master documentation. This documentation site provides comprehensive guides for getting started with Task Master.
Welcome to the Task Master documentation. Use the links below to navigate to the information you need:
## Getting Started
- [Quick Start Guide](/getting-started/quick-start) - Complete setup and first-time usage guide
- [Requirements](/getting-started/quick-start/requirements) - What you need to get started
- [Installation](/getting-started/quick-start/installation) - How to install Task Master
- [Configuration Guide](archive/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](archive/ctutorial.md) - Step-by-step guide to getting started with Task Master
## Core Capabilities
## Reference
- [MCP Tools](/capabilities/mcp) - Model Control Protocol integration
- [CLI Commands](/capabilities/cli-root-commands) - Command line interface reference
- [Task Structure](/capabilities/task-structure) - Understanding tasks and subtasks
- [Command Reference](archive/ccommand-reference.md) - Complete list of all available commands
- [Task Structure](archive/ctask-structure.md) - Understanding the task format and features
## Best Practices
## Examples & Licensing
- [Advanced Configuration](/best-practices/configuration-advanced) - Detailed configuration options
- [Advanced Tasks](/best-practices/advanced-tasks) - Working with complex task structures
- [Example Interactions](archive/cexamples.md) - Common Cursor AI interaction examples
- [Licensing Information](archive/clicensing.md) - Detailed information about the license
## Need More Help?
If you can't find what you're looking for in these docs, please check the root README.md or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).

View File

@@ -19,7 +19,7 @@ description: "This guide walks you through setting up Task Master in your develo
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package", "task-master-ai", "task-master-mcp"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",

View File

@@ -48,7 +48,7 @@ description: "Learn how to set up and use Task Master with Cursor AI"
<Step title="Configure with the following details:">
- Name: "Task Master"
- Type: "Command"
- Command: "npx -y task-master-ai"
- Command: "npx -y --package task-master-ai task-master-mcp"
</Step>
<Step title="Save Settings">

View File

@@ -83,8 +83,6 @@ Taskmaster uses two primary methods for configuration:
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
- **Optional Auto-Update Control:**
- `TASKMASTER_SKIP_AUTO_UPDATE`: Set to '1' to disable automatic updates. Also automatically disabled in CI environments (when `CI` environment variable is set).
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.

View File

@@ -156,7 +156,7 @@ sidebarTitle: "CLI Commands"
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json
# Use your configured research model for research-backed complexity analysis
# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>

View File

@@ -85,7 +85,7 @@ The CLI is organized into a series of commands, each with its own set of options
### 4. Project and Configuration
- **`init`**: Initializes a new project.
- **`generate`**: Generates individual task files from tasks.json. Run this manually after task operations to create readable text files for each task.
- **`generate`**: Generates individual task files.
- **`migrate`**: Migrates a project to the new directory structure.
- **`research`**: Performs AI-powered research.
- `--query <query>`: The research query.
@@ -123,7 +123,7 @@ The core functionalities can be categorized as follows:
### 1. Task and Subtask Management
These functions are the bread and butter of the application, allowing for the creation, modification, and deletion of tasks and subtasks. Note: As of v0.27.3, these operations no longer automatically generate individual task files - use the `generate` command manually when needed.
These functions are the bread and butter of the application, allowing for the creation, modification, and deletion of tasks and subtasks.
- **`addTask(prompt, dependencies, priority)`**: Creates a new task using an AI-powered prompt to generate the title, description, details, and test strategy. It can also be used to create a task manually by providing the task data directly.
- **`addSubtask(parentId, existingTaskId, newSubtaskData)`**: Adds a subtask to a parent task. It can either convert an existing task into a subtask or create a new subtask from scratch.
@@ -167,7 +167,7 @@ These functions are crucial for managing the relationships between tasks.
These functions are for managing the project and its configuration.
- **`generateTaskFiles()`**: Generates individual task files from `tasks.json`. This is now a manual operation - task management operations no longer automatically generate these files.
- **`generateTaskFiles()`**: Generates individual task files from `tasks.json`.
- **`migrateProject()`**: Migrates the project to the new `.taskmaster` directory structure.
- **`performResearch(query, options)`**: Performs AI-powered research with project context.
@@ -225,7 +225,7 @@ The MCP tools can be categorized in the same way as the core functionalities:
### 5. Project and Configuration
- **`initialize_project`**: Initializes a new project.
- **`generate`**: Generates individual task files from tasks.json. Run this manually when you want to create readable text files for each task.
- **`generate`**: Generates individual task files.
- **`models`**: Manages AI model configurations.
- **`research`**: Performs AI-powered research.

View File

@@ -32,7 +32,6 @@
"getting-started/quick-start/execute-quick"
]
},
"getting-started/api-keys",
"getting-started/faq",
"getting-started/contribute"
]

View File

@@ -1,267 +0,0 @@
# API Keys Configuration
Task Master supports multiple AI providers through environment variables. This page lists all available API keys and their configuration requirements.
## Required API Keys
> **Note**: At least one required API key must be configured for Task Master to function.
>
> "Required: Yes" below means "required to use that specific provider," not "required globally." You only need at least one provider configured.
### ANTHROPIC_API_KEY (Recommended)
- **Provider**: Anthropic Claude models
- **Format**: `sk-ant-api03-...`
- **Required**: ✅ **Yes**
- **Models**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- **Get Key**: [Anthropic Console](https://console.anthropic.com/)
```bash
ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"
```
### PERPLEXITY_API_KEY (Highly Recommended for Research)
- **Provider**: Perplexity AI (Research features)
- **Format**: `pplx-...`
- **Required**: ✅ **Yes**
- **Purpose**: Enables research-backed task expansions and updates
- **Models**: Perplexity Sonar models
- **Get Key**: [Perplexity API](https://www.perplexity.ai/settings/api)
```bash
PERPLEXITY_API_KEY="pplx-your-key-here"
```
### OPENAI_API_KEY
- **Provider**: OpenAI GPT models
- **Format**: `sk-proj-...` or `sk-...`
- **Required**: ✅ **Yes**
- **Models**: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, O1 models
- **Get Key**: [OpenAI Platform](https://platform.openai.com/api-keys)
```bash
OPENAI_API_KEY="sk-proj-your-key-here"
```
### GOOGLE_API_KEY
- **Provider**: Google Gemini models
- **Format**: Various formats
- **Required**: ✅ **Yes**
- **Models**: Gemini Pro, Gemini Flash, Gemini Ultra
- **Get Key**: [Google AI Studio](https://aistudio.google.com/app/apikey)
- **Alternative**: Use `GOOGLE_APPLICATION_CREDENTIALS` for service account (Google Vertex)
```bash
GOOGLE_API_KEY="your-google-api-key-here"
```
### GROQ_API_KEY
- **Provider**: Groq (High-performance inference)
- **Required**: ✅ **Yes**
- **Models**: Llama models, Mixtral models (via Groq)
- **Get Key**: [Groq Console](https://console.groq.com/keys)
```bash
GROQ_API_KEY="your-groq-key-here"
```
### OPENROUTER_API_KEY
- **Provider**: OpenRouter (Multiple model access)
- **Required**: ✅ **Yes**
- **Models**: Access to various models through single API
- **Get Key**: [OpenRouter](https://openrouter.ai/keys)
```bash
OPENROUTER_API_KEY="your-openrouter-key-here"
```
### AZURE_OPENAI_API_KEY
- **Provider**: Azure OpenAI Service
- **Required**: ✅ **Yes**
- **Requirements**: Also requires `AZURE_OPENAI_ENDPOINT` configuration
- **Models**: GPT models via Azure
- **Get Key**: [Azure Portal](https://portal.azure.com/)
```bash
AZURE_OPENAI_API_KEY="your-azure-key-here"
```
### XAI_API_KEY
- **Provider**: xAI (Grok) models
- **Required**: ✅ **Yes**
- **Models**: Grok models
- **Get Key**: [xAI Console](https://console.x.ai/)
```bash
XAI_API_KEY="your-xai-key-here"
```
## Optional API Keys
> **Note**: These API keys are optional - providers will work without them or use alternative authentication methods.
### AWS_ACCESS_KEY_ID (Bedrock)
- **Provider**: AWS Bedrock
- **Required**: ❌ **No** (uses AWS credential chain)
- **Models**: Claude models via AWS Bedrock
- **Authentication**: Uses AWS credential chain (profiles, IAM roles, etc.)
- **Get Key**: [AWS Console](https://console.aws.amazon.com/iam/)
```bash
# Optional - AWS credential chain is preferred
AWS_ACCESS_KEY_ID="your-aws-access-key"
AWS_SECRET_ACCESS_KEY="your-aws-secret-key"
```
### CLAUDE_CODE_API_KEY
- **Provider**: Claude Code CLI
- **Required**: ❌ **No** (uses OAuth tokens)
- **Purpose**: Integration with local Claude Code CLI
- **Authentication**: Uses OAuth tokens, no API key needed
```bash
# Not typically needed
CLAUDE_CODE_API_KEY="not-usually-required"
```
### GEMINI_API_KEY
- **Provider**: Gemini CLI
- **Required**: ❌ **No** (uses OAuth authentication)
- **Purpose**: Integration with Gemini CLI
- **Authentication**: Primarily uses OAuth via CLI, API key is optional
```bash
# Optional - OAuth via CLI is preferred
GEMINI_API_KEY="your-gemini-key-here"
```
### GROK_CLI_API_KEY
- **Provider**: Grok CLI
- **Required**: ❌ **No** (can use CLI config)
- **Purpose**: Integration with Grok CLI
- **Authentication**: Can use Grok CLI's own config file
```bash
# Optional - CLI config is preferred
GROK_CLI_API_KEY="your-grok-cli-key"
```
### OLLAMA_API_KEY
- **Provider**: Ollama (Local/Remote)
- **Required**: ❌ **No** (local installation doesn't need key)
- **Purpose**: For remote Ollama servers that require authentication
- **Requirements**: Only needed for remote servers with authentication
- **Note**: Not needed for local Ollama installations
```bash
# Only needed for remote Ollama servers
OLLAMA_API_KEY="your-ollama-api-key-here"
```
### GITHUB_API_KEY
- **Provider**: GitHub (Import/Export features)
- **Format**: `ghp_...` or `github_pat_...`
- **Required**: ❌ **No** (for GitHub features only)
- **Purpose**: GitHub import/export features
- **Get Key**: [GitHub Settings](https://github.com/settings/tokens)
```bash
GITHUB_API_KEY="ghp-your-github-key-here"
```
## Configuration Methods
### Method 1: Environment File (.env)
Create a `.env` file in your project root:
```bash
# Copy from .env.example
cp .env.example .env
# Edit with your keys
vim .env
```
### Method 2: System Environment Variables
```bash
export ANTHROPIC_API_KEY="your-key-here"
export PERPLEXITY_API_KEY="your-key-here"
# ... other keys
```
### Method 3: MCP Server Configuration
For Claude Code integration, configure keys in `.mcp.json`:
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your-key-here",
"PERPLEXITY_API_KEY": "your-key-here",
"OPENAI_API_KEY": "your-key-here"
}
}
}
}
```
## Key Requirements
### Minimum Requirements
- **At least one** AI provider key is required
- **ANTHROPIC_API_KEY** is recommended as the primary provider
- **PERPLEXITY_API_KEY** is highly recommended for research features
### Provider-Specific Requirements
- **Azure OpenAI**: Requires both `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` configuration
- **Google Vertex**: Requires `VERTEX_PROJECT_ID` and `VERTEX_LOCATION` environment variables
- **AWS Bedrock**: Uses AWS credential chain (profiles, IAM roles, etc.) instead of API keys
- **Ollama**: Only needs API key for remote servers with authentication
- **CLI Providers**: Gemini CLI, Grok CLI, and Claude Code use OAuth/CLI config instead of API keys
## Model Configuration
After setting up API keys, configure which models to use:
```bash
# Interactive model setup
task-master models --setup
# Set specific models
task-master models --set-main claude-3-5-sonnet-20241022
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
task-master models --set-fallback gpt-4o-mini
```
## Security Best Practices
1. **Never commit API keys** to version control
2. **Use .env files** and add them to `.gitignore`
3. **Rotate keys regularly** especially if compromised
4. **Use minimal permissions** for service accounts
5. **Monitor usage** to detect unauthorized access
## Troubleshooting
### Key Validation
```bash
# Check if keys are properly configured
task-master models
# Test specific provider
task-master add-task --prompt="test task" --model=claude-3-5-sonnet-20241022
```
### Common Issues
- **Invalid key format**: Check the expected format for each provider
- **Insufficient permissions**: Ensure keys have necessary API access
- **Rate limits**: Some providers have usage limits
- **Regional restrictions**: Some models may not be available in all regions
### Getting Help
If you encounter issues with API key configuration:
- Check the [FAQ](/getting-started/faq) for common solutions
- Join our [Discord community](https://discord.gg/fWJkU7rf) for support
- Report issues on [GitHub](https://github.com/eyaltoledano/claude-task-master/issues)

View File

@@ -18,8 +18,8 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"command": "node",
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
@@ -108,5 +108,5 @@ You don't need to configure everything up front. Most settings can be left as
</Accordion>
<Note>
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/best-practices/configuration-advanced) page.
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/docs/best-practices/configuration-advanced) page.
</Note>

View File

@@ -56,4 +56,4 @@ If you ran into problems and had to debug errors you can create new rules as you
By now you have all you need to get started executing code faster and smarter with Task Master.
If you have any questions please check out [Frequently Asked Questions](/getting-started/faq)
If you have any questions please check out [Frequently Asked Questions](/docs/getting-started/faq)

View File

@@ -30,19 +30,6 @@ cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21
```
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
### Claude Code Quick Install
For Claude Code users:
```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```
Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
</Accordion>
## Installation Options
@@ -69,7 +56,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
@@ -88,7 +75,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
> **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
### VS Code (`servers` + `type`)
@@ -97,7 +84,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"servers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",

View File

@@ -6,13 +6,13 @@ sidebarTitle: "Quick Start"
This guide is for new users who want to start using Task Master with minimal setup time.
It covers:
- [Requirements](/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.
- [Requirements](/docs/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/docs/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/docs/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/docs/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/docs/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/docs/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/docs/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.
<Tip>
By the end of this guide, you'll have everything you need to begin working productively with Task Master.

View File

@@ -61,25 +61,9 @@ Task Master can provide a complexity report which can be helpful to read before
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
The agent will use the `analyze_project_complexity` MCP tool, or you can run it directly with the CLI command:
```bash
task-master analyze-complexity
```
For more comprehensive analysis using your configured research model, you can use:
```bash
task-master analyze-complexity --research
```
<Tip>
The `--research` flag uses whatever research model you have configured in `.taskmaster/config.json` (configurable via `task-master models --setup`) for research-backed complexity analysis, providing more informed recommendations.
</Tip>
You can view the report in a friendly table using:
```
Can you show me the complexity report in a more readable format?
```
For more detailed CLI options, see the [Analyze Task Complexity](/capabilities/cli-root-commands#analyze-task-complexity) section.
<Check>Now you are ready to begin [executing tasks](/getting-started/quick-start/execute-quick)</Check>
<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>

View File

@@ -4,7 +4,7 @@ Welcome to v1 of the Task Master Docs. Expect weekly updates as we expand and re
We've organized the docs into three sections depending on your experience level and goals:
### Getting Started - Jump in to [Quick Start](/getting-started/quick-start)
### Getting Started - Jump in to [Quick Start](/docs/getting-started/quick-start)
Designed for first-time users. Get set up, create your first PRD, and run your first task.
### Best Practices

View File

@@ -1,6 +1,6 @@
{
"name": "docs",
"version": "0.0.4",
"version": "0.0.2",
"private": true,
"description": "Task Master documentation powered by Mintlify",
"scripts": {

View File

@@ -1,75 +1,5 @@
# Change Log
## 0.25.5-rc.0
### Patch Changes
- Updated dependencies [[`aaacc3d`](https://github.com/eyaltoledano/claude-task-master/commit/aaacc3dae36247b4de72b2d2697f49e5df6d01e3), [`0079b7d`](https://github.com/eyaltoledano/claude-task-master/commit/0079b7defdad550811f704c470fdd01955d91d4d), [`0b2c696`](https://github.com/eyaltoledano/claude-task-master/commit/0b2c6967c4605c33a100cff16f6ce8ff09ad06f0), [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042), [`18aa416`](https://github.com/eyaltoledano/claude-task-master/commit/18aa416035f44345bde1c7321490345733a5d042), [`738ec51`](https://github.com/eyaltoledano/claude-task-master/commit/738ec51c049a295a12839b2dfddaf05e23b8fede), [`d67b81d`](https://github.com/eyaltoledano/claude-task-master/commit/d67b81d25ddd927fabb6f5deb368e8993519c541), [`b5fe723`](https://github.com/eyaltoledano/claude-task-master/commit/b5fe723f8ead928e9f2dbde13b833ee70ac3382d), [`2b69936`](https://github.com/eyaltoledano/claude-task-master/commit/2b69936ee7b34346d6de5175af20e077359e2e2a), [`986ac11`](https://github.com/eyaltoledano/claude-task-master/commit/986ac117aee00bcd3e6830a0f76e1ad6d10e0bca), [`20004a3`](https://github.com/eyaltoledano/claude-task-master/commit/20004a39ea848f747e1ff48981bfe176554e4055)]:
- task-master-ai@0.28.0-rc.0
## 0.25.4
### Patch Changes
- Updated dependencies [[`af53525`](https://github.com/eyaltoledano/claude-task-master/commit/af53525cbc660a595b67d4bb90d906911c71f45d)]:
- task-master-ai@0.27.3
## 0.25.3
### Patch Changes
- Updated dependencies [[`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011)]:
- task-master-ai@0.27.2
## 0.25.2
### Patch Changes
- Updated dependencies [[`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e), [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c), [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9)]:
- task-master-ai@0.27.1
## 0.25.2-rc.1
### Patch Changes
- Updated dependencies [[`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9)]:
- task-master-ai@0.27.1-rc.1
## 0.25.2-rc.0
### Patch Changes
- Updated dependencies [[`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e)]:
- task-master-ai@0.27.1-rc.0
## 0.25.0
### Minor Changes
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add "Start Task" button to VS Code extension for seamless Claude Code integration
You can now click a "Start Task" button directly in the Task Master extension which will open a new terminal and automatically execute the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.
- [#1201](https://github.com/eyaltoledano/claude-task-master/pull/1201) [`83af314`](https://github.com/eyaltoledano/claude-task-master/commit/83af314879fc0e563581161c60d2bd089899313e) Thanks [@losolosol](https://github.com/losolosol)! - Added a Start Build button to the VSCODE Task Properties Right Panel
### Patch Changes
- [#1229](https://github.com/eyaltoledano/claude-task-master/pull/1229) [`674d1f6`](https://github.com/eyaltoledano/claude-task-master/commit/674d1f6de7ea98116b61bdae6198bafe6c4e7c1a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP not connecting to new Taskmaster version
- Updated dependencies [[`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869), [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142)]:
- task-master-ai@0.27.0
## 0.25.0-rc.0
### Minor Changes
- [#1201](https://github.com/eyaltoledano/claude-task-master/pull/1201) [`83af314`](https://github.com/eyaltoledano/claude-task-master/commit/83af314879fc0e563581161c60d2bd089899313e) Thanks [@losolosol](https://github.com/losolosol)! - Added a Start Build button to the VSCODE Task Properties Right Panel
### Patch Changes
- Updated dependencies [[`137ef36`](https://github.com/eyaltoledano/claude-task-master/commit/137ef362789a9cdfdb1925e35e0438c1fa6c69ee)]:
- task-master-ai@0.27.0-rc.0
## 0.24.2
### Patch Changes

View File

@@ -109,7 +109,7 @@ Access settings via **File → Preferences → Settings** and search for "Taskma
### **MCP Connection Settings**
- **MCP Server Command** - Path to task-master-ai executable (default: `npx`)
- **MCP Server Args** - Arguments for the server command (default: `-y`, `task-master-ai`)
- **MCP Server Args** - Arguments for the server command (default: `-y`, `--package=task-master-ai`, `task-master-ai`)
- **Connection Timeout** - Server response timeout (default: 30s)
- **Auto Refresh** - Enable automatic task updates (default: enabled)

View File

@@ -3,7 +3,7 @@
"private": true,
"displayName": "TaskMaster",
"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
"version": "0.25.5-rc.0",
"version": "0.24.2",
"publisher": "Hamster",
"icon": "assets/icon.png",
"engines": {
@@ -240,7 +240,7 @@
"check-types": "tsc --noEmit"
},
"dependencies": {
"task-master-ai": "*"
"task-master-ai": "0.26.0"
},
"devDependencies": {
"@dnd-kit/core": "^6.3.1",
@@ -254,9 +254,8 @@
"@radix-ui/react-separator": "^1.1.7",
"@radix-ui/react-slot": "^1.2.3",
"@tailwindcss/postcss": "^4.1.11",
"@tanstack/react-query": "^5.83.0",
"@types/mocha": "^10.0.10",
"@types/node": "^22.10.5",
"@types/node": "20.x",
"@types/react": "19.1.8",
"@types/react-dom": "19.1.6",
"@types/vscode": "^1.101.0",
@@ -272,12 +271,12 @@
"lucide-react": "^0.525.0",
"npm-run-all": "^4.1.5",
"postcss": "8.5.6",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"tailwind-merge": "^3.3.1",
"tailwindcss": "4.1.11",
"typescript": "^5.9.2",
"@tm/core": "*"
"typescript": "^5.8.3",
"@tanstack/react-query": "^5.83.0",
"react": "^19.0.0",
"react-dom": "^19.0.0"
},
"overrides": {
"glob@<8": "^10.4.5",

View File

@@ -2,7 +2,7 @@
"name": "task-master-hamster",
"displayName": "Taskmaster AI",
"description": "A visual Kanban board interface for Taskmaster projects in VS Code",
"version": "0.25.3",
"version": "0.23.1",
"publisher": "Hamster",
"icon": "assets/icon.png",
"engines": {
@@ -95,7 +95,7 @@
"items": {
"type": "string"
},
"default": ["-y", "task-master-ai"],
"default": ["-y", "--package=task-master-ai", "task-master-ai"],
"description": "An array of arguments to pass to the MCP server command."
},
"taskmaster.mcp.cwd": {

View File

@@ -11,6 +11,7 @@ interface TaskMetadataSidebarProps {
tasks: TaskMasterTask[];
complexity: any;
isSubtask: boolean;
sendMessage: (message: any) => Promise<any>;
onStatusChange: (status: TaskMasterTask['status']) => void;
onDependencyClick: (depId: string) => void;
isRegenerating?: boolean;
@@ -22,12 +23,13 @@ export const TaskMetadataSidebar: React.FC<TaskMetadataSidebarProps> = ({
tasks,
complexity,
isSubtask,
sendMessage,
onStatusChange,
onDependencyClick,
isRegenerating = false,
isAppending = false
}) => {
const { sendMessage } = useVSCodeContext();
const { vscode } = useVSCodeContext();
const [isLoadingComplexity, setIsLoadingComplexity] = useState(false);
const [mcpComplexityScore, setMcpComplexityScore] = useState<
number | undefined
@@ -99,37 +101,26 @@ export const TaskMetadataSidebar: React.FC<TaskMetadataSidebarProps> = ({
};
// Handle starting a task
const handleStartTask = async () => {
const handleStartTask = () => {
if (!currentTask || isStartingTask) {
return;
}
setIsStartingTask(true);
try {
// Send message to extension to open terminal
const result = await sendMessage({
if (vscode) {
vscode.postMessage({
type: 'openTerminal',
data: {
taskId: currentTask.id,
taskTitle: currentTask.title
}
});
}
// Handle the response
if (result && !result.success) {
console.error('Terminal execution failed:', result.error);
// The extension will show VS Code error notification and webview toast
} else if (result && result.success) {
console.log('Terminal started successfully:', result.terminalName);
}
} catch (error) {
console.error('Failed to start task:', error);
// This handles network/communication errors
} finally {
// Reset loading state
// Reset loading state after a short delay
setTimeout(() => {
setIsStartingTask(false);
}
}, 500);
};
// Effect to handle complexity on task change

View File

@@ -208,6 +208,7 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
tasks={allTasks}
complexity={complexity}
isSubtask={isSubtask}
sendMessage={sendMessage}
onStatusChange={handleStatusChange}
onDependencyClick={handleDependencyClick}
/>

View File

@@ -8,7 +8,6 @@ import { ConfigService } from './services/config-service';
import { PollingService } from './services/polling-service';
import { createPollingStrategy } from './services/polling-strategies';
import { TaskRepository } from './services/task-repository';
import { TerminalManager } from './services/terminal-manager';
import { WebviewManager } from './services/webview-manager';
import { EventEmitter } from './utils/event-emitter';
import { ExtensionLogger } from './utils/logger';
@@ -23,7 +22,6 @@ let logger: ExtensionLogger;
let mcpClient: MCPClientManager;
let api: TaskMasterApi;
let repository: TaskRepository;
let terminalManager: TerminalManager;
let pollingService: PollingService;
let webviewManager: WebviewManager;
let events: EventEmitter;
@@ -48,9 +46,6 @@ export async function activate(context: vscode.ExtensionContext) {
// Repository with caching (actually useful for performance)
repository = new TaskRepository(api, logger);
// Terminal manager for task execution
terminalManager = new TerminalManager(context, logger);
// Config service for TaskMaster config.json
configService = new ConfigService(logger);
@@ -61,13 +56,7 @@ export async function activate(context: vscode.ExtensionContext) {
pollingService = new PollingService(repository, strategy, logger);
// Webview manager (cleaner than global panel array) - create before connection
webviewManager = new WebviewManager(
context,
repository,
events,
logger,
terminalManager
);
webviewManager = new WebviewManager(context, repository, events, logger);
webviewManager.setConfigService(configService);
// Sidebar webview manager
@@ -221,11 +210,10 @@ function registerCommands(context: vscode.ExtensionContext) {
);
}
export async function deactivate() {
export function deactivate() {
logger?.log('👋 TaskMaster Extension deactivating...');
pollingService?.stop();
webviewManager?.dispose();
await terminalManager?.dispose();
api?.destroy();
mcpClient?.disconnect();
}

View File

@@ -1,156 +0,0 @@
/**
* Terminal Manager - Handles task execution in VS Code terminals
* Uses @tm/core for consistent task management with the CLI
*/
import * as vscode from 'vscode';
import { createTaskMasterCore, type TaskMasterCore } from '@tm/core';
import type { ExtensionLogger } from '../utils/logger';
export interface TerminalExecutionOptions {
taskId: string;
taskTitle: string;
tag?: string;
}
export interface TerminalExecutionResult {
success: boolean;
error?: string;
terminalName?: string;
}
export class TerminalManager {
private terminals = new Map<string, vscode.Terminal>();
private tmCore?: TaskMasterCore;
constructor(
private context: vscode.ExtensionContext,
private logger: ExtensionLogger
) {}
/**
* Execute a task in a new VS Code terminal with Claude
* Uses @tm/core for consistent task management with the CLI
*/
async executeTask(
options: TerminalExecutionOptions
): Promise<TerminalExecutionResult> {
const { taskTitle, tag } = options;
// Ensure taskId is always a string
const taskId = String(options.taskId);
this.logger.log(
`Starting task execution for ${taskId}: ${taskTitle}${tag ? ` (tag: ${tag})` : ''}`
);
this.logger.log(`TaskId type: ${typeof taskId}, value: ${taskId}`);
try {
// Initialize tm-core if needed
await this.initializeCore();
// Use tm-core to start the task (same as CLI)
const startResult = await this.tmCore!.startTask(taskId, {
dryRun: false,
force: false,
updateStatus: true
});
if (!startResult.started || !startResult.executionOutput) {
throw new Error(
startResult.error || 'Failed to start task with tm-core'
);
}
// Create terminal with custom TaskMaster icon
const terminalName = `Task ${taskId}: ${taskTitle}`;
const terminal = this.createTerminal(terminalName);
// Store terminal reference for potential cleanup
this.terminals.set(taskId, terminal);
// Show terminal and run Claude command
terminal.show();
const command = `claude "${startResult.executionOutput}"`;
terminal.sendText(command);
this.logger.log(`Launched Claude for task ${taskId} using tm-core`);
return {
success: true,
terminalName
};
} catch (error) {
this.logger.error('Failed to execute task:', error);
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error'
};
}
}
/**
* Create a new terminal with TaskMaster branding
*/
private createTerminal(name: string): vscode.Terminal {
const workspaceRoot = vscode.workspace.workspaceFolders?.[0]?.uri.fsPath;
return vscode.window.createTerminal({
name,
cwd: workspaceRoot,
iconPath: new vscode.ThemeIcon('play') // Use a VS Code built-in icon for now
});
}
/**
* Initialize TaskMaster Core (same as CLI)
*/
private async initializeCore(): Promise<void> {
if (!this.tmCore) {
const workspaceRoot = vscode.workspace.workspaceFolders?.[0]?.uri.fsPath;
if (!workspaceRoot) {
throw new Error('No workspace folder found');
}
this.tmCore = await createTaskMasterCore({ projectPath: workspaceRoot });
}
}
/**
* Get terminal by task ID (if still active)
*/
getTerminalByTaskId(taskId: string): vscode.Terminal | undefined {
return this.terminals.get(taskId);
}
/**
* Clean up terminated terminals
*/
cleanupTerminal(taskId: string): void {
const terminal = this.terminals.get(taskId);
if (terminal) {
this.terminals.delete(taskId);
}
}
/**
* Dispose all managed terminals and clean up tm-core
*/
async dispose(): Promise<void> {
this.terminals.forEach((terminal) => {
try {
terminal.dispose();
} catch (error) {
this.logger.error('Failed to dispose terminal:', error);
}
});
this.terminals.clear();
if (this.tmCore) {
try {
await this.tmCore.close();
this.tmCore = undefined;
} catch (error) {
this.logger.error('Failed to close tm-core:', error);
}
}
}
}

View File

@@ -8,7 +8,6 @@ import type { EventEmitter } from '../utils/event-emitter';
import type { ExtensionLogger } from '../utils/logger';
import type { ConfigService } from './config-service';
import type { TaskRepository } from './task-repository';
import type { TerminalManager } from './terminal-manager';
export class WebviewManager {
private panels = new Set<vscode.WebviewPanel>();
@@ -20,8 +19,7 @@ export class WebviewManager {
private context: vscode.ExtensionContext,
private repository: TaskRepository,
private events: EventEmitter,
private logger: ExtensionLogger,
private terminalManager: TerminalManager
private logger: ExtensionLogger
) {}
setConfigService(configService: ConfigService): void {
@@ -364,67 +362,27 @@ export class WebviewManager {
return;
case 'openTerminal':
// Delegate terminal execution to TerminalManager
const { taskId, taskTitle } = data.data || data; // Handle both nested and direct data
// Open VS Code terminal for task execution
this.logger.log(
`Webview openTerminal - taskId: ${taskId} (type: ${typeof taskId}), taskTitle: ${taskTitle}`
`Opening terminal for task ${data.taskId}: ${data.taskTitle}`
);
// Get current tag to ensure we're working in the right context
let currentTag = 'master'; // default fallback
if (this.mcpClient) {
try {
const tagsResult = await this.mcpClient.callTool('list_tags', {
projectRoot: vscode.workspace.workspaceFolders?.[0]?.uri.fsPath,
showMetadata: false
const terminal = vscode.window.createTerminal({
name: `Task ${data.taskId}: ${data.taskTitle}`,
cwd: vscode.workspace.workspaceFolders?.[0]?.uri.fsPath
});
terminal.show();
let parsedData;
if (
tagsResult?.content &&
Array.isArray(tagsResult.content) &&
tagsResult.content[0]?.text
) {
try {
parsedData = JSON.parse(tagsResult.content[0].text);
if (parsedData?.data?.currentTag) {
currentTag = parsedData.data.currentTag;
}
} catch (e) {
this.logger.warn(
'Failed to parse tags response for terminal execution'
);
}
}
this.logger.log('Terminal created and shown successfully');
response = { success: true };
} catch (error) {
this.logger.warn(
'Failed to get current tag for terminal execution:',
error
);
this.logger.error('Failed to create terminal:', error);
response = {
success: false,
error: error instanceof Error ? error.message : 'Unknown error'
};
}
}
const result = await this.terminalManager.executeTask({
taskId,
taskTitle,
tag: currentTag
});
response = result;
// Show user feedback AFTER sending the response (like the working "TaskMaster connected!" example)
setImmediate(() => {
if (result.success) {
// Success: Show info message
vscode.window.showInformationMessage(
`✅ Started Claude session for Task ${taskId}: ${taskTitle}`
);
} else {
// Error: Show VS Code native error notification only
const errorMsg = `Failed to start task: ${result.error}`;
vscode.window.showErrorMessage(errorMsg);
}
});
break;
default:

View File

@@ -408,7 +408,7 @@ export function createMCPConfigFromSettings(): MCPConfig {
const taskMasterPath = require.resolve('task-master-ai');
const mcpServerPath = path.resolve(
path.dirname(taskMasterPath),
'./dist/mcp-server.js'
'mcp-server/server.js'
);
// Verify the server file exists

View File

@@ -5,6 +5,7 @@
"outDir": "out",
"lib": ["ES2022", "DOM"],
"sourceMap": true,
"rootDir": "src",
"strict": true /* enable all strict type-checking options */,
"moduleResolution": "Node",
"esModuleInterop": true,
@@ -19,11 +20,8 @@
"paths": {
"@/*": ["./src/*"],
"@/components/*": ["./src/components/*"],
"@/lib/*": ["./src/lib/*"],
"@tm/core": ["../../packages/tm-core/src/index.ts"],
"@tm/core/*": ["../../packages/tm-core/src/*"]
"@/lib/*": ["./src/lib/*"]
}
},
"include": ["src/**/*"],
"exclude": ["node_modules", ".vscode-test", "out", "dist"]
}

View File

@@ -1,231 +0,0 @@
# TODO: Move to apps/docs inside our documentation website
# Claude Code Integration Guide
This guide covers how to use Task Master with Claude Code AI SDK integration for enhanced AI-powered development workflows.
## Overview
Claude Code integration allows Task Master to leverage the Claude Code CLI for AI operations without requiring direct API keys. The integration uses OAuth tokens managed by the Claude Code CLI itself.
## Authentication Setup
The Claude Code provider uses token authentication managed by the Claude Code CLI.
### Prerequisites
1. **Install Claude Code CLI** (if not already installed):
```bash
# Installation method depends on your system
# Follow Claude Code documentation for installation
```
2. **Set up OAuth token** using Claude Code CLI:
```bash
claude setup-token
```
This command will:
- Guide you through OAuth authentication
- Store the token securely for CLI usage
- Enable Task Master to use Claude Code without manual API key configuration
### Authentication Priority
Task Master will attempt authentication in this order:
1. **Environment Variable** (optional): `CLAUDE_CODE_OAUTH_TOKEN`
- Useful for CI/CD environments or when you want to override the default token
- Not required if you've set up the CLI token
2. **Claude Code CLI Token** (recommended): Token managed by `claude setup-token`
- Automatically used when available
- Most convenient for local development
3. **Fallback**: Error if neither is available
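A minimal local-development flow under this priority scheme, assuming the CLI-managed token path (the commands mirror the examples later in this guide):
```bash
# One-time OAuth setup; the Claude Code CLI stores the token for you
claude setup-token
# Point Task Master's main role at Claude Code (no API key in .env required)
task-master models --set-main claude-code:sonnet
# Subsequent AI-backed commands authenticate via the CLI-managed token
task-master next
```
In CI, setting `CLAUDE_CODE_OAUTH_TOKEN` takes precedence over the CLI-managed token, as described in item 1 above.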
## Configuration
### Basic Configuration
Add Claude Code to your Task Master configuration:
```javascript
// In your .taskmaster/config.json or via task-master models command
{
"models": {
"main": "claude-code:sonnet", // Use Claude Code with Sonnet
"research": "perplexity-llama-3.1-sonar-large-128k-online",
"fallback": "claude-code:opus" // Use Claude Code with Opus as fallback
}
}
```
### Supported Models
- `claude-code:sonnet` - Claude 3.5 Sonnet via Claude Code CLI
- `claude-code:opus` - Claude 3 Opus via Claude Code CLI
### Environment Variables (Optional)
While not required, you can optionally set:
```bash
export CLAUDE_CODE_OAUTH_TOKEN="your_oauth_token_here"
```
This is only needed in specific scenarios like:
- CI/CD pipelines
- Docker containers
- When you want to use a different token than the CLI default
## Usage Examples
### Basic Task Operations
```bash
# Use Claude Code for task operations
task-master add-task --prompt="Implement user authentication system" --research
task-master expand --id=1 --research
task-master update-task --id=1.1 --prompt="Add JWT token validation"
```
### Model Configuration Commands
```bash
# Set Claude Code as main model
task-master models --set-main claude-code:sonnet
# Use interactive setup
task-master models --setup
# Then select "claude-code" from the provider list
```
## Troubleshooting
### Common Issues
#### 1. "Claude Code CLI not available" Error
**Problem**: Task Master cannot connect to Claude Code CLI.
**Solutions**:
- Ensure Claude Code CLI is installed and in your PATH
- Run `claude setup-token` to configure authentication
- Verify Claude Code CLI works: `claude --help`
#### 2. Authentication Failures
**Problem**: Token authentication is failing.
**Solutions**:
- Re-run `claude setup-token` to refresh your OAuth token
- Check if your token has expired
- Verify Claude Code CLI can authenticate: try a simple `claude` command
#### 3. Model Not Available
**Problem**: Specified Claude Code model is not supported.
**Solutions**:
- Use supported models: `sonnet` or `opus`
- Check model availability: `task-master models --list`
- Verify your Claude Code CLI has access to the requested model
### Debug Steps
1. **Test Claude Code CLI directly**:
```bash
claude --help
# Should show help without errors
```
2. **Test authentication**:
```bash
claude setup-token --verify
# Should confirm token is valid
```
3. **Test Task Master integration**:
```bash
task-master models --test claude-code:sonnet
# Should successfully connect and test the model
```
4. **Check logs**:
- Task Master logs will show detailed error messages
- Use `--verbose` flag for more detailed output
### Environment-Specific Configuration
#### Docker/Containers
When running in Docker, you'll need to:
1. Install Claude Code CLI in your container
2. Set up authentication via environment variable:
```dockerfile
ENV CLAUDE_CODE_OAUTH_TOKEN="your_token_here"
```
#### CI/CD Pipelines
For automated environments:
1. Set up a service account token or use environment variables
2. Ensure Claude Code CLI is available in the pipeline environment
3. Configure authentication before running Task Master commands
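A sketch of such a pipeline step, assuming a bash-based runner and a secret exposed as `CI_CLAUDE_CODE_OAUTH_TOKEN` (the secret name and install commands are illustrative, not prescriptive):
```bash
# Make the OAuth token available to Task Master (secret name is hypothetical)
export CLAUDE_CODE_OAUTH_TOKEN="$CI_CLAUDE_CODE_OAUTH_TOKEN"
# Install the Task Master CLI; the Claude Code CLI must also be present in the runner image
npm install -g task-master-ai
# Run Task Master commands as usual
task-master analyze-complexity --research
```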
## Integration with AI SDK
Task Master's Claude Code integration uses the official `ai-sdk-provider-claude-code` package, providing:
- **Streaming Support**: Real-time token streaming for interactive experiences
- **Full AI SDK Compatibility**: Works with generateText, streamText, and other AI SDK functions
- **Automatic Error Handling**: Graceful degradation when Claude Code is unavailable
- **Type Safety**: Full TypeScript support with proper type definitions
### Example AI SDK Usage
```javascript
import { generateText } from 'ai';
import { ClaudeCodeProvider } from './src/ai-providers/claude-code.js';
const provider = new ClaudeCodeProvider();
const client = provider.getClient();
const result = await generateText({
model: client('sonnet'),
messages: [
{ role: 'user', content: 'Hello Claude!' }
]
});
console.log(result.text);
```
## Security Notes
- OAuth tokens are managed securely by Claude Code CLI
- No API keys need to be stored in your project files
- Tokens are automatically refreshed by the Claude Code CLI
- Environment variables should only be used in secure environments
## Getting Help
If you encounter issues:
1. Check the Claude Code CLI documentation
2. Verify your authentication setup with `claude setup-token --verify`
3. Review Task Master logs for detailed error messages
4. Open an issue with both Task Master and Claude Code version information

View File

@@ -383,12 +383,6 @@ task-master models --set-main=my-local-llama --ollama
# Set a custom OpenRouter model for the research role
task-master models --set-research=google/gemini-pro --openrouter
# Set Codex CLI model for the main role (uses ChatGPT subscription via OAuth)
task-master models --set-main=gpt-5-codex --codex-cli
# Set Codex CLI model for the fallback role
task-master models --set-fallback=gpt-5 --codex-cli
# Run interactive setup to configure models, including custom ones
task-master models --setup
```

View File

@@ -235,60 +235,6 @@ node scripts/init.js
- "MCP provider requires session context" → Ensure running in MCP environment
- See the [MCP Provider Guide](./mcp-provider-guide.md) for detailed troubleshooting
### MCP Timeout Configuration
Long-running AI operations in taskmaster-ai can exceed the default 60-second MCP timeout. Operations like `parse_prd`, `expand_task`, `research`, and `analyze_project_complexity` may take 2-5 minutes to complete.
#### Adding Timeout Configuration
Add a `timeout` parameter to your MCP configuration to extend the timeout limit. The timeout configuration works identically across MCP clients including Cursor, Windsurf, and RooCode:
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"timeout": 300,
"env": {
"ANTHROPIC_API_KEY": "your-anthropic-api-key"
}
}
}
}
```
**Configuration Details:**
- **`timeout: 300`** - Sets timeout to 300 seconds (5 minutes)
- **Value range**: 1-3600 seconds (1 second to 1 hour)
- **Recommended**: 300 seconds provides sufficient time for most AI operations
- **Format**: Integer value in seconds (not milliseconds)
#### Automatic Setup
When adding taskmaster rules for supported editors, the timeout configuration is automatically included:
```bash
# Automatically includes timeout configuration
task-master rules add cursor
task-master rules add roo
task-master rules add windsurf
task-master rules add vscode
```
#### Troubleshooting Timeouts
If you're still experiencing timeout errors:
1. **Verify configuration**: Check that `timeout: 300` is present in your MCP config
2. **Restart editor**: Restart your editor after making configuration changes
3. **Increase timeout**: For very complex operations, try `timeout: 600` (10 minutes)
4. **Check API keys**: Ensure required API keys are properly configured
**Expected behavior:**
- **Before fix**: Operations fail after 60 seconds with `MCP request timed out after 60000ms`
- **After fix**: Operations complete successfully within the configured timeout limit
### Google Vertex AI Configuration
Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
@@ -429,153 +375,3 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
- Verify the deployment name matches your configuration exactly (case-sensitive)
- Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
- Ensure you're not getting rate limited by `maxTokens`; maintain an appropriate Tokens per Minute (TPM) rate limit in your deployment.
### Codex CLI Provider
The Codex CLI provider integrates Task Master with OpenAI's Codex CLI, allowing you to use ChatGPT subscription models via OAuth authentication.
1. **Prerequisites**:
- Node.js >= 18
- Codex CLI >= 0.42.0 (>= 0.44.0 recommended)
- ChatGPT subscription: Plus, Pro, Business, Edu, or Enterprise (for OAuth access to GPT-5 models)
2. **Installation**:
```bash
npm install -g @openai/codex
```
3. **Authentication** (OAuth - Primary Method):
```bash
codex login
```
This will open a browser window for OAuth authentication with your ChatGPT account. Once authenticated, Task Master will automatically use these credentials.
4. **Optional API Key Method**:
While OAuth is the primary and recommended authentication method, you can optionally set an OpenAI API key:
```bash
# In .env file
OPENAI_CODEX_API_KEY=sk-your-openai-api-key-here
```
**Note**: The API key will only be injected if explicitly provided. OAuth is always preferred.
5. **Configuration**:
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "codex-cli",
"modelId": "gpt-5-codex",
"maxTokens": 128000,
"temperature": 0.2
},
"fallback": {
"provider": "codex-cli",
"modelId": "gpt-5",
"maxTokens": 128000,
"temperature": 0.2
}
},
"codexCli": {
"allowNpx": true,
"skipGitRepoCheck": true,
"approvalMode": "on-failure",
"sandboxMode": "workspace-write"
}
}
```
6. **Available Models**:
- `gpt-5` - Latest GPT-5 model (272K max input, 128K max output)
- `gpt-5-codex` - GPT-5 optimized for agentic software engineering (272K max input, 128K max output)
7. **Codex CLI Settings (`codexCli` section)**:
The `codexCli` section in your configuration file supports the following options:
- **`allowNpx`** (boolean, default: `false`): Allow fallback to `npx @openai/codex` if CLI not found on PATH
- **`skipGitRepoCheck`** (boolean, default: `false`): Skip git repository safety check (recommended for CI/non-repo usage)
- **`approvalMode`** (string): Control command execution approval
- `"untrusted"`: Require approval for all commands
- `"on-failure"`: Only require approval after a command fails (default)
- `"on-request"`: Approve only when explicitly requested
- `"never"`: Never require approval (not recommended)
- **`sandboxMode`** (string): Control filesystem access
- `"read-only"`: Read-only access
- `"workspace-write"`: Allow writes to workspace (default)
- `"danger-full-access"`: Full filesystem access (use with caution)
- **`codexPath`** (string, optional): Custom path to codex CLI executable
- **`cwd`** (string, optional): Working directory for Codex CLI execution
- **`fullAuto`** (boolean, optional): Fully automatic mode (equivalent to `--full-auto` flag)
- **`dangerouslyBypassApprovalsAndSandbox`** (boolean, optional): Bypass all safety checks (dangerous!)
- **`color`** (string, optional): Color handling - `"always"`, `"never"`, or `"auto"`
- **`outputLastMessageFile`** (string, optional): Write last agent message to specified file
- **`verbose`** (boolean, optional): Enable verbose logging
- **`env`** (object, optional): Additional environment variables for Codex CLI
8. **Command-Specific Settings** (optional):
You can override settings for specific Task Master commands:
```json
{
"codexCli": {
"allowNpx": true,
"approvalMode": "on-failure",
"commandSpecific": {
"parse-prd": {
"approvalMode": "never",
"verbose": true
},
"expand": {
"sandboxMode": "read-only"
}
}
}
}
```
9. **Codebase Features**:
The Codex CLI provider is codebase-capable, meaning it can analyze and interact with your project files. Codebase analysis features are automatically enabled when using `codex-cli` as your provider and `enableCodebaseAnalysis` is set to `true` in your global configuration (default).
10. **Setup Commands**:
```bash
# Set Codex CLI for main role
task-master models --set-main gpt-5-codex --codex-cli
# Set Codex CLI for fallback role
task-master models --set-fallback gpt-5 --codex-cli
# Verify configuration
task-master models
```
11. **Troubleshooting**:
**"codex: command not found" error:**
- Install Codex CLI globally: `npm install -g @openai/codex`
- Verify installation: `codex --version`
- Alternatively, enable `allowNpx: true` in your codexCli configuration
**"Not logged in" errors:**
- Run `codex login` to authenticate with your ChatGPT account
- Verify authentication status: `codex` (opens interactive CLI)
**"Old version" warnings:**
- Check version: `codex --version`
- Upgrade: `npm install -g @openai/codex@latest`
- Minimum version: 0.42.0, recommended: >= 0.44.0
**"Model not available" errors:**
- Only `gpt-5` and `gpt-5-codex` are available via OAuth subscription
- Verify your ChatGPT subscription is active
- For other OpenAI models, use the standard `openai` provider with an API key
**API key not being used:**
- API key is only injected when explicitly provided
- OAuth authentication is always preferred
- If you want to use an API key, ensure `OPENAI_CODEX_API_KEY` is set in your `.env` file
12. **Important Notes**:
- OAuth subscription required for model access (no API key needed for basic operation)
- Limited to OAuth-available models only (`gpt-5` and `gpt-5-codex`)
- Pricing information is not available for OAuth models (shows as "Unknown" in cost calculations)
- See [Codex CLI Provider Documentation](./providers/codex-cli.md) for more details

View File

@@ -1,463 +0,0 @@
# Codex CLI Provider Usage Examples
This guide provides practical examples of using Task Master with the Codex CLI provider.
## Prerequisites
Before using these examples, ensure you have:
```bash
# 1. Codex CLI installed
npm install -g @openai/codex
# 2. Authenticated with ChatGPT
codex login
# 3. Codex CLI configured as your provider
task-master models --set-main gpt-5-codex --codex-cli
```
## Example 1: Basic Task Creation
Use Codex CLI to create tasks from a simple description:
```bash
# Add a task with AI-powered enhancement
task-master add-task --prompt="Implement user authentication with JWT" --research
```
**What happens**:
1. Task Master sends your prompt to GPT-5-Codex via the CLI
2. The AI analyzes your request and generates a detailed task
3. The task is added to your `.taskmaster/tasks/tasks.json`
4. OAuth credentials are automatically used (no API key needed)
## Example 2: Parsing a Product Requirements Document
Create a comprehensive task list from a PRD:
```bash
# Create your PRD
cat > my-feature.txt <<EOF
# User Profile Feature
## Requirements
1. Users can view their profile
2. Users can edit their information
3. Profile pictures can be uploaded
4. Email verification required
## Technical Constraints
- Use React for frontend
- Node.js/Express backend
- PostgreSQL database
EOF
# Parse with Codex CLI
task-master parse-prd my-feature.txt --num-tasks 12
```
**What happens**:
1. GPT-5-Codex reads and analyzes your PRD
2. Generates structured tasks with dependencies
3. Creates subtasks for complex items
4. Saves everything to `.taskmaster/tasks/`
## Example 3: Expanding Tasks with Research
Break down a complex task into detailed subtasks:
```bash
# First, show your current tasks
task-master list
# Expand a specific task (e.g., task 1.2)
task-master expand --id=1.2 --research --force
```
**What happens**:
1. Codex CLI uses GPT-5 for research-level analysis
2. Breaks down the task into logical subtasks
3. Adds implementation details and test strategies
4. Updates the task with dependency information
## Example 4: Analyzing Project Complexity
Get AI-powered insights into your project's task complexity:
```bash
# Analyze all tasks
task-master analyze-complexity --research
# View the complexity report
task-master complexity-report
```
**What happens**:
1. GPT-5 analyzes each task's scope and requirements
2. Assigns complexity scores and estimates subtask counts
3. Generates a detailed report
4. Saves to `.taskmaster/reports/task-complexity-report.json`
## Example 5: Using Custom Codex CLI Settings
Configure Codex CLI behavior for different commands:
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "codex-cli",
"modelId": "gpt-5-codex",
"maxTokens": 128000,
"temperature": 0.2
}
},
"codexCli": {
"allowNpx": true,
"approvalMode": "on-failure",
"sandboxMode": "workspace-write",
"commandSpecific": {
"parse-prd": {
"verbose": true,
"approvalMode": "never"
},
"expand": {
"sandboxMode": "read-only",
"verbose": true
}
}
}
}
```
```bash
# Now parse-prd runs with verbose output and no approvals
task-master parse-prd requirements.txt
# Expand runs with read-only mode
task-master expand --id=2.1
```
## Example 6: Workflow - Building a Feature End-to-End
Complete workflow from PRD to implementation tracking:
```bash
# Step 1: Initialize project
task-master init
# Step 2: Set up Codex CLI
task-master models --set-main gpt-5-codex --codex-cli
task-master models --set-fallback gpt-5 --codex-cli
# Step 3: Create PRD
cat > feature-prd.txt <<EOF
# Authentication System
Implement a complete authentication system with:
- User registration
- Email verification
- Password reset
- Two-factor authentication
- Session management
EOF
# Step 4: Parse PRD into tasks
task-master parse-prd feature-prd.txt --num-tasks 8
# Step 5: Analyze complexity
task-master analyze-complexity --research
# Step 6: Expand complex tasks
task-master expand --all --research
# Step 7: Start working
task-master next
# Shows: Task 1.1: User registration database schema
# Step 8: Mark completed as you work
task-master set-status --id=1.1 --status=done
# Step 9: Continue to next task
task-master next
```
## Example 7: Multi-Role Configuration
Use Codex CLI for main tasks, Perplexity for research:
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "codex-cli",
"modelId": "gpt-5-codex",
"maxTokens": 128000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "codex-cli",
"modelId": "gpt-5",
"maxTokens": 128000,
"temperature": 0.2
}
}
}
```
```bash
# Main task operations use GPT-5-Codex
task-master add-task --prompt="Build REST API endpoint"
# Research operations use Perplexity
task-master analyze-complexity --research
# Fallback to GPT-5 if needed
task-master expand --id=3.2 --force
```
## Example 8: Troubleshooting Common Issues
### Issue: Codex CLI not found
```bash
# Check if Codex is installed
codex --version
# If not found, install globally
npm install -g @openai/codex
# Or enable npx fallback by merging this into .taskmaster/config.json
# (edit the existing file; appending a second JSON document would make it invalid):
#   {
#     "codexCli": {
#       "allowNpx": true
#     }
#   }
```
### Issue: Not authenticated
```bash
# Check auth status
codex
# Use /about command to see auth info
# Re-authenticate if needed
codex login
```
### Issue: Want more verbose output
```bash
# Enable verbose mode by merging this into .taskmaster/config.json
# (edit the existing file rather than appending a second JSON document):
#   {
#     "codexCli": {
#       "verbose": true
#     }
#   }
# Or for specific commands
task-master parse-prd my-prd.txt
# (verbose output shows detailed Codex CLI interactions)
```
## Example 9: CI/CD Integration
Use Codex CLI in automated workflows:
```yaml
# .github/workflows/task-analysis.yml
name: Analyze Task Complexity
on:
push:
paths:
- '.taskmaster/**'
jobs:
analyze:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Task Master
run: npm install -g task-master-ai
- name: Configure Codex CLI
run: |
npm install -g @openai/codex
echo "${{ secrets.OPENAI_CODEX_API_KEY }}" > ~/.codex-auth
env:
OPENAI_CODEX_API_KEY: ${{ secrets.OPENAI_CODEX_API_KEY }}
- name: Configure Task Master
run: |
cat > .taskmaster/config.json <<EOF
{
"models": {
"main": {
"provider": "codex-cli",
"modelId": "gpt-5"
}
},
"codexCli": {
"allowNpx": true,
"skipGitRepoCheck": true,
"approvalMode": "never",
"fullAuto": true
}
}
EOF
- name: Analyze Complexity
run: task-master analyze-complexity --research
- name: Upload Report
uses: actions/upload-artifact@v3
with:
name: complexity-report
path: .taskmaster/reports/task-complexity-report.json
```
## Best Practices
### 1. Use OAuth for Development
```bash
# For local development, use OAuth (no API key needed)
codex login
task-master models --set-main gpt-5-codex --codex-cli
```
### 2. Configure Approval Modes Appropriately
```json
{
"codexCli": {
"approvalMode": "on-failure", // Safe default
"sandboxMode": "workspace-write" // Restricts to project directory
}
}
```
### 3. Use Command-Specific Settings
```json
{
"codexCli": {
"commandSpecific": {
"parse-prd": {
"approvalMode": "never", // PRD parsing is safe
"verbose": true
},
"expand": {
"approvalMode": "on-request", // More cautious for task expansion
"verbose": false
}
}
}
}
```
### 4. Leverage Codebase Analysis
```json
{
"global": {
"enableCodebaseAnalysis": true // Let Codex analyze your code
}
}
```
### 5. Handle Errors Gracefully
```bash
# Always configure a fallback model
task-master models --set-fallback gpt-5 --codex-cli
# Or use a different provider as fallback
task-master models --set-fallback claude-3-5-sonnet
```
## Next Steps
- Read the [Codex CLI Provider Documentation](../providers/codex-cli.md)
- Explore [Configuration Options](../configuration.md#codex-cli-provider)
- Check out [Command Reference](../command-reference.md)
- Learn about [Task Structure](../task-structure.md)
## Common Patterns
### Pattern: Daily Development Workflow
```bash
# Morning: Review tasks
task-master list
# Get next task
task-master next
# Work on task...
# Update task with notes
task-master update-subtask --id=2.3 --prompt="Implemented authentication middleware"
# Mark complete
task-master set-status --id=2.3 --status=done
# Repeat
```
### Pattern: Feature Planning
```bash
# Write feature spec
vim new-feature.txt
# Generate tasks
task-master parse-prd new-feature.txt --num-tasks 10
# Analyze and expand
task-master analyze-complexity --research
task-master expand --all --research --force
# Review and adjust
task-master list
```
### Pattern: Sprint Planning
```bash
# Parse sprint requirements
task-master parse-prd sprint-requirements.txt
# Analyze complexity
task-master analyze-complexity --research
# View report
task-master complexity-report
# Adjust task estimates based on complexity scores
```
---
For more examples and advanced usage, see the [full documentation](https://docs.task-master.dev).

View File

@@ -451,8 +451,8 @@ When using Task Master in VS Code with MCP support:
{
"servers": {
"task-master-dev": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"command": "node",
"args": ["mcp-server/server.js"],
"cwd": "/path/to/your/task-master-project",
"env": {
"NODE_ENV": "development",

View File

@@ -1,4 +1,4 @@
# Available Models as of October 5, 2025
# Available Models as of August 12, 2025
## Main Models
@@ -10,15 +10,9 @@
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| codex-cli | gpt-5 | 0.749 | 0 | 0 |
| codex-cli | gpt-5-codex | 0.749 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
| grok-cli | grok-4-latest | 0.7 | 0 | 0 |
| grok-cli | grok-3-latest | 0.65 | 0 | 0 |
| grok-cli | grok-3-fast | 0.6 | 0 | 0 |
| grok-cli | grok-3-mini-fast | 0.55 | 0 | 0 |
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o1 | 0.489 | 15 | 60 |
| openai | o3 | 0.5 | 2 | 8 |
@@ -102,15 +96,9 @@
| ----------- | -------------------------------------------- | --------- | ---------- | ----------- |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| codex-cli | gpt-5 | 0.749 | 0 | 0 |
| codex-cli | gpt-5-codex | 0.749 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
| grok-cli | grok-4-latest | 0.7 | 0 | 0 |
| grok-cli | grok-3-latest | 0.65 | 0 | 0 |
| grok-cli | grok-3-fast | 0.6 | 0 | 0 |
| grok-cli | grok-3-mini-fast | 0.55 | 0 | 0 |
| openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
| openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
| xai | grok-3 | — | 3 | 15 |
@@ -123,7 +111,7 @@
| groq | deepseek-r1-distill-llama-70b | 0.52 | 0.75 | 0.99 |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | sonar-deep-research | 0.211 | 2 | 8 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
@@ -144,15 +132,9 @@
| anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
| claude-code | opus | 0.725 | 0 | 0 |
| claude-code | sonnet | 0.727 | 0 | 0 |
| codex-cli | gpt-5 | 0.749 | 0 | 0 |
| codex-cli | gpt-5-codex | 0.749 | 0 | 0 |
| mcp | mcp-sampling | — | 0 | 0 |
| gemini-cli | gemini-2.5-pro | 0.72 | 0 | 0 |
| gemini-cli | gemini-2.5-flash | 0.71 | 0 | 0 |
| grok-cli | grok-4-latest | 0.7 | 0 | 0 |
| grok-cli | grok-3-latest | 0.65 | 0 | 0 |
| grok-cli | grok-3-fast | 0.6 | 0 | 0 |
| grok-cli | grok-3-mini-fast | 0.55 | 0 | 0 |
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o3 | 0.5 | 2 | 8 |
| openai | o4-mini | 0.45 | 1.1 | 4.4 |

View File

@@ -1,510 +0,0 @@
# Codex CLI Provider
The `codex-cli` provider integrates Task Master with OpenAI's Codex CLI via the community AI SDK provider [`ai-sdk-provider-codex-cli`](https://github.com/ben-vargas/ai-sdk-provider-codex-cli). It uses your ChatGPT subscription (OAuth) via `codex login`, with optional `OPENAI_CODEX_API_KEY` support.
## Why Use Codex CLI?
The primary benefits of using the `codex-cli` provider include:
- **Use Latest OpenAI Models**: Access to cutting-edge models like GPT-5 and GPT-5-Codex via ChatGPT subscription
- **OAuth Authentication**: No API key management needed - authenticate once with `codex login`
- **Built-in Tool Execution**: Native support for command execution, file changes, MCP tools, and web search
- **Native JSON Schema Support**: Structured output generation without post-processing
- **Approval/Sandbox Modes**: Fine-grained control over command execution and filesystem access for safety
## Quickstart
Get up and running with Codex CLI in 3 steps:
```bash
# 1. Install Codex CLI globally
npm install -g @openai/codex
# 2. Authenticate with your ChatGPT account
codex login
# 3. Configure Task Master to use Codex CLI
task-master models --set-main gpt-5-codex --codex-cli
```
## Requirements
- **Node.js**: >= 18.0.0
- **Codex CLI**: >= 0.42.0 (>= 0.44.0 recommended)
- **ChatGPT Subscription**: Required for OAuth access (Plus, Pro, Business, Edu, or Enterprise)
- **Task Master**: >= 0.27.3 (version with Codex CLI support)
### Checking Your Versions
```bash
# Check Node.js version
node --version
# Check Codex CLI version
codex --version
# Check Task Master version
task-master --version
```
## Installation
### Install Codex CLI
```bash
# Install globally via npm
npm install -g @openai/codex
# Verify installation
codex --version
```
Expected output: `v0.44.0` or higher
### Install Task Master (if not already installed)
```bash
# Install globally
npm install -g task-master-ai
# Or install in your project
npm install --save-dev task-master-ai
```
## Authentication
### OAuth Authentication (Primary Method - Recommended)
The Codex CLI provider is designed to use OAuth authentication with your ChatGPT subscription:
```bash
# Launch Codex CLI and authenticate
codex login
```
This will:
1. Open a browser window for OAuth authentication
2. Prompt you to log in with your ChatGPT account
3. Store authentication credentials locally
4. Allow Task Master to automatically use these credentials
To verify your authentication:
```bash
# Open interactive Codex CLI
codex
# Use /about command to see auth status
/about
```
### Optional: API Key Method
While OAuth is the primary and recommended method, you can optionally use an OpenAI API key:
```bash
# In your .env file
OPENAI_CODEX_API_KEY=sk-your-openai-api-key-here
```
**Important Notes**:
- The API key will **only** be injected when explicitly provided
- OAuth authentication is always preferred when available
- Using an API key doesn't provide access to subscription-only models like GPT-5-Codex
- For full OpenAI API access with non-subscription models, consider using the standard `openai` provider instead
- `OPENAI_CODEX_API_KEY` is specific to the codex-cli provider to avoid conflicts with the `openai` provider's `OPENAI_API_KEY`
## Available Models
The Codex CLI provider supports only models available through ChatGPT subscription:
| Model ID | Description | Max Input Tokens | Max Output Tokens |
|----------|-------------|------------------|-------------------|
| `gpt-5` | Latest GPT-5 model | 272K | 128K |
| `gpt-5-codex` | GPT-5 optimized for agentic software engineering | 272K | 128K |
**Note**: These models are only available via OAuth subscription through Codex CLI (ChatGPT Plus, Pro, Business, Edu, or Enterprise plans). For other OpenAI models, use the standard `openai` provider with an API key.
**Research Capabilities**: Both GPT-5 models support web search tools, making them suitable for the `research` role in addition to `main` and `fallback` roles.
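For example, to route the research role through Codex CLI (the same commands appear in the Usage section below):
```bash
# Use GPT-5's built-in web search tools for research-backed operations
task-master models --set-research gpt-5 --codex-cli
# Research-flagged commands now go through Codex CLI
task-master analyze-complexity --research
```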
## Configuration
### Basic Configuration
Add Codex CLI to your `.taskmaster/config.json`:
```json
{
"models": {
"main": {
"provider": "codex-cli",
"modelId": "gpt-5-codex",
"maxTokens": 128000,
"temperature": 0.2
},
"fallback": {
"provider": "codex-cli",
"modelId": "gpt-5",
"maxTokens": 128000,
"temperature": 0.2
}
}
}
```
### Advanced Configuration with Codex CLI Settings
The `codexCli` section allows you to customize Codex CLI behavior:
```json
{
"models": {
"main": {
"provider": "codex-cli",
"modelId": "gpt-5-codex",
"maxTokens": 128000,
"temperature": 0.2
}
},
"codexCli": {
"allowNpx": true,
"skipGitRepoCheck": true,
"approvalMode": "on-failure",
"sandboxMode": "workspace-write",
"verbose": false
}
}
```
### Codex CLI Settings Reference
#### Core Settings
- **`allowNpx`** (boolean, default: `false`)
- Allow fallback to `npx @openai/codex` if the CLI is not found on PATH
- Useful for CI environments or systems without global npm installations
- Example: `"allowNpx": true`
- **`skipGitRepoCheck`** (boolean, default: `false`)
- Skip git repository safety check before execution
- Recommended for CI environments or non-repository usage
- Example: `"skipGitRepoCheck": true`
#### Execution Control
- **`approvalMode`** (string)
- Controls when to require user approval for command execution
- Options:
- `"untrusted"`: Require approval for all commands
- `"on-failure"`: Only require approval after a command fails (default)
- `"on-request"`: Approve only when explicitly requested
- `"never"`: Never require approval (use with caution)
- Example: `"approvalMode": "on-failure"`
- **`sandboxMode`** (string)
- Controls filesystem access permissions
- Options:
- `"read-only"`: Read-only access to filesystem
- `"workspace-write"`: Allow writes to workspace directory (default)
- `"danger-full-access"`: Full filesystem access (use with extreme caution)
- Example: `"sandboxMode": "workspace-write"`
#### Path and Environment
- **`codexPath`** (string, optional)
- Custom path to Codex CLI executable
- Useful when Codex is installed in a non-standard location
- Example: `"codexPath": "/usr/local/bin/codex"`
- **`cwd`** (string, optional)
- Working directory for Codex CLI execution
- Defaults to current working directory
- Example: `"cwd": "/path/to/project"`
- **`env`** (object, optional)
- Additional environment variables for Codex CLI
- Example: `"env": { "DEBUG": "true" }`
#### Advanced Settings
- **`fullAuto`** (boolean, optional)
- Fully automatic mode (equivalent to `--full-auto` flag)
- Bypasses most approvals for fully automated workflows
- Example: `"fullAuto": true`
- **`dangerouslyBypassApprovalsAndSandbox`** (boolean, optional)
- Bypass all safety checks including approvals and sandbox
- **WARNING**: Use with extreme caution - can execute arbitrary code
- Example: `"dangerouslyBypassApprovalsAndSandbox": false`
- **`color`** (string, optional)
- Force color handling in Codex CLI output
- Options: `"always"`, `"never"`, `"auto"`
- Example: `"color": "auto"`
- **`outputLastMessageFile`** (string, optional)
- Write last agent message to specified file
- Useful for debugging or logging
- Example: `"outputLastMessageFile": "./last-message.txt"`
- **`verbose`** (boolean, optional)
- Enable verbose provider logging
- Helpful for debugging issues
- Example: `"verbose": true`
### Command-Specific Settings
Override settings for specific Task Master commands:
```json
{
"codexCli": {
"allowNpx": true,
"approvalMode": "on-failure",
"commandSpecific": {
"parse-prd": {
"approvalMode": "never",
"verbose": true
},
"expand": {
"sandboxMode": "read-only"
},
"add-task": {
"approvalMode": "untrusted"
}
}
}
}
```
## Usage
### Setting Codex CLI Models
```bash
# Set Codex CLI for main role
task-master models --set-main gpt-5-codex --codex-cli
# Set Codex CLI for fallback role
task-master models --set-fallback gpt-5 --codex-cli
# Set Codex CLI for research role
task-master models --set-research gpt-5 --codex-cli
# Verify configuration
task-master models
```
### Using Codex CLI with Task Master Commands
Once configured, use Task Master commands as normal:
```bash
# Parse a PRD with Codex CLI
task-master parse-prd my-requirements.txt
# Analyze project complexity
task-master analyze-complexity --research
# Expand a task into subtasks
task-master expand --id=1.2
# Add a new task with AI assistance
task-master add-task --prompt="Implement user authentication" --research
```
The provider will automatically use your OAuth credentials when Codex CLI is configured.
## Codebase Features
The Codex CLI provider is **codebase-capable**, meaning it can analyze and interact with your project files. This enables advanced features like:
- **Code Analysis**: Understanding your project structure and dependencies
- **Intelligent Suggestions**: Context-aware task recommendations
- **File Operations**: Reading and analyzing project files for better task generation
- **Pattern Recognition**: Identifying common patterns and best practices in your codebase
### Enabling Codebase Analysis
Codebase analysis is automatically enabled when:
1. Your provider is set to `codex-cli`
2. `enableCodebaseAnalysis` is `true` in your global configuration (default)
To verify or configure:
```json
{
"global": {
"enableCodebaseAnalysis": true
}
}
```
## Troubleshooting
### "codex: command not found" Error
**Symptoms**: Task Master reports that the Codex CLI is not found.
**Solutions**:
1. **Install Codex CLI globally**:
```bash
npm install -g @openai/codex
```
2. **Verify installation**:
```bash
codex --version
```
3. **Alternative: Enable npx fallback**:
```json
{
"codexCli": {
"allowNpx": true
}
}
```
### "Not logged in" Errors
**Symptoms**: Authentication errors when trying to use Codex CLI.
**Solutions**:
1. **Authenticate with OAuth**:
```bash
codex login
```
2. **Verify authentication status**:
```bash
codex
# Then use /about command
```
3. **Re-authenticate if needed**:
```bash
# Logout first
codex
# Use /auth command to change auth method
# Then login again
codex login
```
### "Old version" Warnings
**Symptoms**: Warnings about Codex CLI version being outdated.
**Solutions**:
1. **Check current version**:
```bash
codex --version
```
2. **Upgrade to latest version**:
```bash
npm install -g @openai/codex@latest
```
3. **Verify upgrade**:
```bash
codex --version
```
Should show >= 0.44.0
### "Model not available" Errors
**Symptoms**: Error indicating the requested model is not available.
**Causes and Solutions**:
1. **Using unsupported model**:
- Only `gpt-5` and `gpt-5-codex` are available via Codex CLI
- For other OpenAI models, use the standard `openai` provider
2. **Subscription not active**:
- Verify your ChatGPT subscription is active
- Check subscription status at <https://platform.openai.com>
3. **Wrong provider selected**:
- Verify you're using `--codex-cli` flag when setting models
- Check `.taskmaster/config.json` shows `"provider": "codex-cli"`
### API Key Not Being Used
**Symptoms**: You've set `OPENAI_CODEX_API_KEY` but it's not being used.
**Expected Behavior**:
- OAuth authentication is always preferred
- API key is only injected when explicitly provided
- API key doesn't grant access to subscription-only models
**Solutions**:
1. **Verify OAuth is working**:
```bash
codex
# Check /about for auth status
```
2. **If you want to force API key usage**:
- This is not recommended with Codex CLI
- Consider using the standard `openai` provider instead
3. **Verify .env file is being loaded**:
```bash
# Check if .env exists in project root
ls -la .env
# Verify OPENAI_CODEX_API_KEY is set
grep OPENAI_CODEX_API_KEY .env
```
### Approval/Sandbox Issues
**Symptoms**: Commands are blocked or filesystem access is denied.
**Solutions**:
1. **Adjust approval mode**:
```json
{
"codexCli": {
"approvalMode": "on-request"
}
}
```
2. **Adjust sandbox mode**:
```json
{
"codexCli": {
"sandboxMode": "workspace-write"
}
}
```
3. **For fully automated workflows** (use cautiously):
```json
{
"codexCli": {
"fullAuto": true
}
}
```
## Important Notes
- **OAuth subscription required**: No API key needed for basic operation, but requires active ChatGPT subscription
- **Limited model selection**: Only `gpt-5` and `gpt-5-codex` available via OAuth
- **Pricing information**: Not available for OAuth models (shows as "Unknown" in cost calculations)
- **No automatic dependency**: The `@openai/codex` package is not added to Task Master's dependencies - install it globally or enable `allowNpx`
- **Codebase analysis**: Automatically enabled when using `codex-cli` provider
- **Safety first**: Default settings prioritize safety with `approvalMode: "on-failure"` and `sandboxMode: "workspace-write"`
## See Also
- [Configuration Guide](../configuration.md#codex-cli-provider) - Complete Codex CLI configuration reference
- [Command Reference](../command-reference.md) - Using `--codex-cli` flag with commands
- [Gemini CLI Provider](./gemini-cli.md) - Similar CLI-based provider for Google Gemini
- [Claude Code Integration](../claude-code-integration.md) - Another CLI-based provider
- [ai-sdk-provider-codex-cli](https://github.com/ben-vargas/ai-sdk-provider-codex-cli) - Source code for the provider package

View File

@@ -23,7 +23,7 @@ npm i -g task-master-ai
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
@@ -145,7 +145,7 @@ You can also set up the MCP server in Cursor settings:
4. Configure with the following details:
- Name: "Task Master"
- Type: "Command"
- Command: "npx -y task-master-ai"
- Command: "npx -y --package=task-master-ai task-master-ai"
5. Save the settings
Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience.

View File

@@ -17,7 +17,7 @@ Add the following configuration to the user's MCP settings file (`.cursor/mcp.js
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "user_will_add_their_key_here",
"PERPLEXITY_API_KEY": "user_will_add_their_key_here",

View File

@@ -69,29 +69,11 @@ export function resolveTasksPath(args, log = silentLogger) {
// Use core findTasksPath with explicit path and normalized projectRoot context
if (projectRoot) {
const foundPath = coreFindTasksPath(explicitPath, { projectRoot }, log);
// If core function returns null and no explicit path was provided,
// construct the expected default path as documented
if (foundPath === null && !explicitPath) {
const defaultPath = path.join(
projectRoot,
'.taskmaster',
'tasks',
'tasks.json'
);
log?.info?.(
`Core findTasksPath returned null, using default path: ${defaultPath}`
);
return defaultPath;
}
return foundPath;
return coreFindTasksPath(explicitPath, { projectRoot }, log);
}
// Fallback to core function without projectRoot context
const foundPath = coreFindTasksPath(explicitPath, null, log);
// Note: When no projectRoot is available, we can't construct a default path
// so we return null and let the calling code handle the error
return foundPath;
return coreFindTasksPath(explicitPath, null, log);
}
/**
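With the fallback removed, `resolveTasksPath` can now return `null` when no tasks file is found. A minimal caller-side sketch (not part of this PR; the import path and helper name are hypothetical, and the default location mirrors the path the deleted code used to construct):
```javascript
import path from 'node:path';
import { resolveTasksPath } from './utils/path-utils.js'; // hypothetical import path

function getTasksFileOrDefault(args, log) {
	// May return null now that no default path is constructed internally
	const found = resolveTasksPath(args, log);
	if (found) return found;

	// Reconstruct the conventional default only when a project root is known
	if (args?.projectRoot) {
		return path.join(args.projectRoot, '.taskmaster', 'tasks', 'tasks.json');
	}
	throw new Error('tasks.json not found and no projectRoot was provided');
}
```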

View File

@@ -75,50 +75,13 @@ function generateExampleFromSchema(schema) {
return result;
case 'ZodString':
// Check for min/max length constraints
if (def.checks) {
const minCheck = def.checks.find((c) => c.kind === 'min');
const maxCheck = def.checks.find((c) => c.kind === 'max');
if (minCheck && maxCheck) {
return (
'<string between ' +
minCheck.value +
'-' +
maxCheck.value +
' characters>'
);
} else if (minCheck) {
return '<string with at least ' + minCheck.value + ' characters>';
} else if (maxCheck) {
return '<string up to ' + maxCheck.value + ' characters>';
}
}
return '<string>';
return 'string';
case 'ZodNumber':
// Check for int, positive, min/max constraints
if (def.checks) {
const intCheck = def.checks.find((c) => c.kind === 'int');
const minCheck = def.checks.find((c) => c.kind === 'min');
const maxCheck = def.checks.find((c) => c.kind === 'max');
if (intCheck && minCheck && minCheck.value > 0) {
return '<positive integer>';
} else if (intCheck) {
return '<integer>';
} else if (minCheck || maxCheck) {
return (
'<number' +
(minCheck ? ' >= ' + minCheck.value : '') +
(maxCheck ? ' <= ' + maxCheck.value : '') +
'>'
);
}
}
return '<number>';
return 0;
case 'ZodBoolean':
return '<boolean>';
return false;
case 'ZodArray':
const elementExample = generateExampleFromSchema(def.type);
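The visible effect of this hunk: leaf types now produce plain primitive examples instead of descriptive placeholders. A standalone sketch of the new mapping (illustrative only; the real function walks a full Zod schema):
```javascript
// New behavior per the hunk above: ZodString -> 'string', ZodNumber -> 0, ZodBoolean -> false
function exampleForLeafType(typeName) {
	switch (typeName) {
		case 'ZodString':
			return 'string'; // previously '<string>' or a length-annotated placeholder
		case 'ZodNumber':
			return 0; // previously '<number>', '<integer>', or a range placeholder
		case 'ZodBoolean':
			return false; // previously '<boolean>'
		default:
			return undefined;
	}
}

console.log(exampleForLeafType('ZodNumber')); // 0
```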

File diff suppressed because one or more lines are too long

package-lock.json (generated): 6539 changed lines. File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.28.0-rc.1",
"version": "0.26.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",
@@ -12,12 +12,12 @@
"workspaces": ["apps/*", "packages/*", "."],
"scripts": {
"build": "npm run build:build-config && cross-env NODE_ENV=production tsdown",
"dev": "tsdown --watch",
"dev": "tsdown --watch='packages/*/src/**/*' --watch='apps/cli/src/**/*' --watch='bin/**/*' --watch='mcp-server/**/*'",
"turbo:dev": "turbo dev",
"turbo:build": "turbo build",
"turbo:typecheck": "turbo typecheck",
"build:build-config": "npm run build -w @tm/build-config",
"test": "cross-env NODE_ENV=test node --experimental-vm-modules node_modules/.bin/jest",
"test": "node --experimental-vm-modules node_modules/.bin/jest",
"test:unit": "node --experimental-vm-modules node_modules/.bin/jest --testPathPattern=unit",
"test:integration": "node --experimental-vm-modules node_modules/.bin/jest --testPathPattern=integration",
"test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
@@ -33,9 +33,7 @@
"inspector": "npx @modelcontextprotocol/inspector node dist/mcp-server.js",
"mcp-server": "node dist/mcp-server.js",
"format-check": "biome format .",
"format": "biome format . --write",
"deps:check": "manypkg check || echo 'Note: Workspace package version warnings are expected for internal @tm/* packages'",
"deps:fix": "manypkg fix"
"format": "biome format . --write"
},
"keywords": [
"claude",
@@ -52,27 +50,23 @@
"author": "Eyal Toledano",
"license": "MIT WITH Commons-Clause",
"dependencies": {
"@ai-sdk/amazon-bedrock": "^3.0.23",
"@ai-sdk/anthropic": "^2.0.18",
"@ai-sdk/azure": "^2.0.34",
"@ai-sdk/google": "^2.0.16",
"@ai-sdk/google-vertex": "^3.0.29",
"@ai-sdk/groq": "^2.0.21",
"@ai-sdk/mistral": "^2.0.16",
"@ai-sdk/openai": "^2.0.34",
"@ai-sdk/perplexity": "^2.0.10",
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.10",
"@ai-sdk/xai": "^2.0.22",
"@aws-sdk/credential-providers": "^3.895.0",
"@ai-sdk/amazon-bedrock": "^2.2.9",
"@ai-sdk/anthropic": "^1.2.10",
"@ai-sdk/azure": "^1.3.17",
"@ai-sdk/google": "^1.2.13",
"@ai-sdk/google-vertex": "^2.2.23",
"@ai-sdk/groq": "^1.2.9",
"@ai-sdk/mistral": "^1.2.7",
"@ai-sdk/openai": "^1.3.20",
"@ai-sdk/perplexity": "^1.1.7",
"@ai-sdk/xai": "^1.2.15",
"@anthropic-ai/sdk": "^0.39.0",
"@aws-sdk/credential-providers": "^3.817.0",
"@inquirer/search": "^3.0.15",
"@openrouter/ai-sdk-provider": "^1.2.0",
"@openrouter/ai-sdk-provider": "^0.4.5",
"@streamparser/json": "^0.0.22",
"@supabase/supabase-js": "^2.57.4",
"ai": "^5.0.51",
"ai-sdk-provider-claude-code": "^1.1.4",
"ai-sdk-provider-codex-cli": "^0.3.0",
"ai-sdk-provider-gemini-cli": "^1.1.1",
"@tm/cli": "*",
"ai": "^4.3.10",
"ajv": "^8.17.1",
"ajv-formats": "^3.0.1",
"boxen": "^8.0.1",
@@ -80,9 +74,9 @@
"cli-highlight": "^2.1.11",
"cli-progress": "^3.12.0",
"cli-table3": "^0.6.5",
"commander": "^12.1.0",
"commander": "^11.1.0",
"cors": "^2.8.5",
"dotenv": "^16.6.1",
"dotenv": "^16.3.1",
"express": "^4.21.2",
"fastmcp": "^3.5.0",
"figlet": "^1.8.0",
@@ -97,14 +91,17 @@
"lru-cache": "^10.2.0",
"marked": "^15.0.12",
"marked-terminal": "^7.3.0",
"ollama-ai-provider-v2": "^1.3.1",
"ollama-ai-provider": "^1.2.0",
"openai": "^4.89.0",
"ora": "^8.2.0",
"uuid": "^11.1.0",
"zod": "^4.1.11"
"zod": "^3.23.8",
"zod-to-json-schema": "^3.24.5"
},
"optionalDependencies": {
"@anthropic-ai/claude-code": "^1.0.88",
"@biomejs/cli-linux-x64": "^1.9.4"
"@biomejs/cli-linux-x64": "^1.9.4",
"ai-sdk-provider-gemini-cli": "^0.1.3"
},
"engines": {
"node": ">=18.0.0"
@@ -127,13 +124,11 @@
"@biomejs/biome": "^1.9.4",
"@changesets/changelog-github": "^0.5.1",
"@changesets/cli": "^2.28.1",
"@manypkg/cli": "^0.25.1",
"@tm/ai-sdk-provider-grok-cli": "*",
"@tm/cli": "*",
"@types/jest": "^29.5.14",
"@types/marked-terminal": "^6.1.1",
"concurrently": "^9.2.1",
"cross-env": "^10.0.0",
"dotenv-mono": "^1.5.1",
"execa": "^8.0.1",
"jest": "^29.7.0",
"jest-environment-node": "^29.7.0",
@@ -142,8 +137,8 @@
"supertest": "^7.1.0",
"ts-jest": "^29.4.2",
"tsdown": "^0.15.2",
"tsx": "^4.20.4",
"turbo": "2.5.6",
"tsx": "^4.16.2",
"turbo": "^2.5.6",
"typescript": "^5.9.2"
}
}

View File

@@ -1,165 +0,0 @@
# AI SDK Provider for Grok CLI
A provider for the [AI SDK](https://sdk.vercel.ai) that integrates with [Grok CLI](https://docs.x.ai/api) for accessing xAI's Grok language models.
## Features
- **AI SDK v5 Compatible** - Full support for the latest AI SDK interfaces
- **Streaming & Non-streaming** - Both generation modes supported
- **Error Handling** - Comprehensive error handling with retry logic
- **Type Safety** - Full TypeScript support with proper type definitions
- **JSON Mode** - Automatic JSON extraction from responses
- **Abort Signals** - Proper cancellation support
## Installation
```bash
npm install @tm/ai-sdk-provider-grok-cli
# or
yarn add @tm/ai-sdk-provider-grok-cli
```
## Prerequisites
1. Install the Grok CLI:
```bash
npm install -g grok-cli
# or follow xAI's installation instructions
```
2. Set up authentication:
```bash
export GROK_CLI_API_KEY="your-api-key"
# or configure via grok CLI: grok config set api-key your-key
```
## Usage
### Basic Usage
```typescript
import { grokCli } from '@tm/ai-sdk-provider-grok-cli';
import { generateText } from 'ai';
const result = await generateText({
model: grokCli('grok-3-latest'),
prompt: 'Write a haiku about TypeScript'
});
console.log(result.text);
```
### Streaming
```typescript
import { grokCli } from '@tm/ai-sdk-provider-grok-cli';
import { streamText } from 'ai';
const { textStream } = await streamText({
model: grokCli('grok-4-latest'),
prompt: 'Explain quantum computing'
});
for await (const delta of textStream) {
process.stdout.write(delta);
}
```
### JSON Mode
```typescript
import { grokCli } from '@tm/ai-sdk-provider-grok-cli';
import { generateObject } from 'ai';
import { z } from 'zod';
const result = await generateObject({
model: grokCli('grok-3-latest'),
schema: z.object({
name: z.string(),
age: z.number(),
hobbies: z.array(z.string())
}),
prompt: 'Generate a person profile'
});
console.log(result.object);
```
## Supported Models
- `grok-3-latest` - Grok 3 (latest version)
- `grok-4-latest` - Grok 4 (latest version)
- `grok-4` - Grok 4 (stable)
- Custom model strings supported
## Configuration
### Provider Settings
```typescript
import { createGrokCli } from '@tm/ai-sdk-provider-grok-cli';
const grok = createGrokCli({
apiKey: 'your-api-key', // Optional if set via env/CLI
timeout: 120000, // 2 minutes default
workingDirectory: '/path/to/project', // Optional
baseURL: 'https://api.x.ai' // Optional
});
```
### Model Settings
```typescript
const model = grok('grok-4-latest', {
timeout: 300000, // 5 minutes for grok-4
// Other CLI-specific settings
});
```
## Error Handling
The provider includes comprehensive error handling:
```typescript
import {
isAuthenticationError,
isTimeoutError,
isInstallationError
} from '@tm/ai-sdk-provider-grok-cli';
try {
const result = await generateText({
model: grokCli('grok-4-latest'),
prompt: 'Hello!'
});
} catch (error) {
if (isAuthenticationError(error)) {
console.error('Authentication failed:', error.message);
} else if (isTimeoutError(error)) {
console.error('Request timed out:', error.message);
} else if (isInstallationError(error)) {
console.error('Grok CLI not installed or not found in PATH');
}
}
```
## Development
```bash
# Install dependencies
npm install
# Start development mode (keep running during development)
npm run dev
# Type check
npm run typecheck
# Run tests (requires build first)
NODE_ENV=production npm run build
npm test
```
**Important**: Always run `npm run dev` and keep it running during development. This ensures proper compilation and hot-reloading of TypeScript files.

View File

@@ -1,35 +0,0 @@
{
"name": "@tm/ai-sdk-provider-grok-cli",
"private": true,
"description": "AI SDK provider for Grok CLI integration",
"type": "module",
"types": "./src/index.ts",
"main": "./dist/index.js",
"exports": {
".": "./src/index.ts"
},
"scripts": {
"test": "vitest run",
"test:watch": "vitest",
"test:ui": "vitest --ui",
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.10",
"jsonc-parser": "^3.3.1"
},
"devDependencies": {
"@types/node": "^22.18.6",
"typescript": "^5.9.2",
"vitest": "^3.2.4"
},
"engines": {
"node": ">=18"
},
"keywords": ["ai", "grok", "x.ai", "cli", "language-model", "provider"],
"files": ["dist/**/*", "README.md"],
"publishConfig": {
"access": "public"
}
}

View File

@@ -1,188 +0,0 @@
/**
* Tests for error handling utilities
*/
import { APICallError, LoadAPIKeyError } from '@ai-sdk/provider';
import { describe, expect, it } from 'vitest';
import {
createAPICallError,
createAuthenticationError,
createInstallationError,
createTimeoutError,
getErrorMetadata,
isAuthenticationError,
isInstallationError,
isTimeoutError
} from './errors.js';
describe('createAPICallError', () => {
it('should create APICallError with metadata', () => {
const error = createAPICallError({
message: 'Test error',
code: 'TEST_ERROR',
exitCode: 1,
stderr: 'Error output',
stdout: 'Success output',
promptExcerpt: 'Test prompt',
isRetryable: true
});
expect(error).toBeInstanceOf(APICallError);
expect(error.message).toBe('Test error');
expect(error.isRetryable).toBe(true);
expect(error.url).toBe('grok-cli://command');
expect(error.data).toEqual({
code: 'TEST_ERROR',
exitCode: 1,
stderr: 'Error output',
stdout: 'Success output',
promptExcerpt: 'Test prompt'
});
});
it('should create APICallError with minimal parameters', () => {
const error = createAPICallError({
message: 'Simple error'
});
expect(error).toBeInstanceOf(APICallError);
expect(error.message).toBe('Simple error');
expect(error.isRetryable).toBe(false);
});
});
describe('createAuthenticationError', () => {
it('should create LoadAPIKeyError with custom message', () => {
const error = createAuthenticationError({
message: 'Custom auth error'
});
expect(error).toBeInstanceOf(LoadAPIKeyError);
expect(error.message).toBe('Custom auth error');
});
it('should create LoadAPIKeyError with default message', () => {
const error = createAuthenticationError({});
expect(error).toBeInstanceOf(LoadAPIKeyError);
expect(error.message).toContain('Authentication failed');
});
});
describe('createTimeoutError', () => {
it('should create APICallError for timeout', () => {
const error = createTimeoutError({
message: 'Operation timed out',
timeoutMs: 5000,
promptExcerpt: 'Test prompt'
});
expect(error).toBeInstanceOf(APICallError);
expect(error.message).toBe('Operation timed out');
expect(error.isRetryable).toBe(true);
expect(error.data).toEqual({
code: 'TIMEOUT',
promptExcerpt: 'Test prompt',
timeoutMs: 5000
});
});
});
describe('createInstallationError', () => {
it('should create APICallError for installation issues', () => {
const error = createInstallationError({
message: 'CLI not found'
});
expect(error).toBeInstanceOf(APICallError);
expect(error.message).toBe('CLI not found');
expect(error.isRetryable).toBe(false);
expect(error.url).toBe('grok-cli://installation');
});
it('should create APICallError with default message', () => {
const error = createInstallationError({});
expect(error).toBeInstanceOf(APICallError);
expect(error.message).toContain('Grok CLI is not installed');
});
});
describe('isAuthenticationError', () => {
it('should return true for LoadAPIKeyError', () => {
const error = new LoadAPIKeyError({ message: 'Auth failed' });
expect(isAuthenticationError(error)).toBe(true);
});
it('should return true for APICallError with 401 exit code', () => {
const error = new APICallError({
message: 'Unauthorized',
data: { exitCode: 401 }
});
expect(isAuthenticationError(error)).toBe(true);
});
it('should return false for other errors', () => {
const error = new Error('Generic error');
expect(isAuthenticationError(error)).toBe(false);
});
});
describe('isTimeoutError', () => {
it('should return true for timeout APICallError', () => {
const error = new APICallError({
message: 'Timeout',
data: { code: 'TIMEOUT' }
});
expect(isTimeoutError(error)).toBe(true);
});
it('should return false for other errors', () => {
const error = new APICallError({ message: 'Other error' });
expect(isTimeoutError(error)).toBe(false);
});
});
describe('isInstallationError', () => {
it('should return true for installation APICallError', () => {
const error = new APICallError({
message: 'Not installed',
url: 'grok-cli://installation'
});
expect(isInstallationError(error)).toBe(true);
});
it('should return false for other errors', () => {
const error = new APICallError({ message: 'Other error' });
expect(isInstallationError(error)).toBe(false);
});
});
describe('getErrorMetadata', () => {
it('should return metadata from APICallError', () => {
const metadata = {
code: 'TEST_ERROR',
exitCode: 1,
stderr: 'Error output'
};
const error = new APICallError({
message: 'Test error',
data: metadata
});
const result = getErrorMetadata(error);
expect(result).toEqual(metadata);
});
it('should return undefined for errors without metadata', () => {
const error = new Error('Generic error');
const result = getErrorMetadata(error);
expect(result).toBeUndefined();
});
it('should return undefined for APICallError without data', () => {
const error = new APICallError({ message: 'Test error' });
const result = getErrorMetadata(error);
expect(result).toBeUndefined();
});
});

Some files were not shown because too many files have changed in this diff.