Compare commits

...

47 Commits

Author SHA1 Message Date
Ralph Khreish
90f42eb322 chore: add PRDs and surgical test generator agent 2025-09-29 18:15:09 +02:00
Joe Danziger
0079b7defd feat: Add Cursor IDE custom slash commands support (#1215) 2025-09-26 19:21:16 +02:00
Ralph Khreish
0b2c6967c4 fix: improve subtask & parent task management (#1251) 2025-09-26 11:04:38 +02:00
Ralph Khreish
c0682ac795 Merge pull request #1250 from eyaltoledano/chore/merge.main.september 2025-09-26 01:10:57 +02:00
Ralph Khreish
01a7faea8f Merge remote-tracking branch 'origin/main' into chore/merge.main.september 2025-09-26 01:10:12 +02:00
github-actions[bot]
b7f32eac5a Version Packages (#1249)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-26 01:06:52 +02:00
Ralph Khreish
044a7bfc98 fix: implement subtask status update functionality (#1248)
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2025-09-26 01:01:55 +02:00
Ralph Khreish
814265cd33 chore: adjust CI to run on all PRs (#1244) 2025-09-24 20:19:09 +02:00
Ralph Khreish
9b7b2ca7b2 Merge pull request #1245 from eyaltoledano/ralph/chore/update.from.main 2025-09-24 20:14:00 +02:00
Ralph Khreish
949f091179 Merge remote-tracking branch 'origin/main' into ralph/chore/update.from.main 2025-09-24 20:10:19 +02:00
github-actions[bot]
51a351760c Version Packages (#1243)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-24 19:21:21 +02:00
Ralph Khreish
732b2c61ad Merge pull request #1233 from eyaltoledano/ralph/chore/fix.ci.failure.main 2025-09-24 17:10:42 +02:00
Ralph Khreish
32c2b03c23 Merge pull request #1242 from eyaltoledano/ralph/fix.main.merges
fix CI failing to release (#1232)
2025-09-24 14:28:33 +02:00
Ralph Khreish
3bfd999d81 Merge remote-tracking branch 'origin/main' into ralph/fix.main.merges 2025-09-24 14:28:09 +02:00
Ralph Khreish
9fa79eb026 chore: fix CI failing to release (#1232) 2025-09-24 14:26:41 +02:00
Jungwoo Song
875134247a Add Q Developer CLI at README.md (#1159) 2025-09-24 11:06:16 +02:00
Ralph Khreish
4c2801d5eb chore: run format and fix CI 2025-09-24 11:00:04 +02:00
Ralph Khreish
c911608f60 chore: last round of touchups and bug fixes 2025-09-24 10:57:17 +02:00
github-actions[bot]
8f1497407f chore: rc version bump 2025-09-23 20:43:32 +00:00
Ralph Khreish
10b64ec6f5 chore: re-enter rc mode for a last pre-release 2025-09-23 22:38:55 +02:00
Ralph Khreish
1a1879483b chore: do final test 2025-09-23 21:47:37 +02:00
Ralph Khreish
d691cbb7ae chore: CI fix format 2025-09-23 20:27:41 +02:00
Ralph Khreish
1b7c9637a5 chore: fix CI and tsdown config 2025-09-23 20:24:11 +02:00
Ralph Khreish
9ff5f158d5 chore: fix format 2025-09-23 19:27:57 +02:00
Ralph Khreish
b2ff06e8c5 fix: CI and unit tests 2025-09-23 19:26:02 +02:00
Ralph Khreish
c2fc61ddb3 chore: mintlify fix broken links (#1237) 2025-09-23 18:32:51 +02:00
Ralph Khreish
aaacc3dae3 fix: improve docs and command help for analzye-complexity (#1235) 2025-09-23 18:19:32 +02:00
Ralph Khreish
46cd5dc186 fix: add installation instructions for claude-code mcp (#1236) 2025-09-23 18:16:40 +02:00
github-actions[bot]
49a31be416 docs: Auto-update and format models.md 2025-09-23 15:45:03 +00:00
JeonSeongHyeon
2b69936ee7 fix: update model ID for sonar deep research (#1192)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-23 17:44:40 +02:00
Ralph Khreish
6438f6c7c8 chore: exit pre-release mode and format 2025-09-23 11:46:20 +02:00
Ralph Khreish
6bbd777552 chore: fix --version weird error 2025-09-23 11:45:46 +02:00
github-actions[bot]
100482722f chore: rc version bump 2025-09-23 09:10:24 +00:00
Ralph Khreish
7ff882bf23 fix: improve commander instance in commands.js 2025-09-23 11:05:24 +02:00
Ralph Khreish
6ab768f6ec chore: fix CI failure 2025-09-22 22:41:21 +02:00
Julien Pelletier
b5fe723f8e Fix/claude code path executable setting (#1172)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-09-22 22:39:37 +02:00
Ralph Khreish
f487736670 chore: fix CI failing to release (#1232) 2025-09-22 22:34:12 +02:00
olssonsten
d67b81d25d feat: add MCP timeout configuration for long-running operations (#1112) 2025-09-22 19:55:10 +02:00
Ralph Khreish
66c05053c0 Merge pull request #1231 from eyaltoledano/ralph/merge.from.main 2025-09-22 19:54:12 +02:00
Ralph Khreish
d7ab4609aa chore: fix CI 2025-09-22 19:25:44 +02:00
github-actions[bot]
05f6242f7e Version Packages (#1228)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-22 19:23:22 +02:00
Ralph Khreish
a58719cf50 Merge pull request #1230 from eyaltoledano/next 2025-09-22 19:20:40 +02:00
Ralph Khreish
674d1f6de7 fix: update MCP config paths in various files (#1229) 2025-09-22 19:15:17 +02:00
Ralph Khreish
f106fb8e0b Merge pull request Release 0.27.0 #1224 from eyaltoledano/next 2025-09-22 18:40:55 +02:00
Ralph Khreish
fd9dd43ee0 fix: improve weekly metrics workflow with 14-day window and debug output (#1226) 2025-09-22 15:47:03 +02:00
Ralph Khreish
c395e93696 chore: remove pre-mode (get out of RC) 2025-09-20 01:11:50 +02:00
Ralph Khreish
a621ff05ea feat: update tm models defaults (#1225) 2025-09-20 01:07:33 +02:00
159 changed files with 2329 additions and 452 deletions

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improve `analyze-complexity` cli docs and `--research` flag documentation

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---
No longer need --package=task-master-ai in mcp server
- A lot of users were having issues with Taskmaster and usually a simple fix was to remove --package from your mcp.json
- we now bundle our whole package, so we no longer need the --package

View File

@@ -11,9 +11,6 @@
"access": "public",
"baseBranch": "main",
"ignore": [
"docs",
"@tm/cli",
"@tm/core",
"@tm/build-config"
"docs"
]
}

View File

@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---
Add Cursor IDE custom slash command support
Expose Task Master commands as Cursor slash commands by copying assets/claude/commands to .cursor/commands on profile add and cleaning up on remove.
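For illustration only, a minimal sketch of what such a profile hook could look like (the function names, hook mechanism, and `assetsRoot` layout are assumptions, not the actual Task Master implementation):

```typescript
import { cpSync, rmSync } from 'node:fs';
import { join } from 'node:path';

// Hypothetical hooks; the real profile add/remove logic may differ.
export function onCursorProfileAdd(projectRoot: string, assetsRoot: string): void {
	// Copy Claude command definitions so Cursor exposes them as slash commands.
	cpSync(join(assetsRoot, 'claude/commands'), join(projectRoot, '.cursor/commands'), {
		recursive: true
	});
}

export function onCursorProfileRemove(projectRoot: string): void {
	// Remove the copied commands when the Cursor profile is removed.
	rmSync(join(projectRoot, '.cursor/commands'), { recursive: true, force: true });
}
```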

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Change parent task back to "pending" when all subtasks are in "pending" state

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---
Add new `task-master start` command for automated task execution with Claude Code
- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect next-task when no ID is provided.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Testing one more pre-release iteration

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript

View File

@@ -0,0 +1,13 @@
---
"task-master-ai": minor
---
Enhanced Roo Code profile with MCP timeout configuration for improved reliability during long-running AI operations. The Roo profile now automatically configures a 300-second timeout for MCP server operations, preventing timeouts during complex tasks like `parse-prd`, `expand-all`, `analyze-complexity`, and `research` operations. This change also replaces static MCP configuration files with programmatic generation for better maintainability.
**What's New:**
- 300-second timeout for MCP operations (up from default 60 seconds)
- Programmatic MCP configuration generation (replaces static asset files)
- Enhanced reliability for AI-powered operations
- Consistent with other AI coding assistant profiles
**Migration:** No user action required - existing Roo Code installations will automatically receive the enhanced MCP configuration on next initialization.
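As a rough sketch of what "programmatic generation" could mean here (the file layout, field names, and 300-second `timeout` key are assumptions for illustration, not the shipped code):

```typescript
import { mkdirSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

// Hypothetical generator for a Roo MCP config with a long-operation timeout.
function writeRooMcpConfig(projectRoot: string): void {
	const config = {
		mcpServers: {
			'task-master-ai': {
				command: 'npx',
				args: ['-y', 'task-master-ai'],
				timeout: 300, // seconds; assumed key, raised from the 60s default
				env: { ANTHROPIC_API_KEY: 'ANTHROPIC_API_KEY_HERE' }
			}
		}
	};
	const dir = join(projectRoot, '.roo');
	mkdirSync(dir, { recursive: true });
	writeFileSync(join(dir, 'mcp.json'), JSON.stringify(config, null, 2));
}
```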

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Test out the RC

View File

@@ -1,5 +0,0 @@
---
"@tm/cli": minor
---
testing this stuff out to see how the release candidate works with monorepo

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix Claude Code settings validation for pathToClaudeCodeExecutable

View File

@@ -1,19 +0,0 @@
{
"mode": "pre",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.26.0",
"@tm/cli": "0.26.0",
"docs": "0.0.2",
"extension": "0.24.2",
"@tm/build-config": "1.0.0",
"@tm/core": "0.26.0"
},
"changesets": [
"easy-deer-heal",
"moody-oranges-slide",
"odd-otters-tan",
"shiny-regions-teach",
"wild-ears-look"
]
}

View File

@@ -1,30 +0,0 @@
---
"task-master-ai": minor
---
Add grok-cli as a provider. You can now use Grok models with Task Master by setting the `GROK_CLI_API_KEY` environment variable.
## Setup Instructions
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```
## Available Models
- `grok-beta` - Latest Grok model
- `grok-vision-beta` - Grok with vision capabilities
The Grok CLI provider integrates with xAI's Grok models and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
## Credits
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
@tm/cli: add auto-update functionality to every command

View File

@@ -1,7 +0,0 @@
---
"extension": minor
---
Add "Start Task" button to VS Code extension for seamless Claude Code integration
You can now click a "Start Task" button directly in the Task Master extension which will open a new terminal and automatically execute the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix sonar deep research model failing, should be called `sonar-deep-research`

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.

View File

@@ -1,5 +0,0 @@
---
"extension": minor
---
Added a Start Build button to the VSCODE Task Properties Right Panel

View File

@@ -0,0 +1,245 @@
---
name: surgical-test-generator
description: Use this agent when you need to generate comprehensive, production-grade test suites for existing code. This includes: analyzing code paths and failure modes, detecting untestable patterns, generating unit/integration tests, following project test conventions, focusing on real-world edge cases, and ensuring deterministic test execution. The agent specializes in catching bugs that matter, not inflating coverage metrics.
Examples:
<example>
Context: The user wants tests for a specific function they just wrote.
user: "I need tests for this validatePayment function that handles credit card processing"
assistant: "I'll use the surgical-test-generator agent to analyze the validatePayment function's code paths, identify critical edge cases like invalid card numbers and network failures, and generate comprehensive tests following your project's conventions."
<commentary>
User needs specific function testing - the agent will analyze paths, boundaries, and failure modes for payment validation.
</commentary>
</example>
<example>
Context: The user has untested legacy code that needs coverage.
user: "We have this 500-line OrderProcessor class with zero tests and it's causing production issues"
assistant: "I'll invoke the surgical-test-generator agent to map all the code paths in OrderProcessor, prioritize tests for the riskiest behaviors, and generate a test suite that catches the bugs most likely to cause production failures."
<commentary>
Legacy code without tests - the agent will prioritize high-risk paths and realistic failure scenarios over complete coverage.
</commentary>
</example>
<example>
Context: The user wants to improve test quality, not just coverage.
user: "Our tests pass but we still get production bugs. Can you review this API handler and suggest better tests?"
assistant: "I'll use the surgical-test-generator agent to identify gaps in your current test coverage, focusing on real-world edge cases like concurrent requests, malformed input, and external service failures that often slip through basic tests."
<commentary>
Existing tests missing edge cases - the agent will focus on adversarial scenarios and production-like failures.
</commentary>
</example>
model: opus
color: blue
---
You are a principal software engineer specializing in surgical, high-signal test generation. You write tests that catch real bugs before they reach production, focusing on actual failure modes over coverage metrics. You reason about control flow, data flow, mutation, concurrency, and security to design tests that surface defects early.
## Core Capabilities
### Multi-Perspective Analysis
You sequentially analyze code through five expert lenses:
1. **Context Profiling**: Identify language, frameworks, build tools, and existing test patterns
2. **Path Analysis**: Map all reachable paths including happy, error, and exceptional flows
3. **Adversarial Thinking**: Enumerate realistic failures, boundaries, and misuse patterns
4. **Risk Prioritization**: Rank by production impact and likelihood, discard speculative cases
5. **Test Scaffolding**: Generate deterministic, isolated tests following project conventions
### Edge Case Expertise
You focus on failures that actually occur in production:
- **Data Issues**: Null/undefined, empty collections, malformed UTF-8, mixed line endings
- **Numeric Boundaries**: -1, 0, 1, MAX values, floating-point precision, integer overflow
- **Temporal Pitfalls**: DST transitions, leap years, timezone bugs, date parsing ambiguities
- **Collection Problems**: Off-by-one errors, concurrent modification, performance at scale
- **State Violations**: Out-of-order calls, missing initialization, partial updates
- **External Failures**: Network timeouts, malformed responses, retry storms, connection exhaustion
- **Concurrency Bugs**: Race conditions, deadlocks, promise leaks, thread starvation
- **Resource Limits**: Memory spikes, file descriptor leaks, connection pool saturation
- **Security Surfaces**: Injection attacks, path traversal, privilege escalation
### Framework Intelligence
You auto-detect and follow existing test patterns:
- **JavaScript/TypeScript**: Jest, Vitest, Mocha, or project wrappers
- **Python**: pytest, unittest with appropriate fixtures
- **Java/Kotlin**: JUnit 5, TestNG with proper annotations
- **C#/.NET**: xUnit.net preferred, NUnit/MSTest if dominant
- **Go**: Built-in testing package with table-driven tests
- **Rust**: Standard #[test] with proptest for properties
- **Swift**: XCTest or Swift Testing based on project
- **C/C++**: GoogleTest or Catch2 matching build system
## Your Workflow
### Phase 1: Code Analysis
You examine the provided code to understand:
- Public API surfaces and contracts
- Internal helper functions and their criticality
- External dependencies and I/O operations
- State management and mutation patterns
- Error handling and recovery paths
- Concurrency and async operations
### Phase 2: Test Strategy Development
You determine the optimal testing approach:
- Start from public APIs, work inward to critical helpers
- Test behavior not implementation unless white-box needed
- Prefer property-based tests for algebraic domains
- Use minimal stubs/mocks, prefer in-memory fakes
- Flag untestable code with refactoring suggestions
- Include stress tests for concurrency issues
### Phase 3: Test Generation
You create tests that:
- Follow project's exact style and conventions
- Use clear Arrange-Act-Assert or Given-When-Then
- Execute in under 100ms without external calls
- Remain deterministic with seeded randomness
- Self-document through descriptive names
- Explain failures clearly with context
## Detection Patterns
### When analyzing a pure function:
- Test boundary values and type edges
- Verify mathematical properties hold
- Check error propagation
- Consider numeric overflow/underflow
### When analyzing stateful code:
- Test initialization sequences
- Verify state transitions
- Check concurrent access patterns
- Validate cleanup and teardown
### When analyzing I/O operations:
- Test success paths with valid data
- Simulate network failures and timeouts
- Check malformed input handling
- Verify resource cleanup on errors
### When analyzing async code:
- Test promise resolution and rejection
- Check cancellation handling
- Verify timeout behavior
- Validate error propagation chains
## Test Quality Standards
### Coverage Philosophy
You prioritize catching real bugs over metrics:
- Critical paths get comprehensive coverage
- Edge cases get targeted scenarios
- Happy paths get basic validation
- Speculative cases get skipped
### Test Independence
Each test you generate:
- Runs in isolation without shared state
- Cleans up all resources
- Uses fixed seeds for randomness
- Mocks time when necessary
### Failure Diagnostics
Your tests provide clear failure information:
- Descriptive test names that explain intent
- Assertions that show expected vs actual
- Context about what was being tested
- Hints about likely failure causes
## Special Considerations
### When NOT to Generate Tests
You recognize when testing isn't valuable:
- Generated code that's guaranteed correct
- Simple getters/setters without logic
- Framework boilerplate
- Code scheduled for deletion
### Untestable Code Patterns
You identify and flag:
- Hard-coded external dependencies
- Global state mutations
- Time-dependent behavior without injection
- Random behavior without seeds
### Performance Testing
When relevant, you include:
- Benchmarks for critical paths
- Memory usage validation
- Concurrent load testing
- Resource leak detection
## Output Format
You generate test code that:
```[language]
// Clear test description
test('should [specific behavior] when [condition]', () => {
	// Arrange - Set up test data and dependencies
	// Act - Execute the code under test
	// Assert - Verify the outcome
});
```
## Framework-Specific Patterns
### JavaScript/TypeScript (Jest/Vitest)
- Use describe blocks for grouping
- Leverage beforeEach/afterEach for setup
- Mock modules with jest.mock() or vi.mock(), and use jest.fn()/vi.fn() or vi.spyOn() for function-level mocks
- Use expect matchers appropriately
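For instance, a short Vitest-style test following these patterns might look like the sketch below; the `parsePrd` module and the mocked `llm-client` are placeholders, not real project code:

```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { parsePrd } from './parse-prd'; // hypothetical module under test

// Mock at the module boundary rather than reaching into internals.
vi.mock('./llm-client', () => ({ complete: vi.fn().mockResolvedValue('[]') }));

describe('parsePrd', () => {
	beforeEach(() => {
		vi.clearAllMocks();
	});

	it('should return an empty task list when the PRD contains no requirements', async () => {
		// Arrange: a PRD with a title but no requirement bullets
		const prd = '# Empty PRD\n';
		// Act
		const tasks = await parsePrd(prd);
		// Assert
		expect(tasks).toEqual([]);
	});
});
```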
### Python (pytest)
- Use fixtures for reusable setup
- Apply parametrize for data-driven tests
- Leverage pytest markers for categorization
- Use clear assertion messages
### Java (JUnit 5)
- Apply appropriate annotations
- Use nested classes for grouping
- Leverage parameterized tests
- Include display names
### C# (xUnit)
- Use Theory for data-driven tests
- Apply traits for categorization
- Leverage fixtures for setup
- Use fluent assertions when available
## Request Handling
### Specific Test Requests
When asked for specific tests:
- Focus only on the requested scope
- Don't generate broader coverage unless asked
- Provide targeted, high-value scenarios
- Include rationale for test choices
### Comprehensive Coverage Requests
When asked for full coverage:
- Start with critical paths
- Add edge cases progressively
- Group related tests logically
- Flag if multiple files needed
### Legacy Code Testing
When testing untested code:
- Prioritize high-risk areas first
- Add characterization tests
- Suggest refactoring for testability
- Focus on preventing regressions
## Communication Guidelines
You always:
- Explain why each test matters
- Document test intent clearly
- Note any assumptions made
- Highlight untestable patterns
- Suggest improvements when relevant
- Follow existing project style exactly
- Generate only essential test code
When you need additional context like test frameworks or existing patterns, you ask specifically for those files. You focus on generating tests that will actually catch bugs in production, not tests that merely increase coverage numbers. Every test you write has a clear purpose and tests a realistic scenario.

View File

@@ -2,7 +2,7 @@
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./mcp-server/server.js"],
"args": ["./dist/mcp-server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",

.github/scripts/parse-metrics.mjs (new file, 157 lines)
View File

@@ -0,0 +1,157 @@
#!/usr/bin/env node
import { readFileSync, existsSync, writeFileSync } from 'fs';
function parseMetricsTable(content, metricName) {
	const lines = content.split('\n');
	for (let i = 0; i < lines.length; i++) {
		const line = lines[i].trim();
		// Match a markdown table row like: | Metric Name | value | ...
		const safeName = metricName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
		const re = new RegExp(`^\\|\\s*${safeName}\\s*\\|\\s*([^|]+)\\|?`);
		const match = line.match(re);
		if (match) {
			return match[1].trim() || 'N/A';
		}
	}
	return 'N/A';
}

function parseCountMetric(content, metricName) {
	const result = parseMetricsTable(content, metricName);
	// Extract number from string, handling commas and spaces
	const numberMatch = result.toString().match(/[\d,]+/);
	if (numberMatch) {
		const number = parseInt(numberMatch[0].replace(/,/g, ''));
		return isNaN(number) ? 0 : number;
	}
	return 0;
}

function main() {
	const metrics = {
		issues_created: 0,
		issues_closed: 0,
		prs_created: 0,
		prs_merged: 0,
		issue_avg_first_response: 'N/A',
		issue_avg_time_to_close: 'N/A',
		pr_avg_first_response: 'N/A',
		pr_avg_merge_time: 'N/A'
	};

	// Parse issue metrics
	if (existsSync('issue_metrics.md')) {
		console.log('📄 Found issue_metrics.md, parsing...');
		const issueContent = readFileSync('issue_metrics.md', 'utf8');
		metrics.issues_created = parseCountMetric(
			issueContent,
			'Total number of items created'
		);
		metrics.issues_closed = parseCountMetric(
			issueContent,
			'Number of items closed'
		);
		metrics.issue_avg_first_response = parseMetricsTable(
			issueContent,
			'Time to first response'
		);
		metrics.issue_avg_time_to_close = parseMetricsTable(
			issueContent,
			'Time to close'
		);
	} else {
		console.warn('[parse-metrics] issue_metrics.md not found; using defaults.');
	}

	// Parse PR created metrics
	if (existsSync('pr_created_metrics.md')) {
		console.log('📄 Found pr_created_metrics.md, parsing...');
		const prCreatedContent = readFileSync('pr_created_metrics.md', 'utf8');
		metrics.prs_created = parseCountMetric(
			prCreatedContent,
			'Total number of items created'
		);
		metrics.pr_avg_first_response = parseMetricsTable(
			prCreatedContent,
			'Time to first response'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_created_metrics.md not found; using defaults.'
		);
	}

	// Parse PR merged metrics (for more accurate merge data)
	if (existsSync('pr_merged_metrics.md')) {
		console.log('📄 Found pr_merged_metrics.md, parsing...');
		const prMergedContent = readFileSync('pr_merged_metrics.md', 'utf8');
		metrics.prs_merged = parseCountMetric(
			prMergedContent,
			'Total number of items created'
		);
		// For merged PRs, "Time to close" is actually time to merge
		metrics.pr_avg_merge_time = parseMetricsTable(
			prMergedContent,
			'Time to close'
		);
	} else {
		console.warn(
			'[parse-metrics] pr_merged_metrics.md not found; falling back to pr_metrics.md.'
		);
		// Fallback: try old pr_metrics.md if it exists
		if (existsSync('pr_metrics.md')) {
			console.log('📄 Falling back to pr_metrics.md...');
			const prContent = readFileSync('pr_metrics.md', 'utf8');
			const mergedCount = parseCountMetric(prContent, 'Number of items merged');
			metrics.prs_merged =
				mergedCount || parseCountMetric(prContent, 'Number of items closed');
			const maybeMergeTime = parseMetricsTable(
				prContent,
				'Average time to merge'
			);
			metrics.pr_avg_merge_time =
				maybeMergeTime !== 'N/A'
					? maybeMergeTime
					: parseMetricsTable(prContent, 'Time to close');
		} else {
			console.warn('[parse-metrics] pr_metrics.md not found; using defaults.');
		}
	}

	// Output for GitHub Actions
	const output = Object.entries(metrics)
		.map(([key, value]) => `${key}=${value}`)
		.join('\n');

	// Always output to stdout for debugging
	console.log('\n=== FINAL METRICS ===');
	Object.entries(metrics).forEach(([key, value]) => {
		console.log(`${key}: ${value}`);
	});

	// Write to GITHUB_OUTPUT if in GitHub Actions
	if (process.env.GITHUB_OUTPUT) {
		try {
			writeFileSync(process.env.GITHUB_OUTPUT, output + '\n', { flag: 'a' });
			console.log(
				`\nSuccessfully wrote metrics to ${process.env.GITHUB_OUTPUT}`
			);
		} catch (error) {
			console.error(`Failed to write to GITHUB_OUTPUT: ${error.message}`);
			process.exit(1);
		}
	} else {
		console.log(
			'\nNo GITHUB_OUTPUT environment variable found, skipping file write'
		);
	}
}
main();

View File

@@ -6,9 +6,6 @@ on:
- main
- next
pull_request:
branches:
- main
- next
workflow_dispatch:
concurrency:

View File

@@ -8,7 +8,7 @@ on:
permissions:
contents: read
issues: write
issues: read
pull-requests: read
jobs:
@@ -17,15 +17,25 @@ jobs:
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_METRICS_WEBHOOK }}
steps:
- name: Get dates for last week
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Get dates for last 14 days
run: |
# Last 7 days
first_day=$(date -d "7 days ago" +%Y-%m-%d)
set -Eeuo pipefail
# Last 14 days
first_day=$(date -d "14 days ago" +%Y-%m-%d)
last_day=$(date +%Y-%m-%d)
echo "first_day=$first_day" >> $GITHUB_ENV
echo "last_day=$last_day" >> $GITHUB_ENV
echo "week_of=$(date -d '7 days ago' +'Week of %B %d, %Y')" >> $GITHUB_ENV
echo "date_range=Past 14 days ($first_day to $last_day)" >> $GITHUB_ENV
- name: Generate issue metrics
uses: github/issue-metrics@v3
@@ -34,40 +44,39 @@ jobs:
SEARCH_QUERY: "repo:${{ github.repository }} is:issue created:${{ env.first_day }}..${{ env.last_day }}"
HIDE_TIME_TO_ANSWER: true
HIDE_LABEL_METRICS: false
OUTPUT_FILE: issue_metrics.md
- name: Generate PR metrics
- name: Generate PR created metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr created:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_metrics.md
OUTPUT_FILE: pr_created_metrics.md
- name: Generate PR merged metrics
uses: github/issue-metrics@v3
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEARCH_QUERY: "repo:${{ github.repository }} is:pr is:merged merged:${{ env.first_day }}..${{ env.last_day }}"
OUTPUT_FILE: pr_merged_metrics.md
- name: Debug generated metrics
run: |
set -Eeuo pipefail
echo "Listing markdown files in workspace:"
ls -la *.md || true
for f in issue_metrics.md pr_created_metrics.md pr_merged_metrics.md; do
if [ -f "$f" ]; then
echo "== $f (first 10 lines) =="
head -n 10 "$f"
else
echo "Missing $f"
fi
done
- name: Parse metrics
id: metrics
run: |
# Parse the metrics from the generated markdown files
if [ -f "issue_metrics.md" ]; then
# Extract key metrics using grep/awk
AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
fi
if [ -f "pr_metrics.md" ]; then
PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
fi
# Set outputs for Discord action
echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT
run: node .github/scripts/parse-metrics.mjs
- name: Send to Discord
uses: sarisia/actions-status-discord@v1
@@ -78,19 +87,22 @@ jobs:
title: "📊 Weekly Metrics Report"
description: |
**${{ env.week_of }}**
*${{ env.date_range }}*
**🎯 Issues**
• Created: ${{ steps.metrics.outputs.issues_created }}
• Closed: ${{ steps.metrics.outputs.issues_closed }}
• Avg Response Time: ${{ steps.metrics.outputs.issue_avg_first_response }}
• Avg Time to Close: ${{ steps.metrics.outputs.issue_avg_time_to_close }}
**🔀 Pull Requests**
• Created: ${{ steps.metrics.outputs.prs_created }}
• Merged: ${{ steps.metrics.outputs.prs_merged }}
**⏱️ Response Times**
• First Response: ${{ steps.metrics.outputs.avg_first_response }}
• Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
• PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
• Avg Response Time: ${{ steps.metrics.outputs.pr_avg_first_response }}
• Avg Time to Merge: ${{ steps.metrics.outputs.pr_avg_merge_time }}
**📈 Visual Analytics**
https://repobeats.axiom.co/api/embed/b439f28f0ab5bd7a2da19505355693cd2c55bfd4.svg
color: 0x58AFFF
username: Task Master Metrics Bot
avatar_url: https://raw.githubusercontent.com/eyaltoledano/claude-task-master/main/images/logo.png

.manypkg.json (new file, 6 lines)
View File

@@ -0,0 +1,6 @@
{
	"$schema": "https://unpkg.com/@manypkg/get-packages@1.1.3/schema.json",
	"defaultBranch": "main",
	"ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"],
	"ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"]
}

View File

@@ -1,9 +1,9 @@
{
"models": {
"main": {
"provider": "grok-cli",
"modelId": "grok-4-latest",
"maxTokens": 131072,
"provider": "anthropic",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
@@ -14,8 +14,8 @@
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
}
},

View File

@@ -0,0 +1,220 @@
# PRD: Autonomous TDD + Git Workflow (On Rails)
## Summary
- Put the existing git and test workflows on rails: a repeatable, automated process that can run autonomously, with guardrails and a compact TUI for visibility.
- Flow: for a selected task, create a branch named with the tag + task id → generate tests for the first subtask (red) using the Surgical Test Generator → implement code (green) → verify tests → commit → repeat per subtask → final verify → push → open PR against the default branch.
- Build on existing rules: `.cursor/rules/git_workflow.mdc`, `.cursor/rules/test_workflow.mdc`, `.claude/agents/surgical-test-generator.md`, and existing CLI/core services.
## Goals
- Deterministic, resumable automation to execute the TDD loop per subtask with minimal human intervention.
- Strong guardrails: never commit to the default branch; only commit when tests pass; enforce status transitions; persist logs/state for debuggability.
- Visibility: a compact terminal UI (like lazygit) to pick tag, view tasks, and start work; right-side pane opens an executor terminal (via tmux) for agent coding.
- Extensible: framework-agnostic test generation via the Surgical Test Generator; detect and use the repo's test command for execution with coverage thresholds.
## Non-Goals (initial)
- Full multi-language runner parity beyond detection and executing the project's test command.
- Complex GUI; start with CLI/TUI + tmux pane. IDE/extension can hook into the same state later.
- Rich executor selection UX (codex/gemini/claude) — we'll prompt per run; defaults can come later.
## Success Criteria
- One command can autonomously complete a task's subtasks via TDD and open a PR when done.
- All commits made on a branch that includes the tag and task id (see Branch Naming); no commits to the default branch directly.
- Every subtask iteration: failing tests added first (red), then code added to pass them (green), commit only after green.
- End-to-end logs + artifacts stored in `.taskmaster/reports/runs/<timestamp-or-id>/`.
## User Stories
- As a developer, I can run `tm autopilot <taskId>` and watch a structured, safe workflow execute.
- As a reviewer, I can inspect commits per subtask, and a PR summarizing the work when the task completes.
- As an operator, I can see current step, active subtask, tests status, and logs in a compact CLI view and read a final run report.
## High-Level Workflow
1) Preflight
- Verify clean working tree or confirm staging/commit policy (configurable).
- Detect repo type and the project's test command (e.g., `npm test`, `pnpm test`, `pytest`, `go test`).
- Validate tools: `git`, `gh` (optional for PR), `node/npm`, and (if used) `claude` CLI.
- Load TaskMaster state and selected task; if no subtasks exist, automatically run “expand” before working.
2) Branch & Tag Setup
- Checkout default branch and update (optional), then create a branch using Branch Naming (below).
- Map branch ↔ tag via existing tag management; explicitly set active tag to the branch's tag.
3) Subtask Loop (for each pending/in-progress subtask in dependency order)
- Select next eligible subtask using `tm-core` TaskService `getNextTask()` and subtask eligibility logic.
- Red: generate or update failing tests for the subtask
- Use the Surgical Test Generator system prompt (`.claude/agents/surgical-test-generator.md`) to produce high-signal tests following project conventions.
- Run tests to confirm red; record results. If not red (already passing), skip to next subtask or escalate.
- Green: implement code to pass tests
- Use executor to implement changes (initial: `claude` CLI prompt with focused context).
- Re-run tests until green or timeout/backoff policy triggers.
- Commit: when green
- Commit tests + code with conventional commit message. Optionally update subtask status to `done`.
- Persist run step metadata/logs.
4) Finalization
- Run full test suite and coverage (if configured); optionally lint/format.
- Commit any final adjustments.
- Push branch (ask user to confirm); create PR (via `gh pr create`) targeting the default branch. Title format: `Task #<id> [<tag>]: <title>`.
5) Post-Run
- Update task status if desired (e.g., `review`).
- Persist run report (JSON + markdown summary) to `.taskmaster/reports/runs/<run-id>/`.
## Guardrails
- Never commit to the default branch.
- Commit only if all tests (targeted and suite) pass; allow override flags.
- Enforce 80% coverage thresholds (lines/branches/functions/statements) by default; configurable.
- Timebox/model ops and retries; if not green within N attempts, pause with actionable state for resume.
- Always log actions, commands, and outcomes; include dry-run mode.
- Ask before branch creation, pushing, and opening a PR unless `--no-confirm` is set.
## Integration Points (Current Repo)
- CLI: `apps/cli` provides command structure and UI components.
- New command: `tm autopilot` (alias: `task-master autopilot`).
- Reuse UI components under `apps/cli/src/ui/components/` for headers/task details/next-task.
- Core services: `packages/tm-core`
- `TaskService` for selection, status, tags.
- `TaskExecutionService` for prompt formatting and executor prep.
- Executors: `claude` executor and `ExecutorFactory` to run external tools.
- Proposed new: `WorkflowOrchestrator` to drive the autonomous loop and emit progress events.
- Tag/Git utilities: `scripts/modules/utils/git-utils.js` and `scripts/modules/task-manager/tag-management.js` for branch→tag mapping and explicit tag switching.
- Rules: `.cursor/rules/git_workflow.mdc` and `.cursor/rules/test_workflow.mdc` to steer behavior and ensure consistency.
- Test generation prompt: `.claude/agents/surgical-test-generator.md`.
## Proposed Components
- Orchestrator (tm-core): `WorkflowOrchestrator` (new)
- State machine driving phases: Preflight → Branch/Tag → SubtaskIter (Red/Green/Commit) → Finalize → PR.
- Exposes an evented API (progress events) that the CLI can render.
- Stores run state artifacts.
- Test Runner Adapter
- Detects and runs tests via the project's test command (e.g., `npm test`), with targeted runs where feasible.
- API: runTargeted(files/pattern), runAll(), report summary (failures, duration, coverage), enforce 80% threshold by default (see the TypeScript sketch after this list).
- Git/PR Adapter
- Encapsulates `git` ops: branch create/checkout, add/commit, push.
- Optional `gh` integration to open PR; fallback to instructions if `gh` unavailable.
- Confirmation gates for branch creation and pushes.
- Adds commit footers and a unified trailer (`Refs: TM-<tag>-<id>[.<sub>]`) for robust mapping to tasks/subtasks.
- Prompt/Exec Adapter
- Uses existing executor service to call the selected coding assistant (initially `claude`) with tight prompts: task/subtask context, surgical tests first, then minimal code to green.
- Run State + Reporting
- JSONL log of steps, timestamps, commands, test results.
- Markdown summary for PR description and post-run artifact.
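A possible TypeScript shape for the Test Runner Adapter described above; the names are assumptions, not the shipped `tm-core` interface:

```typescript
interface TestRunSummary {
	passed: boolean;
	failures: string[];
	durationMs: number;
	coverage?: { lines: number; branches: number; functions: number; statements: number };
}

interface TestRunnerAdapter {
	// Run only the tests matching the given files/pattern (fast red/green loop).
	runTargeted(pattern: string[]): Promise<TestRunSummary>;
	// Run the full suite before a commit is allowed.
	runAll(): Promise<TestRunSummary>;
	// Coverage gate; 80% per metric is the assumed default threshold.
	meetsCoverageThreshold(summary: TestRunSummary, thresholdPct?: number): boolean;
}
```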
## CLI UX (MVP)
- Command: `tm autopilot [taskId]`
- Flags: `--dry-run`, `--no-push`, `--no-pr`, `--no-confirm`, `--force`, `--max-attempts <n>`, `--runner <auto|custom>`, `--commit-scope <scope>`
- Output: compact header (project, tag, branch), current phase, subtask line, last test summary, next actions.
- Resume: If interrupted, `tm autopilot --resume` picks up from last checkpoint in run state.
### TUI with tmux (Linear Execution)
- Left pane: Tag selector, task list (status/priority), start/expand shortcuts; “Start” triggers the next task or a selected task.
- Right pane: Executor terminal (tmux split) that runs the coding agent (claude-code/codex). Autopilot can hand over to the right pane during green.
- MCP integration: use MCP tools for task queries/updates and for shell/test invocations where available.
## Prompts (Initial Direction)
- Red phase prompt skeleton (tests):
- Use `.claude/agents/surgical-test-generator.md` as the system prompt to generate high-signal failing tests tailored to the project's language and conventions. Keep scope minimal and deterministic; no code changes yet.
- Green phase prompt skeleton (code):
- “Make the tests pass by changing the smallest amount of code, following project patterns. Only modify necessary files. Keep commits focused to this subtask.”
## Configuration
- `.taskmaster/config.json` additions
- `autopilot`: `{ enabled: true, requireCleanWorkingTree: true, commitTemplate: "{type}({scope}): {msg}", defaultCommitType: "feat" }`
- `test`: `{ runner: "auto", coverageThresholds: { lines: 80, branches: 80, functions: 80, statements: 80 } }`
- `git`: `{ branchPattern: "{tag}/task-{id}-{slug}", pr: { enabled: true, base: "default" }, commitFooters: { task: "Task-Id", subtask: "Subtask-Id", tag: "Tag" }, commitTrailer: "Refs: TM-{tag}-{id}{.sub?}" }`
## Decisions
- Use conventional commits plus footers and a unified trailer `Refs: TM-<tag>-<id>[.<sub>]` for all commits; Git/PR adapter is responsible for injecting these.
## Risks and Mitigations
- Model hallucination/large diffs: restrict prompt scope; enforce minimal changes; show diff previews (optional) before commit.
- Flaky tests: allow retries, isolate targeted runs for speed, then full suite before commit.
- Environment variability: detect runners/tools; provide fallbacks and actionable errors.
- PR creation fails: still push and print manual commands; persist PR body to reuse.
## Open Questions
1) Slugging rules for branch names; any length limits or normalization beyond `{slug}` token sanitize?
2) PR body standard sections beyond run report (e.g., checklist, coverage table)?
3) Default executor prompt fine-tuning once codex/gemini integration is available.
4) Where to store persistent TUI state (pane layout, last selection) in `.taskmaster/state.json`?
## Branch Naming
- Include both the tag and the task id in the branch name to make lineage explicit.
- Default pattern: `<tag>/task-<id>[-slug]` (e.g., `master/task-12`, `tag-analytics/task-4-user-auth`).
- Configurable via `.taskmaster/config.json`: `git.branchPattern` supports tokens `{tag}`, `{id}`, `{slug}`.
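A minimal sketch of expanding the pattern tokens into a branch name (slug normalization and the length cap are assumptions; see Open Questions):

```typescript
// Hypothetical helper; the real implementation may normalize differently.
function renderBranchName(
	pattern: string, // e.g. "{tag}/task-{id}-{slug}" from git.branchPattern
	ctx: { tag: string; id: string | number; title: string }
): string {
	const slug = ctx.title
		.toLowerCase()
		.replace(/[^a-z0-9]+/g, '-') // assumed sanitization
		.replace(/^-+|-+$/g, '')
		.slice(0, 40); // assumed length cap
	return pattern
		.replace('{tag}', ctx.tag)
		.replace('{id}', String(ctx.id))
		.replace('{slug}', slug);
}

// renderBranchName('{tag}/task-{id}-{slug}', { tag: 'master', id: 12, title: 'User Auth' })
// => 'master/task-12-user-auth'
```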
## PR Base Branch
- Use the repository's default branch (detected via git) unless overridden.
- Title format: `Task #<id> [<tag>]: <title>`.
## RPG Mapping (Repository Planning Graph)
Functional nodes (capabilities):
- Autopilot Orchestration → drives TDD loop and lifecycle
- Test Generation (Surgical) → produces failing tests from subtask context
- Test Execution + Coverage → runs suite, enforces thresholds
- Git/Branch/PR Management → safe operations and PR creation
- TUI/Terminal Integration → interactive control and visibility via tmux
- MCP Integration → structured task/status/context operations
Structural nodes (code organization):
- `packages/tm-core`:
- `services/workflow-orchestrator.ts` (new)
- `services/test-runner-adapter.ts` (new)
- `services/git-adapter.ts` (new)
- existing: `task-service.ts`, `task-execution-service.ts`, `executors/*`
- `apps/cli`:
- `src/commands/autopilot.command.ts` (new)
- `src/ui/tui/` (new tmux/TUI helpers)
- `scripts/modules`:
- reuse `utils/git-utils.js`, `task-manager/tag-management.js`
- `.claude/agents/`:
- `surgical-test-generator.md`
Edges (data/control flow):
- Autopilot → Test Generation → Test Execution → Git Commit → loop
- Autopilot → Git Adapter (branch, tag, PR)
- Autopilot → TUI (event stream) → tmux pane control
- Autopilot → MCP tools for task/status updates
- Test Execution → Coverage gate → Autopilot decision
Topological traversal (implementation order):
1) Git/Test adapters (foundations)
2) Orchestrator skeleton + events
3) CLI `autopilot` command and dry-run
4) Surgical test-gen integration and execution gate
5) PR creation, run reports, resumability
## Phased Roadmap
- Phase 0: Spike
- Implement CLI skeleton `tm autopilot` with dry-run showing planned steps from a real task + subtasks.
- Detect test runner (package.json) and git state; render a preflight report.
- Phase 1: Core Rails
- Implement `WorkflowOrchestrator` in `tm-core` with event stream; add Git/Test adapters.
- Support subtask loop (red/green/commit) with framework-agnostic test generation and detected test command; commit gating on passing tests and coverage.
- Branch/tag mapping via existing tag-management APIs.
- Run report persisted under `.taskmaster/reports/runs/`.
- Phase 2: PR + Resumability
- Add `gh` PR creation with well-formed body using the run report.
- Introduce resumable checkpoints and `--resume` flag.
- Add coverage enforcement and optional lint/format step.
- Phase 3: Extensibility + Guardrails
- Add support for basic pytest/go test adapters.
- Add safeguards: diff preview mode, manual confirm gates, aggressive minimal-change prompts.
- Optional: small TUI panel and extension panel leveraging the same run state file.
## References (Repo)
- Test Workflow: `.cursor/rules/test_workflow.mdc`
- Git Workflow: `.cursor/rules/git_workflow.mdc`
- CLI: `apps/cli/src/commands/start.command.ts`, `apps/cli/src/ui/components/*.ts`
- Core Services: `packages/tm-core/src/services/task-service.ts`, `task-execution-service.ts`
- Executors: `packages/tm-core/src/executors/*`
- Git Utilities: `scripts/modules/utils/git-utils.js`
- Tag Management: `scripts/modules/task-manager/tag-management.js`
- Surgical Test Generator: `.claude/agents/surgical-test-generator.md`

View File

@@ -0,0 +1,221 @@
# PRD: RPG-Based User Story Mode + Validation-First Delivery
## Summary
- Introduce a "User Story Mode" where each Task is a user story and each Subtask is a concrete implementation step. Enable via config flag; when enabled, Task generation and PRD parsing produce user-story titles/details with acceptance criteria, while Subtasks capture implementation details.
- Build a validation-first delivery pipeline: derive tests from acceptance criteria (Surgical Test Generator), wire TDD rails and Git/PR mapping so reviews focus on verification rather than code spelunking.
- Keep everything on rails: branch naming with tag+task id, commit/PR linkage to tasks/subtasks, coverage + test gates, and lightweight TUI for fast execution.
## North-Star Outcomes
- Humans stay in briefs/frontends; implementation runs quickly, often without opening the IDE.
- “Definition of Done” is expressed and enforced as tests; business logic is encoded in test criteria/acceptance criteria.
- End-to-end linkage from brief → user story → subtasks → commits/PRs → delivery, with reproducible automation and minimal ceremony.
## Problem
- The bottleneck is validation and PR review, not code generation. Plans are helpful but the chokepoint is proving correctness, business conformance, and integration.
- Current test workflow is too Jest-specific; we need framework-agnostic generation and execution.
- We need consistent Git/TDD wiring so GitHub integrations can map work artifacts to tasks/subtasks without ambiguity.
## Solution Overview
- Add a configuration flag to switch to user story mode and adapt prompts/parsers.
- Expand tasks with explicit Acceptance Criteria and Test Criteria; drive Surgical Test Generator to create failing tests first; wire autonomous TDD loops per subtask until green, then commit.
- Enforce coverage (80% default) and generate PRs that summarize user story, acceptance criteria coverage, and test results; commits/PRs contain metadata to link back to tasks/subtasks.
- Provide a compact TUI (tmux) to pick tag/tasks and launch an executor terminal, while the orchestrator runs rails in the background.
---
## Configuration
- `.taskmaster/config.json` additions
- `stories`: `{ enabled: true, storyLabel: "User Story", acceptanceKey: "Acceptance Criteria" }`
- `autopilot`: `{ enabled: true, requireCleanWorkingTree: true }`
- `test`: `{ runner: "auto", coverageThresholds: { lines: 80, branches: 80, functions: 80, statements: 80 } }`
- `git`: `{ branchPattern: "{tag}/task-{id}-{slug}", pr: { enabled: true, base: "default" }, commitFooters: { task: "Task-Id", subtask: "Subtask-Id", tag: "Tag" } }`
Behavior when `stories.enabled=true`:
- Task generation prompts and PRD parsers produce user-story-formatted titles and descriptions, include acceptance criteria blocks, and set `task.type = 'user_story'`.
- Subtasks remain implementation steps with concise technical goals.
- Expand will ensure any missing acceptance criteria is synthesized (from brief/PRD content) before starting work.
---
## Data Model Changes
- Task fields (add):
- `type: 'user_story' | 'technical'` (default `technical`)
- `acceptanceCriteria: string[] | string` (structured or markdown)
- `testCriteria: string[] | string` (optional, derived from acceptance criteria; what to validate)
- Subtask fields remain focused on implementation detail and dependency graph.
Storage and UI remain backward compatible; fields are optional when `stories.enabled=false`.
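Sketched as TypeScript and assuming the field names exactly as listed above (the base `Task` shape is a placeholder for the real tm-core interface):

```typescript
type TaskType = 'user_story' | 'technical';

interface StoryTaskFields {
	type?: TaskType; // defaults to 'technical'
	acceptanceCriteria?: string[] | string; // structured list or raw markdown
	testCriteria?: string[] | string; // optional, derived from acceptance criteria
}

// Placeholder for the existing task shape; story fields are additive and optional.
interface Task {
	id: number;
	title: string;
	description?: string;
}

type StoryTask = Task & StoryTaskFields;
```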
### JSON Gherkin Representation (for stories)
Add an optional `gherkin` block to Tasks in story mode. Keep Hybrid acceptanceCriteria as the human/authoring surface; maintain a normalized JSON Gherkin for deterministic mapping.
```
GherkinFeature {
id: string, // FEAT-<taskId>
name: string, // mirrors user story title
description?: string,
background?: { steps: Step[] },
scenarios: Scenario[]
}
Scenario {
id: string, // SC-<taskId>-<n> or derived from AC id
name: string,
tags?: string[],
steps: Step[], // Given/When/Then/And/But
examples?: Record<string, any>[]
}
Step { keyword: 'Given'|'When'|'Then'|'And'|'But', text: string }
```
Notes
- Derive `gherkin.scenarios` from acceptanceCriteria when obvious; preserve both raw markdown and normalized items.
- Allow cross-references between scenarios and AC items (e.g., `refs: ['AC-12-1']`).
---
## RPG Plan (Repository Planning Graph)
Functional Nodes (Capabilities)
- Brief Intake → parse briefs/PRDs and extract user stories (when enabled)
- User Story Generation → create task title/details as user stories + acceptance criteria
- JSON Gherkin Synthesis → produce Feature/Scenario structure from acceptance criteria
- Acceptance/Test Criteria Synthesis → convert acceptance criteria into concrete test criteria
- Surgical Test Generation → generate failing tests per subtask using `.claude/agents/surgical-test-generator.md`
- Implementation Planning → expand subtasks as atomic implementation steps with dependencies
- Autonomous Execution (Rails) → branch, red/green loop per subtask, commit when green
- Validation & Review Automation → coverage gates, PR body with user story + results, checklist
- GitHub Integration Mapping → branch naming, commit footers, PR linkage to tasks/subtasks
- TUI/Terminal Integration → tag/task selection left pane; executor terminal right pane via tmux
Structural Nodes (Code Organization)
- `packages/tm-core`
- `services/workflow-orchestrator.ts` (new): drives rails, emits progress events
- `services/story-mode.service.ts` (new): toggles prompts/parsers for user stories, acceptance criteria
- `services/test-runner-adapter.ts` (new): detects/executes project test command, collects coverage
- `services/git-adapter.ts` (new): branch/commit/push, PR creation; applies commit footers
- existing: `task-service.ts`, `task-execution-service.ts`, `executors/*`
- `apps/cli`
- `src/commands/autopilot.command.ts` (new): orchestrates a full run; supports `--stories`
- `src/ui/tui/` (new): tmux helpers and compact panes for selection and logs
- `scripts/modules`
- reuse `utils/git-utils.js`, `task-manager/tag-management.js`, PR template utilities
- `.cursor/rules`
- update generation/parsing rules to emit userstory format when enabled
- `.claude/agents`
- existing: `surgical-test-generator.md` for red phase
Edges (Dependencies / Data Flow)
- Brief Intake → User Story Generation → Acceptance/Test Criteria Synthesis → Implementation Planning → Autonomous Execution → Validation/PR
- Execution ↔ Test Runner (runAll/runTargeted, coverage) → back to Execution for decisions
- Git Adapter ← Execution (commits/branch) → PR creation (target default branch)
- TUI ↔ Orchestrator (event stream) → user confirmations for branch/push/PR
- MCP Tools ↔ Orchestrator for task/status/context updates
Topological Traversal (Build Order)
1) Config + Data Model changes (stories flag, acceptance fields, optional `gherkin`)
2) Rules/Prompts updates for parsing/generation in story mode (emit AC Hybrid + JSON Gherkin)
3) Test Runner Adapter (framework-agnostic execute + coverage)
4) Git Adapter (branch pattern `{tag}/task-{id}-{slug}`, commit footers/trailer, PR create)
5) Workflow Orchestrator wiring red/green/commit loop with coverage gate and scenario iteration
6) Surgical Test Gen integration (red) from JSON Gherkin + AC; minimal-change impl prompts (green)
7) CLI autopilot (dry-run → full run) and TUI (tmux panes)
8) PR template and review automation (user story, AC table with test links, scenarios, coverage)
---
## Git/TDD Wiring (ValidationFirst)
- Branch naming: include tag + task id (e.g., `master/task-12-user-auth`) to disambiguate context.
- Commit footers (configurable):
- `Task-Id: <id>`
- `Subtask-Id: <id>.<sub>` when relevant
- `Tag: <tag>`
- Trailer: `Refs: TM-<tag>-<id>[.<sub>] SC:<scenarioId> AC:<acId>`
- Red/Green/Commit loop per subtask:
- Red: synthesize failing tests from acceptance criteria (Surgical agent)
- Green: minimal code to pass; rerun full suite
- Commit when all tests pass and coverage ≥ 80%
- PR base: repository default branch. Title `Task #<id> [<tag>]: <title>`.
- PR body sections: User Story, Acceptance Criteria, Subtask Summary, Test Results, Coverage Table, Linked Work Items (ids), Next Steps.
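As a sketch of how a commit message could be assembled from this scheme (the helper and its argument names are assumptions; footer keys follow the configurable defaults above):

```typescript
// Hypothetical builder for conventional commits with Task Master footers/trailer.
function buildCommitMessage(opts: {
	type: string; // e.g. 'feat'
	scope?: string;
	summary: string;
	tag: string;
	taskId: number;
	subtaskId?: number;
	scenarioId?: string; // SC-<taskId>-<n>
	acId?: string; // AC-<taskId>-<n>
}): string {
	const sub = opts.subtaskId !== undefined ? `.${opts.subtaskId}` : '';
	const header = `${opts.type}${opts.scope ? `(${opts.scope})` : ''}: ${opts.summary}`;
	const footers = [
		`Task-Id: ${opts.taskId}`,
		opts.subtaskId !== undefined ? `Subtask-Id: ${opts.taskId}${sub}` : null,
		`Tag: ${opts.tag}`,
		`Refs: TM-${opts.tag}-${opts.taskId}${sub}` +
			(opts.scenarioId ? ` SC:${opts.scenarioId}` : '') +
			(opts.acId ? ` AC:${opts.acId}` : '')
	].filter((f): f is string => f !== null);
	return `${header}\n\n${footers.join('\n')}`;
}
```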
---
## Prompts & Parsers (Story Mode)
- PRD/Brief Parser updates:
- Extract user stories with "As a … I want … so that …" format when present.
- Extract Acceptance Criteria as bullet list; fill gaps with LLM synthesis from brief context.
- Emit JSON Gherkin Feature/Scenarios; auto-split Given/When/Then when feasible; otherwise store text under `then` and refine later.
- Task Generation Prompt (story mode):
- “Generate a task as a User Story with clear Acceptance Criteria. Do not include implementation details in the story; produce implementation subtasks separately.”
- Subtask Generation Prompt:
- “Produce technical implementation steps to satisfy the acceptance criteria. Each subtask should be atomic and testable.”
- Test Generation (Red):
- Use `.claude/agents/surgical-test-generator.md`; seed with JSON Gherkin + Acceptance/Test Criteria; determinism favored over maximum coverage.
- Record produced test paths back into AC items and optionally scenario annotations.
- Implementation (Green):
- Minimal diffs, follow patterns, keep commits scoped to the subtask.
---
## TUI (Linear, tmux-based)
- Left: Tag selector and task list (status/priority). Actions: Expand, Start (Next or Selected), Review.
- Right: Executor terminal (claude-code/codex) under tmux split; orchestrator logs under another pane.
- Confirmations inline (branch create, push, PR) unless `--no-confirm`.
---
## Migration & Backward Compatibility
- Optional `gherkin` block; existing tasks remain valid.
- When `stories.enabled=true`, new tasks include AC Hybrid + `gherkin`; upgrade path via a utility to synthesize both from description/testStrategy/acceptanceCriteria.
---
## Risks & Mitigations
- Hallucinated acceptance criteria → Always show criteria in PR; allow quick amend and rerun.
- Framework variance → Test Runner Adapter detects and normalizes execution/coverage; fallback to `test` script.
- Large diffs → Prompt for minimal changes; allow diff preview before commit.
- Flaky tests → Retry policy; isolate targeted runs; enforce passing full suite before commit.
---
## Acceptance Criteria Schema Options (for decision)
- Option A: Markdown only
- Pros: simple to write/edit, good for humans
- Cons: hard to map deterministically to tests; weak traceability; brittle diffs
- Option B: Structured array
- Example: `{ id, summary, given, when, then, severity, tags }`
- Pros: machine-readable; strong linking to tests/coverage; easy to diff
- Cons: heavier authoring; requires schema discipline
- Option C: Hybrid (recommended)
- Store both: a normalized array of criteria objects and a preserved `raw` markdown block
- Each criterion gets a stable `id` (e.g., `AC-<taskId>-<n>`) used in tests, commit trailers, and PR tables
- Enables clean PR tables and deterministic coverage mapping while keeping human-friendly text
Proposed default schema (hybrid):
```
acceptanceCriteria: {
raw: """
- AC1: Guest can checkout with credit card
- AC2: Declined cards show error inline
""",
items: [
{
id: "AC-12-1",
summary: "Guest can checkout with credit card",
given: "a guest with items in cart",
when: "submits valid credit card",
then: "order is created and receipt emailed",
severity: "must",
tags: ["checkout", "payments"],
tests: [] // filled by orchestrator (file paths/test IDs)
}
]
}
```
Decision: adopt Hybrid default; allow Markdown-only input and auto-normalize.
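A rough sketch of that normalization step (bullet format and parsing rules are assumptions based on the example above):

```typescript
interface AcceptanceCriterion {
	id: string; // stable id, e.g. "AC-12-1"
	summary: string;
	tests: string[]; // filled in later by the orchestrator
}

// Hypothetical normalizer: turns "- AC1: Guest can checkout ..." bullets into items.
function normalizeAcceptanceCriteria(taskId: number, raw: string) {
	const items: AcceptanceCriterion[] = raw
		.split('\n')
		.map((line) => line.trim())
		.filter((line) => line.startsWith('- '))
		.map((line, index) => ({
			id: `AC-${taskId}-${index + 1}`,
			summary: line.replace(/^-\s*(AC\d+:\s*)?/, ''),
			tests: []
		}));
	return { raw, items };
}
```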
## Decisions
- Adopt Hybrid acceptance criteria schema by default; normalize Markdown to structured items with stable IDs `AC-<taskId>-<n>`.
- Use conventional commits plus footers and a unified trailer `Refs: TM-<tag>-<id>[.<sub>]` across PRDs for robust mapping.

View File

@@ -1,5 +1,89 @@
# task-master-ai
## 0.27.2
### Patch Changes
- [#1248](https://github.com/eyaltoledano/claude-task-master/pull/1248) [`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix set-status for subtasks:
- Parent tasks are now set as `done` when subtasks are all `done`
- Parent tasks are now set as `in-progress` when at least one subtask is `in-progress` or `done`
## 0.27.1
### Patch Changes
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix Zed MCP configuration by adding required "source" property
- Add "source": "custom" property to task-master-ai server in Zed settings.json
## 0.27.1-rc.1
### Patch Changes
- [#1233](https://github.com/eyaltoledano/claude-task-master/pull/1233) [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - One last testing final final
## 0.27.1-rc.0
### Patch Changes
- [#1232](https://github.com/eyaltoledano/claude-task-master/pull/1232) [`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix module not found for new 0.27.0 release
## 0.27.0
### Minor Changes
- [#1220](https://github.com/eyaltoledano/claude-task-master/pull/1220) [`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - No longer need --package=task-master-ai in mcp server
- A lot of users were having issues with Taskmaster and usually a simple fix was to remove --package from your mcp.json
- we now bundle our whole package, so we no longer need the --package
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `task-master start` command for automated task execution with Claude Code
- You can now start working on tasks directly by running `task-master start <task-id>` which will automatically launch Claude Code with a comprehensive prompt containing all task details, implementation guidelines, and context.
- `task-master start` will automatically detect next-task when no ID is provided.
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Move from javascript to typescript, not a full refactor but we now have a typescript environment and are moving our javascript commands slowly into typescript
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add grok-cli as a provider with full codebase context support. You can now use Grok models (grok-2, grok-3, grok-4, etc.) with Task Master for AI operations that have access to your entire codebase context, enabling more informed task generation and PRD parsing.
## Setup Instructions
1. **Get your Grok API key** from [console.x.ai](https://console.x.ai)
2. **Set the environment variable**:
```bash
export GROK_CLI_API_KEY="your-api-key-here"
```
3. **Configure Task Master to use Grok**:
```bash
task-master models --set-main grok-beta
# or
task-master models --set-research grok-beta
# or
task-master models --set-fallback grok-beta
```
## Key Features
- **Full codebase context**: Grok models can analyze your entire project when generating tasks or parsing PRDs
- **xAI model access**: Support for latest Grok models (grok-2, grok-3, grok-4, etc.)
- **Code-aware task generation**: Create more accurate and contextual tasks based on your actual codebase
- **Intelligent PRD parsing**: Parse requirements with understanding of your existing code structure
## Available Models
- `grok-beta` - Latest Grok model with codebase context
- `grok-vision-beta` - Grok with vision capabilities and codebase context
The Grok CLI provider integrates with xAI's Grok models via grok-cli and can also use the local Grok CLI configuration file (`~/.grok/user-settings.json`) if available.
## Credits
Built using the [grok-cli](https://github.com/superagent-ai/grok-cli) by Superagent AI for seamless integration with xAI's Grok models.
- [#1225](https://github.com/eyaltoledano/claude-task-master/pull/1225) [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve taskmaster ai provider defaults
- moving from main anthropic 3.7 to anthropic sonnet 4
- moving from fallback anthropic 3.5 to anthropic 3.7
- [#1217](https://github.com/eyaltoledano/claude-task-master/pull/1217) [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - @tm/cli: add auto-update functionality to every command
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix Grok model configuration validation and update deprecated Claude fallback model. Grok models now properly support their full 131K token capacity, and the fallback model has been upgraded to Claude Sonnet 4 for better performance and future compatibility.
## 0.27.0-rc.2
### Minor Changes

View File

@@ -60,6 +60,19 @@ The following documentation is also available in the `docs` directory:
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
#### Claude Code Quick Install
For Claude Code users:
```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```
Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
## Requirements
Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
@@ -92,10 +105,11 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
| **Q CLI** | Global | `~/.aws/amazonq/mcp.json` | | `mcpServers` |
##### Manual Configuration
###### Cursor & Windsurf (`mcpServers`)
###### Cursor & Windsurf & Q Developer CLI (`mcpServers`)
```json
{

View File

@@ -1,5 +1,19 @@
# @tm/cli
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## 0.27.0
### Patch Changes
- Updated dependencies []:
- @tm/core@0.26.1
## 0.27.0-rc.0
### Minor Changes

View File

@@ -1,6 +1,5 @@
{
"name": "@tm/cli",
"version": "0.27.0-rc.0",
"description": "Task Master CLI - Command line interface for task management",
"type": "module",
"private": true,
@@ -24,12 +23,12 @@
},
"dependencies": {
"@tm/core": "*",
"boxen": "^7.1.1",
"boxen": "^8.0.1",
"chalk": "5.6.2",
"cli-table3": "^0.6.5",
"commander": "^12.1.0",
"inquirer": "^9.2.10",
"ora": "^8.1.0"
"inquirer": "^12.5.0",
"ora": "^8.2.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",

View File

@@ -18,7 +18,8 @@ export * as ui from './utils/ui.js';
export {
checkForUpdate,
performAutoUpdate,
displayUpgradeNotification
displayUpgradeNotification,
compareVersions
} from './utils/auto-update.js';
// Re-export commonly used types from tm-core

View File

@@ -7,7 +7,6 @@ import https from 'https';
import chalk from 'chalk';
import ora from 'ora';
import boxen from 'boxen';
import packageJson from '../../../../package.json' with { type: 'json' };
export interface UpdateInfo {
currentVersion: string;
@@ -16,15 +15,18 @@ export interface UpdateInfo {
}
/**
* Get current version from package.json
* Get current version from build-time injected environment variable
*/
function getCurrentVersion(): string {
try {
return packageJson.version;
} catch (error) {
console.warn('Could not read package.json for version info');
return '0.0.0';
// Version is injected at build time via TM_PUBLIC_VERSION
const version = process.env.TM_PUBLIC_VERSION;
if (version && version !== 'unknown') {
return version;
}
// Fallback for development or if injection failed
console.warn('Could not read version from TM_PUBLIC_VERSION, using fallback');
return '0.0.0';
}
/**
@@ -33,7 +35,7 @@ function getCurrentVersion(): string {
* @param v2 - Second version
* @returns -1 if v1 < v2, 0 if v1 = v2, 1 if v1 > v2
*/
function compareVersions(v1: string, v2: string): number {
export function compareVersions(v1: string, v2: string): number {
const toParts = (v: string) => {
const [core, pre = ''] = v.split('-', 2);
const nums = core.split('.').map((n) => Number.parseInt(n, 10) || 0);

View File

@@ -1,5 +1,9 @@
# docs
## 0.0.4
## 0.0.3
## 0.0.2
## 0.0.1

View File

@@ -1,22 +1,24 @@
# Task Master Documentation
Welcome to the Task Master documentation. Use the links below to navigate to the information you need:
Welcome to the Task Master documentation. This documentation site provides comprehensive guides for getting started with Task Master.
## Getting Started
- [Configuration Guide](archive/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](archive/ctutorial.md) - Step-by-step guide to getting started with Task Master
- [Quick Start Guide](/getting-started/quick-start) - Complete setup and first-time usage guide
- [Requirements](/getting-started/quick-start/requirements) - What you need to get started
- [Installation](/getting-started/quick-start/installation) - How to install Task Master
## Reference
## Core Capabilities
- [Command Reference](archive/ccommand-reference.md) - Complete list of all available commands
- [Task Structure](archive/ctask-structure.md) - Understanding the task format and features
- [MCP Tools](/capabilities/mcp) - Model Control Protocol integration
- [CLI Commands](/capabilities/cli-root-commands) - Command line interface reference
- [Task Structure](/capabilities/task-structure) - Understanding tasks and subtasks
## Examples & Licensing
## Best Practices
- [Example Interactions](archive/cexamples.md) - Common Cursor AI interaction examples
- [Licensing Information](archive/clicensing.md) - Detailed information about the license
- [Advanced Configuration](/best-practices/configuration-advanced) - Detailed configuration options
- [Advanced Tasks](/best-practices/advanced-tasks) - Working with complex task structures
## Need More Help?
If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).
If you can't find what you're looking for in these docs, please check the root README.md or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).

View File

@@ -156,7 +156,7 @@ sidebarTitle: "CLI Commands"
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json
# Use Perplexity AI for research-backed complexity analysis
# Use your configured research model for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>

View File

@@ -18,8 +18,8 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
{
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./mcp-server/server.js"],
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
@@ -108,5 +108,5 @@ You dont need to configure everything up front. Most settings can be left as
</Accordion>
<Note>
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/docs/best-practices/configuration-advanced) page.
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/best-practices/configuration-advanced) page.
</Note>

View File

@@ -56,4 +56,4 @@ If you ran into problems and had to debug errors you can create new rules as you
By now you have all you need to get started executing code faster and smarter with Task Master.
If you have any questions please check out [Frequently Asked Questions](/docs/getting-started/faq)
If you have any questions please check out [Frequently Asked Questions](/getting-started/faq)

View File

@@ -30,6 +30,19 @@ cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21
```
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
### Claude Code Quick Install
For Claude Code users:
```bash
claude mcp add taskmaster-ai -- npx -y task-master-ai
```
Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
</Accordion>
## Installation Options

View File

@@ -6,13 +6,13 @@ sidebarTitle: "Quick Start"
This guide is for new users who want to start using Task Master with minimal setup time.
It covers:
- [Requirements](/docs/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/docs/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/docs/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/docs/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/docs/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/docs/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/docs/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.
- [Requirements](/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.
<Tip>
By the end of this guide, you'll have everything you need to begin working productively with Task Master.

View File

@@ -61,9 +61,25 @@ Task Master can provide a complexity report which can be helpful to read before
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
The agent will use the `analyze_project_complexity` MCP tool, or you can run it directly with the CLI command:
```bash
task-master analyze-complexity
```
For more comprehensive analysis using your configured research model, you can use:
```bash
task-master analyze-complexity --research
```
<Tip>
The `--research` flag uses whatever research model you have configured in `.taskmaster/config.json` (configurable via `task-master models --setup`) for research-backed complexity analysis, providing more informed recommendations.
</Tip>
You can view the report in a friendly table using:
```
Can you show me the complexity report in a more readable format?
```
<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>
For more detailed CLI options, see the [Analyze Task Complexity](/capabilities/cli-root-commands#analyze-task-complexity) section.
<Check>Now you are ready to begin [executing tasks](/getting-started/quick-start/execute-quick)</Check>

View File

@@ -4,7 +4,7 @@ Welcome to v1 of the Task Master Docs. Expect weekly updates as we expand and re
We've organized the docs into three sections depending on your experience level and goals:
### Getting Started - Jump in to [Quick Start](/docs/getting-started/quick-start)
### Getting Started - Jump in to [Quick Start](/getting-started/quick-start)
Designed for first-time users. Get set up, create your first PRD, and run your first task.
### Best Practices

View File

@@ -1,6 +1,6 @@
{
"name": "docs",
"version": "0.0.2",
"version": "0.0.4",
"private": true,
"description": "Task Master documentation powered by Mintlify",
"scripts": {

View File

@@ -1,5 +1,50 @@
# Change Log
## 0.25.3
### Patch Changes
- Updated dependencies [[`044a7bf`](https://github.com/eyaltoledano/claude-task-master/commit/044a7bfc98049298177bc655cf341d7a8b6a0011)]:
- task-master-ai@0.27.2
## 0.25.2
### Patch Changes
- Updated dependencies [[`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e), [`c911608`](https://github.com/eyaltoledano/claude-task-master/commit/c911608f60454253f4e024b57ca84e5a5a53f65c), [`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9)]:
- task-master-ai@0.27.1
## 0.25.2-rc.1
### Patch Changes
- Updated dependencies [[`1a18794`](https://github.com/eyaltoledano/claude-task-master/commit/1a1879483b86c118a4e46c02cbf4acebfcf6bcf9)]:
- task-master-ai@0.27.1-rc.1
## 0.25.2-rc.0
### Patch Changes
- Updated dependencies [[`f487736`](https://github.com/eyaltoledano/claude-task-master/commit/f487736670ef8c484059f676293777eabb249c9e)]:
- task-master-ai@0.27.1-rc.0
## 0.25.0
### Minor Changes
- [#1200](https://github.com/eyaltoledano/claude-task-master/pull/1200) [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add "Start Task" button to VS Code extension for seamless Claude Code integration
You can now click a "Start Task" button directly in the Task Master extension which will open a new terminal and automatically execute the task using Claude Code. This provides a seamless workflow from viewing tasks in the extension to implementing them without leaving VS Code.
- [#1201](https://github.com/eyaltoledano/claude-task-master/pull/1201) [`83af314`](https://github.com/eyaltoledano/claude-task-master/commit/83af314879fc0e563581161c60d2bd089899313e) Thanks [@losolosol](https://github.com/losolosol)! - Added a Start Build button to the VSCODE Task Properties Right Panel
### Patch Changes
- [#1229](https://github.com/eyaltoledano/claude-task-master/pull/1229) [`674d1f6`](https://github.com/eyaltoledano/claude-task-master/commit/674d1f6de7ea98116b61bdae6198bafe6c4e7c1a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP not connecting to new Taskmaster version
- Updated dependencies [[`4e12643`](https://github.com/eyaltoledano/claude-task-master/commit/4e126430a092fb54afb035514fb3d46115714f97), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142), [`a621ff0`](https://github.com/eyaltoledano/claude-task-master/commit/a621ff05eafb51a147a9aabd7b37ddc0e45b0869), [`e6de285`](https://github.com/eyaltoledano/claude-task-master/commit/e6de285ceacb0a397e952a63435cd32a9c731515), [`fce8414`](https://github.com/eyaltoledano/claude-task-master/commit/fce841490a9ebbf1801a42dd8a29397379cf1142)]:
- task-master-ai@0.27.0
## 0.25.0-rc.0
### Minor Changes

View File

@@ -3,7 +3,7 @@
"private": true,
"displayName": "TaskMaster",
"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
"version": "0.25.0-rc.0",
"version": "0.25.3",
"publisher": "Hamster",
"icon": "assets/icon.png",
"engines": {
@@ -240,7 +240,7 @@
"check-types": "tsc --noEmit"
},
"dependencies": {
"task-master-ai": "*"
"task-master-ai": "0.27.2"
},
"devDependencies": {
"@dnd-kit/core": "^6.3.1",
@@ -254,8 +254,9 @@
"@radix-ui/react-separator": "^1.1.7",
"@radix-ui/react-slot": "^1.2.3",
"@tailwindcss/postcss": "^4.1.11",
"@tanstack/react-query": "^5.83.0",
"@types/mocha": "^10.0.10",
"@types/node": "20.x",
"@types/node": "^22.10.5",
"@types/react": "19.1.8",
"@types/react-dom": "19.1.6",
"@types/vscode": "^1.101.0",
@@ -271,12 +272,11 @@
"lucide-react": "^0.525.0",
"npm-run-all": "^4.1.5",
"postcss": "8.5.6",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"tailwind-merge": "^3.3.1",
"tailwindcss": "4.1.11",
"typescript": "^5.8.3",
"@tanstack/react-query": "^5.83.0",
"react": "^19.0.0",
"react-dom": "^19.0.0"
"typescript": "^5.7.3"
},
"overrides": {
"glob@<8": "^10.4.5",

View File

@@ -408,7 +408,7 @@ export function createMCPConfigFromSettings(): MCPConfig {
const taskMasterPath = require.resolve('task-master-ai');
const mcpServerPath = path.resolve(
path.dirname(taskMasterPath),
'mcp-server/server.js'
'./dist/mcp-server.js'
);
// Verify the server file exists

Some files were not shown because too many files have changed in this diff.