Compare commits

...

17 Commits

Author SHA1 Message Date
Ralph Khreish
943356221c chore: apply requested changes 2025-10-16 21:38:40 +02:00
Ralph Khreish
9923b4f486 feat: add sonnet and haiku to supported providers
- make supported providers list more dynamic for cli models
2025-10-16 17:39:17 +02:00
Ralph Khreish
6bc75c0ac6 fix: auth refresh (#1314) 2025-10-15 17:32:15 +02:00
Ralph Khreish
d7fca1844f feat: add "next" command to new command structure (#1312) 2025-10-15 15:26:34 +02:00
Ben Coombs
a98d96ef04 fix: standardize UI box width calculations across components (#1305)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-14 20:08:11 +02:00
Karol Fabjańczuk
a69d8c91dc feat: add configurable MCP tool loading to reduce LLM context usage (#1181)
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-14 20:01:21 +02:00
Ralph Khreish
474a86cebb Merge pull request #1308 from eyaltoledano/ralph/chore/update.from.main.october 2025-10-14 18:49:08 +02:00
Ben Coombs
3283506444 fix: enhance findProjectRoot to traverse parent directories (#1302)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-14 18:32:10 +02:00
github-actions[bot]
9acb900153 Version Packages (#1303)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-10-14 11:47:13 +02:00
Ralph Khreish
c4f5d89e72 Merge pull request #1297 from eyaltoledano/next 2025-10-14 10:08:08 +02:00
Ralph Khreish
e308cf4f46 chore: exit pre mode 2025-10-13 22:46:24 +02:00
Ralph Khreish
11b7354010 fix: export url (#1288) 2025-10-13 21:51:19 +02:00
Ralph Khreish
4c1ef2ca94 fix: auth refresh token (#1299)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-13 21:50:22 +02:00
Ralph Khreish
663aa2dfe9 chore: fix CI 2025-10-12 17:10:24 +02:00
Ralph Khreish
8f60a0561e chore: add hiring banner 2025-10-12 16:52:58 +02:00
Ralph Khreish
9a22622e9c chore: cleanup changelog and pre exit 2025-10-11 21:21:04 +02:00
github-actions[bot]
8d3c7e4116 chore: rc version bump 2025-10-11 19:09:36 +00:00
73 changed files with 3345 additions and 399 deletions

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---
Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.

View File

@@ -11,6 +11,7 @@
"access": "public", "access": "public",
"baseBranch": "main", "baseBranch": "main",
"ignore": [ "ignore": [
"docs" "docs",
"@tm/claude-code-plugin"
] ]
} }

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Improve auth token refresh flow

View File

@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---
Enable Task Master commands to traverse parent directories to find project root from nested paths
Fixes #1301

View File

@@ -0,0 +1,5 @@
---
"@tm/cli": patch
---
Fix warning message box width to match dashboard box width for consistent UI alignment

View File

@@ -0,0 +1,35 @@
---
"task-master-ai": minor
---
Add configurable MCP tool loading to optimize LLM context usage
You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
**Configuration Options:**
- `all` (default): Load all 36 tools
- `core` or `lean`: Load only 7 essential tools for daily development
- Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
- Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
**Example .mcp.json configuration:**
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).

View File

@@ -1,47 +0,0 @@
---
"task-master-ai": minor
---
Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/task-master-ai:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
### Changes to `rules add claude`
The command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Improve next command to work with remote

View File

@@ -1,17 +0,0 @@
---
"task-master-ai": minor
---
Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add Haiku 4.5 and Sonnet 4.5 to the supported models for the claude-code and anthropic AI providers

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.

View File

@@ -1,16 +0,0 @@
---
"task-master-ai": minor
---
Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.

View File

@@ -1,5 +1,175 @@
# task-master-ai
## 0.29.0
### Minor Changes
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
### Changes to `rules add claude`
The command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
### Patch Changes
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`4c1ef2c`](https://github.com/eyaltoledano/claude-task-master/commit/4c1ef2ca94411c53bcd2a78ec710b06c500236dd) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
## 0.29.0-rc.1
### Patch Changes
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`a6c5152`](https://github.com/eyaltoledano/claude-task-master/commit/a6c5152f20edd8717cf1aea34e7c178b1261aa99) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
## 0.29.0-rc.0
### Minor Changes
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/task-master-ai:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
### Changes to `rules add claude`
The command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
### Patch Changes
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
## 0.28.0
### Minor Changes

View File

@@ -119,6 +119,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"command": "npx", "command": "npx",
"args": ["-y", "task-master-ai"], "args": ["-y", "task-master-ai"],
"env": { "env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE", "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE", "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE", "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -148,6 +149,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"command": "npx", "command": "npx",
"args": ["-y", "task-master-ai"], "args": ["-y", "task-master-ai"],
"env": { "env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE", "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE", "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE", "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -196,7 +198,7 @@ Initialize taskmaster-ai in my project
#### 5. Make sure you have a PRD (Recommended)
-For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
+For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`.
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`
An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
@@ -282,6 +284,76 @@ task-master generate
task-master rules add windsurf,roo,vscode
```
## Tool Loading Configuration
### Optimizing MCP Tool Loading
Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.
You can optimize performance by configuring the `TASK_MASTER_TOOLS` environment variable:
### Available Modes
| Mode | Tools | Context Usage | Use Case |
|------|-------|--------------|----------|
| `all` (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
| `standard` | 15 | ~10,000 tokens | Common task management operations |
| `core` (or `lean`) | 7 | ~5,000 tokens | Essential daily development workflow |
| `custom` | Variable | Variable | Comma-separated list of specific tools |
### Configuration Methods
#### Method 1: Environment Variable in MCP Configuration
Add `TASK_MASTER_TOOLS` to your MCP configuration file's `env` section:
```jsonc
{
"mcpServers": { // or "servers" for VS Code
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
"ANTHROPIC_API_KEY": "your-key-here",
// ... other API keys
}
}
}
}
```
#### Method 2: Claude Code CLI (One-Time Setup)
For Claude Code users, you can set the mode during installation:
```bash
# Core mode example (~70% token reduction)
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="core" \
-- npx -y task-master-ai@latest
# Custom tools example
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
-- npx -y task-master-ai@latest
```
### Tool Sets Details
**Core Tools (7):** `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
**Standard Tools (15):** All core tools plus `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
**All Tools (36):** Complete set including project setup, task management, analysis, dependencies, tags, research, and more
### Recommendations
- **New users**: Start with `"standard"` mode for a good balance
- **Large projects**: Use `"core"` mode to minimize token usage
- **Complex workflows**: Use `"all"` mode or custom selection
- **Backward compatibility**: If not specified, defaults to `"all"` mode
## Claude Code Support
Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
@@ -310,6 +382,12 @@ cd claude-task-master
node scripts/init.js
```
## Join Our Team
<a href="https://tryhamster.com" target="_blank">
<img src="./images/hamster-hiring.png" alt="Join Hamster's founding team" />
</a>
## Contributors
<a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors">

View File

@@ -11,6 +11,13 @@
### Patch Changes
- Updated dependencies []:
- @tm/core@null
## null
### Patch Changes
- Updated dependencies []:
- @tm/core@null

View File

@@ -8,6 +8,7 @@ import { Command } from 'commander';
// Import all commands
import { ListTasksCommand } from './commands/list.command.js';
import { ShowCommand } from './commands/show.command.js';
+import { NextCommand } from './commands/next.command.js';
import { AuthCommand } from './commands/auth.command.js';
import { ContextCommand } from './commands/context.command.js';
import { StartCommand } from './commands/start.command.js';
@@ -45,6 +46,12 @@ export class CommandRegistry {
			commandClass: ShowCommand as any,
			category: 'task'
		},
+		{
+			name: 'next',
+			description: 'Find the next available task to work on',
+			commandClass: NextCommand as any,
+			category: 'task'
+		},
		{
			name: 'start',
			description: 'Start working on a task with claude-code',

View File

@@ -187,19 +187,29 @@ export class AuthCommand extends Command {
			if (credentials.expiresAt) {
				const expiresAt = new Date(credentials.expiresAt);
				const now = new Date();
-				const hoursRemaining = Math.floor(
-					(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
-				);
-				if (hoursRemaining > 0) {
-					console.log(
-						chalk.gray(
-							`   Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
-						)
-					);
-				} else {
-					console.log(
-						chalk.yellow(`   Token expired at: ${expiresAt.toLocaleString()}`)
-					);
-				}
+				const timeRemaining = expiresAt.getTime() - now.getTime();
+				const hoursRemaining = Math.floor(timeRemaining / (1000 * 60 * 60));
+				const minutesRemaining = Math.floor(timeRemaining / (1000 * 60));
+				if (timeRemaining > 0) {
+					// Token is still valid
+					if (hoursRemaining > 0) {
+						console.log(
+							chalk.gray(
+								`   Expires at: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
+							)
+						);
+					} else {
+						console.log(
+							chalk.gray(
+								`   Expires at: ${expiresAt.toLocaleString()} (${minutesRemaining} minutes remaining)`
+							)
+						);
+					}
+				} else {
+					// Token has expired
+					console.log(
+						chalk.yellow(`   Expired at: ${expiresAt.toLocaleString()}`)
+					);
+				}
			} else {

View File

@@ -250,7 +250,7 @@ export class ContextCommand extends Command {
			]);
			// Update context
-			await this.authManager.updateContext({
+			this.authManager.updateContext({
				orgId: selectedOrg.id,
				orgName: selectedOrg.name,
				// Clear brief when changing org
@@ -343,7 +343,7 @@ export class ContextCommand extends Command {
		if (selectedBrief) {
			// Update context with brief
			const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
-			await this.authManager.updateContext({
+			this.authManager.updateContext({
				briefId: selectedBrief.id,
				briefName: briefName
			});
@@ -358,7 +358,7 @@ export class ContextCommand extends Command {
			};
		} else {
			// Clear brief selection
-			await this.authManager.updateContext({
+			this.authManager.updateContext({
				briefId: undefined,
				briefName: undefined
			});
@@ -491,7 +491,7 @@ export class ContextCommand extends Command {
		// Update context: set org and brief
		const briefName = `Brief ${brief.id.slice(0, 8)}`;
-		await this.authManager.updateContext({
+		this.authManager.updateContext({
			orgId: brief.accountId,
			orgName,
			briefId: brief.id,
@@ -613,7 +613,7 @@ export class ContextCommand extends Command {
		};
	}
-	await this.authManager.updateContext(context);
+	this.authManager.updateContext(context);
	ui.displaySuccess('Context updated');
	// Display what was set

View File

@@ -103,7 +103,7 @@ export class ExportCommand extends Command {
		await this.initializeServices();
		// Get current context
-		const context = this.authManager.getContext();
+		const context = await this.authManager.getContext();
		// Determine org and brief IDs
		let orgId = options?.org || context?.orgId;

View File

@@ -0,0 +1,247 @@
/**
* @fileoverview NextCommand using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import path from 'node:path';
import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
import type { StorageType } from '@tm/core/types';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
import { displayHeader } from '../ui/index.js';
/**
* Options interface for the next command
*/
export interface NextCommandOptions {
tag?: string;
format?: 'text' | 'json';
silent?: boolean;
project?: string;
}
/**
* Result type from next command
*/
export interface NextTaskResult {
task: Task | null;
found: boolean;
tag: string;
storageType: Exclude<StorageType, 'auto'>;
}
/**
* NextCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class NextCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: NextTaskResult;
constructor(name?: string) {
super(name || 'next');
// Configure the command
this.description('Find the next available task to work on')
.option('-t, --tag <tag>', 'Filter by tag')
.option('-f, --format <format>', 'Output format (text, json)', 'text')
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(async (options: NextCommandOptions) => {
await this.executeCommand(options);
});
}
/**
* Execute the next command
*/
private async executeCommand(options: NextCommandOptions): Promise<void> {
try {
// Validate options (throws on invalid options)
this.validateOptions(options);
// Initialize tm-core
await this.initializeCore(options.project || process.cwd());
// Get next task from core
const result = await this.getNextTask(options);
// Store result for programmatic access
this.setLastResult(result);
// Display results
if (!options.silent) {
this.displayResults(result, options);
}
} catch (error: any) {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
// Allow error to propagate for library compatibility
throw new Error(msg.message || 'Unexpected error in next command');
} finally {
// Always clean up resources, even on error
await this.cleanup();
}
}
/**
* Validate command options
*/
private validateOptions(options: NextCommandOptions): void {
// Validate format
if (options.format && !['text', 'json'].includes(options.format)) {
throw new Error(
`Invalid format: ${options.format}. Valid formats are: text, json`
);
}
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
const resolved = path.resolve(projectRoot);
this.tmCore = await createTaskMasterCore({ projectPath: resolved });
}
}
/**
* Get next task from tm-core
*/
private async getNextTask(
options: NextCommandOptions
): Promise<NextTaskResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Call tm-core to get next task
const task = await this.tmCore.getNextTask(options.tag);
// Get storage type and active tag
const storageType = this.tmCore.getStorageType();
if (storageType === 'auto') {
throw new Error('Storage type must be resolved before use');
}
const activeTag = options.tag || this.tmCore.getActiveTag();
return {
task,
found: task !== null,
tag: activeTag,
storageType
};
}
/**
* Display results based on format
*/
private displayResults(
result: NextTaskResult,
options: NextCommandOptions
): void {
const format = options.format || 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'text':
default:
this.displayText(result);
break;
}
}
/**
* Display in JSON format
*/
private displayJson(result: NextTaskResult): void {
console.log(JSON.stringify(result, null, 2));
}
/**
* Display in text format
*/
private displayText(result: NextTaskResult): void {
// Display header with tag (no file path for next command)
displayHeader({
tag: result.tag || 'master'
});
if (!result.found || !result.task) {
// No next task available
console.log(
boxen(
chalk.yellow(
'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
),
{
padding: 1,
borderStyle: 'round',
borderColor: 'yellow',
title: '⚠ NO TASKS AVAILABLE ⚠',
titleAlignment: 'center'
}
)
);
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
console.log(
`\n${chalk.dim('Tip: Try')} ${chalk.cyan('task-master list --status pending')} ${chalk.dim('to see all pending tasks')}`
);
return;
}
const task = result.task;
// Display the task details using the same component as 'show' command
// with a custom header indicating this is the next task
const customHeader = `Next Task: #${task.id} - ${task.title}`;
displayTaskDetails(task, {
customHeader,
headerColor: 'green',
showSuggestedActions: true
});
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: NextTaskResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): NextTaskResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Register this command on an existing program
*/
static register(program: Command, name?: string): NextCommand {
const nextCommand = new NextCommand(name);
program.addCommand(nextCommand);
return nextCommand;
}
}
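For orientation, below is a hedged sketch of how `NextCommand` could be driven programmatically, relying only on the API shown above (`NextCommand.register`, the `--silent` and `--project` options, and `getLastResult`); the wrapper function name and import path are illustrative assumptions, not part of this change.
```typescript
// Illustrative only: drive the `next` command from code and read its structured result.
import { Command } from 'commander';
import { NextCommand } from './commands/next.command.js'; // assumed relative path

async function findNextTask(projectPath: string) {
	const program = new Command();
	// Register the command on a Commander program, as the CommandRegistry does.
	const nextCommand = NextCommand.register(program);

	// Run `next` silently against the given project, suppressing console output.
	await program.parseAsync(['next', '--silent', '--project', projectPath], {
		from: 'user'
	});

	// { task, found, tag, storageType } after a completed run, otherwise undefined.
	return nextCommand.getLastResult();
}
```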

View File

@@ -6,6 +6,7 @@
// Commands // Commands
export { ListTasksCommand } from './commands/list.command.js'; export { ListTasksCommand } from './commands/list.command.js';
export { ShowCommand } from './commands/show.command.js'; export { ShowCommand } from './commands/show.command.js';
export { NextCommand } from './commands/next.command.js';
export { AuthCommand } from './commands/auth.command.js'; export { AuthCommand } from './commands/auth.command.js';
export { ContextCommand } from './commands/context.command.js'; export { ContextCommand } from './commands/context.command.js';
export { StartCommand } from './commands/start.command.js'; export { StartCommand } from './commands/start.command.js';

View File

@@ -25,9 +25,9 @@ export function displayHeader(options: HeaderOptions = {}): void {
	let tagInfo = '';
	if (tag && tag !== 'master') {
		tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;
	} else {
		tagInfo = `🏷 tag: ${chalk.cyan('master')}`;
	}
	console.log(tagInfo);
@@ -39,7 +39,5 @@ export function displayHeader(options: HeaderOptions = {}): void {
			: `${process.cwd()}/${filePath}`;
		console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
	}
-	console.log(); // Empty line for spacing
	}
}

View File

@@ -6,7 +6,7 @@
import chalk from 'chalk';
import boxen from 'boxen';
import type { Task } from '@tm/core/types';
-import { getComplexityWithColor } from '../../utils/ui.js';
+import { getComplexityWithColor, getBoxWidth } from '../../utils/ui.js';
/**
 * Next task display options
@@ -113,7 +113,7 @@ export function displayRecommendedNextTask(
			borderColor: '#FFA500', // Orange color
			title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
			titleAlignment: 'center',
-			width: process.stdout.columns * 0.97,
+			width: getBoxWidth(0.97),
			fullscreen: false
		})
	);

View File

@@ -5,6 +5,7 @@
import chalk from 'chalk';
import boxen from 'boxen';
+import { getBoxWidth } from '../../utils/ui.js';
/**
 * Display suggested next steps section
@@ -24,7 +25,7 @@ export function displaySuggestedNextSteps(): void {
			margin: { top: 0, bottom: 1 },
			borderStyle: 'round',
			borderColor: 'gray',
-			width: process.stdout.columns * 0.97
+			width: getBoxWidth(0.97)
		}
	)
);

View File

@@ -0,0 +1,158 @@
/**
* CLI UI utilities tests
* Tests for apps/cli/src/utils/ui.ts
*/
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import type { MockInstance } from 'vitest';
import { getBoxWidth } from './ui.js';
describe('CLI UI Utilities', () => {
describe('getBoxWidth', () => {
let columnsSpy: MockInstance;
let originalDescriptor: PropertyDescriptor | undefined;
beforeEach(() => {
// Store original descriptor if it exists
originalDescriptor = Object.getOwnPropertyDescriptor(
process.stdout,
'columns'
);
// If columns doesn't exist or isn't a getter, define it as one
if (!originalDescriptor || !originalDescriptor.get) {
const currentValue = process.stdout.columns || 80;
Object.defineProperty(process.stdout, 'columns', {
get() {
return currentValue;
},
configurable: true
});
}
// Now spy on the getter
columnsSpy = vi.spyOn(process.stdout, 'columns', 'get');
});
afterEach(() => {
// Restore the spy
columnsSpy.mockRestore();
// Restore original descriptor or delete the property
if (originalDescriptor) {
Object.defineProperty(process.stdout, 'columns', originalDescriptor);
} else {
delete (process.stdout as any).columns;
}
});
it('should calculate width as percentage of terminal width', () => {
columnsSpy.mockReturnValue(100);
const width = getBoxWidth(0.9, 40);
expect(width).toBe(90);
});
it('should use default percentage of 0.9 when not specified', () => {
columnsSpy.mockReturnValue(100);
const width = getBoxWidth();
expect(width).toBe(90);
});
it('should use default minimum width of 40 when not specified', () => {
columnsSpy.mockReturnValue(30);
const width = getBoxWidth();
expect(width).toBe(40); // Should enforce minimum
});
it('should enforce minimum width when terminal is too narrow', () => {
columnsSpy.mockReturnValue(50);
const width = getBoxWidth(0.9, 60);
expect(width).toBe(60); // Should use minWidth instead of 45
});
it('should handle undefined process.stdout.columns', () => {
columnsSpy.mockReturnValue(undefined);
const width = getBoxWidth(0.9, 40);
// Should fall back to 80 columns: Math.floor(80 * 0.9) = 72
expect(width).toBe(72);
});
it('should handle custom percentage values', () => {
columnsSpy.mockReturnValue(100);
expect(getBoxWidth(0.95, 40)).toBe(95);
expect(getBoxWidth(0.8, 40)).toBe(80);
expect(getBoxWidth(0.5, 40)).toBe(50);
});
it('should handle custom minimum width values', () => {
columnsSpy.mockReturnValue(60);
expect(getBoxWidth(0.9, 70)).toBe(70); // 60 * 0.9 = 54, but min is 70
expect(getBoxWidth(0.9, 50)).toBe(54); // 60 * 0.9 = 54, min is 50
});
it('should floor the calculated width', () => {
columnsSpy.mockReturnValue(99);
const width = getBoxWidth(0.9, 40);
// 99 * 0.9 = 89.1, should floor to 89
expect(width).toBe(89);
});
it('should match warning box width calculation', () => {
// Test the specific case from displayWarning()
columnsSpy.mockReturnValue(80);
const width = getBoxWidth(0.9, 40);
expect(width).toBe(72);
});
it('should match table width calculation', () => {
// Test the specific case from createTaskTable()
columnsSpy.mockReturnValue(111);
const width = getBoxWidth(0.9, 100);
// 111 * 0.9 = 99.9, floor to 99, but max(99, 100) = 100
expect(width).toBe(100);
});
it('should match recommended task box width calculation', () => {
// Test the specific case from displayRecommendedNextTask()
columnsSpy.mockReturnValue(120);
const width = getBoxWidth(0.97, 40);
// 120 * 0.97 = 116.4, floor to 116
expect(width).toBe(116);
});
it('should handle edge case of zero terminal width', () => {
columnsSpy.mockReturnValue(0);
const width = getBoxWidth(0.9, 40);
// When columns is 0, it uses fallback of 80: Math.floor(80 * 0.9) = 72
expect(width).toBe(72);
});
it('should handle very large terminal widths', () => {
columnsSpy.mockReturnValue(1000);
const width = getBoxWidth(0.9, 40);
expect(width).toBe(900);
});
it('should handle very small percentages', () => {
columnsSpy.mockReturnValue(100);
const width = getBoxWidth(0.1, 5);
// 100 * 0.1 = 10, which is greater than min 5
expect(width).toBe(10);
});
it('should handle percentage of 1.0 (100%)', () => {
columnsSpy.mockReturnValue(80);
const width = getBoxWidth(1.0, 40);
expect(width).toBe(80);
});
it('should consistently return same value for same inputs', () => {
columnsSpy.mockReturnValue(100);
const width1 = getBoxWidth(0.9, 40);
const width2 = getBoxWidth(0.9, 40);
const width3 = getBoxWidth(0.9, 40);
expect(width1).toBe(width2);
expect(width2).toBe(width3);
});
});
});

View File

@@ -126,6 +126,20 @@ export function getComplexityWithScore(complexity: number | undefined): string {
	return color(`${complexity}/10 (${label})`);
}
/**
* Calculate box width as percentage of terminal width
* @param percentage - Percentage of terminal width to use (default: 0.9)
* @param minWidth - Minimum width to enforce (default: 40)
* @returns Calculated box width
*/
export function getBoxWidth(
percentage: number = 0.9,
minWidth: number = 40
): number {
const terminalWidth = process.stdout.columns || 80;
return Math.max(Math.floor(terminalWidth * percentage), minWidth);
}
/**
 * Truncate text to specified length
 */
@@ -176,6 +190,8 @@ export function displayBanner(title: string = 'Task Master'): void {
 * Display an error message (matches scripts/modules/ui.js style)
 */
export function displayError(message: string, details?: string): void {
	const boxWidth = getBoxWidth();
	console.error(
		boxen(
			chalk.red.bold('X Error: ') +
@@ -184,7 +200,8 @@ export function displayError(message: string, details?: string): void {
			{
				padding: 1,
				borderStyle: 'round',
-				borderColor: 'red'
+				borderColor: 'red',
+				width: boxWidth
			}
		)
	);
@@ -194,13 +211,16 @@ export function displayError(message: string, details?: string): void {
 * Display a success message
 */
export function displaySuccess(message: string): void {
	const boxWidth = getBoxWidth();
	console.log(
		boxen(
			chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
			{
				padding: 1,
				borderStyle: 'round',
-				borderColor: 'green'
+				borderColor: 'green',
+				width: boxWidth
			}
		)
	);
@@ -210,11 +230,14 @@ export function displaySuccess(message: string): void {
 * Display a warning message
 */
export function displayWarning(message: string): void {
	const boxWidth = getBoxWidth();
	console.log(
		boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
			padding: 1,
			borderStyle: 'round',
-			borderColor: 'yellow'
+			borderColor: 'yellow',
+			width: boxWidth
		})
	);
}
@@ -223,11 +246,14 @@ export function displayWarning(message: string): void {
 * Display info message
 */
export function displayInfo(message: string): void {
	const boxWidth = getBoxWidth();
	console.log(
		boxen(chalk.blue.bold('i ') + chalk.white(message), {
			padding: 1,
			borderStyle: 'round',
-			borderColor: 'blue'
+			borderColor: 'blue',
+			width: boxWidth
		})
	);
}
@@ -282,23 +308,23 @@ export function createTaskTable(
	} = options || {};
	// Calculate dynamic column widths based on terminal width
-	const terminalWidth = process.stdout.columns * 0.9 || 100;
+	const tableWidth = getBoxWidth(0.9, 100);
	// Adjust column widths to better match the original layout
	const baseColWidths = showComplexity
		? [
-				Math.floor(terminalWidth * 0.1),
-				Math.floor(terminalWidth * 0.4),
-				Math.floor(terminalWidth * 0.15),
-				Math.floor(terminalWidth * 0.1),
-				Math.floor(terminalWidth * 0.2),
-				Math.floor(terminalWidth * 0.1)
+				Math.floor(tableWidth * 0.1),
+				Math.floor(tableWidth * 0.4),
+				Math.floor(tableWidth * 0.15),
+				Math.floor(tableWidth * 0.1),
+				Math.floor(tableWidth * 0.2),
+				Math.floor(tableWidth * 0.1)
			] // ID, Title, Status, Priority, Dependencies, Complexity
		: [
-				Math.floor(terminalWidth * 0.08),
-				Math.floor(terminalWidth * 0.4),
-				Math.floor(terminalWidth * 0.18),
-				Math.floor(terminalWidth * 0.12),
-				Math.floor(terminalWidth * 0.2)
+				Math.floor(tableWidth * 0.08),
+				Math.floor(tableWidth * 0.4),
+				Math.floor(tableWidth * 0.18),
+				Math.floor(tableWidth * 0.12),
+				Math.floor(tableWidth * 0.2)
			]; // ID, Title, Status, Priority, Dependencies
	const headers = [
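For reference, a small hedged sketch of how the new `getBoxWidth` helper is meant to be used with `boxen`; the message text and import path here are illustrative assumptions rather than part of the diff.
```typescript
// Illustrative sketch: any boxen call can share the same width rule via getBoxWidth.
import boxen from 'boxen';
import chalk from 'chalk';
import { getBoxWidth } from './ui.js'; // assumed import path

console.log(
	boxen(chalk.white('Example message'), {
		padding: 1,
		borderStyle: 'round',
		borderColor: 'green',
		// 90% of the terminal width, never narrower than 40 columns.
		width: getBoxWidth(0.9, 40)
	})
);
```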

View File

@@ -1,5 +1,7 @@
# docs
## 0.0.6
## 0.0.5
## 0.0.4

View File

@@ -13,6 +13,126 @@ The MCP interface is built on top of the `fastmcp` library and registers a set o
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
## Configurable Tool Loading
To optimize LLM context usage, you can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This is particularly useful when working with LLMs that have context limits or when you only need a subset of tools.
### Configuration Modes
#### All Tools (Default)
Loads all 36 available tools. Use when you need full Task Master functionality.
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "all",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
If `TASK_MASTER_TOOLS` is not set, all tools are loaded by default.
#### Core Tools (Lean Mode)
Loads only 7 essential tools for daily development. Ideal for minimal context usage.
**Core tools included:**
- `get_tasks` - List all tasks
- `next_task` - Find the next task to work on
- `get_task` - Get detailed task information
- `set_task_status` - Update task status
- `update_subtask` - Add implementation notes
- `parse_prd` - Generate tasks from PRD
- `expand_task` - Break down tasks into subtasks
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "core",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
You can also use `"lean"` as an alias for `"core"`.
#### Standard Tools
Loads 15 commonly used tools. Balances functionality with context efficiency.
**Standard tools include all core tools plus:**
- `initialize_project` - Set up new projects
- `analyze_project_complexity` - Analyze task complexity
- `expand_all` - Expand all eligible tasks
- `add_subtask` - Add subtasks manually
- `remove_task` - Remove tasks
- `generate` - Generate task markdown files
- `add_task` - Create new tasks
- `complexity_report` - View complexity analysis
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
#### Custom Tool Selection
Specify exactly which tools to load using a comma-separated list. Tool names are case-insensitive and support both underscores and hyphens.
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "get_tasks,next_task,set_task_status,update_subtask",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
### Choosing the Right Configuration
- **Use `core`/`lean`**: When working with basic task management workflows or when context limits are strict
- **Use `standard`**: For most development workflows that include task creation and analysis
- **Use `all`**: When you need full functionality including tag management, dependencies, and advanced features
- **Use custom list**: When you have specific tool requirements or want to experiment with minimal sets
### Verification
When the MCP server starts, it logs which tools were loaded:
```
Task Master MCP Server starting...
Tool mode configuration: standard
Loading standard tools
Registering 15 MCP tools (mode: standard)
Successfully registered 15/15 tools
```
## Tool Categories
The MCP tools can be categorized in the same way as the core functionalities:

View File

@@ -37,6 +37,25 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
}
```
<Tip>
**Optimize Context Usage**: You can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This helps reduce LLM context usage by only loading the tools you need.
Options:
- `all` (default) - All 36 tools
- `standard` - 15 commonly used tools
- `core` or `lean` - 7 essential tools
Example:
```json
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
```
See the [MCP Tools documentation](/capabilities/mcp#configurable-tool-loading) for details.
</Tip>
### CLI Usage: `.env` File
Create a `.env` file in your project root and include the keys for the providers you plan to use:

View File

@@ -1,6 +1,6 @@
{
	"name": "docs",
-	"version": "0.0.5",
+	"version": "0.0.6",
	"private": true,
	"description": "Task Master documentation powered by Mintlify",
	"scripts": {

View File

@@ -1,5 +1,14 @@
# Change Log
## 0.25.6
## 0.25.6-rc.0
### Patch Changes
- Updated dependencies [[`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5), [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0), [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48), [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815), [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654)]:
- task-master-ai@0.29.0-rc.0
## 0.25.5
### Patch Changes

View File

@@ -3,7 +3,7 @@
"private": true, "private": true,
"displayName": "TaskMaster", "displayName": "TaskMaster",
"description": "A visual Kanban board interface for TaskMaster projects in VS Code", "description": "A visual Kanban board interface for TaskMaster projects in VS Code",
"version": "0.25.5", "version": "0.25.6",
"publisher": "Hamster", "publisher": "Hamster",
"icon": "assets/icon.png", "icon": "assets/icon.png",
"engines": { "engines": {
@@ -239,9 +239,6 @@
"watch:css": "npx @tailwindcss/cli -i ./src/webview/index.css -o ./dist/index.css --watch", "watch:css": "npx @tailwindcss/cli -i ./src/webview/index.css -o ./dist/index.css --watch",
"check-types": "tsc --noEmit" "check-types": "tsc --noEmit"
}, },
"dependencies": {
"task-master-ai": "*"
},
"devDependencies": { "devDependencies": {
"@dnd-kit/core": "^6.3.1", "@dnd-kit/core": "^6.3.1",
"@dnd-kit/modifiers": "^9.0.0", "@dnd-kit/modifiers": "^9.0.0",
@@ -277,7 +274,8 @@
"tailwind-merge": "^3.3.1", "tailwind-merge": "^3.3.1",
"tailwindcss": "4.1.11", "tailwindcss": "4.1.11",
"typescript": "^5.9.2", "typescript": "^5.9.2",
"@tm/core": "*" "@tm/core": "*",
"task-master-ai": "*"
}, },
"overrides": { "overrides": {
"glob@<8": "^10.4.5", "glob@<8": "^10.4.5",

View File

@@ -59,6 +59,76 @@ Taskmaster uses two primary methods for configuration:
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
## MCP Tool Loading Configuration
### TASK_MASTER_TOOLS Environment Variable
The `TASK_MASTER_TOOLS` environment variable controls which tools are loaded by the Task Master MCP server. This allows you to optimize token usage based on your workflow needs.
> Note
> Prefer setting `TASK_MASTER_TOOLS` in your MCP client's `env` block (e.g., `.cursor/mcp.json`) or in CI/deployment env. The `.env` file is reserved for API keys/endpoints; avoid persisting non-secret settings there.
#### Configuration Options
- **`all`** (default): Loads all 36 available tools (~21,000 tokens)
- Best for: Users who need the complete feature set
- Use when: Working with complex projects requiring all Task Master features
- Backward compatibility: This is the default to maintain compatibility with existing installations
- **`standard`**: Loads 15 commonly used tools (~10,000 tokens, 50% reduction)
- Best for: Regular task management workflows
- Tools included: All core tools plus project initialization, complexity analysis, task generation, and more
- Use when: You need a balanced set of features with reduced token usage
- **`core`** (or `lean`): Loads 7 essential tools (~5,000 tokens, 70% reduction)
- Best for: Daily development with minimal token overhead
- Tools included: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- Use when: Working in large contexts where token usage is critical
- Note: "lean" is an alias for "core" (same tools, token estimate and recommended use). You can refer to it as either "core" or "lean" when configuring.
- **Custom list**: Comma-separated list of specific tool names
- Best for: Specialized workflows requiring specific tools
- Example: `"get_tasks,next_task,set_task_status"`
- Use when: You know exactly which tools you need
#### How to Configure
1. **In MCP configuration files** (`.cursor/mcp.json`, `.vscode/mcp.json`, etc.) - **Recommended**:
```jsonc
{
  "mcpServers": {
    "task-master-ai": {
      "env": {
        "TASK_MASTER_TOOLS": "standard" // Set tool loading mode
        // API keys can still use .env for security
      }
    }
  }
}
```
2. **Via Claude Code CLI**:
```bash
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="core" \
-- npx -y task-master-ai@latest
```
3. **In CI/deployment environment variables**:
```bash
export TASK_MASTER_TOOLS="standard"
node mcp-server/server.js
```
#### Tool Loading Behavior
- When `TASK_MASTER_TOOLS` is unset or empty, the system defaults to `"all"`
- Invalid tool names in a user-specified list are ignored (a warning is emitted for each)
- If every tool name in a custom list is invalid, the system falls back to `"all"`
- Tool names are case-insensitive (e.g., `"CORE"`, `"core"`, and `"Core"` are treated identically)
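For illustration, here is a minimal sketch of how a custom list could be resolved under these rules (simplified and not the shipped implementation; the registry below is trimmed to a few names):

```js
// Illustrative only: mirrors the behavior described above
// (case-insensitive, "-" and "_" interchangeable, unknown names skipped,
// empty result falls back to loading everything).
const registry = ['get_tasks', 'next_task', 'set_task_status', 'response-language'];

function resolveCustomList(envValue) {
	const requested = envValue
		.split(',')
		.map((name) => name.trim().toLowerCase())
		.filter(Boolean);

	const resolved = new Set();
	for (const name of requested) {
		const match = registry.find(
			(key) =>
				key === name ||
				key === name.replace(/_/g, '-') ||
				key === name.replace(/-/g, '_')
		);
		if (match) {
			resolved.add(match);
		} else {
			console.warn(`Unknown tool specified: "${name}"`);
		}
	}
	return resolved.size > 0 ? [...resolved] : [...registry];
}

console.log(resolveCustomList('GET_TASKS, next-task, response_language, bogus'));
// -> [ 'get_tasks', 'next_task', 'response-language' ] plus a warning for "bogus"
```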
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
- Used **exclusively** for sensitive API keys and specific endpoint URLs.

images/hamster-hiring.png (new binary file, 130 KiB; contents not shown)

View File

@@ -4,12 +4,14 @@ import dotenv from 'dotenv';
import { fileURLToPath } from 'url'; import { fileURLToPath } from 'url';
import fs from 'fs'; import fs from 'fs';
import logger from './logger.js'; import logger from './logger.js';
import { registerTaskMasterTools } from './tools/index.js'; import {
registerTaskMasterTools,
getToolsConfiguration
} from './tools/index.js';
import ProviderRegistry from '../../src/provider-registry/index.js'; import ProviderRegistry from '../../src/provider-registry/index.js';
import { MCPProvider } from './providers/mcp-provider.js'; import { MCPProvider } from './providers/mcp-provider.js';
import packageJson from '../../package.json' with { type: 'json' }; import packageJson from '../../package.json' with { type: 'json' };
// Load environment variables
dotenv.config(); dotenv.config();
// Constants // Constants
@@ -29,12 +31,10 @@ class TaskMasterMCPServer {
this.server = new FastMCP(this.options); this.server = new FastMCP(this.options);
this.initialized = false; this.initialized = false;
// Bind methods
this.init = this.init.bind(this); this.init = this.init.bind(this);
this.start = this.start.bind(this); this.start = this.start.bind(this);
this.stop = this.stop.bind(this); this.stop = this.stop.bind(this);
// Setup logging
this.logger = logger; this.logger = logger;
} }
@@ -44,8 +44,34 @@ class TaskMasterMCPServer {
async init() { async init() {
if (this.initialized) return; if (this.initialized) return;
// Pass the manager instance to the tool registration function const normalizedToolMode = getToolsConfiguration();
registerTaskMasterTools(this.server, this.asyncManager);
this.logger.info('Task Master MCP Server starting...');
this.logger.info(`Tool mode configuration: ${normalizedToolMode}`);
const registrationResult = registerTaskMasterTools(
this.server,
normalizedToolMode
);
this.logger.info(
`Normalized tool mode: ${registrationResult.normalizedMode}`
);
this.logger.info(
`Registered ${registrationResult.registeredTools.length} tools successfully`
);
if (registrationResult.registeredTools.length > 0) {
this.logger.debug(
`Registered tools: ${registrationResult.registeredTools.join(', ')}`
);
}
if (registrationResult.failedTools.length > 0) {
this.logger.warn(
`Failed to register ${registrationResult.failedTools.length} tools: ${registrationResult.failedTools.join(', ')}`
);
}
this.initialized = true; this.initialized = true;

View File

@@ -3,109 +3,238 @@
* Export all Task Master CLI tools for MCP server * Export all Task Master CLI tools for MCP server
*/ */
import { registerListTasksTool } from './get-tasks.js';
import logger from '../logger.js'; import logger from '../logger.js';
import { registerSetTaskStatusTool } from './set-task-status.js'; import {
import { registerParsePRDTool } from './parse-prd.js'; toolRegistry,
import { registerUpdateTool } from './update.js'; coreTools,
import { registerUpdateTaskTool } from './update-task.js'; standardTools,
import { registerUpdateSubtaskTool } from './update-subtask.js'; getAvailableTools,
import { registerGenerateTool } from './generate.js'; getToolRegistration,
import { registerShowTaskTool } from './get-task.js'; isValidTool
import { registerNextTaskTool } from './next-task.js'; } from './tool-registry.js';
import { registerExpandTaskTool } from './expand-task.js';
import { registerAddTaskTool } from './add-task.js';
import { registerAddSubtaskTool } from './add-subtask.js';
import { registerRemoveSubtaskTool } from './remove-subtask.js';
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
import { registerClearSubtasksTool } from './clear-subtasks.js';
import { registerExpandAllTool } from './expand-all.js';
import { registerRemoveDependencyTool } from './remove-dependency.js';
import { registerValidateDependenciesTool } from './validate-dependencies.js';
import { registerFixDependenciesTool } from './fix-dependencies.js';
import { registerComplexityReportTool } from './complexity-report.js';
import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
import { registerUseTagTool } from './use-tag.js';
import { registerRenameTagTool } from './rename-tag.js';
import { registerCopyTagTool } from './copy-tag.js';
import { registerResearchTool } from './research.js';
import { registerRulesTool } from './rules.js';
import { registerScopeUpTool } from './scope-up.js';
import { registerScopeDownTool } from './scope-down.js';
/** /**
* Register all Task Master tools with the MCP server * Helper function to safely read and normalize the TASK_MASTER_TOOLS environment variable
* @param {Object} server - FastMCP server instance * @returns {string} The tools configuration string, defaults to 'all'
*/ */
export function registerTaskMasterTools(server) { export function getToolsConfiguration() {
const rawValue = process.env.TASK_MASTER_TOOLS;
if (!rawValue || rawValue.trim() === '') {
logger.debug('No TASK_MASTER_TOOLS env var found, defaulting to "all"');
return 'all';
}
const normalizedValue = rawValue.trim();
logger.debug(`TASK_MASTER_TOOLS env var: "${normalizedValue}"`);
return normalizedValue;
}
/**
* Register Task Master tools with the MCP server
* Supports selective tool loading via TASK_MASTER_TOOLS environment variable
* @param {Object} server - FastMCP server instance
* @param {string} toolMode - The tool mode configuration (defaults to 'all')
* @returns {Object} Object containing registered tools, failed tools, and normalized mode
*/
export function registerTaskMasterTools(server, toolMode = 'all') {
const registeredTools = [];
const failedTools = [];
try { try {
// Register each tool in a logical workflow order const enabledTools = toolMode.trim();
let toolsToRegister = [];
// Group 1: Initialization & Setup const lowerCaseConfig = enabledTools.toLowerCase();
registerInitializeProjectTool(server);
registerModelsTool(server);
registerRulesTool(server);
registerParsePRDTool(server);
// Group 2: Task Analysis & Expansion switch (lowerCaseConfig) {
registerAnalyzeProjectComplexityTool(server); case 'all':
registerExpandTaskTool(server); toolsToRegister = Object.keys(toolRegistry);
registerExpandAllTool(server); logger.info('Loading all available tools');
registerScopeUpTool(server); break;
registerScopeDownTool(server); case 'core':
case 'lean':
toolsToRegister = coreTools;
logger.info('Loading core tools only');
break;
case 'standard':
toolsToRegister = standardTools;
logger.info('Loading standard tools');
break;
default:
const requestedTools = enabledTools
.split(',')
.map((t) => t.trim())
.filter((t) => t.length > 0);
// Group 3: Task Listing & Viewing const uniqueTools = new Set();
registerListTasksTool(server); const unknownTools = [];
registerShowTaskTool(server);
registerNextTaskTool(server);
registerComplexityReportTool(server);
// Group 4: Task Status & Management const aliasMap = {
registerSetTaskStatusTool(server); response_language: 'response-language'
registerGenerateTool(server); };
// Group 5: Task Creation & Modification for (const toolName of requestedTools) {
registerAddTaskTool(server); let resolvedName = null;
registerAddSubtaskTool(server); const lowerToolName = toolName.toLowerCase();
registerUpdateTool(server);
registerUpdateTaskTool(server);
registerUpdateSubtaskTool(server);
registerRemoveTaskTool(server);
registerRemoveSubtaskTool(server);
registerClearSubtasksTool(server);
registerMoveTaskTool(server);
// Group 6: Dependency Management if (aliasMap[lowerToolName]) {
registerAddDependencyTool(server); const aliasTarget = aliasMap[lowerToolName];
registerRemoveDependencyTool(server); for (const registryKey of Object.keys(toolRegistry)) {
registerValidateDependenciesTool(server); if (registryKey.toLowerCase() === aliasTarget.toLowerCase()) {
registerFixDependenciesTool(server); resolvedName = registryKey;
registerResponseLanguageTool(server); break;
}
}
}
// Group 7: Tag Management if (!resolvedName) {
registerListTagsTool(server); for (const registryKey of Object.keys(toolRegistry)) {
registerAddTagTool(server); if (registryKey.toLowerCase() === lowerToolName) {
registerDeleteTagTool(server); resolvedName = registryKey;
registerUseTagTool(server); break;
registerRenameTagTool(server); }
registerCopyTagTool(server); }
}
// Group 8: Research Features if (!resolvedName) {
registerResearchTool(server); const withHyphens = lowerToolName.replace(/_/g, '-');
for (const registryKey of Object.keys(toolRegistry)) {
if (registryKey.toLowerCase() === withHyphens) {
resolvedName = registryKey;
break;
}
}
}
if (!resolvedName) {
const withUnderscores = lowerToolName.replace(/-/g, '_');
for (const registryKey of Object.keys(toolRegistry)) {
if (registryKey.toLowerCase() === withUnderscores) {
resolvedName = registryKey;
break;
}
}
}
if (resolvedName) {
uniqueTools.add(resolvedName);
logger.debug(`Resolved tool "${toolName}" to "${resolvedName}"`);
} else {
unknownTools.push(toolName);
logger.warn(`Unknown tool specified: "${toolName}"`);
}
}
toolsToRegister = Array.from(uniqueTools);
if (unknownTools.length > 0) {
logger.warn(`Unknown tools: ${unknownTools.join(', ')}`);
}
if (toolsToRegister.length === 0) {
logger.warn(
`No valid tools found in custom list. Loading all tools as fallback.`
);
toolsToRegister = Object.keys(toolRegistry);
} else {
logger.info(
`Loading ${toolsToRegister.length} custom tools from list (${uniqueTools.size} unique after normalization)`
);
}
break;
}
logger.info(
`Registering ${toolsToRegister.length} MCP tools (mode: ${enabledTools})`
);
toolsToRegister.forEach((toolName) => {
try {
const registerFunction = getToolRegistration(toolName);
if (registerFunction) {
registerFunction(server);
logger.debug(`Registered tool: ${toolName}`);
registeredTools.push(toolName);
} else {
logger.warn(`Tool ${toolName} not found in registry`);
failedTools.push(toolName);
}
} catch (error) {
if (error.message && error.message.includes('already registered')) {
logger.debug(`Tool ${toolName} already registered, skipping`);
registeredTools.push(toolName);
} else {
logger.error(`Failed to register tool ${toolName}: ${error.message}`);
failedTools.push(toolName);
}
}
});
logger.info(
`Successfully registered ${registeredTools.length}/${toolsToRegister.length} tools`
);
if (failedTools.length > 0) {
logger.warn(`Failed tools: ${failedTools.join(', ')}`);
}
return {
registeredTools,
failedTools,
normalizedMode: lowerCaseConfig
};
} catch (error) { } catch (error) {
logger.error(`Error registering Task Master tools: ${error.message}`); logger.error(
throw error; `Error parsing TASK_MASTER_TOOLS environment variable: ${error.message}`
);
logger.info('Falling back to loading all tools');
const fallbackTools = Object.keys(toolRegistry);
for (const toolName of fallbackTools) {
const registerFunction = getToolRegistration(toolName);
if (registerFunction) {
try {
registerFunction(server);
registeredTools.push(toolName);
} catch (err) {
if (err.message && err.message.includes('already registered')) {
logger.debug(
`Fallback tool ${toolName} already registered, skipping`
);
registeredTools.push(toolName);
} else {
logger.warn(
`Failed to register fallback tool '${toolName}': ${err.message}`
);
failedTools.push(toolName);
}
}
} else {
logger.warn(`Tool '${toolName}' not found in registry`);
failedTools.push(toolName);
}
}
logger.info(
`Successfully registered ${registeredTools.length} fallback tools`
);
return {
registeredTools,
failedTools,
normalizedMode: 'all'
};
} }
} }
export {
toolRegistry,
coreTools,
standardTools,
getAvailableTools,
getToolRegistration,
isValidTool
};
export default { export default {
registerTaskMasterTools registerTaskMasterTools
}; };
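Taken together, a caller can treat the return value as a registration report. A minimal usage sketch, assuming an already-constructed FastMCP `server` instance as in the `server.js` changes above (the wrapper name `registerWithReport` is illustrative):

```js
import {
	registerTaskMasterTools,
	getToolsConfiguration
} from './tools/index.js';

// `server` is assumed to be a FastMCP instance created elsewhere.
export function registerWithReport(server) {
	const mode = getToolsConfiguration(); // "all", "standard", "core"/"lean", or a custom list
	const { registeredTools, failedTools, normalizedMode } = registerTaskMasterTools(
		server,
		mode
	);
	console.log(
		`mode=${normalizedMode}, registered=${registeredTools.length}, failed=${failedTools.length}`
	);
	return failedTools.length === 0;
}
```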

View File

@@ -0,0 +1,168 @@
/**
* tool-registry.js
* Tool Registry Object Structure - Maps all 36 tool names to registration functions
*/
import { registerListTasksTool } from './get-tasks.js';
import { registerSetTaskStatusTool } from './set-task-status.js';
import { registerParsePRDTool } from './parse-prd.js';
import { registerUpdateTool } from './update.js';
import { registerUpdateTaskTool } from './update-task.js';
import { registerUpdateSubtaskTool } from './update-subtask.js';
import { registerGenerateTool } from './generate.js';
import { registerShowTaskTool } from './get-task.js';
import { registerNextTaskTool } from './next-task.js';
import { registerExpandTaskTool } from './expand-task.js';
import { registerAddTaskTool } from './add-task.js';
import { registerAddSubtaskTool } from './add-subtask.js';
import { registerRemoveSubtaskTool } from './remove-subtask.js';
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
import { registerClearSubtasksTool } from './clear-subtasks.js';
import { registerExpandAllTool } from './expand-all.js';
import { registerRemoveDependencyTool } from './remove-dependency.js';
import { registerValidateDependenciesTool } from './validate-dependencies.js';
import { registerFixDependenciesTool } from './fix-dependencies.js';
import { registerComplexityReportTool } from './complexity-report.js';
import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
import { registerUseTagTool } from './use-tag.js';
import { registerRenameTagTool } from './rename-tag.js';
import { registerCopyTagTool } from './copy-tag.js';
import { registerResearchTool } from './research.js';
import { registerRulesTool } from './rules.js';
import { registerScopeUpTool } from './scope-up.js';
import { registerScopeDownTool } from './scope-down.js';
/**
* Comprehensive tool registry mapping all 36 tool names to their registration functions
* Used for dynamic tool registration and validation
*/
export const toolRegistry = {
initialize_project: registerInitializeProjectTool,
models: registerModelsTool,
rules: registerRulesTool,
parse_prd: registerParsePRDTool,
'response-language': registerResponseLanguageTool,
analyze_project_complexity: registerAnalyzeProjectComplexityTool,
expand_task: registerExpandTaskTool,
expand_all: registerExpandAllTool,
scope_up_task: registerScopeUpTool,
scope_down_task: registerScopeDownTool,
get_tasks: registerListTasksTool,
get_task: registerShowTaskTool,
next_task: registerNextTaskTool,
complexity_report: registerComplexityReportTool,
set_task_status: registerSetTaskStatusTool,
generate: registerGenerateTool,
add_task: registerAddTaskTool,
add_subtask: registerAddSubtaskTool,
update: registerUpdateTool,
update_task: registerUpdateTaskTool,
update_subtask: registerUpdateSubtaskTool,
remove_task: registerRemoveTaskTool,
remove_subtask: registerRemoveSubtaskTool,
clear_subtasks: registerClearSubtasksTool,
move_task: registerMoveTaskTool,
add_dependency: registerAddDependencyTool,
remove_dependency: registerRemoveDependencyTool,
validate_dependencies: registerValidateDependenciesTool,
fix_dependencies: registerFixDependenciesTool,
list_tags: registerListTagsTool,
add_tag: registerAddTagTool,
delete_tag: registerDeleteTagTool,
use_tag: registerUseTagTool,
rename_tag: registerRenameTagTool,
copy_tag: registerCopyTagTool,
research: registerResearchTool
};
/**
* Core tools array containing the 7 essential tools for daily development
* These represent the minimal set needed for basic task management operations
*/
export const coreTools = [
'get_tasks',
'next_task',
'get_task',
'set_task_status',
'update_subtask',
'parse_prd',
'expand_task'
];
/**
* Standard tools array containing the 15 most commonly used tools
* Includes all core tools plus frequently used additional tools
*/
export const standardTools = [
...coreTools,
'initialize_project',
'analyze_project_complexity',
'expand_all',
'add_subtask',
'remove_task',
'generate',
'add_task',
'complexity_report'
];
/**
* Get all available tool names
* @returns {string[]} Array of tool names
*/
export function getAvailableTools() {
return Object.keys(toolRegistry);
}
/**
* Get tool counts for all categories
* @returns {Object} Object with core, standard, and total counts
*/
export function getToolCounts() {
return {
core: coreTools.length,
standard: standardTools.length,
total: Object.keys(toolRegistry).length
};
}
/**
* Get tool arrays organized by category
* @returns {Object} Object with arrays for each category
*/
export function getToolCategories() {
const allTools = Object.keys(toolRegistry);
return {
core: [...coreTools],
standard: [...standardTools],
all: [...allTools],
extended: allTools.filter((t) => !standardTools.includes(t))
};
}
/**
* Get registration function for a specific tool
* @param {string} toolName - Name of the tool
* @returns {Function|null} Registration function or null if not found
*/
export function getToolRegistration(toolName) {
return toolRegistry[toolName] || null;
}
/**
* Validate if a tool exists in the registry
* @param {string} toolName - Name of the tool
* @returns {boolean} True if tool exists
*/
export function isValidTool(toolName) {
return toolName in toolRegistry;
}
export default toolRegistry;
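The helper exports above also make it easy to inspect the presets without starting a server. A short sketch using only functions defined in this file:

```js
import {
	coreTools,
	standardTools,
	getToolCounts,
	getToolCategories,
	isValidTool
} from './tool-registry.js';

// Preset sizes: { core: 7, standard: 15, total: 36 }
console.log(getToolCounts());

// Tools that "standard" adds on top of "core"
console.log(standardTools.filter((tool) => !coreTools.includes(tool)));

// Tools outside "standard" (the "extended" bucket)
console.log(getToolCategories().extended);

// Quick validation of a user-supplied name
console.log(isValidTool('expand_task')); // true
console.log(isValidTool('does_not_exist')); // false
```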

109
package-lock.json generated
View File

@@ -1,12 +1,12 @@
{ {
"name": "task-master-ai", "name": "task-master-ai",
"version": "0.28.0", "version": "0.29.0",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "task-master-ai", "name": "task-master-ai",
"version": "0.28.0", "version": "0.29.0",
"license": "MIT WITH Commons-Clause", "license": "MIT WITH Commons-Clause",
"workspaces": [ "workspaces": [
"apps/*", "apps/*",
@@ -125,16 +125,13 @@
} }
}, },
"apps/docs": { "apps/docs": {
"version": "0.0.5", "version": "0.0.6",
"devDependencies": { "devDependencies": {
"mintlify": "^4.2.111" "mintlify": "^4.2.111"
} }
}, },
"apps/extension": { "apps/extension": {
"version": "0.25.5", "version": "0.25.6",
"dependencies": {
"task-master-ai": "*"
},
"devDependencies": { "devDependencies": {
"@dnd-kit/core": "^6.3.1", "@dnd-kit/core": "^6.3.1",
"@dnd-kit/modifiers": "^9.0.0", "@dnd-kit/modifiers": "^9.0.0",
@@ -170,6 +167,7 @@
"react-dom": "^19.0.0", "react-dom": "^19.0.0",
"tailwind-merge": "^3.3.1", "tailwind-merge": "^3.3.1",
"tailwindcss": "4.1.11", "tailwindcss": "4.1.11",
"task-master-ai": "*",
"typescript": "^5.9.2" "typescript": "^5.9.2"
}, },
"engines": { "engines": {
@@ -178,6 +176,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/amazon-bedrock": { "apps/extension/node_modules/@ai-sdk/amazon-bedrock": {
"version": "2.2.12", "version": "2.2.12",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -195,6 +194,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/anthropic": { "apps/extension/node_modules/@ai-sdk/anthropic": {
"version": "1.2.12", "version": "1.2.12",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -209,6 +209,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/azure": { "apps/extension/node_modules/@ai-sdk/azure": {
"version": "1.3.25", "version": "1.3.25",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/openai": "1.3.24", "@ai-sdk/openai": "1.3.24",
@@ -224,6 +225,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/google": { "apps/extension/node_modules/@ai-sdk/google": {
"version": "1.2.22", "version": "1.2.22",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -238,6 +240,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/google-vertex": { "apps/extension/node_modules/@ai-sdk/google-vertex": {
"version": "2.2.27", "version": "2.2.27",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/anthropic": "1.2.12", "@ai-sdk/anthropic": "1.2.12",
@@ -255,6 +258,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/groq": { "apps/extension/node_modules/@ai-sdk/groq": {
"version": "1.2.9", "version": "1.2.9",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -269,6 +273,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/mistral": { "apps/extension/node_modules/@ai-sdk/mistral": {
"version": "1.2.8", "version": "1.2.8",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -283,6 +288,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/openai": { "apps/extension/node_modules/@ai-sdk/openai": {
"version": "1.3.24", "version": "1.3.24",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -297,6 +303,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/openai-compatible": { "apps/extension/node_modules/@ai-sdk/openai-compatible": {
"version": "0.2.16", "version": "0.2.16",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -311,6 +318,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/perplexity": { "apps/extension/node_modules/@ai-sdk/perplexity": {
"version": "1.1.9", "version": "1.1.9",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -325,6 +333,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/provider": { "apps/extension/node_modules/@ai-sdk/provider": {
"version": "1.1.3", "version": "1.1.3",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"json-schema": "^0.4.0" "json-schema": "^0.4.0"
@@ -335,6 +344,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/provider-utils": { "apps/extension/node_modules/@ai-sdk/provider-utils": {
"version": "2.2.8", "version": "2.2.8",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -350,6 +360,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/react": { "apps/extension/node_modules/@ai-sdk/react": {
"version": "1.2.12", "version": "1.2.12",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider-utils": "2.2.8", "@ai-sdk/provider-utils": "2.2.8",
@@ -372,6 +383,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/ui-utils": { "apps/extension/node_modules/@ai-sdk/ui-utils": {
"version": "1.2.11", "version": "1.2.11",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -387,6 +399,7 @@
}, },
"apps/extension/node_modules/@ai-sdk/xai": { "apps/extension/node_modules/@ai-sdk/xai": {
"version": "1.2.18", "version": "1.2.18",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/openai-compatible": "0.2.16", "@ai-sdk/openai-compatible": "0.2.16",
@@ -402,6 +415,7 @@
}, },
"apps/extension/node_modules/@openrouter/ai-sdk-provider": { "apps/extension/node_modules/@openrouter/ai-sdk-provider": {
"version": "0.4.6", "version": "0.4.6",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.0.9", "@ai-sdk/provider": "1.0.9",
@@ -416,6 +430,7 @@
}, },
"apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider": { "apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider": {
"version": "1.0.9", "version": "1.0.9",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"json-schema": "^0.4.0" "json-schema": "^0.4.0"
@@ -426,6 +441,7 @@
}, },
"apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider-utils": { "apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider-utils": {
"version": "2.1.10", "version": "2.1.10",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.0.9", "@ai-sdk/provider": "1.0.9",
@@ -447,6 +463,7 @@
}, },
"apps/extension/node_modules/ai": { "apps/extension/node_modules/ai": {
"version": "4.3.19", "version": "4.3.19",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "1.1.3", "@ai-sdk/provider": "1.1.3",
@@ -471,6 +488,7 @@
}, },
"apps/extension/node_modules/ai-sdk-provider-gemini-cli": { "apps/extension/node_modules/ai-sdk-provider-gemini-cli": {
"version": "0.1.3", "version": "0.1.3",
"dev": true,
"license": "MIT", "license": "MIT",
"optional": true, "optional": true,
"dependencies": { "dependencies": {
@@ -504,6 +522,7 @@
}, },
"apps/extension/node_modules/ollama-ai-provider": { "apps/extension/node_modules/ollama-ai-provider": {
"version": "1.2.0", "version": "1.2.0",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@ai-sdk/provider": "^1.0.0", "@ai-sdk/provider": "^1.0.0",
@@ -524,6 +543,7 @@
}, },
"apps/extension/node_modules/openai": { "apps/extension/node_modules/openai": {
"version": "4.104.0", "version": "4.104.0",
"dev": true,
"license": "Apache-2.0", "license": "Apache-2.0",
"dependencies": { "dependencies": {
"@types/node": "^18.11.18", "@types/node": "^18.11.18",
@@ -552,6 +572,7 @@
}, },
"apps/extension/node_modules/openai/node_modules/@types/node": { "apps/extension/node_modules/openai/node_modules/@types/node": {
"version": "18.19.127", "version": "18.19.127",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"undici-types": "~5.26.4" "undici-types": "~5.26.4"
@@ -559,10 +580,12 @@
}, },
"apps/extension/node_modules/openai/node_modules/undici-types": { "apps/extension/node_modules/openai/node_modules/undici-types": {
"version": "5.26.5", "version": "5.26.5",
"dev": true,
"license": "MIT" "license": "MIT"
}, },
"apps/extension/node_modules/task-master-ai": { "apps/extension/node_modules/task-master-ai": {
"version": "0.27.1", "version": "0.27.1",
"dev": true,
"license": "MIT WITH Commons-Clause", "license": "MIT WITH Commons-Clause",
"workspaces": [ "workspaces": [
"apps/*", "apps/*",
@@ -634,6 +657,7 @@
}, },
"apps/extension/node_modules/zod": { "apps/extension/node_modules/zod": {
"version": "3.25.76", "version": "3.25.76",
"dev": true,
"license": "MIT", "license": "MIT",
"funding": { "funding": {
"url": "https://github.com/sponsors/colinhacks" "url": "https://github.com/sponsors/colinhacks"
@@ -641,6 +665,7 @@
}, },
"apps/extension/node_modules/zod-to-json-schema": { "apps/extension/node_modules/zod-to-json-schema": {
"version": "3.24.6", "version": "3.24.6",
"dev": true,
"license": "ISC", "license": "ISC",
"peerDependencies": { "peerDependencies": {
"zod": "^3.24.1" "zod": "^3.24.1"
@@ -929,6 +954,7 @@
}, },
"node_modules/@anthropic-ai/sdk": { "node_modules/@anthropic-ai/sdk": {
"version": "0.39.0", "version": "0.39.0",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@types/node": "^18.11.18", "@types/node": "^18.11.18",
@@ -942,6 +968,7 @@
}, },
"node_modules/@anthropic-ai/sdk/node_modules/@types/node": { "node_modules/@anthropic-ai/sdk/node_modules/@types/node": {
"version": "18.19.127", "version": "18.19.127",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"undici-types": "~5.26.4" "undici-types": "~5.26.4"
@@ -949,6 +976,7 @@
}, },
"node_modules/@anthropic-ai/sdk/node_modules/undici-types": { "node_modules/@anthropic-ai/sdk/node_modules/undici-types": {
"version": "5.26.5", "version": "5.26.5",
"dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/@ark/schema": { "node_modules/@ark/schema": {
@@ -8408,6 +8436,7 @@
}, },
"node_modules/@types/diff-match-patch": { "node_modules/@types/diff-match-patch": {
"version": "1.0.36", "version": "1.0.36",
"dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/@types/es-aggregate-error": { "node_modules/@types/es-aggregate-error": {
@@ -8578,6 +8607,7 @@
}, },
"node_modules/@types/node-fetch": { "node_modules/@types/node-fetch": {
"version": "2.6.13", "version": "2.6.13",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@types/node": "*", "@types/node": "*",
@@ -9023,6 +9053,7 @@
}, },
"node_modules/abort-controller": { "node_modules/abort-controller": {
"version": "3.0.0", "version": "3.0.0",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"event-target-shim": "^5.0.0" "event-target-shim": "^5.0.0"
@@ -9084,6 +9115,7 @@
}, },
"node_modules/agentkeepalive": { "node_modules/agentkeepalive": {
"version": "4.6.0", "version": "4.6.0",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"humanize-ms": "^1.2.1" "humanize-ms": "^1.2.1"
@@ -9666,6 +9698,7 @@
}, },
"node_modules/asynckit": { "node_modules/asynckit": {
"version": "0.4.0", "version": "0.4.0",
"dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/auto-bind": { "node_modules/auto-bind": {
@@ -11443,6 +11476,7 @@
}, },
"node_modules/combined-stream": { "node_modules/combined-stream": {
"version": "1.0.8", "version": "1.0.8",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"delayed-stream": "~1.0.0" "delayed-stream": "~1.0.0"
@@ -12081,6 +12115,7 @@
}, },
"node_modules/delayed-stream": { "node_modules/delayed-stream": {
"version": "1.0.0", "version": "1.0.0",
"dev": true,
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=0.4.0" "node": ">=0.4.0"
@@ -12103,6 +12138,7 @@
}, },
"node_modules/dequal": { "node_modules/dequal": {
"version": "2.0.3", "version": "2.0.3",
"dev": true,
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=6" "node": ">=6"
@@ -12222,6 +12258,7 @@
}, },
"node_modules/diff-match-patch": { "node_modules/diff-match-patch": {
"version": "1.0.5", "version": "1.0.5",
"dev": true,
"license": "Apache-2.0" "license": "Apache-2.0"
}, },
"node_modules/diff-sequences": { "node_modules/diff-sequences": {
@@ -12721,6 +12758,7 @@
}, },
"node_modules/es-set-tostringtag": { "node_modules/es-set-tostringtag": {
"version": "2.1.0", "version": "2.1.0",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"es-errors": "^1.3.0", "es-errors": "^1.3.0",
@@ -13009,6 +13047,7 @@
}, },
"node_modules/event-target-shim": { "node_modules/event-target-shim": {
"version": "5.0.1", "version": "5.0.1",
"dev": true,
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=6" "node": ">=6"
@@ -14047,6 +14086,7 @@
}, },
"node_modules/form-data": { "node_modules/form-data": {
"version": "4.0.4", "version": "4.0.4",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"asynckit": "^0.4.0", "asynckit": "^0.4.0",
@@ -14061,6 +14101,7 @@
}, },
"node_modules/form-data-encoder": { "node_modules/form-data-encoder": {
"version": "1.7.2", "version": "1.7.2",
"dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/format": { "node_modules/format": {
@@ -14072,6 +14113,7 @@
}, },
"node_modules/formdata-node": { "node_modules/formdata-node": {
"version": "4.4.1", "version": "4.4.1",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"node-domexception": "1.0.0", "node-domexception": "1.0.0",
@@ -14732,6 +14774,7 @@
}, },
"node_modules/has-tostringtag": { "node_modules/has-tostringtag": {
"version": "1.0.2", "version": "1.0.2",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"has-symbols": "^1.0.3" "has-symbols": "^1.0.3"
@@ -15306,6 +15349,7 @@
}, },
"node_modules/humanize-ms": { "node_modules/humanize-ms": {
"version": "1.2.1", "version": "1.2.1",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"ms": "^2.0.0" "ms": "^2.0.0"
@@ -18092,6 +18136,7 @@
}, },
"node_modules/jsondiffpatch": { "node_modules/jsondiffpatch": {
"version": "0.6.0", "version": "0.6.0",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@types/diff-match-patch": "^1.0.36", "@types/diff-match-patch": "^1.0.36",
@@ -20273,6 +20318,7 @@
}, },
"node_modules/nanoid": { "node_modules/nanoid": {
"version": "3.3.11", "version": "3.3.11",
"devOptional": true,
"funding": [ "funding": [
{ {
"type": "github", "type": "github",
@@ -20381,6 +20427,7 @@
}, },
"node_modules/node-domexception": { "node_modules/node-domexception": {
"version": "1.0.0", "version": "1.0.0",
"dev": true,
"funding": [ "funding": [
{ {
"type": "github", "type": "github",
@@ -21245,6 +21292,7 @@
}, },
"node_modules/partial-json": { "node_modules/partial-json": {
"version": "0.1.7", "version": "0.1.7",
"dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/patch-console": { "node_modules/patch-console": {
@@ -22017,6 +22065,7 @@
}, },
"node_modules/react": { "node_modules/react": {
"version": "19.1.1", "version": "19.1.1",
"dev": true,
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=0.10.0" "node": ">=0.10.0"
@@ -23143,6 +23192,7 @@
}, },
"node_modules/secure-json-parse": { "node_modules/secure-json-parse": {
"version": "2.7.0", "version": "2.7.0",
"dev": true,
"license": "BSD-3-Clause" "license": "BSD-3-Clause"
}, },
"node_modules/selderee": { "node_modules/selderee": {
@@ -24190,6 +24240,26 @@
"url": "https://github.com/sponsors/sindresorhus" "url": "https://github.com/sponsors/sindresorhus"
} }
}, },
"node_modules/strip-literal": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/strip-literal/-/strip-literal-3.1.0.tgz",
"integrity": "sha512-8r3mkIM/2+PpjHoOtiAW8Rg3jJLHaV7xPwG+YRGrv6FP0wwk/toTpATxWYOW0BKdWwl82VT2tFYi5DlROa0Mxg==",
"dev": true,
"license": "MIT",
"dependencies": {
"js-tokens": "^9.0.1"
},
"funding": {
"url": "https://github.com/sponsors/antfu"
}
},
"node_modules/strip-literal/node_modules/js-tokens": {
"version": "9.0.1",
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz",
"integrity": "sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==",
"dev": true,
"license": "MIT"
},
"node_modules/strnum": { "node_modules/strnum": {
"version": "2.1.1", "version": "2.1.1",
"funding": [ "funding": [
@@ -24367,6 +24437,7 @@
}, },
"node_modules/swr": { "node_modules/swr": {
"version": "2.3.6", "version": "2.3.6",
"dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"dequal": "^2.0.3", "dequal": "^2.0.3",
@@ -24559,6 +24630,7 @@
}, },
"node_modules/throttleit": { "node_modules/throttleit": {
"version": "2.1.0", "version": "2.1.0",
"dev": true,
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=18" "node": ">=18"
@@ -25661,6 +25733,7 @@
}, },
"node_modules/use-sync-external-store": { "node_modules/use-sync-external-store": {
"version": "1.5.0", "version": "1.5.0",
"dev": true,
"license": "MIT", "license": "MIT",
"peerDependencies": { "peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
@@ -26069,6 +26142,7 @@
}, },
"node_modules/web-streams-polyfill": { "node_modules/web-streams-polyfill": {
"version": "4.0.0-beta.3", "version": "4.0.0-beta.3",
"dev": true,
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">= 14" "node": ">= 14"
@@ -27062,22 +27136,8 @@
}, },
"packages/claude-code-plugin": { "packages/claude-code-plugin": {
"name": "@tm/claude-code-plugin", "name": "@tm/claude-code-plugin",
"license": "MIT WITH Commons-Clause", "version": "0.0.2",
"devDependencies": { "license": "MIT WITH Commons-Clause"
"@types/node": "^20.0.0",
"tsx": "^4.20.4",
"typescript": "^5.9.2"
}
},
"packages/claude-code-plugin/node_modules/@types/node": {
"version": "20.19.20",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.20.tgz",
"integrity": "sha512-2Q7WS25j4pS1cS8yw3d6buNCVJukOTeQ39bAnwR6sOJbaxvyCGebzTMypDFN82CxBLnl+lSWVdCCWbRY6y9yZQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~6.21.0"
}
}, },
"packages/tm-core": { "packages/tm-core": {
"name": "@tm/core", "name": "@tm/core",
@@ -27089,6 +27149,7 @@
"devDependencies": { "devDependencies": {
"@types/node": "^22.10.5", "@types/node": "^22.10.5",
"@vitest/coverage-v8": "^3.2.4", "@vitest/coverage-v8": "^3.2.4",
"strip-literal": "3.1.0",
"typescript": "^5.9.2", "typescript": "^5.9.2",
"vitest": "^3.2.4" "vitest": "^3.2.4"
} }
@@ -27398,6 +27459,8 @@
}, },
"packages/tm-core/node_modules/vitest": { "packages/tm-core/node_modules/vitest": {
"version": "3.2.4", "version": "3.2.4",
"resolved": "https://registry.npmjs.org/vitest/-/vitest-3.2.4.tgz",
"integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==",
"dev": true, "dev": true,
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {

View File

@@ -1,6 +1,6 @@
{ {
"name": "task-master-ai", "name": "task-master-ai",
"version": "0.28.0", "version": "0.29.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.", "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js", "main": "index.js",
"type": "module", "type": "module",

View File

@@ -1,3 +1,5 @@
# @tm/ai-sdk-provider-grok-cli # @tm/ai-sdk-provider-grok-cli
## null ## null
## null

View File

@@ -4,4 +4,6 @@
## null ## null
## null
## 1.0.1 ## 1.0.1

View File

@@ -0,0 +1,3 @@
# @tm/claude-code-plugin
## 0.0.2

View File

@@ -1,6 +1,6 @@
{ {
"name": "@tm/claude-code-plugin", "name": "@tm/claude-code-plugin",
"version": "0.0.1", "version": "0.0.2",
"description": "Task Master AI plugin for Claude Code - AI-powered task management with commands, agents, and MCP integration", "description": "Task Master AI plugin for Claude Code - AI-powered task management with commands, agents, and MCP integration",
"type": "module", "type": "module",
"private": true, "private": true,

View File

@@ -4,6 +4,8 @@
## null ## null
## null
## 0.26.1 ## 0.26.1
All notable changes to the @task-master/tm-core package will be documented in this file. All notable changes to the @task-master/tm-core package will be documented in this file.

View File

@@ -37,7 +37,8 @@
"@types/node": "^22.10.5", "@types/node": "^22.10.5",
"@vitest/coverage-v8": "^3.2.4", "@vitest/coverage-v8": "^3.2.4",
"typescript": "^5.9.2", "typescript": "^5.9.2",
"vitest": "^3.2.4" "vitest": "^3.2.4",
"strip-literal": "3.1.0"
}, },
"files": ["src", "README.md", "CHANGELOG.md"], "files": ["src", "README.md", "CHANGELOG.md"],
"keywords": ["task-management", "typescript", "ai", "prd", "parser"], "keywords": ["task-management", "typescript", "ai", "prd", "parser"],

View File

@@ -21,16 +21,21 @@ const CredentialStoreSpy = vi.fn();
vi.mock('./credential-store.js', () => { vi.mock('./credential-store.js', () => {
return { return {
CredentialStore: class { CredentialStore: class {
static getInstance(config?: any) {
return new (this as any)(config);
}
static resetInstance() {
// Mock reset instance method
}
constructor(config: any) { constructor(config: any) {
CredentialStoreSpy(config); CredentialStoreSpy(config);
this.getCredentials = vi.fn(() => null);
} }
getCredentials() { getCredentials(_options?: any) {
return null; return null;
} }
saveCredentials() {} saveCredentials() {}
clearCredentials() {} clearCredentials() {}
hasValidCredentials() { hasCredentials() {
return false; return false;
} }
} }
@@ -85,7 +90,7 @@ describe('AuthManager Singleton', () => {
expect(instance1).toBe(instance2); expect(instance1).toBe(instance2);
}); });
it('should use config on first call', () => { it('should use config on first call', async () => {
const config = { const config = {
baseUrl: 'https://test.auth.com', baseUrl: 'https://test.auth.com',
configDir: '/test/config', configDir: '/test/config',
@@ -101,7 +106,7 @@ describe('AuthManager Singleton', () => {
// Verify the config is passed to internal components through observable behavior // Verify the config is passed to internal components through observable behavior
// getCredentials would look in the configured file path // getCredentials would look in the configured file path
const credentials = instance.getCredentials(); const credentials = await instance.getCredentials();
expect(credentials).toBeNull(); // File doesn't exist, but config was propagated correctly expect(credentials).toBeNull(); // File doesn't exist, but config was propagated correctly
}); });

View File

@@ -36,7 +36,10 @@ export class AuthManager {
this.oauthService = new OAuthService(this.credentialStore, config); this.oauthService = new OAuthService(this.credentialStore, config);
// Initialize Supabase client with session restoration // Initialize Supabase client with session restoration
this.initializeSupabaseSession(); // Fire-and-forget with catch handler to prevent unhandled rejections
this.initializeSupabaseSession().catch(() => {
// Errors are already logged in initializeSupabaseSession
});
} }
/** /**
@@ -78,6 +81,8 @@ export class AuthManager {
/** /**
* Get stored authentication credentials * Get stored authentication credentials
* Returns credentials as-is (even if expired). Refresh must be triggered explicitly
* via refreshToken() or will occur automatically when using the Supabase client for API calls.
*/ */
getCredentials(): AuthCredentials | null { getCredentials(): AuthCredentials | null {
return this.credentialStore.getCredentials(); return this.credentialStore.getCredentials();
@@ -162,10 +167,11 @@ export class AuthManager {
} }
/** /**
* Check if authenticated * Check if authenticated (credentials exist, regardless of expiration)
* @returns true if credentials are stored, including expired credentials
*/ */
isAuthenticated(): boolean { isAuthenticated(): boolean {
return this.credentialStore.hasValidCredentials(); return this.credentialStore.hasCredentials();
} }
/** /**
@@ -179,7 +185,7 @@ export class AuthManager {
/** /**
* Update the user context (org/brief selection) * Update the user context (org/brief selection)
*/ */
async updateContext(context: Partial<UserContext>): Promise<void> { updateContext(context: Partial<UserContext>): void {
const credentials = this.getCredentials(); const credentials = this.getCredentials();
if (!credentials) { if (!credentials) {
throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED'); throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
@@ -205,7 +211,7 @@ export class AuthManager {
/** /**
* Clear the user context * Clear the user context
*/ */
async clearContext(): Promise<void> { clearContext(): void {
const credentials = this.getCredentials(); const credentials = this.getCredentials();
if (!credentials) { if (!credentials) {
throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED'); throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
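With these semantics, code that previously treated `isAuthenticated()` as a guarantee of a fresh token now has to refresh explicitly. A hedged sketch of the intended call pattern follows; `AuthManager.getInstance()` and `refreshToken()` are assumptions based on the singleton tests and the comments above, not code shown in full in this diff:

```js
// Sketch only: illustrates the contract described in the doc comments above.
async function ensureFreshSession(auth /* assumed: AuthManager.getInstance() */) {
	if (!auth.isAuthenticated()) return false; // no credentials stored at all

	const credentials = auth.getCredentials(); // may be expired by design
	const expiry =
		typeof credentials?.expiresAt === 'number'
			? credentials.expiresAt
			: new Date(credentials?.expiresAt ?? 0).getTime();

	if (expiry <= Date.now()) {
		await auth.refreshToken(); // refresh is explicit, not implicit
	}
	return true;
}
```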

View File

@@ -0,0 +1,308 @@
/**
* @fileoverview Unit tests for CredentialStore token expiration handling
*/
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { CredentialStore } from './credential-store';
import type { AuthCredentials } from './types';
describe('CredentialStore - Token Expiration', () => {
let credentialStore: CredentialStore;
let tmpDir: string;
let authFile: string;
beforeEach(() => {
// Create temp directory for test credentials
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-cred-test-'));
authFile = path.join(tmpDir, 'auth.json');
// Create instance with test config
CredentialStore.resetInstance();
credentialStore = CredentialStore.getInstance({
configDir: tmpDir,
configFile: authFile
});
});
afterEach(() => {
// Clean up
try {
if (fs.existsSync(tmpDir)) {
fs.rmSync(tmpDir, { recursive: true, force: true });
}
} catch {
// Ignore cleanup errors
}
CredentialStore.resetInstance();
});
describe('Expiration Detection', () => {
it('should return null for expired token', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).toBeNull();
});
it('should return credentials for valid token', () => {
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).not.toBeNull();
expect(retrieved?.token).toBe('valid-token');
});
it('should return expired token when allowExpired is true', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
const retrieved = credentialStore.getCredentials({ allowExpired: true });
expect(retrieved).not.toBeNull();
expect(retrieved?.token).toBe('expired-token');
});
it('should return expired token by default (allowExpired defaults to true)', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token-default',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
// Call without options - should default to allowExpired: true
const retrieved = credentialStore.getCredentials();
expect(retrieved).not.toBeNull();
expect(retrieved?.token).toBe('expired-token-default');
});
});
describe('Clock Skew Tolerance', () => {
it('should reject token expiring within 30-second buffer', () => {
// Token expires in 15 seconds (within 30-second buffer)
const almostExpiredCredentials: AuthCredentials = {
token: 'almost-expired-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 15000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(almostExpiredCredentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).toBeNull();
});
it('should accept token expiring outside 30-second buffer', () => {
// Token expires in 60 seconds (outside 30-second buffer)
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).not.toBeNull();
expect(retrieved?.token).toBe('valid-token');
});
});
describe('Timestamp Format Handling', () => {
it('should handle ISO string timestamps', () => {
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).not.toBeNull();
expect(typeof retrieved?.expiresAt).toBe('number'); // Normalized to number
});
it('should handle numeric timestamps', () => {
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: Date.now() + 3600000,
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).not.toBeNull();
expect(typeof retrieved?.expiresAt).toBe('number');
});
it('should return null for invalid timestamp format', () => {
// Manually write invalid timestamp to file
const invalidCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: 'invalid-date',
savedAt: new Date().toISOString()
};
fs.writeFileSync(authFile, JSON.stringify(invalidCredentials), {
mode: 0o600
});
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).toBeNull();
});
it('should return null for missing expiresAt', () => {
const credentialsWithoutExpiry = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
savedAt: new Date().toISOString()
};
fs.writeFileSync(authFile, JSON.stringify(credentialsWithoutExpiry), {
mode: 0o600
});
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).toBeNull();
});
});
describe('Storage Persistence', () => {
it('should persist expiresAt as ISO string', () => {
const expiryTime = Date.now() + 3600000;
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: expiryTime,
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
// Read raw file to verify format
const fileContent = fs.readFileSync(authFile, 'utf-8');
const parsed = JSON.parse(fileContent);
// Should be stored as ISO string
expect(typeof parsed.expiresAt).toBe('string');
expect(parsed.expiresAt).toMatch(/^\d{4}-\d{2}-\d{2}T/); // ISO format
});
it('should normalize timestamp on retrieval', () => {
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
// Should be normalized to number for runtime use
expect(typeof retrieved?.expiresAt).toBe('number');
});
});
describe('hasCredentials', () => {
it('should return true for expired credentials', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
expect(credentialStore.hasCredentials()).toBe(true);
});
it('should return true for valid credentials', () => {
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
expect(credentialStore.hasCredentials()).toBe(true);
});
it('should return false when no credentials exist', () => {
expect(credentialStore.hasCredentials()).toBe(false);
});
});
});
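The expiration rules these tests pin down are compact enough to restate as a sketch: a token counts as expired slightly before its `expiresAt`, so a small buffer absorbs clock skew. The 30-second figure comes from the tests above; the constant and function names below are illustrative:

```js
// Illustrative sketch of the expiry check the tests above verify.
const EXPIRY_BUFFER_MS = 30_000; // clock-skew tolerance exercised by the tests

function isExpired(expiresAt, now = Date.now()) {
	const expiry =
		typeof expiresAt === 'number' ? expiresAt : new Date(expiresAt).getTime();
	if (!Number.isFinite(expiry)) return true; // invalid or missing timestamps are rejected
	return expiry - EXPIRY_BUFFER_MS <= now;
}

isExpired(new Date(Date.now() + 15_000).toISOString()); // true  (inside the buffer)
isExpired(Date.now() + 60_000);                          // false (outside the buffer)
isExpired('invalid-date');                               // true  (rejected, as in the tests)
```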

View File

@@ -197,7 +197,7 @@ describe('CredentialStore', () => {
JSON.stringify(mockCredentials) JSON.stringify(mockCredentials)
); );
const result = store.getCredentials(); const result = store.getCredentials({ allowExpired: false });
expect(result).toBeNull(); expect(result).toBeNull();
expect(mockLogger.warn).toHaveBeenCalledWith( expect(mockLogger.warn).toHaveBeenCalledWith(
@@ -226,6 +226,31 @@ describe('CredentialStore', () => {
expect(result).not.toBeNull(); expect(result).not.toBeNull();
expect(result?.token).toBe('expired-token'); expect(result?.token).toBe('expired-token');
}); });
it('should return expired tokens by default (allowExpired defaults to true)', () => {
const expiredTimestamp = Date.now() - 3600000; // 1 hour ago
const mockCredentials = {
token: 'expired-token-default',
userId: 'user-expired',
expiresAt: expiredTimestamp,
tokenType: 'standard',
savedAt: new Date().toISOString()
};
vi.mocked(fs.existsSync).mockReturnValue(true);
vi.mocked(fs.readFileSync).mockReturnValue(
JSON.stringify(mockCredentials)
);
// Call without options - should default to allowExpired: true
const result = store.getCredentials();
expect(result).not.toBeNull();
expect(result?.token).toBe('expired-token-default');
expect(mockLogger.warn).not.toHaveBeenCalledWith(
expect.stringContaining('Authentication token has expired')
);
});
});
describe('saveCredentials with timestamp normalization', () => {
@@ -451,7 +476,7 @@ describe('CredentialStore', () => {
});
});
-describe('hasValidCredentials', () => {
+describe('hasCredentials', () => {
it('should return true when valid unexpired credentials exist', () => {
const futureDate = new Date(Date.now() + 3600000); // 1 hour from now
const credentials = {
@@ -465,10 +490,10 @@ describe('CredentialStore', () => {
vi.mocked(fs.existsSync).mockReturnValue(true);
vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
-expect(store.hasValidCredentials()).toBe(true);
+expect(store.hasCredentials()).toBe(true);
});
-it('should return false when credentials are expired', () => {
+it('should return true when credentials are expired', () => {
const pastDate = new Date(Date.now() - 3600000); // 1 hour ago
const credentials = {
token: 'expired-token',
@@ -481,13 +506,13 @@ describe('CredentialStore', () => {
vi.mocked(fs.existsSync).mockReturnValue(true);
vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(true);
});
it('should return false when no credentials exist', () => {
vi.mocked(fs.existsSync).mockReturnValue(false);
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(false);
});
it('should return false when file contains invalid JSON', () => {
@@ -495,7 +520,7 @@ describe('CredentialStore', () => {
vi.mocked(fs.readFileSync).mockReturnValue('invalid json {');
vi.mocked(fs.renameSync).mockImplementation(() => undefined);
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(false);
});
it('should return false for credentials without expiry', () => {
@@ -510,7 +535,7 @@ describe('CredentialStore', () => {
vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
// Credentials without expiry are considered invalid
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(false);
// Should log warning about missing expiration
expect(mockLogger.warn).toHaveBeenCalledWith(
@@ -518,14 +543,14 @@ describe('CredentialStore', () => {
);
});
-it('should use allowExpired=false by default', () => {
+it('should use allowExpired=true', () => {
// Spy on getCredentials to verify it's called with correct params
const getCredentialsSpy = vi.spyOn(store, 'getCredentials');
vi.mocked(fs.existsSync).mockReturnValue(false);
-store.hasValidCredentials();
+store.hasCredentials();
-expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: false });
+expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: true });
});
});

View File

@@ -54,9 +54,12 @@ export class CredentialStore {
/**
* Get stored authentication credentials
+* @param options.allowExpired - Whether to return expired credentials (default: true)
* @returns AuthCredentials with expiresAt as number (milliseconds) for runtime use
*/
-getCredentials(options?: { allowExpired?: boolean }): AuthCredentials | null {
+getCredentials({
+allowExpired = true
+}: { allowExpired?: boolean } = {}): AuthCredentials | null {
try {
if (!fs.existsSync(this.config.configFile)) {
return null;
@@ -90,7 +93,6 @@
// Check if the token has expired (with clock skew tolerance)
const now = Date.now();
-const allowExpired = options?.allowExpired ?? false;
if (now >= expiresAtMs - this.CLOCK_SKEW_MS && !allowExpired) {
this.logger.warn(
'Authentication token has expired or is about to expire',
@@ -103,7 +105,7 @@
return null;
}
-// Return valid token
+// Return credentials (even if expired) to enable refresh flows
return authData;
} catch (error) {
this.logger.error(
@@ -199,10 +201,11 @@
}
}
/**
-* Check if credentials exist and are valid
+* Check if credentials exist (regardless of expiration status)
+* @returns true if credentials are stored, including expired credentials
*/
-hasValidCredentials(): boolean {
-const credentials = this.getCredentials({ allowExpired: false });
+hasCredentials(): boolean {
+const credentials = this.getCredentials({ allowExpired: true });
return credentials !== null;
}
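For reference, a minimal usage sketch of the reworked store above (import path and the refresh hand-off are illustrative): `getCredentials()` now returns expired credentials by default so refresh flows can run, and `hasCredentials()` is an existence check rather than a validity check.

```ts
import { CredentialStore } from './credential-store'; // path illustrative

const store = CredentialStore.getInstance();

// Default behavior: expired credentials come back so a refresh flow can use the refresh token.
const creds = store.getCredentials();
if (creds?.refreshToken && typeof creds.expiresAt === 'number' && Date.now() >= creds.expiresAt) {
  // hand off to a refresh flow, e.g. AuthManager.refreshToken()
}

// Opt back into the strict behavior when only a live token is acceptable.
const liveCreds = store.getCredentials({ allowExpired: false });

// Existence check only - returns true even for expired credentials.
const isLoggedIn = store.hasCredentials();
```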

View File

@@ -281,15 +281,26 @@ export class OAuthService {
// Exchange code for session using PKCE
const session = await this.supabaseClient.exchangeCodeForSession(code);
+// Calculate expiration - can be overridden with TM_TOKEN_EXPIRY_MINUTES
+let expiresAt: string | undefined;
+const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
+if (tokenExpiryMinutes) {
+const minutes = parseInt(tokenExpiryMinutes);
+expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
+this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
+} else {
+expiresAt = session.expires_at
+? new Date(session.expires_at * 1000).toISOString()
+: undefined;
+}
// Save authentication data
const authData: AuthCredentials = {
token: session.access_token,
refreshToken: session.refresh_token,
userId: session.user.id,
email: session.user.email,
-expiresAt: session.expires_at
-? new Date(session.expires_at * 1000).toISOString()
-: undefined,
+expiresAt,
tokenType: 'standard',
savedAt: new Date().toISOString()
};
@@ -340,10 +351,18 @@ export class OAuthService {
// Get user info from the session
const user = await this.supabaseClient.getUser();
-// Calculate expiration time
-const expiresAt = expiresIn
-? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
-: undefined;
+// Calculate expiration time - can be overridden with TM_TOKEN_EXPIRY_MINUTES
+let expiresAt: string | undefined;
+const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
+if (tokenExpiryMinutes) {
+const minutes = parseInt(tokenExpiryMinutes);
+expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
+this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
+} else {
+expiresAt = expiresIn
+? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
+: undefined;
+}
// Save authentication data
const authData: AuthCredentials = {
@@ -351,7 +370,7 @@ export class OAuthService {
refreshToken: refreshToken || undefined,
userId: user?.id || 'unknown',
email: user?.email,
-expiresAt: expiresAt,
+expiresAt,
tokenType: 'standard',
savedAt: new Date().toISOString()
};
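A tiny sketch of what the `TM_TOKEN_EXPIRY_MINUTES` override above does (values illustrative): when the variable is set, both OAuth paths derive `expiresAt` from it instead of the session's `expires_at`, which makes it easy to exercise the refresh path locally.

```ts
// Illustrative only - mirrors the branch added above.
process.env.TM_TOKEN_EXPIRY_MINUTES = '1';

const minutes = parseInt(process.env.TM_TOKEN_EXPIRY_MINUTES);
// Token now expires one minute from now instead of at session.expires_at.
const expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
```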

View File

@@ -98,11 +98,11 @@ export class SupabaseSessionStorage implements SupportedStorage {
// Only handle Supabase session keys
if (key === STORAGE_KEY || key.includes('auth-token')) {
try {
+this.logger.info('Supabase called setItem - storing refreshed session');
// Parse the session and update our credentials
const sessionUpdates = this.parseSessionToCredentials(value);
-const existingCredentials = this.store.getCredentials({
-allowExpired: true
-});
+const existingCredentials = this.store.getCredentials();
if (sessionUpdates.token) {
const updatedCredentials: AuthCredentials = {
@@ -113,6 +113,9 @@ export class SupabaseSessionStorage implements SupportedStorage {
} as AuthCredentials;
this.store.saveCredentials(updatedCredentials);
+this.logger.info(
+'Successfully saved refreshed credentials from Supabase'
+);
}
} catch (error) {
this.logger.error('Error setting session:', error);

View File

@@ -17,10 +17,11 @@ export class SupabaseAuthClient {
private client: SupabaseJSClient | null = null;
private sessionStorage: SupabaseSessionStorage;
private logger = getLogger('SupabaseAuthClient');
+private credentialStore: CredentialStore;
constructor() {
-const credentialStore = CredentialStore.getInstance();
-this.sessionStorage = new SupabaseSessionStorage(credentialStore);
+this.credentialStore = CredentialStore.getInstance();
+this.sessionStorage = new SupabaseSessionStorage(this.credentialStore);
}
/**

View File

@@ -47,8 +47,8 @@ export class SupabaseTaskRepository {
* Gets the current brief ID from auth context
* @throws {Error} If no brief is selected
*/
-private getBriefIdOrThrow(): string {
-const context = this.authManager.getContext();
+private async getBriefIdOrThrow(): Promise<string> {
+const context = await this.authManager.getContext();
if (!context?.briefId) {
throw new Error(
'No brief selected. Please select a brief first using: tm context brief'
@@ -61,7 +61,7 @@
_projectId?: string,
options?: LoadTasksOptions
): Promise<Task[]> {
-const briefId = this.getBriefIdOrThrow();
+const briefId = await this.getBriefIdOrThrow();
// Build query with filters
let query = this.supabase
@@ -114,7 +114,7 @@
}
}
async getTask(_projectId: string, taskId: string): Promise<Task | null> {
-const briefId = this.getBriefIdOrThrow();
+const briefId = await this.getBriefIdOrThrow();
const { data, error } = await this.supabase
.from('tasks')
@@ -157,7 +157,7 @@
taskId: string,
updates: Partial<Task>
): Promise<Task> {
-const briefId = this.getBriefIdOrThrow();
+const briefId = await this.getBriefIdOrThrow();
// Validate updates using Zod schema
try {

View File

@@ -105,7 +105,7 @@ export class ExportService {
}
// Get current context
-const context = this.authManager.getContext();
+const context = await this.authManager.getContext();
// Determine org and brief IDs
let orgId = options.orgId || context?.orgId;
@@ -232,7 +232,7 @@
hasBrief: boolean;
context: UserContext | null;
}> {
-const context = this.authManager.getContext();
+const context = await this.authManager.getContext();
return {
hasOrg: !!context?.orgId,
@@ -362,7 +362,7 @@
if (useAPIEndpoint) {
// Use the new bulk import API endpoint
-const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
+const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks`;
// Transform tasks to flat structure for API
const flatTasks = this.transformTasksForBulkImport(tasks);
@@ -370,16 +370,16 @@
// Prepare request body
const requestBody = {
source: 'task-master-cli',
+accountId: orgId,
options: {
dryRun: false,
stopOnError: false
},
-accountId: orgId,
tasks: flatTasks
};
// Get auth token
-const credentials = this.authManager.getCredentials();
+const credentials = await this.authManager.getCredentials();
if (!credentials || !credentials.token) {
throw new Error('Not authenticated');
}

View File

@@ -119,7 +119,7 @@ export class ApiStorage implements IStorage {
private async loadTagsIntoCache(): Promise<void> {
try {
const authManager = AuthManager.getInstance();
-const context = authManager.getContext();
+const context = await authManager.getContext();
// If we have a selected brief, create a virtual "tag" for it
if (context?.briefId) {
@@ -152,7 +152,7 @@
try {
const authManager = AuthManager.getInstance();
-const context = authManager.getContext();
+const context = await authManager.getContext();
// If no brief is selected in context, throw an error
if (!context?.briefId) {
@@ -318,7 +318,7 @@
try {
const authManager = AuthManager.getInstance();
-const context = authManager.getContext();
+const context = await authManager.getContext();
// In our API-based system, we only have one "tag" at a time - the current brief
if (context?.briefId) {

View File

@@ -72,7 +72,7 @@ export class StorageFactory {
{ storageType: 'api', missing }
);
}
-// Use auth token from AuthManager
+// Use auth token from AuthManager (synchronous - no auto-refresh here)
const credentials = authManager.getCredentials();
if (credentials) {
// Merge with existing storage config, ensuring required fields

View File

@@ -0,0 +1,139 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';
import type { Session } from '@supabase/supabase-js';
import { AuthManager } from '../../src/auth/auth-manager';
import { CredentialStore } from '../../src/auth/credential-store';
import type { AuthCredentials } from '../../src/auth/types';
describe('AuthManager Token Refresh', () => {
let authManager: AuthManager;
let credentialStore: CredentialStore;
let tmpDir: string;
let authFile: string;
beforeEach(() => {
// Reset singletons
AuthManager.resetInstance();
CredentialStore.resetInstance();
// Create temporary directory for test isolation
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-refresh-'));
authFile = path.join(tmpDir, 'auth.json');
// Initialize AuthManager with test config (this will create CredentialStore internally)
authManager = AuthManager.getInstance({
configDir: tmpDir,
configFile: authFile
});
// Get the CredentialStore instance that AuthManager created
credentialStore = CredentialStore.getInstance();
credentialStore.clearCredentials();
});
afterEach(() => {
// Clean up
try {
credentialStore.clearCredentials();
} catch {
// Ignore cleanup errors
}
AuthManager.resetInstance();
CredentialStore.resetInstance();
vi.restoreAllMocks();
// Remove temporary directory
if (tmpDir && fs.existsSync(tmpDir)) {
fs.rmSync(tmpDir, { recursive: true, force: true });
}
});
it('should return expired credentials to enable refresh flows', () => {
// Set up expired credentials with refresh token
const expiredCredentials: AuthCredentials = {
token: 'expired_access_token',
refreshToken: 'valid_refresh_token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
// Get credentials should return them even if expired
// Refresh will be handled by explicit calls or client operations
const credentials = authManager.getCredentials();
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired_access_token');
expect(credentials?.refreshToken).toBe('valid_refresh_token');
});
it('should return valid credentials', () => {
// Set up valid (non-expired) credentials
const validCredentials: AuthCredentials = {
token: 'valid_access_token',
refreshToken: 'valid_refresh_token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(), // Expires in 1 hour
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
const credentials = authManager.getCredentials();
expect(credentials?.token).toBe('valid_access_token');
});
it('should return expired credentials even without refresh token', () => {
// Set up expired credentials WITHOUT refresh token
// We still return them - it's up to the caller to handle
const expiredCredentials: AuthCredentials = {
token: 'expired_access_token',
refreshToken: undefined,
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
const credentials = authManager.getCredentials();
// Returns credentials even if expired
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired_access_token');
});
it('should return null if no credentials exist', () => {
const credentials = authManager.getCredentials();
expect(credentials).toBeNull();
});
it('should return credentials regardless of refresh token validity', () => {
// Set up expired credentials with refresh token
const expiredCredentials: AuthCredentials = {
token: 'expired_access_token',
refreshToken: 'invalid_refresh_token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 1000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
const credentials = authManager.getCredentials();
// Returns credentials - refresh will be attempted by the client which will handle failure
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired_access_token');
expect(credentials?.refreshToken).toBe('invalid_refresh_token');
});
});

View File

@@ -0,0 +1,336 @@
/**
* @fileoverview Integration tests for JWT token auto-refresh functionality
*
* These tests verify that expired tokens are automatically refreshed
* when making API calls through AuthManager.
*/
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';
import type { Session } from '@supabase/supabase-js';
import { AuthManager } from '../../src/auth/auth-manager';
import { CredentialStore } from '../../src/auth/credential-store';
import type { AuthCredentials } from '../../src/auth/types';
describe('AuthManager - Token Auto-Refresh Integration', () => {
let authManager: AuthManager;
let credentialStore: CredentialStore;
let tmpDir: string;
let authFile: string;
// Mock Supabase session that will be returned on refresh
const mockRefreshedSession: Session = {
access_token: 'new-access-token-xyz',
refresh_token: 'new-refresh-token-xyz',
token_type: 'bearer',
expires_at: Math.floor(Date.now() / 1000) + 3600, // 1 hour from now
expires_in: 3600,
user: {
id: 'test-user-id',
email: 'test@example.com',
aud: 'authenticated',
role: 'authenticated',
app_metadata: {},
user_metadata: {},
created_at: new Date().toISOString()
}
};
beforeEach(() => {
// Reset singletons
AuthManager.resetInstance();
CredentialStore.resetInstance();
// Create temporary directory for test isolation
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-integration-'));
authFile = path.join(tmpDir, 'auth.json');
// Initialize AuthManager with test config (this will create CredentialStore internally)
authManager = AuthManager.getInstance({
configDir: tmpDir,
configFile: authFile
});
// Get the CredentialStore instance that AuthManager created
credentialStore = CredentialStore.getInstance();
credentialStore.clearCredentials();
});
afterEach(() => {
// Clean up
try {
credentialStore.clearCredentials();
} catch {
// Ignore cleanup errors
}
AuthManager.resetInstance();
CredentialStore.resetInstance();
vi.restoreAllMocks();
// Remove temporary directory
if (tmpDir && fs.existsSync(tmpDir)) {
fs.rmSync(tmpDir, { recursive: true, force: true });
}
});
describe('Expired Token Detection', () => {
it('should return expired token for Supabase to refresh', () => {
// Set up expired credentials
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Get credentials returns them even if expired
const credentials = authManager.getCredentials();
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
it('should return valid token', () => {
// Set up valid credentials
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
expect(credentials?.token).toBe('valid-token');
});
});
describe('Token Refresh Flow', () => {
it('should manually refresh expired token and save new credentials', async () => {
const expiredCredentials: AuthCredentials = {
token: 'old-token',
refreshToken: 'old-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date(Date.now() - 3600000).toISOString(),
selectedContext: {
orgId: 'test-org',
briefId: 'test-brief',
updatedAt: new Date().toISOString()
}
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
vi.spyOn(
authManager['supabaseClient'],
'refreshSession'
).mockResolvedValue(mockRefreshedSession);
// Explicitly call refreshToken() method
const refreshedCredentials = await authManager.refreshToken();
expect(refreshedCredentials).not.toBeNull();
expect(refreshedCredentials.token).toBe('new-access-token-xyz');
expect(refreshedCredentials.refreshToken).toBe('new-refresh-token-xyz');
// Verify context was preserved
expect(refreshedCredentials.selectedContext?.orgId).toBe('test-org');
expect(refreshedCredentials.selectedContext?.briefId).toBe('test-brief');
// Verify new expiration is in the future
const newExpiry = new Date(refreshedCredentials.expiresAt!).getTime();
const now = Date.now();
expect(newExpiry).toBeGreaterThan(now);
});
it('should throw error if manual refresh fails', async () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'invalid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Mock refresh to fail
vi.spyOn(
authManager['supabaseClient'],
'refreshSession'
).mockRejectedValue(new Error('Refresh token expired'));
// Explicit refreshToken() call should throw
await expect(authManager.refreshToken()).rejects.toThrow();
});
it('should return expired credentials even without refresh token', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
// No refresh token
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Credentials are returned even without refresh token
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired-token');
expect(credentials?.refreshToken).toBeUndefined();
});
it('should return null if credentials missing expiresAt', () => {
const credentialsWithoutExpiry: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
// Missing expiresAt - invalid token
savedAt: new Date().toISOString()
} as any;
credentialStore.saveCredentials(credentialsWithoutExpiry);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Tokens without valid expiration are considered invalid
expect(credentials).toBeNull();
});
});
describe('Clock Skew Tolerance', () => {
it('should return credentials within 30-second expiry window', () => {
// Token expires in 15 seconds (within 30-second buffer)
// Supabase will handle refresh automatically
const almostExpiredCredentials: AuthCredentials = {
token: 'almost-expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 15000).toISOString(), // 15 seconds from now
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(almostExpiredCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Credentials are returned (Supabase handles auto-refresh in background)
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('almost-expired-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
it('should return valid token well before expiry', () => {
// Token expires in 5 minutes
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 300000).toISOString(), // 5 minutes
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Valid credentials are returned as-is
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('valid-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
});
describe('Synchronous vs Async Methods', () => {
it('getCredentials should return expired credentials', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Returns credentials even if expired - Supabase will handle refresh
const credentials = authManager.getCredentials();
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
});
describe('Multiple Concurrent Calls', () => {
it('should handle concurrent getCredentials calls gracefully', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Make multiple concurrent calls (synchronous now)
const creds1 = authManager.getCredentials();
const creds2 = authManager.getCredentials();
const creds3 = authManager.getCredentials();
// All should get the same credentials (even if expired)
expect(creds1?.token).toBe('expired-token');
expect(creds2?.token).toBe('expired-token');
expect(creds3?.token).toBe('expired-token');
// All include refresh token for Supabase to use
expect(creds1?.refreshToken).toBe('valid-refresh-token');
expect(creds2?.refreshToken).toBe('valid-refresh-token');
expect(creds3?.refreshToken).toBe('valid-refresh-token');
});
});
});
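For reference, a sketch of the explicit refresh pattern exercised above, using the AuthManager API as it appears in this diff (import path illustrative):

```ts
import { AuthManager } from './src/auth/auth-manager'; // path illustrative

async function ensureUsableToken() {
  const auth = AuthManager.getInstance();
  // Synchronous read; may return expired credentials that still carry a refresh token.
  const creds = auth.getCredentials();
  if (!creds) return null;

  const expired = creds.expiresAt
    ? Date.now() >= new Date(creds.expiresAt).getTime()
    : true;
  if (expired && creds.refreshToken) {
    try {
      // Persists the new tokens and preserves selectedContext (org/brief).
      return await auth.refreshToken();
    } catch {
      console.error('Refresh failed - re-authentication required');
    }
  }
  return creds;
}
```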

View File

@@ -2441,57 +2441,6 @@ ${result.result}
}
});
-// next command
-programInstance
-.command('next')
-.description(
-`Show the next task to work on based on dependencies and status${chalk.reset('')}`
-)
-.option(
-'-f, --file <file>',
-'Path to the tasks file',
-TASKMASTER_TASKS_FILE
-)
-.option(
-'-r, --report <report>',
-'Path to the complexity report file',
-COMPLEXITY_REPORT_FILE
-)
-.option('--tag <tag>', 'Specify tag context for task operations')
-.action(async (options) => {
-const initOptions = {
-tasksPath: options.file || true,
-tag: options.tag
-};
-if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
-initOptions.complexityReportPath = options.report;
-}
-// Initialize TaskMaster
-const taskMaster = initTaskMaster({
-tasksPath: options.file || true,
-tag: options.tag,
-complexityReportPath: options.report || false
-});
-const tag = taskMaster.getCurrentTag();
-const context = {
-projectRoot: taskMaster.getProjectRoot(),
-tag
-};
-// Show current tag context
-displayCurrentTagIndicator(tag);
-await displayNextTask(
-taskMaster.getTasksPath(),
-taskMaster.getComplexityReportPath(),
-context
-);
-});
// add-dependency command
programInstance
.command('add-dependency')

View File

@@ -307,6 +307,20 @@ function validateProviderModelCombination(providerName, modelId) {
);
}
+/**
+ * Gets the list of supported model IDs for a given provider from supported-models.json
+ * @param {string} providerName - The name of the provider (e.g., 'claude-code', 'anthropic')
+ * @returns {string[]} Array of supported model IDs, or empty array if provider not found
+ */
+export function getSupportedModelsForProvider(providerName) {
+if (!MODEL_MAP[providerName]) {
+return [];
+}
+return MODEL_MAP[providerName]
+.filter((model) => model.supported !== false)
+.map((model) => model.id);
+}
/**
* Validates Claude Code AI provider custom settings
* @param {object} settings The settings to validate
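A short sketch of how a provider consumes the new helper (the function added above; the example result assumes the claude-code entries added later in this diff):

```ts
import { getSupportedModelsForProvider } from '../../scripts/modules/config-manager.js';

const models = getSupportedModelsForProvider('claude-code');
// -> e.g. ['opus', 'sonnet', 'haiku']; unknown providers yield []
if (models.length === 0) {
  console.warn('No supported models found for claude-code in supported-models.json');
}
```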

View File

@@ -43,6 +43,28 @@
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 8192, "max_tokens": 8192,
"supported": true "supported": true
},
{
"id": "claude-sonnet-4-5-20250929",
"swe_score": 0.73,
"cost_per_1m_tokens": {
"input": 3.0,
"output": 15.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 64000,
"supported": true
},
{
"id": "claude-haiku-4-5-20251001",
"swe_score": 0.45,
"cost_per_1m_tokens": {
"input": 1.0,
"output": 5.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 200000,
"supported": true
} }
], ],
"claude-code": [ "claude-code": [
@@ -67,6 +89,17 @@
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 64000, "max_tokens": 64000,
"supported": true "supported": true
},
{
"id": "haiku",
"swe_score": 0.45,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback", "research"],
"max_tokens": 200000,
"supported": true
} }
], ],
"codex-cli": [ "codex-cli": [

View File

@@ -12,7 +12,10 @@
import { createClaudeCode } from 'ai-sdk-provider-claude-code';
import { BaseAIProvider } from './base-provider.js';
-import { getClaudeCodeSettingsForCommand } from '../../scripts/modules/config-manager.js';
+import {
+getClaudeCodeSettingsForCommand,
+getSupportedModelsForProvider
+} from '../../scripts/modules/config-manager.js';
import { execSync } from 'child_process';
import { log } from '../../scripts/modules/utils.js';
@@ -24,14 +27,24 @@ let _claudeCliAvailable = null;
*
* Features:
* - No API key required (uses local Claude Code CLI)
-* - Supports 'sonnet' and 'opus' models
+* - Supported models loaded from supported-models.json
* - Command-specific configuration support
*/
export class ClaudeCodeProvider extends BaseAIProvider {
constructor() {
super();
this.name = 'Claude Code';
-this.supportedModels = ['sonnet', 'opus'];
+// Load supported models from supported-models.json
+this.supportedModels = getSupportedModelsForProvider('claude-code');
+// Validate that models were loaded successfully
+if (this.supportedModels.length === 0) {
+log(
+'warn',
+'No supported models found for claude-code provider. Check supported-models.json configuration.'
+);
+}
// Claude Code requires explicit JSON schema mode
this.needsExplicitJsonSchema = true;
// Claude Code does not support temperature parameter

View File

@@ -10,7 +10,10 @@ import { createCodexCli } from 'ai-sdk-provider-codex-cli';
import { BaseAIProvider } from './base-provider.js';
import { execSync } from 'child_process';
import { log } from '../../scripts/modules/utils.js';
-import { getCodexCliSettingsForCommand } from '../../scripts/modules/config-manager.js';
+import {
+getCodexCliSettingsForCommand,
+getSupportedModelsForProvider
+} from '../../scripts/modules/config-manager.js';
export class CodexCliProvider extends BaseAIProvider {
constructor() {
@@ -20,8 +23,17 @@ export class CodexCliProvider extends BaseAIProvider {
this.needsExplicitJsonSchema = false;
// Codex CLI does not support temperature parameter
this.supportsTemperature = false;
-// Restrict to supported models for OAuth subscription usage
-this.supportedModels = ['gpt-5', 'gpt-5-codex'];
+// Load supported models from supported-models.json
+this.supportedModels = getSupportedModelsForProvider('codex-cli');
+// Validate that models were loaded successfully
+if (this.supportedModels.length === 0) {
+log(
+'warn',
+'No supported models found for codex-cli provider. Check supported-models.json configuration.'
+);
+}
// CLI availability check cache
this._codexCliChecked = false;
this._codexCliAvailable = null;

View File

@@ -47,21 +47,33 @@ export function normalizeProjectRoot(projectRoot) {
/**
* Find the project root directory by looking for project markers
-* @param {string} startDir - Directory to start searching from
-* @returns {string|null} - Project root path or null if not found
+* Traverses upwards from startDir until a project marker is found or filesystem root is reached
+* Limited to 50 parent directory levels to prevent excessive traversal
+* @param {string} startDir - Directory to start searching from (defaults to process.cwd())
+* @returns {string} - Project root path (falls back to current directory if no markers found)
*/
export function findProjectRoot(startDir = process.cwd()) {
+// Define project markers that indicate a project root
+// Prioritize Task Master specific markers first
const projectMarkers = [
-'.taskmaster',
-TASKMASTER_TASKS_FILE,
-'tasks.json',
-LEGACY_TASKS_FILE,
-'.git',
-'.svn',
-'package.json',
-'yarn.lock',
-'package-lock.json',
-'pnpm-lock.yaml'
+'.taskmaster', // Task Master directory (highest priority)
+TASKMASTER_CONFIG_FILE, // .taskmaster/config.json
+TASKMASTER_TASKS_FILE, // .taskmaster/tasks/tasks.json
+LEGACY_CONFIG_FILE, // .taskmasterconfig (legacy)
+LEGACY_TASKS_FILE, // tasks/tasks.json (legacy)
+'tasks.json', // Root tasks.json (legacy)
+'.git', // Git repository
+'.svn', // SVN repository
+'package.json', // Node.js project
+'yarn.lock', // Yarn project
+'package-lock.json', // npm project
+'pnpm-lock.yaml', // pnpm project
+'Cargo.toml', // Rust project
+'go.mod', // Go project
+'pyproject.toml', // Python project
+'requirements.txt', // Python project
+'Gemfile', // Ruby project
+'composer.json' // PHP project
];
let currentDir = path.resolve(startDir);
@@ -69,19 +81,36 @@ export function findProjectRoot(startDir = process.cwd()) {
const maxDepth = 50; // Reasonable limit to prevent infinite loops
let depth = 0;
+// Traverse upwards looking for project markers
while (currentDir !== rootDir && depth < maxDepth) {
// Check if current directory contains any project markers
for (const marker of projectMarkers) {
const markerPath = path.join(currentDir, marker);
-if (fs.existsSync(markerPath)) {
-return currentDir;
+try {
+if (fs.existsSync(markerPath)) {
+// Found a project marker - return this directory as project root
+return currentDir;
+}
+} catch (error) {
+// Ignore permission errors and continue searching
+continue;
+}
}
-currentDir = path.dirname(currentDir);
+// Move up one directory level
+const parentDir = path.dirname(currentDir);
+// Safety check: if dirname returns the same path, we've hit the root
+if (parentDir === currentDir) {
+break;
+}
+currentDir = parentDir;
depth++;
}
// Fallback to current working directory if no project root found
+// This ensures the function always returns a valid path
return process.cwd();
}
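A quick usage sketch of the updated traversal (import path and directories illustrative):

```ts
import { findProjectRoot } from './src/utils/path-utils.js'; // path illustrative

// Running from a nested directory such as <repo>/apps/web/src now walks up
// until it finds a marker like .taskmaster, .git, or package.json.
const projectRoot = findProjectRoot(process.cwd());
console.log(`Using project root: ${projectRoot}`);
```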

View File

@@ -0,0 +1,123 @@
/**
* tool-counts.js
* Shared helper for validating tool counts across tests and validation scripts
*/
import {
getToolCounts,
getToolCategories
} from '../../mcp-server/src/tools/tool-registry.js';
/**
* Expected tool counts - update these when tools are added/removed
* These serve as the canonical source of truth for expected counts
*/
export const EXPECTED_TOOL_COUNTS = {
core: 7,
standard: 15,
total: 36
};
/**
* Expected core tools list for validation
*/
export const EXPECTED_CORE_TOOLS = [
'get_tasks',
'next_task',
'get_task',
'set_task_status',
'update_subtask',
'parse_prd',
'expand_task'
];
/**
* Validate that actual tool counts match expected counts
* @returns {Object} Validation result with isValid flag and details
*/
export function validateToolCounts() {
const actual = getToolCounts();
const expected = EXPECTED_TOOL_COUNTS;
const isValid =
actual.core === expected.core &&
actual.standard === expected.standard &&
actual.total === expected.total;
return {
isValid,
actual,
expected,
differences: {
core: actual.core - expected.core,
standard: actual.standard - expected.standard,
total: actual.total - expected.total
}
};
}
/**
* Validate that tool categories have correct structure and content
* @returns {Object} Validation result
*/
export function validateToolStructure() {
const categories = getToolCategories();
const counts = getToolCounts();
// Check that core tools are subset of standard tools
const coreInStandard = categories.core.every((tool) =>
categories.standard.includes(tool)
);
// Check that standard tools are subset of all tools
const standardInAll = categories.standard.every((tool) =>
categories.all.includes(tool)
);
// Check that expected core tools match actual
const expectedCoreMatch =
EXPECTED_CORE_TOOLS.every((tool) => categories.core.includes(tool)) &&
categories.core.every((tool) => EXPECTED_CORE_TOOLS.includes(tool));
// Check array lengths match counts
const lengthsMatch =
categories.core.length === counts.core &&
categories.standard.length === counts.standard &&
categories.all.length === counts.total;
return {
isValid:
coreInStandard && standardInAll && expectedCoreMatch && lengthsMatch,
details: {
coreInStandard,
standardInAll,
expectedCoreMatch,
lengthsMatch
},
categories,
counts
};
}
/**
* Get a detailed report of all tool information
* @returns {Object} Comprehensive tool information
*/
export function getToolReport() {
const counts = getToolCounts();
const categories = getToolCategories();
const validation = validateToolCounts();
const structure = validateToolStructure();
return {
counts,
categories,
validation,
structure,
summary: {
totalValid: validation.isValid && structure.isValid,
countsValid: validation.isValid,
structureValid: structure.isValid
}
};
}
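A sketch of how a CI check might consume this helper (import path illustrative):

```ts
import { validateToolCounts, getToolReport } from './tests/helpers/tool-counts.js'; // path illustrative

const { isValid, actual, expected } = validateToolCounts();
if (!isValid) {
  throw new Error(
    `Tool registry drifted: expected ${expected.total} tools, found ${actual.total}`
  );
}
console.log(getToolReport().summary);
```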

View File

@@ -43,9 +43,9 @@ describe('Claude Code Error Handling', () => {
// These should work even if CLI is not available
expect(provider.name).toBe('Claude Code');
-expect(provider.getSupportedModels()).toEqual(['sonnet', 'opus']);
+expect(provider.getSupportedModels()).toEqual(['opus', 'sonnet', 'haiku']);
expect(provider.isModelSupported('sonnet')).toBe(true);
-expect(provider.isModelSupported('haiku')).toBe(false);
+expect(provider.isModelSupported('haiku')).toBe(true);
expect(provider.isRequiredApiKey()).toBe(false);
expect(() => provider.validateAuth()).not.toThrow();
});

View File

@@ -40,14 +40,14 @@ describe('Claude Code Integration (Optional)', () => {
it('should create a working provider instance', () => {
const provider = new ClaudeCodeProvider();
expect(provider.name).toBe('Claude Code');
-expect(provider.getSupportedModels()).toEqual(['sonnet', 'opus']);
+expect(provider.getSupportedModels()).toEqual(['opus', 'sonnet', 'haiku']);
});
it('should support model validation', () => {
const provider = new ClaudeCodeProvider();
expect(provider.isModelSupported('sonnet')).toBe(true);
expect(provider.isModelSupported('opus')).toBe(true);
-expect(provider.isModelSupported('haiku')).toBe(false);
+expect(provider.isModelSupported('haiku')).toBe(true);
expect(provider.isModelSupported('unknown')).toBe(false);
});

View File

@@ -28,6 +28,14 @@ jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
}
}));
+// Mock config getters
+jest.unstable_mockModule('../../../scripts/modules/config-manager.js', () => ({
+getClaudeCodeSettingsForCommand: jest.fn(() => ({})),
+getSupportedModelsForProvider: jest.fn(() => ['opus', 'sonnet', 'haiku']),
+getDebugFlag: jest.fn(() => false),
+getLogLevel: jest.fn(() => 'info')
+}));
// Import after mocking
const { ClaudeCodeProvider } = await import(
'../../../src/ai-providers/claude-code.js'
@@ -96,13 +104,13 @@ describe('ClaudeCodeProvider', () => {
describe('model support', () => {
it('should return supported models', () => {
const models = provider.getSupportedModels();
-expect(models).toEqual(['sonnet', 'opus']);
+expect(models).toEqual(['opus', 'sonnet', 'haiku']);
});
it('should check if model is supported', () => {
expect(provider.isModelSupported('sonnet')).toBe(true);
expect(provider.isModelSupported('opus')).toBe(true);
-expect(provider.isModelSupported('haiku')).toBe(false);
+expect(provider.isModelSupported('haiku')).toBe(true);
expect(provider.isModelSupported('unknown')).toBe(false);
});
});
}); });

View File

@@ -20,6 +20,7 @@ jest.unstable_mockModule('ai-sdk-provider-codex-cli', () => ({
// Mock config getters
jest.unstable_mockModule('../../../scripts/modules/config-manager.js', () => ({
getCodexCliSettingsForCommand: jest.fn(() => ({ allowNpx: true })),
+getSupportedModelsForProvider: jest.fn(() => ['gpt-5', 'gpt-5-codex']),
// Provide commonly imported getters to satisfy other module imports if any
getDebugFlag: jest.fn(() => false),
getLogLevel: jest.fn(() => 'info')

View File

@@ -0,0 +1,410 @@
/**
* tool-registration.test.js
* Comprehensive unit tests for the Task Master MCP tool registration system
* Tests environment variable control system covering all configuration modes and edge cases
*/
import {
describe,
it,
expect,
beforeEach,
afterEach,
jest
} from '@jest/globals';
import {
EXPECTED_TOOL_COUNTS,
EXPECTED_CORE_TOOLS,
validateToolCounts,
validateToolStructure
} from '../../../helpers/tool-counts.js';
import { registerTaskMasterTools } from '../../../../mcp-server/src/tools/index.js';
import {
toolRegistry,
coreTools,
standardTools
} from '../../../../mcp-server/src/tools/tool-registry.js';
// Derive constants from imported registry to avoid brittle magic numbers
const ALL_COUNT = Object.keys(toolRegistry).length;
const CORE_COUNT = coreTools.length;
const STANDARD_COUNT = standardTools.length;
describe('Task Master Tool Registration System', () => {
let mockServer;
let originalEnv;
beforeEach(() => {
originalEnv = process.env.TASK_MASTER_TOOLS;
mockServer = {
tools: [],
addTool: jest.fn((tool) => {
mockServer.tools.push(tool);
return tool;
})
};
delete process.env.TASK_MASTER_TOOLS;
});
afterEach(() => {
if (originalEnv !== undefined) {
process.env.TASK_MASTER_TOOLS = originalEnv;
} else {
delete process.env.TASK_MASTER_TOOLS;
}
jest.clearAllMocks();
});
describe('Test Environment Setup', () => {
it('should have properly configured mock server', () => {
expect(mockServer).toBeDefined();
expect(typeof mockServer.addTool).toBe('function');
expect(Array.isArray(mockServer.tools)).toBe(true);
expect(mockServer.tools.length).toBe(0);
});
it('should have correct tool registry structure', () => {
const validation = validateToolCounts();
expect(validation.isValid).toBe(true);
if (!validation.isValid) {
console.error('Tool count validation failed:', validation);
}
expect(validation.actual.total).toBe(EXPECTED_TOOL_COUNTS.total);
expect(validation.actual.core).toBe(EXPECTED_TOOL_COUNTS.core);
expect(validation.actual.standard).toBe(EXPECTED_TOOL_COUNTS.standard);
});
it('should have correct core tools', () => {
const structure = validateToolStructure();
expect(structure.isValid).toBe(true);
if (!structure.isValid) {
console.error('Tool structure validation failed:', structure);
}
expect(coreTools).toEqual(expect.arrayContaining(EXPECTED_CORE_TOOLS));
expect(coreTools.length).toBe(EXPECTED_TOOL_COUNTS.core);
});
it('should have correct standard tools that include all core tools', () => {
const structure = validateToolStructure();
expect(structure.details.coreInStandard).toBe(true);
expect(standardTools.length).toBe(EXPECTED_TOOL_COUNTS.standard);
coreTools.forEach((tool) => {
expect(standardTools).toContain(tool);
});
});
it('should have all expected tools in registry', () => {
const expectedTools = [
'initialize_project',
'models',
'research',
'add_tag',
'delete_tag',
'get_tasks',
'next_task',
'get_task'
];
expectedTools.forEach((tool) => {
expect(toolRegistry).toHaveProperty(tool);
});
});
});
describe('Configuration Modes', () => {
it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS is not set (default behavior)`, () => {
delete process.env.TASK_MASTER_TOOLS;
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(
EXPECTED_TOOL_COUNTS.total
);
});
it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS=all`, () => {
process.env.TASK_MASTER_TOOLS = 'all';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it(`should register exactly ${CORE_COUNT} core tools when TASK_MASTER_TOOLS=core`, () => {
process.env.TASK_MASTER_TOOLS = 'core';
registerTaskMasterTools(mockServer, 'core');
expect(mockServer.addTool).toHaveBeenCalledTimes(
EXPECTED_TOOL_COUNTS.core
);
});
it(`should register exactly ${STANDARD_COUNT} standard tools when TASK_MASTER_TOOLS=standard`, () => {
process.env.TASK_MASTER_TOOLS = 'standard';
registerTaskMasterTools(mockServer, 'standard');
expect(mockServer.addTool).toHaveBeenCalledTimes(
EXPECTED_TOOL_COUNTS.standard
);
});
it(`should treat lean as alias for core mode (${CORE_COUNT} tools)`, () => {
process.env.TASK_MASTER_TOOLS = 'lean';
registerTaskMasterTools(mockServer, 'lean');
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
});
it('should handle case insensitive configuration values', () => {
process.env.TASK_MASTER_TOOLS = 'CORE';
registerTaskMasterTools(mockServer, 'CORE');
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
});
});
describe('Custom Tool Selection and Edge Cases', () => {
it('should register specific tools from comma-separated list', () => {
process.env.TASK_MASTER_TOOLS = 'get_tasks,next_task,get_task';
registerTaskMasterTools(mockServer, 'get_tasks,next_task,get_task');
expect(mockServer.addTool).toHaveBeenCalledTimes(3);
});
it('should handle mixed valid and invalid tool names gracefully', () => {
process.env.TASK_MASTER_TOOLS =
'invalid_tool,get_tasks,fake_tool,next_task';
registerTaskMasterTools(
mockServer,
'invalid_tool,get_tasks,fake_tool,next_task'
);
expect(mockServer.addTool).toHaveBeenCalledTimes(2);
});
it('should default to all tools with completely invalid input', () => {
process.env.TASK_MASTER_TOOLS = 'completely_invalid';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should handle empty string environment variable', () => {
process.env.TASK_MASTER_TOOLS = '';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should handle whitespace in comma-separated lists', () => {
process.env.TASK_MASTER_TOOLS = ' get_tasks , next_task , get_task ';
registerTaskMasterTools(mockServer, ' get_tasks , next_task , get_task ');
expect(mockServer.addTool).toHaveBeenCalledTimes(3);
});
it('should ignore duplicate tools in list', () => {
process.env.TASK_MASTER_TOOLS = 'get_tasks,get_tasks,next_task,get_tasks';
registerTaskMasterTools(
mockServer,
'get_tasks,get_tasks,next_task,get_tasks'
);
expect(mockServer.addTool).toHaveBeenCalledTimes(2);
});
it('should handle only commas and empty entries', () => {
process.env.TASK_MASTER_TOOLS = ',,,';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should handle single tool selection', () => {
process.env.TASK_MASTER_TOOLS = 'get_tasks';
registerTaskMasterTools(mockServer, 'get_tasks');
expect(mockServer.addTool).toHaveBeenCalledTimes(1);
});
});
describe('Coverage Analysis and Integration Tests', () => {
it('should provide 100% code coverage for environment control logic', () => {
const testCases = [
{
env: undefined,
expectedCount: ALL_COUNT,
description: 'undefined env (all)'
},
{
env: '',
expectedCount: ALL_COUNT,
description: 'empty string (all)'
},
{ env: 'all', expectedCount: ALL_COUNT, description: 'all mode' },
{ env: 'core', expectedCount: CORE_COUNT, description: 'core mode' },
{
env: 'lean',
expectedCount: CORE_COUNT,
description: 'lean mode (alias)'
},
{
env: 'standard',
expectedCount: STANDARD_COUNT,
description: 'standard mode'
},
{
env: 'get_tasks,next_task',
expectedCount: 2,
description: 'custom list'
},
{
env: 'invalid_tool',
expectedCount: ALL_COUNT,
description: 'invalid fallback'
}
];
testCases.forEach((testCase) => {
delete process.env.TASK_MASTER_TOOLS;
if (testCase.env !== undefined) {
process.env.TASK_MASTER_TOOLS = testCase.env;
}
mockServer.tools = [];
mockServer.addTool.mockClear();
registerTaskMasterTools(mockServer, testCase.env || 'all');
expect(mockServer.addTool).toHaveBeenCalledTimes(
testCase.expectedCount
);
});
});
it('should have optimal performance characteristics', () => {
const startTime = Date.now();
process.env.TASK_MASTER_TOOLS = 'all';
registerTaskMasterTools(mockServer);
const endTime = Date.now();
const executionTime = endTime - startTime;
expect(executionTime).toBeLessThan(100);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should validate token reduction claims', () => {
expect(coreTools.length).toBeLessThan(standardTools.length);
expect(standardTools.length).toBeLessThan(
Object.keys(toolRegistry).length
);
expect(coreTools.length).toBe(CORE_COUNT);
expect(standardTools.length).toBe(STANDARD_COUNT);
expect(Object.keys(toolRegistry).length).toBe(ALL_COUNT);
const allToolsCount = Object.keys(toolRegistry).length;
const coreReduction =
((allToolsCount - coreTools.length) / allToolsCount) * 100;
const standardReduction =
((allToolsCount - standardTools.length) / allToolsCount) * 100;
expect(coreReduction).toBeGreaterThan(80);
expect(standardReduction).toBeGreaterThan(50);
});
it('should maintain referential integrity of tool registry', () => {
coreTools.forEach((tool) => {
expect(standardTools).toContain(tool);
});
standardTools.forEach((tool) => {
expect(toolRegistry).toHaveProperty(tool);
});
Object.keys(toolRegistry).forEach((tool) => {
expect(typeof toolRegistry[tool]).toBe('function');
});
});
it('should handle concurrent registration attempts', () => {
process.env.TASK_MASTER_TOOLS = 'core';
registerTaskMasterTools(mockServer, 'core');
registerTaskMasterTools(mockServer, 'core');
registerTaskMasterTools(mockServer, 'core');
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT * 3);
});
it('should validate all documented tool categories exist', () => {
const allTools = Object.keys(toolRegistry);
const projectSetupTools = allTools.filter((tool) =>
['initialize_project', 'models', 'rules', 'parse_prd'].includes(tool)
);
expect(projectSetupTools.length).toBeGreaterThan(0);
const taskManagementTools = allTools.filter((tool) =>
['get_tasks', 'get_task', 'next_task', 'set_task_status'].includes(tool)
);
expect(taskManagementTools.length).toBeGreaterThan(0);
const analysisTools = allTools.filter((tool) =>
['analyze_project_complexity', 'complexity_report'].includes(tool)
);
expect(analysisTools.length).toBeGreaterThan(0);
const tagManagementTools = allTools.filter((tool) =>
['add_tag', 'delete_tag', 'list_tags', 'use_tag'].includes(tool)
);
expect(tagManagementTools.length).toBeGreaterThan(0);
});
it('should handle error conditions gracefully', () => {
const problematicInputs = [
'null',
'undefined',
' ',
'\n\t',
'special!@#$%^&*()characters',
'very,very,very,very,very,very,very,long,comma,separated,list,with,invalid,tools,that,should,fallback,to,all'
];
problematicInputs.forEach((input) => {
mockServer.tools = [];
mockServer.addTool.mockClear();
process.env.TASK_MASTER_TOOLS = input;
expect(() => registerTaskMasterTools(mockServer)).not.toThrow();
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
});
});
});
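
Taken together, these cases pin down the parsing rules the suite expects for the tool selection value: named presets (`all`, `core` with `lean` as an alias, `standard`), comma-separated custom lists that are trimmed and de-duplicated, and a fallback to the full tool set whenever the value is empty, whitespace-only, or names nothing in the registry. A minimal sketch of that normalization, using illustrative names rather than the module's actual exports:

```js
// Sketch only: `normalizeToolSelection` is a hypothetical name, and the
// registry/preset arguments stand in for whatever the real module defines.
function normalizeToolSelection(raw, { registry, core, standard }) {
	const value = (raw ?? '').trim();
	const all = Object.keys(registry);
	if (!value) return all; // undefined or empty string -> every tool
	const presets = { all, core, lean: core, standard };
	const preset = presets[value.toLowerCase()];
	if (preset) return preset;
	// Comma list: trim entries, drop empties, de-duplicate, keep known names.
	const requested = [
		...new Set(
			value
				.split(',')
				.map((name) => name.trim())
				.filter(Boolean)
		)
	].filter((name) => name in registry);
	// A list that names no recognized tool falls back to registering everything.
	return requested.length > 0 ? requested : all;
}
```

Registration then amounts to invoking one registry function per selected name, each of which ends up calling `server.addTool(...)` — which is why the assertions above only need to count `addTool` invocations.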

View File

@@ -0,0 +1,223 @@
/**
* Unit tests for findProjectRoot() function
* Tests the parent directory traversal functionality
*/
import { jest } from '@jest/globals';
import path from 'path';
import fs from 'fs';
// Import the function to test
import { findProjectRoot } from '../../src/utils/path-utils.js';
describe('findProjectRoot', () => {
describe('Parent Directory Traversal', () => {
test('should find .taskmaster in parent directory', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// .taskmaster exists only at /project
return normalized === path.normalize('/project/.taskmaster');
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should find .git in parent directory', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
return normalized === path.normalize('/project/.git');
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should find package.json in parent directory', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
return normalized === path.normalize('/project/package.json');
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should traverse multiple levels to find project root', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// Only exists at /project, not in any subdirectories
return normalized === path.normalize('/project/.taskmaster');
});
const result = findProjectRoot('/project/subdir/deep/nested');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should return current directory as fallback when no markers found', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
// No project markers exist anywhere
mockExistsSync.mockReturnValue(false);
const result = findProjectRoot('/some/random/path');
// Should fall back to process.cwd()
expect(result).toBe(process.cwd());
mockExistsSync.mockRestore();
});
test('should find markers at current directory before checking parent', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// .git exists at /project/subdir, .taskmaster exists at /project
if (normalized.includes('/project/subdir/.git')) return true;
if (normalized.includes('/project/.taskmaster')) return true;
return false;
});
const result = findProjectRoot('/project/subdir');
// Should find /project/subdir first because .git exists there,
// even though .taskmaster is earlier in the marker array
expect(result).toBe('/project/subdir');
mockExistsSync.mockRestore();
});
test('should handle permission errors gracefully', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// Throw permission error for checks in /project/subdir
if (normalized.startsWith('/project/subdir/')) {
throw new Error('EACCES: permission denied');
}
// Return true only for .taskmaster at /project
return normalized.includes('/project/.taskmaster');
});
const result = findProjectRoot('/project/subdir');
// Should handle permission errors in subdirectory and traverse to parent
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should detect filesystem root correctly', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
// No markers exist
mockExistsSync.mockReturnValue(false);
const result = findProjectRoot('/');
// Should stop at root and fall back to process.cwd()
expect(result).toBe(process.cwd());
mockExistsSync.mockRestore();
});
test('should recognize various project markers', () => {
const projectMarkers = [
'.taskmaster',
'.git',
'package.json',
'Cargo.toml',
'go.mod',
'pyproject.toml',
'requirements.txt',
'Gemfile',
'composer.json'
];
projectMarkers.forEach((marker) => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
return normalized.includes(`/project/${marker}`);
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
});
});
describe('Edge Cases', () => {
test('should handle empty string as startDir', () => {
const result = findProjectRoot('');
// Should use process.cwd() or fall back appropriately
expect(typeof result).toBe('string');
expect(result.length).toBeGreaterThan(0);
});
test('should handle relative paths', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
// Simulate .git existing in the resolved path
return checkPath.includes('.git');
});
const result = findProjectRoot('./subdir');
expect(typeof result).toBe('string');
mockExistsSync.mockRestore();
});
test('should not exceed max depth limit', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
// Track how many times existsSync is called
let callCount = 0;
mockExistsSync.mockImplementation(() => {
callCount++;
return false; // Never find a marker
});
// Create a very deep path
const deepPath = '/a/'.repeat(100) + 'deep';
const result = findProjectRoot(deepPath);
// Should stop after max depth (50) and not check 100 levels
// Each level checks multiple markers, so callCount will be high but bounded
expect(callCount).toBeLessThan(1000); // Reasonable upper bound
// With 18 markers and max depth of 50, expect around 900 calls maximum
expect(callCount).toBeLessThanOrEqual(50 * 18);
mockExistsSync.mockRestore();
});
});
});
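
Read as a specification, this suite describes a bounded upward walk: resolve the starting directory (defaulting to `process.cwd()`), check every known project marker at the current level before stepping to the parent, treat per-check filesystem errors as "marker not found", stop at the filesystem root or after 50 levels, and fall back to `process.cwd()` when nothing matches. A sketch of that shape — not the actual implementation in `src/utils/path-utils.js`, and with an abridged marker list:

```js
import path from 'path';
import fs from 'fs';

// Abridged: the depth test above assumes the real list has ~18 markers.
const PROJECT_MARKERS = [
	'.taskmaster',
	'.git',
	'package.json',
	'Cargo.toml',
	'go.mod',
	'pyproject.toml',
	'requirements.txt',
	'Gemfile',
	'composer.json'
];
const MAX_DEPTH = 50;

function findProjectRootSketch(startDir) {
	let current = path.resolve(startDir || process.cwd());
	for (let depth = 0; depth < MAX_DEPTH; depth++) {
		// Every marker is checked at this level before moving to the parent.
		const found = PROJECT_MARKERS.some((marker) => {
			try {
				return fs.existsSync(path.join(current, marker));
			} catch {
				return false; // e.g. EACCES: treat as "not found", keep walking
			}
		});
		if (found) return current;
		const parent = path.dirname(current);
		if (parent === current) break; // filesystem root reached
		current = parent;
	}
	return process.cwd(); // no marker found within the depth limit
}
```

The depth cap is what keeps the `callCount` assertion bounded at roughly `MAX_DEPTH × markers`, and the per-check `try/catch` is what lets the permission-error case skip an unreadable directory instead of aborting the search.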