Compare commits
1 Commits
ralph/fix/
...
docs/auto-
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
086ae62f77 |
7
.changeset/auto-update-changelog-highlights.md
Normal file
7
.changeset/auto-update-changelog-highlights.md
Normal file
@@ -0,0 +1,7 @@
|
||||
---
|
||||
"task-master-ai": minor
|
||||
---
|
||||
|
||||
Add changelog highlights to auto-update notifications
|
||||
|
||||
When the CLI auto-updates to a new version, it now displays a "What's New" section.
|
||||
@@ -11,7 +11,6 @@
|
||||
"access": "public",
|
||||
"baseBranch": "main",
|
||||
"ignore": [
|
||||
"docs",
|
||||
"@tm/claude-code-plugin"
|
||||
"docs"
|
||||
]
|
||||
}
|
||||
@@ -1,5 +0,0 @@
|
||||
---
|
||||
"task-master-ai": patch
|
||||
---
|
||||
|
||||
Improve auth token refresh flow
|
||||
@@ -1,7 +0,0 @@
|
||||
---
|
||||
"task-master-ai": patch
|
||||
---
|
||||
|
||||
Enable Task Master commands to traverse parent directories to find project root from nested paths
|
||||
|
||||
Fixes #1301
|
||||
@@ -1,5 +0,0 @@
|
||||
---
|
||||
"@tm/cli": patch
|
||||
---
|
||||
|
||||
Fix warning message box width to match dashboard box width for consistent UI alignment
|
||||
@@ -1,35 +0,0 @@
|
||||
---
|
||||
"task-master-ai": minor
|
||||
---
|
||||
|
||||
Add configurable MCP tool loading to optimize LLM context usage
|
||||
|
||||
You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
|
||||
|
||||
**Configuration Options:**
|
||||
|
||||
- `all` (default): Load all 36 tools
|
||||
- `core` or `lean`: Load only 7 essential tools for daily development
|
||||
- Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
|
||||
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
|
||||
- Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
|
||||
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
|
||||
|
||||
**Example .mcp.json configuration:**
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "standard",
|
||||
"ANTHROPIC_API_KEY": "your_key_here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
|
||||
47
.changeset/mean-planes-wave.md
Normal file
47
.changeset/mean-planes-wave.md
Normal file
@@ -0,0 +1,47 @@
|
||||
---
|
||||
"task-master-ai": minor
|
||||
---
|
||||
|
||||
Add Claude Code plugin with marketplace distribution
|
||||
|
||||
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
|
||||
|
||||
## 🎉 New: Claude Code Plugin
|
||||
|
||||
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
|
||||
|
||||
- **49 slash commands** with clean naming (`/task-master-ai:command-name`)
|
||||
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
|
||||
- **MCP server integration** for deep Claude Code integration
|
||||
|
||||
**Installation:**
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
|
||||
|
||||
- Shows plugin installation instructions
|
||||
- Only manages CLAUDE.md imports for agent instructions
|
||||
- Directs users to install the official plugin
|
||||
|
||||
**Migration for Existing Users:**
|
||||
|
||||
If you previously used `rules add claude`:
|
||||
|
||||
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
|
||||
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
|
||||
3. remove old `.claude/commands/` and `.claude/agents/` directories
|
||||
|
||||
**Why This Change?**
|
||||
|
||||
Claude Code plugins provide:
|
||||
|
||||
- ✅ Automatic updates when we release new features
|
||||
- ✅ Better command organization and naming
|
||||
- ✅ Seamless integration with Claude Code
|
||||
- ✅ No manual file copying or management
|
||||
|
||||
The plugin system is the future of Task Master AI integration with Claude Code!
|
||||
@@ -1,5 +0,0 @@
|
||||
---
|
||||
"task-master-ai": minor
|
||||
---
|
||||
|
||||
Improve next command to work with remote
|
||||
17
.changeset/nice-ways-hope.md
Normal file
17
.changeset/nice-ways-hope.md
Normal file
@@ -0,0 +1,17 @@
|
||||
---
|
||||
"task-master-ai": minor
|
||||
---
|
||||
|
||||
Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
|
||||
|
||||
Key features:
|
||||
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
|
||||
- Inline instructions at decision points guide AI through each section
|
||||
- Good/bad examples for immediate pattern matching
|
||||
- Flexible plain-text format with XML-style tags for parseability
|
||||
- Critical dependency-graph section ensures correct task ordering
|
||||
- Automatic inclusion during `task-master init`
|
||||
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
|
||||
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
|
||||
|
||||
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
|
||||
7
.changeset/plain-falcons-serve.md
Normal file
7
.changeset/plain-falcons-serve.md
Normal file
@@ -0,0 +1,7 @@
|
||||
---
|
||||
"task-master-ai": patch
|
||||
---
|
||||
|
||||
Fix cross-level task dependencies not being saved
|
||||
|
||||
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
|
||||
16
.changeset/smart-owls-relax.md
Normal file
16
.changeset/smart-owls-relax.md
Normal file
@@ -0,0 +1,16 @@
|
||||
---
|
||||
"task-master-ai": minor
|
||||
---
|
||||
|
||||
Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
|
||||
|
||||
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
|
||||
|
||||
Key improvements:
|
||||
- Automatic integration with complexity analysis reports
|
||||
- Tag-aware complexity report path resolution
|
||||
- Intelligent subtask count determination based on task complexity
|
||||
- Falls back to defaults when complexity analysis is unavailable
|
||||
- Enhanced logging for better visibility into expansion decisions
|
||||
|
||||
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
|
||||
@@ -14,4 +14,4 @@ OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE
|
||||
VERTEX_PROJECT_ID=your-gcp-project-id
|
||||
VERTEX_LOCATION=us-central1
|
||||
# Optional: Path to service account credentials JSON file (alternative to API key)
|
||||
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
|
||||
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
|
||||
|
||||
170
CHANGELOG.md
170
CHANGELOG.md
@@ -1,175 +1,5 @@
|
||||
# task-master-ai
|
||||
|
||||
## 0.29.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
|
||||
|
||||
When the CLI auto-updates to a new version, it now displays a "What's New" section.
|
||||
|
||||
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
|
||||
|
||||
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
|
||||
|
||||
## 🎉 New: Claude Code Plugin
|
||||
|
||||
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
|
||||
- **49 slash commands** with clean naming (`/taskmaster:command-name`)
|
||||
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
|
||||
- **MCP server integration** for deep Claude Code integration
|
||||
|
||||
**Installation:**
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
|
||||
- Shows plugin installation instructions
|
||||
- Only manages CLAUDE.md imports for agent instructions
|
||||
- Directs users to install the official plugin
|
||||
|
||||
**Migration for Existing Users:**
|
||||
|
||||
If you previously used `rules add claude`:
|
||||
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
|
||||
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
|
||||
3. remove old `.claude/commands/` and `.claude/agents/` directories
|
||||
|
||||
**Why This Change?**
|
||||
|
||||
Claude Code plugins provide:
|
||||
- ✅ Automatic updates when we release new features
|
||||
- ✅ Better command organization and naming
|
||||
- ✅ Seamless integration with Claude Code
|
||||
- ✅ No manual file copying or management
|
||||
|
||||
The plugin system is the future of Task Master AI integration with Claude Code!
|
||||
|
||||
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
|
||||
|
||||
Key features:
|
||||
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
|
||||
- Inline instructions at decision points guide AI through each section
|
||||
- Good/bad examples for immediate pattern matching
|
||||
- Flexible plain-text format with XML-style tags for parseability
|
||||
- Critical dependency-graph section ensures correct task ordering
|
||||
- Automatic inclusion during `task-master init`
|
||||
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
|
||||
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
|
||||
|
||||
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
|
||||
|
||||
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
|
||||
|
||||
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
|
||||
|
||||
Key improvements:
|
||||
- Automatic integration with complexity analysis reports
|
||||
- Tag-aware complexity report path resolution
|
||||
- Intelligent subtask count determination based on task complexity
|
||||
- Falls back to defaults when complexity analysis is unavailable
|
||||
- Enhanced logging for better visibility into expansion decisions
|
||||
|
||||
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
|
||||
|
||||
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
|
||||
|
||||
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`4c1ef2c`](https://github.com/eyaltoledano/claude-task-master/commit/4c1ef2ca94411c53bcd2a78ec710b06c500236dd) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
|
||||
|
||||
## 0.29.0-rc.1
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`a6c5152`](https://github.com/eyaltoledano/claude-task-master/commit/a6c5152f20edd8717cf1aea34e7c178b1261aa99) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
|
||||
|
||||
## 0.29.0-rc.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
|
||||
|
||||
When the CLI auto-updates to a new version, it now displays a "What's New" section.
|
||||
|
||||
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
|
||||
|
||||
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
|
||||
|
||||
## 🎉 New: Claude Code Plugin
|
||||
|
||||
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
|
||||
- **49 slash commands** with clean naming (`/task-master-ai:command-name`)
|
||||
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
|
||||
- **MCP server integration** for deep Claude Code integration
|
||||
|
||||
**Installation:**
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
|
||||
- Shows plugin installation instructions
|
||||
- Only manages CLAUDE.md imports for agent instructions
|
||||
- Directs users to install the official plugin
|
||||
|
||||
**Migration for Existing Users:**
|
||||
|
||||
If you previously used `rules add claude`:
|
||||
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
|
||||
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
|
||||
3. remove old `.claude/commands/` and `.claude/agents/` directories
|
||||
|
||||
**Why This Change?**
|
||||
|
||||
Claude Code plugins provide:
|
||||
- ✅ Automatic updates when we release new features
|
||||
- ✅ Better command organization and naming
|
||||
- ✅ Seamless integration with Claude Code
|
||||
- ✅ No manual file copying or management
|
||||
|
||||
The plugin system is the future of Task Master AI integration with Claude Code!
|
||||
|
||||
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
|
||||
|
||||
Key features:
|
||||
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
|
||||
- Inline instructions at decision points guide AI through each section
|
||||
- Good/bad examples for immediate pattern matching
|
||||
- Flexible plain-text format with XML-style tags for parseability
|
||||
- Critical dependency-graph section ensures correct task ordering
|
||||
- Automatic inclusion during `task-master init`
|
||||
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
|
||||
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
|
||||
|
||||
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
|
||||
|
||||
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
|
||||
|
||||
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
|
||||
|
||||
Key improvements:
|
||||
- Automatic integration with complexity analysis reports
|
||||
- Tag-aware complexity report path resolution
|
||||
- Intelligent subtask count determination based on task complexity
|
||||
- Falls back to defaults when complexity analysis is unavailable
|
||||
- Enhanced logging for better visibility into expansion decisions
|
||||
|
||||
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
|
||||
|
||||
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
|
||||
|
||||
## 0.28.0
|
||||
|
||||
### Minor Changes
|
||||
|
||||
80
README.md
80
README.md
@@ -119,7 +119,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
|
||||
@@ -149,7 +148,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
|
||||
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
|
||||
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
|
||||
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
|
||||
@@ -198,7 +196,7 @@ Initialize taskmaster-ai in my project
|
||||
|
||||
#### 5. Make sure you have a PRD (Recommended)
|
||||
|
||||
For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`.
|
||||
For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
|
||||
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`
|
||||
|
||||
An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
|
||||
@@ -284,76 +282,6 @@ task-master generate
|
||||
task-master rules add windsurf,roo,vscode
|
||||
```
|
||||
|
||||
## Tool Loading Configuration
|
||||
|
||||
### Optimizing MCP Tool Loading
|
||||
|
||||
Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.
|
||||
|
||||
You can optimize performance by configuring the `TASK_MASTER_TOOLS` environment variable:
|
||||
|
||||
### Available Modes
|
||||
|
||||
| Mode | Tools | Context Usage | Use Case |
|
||||
|------|-------|--------------|----------|
|
||||
| `all` (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
|
||||
| `standard` | 15 | ~10,000 tokens | Common task management operations |
|
||||
| `core` (or `lean`) | 7 | ~5,000 tokens | Essential daily development workflow |
|
||||
| `custom` | Variable | Variable | Comma-separated list of specific tools |
|
||||
|
||||
### Configuration Methods
|
||||
|
||||
#### Method 1: Environment Variable in MCP Configuration
|
||||
|
||||
Add `TASK_MASTER_TOOLS` to your MCP configuration file's `env` section:
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"mcpServers": { // or "servers" for VS Code
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
|
||||
"ANTHROPIC_API_KEY": "your-key-here",
|
||||
// ... other API keys
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Method 2: Claude Code CLI (One-Time Setup)
|
||||
|
||||
For Claude Code users, you can set the mode during installation:
|
||||
|
||||
```bash
|
||||
# Core mode example (~70% token reduction)
|
||||
claude mcp add task-master-ai --scope user \
|
||||
--env TASK_MASTER_TOOLS="core" \
|
||||
-- npx -y task-master-ai@latest
|
||||
|
||||
# Custom tools example
|
||||
claude mcp add task-master-ai --scope user \
|
||||
--env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
|
||||
-- npx -y task-master-ai@latest
|
||||
```
|
||||
|
||||
### Tool Sets Details
|
||||
|
||||
**Core Tools (7):** `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
|
||||
|
||||
**Standard Tools (15):** All core tools plus `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
|
||||
|
||||
**All Tools (36):** Complete set including project setup, task management, analysis, dependencies, tags, research, and more
|
||||
|
||||
### Recommendations
|
||||
|
||||
- **New users**: Start with `"standard"` mode for a good balance
|
||||
- **Large projects**: Use `"core"` mode to minimize token usage
|
||||
- **Complex workflows**: Use `"all"` mode or custom selection
|
||||
- **Backward compatibility**: If not specified, defaults to `"all"` mode
|
||||
|
||||
## Claude Code Support
|
||||
|
||||
Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
|
||||
@@ -382,12 +310,6 @@ cd claude-task-master
|
||||
node scripts/init.js
|
||||
```
|
||||
|
||||
## Join Our Team
|
||||
|
||||
<a href="https://tryhamster.com" target="_blank">
|
||||
<img src="./images/hamster-hiring.png" alt="Join Hamster's founding team" />
|
||||
</a>
|
||||
|
||||
## Contributors
|
||||
|
||||
<a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors">
|
||||
|
||||
@@ -11,13 +11,6 @@
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@null
|
||||
|
||||
## null
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies []:
|
||||
- @tm/core@null
|
||||
|
||||
|
||||
@@ -22,7 +22,6 @@
|
||||
"test:ci": "vitest run --coverage --reporter=dot"
|
||||
},
|
||||
"dependencies": {
|
||||
"@inquirer/search": "^3.2.0",
|
||||
"@tm/core": "*",
|
||||
"boxen": "^8.0.1",
|
||||
"chalk": "5.6.2",
|
||||
|
||||
@@ -8,7 +8,6 @@ import { Command } from 'commander';
|
||||
// Import all commands
|
||||
import { ListTasksCommand } from './commands/list.command.js';
|
||||
import { ShowCommand } from './commands/show.command.js';
|
||||
import { NextCommand } from './commands/next.command.js';
|
||||
import { AuthCommand } from './commands/auth.command.js';
|
||||
import { ContextCommand } from './commands/context.command.js';
|
||||
import { StartCommand } from './commands/start.command.js';
|
||||
@@ -46,12 +45,6 @@ export class CommandRegistry {
|
||||
commandClass: ShowCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'next',
|
||||
description: 'Find the next available task to work on',
|
||||
commandClass: NextCommand as any,
|
||||
category: 'task'
|
||||
},
|
||||
{
|
||||
name: 'start',
|
||||
description: 'Start working on a task with claude-code',
|
||||
|
||||
@@ -14,8 +14,6 @@ import {
|
||||
type AuthCredentials
|
||||
} from '@tm/core/auth';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { ContextCommand } from './context.command.js';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
|
||||
/**
|
||||
* Result type from auth command
|
||||
@@ -118,7 +116,8 @@ export class AuthCommand extends Command {
|
||||
process.exit(0);
|
||||
}, 100);
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -134,7 +133,8 @@ export class AuthCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -146,7 +146,8 @@ export class AuthCommand extends Command {
|
||||
const result = this.displayStatus();
|
||||
this.setLastResult(result);
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -162,7 +163,8 @@ export class AuthCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -185,29 +187,19 @@ export class AuthCommand extends Command {
|
||||
if (credentials.expiresAt) {
|
||||
const expiresAt = new Date(credentials.expiresAt);
|
||||
const now = new Date();
|
||||
const timeRemaining = expiresAt.getTime() - now.getTime();
|
||||
const hoursRemaining = Math.floor(timeRemaining / (1000 * 60 * 60));
|
||||
const minutesRemaining = Math.floor(timeRemaining / (1000 * 60));
|
||||
const hoursRemaining = Math.floor(
|
||||
(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
|
||||
);
|
||||
|
||||
if (timeRemaining > 0) {
|
||||
// Token is still valid
|
||||
if (hoursRemaining > 0) {
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` Expires at: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
|
||||
)
|
||||
);
|
||||
} else {
|
||||
console.log(
|
||||
chalk.gray(
|
||||
` Expires at: ${expiresAt.toLocaleString()} (${minutesRemaining} minutes remaining)`
|
||||
)
|
||||
);
|
||||
}
|
||||
} else {
|
||||
// Token has expired
|
||||
if (hoursRemaining > 0) {
|
||||
console.log(
|
||||
chalk.yellow(` Expired at: ${expiresAt.toLocaleString()}`)
|
||||
chalk.gray(
|
||||
` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
|
||||
)
|
||||
);
|
||||
} else {
|
||||
console.log(
|
||||
chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
|
||||
);
|
||||
}
|
||||
} else {
|
||||
@@ -349,37 +341,6 @@ export class AuthCommand extends Command {
|
||||
chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
|
||||
);
|
||||
|
||||
// Post-auth: Set up workspace context
|
||||
console.log(); // Add spacing
|
||||
try {
|
||||
const contextCommand = new ContextCommand();
|
||||
const contextResult = await contextCommand.setupContextInteractive();
|
||||
if (contextResult.success) {
|
||||
if (contextResult.orgSelected && contextResult.briefSelected) {
|
||||
console.log(
|
||||
chalk.green('✓ Workspace context configured successfully')
|
||||
);
|
||||
} else if (contextResult.orgSelected) {
|
||||
console.log(chalk.green('✓ Organization selected'));
|
||||
}
|
||||
} else {
|
||||
console.log(
|
||||
chalk.yellow('⚠ Context setup was skipped or encountered issues')
|
||||
);
|
||||
console.log(
|
||||
chalk.gray(' You can set up context later with "tm context"')
|
||||
);
|
||||
}
|
||||
} catch (contextError) {
|
||||
console.log(chalk.yellow('⚠ Context setup encountered an error'));
|
||||
console.log(
|
||||
chalk.gray(' You can set up context later with "tm context"')
|
||||
);
|
||||
if (process.env.DEBUG) {
|
||||
console.error(chalk.gray((contextError as Error).message));
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
action: 'login',
|
||||
@@ -387,7 +348,7 @@ export class AuthCommand extends Command {
|
||||
message: 'Authentication successful'
|
||||
};
|
||||
} catch (error) {
|
||||
displayError(error, { skipExit: true });
|
||||
this.handleAuthError(error as AuthenticationError);
|
||||
|
||||
return {
|
||||
success: false,
|
||||
@@ -450,6 +411,51 @@ export class AuthCommand extends Command {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle authentication errors
|
||||
*/
|
||||
private handleAuthError(error: AuthenticationError): void {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
switch (error.code) {
|
||||
case 'NETWORK_ERROR':
|
||||
ui.displayWarning(
|
||||
'Please check your internet connection and try again.'
|
||||
);
|
||||
break;
|
||||
case 'INVALID_CREDENTIALS':
|
||||
ui.displayWarning('Please check your credentials and try again.');
|
||||
break;
|
||||
case 'AUTH_EXPIRED':
|
||||
ui.displayWarning(
|
||||
'Your session has expired. Please authenticate again.'
|
||||
);
|
||||
break;
|
||||
default:
|
||||
if (process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack || ''));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle general errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
this.handleAuthError(error);
|
||||
} else {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
|
||||
@@ -6,11 +6,13 @@
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import search from '@inquirer/search';
|
||||
import ora, { Ora } from 'ora';
|
||||
import { AuthManager, type UserContext } from '@tm/core/auth';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type UserContext
|
||||
} from '@tm/core/auth';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
|
||||
/**
|
||||
* Result type from context command
|
||||
@@ -116,7 +118,8 @@ export class ContextCommand extends Command {
|
||||
const result = this.displayContext();
|
||||
this.setLastResult(result);
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -153,14 +156,10 @@ export class ContextCommand extends Command {
|
||||
|
||||
if (context.briefName || context.briefId) {
|
||||
console.log(chalk.green('\n✓ Brief'));
|
||||
if (context.briefName && context.briefId) {
|
||||
const shortId = context.briefId.slice(0, 8);
|
||||
console.log(
|
||||
chalk.white(` ${context.briefName} `) + chalk.gray(`(${shortId})`)
|
||||
);
|
||||
} else if (context.briefName) {
|
||||
if (context.briefName) {
|
||||
console.log(chalk.white(` ${context.briefName}`));
|
||||
} else if (context.briefId) {
|
||||
}
|
||||
if (context.briefId) {
|
||||
console.log(chalk.gray(` ID: ${context.briefId}`));
|
||||
}
|
||||
}
|
||||
@@ -212,7 +211,8 @@ export class ContextCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -250,10 +250,9 @@ export class ContextCommand extends Command {
|
||||
]);
|
||||
|
||||
// Update context
|
||||
this.authManager.updateContext({
|
||||
await this.authManager.updateContext({
|
||||
orgId: selectedOrg.id,
|
||||
orgName: selectedOrg.name,
|
||||
orgSlug: selectedOrg.slug,
|
||||
// Clear brief when changing org
|
||||
briefId: undefined,
|
||||
briefName: undefined
|
||||
@@ -300,7 +299,8 @@ export class ContextCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -324,54 +324,26 @@ export class ContextCommand extends Command {
|
||||
};
|
||||
}
|
||||
|
||||
// Prompt for selection with search
|
||||
const selectedBrief = await search<(typeof briefs)[0] | null>({
|
||||
message: 'Search for a brief:',
|
||||
source: async (input) => {
|
||||
const searchTerm = input?.toLowerCase() || '';
|
||||
|
||||
// Static option for no brief
|
||||
const noBriefOption = {
|
||||
name: '(No brief - organization level)',
|
||||
value: null as any,
|
||||
description: 'Clear brief selection'
|
||||
};
|
||||
|
||||
// Filter and map brief options
|
||||
const briefOptions = briefs
|
||||
.filter((brief) => {
|
||||
if (!searchTerm) return true;
|
||||
|
||||
const title = brief.document?.title || '';
|
||||
const shortId = brief.id.slice(0, 8);
|
||||
|
||||
// Search by title first, then by UUID
|
||||
return (
|
||||
title.toLowerCase().includes(searchTerm) ||
|
||||
brief.id.toLowerCase().includes(searchTerm) ||
|
||||
shortId.toLowerCase().includes(searchTerm)
|
||||
);
|
||||
})
|
||||
.map((brief) => {
|
||||
const title =
|
||||
brief.document?.title || `Brief ${brief.id.slice(0, 8)}`;
|
||||
const shortId = brief.id.slice(0, 8);
|
||||
return {
|
||||
name: `${title} ${chalk.gray(`(${shortId})`)}`,
|
||||
value: brief
|
||||
};
|
||||
});
|
||||
|
||||
return [noBriefOption, ...briefOptions];
|
||||
// Prompt for selection
|
||||
const { selectedBrief } = await inquirer.prompt([
|
||||
{
|
||||
type: 'list',
|
||||
name: 'selectedBrief',
|
||||
message: 'Select a brief:',
|
||||
choices: [
|
||||
{ name: '(No brief - organization level)', value: null },
|
||||
...briefs.map((brief) => ({
|
||||
name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
|
||||
value: brief
|
||||
}))
|
||||
]
|
||||
}
|
||||
});
|
||||
]);
|
||||
|
||||
if (selectedBrief) {
|
||||
// Update context with brief
|
||||
const briefName =
|
||||
selectedBrief.document?.title ||
|
||||
`Brief ${selectedBrief.id.slice(0, 8)}`;
|
||||
this.authManager.updateContext({
|
||||
const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
|
||||
await this.authManager.updateContext({
|
||||
briefId: selectedBrief.id,
|
||||
briefName: briefName
|
||||
});
|
||||
@@ -382,11 +354,11 @@ export class ContextCommand extends Command {
|
||||
success: true,
|
||||
action: 'select-brief',
|
||||
context: this.authManager.getContext() || undefined,
|
||||
message: `Selected brief: ${selectedBrief.document?.title}`
|
||||
message: `Selected brief: ${selectedBrief.name}`
|
||||
};
|
||||
} else {
|
||||
// Clear brief selection
|
||||
this.authManager.updateContext({
|
||||
await this.authManager.updateContext({
|
||||
briefId: undefined,
|
||||
briefName: undefined
|
||||
});
|
||||
@@ -424,7 +396,8 @@ export class ContextCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -470,7 +443,8 @@ export class ContextCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -494,7 +468,7 @@ export class ContextCommand extends Command {
|
||||
if (!briefId) {
|
||||
spinner.fail('Could not extract a brief ID from the provided input');
|
||||
ui.displayError(
|
||||
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
|
||||
`Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
@@ -506,24 +480,20 @@ export class ContextCommand extends Command {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Fetch org to get a friendly name and slug (optional)
|
||||
// Fetch org to get a friendly name (optional)
|
||||
let orgName: string | undefined;
|
||||
let orgSlug: string | undefined;
|
||||
try {
|
||||
const org = await this.authManager.getOrganization(brief.accountId);
|
||||
orgName = org?.name;
|
||||
orgSlug = org?.slug;
|
||||
} catch {
|
||||
// Non-fatal if org lookup fails
|
||||
}
|
||||
|
||||
// Update context: set org and brief
|
||||
const briefName =
|
||||
brief.document?.title || `Brief ${brief.id.slice(0, 8)}`;
|
||||
this.authManager.updateContext({
|
||||
const briefName = `Brief ${brief.id.slice(0, 8)}`;
|
||||
await this.authManager.updateContext({
|
||||
orgId: brief.accountId,
|
||||
orgName,
|
||||
orgSlug,
|
||||
briefId: brief.id,
|
||||
briefName
|
||||
});
|
||||
@@ -545,7 +515,8 @@ export class ContextCommand extends Command {
|
||||
try {
|
||||
if (spinner?.isSpinning) spinner.stop();
|
||||
} catch {}
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -642,7 +613,7 @@ export class ContextCommand extends Command {
|
||||
};
|
||||
}
|
||||
|
||||
this.authManager.updateContext(context);
|
||||
await this.authManager.updateContext(context);
|
||||
ui.displaySuccess('Context updated');
|
||||
|
||||
// Display what was set
|
||||
@@ -674,6 +645,26 @@ export class ContextCommand extends Command {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
if (error.code === 'NOT_AUTHENTICATED') {
|
||||
ui.displayWarning('Please authenticate first: tm auth login');
|
||||
}
|
||||
} else {
|
||||
const msg = error?.message ?? String(error);
|
||||
console.error(chalk.red(`Error: ${msg}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
@@ -695,53 +686,6 @@ export class ContextCommand extends Command {
|
||||
return this.authManager.getContext();
|
||||
}
|
||||
|
||||
/**
|
||||
* Interactive context setup (for post-auth flow)
|
||||
* Prompts user to select org and brief
|
||||
*/
|
||||
async setupContextInteractive(): Promise<{
|
||||
success: boolean;
|
||||
orgSelected: boolean;
|
||||
briefSelected: boolean;
|
||||
}> {
|
||||
try {
|
||||
// Ask if user wants to set up workspace context
|
||||
const { setupContext } = await inquirer.prompt([
|
||||
{
|
||||
type: 'confirm',
|
||||
name: 'setupContext',
|
||||
message: 'Would you like to set up your workspace context now?',
|
||||
default: true
|
||||
}
|
||||
]);
|
||||
|
||||
if (!setupContext) {
|
||||
return { success: true, orgSelected: false, briefSelected: false };
|
||||
}
|
||||
|
||||
// Select organization
|
||||
const orgResult = await this.selectOrganization();
|
||||
if (!orgResult.success || !orgResult.context?.orgId) {
|
||||
return { success: false, orgSelected: false, briefSelected: false };
|
||||
}
|
||||
|
||||
// Select brief
|
||||
const briefResult = await this.selectBrief(orgResult.context.orgId);
|
||||
return {
|
||||
success: true,
|
||||
orgSelected: true,
|
||||
briefSelected: briefResult.success
|
||||
};
|
||||
} catch (error) {
|
||||
console.error(
|
||||
chalk.yellow(
|
||||
'\nContext setup skipped due to error. You can set it up later with "tm context"'
|
||||
)
|
||||
);
|
||||
return { success: false, orgSelected: false, briefSelected: false };
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
|
||||
@@ -7,10 +7,13 @@ import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import inquirer from 'inquirer';
|
||||
import ora, { Ora } from 'ora';
|
||||
import { AuthManager, type UserContext } from '@tm/core/auth';
|
||||
import {
|
||||
AuthManager,
|
||||
AuthenticationError,
|
||||
type UserContext
|
||||
} from '@tm/core/auth';
|
||||
import { TaskMasterCore, type ExportResult } from '@tm/core';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
|
||||
/**
|
||||
* Result type from export command
|
||||
@@ -100,7 +103,7 @@ export class ExportCommand extends Command {
|
||||
await this.initializeServices();
|
||||
|
||||
// Get current context
|
||||
const context = await this.authManager.getContext();
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
// Determine org and brief IDs
|
||||
let orgId = options?.org || context?.orgId;
|
||||
@@ -194,7 +197,8 @@ export class ExportCommand extends Command {
|
||||
};
|
||||
} catch (error: any) {
|
||||
if (spinner?.isSpinning) spinner.fail('Export failed');
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -330,6 +334,26 @@ export class ExportCommand extends Command {
|
||||
return confirmed;
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
if (error instanceof AuthenticationError) {
|
||||
console.error(chalk.red(`\n✗ ${error.message}`));
|
||||
|
||||
if (error.code === 'NOT_AUTHENTICATED') {
|
||||
ui.displayWarning('Please authenticate first: tm auth login');
|
||||
}
|
||||
} else {
|
||||
const msg = error?.message ?? String(error);
|
||||
console.error(chalk.red(`Error: ${msg}`));
|
||||
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last export result (useful for testing)
|
||||
*/
|
||||
|
||||
@@ -17,9 +17,8 @@ import {
|
||||
} from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
import { displayCommandHeader } from '../utils/display-helpers.js';
|
||||
import {
|
||||
displayHeader,
|
||||
displayDashboards,
|
||||
calculateTaskStatistics,
|
||||
calculateSubtaskStatistics,
|
||||
@@ -107,7 +106,14 @@ export class ListTasksCommand extends Command {
|
||||
this.displayResults(result, options);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -251,12 +257,15 @@ export class ListTasksCommand extends Command {
|
||||
* Display in text format with tables
|
||||
*/
|
||||
private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
|
||||
const { tasks, tag, storageType } = data;
|
||||
const { tasks, tag } = data;
|
||||
|
||||
// Display header using utility function
|
||||
displayCommandHeader(this.tmCore, {
|
||||
// Get file path for display
|
||||
const filePath = this.tmCore ? `.taskmaster/tasks/tasks.json` : undefined;
|
||||
|
||||
// Display header without banner (banner already shown by main CLI)
|
||||
displayHeader({
|
||||
tag: tag || 'master',
|
||||
storageType
|
||||
filePath: filePath
|
||||
});
|
||||
|
||||
// No tasks message
|
||||
|
||||
@@ -1,248 +0,0 @@
|
||||
/**
|
||||
* @fileoverview NextCommand using Commander's native class pattern
|
||||
* Extends Commander.Command for better integration with the framework
|
||||
*/
|
||||
|
||||
import path from 'node:path';
|
||||
import { Command } from 'commander';
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
|
||||
import { displayCommandHeader } from '../utils/display-helpers.js';
|
||||
|
||||
/**
|
||||
* Options interface for the next command
|
||||
*/
|
||||
export interface NextCommandOptions {
|
||||
tag?: string;
|
||||
format?: 'text' | 'json';
|
||||
silent?: boolean;
|
||||
project?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result type from next command
|
||||
*/
|
||||
export interface NextTaskResult {
|
||||
task: Task | null;
|
||||
found: boolean;
|
||||
tag: string;
|
||||
storageType: Exclude<StorageType, 'auto'>;
|
||||
}
|
||||
|
||||
/**
|
||||
* NextCommand extending Commander's Command class
|
||||
* This is a thin presentation layer over @tm/core
|
||||
*/
|
||||
export class NextCommand extends Command {
|
||||
private tmCore?: TaskMasterCore;
|
||||
private lastResult?: NextTaskResult;
|
||||
|
||||
constructor(name?: string) {
|
||||
super(name || 'next');
|
||||
|
||||
// Configure the command
|
||||
this.description('Find the next available task to work on')
|
||||
.option('-t, --tag <tag>', 'Filter by tag')
|
||||
.option('-f, --format <format>', 'Output format (text, json)', 'text')
|
||||
.option('--silent', 'Suppress output (useful for programmatic usage)')
|
||||
.option('-p, --project <path>', 'Project root directory', process.cwd())
|
||||
.action(async (options: NextCommandOptions) => {
|
||||
await this.executeCommand(options);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the next command
|
||||
*/
|
||||
private async executeCommand(options: NextCommandOptions): Promise<void> {
|
||||
let hasError = false;
|
||||
try {
|
||||
// Validate options (throws on invalid options)
|
||||
this.validateOptions(options);
|
||||
|
||||
// Initialize tm-core
|
||||
await this.initializeCore(options.project || process.cwd());
|
||||
|
||||
// Get next task from core
|
||||
const result = await this.getNextTask(options);
|
||||
|
||||
// Store result for programmatic access
|
||||
this.setLastResult(result);
|
||||
|
||||
// Display results
|
||||
if (!options.silent) {
|
||||
this.displayResults(result, options);
|
||||
}
|
||||
} catch (error: any) {
|
||||
hasError = true;
|
||||
displayError(error, { skipExit: true });
|
||||
} finally {
|
||||
// Always clean up resources, even on error
|
||||
await this.cleanup();
|
||||
}
|
||||
|
||||
// Exit after cleanup completes
|
||||
if (hasError) {
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate command options
|
||||
*/
|
||||
private validateOptions(options: NextCommandOptions): void {
|
||||
// Validate format
|
||||
if (options.format && !['text', 'json'].includes(options.format)) {
|
||||
throw new Error(
|
||||
`Invalid format: ${options.format}. Valid formats are: text, json`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize TaskMasterCore
|
||||
*/
|
||||
private async initializeCore(projectRoot: string): Promise<void> {
|
||||
if (!this.tmCore) {
|
||||
const resolved = path.resolve(projectRoot);
|
||||
this.tmCore = await createTaskMasterCore({ projectPath: resolved });
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get next task from tm-core
|
||||
*/
|
||||
private async getNextTask(
|
||||
options: NextCommandOptions
|
||||
): Promise<NextTaskResult> {
|
||||
if (!this.tmCore) {
|
||||
throw new Error('TaskMasterCore not initialized');
|
||||
}
|
||||
|
||||
// Call tm-core to get next task
|
||||
const task = await this.tmCore.getNextTask(options.tag);
|
||||
|
||||
// Get storage type and active tag
|
||||
const storageType = this.tmCore.getStorageType();
|
||||
if (storageType === 'auto') {
|
||||
throw new Error('Storage type must be resolved before use');
|
||||
}
|
||||
const activeTag = options.tag || this.tmCore.getActiveTag();
|
||||
|
||||
return {
|
||||
task,
|
||||
found: task !== null,
|
||||
tag: activeTag,
|
||||
storageType
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Display results based on format
|
||||
*/
|
||||
private displayResults(
|
||||
result: NextTaskResult,
|
||||
options: NextCommandOptions
|
||||
): void {
|
||||
const format = options.format || 'text';
|
||||
|
||||
switch (format) {
|
||||
case 'json':
|
||||
this.displayJson(result);
|
||||
break;
|
||||
|
||||
case 'text':
|
||||
default:
|
||||
this.displayText(result);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in JSON format
|
||||
*/
|
||||
private displayJson(result: NextTaskResult): void {
|
||||
console.log(JSON.stringify(result, null, 2));
|
||||
}
|
||||
|
||||
/**
|
||||
* Display in text format
|
||||
*/
|
||||
private displayText(result: NextTaskResult): void {
|
||||
// Display header with storage info
|
||||
displayCommandHeader(this.tmCore, {
|
||||
tag: result.tag || 'master',
|
||||
storageType: result.storageType
|
||||
});
|
||||
|
||||
if (!result.found || !result.task) {
|
||||
// No next task available
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.yellow(
|
||||
'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
|
||||
),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'yellow',
|
||||
title: '⚠ NO TASKS AVAILABLE ⚠',
|
||||
titleAlignment: 'center'
|
||||
}
|
||||
)
|
||||
);
|
||||
console.log(
|
||||
`\n${chalk.dim('Tip: Try')} ${chalk.cyan('task-master list --status pending')} ${chalk.dim('to see all pending tasks')}`
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
const task = result.task;
|
||||
|
||||
// Display the task details using the same component as 'show' command
|
||||
// with a custom header indicating this is the next task
|
||||
const customHeader = `Next Task: #${task.id} - ${task.title}`;
|
||||
displayTaskDetails(task, {
|
||||
customHeader,
|
||||
headerColor: 'green',
|
||||
showSuggestedActions: true
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
private setLastResult(result: NextTaskResult): void {
|
||||
this.lastResult = result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the last result (for programmatic usage)
|
||||
*/
|
||||
getLastResult(): NextTaskResult | undefined {
|
||||
return this.lastResult;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
this.tmCore = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Register this command on an existing program
|
||||
*/
|
||||
static register(program: Command, name?: string): NextCommand {
|
||||
const nextCommand = new NextCommand(name);
|
||||
program.addCommand(nextCommand);
|
||||
return nextCommand;
|
||||
}
|
||||
}
|
||||
@@ -12,7 +12,6 @@ import {
|
||||
type TaskStatus
|
||||
} from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
|
||||
/**
|
||||
* Valid task status values for validation
|
||||
@@ -86,7 +85,6 @@ export class SetStatusCommand extends Command {
|
||||
private async executeCommand(
|
||||
options: SetStatusCommandOptions
|
||||
): Promise<void> {
|
||||
let hasError = false;
|
||||
try {
|
||||
// Validate required options
|
||||
if (!options.id) {
|
||||
@@ -137,15 +135,16 @@ export class SetStatusCommand extends Command {
|
||||
oldStatus: result.oldStatus,
|
||||
newStatus: result.newStatus
|
||||
});
|
||||
} catch (error: any) {
|
||||
hasError = true;
|
||||
if (options.format === 'json') {
|
||||
const errorMessage = error?.getSanitizedDetails
|
||||
? error.getSanitizedDetails().message
|
||||
: error instanceof Error
|
||||
? error.message
|
||||
: String(error);
|
||||
} catch (error) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : String(error);
|
||||
|
||||
if (!options.silent) {
|
||||
console.error(
|
||||
chalk.red(`Failed to update task ${taskId}: ${errorMessage}`)
|
||||
);
|
||||
}
|
||||
if (options.format === 'json') {
|
||||
console.log(
|
||||
JSON.stringify({
|
||||
success: false,
|
||||
@@ -154,13 +153,8 @@ export class SetStatusCommand extends Command {
|
||||
timestamp: new Date().toISOString()
|
||||
})
|
||||
);
|
||||
} else if (!options.silent) {
|
||||
// Show which task failed with context
|
||||
console.error(chalk.red(`\nFailed to update task ${taskId}:`));
|
||||
displayError(error, { skipExit: true });
|
||||
}
|
||||
// Don't exit here - let finally block clean up first
|
||||
break;
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -176,26 +170,25 @@ export class SetStatusCommand extends Command {
|
||||
|
||||
// Display results
|
||||
this.displayResults(this.lastResult, options);
|
||||
} catch (error: any) {
|
||||
hasError = true;
|
||||
if (options.format === 'json') {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : 'Unknown error occurred';
|
||||
console.log(JSON.stringify({ success: false, error: errorMessage }));
|
||||
} else if (!options.silent) {
|
||||
displayError(error, { skipExit: true });
|
||||
} catch (error) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : 'Unknown error occurred';
|
||||
|
||||
if (!options.silent) {
|
||||
console.error(chalk.red(`Error: ${errorMessage}`));
|
||||
}
|
||||
|
||||
if (options.format === 'json') {
|
||||
console.log(JSON.stringify({ success: false, error: errorMessage }));
|
||||
}
|
||||
|
||||
process.exit(1);
|
||||
} finally {
|
||||
// Clean up resources
|
||||
if (this.tmCore) {
|
||||
await this.tmCore.close();
|
||||
}
|
||||
}
|
||||
|
||||
// Exit after cleanup completes
|
||||
if (hasError) {
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -9,9 +9,7 @@ import boxen from 'boxen';
|
||||
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
|
||||
import { displayCommandHeader } from '../utils/display-helpers.js';
|
||||
|
||||
/**
|
||||
* Options interface for the show command
|
||||
@@ -114,7 +112,14 @@ export class ShowCommand extends Command {
|
||||
this.displayResults(result, options);
|
||||
}
|
||||
} catch (error: any) {
|
||||
displayError(error);
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
if (error.stack && process.env.DEBUG) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -252,15 +257,6 @@ export class ShowCommand extends Command {
|
||||
return;
|
||||
}
|
||||
|
||||
// Display header with storage info
|
||||
const activeTag = this.tmCore?.getActiveTag() || 'master';
|
||||
displayCommandHeader(this.tmCore, {
|
||||
tag: activeTag,
|
||||
storageType: result.storageType
|
||||
});
|
||||
|
||||
console.log(); // Add spacing
|
||||
|
||||
// Use the global task details display function
|
||||
displayTaskDetails(result.task, {
|
||||
statusFilter: options.status,
|
||||
@@ -275,12 +271,8 @@ export class ShowCommand extends Command {
|
||||
result: ShowMultipleTasksResult,
|
||||
_options: ShowCommandOptions
|
||||
): void {
|
||||
// Display header with storage info
|
||||
const activeTag = this.tmCore?.getActiveTag() || 'master';
|
||||
displayCommandHeader(this.tmCore, {
|
||||
tag: activeTag,
|
||||
storageType: result.storageType
|
||||
});
|
||||
// Header
|
||||
ui.displayBanner(`Tasks (${result.tasks.length} found)`);
|
||||
|
||||
if (result.notFound.length > 0) {
|
||||
console.log(chalk.yellow(`\n⚠ Not found: ${result.notFound.join(', ')}`));
|
||||
@@ -299,6 +291,8 @@ export class ShowCommand extends Command {
|
||||
showDependencies: true
|
||||
})
|
||||
);
|
||||
|
||||
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -16,7 +16,6 @@ import {
|
||||
} from '@tm/core';
|
||||
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
|
||||
import * as ui from '../utils/ui.js';
|
||||
import { displayError } from '../utils/error-handler.js';
|
||||
|
||||
/**
|
||||
* CLI-specific options interface for the start command
|
||||
@@ -161,7 +160,8 @@ export class StartCommand extends Command {
|
||||
if (spinner) {
|
||||
spinner.fail('Operation failed');
|
||||
}
|
||||
displayError(error);
|
||||
this.handleError(error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -452,6 +452,22 @@ export class StartCommand extends Command {
|
||||
console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle general errors
|
||||
*/
|
||||
private handleError(error: any): void {
|
||||
const msg = error?.getSanitizedDetails?.() ?? {
|
||||
message: error?.message ?? String(error)
|
||||
};
|
||||
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
|
||||
|
||||
// Show stack trace in development mode or when DEBUG is set
|
||||
const isDevelopment = process.env.NODE_ENV !== 'production';
|
||||
if ((isDevelopment || process.env.DEBUG) && error.stack) {
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the last result for programmatic access
|
||||
*/
|
||||
|
||||
@@ -6,7 +6,6 @@
|
||||
// Commands
|
||||
export { ListTasksCommand } from './commands/list.command.js';
|
||||
export { ShowCommand } from './commands/show.command.js';
|
||||
export { NextCommand } from './commands/next.command.js';
|
||||
export { AuthCommand } from './commands/auth.command.js';
|
||||
export { ContextCommand } from './commands/context.command.js';
|
||||
export { StartCommand } from './commands/start.command.js';
|
||||
@@ -24,9 +23,6 @@ export {
|
||||
// UI utilities (for other commands to use)
|
||||
export * as ui from './utils/ui.js';
|
||||
|
||||
// Error handling utilities
|
||||
export { displayError, isDebugMode } from './utils/error-handler.js';
|
||||
|
||||
// Auto-update utilities
|
||||
export {
|
||||
checkForUpdate,
|
||||
|
||||
@@ -5,16 +5,6 @@
|
||||
|
||||
import chalk from 'chalk';
|
||||
|
||||
/**
|
||||
* Brief information for API storage
|
||||
*/
|
||||
export interface BriefInfo {
|
||||
briefId: string;
|
||||
briefName: string;
|
||||
orgSlug?: string;
|
||||
webAppUrl?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Header configuration options
|
||||
*/
|
||||
@@ -22,50 +12,22 @@ export interface HeaderOptions {
|
||||
title?: string;
|
||||
tag?: string;
|
||||
filePath?: string;
|
||||
storageType?: 'api' | 'file';
|
||||
briefInfo?: BriefInfo;
|
||||
}
|
||||
|
||||
/**
|
||||
* Display the Task Master header with project info
|
||||
*/
|
||||
export function displayHeader(options: HeaderOptions = {}): void {
|
||||
const { filePath, tag, storageType, briefInfo } = options;
|
||||
const { filePath, tag } = options;
|
||||
|
||||
// Display different header based on storage type
|
||||
if (storageType === 'api' && briefInfo) {
|
||||
// API storage: Show brief information
|
||||
const briefDisplay = `🏷 Brief: ${chalk.cyan(briefInfo.briefName)} ${chalk.gray(`(${briefInfo.briefId})`)}`;
|
||||
console.log(briefDisplay);
|
||||
|
||||
// Construct and display the brief URL or ID
|
||||
if (briefInfo.webAppUrl && briefInfo.orgSlug) {
|
||||
const briefUrl = `${briefInfo.webAppUrl}/home/${briefInfo.orgSlug}/briefs/${briefInfo.briefId}/plan`;
|
||||
console.log(`Listing tasks from: ${chalk.dim(briefUrl)}`);
|
||||
} else if (briefInfo.webAppUrl) {
|
||||
// Show web app URL and brief ID if org slug is missing
|
||||
console.log(
|
||||
`Listing tasks from: ${chalk.dim(`${briefInfo.webAppUrl} (Brief: ${briefInfo.briefId})`)}`
|
||||
);
|
||||
console.log(
|
||||
chalk.yellow(
|
||||
`💡 Tip: Run ${chalk.cyan('tm context select')} to set your organization and see the full URL`
|
||||
)
|
||||
);
|
||||
} else {
|
||||
// Fallback: just show the brief ID if we can't get web app URL
|
||||
console.log(
|
||||
`Listing tasks from: ${chalk.dim(`API (Brief ID: ${briefInfo.briefId})`)}`
|
||||
);
|
||||
}
|
||||
} else if (tag) {
|
||||
// File storage: Show tag information
|
||||
// Display tag and file path info
|
||||
if (tag) {
|
||||
let tagInfo = '';
|
||||
|
||||
if (tag && tag !== 'master') {
|
||||
tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;
|
||||
tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;
|
||||
} else {
|
||||
tagInfo = `🏷 tag: ${chalk.cyan('master')}`;
|
||||
tagInfo = `🏷 tag: ${chalk.cyan('master')}`;
|
||||
}
|
||||
|
||||
console.log(tagInfo);
|
||||
@@ -77,5 +39,7 @@ export function displayHeader(options: HeaderOptions = {}): void {
|
||||
: `${process.cwd()}/${filePath}`;
|
||||
console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
|
||||
}
|
||||
|
||||
console.log(); // Empty line for spacing
|
||||
}
|
||||
}
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import type { Task } from '@tm/core/types';
|
||||
import { getComplexityWithColor, getBoxWidth } from '../../utils/ui.js';
|
||||
import { getComplexityWithColor } from '../../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Next task display options
|
||||
@@ -113,7 +113,7 @@ export function displayRecommendedNextTask(
|
||||
borderColor: '#FFA500', // Orange color
|
||||
title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
|
||||
titleAlignment: 'center',
|
||||
width: getBoxWidth(0.97),
|
||||
width: process.stdout.columns * 0.97,
|
||||
fullscreen: false
|
||||
})
|
||||
);
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import { getBoxWidth } from '../../utils/ui.js';
|
||||
|
||||
/**
|
||||
* Display suggested next steps section
|
||||
@@ -25,7 +24,7 @@ export function displaySuggestedNextSteps(): void {
|
||||
margin: { top: 0, bottom: 1 },
|
||||
borderStyle: 'round',
|
||||
borderColor: 'gray',
|
||||
width: getBoxWidth(0.97)
|
||||
width: process.stdout.columns * 0.97
|
||||
}
|
||||
)
|
||||
);
|
||||
|
||||
@@ -1,75 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Display helper utilities for commands
|
||||
* Provides DRY utilities for displaying headers and other command output
|
||||
*/
|
||||
|
||||
import type { TaskMasterCore } from '@tm/core';
|
||||
import type { StorageType } from '@tm/core/types';
|
||||
import { displayHeader, type BriefInfo } from '../ui/index.js';
|
||||
|
||||
/**
|
||||
* Get web app base URL from environment
|
||||
*/
|
||||
function getWebAppUrl(): string | undefined {
|
||||
const baseDomain =
|
||||
process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN;
|
||||
|
||||
if (!baseDomain) {
|
||||
return undefined;
|
||||
}
|
||||
|
||||
// If it already includes protocol, use as-is
|
||||
if (baseDomain.startsWith('http://') || baseDomain.startsWith('https://')) {
|
||||
return baseDomain;
|
||||
}
|
||||
|
||||
// Otherwise, add protocol based on domain
|
||||
if (baseDomain.includes('localhost') || baseDomain.includes('127.0.0.1')) {
|
||||
return `http://${baseDomain}`;
|
||||
}
|
||||
|
||||
return `https://${baseDomain}`;
|
||||
}
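The protocol handling above boils down to a few mappings; the expectations below are illustrative, not tests from the repo:

```ts
// TM_BASE_DOMAIN / TM_PUBLIC_BASE_DOMAIN value  ->  getWebAppUrl() result
//   (unset)                                      ->  undefined
//   "localhost:3000"                             ->  "http://localhost:3000"
//   "app.example.com"                            ->  "https://app.example.com"
//   "https://app.example.com"                    ->  "https://app.example.com" (kept as-is)
```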
|
||||
|
||||
/**
|
||||
* Display the command header with appropriate storage information
|
||||
* Handles both API and file storage displays
|
||||
*/
|
||||
export function displayCommandHeader(
|
||||
tmCore: TaskMasterCore | undefined,
|
||||
options: {
|
||||
tag?: string;
|
||||
storageType: Exclude<StorageType, 'auto'>;
|
||||
}
|
||||
): void {
|
||||
const { tag, storageType } = options;
|
||||
|
||||
// Get brief info if using API storage
|
||||
let briefInfo: BriefInfo | undefined;
|
||||
if (storageType === 'api' && tmCore) {
|
||||
const storageInfo = tmCore.getStorageDisplayInfo();
|
||||
if (storageInfo) {
|
||||
// Construct full brief info with web app URL
|
||||
briefInfo = {
|
||||
...storageInfo,
|
||||
webAppUrl: getWebAppUrl()
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// Get file path for display (only for file storage)
|
||||
// Note: The file structure is fixed for file storage and won't change.
|
||||
// This is a display-only relative path, not used for actual file operations.
|
||||
const filePath =
|
||||
storageType === 'file' && tmCore
|
||||
? `.taskmaster/tasks/tasks.json`
|
||||
: undefined;
|
||||
|
||||
// Display header
|
||||
displayHeader({
|
||||
tag: tag || 'master',
|
||||
filePath: filePath,
|
||||
storageType: storageType === 'api' ? 'api' : 'file',
|
||||
briefInfo: briefInfo
|
||||
});
|
||||
}
|
||||
@@ -1,60 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Centralized error handling utilities for CLI
|
||||
* Provides consistent error formatting and debug mode detection
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
|
||||
/**
|
||||
* Check if debug mode is enabled via environment variable
|
||||
* Only returns true when DEBUG is explicitly set to 'true' or '1'
|
||||
*
|
||||
* @returns True if debug mode is enabled
|
||||
*/
|
||||
export function isDebugMode(): boolean {
|
||||
return process.env.DEBUG === 'true' || process.env.DEBUG === '1';
|
||||
}
|
||||
|
||||
/**
|
||||
* Display an error to the user with optional stack trace in debug mode
|
||||
* Handles both TaskMasterError instances and regular errors
|
||||
*
|
||||
* @param error - The error to display
|
||||
* @param options - Display options
|
||||
*/
|
||||
export function displayError(
|
||||
error: any,
|
||||
options: {
|
||||
/** Skip exit, useful when caller wants to handle exit */
|
||||
skipExit?: boolean;
|
||||
/** Force show stack trace regardless of debug mode */
|
||||
forceStack?: boolean;
|
||||
} = {}
|
||||
): void {
|
||||
// Check if it's a TaskMasterError with sanitized details
|
||||
if (error?.getSanitizedDetails) {
|
||||
const sanitized = error.getSanitizedDetails();
|
||||
console.error(chalk.red(`\n${sanitized.message}`));
|
||||
|
||||
// Show stack trace in debug mode or if forced
|
||||
if ((isDebugMode() || options.forceStack) && error.stack) {
|
||||
console.error(chalk.gray('\nStack trace:'));
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
} else {
|
||||
// For other errors, show the message
|
||||
const message = error?.message ?? String(error);
|
||||
console.error(chalk.red(`\nError: ${message}`));
|
||||
|
||||
// Show stack trace in debug mode or if forced
|
||||
if ((isDebugMode() || options.forceStack) && error?.stack) {
|
||||
console.error(chalk.gray('\nStack trace:'));
|
||||
console.error(chalk.gray(error.stack));
|
||||
}
|
||||
}
|
||||
|
||||
// Exit if not skipped
|
||||
if (!options.skipExit) {
|
||||
process.exit(1);
|
||||
}
|
||||
}
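For reference, a call site typically pairs `skipExit: true` with its own cleanup and exit handling, roughly like this sketch (`runWithCleanup` is illustrative; only `displayError` comes from this module):

```ts
import { displayError } from './error-handler.js';

// Illustrative: report the failure, finish cleanup, then decide the exit code.
async function runWithCleanup(work: () => Promise<void>): Promise<void> {
	let hasError = false;
	try {
		await work();
	} catch (error) {
		hasError = true;
		// skipExit keeps control of process.exit() with the caller until cleanup runs.
		displayError(error, { skipExit: true });
	} finally {
		// ...release resources here (e.g. close a TaskMasterCore instance)...
	}
	if (hasError) {
		process.exit(1);
	}
}
```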
|
||||
@@ -1,158 +0,0 @@
|
||||
/**
|
||||
* CLI UI utilities tests
|
||||
* Tests for apps/cli/src/utils/ui.ts
|
||||
*/
|
||||
|
||||
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
|
||||
import type { MockInstance } from 'vitest';
|
||||
import { getBoxWidth } from './ui.js';
|
||||
|
||||
describe('CLI UI Utilities', () => {
|
||||
describe('getBoxWidth', () => {
|
||||
let columnsSpy: MockInstance;
|
||||
let originalDescriptor: PropertyDescriptor | undefined;
|
||||
|
||||
beforeEach(() => {
|
||||
// Store original descriptor if it exists
|
||||
originalDescriptor = Object.getOwnPropertyDescriptor(
|
||||
process.stdout,
|
||||
'columns'
|
||||
);
|
||||
|
||||
// If columns doesn't exist or isn't a getter, define it as one
|
||||
if (!originalDescriptor || !originalDescriptor.get) {
|
||||
const currentValue = process.stdout.columns || 80;
|
||||
Object.defineProperty(process.stdout, 'columns', {
|
||||
get() {
|
||||
return currentValue;
|
||||
},
|
||||
configurable: true
|
||||
});
|
||||
}
|
||||
|
||||
// Now spy on the getter
|
||||
columnsSpy = vi.spyOn(process.stdout, 'columns', 'get');
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Restore the spy
|
||||
columnsSpy.mockRestore();
|
||||
|
||||
// Restore original descriptor or delete the property
|
||||
if (originalDescriptor) {
|
||||
Object.defineProperty(process.stdout, 'columns', originalDescriptor);
|
||||
} else {
|
||||
delete (process.stdout as any).columns;
|
||||
}
|
||||
});
|
||||
|
||||
it('should calculate width as percentage of terminal width', () => {
|
||||
columnsSpy.mockReturnValue(100);
|
||||
const width = getBoxWidth(0.9, 40);
|
||||
expect(width).toBe(90);
|
||||
});
|
||||
|
||||
it('should use default percentage of 0.9 when not specified', () => {
|
||||
columnsSpy.mockReturnValue(100);
|
||||
const width = getBoxWidth();
|
||||
expect(width).toBe(90);
|
||||
});
|
||||
|
||||
it('should use default minimum width of 40 when not specified', () => {
|
||||
columnsSpy.mockReturnValue(30);
|
||||
const width = getBoxWidth();
|
||||
expect(width).toBe(40); // Should enforce minimum
|
||||
});
|
||||
|
||||
it('should enforce minimum width when terminal is too narrow', () => {
|
||||
columnsSpy.mockReturnValue(50);
|
||||
const width = getBoxWidth(0.9, 60);
|
||||
expect(width).toBe(60); // Should use minWidth instead of 45
|
||||
});
|
||||
|
||||
it('should handle undefined process.stdout.columns', () => {
|
||||
columnsSpy.mockReturnValue(undefined);
|
||||
const width = getBoxWidth(0.9, 40);
|
||||
// Should fall back to 80 columns: Math.floor(80 * 0.9) = 72
|
||||
expect(width).toBe(72);
|
||||
});
|
||||
|
||||
it('should handle custom percentage values', () => {
|
||||
columnsSpy.mockReturnValue(100);
|
||||
expect(getBoxWidth(0.95, 40)).toBe(95);
|
||||
expect(getBoxWidth(0.8, 40)).toBe(80);
|
||||
expect(getBoxWidth(0.5, 40)).toBe(50);
|
||||
});
|
||||
|
||||
it('should handle custom minimum width values', () => {
|
||||
columnsSpy.mockReturnValue(60);
|
||||
expect(getBoxWidth(0.9, 70)).toBe(70); // 60 * 0.9 = 54, but min is 70
|
||||
expect(getBoxWidth(0.9, 50)).toBe(54); // 60 * 0.9 = 54, min is 50
|
||||
});
|
||||
|
||||
it('should floor the calculated width', () => {
|
||||
columnsSpy.mockReturnValue(99);
|
||||
const width = getBoxWidth(0.9, 40);
|
||||
// 99 * 0.9 = 89.1, should floor to 89
|
||||
expect(width).toBe(89);
|
||||
});
|
||||
|
||||
it('should match warning box width calculation', () => {
|
||||
// Test the specific case from displayWarning()
|
||||
columnsSpy.mockReturnValue(80);
|
||||
const width = getBoxWidth(0.9, 40);
|
||||
expect(width).toBe(72);
|
||||
});
|
||||
|
||||
it('should match table width calculation', () => {
|
||||
// Test the specific case from createTaskTable()
|
||||
columnsSpy.mockReturnValue(111);
|
||||
const width = getBoxWidth(0.9, 100);
|
||||
// 111 * 0.9 = 99.9, floor to 99, but max(99, 100) = 100
|
||||
expect(width).toBe(100);
|
||||
});
|
||||
|
||||
it('should match recommended task box width calculation', () => {
|
||||
// Test the specific case from displayRecommendedNextTask()
|
||||
columnsSpy.mockReturnValue(120);
|
||||
const width = getBoxWidth(0.97, 40);
|
||||
// 120 * 0.97 = 116.4, floor to 116
|
||||
expect(width).toBe(116);
|
||||
});
|
||||
|
||||
it('should handle edge case of zero terminal width', () => {
|
||||
columnsSpy.mockReturnValue(0);
|
||||
const width = getBoxWidth(0.9, 40);
|
||||
// When columns is 0, it uses fallback of 80: Math.floor(80 * 0.9) = 72
|
||||
expect(width).toBe(72);
|
||||
});
|
||||
|
||||
it('should handle very large terminal widths', () => {
|
||||
columnsSpy.mockReturnValue(1000);
|
||||
const width = getBoxWidth(0.9, 40);
|
||||
expect(width).toBe(900);
|
||||
});
|
||||
|
||||
it('should handle very small percentages', () => {
|
||||
columnsSpy.mockReturnValue(100);
|
||||
const width = getBoxWidth(0.1, 5);
|
||||
// 100 * 0.1 = 10, which is greater than min 5
|
||||
expect(width).toBe(10);
|
||||
});
|
||||
|
||||
it('should handle percentage of 1.0 (100%)', () => {
|
||||
columnsSpy.mockReturnValue(80);
|
||||
const width = getBoxWidth(1.0, 40);
|
||||
expect(width).toBe(80);
|
||||
});
|
||||
|
||||
it('should consistently return same value for same inputs', () => {
|
||||
columnsSpy.mockReturnValue(100);
|
||||
const width1 = getBoxWidth(0.9, 40);
|
||||
const width2 = getBoxWidth(0.9, 40);
|
||||
const width3 = getBoxWidth(0.9, 40);
|
||||
expect(width1).toBe(width2);
|
||||
expect(width2).toBe(width3);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -126,20 +126,6 @@ export function getComplexityWithScore(complexity: number | undefined): string {
|
||||
return color(`${complexity}/10 (${label})`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate box width as percentage of terminal width
|
||||
* @param percentage - Percentage of terminal width to use (default: 0.9)
|
||||
* @param minWidth - Minimum width to enforce (default: 40)
|
||||
* @returns Calculated box width
|
||||
*/
|
||||
export function getBoxWidth(
|
||||
percentage: number = 0.9,
|
||||
minWidth: number = 40
|
||||
): number {
|
||||
const terminalWidth = process.stdout.columns || 80;
|
||||
return Math.max(Math.floor(terminalWidth * percentage), minWidth);
|
||||
}
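A minimal usage sketch of this helper, mirroring how the display functions above size their boxen boxes (the `renderNotice` wrapper is illustrative and not part of the codebase):

```ts
import boxen from 'boxen';
import chalk from 'chalk';
import { getBoxWidth } from './ui.js';

// Illustrative wrapper: box a message at ~90% of the terminal width,
// never narrower than 40 columns (getBoxWidth's defaults).
function renderNotice(message: string): void {
	console.log(
		boxen(chalk.white(message), {
			padding: 1,
			borderStyle: 'round',
			borderColor: 'cyan',
			width: getBoxWidth()
		})
	);
}

renderNotice('Tasks synced successfully');
```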
|
||||
|
||||
/**
|
||||
* Truncate text to specified length
|
||||
*/
|
||||
@@ -190,8 +176,6 @@ export function displayBanner(title: string = 'Task Master'): void {
|
||||
* Display an error message (matches scripts/modules/ui.js style)
|
||||
*/
|
||||
export function displayError(message: string, details?: string): void {
|
||||
const boxWidth = getBoxWidth();
|
||||
|
||||
console.error(
|
||||
boxen(
|
||||
chalk.red.bold('X Error: ') +
|
||||
@@ -200,8 +184,7 @@ export function displayError(message: string, details?: string): void {
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'red',
|
||||
width: boxWidth
|
||||
borderColor: 'red'
|
||||
}
|
||||
)
|
||||
);
|
||||
@@ -211,16 +194,13 @@ export function displayError(message: string, details?: string): void {
|
||||
* Display a success message
|
||||
*/
|
||||
export function displaySuccess(message: string): void {
|
||||
const boxWidth = getBoxWidth();
|
||||
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
|
||||
{
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'green',
|
||||
width: boxWidth
|
||||
borderColor: 'green'
|
||||
}
|
||||
)
|
||||
);
|
||||
@@ -230,14 +210,11 @@ export function displaySuccess(message: string): void {
|
||||
* Display a warning message
|
||||
*/
|
||||
export function displayWarning(message: string): void {
|
||||
const boxWidth = getBoxWidth();
|
||||
|
||||
console.log(
|
||||
boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'yellow',
|
||||
width: boxWidth
|
||||
borderColor: 'yellow'
|
||||
})
|
||||
);
|
||||
}
|
||||
@@ -246,14 +223,11 @@ export function displayWarning(message: string): void {
|
||||
* Display info message
|
||||
*/
|
||||
export function displayInfo(message: string): void {
|
||||
const boxWidth = getBoxWidth();
|
||||
|
||||
console.log(
|
||||
boxen(chalk.blue.bold('i ') + chalk.white(message), {
|
||||
padding: 1,
|
||||
borderStyle: 'round',
|
||||
borderColor: 'blue',
|
||||
width: boxWidth
|
||||
borderColor: 'blue'
|
||||
})
|
||||
);
|
||||
}
|
||||
@@ -308,23 +282,23 @@ export function createTaskTable(
|
||||
} = options || {};
|
||||
|
||||
// Calculate dynamic column widths based on terminal width
|
||||
const tableWidth = getBoxWidth(0.9, 100);
|
||||
const terminalWidth = process.stdout.columns * 0.9 || 100;
|
||||
// Adjust column widths to better match the original layout
|
||||
const baseColWidths = showComplexity
|
||||
? [
|
||||
Math.floor(tableWidth * 0.1),
|
||||
Math.floor(tableWidth * 0.4),
|
||||
Math.floor(tableWidth * 0.15),
|
||||
Math.floor(tableWidth * 0.1),
|
||||
Math.floor(tableWidth * 0.2),
|
||||
Math.floor(tableWidth * 0.1)
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.15),
|
||||
Math.floor(terminalWidth * 0.1),
|
||||
Math.floor(terminalWidth * 0.2),
|
||||
Math.floor(terminalWidth * 0.1)
|
||||
] // ID, Title, Status, Priority, Dependencies, Complexity
|
||||
: [
|
||||
Math.floor(tableWidth * 0.08),
|
||||
Math.floor(tableWidth * 0.4),
|
||||
Math.floor(tableWidth * 0.18),
|
||||
Math.floor(tableWidth * 0.12),
|
||||
Math.floor(tableWidth * 0.2)
|
||||
Math.floor(terminalWidth * 0.08),
|
||||
Math.floor(terminalWidth * 0.4),
|
||||
Math.floor(terminalWidth * 0.18),
|
||||
Math.floor(terminalWidth * 0.12),
|
||||
Math.floor(terminalWidth * 0.2)
|
||||
]; // ID, Title, Status, Priority, Dependencies
|
||||
|
||||
const headers = [
|
||||
|
||||
@@ -1,7 +1,5 @@
|
||||
# docs
|
||||
|
||||
## 0.0.6
|
||||
|
||||
## 0.0.5
|
||||
|
||||
## 0.0.4
|
||||
|
||||
@@ -13,126 +13,6 @@ The MCP interface is built on top of the `fastmcp` library and registers a set o
|
||||
|
||||
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
|
||||
|
||||
## Configurable Tool Loading
|
||||
|
||||
To optimize LLM context usage, you can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This is particularly useful when working with LLMs that have context limits or when you only need a subset of tools.
|
||||
|
||||
### Configuration Modes
|
||||
|
||||
#### All Tools (Default)
|
||||
Loads all 36 available tools. Use when you need full Task Master functionality.
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "all",
|
||||
"ANTHROPIC_API_KEY": "your_key_here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If `TASK_MASTER_TOOLS` is not set, all tools are loaded by default.
|
||||
|
||||
#### Core Tools (Lean Mode)
|
||||
Loads only 7 essential tools for daily development. Ideal for minimal context usage.
|
||||
|
||||
**Core tools included:**
|
||||
- `get_tasks` - List all tasks
|
||||
- `next_task` - Find the next task to work on
|
||||
- `get_task` - Get detailed task information
|
||||
- `set_task_status` - Update task status
|
||||
- `update_subtask` - Add implementation notes
|
||||
- `parse_prd` - Generate tasks from PRD
|
||||
- `expand_task` - Break down tasks into subtasks
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "core",
|
||||
"ANTHROPIC_API_KEY": "your_key_here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
You can also use `"lean"` as an alias for `"core"`.
|
||||
|
||||
#### Standard Tools
|
||||
Loads 15 commonly used tools. Balances functionality with context efficiency.
|
||||
|
||||
**Standard tools include all core tools plus:**
|
||||
- `initialize_project` - Set up new projects
|
||||
- `analyze_project_complexity` - Analyze task complexity
|
||||
- `expand_all` - Expand all eligible tasks
|
||||
- `add_subtask` - Add subtasks manually
|
||||
- `remove_task` - Remove tasks
|
||||
- `generate` - Generate task markdown files
|
||||
- `add_task` - Create new tasks
|
||||
- `complexity_report` - View complexity analysis
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "standard",
|
||||
"ANTHROPIC_API_KEY": "your_key_here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Custom Tool Selection
|
||||
Specify exactly which tools to load using a comma-separated list. Tool names are case-insensitive and support both underscores and hyphens.
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"command": "npx",
|
||||
"args": ["-y", "task-master-ai"],
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "get_tasks,next_task,set_task_status,update_subtask",
|
||||
"ANTHROPIC_API_KEY": "your_key_here"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Choosing the Right Configuration
|
||||
|
||||
- **Use `core`/`lean`**: When working with basic task management workflows or when context limits are strict
|
||||
- **Use `standard`**: For most development workflows that include task creation and analysis
|
||||
- **Use `all`**: When you need full functionality including tag management, dependencies, and advanced features
|
||||
- **Use custom list**: When you have specific tool requirements or want to experiment with minimal sets
|
||||
|
||||
### Verification
|
||||
|
||||
When the MCP server starts, it logs which tools were loaded:
|
||||
|
||||
```
|
||||
Task Master MCP Server starting...
|
||||
Tool mode configuration: standard
|
||||
Loading standard tools
|
||||
Registering 15 MCP tools (mode: standard)
|
||||
Successfully registered 15/15 tools
|
||||
```
|
||||
|
||||
## Tool Categories
|
||||
|
||||
The MCP tools can be categorized in the same way as the core functionalities:
|
||||
|
||||
@@ -33,6 +33,8 @@
|
||||
]
|
||||
},
|
||||
"getting-started/api-keys",
|
||||
"getting-started/claude-code-plugin",
|
||||
"getting-started/migration-plugin",
|
||||
"getting-started/faq",
|
||||
"getting-started/contribute"
|
||||
]
|
||||
|
||||
129
apps/docs/getting-started/claude-code-plugin.mdx
Normal file
@@ -0,0 +1,129 @@
|
||||
# Claude Code Plugin Integration
|
||||
|
||||
Task Master AI now offers official Claude Code plugin support, providing seamless integration with 49 specialized commands and 3 AI agents.
|
||||
|
||||
## Installation
|
||||
|
||||
### Quick Installation
|
||||
|
||||
Install the plugin directly from the Task Master marketplace:
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
### What You Get
|
||||
|
||||
- **49 Slash Commands**: All Task Master commands accessible via `/task-master-ai:` prefix
|
||||
- **3 Specialized Agents**: task-orchestrator, task-executor, and task-checker
|
||||
- **MCP Integration**: Deep integration with Claude Code's MCP system
|
||||
- **Automatic Updates**: Plugin updates automatically with new releases
|
||||
|
||||
## Quick Start with Plugin
|
||||
|
||||
After installation, initialize your project:
|
||||
|
||||
```bash
|
||||
/task-master-ai:init-project
|
||||
/task-master-ai:parse-prd
|
||||
/task-master-ai:next-task
|
||||
```
|
||||
|
||||
## Command Reference
|
||||
|
||||
All Task Master commands are available with the `/task-master-ai:` prefix:
|
||||
|
||||
### Core Workflow
|
||||
- `/task-master-ai:init-project` - Initialize Task Master in current project
|
||||
- `/task-master-ai:parse-prd` - Generate tasks from PRD document
|
||||
- `/task-master-ai:next-task` - Get next available task
|
||||
- `/task-master-ai:show-task` - View detailed task information
|
||||
|
||||
### Task Management
|
||||
- `/task-master-ai:add-task` - Add new task with AI assistance
|
||||
- `/task-master-ai:expand-task` - Break task into subtasks
|
||||
- `/task-master-ai:to-done` - Mark task complete
|
||||
- `/task-master-ai:list-tasks` - Show all tasks with status
|
||||
|
||||
### Analysis & Planning
|
||||
- `/task-master-ai:analyze-complexity` - Analyze task complexity
|
||||
- `/task-master-ai:complexity-report` - View complexity analysis
|
||||
- `/task-master-ai:expand-all-tasks` - Expand all eligible tasks
|
||||
|
||||
## AI Agents
|
||||
|
||||
The plugin includes three specialized agents for different workflow needs:
|
||||
|
||||
### Task Orchestrator
|
||||
High-level project coordination and strategic planning.
|
||||
|
||||
### Task Executor
|
||||
Hands-on implementation and code generation.
|
||||
|
||||
### Task Checker
|
||||
Quality assurance and validation of completed work.
|
||||
|
||||
## Migration from Legacy Setup
|
||||
|
||||
<Warning>
|
||||
If you previously used `rules add claude`, those commands will continue working but won't receive updates.
|
||||
</Warning>
|
||||
|
||||
### Migration Steps
|
||||
|
||||
1. **Install the plugin**: `/plugin install taskmaster@taskmaster`
|
||||
2. **Remove old files** (optional):
|
||||
```bash
|
||||
rm -rf .claude/commands/tm/
|
||||
rm -rf .claude/agents/task-*
|
||||
```
|
||||
3. **Update workflows** to use new command names with `/task-master-ai:` prefix
|
||||
|
||||
### Why Migrate?
|
||||
|
||||
- ✅ **Automatic updates** - Get new features without manual copying
|
||||
- ✅ **Better organization** - Clean command naming and structure
|
||||
- ✅ **Seamless integration** - Native Claude Code plugin experience
|
||||
- ✅ **No file management** - No need to manually maintain command files
|
||||
|
||||
## Team Configuration
|
||||
|
||||
Organizations can auto-install the plugin for team members:
|
||||
|
||||
```json
|
||||
{
|
||||
"extraKnownMarketplaces": {
|
||||
"task-master": {
|
||||
"source": {
|
||||
"source": "github",
|
||||
"repo": "eyaltoledano/claude-task-master"
|
||||
}
|
||||
}
|
||||
},
|
||||
"enabledPlugins": {
|
||||
"taskmaster": {
|
||||
"marketplace": "taskmaster"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Add this to `.claude/settings.json` in your repository root.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Plugin Not Found
|
||||
Ensure you've added the marketplace first:
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
```
|
||||
|
||||
### Commands Not Working
|
||||
Verify the plugin is installed and enabled:
|
||||
```bash
|
||||
/plugin list
|
||||
```
|
||||
|
||||
### MCP Integration Issues
|
||||
Check that your MCP configuration includes the Task Master server as outlined in the [MCP documentation](/capabilities/mcp).
|
||||
144
apps/docs/getting-started/migration-plugin.mdx
Normal file
@@ -0,0 +1,144 @@
|
||||
# Migrating to Claude Code Plugin
|
||||
|
||||
<Warning>
|
||||
If you previously used `task-master init --rules claude`, this guide will help you migrate to the new plugin system.
|
||||
</Warning>
|
||||
|
||||
## What Changed?
|
||||
|
||||
Task Master AI has evolved from copying files to `.claude/commands/` and `.claude/agents/` directories to a modern plugin-based architecture that provides:
|
||||
|
||||
- ✅ **Automatic updates** when new features are released
|
||||
- ✅ **Better command organization** with clean `/task-master-ai:` prefixes
|
||||
- ✅ **Seamless Claude Code integration** using native plugin system
|
||||
- ✅ **No manual file management** - no more copying or updating files
|
||||
|
||||
## Migration Steps
|
||||
|
||||
### 1. Install the Plugin
|
||||
|
||||
First, install the official Task Master plugin:
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
### 2. Verify Plugin Installation
|
||||
|
||||
Check that the plugin is working:
|
||||
|
||||
```bash
|
||||
/task-master-ai:help
|
||||
```
|
||||
|
||||
You should see the full list of available commands.
|
||||
|
||||
### 3. Remove Old Files (Optional)
|
||||
|
||||
<Warning>
|
||||
Your existing Task Master project files (`.taskmaster/` folder) are NOT affected and will continue working normally.
|
||||
</Warning>
|
||||
|
||||
The old command files in `.claude/` are now redundant. You can safely remove them:
|
||||
|
||||
```bash
|
||||
# Remove old command files
|
||||
rm -rf .claude/commands/tm/
|
||||
|
||||
# Remove old agent files
|
||||
rm -rf .claude/agents/task-*
|
||||
```
|
||||
|
||||
### 4. Update Your Workflows
|
||||
|
||||
Update any saved workflows or documentation to use the new command names:
|
||||
|
||||
#### Old Commands (still work but won't update)
|
||||
```bash
|
||||
/tm:init
|
||||
/tm:parse-prd
|
||||
/tm:next
|
||||
```
|
||||
|
||||
#### New Commands (get automatic updates)
|
||||
```bash
|
||||
/task-master-ai:init-project
|
||||
/task-master-ai:parse-prd
|
||||
/task-master-ai:next-task
|
||||
```
|
||||
|
||||
## Command Name Changes
|
||||
|
||||
Most commands have the same name but with the new prefix:
|
||||
|
||||
| Old Format | New Format |
|
||||
|------------|------------|
|
||||
| `/tm:init` | `/task-master-ai:init-project` |
|
||||
| `/tm:parse-prd` | `/task-master-ai:parse-prd` |
|
||||
| `/tm:next` | `/task-master-ai:next-task` |
|
||||
| `/tm:show` | `/task-master-ai:show-task` |
|
||||
| `/tm:add-task` | `/task-master-ai:add-task` |
|
||||
| `/tm:expand` | `/task-master-ai:expand-task` |
|
||||
| `/tm:to-done` | `/task-master-ai:to-done` |
|
||||
|
||||
## What Happens to MCP?
|
||||
|
||||
<Note>
|
||||
The MCP server integration remains fully functional and is recommended alongside the plugin for the complete Task Master experience.
|
||||
</Note>
|
||||
|
||||
The plugin provides slash commands and agents, while the MCP server provides deep integration tools. For the best experience, keep both:
|
||||
|
||||
1. **Plugin**: For slash commands and AI agents
|
||||
2. **MCP Server**: For advanced tool integration
|
||||
|
||||
## Troubleshooting Migration
|
||||
|
||||
### Old Commands Still Showing
|
||||
|
||||
If you're still seeing old commands after removing the files:
|
||||
|
||||
1. Restart Claude Code completely
|
||||
2. Clear command cache if available in your editor
|
||||
|
||||
### Plugin Commands Not Working
|
||||
|
||||
1. Verify plugin installation: `/plugin list`
|
||||
2. Check marketplace is added: `/plugin marketplace list`
|
||||
3. Reinstall if needed: `/plugin uninstall taskmaster && /plugin install taskmaster@taskmaster`
|
||||
|
||||
### MCP Tools Not Working
|
||||
|
||||
If your MCP integration breaks during migration:
|
||||
|
||||
1. Verify your `.mcp.json` configuration is intact
|
||||
2. Restart your editor to reconnect MCP servers
|
||||
3. Check API keys are still configured correctly
|
||||
|
||||
## Benefits of Migration
|
||||
|
||||
### Automatic Updates
|
||||
- New commands and features arrive automatically
|
||||
- No need to run `rules add claude` again
|
||||
- Always get the latest Task Master capabilities
|
||||
|
||||
### Better Organization
|
||||
- Clean command naming with consistent prefixes
|
||||
- Better integration with Claude Code's plugin system
|
||||
- Reduced local file clutter
|
||||
|
||||
### Enhanced Functionality
|
||||
- 49 specialized commands (vs. previous limited set)
|
||||
- 3 AI agents for different workflow needs
|
||||
- Native Claude Code plugin experience
|
||||
|
||||
## Need Help?
|
||||
|
||||
If you encounter issues during migration:
|
||||
|
||||
1. Check the [FAQ](/getting-started/faq) for common issues
|
||||
2. Join our [Discord community](https://discord.gg/fWJkU7rf) for support
|
||||
3. File an issue on [GitHub](https://github.com/eyaltoledano/claude-task-master/issues)
|
||||
|
||||
The migration should be seamless, but we're here to help if you run into any problems!
|
||||
@@ -37,25 +37,6 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
|
||||
}
|
||||
```
|
||||
|
||||
<Tip>
|
||||
**Optimize Context Usage**: You can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This helps reduce LLM context usage by only loading the tools you need.
|
||||
|
||||
Options:
|
||||
- `all` (default) - All 36 tools
|
||||
- `standard` - 15 commonly used tools
|
||||
- `core` or `lean` - 7 essential tools
|
||||
|
||||
Example:
|
||||
```json
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "standard",
|
||||
"ANTHROPIC_API_KEY": "your_key_here"
|
||||
}
|
||||
```
|
||||
|
||||
See the [MCP Tools documentation](/capabilities/mcp#configurable-tool-loading) for details.
|
||||
</Tip>
|
||||
|
||||
### CLI Usage: `.env` File
|
||||
|
||||
Create a `.env` file in your project root and include the keys for the providers you plan to use:
|
||||
|
||||
@@ -3,9 +3,38 @@ title: Installation
|
||||
sidebarTitle: "Installation"
|
||||
---
|
||||
|
||||
Now that you have Node.js and your first API Key, you are ready to begin installing Task Master in one of three ways.
|
||||
Now that you have Node.js and your first API Key, you are ready to begin installing Task Master in one of three ways.
|
||||
|
||||
<Note>Cursor Users Can Use the One Click Install Below</Note>
|
||||
## Recommended: Claude Code Plugin
|
||||
|
||||
<Note>**New!** Task Master is now available as an official Claude Code plugin with automatic updates and seamless integration.</Note>
|
||||
|
||||
### Quick Install for Claude Code
|
||||
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
After installation, initialize your project:
|
||||
|
||||
```bash
|
||||
/task-master-ai:init-project
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
- ✅ Automatic updates when new features are released
|
||||
- ✅ 49 specialized slash commands
|
||||
- ✅ 3 AI agents for different workflows
|
||||
- ✅ No manual file management required
|
||||
|
||||
[Learn more about the Claude Code plugin →](/getting-started/claude-code-plugin)
|
||||
|
||||
---
|
||||
|
||||
## Alternative Installation Methods
|
||||
|
||||
<Note>Cursor Users Can Use the One Click MCP Install Below</Note>
|
||||
<Accordion title="Quick Install for Cursor 1.0+ (One-Click)">
|
||||
|
||||
<a href="cursor://anysphere.cursor-deeplink/mcp/install?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUJFX0FQSV9LRVkiOiJZT1VSX0FaVVJFX0tFWV9IRVJFIiwiT0xMQU1BX0FQSV9LRVkiOiJZT1VSX09MTEFNQV9BUElfS0VZX0hFUkUifX0%3D">
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
<Tip>
|
||||
Welcome to v1 of the Task Master Docs. Expect weekly updates as we expand and refine each section.
|
||||
Welcome to v1 of the Task Master Docs. **New!** Task Master is now available as an official Claude Code plugin with 49 commands and 3 AI agents.
|
||||
</Tip>
|
||||
|
||||
We've organized the docs into three sections depending on your experience level and goals:
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "docs",
|
||||
"version": "0.0.6",
|
||||
"version": "0.0.5",
|
||||
"private": true,
|
||||
"description": "Task Master documentation powered by Mintlify",
|
||||
"scripts": {
|
||||
|
||||
@@ -3,4 +3,26 @@ title: "What's New"
|
||||
sidebarTitle: "What's New"
|
||||
---
|
||||
|
||||
## 🎉 Claude Code Plugin Now Available (Latest)
|
||||
|
||||
Task Master AI has evolved with official Claude Code plugin support!
|
||||
|
||||
### What's New
|
||||
- **49 specialized slash commands** with clean `/task-master-ai:` naming
|
||||
- **3 AI agents** for orchestration, execution, and checking
|
||||
- **Automatic updates** - no more manual file copying
|
||||
- **Seamless integration** with Claude Code's native plugin system
|
||||
|
||||
### Quick Install
|
||||
```bash
|
||||
/plugin marketplace add eyaltoledano/claude-task-master
|
||||
/plugin install taskmaster@taskmaster
|
||||
```
|
||||
|
||||
[Learn more →](/getting-started/claude-code-plugin) | [Migration guide →](/getting-started/migration-plugin)
|
||||
|
||||
---
|
||||
|
||||
## Previous Releases
|
||||
|
||||
An easy way to see the latest releases
|
||||
@@ -1,14 +1,5 @@
|
||||
# Change Log
|
||||
|
||||
## 0.25.6
|
||||
|
||||
## 0.25.6-rc.0
|
||||
|
||||
### Patch Changes
|
||||
|
||||
- Updated dependencies [[`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5), [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0), [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48), [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815), [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654)]:
|
||||
- task-master-ai@0.29.0-rc.0
|
||||
|
||||
## 0.25.5
|
||||
|
||||
### Patch Changes
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
"private": true,
|
||||
"displayName": "TaskMaster",
|
||||
"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
|
||||
"version": "0.25.6",
|
||||
"version": "0.25.5",
|
||||
"publisher": "Hamster",
|
||||
"icon": "assets/icon.png",
|
||||
"engines": {
|
||||
@@ -239,6 +239,9 @@
|
||||
"watch:css": "npx @tailwindcss/cli -i ./src/webview/index.css -o ./dist/index.css --watch",
|
||||
"check-types": "tsc --noEmit"
|
||||
},
|
||||
"dependencies": {
|
||||
"task-master-ai": "*"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@dnd-kit/core": "^6.3.1",
|
||||
"@dnd-kit/modifiers": "^9.0.0",
|
||||
@@ -274,8 +277,7 @@
|
||||
"tailwind-merge": "^3.3.1",
|
||||
"tailwindcss": "4.1.11",
|
||||
"typescript": "^5.9.2",
|
||||
"@tm/core": "*",
|
||||
"task-master-ai": "*"
|
||||
"@tm/core": "*"
|
||||
},
|
||||
"overrides": {
|
||||
"glob@<8": "^10.4.5",
|
||||
|
||||
@@ -59,76 +59,6 @@ Taskmaster uses two primary methods for configuration:
|
||||
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
|
||||
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
|
||||
|
||||
## MCP Tool Loading Configuration
|
||||
|
||||
### TASK_MASTER_TOOLS Environment Variable
|
||||
|
||||
The `TASK_MASTER_TOOLS` environment variable controls which tools are loaded by the Task Master MCP server. This allows you to optimize token usage based on your workflow needs.
|
||||
|
||||
> Note
|
||||
> Prefer setting `TASK_MASTER_TOOLS` in your MCP client's `env` block (e.g., `.cursor/mcp.json`) or in CI/deployment env. The `.env` file is reserved for API keys/endpoints; avoid persisting non-secret settings there.
|
||||
|
||||
#### Configuration Options
|
||||
|
||||
- **`all`** (default): Loads all 36 available tools (~21,000 tokens)
|
||||
- Best for: Users who need the complete feature set
|
||||
- Use when: Working with complex projects requiring all Task Master features
|
||||
- Backward compatibility: This is the default to maintain compatibility with existing installations
|
||||
|
||||
- **`standard`**: Loads 15 commonly used tools (~10,000 tokens, 50% reduction)
|
||||
- Best for: Regular task management workflows
|
||||
- Tools included: All core tools plus project initialization, complexity analysis, task generation, and more
|
||||
- Use when: You need a balanced set of features with reduced token usage
|
||||
|
||||
- **`core`** (or `lean`): Loads 7 essential tools (~5,000 tokens, 70% reduction)
|
||||
- Best for: Daily development with minimal token overhead
|
||||
- Tools included: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
|
||||
- Use when: Working in large contexts where token usage is critical
|
||||
- Note: "lean" is an alias for "core" (same tools, token estimate and recommended use). You can refer to it as either "core" or "lean" when configuring.
|
||||
|
||||
- **Custom list**: Comma-separated list of specific tool names
|
||||
- Best for: Specialized workflows requiring specific tools
|
||||
- Example: `"get_tasks,next_task,set_task_status"`
|
||||
- Use when: You know exactly which tools you need
|
||||
|
||||
#### How to Configure
|
||||
|
||||
1. **In MCP configuration files** (`.cursor/mcp.json`, `.vscode/mcp.json`, etc.) - **Recommended**:
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"mcpServers": {
|
||||
"task-master-ai": {
|
||||
"env": {
|
||||
"TASK_MASTER_TOOLS": "standard", // Set tool loading mode
|
||||
// API keys can still use .env for security
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
2. **Via Claude Code CLI**:
|
||||
|
||||
```bash
|
||||
claude mcp add task-master-ai --scope user \
|
||||
--env TASK_MASTER_TOOLS="core" \
|
||||
-- npx -y task-master-ai@latest
|
||||
```
|
||||
|
||||
3. **In CI/deployment environment variables**:
|
||||
```bash
|
||||
export TASK_MASTER_TOOLS="standard"
|
||||
node mcp-server/server.js
|
||||
```
|
||||
|
||||
#### Tool Loading Behavior
|
||||
|
||||
- When `TASK_MASTER_TOOLS` is unset or empty, the system defaults to `"all"`
|
||||
- Invalid tool names in a user-specified list are ignored (a warning is emitted for each)
|
||||
- If every tool name in a custom list is invalid, the system falls back to `"all"`
|
||||
- Tool names are case-insensitive (e.g., `"CORE"`, `"core"`, and `"Core"` are treated identically)
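These rules can be sketched roughly as follows. `toolRegistry`, `coreTools`, and `standardTools` stand in for the lists the MCP server keeps in its tool registry; this is an illustration of the behavior described above, not the server's actual implementation.

```ts
// Illustrative only; the real server keeps these lists in its tool registry.
function resolveToolNames(
	raw: string | undefined,
	toolRegistry: Record<string, unknown>,
	coreTools: string[],
	standardTools: string[]
): string[] {
	const value = raw?.trim();
	if (!value) return Object.keys(toolRegistry); // unset or empty -> "all"

	switch (value.toLowerCase()) {
		case 'all':
			return Object.keys(toolRegistry);
		case 'core':
		case 'lean':
			return coreTools;
		case 'standard':
			return standardTools;
	}

	// Custom comma-separated list: case-insensitive match against registry keys
	// (hyphen/underscore normalization omitted for brevity).
	const canonical = new Map(
		Object.keys(toolRegistry).map((k) => [k.toLowerCase(), k] as const)
	);
	const resolved: string[] = [];
	for (const part of value.split(',')) {
		const requested = part.trim();
		if (!requested) continue;
		const key = canonical.get(requested.toLowerCase());
		if (key) {
			resolved.push(key);
		} else {
			// Invalid names are ignored with a warning.
			console.warn(`Ignoring unknown tool name: ${requested}`);
		}
	}

	// If every requested name was invalid, fall back to "all".
	return resolved.length > 0 ? resolved : Object.keys(toolRegistry);
}
```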
|
||||
|
||||
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
|
||||
|
||||
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
|
||||
@@ -293,10 +223,10 @@ node scripts/init.js
|
||||
```bash
|
||||
# Set MCP provider for main role
|
||||
task-master models set-main --provider mcp --model claude-3-5-sonnet-20241022
|
||||
|
||||
# Set MCP provider for research role
|
||||
|
||||
# Set MCP provider for research role
|
||||
task-master models set-research --provider mcp --model claude-3-opus-20240229
|
||||
|
||||
|
||||
# Verify configuration
|
||||
task-master models list
|
||||
```
|
||||
@@ -427,7 +357,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
|
||||
"temperature": 0.7
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "azure",
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o-mini",
|
||||
"maxTokens": 10000,
|
||||
"temperature": 0.7
|
||||
@@ -446,7 +376,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o",
|
||||
"modelId": "gpt-4o",
|
||||
"maxTokens": 16000,
|
||||
"temperature": 0.7,
|
||||
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
@@ -460,7 +390,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
|
||||
"fallback": {
|
||||
"provider": "azure",
|
||||
"modelId": "gpt-4o-mini",
|
||||
"maxTokens": 10000,
|
||||
"maxTokens": 10000,
|
||||
"temperature": 0.7,
|
||||
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
|
||||
}
|
||||
@@ -472,7 +402,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
|
||||
```bash
|
||||
# In .env file
|
||||
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
|
||||
|
||||
|
||||
# Optional: Override endpoint for all Azure models
|
||||
AZURE_OPENAI_ENDPOINT=https://your-resource-name.azure.com/openai/deployments
|
||||
```
|
||||
|
||||
Binary file not shown.
|
Before: 130 KiB image (width and height not shown)
@@ -4,14 +4,12 @@ import dotenv from 'dotenv';
|
||||
import { fileURLToPath } from 'url';
|
||||
import fs from 'fs';
|
||||
import logger from './logger.js';
|
||||
import {
|
||||
registerTaskMasterTools,
|
||||
getToolsConfiguration
|
||||
} from './tools/index.js';
|
||||
import { registerTaskMasterTools } from './tools/index.js';
|
||||
import ProviderRegistry from '../../src/provider-registry/index.js';
|
||||
import { MCPProvider } from './providers/mcp-provider.js';
|
||||
import packageJson from '../../package.json' with { type: 'json' };
|
||||
|
||||
// Load environment variables
|
||||
dotenv.config();
|
||||
|
||||
// Constants
|
||||
@@ -31,10 +29,12 @@ class TaskMasterMCPServer {
|
||||
this.server = new FastMCP(this.options);
|
||||
this.initialized = false;
|
||||
|
||||
// Bind methods
|
||||
this.init = this.init.bind(this);
|
||||
this.start = this.start.bind(this);
|
||||
this.stop = this.stop.bind(this);
|
||||
|
||||
// Setup logging
|
||||
this.logger = logger;
|
||||
}
|
||||
|
||||
@@ -44,34 +44,8 @@ class TaskMasterMCPServer {
|
||||
async init() {
|
||||
if (this.initialized) return;
|
||||
|
||||
const normalizedToolMode = getToolsConfiguration();
|
||||
|
||||
this.logger.info('Task Master MCP Server starting...');
|
||||
this.logger.info(`Tool mode configuration: ${normalizedToolMode}`);
|
||||
|
||||
const registrationResult = registerTaskMasterTools(
|
||||
this.server,
|
||||
normalizedToolMode
|
||||
);
|
||||
|
||||
this.logger.info(
|
||||
`Normalized tool mode: ${registrationResult.normalizedMode}`
|
||||
);
|
||||
this.logger.info(
|
||||
`Registered ${registrationResult.registeredTools.length} tools successfully`
|
||||
);
|
||||
|
||||
if (registrationResult.registeredTools.length > 0) {
|
||||
this.logger.debug(
|
||||
`Registered tools: ${registrationResult.registeredTools.join(', ')}`
|
||||
);
|
||||
}
|
||||
|
||||
if (registrationResult.failedTools.length > 0) {
|
||||
this.logger.warn(
|
||||
`Failed to register ${registrationResult.failedTools.length} tools: ${registrationResult.failedTools.join(', ')}`
|
||||
);
|
||||
}
|
||||
// Pass the manager instance to the tool registration function
|
||||
registerTaskMasterTools(this.server, this.asyncManager);
|
||||
|
||||
this.initialized = true;
|
||||
|
||||
|
||||
@@ -3,238 +3,109 @@
|
||||
* Export all Task Master CLI tools for MCP server
|
||||
*/
|
||||
|
||||
import { registerListTasksTool } from './get-tasks.js';
|
||||
import logger from '../logger.js';
|
||||
import {
|
||||
toolRegistry,
|
||||
coreTools,
|
||||
standardTools,
|
||||
getAvailableTools,
|
||||
getToolRegistration,
|
||||
isValidTool
|
||||
} from './tool-registry.js';
|
||||
import { registerSetTaskStatusTool } from './set-task-status.js';
|
||||
import { registerParsePRDTool } from './parse-prd.js';
|
||||
import { registerUpdateTool } from './update.js';
|
||||
import { registerUpdateTaskTool } from './update-task.js';
|
||||
import { registerUpdateSubtaskTool } from './update-subtask.js';
|
||||
import { registerGenerateTool } from './generate.js';
|
||||
import { registerShowTaskTool } from './get-task.js';
|
||||
import { registerNextTaskTool } from './next-task.js';
|
||||
import { registerExpandTaskTool } from './expand-task.js';
|
||||
import { registerAddTaskTool } from './add-task.js';
|
||||
import { registerAddSubtaskTool } from './add-subtask.js';
|
||||
import { registerRemoveSubtaskTool } from './remove-subtask.js';
|
||||
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
|
||||
import { registerClearSubtasksTool } from './clear-subtasks.js';
|
||||
import { registerExpandAllTool } from './expand-all.js';
|
||||
import { registerRemoveDependencyTool } from './remove-dependency.js';
|
||||
import { registerValidateDependenciesTool } from './validate-dependencies.js';
|
||||
import { registerFixDependenciesTool } from './fix-dependencies.js';
|
||||
import { registerComplexityReportTool } from './complexity-report.js';
|
||||
import { registerAddDependencyTool } from './add-dependency.js';
|
||||
import { registerRemoveTaskTool } from './remove-task.js';
|
||||
import { registerInitializeProjectTool } from './initialize-project.js';
|
||||
import { registerModelsTool } from './models.js';
|
||||
import { registerMoveTaskTool } from './move-task.js';
|
||||
import { registerResponseLanguageTool } from './response-language.js';
|
||||
import { registerAddTagTool } from './add-tag.js';
|
||||
import { registerDeleteTagTool } from './delete-tag.js';
|
||||
import { registerListTagsTool } from './list-tags.js';
|
||||
import { registerUseTagTool } from './use-tag.js';
|
||||
import { registerRenameTagTool } from './rename-tag.js';
|
||||
import { registerCopyTagTool } from './copy-tag.js';
|
||||
import { registerResearchTool } from './research.js';
|
||||
import { registerRulesTool } from './rules.js';
|
||||
import { registerScopeUpTool } from './scope-up.js';
|
||||
import { registerScopeDownTool } from './scope-down.js';
|
||||
|
||||
/**
|
||||
* Helper function to safely read and normalize the TASK_MASTER_TOOLS environment variable
|
||||
* @returns {string} The tools configuration string, defaults to 'all'
|
||||
*/
|
||||
export function getToolsConfiguration() {
|
||||
const rawValue = process.env.TASK_MASTER_TOOLS;
|
||||
|
||||
if (!rawValue || rawValue.trim() === '') {
|
||||
logger.debug('No TASK_MASTER_TOOLS env var found, defaulting to "all"');
|
||||
return 'all';
|
||||
}
|
||||
|
||||
const normalizedValue = rawValue.trim();
|
||||
logger.debug(`TASK_MASTER_TOOLS env var: "${normalizedValue}"`);
|
||||
return normalizedValue;
|
||||
}
|
||||
|
||||
/**
|
||||
* Register Task Master tools with the MCP server
|
||||
* Supports selective tool loading via TASK_MASTER_TOOLS environment variable
|
||||
* Register all Task Master tools with the MCP server
|
||||
* @param {Object} server - FastMCP server instance
|
||||
* @param {string} toolMode - The tool mode configuration (defaults to 'all')
|
||||
* @returns {Object} Object containing registered tools, failed tools, and normalized mode
|
||||
*/
|
||||
export function registerTaskMasterTools(server, toolMode = 'all') {
|
||||
const registeredTools = [];
|
||||
const failedTools = [];
|
||||
|
||||
export function registerTaskMasterTools(server) {
|
||||
try {
|
||||
const enabledTools = toolMode.trim();
|
||||
let toolsToRegister = [];
|
||||
// Register each tool in a logical workflow order
|
||||
|
||||
const lowerCaseConfig = enabledTools.toLowerCase();
|
||||
// Group 1: Initialization & Setup
|
||||
registerInitializeProjectTool(server);
|
||||
registerModelsTool(server);
|
||||
registerRulesTool(server);
|
||||
registerParsePRDTool(server);
|
||||
|
||||
switch (lowerCaseConfig) {
|
||||
case 'all':
|
||||
toolsToRegister = Object.keys(toolRegistry);
|
||||
logger.info('Loading all available tools');
|
||||
break;
|
||||
case 'core':
|
||||
case 'lean':
|
||||
toolsToRegister = coreTools;
|
||||
logger.info('Loading core tools only');
|
||||
break;
|
||||
case 'standard':
|
||||
toolsToRegister = standardTools;
|
||||
logger.info('Loading standard tools');
|
||||
break;
|
||||
default:
|
||||
const requestedTools = enabledTools
|
||||
.split(',')
|
||||
.map((t) => t.trim())
|
||||
.filter((t) => t.length > 0);
|
||||
// Group 2: Task Analysis & Expansion
|
||||
registerAnalyzeProjectComplexityTool(server);
|
||||
registerExpandTaskTool(server);
|
||||
registerExpandAllTool(server);
|
||||
registerScopeUpTool(server);
|
||||
registerScopeDownTool(server);
|
||||
|
||||
const uniqueTools = new Set();
|
||||
const unknownTools = [];
|
||||
// Group 3: Task Listing & Viewing
|
||||
registerListTasksTool(server);
|
||||
registerShowTaskTool(server);
|
||||
registerNextTaskTool(server);
|
||||
registerComplexityReportTool(server);
|
||||
|
||||
const aliasMap = {
|
||||
response_language: 'response-language'
|
||||
};
|
||||
// Group 4: Task Status & Management
|
||||
registerSetTaskStatusTool(server);
|
||||
registerGenerateTool(server);
|
||||
|
||||
for (const toolName of requestedTools) {
|
||||
let resolvedName = null;
|
||||
const lowerToolName = toolName.toLowerCase();
|
||||
// Group 5: Task Creation & Modification
|
||||
registerAddTaskTool(server);
|
||||
registerAddSubtaskTool(server);
|
||||
registerUpdateTool(server);
|
||||
registerUpdateTaskTool(server);
|
||||
registerUpdateSubtaskTool(server);
|
||||
registerRemoveTaskTool(server);
|
||||
registerRemoveSubtaskTool(server);
|
||||
registerClearSubtasksTool(server);
|
||||
registerMoveTaskTool(server);
|
||||
|
||||
if (aliasMap[lowerToolName]) {
|
||||
const aliasTarget = aliasMap[lowerToolName];
|
||||
for (const registryKey of Object.keys(toolRegistry)) {
|
||||
if (registryKey.toLowerCase() === aliasTarget.toLowerCase()) {
|
||||
resolvedName = registryKey;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
// Group 6: Dependency Management
|
||||
registerAddDependencyTool(server);
|
||||
registerRemoveDependencyTool(server);
|
||||
registerValidateDependenciesTool(server);
|
||||
registerFixDependenciesTool(server);
|
||||
registerResponseLanguageTool(server);
|
||||
|
||||
if (!resolvedName) {
|
||||
for (const registryKey of Object.keys(toolRegistry)) {
|
||||
if (registryKey.toLowerCase() === lowerToolName) {
|
||||
resolvedName = registryKey;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
// Group 7: Tag Management
|
||||
registerListTagsTool(server);
|
||||
registerAddTagTool(server);
|
||||
registerDeleteTagTool(server);
|
||||
registerUseTagTool(server);
|
||||
registerRenameTagTool(server);
|
||||
registerCopyTagTool(server);
|
||||
|
||||
if (!resolvedName) {
|
||||
const withHyphens = lowerToolName.replace(/_/g, '-');
|
||||
for (const registryKey of Object.keys(toolRegistry)) {
|
||||
if (registryKey.toLowerCase() === withHyphens) {
|
||||
resolvedName = registryKey;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!resolvedName) {
|
||||
const withUnderscores = lowerToolName.replace(/-/g, '_');
|
||||
for (const registryKey of Object.keys(toolRegistry)) {
|
||||
if (registryKey.toLowerCase() === withUnderscores) {
|
||||
resolvedName = registryKey;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (resolvedName) {
|
||||
uniqueTools.add(resolvedName);
|
||||
logger.debug(`Resolved tool "${toolName}" to "${resolvedName}"`);
|
||||
} else {
|
||||
unknownTools.push(toolName);
|
||||
logger.warn(`Unknown tool specified: "${toolName}"`);
|
||||
}
|
||||
}
|
||||
|
||||
toolsToRegister = Array.from(uniqueTools);
|
||||
|
||||
if (unknownTools.length > 0) {
|
||||
logger.warn(`Unknown tools: ${unknownTools.join(', ')}`);
|
||||
}
|
||||
|
||||
if (toolsToRegister.length === 0) {
|
||||
logger.warn(
|
||||
`No valid tools found in custom list. Loading all tools as fallback.`
|
||||
);
|
||||
toolsToRegister = Object.keys(toolRegistry);
|
||||
} else {
|
||||
logger.info(
|
||||
`Loading ${toolsToRegister.length} custom tools from list (${uniqueTools.size} unique after normalization)`
|
||||
);
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
logger.info(
|
||||
`Registering ${toolsToRegister.length} MCP tools (mode: ${enabledTools})`
|
||||
);
|
||||
|
||||
toolsToRegister.forEach((toolName) => {
|
||||
try {
|
||||
const registerFunction = getToolRegistration(toolName);
|
||||
if (registerFunction) {
|
||||
registerFunction(server);
|
||||
logger.debug(`Registered tool: ${toolName}`);
|
||||
registeredTools.push(toolName);
|
||||
} else {
|
||||
logger.warn(`Tool ${toolName} not found in registry`);
|
||||
failedTools.push(toolName);
|
||||
}
|
||||
} catch (error) {
|
||||
if (error.message && error.message.includes('already registered')) {
|
||||
logger.debug(`Tool ${toolName} already registered, skipping`);
|
||||
registeredTools.push(toolName);
|
||||
} else {
|
||||
logger.error(`Failed to register tool ${toolName}: ${error.message}`);
|
||||
failedTools.push(toolName);
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
logger.info(
|
||||
`Successfully registered ${registeredTools.length}/${toolsToRegister.length} tools`
|
||||
);
|
||||
if (failedTools.length > 0) {
|
||||
logger.warn(`Failed tools: ${failedTools.join(', ')}`);
|
||||
}
|
||||
|
||||
return {
|
||||
registeredTools,
|
||||
failedTools,
|
||||
normalizedMode: lowerCaseConfig
|
||||
};
|
||||
// Group 8: Research Features
|
||||
registerResearchTool(server);
|
||||
} catch (error) {
|
||||
logger.error(
|
||||
`Error parsing TASK_MASTER_TOOLS environment variable: ${error.message}`
|
||||
);
|
||||
logger.info('Falling back to loading all tools');
|
||||
|
||||
const fallbackTools = Object.keys(toolRegistry);
|
||||
for (const toolName of fallbackTools) {
|
||||
const registerFunction = getToolRegistration(toolName);
|
||||
if (registerFunction) {
|
||||
try {
|
||||
registerFunction(server);
|
||||
registeredTools.push(toolName);
|
||||
} catch (err) {
|
||||
if (err.message && err.message.includes('already registered')) {
|
||||
logger.debug(
|
||||
`Fallback tool ${toolName} already registered, skipping`
|
||||
);
|
||||
registeredTools.push(toolName);
|
||||
} else {
|
||||
logger.warn(
|
||||
`Failed to register fallback tool '${toolName}': ${err.message}`
|
||||
);
|
||||
failedTools.push(toolName);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
logger.warn(`Tool '${toolName}' not found in registry`);
|
||||
failedTools.push(toolName);
|
||||
}
|
||||
}
|
||||
logger.info(
|
||||
`Successfully registered ${registeredTools.length} fallback tools`
|
||||
);
|
||||
|
||||
return {
|
||||
registeredTools,
|
||||
failedTools,
|
||||
normalizedMode: 'all'
|
||||
};
|
||||
logger.error(`Error registering Task Master tools: ${error.message}`);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
export {
	toolRegistry,
	coreTools,
	standardTools,
	getAvailableTools,
	getToolRegistration,
	isValidTool
};

export default {
	registerTaskMasterTools
};
|
||||
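For orientation, a minimal sketch of how the selective loader in this file could be driven. The FastMCP server construction and the import path are assumptions; `getToolsConfiguration`, `registerTaskMasterTools`, and the returned `{ registeredTools, failedTools, normalizedMode }` shape come from the code above.

```js
// Hypothetical bootstrap (not part of this diff) showing the selective loader in use.
import { FastMCP } from 'fastmcp';
import {
	getToolsConfiguration,
	registerTaskMasterTools
} from './tools/index.js'; // assumed path

const server = new FastMCP({ name: 'task-master-ai', version: '1.0.0' });

// e.g. TASK_MASTER_TOOLS="get_tasks,next-task,response_language"
// (hyphen/underscore variants and the response_language alias are normalized above)
const toolMode = getToolsConfiguration();

const { registeredTools, failedTools, normalizedMode } = registerTaskMasterTools(
	server,
	toolMode
);
console.log(
	`mode=${normalizedMode}: ${registeredTools.length} registered, ${failedTools.length} failed`
);
```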
@@ -1,168 +0,0 @@
|
||||
/**
|
||||
* tool-registry.js
|
||||
* Tool Registry Object Structure - Maps all 36 tool names to registration functions
|
||||
*/
|
||||
|
||||
import { registerListTasksTool } from './get-tasks.js';
|
||||
import { registerSetTaskStatusTool } from './set-task-status.js';
|
||||
import { registerParsePRDTool } from './parse-prd.js';
|
||||
import { registerUpdateTool } from './update.js';
|
||||
import { registerUpdateTaskTool } from './update-task.js';
|
||||
import { registerUpdateSubtaskTool } from './update-subtask.js';
|
||||
import { registerGenerateTool } from './generate.js';
|
||||
import { registerShowTaskTool } from './get-task.js';
|
||||
import { registerNextTaskTool } from './next-task.js';
|
||||
import { registerExpandTaskTool } from './expand-task.js';
|
||||
import { registerAddTaskTool } from './add-task.js';
|
||||
import { registerAddSubtaskTool } from './add-subtask.js';
|
||||
import { registerRemoveSubtaskTool } from './remove-subtask.js';
|
||||
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
|
||||
import { registerClearSubtasksTool } from './clear-subtasks.js';
|
||||
import { registerExpandAllTool } from './expand-all.js';
|
||||
import { registerRemoveDependencyTool } from './remove-dependency.js';
|
||||
import { registerValidateDependenciesTool } from './validate-dependencies.js';
|
||||
import { registerFixDependenciesTool } from './fix-dependencies.js';
|
||||
import { registerComplexityReportTool } from './complexity-report.js';
|
||||
import { registerAddDependencyTool } from './add-dependency.js';
|
||||
import { registerRemoveTaskTool } from './remove-task.js';
|
||||
import { registerInitializeProjectTool } from './initialize-project.js';
|
||||
import { registerModelsTool } from './models.js';
|
||||
import { registerMoveTaskTool } from './move-task.js';
|
||||
import { registerResponseLanguageTool } from './response-language.js';
|
||||
import { registerAddTagTool } from './add-tag.js';
|
||||
import { registerDeleteTagTool } from './delete-tag.js';
|
||||
import { registerListTagsTool } from './list-tags.js';
|
||||
import { registerUseTagTool } from './use-tag.js';
|
||||
import { registerRenameTagTool } from './rename-tag.js';
|
||||
import { registerCopyTagTool } from './copy-tag.js';
|
||||
import { registerResearchTool } from './research.js';
|
||||
import { registerRulesTool } from './rules.js';
|
||||
import { registerScopeUpTool } from './scope-up.js';
|
||||
import { registerScopeDownTool } from './scope-down.js';
|
||||
|
||||
/**
|
||||
* Comprehensive tool registry mapping all 36 tool names to their registration functions
|
||||
* Used for dynamic tool registration and validation
|
||||
*/
|
||||
export const toolRegistry = {
|
||||
initialize_project: registerInitializeProjectTool,
|
||||
models: registerModelsTool,
|
||||
rules: registerRulesTool,
|
||||
parse_prd: registerParsePRDTool,
|
||||
'response-language': registerResponseLanguageTool,
|
||||
analyze_project_complexity: registerAnalyzeProjectComplexityTool,
|
||||
expand_task: registerExpandTaskTool,
|
||||
expand_all: registerExpandAllTool,
|
||||
scope_up_task: registerScopeUpTool,
|
||||
scope_down_task: registerScopeDownTool,
|
||||
get_tasks: registerListTasksTool,
|
||||
get_task: registerShowTaskTool,
|
||||
next_task: registerNextTaskTool,
|
||||
complexity_report: registerComplexityReportTool,
|
||||
set_task_status: registerSetTaskStatusTool,
|
||||
generate: registerGenerateTool,
|
||||
add_task: registerAddTaskTool,
|
||||
add_subtask: registerAddSubtaskTool,
|
||||
update: registerUpdateTool,
|
||||
update_task: registerUpdateTaskTool,
|
||||
update_subtask: registerUpdateSubtaskTool,
|
||||
remove_task: registerRemoveTaskTool,
|
||||
remove_subtask: registerRemoveSubtaskTool,
|
||||
clear_subtasks: registerClearSubtasksTool,
|
||||
move_task: registerMoveTaskTool,
|
||||
add_dependency: registerAddDependencyTool,
|
||||
remove_dependency: registerRemoveDependencyTool,
|
||||
validate_dependencies: registerValidateDependenciesTool,
|
||||
fix_dependencies: registerFixDependenciesTool,
|
||||
list_tags: registerListTagsTool,
|
||||
add_tag: registerAddTagTool,
|
||||
delete_tag: registerDeleteTagTool,
|
||||
use_tag: registerUseTagTool,
|
||||
rename_tag: registerRenameTagTool,
|
||||
copy_tag: registerCopyTagTool,
|
||||
research: registerResearchTool
|
||||
};
|
||||
|
||||
/**
 * Core tools array containing the 7 essential tools for daily development
 * These represent the minimal set needed for basic task management operations
 */
export const coreTools = [
	'get_tasks',
	'next_task',
	'get_task',
	'set_task_status',
	'update_subtask',
	'parse_prd',
	'expand_task'
];

/**
 * Standard tools array containing the 15 most commonly used tools
 * Includes all core tools plus frequently used additional tools
 */
export const standardTools = [
	...coreTools,
	'initialize_project',
	'analyze_project_complexity',
	'expand_all',
	'add_subtask',
	'remove_task',
	'generate',
	'add_task',
	'complexity_report'
];

|
||||
/**
|
||||
* Get all available tool names
|
||||
* @returns {string[]} Array of tool names
|
||||
*/
|
||||
export function getAvailableTools() {
|
||||
return Object.keys(toolRegistry);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get tool counts for all categories
|
||||
* @returns {Object} Object with core, standard, and total counts
|
||||
*/
|
||||
export function getToolCounts() {
|
||||
return {
|
||||
core: coreTools.length,
|
||||
standard: standardTools.length,
|
||||
total: Object.keys(toolRegistry).length
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get tool arrays organized by category
|
||||
* @returns {Object} Object with arrays for each category
|
||||
*/
|
||||
export function getToolCategories() {
|
||||
const allTools = Object.keys(toolRegistry);
|
||||
return {
|
||||
core: [...coreTools],
|
||||
standard: [...standardTools],
|
||||
all: [...allTools],
|
||||
extended: allTools.filter((t) => !standardTools.includes(t))
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get registration function for a specific tool
|
||||
* @param {string} toolName - Name of the tool
|
||||
* @returns {Function|null} Registration function or null if not found
|
||||
*/
|
||||
export function getToolRegistration(toolName) {
|
||||
return toolRegistry[toolName] || null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate if a tool exists in the registry
|
||||
* @param {string} toolName - Name of the tool
|
||||
* @returns {boolean} True if tool exists
|
||||
*/
|
||||
export function isValidTool(toolName) {
|
||||
return toolName in toolRegistry;
|
||||
}
|
||||
|
||||
export default toolRegistry;
|
||||
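A quick, illustrative check of the registry helpers defined above (the counts follow directly from the arrays and the 36-entry registry; nothing here is part of the diff itself):

```js
// Illustrative only: exercising the exported registry helpers.
import {
	coreTools,
	standardTools,
	getToolCounts,
	isValidTool
} from './tool-registry.js';

console.log(getToolCounts()); // { core: 7, standard: 15, total: 36 }

// Every core tool is part of the standard set, and every name resolves in the registry.
for (const name of coreTools) {
	if (!standardTools.includes(name) || !isValidTool(name)) {
		throw new Error(`Unexpected registry entry: ${name}`);
	}
}
```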
70
output.txt
Normal file
File diff suppressed because one or more lines are too long
195
package-lock.json
generated
@@ -1,12 +1,12 @@
|
||||
{
|
||||
"name": "task-master-ai",
|
||||
"version": "0.29.0",
|
||||
"version": "0.28.0",
|
||||
"lockfileVersion": 3,
|
||||
"requires": true,
|
||||
"packages": {
|
||||
"": {
|
||||
"name": "task-master-ai",
|
||||
"version": "0.29.0",
|
||||
"version": "0.28.0",
|
||||
"license": "MIT WITH Commons-Clause",
|
||||
"workspaces": [
|
||||
"apps/*",
|
||||
@@ -104,7 +104,6 @@
|
||||
"name": "@tm/cli",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@inquirer/search": "^3.2.0",
|
||||
"@tm/core": "*",
|
||||
"boxen": "^8.0.1",
|
||||
"chalk": "5.6.2",
|
||||
@@ -125,99 +124,17 @@
|
||||
"node": ">=18.0.0"
|
||||
}
|
||||
},
|
||||
"apps/cli/node_modules/@inquirer/ansi": {
|
||||
"version": "1.0.1",
|
||||
"resolved": "https://registry.npmjs.org/@inquirer/ansi/-/ansi-1.0.1.tgz",
|
||||
"integrity": "sha512-yqq0aJW/5XPhi5xOAL1xRCpe1eh8UFVgYFpFsjEqmIR8rKLyP+HINvFXwUaxYICflJrVlxnp7lLN6As735kVpw==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"apps/cli/node_modules/@inquirer/figures": {
|
||||
"version": "1.0.14",
|
||||
"resolved": "https://registry.npmjs.org/@inquirer/figures/-/figures-1.0.14.tgz",
|
||||
"integrity": "sha512-DbFgdt+9/OZYFM+19dbpXOSeAstPy884FPy1KjDu4anWwymZeOYhMY1mdFri172htv6mvc/uvIAAi7b7tvjJBQ==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
}
|
||||
},
|
||||
"apps/cli/node_modules/@inquirer/search": {
|
||||
"version": "3.2.0",
|
||||
"resolved": "https://registry.npmjs.org/@inquirer/search/-/search-3.2.0.tgz",
|
||||
"integrity": "sha512-a5SzB/qrXafDX1Z4AZW3CsVoiNxcIYCzYP7r9RzrfMpaLpB+yWi5U8BWagZyLmwR0pKbbL5umnGRd0RzGVI8bQ==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@inquirer/core": "^10.3.0",
|
||||
"@inquirer/figures": "^1.0.14",
|
||||
"@inquirer/type": "^3.0.9",
|
||||
"yoctocolors-cjs": "^2.1.2"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"@types/node": ">=18"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"@types/node": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"apps/cli/node_modules/@inquirer/search/node_modules/@inquirer/core": {
|
||||
"version": "10.3.0",
|
||||
"resolved": "https://registry.npmjs.org/@inquirer/core/-/core-10.3.0.tgz",
|
||||
"integrity": "sha512-Uv2aPPPSK5jeCplQmQ9xadnFx2Zhj9b5Dj7bU6ZeCdDNNY11nhYy4btcSdtDguHqCT2h5oNeQTcUNSGGLA7NTA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@inquirer/ansi": "^1.0.1",
|
||||
"@inquirer/figures": "^1.0.14",
|
||||
"@inquirer/type": "^3.0.9",
|
||||
"cli-width": "^4.1.0",
|
||||
"mute-stream": "^2.0.0",
|
||||
"signal-exit": "^4.1.0",
|
||||
"wrap-ansi": "^6.2.0",
|
||||
"yoctocolors-cjs": "^2.1.2"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"@types/node": ">=18"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"@types/node": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"apps/cli/node_modules/@inquirer/search/node_modules/@inquirer/type": {
|
||||
"version": "3.0.9",
|
||||
"resolved": "https://registry.npmjs.org/@inquirer/type/-/type-3.0.9.tgz",
|
||||
"integrity": "sha512-QPaNt/nmE2bLGQa9b7wwyRJoLZ7pN6rcyXvzU0YCmivmJyq1BVo94G98tStRWkoD1RgDX5C+dPlhhHzNdu/W/w==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"@types/node": ">=18"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"@types/node": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"apps/docs": {
|
||||
"version": "0.0.6",
|
||||
"version": "0.0.5",
|
||||
"devDependencies": {
|
||||
"mintlify": "^4.2.111"
|
||||
}
|
||||
},
|
||||
"apps/extension": {
|
||||
"version": "0.25.6",
|
||||
"version": "0.25.5",
|
||||
"dependencies": {
|
||||
"task-master-ai": "*"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@dnd-kit/core": "^6.3.1",
|
||||
"@dnd-kit/modifiers": "^9.0.0",
|
||||
@@ -253,7 +170,6 @@
|
||||
"react-dom": "^19.0.0",
|
||||
"tailwind-merge": "^3.3.1",
|
||||
"tailwindcss": "4.1.11",
|
||||
"task-master-ai": "*",
|
||||
"typescript": "^5.9.2"
|
||||
},
|
||||
"engines": {
|
||||
@@ -262,7 +178,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/amazon-bedrock": {
|
||||
"version": "2.2.12",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -280,7 +195,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/anthropic": {
|
||||
"version": "1.2.12",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -295,7 +209,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/azure": {
|
||||
"version": "1.3.25",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/openai": "1.3.24",
|
||||
@@ -311,7 +224,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/google": {
|
||||
"version": "1.2.22",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -326,7 +238,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/google-vertex": {
|
||||
"version": "2.2.27",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/anthropic": "1.2.12",
|
||||
@@ -344,7 +255,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/groq": {
|
||||
"version": "1.2.9",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -359,7 +269,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/mistral": {
|
||||
"version": "1.2.8",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -374,7 +283,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/openai": {
|
||||
"version": "1.3.24",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -389,7 +297,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/openai-compatible": {
|
||||
"version": "0.2.16",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -404,7 +311,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/perplexity": {
|
||||
"version": "1.1.9",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -419,7 +325,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/provider": {
|
||||
"version": "1.1.3",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"json-schema": "^0.4.0"
|
||||
@@ -430,7 +335,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/provider-utils": {
|
||||
"version": "2.2.8",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -446,7 +350,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/react": {
|
||||
"version": "1.2.12",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider-utils": "2.2.8",
|
||||
@@ -469,7 +372,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/ui-utils": {
|
||||
"version": "1.2.11",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -485,7 +387,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@ai-sdk/xai": {
|
||||
"version": "1.2.18",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/openai-compatible": "0.2.16",
|
||||
@@ -501,7 +402,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@openrouter/ai-sdk-provider": {
|
||||
"version": "0.4.6",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.0.9",
|
||||
@@ -516,7 +416,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider": {
|
||||
"version": "1.0.9",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"json-schema": "^0.4.0"
|
||||
@@ -527,7 +426,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider-utils": {
|
||||
"version": "2.1.10",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.0.9",
|
||||
@@ -549,7 +447,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/ai": {
|
||||
"version": "4.3.19",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -574,7 +471,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/ai-sdk-provider-gemini-cli": {
|
||||
"version": "0.1.3",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"dependencies": {
|
||||
@@ -608,7 +504,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/ollama-ai-provider": {
|
||||
"version": "1.2.0",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "^1.0.0",
|
||||
@@ -629,7 +524,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/openai": {
|
||||
"version": "4.104.0",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@types/node": "^18.11.18",
|
||||
@@ -658,7 +552,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/openai/node_modules/@types/node": {
|
||||
"version": "18.19.127",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"undici-types": "~5.26.4"
|
||||
@@ -666,12 +559,10 @@
|
||||
},
|
||||
"apps/extension/node_modules/openai/node_modules/undici-types": {
|
||||
"version": "5.26.5",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"apps/extension/node_modules/task-master-ai": {
|
||||
"version": "0.27.1",
|
||||
"dev": true,
|
||||
"license": "MIT WITH Commons-Clause",
|
||||
"workspaces": [
|
||||
"apps/*",
|
||||
@@ -743,7 +634,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/zod": {
|
||||
"version": "3.25.76",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/colinhacks"
|
||||
@@ -751,7 +641,6 @@
|
||||
},
|
||||
"apps/extension/node_modules/zod-to-json-schema": {
|
||||
"version": "3.24.6",
|
||||
"dev": true,
|
||||
"license": "ISC",
|
||||
"peerDependencies": {
|
||||
"zod": "^3.24.1"
|
||||
@@ -1040,7 +929,6 @@
|
||||
},
|
||||
"node_modules/@anthropic-ai/sdk": {
|
||||
"version": "0.39.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@types/node": "^18.11.18",
|
||||
@@ -1054,7 +942,6 @@
|
||||
},
|
||||
"node_modules/@anthropic-ai/sdk/node_modules/@types/node": {
|
||||
"version": "18.19.127",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"undici-types": "~5.26.4"
|
||||
@@ -1062,7 +949,6 @@
|
||||
},
|
||||
"node_modules/@anthropic-ai/sdk/node_modules/undici-types": {
|
||||
"version": "5.26.5",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/@ark/schema": {
|
||||
@@ -8522,7 +8408,6 @@
|
||||
},
|
||||
"node_modules/@types/diff-match-patch": {
|
||||
"version": "1.0.36",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/@types/es-aggregate-error": {
|
||||
@@ -8693,7 +8578,6 @@
|
||||
},
|
||||
"node_modules/@types/node-fetch": {
|
||||
"version": "2.6.13",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@types/node": "*",
|
||||
@@ -9139,7 +9023,6 @@
|
||||
},
|
||||
"node_modules/abort-controller": {
|
||||
"version": "3.0.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"event-target-shim": "^5.0.0"
|
||||
@@ -9201,7 +9084,6 @@
|
||||
},
|
||||
"node_modules/agentkeepalive": {
|
||||
"version": "4.6.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"humanize-ms": "^1.2.1"
|
||||
@@ -9784,7 +9666,6 @@
|
||||
},
|
||||
"node_modules/asynckit": {
|
||||
"version": "0.4.0",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/auto-bind": {
|
||||
@@ -11562,7 +11443,6 @@
|
||||
},
|
||||
"node_modules/combined-stream": {
|
||||
"version": "1.0.8",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"delayed-stream": "~1.0.0"
|
||||
@@ -12201,7 +12081,6 @@
|
||||
},
|
||||
"node_modules/delayed-stream": {
|
||||
"version": "1.0.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=0.4.0"
|
||||
@@ -12224,7 +12103,6 @@
|
||||
},
|
||||
"node_modules/dequal": {
|
||||
"version": "2.0.3",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=6"
|
||||
@@ -12344,7 +12222,6 @@
|
||||
},
|
||||
"node_modules/diff-match-patch": {
|
||||
"version": "1.0.5",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0"
|
||||
},
|
||||
"node_modules/diff-sequences": {
|
||||
@@ -12844,7 +12721,6 @@
|
||||
},
|
||||
"node_modules/es-set-tostringtag": {
|
||||
"version": "2.1.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"es-errors": "^1.3.0",
|
||||
@@ -13133,7 +13009,6 @@
|
||||
},
|
||||
"node_modules/event-target-shim": {
|
||||
"version": "5.0.1",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=6"
|
||||
@@ -14172,7 +14047,6 @@
|
||||
},
|
||||
"node_modules/form-data": {
|
||||
"version": "4.0.4",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"asynckit": "^0.4.0",
|
||||
@@ -14187,7 +14061,6 @@
|
||||
},
|
||||
"node_modules/form-data-encoder": {
|
||||
"version": "1.7.2",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/format": {
|
||||
@@ -14199,7 +14072,6 @@
|
||||
},
|
||||
"node_modules/formdata-node": {
|
||||
"version": "4.4.1",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"node-domexception": "1.0.0",
|
||||
@@ -14860,7 +14732,6 @@
|
||||
},
|
||||
"node_modules/has-tostringtag": {
|
||||
"version": "1.0.2",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"has-symbols": "^1.0.3"
|
||||
@@ -15435,7 +15306,6 @@
|
||||
},
|
||||
"node_modules/humanize-ms": {
|
||||
"version": "1.2.1",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"ms": "^2.0.0"
|
||||
@@ -18222,7 +18092,6 @@
|
||||
},
|
||||
"node_modules/jsondiffpatch": {
|
||||
"version": "0.6.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@types/diff-match-patch": "^1.0.36",
|
||||
@@ -20404,7 +20273,6 @@
|
||||
},
|
||||
"node_modules/nanoid": {
|
||||
"version": "3.3.11",
|
||||
"devOptional": true,
|
||||
"funding": [
|
||||
{
|
||||
"type": "github",
|
||||
@@ -20513,7 +20381,6 @@
|
||||
},
|
||||
"node_modules/node-domexception": {
|
||||
"version": "1.0.0",
|
||||
"dev": true,
|
||||
"funding": [
|
||||
{
|
||||
"type": "github",
|
||||
@@ -21378,7 +21245,6 @@
|
||||
},
|
||||
"node_modules/partial-json": {
|
||||
"version": "0.1.7",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/patch-console": {
|
||||
@@ -22151,7 +22017,6 @@
|
||||
},
|
||||
"node_modules/react": {
|
||||
"version": "19.1.1",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=0.10.0"
|
||||
@@ -23278,7 +23143,6 @@
|
||||
},
|
||||
"node_modules/secure-json-parse": {
|
||||
"version": "2.7.0",
|
||||
"dev": true,
|
||||
"license": "BSD-3-Clause"
|
||||
},
|
||||
"node_modules/selderee": {
|
||||
@@ -24326,26 +24190,6 @@
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/strip-literal": {
|
||||
"version": "3.1.0",
|
||||
"resolved": "https://registry.npmjs.org/strip-literal/-/strip-literal-3.1.0.tgz",
|
||||
"integrity": "sha512-8r3mkIM/2+PpjHoOtiAW8Rg3jJLHaV7xPwG+YRGrv6FP0wwk/toTpATxWYOW0BKdWwl82VT2tFYi5DlROa0Mxg==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"js-tokens": "^9.0.1"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/antfu"
|
||||
}
|
||||
},
|
||||
"node_modules/strip-literal/node_modules/js-tokens": {
|
||||
"version": "9.0.1",
|
||||
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz",
|
||||
"integrity": "sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==",
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/strnum": {
|
||||
"version": "2.1.1",
|
||||
"funding": [
|
||||
@@ -24523,7 +24367,6 @@
|
||||
},
|
||||
"node_modules/swr": {
|
||||
"version": "2.3.6",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"dequal": "^2.0.3",
|
||||
@@ -24716,7 +24559,6 @@
|
||||
},
|
||||
"node_modules/throttleit": {
|
||||
"version": "2.1.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
@@ -25819,7 +25661,6 @@
|
||||
},
|
||||
"node_modules/use-sync-external-store": {
|
||||
"version": "1.5.0",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"peerDependencies": {
|
||||
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
|
||||
@@ -26228,7 +26069,6 @@
|
||||
},
|
||||
"node_modules/web-streams-polyfill": {
|
||||
"version": "4.0.0-beta.3",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">= 14"
|
||||
@@ -27222,8 +27062,22 @@
|
||||
},
|
||||
"packages/claude-code-plugin": {
|
||||
"name": "@tm/claude-code-plugin",
|
||||
"version": "0.0.2",
|
||||
"license": "MIT WITH Commons-Clause"
|
||||
"license": "MIT WITH Commons-Clause",
|
||||
"devDependencies": {
|
||||
"@types/node": "^20.0.0",
|
||||
"tsx": "^4.20.4",
|
||||
"typescript": "^5.9.2"
|
||||
}
|
||||
},
|
||||
"packages/claude-code-plugin/node_modules/@types/node": {
|
||||
"version": "20.19.20",
|
||||
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.20.tgz",
|
||||
"integrity": "sha512-2Q7WS25j4pS1cS8yw3d6buNCVJukOTeQ39bAnwR6sOJbaxvyCGebzTMypDFN82CxBLnl+lSWVdCCWbRY6y9yZQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"undici-types": "~6.21.0"
|
||||
}
|
||||
},
|
||||
"packages/tm-core": {
|
||||
"name": "@tm/core",
|
||||
@@ -27235,7 +27089,6 @@
|
||||
"devDependencies": {
|
||||
"@types/node": "^22.10.5",
|
||||
"@vitest/coverage-v8": "^3.2.4",
|
||||
"strip-literal": "3.1.0",
|
||||
"typescript": "^5.9.2",
|
||||
"vitest": "^3.2.4"
|
||||
}
|
||||
@@ -27545,8 +27398,6 @@
|
||||
},
|
||||
"packages/tm-core/node_modules/vitest": {
|
||||
"version": "3.2.4",
|
||||
"resolved": "https://registry.npmjs.org/vitest/-/vitest-3.2.4.tgz",
|
||||
"integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
|
||||
@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.29.0",
"version": "0.28.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",

@@ -1,5 +1,3 @@
# @tm/ai-sdk-provider-grok-cli

## null

## null

@@ -4,6 +4,4 @@

## null

## null

## 1.0.1

@@ -1,3 +0,0 @@
# @tm/claude-code-plugin

## 0.0.2
@@ -1,6 +1,6 @@
{
"name": "@tm/claude-code-plugin",
"version": "0.0.2",
"version": "0.0.1",
"description": "Task Master AI plugin for Claude Code - AI-powered task management with commands, agents, and MCP integration",
"type": "module",
"private": true,

@@ -4,8 +4,6 @@

## null

## null

## 0.26.1

All notable changes to the @task-master/tm-core package will be documented in this file.

@@ -37,8 +37,7 @@
"@types/node": "^22.10.5",
"@vitest/coverage-v8": "^3.2.4",
"typescript": "^5.9.2",
"vitest": "^3.2.4",
"strip-literal": "3.1.0"
"vitest": "^3.2.4"
},
"files": ["src", "README.md", "CHANGELOG.md"],
"keywords": ["task-management", "typescript", "ai", "prd", "parser"],
|
||||
@@ -21,21 +21,16 @@ const CredentialStoreSpy = vi.fn();
|
||||
vi.mock('./credential-store.js', () => {
|
||||
return {
|
||||
CredentialStore: class {
|
||||
static getInstance(config?: any) {
|
||||
return new (this as any)(config);
|
||||
}
|
||||
static resetInstance() {
|
||||
// Mock reset instance method
|
||||
}
|
||||
constructor(config: any) {
|
||||
CredentialStoreSpy(config);
|
||||
this.getCredentials = vi.fn(() => null);
|
||||
}
|
||||
getCredentials(_options?: any) {
|
||||
getCredentials() {
|
||||
return null;
|
||||
}
|
||||
saveCredentials() {}
|
||||
clearCredentials() {}
|
||||
hasCredentials() {
|
||||
hasValidCredentials() {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
@@ -90,7 +85,7 @@ describe('AuthManager Singleton', () => {
|
||||
expect(instance1).toBe(instance2);
|
||||
});
|
||||
|
||||
it('should use config on first call', async () => {
|
||||
it('should use config on first call', () => {
|
||||
const config = {
|
||||
baseUrl: 'https://test.auth.com',
|
||||
configDir: '/test/config',
|
||||
@@ -106,7 +101,7 @@ describe('AuthManager Singleton', () => {
|
||||
|
||||
// Verify the config is passed to internal components through observable behavior
|
||||
// getCredentials would look in the configured file path
|
||||
const credentials = await instance.getCredentials();
|
||||
const credentials = instance.getCredentials();
|
||||
expect(credentials).toBeNull(); // File doesn't exist, but config was propagated correctly
|
||||
});
|
||||
|
||||
@@ -36,10 +36,7 @@ export class AuthManager {
|
||||
this.oauthService = new OAuthService(this.credentialStore, config);
|
||||
|
||||
// Initialize Supabase client with session restoration
|
||||
// Fire-and-forget with catch handler to prevent unhandled rejections
|
||||
this.initializeSupabaseSession().catch(() => {
|
||||
// Errors are already logged in initializeSupabaseSession
|
||||
});
|
||||
this.initializeSupabaseSession();
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -81,8 +78,6 @@ export class AuthManager {
|
||||
|
||||
/**
|
||||
* Get stored authentication credentials
|
||||
* Returns credentials as-is (even if expired). Refresh must be triggered explicitly
|
||||
* via refreshToken() or will occur automatically when using the Supabase client for API calls.
|
||||
*/
|
||||
getCredentials(): AuthCredentials | null {
|
||||
return this.credentialStore.getCredentials();
|
||||
@@ -167,11 +162,10 @@ export class AuthManager {
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if authenticated (credentials exist, regardless of expiration)
|
||||
* @returns true if credentials are stored, including expired credentials
|
||||
* Check if authenticated
|
||||
*/
|
||||
isAuthenticated(): boolean {
|
||||
return this.credentialStore.hasCredentials();
|
||||
return this.credentialStore.hasValidCredentials();
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -185,7 +179,7 @@ export class AuthManager {
|
||||
/**
|
||||
* Update the user context (org/brief selection)
|
||||
*/
|
||||
updateContext(context: Partial<UserContext>): void {
|
||||
async updateContext(context: Partial<UserContext>): Promise<void> {
|
||||
const credentials = this.getCredentials();
|
||||
if (!credentials) {
|
||||
throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
|
||||
@@ -211,7 +205,7 @@ export class AuthManager {
|
||||
/**
|
||||
* Clear the user context
|
||||
*/
|
||||
clearContext(): void {
|
||||
async clearContext(): Promise<void> {
|
||||
const credentials = this.getCredentials();
|
||||
if (!credentials) {
|
||||
throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
|
||||
|
||||
@@ -7,13 +7,11 @@ import path from 'path';
|
||||
import { AuthConfig } from './types.js';
|
||||
|
||||
// Single base domain for all URLs
|
||||
// Runtime vars (TM_*) take precedence over build-time vars (TM_PUBLIC_*)
|
||||
// Build-time: process.env.TM_PUBLIC_BASE_DOMAIN gets replaced by tsdown's env option
|
||||
// Runtime: process.env.TM_BASE_DOMAIN can override for staging/development
|
||||
// Build-time: process.env.TM_PUBLIC_BASE_DOMAIN gets replaced by tsup's env option
|
||||
// Default: https://tryhamster.com for production
|
||||
const BASE_DOMAIN =
|
||||
process.env.TM_BASE_DOMAIN || // Runtime override (for staging/tux)
|
||||
process.env.TM_PUBLIC_BASE_DOMAIN; // Build-time (baked into compiled code)
|
||||
process.env.TM_PUBLIC_BASE_DOMAIN || // This gets replaced at build time by tsup
|
||||
'https://tryhamster.com';
|
||||
|
||||
/**
|
||||
* Default authentication configuration
|
||||
@@ -21,7 +19,7 @@ const BASE_DOMAIN =
|
||||
*/
|
||||
export const DEFAULT_AUTH_CONFIG: AuthConfig = {
|
||||
// Base domain for all services
|
||||
baseUrl: BASE_DOMAIN!,
|
||||
baseUrl: BASE_DOMAIN,
|
||||
|
||||
// Configuration directory and file paths
|
||||
configDir: path.join(os.homedir(), '.taskmaster'),
|
||||
|
||||
@@ -1,308 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Unit tests for CredentialStore token expiration handling
|
||||
*/
|
||||
|
||||
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import { CredentialStore } from './credential-store';
|
||||
import type { AuthCredentials } from './types';
|
||||
|
||||
describe('CredentialStore - Token Expiration', () => {
|
||||
let credentialStore: CredentialStore;
|
||||
let tmpDir: string;
|
||||
let authFile: string;
|
||||
|
||||
beforeEach(() => {
|
||||
// Create temp directory for test credentials
|
||||
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-cred-test-'));
|
||||
authFile = path.join(tmpDir, 'auth.json');
|
||||
|
||||
// Create instance with test config
|
||||
CredentialStore.resetInstance();
|
||||
credentialStore = CredentialStore.getInstance({
|
||||
configDir: tmpDir,
|
||||
configFile: authFile
|
||||
});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up
|
||||
try {
|
||||
if (fs.existsSync(tmpDir)) {
|
||||
fs.rmSync(tmpDir, { recursive: true, force: true });
|
||||
}
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
CredentialStore.resetInstance();
|
||||
});
|
||||
|
||||
describe('Expiration Detection', () => {
|
||||
it('should return null for expired token', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).toBeNull();
|
||||
});
|
||||
|
||||
it('should return credentials for valid token', () => {
|
||||
const validCredentials: AuthCredentials = {
|
||||
token: 'valid-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(validCredentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).not.toBeNull();
|
||||
expect(retrieved?.token).toBe('valid-token');
|
||||
});
|
||||
|
||||
it('should return expired token when allowExpired is true', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: true });
|
||||
|
||||
expect(retrieved).not.toBeNull();
|
||||
expect(retrieved?.token).toBe('expired-token');
|
||||
});
|
||||
|
||||
it('should return expired token by default (allowExpired defaults to true)', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token-default',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
// Call without options - should default to allowExpired: true
|
||||
const retrieved = credentialStore.getCredentials();
|
||||
|
||||
expect(retrieved).not.toBeNull();
|
||||
expect(retrieved?.token).toBe('expired-token-default');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Clock Skew Tolerance', () => {
|
||||
it('should reject token expiring within 30-second buffer', () => {
|
||||
// Token expires in 15 seconds (within 30-second buffer)
|
||||
const almostExpiredCredentials: AuthCredentials = {
|
||||
token: 'almost-expired-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 15000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(almostExpiredCredentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).toBeNull();
|
||||
});
|
||||
|
||||
it('should accept token expiring outside 30-second buffer', () => {
|
||||
// Token expires in 60 seconds (outside 30-second buffer)
|
||||
const validCredentials: AuthCredentials = {
|
||||
token: 'valid-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(validCredentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).not.toBeNull();
|
||||
expect(retrieved?.token).toBe('valid-token');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Timestamp Format Handling', () => {
|
||||
it('should handle ISO string timestamps', () => {
|
||||
const credentials: AuthCredentials = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 3600000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(credentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).not.toBeNull();
|
||||
expect(typeof retrieved?.expiresAt).toBe('number'); // Normalized to number
|
||||
});
|
||||
|
||||
it('should handle numeric timestamps', () => {
|
||||
const credentials: AuthCredentials = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: Date.now() + 3600000,
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(credentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).not.toBeNull();
|
||||
expect(typeof retrieved?.expiresAt).toBe('number');
|
||||
});
|
||||
|
||||
it('should return null for invalid timestamp format', () => {
|
||||
// Manually write invalid timestamp to file
|
||||
const invalidCredentials = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: 'invalid-date',
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
fs.writeFileSync(authFile, JSON.stringify(invalidCredentials), {
|
||||
mode: 0o600
|
||||
});
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).toBeNull();
|
||||
});
|
||||
|
||||
it('should return null for missing expiresAt', () => {
|
||||
const credentialsWithoutExpiry = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
fs.writeFileSync(authFile, JSON.stringify(credentialsWithoutExpiry), {
|
||||
mode: 0o600
|
||||
});
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
expect(retrieved).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Storage Persistence', () => {
|
||||
it('should persist expiresAt as ISO string', () => {
|
||||
const expiryTime = Date.now() + 3600000;
|
||||
const credentials: AuthCredentials = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: expiryTime,
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(credentials);
|
||||
|
||||
// Read raw file to verify format
|
||||
const fileContent = fs.readFileSync(authFile, 'utf-8');
|
||||
const parsed = JSON.parse(fileContent);
|
||||
|
||||
// Should be stored as ISO string
|
||||
expect(typeof parsed.expiresAt).toBe('string');
|
||||
expect(parsed.expiresAt).toMatch(/^\d{4}-\d{2}-\d{2}T/); // ISO format
|
||||
});
|
||||
|
||||
it('should normalize timestamp on retrieval', () => {
|
||||
const credentials: AuthCredentials = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 3600000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(credentials);
|
||||
|
||||
const retrieved = credentialStore.getCredentials({ allowExpired: false });
|
||||
|
||||
// Should be normalized to number for runtime use
|
||||
expect(typeof retrieved?.expiresAt).toBe('number');
|
||||
});
|
||||
});
|
||||
|
||||
describe('hasCredentials', () => {
|
||||
it('should return true for expired credentials', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
expect(credentialStore.hasCredentials()).toBe(true);
|
||||
});
|
||||
|
||||
it('should return true for valid credentials', () => {
|
||||
const validCredentials: AuthCredentials = {
|
||||
token: 'valid-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 3600000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(validCredentials);
|
||||
|
||||
expect(credentialStore.hasCredentials()).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false when no credentials exist', () => {
|
||||
expect(credentialStore.hasCredentials()).toBe(false);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -197,7 +197,7 @@ describe('CredentialStore', () => {
|
||||
JSON.stringify(mockCredentials)
|
||||
);
|
||||
|
||||
const result = store.getCredentials({ allowExpired: false });
|
||||
const result = store.getCredentials();
|
||||
|
||||
expect(result).toBeNull();
|
||||
expect(mockLogger.warn).toHaveBeenCalledWith(
|
||||
@@ -226,31 +226,6 @@ describe('CredentialStore', () => {
|
||||
expect(result).not.toBeNull();
|
||||
expect(result?.token).toBe('expired-token');
|
||||
});
|
||||
|
||||
it('should return expired tokens by default (allowExpired defaults to true)', () => {
|
||||
const expiredTimestamp = Date.now() - 3600000; // 1 hour ago
|
||||
const mockCredentials = {
|
||||
token: 'expired-token-default',
|
||||
userId: 'user-expired',
|
||||
expiresAt: expiredTimestamp,
|
||||
tokenType: 'standard',
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
vi.mocked(fs.existsSync).mockReturnValue(true);
|
||||
vi.mocked(fs.readFileSync).mockReturnValue(
|
||||
JSON.stringify(mockCredentials)
|
||||
);
|
||||
|
||||
// Call without options - should default to allowExpired: true
|
||||
const result = store.getCredentials();
|
||||
|
||||
expect(result).not.toBeNull();
|
||||
expect(result?.token).toBe('expired-token-default');
|
||||
expect(mockLogger.warn).not.toHaveBeenCalledWith(
|
||||
expect.stringContaining('Authentication token has expired')
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('saveCredentials with timestamp normalization', () => {
|
||||
@@ -476,7 +451,7 @@ describe('CredentialStore', () => {
|
||||
});
|
||||
});
|
||||
|
||||
describe('hasCredentials', () => {
|
||||
describe('hasValidCredentials', () => {
|
||||
it('should return true when valid unexpired credentials exist', () => {
|
||||
const futureDate = new Date(Date.now() + 3600000); // 1 hour from now
|
||||
const credentials = {
|
||||
@@ -490,10 +465,10 @@ describe('CredentialStore', () => {
|
||||
vi.mocked(fs.existsSync).mockReturnValue(true);
|
||||
vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
|
||||
|
||||
expect(store.hasCredentials()).toBe(true);
|
||||
expect(store.hasValidCredentials()).toBe(true);
|
||||
});
|
||||
|
||||
it('should return true when credentials are expired', () => {
|
||||
it('should return false when credentials are expired', () => {
|
||||
const pastDate = new Date(Date.now() - 3600000); // 1 hour ago
|
||||
const credentials = {
|
||||
token: 'expired-token',
|
||||
@@ -506,13 +481,13 @@ describe('CredentialStore', () => {
|
||||
vi.mocked(fs.existsSync).mockReturnValue(true);
|
||||
vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
|
||||
|
||||
expect(store.hasCredentials()).toBe(true);
|
||||
expect(store.hasValidCredentials()).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false when no credentials exist', () => {
|
||||
vi.mocked(fs.existsSync).mockReturnValue(false);
|
||||
|
||||
expect(store.hasCredentials()).toBe(false);
|
||||
expect(store.hasValidCredentials()).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false when file contains invalid JSON', () => {
|
||||
@@ -520,7 +495,7 @@ describe('CredentialStore', () => {
|
||||
vi.mocked(fs.readFileSync).mockReturnValue('invalid json {');
|
||||
vi.mocked(fs.renameSync).mockImplementation(() => undefined);
|
||||
|
||||
expect(store.hasCredentials()).toBe(false);
|
||||
expect(store.hasValidCredentials()).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false for credentials without expiry', () => {
|
||||
@@ -535,7 +510,7 @@ describe('CredentialStore', () => {
|
||||
vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
|
||||
|
||||
// Credentials without expiry are considered invalid
|
||||
expect(store.hasCredentials()).toBe(false);
|
||||
expect(store.hasValidCredentials()).toBe(false);
|
||||
|
||||
// Should log warning about missing expiration
|
||||
expect(mockLogger.warn).toHaveBeenCalledWith(
|
||||
@@ -543,14 +518,14 @@ describe('CredentialStore', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should use allowExpired=true', () => {
|
||||
it('should use allowExpired=false by default', () => {
|
||||
// Spy on getCredentials to verify it's called with correct params
|
||||
const getCredentialsSpy = vi.spyOn(store, 'getCredentials');
|
||||
|
||||
vi.mocked(fs.existsSync).mockReturnValue(false);
|
||||
store.hasCredentials();
|
||||
store.hasValidCredentials();
|
||||
|
||||
expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: true });
|
||||
expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: false });
|
||||
});
|
||||
});
|
||||
|
||||
|
||||
@@ -24,8 +24,6 @@ export class CredentialStore {
	private config: AuthConfig;
	// Clock skew tolerance for expiry checks (30 seconds)
	private readonly CLOCK_SKEW_MS = 30_000;
	// Track if we've already warned about missing expiration to avoid spam
	private hasWarnedAboutMissingExpiration = false;

	private constructor(config?: Partial<AuthConfig>) {
		this.config = getAuthConfig(config);
@@ -56,12 +54,9 @@

	/**
	 * Get stored authentication credentials
	 * @param options.allowExpired - Whether to return expired credentials (default: true)
	 * @returns AuthCredentials with expiresAt as number (milliseconds) for runtime use
	 */
	getCredentials({
		allowExpired = true
	}: { allowExpired?: boolean } = {}): AuthCredentials | null {
	getCredentials(options?: { allowExpired?: boolean }): AuthCredentials | null {
		try {
			if (!fs.existsSync(this.config.configFile)) {
				return null;
@@ -86,11 +81,7 @@

			// Validate expiration time for tokens
			if (expiresAtMs === undefined) {
				// Only log this warning once to avoid spam during auth flows
				if (!this.hasWarnedAboutMissingExpiration) {
					this.logger.warn('No valid expiration time provided for token');
					this.hasWarnedAboutMissingExpiration = true;
				}
				this.logger.warn('No valid expiration time provided for token');
				return null;
			}

@@ -99,6 +90,7 @@

			// Check if the token has expired (with clock skew tolerance)
			const now = Date.now();
			const allowExpired = options?.allowExpired ?? false;
			if (now >= expiresAtMs - this.CLOCK_SKEW_MS && !allowExpired) {
				this.logger.warn(
					'Authentication token has expired or is about to expire',
@@ -111,7 +103,7 @@
				return null;
			}

			// Return credentials (even if expired) to enable refresh flows
			// Return valid token
			return authData;
		} catch (error) {
			this.logger.error(
@@ -180,9 +172,6 @@
				mode: 0o600
			});
			fs.renameSync(tempFile, this.config.configFile);

			// Reset the warning flag so it can be shown again for future invalid tokens
			this.hasWarnedAboutMissingExpiration = false;
		} catch (error) {
			throw new AuthenticationError(
				`Failed to save auth credentials: ${(error as Error).message}`,
@@ -210,11 +199,10 @@
	}

	/**
	 * Check if credentials exist (regardless of expiration status)
	 * @returns true if credentials are stored, including expired credentials
	 * Check if credentials exist and are valid
	 */
	hasCredentials(): boolean {
		const credentials = this.getCredentials({ allowExpired: true });
	hasValidCredentials(): boolean {
		const credentials = this.getCredentials({ allowExpired: false });
		return credentials !== null;
	}

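Both sides of the CredentialStore hunk share the same expiry comparison; only the `allowExpired` default changes. A minimal sketch of that check, where the helper name and surrounding wiring are illustrative only while the 30-second skew constant and comparison come from the diff above:

```typescript
// Illustrative helper - not part of the codebase. Mirrors the
// `now >= expiresAtMs - CLOCK_SKEW_MS && !allowExpired` check above.
const CLOCK_SKEW_MS = 30_000;

function tokenIsUsable(
	expiresAtMs: number | undefined,
	allowExpired: boolean
): boolean {
	if (expiresAtMs === undefined) return false; // no expiry -> treated as invalid
	const expiredOrAboutTo = Date.now() >= expiresAtMs - CLOCK_SKEW_MS;
	return allowExpired || !expiredOrAboutTo;
}

// A token expiring 15s from now falls inside the skew window:
console.log(tokenIsUsable(Date.now() + 15_000, false)); // false -> caller should refresh
console.log(tokenIsUsable(Date.now() + 15_000, true)); // true  -> returned for refresh flows
```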
@@ -281,26 +281,15 @@ export class OAuthService {
|
||||
// Exchange code for session using PKCE
|
||||
const session = await this.supabaseClient.exchangeCodeForSession(code);
|
||||
|
||||
// Calculate expiration - can be overridden with TM_TOKEN_EXPIRY_MINUTES
|
||||
let expiresAt: string | undefined;
|
||||
const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
|
||||
if (tokenExpiryMinutes) {
|
||||
const minutes = parseInt(tokenExpiryMinutes);
|
||||
expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
|
||||
this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
|
||||
} else {
|
||||
expiresAt = session.expires_at
|
||||
? new Date(session.expires_at * 1000).toISOString()
|
||||
: undefined;
|
||||
}
|
||||
|
||||
// Save authentication data
|
||||
const authData: AuthCredentials = {
|
||||
token: session.access_token,
|
||||
refreshToken: session.refresh_token,
|
||||
userId: session.user.id,
|
||||
email: session.user.email,
|
||||
expiresAt,
|
||||
expiresAt: session.expires_at
|
||||
? new Date(session.expires_at * 1000).toISOString()
|
||||
: undefined,
|
||||
tokenType: 'standard',
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
@@ -351,18 +340,10 @@ export class OAuthService {
|
||||
// Get user info from the session
|
||||
const user = await this.supabaseClient.getUser();
|
||||
|
||||
// Calculate expiration time - can be overridden with TM_TOKEN_EXPIRY_MINUTES
|
||||
let expiresAt: string | undefined;
|
||||
const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
|
||||
if (tokenExpiryMinutes) {
|
||||
const minutes = parseInt(tokenExpiryMinutes);
|
||||
expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
|
||||
this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
|
||||
} else {
|
||||
expiresAt = expiresIn
|
||||
? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
|
||||
: undefined;
|
||||
}
|
||||
// Calculate expiration time
|
||||
const expiresAt = expiresIn
|
||||
? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
|
||||
: undefined;
|
||||
|
||||
// Save authentication data
|
||||
const authData: AuthCredentials = {
|
||||
@@ -370,7 +351,7 @@ export class OAuthService {
|
||||
refreshToken: refreshToken || undefined,
|
||||
userId: user?.id || 'unknown',
|
||||
email: user?.email,
|
||||
expiresAt,
|
||||
expiresAt: expiresAt,
|
||||
tokenType: 'standard',
|
||||
savedAt: new Date().toISOString()
|
||||
};
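The two variants of the expiry calculation in the OAuthService hunks above differ only in whether the `TM_TOKEN_EXPIRY_MINUTES` override is consulted. A compact sketch of the conversion, with the function name being illustrative and `expires_at` given in epoch seconds as in the Supabase session:

```typescript
// Sketch only - mirrors the expiresAt logic in the OAuthService hunks above.
function computeExpiresAt(sessionExpiresAtSeconds?: number): string | undefined {
	// The override is present on only one side of the diff.
	const overrideMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
	if (overrideMinutes) {
		const minutes = parseInt(overrideMinutes, 10);
		return new Date(Date.now() + minutes * 60 * 1000).toISOString();
	}
	// Supabase reports expires_at in epoch seconds; credentials store an ISO string.
	return sessionExpiresAtSeconds
		? new Date(sessionExpiresAtSeconds * 1000).toISOString()
		: undefined;
}
```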
@@ -98,11 +98,11 @@ export class SupabaseSessionStorage implements SupportedStorage {
|
||||
// Only handle Supabase session keys
|
||||
if (key === STORAGE_KEY || key.includes('auth-token')) {
|
||||
try {
|
||||
this.logger.info('Supabase called setItem - storing refreshed session');
|
||||
|
||||
// Parse the session and update our credentials
|
||||
const sessionUpdates = this.parseSessionToCredentials(value);
|
||||
const existingCredentials = this.store.getCredentials();
|
||||
const existingCredentials = this.store.getCredentials({
|
||||
allowExpired: true
|
||||
});
|
||||
|
||||
if (sessionUpdates.token) {
|
||||
const updatedCredentials: AuthCredentials = {
|
||||
@@ -113,9 +113,6 @@ export class SupabaseSessionStorage implements SupportedStorage {
|
||||
} as AuthCredentials;
|
||||
|
||||
this.store.saveCredentials(updatedCredentials);
|
||||
this.logger.info(
|
||||
'Successfully saved refreshed credentials from Supabase'
|
||||
);
|
||||
}
|
||||
} catch (error) {
|
||||
this.logger.error('Error setting session:', error);
|
||||
|
||||
@@ -16,7 +16,6 @@ export interface AuthCredentials {
export interface UserContext {
	orgId?: string;
	orgName?: string;
	orgSlug?: string;
	briefId?: string;
	briefName?: string;
	updatedAt: string;

@@ -17,11 +17,10 @@ export class SupabaseAuthClient {
|
||||
private client: SupabaseJSClient | null = null;
|
||||
private sessionStorage: SupabaseSessionStorage;
|
||||
private logger = getLogger('SupabaseAuthClient');
|
||||
private credentialStore: CredentialStore;
|
||||
|
||||
constructor() {
|
||||
this.credentialStore = CredentialStore.getInstance();
|
||||
this.sessionStorage = new SupabaseSessionStorage(this.credentialStore);
|
||||
const credentialStore = CredentialStore.getInstance();
|
||||
this.sessionStorage = new SupabaseSessionStorage(credentialStore);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -29,17 +28,13 @@ export class SupabaseAuthClient {
|
||||
*/
|
||||
getClient(): SupabaseJSClient {
|
||||
if (!this.client) {
|
||||
// Get Supabase configuration from environment
|
||||
// Runtime vars (TM_*) take precedence over build-time vars (TM_PUBLIC_*)
|
||||
const supabaseUrl =
|
||||
process.env.TM_SUPABASE_URL || process.env.TM_PUBLIC_SUPABASE_URL;
|
||||
const supabaseAnonKey =
|
||||
process.env.TM_SUPABASE_ANON_KEY ||
|
||||
process.env.TM_PUBLIC_SUPABASE_ANON_KEY;
|
||||
// Get Supabase configuration from environment - using TM_PUBLIC prefix
|
||||
const supabaseUrl = process.env.TM_PUBLIC_SUPABASE_URL;
|
||||
const supabaseAnonKey = process.env.TM_PUBLIC_SUPABASE_ANON_KEY;
|
||||
|
||||
if (!supabaseUrl || !supabaseAnonKey) {
|
||||
throw new AuthenticationError(
|
||||
'Supabase configuration missing. Please set TM_SUPABASE_URL and TM_SUPABASE_ANON_KEY (runtime) or TM_PUBLIC_SUPABASE_URL and TM_PUBLIC_SUPABASE_ANON_KEY (build-time) environment variables.',
|
||||
'Supabase configuration missing. Please set TM_PUBLIC_SUPABASE_URL and TM_PUBLIC_SUPABASE_ANON_KEY environment variables.',
|
||||
'CONFIG_MISSING'
|
||||
);
|
||||
}
|
||||
|
||||
@@ -52,10 +52,7 @@ export const ERROR_CODES = {
	INVALID_INPUT: 'INVALID_INPUT',
	NOT_IMPLEMENTED: 'NOT_IMPLEMENTED',
	UNKNOWN_ERROR: 'UNKNOWN_ERROR',
	NOT_FOUND: 'NOT_FOUND',

	// Context errors
	NO_BRIEF_SELECTED: 'NO_BRIEF_SELECTED'
	NOT_FOUND: 'NOT_FOUND'
} as const;

export type ErrorCode = (typeof ERROR_CODES)[keyof typeof ERROR_CODES];

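`NO_BRIEF_SELECTED` exists on only one side of this diff; the pattern below shows how the TaskService and ApiStorage hunks further down consult it so user-facing errors are re-thrown unwrapped. The import path and function name are assumptions for illustration, not taken from the repo:

```typescript
// Illustrative only - the import path is assumed.
import { TaskMasterError, ERROR_CODES } from './errors';

function rethrowUserFacingErrors(error: unknown): void {
	// User-facing errors (like a missing brief selection) pass through unwrapped;
	// anything else is left for the caller to wrap as a STORAGE_ERROR.
	if (
		error instanceof TaskMasterError &&
		error.is(ERROR_CODES.NO_BRIEF_SELECTED)
	) {
		throw error;
	}
}
```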
@@ -47,8 +47,8 @@ export class SupabaseTaskRepository {
|
||||
* Gets the current brief ID from auth context
|
||||
* @throws {Error} If no brief is selected
|
||||
*/
|
||||
private async getBriefIdOrThrow(): Promise<string> {
|
||||
const context = await this.authManager.getContext();
|
||||
private getBriefIdOrThrow(): string {
|
||||
const context = this.authManager.getContext();
|
||||
if (!context?.briefId) {
|
||||
throw new Error(
|
||||
'No brief selected. Please select a brief first using: tm context brief'
|
||||
@@ -61,7 +61,7 @@ export class SupabaseTaskRepository {
|
||||
_projectId?: string,
|
||||
options?: LoadTasksOptions
|
||||
): Promise<Task[]> {
|
||||
const briefId = await this.getBriefIdOrThrow();
|
||||
const briefId = this.getBriefIdOrThrow();
|
||||
|
||||
// Build query with filters
|
||||
let query = this.supabase
|
||||
@@ -114,7 +114,7 @@ export class SupabaseTaskRepository {
|
||||
}
|
||||
|
||||
async getTask(_projectId: string, taskId: string): Promise<Task | null> {
|
||||
const briefId = await this.getBriefIdOrThrow();
|
||||
const briefId = this.getBriefIdOrThrow();
|
||||
|
||||
const { data, error } = await this.supabase
|
||||
.from('tasks')
|
||||
@@ -157,7 +157,7 @@ export class SupabaseTaskRepository {
|
||||
taskId: string,
|
||||
updates: Partial<Task>
|
||||
): Promise<Task> {
|
||||
const briefId = await this.getBriefIdOrThrow();
|
||||
const briefId = this.getBriefIdOrThrow();
|
||||
|
||||
// Validate updates using Zod schema
|
||||
try {
|
||||
|
||||
@@ -105,7 +105,7 @@ export class ExportService {
|
||||
}
|
||||
|
||||
// Get current context
|
||||
const context = await this.authManager.getContext();
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
// Determine org and brief IDs
|
||||
let orgId = options.orgId || context?.orgId;
|
||||
@@ -232,7 +232,7 @@ export class ExportService {
|
||||
hasBrief: boolean;
|
||||
context: UserContext | null;
|
||||
}> {
|
||||
const context = await this.authManager.getContext();
|
||||
const context = this.authManager.getContext();
|
||||
|
||||
return {
|
||||
hasOrg: !!context?.orgId,
|
||||
@@ -358,12 +358,11 @@ export class ExportService {
|
||||
tasks: any[]
|
||||
): Promise<void> {
|
||||
// Check if we should use the API endpoint or direct Supabase
|
||||
const apiEndpoint =
|
||||
process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN;
|
||||
const useAPIEndpoint = process.env.TM_PUBLIC_BASE_DOMAIN;
|
||||
|
||||
if (apiEndpoint) {
|
||||
if (useAPIEndpoint) {
|
||||
// Use the new bulk import API endpoint
|
||||
const apiUrl = `${apiEndpoint}/ai/api/v1/briefs/${briefId}/tasks`;
|
||||
const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
|
||||
|
||||
// Transform tasks to flat structure for API
|
||||
const flatTasks = this.transformTasksForBulkImport(tasks);
|
||||
@@ -371,16 +370,16 @@ export class ExportService {
|
||||
// Prepare request body
|
||||
const requestBody = {
|
||||
source: 'task-master-cli',
|
||||
accountId: orgId,
|
||||
options: {
|
||||
dryRun: false,
|
||||
stopOnError: false
|
||||
},
|
||||
accountId: orgId,
|
||||
tasks: flatTasks
|
||||
};
|
||||
|
||||
// Get auth token
|
||||
const credentials = await this.authManager.getCredentials();
|
||||
const credentials = this.authManager.getCredentials();
|
||||
if (!credentials || !credentials.token) {
|
||||
throw new Error('Not authenticated');
|
||||
}
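For context on the ExportService hunk above, a hypothetical sketch of the bulk-import request it assembles. The URL shape and body fields come from the diff; the `fetch` call and the `Authorization` header are assumptions about the transport, not confirmed by the hunk:

```typescript
// Hypothetical sketch - not the project's actual HTTP client code.
async function postBulkTasks(
	baseDomain: string,
	briefId: string,
	orgId: string,
	token: string,
	flatTasks: unknown[]
): Promise<void> {
	const apiUrl = `${baseDomain}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
	const response = await fetch(apiUrl, {
		method: 'POST',
		headers: {
			Authorization: `Bearer ${token}`, // assumed bearer-token scheme
			'Content-Type': 'application/json'
		},
		body: JSON.stringify({
			source: 'task-master-cli',
			accountId: orgId,
			options: { dryRun: false, stopOnError: false },
			tasks: flatTasks
		})
	});
	if (!response.ok) {
		throw new Error(`Bulk import failed with status ${response.status}`);
	}
}
```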
@@ -27,12 +27,6 @@ export interface Brief {
|
||||
status: string;
|
||||
createdAt: string;
|
||||
updatedAt: string;
|
||||
document?: {
|
||||
id: string;
|
||||
title: string;
|
||||
document_name: string;
|
||||
description?: string;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -177,12 +171,7 @@ export class OrganizationService {
|
||||
document_id,
|
||||
status,
|
||||
created_at,
|
||||
updated_at,
|
||||
document:document_id (
|
||||
id,
|
||||
document_name,
|
||||
title
|
||||
)
|
||||
updated_at
|
||||
`)
|
||||
.eq('account_id', orgId);
|
||||
|
||||
@@ -207,14 +196,7 @@ export class OrganizationService {
|
||||
documentId: brief.document_id,
|
||||
status: brief.status,
|
||||
createdAt: brief.created_at,
|
||||
updatedAt: brief.updated_at,
|
||||
document: brief.document
|
||||
? {
|
||||
id: brief.document.id,
|
||||
document_name: brief.document.document_name,
|
||||
title: brief.document.title
|
||||
}
|
||||
: undefined
|
||||
updatedAt: brief.updated_at
|
||||
}));
|
||||
} catch (error) {
|
||||
if (error instanceof TaskMasterError) {
|
||||
@@ -242,13 +224,7 @@ export class OrganizationService {
|
||||
document_id,
|
||||
status,
|
||||
created_at,
|
||||
updated_at,
|
||||
document:document_id (
|
||||
id,
|
||||
document_name,
|
||||
title,
|
||||
description
|
||||
)
|
||||
updated_at
|
||||
`)
|
||||
.eq('id', briefId)
|
||||
.single();
|
||||
@@ -277,15 +253,7 @@ export class OrganizationService {
|
||||
documentId: briefData.document_id,
|
||||
status: briefData.status,
|
||||
createdAt: briefData.created_at,
|
||||
updatedAt: briefData.updated_at,
|
||||
document: briefData.document
|
||||
? {
|
||||
id: briefData.document.id,
|
||||
document_name: briefData.document.document_name,
|
||||
title: briefData.document.title,
|
||||
description: briefData.document.description
|
||||
}
|
||||
: undefined
|
||||
updatedAt: briefData.updated_at
|
||||
};
|
||||
} catch (error) {
|
||||
if (error instanceof TaskMasterError) {
|
||||
|
||||
@@ -161,16 +161,6 @@ export class TaskService {
|
||||
storageType: this.getStorageType()
|
||||
};
|
||||
} catch (error) {
|
||||
// If it's a user-facing error (like NO_BRIEF_SELECTED), don't log it as an internal error
|
||||
if (
|
||||
error instanceof TaskMasterError &&
|
||||
error.is(ERROR_CODES.NO_BRIEF_SELECTED)
|
||||
) {
|
||||
// Just re-throw user-facing errors without wrapping
|
||||
throw error;
|
||||
}
|
||||
|
||||
// Log internal errors
|
||||
this.logger.error('Failed to get task list', error);
|
||||
throw new TaskMasterError(
|
||||
'Failed to get task list',
|
||||
@@ -196,14 +186,6 @@ export class TaskService {
|
||||
// Delegate to storage layer which handles the specific logic for tasks vs subtasks
|
||||
return await this.storage.loadTask(String(taskId), activeTag);
|
||||
} catch (error) {
|
||||
// If it's a user-facing error (like NO_BRIEF_SELECTED), don't wrap it
|
||||
if (
|
||||
error instanceof TaskMasterError &&
|
||||
error.is(ERROR_CODES.NO_BRIEF_SELECTED)
|
||||
) {
|
||||
throw error;
|
||||
}
|
||||
|
||||
throw new TaskMasterError(
|
||||
`Failed to get task ${taskId}`,
|
||||
ERROR_CODES.STORAGE_ERROR,
|
||||
@@ -540,14 +522,6 @@ export class TaskService {
|
||||
activeTag
|
||||
);
|
||||
} catch (error) {
|
||||
// If it's a user-facing error (like NO_BRIEF_SELECTED), don't wrap it
|
||||
if (
|
||||
error instanceof TaskMasterError &&
|
||||
error.is(ERROR_CODES.NO_BRIEF_SELECTED)
|
||||
) {
|
||||
throw error;
|
||||
}
|
||||
|
||||
throw new TaskMasterError(
|
||||
`Failed to update task status for ${taskIdStr}`,
|
||||
ERROR_CODES.STORAGE_ERROR,
|
||||
|
||||
@@ -37,13 +37,6 @@ export interface ApiStorageConfig {
|
||||
maxRetries?: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Auth context with a guaranteed briefId
|
||||
*/
|
||||
type ContextWithBrief = NonNullable<
|
||||
ReturnType<typeof AuthManager.prototype.getContext>
|
||||
> & { briefId: string };
|
||||
|
||||
/**
|
||||
* ApiStorage implementation using repository pattern
|
||||
* Provides flexibility to swap between different backend implementations
|
||||
@@ -119,13 +112,6 @@ export class ApiStorage implements IStorage {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the storage type
|
||||
*/
|
||||
getType(): 'api' {
|
||||
return 'api';
|
||||
}
|
||||
|
||||
/**
|
||||
* Load tags into cache
|
||||
* In our API-based system, "tags" represent briefs
|
||||
@@ -165,7 +151,15 @@ export class ApiStorage implements IStorage {
|
||||
await this.ensureInitialized();
|
||||
|
||||
try {
|
||||
const context = this.ensureBriefSelected('loadTasks');
|
||||
const authManager = AuthManager.getInstance();
|
||||
const context = authManager.getContext();
|
||||
|
||||
// If no brief is selected in context, throw an error
|
||||
if (!context?.briefId) {
|
||||
throw new Error(
|
||||
'No brief selected. Please select a brief first using: tm context brief <brief-id>'
|
||||
);
|
||||
}
|
||||
|
||||
// Load tasks from the current brief context with filters pushed to repository
|
||||
const tasks = await this.retryOperation(() =>
|
||||
@@ -180,11 +174,12 @@ export class ApiStorage implements IStorage {
|
||||
|
||||
return tasks;
|
||||
} catch (error) {
|
||||
this.wrapError(error, 'Failed to load tasks from API', {
|
||||
operation: 'loadTasks',
|
||||
tag,
|
||||
context: 'brief-based loading'
|
||||
});
|
||||
throw new TaskMasterError(
|
||||
'Failed to load tasks from API',
|
||||
ERROR_CODES.STORAGE_ERROR,
|
||||
{ operation: 'loadTasks', tag, context: 'brief-based loading' },
|
||||
error as Error
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -235,17 +230,16 @@ export class ApiStorage implements IStorage {
|
||||
await this.ensureInitialized();
|
||||
|
||||
try {
|
||||
this.ensureBriefSelected('loadTask');
|
||||
|
||||
return await this.retryOperation(() =>
|
||||
this.repository.getTask(this.projectId, taskId)
|
||||
);
|
||||
} catch (error) {
|
||||
this.wrapError(error, 'Failed to load task from API', {
|
||||
operation: 'loadTask',
|
||||
taskId,
|
||||
tag
|
||||
});
|
||||
throw new TaskMasterError(
|
||||
'Failed to load task from API',
|
||||
ERROR_CODES.STORAGE_ERROR,
|
||||
{ operation: 'loadTask', taskId, tag },
|
||||
error as Error
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -509,8 +503,6 @@ export class ApiStorage implements IStorage {
|
||||
await this.ensureInitialized();
|
||||
|
||||
try {
|
||||
this.ensureBriefSelected('updateTaskStatus');
|
||||
|
||||
const existingTask = await this.retryOperation(() =>
|
||||
this.repository.getTask(this.projectId, taskId)
|
||||
);
|
||||
@@ -547,12 +539,12 @@ export class ApiStorage implements IStorage {
|
||||
taskId
|
||||
};
|
||||
} catch (error) {
|
||||
this.wrapError(error, 'Failed to update task status via API', {
|
||||
operation: 'updateTaskStatus',
|
||||
taskId,
|
||||
newStatus,
|
||||
tag
|
||||
});
|
||||
throw new TaskMasterError(
|
||||
'Failed to update task status via API',
|
||||
ERROR_CODES.STORAGE_ERROR,
|
||||
{ operation: 'updateTaskStatus', taskId, newStatus, tag },
|
||||
error as Error
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -770,29 +762,6 @@ export class ApiStorage implements IStorage {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Ensure a brief is selected in the current context
|
||||
* @returns The current auth context with a valid briefId
|
||||
*/
|
||||
private ensureBriefSelected(operation: string): ContextWithBrief {
|
||||
const authManager = AuthManager.getInstance();
|
||||
const context = authManager.getContext();
|
||||
|
||||
if (!context?.briefId) {
|
||||
throw new TaskMasterError(
|
||||
'No brief selected',
|
||||
ERROR_CODES.NO_BRIEF_SELECTED,
|
||||
{
|
||||
operation,
|
||||
userMessage:
|
||||
'No brief selected. Please select a brief first using: tm context brief <brief-id> or tm context brief <brief-url>'
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
return context as ContextWithBrief;
|
||||
}
|
||||
|
||||
/**
|
||||
* Retry an operation with exponential backoff
|
||||
*/
|
||||
@@ -811,28 +780,4 @@ export class ApiStorage implements IStorage {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Wrap an error unless it's already a NO_BRIEF_SELECTED error
|
||||
*/
|
||||
private wrapError(
|
||||
error: unknown,
|
||||
message: string,
|
||||
context: Record<string, unknown>
|
||||
): never {
|
||||
// If it's already a NO_BRIEF_SELECTED error, don't wrap it
|
||||
if (
|
||||
error instanceof TaskMasterError &&
|
||||
error.is(ERROR_CODES.NO_BRIEF_SELECTED)
|
||||
) {
|
||||
throw error;
|
||||
}
|
||||
|
||||
throw new TaskMasterError(
|
||||
message,
|
||||
ERROR_CODES.STORAGE_ERROR,
|
||||
context,
|
||||
error as Error
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -44,13 +44,6 @@ export class FileStorage implements IStorage {
|
||||
await this.fileOps.cleanup();
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the storage type
|
||||
*/
|
||||
getType(): 'file' {
|
||||
return 'file';
|
||||
}
|
||||
|
||||
/**
|
||||
* Get statistics about the storage
|
||||
*/
|
||||
|
||||
@@ -72,7 +72,7 @@ export class StorageFactory {
|
||||
{ storageType: 'api', missing }
|
||||
);
|
||||
}
|
||||
// Use auth token from AuthManager (synchronous - no auto-refresh here)
|
||||
// Use auth token from AuthManager
|
||||
const credentials = authManager.getCredentials();
|
||||
if (credentials) {
|
||||
// Merge with existing storage config, ensuring required fields
|
||||
@@ -82,8 +82,8 @@ export class StorageFactory {
|
||||
apiAccessToken: credentials.token,
|
||||
apiEndpoint:
|
||||
config.storage?.apiEndpoint ||
|
||||
process.env.TM_BASE_DOMAIN ||
|
||||
process.env.TM_PUBLIC_BASE_DOMAIN
|
||||
process.env.TM_PUBLIC_BASE_DOMAIN ||
|
||||
'https://tryhamster.com/api'
|
||||
};
|
||||
config.storage = nextStorage;
|
||||
}
|
||||
@@ -112,7 +112,6 @@ export class StorageFactory {
|
||||
apiAccessToken: credentials.token,
|
||||
apiEndpoint:
|
||||
config.storage?.apiEndpoint ||
|
||||
process.env.TM_BASE_DOMAIN ||
|
||||
process.env.TM_PUBLIC_BASE_DOMAIN ||
|
||||
'https://tryhamster.com/api'
|
||||
};
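The StorageFactory hunks above differ in which environment variables feed the API endpoint. A small sketch of the fallback chain as written on the side that consults both variables; the helper name is illustrative:

```typescript
// Sketch of the endpoint resolution order from the StorageFactory hunk above.
function resolveApiEndpoint(configuredEndpoint?: string): string {
	return (
		configuredEndpoint ||
		process.env.TM_BASE_DOMAIN || // runtime override (one side of the diff only)
		process.env.TM_PUBLIC_BASE_DOMAIN || // build-time value
		'https://tryhamster.com/api' // hard-coded default
	);
}
```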
@@ -201,44 +201,6 @@ export class TaskMasterCore {
|
||||
return this.taskService.getStorageType();
|
||||
}
|
||||
|
||||
/**
|
||||
* Get storage configuration
|
||||
*/
|
||||
getStorageConfig() {
|
||||
return this.configManager.getStorageConfig();
|
||||
}
|
||||
|
||||
/**
|
||||
* Get storage display information for headers
|
||||
* Returns context info for API storage, null for file storage
|
||||
*/
|
||||
getStorageDisplayInfo(): {
|
||||
briefId: string;
|
||||
briefName: string;
|
||||
orgSlug?: string;
|
||||
} | null {
|
||||
// Only return info if using API storage
|
||||
const storageType = this.getStorageType();
|
||||
if (storageType !== 'api') {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Get credentials from auth manager
|
||||
const authManager = AuthManager.getInstance();
|
||||
const credentials = authManager.getCredentials();
|
||||
const selectedContext = credentials?.selectedContext;
|
||||
|
||||
if (!selectedContext?.briefId || !selectedContext?.briefName) {
|
||||
return null;
|
||||
}
|
||||
|
||||
return {
|
||||
briefId: selectedContext.briefId,
|
||||
briefName: selectedContext.briefName,
|
||||
orgSlug: selectedContext.orgSlug
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current active tag
|
||||
*/
|
||||
|
||||
@@ -1,139 +0,0 @@
|
||||
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
|
||||
import fs from 'fs';
|
||||
import os from 'os';
|
||||
import path from 'path';
|
||||
import type { Session } from '@supabase/supabase-js';
|
||||
import { AuthManager } from '../../src/auth/auth-manager';
|
||||
import { CredentialStore } from '../../src/auth/credential-store';
|
||||
import type { AuthCredentials } from '../../src/auth/types';
|
||||
|
||||
describe('AuthManager Token Refresh', () => {
|
||||
let authManager: AuthManager;
|
||||
let credentialStore: CredentialStore;
|
||||
let tmpDir: string;
|
||||
let authFile: string;
|
||||
|
||||
beforeEach(() => {
|
||||
// Reset singletons
|
||||
AuthManager.resetInstance();
|
||||
CredentialStore.resetInstance();
|
||||
|
||||
// Create temporary directory for test isolation
|
||||
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-refresh-'));
|
||||
authFile = path.join(tmpDir, 'auth.json');
|
||||
|
||||
// Initialize AuthManager with test config (this will create CredentialStore internally)
|
||||
authManager = AuthManager.getInstance({
|
||||
configDir: tmpDir,
|
||||
configFile: authFile
|
||||
});
|
||||
|
||||
// Get the CredentialStore instance that AuthManager created
|
||||
credentialStore = CredentialStore.getInstance();
|
||||
credentialStore.clearCredentials();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up
|
||||
try {
|
||||
credentialStore.clearCredentials();
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
AuthManager.resetInstance();
|
||||
CredentialStore.resetInstance();
|
||||
vi.restoreAllMocks();
|
||||
|
||||
// Remove temporary directory
|
||||
if (tmpDir && fs.existsSync(tmpDir)) {
|
||||
fs.rmSync(tmpDir, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
it('should return expired credentials to enable refresh flows', () => {
|
||||
// Set up expired credentials with refresh token
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired_access_token',
|
||||
refreshToken: 'valid_refresh_token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
// Get credentials should return them even if expired
|
||||
// Refresh will be handled by explicit calls or client operations
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('expired_access_token');
|
||||
expect(credentials?.refreshToken).toBe('valid_refresh_token');
|
||||
});
|
||||
|
||||
it('should return valid credentials', () => {
|
||||
// Set up valid (non-expired) credentials
|
||||
const validCredentials: AuthCredentials = {
|
||||
token: 'valid_access_token',
|
||||
refreshToken: 'valid_refresh_token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 3600000).toISOString(), // Expires in 1 hour
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(validCredentials);
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
expect(credentials?.token).toBe('valid_access_token');
|
||||
});
|
||||
|
||||
it('should return expired credentials even without refresh token', () => {
|
||||
// Set up expired credentials WITHOUT refresh token
|
||||
// We still return them - it's up to the caller to handle
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired_access_token',
|
||||
refreshToken: undefined,
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
// Returns credentials even if expired
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('expired_access_token');
|
||||
});
|
||||
|
||||
it('should return null if no credentials exist', () => {
|
||||
const credentials = authManager.getCredentials();
|
||||
expect(credentials).toBeNull();
|
||||
});
|
||||
|
||||
it('should return credentials regardless of refresh token validity', () => {
|
||||
// Set up expired credentials with refresh token
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired_access_token',
|
||||
refreshToken: 'invalid_refresh_token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 1000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
// Returns credentials - refresh will be attempted by the client which will handle failure
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('expired_access_token');
|
||||
expect(credentials?.refreshToken).toBe('invalid_refresh_token');
|
||||
});
|
||||
});
|
||||
@@ -1,336 +0,0 @@
|
||||
/**
|
||||
* @fileoverview Integration tests for JWT token auto-refresh functionality
|
||||
*
|
||||
* These tests verify that expired tokens are automatically refreshed
|
||||
* when making API calls through AuthManager.
|
||||
*/
|
||||
|
||||
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
|
||||
import fs from 'fs';
|
||||
import os from 'os';
|
||||
import path from 'path';
|
||||
import type { Session } from '@supabase/supabase-js';
|
||||
import { AuthManager } from '../../src/auth/auth-manager';
|
||||
import { CredentialStore } from '../../src/auth/credential-store';
|
||||
import type { AuthCredentials } from '../../src/auth/types';
|
||||
|
||||
describe('AuthManager - Token Auto-Refresh Integration', () => {
|
||||
let authManager: AuthManager;
|
||||
let credentialStore: CredentialStore;
|
||||
let tmpDir: string;
|
||||
let authFile: string;
|
||||
|
||||
// Mock Supabase session that will be returned on refresh
|
||||
const mockRefreshedSession: Session = {
|
||||
access_token: 'new-access-token-xyz',
|
||||
refresh_token: 'new-refresh-token-xyz',
|
||||
token_type: 'bearer',
|
||||
expires_at: Math.floor(Date.now() / 1000) + 3600, // 1 hour from now
|
||||
expires_in: 3600,
|
||||
user: {
|
||||
id: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
aud: 'authenticated',
|
||||
role: 'authenticated',
|
||||
app_metadata: {},
|
||||
user_metadata: {},
|
||||
created_at: new Date().toISOString()
|
||||
}
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
// Reset singletons
|
||||
AuthManager.resetInstance();
|
||||
CredentialStore.resetInstance();
|
||||
|
||||
// Create temporary directory for test isolation
|
||||
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-integration-'));
|
||||
authFile = path.join(tmpDir, 'auth.json');
|
||||
|
||||
// Initialize AuthManager with test config (this will create CredentialStore internally)
|
||||
authManager = AuthManager.getInstance({
|
||||
configDir: tmpDir,
|
||||
configFile: authFile
|
||||
});
|
||||
|
||||
// Get the CredentialStore instance that AuthManager created
|
||||
credentialStore = CredentialStore.getInstance();
|
||||
credentialStore.clearCredentials();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up
|
||||
try {
|
||||
credentialStore.clearCredentials();
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
AuthManager.resetInstance();
|
||||
CredentialStore.resetInstance();
|
||||
vi.restoreAllMocks();
|
||||
|
||||
// Remove temporary directory
|
||||
if (tmpDir && fs.existsSync(tmpDir)) {
|
||||
fs.rmSync(tmpDir, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
describe('Expired Token Detection', () => {
|
||||
it('should return expired token for Supabase to refresh', () => {
|
||||
// Set up expired credentials
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'valid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
// Get credentials returns them even if expired
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('expired-token');
|
||||
expect(credentials?.refreshToken).toBe('valid-refresh-token');
|
||||
});
|
||||
|
||||
it('should return valid token', () => {
|
||||
// Set up valid credentials
|
||||
const validCredentials: AuthCredentials = {
|
||||
token: 'valid-token',
|
||||
refreshToken: 'valid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(validCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
expect(credentials?.token).toBe('valid-token');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Token Refresh Flow', () => {
|
||||
it('should manually refresh expired token and save new credentials', async () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'old-token',
|
||||
refreshToken: 'old-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date(Date.now() - 3600000).toISOString(),
|
||||
selectedContext: {
|
||||
orgId: 'test-org',
|
||||
briefId: 'test-brief',
|
||||
updatedAt: new Date().toISOString()
|
||||
}
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
vi.spyOn(
|
||||
authManager['supabaseClient'],
|
||||
'refreshSession'
|
||||
).mockResolvedValue(mockRefreshedSession);
|
||||
|
||||
// Explicitly call refreshToken() method
|
||||
const refreshedCredentials = await authManager.refreshToken();
|
||||
|
||||
expect(refreshedCredentials).not.toBeNull();
|
||||
expect(refreshedCredentials.token).toBe('new-access-token-xyz');
|
||||
expect(refreshedCredentials.refreshToken).toBe('new-refresh-token-xyz');
|
||||
|
||||
// Verify context was preserved
|
||||
expect(refreshedCredentials.selectedContext?.orgId).toBe('test-org');
|
||||
expect(refreshedCredentials.selectedContext?.briefId).toBe('test-brief');
|
||||
|
||||
// Verify new expiration is in the future
|
||||
const newExpiry = new Date(refreshedCredentials.expiresAt!).getTime();
|
||||
const now = Date.now();
|
||||
expect(newExpiry).toBeGreaterThan(now);
|
||||
});
|
||||
|
||||
it('should throw error if manual refresh fails', async () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'invalid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
// Mock refresh to fail
|
||||
vi.spyOn(
|
||||
authManager['supabaseClient'],
|
||||
'refreshSession'
|
||||
).mockRejectedValue(new Error('Refresh token expired'));
|
||||
|
||||
// Explicit refreshToken() call should throw
|
||||
await expect(authManager.refreshToken()).rejects.toThrow();
|
||||
});
|
||||
|
||||
it('should return expired credentials even without refresh token', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
// No refresh token
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
// Credentials are returned even without refresh token
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('expired-token');
|
||||
expect(credentials?.refreshToken).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should return null if credentials missing expiresAt', () => {
|
||||
const credentialsWithoutExpiry: AuthCredentials = {
|
||||
token: 'test-token',
|
||||
refreshToken: 'refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
// Missing expiresAt - invalid token
|
||||
savedAt: new Date().toISOString()
|
||||
} as any;
|
||||
|
||||
credentialStore.saveCredentials(credentialsWithoutExpiry);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
// Tokens without valid expiration are considered invalid
|
||||
expect(credentials).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Clock Skew Tolerance', () => {
|
||||
it('should return credentials within 30-second expiry window', () => {
|
||||
// Token expires in 15 seconds (within 30-second buffer)
|
||||
// Supabase will handle refresh automatically
|
||||
const almostExpiredCredentials: AuthCredentials = {
|
||||
token: 'almost-expired-token',
|
||||
refreshToken: 'valid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 15000).toISOString(), // 15 seconds from now
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(almostExpiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
// Credentials are returned (Supabase handles auto-refresh in background)
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('almost-expired-token');
|
||||
expect(credentials?.refreshToken).toBe('valid-refresh-token');
|
||||
});
|
||||
|
||||
it('should return valid token well before expiry', () => {
|
||||
// Token expires in 5 minutes
|
||||
const validCredentials: AuthCredentials = {
|
||||
token: 'valid-token',
|
||||
refreshToken: 'valid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() + 300000).toISOString(), // 5 minutes
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(validCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
// Valid credentials are returned as-is
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('valid-token');
|
||||
expect(credentials?.refreshToken).toBe('valid-refresh-token');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Synchronous vs Async Methods', () => {
|
||||
it('getCredentials should return expired credentials', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'valid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
// Returns credentials even if expired - Supabase will handle refresh
|
||||
const credentials = authManager.getCredentials();
|
||||
|
||||
expect(credentials).not.toBeNull();
|
||||
expect(credentials?.token).toBe('expired-token');
|
||||
expect(credentials?.refreshToken).toBe('valid-refresh-token');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Multiple Concurrent Calls', () => {
|
||||
it('should handle concurrent getCredentials calls gracefully', () => {
|
||||
const expiredCredentials: AuthCredentials = {
|
||||
token: 'expired-token',
|
||||
refreshToken: 'valid-refresh-token',
|
||||
userId: 'test-user-id',
|
||||
email: 'test@example.com',
|
||||
expiresAt: new Date(Date.now() - 60000).toISOString(),
|
||||
savedAt: new Date().toISOString()
|
||||
};
|
||||
|
||||
credentialStore.saveCredentials(expiredCredentials);
|
||||
|
||||
authManager = AuthManager.getInstance();
|
||||
|
||||
// Make multiple concurrent calls (synchronous now)
|
||||
const creds1 = authManager.getCredentials();
|
||||
const creds2 = authManager.getCredentials();
|
||||
const creds3 = authManager.getCredentials();
|
||||
|
||||
// All should get the same credentials (even if expired)
|
||||
expect(creds1?.token).toBe('expired-token');
|
||||
expect(creds2?.token).toBe('expired-token');
|
||||
expect(creds3?.token).toBe('expired-token');
|
||||
|
||||
// All include refresh token for Supabase to use
|
||||
expect(creds1?.refreshToken).toBe('valid-refresh-token');
|
||||
expect(creds2?.refreshToken).toBe('valid-refresh-token');
|
||||
expect(creds3?.refreshToken).toBe('valid-refresh-token');
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -9,8 +9,6 @@
 */

import dotenv from 'dotenv';

// Load .env BEFORE any other imports to ensure env vars are available
dotenv.config();

// Add at the very beginning of the file
@@ -18,8 +16,7 @@ if (process.env.DEBUG === '1') {
	console.error('DEBUG - dev.js received args:', process.argv.slice(2));
}

// Use dynamic import to ensure dotenv.config() runs before module-level code executes
const { runCLI } = await import('./modules/commands.js');
import { runCLI } from './modules/commands.js';

// Run the CLI with the process arguments
runCLI(process.argv);

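The dev.js hunk above swaps between a static and a dynamic import of `commands.js`. The ordering matters because static imports are hoisted and evaluated before `dotenv.config()` runs, so any module-level environment reads would see an unpopulated `process.env`. A minimal sketch of the safe ordering, using the module path shown in the diff:

```typescript
import dotenv from 'dotenv';

// Populate process.env before anything else is evaluated.
dotenv.config();

// Deferred import: module-level code in commands.js now sees the loaded env.
const { runCLI } = await import('./modules/commands.js');
runCLI(process.argv);
```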
@@ -19,8 +19,7 @@ import {
|
||||
registerAllCommands,
|
||||
checkForUpdate,
|
||||
performAutoUpdate,
|
||||
displayUpgradeNotification,
|
||||
displayError
|
||||
displayUpgradeNotification
|
||||
} from '@tm/cli';
|
||||
|
||||
import {
|
||||
@@ -2442,6 +2441,57 @@ ${result.result}
|
||||
}
|
||||
});
|
||||
|
||||
// next command
|
||||
programInstance
|
||||
.command('next')
|
||||
.description(
|
||||
`Show the next task to work on based on dependencies and status${chalk.reset('')}`
|
||||
)
|
||||
.option(
|
||||
'-f, --file <file>',
|
||||
'Path to the tasks file',
|
||||
TASKMASTER_TASKS_FILE
|
||||
)
|
||||
.option(
|
||||
'-r, --report <report>',
|
||||
'Path to the complexity report file',
|
||||
COMPLEXITY_REPORT_FILE
|
||||
)
|
||||
.option('--tag <tag>', 'Specify tag context for task operations')
|
||||
.action(async (options) => {
|
||||
const initOptions = {
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag
|
||||
};
|
||||
|
||||
if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
|
||||
initOptions.complexityReportPath = options.report;
|
||||
}
|
||||
|
||||
// Initialize TaskMaster
|
||||
const taskMaster = initTaskMaster({
|
||||
tasksPath: options.file || true,
|
||||
tag: options.tag,
|
||||
complexityReportPath: options.report || false
|
||||
});
|
||||
|
||||
const tag = taskMaster.getCurrentTag();
|
||||
|
||||
const context = {
|
||||
projectRoot: taskMaster.getProjectRoot(),
|
||||
tag
|
||||
};
|
||||
|
||||
// Show current tag context
|
||||
displayCurrentTagIndicator(tag);
|
||||
|
||||
await displayNextTask(
|
||||
taskMaster.getTasksPath(),
|
||||
taskMaster.getComplexityReportPath(),
|
||||
context
|
||||
);
|
||||
});
|
||||
|
||||
// add-dependency command
|
||||
programInstance
|
||||
.command('add-dependency')
|
||||
@@ -5157,7 +5207,10 @@ async function runCLI(argv = process.argv) {
|
||||
);
|
||||
} else {
|
||||
// Generic error handling for other errors
|
||||
displayError(error);
|
||||
console.error(chalk.red(`Error: ${error.message}`));
|
||||
if (getDebugFlag()) {
|
||||
console.error(error);
|
||||
}
|
||||
}
|
||||
|
||||
process.exit(1);
|
||||
|
||||
@@ -47,33 +47,21 @@ export function normalizeProjectRoot(projectRoot) {

/**
 * Find the project root directory by looking for project markers
 * Traverses upwards from startDir until a project marker is found or filesystem root is reached
 * Limited to 50 parent directory levels to prevent excessive traversal
 * @param {string} startDir - Directory to start searching from (defaults to process.cwd())
 * @returns {string} - Project root path (falls back to current directory if no markers found)
 * @param {string} startDir - Directory to start searching from
 * @returns {string|null} - Project root path or null if not found
 */
export function findProjectRoot(startDir = process.cwd()) {
	// Define project markers that indicate a project root
	// Prioritize Task Master specific markers first
	const projectMarkers = [
		'.taskmaster', // Task Master directory (highest priority)
		TASKMASTER_CONFIG_FILE, // .taskmaster/config.json
		TASKMASTER_TASKS_FILE, // .taskmaster/tasks/tasks.json
		LEGACY_CONFIG_FILE, // .taskmasterconfig (legacy)
		LEGACY_TASKS_FILE, // tasks/tasks.json (legacy)
		'tasks.json', // Root tasks.json (legacy)
		'.git', // Git repository
		'.svn', // SVN repository
		'package.json', // Node.js project
		'yarn.lock', // Yarn project
		'package-lock.json', // npm project
		'pnpm-lock.yaml', // pnpm project
		'Cargo.toml', // Rust project
		'go.mod', // Go project
		'pyproject.toml', // Python project
		'requirements.txt', // Python project
		'Gemfile', // Ruby project
		'composer.json' // PHP project
		'.taskmaster',
		TASKMASTER_TASKS_FILE,
		'tasks.json',
		LEGACY_TASKS_FILE,
		'.git',
		'.svn',
		'package.json',
		'yarn.lock',
		'package-lock.json',
		'pnpm-lock.yaml'
	];

	let currentDir = path.resolve(startDir);
@@ -81,36 +69,19 @@ export function findProjectRoot(startDir = process.cwd()) {
	const maxDepth = 50; // Reasonable limit to prevent infinite loops
	let depth = 0;

	// Traverse upwards looking for project markers
	while (currentDir !== rootDir && depth < maxDepth) {
		// Check if current directory contains any project markers
		for (const marker of projectMarkers) {
			const markerPath = path.join(currentDir, marker);
			try {
				if (fs.existsSync(markerPath)) {
					// Found a project marker - return this directory as project root
					return currentDir;
				}
			} catch (error) {
				// Ignore permission errors and continue searching
				continue;
			if (fs.existsSync(markerPath)) {
				return currentDir;
			}
		}

		// Move up one directory level
		const parentDir = path.dirname(currentDir);

		// Safety check: if dirname returns the same path, we've hit the root
		if (parentDir === currentDir) {
			break;
		}

		currentDir = parentDir;
		currentDir = path.dirname(currentDir);
		depth++;
	}

	// Fallback to current working directory if no project root found
	// This ensures the function always returns a valid path
	return process.cwd();
}

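To make the traversal in `findProjectRoot` above easier to follow, here is a trimmed, self-contained sketch of the upward search. The marker list is shortened for illustration, and the two sides of the hunk differ on the not-found behaviour (falling back to `process.cwd()` versus returning `null`); the sketch shows the fallback variant:

```typescript
import fs from 'fs';
import path from 'path';

// Simplified sketch - a subset of the markers listed in the diff above.
const MARKERS = ['.taskmaster', '.git', 'package.json'];

function findRootSketch(startDir: string = process.cwd()): string {
	let currentDir = path.resolve(startDir);
	for (let depth = 0; depth < 50; depth++) {
		if (MARKERS.some((m) => fs.existsSync(path.join(currentDir, m)))) {
			return currentDir; // nearest ancestor containing a marker wins
		}
		const parentDir = path.dirname(currentDir);
		if (parentDir === currentDir) break; // hit the filesystem root
		currentDir = parentDir;
	}
	return process.cwd(); // fallback used by one side of the hunk
}
```

Called from a nested path such as `<project>/apps/web/src`, the loop keeps stepping to the parent directory until a marker is found, which is what lets commands run from deep inside a project.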
@@ -1,123 +0,0 @@
|
||||
/**
|
||||
* tool-counts.js
|
||||
* Shared helper for validating tool counts across tests and validation scripts
|
||||
*/
|
||||
|
||||
import {
|
||||
getToolCounts,
|
||||
getToolCategories
|
||||
} from '../../mcp-server/src/tools/tool-registry.js';
|
||||
|
||||
/**
|
||||
* Expected tool counts - update these when tools are added/removed
|
||||
* These serve as the canonical source of truth for expected counts
|
||||
*/
|
||||
export const EXPECTED_TOOL_COUNTS = {
|
||||
core: 7,
|
||||
standard: 15,
|
||||
total: 36
|
||||
};
|
||||
|
||||
/**
|
||||
* Expected core tools list for validation
|
||||
*/
|
||||
export const EXPECTED_CORE_TOOLS = [
|
||||
'get_tasks',
|
||||
'next_task',
|
||||
'get_task',
|
||||
'set_task_status',
|
||||
'update_subtask',
|
||||
'parse_prd',
|
||||
'expand_task'
|
||||
];
|
||||
|
||||
/**
|
||||
* Validate that actual tool counts match expected counts
|
||||
* @returns {Object} Validation result with isValid flag and details
|
||||
*/
|
||||
export function validateToolCounts() {
|
||||
const actual = getToolCounts();
|
||||
const expected = EXPECTED_TOOL_COUNTS;
|
||||
|
||||
const isValid =
|
||||
actual.core === expected.core &&
|
||||
actual.standard === expected.standard &&
|
||||
actual.total === expected.total;
|
||||
|
||||
return {
|
||||
isValid,
|
||||
actual,
|
||||
expected,
|
||||
differences: {
|
||||
core: actual.core - expected.core,
|
||||
standard: actual.standard - expected.standard,
|
||||
total: actual.total - expected.total
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate that tool categories have correct structure and content
|
||||
* @returns {Object} Validation result
|
||||
*/
|
||||
export function validateToolStructure() {
|
||||
const categories = getToolCategories();
|
||||
const counts = getToolCounts();
|
||||
|
||||
// Check that core tools are subset of standard tools
|
||||
const coreInStandard = categories.core.every((tool) =>
|
||||
categories.standard.includes(tool)
|
||||
);
|
||||
|
||||
// Check that standard tools are subset of all tools
|
||||
const standardInAll = categories.standard.every((tool) =>
|
||||
categories.all.includes(tool)
|
||||
);
|
||||
|
||||
// Check that expected core tools match actual
|
||||
const expectedCoreMatch =
|
||||
EXPECTED_CORE_TOOLS.every((tool) => categories.core.includes(tool)) &&
|
||||
categories.core.every((tool) => EXPECTED_CORE_TOOLS.includes(tool));
|
||||
|
||||
// Check array lengths match counts
|
||||
const lengthsMatch =
|
||||
categories.core.length === counts.core &&
|
||||
categories.standard.length === counts.standard &&
|
||||
categories.all.length === counts.total;
|
||||
|
||||
return {
|
||||
isValid:
|
||||
coreInStandard && standardInAll && expectedCoreMatch && lengthsMatch,
|
||||
details: {
|
||||
coreInStandard,
|
||||
standardInAll,
|
||||
expectedCoreMatch,
|
||||
lengthsMatch
|
||||
},
|
||||
categories,
|
||||
counts
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a detailed report of all tool information
|
||||
* @returns {Object} Comprehensive tool information
|
||||
*/
|
||||
export function getToolReport() {
|
||||
const counts = getToolCounts();
|
||||
const categories = getToolCategories();
|
||||
const validation = validateToolCounts();
|
||||
const structure = validateToolStructure();
|
||||
|
||||
return {
|
||||
counts,
|
||||
categories,
|
||||
validation,
|
||||
structure,
|
||||
summary: {
|
||||
totalValid: validation.isValid && structure.isValid,
|
||||
countsValid: validation.isValid,
|
||||
structureValid: structure.isValid
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,410 +0,0 @@
|
||||
/**
|
||||
* tool-registration.test.js
|
||||
* Comprehensive unit tests for the Task Master MCP tool registration system
|
||||
* Tests environment variable control system covering all configuration modes and edge cases
|
||||
*/
|
||||
|
||||
import {
|
||||
describe,
|
||||
it,
|
||||
expect,
|
||||
beforeEach,
|
||||
afterEach,
|
||||
jest
|
||||
} from '@jest/globals';
|
||||
|
||||
import {
|
||||
EXPECTED_TOOL_COUNTS,
|
||||
EXPECTED_CORE_TOOLS,
|
||||
validateToolCounts,
|
||||
validateToolStructure
|
||||
} from '../../../helpers/tool-counts.js';
|
||||
|
||||
import { registerTaskMasterTools } from '../../../../mcp-server/src/tools/index.js';
|
||||
import {
|
||||
toolRegistry,
|
||||
coreTools,
|
||||
standardTools
|
||||
} from '../../../../mcp-server/src/tools/tool-registry.js';
|
||||
|
||||
// Derive constants from imported registry to avoid brittle magic numbers
|
||||
const ALL_COUNT = Object.keys(toolRegistry).length;
|
||||
const CORE_COUNT = coreTools.length;
|
||||
const STANDARD_COUNT = standardTools.length;
|
||||
|
||||
describe('Task Master Tool Registration System', () => {
|
||||
let mockServer;
|
||||
let originalEnv;
|
||||
|
||||
beforeEach(() => {
|
||||
originalEnv = process.env.TASK_MASTER_TOOLS;
|
||||
|
||||
mockServer = {
|
||||
tools: [],
|
||||
addTool: jest.fn((tool) => {
|
||||
mockServer.tools.push(tool);
|
||||
return tool;
|
||||
})
|
||||
};
|
||||
|
||||
delete process.env.TASK_MASTER_TOOLS;
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
if (originalEnv !== undefined) {
|
||||
process.env.TASK_MASTER_TOOLS = originalEnv;
|
||||
} else {
|
||||
delete process.env.TASK_MASTER_TOOLS;
|
||||
}
|
||||
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
|
||||
describe('Test Environment Setup', () => {
|
||||
it('should have properly configured mock server', () => {
|
||||
expect(mockServer).toBeDefined();
|
||||
expect(typeof mockServer.addTool).toBe('function');
|
||||
expect(Array.isArray(mockServer.tools)).toBe(true);
|
||||
expect(mockServer.tools.length).toBe(0);
|
||||
});
|
||||
|
||||
it('should have correct tool registry structure', () => {
|
||||
const validation = validateToolCounts();
|
||||
expect(validation.isValid).toBe(true);
|
||||
|
||||
if (!validation.isValid) {
|
||||
console.error('Tool count validation failed:', validation);
|
||||
}
|
||||
|
||||
expect(validation.actual.total).toBe(EXPECTED_TOOL_COUNTS.total);
|
||||
expect(validation.actual.core).toBe(EXPECTED_TOOL_COUNTS.core);
|
||||
expect(validation.actual.standard).toBe(EXPECTED_TOOL_COUNTS.standard);
|
||||
});
|
||||
|
||||
it('should have correct core tools', () => {
|
||||
const structure = validateToolStructure();
|
||||
expect(structure.isValid).toBe(true);
|
||||
|
||||
if (!structure.isValid) {
|
||||
console.error('Tool structure validation failed:', structure);
|
||||
}
|
||||
|
||||
expect(coreTools).toEqual(expect.arrayContaining(EXPECTED_CORE_TOOLS));
|
||||
expect(coreTools.length).toBe(EXPECTED_TOOL_COUNTS.core);
|
||||
});
|
||||
|
||||
it('should have correct standard tools that include all core tools', () => {
|
||||
const structure = validateToolStructure();
|
||||
			expect(structure.details.coreInStandard).toBe(true);
			expect(standardTools.length).toBe(EXPECTED_TOOL_COUNTS.standard);

			coreTools.forEach((tool) => {
				expect(standardTools).toContain(tool);
			});
		});

		it('should have all expected tools in registry', () => {
			const expectedTools = [
				'initialize_project',
				'models',
				'research',
				'add_tag',
				'delete_tag',
				'get_tasks',
				'next_task',
				'get_task'
			];
			expectedTools.forEach((tool) => {
				expect(toolRegistry).toHaveProperty(tool);
			});
		});
	});

	describe('Configuration Modes', () => {
		it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS is not set (default behavior)`, () => {
			delete process.env.TASK_MASTER_TOOLS;

			registerTaskMasterTools(mockServer);

			expect(mockServer.addTool).toHaveBeenCalledTimes(
				EXPECTED_TOOL_COUNTS.total
			);
		});

		it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS=all`, () => {
			process.env.TASK_MASTER_TOOLS = 'all';

			registerTaskMasterTools(mockServer);

			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it(`should register exactly ${CORE_COUNT} core tools when TASK_MASTER_TOOLS=core`, () => {
			process.env.TASK_MASTER_TOOLS = 'core';

			registerTaskMasterTools(mockServer, 'core');

			expect(mockServer.addTool).toHaveBeenCalledTimes(
				EXPECTED_TOOL_COUNTS.core
			);
		});

		it(`should register exactly ${STANDARD_COUNT} standard tools when TASK_MASTER_TOOLS=standard`, () => {
			process.env.TASK_MASTER_TOOLS = 'standard';

			registerTaskMasterTools(mockServer, 'standard');

			expect(mockServer.addTool).toHaveBeenCalledTimes(
				EXPECTED_TOOL_COUNTS.standard
			);
		});

		it(`should treat lean as alias for core mode (${CORE_COUNT} tools)`, () => {
			process.env.TASK_MASTER_TOOLS = 'lean';

			registerTaskMasterTools(mockServer, 'lean');

			expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
		});

		it('should handle case insensitive configuration values', () => {
			process.env.TASK_MASTER_TOOLS = 'CORE';

			registerTaskMasterTools(mockServer, 'CORE');

			expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
		});
	});

	describe('Custom Tool Selection and Edge Cases', () => {
		it('should register specific tools from comma-separated list', () => {
			process.env.TASK_MASTER_TOOLS = 'get_tasks,next_task,get_task';

			registerTaskMasterTools(mockServer, 'get_tasks,next_task,get_task');

			expect(mockServer.addTool).toHaveBeenCalledTimes(3);
		});

		it('should handle mixed valid and invalid tool names gracefully', () => {
			process.env.TASK_MASTER_TOOLS =
				'invalid_tool,get_tasks,fake_tool,next_task';

			registerTaskMasterTools(
				mockServer,
				'invalid_tool,get_tasks,fake_tool,next_task'
			);

			expect(mockServer.addTool).toHaveBeenCalledTimes(2);
		});

		it('should default to all tools with completely invalid input', () => {
			process.env.TASK_MASTER_TOOLS = 'completely_invalid';

			registerTaskMasterTools(mockServer);

			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it('should handle empty string environment variable', () => {
			process.env.TASK_MASTER_TOOLS = '';

			registerTaskMasterTools(mockServer);

			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it('should handle whitespace in comma-separated lists', () => {
			process.env.TASK_MASTER_TOOLS = ' get_tasks , next_task , get_task ';

			registerTaskMasterTools(mockServer, ' get_tasks , next_task , get_task ');

			expect(mockServer.addTool).toHaveBeenCalledTimes(3);
		});

		it('should ignore duplicate tools in list', () => {
			process.env.TASK_MASTER_TOOLS = 'get_tasks,get_tasks,next_task,get_tasks';

			registerTaskMasterTools(
				mockServer,
				'get_tasks,get_tasks,next_task,get_tasks'
			);

			expect(mockServer.addTool).toHaveBeenCalledTimes(2);
		});

		it('should handle only commas and empty entries', () => {
			process.env.TASK_MASTER_TOOLS = ',,,';

			registerTaskMasterTools(mockServer);

			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it('should handle single tool selection', () => {
			process.env.TASK_MASTER_TOOLS = 'get_tasks';

			registerTaskMasterTools(mockServer, 'get_tasks');

			expect(mockServer.addTool).toHaveBeenCalledTimes(1);
		});
	});

	describe('Coverage Analysis and Integration Tests', () => {
		it('should provide 100% code coverage for environment control logic', () => {
			const testCases = [
				{
					env: undefined,
					expectedCount: ALL_COUNT,
					description: 'undefined env (all)'
				},
				{
					env: '',
					expectedCount: ALL_COUNT,
					description: 'empty string (all)'
				},
				{ env: 'all', expectedCount: ALL_COUNT, description: 'all mode' },
				{ env: 'core', expectedCount: CORE_COUNT, description: 'core mode' },
				{
					env: 'lean',
					expectedCount: CORE_COUNT,
					description: 'lean mode (alias)'
				},
				{
					env: 'standard',
					expectedCount: STANDARD_COUNT,
					description: 'standard mode'
				},
				{
					env: 'get_tasks,next_task',
					expectedCount: 2,
					description: 'custom list'
				},
				{
					env: 'invalid_tool',
					expectedCount: ALL_COUNT,
					description: 'invalid fallback'
				}
			];

			testCases.forEach((testCase) => {
				delete process.env.TASK_MASTER_TOOLS;
				if (testCase.env !== undefined) {
					process.env.TASK_MASTER_TOOLS = testCase.env;
				}

				mockServer.tools = [];
				mockServer.addTool.mockClear();

				registerTaskMasterTools(mockServer, testCase.env || 'all');

				expect(mockServer.addTool).toHaveBeenCalledTimes(
					testCase.expectedCount
				);
			});
		});

		it('should have optimal performance characteristics', () => {
			const startTime = Date.now();

			process.env.TASK_MASTER_TOOLS = 'all';

			registerTaskMasterTools(mockServer);

			const endTime = Date.now();
			const executionTime = endTime - startTime;

			expect(executionTime).toBeLessThan(100);
			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it('should validate token reduction claims', () => {
			expect(coreTools.length).toBeLessThan(standardTools.length);
			expect(standardTools.length).toBeLessThan(
				Object.keys(toolRegistry).length
			);

			expect(coreTools.length).toBe(CORE_COUNT);
			expect(standardTools.length).toBe(STANDARD_COUNT);
			expect(Object.keys(toolRegistry).length).toBe(ALL_COUNT);

			const allToolsCount = Object.keys(toolRegistry).length;
			const coreReduction =
				((allToolsCount - coreTools.length) / allToolsCount) * 100;
			const standardReduction =
				((allToolsCount - standardTools.length) / allToolsCount) * 100;

			expect(coreReduction).toBeGreaterThan(80);
			expect(standardReduction).toBeGreaterThan(50);
		});

		it('should maintain referential integrity of tool registry', () => {
			coreTools.forEach((tool) => {
				expect(standardTools).toContain(tool);
			});

			standardTools.forEach((tool) => {
				expect(toolRegistry).toHaveProperty(tool);
			});

			Object.keys(toolRegistry).forEach((tool) => {
				expect(typeof toolRegistry[tool]).toBe('function');
			});
		});

		it('should handle concurrent registration attempts', () => {
			process.env.TASK_MASTER_TOOLS = 'core';

			registerTaskMasterTools(mockServer, 'core');
			registerTaskMasterTools(mockServer, 'core');
			registerTaskMasterTools(mockServer, 'core');

			expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT * 3);
		});

		it('should validate all documented tool categories exist', () => {
			const allTools = Object.keys(toolRegistry);

			const projectSetupTools = allTools.filter((tool) =>
				['initialize_project', 'models', 'rules', 'parse_prd'].includes(tool)
			);
			expect(projectSetupTools.length).toBeGreaterThan(0);

			const taskManagementTools = allTools.filter((tool) =>
				['get_tasks', 'get_task', 'next_task', 'set_task_status'].includes(tool)
			);
			expect(taskManagementTools.length).toBeGreaterThan(0);

			const analysisTools = allTools.filter((tool) =>
				['analyze_project_complexity', 'complexity_report'].includes(tool)
			);
			expect(analysisTools.length).toBeGreaterThan(0);

			const tagManagementTools = allTools.filter((tool) =>
				['add_tag', 'delete_tag', 'list_tags', 'use_tag'].includes(tool)
			);
			expect(tagManagementTools.length).toBeGreaterThan(0);
		});

		it('should handle error conditions gracefully', () => {
			const problematicInputs = [
				'null',
				'undefined',
				' ',
				'\n\t',
				'special!@#$%^&*()characters',
				'very,very,very,very,very,very,very,long,comma,separated,list,with,invalid,tools,that,should,fallback,to,all'
			];

			problematicInputs.forEach((input) => {
				mockServer.tools = [];
				mockServer.addTool.mockClear();

				process.env.TASK_MASTER_TOOLS = input;

				expect(() => registerTaskMasterTools(mockServer)).not.toThrow();

				expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
			});
		});
	});
});
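The tests above pin down the selection behavior of `registerTaskMasterTools` without showing its implementation, so the following is a minimal, hypothetical sketch inferred only from the assertions: `core`/`lean` and `standard` map to fixed presets; `all`, an empty value, or a value with no recognizable tool names registers everything; and a custom comma-separated list is trimmed, deduplicated, and filtered against the registry. The names `resolveToolNames` and `registerSelectedTools`, the preset contents, and the signatures are assumptions for illustration, not the actual source.

```js
// Hypothetical sketch of the selection rules the tests exercise.
const CORE_TOOLS = ['get_tasks', 'next_task', 'get_task' /* ...remaining core tools */];
const STANDARD_TOOLS = [...CORE_TOOLS /* ...additional standard tools */];

function resolveToolNames(rawValue, toolRegistry) {
	const allTools = Object.keys(toolRegistry);
	const value = (rawValue ?? '').trim().toLowerCase();

	if (value === '' || value === 'all') return allTools; // unset/empty/all: everything
	if (value === 'core' || value === 'lean') return CORE_TOOLS; // lean is an alias
	if (value === 'standard') return STANDARD_TOOLS;

	// Custom list: trim entries, drop empties, dedupe, keep only known tools.
	const requested = [...new Set(value.split(',').map((t) => t.trim()).filter(Boolean))];
	const valid = requested.filter((name) => allTools.includes(name));

	// ',,,' or a list of only unknown names falls back to registering all tools.
	return valid.length > 0 ? valid : allTools;
}

function registerSelectedTools(server, toolRegistry, rawValue) {
	for (const name of resolveToolNames(rawValue, toolRegistry)) {
		toolRegistry[name](server); // each registry entry is assumed to be a register function
	}
}
```

Under these rules, `'invalid_tool,get_tasks,fake_tool,next_task'` registers two tools while `'completely_invalid'` falls back to all of them, which matches the call counts asserted above.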
@@ -1,223 +0,0 @@
/**
 * Unit tests for findProjectRoot() function
 * Tests the parent directory traversal functionality
 */

import { jest } from '@jest/globals';
import path from 'path';
import fs from 'fs';

// Import the function to test
import { findProjectRoot } from '../../src/utils/path-utils.js';

describe('findProjectRoot', () => {
	describe('Parent Directory Traversal', () => {
		test('should find .taskmaster in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// .taskmaster exists only at /project
				return normalized === path.normalize('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should find .git in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				return normalized === path.normalize('/project/.git');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should find package.json in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				return normalized === path.normalize('/project/package.json');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should traverse multiple levels to find project root', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// Only exists at /project, not in any subdirectories
				return normalized === path.normalize('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir/deep/nested');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should return current directory as fallback when no markers found', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// No project markers exist anywhere
			mockExistsSync.mockReturnValue(false);

			const result = findProjectRoot('/some/random/path');

			// Should fall back to process.cwd()
			expect(result).toBe(process.cwd());

			mockExistsSync.mockRestore();
		});

		test('should find markers at current directory before checking parent', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// .git exists at /project/subdir, .taskmaster exists at /project
				if (normalized.includes('/project/subdir/.git')) return true;
				if (normalized.includes('/project/.taskmaster')) return true;
				return false;
			});

			const result = findProjectRoot('/project/subdir');

			// Should find /project/subdir first because .git exists there,
			// even though .taskmaster is earlier in the marker array
			expect(result).toBe('/project/subdir');

			mockExistsSync.mockRestore();
		});

		test('should handle permission errors gracefully', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// Throw permission error for checks in /project/subdir
				if (normalized.startsWith('/project/subdir/')) {
					throw new Error('EACCES: permission denied');
				}
				// Return true only for .taskmaster at /project
				return normalized.includes('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir');

			// Should handle permission errors in subdirectory and traverse to parent
			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should detect filesystem root correctly', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// No markers exist
			mockExistsSync.mockReturnValue(false);

			const result = findProjectRoot('/');

			// Should stop at root and fall back to process.cwd()
			expect(result).toBe(process.cwd());

			mockExistsSync.mockRestore();
		});

		test('should recognize various project markers', () => {
			const projectMarkers = [
				'.taskmaster',
				'.git',
				'package.json',
				'Cargo.toml',
				'go.mod',
				'pyproject.toml',
				'requirements.txt',
				'Gemfile',
				'composer.json'
			];

			projectMarkers.forEach((marker) => {
				const mockExistsSync = jest.spyOn(fs, 'existsSync');

				mockExistsSync.mockImplementation((checkPath) => {
					const normalized = path.normalize(checkPath);
					return normalized.includes(`/project/${marker}`);
				});

				const result = findProjectRoot('/project/subdir');

				expect(result).toBe('/project');

				mockExistsSync.mockRestore();
			});
		});
	});

	describe('Edge Cases', () => {
		test('should handle empty string as startDir', () => {
			const result = findProjectRoot('');

			// Should use process.cwd() or fall back appropriately
			expect(typeof result).toBe('string');
			expect(result.length).toBeGreaterThan(0);
		});

		test('should handle relative paths', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				// Simulate .git existing in the resolved path
				return checkPath.includes('.git');
			});

			const result = findProjectRoot('./subdir');

			expect(typeof result).toBe('string');

			mockExistsSync.mockRestore();
		});

		test('should not exceed max depth limit', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// Track how many times existsSync is called
			let callCount = 0;
			mockExistsSync.mockImplementation(() => {
				callCount++;
				return false; // Never find a marker
			});

			// Create a very deep path
			const deepPath = '/a/'.repeat(100) + 'deep';
			const result = findProjectRoot(deepPath);

			// Should stop after max depth (50) and not check 100 levels
			// Each level checks multiple markers, so callCount will be high but bounded
			expect(callCount).toBeLessThan(1000); // Reasonable upper bound
			// With 18 markers and max depth of 50, expect around 900 calls maximum
			expect(callCount).toBeLessThanOrEqual(50 * 18);

			mockExistsSync.mockRestore();
		});
	});
});
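The file removed above documents how `findProjectRoot()` is expected to traverse parent directories. For reference, here is a minimal sketch consistent with those expectations; the marker list is an illustrative subset, and `PROJECT_MARKERS`, `MAX_DEPTH`, and `findProjectRootSketch` are assumed names (the tests only imply roughly 18 markers and a depth cap of 50), not the real `path-utils.js` implementation.

```js
import fs from 'fs';
import path from 'path';

// Illustrative subset; the removed tests imply ~18 markers in the real list.
const PROJECT_MARKERS = ['.taskmaster', '.git', 'package.json', 'Cargo.toml', 'go.mod'];
const MAX_DEPTH = 50;

function findProjectRootSketch(startDir = process.cwd()) {
	let current = path.resolve(startDir || process.cwd());

	for (let depth = 0; depth < MAX_DEPTH; depth++) {
		// A marker at the current level wins before any parent is considered.
		const hasMarker = PROJECT_MARKERS.some((marker) => {
			try {
				return fs.existsSync(path.join(current, marker));
			} catch {
				return false; // e.g. EACCES: treat as "not found" and keep walking up
			}
		});
		if (hasMarker) return current;

		const parent = path.dirname(current);
		if (parent === current) break; // reached the filesystem root
		current = parent;
	}

	return process.cwd(); // fallback when no marker is found within the depth limit
}
```

The properties the tests rely on are that a marker at the starting directory wins over one in a parent, that unreadable directories are skipped rather than aborting the walk, and that the search gives up after a bounded number of levels.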