Compare commits
13 Commits
task-maste
...
ralph/fix/
| Author | SHA1 | Date |
|---|---|---|
|  | 93097dfeb5 |  |
|  | fa2b63de40 |  |
|  | b11dacece9 |  |
|  | 01cbbe97f2 |  |
|  | a98d96ef04 |  |
|  | a69d8c91dc |  |
|  | 474a86cebb |  |
|  | 3283506444 |  |
|  | 9acb900153 |  |
|  | c4f5d89e72 |  |
|  | e308cf4f46 |  |
|  | 11b7354010 |  |
|  | 4c1ef2ca94 |  |
@@ -1,7 +0,0 @@
---
"task-master-ai": minor
---

Add changelog highlights to auto-update notifications

When the CLI auto-updates to a new version, it now displays a "What's New" section.
5  .changeset/dirty-hairs-know.md  Normal file
@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---

Improve auth token refresh flow

7  .changeset/fix-parent-directory-traversal.md  Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Enable Task Master commands to traverse parent directories to find project root from nested paths

Fixes #1301
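As a rough illustration of the traversal this changeset describes (a sketch under assumptions: the `.taskmaster` marker directory and the function name are illustrative, not taken from the actual implementation), the search amounts to walking upward from the starting directory until a project marker is found:

```ts
// Hypothetical sketch: walk up from a nested path until a project marker is found.
// The ".taskmaster" marker and function name are assumptions, not the real implementation.
import { existsSync } from 'node:fs';
import { dirname, join, resolve } from 'node:path';

export function findProjectRoot(startDir: string): string | null {
	let current = resolve(startDir);
	while (true) {
		// A directory containing ".taskmaster" is treated as the project root.
		if (existsSync(join(current, '.taskmaster'))) {
			return current;
		}
		const parent = dirname(current);
		if (parent === current) {
			// Reached the filesystem root without finding a marker.
			return null;
		}
		current = parent;
	}
}
```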
5  .changeset/fix-warning-box-alignment.md  Normal file
@@ -0,0 +1,5 @@
---
"@tm/cli": patch
---

Fix warning message box width to match dashboard box width for consistent UI alignment

35  .changeset/light-owls-stay.md  Normal file
@@ -0,0 +1,35 @@
---
"task-master-ai": minor
---

Add configurable MCP tool loading to optimize LLM context usage

You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.

**Configuration Options:**

- `all` (default): Load all 36 tools
- `core` or `lean`: Load only 7 essential tools for daily development
  - Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
  - Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)

**Example .mcp.json configuration:**

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "standard",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
@@ -1,47 +0,0 @@
---
"task-master-ai": minor
---

Add Claude Code plugin with marketplace distribution

This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.

## 🎉 New: Claude Code Plugin

Task Master AI commands and agents are now distributed as a proper Claude Code plugin:

- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration

**Installation:**

```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```

### Changed behavior: `rules add claude`

The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:

- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin

**Migration for Existing Users:**

If you previously used `rules add claude`:

1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories

**Why This Change?**

Claude Code plugins provide:

- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management

The plugin system is the future of Task Master AI integration with Claude Code!
@@ -1,17 +0,0 @@
---
"task-master-ai": minor
---

Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.

Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)

The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---

Fix cross-level task dependencies not being saved

Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.

@@ -1,21 +0,0 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.28.0",
    "@tm/cli": "",
    "docs": "0.0.5",
    "extension": "0.25.5",
    "@tm/ai-sdk-provider-grok-cli": "",
    "@tm/build-config": "",
    "@tm/claude-code-plugin": "0.0.1",
    "@tm/core": ""
  },
  "changesets": [
    "auto-update-changelog-highlights",
    "mean-planes-wave",
    "nice-ways-hope",
    "plain-falcons-serve",
    "smart-owls-relax"
  ]
}

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---

Improve refresh token when authenticating
@@ -1,16 +0,0 @@
---
"task-master-ai": minor
---

Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.

The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.

Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions

When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
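For illustration only, the fallback logic this changeset describes could look like the following sketch; the report shape, the field names (`taskId`, `recommendedSubtasks`), and the default count are assumptions, not the project's actual schema:

```ts
// Hypothetical sketch of the fallback described above; the report shape and
// field names are assumptions, not the project's actual complexity-report schema.
interface ComplexityEntry {
	taskId: number;
	recommendedSubtasks?: number;
}

export function resolveSubtaskCount(
	taskId: number,
	report: ComplexityEntry[] | null,
	defaultCount = 5 // assumed uniform default
): number {
	// Prefer the analyze-complexity recommendation when one exists for this task...
	const entry = report?.find((e) => e.taskId === taskId);
	if (entry?.recommendedSubtasks && entry.recommendedSubtasks > 0) {
		return entry.recommendedSubtasks;
	}
	// ...otherwise fall back to the uniform default.
	return defaultCount;
}
```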
@@ -14,4 +14,4 @@ OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE
VERTEX_PROJECT_ID=your-gcp-project-id
VERTEX_LOCATION=us-central1
# Optional: Path to service account credentials JSON file (alternative to API key)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json

89  CHANGELOG.md
@@ -1,5 +1,94 @@
# task-master-ai

## 0.29.0

### Minor Changes

- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications

  When the CLI auto-updates to a new version, it now displays a "What's New" section.

- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution

  This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.

  ## 🎉 New: Claude Code Plugin

  Task Master AI commands and agents are now distributed as a proper Claude Code plugin:

  - **49 slash commands** with clean naming (`/taskmaster:command-name`)
  - **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
  - **MCP server integration** for deep Claude Code integration

  **Installation:**

  ```bash
  /plugin marketplace add eyaltoledano/claude-task-master
  /plugin install taskmaster@taskmaster
  ```

  ### Changed behavior: `rules add claude`

  The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:

  - Shows plugin installation instructions
  - Only manages CLAUDE.md imports for agent instructions
  - Directs users to install the official plugin

  **Migration for Existing Users:**

  If you previously used `rules add claude`:

  1. The old commands in `.claude/commands/` will continue to work but won't receive updates
  2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
  3. Remove the old `.claude/commands/` and `.claude/agents/` directories

  **Why This Change?**

  Claude Code plugins provide:

  - ✅ Automatic updates when we release new features
  - ✅ Better command organization and naming
  - ✅ Seamless integration with Claude Code
  - ✅ No manual file copying or management

  The plugin system is the future of Task Master AI integration with Claude Code!

- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.

  Key features:

  - Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
  - Inline instructions at decision points guide AI through each section
  - Good/bad examples for immediate pattern matching
  - Flexible plain-text format with XML-style tags for parseability
  - Critical dependency-graph section ensures correct task ordering
  - Automatic inclusion during `task-master init`
  - Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
  - Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)

  The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.

- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.

  The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.

  Key improvements:

  - Automatic integration with complexity analysis reports
  - Tag-aware complexity report path resolution
  - Intelligent subtask count determination based on task complexity
  - Falls back to defaults when complexity analysis is unavailable
  - Enhanced logging for better visibility into expansion decisions

  When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.

### Patch Changes

- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved

  Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.

- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`4c1ef2c`](https://github.com/eyaltoledano/claude-task-master/commit/4c1ef2ca94411c53bcd2a78ec710b06c500236dd) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating

## 0.29.0-rc.1

### Patch Changes

- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`a6c5152`](https://github.com/eyaltoledano/claude-task-master/commit/a6c5152f20edd8717cf1aea34e7c178b1261aa99) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating

## 0.29.0-rc.0

### Minor Changes
74  README.md
@@ -119,6 +119,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
+       // "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -148,6 +149,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
+       // "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -196,7 +198,7 @@ Initialize taskmaster-ai in my project

#### 5. Make sure you have a PRD (Recommended)

- For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
+ For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`.
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`

An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
@@ -282,6 +284,76 @@ task-master generate
task-master rules add windsurf,roo,vscode
```

## Tool Loading Configuration

### Optimizing MCP Tool Loading

Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.

You can optimize performance by configuring the `TASK_MASTER_TOOLS` environment variable:

### Available Modes

| Mode | Tools | Context Usage | Use Case |
|------|-------|--------------|----------|
| `all` (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
| `standard` | 15 | ~10,000 tokens | Common task management operations |
| `core` (or `lean`) | 7 | ~5,000 tokens | Essential daily development workflow |
| `custom` | Variable | Variable | Comma-separated list of specific tools |

### Configuration Methods

#### Method 1: Environment Variable in MCP Configuration

Add `TASK_MASTER_TOOLS` to your MCP configuration file's `env` section:

```jsonc
{
  "mcpServers": { // or "servers" for VS Code
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
        "ANTHROPIC_API_KEY": "your-key-here",
        // ... other API keys
      }
    }
  }
}
```

#### Method 2: Claude Code CLI (One-Time Setup)

For Claude Code users, you can set the mode during installation:

```bash
# Core mode example (~70% token reduction)
claude mcp add task-master-ai --scope user \
  --env TASK_MASTER_TOOLS="core" \
  -- npx -y task-master-ai@latest

# Custom tools example
claude mcp add task-master-ai --scope user \
  --env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
  -- npx -y task-master-ai@latest
```

### Tool Sets Details

**Core Tools (7):** `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`

**Standard Tools (15):** All core tools plus `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`

**All Tools (36):** Complete set including project setup, task management, analysis, dependencies, tags, research, and more

### Recommendations

- **New users**: Start with `"standard"` mode for a good balance
- **Large projects**: Use `"core"` mode to minimize token usage
- **Complex workflows**: Use `"all"` mode or custom selection
- **Backward compatibility**: If not specified, defaults to `"all"` mode

## Claude Code Support

Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
@@ -11,6 +11,13 @@

### Patch Changes

- Updated dependencies []:
  - @tm/core@null

## null

### Patch Changes

- Updated dependencies []:
  - @tm/core@null
@@ -143,7 +143,7 @@ export class AuthCommand extends Command {
   */
  private async executeStatus(): Promise<void> {
    try {
-     const result = await this.displayStatus();
+     const result = this.displayStatus();
      this.setLastResult(result);
    } catch (error: any) {
      this.handleError(error);
@@ -171,8 +171,8 @@ export class AuthCommand extends Command {
  /**
   * Display authentication status
   */
- private async displayStatus(): Promise<AuthResult> {
-   const credentials = await this.authManager.getCredentials();
+ private displayStatus(): AuthResult {
+   const credentials = this.authManager.getCredentials();

    console.log(chalk.cyan('\n🔐 Authentication Status\n'));

@@ -325,7 +325,7 @@ export class AuthCommand extends Command {
    ]);

    if (!continueAuth) {
-     const credentials = await this.authManager.getCredentials();
+     const credentials = this.authManager.getCredentials();
      ui.displaySuccess('Using existing authentication');

      if (credentials) {
@@ -490,7 +490,7 @@ export class AuthCommand extends Command {
  /**
   * Get current credentials (for programmatic usage)
   */
- getCredentials(): Promise<AuthCredentials | null> {
+ getCredentials(): AuthCredentials | null {
    return this.authManager.getCredentials();
  }
@@ -115,7 +115,7 @@ export class ContextCommand extends Command {
   */
  private async executeShow(): Promise<void> {
    try {
-     const result = await this.displayContext();
+     const result = this.displayContext();
      this.setLastResult(result);
    } catch (error: any) {
      this.handleError(error);
@@ -126,7 +126,7 @@ export class ContextCommand extends Command {
  /**
   * Display current context
   */
- private async displayContext(): Promise<ContextResult> {
+ private displayContext(): ContextResult {
    // Check authentication first
    if (!this.authManager.isAuthenticated()) {
      console.log(chalk.yellow('✗ Not authenticated'));
@@ -139,7 +139,7 @@ export class ContextCommand extends Command {
      };
    }

-   const context = await this.authManager.getContext();
+   const context = this.authManager.getContext();

    console.log(chalk.cyan('\n🌍 Workspace Context\n'));

@@ -250,7 +250,7 @@ export class ContextCommand extends Command {
    ]);

    // Update context
-   await this.authManager.updateContext({
+   this.authManager.updateContext({
      orgId: selectedOrg.id,
      orgName: selectedOrg.name,
      // Clear brief when changing org
@@ -263,7 +263,7 @@ export class ContextCommand extends Command {
      return {
        success: true,
        action: 'select-org',
-       context: (await this.authManager.getContext()) || undefined,
+       context: this.authManager.getContext() || undefined,
        message: `Selected organization: ${selectedOrg.name}`
      };
    } catch (error) {
@@ -284,7 +284,7 @@ export class ContextCommand extends Command {
    }

    // Check if org is selected
-   const context = await this.authManager.getContext();
+   const context = this.authManager.getContext();
    if (!context?.orgId) {
      ui.displayError(
        'No organization selected. Run "tm context org" first.'
@@ -343,7 +343,7 @@ export class ContextCommand extends Command {
    if (selectedBrief) {
      // Update context with brief
      const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
-     await this.authManager.updateContext({
+     this.authManager.updateContext({
        briefId: selectedBrief.id,
        briefName: briefName
      });
@@ -353,12 +353,12 @@ export class ContextCommand extends Command {
      return {
        success: true,
        action: 'select-brief',
-       context: (await this.authManager.getContext()) || undefined,
+       context: this.authManager.getContext() || undefined,
        message: `Selected brief: ${selectedBrief.name}`
      };
    } else {
      // Clear brief selection
-     await this.authManager.updateContext({
+     this.authManager.updateContext({
        briefId: undefined,
        briefName: undefined
      });
@@ -368,7 +368,7 @@ export class ContextCommand extends Command {
      return {
        success: true,
        action: 'select-brief',
-       context: (await this.authManager.getContext()) || undefined,
+       context: this.authManager.getContext() || undefined,
        message: 'Cleared brief selection'
      };
    }
@@ -491,7 +491,7 @@ export class ContextCommand extends Command {

    // Update context: set org and brief
    const briefName = `Brief ${brief.id.slice(0, 8)}`;
-   await this.authManager.updateContext({
+   this.authManager.updateContext({
      orgId: brief.accountId,
      orgName,
      briefId: brief.id,
@@ -508,7 +508,7 @@ export class ContextCommand extends Command {
    this.setLastResult({
      success: true,
      action: 'set',
-     context: (await this.authManager.getContext()) || undefined,
+     context: this.authManager.getContext() || undefined,
      message: 'Context set from brief'
    });
  } catch (error: any) {
@@ -613,7 +613,7 @@ export class ContextCommand extends Command {
      };
    }

-   await this.authManager.updateContext(context);
+   this.authManager.updateContext(context);
    ui.displaySuccess('Context updated');

    // Display what was set
@@ -631,7 +631,7 @@ export class ContextCommand extends Command {
    return {
      success: true,
      action: 'set',
-     context: (await this.authManager.getContext()) || undefined,
+     context: this.authManager.getContext() || undefined,
      message: 'Context updated'
    };
  } catch (error) {
@@ -682,7 +682,7 @@ export class ContextCommand extends Command {
  /**
   * Get current context (for programmatic usage)
   */
- getContext(): Promise<UserContext | null> {
+ getContext(): UserContext | null {
    return this.authManager.getContext();
  }
@@ -6,7 +6,7 @@
import chalk from 'chalk';
import boxen from 'boxen';
import type { Task } from '@tm/core/types';
- import { getComplexityWithColor } from '../../utils/ui.js';
+ import { getComplexityWithColor, getBoxWidth } from '../../utils/ui.js';

/**
 * Next task display options
@@ -113,7 +113,7 @@ export function displayRecommendedNextTask(
    borderColor: '#FFA500', // Orange color
    title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
    titleAlignment: 'center',
-   width: process.stdout.columns * 0.97,
+   width: getBoxWidth(0.97),
    fullscreen: false
  })
);
@@ -5,6 +5,7 @@

import chalk from 'chalk';
import boxen from 'boxen';
+ import { getBoxWidth } from '../../utils/ui.js';

/**
 * Display suggested next steps section
@@ -24,7 +25,7 @@ export function displaySuggestedNextSteps(): void {
      margin: { top: 0, bottom: 1 },
      borderStyle: 'round',
      borderColor: 'gray',
-     width: process.stdout.columns * 0.97
+     width: getBoxWidth(0.97)
    }
  )
);
158  apps/cli/src/utils/ui.spec.ts  Normal file
@@ -0,0 +1,158 @@
/**
 * CLI UI utilities tests
 * Tests for apps/cli/src/utils/ui.ts
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import type { MockInstance } from 'vitest';
import { getBoxWidth } from './ui.js';

describe('CLI UI Utilities', () => {
  describe('getBoxWidth', () => {
    let columnsSpy: MockInstance;
    let originalDescriptor: PropertyDescriptor | undefined;

    beforeEach(() => {
      // Store original descriptor if it exists
      originalDescriptor = Object.getOwnPropertyDescriptor(
        process.stdout,
        'columns'
      );

      // If columns doesn't exist or isn't a getter, define it as one
      if (!originalDescriptor || !originalDescriptor.get) {
        const currentValue = process.stdout.columns || 80;
        Object.defineProperty(process.stdout, 'columns', {
          get() {
            return currentValue;
          },
          configurable: true
        });
      }

      // Now spy on the getter
      columnsSpy = vi.spyOn(process.stdout, 'columns', 'get');
    });

    afterEach(() => {
      // Restore the spy
      columnsSpy.mockRestore();

      // Restore original descriptor or delete the property
      if (originalDescriptor) {
        Object.defineProperty(process.stdout, 'columns', originalDescriptor);
      } else {
        delete (process.stdout as any).columns;
      }
    });

    it('should calculate width as percentage of terminal width', () => {
      columnsSpy.mockReturnValue(100);
      const width = getBoxWidth(0.9, 40);
      expect(width).toBe(90);
    });

    it('should use default percentage of 0.9 when not specified', () => {
      columnsSpy.mockReturnValue(100);
      const width = getBoxWidth();
      expect(width).toBe(90);
    });

    it('should use default minimum width of 40 when not specified', () => {
      columnsSpy.mockReturnValue(30);
      const width = getBoxWidth();
      expect(width).toBe(40); // Should enforce minimum
    });

    it('should enforce minimum width when terminal is too narrow', () => {
      columnsSpy.mockReturnValue(50);
      const width = getBoxWidth(0.9, 60);
      expect(width).toBe(60); // Should use minWidth instead of 45
    });

    it('should handle undefined process.stdout.columns', () => {
      columnsSpy.mockReturnValue(undefined);
      const width = getBoxWidth(0.9, 40);
      // Should fall back to 80 columns: Math.floor(80 * 0.9) = 72
      expect(width).toBe(72);
    });

    it('should handle custom percentage values', () => {
      columnsSpy.mockReturnValue(100);
      expect(getBoxWidth(0.95, 40)).toBe(95);
      expect(getBoxWidth(0.8, 40)).toBe(80);
      expect(getBoxWidth(0.5, 40)).toBe(50);
    });

    it('should handle custom minimum width values', () => {
      columnsSpy.mockReturnValue(60);
      expect(getBoxWidth(0.9, 70)).toBe(70); // 60 * 0.9 = 54, but min is 70
      expect(getBoxWidth(0.9, 50)).toBe(54); // 60 * 0.9 = 54, min is 50
    });

    it('should floor the calculated width', () => {
      columnsSpy.mockReturnValue(99);
      const width = getBoxWidth(0.9, 40);
      // 99 * 0.9 = 89.1, should floor to 89
      expect(width).toBe(89);
    });

    it('should match warning box width calculation', () => {
      // Test the specific case from displayWarning()
      columnsSpy.mockReturnValue(80);
      const width = getBoxWidth(0.9, 40);
      expect(width).toBe(72);
    });

    it('should match table width calculation', () => {
      // Test the specific case from createTaskTable()
      columnsSpy.mockReturnValue(111);
      const width = getBoxWidth(0.9, 100);
      // 111 * 0.9 = 99.9, floor to 99, but max(99, 100) = 100
      expect(width).toBe(100);
    });

    it('should match recommended task box width calculation', () => {
      // Test the specific case from displayRecommendedNextTask()
      columnsSpy.mockReturnValue(120);
      const width = getBoxWidth(0.97, 40);
      // 120 * 0.97 = 116.4, floor to 116
      expect(width).toBe(116);
    });

    it('should handle edge case of zero terminal width', () => {
      columnsSpy.mockReturnValue(0);
      const width = getBoxWidth(0.9, 40);
      // When columns is 0, it uses fallback of 80: Math.floor(80 * 0.9) = 72
      expect(width).toBe(72);
    });

    it('should handle very large terminal widths', () => {
      columnsSpy.mockReturnValue(1000);
      const width = getBoxWidth(0.9, 40);
      expect(width).toBe(900);
    });

    it('should handle very small percentages', () => {
      columnsSpy.mockReturnValue(100);
      const width = getBoxWidth(0.1, 5);
      // 100 * 0.1 = 10, which is greater than min 5
      expect(width).toBe(10);
    });

    it('should handle percentage of 1.0 (100%)', () => {
      columnsSpy.mockReturnValue(80);
      const width = getBoxWidth(1.0, 40);
      expect(width).toBe(80);
    });

    it('should consistently return same value for same inputs', () => {
      columnsSpy.mockReturnValue(100);
      const width1 = getBoxWidth(0.9, 40);
      const width2 = getBoxWidth(0.9, 40);
      const width3 = getBoxWidth(0.9, 40);
      expect(width1).toBe(width2);
      expect(width2).toBe(width3);
    });
  });
});
@@ -126,6 +126,20 @@ export function getComplexityWithScore(complexity: number | undefined): string {
  return color(`${complexity}/10 (${label})`);
}

+ /**
+  * Calculate box width as percentage of terminal width
+  * @param percentage - Percentage of terminal width to use (default: 0.9)
+  * @param minWidth - Minimum width to enforce (default: 40)
+  * @returns Calculated box width
+  */
+ export function getBoxWidth(
+   percentage: number = 0.9,
+   minWidth: number = 40
+ ): number {
+   const terminalWidth = process.stdout.columns || 80;
+   return Math.max(Math.floor(terminalWidth * percentage), minWidth);
+ }

/**
 * Truncate text to specified length
 */
@@ -176,6 +190,8 @@ export function displayBanner(title: string = 'Task Master'): void {
 * Display an error message (matches scripts/modules/ui.js style)
 */
export function displayError(message: string, details?: string): void {
+ const boxWidth = getBoxWidth();
+
  console.error(
    boxen(
      chalk.red.bold('X Error: ') +
@@ -184,7 +200,8 @@ export function displayError(message: string, details?: string): void {
      {
        padding: 1,
        borderStyle: 'round',
-       borderColor: 'red'
+       borderColor: 'red',
+       width: boxWidth
      }
    )
  );
@@ -194,13 +211,16 @@
 * Display a success message
 */
export function displaySuccess(message: string): void {
+ const boxWidth = getBoxWidth();
+
  console.log(
    boxen(
      chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
      {
        padding: 1,
        borderStyle: 'round',
-       borderColor: 'green'
+       borderColor: 'green',
+       width: boxWidth
      }
    )
  );
@@ -210,11 +230,14 @@
 * Display a warning message
 */
export function displayWarning(message: string): void {
+ const boxWidth = getBoxWidth();
+
  console.log(
    boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
      padding: 1,
      borderStyle: 'round',
-     borderColor: 'yellow'
+     borderColor: 'yellow',
+     width: boxWidth
    })
  );
}
@@ -223,11 +246,14 @@
 * Display info message
 */
export function displayInfo(message: string): void {
+ const boxWidth = getBoxWidth();
+
  console.log(
    boxen(chalk.blue.bold('i ') + chalk.white(message), {
      padding: 1,
      borderStyle: 'round',
-     borderColor: 'blue'
+     borderColor: 'blue',
+     width: boxWidth
    })
  );
}
@@ -282,23 +308,23 @@ export function createTaskTable(
  } = options || {};

  // Calculate dynamic column widths based on terminal width
- const terminalWidth = process.stdout.columns * 0.9 || 100;
+ const tableWidth = getBoxWidth(0.9, 100);
  // Adjust column widths to better match the original layout
  const baseColWidths = showComplexity
    ? [
-       Math.floor(terminalWidth * 0.1),
-       Math.floor(terminalWidth * 0.4),
-       Math.floor(terminalWidth * 0.15),
-       Math.floor(terminalWidth * 0.1),
-       Math.floor(terminalWidth * 0.2),
-       Math.floor(terminalWidth * 0.1)
+       Math.floor(tableWidth * 0.1),
+       Math.floor(tableWidth * 0.4),
+       Math.floor(tableWidth * 0.15),
+       Math.floor(tableWidth * 0.1),
+       Math.floor(tableWidth * 0.2),
+       Math.floor(tableWidth * 0.1)
      ] // ID, Title, Status, Priority, Dependencies, Complexity
    : [
-       Math.floor(terminalWidth * 0.08),
-       Math.floor(terminalWidth * 0.4),
-       Math.floor(terminalWidth * 0.18),
-       Math.floor(terminalWidth * 0.12),
-       Math.floor(terminalWidth * 0.2)
+       Math.floor(tableWidth * 0.08),
+       Math.floor(tableWidth * 0.4),
+       Math.floor(tableWidth * 0.18),
+       Math.floor(tableWidth * 0.12),
+       Math.floor(tableWidth * 0.2)
      ]; // ID, Title, Status, Priority, Dependencies

  const headers = [
@@ -1,5 +1,7 @@
# docs

## 0.0.6

## 0.0.5

## 0.0.4
@@ -13,6 +13,126 @@ The MCP interface is built on top of the `fastmcp` library and registers a set o

Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.

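For readers unfamiliar with that registration pattern, a minimal hypothetical sketch of a tool definition is shown below; the tool name, parameter, and return value are illustrative and not taken from the project's actual source:

```ts
// Hypothetical sketch of an MCP tool registered with fastmcp + zod.
// Names and the returned payload are illustrative assumptions, not the real code.
import { FastMCP } from 'fastmcp';
import { z } from 'zod';

const server = new FastMCP({ name: 'task-master-ai', version: '0.29.0' });

server.addTool({
	name: 'get_task',
	description: 'Get detailed information about a specific task',
	parameters: z.object({
		id: z.string().describe('Task ID, e.g. "2" or "2.2"')
	}),
	execute: async (args) => {
		// The real server would call the corresponding core logic function here.
		return JSON.stringify({ requestedId: args.id });
	}
});

server.start({ transportType: 'stdio' });
```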
## Configurable Tool Loading

To optimize LLM context usage, you can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This is particularly useful when working with LLMs that have context limits or when you only need a subset of tools.

### Configuration Modes

#### All Tools (Default)
Loads all 36 available tools. Use when you need full Task Master functionality.

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "all",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

If `TASK_MASTER_TOOLS` is not set, all tools are loaded by default.

#### Core Tools (Lean Mode)
Loads only 7 essential tools for daily development. Ideal for minimal context usage.

**Core tools included:**
- `get_tasks` - List all tasks
- `next_task` - Find the next task to work on
- `get_task` - Get detailed task information
- `set_task_status` - Update task status
- `update_subtask` - Add implementation notes
- `parse_prd` - Generate tasks from PRD
- `expand_task` - Break down tasks into subtasks

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "core",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

You can also use `"lean"` as an alias for `"core"`.

#### Standard Tools
Loads 15 commonly used tools. Balances functionality with context efficiency.

**Standard tools include all core tools plus:**
- `initialize_project` - Set up new projects
- `analyze_project_complexity` - Analyze task complexity
- `expand_all` - Expand all eligible tasks
- `add_subtask` - Add subtasks manually
- `remove_task` - Remove tasks
- `generate` - Generate task markdown files
- `add_task` - Create new tasks
- `complexity_report` - View complexity analysis

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "standard",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

#### Custom Tool Selection
Specify exactly which tools to load using a comma-separated list. Tool names are case-insensitive and support both underscores and hyphens.

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "get_tasks,next_task,set_task_status,update_subtask",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

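A minimal sketch of the name handling described above (assumed helper names; not the server's actual implementation): lowercase each name, treat hyphens as underscores, and drop anything that does not match a known tool.

```ts
// Illustrative sketch only: normalize user-supplied tool names so that
// "GET-TASKS", "get_tasks", and "Get_Tasks" all resolve to the same entry.
const KNOWN_TOOLS = ['get_tasks', 'next_task', 'get_task', 'set_task_status']; // subset, for illustration

function normalizeToolName(name: string): string {
	return name.trim().toLowerCase().replace(/-/g, '_');
}

export function parseCustomToolList(value: string): string[] {
	const requested = value.split(',').map(normalizeToolName).filter(Boolean);
	const valid: string[] = [];
	for (const name of requested) {
		if (KNOWN_TOOLS.includes(name)) {
			valid.push(name);
		} else {
			console.warn(`Ignoring unknown tool name: ${name}`); // assumed warning behavior
		}
	}
	return valid;
}

// Example: parseCustomToolList('Get-Tasks, NEXT_TASK, bogus') -> ['get_tasks', 'next_task']
```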
### Choosing the Right Configuration

- **Use `core`/`lean`**: When working with basic task management workflows or when context limits are strict
- **Use `standard`**: For most development workflows that include task creation and analysis
- **Use `all`**: When you need full functionality including tag management, dependencies, and advanced features
- **Use custom list**: When you have specific tool requirements or want to experiment with minimal sets

### Verification

When the MCP server starts, it logs which tools were loaded:

```
Task Master MCP Server starting...
Tool mode configuration: standard
Loading standard tools
Registering 15 MCP tools (mode: standard)
Successfully registered 15/15 tools
```

## Tool Categories

The MCP tools can be categorized in the same way as the core functionalities:
@@ -37,6 +37,25 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
}
```

<Tip>
**Optimize Context Usage**: You can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This helps reduce LLM context usage by only loading the tools you need.

Options:
- `all` (default) - All 36 tools
- `standard` - 15 commonly used tools
- `core` or `lean` - 7 essential tools

Example:
```json
"env": {
  "TASK_MASTER_TOOLS": "standard",
  "ANTHROPIC_API_KEY": "your_key_here"
}
```

See the [MCP Tools documentation](/capabilities/mcp#configurable-tool-loading) for details.
</Tip>

### CLI Usage: `.env` File

Create a `.env` file in your project root and include the keys for the providers you plan to use:
@@ -1,6 +1,6 @@
{
  "name": "docs",
- "version": "0.0.5",
+ "version": "0.0.6",
  "private": true,
  "description": "Task Master documentation powered by Mintlify",
  "scripts": {
@@ -1,5 +1,7 @@
# Change Log

## 0.25.6

## 0.25.6-rc.0

### Patch Changes
@@ -3,7 +3,7 @@
  "private": true,
  "displayName": "TaskMaster",
  "description": "A visual Kanban board interface for TaskMaster projects in VS Code",
- "version": "0.25.6-rc.0",
+ "version": "0.25.6",
  "publisher": "Hamster",
  "icon": "assets/icon.png",
  "engines": {
@@ -59,6 +59,76 @@ Taskmaster uses two primary methods for configuration:
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.

## MCP Tool Loading Configuration

### TASK_MASTER_TOOLS Environment Variable

The `TASK_MASTER_TOOLS` environment variable controls which tools are loaded by the Task Master MCP server. This allows you to optimize token usage based on your workflow needs.

> Note
> Prefer setting `TASK_MASTER_TOOLS` in your MCP client's `env` block (e.g., `.cursor/mcp.json`) or in CI/deployment env. The `.env` file is reserved for API keys/endpoints; avoid persisting non-secret settings there.

#### Configuration Options

- **`all`** (default): Loads all 36 available tools (~21,000 tokens)
  - Best for: Users who need the complete feature set
  - Use when: Working with complex projects requiring all Task Master features
  - Backward compatibility: This is the default to maintain compatibility with existing installations

- **`standard`**: Loads 15 commonly used tools (~10,000 tokens, 50% reduction)
  - Best for: Regular task management workflows
  - Tools included: All core tools plus project initialization, complexity analysis, task generation, and more
  - Use when: You need a balanced set of features with reduced token usage

- **`core`** (or `lean`): Loads 7 essential tools (~5,000 tokens, 70% reduction)
  - Best for: Daily development with minimal token overhead
  - Tools included: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
  - Use when: Working in large contexts where token usage is critical
  - Note: "lean" is an alias for "core" (same tools, token estimate and recommended use). You can refer to it as either "core" or "lean" when configuring.

- **Custom list**: Comma-separated list of specific tool names
  - Best for: Specialized workflows requiring specific tools
  - Example: `"get_tasks,next_task,set_task_status"`
  - Use when: You know exactly which tools you need

#### How to Configure

1. **In MCP configuration files** (`.cursor/mcp.json`, `.vscode/mcp.json`, etc.) - **Recommended**:

   ```jsonc
   {
     "mcpServers": {
       "task-master-ai": {
         "env": {
           "TASK_MASTER_TOOLS": "standard", // Set tool loading mode
           // API keys can still use .env for security
         }
       }
     }
   }
   ```

2. **Via Claude Code CLI**:

   ```bash
   claude mcp add task-master-ai --scope user \
     --env TASK_MASTER_TOOLS="core" \
     -- npx -y task-master-ai@latest
   ```

3. **In CI/deployment environment variables**:

   ```bash
   export TASK_MASTER_TOOLS="standard"
   node mcp-server/server.js
   ```

#### Tool Loading Behavior

- When `TASK_MASTER_TOOLS` is unset or empty, the system defaults to `"all"`
- Invalid tool names in a user-specified list are ignored (a warning is emitted for each)
- If every tool name in a custom list is invalid, the system falls back to `"all"`
- Tool names are case-insensitive (e.g., `"CORE"`, `"core"`, and `"Core"` are treated identically)

## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
|
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
|
||||||
|
|
||||||
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
|
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
|
@@ -223,10 +293,10 @@ node scripts/init.js
 ```bash
 # Set MCP provider for main role
 task-master models set-main --provider mcp --model claude-3-5-sonnet-20241022

 # Set MCP provider for research role
 task-master models set-research --provider mcp --model claude-3-opus-20240229

 # Verify configuration
 task-master models list
 ```

@@ -357,7 +427,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
     "temperature": 0.7
   },
   "fallback": {
     "provider": "azure",
     "modelId": "gpt-4o-mini",
     "maxTokens": 10000,
     "temperature": 0.7

@@ -376,7 +446,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
 "models": {
   "main": {
     "provider": "azure",
     "modelId": "gpt-4o",
     "maxTokens": 16000,
     "temperature": 0.7,
     "baseURL": "https://your-resource-name.azure.com/openai/deployments"

@@ -390,7 +460,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
   "fallback": {
     "provider": "azure",
     "modelId": "gpt-4o-mini",
     "maxTokens": 10000,
     "temperature": 0.7,
     "baseURL": "https://your-resource-name.azure.com/openai/deployments"
   }

@@ -402,7 +472,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
 ```bash
 # In .env file
 AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here

 # Optional: Override endpoint for all Azure models
 AZURE_OPENAI_ENDPOINT=https://your-resource-name.azure.com/openai/deployments
 ```
@@ -4,12 +4,14 @@ import dotenv from 'dotenv';
 import { fileURLToPath } from 'url';
 import fs from 'fs';
 import logger from './logger.js';
-import { registerTaskMasterTools } from './tools/index.js';
+import {
+  registerTaskMasterTools,
+  getToolsConfiguration
+} from './tools/index.js';
 import ProviderRegistry from '../../src/provider-registry/index.js';
 import { MCPProvider } from './providers/mcp-provider.js';
 import packageJson from '../../package.json' with { type: 'json' };

-// Load environment variables
 dotenv.config();

 // Constants

@@ -29,12 +31,10 @@ class TaskMasterMCPServer {
   this.server = new FastMCP(this.options);
   this.initialized = false;

-  // Bind methods
   this.init = this.init.bind(this);
   this.start = this.start.bind(this);
   this.stop = this.stop.bind(this);

-  // Setup logging
   this.logger = logger;
 }

@@ -44,8 +44,34 @@ class TaskMasterMCPServer {
 async init() {
   if (this.initialized) return;

-  // Pass the manager instance to the tool registration function
-  registerTaskMasterTools(this.server, this.asyncManager);
+  const normalizedToolMode = getToolsConfiguration();
+
+  this.logger.info('Task Master MCP Server starting...');
+  this.logger.info(`Tool mode configuration: ${normalizedToolMode}`);
+
+  const registrationResult = registerTaskMasterTools(
+    this.server,
+    normalizedToolMode
+  );
+
+  this.logger.info(
+    `Normalized tool mode: ${registrationResult.normalizedMode}`
+  );
+  this.logger.info(
+    `Registered ${registrationResult.registeredTools.length} tools successfully`
+  );
+
+  if (registrationResult.registeredTools.length > 0) {
+    this.logger.debug(
+      `Registered tools: ${registrationResult.registeredTools.join(', ')}`
+    );
+  }
+
+  if (registrationResult.failedTools.length > 0) {
+    this.logger.warn(
+      `Failed to register ${registrationResult.failedTools.length} tools: ${registrationResult.failedTools.join(', ')}`
+    );
+  }
+
   this.initialized = true;

@@ -3,109 +3,238 @@
|
|||||||
* Export all Task Master CLI tools for MCP server
|
* Export all Task Master CLI tools for MCP server
|
||||||
*/
|
*/
|
||||||
|
|
||||||
import { registerListTasksTool } from './get-tasks.js';
|
|
||||||
import logger from '../logger.js';
|
import logger from '../logger.js';
|
||||||
import { registerSetTaskStatusTool } from './set-task-status.js';
|
import {
|
||||||
import { registerParsePRDTool } from './parse-prd.js';
|
toolRegistry,
|
||||||
import { registerUpdateTool } from './update.js';
|
coreTools,
|
||||||
import { registerUpdateTaskTool } from './update-task.js';
|
standardTools,
|
||||||
import { registerUpdateSubtaskTool } from './update-subtask.js';
|
getAvailableTools,
|
||||||
import { registerGenerateTool } from './generate.js';
|
getToolRegistration,
|
||||||
import { registerShowTaskTool } from './get-task.js';
|
isValidTool
|
||||||
import { registerNextTaskTool } from './next-task.js';
|
} from './tool-registry.js';
|
||||||
import { registerExpandTaskTool } from './expand-task.js';
|
|
||||||
import { registerAddTaskTool } from './add-task.js';
|
|
||||||
import { registerAddSubtaskTool } from './add-subtask.js';
|
|
||||||
import { registerRemoveSubtaskTool } from './remove-subtask.js';
|
|
||||||
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
|
|
||||||
import { registerClearSubtasksTool } from './clear-subtasks.js';
|
|
||||||
import { registerExpandAllTool } from './expand-all.js';
|
|
||||||
import { registerRemoveDependencyTool } from './remove-dependency.js';
|
|
||||||
import { registerValidateDependenciesTool } from './validate-dependencies.js';
|
|
||||||
import { registerFixDependenciesTool } from './fix-dependencies.js';
|
|
||||||
import { registerComplexityReportTool } from './complexity-report.js';
|
|
||||||
import { registerAddDependencyTool } from './add-dependency.js';
|
|
||||||
import { registerRemoveTaskTool } from './remove-task.js';
|
|
||||||
import { registerInitializeProjectTool } from './initialize-project.js';
|
|
||||||
import { registerModelsTool } from './models.js';
|
|
||||||
import { registerMoveTaskTool } from './move-task.js';
|
|
||||||
import { registerResponseLanguageTool } from './response-language.js';
|
|
||||||
import { registerAddTagTool } from './add-tag.js';
|
|
||||||
import { registerDeleteTagTool } from './delete-tag.js';
|
|
||||||
import { registerListTagsTool } from './list-tags.js';
|
|
||||||
import { registerUseTagTool } from './use-tag.js';
|
|
||||||
import { registerRenameTagTool } from './rename-tag.js';
|
|
||||||
import { registerCopyTagTool } from './copy-tag.js';
|
|
||||||
import { registerResearchTool } from './research.js';
|
|
||||||
import { registerRulesTool } from './rules.js';
|
|
||||||
import { registerScopeUpTool } from './scope-up.js';
|
|
||||||
import { registerScopeDownTool } from './scope-down.js';
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Register all Task Master tools with the MCP server
|
* Helper function to safely read and normalize the TASK_MASTER_TOOLS environment variable
|
||||||
* @param {Object} server - FastMCP server instance
|
* @returns {string} The tools configuration string, defaults to 'all'
|
||||||
*/
|
*/
|
||||||
export function registerTaskMasterTools(server) {
|
export function getToolsConfiguration() {
|
||||||
|
const rawValue = process.env.TASK_MASTER_TOOLS;
|
||||||
|
|
||||||
|
if (!rawValue || rawValue.trim() === '') {
|
||||||
|
logger.debug('No TASK_MASTER_TOOLS env var found, defaulting to "all"');
|
||||||
|
return 'all';
|
||||||
|
}
|
||||||
|
|
||||||
|
const normalizedValue = rawValue.trim();
|
||||||
|
logger.debug(`TASK_MASTER_TOOLS env var: "${normalizedValue}"`);
|
||||||
|
return normalizedValue;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Register Task Master tools with the MCP server
|
||||||
|
* Supports selective tool loading via TASK_MASTER_TOOLS environment variable
|
||||||
|
* @param {Object} server - FastMCP server instance
|
||||||
|
* @param {string} toolMode - The tool mode configuration (defaults to 'all')
|
||||||
|
* @returns {Object} Object containing registered tools, failed tools, and normalized mode
|
||||||
|
*/
|
||||||
|
export function registerTaskMasterTools(server, toolMode = 'all') {
|
||||||
|
const registeredTools = [];
|
||||||
|
const failedTools = [];
|
||||||
|
|
||||||
try {
|
try {
|
||||||
// Register each tool in a logical workflow order
|
const enabledTools = toolMode.trim();
|
||||||
|
let toolsToRegister = [];
|
||||||
|
|
||||||
// Group 1: Initialization & Setup
|
const lowerCaseConfig = enabledTools.toLowerCase();
|
||||||
registerInitializeProjectTool(server);
|
|
||||||
registerModelsTool(server);
|
|
||||||
registerRulesTool(server);
|
|
||||||
registerParsePRDTool(server);
|
|
||||||
|
|
||||||
// Group 2: Task Analysis & Expansion
|
switch (lowerCaseConfig) {
|
||||||
registerAnalyzeProjectComplexityTool(server);
|
case 'all':
|
||||||
registerExpandTaskTool(server);
|
toolsToRegister = Object.keys(toolRegistry);
|
||||||
registerExpandAllTool(server);
|
logger.info('Loading all available tools');
|
||||||
registerScopeUpTool(server);
|
break;
|
||||||
registerScopeDownTool(server);
|
case 'core':
|
||||||
|
case 'lean':
|
||||||
|
toolsToRegister = coreTools;
|
||||||
|
logger.info('Loading core tools only');
|
||||||
|
break;
|
||||||
|
case 'standard':
|
||||||
|
toolsToRegister = standardTools;
|
||||||
|
logger.info('Loading standard tools');
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
const requestedTools = enabledTools
|
||||||
|
.split(',')
|
||||||
|
.map((t) => t.trim())
|
||||||
|
.filter((t) => t.length > 0);
|
||||||
|
|
||||||
// Group 3: Task Listing & Viewing
|
const uniqueTools = new Set();
|
||||||
registerListTasksTool(server);
|
const unknownTools = [];
|
||||||
registerShowTaskTool(server);
|
|
||||||
registerNextTaskTool(server);
|
|
||||||
registerComplexityReportTool(server);
|
|
||||||
|
|
||||||
// Group 4: Task Status & Management
|
const aliasMap = {
|
||||||
registerSetTaskStatusTool(server);
|
response_language: 'response-language'
|
||||||
registerGenerateTool(server);
|
};
|
||||||
|
|
||||||
// Group 5: Task Creation & Modification
|
for (const toolName of requestedTools) {
|
||||||
registerAddTaskTool(server);
|
let resolvedName = null;
|
||||||
registerAddSubtaskTool(server);
|
const lowerToolName = toolName.toLowerCase();
|
||||||
registerUpdateTool(server);
|
|
||||||
registerUpdateTaskTool(server);
|
|
||||||
registerUpdateSubtaskTool(server);
|
|
||||||
registerRemoveTaskTool(server);
|
|
||||||
registerRemoveSubtaskTool(server);
|
|
||||||
registerClearSubtasksTool(server);
|
|
||||||
registerMoveTaskTool(server);
|
|
||||||
|
|
||||||
// Group 6: Dependency Management
|
if (aliasMap[lowerToolName]) {
|
||||||
registerAddDependencyTool(server);
|
const aliasTarget = aliasMap[lowerToolName];
|
||||||
registerRemoveDependencyTool(server);
|
for (const registryKey of Object.keys(toolRegistry)) {
|
||||||
registerValidateDependenciesTool(server);
|
if (registryKey.toLowerCase() === aliasTarget.toLowerCase()) {
|
||||||
registerFixDependenciesTool(server);
|
resolvedName = registryKey;
|
||||||
registerResponseLanguageTool(server);
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// Group 7: Tag Management
|
if (!resolvedName) {
|
||||||
registerListTagsTool(server);
|
for (const registryKey of Object.keys(toolRegistry)) {
|
||||||
registerAddTagTool(server);
|
if (registryKey.toLowerCase() === lowerToolName) {
|
||||||
registerDeleteTagTool(server);
|
resolvedName = registryKey;
|
||||||
registerUseTagTool(server);
|
break;
|
||||||
registerRenameTagTool(server);
|
}
|
||||||
registerCopyTagTool(server);
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// Group 8: Research Features
|
if (!resolvedName) {
|
||||||
registerResearchTool(server);
|
const withHyphens = lowerToolName.replace(/_/g, '-');
|
||||||
|
for (const registryKey of Object.keys(toolRegistry)) {
|
||||||
|
if (registryKey.toLowerCase() === withHyphens) {
|
||||||
|
resolvedName = registryKey;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!resolvedName) {
|
||||||
|
const withUnderscores = lowerToolName.replace(/-/g, '_');
|
||||||
|
for (const registryKey of Object.keys(toolRegistry)) {
|
||||||
|
if (registryKey.toLowerCase() === withUnderscores) {
|
||||||
|
resolvedName = registryKey;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (resolvedName) {
|
||||||
|
uniqueTools.add(resolvedName);
|
||||||
|
logger.debug(`Resolved tool "${toolName}" to "${resolvedName}"`);
|
||||||
|
} else {
|
||||||
|
unknownTools.push(toolName);
|
||||||
|
logger.warn(`Unknown tool specified: "${toolName}"`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
toolsToRegister = Array.from(uniqueTools);
|
||||||
|
|
||||||
|
if (unknownTools.length > 0) {
|
||||||
|
logger.warn(`Unknown tools: ${unknownTools.join(', ')}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (toolsToRegister.length === 0) {
|
||||||
|
logger.warn(
|
||||||
|
`No valid tools found in custom list. Loading all tools as fallback.`
|
||||||
|
);
|
||||||
|
toolsToRegister = Object.keys(toolRegistry);
|
||||||
|
} else {
|
||||||
|
logger.info(
|
||||||
|
`Loading ${toolsToRegister.length} custom tools from list (${uniqueTools.size} unique after normalization)`
|
||||||
|
);
|
||||||
|
}
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
logger.info(
|
||||||
|
`Registering ${toolsToRegister.length} MCP tools (mode: ${enabledTools})`
|
||||||
|
);
|
||||||
|
|
||||||
|
toolsToRegister.forEach((toolName) => {
|
||||||
|
try {
|
||||||
|
const registerFunction = getToolRegistration(toolName);
|
||||||
|
if (registerFunction) {
|
||||||
|
registerFunction(server);
|
||||||
|
logger.debug(`Registered tool: ${toolName}`);
|
||||||
|
registeredTools.push(toolName);
|
||||||
|
} else {
|
||||||
|
logger.warn(`Tool ${toolName} not found in registry`);
|
||||||
|
failedTools.push(toolName);
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
if (error.message && error.message.includes('already registered')) {
|
||||||
|
logger.debug(`Tool ${toolName} already registered, skipping`);
|
||||||
|
registeredTools.push(toolName);
|
||||||
|
} else {
|
||||||
|
logger.error(`Failed to register tool ${toolName}: ${error.message}`);
|
||||||
|
failedTools.push(toolName);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
logger.info(
|
||||||
|
`Successfully registered ${registeredTools.length}/${toolsToRegister.length} tools`
|
||||||
|
);
|
||||||
|
if (failedTools.length > 0) {
|
||||||
|
logger.warn(`Failed tools: ${failedTools.join(', ')}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
registeredTools,
|
||||||
|
failedTools,
|
||||||
|
normalizedMode: lowerCaseConfig
|
||||||
|
};
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error(`Error registering Task Master tools: ${error.message}`);
|
logger.error(
|
||||||
throw error;
|
`Error parsing TASK_MASTER_TOOLS environment variable: ${error.message}`
|
||||||
|
);
|
||||||
|
logger.info('Falling back to loading all tools');
|
||||||
|
|
||||||
|
const fallbackTools = Object.keys(toolRegistry);
|
||||||
|
for (const toolName of fallbackTools) {
|
||||||
|
const registerFunction = getToolRegistration(toolName);
|
||||||
|
if (registerFunction) {
|
||||||
|
try {
|
||||||
|
registerFunction(server);
|
||||||
|
registeredTools.push(toolName);
|
||||||
|
} catch (err) {
|
||||||
|
if (err.message && err.message.includes('already registered')) {
|
||||||
|
logger.debug(
|
||||||
|
`Fallback tool ${toolName} already registered, skipping`
|
||||||
|
);
|
||||||
|
registeredTools.push(toolName);
|
||||||
|
} else {
|
||||||
|
logger.warn(
|
||||||
|
`Failed to register fallback tool '${toolName}': ${err.message}`
|
||||||
|
);
|
||||||
|
failedTools.push(toolName);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
logger.warn(`Tool '${toolName}' not found in registry`);
|
||||||
|
failedTools.push(toolName);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
logger.info(
|
||||||
|
`Successfully registered ${registeredTools.length} fallback tools`
|
||||||
|
);
|
||||||
|
|
||||||
|
return {
|
||||||
|
registeredTools,
|
||||||
|
failedTools,
|
||||||
|
normalizedMode: 'all'
|
||||||
|
};
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
export {
|
||||||
|
toolRegistry,
|
||||||
|
coreTools,
|
||||||
|
standardTools,
|
||||||
|
getAvailableTools,
|
||||||
|
getToolRegistration,
|
||||||
|
isValidTool
|
||||||
|
};
|
||||||
|
|
||||||
export default {
|
export default {
|
||||||
registerTaskMasterTools
|
registerTaskMasterTools
|
||||||
};
|
};
|
||||||
|
|||||||
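As a reading aid for the registration hunk above, here is a minimal TypeScript sketch of how the new entry points could be driven. It is not part of this diff: the import path, the `FastMCP` constructor options, and the sample `TASK_MASTER_TOOLS` value are illustrative assumptions; only `getToolsConfiguration`, `registerTaskMasterTools`, and the returned `{ registeredTools, failedTools, normalizedMode }` shape come from the code shown above.

```typescript
// Sketch only - exercises the selective-loading path shown in the hunk above.
import { FastMCP } from 'fastmcp'; // assumed package name used by the MCP server
import {
  getToolsConfiguration,
  registerTaskMasterTools
} from './mcp-server/src/tools/index.js'; // hypothetical relative path

// Hypothetical custom list: one exact name, one hyphen/case variant, one typo.
process.env.TASK_MASTER_TOOLS = 'get_tasks, Next-Task, bogus_tool';

const server = new FastMCP({ name: 'task-master-demo', version: '0.0.0' }); // assumed options
const mode = getToolsConfiguration(); // returns the trimmed raw value
const result = registerTaskMasterTools(server, mode);

// 'get_tasks' matches directly, 'Next-Task' is normalized to 'next_task',
// and 'bogus_tool' is skipped with a warning rather than failing registration.
console.log(result.registeredTools); // e.g. ['get_tasks', 'next_task']
console.log(result.failedTools); // e.g. []
console.log(result.normalizedMode); // lowercased configuration string
```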
mcp-server/src/tools/tool-registry.js (new file)
@@ -0,0 +1,168 @@
/**
 * tool-registry.js
 * Tool Registry Object Structure - Maps all 36 tool names to registration functions
 */

import { registerListTasksTool } from './get-tasks.js';
import { registerSetTaskStatusTool } from './set-task-status.js';
import { registerParsePRDTool } from './parse-prd.js';
import { registerUpdateTool } from './update.js';
import { registerUpdateTaskTool } from './update-task.js';
import { registerUpdateSubtaskTool } from './update-subtask.js';
import { registerGenerateTool } from './generate.js';
import { registerShowTaskTool } from './get-task.js';
import { registerNextTaskTool } from './next-task.js';
import { registerExpandTaskTool } from './expand-task.js';
import { registerAddTaskTool } from './add-task.js';
import { registerAddSubtaskTool } from './add-subtask.js';
import { registerRemoveSubtaskTool } from './remove-subtask.js';
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
import { registerClearSubtasksTool } from './clear-subtasks.js';
import { registerExpandAllTool } from './expand-all.js';
import { registerRemoveDependencyTool } from './remove-dependency.js';
import { registerValidateDependenciesTool } from './validate-dependencies.js';
import { registerFixDependenciesTool } from './fix-dependencies.js';
import { registerComplexityReportTool } from './complexity-report.js';
import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
import { registerUseTagTool } from './use-tag.js';
import { registerRenameTagTool } from './rename-tag.js';
import { registerCopyTagTool } from './copy-tag.js';
import { registerResearchTool } from './research.js';
import { registerRulesTool } from './rules.js';
import { registerScopeUpTool } from './scope-up.js';
import { registerScopeDownTool } from './scope-down.js';

/**
 * Comprehensive tool registry mapping all 36 tool names to their registration functions
 * Used for dynamic tool registration and validation
 */
export const toolRegistry = {
  initialize_project: registerInitializeProjectTool,
  models: registerModelsTool,
  rules: registerRulesTool,
  parse_prd: registerParsePRDTool,
  'response-language': registerResponseLanguageTool,
  analyze_project_complexity: registerAnalyzeProjectComplexityTool,
  expand_task: registerExpandTaskTool,
  expand_all: registerExpandAllTool,
  scope_up_task: registerScopeUpTool,
  scope_down_task: registerScopeDownTool,
  get_tasks: registerListTasksTool,
  get_task: registerShowTaskTool,
  next_task: registerNextTaskTool,
  complexity_report: registerComplexityReportTool,
  set_task_status: registerSetTaskStatusTool,
  generate: registerGenerateTool,
  add_task: registerAddTaskTool,
  add_subtask: registerAddSubtaskTool,
  update: registerUpdateTool,
  update_task: registerUpdateTaskTool,
  update_subtask: registerUpdateSubtaskTool,
  remove_task: registerRemoveTaskTool,
  remove_subtask: registerRemoveSubtaskTool,
  clear_subtasks: registerClearSubtasksTool,
  move_task: registerMoveTaskTool,
  add_dependency: registerAddDependencyTool,
  remove_dependency: registerRemoveDependencyTool,
  validate_dependencies: registerValidateDependenciesTool,
  fix_dependencies: registerFixDependenciesTool,
  list_tags: registerListTagsTool,
  add_tag: registerAddTagTool,
  delete_tag: registerDeleteTagTool,
  use_tag: registerUseTagTool,
  rename_tag: registerRenameTagTool,
  copy_tag: registerCopyTagTool,
  research: registerResearchTool
};

/**
 * Core tools array containing the 7 essential tools for daily development
 * These represent the minimal set needed for basic task management operations
 */
export const coreTools = [
  'get_tasks',
  'next_task',
  'get_task',
  'set_task_status',
  'update_subtask',
  'parse_prd',
  'expand_task'
];

/**
 * Standard tools array containing the 15 most commonly used tools
 * Includes all core tools plus frequently used additional tools
 */
export const standardTools = [
  ...coreTools,
  'initialize_project',
  'analyze_project_complexity',
  'expand_all',
  'add_subtask',
  'remove_task',
  'generate',
  'add_task',
  'complexity_report'
];

/**
 * Get all available tool names
 * @returns {string[]} Array of tool names
 */
export function getAvailableTools() {
  return Object.keys(toolRegistry);
}

/**
 * Get tool counts for all categories
 * @returns {Object} Object with core, standard, and total counts
 */
export function getToolCounts() {
  return {
    core: coreTools.length,
    standard: standardTools.length,
    total: Object.keys(toolRegistry).length
  };
}

/**
 * Get tool arrays organized by category
 * @returns {Object} Object with arrays for each category
 */
export function getToolCategories() {
  const allTools = Object.keys(toolRegistry);
  return {
    core: [...coreTools],
    standard: [...standardTools],
    all: [...allTools],
    extended: allTools.filter((t) => !standardTools.includes(t))
  };
}

/**
 * Get registration function for a specific tool
 * @param {string} toolName - Name of the tool
 * @returns {Function|null} Registration function or null if not found
 */
export function getToolRegistration(toolName) {
  return toolRegistry[toolName] || null;
}

/**
 * Validate if a tool exists in the registry
 * @param {string} toolName - Name of the tool
 * @returns {boolean} True if tool exists
 */
export function isValidTool(toolName) {
  return toolName in toolRegistry;
}

export default toolRegistry;
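The counts and categories in the new registry can be inspected directly, which is how the "7 core / 15 standard / 36 total" figures quoted in the documentation are derived. A hedged TypeScript sketch follows; the import path is an assumption, the helpers and numbers come from the file above.

```typescript
// Sketch only - uses the helpers defined in tool-registry.js above.
import toolRegistry, {
  getToolCounts,
  getToolCategories,
  isValidTool
} from './mcp-server/src/tools/tool-registry.js'; // hypothetical relative path

console.log(getToolCounts()); // { core: 7, standard: 15, total: 36 }
console.log(isValidTool('get_tasks')); // true
console.log(isValidTool('nonexistent')); // false

// 'extended' is everything in the registry that is not in the standard set.
const { extended } = getToolCategories();
console.log(extended.length); // 36 - 15 = 21
console.log('research' in toolRegistry); // true
```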
package-lock.json (generated)
@@ -1,12 +1,12 @@
 {
   "name": "task-master-ai",
-  "version": "0.29.0-rc.0",
+  "version": "0.29.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "task-master-ai",
-      "version": "0.29.0-rc.0",
+      "version": "0.29.0",
       "license": "MIT WITH Commons-Clause",
       "workspaces": [
         "apps/*",
@@ -125,13 +125,13 @@
       }
     },
     "apps/docs": {
-      "version": "0.0.5",
+      "version": "0.0.6",
       "devDependencies": {
         "mintlify": "^4.2.111"
       }
     },
     "apps/extension": {
-      "version": "0.25.6-rc.0",
+      "version": "0.25.6",
       "devDependencies": {
         "@dnd-kit/core": "^6.3.1",
         "@dnd-kit/modifiers": "^9.0.0",
@@ -27136,7 +27136,7 @@
     },
     "packages/claude-code-plugin": {
       "name": "@tm/claude-code-plugin",
-      "version": "0.0.1",
+      "version": "0.0.2",
       "license": "MIT WITH Commons-Clause"
     },
     "packages/tm-core": {

@@ -1,6 +1,6 @@
 {
   "name": "task-master-ai",
-  "version": "0.29.0-rc.0",
+  "version": "0.29.0",
   "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
   "main": "index.js",
   "type": "module",

@@ -1,3 +1,5 @@
 # @tm/ai-sdk-provider-grok-cli

 ## null
+
+## null

@@ -4,4 +4,6 @@

 ## null

+## null
+
 ## 1.0.1
packages/claude-code-plugin/CHANGELOG.md (new file)
@@ -0,0 +1,3 @@
# @tm/claude-code-plugin

## 0.0.2

@@ -1,6 +1,6 @@
 {
   "name": "@tm/claude-code-plugin",
-  "version": "0.0.1",
+  "version": "0.0.2",
   "description": "Task Master AI plugin for Claude Code - AI-powered task management with commands, agents, and MCP integration",
   "type": "module",
   "private": true,
@@ -4,6 +4,8 @@

 ## null

+## null
+
 ## 0.26.1

 All notable changes to the @task-master/tm-core package will be documented in this file.
@@ -21,16 +21,21 @@ const CredentialStoreSpy = vi.fn();
 vi.mock('./credential-store.js', () => {
   return {
     CredentialStore: class {
+      static getInstance(config?: any) {
+        return new (this as any)(config);
+      }
+      static resetInstance() {
+        // Mock reset instance method
+      }
       constructor(config: any) {
         CredentialStoreSpy(config);
-        this.getCredentials = vi.fn(() => null);
       }
-      getCredentials() {
+      getCredentials(_options?: any) {
         return null;
       }
       saveCredentials() {}
       clearCredentials() {}
-      hasValidCredentials() {
+      hasCredentials() {
         return false;
       }
     }

@@ -85,7 +90,7 @@ describe('AuthManager Singleton', () => {
   expect(instance1).toBe(instance2);
 });

-it('should use config on first call', () => {
+it('should use config on first call', async () => {
   const config = {
     baseUrl: 'https://test.auth.com',
     configDir: '/test/config',

@@ -101,7 +106,7 @@ describe('AuthManager Singleton', () => {

   // Verify the config is passed to internal components through observable behavior
   // getCredentials would look in the configured file path
-  const credentials = instance.getCredentials();
+  const credentials = await instance.getCredentials();
   expect(credentials).toBeNull(); // File doesn't exist, but config was propagated correctly
 });

@@ -29,7 +29,6 @@ export class AuthManager {
   private oauthService: OAuthService;
   private supabaseClient: SupabaseAuthClient;
   private organizationService?: OrganizationService;
-  private logger = getLogger('AuthManager');

   private constructor(config?: Partial<AuthConfig>) {
     this.credentialStore = CredentialStore.getInstance(config);

@@ -37,7 +36,10 @@ export class AuthManager {
     this.oauthService = new OAuthService(this.credentialStore, config);

     // Initialize Supabase client with session restoration
-    this.initializeSupabaseSession();
+    // Fire-and-forget with catch handler to prevent unhandled rejections
+    this.initializeSupabaseSession().catch(() => {
+      // Errors are already logged in initializeSupabaseSession
+    });
   }

   /**

@@ -79,49 +81,10 @@ export class AuthManager {

   /**
    * Get stored authentication credentials
-   * Automatically refreshes the token if expired
+   * Returns credentials as-is (even if expired). Refresh must be triggered explicitly
+   * via refreshToken() or will occur automatically when using the Supabase client for API calls.
    */
-  async getCredentials(): Promise<AuthCredentials | null> {
-    const credentials = this.credentialStore.getCredentials();
-
-    // If credentials exist but are expired, try to refresh
-    if (!credentials) {
-      const expiredCredentials = this.credentialStore.getCredentials({
-        allowExpired: true
-      });
-
-      // Check if we have any credentials at all
-      if (!expiredCredentials) {
-        // No credentials found
-        return null;
-      }
-
-      // Check if refresh token is available
-      if (!expiredCredentials.refreshToken) {
-        this.logger.warn(
-          'Token expired but no refresh token available. Please re-authenticate.'
-        );
-        return null;
-      }
-
-      // Attempt refresh
-      try {
-        this.logger.info('Token expired, attempting automatic refresh...');
-        return await this.refreshToken();
-      } catch (error) {
-        this.logger.warn('Automatic token refresh failed:', error);
-        return null;
-      }
-    }
-
-    return credentials;
-  }
-
-  /**
-   * Get stored authentication credentials (synchronous version)
-   * Does not attempt automatic refresh
-   */
-  getCredentialsSync(): AuthCredentials | null {
+  getCredentials(): AuthCredentials | null {
     return this.credentialStore.getCredentials();
   }

@@ -204,25 +167,26 @@ export class AuthManager {
   }

   /**
-   * Check if authenticated
+   * Check if authenticated (credentials exist, regardless of expiration)
+   * @returns true if credentials are stored, including expired credentials
    */
   isAuthenticated(): boolean {
-    return this.credentialStore.hasValidCredentials();
+    return this.credentialStore.hasCredentials();
   }

   /**
    * Get the current user context (org/brief selection)
    */
-  async getContext(): Promise<UserContext | null> {
-    const credentials = await this.getCredentials();
+  getContext(): UserContext | null {
+    const credentials = this.getCredentials();
     return credentials?.selectedContext || null;
   }

   /**
    * Update the user context (org/brief selection)
    */
-  async updateContext(context: Partial<UserContext>): Promise<void> {
-    const credentials = await this.getCredentials();
+  updateContext(context: Partial<UserContext>): void {
+    const credentials = this.getCredentials();
     if (!credentials) {
       throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
     }

@@ -247,8 +211,8 @@ export class AuthManager {
   /**
    * Clear the user context
    */
-  async clearContext(): Promise<void> {
-    const credentials = await this.getCredentials();
+  clearContext(): void {
+    const credentials = this.getCredentials();
     if (!credentials) {
       throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
     }

@@ -265,7 +229,7 @@ export class AuthManager {
   private async getOrganizationService(): Promise<OrganizationService> {
     if (!this.organizationService) {
       // First check if we have credentials with a token
-      const credentials = await this.getCredentials();
+      const credentials = this.getCredentials();
       if (!credentials || !credentials.token) {
         throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
       }
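For callers, the practical effect of the AuthManager changes above is that `getCredentials()` is now synchronous and may hand back an expired credential. Below is a hedged TypeScript sketch of how a caller could adapt; `refreshToken()` is referenced by the updated doc comment, while the `AuthManager` parameter type and the expiry comparison against `expiresAt` are assumptions made for illustration.

```typescript
// Sketch only - not part of this diff.
async function ensureFreshCredentials(auth: AuthManager) {
  const creds = auth.getCredentials(); // synchronous now; may be expired
  if (!creds) {
    throw new Error('Not authenticated - run the login flow first');
  }
  // expiresAt may be stored as an ISO string or a millisecond timestamp (assumption);
  // new Date() handles both when comparing against the current time.
  if (creds.expiresAt && Date.now() >= new Date(creds.expiresAt).getTime()) {
    // Refresh is no longer automatic inside getCredentials();
    // trigger it explicitly, as the updated doc comment describes.
    return auth.refreshToken();
  }
  return creds;
}
```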
@@ -52,7 +52,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(expiredCredentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).toBeNull();
 });

@@ -69,7 +69,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(validCredentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).not.toBeNull();
   expect(retrieved?.token).toBe('valid-token');

@@ -92,6 +92,25 @@ describe('CredentialStore - Token Expiration', () => {
   expect(retrieved).not.toBeNull();
   expect(retrieved?.token).toBe('expired-token');
 });
+
+it('should return expired token by default (allowExpired defaults to true)', () => {
+  const expiredCredentials: AuthCredentials = {
+    token: 'expired-token-default',
+    refreshToken: 'refresh-token',
+    userId: 'test-user',
+    email: 'test@example.com',
+    expiresAt: new Date(Date.now() - 60000).toISOString(),
+    savedAt: new Date().toISOString()
+  };
+
+  credentialStore.saveCredentials(expiredCredentials);
+
+  // Call without options - should default to allowExpired: true
+  const retrieved = credentialStore.getCredentials();
+
+  expect(retrieved).not.toBeNull();
+  expect(retrieved?.token).toBe('expired-token-default');
+});
 });

 describe('Clock Skew Tolerance', () => {

@@ -108,7 +127,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(almostExpiredCredentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).toBeNull();
 });

@@ -126,7 +145,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(validCredentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).not.toBeNull();
   expect(retrieved?.token).toBe('valid-token');

@@ -146,7 +165,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(credentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).not.toBeNull();
   expect(typeof retrieved?.expiresAt).toBe('number'); // Normalized to number

@@ -164,7 +183,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(credentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).not.toBeNull();
   expect(typeof retrieved?.expiresAt).toBe('number');

@@ -185,7 +204,7 @@ describe('CredentialStore - Token Expiration', () => {
     mode: 0o600
   });

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).toBeNull();
 });

@@ -203,7 +222,7 @@ describe('CredentialStore - Token Expiration', () => {
     mode: 0o600
   });

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   expect(retrieved).toBeNull();
 });

@@ -244,15 +263,15 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(credentials);

-  const retrieved = credentialStore.getCredentials();
+  const retrieved = credentialStore.getCredentials({ allowExpired: false });

   // Should be normalized to number for runtime use
   expect(typeof retrieved?.expiresAt).toBe('number');
 });
 });

-describe('hasValidCredentials', () => {
-  it('should return false for expired credentials', () => {
+describe('hasCredentials', () => {
+  it('should return true for expired credentials', () => {
     const expiredCredentials: AuthCredentials = {
       token: 'expired-token',
       refreshToken: 'refresh-token',

@@ -264,7 +283,7 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(expiredCredentials);

-  expect(credentialStore.hasValidCredentials()).toBe(false);
+  expect(credentialStore.hasCredentials()).toBe(true);
 });

 it('should return true for valid credentials', () => {

@@ -279,11 +298,11 @@ describe('CredentialStore - Token Expiration', () => {

   credentialStore.saveCredentials(validCredentials);

-  expect(credentialStore.hasValidCredentials()).toBe(true);
+  expect(credentialStore.hasCredentials()).toBe(true);
 });

 it('should return false when no credentials exist', () => {
-  expect(credentialStore.hasValidCredentials()).toBe(false);
+  expect(credentialStore.hasCredentials()).toBe(false);
 });
 });
@@ -197,7 +197,7 @@ describe('CredentialStore', () => {
   JSON.stringify(mockCredentials)
 );

-const result = store.getCredentials();
+const result = store.getCredentials({ allowExpired: false });

 expect(result).toBeNull();
 expect(mockLogger.warn).toHaveBeenCalledWith(

@@ -226,6 +226,31 @@ describe('CredentialStore', () => {
 expect(result).not.toBeNull();
 expect(result?.token).toBe('expired-token');
 });
+
+it('should return expired tokens by default (allowExpired defaults to true)', () => {
+  const expiredTimestamp = Date.now() - 3600000; // 1 hour ago
+  const mockCredentials = {
+    token: 'expired-token-default',
+    userId: 'user-expired',
+    expiresAt: expiredTimestamp,
+    tokenType: 'standard',
+    savedAt: new Date().toISOString()
+  };
+
+  vi.mocked(fs.existsSync).mockReturnValue(true);
+  vi.mocked(fs.readFileSync).mockReturnValue(
+    JSON.stringify(mockCredentials)
+  );
+
+  // Call without options - should default to allowExpired: true
+  const result = store.getCredentials();
+
+  expect(result).not.toBeNull();
+  expect(result?.token).toBe('expired-token-default');
+  expect(mockLogger.warn).not.toHaveBeenCalledWith(
+    expect.stringContaining('Authentication token has expired')
+  );
+});
 });

 describe('saveCredentials with timestamp normalization', () => {

@@ -451,7 +476,7 @@ describe('CredentialStore', () => {
   });
 });

-describe('hasValidCredentials', () => {
+describe('hasCredentials', () => {
   it('should return true when valid unexpired credentials exist', () => {
     const futureDate = new Date(Date.now() + 3600000); // 1 hour from now
     const credentials = {

@@ -465,10 +490,10 @@ describe('CredentialStore', () => {
   vi.mocked(fs.existsSync).mockReturnValue(true);
   vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));

-  expect(store.hasValidCredentials()).toBe(true);
+  expect(store.hasCredentials()).toBe(true);
 });

-it('should return false when credentials are expired', () => {
+it('should return true when credentials are expired', () => {
   const pastDate = new Date(Date.now() - 3600000); // 1 hour ago
   const credentials = {
     token: 'expired-token',

@@ -481,13 +506,13 @@ describe('CredentialStore', () => {
   vi.mocked(fs.existsSync).mockReturnValue(true);
   vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));

-  expect(store.hasValidCredentials()).toBe(false);
+  expect(store.hasCredentials()).toBe(true);
 });

 it('should return false when no credentials exist', () => {
   vi.mocked(fs.existsSync).mockReturnValue(false);

-  expect(store.hasValidCredentials()).toBe(false);
+  expect(store.hasCredentials()).toBe(false);
 });

 it('should return false when file contains invalid JSON', () => {

@@ -495,7 +520,7 @@ describe('CredentialStore', () => {
   vi.mocked(fs.readFileSync).mockReturnValue('invalid json {');
   vi.mocked(fs.renameSync).mockImplementation(() => undefined);

-  expect(store.hasValidCredentials()).toBe(false);
+  expect(store.hasCredentials()).toBe(false);
 });

 it('should return false for credentials without expiry', () => {

@@ -510,7 +535,7 @@ describe('CredentialStore', () => {
   vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));

   // Credentials without expiry are considered invalid
-  expect(store.hasValidCredentials()).toBe(false);
+  expect(store.hasCredentials()).toBe(false);

   // Should log warning about missing expiration
   expect(mockLogger.warn).toHaveBeenCalledWith(

@@ -518,14 +543,14 @@ describe('CredentialStore', () => {
   );
 });

-it('should use allowExpired=false by default', () => {
+it('should use allowExpired=true', () => {
   // Spy on getCredentials to verify it's called with correct params
   const getCredentialsSpy = vi.spyOn(store, 'getCredentials');

   vi.mocked(fs.existsSync).mockReturnValue(false);
-  store.hasValidCredentials();
+  store.hasCredentials();

-  expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: false });
+  expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: true });
 });
 });

@@ -54,9 +54,12 @@ export class CredentialStore {

   /**
    * Get stored authentication credentials
+   * @param options.allowExpired - Whether to return expired credentials (default: true)
    * @returns AuthCredentials with expiresAt as number (milliseconds) for runtime use
    */
-  getCredentials(options?: { allowExpired?: boolean }): AuthCredentials | null {
+  getCredentials({
+    allowExpired = true
+  }: { allowExpired?: boolean } = {}): AuthCredentials | null {
     try {
       if (!fs.existsSync(this.config.configFile)) {
         return null;

@@ -90,7 +93,6 @@ export class CredentialStore {

       // Check if the token has expired (with clock skew tolerance)
       const now = Date.now();
-      const allowExpired = options?.allowExpired ?? false;
       if (now >= expiresAtMs - this.CLOCK_SKEW_MS && !allowExpired) {
         this.logger.warn(
           'Authentication token has expired or is about to expire',

@@ -103,7 +105,7 @@ export class CredentialStore {
         return null;
       }

-      // Return valid token
+      // Return credentials (even if expired) to enable refresh flows
       return authData;
     } catch (error) {
       this.logger.error(

@@ -199,10 +201,11 @@ export class CredentialStore {
   }

   /**
-   * Check if credentials exist and are valid
+   * Check if credentials exist (regardless of expiration status)
+   * @returns true if credentials are stored, including expired credentials
    */
-  hasValidCredentials(): boolean {
-    const credentials = this.getCredentials({ allowExpired: false });
+  hasCredentials(): boolean {
+    const credentials = this.getCredentials({ allowExpired: true });
     return credentials !== null;
   }

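The flipped default above means the two call shapes now behave differently. A hedged TypeScript sketch, assuming `store` is an already-configured `CredentialStore` instance; the methods and the `refreshToken` field on the stored credentials all appear in the hunks above.

```typescript
// Sketch only - illustrates the new allowExpired default shown above.
const anyCreds = store.getCredentials(); // default: allowExpired = true, may be expired
const freshCreds = store.getCredentials({ allowExpired: false }); // null if expired

if (!freshCreds && anyCreds?.refreshToken) {
  // Expired but refreshable - hand the stored refresh token to the refresh flow.
  console.log('Token expired; a refresh can be attempted with the stored refresh token');
}

console.log(store.hasCredentials()); // true even when the stored token is expired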
@@ -281,15 +281,26 @@ export class OAuthService {
 // Exchange code for session using PKCE
 const session = await this.supabaseClient.exchangeCodeForSession(code);

+// Calculate expiration - can be overridden with TM_TOKEN_EXPIRY_MINUTES
+let expiresAt: string | undefined;
+const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
+if (tokenExpiryMinutes) {
+const minutes = parseInt(tokenExpiryMinutes);
+expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
+this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
+} else {
+expiresAt = session.expires_at
+? new Date(session.expires_at * 1000).toISOString()
+: undefined;
+}
+
 // Save authentication data
 const authData: AuthCredentials = {
 token: session.access_token,
 refreshToken: session.refresh_token,
 userId: session.user.id,
 email: session.user.email,
-expiresAt: session.expires_at
-? new Date(session.expires_at * 1000).toISOString()
-: undefined,
+expiresAt,
 tokenType: 'standard',
 savedAt: new Date().toISOString()
 };
@@ -340,10 +351,18 @@ export class OAuthService {
 // Get user info from the session
 const user = await this.supabaseClient.getUser();

-// Calculate expiration time
-const expiresAt = expiresIn
-? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
-: undefined;
+// Calculate expiration time - can be overridden with TM_TOKEN_EXPIRY_MINUTES
+let expiresAt: string | undefined;
+const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
+if (tokenExpiryMinutes) {
+const minutes = parseInt(tokenExpiryMinutes);
+expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
+this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
+} else {
+expiresAt = expiresIn
+? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
+: undefined;
+}

 // Save authentication data
 const authData: AuthCredentials = {
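Reviewer note: the TM_TOKEN_EXPIRY_MINUTES branch now appears in both OAuth flows above. A hypothetical helper (not in the diff) that mirrors that logic could keep it in one place:

```typescript
// Hypothetical helper mirroring the override logic added above; not part of the diff.
function resolveExpiresAt(
  fallbackIso: string | undefined,
  logger: { warn(msg: string): void }
): string | undefined {
  const override = process.env.TM_TOKEN_EXPIRY_MINUTES;
  if (override) {
    const minutes = parseInt(override, 10);
    if (!Number.isNaN(minutes)) {
      logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
      return new Date(Date.now() + minutes * 60 * 1000).toISOString();
    }
  }
  return fallbackIso;
}
```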
@@ -351,7 +370,7 @@ export class OAuthService {
 refreshToken: refreshToken || undefined,
 userId: user?.id || 'unknown',
 email: user?.email,
-expiresAt: expiresAt,
+expiresAt,
 tokenType: 'standard',
 savedAt: new Date().toISOString()
 };
@@ -98,11 +98,11 @@ export class SupabaseSessionStorage implements SupportedStorage {
 // Only handle Supabase session keys
 if (key === STORAGE_KEY || key.includes('auth-token')) {
 try {
+this.logger.info('Supabase called setItem - storing refreshed session');
+
 // Parse the session and update our credentials
 const sessionUpdates = this.parseSessionToCredentials(value);
-const existingCredentials = this.store.getCredentials({
-allowExpired: true
-});
+const existingCredentials = this.store.getCredentials();

 if (sessionUpdates.token) {
 const updatedCredentials: AuthCredentials = {
@@ -113,6 +113,9 @@ export class SupabaseSessionStorage implements SupportedStorage {
 } as AuthCredentials;

 this.store.saveCredentials(updatedCredentials);
+this.logger.info(
+'Successfully saved refreshed credentials from Supabase'
+);
 }
 } catch (error) {
 this.logger.error('Error setting session:', error);
@@ -17,10 +17,11 @@ export class SupabaseAuthClient {
 private client: SupabaseJSClient | null = null;
 private sessionStorage: SupabaseSessionStorage;
 private logger = getLogger('SupabaseAuthClient');
+private credentialStore: CredentialStore;

 constructor() {
-const credentialStore = CredentialStore.getInstance();
-this.sessionStorage = new SupabaseSessionStorage(credentialStore);
+this.credentialStore = CredentialStore.getInstance();
+this.sessionStorage = new SupabaseSessionStorage(this.credentialStore);
 }

 /**
@@ -362,7 +362,7 @@ export class ExportService {

 if (useAPIEndpoint) {
 // Use the new bulk import API endpoint
-const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
+const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks`;

 // Transform tasks to flat structure for API
 const flatTasks = this.transformTasksForBulkImport(tasks);
@@ -370,11 +370,11 @@ export class ExportService {
 // Prepare request body
 const requestBody = {
 source: 'task-master-cli',
-accountId: orgId,
 options: {
 dryRun: false,
 stopOnError: false
 },
+accountId: orgId,
 tasks: flatTasks
 };

@@ -73,7 +73,7 @@ export class StorageFactory {
 );
 }
 // Use auth token from AuthManager (synchronous - no auto-refresh here)
-const credentials = authManager.getCredentialsSync();
+const credentials = authManager.getCredentials();
 if (credentials) {
 // Merge with existing storage config, ensuring required fields
 const nextStorage: StorageSettings = {
@@ -103,7 +103,7 @@ export class StorageFactory {

 // Then check if authenticated via AuthManager
 if (authManager.isAuthenticated()) {
-const credentials = authManager.getCredentialsSync();
+const credentials = authManager.getCredentials();
 if (credentials) {
 // Configure API storage with auth credentials
 const nextStorage: StorageSettings = {
139 packages/tm-core/tests/auth/auth-refresh.test.ts Normal file
@@ -0,0 +1,139 @@
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
+import fs from 'fs';
+import os from 'os';
+import path from 'path';
+import type { Session } from '@supabase/supabase-js';
+import { AuthManager } from '../../src/auth/auth-manager';
+import { CredentialStore } from '../../src/auth/credential-store';
+import type { AuthCredentials } from '../../src/auth/types';
+
+describe('AuthManager Token Refresh', () => {
+let authManager: AuthManager;
+let credentialStore: CredentialStore;
+let tmpDir: string;
+let authFile: string;
+
+beforeEach(() => {
+// Reset singletons
+AuthManager.resetInstance();
+CredentialStore.resetInstance();
+
+// Create temporary directory for test isolation
+tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-refresh-'));
+authFile = path.join(tmpDir, 'auth.json');
+
+// Initialize AuthManager with test config (this will create CredentialStore internally)
+authManager = AuthManager.getInstance({
+configDir: tmpDir,
+configFile: authFile
+});
+
+// Get the CredentialStore instance that AuthManager created
+credentialStore = CredentialStore.getInstance();
+credentialStore.clearCredentials();
+});
+
+afterEach(() => {
+// Clean up
+try {
+credentialStore.clearCredentials();
+} catch {
+// Ignore cleanup errors
+}
+AuthManager.resetInstance();
+CredentialStore.resetInstance();
+vi.restoreAllMocks();
+
+// Remove temporary directory
+if (tmpDir && fs.existsSync(tmpDir)) {
+fs.rmSync(tmpDir, { recursive: true, force: true });
+}
+});
+
+it('should return expired credentials to enable refresh flows', () => {
+// Set up expired credentials with refresh token
+const expiredCredentials: AuthCredentials = {
+token: 'expired_access_token',
+refreshToken: 'valid_refresh_token',
+userId: 'test-user-id',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+// Get credentials should return them even if expired
+// Refresh will be handled by explicit calls or client operations
+const credentials = authManager.getCredentials();
+
+expect(credentials).not.toBeNull();
+expect(credentials?.token).toBe('expired_access_token');
+expect(credentials?.refreshToken).toBe('valid_refresh_token');
+});
+
+it('should return valid credentials', () => {
+// Set up valid (non-expired) credentials
+const validCredentials: AuthCredentials = {
+token: 'valid_access_token',
+refreshToken: 'valid_refresh_token',
+userId: 'test-user-id',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 3600000).toISOString(), // Expires in 1 hour
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(validCredentials);
+
+const credentials = authManager.getCredentials();
+
+expect(credentials?.token).toBe('valid_access_token');
+});
+
+it('should return expired credentials even without refresh token', () => {
+// Set up expired credentials WITHOUT refresh token
+// We still return them - it's up to the caller to handle
+const expiredCredentials: AuthCredentials = {
+token: 'expired_access_token',
+refreshToken: undefined,
+userId: 'test-user-id',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+const credentials = authManager.getCredentials();
+
+// Returns credentials even if expired
+expect(credentials).not.toBeNull();
+expect(credentials?.token).toBe('expired_access_token');
+});
+
+it('should return null if no credentials exist', () => {
+const credentials = authManager.getCredentials();
+expect(credentials).toBeNull();
+});
+
+it('should return credentials regardless of refresh token validity', () => {
+// Set up expired credentials with refresh token
+const expiredCredentials: AuthCredentials = {
+token: 'expired_access_token',
+refreshToken: 'invalid_refresh_token',
+userId: 'test-user-id',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 1000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+const credentials = authManager.getCredentials();
+
+// Returns credentials - refresh will be attempted by the client which will handle failure
+expect(credentials).not.toBeNull();
+expect(credentials?.token).toBe('expired_access_token');
+expect(credentials?.refreshToken).toBe('invalid_refresh_token');
+});
+});
@@ -6,6 +6,9 @@
 */

 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
+import fs from 'fs';
+import os from 'os';
+import path from 'path';
 import type { Session } from '@supabase/supabase-js';
 import { AuthManager } from '../../src/auth/auth-manager';
 import { CredentialStore } from '../../src/auth/credential-store';
@@ -14,6 +17,8 @@ import type { AuthCredentials } from '../../src/auth/types';
 describe('AuthManager - Token Auto-Refresh Integration', () => {
 let authManager: AuthManager;
 let credentialStore: CredentialStore;
+let tmpDir: string;
+let authFile: string;

 // Mock Supabase session that will be returned on refresh
 const mockRefreshedSession: Session = {
@@ -34,10 +39,21 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {
 };

 beforeEach(() => {
-// Reset AuthManager singleton
+// Reset singletons
 AuthManager.resetInstance();
+CredentialStore.resetInstance();

-// Clear any existing credentials
+// Create temporary directory for test isolation
+tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-integration-'));
+authFile = path.join(tmpDir, 'auth.json');
+
+// Initialize AuthManager with test config (this will create CredentialStore internally)
+authManager = AuthManager.getInstance({
+configDir: tmpDir,
+configFile: authFile
+});
+
+// Get the CredentialStore instance that AuthManager created
 credentialStore = CredentialStore.getInstance();
 credentialStore.clearCredentials();
 });
@@ -50,11 +66,17 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {
 // Ignore cleanup errors
 }
 AuthManager.resetInstance();
+CredentialStore.resetInstance();
 vi.restoreAllMocks();
+
+// Remove temporary directory
+if (tmpDir && fs.existsSync(tmpDir)) {
+fs.rmSync(tmpDir, { recursive: true, force: true });
+}
 });

 describe('Expired Token Detection', () => {
-it('should detect expired token', async () => {
+it('should return expired token for Supabase to refresh', () => {
 // Set up expired credentials
 const expiredCredentials: AuthCredentials = {
 token: 'expired-token',
@@ -69,24 +91,15 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-// Mock the Supabase refreshSession to return new tokens
-const mockRefreshSession = vi
-.fn()
-.mockResolvedValue(mockRefreshedSession);
-vi.spyOn(
-authManager['supabaseClient'],
-'refreshSession'
-).mockImplementation(mockRefreshSession);
-
-// Get credentials should trigger refresh
-const credentials = await authManager.getCredentials();
+// Get credentials returns them even if expired
+const credentials = authManager.getCredentials();

-expect(mockRefreshSession).toHaveBeenCalledTimes(1);
 expect(credentials).not.toBeNull();
-expect(credentials?.token).toBe('new-access-token-xyz');
+expect(credentials?.token).toBe('expired-token');
+expect(credentials?.refreshToken).toBe('valid-refresh-token');
 });

-it('should not refresh valid token', async () => {
+it('should return valid token', () => {
 // Set up valid credentials
 const validCredentials: AuthCredentials = {
 token: 'valid-token',
@@ -101,22 +114,14 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-// Mock refresh to ensure it's not called
-const mockRefreshSession = vi.fn();
-vi.spyOn(
-authManager['supabaseClient'],
-'refreshSession'
-).mockImplementation(mockRefreshSession);
-
-const credentials = await authManager.getCredentials();
+const credentials = authManager.getCredentials();

-expect(mockRefreshSession).not.toHaveBeenCalled();
 expect(credentials?.token).toBe('valid-token');
 });
 });

 describe('Token Refresh Flow', () => {
-it('should refresh expired token and save new credentials', async () => {
+it('should manually refresh expired token and save new credentials', async () => {
 const expiredCredentials: AuthCredentials = {
 token: 'old-token',
 refreshToken: 'old-refresh-token',
@@ -140,23 +145,24 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {
 'refreshSession'
 ).mockResolvedValue(mockRefreshedSession);

-const refreshedCredentials = await authManager.getCredentials();
+// Explicitly call refreshToken() method
+const refreshedCredentials = await authManager.refreshToken();

 expect(refreshedCredentials).not.toBeNull();
-expect(refreshedCredentials?.token).toBe('new-access-token-xyz');
-expect(refreshedCredentials?.refreshToken).toBe('new-refresh-token-xyz');
+expect(refreshedCredentials.token).toBe('new-access-token-xyz');
+expect(refreshedCredentials.refreshToken).toBe('new-refresh-token-xyz');

 // Verify context was preserved
-expect(refreshedCredentials?.selectedContext?.orgId).toBe('test-org');
-expect(refreshedCredentials?.selectedContext?.briefId).toBe('test-brief');
+expect(refreshedCredentials.selectedContext?.orgId).toBe('test-org');
+expect(refreshedCredentials.selectedContext?.briefId).toBe('test-brief');

 // Verify new expiration is in the future
-const newExpiry = new Date(refreshedCredentials!.expiresAt!).getTime();
+const newExpiry = new Date(refreshedCredentials.expiresAt!).getTime();
 const now = Date.now();
 expect(newExpiry).toBeGreaterThan(now);
 });

-it('should return null if refresh fails', async () => {
+it('should throw error if manual refresh fails', async () => {
 const expiredCredentials: AuthCredentials = {
 token: 'expired-token',
 refreshToken: 'invalid-refresh-token',
@@ -176,12 +182,11 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {
 'refreshSession'
 ).mockRejectedValue(new Error('Refresh token expired'));

-const credentials = await authManager.getCredentials();
-
-expect(credentials).toBeNull();
+// Explicit refreshToken() call should throw
+await expect(authManager.refreshToken()).rejects.toThrow();
 });

-it('should return null if no refresh token available', async () => {
+it('should return expired credentials even without refresh token', () => {
 const expiredCredentials: AuthCredentials = {
 token: 'expired-token',
 // No refresh token
@@ -195,18 +200,21 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-const credentials = await authManager.getCredentials();
+const credentials = authManager.getCredentials();

-expect(credentials).toBeNull();
+// Credentials are returned even without refresh token
+expect(credentials).not.toBeNull();
+expect(credentials?.token).toBe('expired-token');
+expect(credentials?.refreshToken).toBeUndefined();
 });

-it('should return null if credentials missing expiresAt', async () => {
+it('should return null if credentials missing expiresAt', () => {
 const credentialsWithoutExpiry: AuthCredentials = {
 token: 'test-token',
 refreshToken: 'refresh-token',
 userId: 'test-user-id',
 email: 'test@example.com',
-// Missing expiresAt
+// Missing expiresAt - invalid token
 savedAt: new Date().toISOString()
 } as any;

@@ -214,16 +222,17 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-const credentials = await authManager.getCredentials();
+const credentials = authManager.getCredentials();

-// Should return null because no valid expiration
+// Tokens without valid expiration are considered invalid
 expect(credentials).toBeNull();
 });
 });

 describe('Clock Skew Tolerance', () => {
-it('should refresh token within 30-second expiry window', async () => {
+it('should return credentials within 30-second expiry window', () => {
 // Token expires in 15 seconds (within 30-second buffer)
+// Supabase will handle refresh automatically
 const almostExpiredCredentials: AuthCredentials = {
 token: 'almost-expired-token',
 refreshToken: 'valid-refresh-token',
@@ -237,23 +246,16 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-const mockRefreshSession = vi
-.fn()
-.mockResolvedValue(mockRefreshedSession);
-vi.spyOn(
-authManager['supabaseClient'],
-'refreshSession'
-).mockImplementation(mockRefreshSession);
-
-const credentials = await authManager.getCredentials();
-
-// Should trigger refresh due to 30-second buffer
-expect(mockRefreshSession).toHaveBeenCalledTimes(1);
-expect(credentials?.token).toBe('new-access-token-xyz');
+const credentials = authManager.getCredentials();
+
+// Credentials are returned (Supabase handles auto-refresh in background)
+expect(credentials).not.toBeNull();
+expect(credentials?.token).toBe('almost-expired-token');
+expect(credentials?.refreshToken).toBe('valid-refresh-token');
 });

-it('should not refresh token well before expiry', async () => {
-// Token expires in 5 minutes (well outside 30-second buffer)
+it('should return valid token well before expiry', () => {
+// Token expires in 5 minutes
 const validCredentials: AuthCredentials = {
 token: 'valid-token',
 refreshToken: 'valid-refresh-token',
@@ -267,21 +269,17 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-const mockRefreshSession = vi.fn();
-vi.spyOn(
-authManager['supabaseClient'],
-'refreshSession'
-).mockImplementation(mockRefreshSession);
-
-const credentials = await authManager.getCredentials();
-
-expect(mockRefreshSession).not.toHaveBeenCalled();
+const credentials = authManager.getCredentials();
+
+// Valid credentials are returned as-is
+expect(credentials).not.toBeNull();
 expect(credentials?.token).toBe('valid-token');
+expect(credentials?.refreshToken).toBe('valid-refresh-token');
 });
 });

 describe('Synchronous vs Async Methods', () => {
-it('getCredentialsSync should not trigger refresh', () => {
+it('getCredentials should return expired credentials', () => {
 const expiredCredentials: AuthCredentials = {
 token: 'expired-token',
 refreshToken: 'valid-refresh-token',
@@ -295,40 +293,17 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-// Synchronous call should return null without refresh
-const credentials = authManager.getCredentialsSync();
-
-expect(credentials).toBeNull();
-});
-
-it('getCredentials async should trigger refresh', async () => {
-const expiredCredentials: AuthCredentials = {
-token: 'expired-token',
-refreshToken: 'valid-refresh-token',
-userId: 'test-user-id',
-email: 'test@example.com',
-expiresAt: new Date(Date.now() - 60000).toISOString(),
-savedAt: new Date().toISOString()
-};
-
-credentialStore.saveCredentials(expiredCredentials);
-
-authManager = AuthManager.getInstance();
-
-vi.spyOn(
-authManager['supabaseClient'],
-'refreshSession'
-).mockResolvedValue(mockRefreshedSession);
-
-const credentials = await authManager.getCredentials();
+// Returns credentials even if expired - Supabase will handle refresh
+const credentials = authManager.getCredentials();

 expect(credentials).not.toBeNull();
-expect(credentials?.token).toBe('new-access-token-xyz');
+expect(credentials?.token).toBe('expired-token');
+expect(credentials?.refreshToken).toBe('valid-refresh-token');
 });
 });

 describe('Multiple Concurrent Calls', () => {
-it('should handle concurrent getCredentials calls gracefully', async () => {
+it('should handle concurrent getCredentials calls gracefully', () => {
 const expiredCredentials: AuthCredentials = {
 token: 'expired-token',
 refreshToken: 'valid-refresh-token',
@@ -342,29 +317,20 @@ describe('AuthManager - Token Auto-Refresh Integration', () => {

 authManager = AuthManager.getInstance();

-const mockRefreshSession = vi
-.fn()
-.mockResolvedValue(mockRefreshedSession);
-vi.spyOn(
-authManager['supabaseClient'],
-'refreshSession'
-).mockImplementation(mockRefreshSession);
-
-// Make multiple concurrent calls
-const [creds1, creds2, creds3] = await Promise.all([
-authManager.getCredentials(),
-authManager.getCredentials(),
-authManager.getCredentials()
-]);
-
-// All should get the refreshed token
-expect(creds1?.token).toBe('new-access-token-xyz');
-expect(creds2?.token).toBe('new-access-token-xyz');
-expect(creds3?.token).toBe('new-access-token-xyz');
-
-// Refresh might be called multiple times, but that's okay
-// (ideally we'd debounce, but this is acceptable behavior)
-expect(mockRefreshSession).toHaveBeenCalled();
+// Make multiple concurrent calls (synchronous now)
+const creds1 = authManager.getCredentials();
+const creds2 = authManager.getCredentials();
+const creds3 = authManager.getCredentials();
+
+// All should get the same credentials (even if expired)
+expect(creds1?.token).toBe('expired-token');
+expect(creds2?.token).toBe('expired-token');
+expect(creds3?.token).toBe('expired-token');
+
+// All include refresh token for Supabase to use
+expect(creds1?.refreshToken).toBe('valid-refresh-token');
+expect(creds2?.refreshToken).toBe('valid-refresh-token');
+expect(creds3?.refreshToken).toBe('valid-refresh-token');
 });
 });
 });
@@ -47,21 +47,33 @@ export function normalizeProjectRoot(projectRoot) {

 /**
 * Find the project root directory by looking for project markers
-* @param {string} startDir - Directory to start searching from
-* @returns {string|null} - Project root path or null if not found
+* Traverses upwards from startDir until a project marker is found or filesystem root is reached
+* Limited to 50 parent directory levels to prevent excessive traversal
+* @param {string} startDir - Directory to start searching from (defaults to process.cwd())
+* @returns {string} - Project root path (falls back to current directory if no markers found)
 */
 export function findProjectRoot(startDir = process.cwd()) {
+// Define project markers that indicate a project root
+// Prioritize Task Master specific markers first
 const projectMarkers = [
-'.taskmaster',
-TASKMASTER_TASKS_FILE,
-'tasks.json',
-LEGACY_TASKS_FILE,
-'.git',
-'.svn',
-'package.json',
-'yarn.lock',
-'package-lock.json',
-'pnpm-lock.yaml'
+'.taskmaster', // Task Master directory (highest priority)
+TASKMASTER_CONFIG_FILE, // .taskmaster/config.json
+TASKMASTER_TASKS_FILE, // .taskmaster/tasks/tasks.json
+LEGACY_CONFIG_FILE, // .taskmasterconfig (legacy)
+LEGACY_TASKS_FILE, // tasks/tasks.json (legacy)
+'tasks.json', // Root tasks.json (legacy)
+'.git', // Git repository
+'.svn', // SVN repository
+'package.json', // Node.js project
+'yarn.lock', // Yarn project
+'package-lock.json', // npm project
+'pnpm-lock.yaml', // pnpm project
+'Cargo.toml', // Rust project
+'go.mod', // Go project
+'pyproject.toml', // Python project
+'requirements.txt', // Python project
+'Gemfile', // Ruby project
+'composer.json' // PHP project
 ];

 let currentDir = path.resolve(startDir);
@@ -69,19 +81,36 @@ export function findProjectRoot(startDir = process.cwd()) {
 const maxDepth = 50; // Reasonable limit to prevent infinite loops
 let depth = 0;

+// Traverse upwards looking for project markers
 while (currentDir !== rootDir && depth < maxDepth) {
 // Check if current directory contains any project markers
 for (const marker of projectMarkers) {
 const markerPath = path.join(currentDir, marker);
-if (fs.existsSync(markerPath)) {
-return currentDir;
+try {
+if (fs.existsSync(markerPath)) {
+// Found a project marker - return this directory as project root
+return currentDir;
+}
+} catch (error) {
+// Ignore permission errors and continue searching
+continue;
 }
 }
-currentDir = path.dirname(currentDir);
+
+// Move up one directory level
+const parentDir = path.dirname(currentDir);
+
+// Safety check: if dirname returns the same path, we've hit the root
+if (parentDir === currentDir) {
+break;
+}
+
+currentDir = parentDir;
 depth++;
 }

 // Fallback to current working directory if no project root found
+// This ensures the function always returns a valid path
 return process.cwd();
 }

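Reviewer note: a brief usage sketch of the traversal behaviour after this change (the paths are illustrative; the import path matches the test file added below):

```typescript
// Illustrative only: findProjectRoot now walks up from nested paths.
import { findProjectRoot } from '../../src/utils/path-utils.js';

// Resolves the repository root from a nested directory because markers
// such as .taskmaster, .git or package.json are found on the way up.
const root = findProjectRoot('/repo/apps/web/src/components');

// Falls back to process.cwd() when no marker is found within 50 levels.
```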
123 tests/helpers/tool-counts.js Normal file
@@ -0,0 +1,123 @@
+/**
+* tool-counts.js
+* Shared helper for validating tool counts across tests and validation scripts
+*/
+
+import {
+getToolCounts,
+getToolCategories
+} from '../../mcp-server/src/tools/tool-registry.js';
+
+/**
+* Expected tool counts - update these when tools are added/removed
+* These serve as the canonical source of truth for expected counts
+*/
+export const EXPECTED_TOOL_COUNTS = {
+core: 7,
+standard: 15,
+total: 36
+};
+
+/**
+* Expected core tools list for validation
+*/
+export const EXPECTED_CORE_TOOLS = [
+'get_tasks',
+'next_task',
+'get_task',
+'set_task_status',
+'update_subtask',
+'parse_prd',
+'expand_task'
+];
+
+/**
+* Validate that actual tool counts match expected counts
+* @returns {Object} Validation result with isValid flag and details
+*/
+export function validateToolCounts() {
+const actual = getToolCounts();
+const expected = EXPECTED_TOOL_COUNTS;
+
+const isValid =
+actual.core === expected.core &&
+actual.standard === expected.standard &&
+actual.total === expected.total;
+
+return {
+isValid,
+actual,
+expected,
+differences: {
+core: actual.core - expected.core,
+standard: actual.standard - expected.standard,
+total: actual.total - expected.total
+}
+};
+}
+
+/**
+* Validate that tool categories have correct structure and content
+* @returns {Object} Validation result
+*/
+export function validateToolStructure() {
+const categories = getToolCategories();
+const counts = getToolCounts();
+
+// Check that core tools are subset of standard tools
+const coreInStandard = categories.core.every((tool) =>
+categories.standard.includes(tool)
+);
+
+// Check that standard tools are subset of all tools
+const standardInAll = categories.standard.every((tool) =>
+categories.all.includes(tool)
+);
+
+// Check that expected core tools match actual
+const expectedCoreMatch =
+EXPECTED_CORE_TOOLS.every((tool) => categories.core.includes(tool)) &&
+categories.core.every((tool) => EXPECTED_CORE_TOOLS.includes(tool));
+
+// Check array lengths match counts
+const lengthsMatch =
+categories.core.length === counts.core &&
+categories.standard.length === counts.standard &&
+categories.all.length === counts.total;
+
+return {
+isValid:
+coreInStandard && standardInAll && expectedCoreMatch && lengthsMatch,
+details: {
+coreInStandard,
+standardInAll,
+expectedCoreMatch,
+lengthsMatch
+},
+categories,
+counts
+};
+}
+
+/**
+* Get a detailed report of all tool information
+* @returns {Object} Comprehensive tool information
+*/
+export function getToolReport() {
+const counts = getToolCounts();
+const categories = getToolCategories();
+const validation = validateToolCounts();
+const structure = validateToolStructure();
+
+return {
+counts,
+categories,
+validation,
+structure,
+summary: {
+totalValid: validation.isValid && structure.isValid,
+countsValid: validation.isValid,
+structureValid: structure.isValid
+}
+};
+}
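Reviewer note: a sketch of how a validation script might consume this helper (the script itself is hypothetical; the exported functions are the ones defined above):

```typescript
// Hypothetical validation script built on the helper above.
import { getToolReport } from './tests/helpers/tool-counts.js';

const report = getToolReport();
if (!report.summary.totalValid) {
  console.error('Tool registry drifted from expected counts:', report.validation.differences);
  process.exit(1);
}
```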
410 tests/unit/mcp/tools/tool-registration.test.js Normal file
@@ -0,0 +1,410 @@
+/**
+* tool-registration.test.js
+* Comprehensive unit tests for the Task Master MCP tool registration system
+* Tests environment variable control system covering all configuration modes and edge cases
+*/
+
+import {
+describe,
+it,
+expect,
+beforeEach,
+afterEach,
+jest
+} from '@jest/globals';
+
+import {
+EXPECTED_TOOL_COUNTS,
+EXPECTED_CORE_TOOLS,
+validateToolCounts,
+validateToolStructure
+} from '../../../helpers/tool-counts.js';
+
+import { registerTaskMasterTools } from '../../../../mcp-server/src/tools/index.js';
+import {
+toolRegistry,
+coreTools,
+standardTools
+} from '../../../../mcp-server/src/tools/tool-registry.js';
+
+// Derive constants from imported registry to avoid brittle magic numbers
+const ALL_COUNT = Object.keys(toolRegistry).length;
+const CORE_COUNT = coreTools.length;
+const STANDARD_COUNT = standardTools.length;
+
+describe('Task Master Tool Registration System', () => {
+let mockServer;
+let originalEnv;
+
+beforeEach(() => {
+originalEnv = process.env.TASK_MASTER_TOOLS;
+
+mockServer = {
+tools: [],
+addTool: jest.fn((tool) => {
+mockServer.tools.push(tool);
+return tool;
+})
+};
+
+delete process.env.TASK_MASTER_TOOLS;
+});
+
+afterEach(() => {
+if (originalEnv !== undefined) {
+process.env.TASK_MASTER_TOOLS = originalEnv;
+} else {
+delete process.env.TASK_MASTER_TOOLS;
+}
+
+jest.clearAllMocks();
+});
+
+describe('Test Environment Setup', () => {
+it('should have properly configured mock server', () => {
+expect(mockServer).toBeDefined();
+expect(typeof mockServer.addTool).toBe('function');
+expect(Array.isArray(mockServer.tools)).toBe(true);
+expect(mockServer.tools.length).toBe(0);
+});
+
+it('should have correct tool registry structure', () => {
+const validation = validateToolCounts();
+expect(validation.isValid).toBe(true);
+
+if (!validation.isValid) {
+console.error('Tool count validation failed:', validation);
+}
+
+expect(validation.actual.total).toBe(EXPECTED_TOOL_COUNTS.total);
+expect(validation.actual.core).toBe(EXPECTED_TOOL_COUNTS.core);
+expect(validation.actual.standard).toBe(EXPECTED_TOOL_COUNTS.standard);
+});
+
+it('should have correct core tools', () => {
+const structure = validateToolStructure();
+expect(structure.isValid).toBe(true);
+
+if (!structure.isValid) {
+console.error('Tool structure validation failed:', structure);
+}
+
+expect(coreTools).toEqual(expect.arrayContaining(EXPECTED_CORE_TOOLS));
+expect(coreTools.length).toBe(EXPECTED_TOOL_COUNTS.core);
+});
+
+it('should have correct standard tools that include all core tools', () => {
+const structure = validateToolStructure();
+expect(structure.details.coreInStandard).toBe(true);
+expect(standardTools.length).toBe(EXPECTED_TOOL_COUNTS.standard);
+
+coreTools.forEach((tool) => {
+expect(standardTools).toContain(tool);
+});
+});
+
+it('should have all expected tools in registry', () => {
+const expectedTools = [
+'initialize_project',
+'models',
+'research',
+'add_tag',
+'delete_tag',
+'get_tasks',
+'next_task',
+'get_task'
+];
+expectedTools.forEach((tool) => {
+expect(toolRegistry).toHaveProperty(tool);
+});
+});
+});
+
+describe('Configuration Modes', () => {
+it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS is not set (default behavior)`, () => {
+delete process.env.TASK_MASTER_TOOLS;
+
+registerTaskMasterTools(mockServer);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(
+EXPECTED_TOOL_COUNTS.total
+);
+});
+
+it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS=all`, () => {
+process.env.TASK_MASTER_TOOLS = 'all';
+
+registerTaskMasterTools(mockServer);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
+});
+
+it(`should register exactly ${CORE_COUNT} core tools when TASK_MASTER_TOOLS=core`, () => {
+process.env.TASK_MASTER_TOOLS = 'core';
+
+registerTaskMasterTools(mockServer, 'core');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(
+EXPECTED_TOOL_COUNTS.core
+);
+});
+
+it(`should register exactly ${STANDARD_COUNT} standard tools when TASK_MASTER_TOOLS=standard`, () => {
+process.env.TASK_MASTER_TOOLS = 'standard';
+
+registerTaskMasterTools(mockServer, 'standard');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(
+EXPECTED_TOOL_COUNTS.standard
+);
+});
+
+it(`should treat lean as alias for core mode (${CORE_COUNT} tools)`, () => {
+process.env.TASK_MASTER_TOOLS = 'lean';
+
+registerTaskMasterTools(mockServer, 'lean');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
+});
+
+it('should handle case insensitive configuration values', () => {
+process.env.TASK_MASTER_TOOLS = 'CORE';
+
+registerTaskMasterTools(mockServer, 'CORE');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
+});
+});
+
+describe('Custom Tool Selection and Edge Cases', () => {
+it('should register specific tools from comma-separated list', () => {
+process.env.TASK_MASTER_TOOLS = 'get_tasks,next_task,get_task';
+
+registerTaskMasterTools(mockServer, 'get_tasks,next_task,get_task');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(3);
+});
+
+it('should handle mixed valid and invalid tool names gracefully', () => {
+process.env.TASK_MASTER_TOOLS =
+'invalid_tool,get_tasks,fake_tool,next_task';
+
+registerTaskMasterTools(
+mockServer,
+'invalid_tool,get_tasks,fake_tool,next_task'
+);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(2);
+});
+
+it('should default to all tools with completely invalid input', () => {
+process.env.TASK_MASTER_TOOLS = 'completely_invalid';
+
+registerTaskMasterTools(mockServer);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
+});
+
+it('should handle empty string environment variable', () => {
+process.env.TASK_MASTER_TOOLS = '';
+
+registerTaskMasterTools(mockServer);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
+});
+
+it('should handle whitespace in comma-separated lists', () => {
+process.env.TASK_MASTER_TOOLS = ' get_tasks , next_task , get_task ';
+
+registerTaskMasterTools(mockServer, ' get_tasks , next_task , get_task ');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(3);
+});
+
+it('should ignore duplicate tools in list', () => {
+process.env.TASK_MASTER_TOOLS = 'get_tasks,get_tasks,next_task,get_tasks';
+
+registerTaskMasterTools(
+mockServer,
+'get_tasks,get_tasks,next_task,get_tasks'
+);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(2);
+});
+
+it('should handle only commas and empty entries', () => {
+process.env.TASK_MASTER_TOOLS = ',,,';
+
+registerTaskMasterTools(mockServer);
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
+});
+
+it('should handle single tool selection', () => {
+process.env.TASK_MASTER_TOOLS = 'get_tasks';
+
+registerTaskMasterTools(mockServer, 'get_tasks');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(1);
+});
+});
+
+describe('Coverage Analysis and Integration Tests', () => {
+it('should provide 100% code coverage for environment control logic', () => {
+const testCases = [
+{
+env: undefined,
+expectedCount: ALL_COUNT,
+description: 'undefined env (all)'
+},
+{
+env: '',
+expectedCount: ALL_COUNT,
+description: 'empty string (all)'
+},
+{ env: 'all', expectedCount: ALL_COUNT, description: 'all mode' },
+{ env: 'core', expectedCount: CORE_COUNT, description: 'core mode' },
+{
+env: 'lean',
+expectedCount: CORE_COUNT,
+description: 'lean mode (alias)'
+},
+{
+env: 'standard',
+expectedCount: STANDARD_COUNT,
+description: 'standard mode'
+},
+{
+env: 'get_tasks,next_task',
+expectedCount: 2,
+description: 'custom list'
+},
+{
+env: 'invalid_tool',
+expectedCount: ALL_COUNT,
+description: 'invalid fallback'
+}
+];
+
+testCases.forEach((testCase) => {
+delete process.env.TASK_MASTER_TOOLS;
+if (testCase.env !== undefined) {
+process.env.TASK_MASTER_TOOLS = testCase.env;
+}
+
+mockServer.tools = [];
+mockServer.addTool.mockClear();
+
+registerTaskMasterTools(mockServer, testCase.env || 'all');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(
+testCase.expectedCount
+);
+});
+});
+
+it('should have optimal performance characteristics', () => {
+const startTime = Date.now();
+
+process.env.TASK_MASTER_TOOLS = 'all';
+
+registerTaskMasterTools(mockServer);
+
+const endTime = Date.now();
+const executionTime = endTime - startTime;
+
+expect(executionTime).toBeLessThan(100);
+expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
+});
+
+it('should validate token reduction claims', () => {
+expect(coreTools.length).toBeLessThan(standardTools.length);
+expect(standardTools.length).toBeLessThan(
+Object.keys(toolRegistry).length
+);
+
+expect(coreTools.length).toBe(CORE_COUNT);
+expect(standardTools.length).toBe(STANDARD_COUNT);
+expect(Object.keys(toolRegistry).length).toBe(ALL_COUNT);
+
+const allToolsCount = Object.keys(toolRegistry).length;
+const coreReduction =
+((allToolsCount - coreTools.length) / allToolsCount) * 100;
+const standardReduction =
+((allToolsCount - standardTools.length) / allToolsCount) * 100;
+
+expect(coreReduction).toBeGreaterThan(80);
+expect(standardReduction).toBeGreaterThan(50);
+});
+
+it('should maintain referential integrity of tool registry', () => {
+coreTools.forEach((tool) => {
+expect(standardTools).toContain(tool);
+});
+
+standardTools.forEach((tool) => {
+expect(toolRegistry).toHaveProperty(tool);
+});
+
+Object.keys(toolRegistry).forEach((tool) => {
+expect(typeof toolRegistry[tool]).toBe('function');
+});
+});
+
+it('should handle concurrent registration attempts', () => {
+process.env.TASK_MASTER_TOOLS = 'core';
+
+registerTaskMasterTools(mockServer, 'core');
+registerTaskMasterTools(mockServer, 'core');
+registerTaskMasterTools(mockServer, 'core');
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT * 3);
+});
+
+it('should validate all documented tool categories exist', () => {
+const allTools = Object.keys(toolRegistry);
+
+const projectSetupTools = allTools.filter((tool) =>
+['initialize_project', 'models', 'rules', 'parse_prd'].includes(tool)
+);
+expect(projectSetupTools.length).toBeGreaterThan(0);
+
+const taskManagementTools = allTools.filter((tool) =>
+['get_tasks', 'get_task', 'next_task', 'set_task_status'].includes(tool)
+);
+expect(taskManagementTools.length).toBeGreaterThan(0);
+
+const analysisTools = allTools.filter((tool) =>
+['analyze_project_complexity', 'complexity_report'].includes(tool)
+);
+expect(analysisTools.length).toBeGreaterThan(0);
+
+const tagManagementTools = allTools.filter((tool) =>
+['add_tag', 'delete_tag', 'list_tags', 'use_tag'].includes(tool)
+);
+expect(tagManagementTools.length).toBeGreaterThan(0);
+});
+
+it('should handle error conditions gracefully', () => {
+const problematicInputs = [
+'null',
+'undefined',
+' ',
+'\n\t',
+'special!@#$%^&*()characters',
+'very,very,very,very,very,very,very,long,comma,separated,list,with,invalid,tools,that,should,fallback,to,all'
+];
+
+problematicInputs.forEach((input) => {
+mockServer.tools = [];
+mockServer.addTool.mockClear();
+
+process.env.TASK_MASTER_TOOLS = input;
+
+expect(() => registerTaskMasterTools(mockServer)).not.toThrow();
+
+expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
+});
+});
+});
+});
223
tests/unit/path-utils-find-project-root.test.js
Normal file
223
tests/unit/path-utils-find-project-root.test.js
Normal file
@@ -0,0 +1,223 @@
/**
 * Unit tests for findProjectRoot() function
 * Tests the parent directory traversal functionality
 */

import { jest } from '@jest/globals';
import path from 'path';
import fs from 'fs';

// Import the function to test
import { findProjectRoot } from '../../src/utils/path-utils.js';

describe('findProjectRoot', () => {
	describe('Parent Directory Traversal', () => {
		test('should find .taskmaster in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// .taskmaster exists only at /project
				return normalized === path.normalize('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should find .git in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				return normalized === path.normalize('/project/.git');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should find package.json in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				return normalized === path.normalize('/project/package.json');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should traverse multiple levels to find project root', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// Only exists at /project, not in any subdirectories
				return normalized === path.normalize('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir/deep/nested');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should return current directory as fallback when no markers found', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// No project markers exist anywhere
			mockExistsSync.mockReturnValue(false);

			const result = findProjectRoot('/some/random/path');

			// Should fall back to process.cwd()
			expect(result).toBe(process.cwd());

			mockExistsSync.mockRestore();
		});

		test('should find markers at current directory before checking parent', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// .git exists at /project/subdir, .taskmaster exists at /project
				if (normalized.includes('/project/subdir/.git')) return true;
				if (normalized.includes('/project/.taskmaster')) return true;
				return false;
			});

			const result = findProjectRoot('/project/subdir');

			// Should find /project/subdir first because .git exists there,
			// even though .taskmaster is earlier in the marker array
			expect(result).toBe('/project/subdir');

			mockExistsSync.mockRestore();
		});

		test('should handle permission errors gracefully', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// Throw permission error for checks in /project/subdir
				if (normalized.startsWith('/project/subdir/')) {
					throw new Error('EACCES: permission denied');
				}
				// Return true only for .taskmaster at /project
				return normalized.includes('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir');

			// Should handle permission errors in subdirectory and traverse to parent
			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should detect filesystem root correctly', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// No markers exist
			mockExistsSync.mockReturnValue(false);

			const result = findProjectRoot('/');

			// Should stop at root and fall back to process.cwd()
			expect(result).toBe(process.cwd());

			mockExistsSync.mockRestore();
		});

		test('should recognize various project markers', () => {
			const projectMarkers = [
				'.taskmaster',
				'.git',
				'package.json',
				'Cargo.toml',
				'go.mod',
				'pyproject.toml',
				'requirements.txt',
				'Gemfile',
				'composer.json'
			];

			projectMarkers.forEach((marker) => {
				const mockExistsSync = jest.spyOn(fs, 'existsSync');

				mockExistsSync.mockImplementation((checkPath) => {
					const normalized = path.normalize(checkPath);
					return normalized.includes(`/project/${marker}`);
				});

				const result = findProjectRoot('/project/subdir');

				expect(result).toBe('/project');

				mockExistsSync.mockRestore();
			});
		});
	});

	describe('Edge Cases', () => {
		test('should handle empty string as startDir', () => {
			const result = findProjectRoot('');

			// Should use process.cwd() or fall back appropriately
			expect(typeof result).toBe('string');
			expect(result.length).toBeGreaterThan(0);
		});

		test('should handle relative paths', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				// Simulate .git existing in the resolved path
				return checkPath.includes('.git');
			});

			const result = findProjectRoot('./subdir');

			expect(typeof result).toBe('string');

			mockExistsSync.mockRestore();
		});

		test('should not exceed max depth limit', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// Track how many times existsSync is called
			let callCount = 0;
			mockExistsSync.mockImplementation(() => {
				callCount++;
				return false; // Never find a marker
			});

			// Create a very deep path
			const deepPath = '/a/'.repeat(100) + 'deep';
			const result = findProjectRoot(deepPath);

			// Should stop after max depth (50) and not check 100 levels
			// Each level checks multiple markers, so callCount will be high but bounded
			expect(callCount).toBeLessThan(1000); // Reasonable upper bound
			// With 18 markers and max depth of 50, expect around 900 calls maximum
			expect(callCount).toBeLessThanOrEqual(50 * 18);

			mockExistsSync.mockRestore();
		});
	});
});
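The tests above pin down the traversal contract for `findProjectRoot`: check a set of marker files at each directory level, walk upward until a marker or the filesystem root is reached, cap the walk at a maximum depth, tolerate `existsSync` errors, and fall back to `process.cwd()` when nothing is found. A minimal sketch of that contract, using a reduced marker list and an assumed `MAX_DEPTH` of 50, might look like the following (illustrative only, not the shipped implementation in `src/utils/path-utils.js`):

```js
import fs from 'fs';
import path from 'path';

// Reduced marker list and MAX_DEPTH are assumptions taken from the tests above;
// the real module checks a larger set of markers (the tests imply 18).
const PROJECT_MARKERS = ['.taskmaster', '.git', 'package.json', 'go.mod'];
const MAX_DEPTH = 50;

function findProjectRootSketch(startDir = process.cwd()) {
	let current = path.resolve(startDir || process.cwd());

	for (let depth = 0; depth < MAX_DEPTH; depth++) {
		for (const marker of PROJECT_MARKERS) {
			try {
				if (fs.existsSync(path.join(current, marker))) {
					return current; // nearest directory containing any marker wins
				}
			} catch {
				// Permission or I/O errors are ignored; keep walking upward.
			}
		}

		const parent = path.dirname(current);
		if (parent === current) break; // reached the filesystem root
		current = parent;
	}

	return process.cwd(); // fallback when no marker is found
}
```

With this shape, the max-depth test's bound of `50 * 18` calls follows directly: at most `MAX_DEPTH` levels, times the number of markers checked per level.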