Compare commits
24 Commits
docs/auto-...
| Author | SHA1 | Date |
|---|---|---|
|  | 3ec3741f3b |  |
|  | 548beb4344 |  |
|  | 555da2b5b9 |  |
|  | 662e3865f3 |  |
|  | 8649c8a347 |  |
|  | f7cab246b0 |  |
|  | 5aca107827 |  |
|  | fb68c9fe1f |  |
|  | ff3bd7add8 |  |
|  | c8228e913b |  |
|  | 218b68a31e |  |
|  | 6bc75c0ac6 |  |
|  | d7fca1844f |  |
|  | a98d96ef04 |  |
|  | a69d8c91dc |  |
|  | 474a86cebb |  |
|  | 3283506444 |  |
|  | 9acb900153 |  |
|  | c4f5d89e72 |  |
|  | e308cf4f46 |  |
|  | 11b7354010 |  |
|  | 4c1ef2ca94 |  |
|  | 663aa2dfe9 |  |
|  | 8f60a0561e |  |
@@ -1,7 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add changelog highlights to auto-update notifications
-
-When the CLI auto-updates to a new version, it now displays a "What's New" section.
@@ -11,6 +11,7 @@
   "access": "public",
   "baseBranch": "main",
   "ignore": [
-    "docs"
+    "docs",
+    "@tm/claude-code-plugin"
   ]
 }
.changeset/dirty-hairs-know.md (new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+"task-master-ai": patch
+---
+
+Improve auth token refresh flow
.changeset/fix-parent-directory-traversal.md (new file, 7 lines)
@@ -0,0 +1,7 @@
+---
+"task-master-ai": patch
+---
+
+Enable Task Master commands to traverse parent directories to find project root from nested paths
+
+Fixes #1301
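The traversal this changeset describes is, in outline, a walk from the current working directory up toward the filesystem root until a project marker is found. A minimal TypeScript sketch, assuming a `.taskmaster` directory as the marker and a hypothetical `findProjectRoot` helper name (neither is confirmed by this compare):

```typescript
import fs from 'node:fs';
import path from 'node:path';

/**
 * Walk upward from `startDir` until a directory containing `.taskmaster`
 * is found, or the filesystem root is reached. Illustrative only; the
 * project's real implementation may differ.
 */
export function findProjectRoot(startDir: string = process.cwd()): string | null {
  let current = path.resolve(startDir);
  while (true) {
    if (fs.existsSync(path.join(current, '.taskmaster'))) {
      return current;
    }
    const parent = path.dirname(current);
    if (parent === current) {
      return null; // hit the filesystem root without finding a project
    }
    current = parent;
  }
}
```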
.changeset/fix-warning-box-alignment.md (new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+"@tm/cli": patch
+---
+
+Fix warning message box width to match dashboard box width for consistent UI alignment
.changeset/light-owls-stay.md (new file, 35 lines)
@@ -0,0 +1,35 @@
+---
+"task-master-ai": minor
+---
+
+Add configurable MCP tool loading to optimize LLM context usage
+
+You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
+
+**Configuration Options:**
+
+- `all` (default): Load all 36 tools
+- `core` or `lean`: Load only 7 essential tools for daily development
+  - Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
+- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
+  - Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
+- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
+
+**Example .mcp.json configuration:**
+
+```json
+{
+  "mcpServers": {
+    "task-master-ai": {
+      "command": "npx",
+      "args": ["-y", "task-master-ai"],
+      "env": {
+        "TASK_MASTER_TOOLS": "standard",
+        "ANTHROPIC_API_KEY": "your_key_here"
+      }
+    }
+  }
+}
+```
+
+For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
@@ -1,47 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add Claude Code plugin with marketplace distribution
-
-This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
-
-## 🎉 New: Claude Code Plugin
-
-Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
-
-- **49 slash commands** with clean naming (`/taskmaster:command-name`)
-- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
-- **MCP server integration** for deep Claude Code integration
-
-**Installation:**
-
-```bash
-/plugin marketplace add eyaltoledano/claude-task-master
-/plugin install taskmaster@taskmaster
-```
-
-### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
-
-- Shows plugin installation instructions
-- Only manages CLAUDE.md imports for agent instructions
-- Directs users to install the official plugin
-
-**Migration for Existing Users:**
-
-If you previously used `rules add claude`:
-
-1. The old commands in `.claude/commands/` will continue to work but won't receive updates
-2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
-3. remove old `.claude/commands/` and `.claude/agents/` directories
-
-**Why This Change?**
-
-Claude Code plugins provide:
-
-- ✅ Automatic updates when we release new features
-- ✅ Better command organization and naming
-- ✅ Seamless integration with Claude Code
-- ✅ No manual file copying or management
-
-The plugin system is the future of Task Master AI integration with Claude Code!
.changeset/metal-rocks-help.md (new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+"task-master-ai": minor
+---
+
+Improve next command to work with remote
@@ -1,17 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
-
-Key features:
-- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
-- Inline instructions at decision points guide AI through each section
-- Good/bad examples for immediate pattern matching
-- Flexible plain-text format with XML-style tags for parseability
-- Critical dependency-graph section ensures correct task ordering
-- Automatic inclusion during `task-master init`
-- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
-- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
-
-The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
.changeset/open-tips-notice.md (new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+"task-master-ai": minor
+---
+
+Add 4.5 haiku and sonnet to supported models for claude-code and anthropic ai providers
@@ -1,7 +0,0 @@
----
-"task-master-ai": patch
----
-
-Fix cross-level task dependencies not being saved
-
-Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
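For context on what "cross-level" means here, the sketch below shows one way an ID like `2.2` (subtask) or `11` (top-level task) can be resolved before the dependency is written back. The data shapes and helper names are illustrative assumptions, not the project's actual code:

```typescript
interface Subtask { id: number; dependencies: (number | string)[] }
interface Task { id: number; dependencies: (number | string)[]; subtasks?: Subtask[] }

/** Find a task or subtask by an ID like 11 or "2.2". */
function findNode(tasks: Task[], id: number | string): Task | Subtask | undefined {
  const [parentId, subId] = String(id).split('.');
  const parent = tasks.find((t) => t.id === Number(parentId));
  if (!parent || subId === undefined) return parent;
  return parent.subtasks?.find((s) => s.id === Number(subId));
}

/** Add `dependsOn` to the dependency list of `id`, whichever level each lives at. */
function addDependency(tasks: Task[], id: number | string, dependsOn: number | string): void {
  const node = findNode(tasks, id);
  if (!node) throw new Error(`Task ${id} not found`);
  if (!findNode(tasks, dependsOn)) throw new Error(`Dependency ${dependsOn} not found`);
  if (!node.dependencies.includes(dependsOn)) {
    node.dependencies.push(dependsOn);
  }
}
```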
@@ -1,21 +0,0 @@
-{
-  "mode": "exit",
-  "tag": "rc",
-  "initialVersions": {
-    "task-master-ai": "0.28.0",
-    "@tm/cli": "",
-    "docs": "0.0.5",
-    "extension": "0.25.5",
-    "@tm/ai-sdk-provider-grok-cli": "",
-    "@tm/build-config": "",
-    "@tm/claude-code-plugin": "0.0.1",
-    "@tm/core": ""
-  },
-  "changesets": [
-    "auto-update-changelog-highlights",
-    "mean-planes-wave",
-    "nice-ways-hope",
-    "plain-falcons-serve",
-    "smart-owls-relax"
-  ]
-}
@@ -1,16 +0,0 @@
----
-"task-master-ai": minor
----
-
-Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
-
-The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
-
-Key improvements:
-- Automatic integration with complexity analysis reports
-- Tag-aware complexity report path resolution
-- Intelligent subtask count determination based on task complexity
-- Falls back to defaults when complexity analysis is unavailable
-- Enhanced logging for better visibility into expansion decisions
-
-When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
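The fallback behaviour described above amounts to "prefer the analysis recommendation, otherwise use a default." A small TypeScript sketch, assuming a report entry shaped roughly like `{ taskId, recommendedSubtasks }` (the field names and default value are assumptions, not taken from this compare):

```typescript
interface ComplexityEntry {
  taskId: number;
  complexityScore: number;
  recommendedSubtasks?: number;
}

const DEFAULT_SUBTASK_COUNT = 5; // assumed default when no analysis exists

/** Pick a subtask count for a task, preferring the complexity report's recommendation. */
function subtaskCountFor(
  taskId: number,
  report: ComplexityEntry[] | undefined
): number {
  const entry = report?.find((e) => e.taskId === taskId);
  if (entry?.recommendedSubtasks && entry.recommendedSubtasks > 0) {
    return entry.recommendedSubtasks;
  }
  return DEFAULT_SUBTASK_COUNT; // fall back when complexity analysis is unavailable
}
```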
CHANGELOG.md (89 changed lines)
@@ -1,5 +1,94 @@
 # task-master-ai
 
+## 0.29.0
+
+### Minor Changes
+
+- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
+
+  When the CLI auto-updates to a new version, it now displays a "What's New" section.
+
+- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
+
+  This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
+
+  ## 🎉 New: Claude Code Plugin
+
+  Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
+  - **49 slash commands** with clean naming (`/taskmaster:command-name`)
+  - **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
+  - **MCP server integration** for deep Claude Code integration
+
+  **Installation:**
+
+  ```bash
+  /plugin marketplace add eyaltoledano/claude-task-master
+  /plugin install taskmaster@taskmaster
+  ```
+
+  ### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
+  - Shows plugin installation instructions
+  - Only manages CLAUDE.md imports for agent instructions
+  - Directs users to install the official plugin
+
+  **Migration for Existing Users:**
+
+  If you previously used `rules add claude`:
+  1. The old commands in `.claude/commands/` will continue to work but won't receive updates
+  2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
+  3. remove old `.claude/commands/` and `.claude/agents/` directories
+
+  **Why This Change?**
+
+  Claude Code plugins provide:
+  - ✅ Automatic updates when we release new features
+  - ✅ Better command organization and naming
+  - ✅ Seamless integration with Claude Code
+  - ✅ No manual file copying or management
+
+  The plugin system is the future of Task Master AI integration with Claude Code!
+
+- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
+
+  Key features:
+  - Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
+  - Inline instructions at decision points guide AI through each section
+  - Good/bad examples for immediate pattern matching
+  - Flexible plain-text format with XML-style tags for parseability
+  - Critical dependency-graph section ensures correct task ordering
+  - Automatic inclusion during `task-master init`
+  - Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
+  - Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
+
+  The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
+
+- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
+
+  The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
+
+  Key improvements:
+  - Automatic integration with complexity analysis reports
+  - Tag-aware complexity report path resolution
+  - Intelligent subtask count determination based on task complexity
+  - Falls back to defaults when complexity analysis is unavailable
+  - Enhanced logging for better visibility into expansion decisions
+
+  When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
+
+### Patch Changes
+
+- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
+
+  Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
+
+- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`4c1ef2c`](https://github.com/eyaltoledano/claude-task-master/commit/4c1ef2ca94411c53bcd2a78ec710b06c500236dd) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
+
+## 0.29.0-rc.1
+
+### Patch Changes
+
+- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`a6c5152`](https://github.com/eyaltoledano/claude-task-master/commit/a6c5152f20edd8717cf1aea34e7c178b1261aa99) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
+
 ## 0.29.0-rc.0
 
 ### Minor Changes
README.md (80 changed lines)
@@ -119,6 +119,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
       "command": "npx",
       "args": ["-y", "task-master-ai"],
       "env": {
+        // "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
         "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
         "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
         "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -148,6 +149,7 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
       "command": "npx",
       "args": ["-y", "task-master-ai"],
       "env": {
+        // "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -196,7 +198,7 @@ Initialize taskmaster-ai in my project
 
 #### 5. Make sure you have a PRD (Recommended)
 
-For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
+For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`.
 For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`
 
 An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
@@ -282,6 +284,76 @@ task-master generate
 task-master rules add windsurf,roo,vscode
 ```
 
+## Tool Loading Configuration
+
+### Optimizing MCP Tool Loading
+
+Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.
+
+You can optimize performance by configuring the `TASK_MASTER_TOOLS` environment variable:
+
+### Available Modes
+
+| Mode | Tools | Context Usage | Use Case |
+|------|-------|--------------|----------|
+| `all` (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
+| `standard` | 15 | ~10,000 tokens | Common task management operations |
+| `core` (or `lean`) | 7 | ~5,000 tokens | Essential daily development workflow |
+| `custom` | Variable | Variable | Comma-separated list of specific tools |
+
+### Configuration Methods
+
+#### Method 1: Environment Variable in MCP Configuration
+
+Add `TASK_MASTER_TOOLS` to your MCP configuration file's `env` section:
+
+```jsonc
+{
+  "mcpServers": { // or "servers" for VS Code
+    "task-master-ai": {
+      "command": "npx",
+      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
+      "env": {
+        "TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
+        "ANTHROPIC_API_KEY": "your-key-here",
+        // ... other API keys
+      }
+    }
+  }
+}
+```
+
+#### Method 2: Claude Code CLI (One-Time Setup)
+
+For Claude Code users, you can set the mode during installation:
+
+```bash
+# Core mode example (~70% token reduction)
+claude mcp add task-master-ai --scope user \
+  --env TASK_MASTER_TOOLS="core" \
+  -- npx -y task-master-ai@latest
+
+# Custom tools example
+claude mcp add task-master-ai --scope user \
+  --env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
+  -- npx -y task-master-ai@latest
+```
+
+### Tool Sets Details
+
+**Core Tools (7):** `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
+
+**Standard Tools (15):** All core tools plus `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
+
+**All Tools (36):** Complete set including project setup, task management, analysis, dependencies, tags, research, and more
+
+### Recommendations
+
+- **New users**: Start with `"standard"` mode for a good balance
+- **Large projects**: Use `"core"` mode to minimize token usage
+- **Complex workflows**: Use `"all"` mode or custom selection
+- **Backward compatibility**: If not specified, defaults to `"all"` mode
+
 ## Claude Code Support
 
 Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
@@ -310,6 +382,12 @@ cd claude-task-master
 node scripts/init.js
 ```
 
+## Join Our Team
+
+<a href="https://tryhamster.com" target="_blank">
+  <img src="./images/hamster-hiring.png" alt="Join Hamster's founding team" />
+</a>
+
 ## Contributors
 
 <a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors">
@@ -11,6 +11,13 @@
 
 ### Patch Changes
 
+- Updated dependencies []:
+  - @tm/core@null
+
+## null
+
+### Patch Changes
+
 - Updated dependencies []:
   - @tm/core@null
 
@@ -22,6 +22,7 @@
     "test:ci": "vitest run --coverage --reporter=dot"
   },
   "dependencies": {
+    "@inquirer/search": "^3.2.0",
     "@tm/core": "*",
     "boxen": "^8.0.1",
     "chalk": "5.6.2",
@@ -8,6 +8,7 @@ import { Command } from 'commander';
 // Import all commands
 import { ListTasksCommand } from './commands/list.command.js';
 import { ShowCommand } from './commands/show.command.js';
+import { NextCommand } from './commands/next.command.js';
 import { AuthCommand } from './commands/auth.command.js';
 import { ContextCommand } from './commands/context.command.js';
 import { StartCommand } from './commands/start.command.js';
@@ -45,6 +46,12 @@ export class CommandRegistry {
     commandClass: ShowCommand as any,
     category: 'task'
   },
+  {
+    name: 'next',
+    description: 'Find the next available task to work on',
+    commandClass: NextCommand as any,
+    category: 'task'
+  },
   {
     name: 'start',
     description: 'Start working on a task with claude-code',
@@ -14,6 +14,8 @@ import {
   type AuthCredentials
 } from '@tm/core/auth';
 import * as ui from '../utils/ui.js';
+import { ContextCommand } from './context.command.js';
+import { displayError } from '../utils/error-handler.js';
 
 /**
  * Result type from auth command
@@ -116,8 +118,7 @@ export class AuthCommand extends Command {
         process.exit(0);
       }, 100);
     } catch (error: any) {
-      this.handleError(error);
-      process.exit(1);
+      displayError(error);
     }
   }
 
@@ -133,8 +134,7 @@ export class AuthCommand extends Command {
         process.exit(1);
       }
     } catch (error: any) {
-      this.handleError(error);
-      process.exit(1);
+      displayError(error);
     }
   }
 
@@ -146,8 +146,7 @@ export class AuthCommand extends Command {
       const result = this.displayStatus();
       this.setLastResult(result);
     } catch (error: any) {
-      this.handleError(error);
-      process.exit(1);
+      displayError(error);
     }
   }
 
@@ -163,8 +162,7 @@ export class AuthCommand extends Command {
        process.exit(1);
       }
     } catch (error: any) {
-      this.handleError(error);
-      process.exit(1);
+      displayError(error);
     }
   }
 
@@ -187,19 +185,29 @@ export class AuthCommand extends Command {
    if (credentials.expiresAt) {
      const expiresAt = new Date(credentials.expiresAt);
      const now = new Date();
-     const hoursRemaining = Math.floor(
-       (expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
-     );
+     const timeRemaining = expiresAt.getTime() - now.getTime();
+     const hoursRemaining = Math.floor(timeRemaining / (1000 * 60 * 60));
+     const minutesRemaining = Math.floor(timeRemaining / (1000 * 60));
 
-     if (hoursRemaining > 0) {
-       console.log(
-         chalk.gray(
-           ` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
-         )
-       );
+     if (timeRemaining > 0) {
+       // Token is still valid
+       if (hoursRemaining > 0) {
+         console.log(
+           chalk.gray(
+             ` Expires at: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
+           )
+         );
+       } else {
+         console.log(
+           chalk.gray(
+             ` Expires at: ${expiresAt.toLocaleString()} (${minutesRemaining} minutes remaining)`
+           )
+         );
+       }
     } else {
+      // Token has expired
      console.log(
-       chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
+       chalk.yellow(` Expired at: ${expiresAt.toLocaleString()}`)
      );
     }
    } else {
@@ -341,6 +349,37 @@ export class AuthCommand extends Command {
      chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
    );
 
+   // Post-auth: Set up workspace context
+   console.log(); // Add spacing
+   try {
+     const contextCommand = new ContextCommand();
+     const contextResult = await contextCommand.setupContextInteractive();
+     if (contextResult.success) {
+       if (contextResult.orgSelected && contextResult.briefSelected) {
+         console.log(
+           chalk.green('✓ Workspace context configured successfully')
+         );
+       } else if (contextResult.orgSelected) {
+         console.log(chalk.green('✓ Organization selected'));
+       }
+     } else {
+       console.log(
+         chalk.yellow('⚠ Context setup was skipped or encountered issues')
+       );
+       console.log(
+         chalk.gray(' You can set up context later with "tm context"')
+       );
+     }
+   } catch (contextError) {
+     console.log(chalk.yellow('⚠ Context setup encountered an error'));
+     console.log(
+       chalk.gray(' You can set up context later with "tm context"')
+     );
+     if (process.env.DEBUG) {
+       console.error(chalk.gray((contextError as Error).message));
+     }
+   }
+
    return {
      success: true,
      action: 'login',
@@ -348,7 +387,7 @@ export class AuthCommand extends Command {
      message: 'Authentication successful'
    };
   } catch (error) {
-   this.handleAuthError(error as AuthenticationError);
+   displayError(error, { skipExit: true });
 
    return {
      success: false,
@@ -411,51 +450,6 @@ export class AuthCommand extends Command {
     }
   }
 
- /**
-  * Handle authentication errors
-  */
- private handleAuthError(error: AuthenticationError): void {
-   console.error(chalk.red(`\n✗ ${error.message}`));
-
-   switch (error.code) {
-     case 'NETWORK_ERROR':
-       ui.displayWarning(
-         'Please check your internet connection and try again.'
-       );
-       break;
-     case 'INVALID_CREDENTIALS':
-       ui.displayWarning('Please check your credentials and try again.');
-       break;
-     case 'AUTH_EXPIRED':
-       ui.displayWarning(
-         'Your session has expired. Please authenticate again.'
-       );
-       break;
-     default:
-       if (process.env.DEBUG) {
-         console.error(chalk.gray(error.stack || ''));
-       }
-   }
- }
-
- /**
-  * Handle general errors
-  */
- private handleError(error: any): void {
-   if (error instanceof AuthenticationError) {
-     this.handleAuthError(error);
-   } else {
-     const msg = error?.getSanitizedDetails?.() ?? {
-       message: error?.message ?? String(error)
-     };
-     console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
-
-     if (error.stack && process.env.DEBUG) {
-       console.error(chalk.gray(error.stack));
-     }
-   }
- }
-
  /**
   * Set the last result for programmatic access
   */
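The hunks above replace each command's private `handleAuthError`/`handleError` methods with a shared `displayError` helper imported from `../utils/error-handler.js`. That helper's implementation is not part of this compare; based only on the call sites shown (`displayError(error)` where the old code also exited, and `displayError(error, { skipExit: true })` where execution continues), it plausibly looks something like the following sketch. Every detail here is an assumption:

```typescript
import chalk from 'chalk';
import { AuthenticationError } from '@tm/core/auth';

interface DisplayErrorOptions {
  skipExit?: boolean;
}

/**
 * Hypothetical shape of the shared error handler the commands now call.
 * Prints a friendly message, adds a hint for known auth errors, and exits
 * unless the caller asks to keep the process alive.
 */
export function displayError(error: unknown, options: DisplayErrorOptions = {}): void {
  const message = error instanceof Error ? error.message : String(error);
  console.error(chalk.red(`\n✗ ${message}`));

  if (error instanceof AuthenticationError && error.code === 'NOT_AUTHENTICATED') {
    console.error(chalk.yellow('Please authenticate first: tm auth login'));
  }

  if (error instanceof Error && error.stack && process.env.DEBUG) {
    console.error(chalk.gray(error.stack));
  }

  if (!options.skipExit) {
    process.exit(1);
  }
}
```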
@@ -6,13 +6,11 @@
 import { Command } from 'commander';
 import chalk from 'chalk';
 import inquirer from 'inquirer';
+import search from '@inquirer/search';
 import ora, { Ora } from 'ora';
-import {
-  AuthManager,
-  AuthenticationError,
-  type UserContext
-} from '@tm/core/auth';
+import { AuthManager, type UserContext } from '@tm/core/auth';
 import * as ui from '../utils/ui.js';
+import { displayError } from '../utils/error-handler.js';
 
 /**
  * Result type from context command
@@ -118,8 +116,7 @@ export class ContextCommand extends Command {
      const result = this.displayContext();
      this.setLastResult(result);
    } catch (error: any) {
-     this.handleError(error);
-     process.exit(1);
+     displayError(error);
    }
  }
 
@@ -156,10 +153,14 @@ export class ContextCommand extends Command {
 
   if (context.briefName || context.briefId) {
     console.log(chalk.green('\n✓ Brief'));
-    if (context.briefName) {
+    if (context.briefName && context.briefId) {
+      const shortId = context.briefId.slice(0, 8);
+      console.log(
+        chalk.white(` ${context.briefName} `) + chalk.gray(`(${shortId})`)
+      );
+    } else if (context.briefName) {
      console.log(chalk.white(` ${context.briefName}`));
-    }
-    if (context.briefId) {
+    } else if (context.briefId) {
      console.log(chalk.gray(` ID: ${context.briefId}`));
     }
   }
@@ -211,8 +212,7 @@ export class ContextCommand extends Command {
       process.exit(1);
     }
   } catch (error: any) {
-    this.handleError(error);
-    process.exit(1);
+    displayError(error);
   }
 }
 
@@ -250,9 +250,10 @@ export class ContextCommand extends Command {
   ]);
 
   // Update context
-  await this.authManager.updateContext({
+  this.authManager.updateContext({
     orgId: selectedOrg.id,
     orgName: selectedOrg.name,
+    orgSlug: selectedOrg.slug,
     // Clear brief when changing org
     briefId: undefined,
     briefName: undefined
@@ -299,8 +300,7 @@ export class ContextCommand extends Command {
       process.exit(1);
     }
   } catch (error: any) {
-    this.handleError(error);
-    process.exit(1);
+    displayError(error);
   }
 }
 
@@ -324,26 +324,54 @@ export class ContextCommand extends Command {
     };
   }
 
-  // Prompt for selection
-  const { selectedBrief } = await inquirer.prompt([
-    {
-      type: 'list',
-      name: 'selectedBrief',
-      message: 'Select a brief:',
-      choices: [
-        { name: '(No brief - organization level)', value: null },
-        ...briefs.map((brief) => ({
-          name: `Brief ${brief.id} (${new Date(brief.createdAt).toLocaleDateString()})`,
-          value: brief
-        }))
-      ]
+  // Prompt for selection with search
+  const selectedBrief = await search<(typeof briefs)[0] | null>({
+    message: 'Search for a brief:',
+    source: async (input) => {
+      const searchTerm = input?.toLowerCase() || '';
+
+      // Static option for no brief
+      const noBriefOption = {
+        name: '(No brief - organization level)',
+        value: null as any,
+        description: 'Clear brief selection'
+      };
+
+      // Filter and map brief options
+      const briefOptions = briefs
+        .filter((brief) => {
+          if (!searchTerm) return true;
+
+          const title = brief.document?.title || '';
+          const shortId = brief.id.slice(0, 8);
+
+          // Search by title first, then by UUID
+          return (
+            title.toLowerCase().includes(searchTerm) ||
+            brief.id.toLowerCase().includes(searchTerm) ||
+            shortId.toLowerCase().includes(searchTerm)
+          );
+        })
+        .map((brief) => {
+          const title =
+            brief.document?.title || `Brief ${brief.id.slice(0, 8)}`;
+          const shortId = brief.id.slice(0, 8);
+          return {
+            name: `${title} ${chalk.gray(`(${shortId})`)}`,
+            value: brief
+          };
+        });
+
+      return [noBriefOption, ...briefOptions];
     }
-  ]);
+  });
 
   if (selectedBrief) {
     // Update context with brief
-    const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
-    await this.authManager.updateContext({
+    const briefName =
+      selectedBrief.document?.title ||
+      `Brief ${selectedBrief.id.slice(0, 8)}`;
+    this.authManager.updateContext({
      briefId: selectedBrief.id,
      briefName: briefName
    });
@@ -354,11 +382,11 @@ export class ContextCommand extends Command {
     success: true,
     action: 'select-brief',
     context: this.authManager.getContext() || undefined,
-    message: `Selected brief: ${selectedBrief.name}`
+    message: `Selected brief: ${selectedBrief.document?.title}`
   };
  } else {
   // Clear brief selection
-  await this.authManager.updateContext({
+  this.authManager.updateContext({
     briefId: undefined,
     briefName: undefined
   });
@@ -396,8 +424,7 @@ export class ContextCommand extends Command {
       process.exit(1);
     }
   } catch (error: any) {
-    this.handleError(error);
-    process.exit(1);
+    displayError(error);
   }
 }
 
@@ -443,8 +470,7 @@ export class ContextCommand extends Command {
       process.exit(1);
     }
   } catch (error: any) {
-    this.handleError(error);
-    process.exit(1);
+    displayError(error);
   }
 }
 
@@ -468,7 +494,7 @@ export class ContextCommand extends Command {
   if (!briefId) {
     spinner.fail('Could not extract a brief ID from the provided input');
     ui.displayError(
-      `Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
+      `Provide a valid brief ID or a Hamster brief URL, e.g. https://${process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN}/home/hamster/briefs/<id>`
     );
     process.exit(1);
   }
@@ -480,20 +506,24 @@ export class ContextCommand extends Command {
     process.exit(1);
   }
 
-  // Fetch org to get a friendly name (optional)
+  // Fetch org to get a friendly name and slug (optional)
   let orgName: string | undefined;
+  let orgSlug: string | undefined;
   try {
     const org = await this.authManager.getOrganization(brief.accountId);
     orgName = org?.name;
+    orgSlug = org?.slug;
   } catch {
     // Non-fatal if org lookup fails
   }
 
   // Update context: set org and brief
-  const briefName = `Brief ${brief.id.slice(0, 8)}`;
-  await this.authManager.updateContext({
+  const briefName =
+    brief.document?.title || `Brief ${brief.id.slice(0, 8)}`;
+  this.authManager.updateContext({
    orgId: brief.accountId,
    orgName,
+   orgSlug,
    briefId: brief.id,
    briefName
   });
@@ -515,8 +545,7 @@ export class ContextCommand extends Command {
   try {
     if (spinner?.isSpinning) spinner.stop();
   } catch {}
-  this.handleError(error);
-  process.exit(1);
+  displayError(error);
  }
 }
 
@@ -613,7 +642,7 @@ export class ContextCommand extends Command {
   };
  }
 
- await this.authManager.updateContext(context);
+ this.authManager.updateContext(context);
 ui.displaySuccess('Context updated');
 
 // Display what was set
@@ -645,26 +674,6 @@ export class ContextCommand extends Command {
   }
  }
 
- /**
-  * Handle errors
-  */
- private handleError(error: any): void {
-   if (error instanceof AuthenticationError) {
-     console.error(chalk.red(`\n✗ ${error.message}`));
-
-     if (error.code === 'NOT_AUTHENTICATED') {
-       ui.displayWarning('Please authenticate first: tm auth login');
-     }
-   } else {
-     const msg = error?.message ?? String(error);
-     console.error(chalk.red(`Error: ${msg}`));
-
-     if (error.stack && process.env.DEBUG) {
-       console.error(chalk.gray(error.stack));
-     }
-   }
- }
-
 /**
  * Set the last result for programmatic access
  */
@@ -686,6 +695,53 @@ export class ContextCommand extends Command {
   return this.authManager.getContext();
  }
 
+ /**
+  * Interactive context setup (for post-auth flow)
+  * Prompts user to select org and brief
+  */
+ async setupContextInteractive(): Promise<{
+   success: boolean;
+   orgSelected: boolean;
+   briefSelected: boolean;
+ }> {
+   try {
+     // Ask if user wants to set up workspace context
+     const { setupContext } = await inquirer.prompt([
+       {
+         type: 'confirm',
+         name: 'setupContext',
+         message: 'Would you like to set up your workspace context now?',
+         default: true
+       }
+     ]);
+
+     if (!setupContext) {
+       return { success: true, orgSelected: false, briefSelected: false };
+     }
+
+     // Select organization
+     const orgResult = await this.selectOrganization();
+     if (!orgResult.success || !orgResult.context?.orgId) {
+       return { success: false, orgSelected: false, briefSelected: false };
+     }
+
+     // Select brief
+     const briefResult = await this.selectBrief(orgResult.context.orgId);
+     return {
+       success: true,
+       orgSelected: true,
+       briefSelected: briefResult.success
+     };
+   } catch (error) {
+     console.error(
+       chalk.yellow(
+         '\nContext setup skipped due to error. You can set it up later with "tm context"'
+       )
+     );
+     return { success: false, orgSelected: false, briefSelected: false };
+   }
+ }
+
 /**
  * Clean up resources
  */
@@ -7,13 +7,10 @@ import { Command } from 'commander';
 import chalk from 'chalk';
 import inquirer from 'inquirer';
 import ora, { Ora } from 'ora';
-import {
-  AuthManager,
-  AuthenticationError,
-  type UserContext
-} from '@tm/core/auth';
+import { AuthManager, type UserContext } from '@tm/core/auth';
 import { TaskMasterCore, type ExportResult } from '@tm/core';
 import * as ui from '../utils/ui.js';
+import { displayError } from '../utils/error-handler.js';
 
 /**
  * Result type from export command
@@ -103,7 +100,7 @@ export class ExportCommand extends Command {
   await this.initializeServices();
 
   // Get current context
-  const context = this.authManager.getContext();
+  const context = await this.authManager.getContext();
 
   // Determine org and brief IDs
   let orgId = options?.org || context?.orgId;
@@ -197,8 +194,7 @@ export class ExportCommand extends Command {
     };
   } catch (error: any) {
     if (spinner?.isSpinning) spinner.fail('Export failed');
-    this.handleError(error);
-    process.exit(1);
+    displayError(error);
   }
 }
 
@@ -334,26 +330,6 @@ export class ExportCommand extends Command {
   return confirmed;
  }
 
- /**
-  * Handle errors
-  */
- private handleError(error: any): void {
-   if (error instanceof AuthenticationError) {
-     console.error(chalk.red(`\n✗ ${error.message}`));
-
-     if (error.code === 'NOT_AUTHENTICATED') {
-       ui.displayWarning('Please authenticate first: tm auth login');
-     }
-   } else {
-     const msg = error?.message ?? String(error);
-     console.error(chalk.red(`Error: ${msg}`));
-
-     if (error.stack && process.env.DEBUG) {
-       console.error(chalk.gray(error.stack));
-     }
-   }
- }
-
 /**
  * Get the last export result (useful for testing)
  */
@@ -17,8 +17,9 @@ import {
 } from '@tm/core';
 import type { StorageType } from '@tm/core/types';
 import * as ui from '../utils/ui.js';
+import { displayError } from '../utils/error-handler.js';
+import { displayCommandHeader } from '../utils/display-helpers.js';
 import {
-  displayHeader,
   displayDashboards,
   calculateTaskStatistics,
   calculateSubtaskStatistics,
@@ -106,14 +107,7 @@ export class ListTasksCommand extends Command {
       this.displayResults(result, options);
     }
   } catch (error: any) {
-    const msg = error?.getSanitizedDetails?.() ?? {
-      message: error?.message ?? String(error)
-    };
-    console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
-    if (error.stack && process.env.DEBUG) {
-      console.error(chalk.gray(error.stack));
-    }
-    process.exit(1);
+    displayError(error);
   }
 }
 
@@ -257,15 +251,12 @@ export class ListTasksCommand extends Command {
  * Display in text format with tables
  */
 private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
-  const { tasks, tag } = data;
+  const { tasks, tag, storageType } = data;
 
-  // Get file path for display
-  const filePath = this.tmCore ? `.taskmaster/tasks/tasks.json` : undefined;
-
-  // Display header without banner (banner already shown by main CLI)
-  displayHeader({
+  // Display header using utility function
+  displayCommandHeader(this.tmCore, {
    tag: tag || 'master',
-   filePath: filePath
+   storageType
  });
 
  // No tasks message
248
apps/cli/src/commands/next.command.ts
Normal file
248
apps/cli/src/commands/next.command.ts
Normal file
@@ -0,0 +1,248 @@
|
|||||||
|
/**
|
||||||
|
* @fileoverview NextCommand using Commander's native class pattern
|
||||||
|
* Extends Commander.Command for better integration with the framework
|
||||||
|
*/
|
||||||
|
|
||||||
|
import path from 'node:path';
|
||||||
|
import { Command } from 'commander';
|
||||||
|
import chalk from 'chalk';
|
||||||
|
import boxen from 'boxen';
|
||||||
|
import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
|
||||||
|
import type { StorageType } from '@tm/core/types';
|
||||||
|
import { displayError } from '../utils/error-handler.js';
|
||||||
|
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
|
||||||
|
import { displayCommandHeader } from '../utils/display-helpers.js';
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Options interface for the next command
|
||||||
|
*/
|
||||||
|
export interface NextCommandOptions {
|
||||||
|
tag?: string;
|
||||||
|
format?: 'text' | 'json';
|
||||||
|
silent?: boolean;
|
||||||
|
project?: string;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Result type from next command
|
||||||
|
*/
|
||||||
|
export interface NextTaskResult {
|
||||||
|
task: Task | null;
|
||||||
|
found: boolean;
|
||||||
|
tag: string;
|
||||||
|
storageType: Exclude<StorageType, 'auto'>;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* NextCommand extending Commander's Command class
|
||||||
|
* This is a thin presentation layer over @tm/core
|
||||||
|
*/
|
||||||
|
export class NextCommand extends Command {
|
||||||
|
private tmCore?: TaskMasterCore;
|
||||||
|
private lastResult?: NextTaskResult;
|
||||||
|
|
||||||
|
constructor(name?: string) {
|
||||||
|
super(name || 'next');
|
||||||
|
|
||||||
|
// Configure the command
|
||||||
|
this.description('Find the next available task to work on')
|
||||||
|
.option('-t, --tag <tag>', 'Filter by tag')
|
||||||
|
.option('-f, --format <format>', 'Output format (text, json)', 'text')
|
||||||
|
.option('--silent', 'Suppress output (useful for programmatic usage)')
|
||||||
|
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.action(async (options: NextCommandOptions) => {
				await this.executeCommand(options);
			});
	}

	/**
	 * Execute the next command
	 */
	private async executeCommand(options: NextCommandOptions): Promise<void> {
		let hasError = false;
		try {
			// Validate options (throws on invalid options)
			this.validateOptions(options);

			// Initialize tm-core
			await this.initializeCore(options.project || process.cwd());

			// Get next task from core
			const result = await this.getNextTask(options);

			// Store result for programmatic access
			this.setLastResult(result);

			// Display results
			if (!options.silent) {
				this.displayResults(result, options);
			}
		} catch (error: any) {
			hasError = true;
			displayError(error, { skipExit: true });
		} finally {
			// Always clean up resources, even on error
			await this.cleanup();
		}

		// Exit after cleanup completes
		if (hasError) {
			process.exit(1);
		}
	}

	/**
	 * Validate command options
	 */
	private validateOptions(options: NextCommandOptions): void {
		// Validate format
		if (options.format && !['text', 'json'].includes(options.format)) {
			throw new Error(
				`Invalid format: ${options.format}. Valid formats are: text, json`
			);
		}
	}

	/**
	 * Initialize TaskMasterCore
	 */
	private async initializeCore(projectRoot: string): Promise<void> {
		if (!this.tmCore) {
			const resolved = path.resolve(projectRoot);
			this.tmCore = await createTaskMasterCore({ projectPath: resolved });
		}
	}

	/**
	 * Get next task from tm-core
	 */
	private async getNextTask(
		options: NextCommandOptions
	): Promise<NextTaskResult> {
		if (!this.tmCore) {
			throw new Error('TaskMasterCore not initialized');
		}

		// Call tm-core to get next task
		const task = await this.tmCore.getNextTask(options.tag);

		// Get storage type and active tag
		const storageType = this.tmCore.getStorageType();
		if (storageType === 'auto') {
			throw new Error('Storage type must be resolved before use');
		}
		const activeTag = options.tag || this.tmCore.getActiveTag();

		return {
			task,
			found: task !== null,
			tag: activeTag,
			storageType
		};
	}

	/**
	 * Display results based on format
	 */
	private displayResults(
		result: NextTaskResult,
		options: NextCommandOptions
	): void {
		const format = options.format || 'text';

		switch (format) {
			case 'json':
				this.displayJson(result);
				break;

			case 'text':
			default:
				this.displayText(result);
				break;
		}
	}

	/**
	 * Display in JSON format
	 */
	private displayJson(result: NextTaskResult): void {
		console.log(JSON.stringify(result, null, 2));
	}

	/**
	 * Display in text format
	 */
	private displayText(result: NextTaskResult): void {
		// Display header with storage info
		displayCommandHeader(this.tmCore, {
			tag: result.tag || 'master',
			storageType: result.storageType
		});

		if (!result.found || !result.task) {
			// No next task available
			console.log(
				boxen(
					chalk.yellow(
						'No tasks available to work on. All tasks are either completed, blocked by dependencies, or in progress.'
					),
					{
						padding: 1,
						borderStyle: 'round',
						borderColor: 'yellow',
						title: '⚠ NO TASKS AVAILABLE ⚠',
						titleAlignment: 'center'
					}
				)
			);
			console.log(
				`\n${chalk.dim('Tip: Try')} ${chalk.cyan('task-master list --status pending')} ${chalk.dim('to see all pending tasks')}`
			);
			return;
		}

		const task = result.task;

		// Display the task details using the same component as 'show' command
		// with a custom header indicating this is the next task
		const customHeader = `Next Task: #${task.id} - ${task.title}`;
		displayTaskDetails(task, {
			customHeader,
			headerColor: 'green',
			showSuggestedActions: true
		});
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: NextTaskResult): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): NextTaskResult | undefined {
		return this.lastResult;
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		if (this.tmCore) {
			await this.tmCore.close();
			this.tmCore = undefined;
		}
	}

	/**
	 * Register this command on an existing program
	 */
	static register(program: Command, name?: string): NextCommand {
		const nextCommand = new NextCommand(name);
		program.addCommand(nextCommand);
		return nextCommand;
	}
}
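As a usage illustration only: a minimal sketch of wiring `NextCommand` into a CLI entry point and reading the result programmatically via `getLastResult()`. The relative import path and program name are assumptions, not taken from the repository.

```ts
// Sketch only: assumes a Commander-based entry point next to the commands folder.
import { Command } from 'commander';
import { NextCommand } from './commands/next.command.js';

async function main(): Promise<void> {
	const program = new Command('task-master');

	// register() adds the subcommand and returns the instance so the
	// parsed result can be read programmatically afterwards.
	const nextCommand = NextCommand.register(program);

	await program.parseAsync(process.argv);

	const result = nextCommand.getLastResult();
	if (result?.found && result.task) {
		console.log(`Next task selected: #${result.task.id}`);
	}
}

main();
```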
@@ -12,6 +12,7 @@ import {
 	type TaskStatus
 } from '@tm/core';
 import type { StorageType } from '@tm/core/types';
+import { displayError } from '../utils/error-handler.js';
 
 /**
  * Valid task status values for validation
@@ -85,6 +86,7 @@ export class SetStatusCommand extends Command {
 	private async executeCommand(
 		options: SetStatusCommandOptions
 	): Promise<void> {
+		let hasError = false;
 		try {
 			// Validate required options
 			if (!options.id) {
@@ -135,16 +137,15 @@ export class SetStatusCommand extends Command {
 					oldStatus: result.oldStatus,
 					newStatus: result.newStatus
 				});
-			} catch (error) {
-				const errorMessage =
-					error instanceof Error ? error.message : String(error);
-
-				if (!options.silent) {
-					console.error(
-						chalk.red(`Failed to update task ${taskId}: ${errorMessage}`)
-					);
-				}
+			} catch (error: any) {
+				hasError = true;
 				if (options.format === 'json') {
+					const errorMessage = error?.getSanitizedDetails
+						? error.getSanitizedDetails().message
+						: error instanceof Error
+							? error.message
+							: String(error);
+
 					console.log(
 						JSON.stringify({
 							success: false,
@@ -153,8 +154,13 @@ export class SetStatusCommand extends Command {
 							timestamp: new Date().toISOString()
 						})
 					);
+				} else if (!options.silent) {
+					// Show which task failed with context
+					console.error(chalk.red(`\nFailed to update task ${taskId}:`));
+					displayError(error, { skipExit: true });
 				}
-				process.exit(1);
+				// Don't exit here - let finally block clean up first
+				break;
 			}
 		}
 
@@ -170,25 +176,26 @@ export class SetStatusCommand extends Command {
 
 			// Display results
 			this.displayResults(this.lastResult, options);
-		} catch (error) {
-			const errorMessage =
-				error instanceof Error ? error.message : 'Unknown error occurred';
-
-			if (!options.silent) {
-				console.error(chalk.red(`Error: ${errorMessage}`));
-			}
-
+		} catch (error: any) {
+			hasError = true;
 			if (options.format === 'json') {
+				const errorMessage =
+					error instanceof Error ? error.message : 'Unknown error occurred';
 				console.log(JSON.stringify({ success: false, error: errorMessage }));
+			} else if (!options.silent) {
+				displayError(error, { skipExit: true });
 			}
-
-			process.exit(1);
 		} finally {
 			// Clean up resources
 			if (this.tmCore) {
 				await this.tmCore.close();
 			}
 		}
+
+		// Exit after cleanup completes
+		if (hasError) {
+			process.exit(1);
+		}
 	}
 
 /**
@@ -9,7 +9,9 @@ import boxen from 'boxen';
 import { createTaskMasterCore, type Task, type TaskMasterCore } from '@tm/core';
 import type { StorageType } from '@tm/core/types';
 import * as ui from '../utils/ui.js';
+import { displayError } from '../utils/error-handler.js';
 import { displayTaskDetails } from '../ui/components/task-detail.component.js';
+import { displayCommandHeader } from '../utils/display-helpers.js';
 
 /**
  * Options interface for the show command
@@ -112,14 +114,7 @@ export class ShowCommand extends Command {
 				this.displayResults(result, options);
 			}
 		} catch (error: any) {
-			const msg = error?.getSanitizedDetails?.() ?? {
-				message: error?.message ?? String(error)
-			};
-			console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
-			if (error.stack && process.env.DEBUG) {
-				console.error(chalk.gray(error.stack));
-			}
-			process.exit(1);
+			displayError(error);
 		}
 	}
 
@@ -257,6 +252,15 @@ export class ShowCommand extends Command {
 			return;
 		}
 
+		// Display header with storage info
+		const activeTag = this.tmCore?.getActiveTag() || 'master';
+		displayCommandHeader(this.tmCore, {
+			tag: activeTag,
+			storageType: result.storageType
+		});
+
+		console.log(); // Add spacing
+
 		// Use the global task details display function
 		displayTaskDetails(result.task, {
 			statusFilter: options.status,
@@ -271,8 +275,12 @@ export class ShowCommand extends Command {
 		result: ShowMultipleTasksResult,
 		_options: ShowCommandOptions
 	): void {
-		// Header
-		ui.displayBanner(`Tasks (${result.tasks.length} found)`);
+		// Display header with storage info
+		const activeTag = this.tmCore?.getActiveTag() || 'master';
+		displayCommandHeader(this.tmCore, {
+			tag: activeTag,
+			storageType: result.storageType
+		});
 
 		if (result.notFound.length > 0) {
 			console.log(chalk.yellow(`\n⚠ Not found: ${result.notFound.join(', ')}`));
@@ -291,8 +299,6 @@ export class ShowCommand extends Command {
 				showDependencies: true
 			})
 		);
-
-		console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
 	}
 
 /**
@@ -16,6 +16,7 @@ import {
 } from '@tm/core';
 import { displayTaskDetails } from '../ui/components/task-detail.component.js';
 import * as ui from '../utils/ui.js';
+import { displayError } from '../utils/error-handler.js';
 
 /**
  * CLI-specific options interface for the start command
@@ -160,8 +161,7 @@ export class StartCommand extends Command {
 			if (spinner) {
 				spinner.fail('Operation failed');
 			}
-			this.handleError(error);
-			process.exit(1);
+			displayError(error);
 		}
 	}
 
@@ -452,22 +452,6 @@ export class StartCommand extends Command {
 		console.log(`\n${chalk.gray('Storage: ' + result.storageType)}`);
 	}
 
-	/**
-	 * Handle general errors
-	 */
-	private handleError(error: any): void {
-		const msg = error?.getSanitizedDetails?.() ?? {
-			message: error?.message ?? String(error)
-		};
-		console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
-
-		// Show stack trace in development mode or when DEBUG is set
-		const isDevelopment = process.env.NODE_ENV !== 'production';
-		if ((isDevelopment || process.env.DEBUG) && error.stack) {
-			console.error(chalk.gray(error.stack));
-		}
-	}
-
 	/**
 	 * Set the last result for programmatic access
 	 */
@@ -6,6 +6,7 @@
 // Commands
 export { ListTasksCommand } from './commands/list.command.js';
 export { ShowCommand } from './commands/show.command.js';
+export { NextCommand } from './commands/next.command.js';
 export { AuthCommand } from './commands/auth.command.js';
 export { ContextCommand } from './commands/context.command.js';
 export { StartCommand } from './commands/start.command.js';
@@ -23,6 +24,9 @@ export {
 // UI utilities (for other commands to use)
 export * as ui from './utils/ui.js';
 
+// Error handling utilities
+export { displayError, isDebugMode } from './utils/error-handler.js';
+
 // Auto-update utilities
 export {
 	checkForUpdate,
@@ -5,6 +5,16 @@
 
 import chalk from 'chalk';
 
+/**
+ * Brief information for API storage
+ */
+export interface BriefInfo {
+	briefId: string;
+	briefName: string;
+	orgSlug?: string;
+	webAppUrl?: string;
+}
+
 /**
  * Header configuration options
  */
@@ -12,22 +22,50 @@ export interface HeaderOptions {
 	title?: string;
 	tag?: string;
 	filePath?: string;
+	storageType?: 'api' | 'file';
+	briefInfo?: BriefInfo;
 }
 
 /**
  * Display the Task Master header with project info
  */
 export function displayHeader(options: HeaderOptions = {}): void {
-	const { filePath, tag } = options;
+	const { filePath, tag, storageType, briefInfo } = options;
 
-	// Display tag and file path info
-	if (tag) {
+	// Display different header based on storage type
+	if (storageType === 'api' && briefInfo) {
+		// API storage: Show brief information
+		const briefDisplay = `🏷 Brief: ${chalk.cyan(briefInfo.briefName)} ${chalk.gray(`(${briefInfo.briefId})`)}`;
+		console.log(briefDisplay);
+
+		// Construct and display the brief URL or ID
+		if (briefInfo.webAppUrl && briefInfo.orgSlug) {
+			const briefUrl = `${briefInfo.webAppUrl}/home/${briefInfo.orgSlug}/briefs/${briefInfo.briefId}/plan`;
+			console.log(`Listing tasks from: ${chalk.dim(briefUrl)}`);
+		} else if (briefInfo.webAppUrl) {
+			// Show web app URL and brief ID if org slug is missing
+			console.log(
+				`Listing tasks from: ${chalk.dim(`${briefInfo.webAppUrl} (Brief: ${briefInfo.briefId})`)}`
+			);
+			console.log(
+				chalk.yellow(
+					`💡 Tip: Run ${chalk.cyan('tm context select')} to set your organization and see the full URL`
+				)
+			);
+		} else {
+			// Fallback: just show the brief ID if we can't get web app URL
+			console.log(
+				`Listing tasks from: ${chalk.dim(`API (Brief ID: ${briefInfo.briefId})`)}`
+			);
+		}
+	} else if (tag) {
+		// File storage: Show tag information
 		let tagInfo = '';
 
 		if (tag && tag !== 'master') {
 			tagInfo = `🏷 tag: ${chalk.cyan(tag)}`;
 		} else {
 			tagInfo = `🏷 tag: ${chalk.cyan('master')}`;
 		}
 
 		console.log(tagInfo);
@@ -39,7 +77,5 @@ export function displayHeader(options: HeaderOptions = {}): void {
 			: `${process.cwd()}/${filePath}`;
 		console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
 	}
-
-	console.log(); // Empty line for spacing
 }
 }
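For illustration only, a hypothetical call to the updated `displayHeader` with API storage. The import path and all brief values below are placeholders, not taken from the repository.

```ts
// Illustrative values only; briefId, orgSlug, and webAppUrl are made up.
import { displayHeader } from './header.component.js';

displayHeader({
	storageType: 'api',
	briefInfo: {
		briefId: 'brief_123',
		briefName: 'Payments revamp',
		orgSlug: 'acme',
		webAppUrl: 'https://app.example.com'
	}
});
// Given the URL construction above, the printed link takes the form:
//   https://app.example.com/home/acme/briefs/brief_123/plan
```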
@@ -6,7 +6,7 @@
 import chalk from 'chalk';
 import boxen from 'boxen';
 import type { Task } from '@tm/core/types';
-import { getComplexityWithColor } from '../../utils/ui.js';
+import { getComplexityWithColor, getBoxWidth } from '../../utils/ui.js';
 
 /**
  * Next task display options
@@ -113,7 +113,7 @@ export function displayRecommendedNextTask(
 			borderColor: '#FFA500', // Orange color
 			title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
 			titleAlignment: 'center',
-			width: process.stdout.columns * 0.97,
+			width: getBoxWidth(0.97),
 			fullscreen: false
 		})
 	);

@@ -5,6 +5,7 @@
 
 import chalk from 'chalk';
 import boxen from 'boxen';
+import { getBoxWidth } from '../../utils/ui.js';
 
 /**
  * Display suggested next steps section
@@ -24,7 +25,7 @@ export function displaySuggestedNextSteps(): void {
 			margin: { top: 0, bottom: 1 },
 			borderStyle: 'round',
 			borderColor: 'gray',
-			width: process.stdout.columns * 0.97
+			width: getBoxWidth(0.97)
 		}
 	)
 );
apps/cli/src/utils/display-helpers.ts (new file, 75 lines)
@@ -0,0 +1,75 @@
/**
 * @fileoverview Display helper utilities for commands
 * Provides DRY utilities for displaying headers and other command output
 */

import type { TaskMasterCore } from '@tm/core';
import type { StorageType } from '@tm/core/types';
import { displayHeader, type BriefInfo } from '../ui/index.js';

/**
 * Get web app base URL from environment
 */
function getWebAppUrl(): string | undefined {
	const baseDomain =
		process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN;

	if (!baseDomain) {
		return undefined;
	}

	// If it already includes protocol, use as-is
	if (baseDomain.startsWith('http://') || baseDomain.startsWith('https://')) {
		return baseDomain;
	}

	// Otherwise, add protocol based on domain
	if (baseDomain.includes('localhost') || baseDomain.includes('127.0.0.1')) {
		return `http://${baseDomain}`;
	}

	return `https://${baseDomain}`;
}

/**
 * Display the command header with appropriate storage information
 * Handles both API and file storage displays
 */
export function displayCommandHeader(
	tmCore: TaskMasterCore | undefined,
	options: {
		tag?: string;
		storageType: Exclude<StorageType, 'auto'>;
	}
): void {
	const { tag, storageType } = options;

	// Get brief info if using API storage
	let briefInfo: BriefInfo | undefined;
	if (storageType === 'api' && tmCore) {
		const storageInfo = tmCore.getStorageDisplayInfo();
		if (storageInfo) {
			// Construct full brief info with web app URL
			briefInfo = {
				...storageInfo,
				webAppUrl: getWebAppUrl()
			};
		}
	}

	// Get file path for display (only for file storage)
	// Note: The file structure is fixed for file storage and won't change.
	// This is a display-only relative path, not used for actual file operations.
	const filePath =
		storageType === 'file' && tmCore
			? `.taskmaster/tasks/tasks.json`
			: undefined;

	// Display header
	displayHeader({
		tag: tag || 'master',
		filePath: filePath,
		storageType: storageType === 'api' ? 'api' : 'file',
		briefInfo: briefInfo
	});
}
apps/cli/src/utils/error-handler.ts (new file, 60 lines)
@@ -0,0 +1,60 @@
/**
 * @fileoverview Centralized error handling utilities for CLI
 * Provides consistent error formatting and debug mode detection
 */

import chalk from 'chalk';

/**
 * Check if debug mode is enabled via environment variable
 * Only returns true when DEBUG is explicitly set to 'true' or '1'
 *
 * @returns True if debug mode is enabled
 */
export function isDebugMode(): boolean {
	return process.env.DEBUG === 'true' || process.env.DEBUG === '1';
}

/**
 * Display an error to the user with optional stack trace in debug mode
 * Handles both TaskMasterError instances and regular errors
 *
 * @param error - The error to display
 * @param options - Display options
 */
export function displayError(
	error: any,
	options: {
		/** Skip exit, useful when caller wants to handle exit */
		skipExit?: boolean;
		/** Force show stack trace regardless of debug mode */
		forceStack?: boolean;
	} = {}
): void {
	// Check if it's a TaskMasterError with sanitized details
	if (error?.getSanitizedDetails) {
		const sanitized = error.getSanitizedDetails();
		console.error(chalk.red(`\n${sanitized.message}`));

		// Show stack trace in debug mode or if forced
		if ((isDebugMode() || options.forceStack) && error.stack) {
			console.error(chalk.gray('\nStack trace:'));
			console.error(chalk.gray(error.stack));
		}
	} else {
		// For other errors, show the message
		const message = error?.message ?? String(error);
		console.error(chalk.red(`\nError: ${message}`));

		// Show stack trace in debug mode or if forced
		if ((isDebugMode() || options.forceStack) && error?.stack) {
			console.error(chalk.gray('\nStack trace:'));
			console.error(chalk.gray(error.stack));
		}
	}

	// Exit if not skipped
	if (!options.skipExit) {
		process.exit(1);
	}
}
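A minimal usage sketch of the `skipExit` pattern the commands above rely on. The `doWork` and `cleanup` helpers are hypothetical stand-ins for command logic and resource teardown.

```ts
import { displayError } from '../utils/error-handler.js';

// Hypothetical helpers for illustration.
async function doWork(): Promise<void> {
	throw new Error('something went wrong');
}
async function cleanup(): Promise<void> {
	/* release resources */
}

export async function run(): Promise<void> {
	let hasError = false;
	try {
		await doWork();
	} catch (error: any) {
		hasError = true;
		// Print the error but let our own cleanup run before exiting;
		// setting DEBUG=1 also prints the stack trace.
		displayError(error, { skipExit: true });
	} finally {
		await cleanup();
	}
	if (hasError) {
		process.exit(1);
	}
}
```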
apps/cli/src/utils/ui.spec.ts (new file, 158 lines)
@@ -0,0 +1,158 @@
/**
 * CLI UI utilities tests
 * Tests for apps/cli/src/utils/ui.ts
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import type { MockInstance } from 'vitest';
import { getBoxWidth } from './ui.js';

describe('CLI UI Utilities', () => {
	describe('getBoxWidth', () => {
		let columnsSpy: MockInstance;
		let originalDescriptor: PropertyDescriptor | undefined;

		beforeEach(() => {
			// Store original descriptor if it exists
			originalDescriptor = Object.getOwnPropertyDescriptor(
				process.stdout,
				'columns'
			);

			// If columns doesn't exist or isn't a getter, define it as one
			if (!originalDescriptor || !originalDescriptor.get) {
				const currentValue = process.stdout.columns || 80;
				Object.defineProperty(process.stdout, 'columns', {
					get() {
						return currentValue;
					},
					configurable: true
				});
			}

			// Now spy on the getter
			columnsSpy = vi.spyOn(process.stdout, 'columns', 'get');
		});

		afterEach(() => {
			// Restore the spy
			columnsSpy.mockRestore();

			// Restore original descriptor or delete the property
			if (originalDescriptor) {
				Object.defineProperty(process.stdout, 'columns', originalDescriptor);
			} else {
				delete (process.stdout as any).columns;
			}
		});

		it('should calculate width as percentage of terminal width', () => {
			columnsSpy.mockReturnValue(100);
			const width = getBoxWidth(0.9, 40);
			expect(width).toBe(90);
		});

		it('should use default percentage of 0.9 when not specified', () => {
			columnsSpy.mockReturnValue(100);
			const width = getBoxWidth();
			expect(width).toBe(90);
		});

		it('should use default minimum width of 40 when not specified', () => {
			columnsSpy.mockReturnValue(30);
			const width = getBoxWidth();
			expect(width).toBe(40); // Should enforce minimum
		});

		it('should enforce minimum width when terminal is too narrow', () => {
			columnsSpy.mockReturnValue(50);
			const width = getBoxWidth(0.9, 60);
			expect(width).toBe(60); // Should use minWidth instead of 45
		});

		it('should handle undefined process.stdout.columns', () => {
			columnsSpy.mockReturnValue(undefined);
			const width = getBoxWidth(0.9, 40);
			// Should fall back to 80 columns: Math.floor(80 * 0.9) = 72
			expect(width).toBe(72);
		});

		it('should handle custom percentage values', () => {
			columnsSpy.mockReturnValue(100);
			expect(getBoxWidth(0.95, 40)).toBe(95);
			expect(getBoxWidth(0.8, 40)).toBe(80);
			expect(getBoxWidth(0.5, 40)).toBe(50);
		});

		it('should handle custom minimum width values', () => {
			columnsSpy.mockReturnValue(60);
			expect(getBoxWidth(0.9, 70)).toBe(70); // 60 * 0.9 = 54, but min is 70
			expect(getBoxWidth(0.9, 50)).toBe(54); // 60 * 0.9 = 54, min is 50
		});

		it('should floor the calculated width', () => {
			columnsSpy.mockReturnValue(99);
			const width = getBoxWidth(0.9, 40);
			// 99 * 0.9 = 89.1, should floor to 89
			expect(width).toBe(89);
		});

		it('should match warning box width calculation', () => {
			// Test the specific case from displayWarning()
			columnsSpy.mockReturnValue(80);
			const width = getBoxWidth(0.9, 40);
			expect(width).toBe(72);
		});

		it('should match table width calculation', () => {
			// Test the specific case from createTaskTable()
			columnsSpy.mockReturnValue(111);
			const width = getBoxWidth(0.9, 100);
			// 111 * 0.9 = 99.9, floor to 99, but max(99, 100) = 100
			expect(width).toBe(100);
		});

		it('should match recommended task box width calculation', () => {
			// Test the specific case from displayRecommendedNextTask()
			columnsSpy.mockReturnValue(120);
			const width = getBoxWidth(0.97, 40);
			// 120 * 0.97 = 116.4, floor to 116
			expect(width).toBe(116);
		});

		it('should handle edge case of zero terminal width', () => {
			columnsSpy.mockReturnValue(0);
			const width = getBoxWidth(0.9, 40);
			// When columns is 0, it uses fallback of 80: Math.floor(80 * 0.9) = 72
			expect(width).toBe(72);
		});

		it('should handle very large terminal widths', () => {
			columnsSpy.mockReturnValue(1000);
			const width = getBoxWidth(0.9, 40);
			expect(width).toBe(900);
		});

		it('should handle very small percentages', () => {
			columnsSpy.mockReturnValue(100);
			const width = getBoxWidth(0.1, 5);
			// 100 * 0.1 = 10, which is greater than min 5
			expect(width).toBe(10);
		});

		it('should handle percentage of 1.0 (100%)', () => {
			columnsSpy.mockReturnValue(80);
			const width = getBoxWidth(1.0, 40);
			expect(width).toBe(80);
		});

		it('should consistently return same value for same inputs', () => {
			columnsSpy.mockReturnValue(100);
			const width1 = getBoxWidth(0.9, 40);
			const width2 = getBoxWidth(0.9, 40);
			const width3 = getBoxWidth(0.9, 40);
			expect(width1).toBe(width2);
			expect(width2).toBe(width3);
		});
	});
});
@@ -126,6 +126,20 @@ export function getComplexityWithScore(complexity: number | undefined): string {
 	return color(`${complexity}/10 (${label})`);
 }
 
+/**
+ * Calculate box width as percentage of terminal width
+ * @param percentage - Percentage of terminal width to use (default: 0.9)
+ * @param minWidth - Minimum width to enforce (default: 40)
+ * @returns Calculated box width
+ */
+export function getBoxWidth(
+	percentage: number = 0.9,
+	minWidth: number = 40
+): number {
+	const terminalWidth = process.stdout.columns || 80;
+	return Math.max(Math.floor(terminalWidth * percentage), minWidth);
+}
+
 /**
  * Truncate text to specified length
  */
@@ -176,6 +190,8 @@ export function displayBanner(title: string = 'Task Master'): void {
  * Display an error message (matches scripts/modules/ui.js style)
  */
 export function displayError(message: string, details?: string): void {
+	const boxWidth = getBoxWidth();
+
 	console.error(
 		boxen(
 			chalk.red.bold('X Error: ') +
@@ -184,7 +200,8 @@ export function displayError(message: string, details?: string): void {
 			{
 				padding: 1,
 				borderStyle: 'round',
-				borderColor: 'red'
+				borderColor: 'red',
+				width: boxWidth
 			}
 		)
 	);
@@ -194,13 +211,16 @@ export function displayError(message: string, details?: string): void {
  * Display a success message
  */
 export function displaySuccess(message: string): void {
+	const boxWidth = getBoxWidth();
+
 	console.log(
 		boxen(
 			chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
 			{
 				padding: 1,
 				borderStyle: 'round',
-				borderColor: 'green'
+				borderColor: 'green',
+				width: boxWidth
 			}
 		)
 	);
@@ -210,11 +230,14 @@ export function displaySuccess(message: string): void {
  * Display a warning message
  */
 export function displayWarning(message: string): void {
+	const boxWidth = getBoxWidth();
+
 	console.log(
 		boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
 			padding: 1,
 			borderStyle: 'round',
-			borderColor: 'yellow'
+			borderColor: 'yellow',
+			width: boxWidth
 		})
 	);
 }
@@ -223,11 +246,14 @@ export function displayWarning(message: string): void {
  * Display info message
  */
 export function displayInfo(message: string): void {
+	const boxWidth = getBoxWidth();
+
 	console.log(
 		boxen(chalk.blue.bold('i ') + chalk.white(message), {
 			padding: 1,
 			borderStyle: 'round',
-			borderColor: 'blue'
+			borderColor: 'blue',
+			width: boxWidth
 		})
 	);
 }
@@ -282,23 +308,23 @@ export function createTaskTable(
 	} = options || {};
 
 	// Calculate dynamic column widths based on terminal width
-	const terminalWidth = process.stdout.columns * 0.9 || 100;
+	const tableWidth = getBoxWidth(0.9, 100);
 	// Adjust column widths to better match the original layout
 	const baseColWidths = showComplexity
 		? [
-				Math.floor(terminalWidth * 0.1),
-				Math.floor(terminalWidth * 0.4),
-				Math.floor(terminalWidth * 0.15),
-				Math.floor(terminalWidth * 0.1),
-				Math.floor(terminalWidth * 0.2),
-				Math.floor(terminalWidth * 0.1)
+				Math.floor(tableWidth * 0.1),
+				Math.floor(tableWidth * 0.4),
+				Math.floor(tableWidth * 0.15),
+				Math.floor(tableWidth * 0.1),
+				Math.floor(tableWidth * 0.2),
+				Math.floor(tableWidth * 0.1)
 			] // ID, Title, Status, Priority, Dependencies, Complexity
 		: [
-				Math.floor(terminalWidth * 0.08),
-				Math.floor(terminalWidth * 0.4),
-				Math.floor(terminalWidth * 0.18),
-				Math.floor(terminalWidth * 0.12),
-				Math.floor(terminalWidth * 0.2)
+				Math.floor(tableWidth * 0.08),
+				Math.floor(tableWidth * 0.4),
+				Math.floor(tableWidth * 0.18),
+				Math.floor(tableWidth * 0.12),
+				Math.floor(tableWidth * 0.2)
 			]; // ID, Title, Status, Priority, Dependencies
 
 	const headers = [
@@ -1,5 +1,7 @@
 # docs
 
+## 0.0.6
+
 ## 0.0.5
 
 ## 0.0.4
@@ -13,6 +13,126 @@ The MCP interface is built on top of the `fastmcp` library and registers a set o
 
 Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
 
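For illustration, a minimal, hypothetical sketch of the shape such a tool definition takes, assuming a `{ name, description, parameters, execute }` object with `zod` schemas as described above. The names and the stubbed core call are stand-ins, not code from the repository.

```ts
// Illustrative only: real tools live under mcp-server/src/tools/ and delegate
// to core logic in scripts/modules/task-manager.js. All names here are stand-ins.
import { z } from 'zod';

// Stand-in for the core logic function the real tool would call.
async function listTasks(args: { projectRoot: string; status?: string }) {
	return { tasks: [], filteredBy: args.status ?? 'all' };
}

export const getTasksTool = {
	name: 'get_tasks',
	description: 'List all tasks in the current Task Master project',
	parameters: z.object({
		projectRoot: z.string().describe('Absolute path to the project root'),
		status: z.string().optional().describe('Filter tasks by status')
	}),
	execute: async (args: { projectRoot: string; status?: string }) => {
		// The real implementation calls the corresponding task-manager function.
		return listTasks(args);
	}
};
```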
## Configurable Tool Loading

To optimize LLM context usage, you can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This is particularly useful when working with LLMs that have context limits or when you only need a subset of tools.

### Configuration Modes

#### All Tools (Default)
Loads all 36 available tools. Use when you need full Task Master functionality.

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "all",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

If `TASK_MASTER_TOOLS` is not set, all tools are loaded by default.

#### Core Tools (Lean Mode)
Loads only 7 essential tools for daily development. Ideal for minimal context usage.

**Core tools included:**
- `get_tasks` - List all tasks
- `next_task` - Find the next task to work on
- `get_task` - Get detailed task information
- `set_task_status` - Update task status
- `update_subtask` - Add implementation notes
- `parse_prd` - Generate tasks from PRD
- `expand_task` - Break down tasks into subtasks

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "core",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

You can also use `"lean"` as an alias for `"core"`.

#### Standard Tools
Loads 15 commonly used tools. Balances functionality with context efficiency.

**Standard tools include all core tools plus:**
- `initialize_project` - Set up new projects
- `analyze_project_complexity` - Analyze task complexity
- `expand_all` - Expand all eligible tasks
- `add_subtask` - Add subtasks manually
- `remove_task` - Remove tasks
- `generate` - Generate task markdown files
- `add_task` - Create new tasks
- `complexity_report` - View complexity analysis

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "standard",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

#### Custom Tool Selection
Specify exactly which tools to load using a comma-separated list. Tool names are case-insensitive and support both underscores and hyphens.

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "TASK_MASTER_TOOLS": "get_tasks,next_task,set_task_status,update_subtask",
        "ANTHROPIC_API_KEY": "your_key_here"
      }
    }
  }
}
```

### Choosing the Right Configuration

- **Use `core`/`lean`**: When working with basic task management workflows or when context limits are strict
- **Use `standard`**: For most development workflows that include task creation and analysis
- **Use `all`**: When you need full functionality including tag management, dependencies, and advanced features
- **Use custom list**: When you have specific tool requirements or want to experiment with minimal sets

### Verification

When the MCP server starts, it logs which tools were loaded:

```
Task Master MCP Server starting...
Tool mode configuration: standard
Loading standard tools
Registering 15 MCP tools (mode: standard)
Successfully registered 15/15 tools
```

## Tool Categories

The MCP tools can be categorized in the same way as the core functionalities:
@@ -37,6 +37,25 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
 }
 ```

<Tip>
**Optimize Context Usage**: You can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This helps reduce LLM context usage by only loading the tools you need.

Options:
- `all` (default) - All 36 tools
- `standard` - 15 commonly used tools
- `core` or `lean` - 7 essential tools

Example:
```json
"env": {
  "TASK_MASTER_TOOLS": "standard",
  "ANTHROPIC_API_KEY": "your_key_here"
}
```

See the [MCP Tools documentation](/capabilities/mcp#configurable-tool-loading) for details.
</Tip>

### CLI Usage: `.env` File

Create a `.env` file in your project root and include the keys for the providers you plan to use:
@@ -31,23 +31,9 @@ cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21
 
 > **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
 
-### Claude Code Plugin Install (Recommended)
+### Claude Code Quick Install
 
-For Claude Code users, install via the plugin marketplace:
-
-```bash
-/plugin marketplace add eyaltoledano/claude-task-master
-/plugin install taskmaster@taskmaster
-```
-
-This provides:
-- **49 slash commands** with clean naming (`/taskmaster:command-name`)
-- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
-- **Automatic updates** when new features are released
-
-### Claude Code MCP Alternative
-
-You can also use MCP directly:
+For Claude Code users:
 
 ```bash
 claude mcp add taskmaster-ai -- npx -y task-master-ai
@@ -1,6 +1,6 @@
 {
 	"name": "docs",
-	"version": "0.0.5",
+	"version": "0.0.6",
 	"private": true,
 	"description": "Task Master documentation powered by Mintlify",
 	"scripts": {
@@ -3,44 +3,4 @@ title: "What's New"
 sidebarTitle: "What's New"
 ---
 
-## 🎉 New: Claude Code Plugin Support
-
-Task Master AI now supports Claude Code plugins with modern marketplace distribution!
-
-### What's New
-
-- **49 slash commands** with clean naming (`/taskmaster:command-name`)
-- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
-- **MCP server integration** for deep Claude Code integration
-
-### Installation
-
-```bash
-/plugin marketplace add eyaltoledano/claude-task-master
-/plugin install taskmaster@taskmaster
-```
-
-### Migration for Existing Users
-
-The `rules add claude` command no longer copies files to `.claude/` directories. Instead:
-
-- Shows plugin installation instructions
-- Only manages CLAUDE.md imports for agent instructions
-- Directs users to install the official plugin
-
-If you previously used `rules add claude`:
-
-1. Old commands in `.claude/commands/` will continue working but won't receive updates
-2. Install the plugin for latest features: `/plugin install taskmaster@taskmaster`
-3. Remove old `.claude/commands/` and `.claude/agents/` directories
-
-### Why This Change?
-
-Claude Code plugins provide:
-
-- ✅ Automatic updates when we release new features
-- ✅ Better command organization and naming
-- ✅ Seamless integration with Claude Code
-- ✅ No manual file copying or management
-
-The plugin system is the future of Task Master AI integration with Claude Code!
+An easy way to see the latest releases
@@ -1,5 +1,7 @@
 # Change Log
 
+## 0.25.6
+
 ## 0.25.6-rc.0
 
 ### Patch Changes
@@ -3,7 +3,7 @@
 	"private": true,
 	"displayName": "TaskMaster",
 	"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
-	"version": "0.25.6-rc.0",
+	"version": "0.25.6",
 	"publisher": "Hamster",
 	"icon": "assets/icon.png",
 	"engines": {
@@ -239,9 +239,6 @@
 		"watch:css": "npx @tailwindcss/cli -i ./src/webview/index.css -o ./dist/index.css --watch",
 		"check-types": "tsc --noEmit"
 	},
-	"dependencies": {
-		"task-master-ai": "0.29.0-rc.0"
-	},
 	"devDependencies": {
 		"@dnd-kit/core": "^6.3.1",
 		"@dnd-kit/modifiers": "^9.0.0",
@@ -277,7 +274,8 @@
 		"tailwind-merge": "^3.3.1",
 		"tailwindcss": "4.1.11",
 		"typescript": "^5.9.2",
-		"@tm/core": "*"
+		"@tm/core": "*",
+		"task-master-ai": "*"
 	},
 	"overrides": {
 		"glob@<8": "^10.4.5",
@@ -59,6 +59,76 @@ Taskmaster uses two primary methods for configuration:
 - **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
 - **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.

## MCP Tool Loading Configuration

### TASK_MASTER_TOOLS Environment Variable

The `TASK_MASTER_TOOLS` environment variable controls which tools are loaded by the Task Master MCP server. This allows you to optimize token usage based on your workflow needs.

> Note
> Prefer setting `TASK_MASTER_TOOLS` in your MCP client's `env` block (e.g., `.cursor/mcp.json`) or in CI/deployment env. The `.env` file is reserved for API keys/endpoints; avoid persisting non-secret settings there.

#### Configuration Options

- **`all`** (default): Loads all 36 available tools (~21,000 tokens)
  - Best for: Users who need the complete feature set
  - Use when: Working with complex projects requiring all Task Master features
  - Backward compatibility: This is the default to maintain compatibility with existing installations

- **`standard`**: Loads 15 commonly used tools (~10,000 tokens, 50% reduction)
  - Best for: Regular task management workflows
  - Tools included: All core tools plus project initialization, complexity analysis, task generation, and more
  - Use when: You need a balanced set of features with reduced token usage

- **`core`** (or `lean`): Loads 7 essential tools (~5,000 tokens, 70% reduction)
  - Best for: Daily development with minimal token overhead
  - Tools included: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
  - Use when: Working in large contexts where token usage is critical
  - Note: "lean" is an alias for "core" (same tools, token estimate and recommended use). You can refer to it as either "core" or "lean" when configuring.

- **Custom list**: Comma-separated list of specific tool names
  - Best for: Specialized workflows requiring specific tools
  - Example: `"get_tasks,next_task,set_task_status"`
  - Use when: You know exactly which tools you need

#### How to Configure

1. **In MCP configuration files** (`.cursor/mcp.json`, `.vscode/mcp.json`, etc.) - **Recommended**:

```jsonc
{
  "mcpServers": {
    "task-master-ai": {
      "env": {
        "TASK_MASTER_TOOLS": "standard", // Set tool loading mode
        // API keys can still use .env for security
      }
    }
  }
}
```

2. **Via Claude Code CLI**:

```bash
claude mcp add task-master-ai --scope user \
  --env TASK_MASTER_TOOLS="core" \
  -- npx -y task-master-ai@latest
```

3. **In CI/deployment environment variables**:

```bash
export TASK_MASTER_TOOLS="standard"
node mcp-server/server.js
```

#### Tool Loading Behavior

- When `TASK_MASTER_TOOLS` is unset or empty, the system defaults to `"all"`
- Invalid tool names in a user-specified list are ignored (a warning is emitted for each)
- If every tool name in a custom list is invalid, the system falls back to `"all"`
- Tool names are case-insensitive (e.g., `"CORE"`, `"core"`, and `"Core"` are treated identically); a simplified sketch of this behavior follows below
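The following is a simplified, illustrative sketch of the behavior described in the bullets above. The function and constant names are stand-ins and do not come from the task-master-ai codebase.

```ts
// Simplified sketch of the documented TASK_MASTER_TOOLS resolution rules.
const CORE_TOOLS = [
	'get_tasks', 'next_task', 'get_task', 'set_task_status',
	'update_subtask', 'parse_prd', 'expand_task'
];
const STANDARD_TOOLS = [
	...CORE_TOOLS,
	'initialize_project', 'analyze_project_complexity', 'expand_all',
	'add_subtask', 'remove_task', 'generate', 'add_task', 'complexity_report'
];

function resolveToolMode(raw: string | undefined, allTools: string[]): string[] {
	const value = (raw ?? '').trim().toLowerCase();
	if (value === '' || value === 'all') return allTools;        // default: everything
	if (value === 'core' || value === 'lean') return CORE_TOOLS; // "lean" is an alias
	if (value === 'standard') return STANDARD_TOOLS;

	// Custom list: case-insensitive, hyphens treated like underscores,
	// unknown names warned about and ignored, full fallback if nothing matches.
	const requested = value.split(',').map((t) => t.trim().replace(/-/g, '_'));
	const valid = requested.filter((t) => allTools.includes(t));
	requested
		.filter((t) => !allTools.includes(t))
		.forEach((t) => console.warn(`Ignoring unknown tool name: ${t}`));
	return valid.length > 0 ? valid : allTools;
}
```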
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)

- Used **exclusively** for sensitive API keys and specific endpoint URLs.
@@ -1,4 +1,4 @@
-# Available Models as of October 5, 2025
+# Available Models as of October 18, 2025
 
 ## Main Models
 
@@ -6,10 +6,13 @@
 | ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
 | anthropic   | claude-sonnet-4-20250514   | 0.727 | 3  | 15 |
 | anthropic   | claude-opus-4-20250514     | 0.725 | 15 | 75 |
+| anthropic   | claude-sonnet-4-5-20250929 | 0.73  | 3  | 15 |
 | anthropic   | claude-3-7-sonnet-20250219 | 0.623 | 3  | 15 |
 | anthropic   | claude-3-5-sonnet-20241022 | 0.49  | 3  | 15 |
+| anthropic   | claude-haiku-4-5-20251001  | 0.45  | 1  | 5  |
 | claude-code | opus                       | 0.725 | 0  | 0  |
 | claude-code | sonnet                     | 0.727 | 0  | 0  |
+| claude-code | haiku                      | 0.45  | 0  | 0  |
 | codex-cli   | gpt-5                      | 0.749 | 0  | 0  |
 | codex-cli   | gpt-5-codex                | 0.749 | 0  | 0  |
 | mcp         | mcp-sampling               | —     | 0  | 0  |
@@ -140,10 +143,13 @@
 | ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
 | anthropic   | claude-sonnet-4-20250514   | 0.727 | 3  | 15 |
 | anthropic   | claude-opus-4-20250514     | 0.725 | 15 | 75 |
+| anthropic   | claude-sonnet-4-5-20250929 | 0.73  | 3  | 15 |
 | anthropic   | claude-3-7-sonnet-20250219 | 0.623 | 3  | 15 |
 | anthropic   | claude-3-5-sonnet-20241022 | 0.49  | 3  | 15 |
+| anthropic   | claude-haiku-4-5-20251001  | 0.45  | 1  | 5  |
 | claude-code | opus                       | 0.725 | 0  | 0  |
 | claude-code | sonnet                     | 0.727 | 0  | 0  |
+| claude-code | haiku                      | 0.45  | 0  | 0  |
 | codex-cli   | gpt-5                      | 0.749 | 0  | 0  |
 | codex-cli   | gpt-5-codex                | 0.749 | 0  | 0  |
 | mcp         | mcp-sampling               | —     | 0  | 0  |
images/hamster-hiring.png (new binary file, 130 KiB; binary content not shown)
@@ -4,12 +4,14 @@ import dotenv from 'dotenv';
 import { fileURLToPath } from 'url';
 import fs from 'fs';
 import logger from './logger.js';
-import { registerTaskMasterTools } from './tools/index.js';
+import {
+	registerTaskMasterTools,
+	getToolsConfiguration
+} from './tools/index.js';
 import ProviderRegistry from '../../src/provider-registry/index.js';
 import { MCPProvider } from './providers/mcp-provider.js';
 import packageJson from '../../package.json' with { type: 'json' };

-// Load environment variables
 dotenv.config();

 // Constants
@@ -29,12 +31,10 @@ class TaskMasterMCPServer {
 	this.server = new FastMCP(this.options);
 	this.initialized = false;

-	// Bind methods
 	this.init = this.init.bind(this);
 	this.start = this.start.bind(this);
 	this.stop = this.stop.bind(this);

-	// Setup logging
 	this.logger = logger;
 }

@@ -44,8 +44,34 @@ class TaskMasterMCPServer {
 async init() {
 	if (this.initialized) return;

-	// Pass the manager instance to the tool registration function
-	registerTaskMasterTools(this.server, this.asyncManager);
+	const normalizedToolMode = getToolsConfiguration();
+	this.logger.info('Task Master MCP Server starting...');
+	this.logger.info(`Tool mode configuration: ${normalizedToolMode}`);
+
+	const registrationResult = registerTaskMasterTools(
+		this.server,
+		normalizedToolMode
+	);
+
+	this.logger.info(
+		`Normalized tool mode: ${registrationResult.normalizedMode}`
+	);
+	this.logger.info(
+		`Registered ${registrationResult.registeredTools.length} tools successfully`
+	);
+
+	if (registrationResult.registeredTools.length > 0) {
+		this.logger.debug(
+			`Registered tools: ${registrationResult.registeredTools.join(', ')}`
+		);
+	}
+
+	if (registrationResult.failedTools.length > 0) {
+		this.logger.warn(
+			`Failed to register ${registrationResult.failedTools.length} tools: ${registrationResult.failedTools.join(', ')}`
+		);
+	}
+
 	this.initialized = true;

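The rewritten `init()` above reads the tool mode once via `getToolsConfiguration()` and then logs what `registerTaskMasterTools()` reports back. A minimal sketch of exercising that plumbing outside the server — the `serverStub` is a hypothetical stand-in for a FastMCP instance, not part of the change:

```js
import {
	getToolsConfiguration,
	registerTaskMasterTools
} from './tools/index.js';

// Hypothetical stub: records tool names instead of serving them.
const registered = [];
const serverStub = { addTool: (tool) => registered.push(tool.name) };

process.env.TASK_MASTER_TOOLS = 'core';
const mode = getToolsConfiguration(); // 'core'
const result = registerTaskMasterTools(serverStub, mode);

console.log(result.normalizedMode);            // 'core'
console.log(result.registeredTools.length);    // 7 if every core tool registers cleanly
console.log(result.failedTools);               // [] in the happy path
```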
@@ -3,109 +3,238 @@
 * Export all Task Master CLI tools for MCP server
 */

-import { registerListTasksTool } from './get-tasks.js';
 import logger from '../logger.js';
-import { registerSetTaskStatusTool } from './set-task-status.js';
-import { registerParsePRDTool } from './parse-prd.js';
-import { registerUpdateTool } from './update.js';
-import { registerUpdateTaskTool } from './update-task.js';
-import { registerUpdateSubtaskTool } from './update-subtask.js';
-import { registerGenerateTool } from './generate.js';
-import { registerShowTaskTool } from './get-task.js';
-import { registerNextTaskTool } from './next-task.js';
-import { registerExpandTaskTool } from './expand-task.js';
-import { registerAddTaskTool } from './add-task.js';
-import { registerAddSubtaskTool } from './add-subtask.js';
-import { registerRemoveSubtaskTool } from './remove-subtask.js';
-import { registerAnalyzeProjectComplexityTool } from './analyze.js';
-import { registerClearSubtasksTool } from './clear-subtasks.js';
-import { registerExpandAllTool } from './expand-all.js';
-import { registerRemoveDependencyTool } from './remove-dependency.js';
-import { registerValidateDependenciesTool } from './validate-dependencies.js';
-import { registerFixDependenciesTool } from './fix-dependencies.js';
-import { registerComplexityReportTool } from './complexity-report.js';
-import { registerAddDependencyTool } from './add-dependency.js';
-import { registerRemoveTaskTool } from './remove-task.js';
-import { registerInitializeProjectTool } from './initialize-project.js';
-import { registerModelsTool } from './models.js';
-import { registerMoveTaskTool } from './move-task.js';
-import { registerResponseLanguageTool } from './response-language.js';
-import { registerAddTagTool } from './add-tag.js';
-import { registerDeleteTagTool } from './delete-tag.js';
-import { registerListTagsTool } from './list-tags.js';
-import { registerUseTagTool } from './use-tag.js';
-import { registerRenameTagTool } from './rename-tag.js';
-import { registerCopyTagTool } from './copy-tag.js';
-import { registerResearchTool } from './research.js';
-import { registerRulesTool } from './rules.js';
-import { registerScopeUpTool } from './scope-up.js';
-import { registerScopeDownTool } from './scope-down.js';
+import {
+	toolRegistry,
+	coreTools,
+	standardTools,
+	getAvailableTools,
+	getToolRegistration,
+	isValidTool
+} from './tool-registry.js';

 /**
- * Register all Task Master tools with the MCP server
- * @param {Object} server - FastMCP server instance
+ * Helper function to safely read and normalize the TASK_MASTER_TOOLS environment variable
+ * @returns {string} The tools configuration string, defaults to 'all'
 */
-export function registerTaskMasterTools(server) {
+export function getToolsConfiguration() {
+	const rawValue = process.env.TASK_MASTER_TOOLS;
+
+	if (!rawValue || rawValue.trim() === '') {
+		logger.debug('No TASK_MASTER_TOOLS env var found, defaulting to "all"');
+		return 'all';
+	}
+
+	const normalizedValue = rawValue.trim();
+	logger.debug(`TASK_MASTER_TOOLS env var: "${normalizedValue}"`);
+	return normalizedValue;
+}
+
+/**
+ * Register Task Master tools with the MCP server
+ * Supports selective tool loading via TASK_MASTER_TOOLS environment variable
+ * @param {Object} server - FastMCP server instance
+ * @param {string} toolMode - The tool mode configuration (defaults to 'all')
+ * @returns {Object} Object containing registered tools, failed tools, and normalized mode
+ */
+export function registerTaskMasterTools(server, toolMode = 'all') {
+	const registeredTools = [];
+	const failedTools = [];
+
 	try {
-		// Register each tool in a logical workflow order
-
-		// Group 1: Initialization & Setup
-		registerInitializeProjectTool(server);
-		registerModelsTool(server);
-		registerRulesTool(server);
-		registerParsePRDTool(server);
-
-		// Group 2: Task Analysis & Expansion
-		registerAnalyzeProjectComplexityTool(server);
-		registerExpandTaskTool(server);
-		registerExpandAllTool(server);
-		registerScopeUpTool(server);
-		registerScopeDownTool(server);
-
-		// Group 3: Task Listing & Viewing
-		registerListTasksTool(server);
-		registerShowTaskTool(server);
-		registerNextTaskTool(server);
-		registerComplexityReportTool(server);
-
-		// Group 4: Task Status & Management
-		registerSetTaskStatusTool(server);
-		registerGenerateTool(server);
-
-		// Group 5: Task Creation & Modification
-		registerAddTaskTool(server);
-		registerAddSubtaskTool(server);
-		registerUpdateTool(server);
-		registerUpdateTaskTool(server);
-		registerUpdateSubtaskTool(server);
-		registerRemoveTaskTool(server);
-		registerRemoveSubtaskTool(server);
-		registerClearSubtasksTool(server);
-		registerMoveTaskTool(server);
-
-		// Group 6: Dependency Management
-		registerAddDependencyTool(server);
-		registerRemoveDependencyTool(server);
-		registerValidateDependenciesTool(server);
-		registerFixDependenciesTool(server);
-		registerResponseLanguageTool(server);
-
-		// Group 7: Tag Management
-		registerListTagsTool(server);
-		registerAddTagTool(server);
-		registerDeleteTagTool(server);
-		registerUseTagTool(server);
-		registerRenameTagTool(server);
-		registerCopyTagTool(server);
-
-		// Group 8: Research Features
-		registerResearchTool(server);
+		const enabledTools = toolMode.trim();
+		let toolsToRegister = [];
+
+		const lowerCaseConfig = enabledTools.toLowerCase();
+
+		switch (lowerCaseConfig) {
+			case 'all':
+				toolsToRegister = Object.keys(toolRegistry);
+				logger.info('Loading all available tools');
+				break;
+			case 'core':
+			case 'lean':
+				toolsToRegister = coreTools;
+				logger.info('Loading core tools only');
+				break;
+			case 'standard':
+				toolsToRegister = standardTools;
+				logger.info('Loading standard tools');
+				break;
+			default:
+				const requestedTools = enabledTools
+					.split(',')
+					.map((t) => t.trim())
+					.filter((t) => t.length > 0);
+
+				const uniqueTools = new Set();
+				const unknownTools = [];
+
+				const aliasMap = {
+					response_language: 'response-language'
+				};
+
+				for (const toolName of requestedTools) {
+					let resolvedName = null;
+					const lowerToolName = toolName.toLowerCase();
+
+					if (aliasMap[lowerToolName]) {
+						const aliasTarget = aliasMap[lowerToolName];
+						for (const registryKey of Object.keys(toolRegistry)) {
+							if (registryKey.toLowerCase() === aliasTarget.toLowerCase()) {
+								resolvedName = registryKey;
+								break;
+							}
+						}
+					}
+
+					if (!resolvedName) {
+						for (const registryKey of Object.keys(toolRegistry)) {
+							if (registryKey.toLowerCase() === lowerToolName) {
+								resolvedName = registryKey;
+								break;
+							}
+						}
+					}
+
+					if (!resolvedName) {
+						const withHyphens = lowerToolName.replace(/_/g, '-');
+						for (const registryKey of Object.keys(toolRegistry)) {
+							if (registryKey.toLowerCase() === withHyphens) {
+								resolvedName = registryKey;
+								break;
+							}
+						}
+					}
+
+					if (!resolvedName) {
+						const withUnderscores = lowerToolName.replace(/-/g, '_');
+						for (const registryKey of Object.keys(toolRegistry)) {
+							if (registryKey.toLowerCase() === withUnderscores) {
+								resolvedName = registryKey;
+								break;
+							}
+						}
+					}
+
+					if (resolvedName) {
+						uniqueTools.add(resolvedName);
+						logger.debug(`Resolved tool "${toolName}" to "${resolvedName}"`);
+					} else {
+						unknownTools.push(toolName);
+						logger.warn(`Unknown tool specified: "${toolName}"`);
+					}
+				}
+
+				toolsToRegister = Array.from(uniqueTools);
+
+				if (unknownTools.length > 0) {
+					logger.warn(`Unknown tools: ${unknownTools.join(', ')}`);
+				}
+
+				if (toolsToRegister.length === 0) {
+					logger.warn(
+						`No valid tools found in custom list. Loading all tools as fallback.`
+					);
+					toolsToRegister = Object.keys(toolRegistry);
+				} else {
+					logger.info(
+						`Loading ${toolsToRegister.length} custom tools from list (${uniqueTools.size} unique after normalization)`
+					);
+				}
+				break;
+		}
+
+		logger.info(
+			`Registering ${toolsToRegister.length} MCP tools (mode: ${enabledTools})`
+		);
+
+		toolsToRegister.forEach((toolName) => {
+			try {
+				const registerFunction = getToolRegistration(toolName);
+				if (registerFunction) {
+					registerFunction(server);
+					logger.debug(`Registered tool: ${toolName}`);
+					registeredTools.push(toolName);
+				} else {
+					logger.warn(`Tool ${toolName} not found in registry`);
+					failedTools.push(toolName);
+				}
+			} catch (error) {
+				if (error.message && error.message.includes('already registered')) {
+					logger.debug(`Tool ${toolName} already registered, skipping`);
+					registeredTools.push(toolName);
+				} else {
+					logger.error(`Failed to register tool ${toolName}: ${error.message}`);
+					failedTools.push(toolName);
+				}
+			}
+		});
+
+		logger.info(
+			`Successfully registered ${registeredTools.length}/${toolsToRegister.length} tools`
+		);
+		if (failedTools.length > 0) {
+			logger.warn(`Failed tools: ${failedTools.join(', ')}`);
+		}
+
+		return {
+			registeredTools,
+			failedTools,
+			normalizedMode: lowerCaseConfig
+		};
 	} catch (error) {
-		logger.error(`Error registering Task Master tools: ${error.message}`);
-		throw error;
+		logger.error(
+			`Error parsing TASK_MASTER_TOOLS environment variable: ${error.message}`
+		);
+		logger.info('Falling back to loading all tools');
+
+		const fallbackTools = Object.keys(toolRegistry);
+		for (const toolName of fallbackTools) {
+			const registerFunction = getToolRegistration(toolName);
+			if (registerFunction) {
+				try {
+					registerFunction(server);
+					registeredTools.push(toolName);
+				} catch (err) {
+					if (err.message && err.message.includes('already registered')) {
+						logger.debug(
+							`Fallback tool ${toolName} already registered, skipping`
+						);
+						registeredTools.push(toolName);
+					} else {
+						logger.warn(
+							`Failed to register fallback tool '${toolName}': ${err.message}`
+						);
+						failedTools.push(toolName);
+					}
+				}
+			} else {
+				logger.warn(`Tool '${toolName}' not found in registry`);
+				failedTools.push(toolName);
+			}
+		}
+		logger.info(
+			`Successfully registered ${registeredTools.length} fallback tools`
+		);
+
+		return {
+			registeredTools,
+			failedTools,
+			normalizedMode: 'all'
+		};
 	}
 }
+
+export {
+	toolRegistry,
+	coreTools,
+	standardTools,
+	getAvailableTools,
+	getToolRegistration,
+	isValidTool
+};

 export default {
 	registerTaskMasterTools
 };
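The custom-list branch above resolves requested names case-insensitively, maps the `response_language` alias, and retries with hyphens and underscores swapped. A condensed, illustrative re-statement of that lookup order (the real logic lives in the switch's `default` branch; this helper is only a sketch):

```js
import { toolRegistry } from './tool-registry.js';

// Sketch of the resolution order used in the default branch above.
function resolveToolName(requested) {
	const aliasMap = { response_language: 'response-language' };
	const lower = requested.trim().toLowerCase();
	const candidates = [
		aliasMap[lower],           // 1. explicit alias
		lower,                     // 2. case-insensitive exact match
		lower.replace(/_/g, '-'),  // 3. underscores -> hyphens
		lower.replace(/-/g, '_')   // 4. hyphens -> underscores
	].filter(Boolean);

	for (const candidate of candidates) {
		const match = Object.keys(toolRegistry).find(
			(key) => key.toLowerCase() === candidate.toLowerCase()
		);
		if (match) return match;
	}
	return null;
}

resolveToolName('GET-TASKS');         // 'get_tasks'
resolveToolName('response_language'); // 'response-language'
resolveToolName('nope');              // null -> reported as an unknown tool
```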
168  mcp-server/src/tools/tool-registry.js  Normal file
@@ -0,0 +1,168 @@
/**
 * tool-registry.js
 * Tool Registry Object Structure - Maps all 36 tool names to registration functions
 */

import { registerListTasksTool } from './get-tasks.js';
import { registerSetTaskStatusTool } from './set-task-status.js';
import { registerParsePRDTool } from './parse-prd.js';
import { registerUpdateTool } from './update.js';
import { registerUpdateTaskTool } from './update-task.js';
import { registerUpdateSubtaskTool } from './update-subtask.js';
import { registerGenerateTool } from './generate.js';
import { registerShowTaskTool } from './get-task.js';
import { registerNextTaskTool } from './next-task.js';
import { registerExpandTaskTool } from './expand-task.js';
import { registerAddTaskTool } from './add-task.js';
import { registerAddSubtaskTool } from './add-subtask.js';
import { registerRemoveSubtaskTool } from './remove-subtask.js';
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
import { registerClearSubtasksTool } from './clear-subtasks.js';
import { registerExpandAllTool } from './expand-all.js';
import { registerRemoveDependencyTool } from './remove-dependency.js';
import { registerValidateDependenciesTool } from './validate-dependencies.js';
import { registerFixDependenciesTool } from './fix-dependencies.js';
import { registerComplexityReportTool } from './complexity-report.js';
import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
import { registerUseTagTool } from './use-tag.js';
import { registerRenameTagTool } from './rename-tag.js';
import { registerCopyTagTool } from './copy-tag.js';
import { registerResearchTool } from './research.js';
import { registerRulesTool } from './rules.js';
import { registerScopeUpTool } from './scope-up.js';
import { registerScopeDownTool } from './scope-down.js';

/**
 * Comprehensive tool registry mapping all 36 tool names to their registration functions
 * Used for dynamic tool registration and validation
 */
export const toolRegistry = {
	initialize_project: registerInitializeProjectTool,
	models: registerModelsTool,
	rules: registerRulesTool,
	parse_prd: registerParsePRDTool,
	'response-language': registerResponseLanguageTool,
	analyze_project_complexity: registerAnalyzeProjectComplexityTool,
	expand_task: registerExpandTaskTool,
	expand_all: registerExpandAllTool,
	scope_up_task: registerScopeUpTool,
	scope_down_task: registerScopeDownTool,
	get_tasks: registerListTasksTool,
	get_task: registerShowTaskTool,
	next_task: registerNextTaskTool,
	complexity_report: registerComplexityReportTool,
	set_task_status: registerSetTaskStatusTool,
	generate: registerGenerateTool,
	add_task: registerAddTaskTool,
	add_subtask: registerAddSubtaskTool,
	update: registerUpdateTool,
	update_task: registerUpdateTaskTool,
	update_subtask: registerUpdateSubtaskTool,
	remove_task: registerRemoveTaskTool,
	remove_subtask: registerRemoveSubtaskTool,
	clear_subtasks: registerClearSubtasksTool,
	move_task: registerMoveTaskTool,
	add_dependency: registerAddDependencyTool,
	remove_dependency: registerRemoveDependencyTool,
	validate_dependencies: registerValidateDependenciesTool,
	fix_dependencies: registerFixDependenciesTool,
	list_tags: registerListTagsTool,
	add_tag: registerAddTagTool,
	delete_tag: registerDeleteTagTool,
	use_tag: registerUseTagTool,
	rename_tag: registerRenameTagTool,
	copy_tag: registerCopyTagTool,
	research: registerResearchTool
};

/**
 * Core tools array containing the 7 essential tools for daily development
 * These represent the minimal set needed for basic task management operations
 */
export const coreTools = [
	'get_tasks',
	'next_task',
	'get_task',
	'set_task_status',
	'update_subtask',
	'parse_prd',
	'expand_task'
];

/**
 * Standard tools array containing the 15 most commonly used tools
 * Includes all core tools plus frequently used additional tools
 */
export const standardTools = [
	...coreTools,
	'initialize_project',
	'analyze_project_complexity',
	'expand_all',
	'add_subtask',
	'remove_task',
	'generate',
	'add_task',
	'complexity_report'
];

/**
 * Get all available tool names
 * @returns {string[]} Array of tool names
 */
export function getAvailableTools() {
	return Object.keys(toolRegistry);
}

/**
 * Get tool counts for all categories
 * @returns {Object} Object with core, standard, and total counts
 */
export function getToolCounts() {
	return {
		core: coreTools.length,
		standard: standardTools.length,
		total: Object.keys(toolRegistry).length
	};
}

/**
 * Get tool arrays organized by category
 * @returns {Object} Object with arrays for each category
 */
export function getToolCategories() {
	const allTools = Object.keys(toolRegistry);
	return {
		core: [...coreTools],
		standard: [...standardTools],
		all: [...allTools],
		extended: allTools.filter((t) => !standardTools.includes(t))
	};
}

/**
 * Get registration function for a specific tool
 * @param {string} toolName - Name of the tool
 * @returns {Function|null} Registration function or null if not found
 */
export function getToolRegistration(toolName) {
	return toolRegistry[toolName] || null;
}

/**
 * Validate if a tool exists in the registry
 * @param {string} toolName - Name of the tool
 * @returns {boolean} True if tool exists
 */
export function isValidTool(toolName) {
	return toolName in toolRegistry;
}

export default toolRegistry;
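The registry's helpers make the three presets easy to inspect. A small usage sketch (the counts follow directly from the arrays above):

```js
import {
	getToolCounts,
	getToolCategories,
	isValidTool,
	getToolRegistration
} from './tool-registry.js';

console.log(getToolCounts()); // { core: 7, standard: 15, total: 36 }

const { extended } = getToolCategories();
console.log(extended.length); // 21 tools outside the standard preset

console.log(isValidTool('expand_task'));    // true
console.log(isValidTool('does_not_exist')); // false

const register = getToolRegistration('next_task');
if (register) {
	// register(server) would attach the tool to a FastMCP server instance
}
```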
136  output.txt
File diff suppressed because one or more lines are too long

229  package-lock.json  generated
@@ -1,12 +1,12 @@
 {
 	"name": "task-master-ai",
-	"version": "npm:task-master-ai@0.29.0-rc.0",
+	"version": "0.29.0",
 	"lockfileVersion": 3,
 	"requires": true,
 	"packages": {
 		"": {
 			"name": "task-master-ai",
-			"version": "0.29.0-rc.0",
+			"version": "0.29.0",
 			"license": "MIT WITH Commons-Clause",
 			"workspaces": [
 				"apps/*",
@@ -104,6 +104,7 @@
 "name": "@tm/cli",
 "license": "MIT",
 "dependencies": {
+	"@inquirer/search": "^3.2.0",
 	"@tm/core": "*",
 	"boxen": "^8.0.1",
 	"chalk": "5.6.2",
@@ -124,14 +125,144 @@
 	"node": ">=18.0.0"
 }
 },
+"apps/cli/node_modules/@inquirer/ansi": {
+	"version": "1.0.1",
+	"resolved": "https://registry.npmjs.org/@inquirer/ansi/-/ansi-1.0.1.tgz",
+	"integrity": "sha512-yqq0aJW/5XPhi5xOAL1xRCpe1eh8UFVgYFpFsjEqmIR8rKLyP+HINvFXwUaxYICflJrVlxnp7lLN6As735kVpw==",
+	"license": "MIT",
+	"engines": { "node": ">=18" }
+},
+"apps/cli/node_modules/@inquirer/figures": {
+	"version": "1.0.14",
+	"resolved": "https://registry.npmjs.org/@inquirer/figures/-/figures-1.0.14.tgz",
+	"integrity": "sha512-DbFgdt+9/OZYFM+19dbpXOSeAstPy884FPy1KjDu4anWwymZeOYhMY1mdFri172htv6mvc/uvIAAi7b7tvjJBQ==",
+	"license": "MIT",
+	"engines": { "node": ">=18" }
+},
+"apps/cli/node_modules/@inquirer/search": {
+	"version": "3.2.0",
+	"resolved": "https://registry.npmjs.org/@inquirer/search/-/search-3.2.0.tgz",
+	"integrity": "sha512-a5SzB/qrXafDX1Z4AZW3CsVoiNxcIYCzYP7r9RzrfMpaLpB+yWi5U8BWagZyLmwR0pKbbL5umnGRd0RzGVI8bQ==",
+	"license": "MIT",
+	"dependencies": {
+		"@inquirer/core": "^10.3.0",
+		"@inquirer/figures": "^1.0.14",
+		"@inquirer/type": "^3.0.9",
+		"yoctocolors-cjs": "^2.1.2"
+	},
+	"engines": { "node": ">=18" },
+	"peerDependencies": { "@types/node": ">=18" },
+	"peerDependenciesMeta": { "@types/node": { "optional": true } }
+},
+"apps/cli/node_modules/@inquirer/search/node_modules/@inquirer/core": {
+	"version": "10.3.0",
+	"resolved": "https://registry.npmjs.org/@inquirer/core/-/core-10.3.0.tgz",
+	"integrity": "sha512-Uv2aPPPSK5jeCplQmQ9xadnFx2Zhj9b5Dj7bU6ZeCdDNNY11nhYy4btcSdtDguHqCT2h5oNeQTcUNSGGLA7NTA==",
+	"license": "MIT",
+	"dependencies": {
+		"@inquirer/ansi": "^1.0.1",
+		"@inquirer/figures": "^1.0.14",
+		"@inquirer/type": "^3.0.9",
+		"cli-width": "^4.1.0",
+		"mute-stream": "^2.0.0",
+		"signal-exit": "^4.1.0",
+		"wrap-ansi": "^6.2.0",
+		"yoctocolors-cjs": "^2.1.2"
+	},
+	"engines": { "node": ">=18" },
+	"peerDependencies": { "@types/node": ">=18" },
+	"peerDependenciesMeta": { "@types/node": { "optional": true } }
+},
+"apps/cli/node_modules/@inquirer/search/node_modules/@inquirer/type": {
+	"version": "3.0.9",
+	"resolved": "https://registry.npmjs.org/@inquirer/type/-/type-3.0.9.tgz",
+	"integrity": "sha512-QPaNt/nmE2bLGQa9b7wwyRJoLZ7pN6rcyXvzU0YCmivmJyq1BVo94G98tStRWkoD1RgDX5C+dPlhhHzNdu/W/w==",
+	"license": "MIT",
+	"engines": { "node": ">=18" },
+	"peerDependencies": { "@types/node": ">=18" },
+	"peerDependenciesMeta": { "@types/node": { "optional": true } }
+},
 "apps/docs": {
-	"version": "0.0.5",
+	"version": "0.0.6",
 	"devDependencies": {
 		"mintlify": "^4.2.111"
 	}
 },
+"apps/extension": {
+	"version": "0.25.6",
+	"devDependencies": {
+		"@dnd-kit/core": "^6.3.1",
+		"@dnd-kit/modifiers": "^9.0.0",
+		"@modelcontextprotocol/sdk": "1.13.3",
+		"@radix-ui/react-collapsible": "^1.1.11",
+		"@radix-ui/react-dropdown-menu": "^2.1.15",
+		"@radix-ui/react-label": "^2.1.7",
+		"@radix-ui/react-portal": "^1.1.9",
+		"@radix-ui/react-scroll-area": "^1.2.9",
+		"@radix-ui/react-separator": "^1.1.7",
+		"@radix-ui/react-slot": "^1.2.3",
+		"@tailwindcss/postcss": "^4.1.11",
+		"@tanstack/react-query": "^5.83.0",
+		"@tm/core": "*",
+		"@types/mocha": "^10.0.10",
+		"@types/node": "^22.10.5",
+		"@types/react": "19.1.8",
+		"@types/react-dom": "19.1.6",
+		"@types/vscode": "^1.101.0",
+		"@vscode/test-cli": "^0.0.11",
+		"@vscode/test-electron": "^2.5.2",
+		"@vscode/vsce": "^2.32.0",
+		"autoprefixer": "10.4.21",
+		"class-variance-authority": "^0.7.1",
+		"clsx": "^2.1.1",
+		"esbuild": "^0.25.3",
+		"esbuild-postcss": "^0.0.4",
+		"fs-extra": "^11.3.0",
+		"lucide-react": "^0.525.0",
+		"npm-run-all": "^4.1.5",
+		"postcss": "8.5.6",
+		"react": "^19.0.0",
+		"react-dom": "^19.0.0",
+		"tailwind-merge": "^3.3.1",
+		"tailwindcss": "4.1.11",
+		"task-master-ai": "*",
+		"typescript": "^5.9.2"
+	},
+	"engines": {
+		"vscode": "^1.93.0"
+	}
+},
 "apps/extension/node_modules/@ai-sdk/amazon-bedrock": {
 	"version": "2.2.12",
+	"dev": true,
 	"license": "Apache-2.0",
@@ -149,6 +280,7 @@
 "apps/extension/node_modules/@ai-sdk/anthropic": {
 	"version": "1.2.12",
+	"dev": true,
 	"license": "Apache-2.0",
@@ -163,6 +295,7 @@
 "apps/extension/node_modules/@ai-sdk/azure": {
 	"version": "1.3.25",
+	"dev": true,
@@ -178,6 +311,7 @@
 "apps/extension/node_modules/@ai-sdk/google": {
 	"version": "1.2.22",
+	"dev": true,
@@ -192,6 +326,7 @@
 "apps/extension/node_modules/@ai-sdk/google-vertex": {
 	"version": "2.2.27",
+	"dev": true,
@@ -209,6 +344,7 @@
 "apps/extension/node_modules/@ai-sdk/groq": {
 	"version": "1.2.9",
+	"dev": true,
@@ -223,6 +359,7 @@
 "apps/extension/node_modules/@ai-sdk/mistral": {
 	"version": "1.2.8",
+	"dev": true,
@@ -237,6 +374,7 @@
 "apps/extension/node_modules/@ai-sdk/openai": {
 	"version": "1.3.24",
+	"dev": true,
@@ -251,6 +389,7 @@
 "apps/extension/node_modules/@ai-sdk/openai-compatible": {
 	"version": "0.2.16",
+	"dev": true,
@@ -265,6 +404,7 @@
 "apps/extension/node_modules/@ai-sdk/perplexity": {
 	"version": "1.1.9",
+	"dev": true,
@@ -279,6 +419,7 @@
 "apps/extension/node_modules/@ai-sdk/provider": {
 	"version": "1.1.3",
+	"dev": true,
@@ -289,6 +430,7 @@
 "apps/extension/node_modules/@ai-sdk/provider-utils": {
 	"version": "2.2.8",
+	"dev": true,
@@ -304,6 +446,7 @@
 "apps/extension/node_modules/@ai-sdk/react": {
 	"version": "1.2.12",
+	"dev": true,
@@ -326,6 +469,7 @@
 "apps/extension/node_modules/@ai-sdk/ui-utils": {
 	"version": "1.2.11",
+	"dev": true,
@@ -341,6 +485,7 @@
 "apps/extension/node_modules/@ai-sdk/xai": {
 	"version": "1.2.18",
+	"dev": true,
@@ -356,6 +501,7 @@
 "apps/extension/node_modules/@openrouter/ai-sdk-provider": {
 	"version": "0.4.6",
+	"dev": true,
@@ -370,6 +516,7 @@
 "apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider": {
 	"version": "1.0.9",
+	"dev": true,
@@ -380,6 +527,7 @@
 "apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider-utils": {
 	"version": "2.1.10",
+	"dev": true,
@@ -401,6 +549,7 @@
 "apps/extension/node_modules/ai": {
 	"version": "4.3.19",
+	"dev": true,
@@ -425,6 +574,7 @@
 "apps/extension/node_modules/ai-sdk-provider-gemini-cli": {
 	"version": "0.1.3",
+	"dev": true,
 	"license": "MIT",
 	"optional": true,
@@ -458,6 +608,7 @@
 "apps/extension/node_modules/ollama-ai-provider": {
 	"version": "1.2.0",
+	"dev": true,
@@ -478,6 +629,7 @@
 "apps/extension/node_modules/openai": {
 	"version": "4.104.0",
+	"dev": true,
@@ -506,6 +658,7 @@
 "apps/extension/node_modules/openai/node_modules/@types/node": {
 	"version": "18.19.127",
+	"dev": true,
@@ -513,10 +666,12 @@
 "apps/extension/node_modules/openai/node_modules/undici-types": {
 	"version": "5.26.5",
+	"dev": true,
 	"license": "MIT"
 },
 "apps/extension/node_modules/task-master-ai": {
 	"version": "0.27.1",
+	"dev": true,
 	"license": "MIT WITH Commons-Clause",
 	"workspaces": [
 		"apps/*",
@@ -588,6 +743,7 @@
 "apps/extension/node_modules/zod": {
 	"version": "3.25.76",
+	"dev": true,
@@ -595,6 +751,7 @@
 "apps/extension/node_modules/zod-to-json-schema": {
 	"version": "3.24.6",
+	"dev": true,
@@ -883,6 +1040,7 @@
 "node_modules/@anthropic-ai/sdk": {
 	"version": "0.39.0",
+	"dev": true,
@@ -896,6 +1054,7 @@
 "node_modules/@anthropic-ai/sdk/node_modules/@types/node": {
 	"version": "18.19.127",
+	"dev": true,
@@ -903,6 +1062,7 @@
 "node_modules/@anthropic-ai/sdk/node_modules/undici-types": {
 	"version": "5.26.5",
+	"dev": true,
@@ -8362,6 +8522,7 @@
 "node_modules/@types/diff-match-patch": {
 	"version": "1.0.36",
+	"dev": true,
@@ -8532,6 +8693,7 @@
 "node_modules/@types/node-fetch": {
 	"version": "2.6.13",
+	"dev": true,
@@ -8977,6 +9139,7 @@
 "node_modules/abort-controller": {
 	"version": "3.0.0",
+	"dev": true,
@@ -9038,6 +9201,7 @@
 "node_modules/agentkeepalive": {
 	"version": "4.6.0",
+	"dev": true,
@@ -9620,6 +9784,7 @@
 "node_modules/asynckit": {
 	"version": "0.4.0",
+	"dev": true,
@@ -11397,6 +11562,7 @@
 "node_modules/combined-stream": {
 	"version": "1.0.8",
+	"dev": true,
@@ -12035,6 +12201,7 @@
 "node_modules/delayed-stream": {
 	"version": "1.0.0",
+	"dev": true,
@@ -12057,6 +12224,7 @@
 "node_modules/dequal": {
 	"version": "2.0.3",
+	"dev": true,
@@ -12176,6 +12344,7 @@
 "node_modules/diff-match-patch": {
 	"version": "1.0.5",
+	"dev": true,
@@ -12675,6 +12844,7 @@
 "node_modules/es-set-tostringtag": {
 	"version": "2.1.0",
+	"dev": true,
@@ -12963,6 +13133,7 @@
 "node_modules/event-target-shim": {
 	"version": "5.0.1",
+	"dev": true,
@@ -14001,6 +14172,7 @@
 "node_modules/form-data": {
 	"version": "4.0.4",
+	"dev": true,
@@ -14015,6 +14187,7 @@
 "node_modules/form-data-encoder": {
 	"version": "1.7.2",
+	"dev": true,
@@ -14026,6 +14199,7 @@
 "node_modules/formdata-node": {
 	"version": "4.4.1",
+	"dev": true,
@@ -14686,6 +14860,7 @@
 "node_modules/has-tostringtag": {
 	"version": "1.0.2",
+	"dev": true,
@@ -15260,6 +15435,7 @@
 "node_modules/humanize-ms": {
 	"version": "1.2.1",
+	"dev": true,
@@ -18046,6 +18222,7 @@
 "node_modules/jsondiffpatch": {
 	"version": "0.6.0",
+	"dev": true,
@@ -18309,6 +18486,7 @@
 "os": [
 	"darwin"
 ],
+"peer": true,
 "engines": {
 	"node": ">= 12.0.0"
 },
@@ -20226,6 +20404,7 @@
 "node_modules/nanoid": {
 	"version": "3.3.11",
+	"devOptional": true,
 	"funding": [
@@ -20334,6 +20513,7 @@
 "node_modules/node-domexception": {
 	"version": "1.0.0",
+	"dev": true,
 	"funding": [
@@ -21198,6 +21378,7 @@
 "node_modules/partial-json": {
 	"version": "0.1.7",
+	"dev": true,
@@ -21970,6 +22151,7 @@
 "node_modules/react": {
 	"version": "19.1.1",
+	"dev": true,
@@ -23096,6 +23278,7 @@
 "node_modules/secure-json-parse": {
 	"version": "2.7.0",
+	"dev": true,
@@ -24143,6 +24326,26 @@
 	"url": "https://github.com/sponsors/sindresorhus"
 }
 },
+"node_modules/strip-literal": {
+	"version": "3.1.0",
+	"resolved": "https://registry.npmjs.org/strip-literal/-/strip-literal-3.1.0.tgz",
+	"integrity": "sha512-8r3mkIM/2+PpjHoOtiAW8Rg3jJLHaV7xPwG+YRGrv6FP0wwk/toTpATxWYOW0BKdWwl82VT2tFYi5DlROa0Mxg==",
+	"dev": true,
+	"license": "MIT",
+	"dependencies": {
+		"js-tokens": "^9.0.1"
+	},
+	"funding": {
+		"url": "https://github.com/sponsors/antfu"
+	}
+},
+"node_modules/strip-literal/node_modules/js-tokens": {
+	"version": "9.0.1",
+	"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz",
+	"integrity": "sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==",
+	"dev": true,
+	"license": "MIT"
+},
 "node_modules/strnum": {
 	"version": "2.1.1",
 	"funding": [
@@ -24320,6 +24523,7 @@
 "node_modules/swr": {
 	"version": "2.3.6",
+	"dev": true,
@@ -24512,6 +24716,7 @@
 "node_modules/throttleit": {
 	"version": "2.1.0",
+	"dev": true,
@@ -25614,6 +25819,7 @@
 "node_modules/use-sync-external-store": {
 	"version": "1.5.0",
+	"dev": true,
@@ -25866,6 +26072,7 @@
 "os": [
 	"darwin"
 ],
+"peer": true,
 "engines": {
 	"node": ">=12"
 }
@@ -26021,6 +26228,7 @@
 "node_modules/web-streams-polyfill": {
 	"version": "4.0.0-beta.3",
+	"dev": true,
@@ -27014,19 +27222,9 @@
 "packages/claude-code-plugin": {
 	"name": "@tm/claude-code-plugin",
-	"version": "0.0.1",
+	"version": "0.0.2",
 	"license": "MIT WITH Commons-Clause"
 },
-"packages/claude-code-plugin/node_modules/@types/node": {
-	"version": "20.19.20",
-	"resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.20.tgz",
-	"integrity": "sha512-2Q7WS25j4pS1cS8yw3d6buNCVJukOTeQ39bAnwR6sOJbaxvyCGebzTMypDFN82CxBLnl+lSWVdCCWbRY6y9yZQ==",
-	"extraneous": true,
-	"license": "MIT",
-	"dependencies": {
-		"undici-types": "~6.21.0"
-	}
-},
 "packages/tm-core": {
 	"name": "@tm/core",
 	"license": "MIT",
@@ -27037,6 +27235,7 @@
 "devDependencies": {
 	"@types/node": "^22.10.5",
 	"@vitest/coverage-v8": "^3.2.4",
+	"strip-literal": "3.1.0",
 	"typescript": "^5.9.2",
 	"vitest": "^3.2.4"
 }
@@ -27346,6 +27545,8 @@
 "packages/tm-core/node_modules/vitest": {
 	"version": "3.2.4",
+	"resolved": "https://registry.npmjs.org/vitest/-/vitest-3.2.4.tgz",
+	"integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==",
 	"dev": true,
 	"license": "MIT",
 	"dependencies": {
@@ -1,6 +1,6 @@
 {
 	"name": "task-master-ai",
-	"version": "0.29.0-rc.0",
+	"version": "0.29.0",
 	"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
 	"main": "index.js",
 	"type": "module",
@@ -1,3 +1,5 @@
 # @tm/ai-sdk-provider-grok-cli

 ## null
+
+## null
@@ -4,4 +4,6 @@

 ## null

+## null
+
 ## 1.0.1
3  packages/claude-code-plugin/CHANGELOG.md  Normal file
@@ -0,0 +1,3 @@
# @tm/claude-code-plugin

## 0.0.2
@@ -1,6 +1,6 @@
 {
 	"name": "@tm/claude-code-plugin",
-	"version": "0.0.1",
+	"version": "0.0.2",
 	"description": "Task Master AI plugin for Claude Code - AI-powered task management with commands, agents, and MCP integration",
 	"type": "module",
 	"private": true,
@@ -4,6 +4,8 @@

 ## null

+## null
+
 ## 0.26.1

 All notable changes to the @task-master/tm-core package will be documented in this file.
@@ -37,7 +37,8 @@
 	"@types/node": "^22.10.5",
 	"@vitest/coverage-v8": "^3.2.4",
 	"typescript": "^5.9.2",
-	"vitest": "^3.2.4"
+	"vitest": "^3.2.4",
+	"strip-literal": "3.1.0"
 },
 "files": ["src", "README.md", "CHANGELOG.md"],
 "keywords": ["task-management", "typescript", "ai", "prd", "parser"],
@@ -21,16 +21,21 @@ const CredentialStoreSpy = vi.fn();
 vi.mock('./credential-store.js', () => {
 	return {
 		CredentialStore: class {
+			static getInstance(config?: any) {
+				return new (this as any)(config);
+			}
+			static resetInstance() {
+				// Mock reset instance method
+			}
 			constructor(config: any) {
 				CredentialStoreSpy(config);
-				this.getCredentials = vi.fn(() => null);
 			}
-			getCredentials() {
+			getCredentials(_options?: any) {
 				return null;
 			}
 			saveCredentials() {}
 			clearCredentials() {}
-			hasValidCredentials() {
+			hasCredentials() {
 				return false;
 			}
 		}
@@ -85,7 +90,7 @@ describe('AuthManager Singleton', () => {
 		expect(instance1).toBe(instance2);
 	});

-	it('should use config on first call', () => {
+	it('should use config on first call', async () => {
 		const config = {
 			baseUrl: 'https://test.auth.com',
 			configDir: '/test/config',
@@ -101,7 +106,7 @@ describe('AuthManager Singleton', () => {

 		// Verify the config is passed to internal components through observable behavior
 		// getCredentials would look in the configured file path
-		const credentials = instance.getCredentials();
+		const credentials = await instance.getCredentials();
 		expect(credentials).toBeNull(); // File doesn't exist, but config was propagated correctly
 	});

@@ -36,7 +36,10 @@ export class AuthManager {
 this.oauthService = new OAuthService(this.credentialStore, config);
 
 // Initialize Supabase client with session restoration
-this.initializeSupabaseSession();
+// Fire-and-forget with catch handler to prevent unhandled rejections
+this.initializeSupabaseSession().catch(() => {
+// Errors are already logged in initializeSupabaseSession
+});
 }
 
 /**
@@ -78,6 +81,8 @@ export class AuthManager {
 
 /**
 * Get stored authentication credentials
+* Returns credentials as-is (even if expired). Refresh must be triggered explicitly
+* via refreshToken() or will occur automatically when using the Supabase client for API calls.
 */
 getCredentials(): AuthCredentials | null {
 return this.credentialStore.getCredentials();
@@ -162,10 +167,11 @@ export class AuthManager {
 }
 
 /**
-* Check if authenticated
+* Check if authenticated (credentials exist, regardless of expiration)
+* @returns true if credentials are stored, including expired credentials
 */
 isAuthenticated(): boolean {
-return this.credentialStore.hasValidCredentials();
+return this.credentialStore.hasCredentials();
 }
 
 /**
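With this change, `isAuthenticated()` only answers "are credentials stored?", so callers that need a live token have to handle expiry themselves. A minimal caller-side sketch, assuming the AuthManager API shown in this diff (the explicit `refreshToken()` call follows the doc comment above and is otherwise an assumption):

```ts
// Sketch only: caller-side handling after this change; not part of the diff.
import { AuthManager } from './auth-manager.js';

async function getUsableToken(): Promise<string | null> {
	const auth = AuthManager.getInstance();
	if (!auth.isAuthenticated()) return null; // no credentials stored at all

	// Credentials are returned as-is, even when expired.
	const creds = await auth.getCredentials();
	if (!creds) return null;

	const expired =
		typeof creds.expiresAt === 'number' && creds.expiresAt <= Date.now();
	if (expired) {
		// Explicit refresh, per the doc comment in the hunk above (assumed signature).
		await auth.refreshToken();
		return (await auth.getCredentials())?.token ?? null;
	}
	return creds.token;
}
```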
@@ -179,7 +185,7 @@ export class AuthManager {
 /**
 * Update the user context (org/brief selection)
 */
-async updateContext(context: Partial<UserContext>): Promise<void> {
+updateContext(context: Partial<UserContext>): void {
 const credentials = this.getCredentials();
 if (!credentials) {
 throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
@@ -205,7 +211,7 @@ export class AuthManager {
 /**
 * Clear the user context
 */
-async clearContext(): Promise<void> {
+clearContext(): void {
 const credentials = this.getCredentials();
 if (!credentials) {
 throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
@@ -7,11 +7,13 @@ import path from 'path';
 import { AuthConfig } from './types.js';
 
 // Single base domain for all URLs
-// Build-time: process.env.TM_PUBLIC_BASE_DOMAIN gets replaced by tsup's env option
+// Runtime vars (TM_*) take precedence over build-time vars (TM_PUBLIC_*)
+// Build-time: process.env.TM_PUBLIC_BASE_DOMAIN gets replaced by tsdown's env option
+// Runtime: process.env.TM_BASE_DOMAIN can override for staging/development
 // Default: https://tryhamster.com for production
 const BASE_DOMAIN =
-process.env.TM_PUBLIC_BASE_DOMAIN || // This gets replaced at build time by tsup
-'https://tryhamster.com';
+process.env.TM_BASE_DOMAIN || // Runtime override (for staging/tux)
+process.env.TM_PUBLIC_BASE_DOMAIN; // Build-time (baked into compiled code)
 
 /**
 * Default authentication configuration
@@ -19,7 +21,7 @@ const BASE_DOMAIN =
 */
 export const DEFAULT_AUTH_CONFIG: AuthConfig = {
 // Base domain for all services
-baseUrl: BASE_DOMAIN,
+baseUrl: BASE_DOMAIN!,
 
 // Configuration directory and file paths
 configDir: path.join(os.homedir(), '.taskmaster'),
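The net effect of the config change is a two-step lookup with no hard-coded production fallback; a small illustrative helper (not part of the package) that mirrors the precedence encoded above:

```ts
// Illustrative sketch of the precedence above: runtime TM_BASE_DOMAIN wins,
// otherwise the build-time TM_PUBLIC_BASE_DOMAIN baked in by the bundler is used.
function resolveBaseDomain(env: NodeJS.ProcessEnv): string | undefined {
	return env.TM_BASE_DOMAIN || env.TM_PUBLIC_BASE_DOMAIN;
}
```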
packages/tm-core/src/auth/credential-store.spec.ts (new file, 308 lines)
@@ -0,0 +1,308 @@
+/**
+ * @fileoverview Unit tests for CredentialStore token expiration handling
+ */
+
+import { afterEach, beforeEach, describe, expect, it } from 'vitest';
+import fs from 'fs';
+import path from 'path';
+import os from 'os';
+import { CredentialStore } from './credential-store';
+import type { AuthCredentials } from './types';
+
+describe('CredentialStore - Token Expiration', () => {
+let credentialStore: CredentialStore;
+let tmpDir: string;
+let authFile: string;
+
+beforeEach(() => {
+// Create temp directory for test credentials
+tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-cred-test-'));
+authFile = path.join(tmpDir, 'auth.json');
+
+// Create instance with test config
+CredentialStore.resetInstance();
+credentialStore = CredentialStore.getInstance({
+configDir: tmpDir,
+configFile: authFile
+});
+});
+
+afterEach(() => {
+// Clean up
+try {
+if (fs.existsSync(tmpDir)) {
+fs.rmSync(tmpDir, { recursive: true, force: true });
+}
+} catch {
+// Ignore cleanup errors
+}
+CredentialStore.resetInstance();
+});
+
+describe('Expiration Detection', () => {
+it('should return null for expired token', () => {
+const expiredCredentials: AuthCredentials = {
+token: 'expired-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).toBeNull();
+});
+
+it('should return credentials for valid token', () => {
+const validCredentials: AuthCredentials = {
+token: 'valid-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(validCredentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).not.toBeNull();
+expect(retrieved?.token).toBe('valid-token');
+});
+
+it('should return expired token when allowExpired is true', () => {
+const expiredCredentials: AuthCredentials = {
+token: 'expired-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 60000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: true });
+
+expect(retrieved).not.toBeNull();
+expect(retrieved?.token).toBe('expired-token');
+});
+
+it('should return expired token by default (allowExpired defaults to true)', () => {
+const expiredCredentials: AuthCredentials = {
+token: 'expired-token-default',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 60000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+// Call without options - should default to allowExpired: true
+const retrieved = credentialStore.getCredentials();
+
+expect(retrieved).not.toBeNull();
+expect(retrieved?.token).toBe('expired-token-default');
+});
+});
+
+describe('Clock Skew Tolerance', () => {
+it('should reject token expiring within 30-second buffer', () => {
+// Token expires in 15 seconds (within 30-second buffer)
+const almostExpiredCredentials: AuthCredentials = {
+token: 'almost-expired-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 15000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(almostExpiredCredentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).toBeNull();
+});
+
+it('should accept token expiring outside 30-second buffer', () => {
+// Token expires in 60 seconds (outside 30-second buffer)
+const validCredentials: AuthCredentials = {
+token: 'valid-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 60000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(validCredentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).not.toBeNull();
+expect(retrieved?.token).toBe('valid-token');
+});
+});
+
+describe('Timestamp Format Handling', () => {
+it('should handle ISO string timestamps', () => {
+const credentials: AuthCredentials = {
+token: 'test-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 3600000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(credentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).not.toBeNull();
+expect(typeof retrieved?.expiresAt).toBe('number'); // Normalized to number
+});
+
+it('should handle numeric timestamps', () => {
+const credentials: AuthCredentials = {
+token: 'test-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: Date.now() + 3600000,
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(credentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).not.toBeNull();
+expect(typeof retrieved?.expiresAt).toBe('number');
+});
+
+it('should return null for invalid timestamp format', () => {
+// Manually write invalid timestamp to file
+const invalidCredentials = {
+token: 'test-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: 'invalid-date',
+savedAt: new Date().toISOString()
+};
+
+fs.writeFileSync(authFile, JSON.stringify(invalidCredentials), {
+mode: 0o600
+});
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).toBeNull();
+});
+
+it('should return null for missing expiresAt', () => {
+const credentialsWithoutExpiry = {
+token: 'test-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+savedAt: new Date().toISOString()
+};
+
+fs.writeFileSync(authFile, JSON.stringify(credentialsWithoutExpiry), {
+mode: 0o600
+});
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+expect(retrieved).toBeNull();
+});
+});
+
+describe('Storage Persistence', () => {
+it('should persist expiresAt as ISO string', () => {
+const expiryTime = Date.now() + 3600000;
+const credentials: AuthCredentials = {
+token: 'test-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: expiryTime,
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(credentials);
+
+// Read raw file to verify format
+const fileContent = fs.readFileSync(authFile, 'utf-8');
+const parsed = JSON.parse(fileContent);
+
+// Should be stored as ISO string
+expect(typeof parsed.expiresAt).toBe('string');
+expect(parsed.expiresAt).toMatch(/^\d{4}-\d{2}-\d{2}T/); // ISO format
+});
+
+it('should normalize timestamp on retrieval', () => {
+const credentials: AuthCredentials = {
+token: 'test-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 3600000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(credentials);
+
+const retrieved = credentialStore.getCredentials({ allowExpired: false });
+
+// Should be normalized to number for runtime use
+expect(typeof retrieved?.expiresAt).toBe('number');
+});
+});
+
+describe('hasCredentials', () => {
+it('should return true for expired credentials', () => {
+const expiredCredentials: AuthCredentials = {
+token: 'expired-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() - 60000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(expiredCredentials);
+
+expect(credentialStore.hasCredentials()).toBe(true);
+});
+
+it('should return true for valid credentials', () => {
+const validCredentials: AuthCredentials = {
+token: 'valid-token',
+refreshToken: 'refresh-token',
+userId: 'test-user',
+email: 'test@example.com',
+expiresAt: new Date(Date.now() + 3600000).toISOString(),
+savedAt: new Date().toISOString()
+};
+
+credentialStore.saveCredentials(validCredentials);
+
+expect(credentialStore.hasCredentials()).toBe(true);
+});
+
+it('should return false when no credentials exist', () => {
+expect(credentialStore.hasCredentials()).toBe(false);
+});
+});
+});
@@ -197,7 +197,7 @@ describe('CredentialStore', () => {
 JSON.stringify(mockCredentials)
 );
 
-const result = store.getCredentials();
+const result = store.getCredentials({ allowExpired: false });
 
 expect(result).toBeNull();
 expect(mockLogger.warn).toHaveBeenCalledWith(
@@ -226,6 +226,31 @@ describe('CredentialStore', () => {
 expect(result).not.toBeNull();
 expect(result?.token).toBe('expired-token');
 });
+
+it('should return expired tokens by default (allowExpired defaults to true)', () => {
+const expiredTimestamp = Date.now() - 3600000; // 1 hour ago
+const mockCredentials = {
+token: 'expired-token-default',
+userId: 'user-expired',
+expiresAt: expiredTimestamp,
+tokenType: 'standard',
+savedAt: new Date().toISOString()
+};
+
+vi.mocked(fs.existsSync).mockReturnValue(true);
+vi.mocked(fs.readFileSync).mockReturnValue(
+JSON.stringify(mockCredentials)
+);
+
+// Call without options - should default to allowExpired: true
+const result = store.getCredentials();
+
+expect(result).not.toBeNull();
+expect(result?.token).toBe('expired-token-default');
+expect(mockLogger.warn).not.toHaveBeenCalledWith(
+expect.stringContaining('Authentication token has expired')
+);
+});
 });
 
 describe('saveCredentials with timestamp normalization', () => {
@@ -451,7 +476,7 @@ describe('CredentialStore', () => {
 });
 });
 
-describe('hasValidCredentials', () => {
+describe('hasCredentials', () => {
 it('should return true when valid unexpired credentials exist', () => {
 const futureDate = new Date(Date.now() + 3600000); // 1 hour from now
 const credentials = {
@@ -465,10 +490,10 @@ describe('CredentialStore', () => {
 vi.mocked(fs.existsSync).mockReturnValue(true);
 vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
 
-expect(store.hasValidCredentials()).toBe(true);
+expect(store.hasCredentials()).toBe(true);
 });
 
-it('should return false when credentials are expired', () => {
+it('should return true when credentials are expired', () => {
 const pastDate = new Date(Date.now() - 3600000); // 1 hour ago
 const credentials = {
 token: 'expired-token',
@@ -481,13 +506,13 @@ describe('CredentialStore', () => {
 vi.mocked(fs.existsSync).mockReturnValue(true);
 vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
 
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(true);
 });
 
 it('should return false when no credentials exist', () => {
 vi.mocked(fs.existsSync).mockReturnValue(false);
 
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(false);
 });
 
 it('should return false when file contains invalid JSON', () => {
@@ -495,7 +520,7 @@ describe('CredentialStore', () => {
 vi.mocked(fs.readFileSync).mockReturnValue('invalid json {');
 vi.mocked(fs.renameSync).mockImplementation(() => undefined);
 
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(false);
 });
 
 it('should return false for credentials without expiry', () => {
@@ -510,7 +535,7 @@ describe('CredentialStore', () => {
 vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
 
 // Credentials without expiry are considered invalid
-expect(store.hasValidCredentials()).toBe(false);
+expect(store.hasCredentials()).toBe(false);
 
 // Should log warning about missing expiration
 expect(mockLogger.warn).toHaveBeenCalledWith(
@@ -518,14 +543,14 @@ describe('CredentialStore', () => {
 );
 });
 
-it('should use allowExpired=false by default', () => {
+it('should use allowExpired=true', () => {
 // Spy on getCredentials to verify it's called with correct params
 const getCredentialsSpy = vi.spyOn(store, 'getCredentials');
 
 vi.mocked(fs.existsSync).mockReturnValue(false);
-store.hasValidCredentials();
+store.hasCredentials();
 
-expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: false });
+expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: true });
 });
 });
 });
 
@@ -24,6 +24,8 @@ export class CredentialStore {
 private config: AuthConfig;
 // Clock skew tolerance for expiry checks (30 seconds)
 private readonly CLOCK_SKEW_MS = 30_000;
+// Track if we've already warned about missing expiration to avoid spam
+private hasWarnedAboutMissingExpiration = false;
 
 private constructor(config?: Partial<AuthConfig>) {
 this.config = getAuthConfig(config);
@@ -54,9 +56,12 @@ export class CredentialStore {
 
 /**
 * Get stored authentication credentials
+* @param options.allowExpired - Whether to return expired credentials (default: true)
 * @returns AuthCredentials with expiresAt as number (milliseconds) for runtime use
 */
-getCredentials(options?: { allowExpired?: boolean }): AuthCredentials | null {
+getCredentials({
+allowExpired = true
+}: { allowExpired?: boolean } = {}): AuthCredentials | null {
 try {
 if (!fs.existsSync(this.config.configFile)) {
 return null;
@@ -81,7 +86,11 @@ export class CredentialStore {
 
 // Validate expiration time for tokens
 if (expiresAtMs === undefined) {
-this.logger.warn('No valid expiration time provided for token');
+// Only log this warning once to avoid spam during auth flows
+if (!this.hasWarnedAboutMissingExpiration) {
+this.logger.warn('No valid expiration time provided for token');
+this.hasWarnedAboutMissingExpiration = true;
+}
 return null;
 }
 
@@ -90,7 +99,6 @@ export class CredentialStore {
 
 // Check if the token has expired (with clock skew tolerance)
 const now = Date.now();
-const allowExpired = options?.allowExpired ?? false;
 if (now >= expiresAtMs - this.CLOCK_SKEW_MS && !allowExpired) {
 this.logger.warn(
 'Authentication token has expired or is about to expire',
@@ -103,7 +111,7 @@ export class CredentialStore {
 return null;
 }
 
-// Return valid token
+// Return credentials (even if expired) to enable refresh flows
 return authData;
 } catch (error) {
 this.logger.error(
@@ -172,6 +180,9 @@ export class CredentialStore {
 mode: 0o600
 });
 fs.renameSync(tempFile, this.config.configFile);
+
+// Reset the warning flag so it can be shown again for future invalid tokens
+this.hasWarnedAboutMissingExpiration = false;
 } catch (error) {
 throw new AuthenticationError(
 `Failed to save auth credentials: ${(error as Error).message}`,
@@ -199,10 +210,11 @@ export class CredentialStore {
 }
 
 /**
-* Check if credentials exist and are valid
+* Check if credentials exist (regardless of expiration status)
+* @returns true if credentials are stored, including expired credentials
 */
-hasValidCredentials(): boolean {
-const credentials = this.getCredentials({ allowExpired: false });
+hasCredentials(): boolean {
+const credentials = this.getCredentials({ allowExpired: true });
 return credentials !== null;
 }
 
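For consumers of `CredentialStore`, the practical difference is the default: a bare `getCredentials()` now favors refresh flows, while strict callers opt back in to expiry filtering. A short usage sketch using the names from this diff:

```ts
// Sketch: getCredentials() option semantics after this change.
const store = CredentialStore.getInstance();

// Default (allowExpired: true): expired credentials are returned so the
// refresh token can still be used by a refresh flow.
const maybeExpired = store.getCredentials();

// Strict read: null if the token is expired or expires within the 30s
// CLOCK_SKEW_MS buffer.
const onlyValid = store.getCredentials({ allowExpired: false });
```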
@@ -281,15 +281,26 @@ export class OAuthService {
 // Exchange code for session using PKCE
 const session = await this.supabaseClient.exchangeCodeForSession(code);
 
+// Calculate expiration - can be overridden with TM_TOKEN_EXPIRY_MINUTES
+let expiresAt: string | undefined;
+const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
+if (tokenExpiryMinutes) {
+const minutes = parseInt(tokenExpiryMinutes);
+expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
+this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
+} else {
+expiresAt = session.expires_at
+? new Date(session.expires_at * 1000).toISOString()
+: undefined;
+}
+
 // Save authentication data
 const authData: AuthCredentials = {
 token: session.access_token,
 refreshToken: session.refresh_token,
 userId: session.user.id,
 email: session.user.email,
-expiresAt: session.expires_at
-? new Date(session.expires_at * 1000).toISOString()
-: undefined,
+expiresAt,
 tokenType: 'standard',
 savedAt: new Date().toISOString()
 };
@@ -340,10 +351,18 @@ export class OAuthService {
 // Get user info from the session
 const user = await this.supabaseClient.getUser();
 
-// Calculate expiration time
-const expiresAt = expiresIn
-? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
-: undefined;
+// Calculate expiration time - can be overridden with TM_TOKEN_EXPIRY_MINUTES
+let expiresAt: string | undefined;
+const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
+if (tokenExpiryMinutes) {
+const minutes = parseInt(tokenExpiryMinutes);
+expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
+this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
+} else {
+expiresAt = expiresIn
+? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
+: undefined;
+}
 
 // Save authentication data
 const authData: AuthCredentials = {
@@ -351,7 +370,7 @@ export class OAuthService {
 refreshToken: refreshToken || undefined,
 userId: user?.id || 'unknown',
 email: user?.email,
-expiresAt: expiresAt,
+expiresAt,
 tokenType: 'standard',
 savedAt: new Date().toISOString()
 };
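`TM_TOKEN_EXPIRY_MINUTES` exists so the refresh path can be exercised without waiting out a real token lifetime; a hedged sketch of how a test might use it (the flow invocation itself is only illustrative):

```ts
// Sketch: force a short-lived token to observe refresh behavior in a test.
process.env.TM_TOKEN_EXPIRY_MINUTES = '1';

// ...run the OAuth login flow under test...
// The stored expiresAt is then roughly Date.now() + 1 * 60 * 1000, and the
// service logs a warning that the expiry was overridden.
```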
@@ -98,11 +98,11 @@ export class SupabaseSessionStorage implements SupportedStorage {
 // Only handle Supabase session keys
 if (key === STORAGE_KEY || key.includes('auth-token')) {
 try {
+this.logger.info('Supabase called setItem - storing refreshed session');
+
 // Parse the session and update our credentials
 const sessionUpdates = this.parseSessionToCredentials(value);
-const existingCredentials = this.store.getCredentials({
-allowExpired: true
-});
+const existingCredentials = this.store.getCredentials();
 
 if (sessionUpdates.token) {
 const updatedCredentials: AuthCredentials = {
@@ -113,6 +113,9 @@ export class SupabaseSessionStorage implements SupportedStorage {
 } as AuthCredentials;
 
 this.store.saveCredentials(updatedCredentials);
+this.logger.info(
+'Successfully saved refreshed credentials from Supabase'
+);
 }
 } catch (error) {
 this.logger.error('Error setting session:', error);
@@ -16,6 +16,7 @@ export interface AuthCredentials {
 export interface UserContext {
 orgId?: string;
 orgName?: string;
+orgSlug?: string;
 briefId?: string;
 briefName?: string;
 updatedAt: string;
@@ -17,10 +17,11 @@ export class SupabaseAuthClient {
 private client: SupabaseJSClient | null = null;
 private sessionStorage: SupabaseSessionStorage;
 private logger = getLogger('SupabaseAuthClient');
+private credentialStore: CredentialStore;
 
 constructor() {
-const credentialStore = CredentialStore.getInstance();
-this.sessionStorage = new SupabaseSessionStorage(credentialStore);
+this.credentialStore = CredentialStore.getInstance();
+this.sessionStorage = new SupabaseSessionStorage(this.credentialStore);
 }
 
 /**
@@ -28,13 +29,17 @@ export class SupabaseAuthClient {
 */
 getClient(): SupabaseJSClient {
 if (!this.client) {
-// Get Supabase configuration from environment - using TM_PUBLIC prefix
-const supabaseUrl = process.env.TM_PUBLIC_SUPABASE_URL;
-const supabaseAnonKey = process.env.TM_PUBLIC_SUPABASE_ANON_KEY;
+// Get Supabase configuration from environment
+// Runtime vars (TM_*) take precedence over build-time vars (TM_PUBLIC_*)
+const supabaseUrl =
+process.env.TM_SUPABASE_URL || process.env.TM_PUBLIC_SUPABASE_URL;
+const supabaseAnonKey =
+process.env.TM_SUPABASE_ANON_KEY ||
+process.env.TM_PUBLIC_SUPABASE_ANON_KEY;
 
 if (!supabaseUrl || !supabaseAnonKey) {
 throw new AuthenticationError(
-'Supabase configuration missing. Please set TM_PUBLIC_SUPABASE_URL and TM_PUBLIC_SUPABASE_ANON_KEY environment variables.',
+'Supabase configuration missing. Please set TM_SUPABASE_URL and TM_SUPABASE_ANON_KEY (runtime) or TM_PUBLIC_SUPABASE_URL and TM_PUBLIC_SUPABASE_ANON_KEY (build-time) environment variables.',
 'CONFIG_MISSING'
 );
 }
@@ -52,7 +52,10 @@ export const ERROR_CODES = {
 INVALID_INPUT: 'INVALID_INPUT',
 NOT_IMPLEMENTED: 'NOT_IMPLEMENTED',
 UNKNOWN_ERROR: 'UNKNOWN_ERROR',
-NOT_FOUND: 'NOT_FOUND'
+NOT_FOUND: 'NOT_FOUND',
+
+// Context errors
+NO_BRIEF_SELECTED: 'NO_BRIEF_SELECTED'
 } as const;
 
 export type ErrorCode = (typeof ERROR_CODES)[keyof typeof ERROR_CODES];
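Callers can now branch on the brief-selection case without string matching; a sketch of the intended handling (the `getTaskList()` call is illustrative, `TaskMasterError.is` and the constant come from this diff):

```ts
// Sketch: distinguishing the user-facing "no brief selected" case.
try {
	await taskService.getTaskList();
} catch (error) {
	if (
		error instanceof TaskMasterError &&
		error.is(ERROR_CODES.NO_BRIEF_SELECTED)
	) {
		console.error('Select a brief first: tm context brief <brief-id>');
	} else {
		throw error; // genuine storage/internal failure
	}
}
```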
@@ -25,7 +25,7 @@ export interface LoggerConfig {
 export class Logger {
 private config: Required<LoggerConfig>;
 private static readonly DEFAULT_CONFIG: Required<LoggerConfig> = {
-level: LogLevel.WARN,
+level: LogLevel.SILENT,
 silent: false,
 prefix: '',
 timestamp: false,
@@ -47,8 +47,8 @@ export class SupabaseTaskRepository {
 * Gets the current brief ID from auth context
 * @throws {Error} If no brief is selected
 */
-private getBriefIdOrThrow(): string {
-const context = this.authManager.getContext();
+private async getBriefIdOrThrow(): Promise<string> {
+const context = await this.authManager.getContext();
 if (!context?.briefId) {
 throw new Error(
 'No brief selected. Please select a brief first using: tm context brief'
@@ -61,7 +61,7 @@ export class SupabaseTaskRepository {
 _projectId?: string,
 options?: LoadTasksOptions
 ): Promise<Task[]> {
-const briefId = this.getBriefIdOrThrow();
+const briefId = await this.getBriefIdOrThrow();
 
 // Build query with filters
 let query = this.supabase
@@ -114,7 +114,7 @@ export class SupabaseTaskRepository {
 }
 
 async getTask(_projectId: string, taskId: string): Promise<Task | null> {
-const briefId = this.getBriefIdOrThrow();
+const briefId = await this.getBriefIdOrThrow();
 
 const { data, error } = await this.supabase
 .from('tasks')
@@ -157,7 +157,7 @@ export class SupabaseTaskRepository {
 taskId: string,
 updates: Partial<Task>
 ): Promise<Task> {
-const briefId = this.getBriefIdOrThrow();
+const briefId = await this.getBriefIdOrThrow();
 
 // Validate updates using Zod schema
 try {
@@ -105,7 +105,7 @@ export class ExportService {
 }
 
 // Get current context
-const context = this.authManager.getContext();
+const context = await this.authManager.getContext();
 
 // Determine org and brief IDs
 let orgId = options.orgId || context?.orgId;
@@ -232,7 +232,7 @@ export class ExportService {
 hasBrief: boolean;
 context: UserContext | null;
 }> {
-const context = this.authManager.getContext();
+const context = await this.authManager.getContext();
 
 return {
 hasOrg: !!context?.orgId,
@@ -358,11 +358,12 @@ export class ExportService {
 tasks: any[]
 ): Promise<void> {
 // Check if we should use the API endpoint or direct Supabase
-const useAPIEndpoint = process.env.TM_PUBLIC_BASE_DOMAIN;
+const apiEndpoint =
+process.env.TM_BASE_DOMAIN || process.env.TM_PUBLIC_BASE_DOMAIN;
 
-if (useAPIEndpoint) {
+if (apiEndpoint) {
 // Use the new bulk import API endpoint
-const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
+const apiUrl = `${apiEndpoint}/ai/api/v1/briefs/${briefId}/tasks`;
 
 // Transform tasks to flat structure for API
 const flatTasks = this.transformTasksForBulkImport(tasks);
@@ -370,16 +371,16 @@ export class ExportService {
 // Prepare request body
 const requestBody = {
 source: 'task-master-cli',
-accountId: orgId,
 options: {
 dryRun: false,
 stopOnError: false
 },
+accountId: orgId,
 tasks: flatTasks
 };
 
 // Get auth token
-const credentials = this.authManager.getCredentials();
+const credentials = await this.authManager.getCredentials();
 if (!credentials || !credentials.token) {
 throw new Error('Not authenticated');
 }
@@ -27,6 +27,12 @@ export interface Brief {
 status: string;
 createdAt: string;
 updatedAt: string;
+document?: {
+id: string;
+title: string;
+document_name: string;
+description?: string;
+};
 }
 
 /**
@@ -171,7 +177,12 @@ export class OrganizationService {
 document_id,
 status,
 created_at,
-updated_at
+updated_at,
+document:document_id (
+id,
+document_name,
+title
+)
 `)
 .eq('account_id', orgId);
 
@@ -196,7 +207,14 @@ export class OrganizationService {
 documentId: brief.document_id,
 status: brief.status,
 createdAt: brief.created_at,
-updatedAt: brief.updated_at
+updatedAt: brief.updated_at,
+document: brief.document
+? {
+id: brief.document.id,
+document_name: brief.document.document_name,
+title: brief.document.title
+}
+: undefined
 }));
 } catch (error) {
 if (error instanceof TaskMasterError) {
@@ -224,7 +242,13 @@ export class OrganizationService {
 document_id,
 status,
 created_at,
-updated_at
+updated_at,
+document:document_id (
+id,
+document_name,
+title,
+description
+)
 `)
 .eq('id', briefId)
 .single();
@@ -253,7 +277,15 @@ export class OrganizationService {
 documentId: briefData.document_id,
 status: briefData.status,
 createdAt: briefData.created_at,
-updatedAt: briefData.updated_at
+updatedAt: briefData.updated_at,
+document: briefData.document
+? {
+id: briefData.document.id,
+document_name: briefData.document.document_name,
+title: briefData.document.title,
+description: briefData.document.description
+}
+: undefined
 };
 } catch (error) {
 if (error instanceof TaskMasterError) {
@@ -161,6 +161,16 @@ export class TaskService {
 storageType: this.getStorageType()
 };
 } catch (error) {
+// If it's a user-facing error (like NO_BRIEF_SELECTED), don't log it as an internal error
+if (
+error instanceof TaskMasterError &&
+error.is(ERROR_CODES.NO_BRIEF_SELECTED)
+) {
+// Just re-throw user-facing errors without wrapping
+throw error;
+}
+
+// Log internal errors
 this.logger.error('Failed to get task list', error);
 throw new TaskMasterError(
 'Failed to get task list',
@@ -186,6 +196,14 @@ export class TaskService {
 // Delegate to storage layer which handles the specific logic for tasks vs subtasks
 return await this.storage.loadTask(String(taskId), activeTag);
 } catch (error) {
+// If it's a user-facing error (like NO_BRIEF_SELECTED), don't wrap it
+if (
+error instanceof TaskMasterError &&
+error.is(ERROR_CODES.NO_BRIEF_SELECTED)
+) {
+throw error;
+}
+
 throw new TaskMasterError(
 `Failed to get task ${taskId}`,
 ERROR_CODES.STORAGE_ERROR,
@@ -522,6 +540,14 @@ export class TaskService {
 activeTag
 );
 } catch (error) {
+// If it's a user-facing error (like NO_BRIEF_SELECTED), don't wrap it
+if (
+error instanceof TaskMasterError &&
+error.is(ERROR_CODES.NO_BRIEF_SELECTED)
+) {
+throw error;
+}
+
 throw new TaskMasterError(
 `Failed to update task status for ${taskIdStr}`,
 ERROR_CODES.STORAGE_ERROR,
@@ -37,6 +37,13 @@ export interface ApiStorageConfig {
 maxRetries?: number;
 }
+
+/**
+* Auth context with a guaranteed briefId
+*/
+type ContextWithBrief = NonNullable<
+ReturnType<typeof AuthManager.prototype.getContext>
+> & { briefId: string };
 
 /**
 * ApiStorage implementation using repository pattern
 * Provides flexibility to swap between different backend implementations
@@ -112,6 +119,13 @@ export class ApiStorage implements IStorage {
 }
 }
+
+/**
+* Get the storage type
+*/
+getType(): 'api' {
+return 'api';
+}
 
 /**
 * Load tags into cache
 * In our API-based system, "tags" represent briefs
@@ -151,15 +165,7 @@ export class ApiStorage implements IStorage {
 await this.ensureInitialized();
 
 try {
-const authManager = AuthManager.getInstance();
-const context = authManager.getContext();
-
-// If no brief is selected in context, throw an error
-if (!context?.briefId) {
-throw new Error(
-'No brief selected. Please select a brief first using: tm context brief <brief-id>'
-);
-}
+const context = this.ensureBriefSelected('loadTasks');
 
 // Load tasks from the current brief context with filters pushed to repository
 const tasks = await this.retryOperation(() =>
@@ -174,12 +180,11 @@ export class ApiStorage implements IStorage {
 
 return tasks;
 } catch (error) {
-throw new TaskMasterError(
-'Failed to load tasks from API',
-ERROR_CODES.STORAGE_ERROR,
-{ operation: 'loadTasks', tag, context: 'brief-based loading' },
-error as Error
-);
+this.wrapError(error, 'Failed to load tasks from API', {
+operation: 'loadTasks',
+tag,
+context: 'brief-based loading'
+});
 }
 }
 
@@ -230,16 +235,17 @@ export class ApiStorage implements IStorage {
 await this.ensureInitialized();
 
 try {
+this.ensureBriefSelected('loadTask');
+
 return await this.retryOperation(() =>
 this.repository.getTask(this.projectId, taskId)
 );
 } catch (error) {
-throw new TaskMasterError(
-'Failed to load task from API',
-ERROR_CODES.STORAGE_ERROR,
-{ operation: 'loadTask', taskId, tag },
-error as Error
-);
+this.wrapError(error, 'Failed to load task from API', {
+operation: 'loadTask',
+taskId,
+tag
+});
 }
 }
 
@@ -503,6 +509,8 @@ export class ApiStorage implements IStorage {
 await this.ensureInitialized();
 
 try {
+this.ensureBriefSelected('updateTaskStatus');
+
 const existingTask = await this.retryOperation(() =>
 this.repository.getTask(this.projectId, taskId)
 );
@@ -539,12 +547,12 @@ export class ApiStorage implements IStorage {
 taskId
 };
 } catch (error) {
-throw new TaskMasterError(
-'Failed to update task status via API',
-ERROR_CODES.STORAGE_ERROR,
-{ operation: 'updateTaskStatus', taskId, newStatus, tag },
-error as Error
-);
+this.wrapError(error, 'Failed to update task status via API', {
+operation: 'updateTaskStatus',
+taskId,
+newStatus,
+tag
+});
 }
 }
 
@@ -762,6 +770,29 @@ export class ApiStorage implements IStorage {
 }
 }
+
+/**
+* Ensure a brief is selected in the current context
+* @returns The current auth context with a valid briefId
+*/
+private ensureBriefSelected(operation: string): ContextWithBrief {
+const authManager = AuthManager.getInstance();
+const context = authManager.getContext();
+
+if (!context?.briefId) {
+throw new TaskMasterError(
+'No brief selected',
+ERROR_CODES.NO_BRIEF_SELECTED,
+{
+operation,
+userMessage:
+'No brief selected. Please select a brief first using: tm context brief <brief-id> or tm context brief <brief-url>'
+}
+);
+}
+
+return context as ContextWithBrief;
+}
 
 /**
 * Retry an operation with exponential backoff
 */
@@ -780,4 +811,28 @@ export class ApiStorage implements IStorage {
 throw error;
 }
 }
+
+/**
+* Wrap an error unless it's already a NO_BRIEF_SELECTED error
+*/
+private wrapError(
+error: unknown,
+message: string,
+context: Record<string, unknown>
+): never {
+// If it's already a NO_BRIEF_SELECTED error, don't wrap it
+if (
+error instanceof TaskMasterError &&
+error.is(ERROR_CODES.NO_BRIEF_SELECTED)
+) {
+throw error;
+}
+
+throw new TaskMasterError(
+message,
+ERROR_CODES.STORAGE_ERROR,
+context,
+error as Error
+);
+}
 }
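Taken together, the two new private helpers give every ApiStorage method the same shape: validate the brief inside the try block, and funnel failures through one wrapper that re-throws `NO_BRIEF_SELECTED` untouched. A condensed, illustrative sketch of that shape (paraphrasing the hunks above, not a verbatim excerpt):

```ts
// Sketch: the method shape the ApiStorage hunks converge on (illustrative only).
async function loadTasksShape(this: any, tag?: string) {
	try {
		this.ensureBriefSelected('loadTasks'); // throws NO_BRIEF_SELECTED early
		return await this.retryOperation(() => {
			/* repository call scoped to the selected brief */
		});
	} catch (error) {
		// wrapError re-throws NO_BRIEF_SELECTED as-is and wraps everything
		// else as a STORAGE_ERROR TaskMasterError; it returns `never`.
		this.wrapError(error, 'Failed to load tasks from API', {
			operation: 'loadTasks',
			tag
		});
	}
}
```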
@@ -44,6 +44,13 @@ export class FileStorage implements IStorage {
 await this.fileOps.cleanup();
 }
+
+/**
+* Get the storage type
+*/
+getType(): 'file' {
+return 'file';
+}
 
 /**
 * Get statistics about the storage
 */
@@ -72,7 +72,7 @@ export class StorageFactory {
 { storageType: 'api', missing }
 );
 }
-// Use auth token from AuthManager
+// Use auth token from AuthManager (synchronous - no auto-refresh here)
 const credentials = authManager.getCredentials();
 if (credentials) {
 // Merge with existing storage config, ensuring required fields
@@ -82,8 +82,8 @@ export class StorageFactory {
 apiAccessToken: credentials.token,
 apiEndpoint:
 config.storage?.apiEndpoint ||
-process.env.TM_PUBLIC_BASE_DOMAIN ||
-'https://tryhamster.com/api'
+process.env.TM_BASE_DOMAIN ||
+process.env.TM_PUBLIC_BASE_DOMAIN
 };
 config.storage = nextStorage;
 }
@@ -112,6 +112,7 @@ export class StorageFactory {
 apiAccessToken: credentials.token,
 apiEndpoint:
 config.storage?.apiEndpoint ||
+process.env.TM_BASE_DOMAIN ||
 process.env.TM_PUBLIC_BASE_DOMAIN ||
 'https://tryhamster.com/api'
 };
@@ -201,6 +201,44 @@ export class TaskMasterCore {
 		return this.taskService.getStorageType();
 	}
 
+	/**
+	 * Get storage configuration
+	 */
+	getStorageConfig() {
+		return this.configManager.getStorageConfig();
+	}
+
+	/**
+	 * Get storage display information for headers
+	 * Returns context info for API storage, null for file storage
+	 */
+	getStorageDisplayInfo(): {
+		briefId: string;
+		briefName: string;
+		orgSlug?: string;
+	} | null {
+		// Only return info if using API storage
+		const storageType = this.getStorageType();
+		if (storageType !== 'api') {
+			return null;
+		}
+
+		// Get credentials from auth manager
+		const authManager = AuthManager.getInstance();
+		const credentials = authManager.getCredentials();
+		const selectedContext = credentials?.selectedContext;
+
+		if (!selectedContext?.briefId || !selectedContext?.briefName) {
+			return null;
+		}
+
+		return {
+			briefId: selectedContext.briefId,
+			briefName: selectedContext.briefName,
+			orgSlug: selectedContext.orgSlug
+		};
+	}
+
 	/**
 	 * Get current active tag
 	 */
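Editor's note: a minimal sketch of how a caller might consume the new `getStorageDisplayInfo()` accessor when rendering a dashboard header. Only the return shape ({ briefId, briefName, orgSlug? } | null) comes from the diff above; the formatting helper is hypothetical.

```typescript
// Hypothetical consumer of TaskMasterCore.getStorageDisplayInfo().
function formatStorageHeader(core: {
	getStorageDisplayInfo(): { briefId: string; briefName: string; orgSlug?: string } | null;
}): string {
	const info = core.getStorageDisplayInfo();
	if (!info) {
		// File storage: no remote brief context to show
		return 'Storage: file (local tasks.json)';
	}
	const org = info.orgSlug ? `${info.orgSlug}/` : '';
	return `Storage: api, brief ${org}${info.briefName} (${info.briefId})`;
}
```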
139 packages/tm-core/tests/auth/auth-refresh.test.ts Normal file
@@ -0,0 +1,139 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';
import type { Session } from '@supabase/supabase-js';
import { AuthManager } from '../../src/auth/auth-manager';
import { CredentialStore } from '../../src/auth/credential-store';
import type { AuthCredentials } from '../../src/auth/types';

describe('AuthManager Token Refresh', () => {
	let authManager: AuthManager;
	let credentialStore: CredentialStore;
	let tmpDir: string;
	let authFile: string;

	beforeEach(() => {
		// Reset singletons
		AuthManager.resetInstance();
		CredentialStore.resetInstance();

		// Create temporary directory for test isolation
		tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-refresh-'));
		authFile = path.join(tmpDir, 'auth.json');

		// Initialize AuthManager with test config (this will create CredentialStore internally)
		authManager = AuthManager.getInstance({
			configDir: tmpDir,
			configFile: authFile
		});

		// Get the CredentialStore instance that AuthManager created
		credentialStore = CredentialStore.getInstance();
		credentialStore.clearCredentials();
	});

	afterEach(() => {
		// Clean up
		try {
			credentialStore.clearCredentials();
		} catch {
			// Ignore cleanup errors
		}
		AuthManager.resetInstance();
		CredentialStore.resetInstance();
		vi.restoreAllMocks();

		// Remove temporary directory
		if (tmpDir && fs.existsSync(tmpDir)) {
			fs.rmSync(tmpDir, { recursive: true, force: true });
		}
	});

	it('should return expired credentials to enable refresh flows', () => {
		// Set up expired credentials with refresh token
		const expiredCredentials: AuthCredentials = {
			token: 'expired_access_token',
			refreshToken: 'valid_refresh_token',
			userId: 'test-user-id',
			email: 'test@example.com',
			expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
			savedAt: new Date().toISOString()
		};

		credentialStore.saveCredentials(expiredCredentials);

		// Get credentials should return them even if expired
		// Refresh will be handled by explicit calls or client operations
		const credentials = authManager.getCredentials();

		expect(credentials).not.toBeNull();
		expect(credentials?.token).toBe('expired_access_token');
		expect(credentials?.refreshToken).toBe('valid_refresh_token');
	});

	it('should return valid credentials', () => {
		// Set up valid (non-expired) credentials
		const validCredentials: AuthCredentials = {
			token: 'valid_access_token',
			refreshToken: 'valid_refresh_token',
			userId: 'test-user-id',
			email: 'test@example.com',
			expiresAt: new Date(Date.now() + 3600000).toISOString(), // Expires in 1 hour
			savedAt: new Date().toISOString()
		};

		credentialStore.saveCredentials(validCredentials);

		const credentials = authManager.getCredentials();

		expect(credentials?.token).toBe('valid_access_token');
	});

	it('should return expired credentials even without refresh token', () => {
		// Set up expired credentials WITHOUT refresh token
		// We still return them - it's up to the caller to handle
		const expiredCredentials: AuthCredentials = {
			token: 'expired_access_token',
			refreshToken: undefined,
			userId: 'test-user-id',
			email: 'test@example.com',
			expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
			savedAt: new Date().toISOString()
		};

		credentialStore.saveCredentials(expiredCredentials);

		const credentials = authManager.getCredentials();

		// Returns credentials even if expired
		expect(credentials).not.toBeNull();
		expect(credentials?.token).toBe('expired_access_token');
	});

	it('should return null if no credentials exist', () => {
		const credentials = authManager.getCredentials();
		expect(credentials).toBeNull();
	});

	it('should return credentials regardless of refresh token validity', () => {
		// Set up expired credentials with refresh token
		const expiredCredentials: AuthCredentials = {
			token: 'expired_access_token',
			refreshToken: 'invalid_refresh_token',
			userId: 'test-user-id',
			email: 'test@example.com',
			expiresAt: new Date(Date.now() - 1000).toISOString(),
			savedAt: new Date().toISOString()
		};

		credentialStore.saveCredentials(expiredCredentials);

		const credentials = authManager.getCredentials();

		// Returns credentials - refresh will be attempted by the client which will handle failure
		expect(credentials).not.toBeNull();
		expect(credentials?.token).toBe('expired_access_token');
		expect(credentials?.refreshToken).toBe('invalid_refresh_token');
	});
});
336 packages/tm-core/tests/integration/auth-token-refresh.test.ts Normal file
@@ -0,0 +1,336 @@
/**
 * @fileoverview Integration tests for JWT token auto-refresh functionality
 *
 * These tests verify that expired tokens are automatically refreshed
 * when making API calls through AuthManager.
 */

import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';
import type { Session } from '@supabase/supabase-js';
import { AuthManager } from '../../src/auth/auth-manager';
import { CredentialStore } from '../../src/auth/credential-store';
import type { AuthCredentials } from '../../src/auth/types';

describe('AuthManager - Token Auto-Refresh Integration', () => {
	let authManager: AuthManager;
	let credentialStore: CredentialStore;
	let tmpDir: string;
	let authFile: string;

	// Mock Supabase session that will be returned on refresh
	const mockRefreshedSession: Session = {
		access_token: 'new-access-token-xyz',
		refresh_token: 'new-refresh-token-xyz',
		token_type: 'bearer',
		expires_at: Math.floor(Date.now() / 1000) + 3600, // 1 hour from now
		expires_in: 3600,
		user: {
			id: 'test-user-id',
			email: 'test@example.com',
			aud: 'authenticated',
			role: 'authenticated',
			app_metadata: {},
			user_metadata: {},
			created_at: new Date().toISOString()
		}
	};

	beforeEach(() => {
		// Reset singletons
		AuthManager.resetInstance();
		CredentialStore.resetInstance();

		// Create temporary directory for test isolation
		tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-integration-'));
		authFile = path.join(tmpDir, 'auth.json');

		// Initialize AuthManager with test config (this will create CredentialStore internally)
		authManager = AuthManager.getInstance({
			configDir: tmpDir,
			configFile: authFile
		});

		// Get the CredentialStore instance that AuthManager created
		credentialStore = CredentialStore.getInstance();
		credentialStore.clearCredentials();
	});

	afterEach(() => {
		// Clean up
		try {
			credentialStore.clearCredentials();
		} catch {
			// Ignore cleanup errors
		}
		AuthManager.resetInstance();
		CredentialStore.resetInstance();
		vi.restoreAllMocks();

		// Remove temporary directory
		if (tmpDir && fs.existsSync(tmpDir)) {
			fs.rmSync(tmpDir, { recursive: true, force: true });
		}
	});

	describe('Expired Token Detection', () => {
		it('should return expired token for Supabase to refresh', () => {
			// Set up expired credentials
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				refreshToken: 'valid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(expiredCredentials);

			authManager = AuthManager.getInstance();

			// Get credentials returns them even if expired
			const credentials = authManager.getCredentials();

			expect(credentials).not.toBeNull();
			expect(credentials?.token).toBe('expired-token');
			expect(credentials?.refreshToken).toBe('valid-refresh-token');
		});

		it('should return valid token', () => {
			// Set up valid credentials
			const validCredentials: AuthCredentials = {
				token: 'valid-token',
				refreshToken: 'valid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(validCredentials);

			authManager = AuthManager.getInstance();

			const credentials = authManager.getCredentials();

			expect(credentials?.token).toBe('valid-token');
		});
	});

	describe('Token Refresh Flow', () => {
		it('should manually refresh expired token and save new credentials', async () => {
			const expiredCredentials: AuthCredentials = {
				token: 'old-token',
				refreshToken: 'old-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date(Date.now() - 3600000).toISOString(),
				selectedContext: {
					orgId: 'test-org',
					briefId: 'test-brief',
					updatedAt: new Date().toISOString()
				}
			};

			credentialStore.saveCredentials(expiredCredentials);

			authManager = AuthManager.getInstance();

			vi.spyOn(
				authManager['supabaseClient'],
				'refreshSession'
			).mockResolvedValue(mockRefreshedSession);

			// Explicitly call refreshToken() method
			const refreshedCredentials = await authManager.refreshToken();

			expect(refreshedCredentials).not.toBeNull();
			expect(refreshedCredentials.token).toBe('new-access-token-xyz');
			expect(refreshedCredentials.refreshToken).toBe('new-refresh-token-xyz');

			// Verify context was preserved
			expect(refreshedCredentials.selectedContext?.orgId).toBe('test-org');
			expect(refreshedCredentials.selectedContext?.briefId).toBe('test-brief');

			// Verify new expiration is in the future
			const newExpiry = new Date(refreshedCredentials.expiresAt!).getTime();
			const now = Date.now();
			expect(newExpiry).toBeGreaterThan(now);
		});

		it('should throw error if manual refresh fails', async () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				refreshToken: 'invalid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(expiredCredentials);

			authManager = AuthManager.getInstance();

			// Mock refresh to fail
			vi.spyOn(
				authManager['supabaseClient'],
				'refreshSession'
			).mockRejectedValue(new Error('Refresh token expired'));

			// Explicit refreshToken() call should throw
			await expect(authManager.refreshToken()).rejects.toThrow();
		});

		it('should return expired credentials even without refresh token', () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				// No refresh token
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(expiredCredentials);

			authManager = AuthManager.getInstance();

			const credentials = authManager.getCredentials();

			// Credentials are returned even without refresh token
			expect(credentials).not.toBeNull();
			expect(credentials?.token).toBe('expired-token');
			expect(credentials?.refreshToken).toBeUndefined();
		});

		it('should return null if credentials missing expiresAt', () => {
			const credentialsWithoutExpiry: AuthCredentials = {
				token: 'test-token',
				refreshToken: 'refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				// Missing expiresAt - invalid token
				savedAt: new Date().toISOString()
			} as any;

			credentialStore.saveCredentials(credentialsWithoutExpiry);

			authManager = AuthManager.getInstance();

			const credentials = authManager.getCredentials();

			// Tokens without valid expiration are considered invalid
			expect(credentials).toBeNull();
		});
	});

	describe('Clock Skew Tolerance', () => {
		it('should return credentials within 30-second expiry window', () => {
			// Token expires in 15 seconds (within 30-second buffer)
			// Supabase will handle refresh automatically
			const almostExpiredCredentials: AuthCredentials = {
				token: 'almost-expired-token',
				refreshToken: 'valid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() + 15000).toISOString(), // 15 seconds from now
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(almostExpiredCredentials);

			authManager = AuthManager.getInstance();

			const credentials = authManager.getCredentials();

			// Credentials are returned (Supabase handles auto-refresh in background)
			expect(credentials).not.toBeNull();
			expect(credentials?.token).toBe('almost-expired-token');
			expect(credentials?.refreshToken).toBe('valid-refresh-token');
		});

		it('should return valid token well before expiry', () => {
			// Token expires in 5 minutes
			const validCredentials: AuthCredentials = {
				token: 'valid-token',
				refreshToken: 'valid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() + 300000).toISOString(), // 5 minutes
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(validCredentials);

			authManager = AuthManager.getInstance();

			const credentials = authManager.getCredentials();

			// Valid credentials are returned as-is
			expect(credentials).not.toBeNull();
			expect(credentials?.token).toBe('valid-token');
			expect(credentials?.refreshToken).toBe('valid-refresh-token');
		});
	});

	describe('Synchronous vs Async Methods', () => {
		it('getCredentials should return expired credentials', () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				refreshToken: 'valid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(expiredCredentials);

			authManager = AuthManager.getInstance();

			// Returns credentials even if expired - Supabase will handle refresh
			const credentials = authManager.getCredentials();

			expect(credentials).not.toBeNull();
			expect(credentials?.token).toBe('expired-token');
			expect(credentials?.refreshToken).toBe('valid-refresh-token');
		});
	});

	describe('Multiple Concurrent Calls', () => {
		it('should handle concurrent getCredentials calls gracefully', () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				refreshToken: 'valid-refresh-token',
				userId: 'test-user-id',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date().toISOString()
			};

			credentialStore.saveCredentials(expiredCredentials);

			authManager = AuthManager.getInstance();

			// Make multiple concurrent calls (synchronous now)
			const creds1 = authManager.getCredentials();
			const creds2 = authManager.getCredentials();
			const creds3 = authManager.getCredentials();

			// All should get the same credentials (even if expired)
			expect(creds1?.token).toBe('expired-token');
			expect(creds2?.token).toBe('expired-token');
			expect(creds3?.token).toBe('expired-token');

			// All include refresh token for Supabase to use
			expect(creds1?.refreshToken).toBe('valid-refresh-token');
			expect(creds2?.refreshToken).toBe('valid-refresh-token');
			expect(creds3?.refreshToken).toBe('valid-refresh-token');
		});
	});
});
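Editor's note: the tests above rely on a synchronous `getCredentials()` that can hand back expired credentials, plus an explicit `await refreshToken()` when a refresh is actually required. A hedged sketch of that flow from application code; the import path and the expiry check are assumptions made for illustration.

```typescript
import { AuthManager } from './packages/tm-core/src/auth/auth-manager';

// Sketch only: getCredentials() and refreshToken() are the methods the
// tests above exercise; treating a past expiresAt as "needs refresh" is
// an illustrative choice, not a documented contract.
async function getFreshToken(): Promise<string | null> {
	const auth = AuthManager.getInstance();
	const creds = auth.getCredentials();
	if (!creds) return null;

	const expired =
		creds.expiresAt !== undefined && Date.parse(creds.expiresAt) <= Date.now();
	if (!expired) return creds.token;

	try {
		const refreshed = await auth.refreshToken();
		return refreshed.token;
	} catch {
		return null; // refresh token invalid or expired
	}
}
```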
@@ -9,6 +9,8 @@
  */
 
 import dotenv from 'dotenv';
+
+// Load .env BEFORE any other imports to ensure env vars are available
 dotenv.config();
 
 // Add at the very beginning of the file
@@ -16,7 +18,8 @@ if (process.env.DEBUG === '1') {
 	console.error('DEBUG - dev.js received args:', process.argv.slice(2));
 }
 
-import { runCLI } from './modules/commands.js';
+// Use dynamic import to ensure dotenv.config() runs before module-level code executes
+const { runCLI } = await import('./modules/commands.js');
 
 // Run the CLI with the process arguments
 runCLI(process.argv);
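Editor's note: the switch to a dynamic import matters because static imports are hoisted and evaluated before any statement in the importing module, so module-level code that reads `process.env` would run before `dotenv.config()`. A small standalone sketch of the kind of module the change protects; the file name and variable are hypothetical.

```typescript
// commands.ts (sketch): module-level code that runs at import time.
// If this file were statically imported from the entry point, this line
// would execute before dotenv.config() had populated process.env.
export const apiKey = process.env.ANTHROPIC_API_KEY;

export function runCLI(argv: string[]): void {
	// With the dynamic import in dev.js, apiKey is read after .env is loaded.
	console.log(`API key configured: ${apiKey ? 'yes' : 'no'} (${argv.length} args)`);
}
```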
@@ -19,7 +19,8 @@ import {
 	registerAllCommands,
 	checkForUpdate,
 	performAutoUpdate,
-	displayUpgradeNotification
+	displayUpgradeNotification,
+	displayError
 } from '@tm/cli';
 
 import {
@@ -2441,57 +2442,6 @@ ${result.result}
 		}
 	});
 
-	// next command
-	programInstance
-		.command('next')
-		.description(
-			`Show the next task to work on based on dependencies and status${chalk.reset('')}`
-		)
-		.option(
-			'-f, --file <file>',
-			'Path to the tasks file',
-			TASKMASTER_TASKS_FILE
-		)
-		.option(
-			'-r, --report <report>',
-			'Path to the complexity report file',
-			COMPLEXITY_REPORT_FILE
-		)
-		.option('--tag <tag>', 'Specify tag context for task operations')
-		.action(async (options) => {
-			const initOptions = {
-				tasksPath: options.file || true,
-				tag: options.tag
-			};
-
-			if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
-				initOptions.complexityReportPath = options.report;
-			}
-
-			// Initialize TaskMaster
-			const taskMaster = initTaskMaster({
-				tasksPath: options.file || true,
-				tag: options.tag,
-				complexityReportPath: options.report || false
-			});
-
-			const tag = taskMaster.getCurrentTag();
-
-			const context = {
-				projectRoot: taskMaster.getProjectRoot(),
-				tag
-			};
-
-			// Show current tag context
-			displayCurrentTagIndicator(tag);
-
-			await displayNextTask(
-				taskMaster.getTasksPath(),
-				taskMaster.getComplexityReportPath(),
-				context
-			);
-		});
-
 	// add-dependency command
 	programInstance
 		.command('add-dependency')
@@ -5207,10 +5157,7 @@ async function runCLI(argv = process.argv) {
 			);
 		} else {
 			// Generic error handling for other errors
-			console.error(chalk.red(`Error: ${error.message}`));
-			if (getDebugFlag()) {
-				console.error(error);
-			}
+			displayError(error);
 		}
 
 		process.exit(1);
@@ -307,6 +307,20 @@ function validateProviderModelCombination(providerName, modelId) {
 	);
 }
 
+/**
+ * Gets the list of supported model IDs for a given provider from supported-models.json
+ * @param {string} providerName - The name of the provider (e.g., 'claude-code', 'anthropic')
+ * @returns {string[]} Array of supported model IDs, or empty array if provider not found
+ */
+export function getSupportedModelsForProvider(providerName) {
+	if (!MODEL_MAP[providerName]) {
+		return [];
+	}
+	return MODEL_MAP[providerName]
+		.filter((model) => model.supported !== false)
+		.map((model) => model.id);
+}
+
 /**
  * Validates Claude Code AI provider custom settings
  * @param {object} settings The settings to validate
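Editor's note: a minimal usage sketch of the new `getSupportedModelsForProvider()` helper. The import path relative to the repo root is an assumption; the behaviour shown (filtering out `supported: false` entries, empty array for unknown providers) comes from the hunk above.

```typescript
import { getSupportedModelsForProvider } from './scripts/modules/config-manager.js';

// Returns the ids of models whose `supported` flag is not false,
// or an empty array for a provider missing from supported-models.json.
const claudeCodeModels = getSupportedModelsForProvider('claude-code');
console.log(claudeCodeModels); // per this diff: ['opus', 'sonnet', 'haiku']

console.log(getSupportedModelsForProvider('no-such-provider')); // []
```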
@@ -43,6 +43,28 @@
 			"allowed_roles": ["main", "fallback"],
 			"max_tokens": 8192,
 			"supported": true
+		},
+		{
+			"id": "claude-sonnet-4-5-20250929",
+			"swe_score": 0.73,
+			"cost_per_1m_tokens": {
+				"input": 3.0,
+				"output": 15.0
+			},
+			"allowed_roles": ["main", "fallback"],
+			"max_tokens": 64000,
+			"supported": true
+		},
+		{
+			"id": "claude-haiku-4-5-20251001",
+			"swe_score": 0.45,
+			"cost_per_1m_tokens": {
+				"input": 1.0,
+				"output": 5.0
+			},
+			"allowed_roles": ["main", "fallback"],
+			"max_tokens": 200000,
+			"supported": true
 		}
 	],
 	"claude-code": [
@@ -67,6 +89,17 @@
 			"allowed_roles": ["main", "fallback", "research"],
 			"max_tokens": 64000,
 			"supported": true
+		},
+		{
+			"id": "haiku",
+			"swe_score": 0.45,
+			"cost_per_1m_tokens": {
+				"input": 0,
+				"output": 0
+			},
+			"allowed_roles": ["main", "fallback", "research"],
+			"max_tokens": 200000,
+			"supported": true
 		}
 	],
 	"codex-cli": [
@@ -12,7 +12,10 @@
 
 import { createClaudeCode } from 'ai-sdk-provider-claude-code';
 import { BaseAIProvider } from './base-provider.js';
-import { getClaudeCodeSettingsForCommand } from '../../scripts/modules/config-manager.js';
+import {
+	getClaudeCodeSettingsForCommand,
+	getSupportedModelsForProvider
+} from '../../scripts/modules/config-manager.js';
 import { execSync } from 'child_process';
 import { log } from '../../scripts/modules/utils.js';
 
@@ -24,14 +27,24 @@ let _claudeCliAvailable = null;
  *
  * Features:
  * - No API key required (uses local Claude Code CLI)
- * - Supports 'sonnet' and 'opus' models
+ * - Supported models loaded from supported-models.json
  * - Command-specific configuration support
  */
 export class ClaudeCodeProvider extends BaseAIProvider {
 	constructor() {
 		super();
 		this.name = 'Claude Code';
-		this.supportedModels = ['sonnet', 'opus'];
+		// Load supported models from supported-models.json
+		this.supportedModels = getSupportedModelsForProvider('claude-code');
+
+		// Validate that models were loaded successfully
+		if (this.supportedModels.length === 0) {
+			log(
+				'warn',
+				'No supported models found for claude-code provider. Check supported-models.json configuration.'
+			);
+		}
+
 		// Claude Code requires explicit JSON schema mode
 		this.needsExplicitJsonSchema = true;
 		// Claude Code does not support temperature parameter
@@ -10,7 +10,10 @@ import { createCodexCli } from 'ai-sdk-provider-codex-cli';
 import { BaseAIProvider } from './base-provider.js';
 import { execSync } from 'child_process';
 import { log } from '../../scripts/modules/utils.js';
-import { getCodexCliSettingsForCommand } from '../../scripts/modules/config-manager.js';
+import {
+	getCodexCliSettingsForCommand,
+	getSupportedModelsForProvider
+} from '../../scripts/modules/config-manager.js';
 
 export class CodexCliProvider extends BaseAIProvider {
 	constructor() {
@@ -20,8 +23,17 @@ export class CodexCliProvider extends BaseAIProvider {
 		this.needsExplicitJsonSchema = false;
 		// Codex CLI does not support temperature parameter
 		this.supportsTemperature = false;
-		// Restrict to supported models for OAuth subscription usage
-		this.supportedModels = ['gpt-5', 'gpt-5-codex'];
+		// Load supported models from supported-models.json
+		this.supportedModels = getSupportedModelsForProvider('codex-cli');
+
+		// Validate that models were loaded successfully
+		if (this.supportedModels.length === 0) {
+			log(
+				'warn',
+				'No supported models found for codex-cli provider. Check supported-models.json configuration.'
+			);
+		}
+
 		// CLI availability check cache
 		this._codexCliChecked = false;
 		this._codexCliAvailable = null;
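Editor's note: with the constructors above, model validation is driven by supported-models.json rather than hard-coded arrays. A short sketch of a caller checking a requested model against the dynamically loaded list; only the `supportedModels` property set in the constructor is taken from the diff, the rest is illustrative.

```typescript
import { CodexCliProvider } from './src/ai-providers/codex-cli.js';

// Sketch: validate a user-supplied model id before issuing a request.
const provider = new CodexCliProvider();
const requested = 'gpt-5-codex';

if (!provider.supportedModels.includes(requested)) {
	throw new Error(
		`Model ${requested} is not listed for codex-cli in supported-models.json`
	);
}
```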
@@ -47,21 +47,33 @@ export function normalizeProjectRoot(projectRoot) {
 
 /**
  * Find the project root directory by looking for project markers
- * @param {string} startDir - Directory to start searching from
- * @returns {string|null} - Project root path or null if not found
+ * Traverses upwards from startDir until a project marker is found or filesystem root is reached
+ * Limited to 50 parent directory levels to prevent excessive traversal
+ * @param {string} startDir - Directory to start searching from (defaults to process.cwd())
+ * @returns {string} - Project root path (falls back to current directory if no markers found)
  */
 export function findProjectRoot(startDir = process.cwd()) {
+	// Define project markers that indicate a project root
+	// Prioritize Task Master specific markers first
 	const projectMarkers = [
-		'.taskmaster',
-		TASKMASTER_TASKS_FILE,
-		'tasks.json',
-		LEGACY_TASKS_FILE,
-		'.git',
-		'.svn',
-		'package.json',
-		'yarn.lock',
-		'package-lock.json',
-		'pnpm-lock.yaml'
+		'.taskmaster', // Task Master directory (highest priority)
+		TASKMASTER_CONFIG_FILE, // .taskmaster/config.json
+		TASKMASTER_TASKS_FILE, // .taskmaster/tasks/tasks.json
+		LEGACY_CONFIG_FILE, // .taskmasterconfig (legacy)
+		LEGACY_TASKS_FILE, // tasks/tasks.json (legacy)
+		'tasks.json', // Root tasks.json (legacy)
+		'.git', // Git repository
+		'.svn', // SVN repository
+		'package.json', // Node.js project
+		'yarn.lock', // Yarn project
+		'package-lock.json', // npm project
+		'pnpm-lock.yaml', // pnpm project
+		'Cargo.toml', // Rust project
+		'go.mod', // Go project
+		'pyproject.toml', // Python project
+		'requirements.txt', // Python project
+		'Gemfile', // Ruby project
+		'composer.json' // PHP project
 	];
 
 	let currentDir = path.resolve(startDir);
@@ -69,19 +81,36 @@ export function findProjectRoot(startDir = process.cwd()) {
 	const maxDepth = 50; // Reasonable limit to prevent infinite loops
 	let depth = 0;
 
+	// Traverse upwards looking for project markers
 	while (currentDir !== rootDir && depth < maxDepth) {
 		// Check if current directory contains any project markers
 		for (const marker of projectMarkers) {
 			const markerPath = path.join(currentDir, marker);
-			if (fs.existsSync(markerPath)) {
-				return currentDir;
+			try {
+				if (fs.existsSync(markerPath)) {
+					// Found a project marker - return this directory as project root
+					return currentDir;
+				}
+			} catch (error) {
+				// Ignore permission errors and continue searching
+				continue;
 			}
 		}
-		currentDir = path.dirname(currentDir);
+
+		// Move up one directory level
+		const parentDir = path.dirname(currentDir);
+
+		// Safety check: if dirname returns the same path, we've hit the root
+		if (parentDir === currentDir) {
+			break;
+		}
+
+		currentDir = parentDir;
 		depth++;
 	}
 
 	// Fallback to current working directory if no project root found
+	// This ensures the function always returns a valid path
 	return process.cwd();
 }
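Editor's note: a brief sketch of the behaviour the rewritten traversal gives commands run from nested paths (the fix referenced by #1301): the walker climbs at most 50 parent levels looking for markers such as `.taskmaster`, `package.json`, or `.git`, and falls back to the current working directory. The import path is repo-relative and assumed for illustration.

```typescript
import path from 'path';
import { findProjectRoot } from './src/utils/path-utils.js';

// Run from a nested directory such as <repo>/packages/tm-core/src:
// the search walks upwards until it finds a project marker.
const root = findProjectRoot(process.cwd());
console.log(`project root: ${root}`);
console.log(`tasks file:   ${path.join(root, '.taskmaster/tasks/tasks.json')}`);
```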
123 tests/helpers/tool-counts.js Normal file
@@ -0,0 +1,123 @@
/**
 * tool-counts.js
 * Shared helper for validating tool counts across tests and validation scripts
 */

import {
	getToolCounts,
	getToolCategories
} from '../../mcp-server/src/tools/tool-registry.js';

/**
 * Expected tool counts - update these when tools are added/removed
 * These serve as the canonical source of truth for expected counts
 */
export const EXPECTED_TOOL_COUNTS = {
	core: 7,
	standard: 15,
	total: 36
};

/**
 * Expected core tools list for validation
 */
export const EXPECTED_CORE_TOOLS = [
	'get_tasks',
	'next_task',
	'get_task',
	'set_task_status',
	'update_subtask',
	'parse_prd',
	'expand_task'
];

/**
 * Validate that actual tool counts match expected counts
 * @returns {Object} Validation result with isValid flag and details
 */
export function validateToolCounts() {
	const actual = getToolCounts();
	const expected = EXPECTED_TOOL_COUNTS;

	const isValid =
		actual.core === expected.core &&
		actual.standard === expected.standard &&
		actual.total === expected.total;

	return {
		isValid,
		actual,
		expected,
		differences: {
			core: actual.core - expected.core,
			standard: actual.standard - expected.standard,
			total: actual.total - expected.total
		}
	};
}

/**
 * Validate that tool categories have correct structure and content
 * @returns {Object} Validation result
 */
export function validateToolStructure() {
	const categories = getToolCategories();
	const counts = getToolCounts();

	// Check that core tools are subset of standard tools
	const coreInStandard = categories.core.every((tool) =>
		categories.standard.includes(tool)
	);

	// Check that standard tools are subset of all tools
	const standardInAll = categories.standard.every((tool) =>
		categories.all.includes(tool)
	);

	// Check that expected core tools match actual
	const expectedCoreMatch =
		EXPECTED_CORE_TOOLS.every((tool) => categories.core.includes(tool)) &&
		categories.core.every((tool) => EXPECTED_CORE_TOOLS.includes(tool));

	// Check array lengths match counts
	const lengthsMatch =
		categories.core.length === counts.core &&
		categories.standard.length === counts.standard &&
		categories.all.length === counts.total;

	return {
		isValid:
			coreInStandard && standardInAll && expectedCoreMatch && lengthsMatch,
		details: {
			coreInStandard,
			standardInAll,
			expectedCoreMatch,
			lengthsMatch
		},
		categories,
		counts
	};
}

/**
 * Get a detailed report of all tool information
 * @returns {Object} Comprehensive tool information
 */
export function getToolReport() {
	const counts = getToolCounts();
	const categories = getToolCategories();
	const validation = validateToolCounts();
	const structure = validateToolStructure();

	return {
		counts,
		categories,
		validation,
		structure,
		summary: {
			totalValid: validation.isValid && structure.isValid,
			countsValid: validation.isValid,
			structureValid: structure.isValid
		}
	};
}
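Editor's note: a sketch of how a standalone validation script might consume the helper above, for example as a CI guard against registry drift. The exit-code handling is illustrative; the exported functions and their return shapes come from the file above.

```typescript
import { validateToolCounts, getToolReport } from './tests/helpers/tool-counts.js';

// Fail fast in CI when the registry drifts from the documented counts.
const result = validateToolCounts();
if (!result.isValid) {
	console.error('Tool counts drifted from expectations:', result.differences);
	process.exit(1);
}

// Summary contains { totalValid, countsValid, structureValid }.
console.log(getToolReport().summary);
```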
@@ -43,9 +43,9 @@ describe('Claude Code Error Handling', () => {
 
 		// These should work even if CLI is not available
 		expect(provider.name).toBe('Claude Code');
-		expect(provider.getSupportedModels()).toEqual(['sonnet', 'opus']);
+		expect(provider.getSupportedModels()).toEqual(['opus', 'sonnet', 'haiku']);
 		expect(provider.isModelSupported('sonnet')).toBe(true);
-		expect(provider.isModelSupported('haiku')).toBe(false);
+		expect(provider.isModelSupported('haiku')).toBe(true);
 		expect(provider.isRequiredApiKey()).toBe(false);
 		expect(() => provider.validateAuth()).not.toThrow();
 	});
@@ -40,14 +40,14 @@ describe('Claude Code Integration (Optional)', () => {
 	it('should create a working provider instance', () => {
 		const provider = new ClaudeCodeProvider();
 		expect(provider.name).toBe('Claude Code');
-		expect(provider.getSupportedModels()).toEqual(['sonnet', 'opus']);
+		expect(provider.getSupportedModels()).toEqual(['opus', 'sonnet', 'haiku']);
 	});
 
 	it('should support model validation', () => {
 		const provider = new ClaudeCodeProvider();
 		expect(provider.isModelSupported('sonnet')).toBe(true);
 		expect(provider.isModelSupported('opus')).toBe(true);
-		expect(provider.isModelSupported('haiku')).toBe(false);
+		expect(provider.isModelSupported('haiku')).toBe(true);
 		expect(provider.isModelSupported('unknown')).toBe(false);
 	});
 
@@ -28,6 +28,14 @@ jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
 	}
 }));
 
+// Mock config getters
+jest.unstable_mockModule('../../../scripts/modules/config-manager.js', () => ({
+	getClaudeCodeSettingsForCommand: jest.fn(() => ({})),
+	getSupportedModelsForProvider: jest.fn(() => ['opus', 'sonnet', 'haiku']),
+	getDebugFlag: jest.fn(() => false),
+	getLogLevel: jest.fn(() => 'info')
+}));
+
 // Import after mocking
 const { ClaudeCodeProvider } = await import(
 	'../../../src/ai-providers/claude-code.js'
@@ -96,13 +104,13 @@ describe('ClaudeCodeProvider', () => {
 	describe('model support', () => {
 		it('should return supported models', () => {
 			const models = provider.getSupportedModels();
-			expect(models).toEqual(['sonnet', 'opus']);
+			expect(models).toEqual(['opus', 'sonnet', 'haiku']);
 		});
 
 		it('should check if model is supported', () => {
 			expect(provider.isModelSupported('sonnet')).toBe(true);
 			expect(provider.isModelSupported('opus')).toBe(true);
-			expect(provider.isModelSupported('haiku')).toBe(false);
+			expect(provider.isModelSupported('haiku')).toBe(true);
 			expect(provider.isModelSupported('unknown')).toBe(false);
 		});
 	});
@@ -20,6 +20,7 @@ jest.unstable_mockModule('ai-sdk-provider-codex-cli', () => ({
 // Mock config getters
 jest.unstable_mockModule('../../../scripts/modules/config-manager.js', () => ({
 	getCodexCliSettingsForCommand: jest.fn(() => ({ allowNpx: true })),
+	getSupportedModelsForProvider: jest.fn(() => ['gpt-5', 'gpt-5-codex']),
 	// Provide commonly imported getters to satisfy other module imports if any
 	getDebugFlag: jest.fn(() => false),
 	getLogLevel: jest.fn(() => 'info')
410
tests/unit/mcp/tools/tool-registration.test.js
Normal file
410
tests/unit/mcp/tools/tool-registration.test.js
Normal file
@@ -0,0 +1,410 @@
|
|||||||
|
/**
|
||||||
|
* tool-registration.test.js
|
||||||
|
* Comprehensive unit tests for the Task Master MCP tool registration system
|
||||||
|
* Tests environment variable control system covering all configuration modes and edge cases
|
||||||
|
*/
|
||||||
|
|
||||||
|
import {
|
||||||
|
describe,
|
||||||
|
it,
|
||||||
|
expect,
|
||||||
|
beforeEach,
|
||||||
|
afterEach,
|
||||||
|
jest
|
||||||
|
} from '@jest/globals';
|
||||||
|
|
||||||
|
import {
|
||||||
|
EXPECTED_TOOL_COUNTS,
|
||||||
|
EXPECTED_CORE_TOOLS,
|
||||||
|
validateToolCounts,
|
||||||
|
validateToolStructure
|
||||||
|
} from '../../../helpers/tool-counts.js';
|
||||||
|
|
||||||
|
import { registerTaskMasterTools } from '../../../../mcp-server/src/tools/index.js';
|
||||||
|
import {
|
||||||
|
toolRegistry,
|
||||||
|
coreTools,
|
||||||
|
standardTools
|
||||||
|
} from '../../../../mcp-server/src/tools/tool-registry.js';
|
||||||
|
|
||||||
|
// Derive constants from imported registry to avoid brittle magic numbers
|
||||||
|
const ALL_COUNT = Object.keys(toolRegistry).length;
|
||||||
|
const CORE_COUNT = coreTools.length;
|
||||||
|
const STANDARD_COUNT = standardTools.length;
|
||||||
|
|
||||||
|
describe('Task Master Tool Registration System', () => {
|
||||||
|
let mockServer;
|
||||||
|
let originalEnv;
|
||||||
|
|
||||||
|
beforeEach(() => {
|
||||||
|
originalEnv = process.env.TASK_MASTER_TOOLS;
|
||||||
|
|
||||||
|
mockServer = {
|
||||||
|
tools: [],
|
||||||
|
addTool: jest.fn((tool) => {
|
||||||
|
mockServer.tools.push(tool);
|
||||||
|
return tool;
|
||||||
|
})
|
||||||
|
};
|
||||||
|
|
||||||
|
delete process.env.TASK_MASTER_TOOLS;
|
||||||
|
});
|
||||||
|
|
||||||
|
afterEach(() => {
|
||||||
|
if (originalEnv !== undefined) {
|
||||||
|
process.env.TASK_MASTER_TOOLS = originalEnv;
|
||||||
|
} else {
|
||||||
|
delete process.env.TASK_MASTER_TOOLS;
|
||||||
|
}
|
||||||
|
|
||||||
|
jest.clearAllMocks();
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Test Environment Setup', () => {
|
||||||
|
it('should have properly configured mock server', () => {
|
||||||
|
expect(mockServer).toBeDefined();
|
||||||
|
expect(typeof mockServer.addTool).toBe('function');
|
||||||
|
expect(Array.isArray(mockServer.tools)).toBe(true);
|
||||||
|
expect(mockServer.tools.length).toBe(0);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should have correct tool registry structure', () => {
|
||||||
|
const validation = validateToolCounts();
|
||||||
|
expect(validation.isValid).toBe(true);
|
||||||
|
|
||||||
|
if (!validation.isValid) {
|
||||||
|
console.error('Tool count validation failed:', validation);
|
||||||
|
}
|
||||||
|
|
||||||
|
expect(validation.actual.total).toBe(EXPECTED_TOOL_COUNTS.total);
|
||||||
|
expect(validation.actual.core).toBe(EXPECTED_TOOL_COUNTS.core);
|
||||||
|
expect(validation.actual.standard).toBe(EXPECTED_TOOL_COUNTS.standard);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should have correct core tools', () => {
|
||||||
|
const structure = validateToolStructure();
|
||||||
|
expect(structure.isValid).toBe(true);
|
||||||
|
|
||||||
|
if (!structure.isValid) {
|
||||||
|
console.error('Tool structure validation failed:', structure);
|
||||||
|
}
|
||||||
|
|
||||||
|
expect(coreTools).toEqual(expect.arrayContaining(EXPECTED_CORE_TOOLS));
|
||||||
|
expect(coreTools.length).toBe(EXPECTED_TOOL_COUNTS.core);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should have correct standard tools that include all core tools', () => {
|
||||||
|
const structure = validateToolStructure();
|
||||||
|
expect(structure.details.coreInStandard).toBe(true);
|
||||||
|
expect(standardTools.length).toBe(EXPECTED_TOOL_COUNTS.standard);
|
||||||
|
|
||||||
|
coreTools.forEach((tool) => {
|
||||||
|
expect(standardTools).toContain(tool);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should have all expected tools in registry', () => {
|
||||||
|
const expectedTools = [
|
||||||
|
'initialize_project',
|
||||||
|
'models',
|
||||||
|
'research',
|
||||||
|
'add_tag',
|
||||||
|
'delete_tag',
|
||||||
|
'get_tasks',
|
||||||
|
'next_task',
|
||||||
|
'get_task'
|
||||||
|
];
|
||||||
|
expectedTools.forEach((tool) => {
|
||||||
|
expect(toolRegistry).toHaveProperty(tool);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Configuration Modes', () => {
|
||||||
|
it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS is not set (default behavior)`, () => {
|
||||||
|
delete process.env.TASK_MASTER_TOOLS;
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer);
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(
|
||||||
|
EXPECTED_TOOL_COUNTS.total
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS=all`, () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'all';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer);
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
|
||||||
|
});
|
||||||
|
|
||||||
|
it(`should register exactly ${CORE_COUNT} core tools when TASK_MASTER_TOOLS=core`, () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'core';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer, 'core');
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(
|
||||||
|
EXPECTED_TOOL_COUNTS.core
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it(`should register exactly ${STANDARD_COUNT} standard tools when TASK_MASTER_TOOLS=standard`, () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'standard';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer, 'standard');
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(
|
||||||
|
EXPECTED_TOOL_COUNTS.standard
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it(`should treat lean as alias for core mode (${CORE_COUNT} tools)`, () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'lean';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer, 'lean');
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle case insensitive configuration values', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'CORE';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer, 'CORE');
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Custom Tool Selection and Edge Cases', () => {
|
||||||
|
it('should register specific tools from comma-separated list', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'get_tasks,next_task,get_task';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer, 'get_tasks,next_task,get_task');
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(3);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle mixed valid and invalid tool names gracefully', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS =
|
||||||
|
'invalid_tool,get_tasks,fake_tool,next_task';
|
||||||
|
|
||||||
|
registerTaskMasterTools(
|
||||||
|
mockServer,
|
||||||
|
'invalid_tool,get_tasks,fake_tool,next_task'
|
||||||
|
);
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(2);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should default to all tools with completely invalid input', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'completely_invalid';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer);
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle empty string environment variable', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = '';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer);
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle whitespace in comma-separated lists', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = ' get_tasks , next_task , get_task ';
|
||||||
|
|
||||||
|
registerTaskMasterTools(mockServer, ' get_tasks , next_task , get_task ');
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(3);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should ignore duplicate tools in list', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = 'get_tasks,get_tasks,next_task,get_tasks';
|
||||||
|
|
||||||
|
registerTaskMasterTools(
|
||||||
|
mockServer,
|
||||||
|
'get_tasks,get_tasks,next_task,get_tasks'
|
||||||
|
);
|
||||||
|
|
||||||
|
expect(mockServer.addTool).toHaveBeenCalledTimes(2);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle only commas and empty entries', () => {
|
||||||
|
process.env.TASK_MASTER_TOOLS = ',,,';
			registerTaskMasterTools(mockServer);

			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it('should handle single tool selection', () => {
			process.env.TASK_MASTER_TOOLS = 'get_tasks';

			registerTaskMasterTools(mockServer, 'get_tasks');

			expect(mockServer.addTool).toHaveBeenCalledTimes(1);
		});
	});

	describe('Coverage Analysis and Integration Tests', () => {
		it('should provide 100% code coverage for environment control logic', () => {
			const testCases = [
				{
					env: undefined,
					expectedCount: ALL_COUNT,
					description: 'undefined env (all)'
				},
				{
					env: '',
					expectedCount: ALL_COUNT,
					description: 'empty string (all)'
				},
				{ env: 'all', expectedCount: ALL_COUNT, description: 'all mode' },
				{ env: 'core', expectedCount: CORE_COUNT, description: 'core mode' },
				{
					env: 'lean',
					expectedCount: CORE_COUNT,
					description: 'lean mode (alias)'
				},
				{
					env: 'standard',
					expectedCount: STANDARD_COUNT,
					description: 'standard mode'
				},
				{
					env: 'get_tasks,next_task',
					expectedCount: 2,
					description: 'custom list'
				},
				{
					env: 'invalid_tool',
					expectedCount: ALL_COUNT,
					description: 'invalid fallback'
				}
			];

			testCases.forEach((testCase) => {
				delete process.env.TASK_MASTER_TOOLS;
				if (testCase.env !== undefined) {
					process.env.TASK_MASTER_TOOLS = testCase.env;
				}

				mockServer.tools = [];
				mockServer.addTool.mockClear();

				registerTaskMasterTools(mockServer, testCase.env || 'all');

				expect(mockServer.addTool).toHaveBeenCalledTimes(
					testCase.expectedCount
				);
			});
		});

		it('should have optimal performance characteristics', () => {
			const startTime = Date.now();

			process.env.TASK_MASTER_TOOLS = 'all';

			registerTaskMasterTools(mockServer);

			const endTime = Date.now();
			const executionTime = endTime - startTime;

			expect(executionTime).toBeLessThan(100);
			expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
		});

		it('should validate token reduction claims', () => {
			expect(coreTools.length).toBeLessThan(standardTools.length);
			expect(standardTools.length).toBeLessThan(
				Object.keys(toolRegistry).length
			);

			expect(coreTools.length).toBe(CORE_COUNT);
			expect(standardTools.length).toBe(STANDARD_COUNT);
			expect(Object.keys(toolRegistry).length).toBe(ALL_COUNT);

			const allToolsCount = Object.keys(toolRegistry).length;
			const coreReduction =
				((allToolsCount - coreTools.length) / allToolsCount) * 100;
			const standardReduction =
				((allToolsCount - standardTools.length) / allToolsCount) * 100;

			expect(coreReduction).toBeGreaterThan(80);
			expect(standardReduction).toBeGreaterThan(50);
		});

		it('should maintain referential integrity of tool registry', () => {
			coreTools.forEach((tool) => {
				expect(standardTools).toContain(tool);
			});

			standardTools.forEach((tool) => {
				expect(toolRegistry).toHaveProperty(tool);
			});

			Object.keys(toolRegistry).forEach((tool) => {
				expect(typeof toolRegistry[tool]).toBe('function');
			});
		});

		it('should handle concurrent registration attempts', () => {
			process.env.TASK_MASTER_TOOLS = 'core';

			registerTaskMasterTools(mockServer, 'core');
			registerTaskMasterTools(mockServer, 'core');
			registerTaskMasterTools(mockServer, 'core');

			expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT * 3);
		});

		it('should validate all documented tool categories exist', () => {
			const allTools = Object.keys(toolRegistry);

			const projectSetupTools = allTools.filter((tool) =>
				['initialize_project', 'models', 'rules', 'parse_prd'].includes(tool)
			);
			expect(projectSetupTools.length).toBeGreaterThan(0);

			const taskManagementTools = allTools.filter((tool) =>
				['get_tasks', 'get_task', 'next_task', 'set_task_status'].includes(tool)
			);
			expect(taskManagementTools.length).toBeGreaterThan(0);

			const analysisTools = allTools.filter((tool) =>
				['analyze_project_complexity', 'complexity_report'].includes(tool)
			);
			expect(analysisTools.length).toBeGreaterThan(0);

			const tagManagementTools = allTools.filter((tool) =>
				['add_tag', 'delete_tag', 'list_tags', 'use_tag'].includes(tool)
			);
			expect(tagManagementTools.length).toBeGreaterThan(0);
		});

		it('should handle error conditions gracefully', () => {
			const problematicInputs = [
				'null',
				'undefined',
				' ',
				'\n\t',
				'special!@#$%^&*()characters',
				'very,very,very,very,very,very,very,long,comma,separated,list,with,invalid,tools,that,should,fallback,to,all'
			];

			problematicInputs.forEach((input) => {
				mockServer.tools = [];
				mockServer.addTool.mockClear();

				process.env.TASK_MASTER_TOOLS = input;

				expect(() => registerTaskMasterTools(mockServer)).not.toThrow();

				expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
			});
		});
	});
});
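The coverage matrix above pins down how the `TASK_MASTER_TOOLS` value is expected to resolve: the named presets `all`, `core`/`lean`, and `standard`, a comma-separated custom list, and a fall-back to the full registry for empty or unrecognized input. A minimal sketch of that resolution step is shown below; it is an illustration only, not the shipped implementation. The helper name `resolveToolSelection` and the normalization details are assumptions, while `coreTools`, `standardTools`, and `toolRegistry` stand in for the exports the tests already import.

```js
// Hypothetical sketch: mirrors the behavior asserted in the tests above.
function resolveToolSelection(envValue, { coreTools, standardTools, toolRegistry }) {
	const value = (envValue ?? '').trim().toLowerCase();

	if (value === '' || value === 'all') return Object.keys(toolRegistry); // default: every tool
	if (value === 'core' || value === 'lean') return [...coreTools]; // 'lean' is an alias for 'core'
	if (value === 'standard') return [...standardTools];

	// Comma-separated custom list: keep only names that exist in the registry.
	const requested = value
		.split(',')
		.map((name) => name.trim())
		.filter(Boolean);
	const valid = requested.filter((name) => name in toolRegistry);

	// Unrecognized input ('invalid_tool', 'null', '\n\t', ...) falls back to all
	// tools, matching the invalid-fallback and error-condition cases above.
	return valid.length > 0 ? valid : Object.keys(toolRegistry);
}
```

Registration would then loop over the resolved names and call `server.addTool(...)` once per tool, which is what the call-count assertions measure. Assuming the documented preset sizes (7 core, 15 standard, 36 total), the reductions asserted in `should validate token reduction claims` come out to (36 − 7) / 36 ≈ 80.6% and (36 − 15) / 36 ≈ 58.3%, hence the 80 and 50 thresholds.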
223 tests/unit/path-utils-find-project-root.test.js Normal file
@@ -0,0 +1,223 @@
/**
 * Unit tests for findProjectRoot() function
 * Tests the parent directory traversal functionality
 */

import { jest } from '@jest/globals';
import path from 'path';
import fs from 'fs';

// Import the function to test
import { findProjectRoot } from '../../src/utils/path-utils.js';

describe('findProjectRoot', () => {
	describe('Parent Directory Traversal', () => {
		test('should find .taskmaster in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// .taskmaster exists only at /project
				return normalized === path.normalize('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should find .git in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				return normalized === path.normalize('/project/.git');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should find package.json in parent directory', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				return normalized === path.normalize('/project/package.json');
			});

			const result = findProjectRoot('/project/subdir');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should traverse multiple levels to find project root', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// Only exists at /project, not in any subdirectories
				return normalized === path.normalize('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir/deep/nested');

			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should return current directory as fallback when no markers found', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// No project markers exist anywhere
			mockExistsSync.mockReturnValue(false);

			const result = findProjectRoot('/some/random/path');

			// Should fall back to process.cwd()
			expect(result).toBe(process.cwd());

			mockExistsSync.mockRestore();
		});

		test('should find markers at current directory before checking parent', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// .git exists at /project/subdir, .taskmaster exists at /project
				if (normalized.includes('/project/subdir/.git')) return true;
				if (normalized.includes('/project/.taskmaster')) return true;
				return false;
			});

			const result = findProjectRoot('/project/subdir');

			// Should find /project/subdir first because .git exists there,
			// even though .taskmaster is earlier in the marker array
			expect(result).toBe('/project/subdir');

			mockExistsSync.mockRestore();
		});

		test('should handle permission errors gracefully', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				const normalized = path.normalize(checkPath);
				// Throw permission error for checks in /project/subdir
				if (normalized.startsWith('/project/subdir/')) {
					throw new Error('EACCES: permission denied');
				}
				// Return true only for .taskmaster at /project
				return normalized.includes('/project/.taskmaster');
			});

			const result = findProjectRoot('/project/subdir');

			// Should handle permission errors in subdirectory and traverse to parent
			expect(result).toBe('/project');

			mockExistsSync.mockRestore();
		});

		test('should detect filesystem root correctly', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// No markers exist
			mockExistsSync.mockReturnValue(false);

			const result = findProjectRoot('/');

			// Should stop at root and fall back to process.cwd()
			expect(result).toBe(process.cwd());

			mockExistsSync.mockRestore();
		});

		test('should recognize various project markers', () => {
			const projectMarkers = [
				'.taskmaster',
				'.git',
				'package.json',
				'Cargo.toml',
				'go.mod',
				'pyproject.toml',
				'requirements.txt',
				'Gemfile',
				'composer.json'
			];

			projectMarkers.forEach((marker) => {
				const mockExistsSync = jest.spyOn(fs, 'existsSync');

				mockExistsSync.mockImplementation((checkPath) => {
					const normalized = path.normalize(checkPath);
					return normalized.includes(`/project/${marker}`);
				});

				const result = findProjectRoot('/project/subdir');

				expect(result).toBe('/project');

				mockExistsSync.mockRestore();
			});
		});
	});

	describe('Edge Cases', () => {
		test('should handle empty string as startDir', () => {
			const result = findProjectRoot('');

			// Should use process.cwd() or fall back appropriately
			expect(typeof result).toBe('string');
			expect(result.length).toBeGreaterThan(0);
		});

		test('should handle relative paths', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			mockExistsSync.mockImplementation((checkPath) => {
				// Simulate .git existing in the resolved path
				return checkPath.includes('.git');
			});

			const result = findProjectRoot('./subdir');

			expect(typeof result).toBe('string');

			mockExistsSync.mockRestore();
		});

		test('should not exceed max depth limit', () => {
			const mockExistsSync = jest.spyOn(fs, 'existsSync');

			// Track how many times existsSync is called
			let callCount = 0;
			mockExistsSync.mockImplementation(() => {
				callCount++;
				return false; // Never find a marker
			});

			// Create a very deep path
			const deepPath = '/a/'.repeat(100) + 'deep';
			const result = findProjectRoot(deepPath);

			// Should stop after max depth (50) and not check 100 levels
			// Each level checks multiple markers, so callCount will be high but bounded
			expect(callCount).toBeLessThan(1000); // Reasonable upper bound
			// With 18 markers and max depth of 50, expect around 900 calls maximum
			expect(callCount).toBeLessThanOrEqual(50 * 18);

			mockExistsSync.mockRestore();
		});
	});
});
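The behavior these tests pin down (upward traversal from `startDir`, a list of project markers checked at each level, a bounded search depth, tolerance of `fs.existsSync` errors, and a `process.cwd()` fallback) could be implemented roughly as in the sketch below. This is a hedged illustration, not the actual `src/utils/path-utils.js` code: the marker list is only the subset the tests enumerate (the comments suggest the real list has about 18 entries), and the constant names are assumptions.

```js
import fs from 'fs';
import path from 'path';

// Subset of markers exercised by the tests; the real list is assumed to be longer (~18 entries).
const PROJECT_MARKERS = [
	'.taskmaster',
	'.git',
	'package.json',
	'Cargo.toml',
	'go.mod',
	'pyproject.toml',
	'requirements.txt',
	'Gemfile',
	'composer.json'
];
const MAX_DEPTH = 50; // bound implied by the max-depth test

export function findProjectRoot(startDir = process.cwd()) {
	let current = path.resolve(startDir || process.cwd());

	for (let depth = 0; depth < MAX_DEPTH; depth++) {
		// A marker at the current level wins, even if an ancestor also has one.
		for (const marker of PROJECT_MARKERS) {
			try {
				if (fs.existsSync(path.join(current, marker))) {
					return current;
				}
			} catch {
				// Ignore EACCES and similar errors; keep walking upward.
			}
		}

		const parent = path.dirname(current);
		if (parent === current) break; // reached the filesystem root
		current = parent;
	}

	// No marker found within the depth limit: fall back to the working directory.
	return process.cwd();
}
```

A command run from a deeply nested subdirectory then resolves to the nearest ancestor containing a marker, which is what the multi-level traversal test asserts.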