Compare commits


1 Commit

Author SHA1 Message Date
github-actions[bot]
533420a17e docs: auto-update documentation based on changes in next branch
This PR was automatically generated to update documentation based on recent changes.

  Original commit: chore: cleanup changelog and pre exit

  Co-authored-by: Claude <claude-assistant@anthropic.com>
2025-10-11 19:27:31 +00:00
62 changed files with 431 additions and 2943 deletions

View File

@@ -0,0 +1,7 @@
---
"task-master-ai": minor
---
Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.
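One way such a notice could be assembled is sketched below; the function and field names are illustrative assumptions, not the actual CLI implementation:

```typescript
// Illustrative sketch only: build a short "What's New" notice from a
// parsed changelog entry, capping the number of highlights shown.
interface ChangelogEntry {
	version: string;
	highlights: string[];
}

function whatsNew(entry: ChangelogEntry, max = 3): string {
	// Keep the notice compact: at most `max` bullet lines.
	const lines = entry.highlights.slice(0, max).map((h) => `  • ${h}`);
	return [`What's New in ${entry.version}:`, ...lines].join('\n');
}
```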

View File

@@ -11,7 +11,6 @@
 	"access": "public",
 	"baseBranch": "main",
 	"ignore": [
-		"docs",
-		"@tm/claude-code-plugin"
+		"docs"
 	]
 }

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Improve auth token refresh flow

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Enable Task Master commands to traverse parent directories to find project root from nested paths
Fixes #1301

View File

@@ -1,5 +0,0 @@
---
"@tm/cli": patch
---
Fix warning message box width to match dashboard box width for consistent UI alignment

View File

@@ -1,35 +0,0 @@
---
"task-master-ai": minor
---
Add configurable MCP tool loading to optimize LLM context usage
You can now control which Task Master MCP tools are loaded by setting the `TASK_MASTER_TOOLS` environment variable in your MCP configuration. This helps reduce context usage for LLMs by only loading the tools you need.
**Configuration Options:**
- `all` (default): Load all 36 tools
- `core` or `lean`: Load only 7 essential tools for daily development
- Includes: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Load 15 commonly used tools (all core tools plus 8 more)
- Additional tools: `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
- Custom list: Comma-separated tool names (e.g., `get_tasks,next_task,set_task_status`)
**Example .mcp.json configuration:**
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
For complete details on all available tools, configuration examples, and usage guidelines, see the [MCP Tools documentation](https://docs.task-master.dev/capabilities/mcp#configurable-tool-loading).
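For the custom-list option, the same `.mcp.json` shape applies; a sketch using tool names from the core set listed above:

```json
{
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "task-master-ai"],
			"env": {
				"TASK_MASTER_TOOLS": "get_tasks,next_task,get_task,set_task_status",
				"ANTHROPIC_API_KEY": "your_key_here"
			}
		}
	}
}
```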

View File

@@ -0,0 +1,47 @@
---
"task-master-ai": minor
---
Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
### Changed: `rules add claude` behavior
The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. Remove the old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!

View File

@@ -0,0 +1,17 @@
---
"task-master-ai": minor
---
Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
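The topological ordering the template is designed to produce can be sketched generically; this is an illustration of dependency-first ordering, not Task Master's actual parser:

```typescript
// Generic sketch: given an explicit dependency graph (each key depends
// on the modules listed in its value), emit a topologically ordered
// list where every dependency precedes its dependents.
function topoOrder(graph: Record<string, string[]>): string[] {
	const order: string[] = [];
	const seen = new Set<string>();
	const visit = (node: string) => {
		if (seen.has(node)) return;
		seen.add(node);
		for (const dep of graph[node] ?? []) visit(dep); // dependencies first
		order.push(node);
	};
	for (const node of Object.keys(graph)) visit(node);
	return order;
}
```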

View File

@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---
Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
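The crux of such a fix is resolving both endpoints across task levels before mutating the persisted list; a hypothetical sketch (not the actual Task Master implementation):

```typescript
// Hypothetical sketch of cross-level ID resolution. Dotted IDs ("2.2")
// address subtasks; plain IDs ("11") address top-level tasks.
interface Task {
	id: number;
	dependencies: (number | string)[];
	subtasks?: Task[];
}

function findTask(tasks: Task[], id: string): Task | undefined {
	const [parentId, subId] = id.split('.');
	const parent = tasks.find((t) => t.id === Number(parentId));
	if (!parent || subId === undefined) return parent;
	return parent.subtasks?.find((s) => s.id === Number(subId));
}

// Locate BOTH endpoints (which may live at different levels), then
// mutate the task object that actually gets written back to disk.
function addDependency(tasks: Task[], id: string, dependsOn: string): boolean {
	const task = findTask(tasks, id);
	const dep = findTask(tasks, dependsOn);
	if (!task || !dep) return false; // report failure instead of a silent no-op
	task.dependencies.push(dependsOn.includes('.') ? dependsOn : Number(dependsOn));
	return true;
}
```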

21
.changeset/pre.json Normal file
View File

@@ -0,0 +1,21 @@
{
"mode": "exit",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.28.0",
"@tm/cli": "",
"docs": "0.0.5",
"extension": "0.25.5",
"@tm/ai-sdk-provider-grok-cli": "",
"@tm/build-config": "",
"@tm/claude-code-plugin": "0.0.1",
"@tm/core": ""
},
"changesets": [
"auto-update-changelog-highlights",
"mean-planes-wave",
"nice-ways-hope",
"plain-falcons-serve",
"smart-owls-relax"
]
}

View File

@@ -0,0 +1,16 @@
---
"task-master-ai": minor
---
Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
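The fallback behavior described above can be sketched as follows; the field and function names here are illustrative assumptions, not the actual Task Master internals:

```typescript
// Hypothetical sketch: prefer the analyze-complexity recommendation
// for a task when one exists, otherwise fall back to a uniform default.
interface ComplexityEntry {
	taskId: number;
	recommendedSubtasks: number;
}

function resolveSubtaskCount(
	taskId: number,
	report: ComplexityEntry[] | undefined,
	defaultCount = 5
): number {
	const entry = report?.find((e) => e.taskId === taskId);
	// Missing report or missing entry both fall back to the default.
	return entry?.recommendedSubtasks ?? defaultCount;
}
```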

View File

@@ -14,4 +14,4 @@ OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE
VERTEX_PROJECT_ID=your-gcp-project-id
VERTEX_LOCATION=us-central1
# Optional: Path to service account credentials JSON file (alternative to API key)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json

View File

@@ -1,94 +1,5 @@
# task-master-ai
## 0.29.0
### Minor Changes
- [#1286](https://github.com/eyaltoledano/claude-task-master/pull/1286) [`f12a16d`](https://github.com/eyaltoledano/claude-task-master/commit/f12a16d09649f62148515f11f616157c7d0bd2d5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add changelog highlights to auto-update notifications
When the CLI auto-updates to a new version, it now displays a "What's New" section.
- [#1293](https://github.com/eyaltoledano/claude-task-master/pull/1293) [`3010b90`](https://github.com/eyaltoledano/claude-task-master/commit/3010b90d98f3a7d8636caa92fc33d6ee69d4bed0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code plugin with marketplace distribution
This release introduces official Claude Code plugin support, marking the evolution from legacy `.claude` directory copying to a modern plugin-based architecture.
## 🎉 New: Claude Code Plugin
Task Master AI commands and agents are now distributed as a proper Claude Code plugin:
- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
**Installation:**
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
### The `rules add claude` command no longer copies commands and agents to `.claude/commands/` and `.claude/agents/`. Instead, it now
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
**Migration for Existing Users:**
If you previously used `rules add claude`:
1. The old commands in `.claude/commands/` will continue to work but won't receive updates
2. Install the plugin for the latest features: `/plugin install taskmaster@taskmaster`
3. remove old `.claude/commands/` and `.claude/agents/` directories
**Why This Change?**
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!
- [#1285](https://github.com/eyaltoledano/claude-task-master/pull/1285) [`2a910a4`](https://github.com/eyaltoledano/claude-task-master/commit/2a910a40bac375f9f61d797bf55597303d556b48) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add RPG (Repository Planning Graph) method template for structured PRD creation. The new `example_prd_rpg.txt` template teaches AI agents and developers the RPG methodology through embedded instructions, inline good/bad examples, and XML-style tags for structure. This template enables creation of dependency-aware PRDs that automatically generate topologically-ordered task graphs when parsed with Task Master.
Key features:
- Method-as-template: teaches RPG principles (dual-semantics, explicit dependencies, topological order) while being used
- Inline instructions at decision points guide AI through each section
- Good/bad examples for immediate pattern matching
- Flexible plain-text format with XML-style tags for parseability
- Critical dependency-graph section ensures correct task ordering
- Automatic inclusion during `task-master init`
- Comprehensive documentation at [docs.task-master.dev/capabilities/rpg-method](https://docs.task-master.dev/capabilities/rpg-method)
- Tool recommendations for code-context-aware PRD creation (Claude Code, Cursor, Gemini CLI, Codex/Grok)
The RPG template complements the existing `example_prd.txt` and provides a more structured approach for complex projects requiring clear module boundaries and dependency chains.
- [#1287](https://github.com/eyaltoledano/claude-task-master/pull/1287) [`90e6bdc`](https://github.com/eyaltoledano/claude-task-master/commit/90e6bdcf1c59f65ad27fcdfe3b13b9dca7e77654) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhance `expand_all` to intelligently use complexity analysis recommendations when expanding tasks.
The expand-all operation now automatically leverages recommendations from `analyze-complexity` to determine optimal subtask counts for each task, resulting in more accurate and context-aware task breakdowns.
Key improvements:
- Automatic integration with complexity analysis reports
- Tag-aware complexity report path resolution
- Intelligent subtask count determination based on task complexity
- Falls back to defaults when complexity analysis is unavailable
- Enhanced logging for better visibility into expansion decisions
When you run `task-master expand --all` after `task-master analyze-complexity`, Task Master now uses the recommended subtask counts from the complexity analysis instead of applying uniform defaults, ensuring each task is broken down according to its actual complexity.
### Patch Changes
- [#1191](https://github.com/eyaltoledano/claude-task-master/pull/1191) [`aaf903f`](https://github.com/eyaltoledano/claude-task-master/commit/aaf903ff2f606c779a22e9a4b240ab57b3683815) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix cross-level task dependencies not being saved
Fixes an issue where adding dependencies between subtasks and top-level tasks (e.g., `task-master add-dependency --id=2.2 --depends-on=11`) would report success but fail to persist the changes. Dependencies can now be created in both directions between any task levels.
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`4c1ef2c`](https://github.com/eyaltoledano/claude-task-master/commit/4c1ef2ca94411c53bcd2a78ec710b06c500236dd) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
## 0.29.0-rc.1
### Patch Changes
- [#1299](https://github.com/eyaltoledano/claude-task-master/pull/1299) [`a6c5152`](https://github.com/eyaltoledano/claude-task-master/commit/a6c5152f20edd8717cf1aea34e7c178b1261aa99) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve refresh token when authenticating
## 0.29.0-rc.0
### Minor Changes

View File

@@ -119,7 +119,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -149,7 +148,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
@@ -198,7 +196,7 @@ Initialize taskmaster-ai in my project
#### 5. Make sure you have a PRD (Recommended)
-For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`.
+For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`
An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
@@ -284,76 +282,6 @@ task-master generate
task-master rules add windsurf,roo,vscode
```
## Tool Loading Configuration
### Optimizing MCP Tool Loading
Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.
You can optimize performance by configuring the `TASK_MASTER_TOOLS` environment variable:
### Available Modes
| Mode | Tools | Context Usage | Use Case |
|------|-------|--------------|----------|
| `all` (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
| `standard` | 15 | ~10,000 tokens | Common task management operations |
| `core` (or `lean`) | 7 | ~5,000 tokens | Essential daily development workflow |
| `custom` | Variable | Variable | Comma-separated list of specific tools |
### Configuration Methods
#### Method 1: Environment Variable in MCP Configuration
Add `TASK_MASTER_TOOLS` to your MCP configuration file's `env` section:
```jsonc
{
"mcpServers": { // or "servers" for VS Code
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
"ANTHROPIC_API_KEY": "your-key-here",
// ... other API keys
}
}
}
}
```
#### Method 2: Claude Code CLI (One-Time Setup)
For Claude Code users, you can set the mode during installation:
```bash
# Core mode example (~70% token reduction)
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="core" \
-- npx -y task-master-ai@latest
# Custom tools example
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
-- npx -y task-master-ai@latest
```
### Tool Sets Details
**Core Tools (7):** `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
**Standard Tools (15):** All core tools plus `initialize_project`, `analyze_project_complexity`, `expand_all`, `add_subtask`, `remove_task`, `generate`, `add_task`, `complexity_report`
**All Tools (36):** Complete set including project setup, task management, analysis, dependencies, tags, research, and more
### Recommendations
- **New users**: Start with `"standard"` mode for a good balance
- **Large projects**: Use `"core"` mode to minimize token usage
- **Complex workflows**: Use `"all"` mode or custom selection
- **Backward compatibility**: If not specified, defaults to `"all"` mode
## Claude Code Support
Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
@@ -382,12 +310,6 @@ cd claude-task-master
node scripts/init.js
```
## Join Our Team
<a href="https://tryhamster.com" target="_blank">
<img src="./images/hamster-hiring.png" alt="Join Hamster's founding team" />
</a>
## Contributors
<a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors">

View File

@@ -11,13 +11,6 @@
### Patch Changes
- Updated dependencies []:
  - @tm/core@null
## null
### Patch Changes
- Updated dependencies []:
  - @tm/core@null

View File

@@ -187,29 +187,19 @@ export class AuthCommand extends Command {
 			if (credentials.expiresAt) {
 				const expiresAt = new Date(credentials.expiresAt);
 				const now = new Date();
-				const timeRemaining = expiresAt.getTime() - now.getTime();
-				const hoursRemaining = Math.floor(timeRemaining / (1000 * 60 * 60));
-				const minutesRemaining = Math.floor(timeRemaining / (1000 * 60));
-				if (timeRemaining > 0) {
-					// Token is still valid
-					if (hoursRemaining > 0) {
-						console.log(
-							chalk.gray(
-								` Expires at: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
-							)
-						);
-					} else {
-						console.log(
-							chalk.gray(
-								` Expires at: ${expiresAt.toLocaleString()} (${minutesRemaining} minutes remaining)`
-							)
-						);
-					}
-				} else {
-					// Token has expired
-					console.log(
-						chalk.yellow(` Expired at: ${expiresAt.toLocaleString()}`)
-					);
-				}
+				const hoursRemaining = Math.floor(
+					(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
+				);
+				if (hoursRemaining > 0) {
+					console.log(
+						chalk.gray(
+							` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
+						)
+					);
+				} else {
+					console.log(
+						chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
+					);
+				}
 			} else {

View File

@@ -250,7 +250,7 @@ export class ContextCommand extends Command {
 			]);
 			// Update context
-			this.authManager.updateContext({
+			await this.authManager.updateContext({
 				orgId: selectedOrg.id,
 				orgName: selectedOrg.name,
 				// Clear brief when changing org
@@ -343,7 +343,7 @@ export class ContextCommand extends Command {
 			if (selectedBrief) {
 				// Update context with brief
 				const briefName = `Brief ${selectedBrief.id.slice(0, 8)}`;
-				this.authManager.updateContext({
+				await this.authManager.updateContext({
 					briefId: selectedBrief.id,
 					briefName: briefName
 				});
@@ -358,7 +358,7 @@ export class ContextCommand extends Command {
 			};
 		} else {
 			// Clear brief selection
-			this.authManager.updateContext({
+			await this.authManager.updateContext({
 				briefId: undefined,
 				briefName: undefined
 			});
@@ -491,7 +491,7 @@ export class ContextCommand extends Command {
 		// Update context: set org and brief
 		const briefName = `Brief ${brief.id.slice(0, 8)}`;
-		this.authManager.updateContext({
+		await this.authManager.updateContext({
 			orgId: brief.accountId,
 			orgName,
 			briefId: brief.id,
@@ -613,7 +613,7 @@ export class ContextCommand extends Command {
 		};
 	}
-	this.authManager.updateContext(context);
+	await this.authManager.updateContext(context);
 	ui.displaySuccess('Context updated');
 	// Display what was set

View File

@@ -103,7 +103,7 @@ export class ExportCommand extends Command {
 		await this.initializeServices();
 		// Get current context
-		const context = await this.authManager.getContext();
+		const context = this.authManager.getContext();
 		// Determine org and brief IDs
 		let orgId = options?.org || context?.orgId;

View File

@@ -6,7 +6,7 @@
 import chalk from 'chalk';
 import boxen from 'boxen';
 import type { Task } from '@tm/core/types';
-import { getComplexityWithColor, getBoxWidth } from '../../utils/ui.js';
+import { getComplexityWithColor } from '../../utils/ui.js';
 /**
  * Next task display options
@@ -113,7 +113,7 @@ export function displayRecommendedNextTask(
 		borderColor: '#FFA500', // Orange color
 		title: chalk.hex('#FFA500')('⚡ RECOMMENDED NEXT TASK ⚡'),
 		titleAlignment: 'center',
-		width: getBoxWidth(0.97),
+		width: process.stdout.columns * 0.97,
 		fullscreen: false
 	})
 );

View File

@@ -5,7 +5,6 @@
 import chalk from 'chalk';
 import boxen from 'boxen';
-import { getBoxWidth } from '../../utils/ui.js';
 /**
  * Display suggested next steps section
@@ -25,7 +24,7 @@ export function displaySuggestedNextSteps(): void {
 			margin: { top: 0, bottom: 1 },
 			borderStyle: 'round',
 			borderColor: 'gray',
-			width: getBoxWidth(0.97)
+			width: process.stdout.columns * 0.97
 		}
 	)
 );

View File

@@ -1,158 +0,0 @@
/**
* CLI UI utilities tests
* Tests for apps/cli/src/utils/ui.ts
*/
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import type { MockInstance } from 'vitest';
import { getBoxWidth } from './ui.js';
describe('CLI UI Utilities', () => {
describe('getBoxWidth', () => {
let columnsSpy: MockInstance;
let originalDescriptor: PropertyDescriptor | undefined;
beforeEach(() => {
// Store original descriptor if it exists
originalDescriptor = Object.getOwnPropertyDescriptor(
process.stdout,
'columns'
);
// If columns doesn't exist or isn't a getter, define it as one
if (!originalDescriptor || !originalDescriptor.get) {
const currentValue = process.stdout.columns || 80;
Object.defineProperty(process.stdout, 'columns', {
get() {
return currentValue;
},
configurable: true
});
}
// Now spy on the getter
columnsSpy = vi.spyOn(process.stdout, 'columns', 'get');
});
afterEach(() => {
// Restore the spy
columnsSpy.mockRestore();
// Restore original descriptor or delete the property
if (originalDescriptor) {
Object.defineProperty(process.stdout, 'columns', originalDescriptor);
} else {
delete (process.stdout as any).columns;
}
});
it('should calculate width as percentage of terminal width', () => {
columnsSpy.mockReturnValue(100);
const width = getBoxWidth(0.9, 40);
expect(width).toBe(90);
});
it('should use default percentage of 0.9 when not specified', () => {
columnsSpy.mockReturnValue(100);
const width = getBoxWidth();
expect(width).toBe(90);
});
it('should use default minimum width of 40 when not specified', () => {
columnsSpy.mockReturnValue(30);
const width = getBoxWidth();
expect(width).toBe(40); // Should enforce minimum
});
it('should enforce minimum width when terminal is too narrow', () => {
columnsSpy.mockReturnValue(50);
const width = getBoxWidth(0.9, 60);
expect(width).toBe(60); // Should use minWidth instead of 45
});
it('should handle undefined process.stdout.columns', () => {
columnsSpy.mockReturnValue(undefined);
const width = getBoxWidth(0.9, 40);
// Should fall back to 80 columns: Math.floor(80 * 0.9) = 72
expect(width).toBe(72);
});
it('should handle custom percentage values', () => {
columnsSpy.mockReturnValue(100);
expect(getBoxWidth(0.95, 40)).toBe(95);
expect(getBoxWidth(0.8, 40)).toBe(80);
expect(getBoxWidth(0.5, 40)).toBe(50);
});
it('should handle custom minimum width values', () => {
columnsSpy.mockReturnValue(60);
expect(getBoxWidth(0.9, 70)).toBe(70); // 60 * 0.9 = 54, but min is 70
expect(getBoxWidth(0.9, 50)).toBe(54); // 60 * 0.9 = 54, min is 50
});
it('should floor the calculated width', () => {
columnsSpy.mockReturnValue(99);
const width = getBoxWidth(0.9, 40);
// 99 * 0.9 = 89.1, should floor to 89
expect(width).toBe(89);
});
it('should match warning box width calculation', () => {
// Test the specific case from displayWarning()
columnsSpy.mockReturnValue(80);
const width = getBoxWidth(0.9, 40);
expect(width).toBe(72);
});
it('should match table width calculation', () => {
// Test the specific case from createTaskTable()
columnsSpy.mockReturnValue(111);
const width = getBoxWidth(0.9, 100);
// 111 * 0.9 = 99.9, floor to 99, but max(99, 100) = 100
expect(width).toBe(100);
});
it('should match recommended task box width calculation', () => {
// Test the specific case from displayRecommendedNextTask()
columnsSpy.mockReturnValue(120);
const width = getBoxWidth(0.97, 40);
// 120 * 0.97 = 116.4, floor to 116
expect(width).toBe(116);
});
it('should handle edge case of zero terminal width', () => {
columnsSpy.mockReturnValue(0);
const width = getBoxWidth(0.9, 40);
// When columns is 0, it uses fallback of 80: Math.floor(80 * 0.9) = 72
expect(width).toBe(72);
});
it('should handle very large terminal widths', () => {
columnsSpy.mockReturnValue(1000);
const width = getBoxWidth(0.9, 40);
expect(width).toBe(900);
});
it('should handle very small percentages', () => {
columnsSpy.mockReturnValue(100);
const width = getBoxWidth(0.1, 5);
// 100 * 0.1 = 10, which is greater than min 5
expect(width).toBe(10);
});
it('should handle percentage of 1.0 (100%)', () => {
columnsSpy.mockReturnValue(80);
const width = getBoxWidth(1.0, 40);
expect(width).toBe(80);
});
it('should consistently return same value for same inputs', () => {
columnsSpy.mockReturnValue(100);
const width1 = getBoxWidth(0.9, 40);
const width2 = getBoxWidth(0.9, 40);
const width3 = getBoxWidth(0.9, 40);
expect(width1).toBe(width2);
expect(width2).toBe(width3);
});
});
});

View File

@@ -126,20 +126,6 @@ export function getComplexityWithScore(complexity: number | undefined): string {
 	return color(`${complexity}/10 (${label})`);
 }
-/**
- * Calculate box width as percentage of terminal width
- * @param percentage - Percentage of terminal width to use (default: 0.9)
- * @param minWidth - Minimum width to enforce (default: 40)
- * @returns Calculated box width
- */
-export function getBoxWidth(
-	percentage: number = 0.9,
-	minWidth: number = 40
-): number {
-	const terminalWidth = process.stdout.columns || 80;
-	return Math.max(Math.floor(terminalWidth * percentage), minWidth);
-}
 /**
  * Truncate text to specified length
  */
@@ -190,8 +176,6 @@ export function displayBanner(title: string = 'Task Master'): void {
  * Display an error message (matches scripts/modules/ui.js style)
  */
 export function displayError(message: string, details?: string): void {
-	const boxWidth = getBoxWidth();
 	console.error(
 		boxen(
 			chalk.red.bold('X Error: ') +
@@ -200,8 +184,7 @@ export function displayError(message: string, details?: string): void {
 			{
 				padding: 1,
 				borderStyle: 'round',
-				borderColor: 'red',
-				width: boxWidth
+				borderColor: 'red'
 			}
 		)
 	);
@@ -211,16 +194,13 @@ export function displayError(message: string, details?: string): void {
  * Display a success message
  */
 export function displaySuccess(message: string): void {
-	const boxWidth = getBoxWidth();
 	console.log(
 		boxen(
 			chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
 			{
 				padding: 1,
 				borderStyle: 'round',
-				borderColor: 'green',
-				width: boxWidth
+				borderColor: 'green'
 			}
 		)
 	);
@@ -230,14 +210,11 @@ export function displaySuccess(message: string): void {
  * Display a warning message
  */
 export function displayWarning(message: string): void {
-	const boxWidth = getBoxWidth();
 	console.log(
 		boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
 			padding: 1,
 			borderStyle: 'round',
-			borderColor: 'yellow',
-			width: boxWidth
+			borderColor: 'yellow'
 		})
 	);
 }
@@ -246,14 +223,11 @@ export function displayWarning(message: string): void {
  * Display info message
  */
 export function displayInfo(message: string): void {
-	const boxWidth = getBoxWidth();
 	console.log(
 		boxen(chalk.blue.bold('i ') + chalk.white(message), {
 			padding: 1,
 			borderStyle: 'round',
-			borderColor: 'blue',
-			width: boxWidth
+			borderColor: 'blue'
 		})
 	);
 }
@@ -308,23 +282,23 @@ export function createTaskTable(
 	} = options || {};
 	// Calculate dynamic column widths based on terminal width
-	const tableWidth = getBoxWidth(0.9, 100);
+	const terminalWidth = process.stdout.columns * 0.9 || 100;
 	// Adjust column widths to better match the original layout
 	const baseColWidths = showComplexity
 		? [
-				Math.floor(tableWidth * 0.1),
-				Math.floor(tableWidth * 0.4),
-				Math.floor(tableWidth * 0.15),
-				Math.floor(tableWidth * 0.1),
-				Math.floor(tableWidth * 0.2),
-				Math.floor(tableWidth * 0.1)
+				Math.floor(terminalWidth * 0.1),
+				Math.floor(terminalWidth * 0.4),
+				Math.floor(terminalWidth * 0.15),
+				Math.floor(terminalWidth * 0.1),
+				Math.floor(terminalWidth * 0.2),
+				Math.floor(terminalWidth * 0.1)
 			] // ID, Title, Status, Priority, Dependencies, Complexity
 		: [
-				Math.floor(tableWidth * 0.08),
-				Math.floor(tableWidth * 0.4),
-				Math.floor(tableWidth * 0.18),
-				Math.floor(tableWidth * 0.12),
-				Math.floor(tableWidth * 0.2)
+				Math.floor(terminalWidth * 0.08),
+				Math.floor(terminalWidth * 0.4),
+				Math.floor(terminalWidth * 0.18),
+				Math.floor(terminalWidth * 0.12),
+				Math.floor(terminalWidth * 0.2)
 			]; // ID, Title, Status, Priority, Dependencies
 	const headers = [
@@ -1,7 +1,5 @@
 # docs
-## 0.0.6
 ## 0.0.5
 ## 0.0.4
@@ -13,126 +13,6 @@ The MCP interface is built on top of the `fastmcp` library and registers a set o
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
## Configurable Tool Loading
To optimize LLM context usage, you can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This is particularly useful when working with LLMs that have context limits or when you only need a subset of tools.
### Configuration Modes
#### All Tools (Default)
Loads all 36 available tools. Use when you need full Task Master functionality.
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "all",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
If `TASK_MASTER_TOOLS` is not set, all tools are loaded by default.
#### Core Tools (Lean Mode)
Loads only 7 essential tools for daily development. Ideal for minimal context usage.
**Core tools included:**
- `get_tasks` - List all tasks
- `next_task` - Find the next task to work on
- `get_task` - Get detailed task information
- `set_task_status` - Update task status
- `update_subtask` - Add implementation notes
- `parse_prd` - Generate tasks from PRD
- `expand_task` - Break down tasks into subtasks
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "core",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
You can also use `"lean"` as an alias for `"core"`.
#### Standard Tools
Loads 15 commonly used tools. Balances functionality with context efficiency.
**Standard tools include all core tools plus:**
- `initialize_project` - Set up new projects
- `analyze_project_complexity` - Analyze task complexity
- `expand_all` - Expand all eligible tasks
- `add_subtask` - Add subtasks manually
- `remove_task` - Remove tasks
- `generate` - Generate task markdown files
- `add_task` - Create new tasks
- `complexity_report` - View complexity analysis
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
#### Custom Tool Selection
Specify exactly which tools to load using a comma-separated list. Tool names are case-insensitive and support both underscores and hyphens.
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "get_tasks,next_task,set_task_status,update_subtask",
"ANTHROPIC_API_KEY": "your_key_here"
}
}
}
}
```
### Choosing the Right Configuration
- **Use `core`/`lean`**: When working with basic task management workflows or when context limits are strict
- **Use `standard`**: For most development workflows that include task creation and analysis
- **Use `all`**: When you need full functionality including tag management, dependencies, and advanced features
- **Use custom list**: When you have specific tool requirements or want to experiment with minimal sets
### Verification
When the MCP server starts, it logs which tools were loaded:
```
Task Master MCP Server starting...
Tool mode configuration: standard
Loading standard tools
Registering 15 MCP tools (mode: standard)
Successfully registered 15/15 tools
```
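The case-insensitive, underscore/hyphen-tolerant matching used for custom tool lists can be sketched as follows. This is a minimal illustration with a stand-in registry, not the actual implementation; the real registry maps all 36 tool names to registration functions.

```javascript
// Stand-in for the real registry of tool registration functions.
const toolRegistry = {
	get_tasks: () => {},
	next_task: () => {},
	'response-language': () => {}
};

// Resolve a user-supplied tool name to a registry key, trying the name as
// given, then with underscores and hyphens swapped, all case-insensitively.
function resolveToolName(requested) {
	const lower = requested.trim().toLowerCase();
	const candidates = [lower, lower.replace(/_/g, '-'), lower.replace(/-/g, '_')];
	for (const candidate of candidates) {
		const match = Object.keys(toolRegistry).find(
			(key) => key.toLowerCase() === candidate
		);
		if (match) return match;
	}
	return null; // unknown names are skipped with a warning
}
```

With this scheme `GET-TASKS`, `get_tasks`, and `Get_Tasks` all resolve to the same registry entry.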
## Tool Categories
The MCP tools can be categorized in the same way as the core functionalities:
@@ -37,25 +37,6 @@ For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json
}
```
<Tip>
**Optimize Context Usage**: You can control which Task Master MCP tools are loaded using the `TASK_MASTER_TOOLS` environment variable. This helps reduce LLM context usage by only loading the tools you need.
Options:
- `all` (default) - All 36 tools
- `standard` - 15 commonly used tools
- `core` or `lean` - 7 essential tools
Example:
```json
"env": {
"TASK_MASTER_TOOLS": "standard",
"ANTHROPIC_API_KEY": "your_key_here"
}
```
See the [MCP Tools documentation](/capabilities/mcp#configurable-tool-loading) for details.
</Tip>
### CLI Usage: `.env` File
Create a `.env` file in your project root and include the keys for the providers you plan to use:
@@ -31,9 +31,23 @@ cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21
 > **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
-### Claude Code Quick Install
-For Claude Code users:
+### Claude Code Plugin Install (Recommended)
+For Claude Code users, install via the plugin marketplace:
+```bash
+/plugin marketplace add eyaltoledano/claude-task-master
+/plugin install taskmaster@taskmaster
+```
+This provides:
+- **49 slash commands** with clean naming (`/taskmaster:command-name`)
+- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
+- **Automatic updates** when new features are released
+### Claude Code MCP Alternative
+You can also use MCP directly:
 ```bash
 claude mcp add taskmaster-ai -- npx -y task-master-ai
@@ -1,6 +1,6 @@
 {
 	"name": "docs",
-	"version": "0.0.6",
+	"version": "0.0.5",
 	"private": true,
 	"description": "Task Master documentation powered by Mintlify",
 	"scripts": {
@@ -3,4 +3,44 @@ title: "What's New"
 sidebarTitle: "What's New"
 ---
-An easy way to see the latest releases
+## 🎉 New: Claude Code Plugin Support
Task Master AI now supports Claude Code plugins with modern marketplace distribution!
### What's New
- **49 slash commands** with clean naming (`/taskmaster:command-name`)
- **3 specialized AI agents** (task-orchestrator, task-executor, task-checker)
- **MCP server integration** for deep Claude Code integration
### Installation
```bash
/plugin marketplace add eyaltoledano/claude-task-master
/plugin install taskmaster@taskmaster
```
### Migration for Existing Users
The `rules add claude` command no longer copies files to `.claude/` directories. Instead:
- Shows plugin installation instructions
- Only manages CLAUDE.md imports for agent instructions
- Directs users to install the official plugin
If you previously used `rules add claude`:
1. Old commands in `.claude/commands/` will continue working but won't receive updates
2. Install the plugin for latest features: `/plugin install taskmaster@taskmaster`
3. Remove old `.claude/commands/` and `.claude/agents/` directories
### Why This Change?
Claude Code plugins provide:
- ✅ Automatic updates when we release new features
- ✅ Better command organization and naming
- ✅ Seamless integration with Claude Code
- ✅ No manual file copying or management
The plugin system is the future of Task Master AI integration with Claude Code!
@@ -1,7 +1,5 @@
 # Change Log
-## 0.25.6
 ## 0.25.6-rc.0
 ### Patch Changes
@@ -3,7 +3,7 @@
 	"private": true,
 	"displayName": "TaskMaster",
 	"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
-	"version": "0.25.6",
+	"version": "0.25.6-rc.0",
 	"publisher": "Hamster",
 	"icon": "assets/icon.png",
 	"engines": {
@@ -239,6 +239,9 @@
 		"watch:css": "npx @tailwindcss/cli -i ./src/webview/index.css -o ./dist/index.css --watch",
 		"check-types": "tsc --noEmit"
 	},
+	"dependencies": {
+		"task-master-ai": "0.29.0-rc.0"
+	},
 	"devDependencies": {
 		"@dnd-kit/core": "^6.3.1",
 		"@dnd-kit/modifiers": "^9.0.0",
@@ -274,8 +277,7 @@
 		"tailwind-merge": "^3.3.1",
 		"tailwindcss": "4.1.11",
 		"typescript": "^5.9.2",
-		"@tm/core": "*",
-		"task-master-ai": "*"
+		"@tm/core": "*"
 	},
 	"overrides": {
 		"glob@<8": "^10.4.5",
@@ -59,76 +59,6 @@ Taskmaster uses two primary methods for configuration:
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
## MCP Tool Loading Configuration
### TASK_MASTER_TOOLS Environment Variable
The `TASK_MASTER_TOOLS` environment variable controls which tools are loaded by the Task Master MCP server. This allows you to optimize token usage based on your workflow needs.
> Note
> Prefer setting `TASK_MASTER_TOOLS` in your MCP client's `env` block (e.g., `.cursor/mcp.json`) or in CI/deployment env. The `.env` file is reserved for API keys/endpoints; avoid persisting non-secret settings there.
#### Configuration Options
- **`all`** (default): Loads all 36 available tools (~21,000 tokens)
- Best for: Users who need the complete feature set
- Use when: Working with complex projects requiring all Task Master features
- Backward compatibility: This is the default to maintain compatibility with existing installations
- **`standard`**: Loads 15 commonly used tools (~10,000 tokens, 50% reduction)
- Best for: Regular task management workflows
- Tools included: All core tools plus project initialization, complexity analysis, task generation, and more
- Use when: You need a balanced set of features with reduced token usage
- **`core`** (or `lean`): Loads 7 essential tools (~5,000 tokens, 70% reduction)
- Best for: Daily development with minimal token overhead
- Tools included: `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- Use when: Working in large contexts where token usage is critical
- Note: "lean" is an alias for "core" (same tools, token estimate and recommended use). You can refer to it as either "core" or "lean" when configuring.
- **Custom list**: Comma-separated list of specific tool names
- Best for: Specialized workflows requiring specific tools
- Example: `"get_tasks,next_task,set_task_status"`
- Use when: You know exactly which tools you need
#### How to Configure
1. **In MCP configuration files** (`.cursor/mcp.json`, `.vscode/mcp.json`, etc.) - **Recommended**:
```jsonc
{
"mcpServers": {
"task-master-ai": {
"env": {
"TASK_MASTER_TOOLS": "standard", // Set tool loading mode
// API keys can still use .env for security
}
}
}
}
```
2. **Via Claude Code CLI**:
```bash
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="core" \
-- npx -y task-master-ai@latest
```
3. **In CI/deployment environment variables**:
```bash
export TASK_MASTER_TOOLS="standard"
node mcp-server/server.js
```
#### Tool Loading Behavior
- When `TASK_MASTER_TOOLS` is unset or empty, the system defaults to `"all"`
- Invalid tool names in a user-specified list are ignored (a warning is emitted for each)
- If every tool name in a custom list is invalid, the system falls back to `"all"`
- Tool names are case-insensitive (e.g., `"CORE"`, `"core"`, and `"Core"` are treated identically)
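The default and fallback rules above can be summarized in a short sketch. This is illustrative only, assuming the behavior as documented; the real logic lives in the MCP server's tool registration code.

```javascript
// Illustrative sketch of TASK_MASTER_TOOLS mode resolution as described above.
function resolveToolMode(envValue) {
	const value = (envValue ?? '').trim();
	if (value === '') return 'all'; // unset or empty -> all 36 tools
	const mode = value.toLowerCase(); // matching is case-insensitive
	if (mode === 'lean') return 'core'; // "lean" is an alias for "core"
	if (['all', 'core', 'standard'].includes(mode)) return mode;
	return value; // anything else is treated as a custom comma-separated list
}
```

So `resolveToolMode(process.env.TASK_MASTER_TOOLS)` yields `'all'` when the variable is unset, and a custom list such as `"get_tasks,next_task"` passes through for per-tool validation.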
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
@@ -293,10 +223,10 @@ node scripts/init.js
```bash
# Set MCP provider for main role
task-master models set-main --provider mcp --model claude-3-5-sonnet-20241022
# Set MCP provider for research role
task-master models set-research --provider mcp --model claude-3-opus-20240229
# Verify configuration
task-master models list
```
@@ -427,7 +357,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
		"temperature": 0.7
	},
	"fallback": {
		"provider": "azure",
		"modelId": "gpt-4o-mini",
		"maxTokens": 10000,
		"temperature": 0.7
@@ -446,7 +376,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
	"models": {
		"main": {
			"provider": "azure",
			"modelId": "gpt-4o",
			"maxTokens": 16000,
			"temperature": 0.7,
			"baseURL": "https://your-resource-name.azure.com/openai/deployments"
@@ -460,7 +390,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
		"fallback": {
			"provider": "azure",
			"modelId": "gpt-4o-mini",
			"maxTokens": 10000,
			"temperature": 0.7,
			"baseURL": "https://your-resource-name.azure.com/openai/deployments"
		}
@@ -472,7 +402,7 @@ Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure c
```bash
# In .env file
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# Optional: Override endpoint for all Azure models
AZURE_OPENAI_ENDPOINT=https://your-resource-name.azure.com/openai/deployments
```
(Binary image file changed; 130 KiB before, not shown.)
@@ -4,14 +4,12 @@ import dotenv from 'dotenv';
 import { fileURLToPath } from 'url';
 import fs from 'fs';
 import logger from './logger.js';
-import {
-	registerTaskMasterTools,
-	getToolsConfiguration
-} from './tools/index.js';
+import { registerTaskMasterTools } from './tools/index.js';
 import ProviderRegistry from '../../src/provider-registry/index.js';
 import { MCPProvider } from './providers/mcp-provider.js';
 import packageJson from '../../package.json' with { type: 'json' };
+// Load environment variables
 dotenv.config();
 // Constants
@@ -31,10 +29,12 @@ class TaskMasterMCPServer {
 		this.server = new FastMCP(this.options);
 		this.initialized = false;
+		// Bind methods
 		this.init = this.init.bind(this);
 		this.start = this.start.bind(this);
 		this.stop = this.stop.bind(this);
+		// Setup logging
 		this.logger = logger;
 	}
@@ -44,34 +44,8 @@ class TaskMasterMCPServer {
 	async init() {
 		if (this.initialized) return;
-		const normalizedToolMode = getToolsConfiguration();
-		this.logger.info('Task Master MCP Server starting...');
-		this.logger.info(`Tool mode configuration: ${normalizedToolMode}`);
-		const registrationResult = registerTaskMasterTools(
-			this.server,
-			normalizedToolMode
-		);
-		this.logger.info(
-			`Normalized tool mode: ${registrationResult.normalizedMode}`
-		);
-		this.logger.info(
-			`Registered ${registrationResult.registeredTools.length} tools successfully`
-		);
-		if (registrationResult.registeredTools.length > 0) {
-			this.logger.debug(
-				`Registered tools: ${registrationResult.registeredTools.join(', ')}`
-			);
-		}
-		if (registrationResult.failedTools.length > 0) {
-			this.logger.warn(
-				`Failed to register ${registrationResult.failedTools.length} tools: ${registrationResult.failedTools.join(', ')}`
-			);
-		}
+		// Pass the manager instance to the tool registration function
+		registerTaskMasterTools(this.server, this.asyncManager);
 		this.initialized = true;
@@ -3,238 +3,109 @@
  * Export all Task Master CLI tools for MCP server
  */
+import { registerListTasksTool } from './get-tasks.js';
 import logger from '../logger.js';
-import {
-	toolRegistry,
-	coreTools,
-	standardTools,
-	getAvailableTools,
-	getToolRegistration,
-	isValidTool
-} from './tool-registry.js';
+import { registerSetTaskStatusTool } from './set-task-status.js';
+import { registerParsePRDTool } from './parse-prd.js';
+import { registerUpdateTool } from './update.js';
+import { registerUpdateTaskTool } from './update-task.js';
+import { registerUpdateSubtaskTool } from './update-subtask.js';
+import { registerGenerateTool } from './generate.js';
+import { registerShowTaskTool } from './get-task.js';
+import { registerNextTaskTool } from './next-task.js';
+import { registerExpandTaskTool } from './expand-task.js';
+import { registerAddTaskTool } from './add-task.js';
+import { registerAddSubtaskTool } from './add-subtask.js';
+import { registerRemoveSubtaskTool } from './remove-subtask.js';
+import { registerAnalyzeProjectComplexityTool } from './analyze.js';
+import { registerClearSubtasksTool } from './clear-subtasks.js';
+import { registerExpandAllTool } from './expand-all.js';
+import { registerRemoveDependencyTool } from './remove-dependency.js';
+import { registerValidateDependenciesTool } from './validate-dependencies.js';
+import { registerFixDependenciesTool } from './fix-dependencies.js';
+import { registerComplexityReportTool } from './complexity-report.js';
+import { registerAddDependencyTool } from './add-dependency.js';
+import { registerRemoveTaskTool } from './remove-task.js';
+import { registerInitializeProjectTool } from './initialize-project.js';
+import { registerModelsTool } from './models.js';
+import { registerMoveTaskTool } from './move-task.js';
+import { registerResponseLanguageTool } from './response-language.js';
+import { registerAddTagTool } from './add-tag.js';
+import { registerDeleteTagTool } from './delete-tag.js';
+import { registerListTagsTool } from './list-tags.js';
+import { registerUseTagTool } from './use-tag.js';
+import { registerRenameTagTool } from './rename-tag.js';
+import { registerCopyTagTool } from './copy-tag.js';
+import { registerResearchTool } from './research.js';
+import { registerRulesTool } from './rules.js';
+import { registerScopeUpTool } from './scope-up.js';
+import { registerScopeDownTool } from './scope-down.js';
 /**
- * Helper function to safely read and normalize the TASK_MASTER_TOOLS environment variable
- * @returns {string} The tools configuration string, defaults to 'all'
- */
-export function getToolsConfiguration() {
-	const rawValue = process.env.TASK_MASTER_TOOLS;
-	if (!rawValue || rawValue.trim() === '') {
-		logger.debug('No TASK_MASTER_TOOLS env var found, defaulting to "all"');
-		return 'all';
-	}
-	const normalizedValue = rawValue.trim();
-	logger.debug(`TASK_MASTER_TOOLS env var: "${normalizedValue}"`);
-	return normalizedValue;
-}
-/**
- * Register Task Master tools with the MCP server
- * Supports selective tool loading via TASK_MASTER_TOOLS environment variable
+ * Register all Task Master tools with the MCP server
  * @param {Object} server - FastMCP server instance
- * @param {string} toolMode - The tool mode configuration (defaults to 'all')
- * @returns {Object} Object containing registered tools, failed tools, and normalized mode
  */
-export function registerTaskMasterTools(server, toolMode = 'all') {
-	const registeredTools = [];
-	const failedTools = [];
+export function registerTaskMasterTools(server) {
 	try {
-		const enabledTools = toolMode.trim();
-		let toolsToRegister = [];
-		const lowerCaseConfig = enabledTools.toLowerCase();
-		switch (lowerCaseConfig) {
-			case 'all':
-				toolsToRegister = Object.keys(toolRegistry);
-				logger.info('Loading all available tools');
-				break;
-			case 'core':
-			case 'lean':
-				toolsToRegister = coreTools;
-				logger.info('Loading core tools only');
-				break;
-			case 'standard':
-				toolsToRegister = standardTools;
-				logger.info('Loading standard tools');
-				break;
-			default:
-				const requestedTools = enabledTools
-					.split(',')
-					.map((t) => t.trim())
-					.filter((t) => t.length > 0);
-				const uniqueTools = new Set();
-				const unknownTools = [];
-				const aliasMap = {
-					response_language: 'response-language'
-				};
-				for (const toolName of requestedTools) {
-					let resolvedName = null;
-					const lowerToolName = toolName.toLowerCase();
-					if (aliasMap[lowerToolName]) {
-						const aliasTarget = aliasMap[lowerToolName];
-						for (const registryKey of Object.keys(toolRegistry)) {
-							if (registryKey.toLowerCase() === aliasTarget.toLowerCase()) {
-								resolvedName = registryKey;
-								break;
-							}
-						}
-					}
-					if (!resolvedName) {
-						for (const registryKey of Object.keys(toolRegistry)) {
-							if (registryKey.toLowerCase() === lowerToolName) {
-								resolvedName = registryKey;
-								break;
-							}
-						}
-					}
-					if (!resolvedName) {
-						const withHyphens = lowerToolName.replace(/_/g, '-');
-						for (const registryKey of Object.keys(toolRegistry)) {
-							if (registryKey.toLowerCase() === withHyphens) {
-								resolvedName = registryKey;
-								break;
-							}
-						}
-					}
-					if (!resolvedName) {
-						const withUnderscores = lowerToolName.replace(/-/g, '_');
-						for (const registryKey of Object.keys(toolRegistry)) {
-							if (registryKey.toLowerCase() === withUnderscores) {
-								resolvedName = registryKey;
-								break;
-							}
-						}
-					}
-					if (resolvedName) {
-						uniqueTools.add(resolvedName);
-						logger.debug(`Resolved tool "${toolName}" to "${resolvedName}"`);
-					} else {
-						unknownTools.push(toolName);
-						logger.warn(`Unknown tool specified: "${toolName}"`);
-					}
-				}
-				toolsToRegister = Array.from(uniqueTools);
-				if (unknownTools.length > 0) {
-					logger.warn(`Unknown tools: ${unknownTools.join(', ')}`);
-				}
-				if (toolsToRegister.length === 0) {
-					logger.warn(
-						`No valid tools found in custom list. Loading all tools as fallback.`
-					);
-					toolsToRegister = Object.keys(toolRegistry);
-				} else {
-					logger.info(
-						`Loading ${toolsToRegister.length} custom tools from list (${uniqueTools.size} unique after normalization)`
-					);
-				}
-				break;
-		}
-		logger.info(
-			`Registering ${toolsToRegister.length} MCP tools (mode: ${enabledTools})`
-		);
-		toolsToRegister.forEach((toolName) => {
-			try {
-				const registerFunction = getToolRegistration(toolName);
-				if (registerFunction) {
-					registerFunction(server);
-					logger.debug(`Registered tool: ${toolName}`);
-					registeredTools.push(toolName);
-				} else {
-					logger.warn(`Tool ${toolName} not found in registry`);
-					failedTools.push(toolName);
-				}
-			} catch (error) {
-				if (error.message && error.message.includes('already registered')) {
-					logger.debug(`Tool ${toolName} already registered, skipping`);
-					registeredTools.push(toolName);
-				} else {
-					logger.error(`Failed to register tool ${toolName}: ${error.message}`);
-					failedTools.push(toolName);
-				}
-			}
-		});
-		logger.info(
-			`Successfully registered ${registeredTools.length}/${toolsToRegister.length} tools`
-		);
-		if (failedTools.length > 0) {
-			logger.warn(`Failed tools: ${failedTools.join(', ')}`);
-		}
-		return {
-			registeredTools,
-			failedTools,
-			normalizedMode: lowerCaseConfig
-		};
+		// Register each tool in a logical workflow order
+		// Group 1: Initialization & Setup
+		registerInitializeProjectTool(server);
+		registerModelsTool(server);
+		registerRulesTool(server);
+		registerParsePRDTool(server);
+		// Group 2: Task Analysis & Expansion
+		registerAnalyzeProjectComplexityTool(server);
+		registerExpandTaskTool(server);
+		registerExpandAllTool(server);
+		registerScopeUpTool(server);
+		registerScopeDownTool(server);
+		// Group 3: Task Listing & Viewing
+		registerListTasksTool(server);
+		registerShowTaskTool(server);
+		registerNextTaskTool(server);
+		registerComplexityReportTool(server);
+		// Group 4: Task Status & Management
+		registerSetTaskStatusTool(server);
+		registerGenerateTool(server);
+		// Group 5: Task Creation & Modification
+		registerAddTaskTool(server);
+		registerAddSubtaskTool(server);
+		registerUpdateTool(server);
+		registerUpdateTaskTool(server);
+		registerUpdateSubtaskTool(server);
+		registerRemoveTaskTool(server);
+		registerRemoveSubtaskTool(server);
+		registerClearSubtasksTool(server);
+		registerMoveTaskTool(server);
+		// Group 6: Dependency Management
+		registerAddDependencyTool(server);
+		registerRemoveDependencyTool(server);
+		registerValidateDependenciesTool(server);
+		registerFixDependenciesTool(server);
+		registerResponseLanguageTool(server);
+		// Group 7: Tag Management
+		registerListTagsTool(server);
+		registerAddTagTool(server);
+		registerDeleteTagTool(server);
+		registerUseTagTool(server);
+		registerRenameTagTool(server);
+		registerCopyTagTool(server);
+		// Group 8: Research Features
+		registerResearchTool(server);
 	} catch (error) {
-		logger.error(
-			`Error parsing TASK_MASTER_TOOLS environment variable: ${error.message}`
-		);
-		logger.info('Falling back to loading all tools');
-		const fallbackTools = Object.keys(toolRegistry);
-		for (const toolName of fallbackTools) {
-			const registerFunction = getToolRegistration(toolName);
-			if (registerFunction) {
-				try {
-					registerFunction(server);
-					registeredTools.push(toolName);
-				} catch (err) {
-					if (err.message && err.message.includes('already registered')) {
-						logger.debug(
-							`Fallback tool ${toolName} already registered, skipping`
-						);
-						registeredTools.push(toolName);
-					} else {
-						logger.warn(
-							`Failed to register fallback tool '${toolName}': ${err.message}`
-						);
-						failedTools.push(toolName);
-					}
-				}
-			} else {
-				logger.warn(`Tool '${toolName}' not found in registry`);
-				failedTools.push(toolName);
-			}
-		}
-		logger.info(
-			`Successfully registered ${registeredTools.length} fallback tools`
-		);
-		return {
-			registeredTools,
-			failedTools,
-			normalizedMode: 'all'
-		};
+		logger.error(`Error registering Task Master tools: ${error.message}`);
+		throw error;
 	}
 }
-export {
-	toolRegistry,
-	coreTools,
-	standardTools,
-	getAvailableTools,
-	getToolRegistration,
-	isValidTool
-};
 export default {
 	registerTaskMasterTools
 };
@@ -1,168 +0,0 @@
/**
* tool-registry.js
* Tool Registry Object Structure - Maps all 36 tool names to registration functions
*/
import { registerListTasksTool } from './get-tasks.js';
import { registerSetTaskStatusTool } from './set-task-status.js';
import { registerParsePRDTool } from './parse-prd.js';
import { registerUpdateTool } from './update.js';
import { registerUpdateTaskTool } from './update-task.js';
import { registerUpdateSubtaskTool } from './update-subtask.js';
import { registerGenerateTool } from './generate.js';
import { registerShowTaskTool } from './get-task.js';
import { registerNextTaskTool } from './next-task.js';
import { registerExpandTaskTool } from './expand-task.js';
import { registerAddTaskTool } from './add-task.js';
import { registerAddSubtaskTool } from './add-subtask.js';
import { registerRemoveSubtaskTool } from './remove-subtask.js';
import { registerAnalyzeProjectComplexityTool } from './analyze.js';
import { registerClearSubtasksTool } from './clear-subtasks.js';
import { registerExpandAllTool } from './expand-all.js';
import { registerRemoveDependencyTool } from './remove-dependency.js';
import { registerValidateDependenciesTool } from './validate-dependencies.js';
import { registerFixDependenciesTool } from './fix-dependencies.js';
import { registerComplexityReportTool } from './complexity-report.js';
import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResponseLanguageTool } from './response-language.js';
import { registerAddTagTool } from './add-tag.js';
import { registerDeleteTagTool } from './delete-tag.js';
import { registerListTagsTool } from './list-tags.js';
import { registerUseTagTool } from './use-tag.js';
import { registerRenameTagTool } from './rename-tag.js';
import { registerCopyTagTool } from './copy-tag.js';
import { registerResearchTool } from './research.js';
import { registerRulesTool } from './rules.js';
import { registerScopeUpTool } from './scope-up.js';
import { registerScopeDownTool } from './scope-down.js';
/**
* Comprehensive tool registry mapping all 36 tool names to their registration functions
* Used for dynamic tool registration and validation
*/
export const toolRegistry = {
initialize_project: registerInitializeProjectTool,
models: registerModelsTool,
rules: registerRulesTool,
parse_prd: registerParsePRDTool,
'response-language': registerResponseLanguageTool,
analyze_project_complexity: registerAnalyzeProjectComplexityTool,
expand_task: registerExpandTaskTool,
expand_all: registerExpandAllTool,
scope_up_task: registerScopeUpTool,
scope_down_task: registerScopeDownTool,
get_tasks: registerListTasksTool,
get_task: registerShowTaskTool,
next_task: registerNextTaskTool,
complexity_report: registerComplexityReportTool,
set_task_status: registerSetTaskStatusTool,
generate: registerGenerateTool,
add_task: registerAddTaskTool,
add_subtask: registerAddSubtaskTool,
update: registerUpdateTool,
update_task: registerUpdateTaskTool,
update_subtask: registerUpdateSubtaskTool,
remove_task: registerRemoveTaskTool,
remove_subtask: registerRemoveSubtaskTool,
clear_subtasks: registerClearSubtasksTool,
move_task: registerMoveTaskTool,
add_dependency: registerAddDependencyTool,
remove_dependency: registerRemoveDependencyTool,
validate_dependencies: registerValidateDependenciesTool,
fix_dependencies: registerFixDependenciesTool,
list_tags: registerListTagsTool,
add_tag: registerAddTagTool,
delete_tag: registerDeleteTagTool,
use_tag: registerUseTagTool,
rename_tag: registerRenameTagTool,
copy_tag: registerCopyTagTool,
research: registerResearchTool
};
/**
* Core tools array containing the 7 essential tools for daily development
* These represent the minimal set needed for basic task management operations
*/
export const coreTools = [
'get_tasks',
'next_task',
'get_task',
'set_task_status',
'update_subtask',
'parse_prd',
'expand_task'
];
/**
* Standard tools array containing the 15 most commonly used tools
* Includes all core tools plus frequently used additional tools
*/
export const standardTools = [
...coreTools,
'initialize_project',
'analyze_project_complexity',
'expand_all',
'add_subtask',
'remove_task',
'generate',
'add_task',
'complexity_report'
];
/**
* Get all available tool names
* @returns {string[]} Array of tool names
*/
export function getAvailableTools() {
return Object.keys(toolRegistry);
}
/**
* Get tool counts for all categories
* @returns {Object} Object with core, standard, and total counts
*/
export function getToolCounts() {
return {
core: coreTools.length,
standard: standardTools.length,
total: Object.keys(toolRegistry).length
};
}
/**
* Get tool arrays organized by category
* @returns {Object} Object with arrays for each category
*/
export function getToolCategories() {
const allTools = Object.keys(toolRegistry);
return {
core: [...coreTools],
standard: [...standardTools],
all: [...allTools],
extended: allTools.filter((t) => !standardTools.includes(t))
};
}
/**
* Get registration function for a specific tool
* @param {string} toolName - Name of the tool
* @returns {Function|null} Registration function or null if not found
*/
export function getToolRegistration(toolName) {
return toolRegistry[toolName] || null;
}
/**
* Validate if a tool exists in the registry
* @param {string} toolName - Name of the tool
* @returns {boolean} True if tool exists
*/
export function isValidTool(toolName) {
return toolName in toolRegistry;
}
export default toolRegistry;
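Given the `coreTools` and `standardTools` arrays above, the `TASK_MASTER_TOOLS` environment variable described in the changeset can be resolved to a concrete tool list with a small helper. A minimal sketch — the `resolveToolSelection` name and the inlined stand-in arrays are illustrative, not part of the actual module:

```javascript
// Stand-ins for the arrays exported by the registry module above.
const coreTools = [
	'get_tasks', 'next_task', 'get_task', 'set_task_status',
	'update_subtask', 'parse_prd', 'expand_task'
];
const standardTools = [
	...coreTools,
	'initialize_project', 'analyze_project_complexity', 'expand_all',
	'add_subtask', 'remove_task', 'generate', 'add_task', 'complexity_report'
];
// Truncated stand-in for the full 36-tool registry key list.
const allTools = [...standardTools, 'models', 'rules', 'research'];

// Resolve a TASK_MASTER_TOOLS value ('all' | 'core' | 'lean' | 'standard')
// to the tool names that should be registered. Unknown values fall back to 'all'.
function resolveToolSelection(value = 'all') {
	switch (value.trim().toLowerCase()) {
		case 'core':
		case 'lean':
			return [...coreTools];
		case 'standard':
			return [...standardTools];
		case 'all':
		default:
			return [...allTools];
	}
}

console.log(resolveToolSelection('lean').length); // 7
console.log(resolveToolSelection('standard').length); // 15
```

The counts line up with the changeset: 7 core tools, and 15 standard tools (core plus 8 more).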

output.txt (new file, 30 lines — diff suppressed because one or more lines are too long)

package-lock.json (generated, 143 lines changed)

@@ -1,12 +1,12 @@
 {
   "name": "task-master-ai",
-  "version": "0.29.0",
+  "version": "npm:task-master-ai@0.29.0-rc.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "task-master-ai",
-      "version": "0.29.0",
+      "version": "0.29.0-rc.0",
       "license": "MIT WITH Commons-Clause",
       "workspaces": [
         "apps/*",
@@ -125,58 +125,13 @@
       }
     },
     "apps/docs": {
-      "version": "0.0.6",
+      "version": "0.0.5",
       "devDependencies": {
         "mintlify": "^4.2.111"
       }
     },
-    "apps/extension": {
-      "version": "0.25.6",
-      "devDependencies": {
-        "@dnd-kit/core": "^6.3.1",
-        "@dnd-kit/modifiers": "^9.0.0",
-        "@modelcontextprotocol/sdk": "1.13.3",
-        "@radix-ui/react-collapsible": "^1.1.11",
-        "@radix-ui/react-dropdown-menu": "^2.1.15",
-        "@radix-ui/react-label": "^2.1.7",
-        "@radix-ui/react-portal": "^1.1.9",
-        "@radix-ui/react-scroll-area": "^1.2.9",
-        "@radix-ui/react-separator": "^1.1.7",
-        "@radix-ui/react-slot": "^1.2.3",
-        "@tailwindcss/postcss": "^4.1.11",
-        "@tanstack/react-query": "^5.83.0",
-        "@tm/core": "*",
-        "@types/mocha": "^10.0.10",
-        "@types/node": "^22.10.5",
-        "@types/react": "19.1.8",
-        "@types/react-dom": "19.1.6",
-        "@types/vscode": "^1.101.0",
-        "@vscode/test-cli": "^0.0.11",
-        "@vscode/test-electron": "^2.5.2",
-        "@vscode/vsce": "^2.32.0",
-        "autoprefixer": "10.4.21",
-        "class-variance-authority": "^0.7.1",
-        "clsx": "^2.1.1",
-        "esbuild": "^0.25.3",
-        "esbuild-postcss": "^0.0.4",
-        "fs-extra": "^11.3.0",
-        "lucide-react": "^0.525.0",
-        "npm-run-all": "^4.1.5",
-        "postcss": "8.5.6",
-        "react": "^19.0.0",
-        "react-dom": "^19.0.0",
-        "tailwind-merge": "^3.3.1",
-        "tailwindcss": "4.1.11",
-        "task-master-ai": "*",
-        "typescript": "^5.9.2"
-      },
-      "engines": {
-        "vscode": "^1.93.0"
-      }
-    },
     "apps/extension/node_modules/@ai-sdk/amazon-bedrock": {
       "version": "2.2.12",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -194,7 +149,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/anthropic": {
       "version": "1.2.12",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -209,7 +163,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/azure": {
       "version": "1.3.25",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/openai": "1.3.24",
@@ -225,7 +178,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/google": {
       "version": "1.2.22",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -240,7 +192,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/google-vertex": {
       "version": "2.2.27",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/anthropic": "1.2.12",
@@ -258,7 +209,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/groq": {
       "version": "1.2.9",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -273,7 +223,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/mistral": {
       "version": "1.2.8",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -288,7 +237,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/openai": {
       "version": "1.3.24",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -303,7 +251,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/openai-compatible": {
       "version": "0.2.16",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -318,7 +265,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/perplexity": {
       "version": "1.1.9",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -333,7 +279,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/provider": {
       "version": "1.1.3",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "json-schema": "^0.4.0"
@@ -344,7 +289,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/provider-utils": {
       "version": "2.2.8",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -360,7 +304,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/react": {
       "version": "1.2.12",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider-utils": "2.2.8",
@@ -383,7 +326,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/ui-utils": {
       "version": "1.2.11",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -399,7 +341,6 @@
     },
     "apps/extension/node_modules/@ai-sdk/xai": {
       "version": "1.2.18",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/openai-compatible": "0.2.16",
@@ -415,7 +356,6 @@
     },
     "apps/extension/node_modules/@openrouter/ai-sdk-provider": {
       "version": "0.4.6",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.0.9",
@@ -430,7 +370,6 @@
     },
     "apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider": {
       "version": "1.0.9",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "json-schema": "^0.4.0"
@@ -441,7 +380,6 @@
     },
     "apps/extension/node_modules/@openrouter/ai-sdk-provider/node_modules/@ai-sdk/provider-utils": {
       "version": "2.1.10",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.0.9",
@@ -463,7 +401,6 @@
     },
     "apps/extension/node_modules/ai": {
       "version": "4.3.19",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "1.1.3",
@@ -488,7 +425,6 @@
     },
     "apps/extension/node_modules/ai-sdk-provider-gemini-cli": {
       "version": "0.1.3",
-      "dev": true,
       "license": "MIT",
       "optional": true,
       "dependencies": {
@@ -522,7 +458,6 @@
     },
     "apps/extension/node_modules/ollama-ai-provider": {
       "version": "1.2.0",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@ai-sdk/provider": "^1.0.0",
@@ -543,7 +478,6 @@
     },
     "apps/extension/node_modules/openai": {
       "version": "4.104.0",
-      "dev": true,
       "license": "Apache-2.0",
       "dependencies": {
         "@types/node": "^18.11.18",
@@ -572,7 +506,6 @@
     },
     "apps/extension/node_modules/openai/node_modules/@types/node": {
       "version": "18.19.127",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "undici-types": "~5.26.4"
@@ -580,12 +513,10 @@
     },
     "apps/extension/node_modules/openai/node_modules/undici-types": {
       "version": "5.26.5",
-      "dev": true,
       "license": "MIT"
     },
     "apps/extension/node_modules/task-master-ai": {
       "version": "0.27.1",
-      "dev": true,
       "license": "MIT WITH Commons-Clause",
       "workspaces": [
         "apps/*",
@@ -657,7 +588,6 @@
     },
     "apps/extension/node_modules/zod": {
       "version": "3.25.76",
-      "dev": true,
       "license": "MIT",
       "funding": {
         "url": "https://github.com/sponsors/colinhacks"
@@ -665,7 +595,6 @@
     },
     "apps/extension/node_modules/zod-to-json-schema": {
       "version": "3.24.6",
-      "dev": true,
       "license": "ISC",
       "peerDependencies": {
         "zod": "^3.24.1"
@@ -954,7 +883,6 @@
     },
     "node_modules/@anthropic-ai/sdk": {
       "version": "0.39.0",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "@types/node": "^18.11.18",
@@ -968,7 +896,6 @@
     },
     "node_modules/@anthropic-ai/sdk/node_modules/@types/node": {
       "version": "18.19.127",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "undici-types": "~5.26.4"
@@ -976,7 +903,6 @@
     },
     "node_modules/@anthropic-ai/sdk/node_modules/undici-types": {
       "version": "5.26.5",
-      "dev": true,
       "license": "MIT"
     },
     "node_modules/@ark/schema": {
@@ -8436,7 +8362,6 @@
     },
     "node_modules/@types/diff-match-patch": {
       "version": "1.0.36",
-      "dev": true,
       "license": "MIT"
     },
     "node_modules/@types/es-aggregate-error": {
@@ -8607,7 +8532,6 @@
     },
     "node_modules/@types/node-fetch": {
       "version": "2.6.13",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "@types/node": "*",
@@ -9053,7 +8977,6 @@
     },
     "node_modules/abort-controller": {
       "version": "3.0.0",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "event-target-shim": "^5.0.0"
@@ -9115,7 +9038,6 @@
     },
     "node_modules/agentkeepalive": {
       "version": "4.6.0",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "humanize-ms": "^1.2.1"
@@ -9698,7 +9620,6 @@
     },
     "node_modules/asynckit": {
       "version": "0.4.0",
-      "dev": true,
       "license": "MIT"
     },
     "node_modules/auto-bind": {
@@ -11476,7 +11397,6 @@
     },
     "node_modules/combined-stream": {
       "version": "1.0.8",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "delayed-stream": "~1.0.0"
@@ -12115,7 +12035,6 @@
     },
     "node_modules/delayed-stream": {
       "version": "1.0.0",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=0.4.0"
@@ -12138,7 +12057,6 @@
     },
     "node_modules/dequal": {
       "version": "2.0.3",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=6"
@@ -12258,7 +12176,6 @@
     },
     "node_modules/diff-match-patch": {
       "version": "1.0.5",
-      "dev": true,
       "license": "Apache-2.0"
     },
     "node_modules/diff-sequences": {
@@ -12758,7 +12675,6 @@
     },
     "node_modules/es-set-tostringtag": {
       "version": "2.1.0",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "es-errors": "^1.3.0",
@@ -13047,7 +12963,6 @@
     },
     "node_modules/event-target-shim": {
       "version": "5.0.1",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=6"
@@ -14086,7 +14001,6 @@
     },
     "node_modules/form-data": {
       "version": "4.0.4",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "asynckit": "^0.4.0",
@@ -14101,7 +14015,6 @@
     },
     "node_modules/form-data-encoder": {
       "version": "1.7.2",
-      "dev": true,
       "license": "MIT"
     },
     "node_modules/format": {
@@ -14113,7 +14026,6 @@
     },
     "node_modules/formdata-node": {
       "version": "4.4.1",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "node-domexception": "1.0.0",
@@ -14774,7 +14686,6 @@
     },
     "node_modules/has-tostringtag": {
       "version": "1.0.2",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "has-symbols": "^1.0.3"
@@ -15349,7 +15260,6 @@
     },
     "node_modules/humanize-ms": {
       "version": "1.2.1",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "ms": "^2.0.0"
@@ -18136,7 +18046,6 @@
     },
     "node_modules/jsondiffpatch": {
       "version": "0.6.0",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "@types/diff-match-patch": "^1.0.36",
@@ -18400,7 +18309,6 @@
       "os": [
         "darwin"
       ],
-      "peer": true,
       "engines": {
         "node": ">= 12.0.0"
       },
@@ -20318,7 +20226,6 @@
     },
     "node_modules/nanoid": {
       "version": "3.3.11",
-      "devOptional": true,
       "funding": [
         {
           "type": "github",
@@ -20427,7 +20334,6 @@
     },
     "node_modules/node-domexception": {
       "version": "1.0.0",
-      "dev": true,
       "funding": [
         {
           "type": "github",
@@ -21292,7 +21198,6 @@
     },
     "node_modules/partial-json": {
       "version": "0.1.7",
-      "dev": true,
       "license": "MIT"
     },
     "node_modules/patch-console": {
@@ -22065,7 +21970,6 @@
     },
     "node_modules/react": {
       "version": "19.1.1",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=0.10.0"
@@ -23192,7 +23096,6 @@
     },
     "node_modules/secure-json-parse": {
       "version": "2.7.0",
-      "dev": true,
       "license": "BSD-3-Clause"
     },
     "node_modules/selderee": {
@@ -24240,26 +24143,6 @@
         "url": "https://github.com/sponsors/sindresorhus"
       }
     },
-    "node_modules/strip-literal": {
-      "version": "3.1.0",
-      "resolved": "https://registry.npmjs.org/strip-literal/-/strip-literal-3.1.0.tgz",
-      "integrity": "sha512-8r3mkIM/2+PpjHoOtiAW8Rg3jJLHaV7xPwG+YRGrv6FP0wwk/toTpATxWYOW0BKdWwl82VT2tFYi5DlROa0Mxg==",
-      "dev": true,
-      "license": "MIT",
-      "dependencies": {
-        "js-tokens": "^9.0.1"
-      },
-      "funding": {
-        "url": "https://github.com/sponsors/antfu"
-      }
-    },
-    "node_modules/strip-literal/node_modules/js-tokens": {
-      "version": "9.0.1",
-      "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz",
-      "integrity": "sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==",
-      "dev": true,
-      "license": "MIT"
-    },
     "node_modules/strnum": {
       "version": "2.1.1",
       "funding": [
@@ -24437,7 +24320,6 @@
     },
     "node_modules/swr": {
       "version": "2.3.6",
-      "dev": true,
       "license": "MIT",
       "dependencies": {
         "dequal": "^2.0.3",
@@ -24630,7 +24512,6 @@
     },
     "node_modules/throttleit": {
       "version": "2.1.0",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">=18"
@@ -25733,7 +25614,6 @@
     },
     "node_modules/use-sync-external-store": {
       "version": "1.5.0",
-      "dev": true,
       "license": "MIT",
       "peerDependencies": {
         "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
@@ -25986,7 +25866,6 @@
       "os": [
         "darwin"
       ],
-      "peer": true,
      "engines": {
        "node": ">=12"
      }
@@ -26142,7 +26021,6 @@
     },
     "node_modules/web-streams-polyfill": {
       "version": "4.0.0-beta.3",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": ">= 14"
@@ -27136,9 +27014,19 @@
     },
     "packages/claude-code-plugin": {
       "name": "@tm/claude-code-plugin",
-      "version": "0.0.2",
+      "version": "0.0.1",
       "license": "MIT WITH Commons-Clause"
     },
+    "packages/claude-code-plugin/node_modules/@types/node": {
+      "version": "20.19.20",
+      "resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.20.tgz",
+      "integrity": "sha512-2Q7WS25j4pS1cS8yw3d6buNCVJukOTeQ39bAnwR6sOJbaxvyCGebzTMypDFN82CxBLnl+lSWVdCCWbRY6y9yZQ==",
+      "extraneous": true,
+      "license": "MIT",
+      "dependencies": {
+        "undici-types": "~6.21.0"
+      }
+    },
     "packages/tm-core": {
       "name": "@tm/core",
       "license": "MIT",
@@ -27149,7 +27037,6 @@
       "devDependencies": {
         "@types/node": "^22.10.5",
         "@vitest/coverage-v8": "^3.2.4",
-        "strip-literal": "3.1.0",
         "typescript": "^5.9.2",
         "vitest": "^3.2.4"
       }
@@ -27459,8 +27346,6 @@
     },
     "packages/tm-core/node_modules/vitest": {
       "version": "3.2.4",
-      "resolved": "https://registry.npmjs.org/vitest/-/vitest-3.2.4.tgz",
-      "integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==",
       "dev": true,
       "license": "MIT",
       "dependencies": {

@@ -1,6 +1,6 @@
 {
   "name": "task-master-ai",
-  "version": "0.29.0",
+  "version": "0.29.0-rc.0",
   "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
   "main": "index.js",
   "type": "module",

@@ -1,5 +1,3 @@
 # @tm/ai-sdk-provider-grok-cli
 ## null
-## null

@@ -4,6 +4,4 @@
 ## null
-## null
 ## 1.0.1

@@ -1,3 +0,0 @@
-# @tm/claude-code-plugin
-## 0.0.2

@@ -1,6 +1,6 @@
 {
   "name": "@tm/claude-code-plugin",
-  "version": "0.0.2",
+  "version": "0.0.1",
   "description": "Task Master AI plugin for Claude Code - AI-powered task management with commands, agents, and MCP integration",
   "type": "module",
   "private": true,

@@ -4,8 +4,6 @@
 ## null
-## null
 ## 0.26.1
 All notable changes to the @task-master/tm-core package will be documented in this file.

@@ -37,8 +37,7 @@
     "@types/node": "^22.10.5",
     "@vitest/coverage-v8": "^3.2.4",
     "typescript": "^5.9.2",
-    "vitest": "^3.2.4",
-    "strip-literal": "3.1.0"
+    "vitest": "^3.2.4"
   },
   "files": ["src", "README.md", "CHANGELOG.md"],
   "keywords": ["task-management", "typescript", "ai", "prd", "parser"],

@@ -21,21 +21,16 @@ const CredentialStoreSpy = vi.fn();
 vi.mock('./credential-store.js', () => {
   return {
     CredentialStore: class {
-      static getInstance(config?: any) {
-        return new (this as any)(config);
-      }
-      static resetInstance() {
-        // Mock reset instance method
-      }
       constructor(config: any) {
         CredentialStoreSpy(config);
-        this.getCredentials = vi.fn(() => null);
       }
-      getCredentials(_options?: any) {
+      getCredentials() {
         return null;
       }
       saveCredentials() {}
       clearCredentials() {}
-      hasCredentials() {
+      hasValidCredentials() {
         return false;
       }
     }
@@ -90,7 +85,7 @@ describe('AuthManager Singleton', () => {
     expect(instance1).toBe(instance2);
   });
-  it('should use config on first call', async () => {
+  it('should use config on first call', () => {
     const config = {
       baseUrl: 'https://test.auth.com',
       configDir: '/test/config',
@@ -106,7 +101,7 @@ describe('AuthManager Singleton', () => {
     // Verify the config is passed to internal components through observable behavior
     // getCredentials would look in the configured file path
-    const credentials = await instance.getCredentials();
+    const credentials = instance.getCredentials();
     expect(credentials).toBeNull(); // File doesn't exist, but config was propagated correctly
   });
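The mock above stubs `CredentialStore` with a plain class, and the richer variant additionally mirrors the `getInstance`/`resetInstance` static surface so callers that go through the singleton accessor keep working. A minimal standalone sketch of that singleton-stub pattern — plain Node, no vitest, and both class names here are stand-ins rather than the real implementations:

```javascript
// Stand-in for the real singleton class (the actual CredentialStore differs).
class CredentialStore {
	static getInstance(config) {
		if (!CredentialStore.instance) {
			CredentialStore.instance = new CredentialStore(config);
		}
		return CredentialStore.instance;
	}
	static resetInstance() {
		CredentialStore.instance = undefined;
	}
	constructor(config) {
		this.config = config;
	}
	getCredentials() {
		return { token: 'real-token' };
	}
}

// Test double that mirrors the static surface of the real class, so code
// written against CredentialStore.getInstance(...) works unchanged under it.
class StubCredentialStore {
	static getInstance(config) {
		return new StubCredentialStore(config);
	}
	static resetInstance() {
		// nothing cached, nothing to reset
	}
	constructor(config) {
		this.config = config;
	}
	getCredentials() {
		return null; // simulate "no stored credentials"
	}
}

const store = StubCredentialStore.getInstance({ configDir: '/tmp/test' });
console.log(store.getCredentials()); // null
```

The point of the static mirror: if the stub only defines instance methods, any production code path that calls the static `getInstance` throws, which is exactly the gap the removed `static getInstance`/`static resetInstance` lines were covering.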

@@ -36,10 +36,7 @@ export class AuthManager {
     this.oauthService = new OAuthService(this.credentialStore, config);
     // Initialize Supabase client with session restoration
-    // Fire-and-forget with catch handler to prevent unhandled rejections
-    this.initializeSupabaseSession().catch(() => {
-      // Errors are already logged in initializeSupabaseSession
-    });
+    this.initializeSupabaseSession();
   }
   /**
@@ -81,8 +78,6 @@
   /**
    * Get stored authentication credentials
-   * Returns credentials as-is (even if expired). Refresh must be triggered explicitly
-   * via refreshToken() or will occur automatically when using the Supabase client for API calls.
    */
   getCredentials(): AuthCredentials | null {
     return this.credentialStore.getCredentials();
@@ -167,11 +162,10 @@
   }
   /**
-   * Check if authenticated (credentials exist, regardless of expiration)
-   * @returns true if credentials are stored, including expired credentials
+   * Check if authenticated
    */
   isAuthenticated(): boolean {
-    return this.credentialStore.hasCredentials();
+    return this.credentialStore.hasValidCredentials();
   }
   /**
@@ -185,7 +179,7 @@
   /**
    * Update the user context (org/brief selection)
    */
-  updateContext(context: Partial<UserContext>): void {
+  async updateContext(context: Partial<UserContext>): Promise<void> {
     const credentials = this.getCredentials();
     if (!credentials) {
       throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
@@ -211,7 +205,7 @@
   /**
    * Clear the user context
    */
-  clearContext(): void {
+  async clearContext(): Promise<void> {
     const credentials = this.getCredentials();
     if (!credentials) {
       throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
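The constructor hunk above drops the `.catch()` guard on `initializeSupabaseSession()`; without one, a promise launched fire-and-forget from a constructor turns any init failure into an unhandled rejection. A minimal sketch of the guarded pattern — the `Service` class and its fields are illustrative, not the real `AuthManager`:

```javascript
// Sketch: launching async init from a constructor without leaking rejections.
class Service {
	constructor() {
		// Fire-and-forget: we deliberately don't await, but we attach a catch
		// handler so a failed init never becomes an unhandled rejection.
		this.ready = this.init().catch((err) => {
			this.initError = err; // record the failure instead of crashing
		});
	}

	async init() {
		throw new Error('network down'); // simulate a failing session restore
	}
}

const svc = new Service();
// Later callers can still await readiness and inspect the outcome.
svc.ready.then(() => {
	console.log(svc.initError?.message); // "network down"; the process keeps running
});
```

Keeping the settled promise on `this.ready` also gives callers a handle to await, which a bare `this.init()` call discards.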

@@ -1,308 +0,0 @@
/**
 * @fileoverview Unit tests for CredentialStore token expiration handling
 */
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { CredentialStore } from './credential-store';
import type { AuthCredentials } from './types';

describe('CredentialStore - Token Expiration', () => {
	let credentialStore: CredentialStore;
	let tmpDir: string;
	let authFile: string;

	beforeEach(() => {
		// Create temp directory for test credentials
		tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-cred-test-'));
		authFile = path.join(tmpDir, 'auth.json');
		// Create instance with test config
		CredentialStore.resetInstance();
		credentialStore = CredentialStore.getInstance({
			configDir: tmpDir,
			configFile: authFile
		});
	});

	afterEach(() => {
		// Clean up
		try {
			if (fs.existsSync(tmpDir)) {
				fs.rmSync(tmpDir, { recursive: true, force: true });
			}
		} catch {
			// Ignore cleanup errors
		}
		CredentialStore.resetInstance();
	});

	describe('Expiration Detection', () => {
		it('should return null for expired token', () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				refreshToken: 'refresh-token',
				userId: 'test-user',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
				savedAt: new Date().toISOString()
			};
			credentialStore.saveCredentials(expiredCredentials);

			const retrieved = credentialStore.getCredentials({ allowExpired: false });
			expect(retrieved).toBeNull();
		});

		it('should return credentials for valid token', () => {
			const validCredentials: AuthCredentials = {
				token: 'valid-token',
				refreshToken: 'refresh-token',
				userId: 'test-user',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
				savedAt: new Date().toISOString()
			};
			credentialStore.saveCredentials(validCredentials);

			const retrieved = credentialStore.getCredentials({ allowExpired: false });
			expect(retrieved).not.toBeNull();
			expect(retrieved?.token).toBe('valid-token');
		});

		it('should return expired token when allowExpired is true', () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token',
				refreshToken: 'refresh-token',
				userId: 'test-user',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date().toISOString()
			};
			credentialStore.saveCredentials(expiredCredentials);

			const retrieved = credentialStore.getCredentials({ allowExpired: true });
			expect(retrieved).not.toBeNull();
			expect(retrieved?.token).toBe('expired-token');
		});

		it('should return expired token by default (allowExpired defaults to true)', () => {
			const expiredCredentials: AuthCredentials = {
				token: 'expired-token-default',
				refreshToken: 'refresh-token',
				userId: 'test-user',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() - 60000).toISOString(),
				savedAt: new Date().toISOString()
			};
			credentialStore.saveCredentials(expiredCredentials);

			// Call without options - should default to allowExpired: true
			const retrieved = credentialStore.getCredentials();
			expect(retrieved).not.toBeNull();
			expect(retrieved?.token).toBe('expired-token-default');
		});
	});

	describe('Clock Skew Tolerance', () => {
		it('should reject token expiring within 30-second buffer', () => {
			// Token expires in 15 seconds (within 30-second buffer)
			const almostExpiredCredentials: AuthCredentials = {
				token: 'almost-expired-token',
				refreshToken: 'refresh-token',
				userId: 'test-user',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() + 15000).toISOString(),
				savedAt: new Date().toISOString()
			};
			credentialStore.saveCredentials(almostExpiredCredentials);

			const retrieved = credentialStore.getCredentials({ allowExpired: false });
			expect(retrieved).toBeNull();
		});

		it('should accept token expiring outside 30-second buffer', () => {
			// Token expires in 60 seconds (outside 30-second buffer)
			const validCredentials: AuthCredentials = {
				token: 'valid-token',
				refreshToken: 'refresh-token',
				userId: 'test-user',
				email: 'test@example.com',
				expiresAt: new Date(Date.now() + 60000).toISOString(),
				savedAt: new Date().toISOString()
			};
			credentialStore.saveCredentials(validCredentials);

			const retrieved = credentialStore.getCredentials({ allowExpired: false });
			expect(retrieved).not.toBeNull();
expect(retrieved?.token).toBe('valid-token');
});
});
describe('Timestamp Format Handling', () => {
it('should handle ISO string timestamps', () => {
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).not.toBeNull();
expect(typeof retrieved?.expiresAt).toBe('number'); // Normalized to number
});
it('should handle numeric timestamps', () => {
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: Date.now() + 3600000,
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).not.toBeNull();
expect(typeof retrieved?.expiresAt).toBe('number');
});
it('should return null for invalid timestamp format', () => {
// Manually write invalid timestamp to file
const invalidCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: 'invalid-date',
savedAt: new Date().toISOString()
};
fs.writeFileSync(authFile, JSON.stringify(invalidCredentials), {
mode: 0o600
});
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).toBeNull();
});
it('should return null for missing expiresAt', () => {
const credentialsWithoutExpiry = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
savedAt: new Date().toISOString()
};
fs.writeFileSync(authFile, JSON.stringify(credentialsWithoutExpiry), {
mode: 0o600
});
const retrieved = credentialStore.getCredentials({ allowExpired: false });
expect(retrieved).toBeNull();
});
});
describe('Storage Persistence', () => {
it('should persist expiresAt as ISO string', () => {
const expiryTime = Date.now() + 3600000;
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: expiryTime,
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
// Read raw file to verify format
const fileContent = fs.readFileSync(authFile, 'utf-8');
const parsed = JSON.parse(fileContent);
// Should be stored as ISO string
expect(typeof parsed.expiresAt).toBe('string');
expect(parsed.expiresAt).toMatch(/^\d{4}-\d{2}-\d{2}T/); // ISO format
});
it('should normalize timestamp on retrieval', () => {
const credentials: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(credentials);
const retrieved = credentialStore.getCredentials({ allowExpired: false });
// Should be normalized to number for runtime use
expect(typeof retrieved?.expiresAt).toBe('number');
});
});
describe('hasCredentials', () => {
it('should return true for expired credentials', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
expect(credentialStore.hasCredentials()).toBe(true);
});
it('should return true for valid credentials', () => {
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'refresh-token',
userId: 'test-user',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
expect(credentialStore.hasCredentials()).toBe(true);
});
it('should return false when no credentials exist', () => {
expect(credentialStore.hasCredentials()).toBe(false);
});
});
});
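Read together, these tests pin down a single predicate: a token counts as expired once `now` enters a 30-second clock-skew buffer before `expiresAt`, and an unparseable or missing expiration is treated as expired. A minimal standalone sketch of that check (the constant and helper names are illustrative, not the actual `CredentialStore` internals):

```typescript
// Illustrative sketch of the expiry predicate the tests above exercise;
// the real CredentialStore may differ. 30s mirrors the tested buffer.
const CLOCK_SKEW_MS = 30_000;

// Normalize an ISO-string or epoch-milliseconds timestamp to milliseconds.
function toEpochMs(expiresAt: string | number | undefined): number | null {
	if (expiresAt === undefined) return null;
	const ms = typeof expiresAt === 'number' ? expiresAt : Date.parse(expiresAt);
	return Number.isNaN(ms) ? null : ms;
}

// True when the token is expired, inside the skew buffer, or unparseable.
function isEffectivelyExpired(
	expiresAt: string | number | undefined,
	now: number = Date.now()
): boolean {
	const ms = toEpochMs(expiresAt);
	if (ms === null) return true;
	return now >= ms - CLOCK_SKEW_MS;
}
```

Under this predicate a token expiring 15 seconds from now is already rejected while one expiring 60 seconds out is still accepted, matching the two clock-skew cases above.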

View File

@@ -197,7 +197,7 @@ describe('CredentialStore', () => {
 JSON.stringify(mockCredentials)
 );
-const result = store.getCredentials({ allowExpired: false });
+const result = store.getCredentials();
 expect(result).toBeNull();
 expect(mockLogger.warn).toHaveBeenCalledWith(
@@ -226,31 +226,6 @@ describe('CredentialStore', () => {
 expect(result).not.toBeNull();
 expect(result?.token).toBe('expired-token');
 });
-it('should return expired tokens by default (allowExpired defaults to true)', () => {
-const expiredTimestamp = Date.now() - 3600000; // 1 hour ago
-const mockCredentials = {
-token: 'expired-token-default',
-userId: 'user-expired',
-expiresAt: expiredTimestamp,
-tokenType: 'standard',
-savedAt: new Date().toISOString()
-};
-vi.mocked(fs.existsSync).mockReturnValue(true);
-vi.mocked(fs.readFileSync).mockReturnValue(
-JSON.stringify(mockCredentials)
-);
-// Call without options - should default to allowExpired: true
-const result = store.getCredentials();
-expect(result).not.toBeNull();
-expect(result?.token).toBe('expired-token-default');
-expect(mockLogger.warn).not.toHaveBeenCalledWith(
-expect.stringContaining('Authentication token has expired')
-);
-});
 });
 describe('saveCredentials with timestamp normalization', () => {
@@ -476,7 +451,7 @@ describe('CredentialStore', () => {
 });
 });
-describe('hasCredentials', () => {
+describe('hasValidCredentials', () => {
 it('should return true when valid unexpired credentials exist', () => {
 const futureDate = new Date(Date.now() + 3600000); // 1 hour from now
 const credentials = {
@@ -490,10 +465,10 @@ describe('CredentialStore', () => {
 vi.mocked(fs.existsSync).mockReturnValue(true);
 vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
-expect(store.hasCredentials()).toBe(true);
+expect(store.hasValidCredentials()).toBe(true);
 });
-it('should return true when credentials are expired', () => {
+it('should return false when credentials are expired', () => {
 const pastDate = new Date(Date.now() - 3600000); // 1 hour ago
 const credentials = {
 token: 'expired-token',
@@ -506,13 +481,13 @@ describe('CredentialStore', () => {
 vi.mocked(fs.existsSync).mockReturnValue(true);
 vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
-expect(store.hasCredentials()).toBe(true);
+expect(store.hasValidCredentials()).toBe(false);
 });
 it('should return false when no credentials exist', () => {
 vi.mocked(fs.existsSync).mockReturnValue(false);
-expect(store.hasCredentials()).toBe(false);
+expect(store.hasValidCredentials()).toBe(false);
 });
 it('should return false when file contains invalid JSON', () => {
@@ -520,7 +495,7 @@ describe('CredentialStore', () => {
 vi.mocked(fs.readFileSync).mockReturnValue('invalid json {');
 vi.mocked(fs.renameSync).mockImplementation(() => undefined);
-expect(store.hasCredentials()).toBe(false);
+expect(store.hasValidCredentials()).toBe(false);
 });
 it('should return false for credentials without expiry', () => {
@@ -535,7 +510,7 @@ describe('CredentialStore', () => {
 vi.mocked(fs.readFileSync).mockReturnValue(JSON.stringify(credentials));
 // Credentials without expiry are considered invalid
-expect(store.hasCredentials()).toBe(false);
+expect(store.hasValidCredentials()).toBe(false);
 // Should log warning about missing expiration
 expect(mockLogger.warn).toHaveBeenCalledWith(
@@ -543,14 +518,14 @@ describe('CredentialStore', () => {
 );
 });
-it('should use allowExpired=true', () => {
+it('should use allowExpired=false by default', () => {
 // Spy on getCredentials to verify it's called with correct params
 const getCredentialsSpy = vi.spyOn(store, 'getCredentials');
 vi.mocked(fs.existsSync).mockReturnValue(false);
-store.hasCredentials();
-expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: true });
+store.hasValidCredentials();
+expect(getCredentialsSpy).toHaveBeenCalledWith({ allowExpired: false });
 });
 });

View File

@@ -54,12 +54,9 @@ export class CredentialStore {
 /**
 * Get stored authentication credentials
-* @param options.allowExpired - Whether to return expired credentials (default: true)
 * @returns AuthCredentials with expiresAt as number (milliseconds) for runtime use
 */
-getCredentials({
-allowExpired = true
-}: { allowExpired?: boolean } = {}): AuthCredentials | null {
+getCredentials(options?: { allowExpired?: boolean }): AuthCredentials | null {
 try {
 if (!fs.existsSync(this.config.configFile)) {
 return null;
@@ -93,6 +90,7 @@ export class CredentialStore {
 // Check if the token has expired (with clock skew tolerance)
 const now = Date.now();
+const allowExpired = options?.allowExpired ?? false;
 if (now >= expiresAtMs - this.CLOCK_SKEW_MS && !allowExpired) {
 this.logger.warn(
 'Authentication token has expired or is about to expire',
@@ -105,7 +103,7 @@ export class CredentialStore {
 return null;
 }
-// Return credentials (even if expired) to enable refresh flows
+// Return valid token
 return authData;
 } catch (error) {
 this.logger.error(
@@ -201,11 +199,10 @@ export class CredentialStore {
 }
 /**
-* Check if credentials exist (regardless of expiration status)
-* @returns true if credentials are stored, including expired credentials
+* Check if credentials exist and are valid
 */
-hasCredentials(): boolean {
-const credentials = this.getCredentials({ allowExpired: true });
+hasValidCredentials(): boolean {
+const credentials = this.getCredentials({ allowExpired: false });
 return credentials !== null;
 }
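The rename in this hunk is also a semantic change worth calling out: `hasCredentials` answered "is anything stored, even expired?", while `hasValidCredentials` answers "is an unexpired token stored?". A hedged sketch of the two predicates (only the names come from the diff; the stored shape is simplified):

```typescript
// Simplified stand-in for stored credentials; the real type carries more fields.
interface StoredCreds {
	token: string;
	expiresAt: number; // epoch milliseconds
}

// Old semantics: expired credentials still count, which kept refresh flows alive.
function hasCredentials(stored: StoredCreds | null): boolean {
	return stored !== null;
}

// New semantics: expired credentials are treated as absent.
function hasValidCredentials(
	stored: StoredCreds | null,
	now: number = Date.now()
): boolean {
	return stored !== null && now < stored.expiresAt;
}
```

Callers that previously used the existence check to decide whether a refresh was worth attempting now need the expired record itself, which is why `getCredentials({ allowExpired: true })` remains available.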

View File

@@ -281,26 +281,15 @@ export class OAuthService {
 // Exchange code for session using PKCE
 const session = await this.supabaseClient.exchangeCodeForSession(code);
-// Calculate expiration - can be overridden with TM_TOKEN_EXPIRY_MINUTES
-let expiresAt: string | undefined;
-const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
-if (tokenExpiryMinutes) {
-const minutes = parseInt(tokenExpiryMinutes);
-expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
-this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
-} else {
-expiresAt = session.expires_at
-? new Date(session.expires_at * 1000).toISOString()
-: undefined;
-}
 // Save authentication data
 const authData: AuthCredentials = {
 token: session.access_token,
 refreshToken: session.refresh_token,
 userId: session.user.id,
 email: session.user.email,
-expiresAt,
+expiresAt: session.expires_at
+? new Date(session.expires_at * 1000).toISOString()
+: undefined,
 tokenType: 'standard',
 savedAt: new Date().toISOString()
 };
@@ -351,18 +340,10 @@ export class OAuthService {
 // Get user info from the session
 const user = await this.supabaseClient.getUser();
-// Calculate expiration time - can be overridden with TM_TOKEN_EXPIRY_MINUTES
-let expiresAt: string | undefined;
-const tokenExpiryMinutes = process.env.TM_TOKEN_EXPIRY_MINUTES;
-if (tokenExpiryMinutes) {
-const minutes = parseInt(tokenExpiryMinutes);
-expiresAt = new Date(Date.now() + minutes * 60 * 1000).toISOString();
-this.logger.warn(`Token expiry overridden to ${minutes} minute(s)`);
-} else {
-expiresAt = expiresIn
-? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
-: undefined;
-}
+// Calculate expiration time
+const expiresAt = expiresIn
+? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
+: undefined;
 // Save authentication data
 const authData: AuthCredentials = {
@@ -370,7 +351,7 @@ export class OAuthService {
 refreshToken: refreshToken || undefined,
 userId: user?.id || 'unknown',
 email: user?.email,
-expiresAt,
+expiresAt: expiresAt,
 tokenType: 'standard',
 savedAt: new Date().toISOString()
 };
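Both hunks reduce to the same unit conversions, which are easy to get wrong: Supabase's `session.expires_at` is epoch seconds, the callback's `expiresIn` is a seconds-from-now string, and the credential store persists an ISO-8601 string. A sketch of the two conversions (the function names are illustrative, not from the codebase):

```typescript
// Supabase reports expires_at in epoch seconds; the store wants an ISO string.
function expiryFromSession(expiresAtSeconds?: number): string | undefined {
	return expiresAtSeconds !== undefined
		? new Date(expiresAtSeconds * 1000).toISOString()
		: undefined;
}

// OAuth callbacks deliver expires_in as a string of seconds from now.
function expiryFromExpiresIn(
	expiresIn?: string,
	now: number = Date.now()
): string | undefined {
	return expiresIn
		? new Date(now + parseInt(expiresIn, 10) * 1000).toISOString()
		: undefined;
}
```

Either way, a missing input stays `undefined` rather than becoming an `Invalid Date`, which is what lets the store's "no expiry means invalid" rule kick in downstream.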

View File

@@ -98,11 +98,11 @@ export class SupabaseSessionStorage implements SupportedStorage {
 // Only handle Supabase session keys
 if (key === STORAGE_KEY || key.includes('auth-token')) {
 try {
-this.logger.info('Supabase called setItem - storing refreshed session');
 // Parse the session and update our credentials
 const sessionUpdates = this.parseSessionToCredentials(value);
-const existingCredentials = this.store.getCredentials();
+const existingCredentials = this.store.getCredentials({
+allowExpired: true
+});
 if (sessionUpdates.token) {
 const updatedCredentials: AuthCredentials = {
@@ -113,9 +113,6 @@ export class SupabaseSessionStorage implements SupportedStorage {
 } as AuthCredentials;
 this.store.saveCredentials(updatedCredentials);
-this.logger.info(
-'Successfully saved refreshed credentials from Supabase'
-);
 }
 } catch (error) {
 this.logger.error('Error setting session:', error);

View File

@@ -17,11 +17,10 @@ export class SupabaseAuthClient {
 private client: SupabaseJSClient | null = null;
 private sessionStorage: SupabaseSessionStorage;
 private logger = getLogger('SupabaseAuthClient');
-private credentialStore: CredentialStore;
 constructor() {
-this.credentialStore = CredentialStore.getInstance();
-this.sessionStorage = new SupabaseSessionStorage(this.credentialStore);
+const credentialStore = CredentialStore.getInstance();
+this.sessionStorage = new SupabaseSessionStorage(credentialStore);
 }
 /**

View File

@@ -47,8 +47,8 @@ export class SupabaseTaskRepository {
 * Gets the current brief ID from auth context
 * @throws {Error} If no brief is selected
 */
-private async getBriefIdOrThrow(): Promise<string> {
-const context = await this.authManager.getContext();
+private getBriefIdOrThrow(): string {
+const context = this.authManager.getContext();
 if (!context?.briefId) {
 throw new Error(
 'No brief selected. Please select a brief first using: tm context brief'
@@ -61,7 +61,7 @@ export class SupabaseTaskRepository {
 _projectId?: string,
 options?: LoadTasksOptions
 ): Promise<Task[]> {
-const briefId = await this.getBriefIdOrThrow();
+const briefId = this.getBriefIdOrThrow();
 // Build query with filters
 let query = this.supabase
@@ -114,7 +114,7 @@ export class SupabaseTaskRepository {
 }
 async getTask(_projectId: string, taskId: string): Promise<Task | null> {
-const briefId = await this.getBriefIdOrThrow();
+const briefId = this.getBriefIdOrThrow();
 const { data, error } = await this.supabase
 .from('tasks')
@@ -157,7 +157,7 @@ export class SupabaseTaskRepository {
 taskId: string,
 updates: Partial<Task>
 ): Promise<Task> {
-const briefId = await this.getBriefIdOrThrow();
+const briefId = this.getBriefIdOrThrow();
 // Validate updates using Zod schema
 try {

View File

@@ -105,7 +105,7 @@ export class ExportService {
 }
 // Get current context
-const context = await this.authManager.getContext();
+const context = this.authManager.getContext();
 // Determine org and brief IDs
 let orgId = options.orgId || context?.orgId;
@@ -232,7 +232,7 @@ export class ExportService {
 hasBrief: boolean;
 context: UserContext | null;
 }> {
-const context = await this.authManager.getContext();
+const context = this.authManager.getContext();
 return {
 hasOrg: !!context?.orgId,
@@ -362,7 +362,7 @@ export class ExportService {
 if (useAPIEndpoint) {
 // Use the new bulk import API endpoint
-const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks`;
+const apiUrl = `${process.env.TM_PUBLIC_BASE_DOMAIN}/ai/api/v1/briefs/${briefId}/tasks/bulk`;
 // Transform tasks to flat structure for API
 const flatTasks = this.transformTasksForBulkImport(tasks);
@@ -370,16 +370,16 @@ export class ExportService {
 // Prepare request body
 const requestBody = {
 source: 'task-master-cli',
-accountId: orgId,
 options: {
 dryRun: false,
 stopOnError: false
 },
+accountId: orgId,
 tasks: flatTasks
 };
 // Get auth token
-const credentials = await this.authManager.getCredentials();
+const credentials = this.authManager.getCredentials();
 if (!credentials || !credentials.token) {
 throw new Error('Not authenticated');
 }
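For readers reconstructing the bulk-import call from these context lines: the endpoint now ends in `/tasks/bulk`, and `accountId` moved after `options` in the body (a cosmetic reorder; JSON key order does not change the payload's meaning). A sketch of the resulting request body (field names from the diff; the task shape is illustrative):

```typescript
interface BulkImportBody {
	source: string;
	options: { dryRun: boolean; stopOnError: boolean };
	accountId: string;
	tasks: Array<Record<string, unknown>>;
}

// Build the body posted to .../briefs/{briefId}/tasks/bulk.
function buildBulkImportBody(
	orgId: string,
	flatTasks: Array<Record<string, unknown>>
): BulkImportBody {
	return {
		source: 'task-master-cli',
		options: { dryRun: false, stopOnError: false },
		accountId: orgId,
		tasks: flatTasks
	};
}
```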

View File

@@ -119,7 +119,7 @@ export class ApiStorage implements IStorage {
 private async loadTagsIntoCache(): Promise<void> {
 try {
 const authManager = AuthManager.getInstance();
-const context = await authManager.getContext();
+const context = authManager.getContext();
 // If we have a selected brief, create a virtual "tag" for it
 if (context?.briefId) {
@@ -152,7 +152,7 @@ export class ApiStorage implements IStorage {
 try {
 const authManager = AuthManager.getInstance();
-const context = await authManager.getContext();
+const context = authManager.getContext();
 // If no brief is selected in context, throw an error
 if (!context?.briefId) {
@@ -318,7 +318,7 @@ export class ApiStorage implements IStorage {
 try {
 const authManager = AuthManager.getInstance();
-const context = await authManager.getContext();
+const context = authManager.getContext();
 // In our API-based system, we only have one "tag" at a time - the current brief
 if (context?.briefId) {

View File

@@ -72,7 +72,7 @@ export class StorageFactory {
 { storageType: 'api', missing }
 );
 }
-// Use auth token from AuthManager (synchronous - no auto-refresh here)
+// Use auth token from AuthManager
 const credentials = authManager.getCredentials();
 if (credentials) {
 // Merge with existing storage config, ensuring required fields

View File

@@ -1,139 +0,0 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';
import type { Session } from '@supabase/supabase-js';
import { AuthManager } from '../../src/auth/auth-manager';
import { CredentialStore } from '../../src/auth/credential-store';
import type { AuthCredentials } from '../../src/auth/types';
describe('AuthManager Token Refresh', () => {
let authManager: AuthManager;
let credentialStore: CredentialStore;
let tmpDir: string;
let authFile: string;
beforeEach(() => {
// Reset singletons
AuthManager.resetInstance();
CredentialStore.resetInstance();
// Create temporary directory for test isolation
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-refresh-'));
authFile = path.join(tmpDir, 'auth.json');
// Initialize AuthManager with test config (this will create CredentialStore internally)
authManager = AuthManager.getInstance({
configDir: tmpDir,
configFile: authFile
});
// Get the CredentialStore instance that AuthManager created
credentialStore = CredentialStore.getInstance();
credentialStore.clearCredentials();
});
afterEach(() => {
// Clean up
try {
credentialStore.clearCredentials();
} catch {
// Ignore cleanup errors
}
AuthManager.resetInstance();
CredentialStore.resetInstance();
vi.restoreAllMocks();
// Remove temporary directory
if (tmpDir && fs.existsSync(tmpDir)) {
fs.rmSync(tmpDir, { recursive: true, force: true });
}
});
it('should return expired credentials to enable refresh flows', () => {
// Set up expired credentials with refresh token
const expiredCredentials: AuthCredentials = {
token: 'expired_access_token',
refreshToken: 'valid_refresh_token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
// Get credentials should return them even if expired
// Refresh will be handled by explicit calls or client operations
const credentials = authManager.getCredentials();
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired_access_token');
expect(credentials?.refreshToken).toBe('valid_refresh_token');
});
it('should return valid credentials', () => {
// Set up valid (non-expired) credentials
const validCredentials: AuthCredentials = {
token: 'valid_access_token',
refreshToken: 'valid_refresh_token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(), // Expires in 1 hour
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
const credentials = authManager.getCredentials();
expect(credentials?.token).toBe('valid_access_token');
});
it('should return expired credentials even without refresh token', () => {
// Set up expired credentials WITHOUT refresh token
// We still return them - it's up to the caller to handle
const expiredCredentials: AuthCredentials = {
token: 'expired_access_token',
refreshToken: undefined,
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 1000).toISOString(), // Expired 1 second ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
const credentials = authManager.getCredentials();
// Returns credentials even if expired
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired_access_token');
});
it('should return null if no credentials exist', () => {
const credentials = authManager.getCredentials();
expect(credentials).toBeNull();
});
it('should return credentials regardless of refresh token validity', () => {
// Set up expired credentials with refresh token
const expiredCredentials: AuthCredentials = {
token: 'expired_access_token',
refreshToken: 'invalid_refresh_token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 1000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
const credentials = authManager.getCredentials();
// Returns credentials - refresh will be attempted by the client which will handle failure
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired_access_token');
expect(credentials?.refreshToken).toBe('invalid_refresh_token');
});
});

View File

@@ -1,336 +0,0 @@
/**
* @fileoverview Integration tests for JWT token auto-refresh functionality
*
* These tests verify that expired tokens are automatically refreshed
* when making API calls through AuthManager.
*/
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';
import type { Session } from '@supabase/supabase-js';
import { AuthManager } from '../../src/auth/auth-manager';
import { CredentialStore } from '../../src/auth/credential-store';
import type { AuthCredentials } from '../../src/auth/types';
describe('AuthManager - Token Auto-Refresh Integration', () => {
let authManager: AuthManager;
let credentialStore: CredentialStore;
let tmpDir: string;
let authFile: string;
// Mock Supabase session that will be returned on refresh
const mockRefreshedSession: Session = {
access_token: 'new-access-token-xyz',
refresh_token: 'new-refresh-token-xyz',
token_type: 'bearer',
expires_at: Math.floor(Date.now() / 1000) + 3600, // 1 hour from now
expires_in: 3600,
user: {
id: 'test-user-id',
email: 'test@example.com',
aud: 'authenticated',
role: 'authenticated',
app_metadata: {},
user_metadata: {},
created_at: new Date().toISOString()
}
};
beforeEach(() => {
// Reset singletons
AuthManager.resetInstance();
CredentialStore.resetInstance();
// Create temporary directory for test isolation
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-auth-integration-'));
authFile = path.join(tmpDir, 'auth.json');
// Initialize AuthManager with test config (this will create CredentialStore internally)
authManager = AuthManager.getInstance({
configDir: tmpDir,
configFile: authFile
});
// Get the CredentialStore instance that AuthManager created
credentialStore = CredentialStore.getInstance();
credentialStore.clearCredentials();
});
afterEach(() => {
// Clean up
try {
credentialStore.clearCredentials();
} catch {
// Ignore cleanup errors
}
AuthManager.resetInstance();
CredentialStore.resetInstance();
vi.restoreAllMocks();
// Remove temporary directory
if (tmpDir && fs.existsSync(tmpDir)) {
fs.rmSync(tmpDir, { recursive: true, force: true });
}
});
describe('Expired Token Detection', () => {
it('should return expired token for Supabase to refresh', () => {
// Set up expired credentials
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(), // 1 minute ago
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Get credentials returns them even if expired
const credentials = authManager.getCredentials();
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
it('should return valid token', () => {
// Set up valid credentials
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 3600000).toISOString(), // 1 hour from now
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
expect(credentials?.token).toBe('valid-token');
});
});
describe('Token Refresh Flow', () => {
it('should manually refresh expired token and save new credentials', async () => {
const expiredCredentials: AuthCredentials = {
token: 'old-token',
refreshToken: 'old-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date(Date.now() - 3600000).toISOString(),
selectedContext: {
orgId: 'test-org',
briefId: 'test-brief',
updatedAt: new Date().toISOString()
}
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
vi.spyOn(
authManager['supabaseClient'],
'refreshSession'
).mockResolvedValue(mockRefreshedSession);
// Explicitly call refreshToken() method
const refreshedCredentials = await authManager.refreshToken();
expect(refreshedCredentials).not.toBeNull();
expect(refreshedCredentials.token).toBe('new-access-token-xyz');
expect(refreshedCredentials.refreshToken).toBe('new-refresh-token-xyz');
// Verify context was preserved
expect(refreshedCredentials.selectedContext?.orgId).toBe('test-org');
expect(refreshedCredentials.selectedContext?.briefId).toBe('test-brief');
// Verify new expiration is in the future
const newExpiry = new Date(refreshedCredentials.expiresAt!).getTime();
const now = Date.now();
expect(newExpiry).toBeGreaterThan(now);
});
it('should throw error if manual refresh fails', async () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'invalid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Mock refresh to fail
vi.spyOn(
authManager['supabaseClient'],
'refreshSession'
).mockRejectedValue(new Error('Refresh token expired'));
// Explicit refreshToken() call should throw
await expect(authManager.refreshToken()).rejects.toThrow();
});
it('should return expired credentials even without refresh token', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
// No refresh token
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Credentials are returned even without refresh token
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired-token');
expect(credentials?.refreshToken).toBeUndefined();
});
it('should return null if credentials missing expiresAt', () => {
const credentialsWithoutExpiry: AuthCredentials = {
token: 'test-token',
refreshToken: 'refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
// Missing expiresAt - invalid token
savedAt: new Date().toISOString()
} as any;
credentialStore.saveCredentials(credentialsWithoutExpiry);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Tokens without valid expiration are considered invalid
expect(credentials).toBeNull();
});
});
describe('Clock Skew Tolerance', () => {
it('should return credentials within 30-second expiry window', () => {
// Token expires in 15 seconds (within 30-second buffer)
// Supabase will handle refresh automatically
const almostExpiredCredentials: AuthCredentials = {
token: 'almost-expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 15000).toISOString(), // 15 seconds from now
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(almostExpiredCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Credentials are returned (Supabase handles auto-refresh in background)
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('almost-expired-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
it('should return valid token well before expiry', () => {
// Token expires in 5 minutes
const validCredentials: AuthCredentials = {
token: 'valid-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() + 300000).toISOString(), // 5 minutes
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(validCredentials);
authManager = AuthManager.getInstance();
const credentials = authManager.getCredentials();
// Valid credentials are returned as-is
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('valid-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
});
describe('Synchronous vs Async Methods', () => {
it('getCredentials should return expired credentials', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Returns credentials even if expired - Supabase will handle refresh
const credentials = authManager.getCredentials();
expect(credentials).not.toBeNull();
expect(credentials?.token).toBe('expired-token');
expect(credentials?.refreshToken).toBe('valid-refresh-token');
});
});
describe('Multiple Concurrent Calls', () => {
it('should handle concurrent getCredentials calls gracefully', () => {
const expiredCredentials: AuthCredentials = {
token: 'expired-token',
refreshToken: 'valid-refresh-token',
userId: 'test-user-id',
email: 'test@example.com',
expiresAt: new Date(Date.now() - 60000).toISOString(),
savedAt: new Date().toISOString()
};
credentialStore.saveCredentials(expiredCredentials);
authManager = AuthManager.getInstance();
// Make multiple concurrent calls (synchronous now)
const creds1 = authManager.getCredentials();
const creds2 = authManager.getCredentials();
const creds3 = authManager.getCredentials();
// All should get the same credentials (even if expired)
expect(creds1?.token).toBe('expired-token');
expect(creds2?.token).toBe('expired-token');
expect(creds3?.token).toBe('expired-token');
// All include refresh token for Supabase to use
expect(creds1?.refreshToken).toBe('valid-refresh-token');
expect(creds2?.refreshToken).toBe('valid-refresh-token');
expect(creds3?.refreshToken).toBe('valid-refresh-token');
});
});
});
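The expiry behavior these suites assert reduces to a single timestamp comparison. A minimal sketch, assuming a hypothetical `isTokenExpired` helper rather than the actual `AuthManager` internals:

```javascript
// Hypothetical helper mirroring the expiry rules asserted above:
// - credentials without expiresAt are invalid (treated as null),
// - otherwise "expired" means expiresAt is in the past.
function isTokenExpired(credentials, now = Date.now()) {
	if (!credentials.expiresAt) return null; // invalid token; caller returns null
	return new Date(credentials.expiresAt).getTime() <= now;
}

const past = new Date(Date.now() - 60000).toISOString(); // 1 minute ago
const future = new Date(Date.now() + 3600000).toISOString(); // 1 hour from now
console.log(isTokenExpired({ expiresAt: past })); // true
console.log(isTokenExpired({ expiresAt: future })); // false
```

Note that even when this returns `true`, the tests above expect `getCredentials()` to hand back the expired token plus its refresh token, leaving the actual refresh to Supabase.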

@@ -47,33 +47,21 @@ export function normalizeProjectRoot(projectRoot) {
 /**
  * Find the project root directory by looking for project markers
- * Traverses upwards from startDir until a project marker is found or filesystem root is reached
- * Limited to 50 parent directory levels to prevent excessive traversal
- * @param {string} startDir - Directory to start searching from (defaults to process.cwd())
- * @returns {string} - Project root path (falls back to current directory if no markers found)
+ * @param {string} startDir - Directory to start searching from
+ * @returns {string|null} - Project root path or null if not found
  */
 export function findProjectRoot(startDir = process.cwd()) {
-	// Define project markers that indicate a project root
-	// Prioritize Task Master specific markers first
 	const projectMarkers = [
-		'.taskmaster', // Task Master directory (highest priority)
-		TASKMASTER_CONFIG_FILE, // .taskmaster/config.json
-		TASKMASTER_TASKS_FILE, // .taskmaster/tasks/tasks.json
-		LEGACY_CONFIG_FILE, // .taskmasterconfig (legacy)
-		LEGACY_TASKS_FILE, // tasks/tasks.json (legacy)
-		'tasks.json', // Root tasks.json (legacy)
-		'.git', // Git repository
-		'.svn', // SVN repository
-		'package.json', // Node.js project
-		'yarn.lock', // Yarn project
-		'package-lock.json', // npm project
-		'pnpm-lock.yaml', // pnpm project
-		'Cargo.toml', // Rust project
-		'go.mod', // Go project
-		'pyproject.toml', // Python project
-		'requirements.txt', // Python project
-		'Gemfile', // Ruby project
-		'composer.json' // PHP project
+		'.taskmaster',
+		TASKMASTER_TASKS_FILE,
+		'tasks.json',
+		LEGACY_TASKS_FILE,
+		'.git',
+		'.svn',
+		'package.json',
+		'yarn.lock',
+		'package-lock.json',
+		'pnpm-lock.yaml'
 	];
 	let currentDir = path.resolve(startDir);
@@ -81,36 +69,19 @@ export function findProjectRoot(startDir = process.cwd()) {
 	const maxDepth = 50; // Reasonable limit to prevent infinite loops
 	let depth = 0;
-	// Traverse upwards looking for project markers
 	while (currentDir !== rootDir && depth < maxDepth) {
 		// Check if current directory contains any project markers
 		for (const marker of projectMarkers) {
 			const markerPath = path.join(currentDir, marker);
-			try {
-				if (fs.existsSync(markerPath)) {
-					// Found a project marker - return this directory as project root
-					return currentDir;
-				}
-			} catch (error) {
-				// Ignore permission errors and continue searching
-				continue;
-			}
+			if (fs.existsSync(markerPath)) {
+				return currentDir;
+			}
 		}
-		// Move up one directory level
-		const parentDir = path.dirname(currentDir);
-		// Safety check: if dirname returns the same path, we've hit the root
-		if (parentDir === currentDir) {
-			break;
-		}
-		currentDir = parentDir;
+		currentDir = path.dirname(currentDir);
 		depth++;
 	}
 	// Fallback to current working directory if no project root found
-	// This ensures the function always returns a valid path
 	return process.cwd();
 }

@@ -1,123 +0,0 @@
/**
* tool-counts.js
* Shared helper for validating tool counts across tests and validation scripts
*/
import {
getToolCounts,
getToolCategories
} from '../../mcp-server/src/tools/tool-registry.js';
/**
* Expected tool counts - update these when tools are added/removed
* These serve as the canonical source of truth for expected counts
*/
export const EXPECTED_TOOL_COUNTS = {
core: 7,
standard: 15,
total: 36
};
/**
* Expected core tools list for validation
*/
export const EXPECTED_CORE_TOOLS = [
'get_tasks',
'next_task',
'get_task',
'set_task_status',
'update_subtask',
'parse_prd',
'expand_task'
];
/**
* Validate that actual tool counts match expected counts
* @returns {Object} Validation result with isValid flag and details
*/
export function validateToolCounts() {
const actual = getToolCounts();
const expected = EXPECTED_TOOL_COUNTS;
const isValid =
actual.core === expected.core &&
actual.standard === expected.standard &&
actual.total === expected.total;
return {
isValid,
actual,
expected,
differences: {
core: actual.core - expected.core,
standard: actual.standard - expected.standard,
total: actual.total - expected.total
}
};
}
/**
* Validate that tool categories have correct structure and content
* @returns {Object} Validation result
*/
export function validateToolStructure() {
const categories = getToolCategories();
const counts = getToolCounts();
// Check that core tools are subset of standard tools
const coreInStandard = categories.core.every((tool) =>
categories.standard.includes(tool)
);
// Check that standard tools are subset of all tools
const standardInAll = categories.standard.every((tool) =>
categories.all.includes(tool)
);
// Check that expected core tools match actual
const expectedCoreMatch =
EXPECTED_CORE_TOOLS.every((tool) => categories.core.includes(tool)) &&
categories.core.every((tool) => EXPECTED_CORE_TOOLS.includes(tool));
// Check array lengths match counts
const lengthsMatch =
categories.core.length === counts.core &&
categories.standard.length === counts.standard &&
categories.all.length === counts.total;
return {
isValid:
coreInStandard && standardInAll && expectedCoreMatch && lengthsMatch,
details: {
coreInStandard,
standardInAll,
expectedCoreMatch,
lengthsMatch
},
categories,
counts
};
}
/**
* Get a detailed report of all tool information
* @returns {Object} Comprehensive tool information
*/
export function getToolReport() {
const counts = getToolCounts();
const categories = getToolCategories();
const validation = validateToolCounts();
const structure = validateToolStructure();
return {
counts,
categories,
validation,
structure,
summary: {
totalValid: validation.isValid && structure.isValid,
countsValid: validation.isValid,
structureValid: structure.isValid
}
};
}
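The structural invariants `validateToolStructure` enforces boil down to subset relations between the tool tiers. A self-contained sketch with toy data (not the real registry):

```javascript
// Toy tiers standing in for coreTools / standardTools / toolRegistry keys.
const core = ['get_tasks', 'next_task', 'get_task'];
const standard = [...core, 'add_task', 'remove_task'];
const all = [...standard, 'research', 'models'];

// The same subset checks validateToolStructure() performs on the real registry:
// core ⊆ standard and standard ⊆ all.
const isSubset = (subset, superset) => subset.every((t) => superset.includes(t));
console.log(isSubset(core, standard)); // true
console.log(isSubset(standard, all)); // true
```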

@@ -1,410 +0,0 @@
/**
* tool-registration.test.js
* Comprehensive unit tests for the Task Master MCP tool registration system
* Tests environment variable control system covering all configuration modes and edge cases
*/
import {
describe,
it,
expect,
beforeEach,
afterEach,
jest
} from '@jest/globals';
import {
EXPECTED_TOOL_COUNTS,
EXPECTED_CORE_TOOLS,
validateToolCounts,
validateToolStructure
} from '../../../helpers/tool-counts.js';
import { registerTaskMasterTools } from '../../../../mcp-server/src/tools/index.js';
import {
toolRegistry,
coreTools,
standardTools
} from '../../../../mcp-server/src/tools/tool-registry.js';
// Derive constants from imported registry to avoid brittle magic numbers
const ALL_COUNT = Object.keys(toolRegistry).length;
const CORE_COUNT = coreTools.length;
const STANDARD_COUNT = standardTools.length;
describe('Task Master Tool Registration System', () => {
let mockServer;
let originalEnv;
beforeEach(() => {
originalEnv = process.env.TASK_MASTER_TOOLS;
mockServer = {
tools: [],
addTool: jest.fn((tool) => {
mockServer.tools.push(tool);
return tool;
})
};
delete process.env.TASK_MASTER_TOOLS;
});
afterEach(() => {
if (originalEnv !== undefined) {
process.env.TASK_MASTER_TOOLS = originalEnv;
} else {
delete process.env.TASK_MASTER_TOOLS;
}
jest.clearAllMocks();
});
describe('Test Environment Setup', () => {
it('should have properly configured mock server', () => {
expect(mockServer).toBeDefined();
expect(typeof mockServer.addTool).toBe('function');
expect(Array.isArray(mockServer.tools)).toBe(true);
expect(mockServer.tools.length).toBe(0);
});
it('should have correct tool registry structure', () => {
const validation = validateToolCounts();
expect(validation.isValid).toBe(true);
if (!validation.isValid) {
console.error('Tool count validation failed:', validation);
}
expect(validation.actual.total).toBe(EXPECTED_TOOL_COUNTS.total);
expect(validation.actual.core).toBe(EXPECTED_TOOL_COUNTS.core);
expect(validation.actual.standard).toBe(EXPECTED_TOOL_COUNTS.standard);
});
it('should have correct core tools', () => {
const structure = validateToolStructure();
expect(structure.isValid).toBe(true);
if (!structure.isValid) {
console.error('Tool structure validation failed:', structure);
}
expect(coreTools).toEqual(expect.arrayContaining(EXPECTED_CORE_TOOLS));
expect(coreTools.length).toBe(EXPECTED_TOOL_COUNTS.core);
});
it('should have correct standard tools that include all core tools', () => {
const structure = validateToolStructure();
expect(structure.details.coreInStandard).toBe(true);
expect(standardTools.length).toBe(EXPECTED_TOOL_COUNTS.standard);
coreTools.forEach((tool) => {
expect(standardTools).toContain(tool);
});
});
it('should have all expected tools in registry', () => {
const expectedTools = [
'initialize_project',
'models',
'research',
'add_tag',
'delete_tag',
'get_tasks',
'next_task',
'get_task'
];
expectedTools.forEach((tool) => {
expect(toolRegistry).toHaveProperty(tool);
});
});
});
describe('Configuration Modes', () => {
it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS is not set (default behavior)`, () => {
delete process.env.TASK_MASTER_TOOLS;
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(
EXPECTED_TOOL_COUNTS.total
);
});
it(`should register all tools (${ALL_COUNT}) when TASK_MASTER_TOOLS=all`, () => {
process.env.TASK_MASTER_TOOLS = 'all';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it(`should register exactly ${CORE_COUNT} core tools when TASK_MASTER_TOOLS=core`, () => {
process.env.TASK_MASTER_TOOLS = 'core';
registerTaskMasterTools(mockServer, 'core');
expect(mockServer.addTool).toHaveBeenCalledTimes(
EXPECTED_TOOL_COUNTS.core
);
});
it(`should register exactly ${STANDARD_COUNT} standard tools when TASK_MASTER_TOOLS=standard`, () => {
process.env.TASK_MASTER_TOOLS = 'standard';
registerTaskMasterTools(mockServer, 'standard');
expect(mockServer.addTool).toHaveBeenCalledTimes(
EXPECTED_TOOL_COUNTS.standard
);
});
it(`should treat lean as alias for core mode (${CORE_COUNT} tools)`, () => {
process.env.TASK_MASTER_TOOLS = 'lean';
registerTaskMasterTools(mockServer, 'lean');
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
});
it('should handle case insensitive configuration values', () => {
process.env.TASK_MASTER_TOOLS = 'CORE';
registerTaskMasterTools(mockServer, 'CORE');
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT);
});
});
describe('Custom Tool Selection and Edge Cases', () => {
it('should register specific tools from comma-separated list', () => {
process.env.TASK_MASTER_TOOLS = 'get_tasks,next_task,get_task';
registerTaskMasterTools(mockServer, 'get_tasks,next_task,get_task');
expect(mockServer.addTool).toHaveBeenCalledTimes(3);
});
it('should handle mixed valid and invalid tool names gracefully', () => {
process.env.TASK_MASTER_TOOLS =
'invalid_tool,get_tasks,fake_tool,next_task';
registerTaskMasterTools(
mockServer,
'invalid_tool,get_tasks,fake_tool,next_task'
);
expect(mockServer.addTool).toHaveBeenCalledTimes(2);
});
it('should default to all tools with completely invalid input', () => {
process.env.TASK_MASTER_TOOLS = 'completely_invalid';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should handle empty string environment variable', () => {
process.env.TASK_MASTER_TOOLS = '';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should handle whitespace in comma-separated lists', () => {
process.env.TASK_MASTER_TOOLS = ' get_tasks , next_task , get_task ';
registerTaskMasterTools(mockServer, ' get_tasks , next_task , get_task ');
expect(mockServer.addTool).toHaveBeenCalledTimes(3);
});
it('should ignore duplicate tools in list', () => {
process.env.TASK_MASTER_TOOLS = 'get_tasks,get_tasks,next_task,get_tasks';
registerTaskMasterTools(
mockServer,
'get_tasks,get_tasks,next_task,get_tasks'
);
expect(mockServer.addTool).toHaveBeenCalledTimes(2);
});
it('should handle only commas and empty entries', () => {
process.env.TASK_MASTER_TOOLS = ',,,';
registerTaskMasterTools(mockServer);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should handle single tool selection', () => {
process.env.TASK_MASTER_TOOLS = 'get_tasks';
registerTaskMasterTools(mockServer, 'get_tasks');
expect(mockServer.addTool).toHaveBeenCalledTimes(1);
});
});
describe('Coverage Analysis and Integration Tests', () => {
it('should provide 100% code coverage for environment control logic', () => {
const testCases = [
{
env: undefined,
expectedCount: ALL_COUNT,
description: 'undefined env (all)'
},
{
env: '',
expectedCount: ALL_COUNT,
description: 'empty string (all)'
},
{ env: 'all', expectedCount: ALL_COUNT, description: 'all mode' },
{ env: 'core', expectedCount: CORE_COUNT, description: 'core mode' },
{
env: 'lean',
expectedCount: CORE_COUNT,
description: 'lean mode (alias)'
},
{
env: 'standard',
expectedCount: STANDARD_COUNT,
description: 'standard mode'
},
{
env: 'get_tasks,next_task',
expectedCount: 2,
description: 'custom list'
},
{
env: 'invalid_tool',
expectedCount: ALL_COUNT,
description: 'invalid fallback'
}
];
testCases.forEach((testCase) => {
delete process.env.TASK_MASTER_TOOLS;
if (testCase.env !== undefined) {
process.env.TASK_MASTER_TOOLS = testCase.env;
}
mockServer.tools = [];
mockServer.addTool.mockClear();
registerTaskMasterTools(mockServer, testCase.env || 'all');
expect(mockServer.addTool).toHaveBeenCalledTimes(
testCase.expectedCount
);
});
});
it('should have optimal performance characteristics', () => {
const startTime = Date.now();
process.env.TASK_MASTER_TOOLS = 'all';
registerTaskMasterTools(mockServer);
const endTime = Date.now();
const executionTime = endTime - startTime;
expect(executionTime).toBeLessThan(100);
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
it('should validate token reduction claims', () => {
expect(coreTools.length).toBeLessThan(standardTools.length);
expect(standardTools.length).toBeLessThan(
Object.keys(toolRegistry).length
);
expect(coreTools.length).toBe(CORE_COUNT);
expect(standardTools.length).toBe(STANDARD_COUNT);
expect(Object.keys(toolRegistry).length).toBe(ALL_COUNT);
const allToolsCount = Object.keys(toolRegistry).length;
const coreReduction =
((allToolsCount - coreTools.length) / allToolsCount) * 100;
const standardReduction =
((allToolsCount - standardTools.length) / allToolsCount) * 100;
expect(coreReduction).toBeGreaterThan(80);
expect(standardReduction).toBeGreaterThan(50);
});
it('should maintain referential integrity of tool registry', () => {
coreTools.forEach((tool) => {
expect(standardTools).toContain(tool);
});
standardTools.forEach((tool) => {
expect(toolRegistry).toHaveProperty(tool);
});
Object.keys(toolRegistry).forEach((tool) => {
expect(typeof toolRegistry[tool]).toBe('function');
});
});
it('should handle concurrent registration attempts', () => {
process.env.TASK_MASTER_TOOLS = 'core';
registerTaskMasterTools(mockServer, 'core');
registerTaskMasterTools(mockServer, 'core');
registerTaskMasterTools(mockServer, 'core');
expect(mockServer.addTool).toHaveBeenCalledTimes(CORE_COUNT * 3);
});
it('should validate all documented tool categories exist', () => {
const allTools = Object.keys(toolRegistry);
const projectSetupTools = allTools.filter((tool) =>
['initialize_project', 'models', 'rules', 'parse_prd'].includes(tool)
);
expect(projectSetupTools.length).toBeGreaterThan(0);
const taskManagementTools = allTools.filter((tool) =>
['get_tasks', 'get_task', 'next_task', 'set_task_status'].includes(tool)
);
expect(taskManagementTools.length).toBeGreaterThan(0);
const analysisTools = allTools.filter((tool) =>
['analyze_project_complexity', 'complexity_report'].includes(tool)
);
expect(analysisTools.length).toBeGreaterThan(0);
const tagManagementTools = allTools.filter((tool) =>
['add_tag', 'delete_tag', 'list_tags', 'use_tag'].includes(tool)
);
expect(tagManagementTools.length).toBeGreaterThan(0);
});
it('should handle error conditions gracefully', () => {
const problematicInputs = [
'null',
'undefined',
' ',
'\n\t',
'special!@#$%^&*()characters',
'very,very,very,very,very,very,very,long,comma,separated,list,with,invalid,tools,that,should,fallback,to,all'
];
problematicInputs.forEach((input) => {
mockServer.tools = [];
mockServer.addTool.mockClear();
process.env.TASK_MASTER_TOOLS = input;
expect(() => registerTaskMasterTools(mockServer)).not.toThrow();
expect(mockServer.addTool).toHaveBeenCalledTimes(ALL_COUNT);
});
});
});
});
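Taken together, these cases pin down a normalization pipeline for `TASK_MASTER_TOOLS`. A hedged sketch of that pipeline — `resolveTools` is hypothetical; the real logic lives inside `registerTaskMasterTools`:

```javascript
// Hypothetical normalization mirroring the behaviors asserted above:
// named modes, case-insensitivity, comma lists with trimming and deduping,
// and fallback to all tools when nothing valid remains.
const CORE_TOOLS = [
	'get_tasks', 'next_task', 'get_task', 'set_task_status',
	'update_subtask', 'parse_prd', 'expand_task'
];

function resolveTools(value, allTools, standardTools) {
	const mode = (value ?? '').trim().toLowerCase();
	if (mode === '' || mode === 'all') return allTools;
	if (mode === 'core' || mode === 'lean') return CORE_TOOLS;
	if (mode === 'standard') return standardTools;
	// Custom comma-separated list: trim entries, dedupe, drop unknown names.
	const picked = [...new Set(value.split(',').map((t) => t.trim()))]
		.filter((t) => allTools.includes(t));
	return picked.length > 0 ? picked : allTools; // invalid input falls back to all
}
```

Under this sketch, `'CORE'` and `'lean'` both yield the 7 core tools, `' get_tasks , next_task '` yields 2, and `'completely_invalid'` or `',,,'` falls back to the full set.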

@@ -1,223 +0,0 @@
/**
* Unit tests for findProjectRoot() function
* Tests the parent directory traversal functionality
*/
import { jest } from '@jest/globals';
import path from 'path';
import fs from 'fs';
// Import the function to test
import { findProjectRoot } from '../../src/utils/path-utils.js';
describe('findProjectRoot', () => {
describe('Parent Directory Traversal', () => {
test('should find .taskmaster in parent directory', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// .taskmaster exists only at /project
return normalized === path.normalize('/project/.taskmaster');
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should find .git in parent directory', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
return normalized === path.normalize('/project/.git');
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should find package.json in parent directory', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
return normalized === path.normalize('/project/package.json');
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should traverse multiple levels to find project root', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// Only exists at /project, not in any subdirectories
return normalized === path.normalize('/project/.taskmaster');
});
const result = findProjectRoot('/project/subdir/deep/nested');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should return current directory as fallback when no markers found', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
// No project markers exist anywhere
mockExistsSync.mockReturnValue(false);
const result = findProjectRoot('/some/random/path');
// Should fall back to process.cwd()
expect(result).toBe(process.cwd());
mockExistsSync.mockRestore();
});
test('should find markers at current directory before checking parent', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// .git exists at /project/subdir, .taskmaster exists at /project
if (normalized.includes('/project/subdir/.git')) return true;
if (normalized.includes('/project/.taskmaster')) return true;
return false;
});
const result = findProjectRoot('/project/subdir');
// Should find /project/subdir first because .git exists there,
// even though .taskmaster is earlier in the marker array
expect(result).toBe('/project/subdir');
mockExistsSync.mockRestore();
});
test('should handle permission errors gracefully', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
// Throw permission error for checks in /project/subdir
if (normalized.startsWith('/project/subdir/')) {
throw new Error('EACCES: permission denied');
}
// Return true only for .taskmaster at /project
return normalized.includes('/project/.taskmaster');
});
const result = findProjectRoot('/project/subdir');
// Should handle permission errors in subdirectory and traverse to parent
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
test('should detect filesystem root correctly', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
// No markers exist
mockExistsSync.mockReturnValue(false);
const result = findProjectRoot('/');
// Should stop at root and fall back to process.cwd()
expect(result).toBe(process.cwd());
mockExistsSync.mockRestore();
});
test('should recognize various project markers', () => {
const projectMarkers = [
'.taskmaster',
'.git',
'package.json',
'Cargo.toml',
'go.mod',
'pyproject.toml',
'requirements.txt',
'Gemfile',
'composer.json'
];
projectMarkers.forEach((marker) => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
const normalized = path.normalize(checkPath);
return normalized.includes(`/project/${marker}`);
});
const result = findProjectRoot('/project/subdir');
expect(result).toBe('/project');
mockExistsSync.mockRestore();
});
});
});
describe('Edge Cases', () => {
test('should handle empty string as startDir', () => {
const result = findProjectRoot('');
// Should use process.cwd() or fall back appropriately
expect(typeof result).toBe('string');
expect(result.length).toBeGreaterThan(0);
});
test('should handle relative paths', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
mockExistsSync.mockImplementation((checkPath) => {
// Simulate .git existing in the resolved path
return checkPath.includes('.git');
});
const result = findProjectRoot('./subdir');
expect(typeof result).toBe('string');
mockExistsSync.mockRestore();
});
test('should not exceed max depth limit', () => {
const mockExistsSync = jest.spyOn(fs, 'existsSync');
// Track how many times existsSync is called
let callCount = 0;
mockExistsSync.mockImplementation(() => {
callCount++;
return false; // Never find a marker
});
// Create a very deep path
const deepPath = '/a/'.repeat(100) + 'deep';
const result = findProjectRoot(deepPath);
// Should stop after max depth (50) and not check 100 levels
// Each level checks multiple markers, so callCount will be high but bounded
expect(callCount).toBeLessThan(1000); // Reasonable upper bound
// With 18 markers and max depth of 50, expect around 900 calls maximum
expect(callCount).toBeLessThanOrEqual(50 * 18);
mockExistsSync.mockRestore();
});
});
});