Compare commits


41 Commits

Author SHA1 Message Date
Eyal Toledano
16e6326010 fix(config): adds missing import + task management for api key design 2025-06-05 14:20:25 -04:00
Eyal Toledano
a9c1b6bbcf fix(config-manager): Add silent mode check and improve test mocking for ensureConfigFileExists 2025-06-05 13:33:20 -04:00
Eyal Toledano
f12fc476d3 fix(init): Ensure hosted mode option available by creating .taskmasterconfig early
- Added ensureConfigFileExists() to create default config if missing
- Call early in init flows before gateway check
- Preserve email from initializeUser()
- Add comprehensive tests
2025-06-05 13:30:14 -04:00
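
For context, a minimal sketch of what `ensureConfigFileExists()` might look like — the default values beyond the userId and account fields named in these commits are assumptions:

```javascript
import fs from 'fs';
import path from 'path';

// Default shape is an assumption based on the account fields described in
// these commits (userId, userEmail, mode, telemetryEnabled).
const DEFAULT_CONFIG = {
	account: {
		userId: '1234567890',
		userEmail: '',
		mode: 'byok',
		telemetryEnabled: true
	},
	models: {},
	global: {}
};

function ensureConfigFileExists(projectRoot) {
	const configPath = path.join(projectRoot, '.taskmasterconfig');
	if (!fs.existsSync(configPath)) {
		fs.writeFileSync(configPath, JSON.stringify(DEFAULT_CONFIG, null, 2));
	}
	return configPath;
}
```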
Eyal Toledano
31178e2f43 chore: adjust .taskmasterconfig defaults 2025-06-04 19:04:17 -04:00
Eyal Toledano
3fa3be4e1b chore: fix user email, telemetryEnabled by default 2025-06-04 19:03:47 -04:00
Eyal Toledano
685365270d feat: integrate Supabase authenticated users
- Updated init.js, ai-services-unified.js, user-management.js, telemetry-submission.js, and .taskmasterconfig to support Supabase authentication flow and authenticated gateway calls
2025-06-04 18:53:28 -04:00
Eyal Toledano
58aa0992f6 feat(error-handling): Implement comprehensive gateway error handling with user-friendly messages
- Add comprehensive gateway error handler with friendly user messages
- Handle subscription status errors (inactive BYOK, subscription required)
- Handle authentication errors (invalid API keys, missing tokens)
- Handle rate limiting with retry suggestions
- Handle model availability and validation errors
- Handle network connectivity issues
- Provide actionable solutions for each error type
- Prevent duplicate error messages by returning early after showing friendly error
- Fix telemetry tests to use correct environment variable names (TASKMASTER_API_KEY)
- Fix config manager getUserId function to properly save default userId to file
- All tests now passing (34 test suites, 360 tests)
2025-06-02 12:34:47 -04:00
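
As a rough sketch of the mapping this commit describes — the error codes and helper name are illustrative, not the actual implementation:

```javascript
// Illustrative only: real gateway error codes may differ.
function formatGatewayError(error) {
	const status = error.status ?? 0;
	const code = error.code ?? '';

	if (code === 'SUBSCRIPTION_REQUIRED' || code === 'BYOK_INACTIVE') {
		return 'Your subscription is inactive. Update your plan or switch to BYOK mode.';
	}
	if (status === 401 || code === 'INVALID_API_KEY') {
		return 'Authentication failed. Check TASKMASTER_API_KEY in your .env file.';
	}
	if (status === 429) {
		return 'Rate limit reached. Wait a moment and retry.';
	}
	if (code === 'MODEL_UNAVAILABLE') {
		return 'The requested model is not available. Run `task-master models --setup` to choose another.';
	}
	if (code === 'ECONNREFUSED' || code === 'ENOTFOUND') {
		return 'Could not reach the gateway. Check your network connection.';
	}
	return `Gateway error: ${error.message}`;
}
```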
Eyal Toledano
2819be51d3 feat: Implement TaskMaster AI Gateway integration with enhanced UX
- Fix Zod schema conversion, update headers, add premium telemetry display, improve user auth flow, and standardize email fields

Functionally complete on this end; what remains is mostly user-experience polish, plus adding profile and upgrade/downgrade flows.

But the AI commands are working off the gateway.
2025-06-01 19:37:12 -04:00
Eyal Toledano
9b87dd23de fix(gateway/auth): Implement proper auth/init flow with automatic background userId generation
- Fix getUserId() to use a placeholder that re-triggers auth/init in case the auth/init endpoint is down for any reason
- Add silent auth/init attempt in AI services
- Improve hosted mode error handling
- Remove fake userId/email generation from init.js
2025-05-31 19:47:18 -04:00
Eyal Toledano
769275b3bc fix(config): Fix config structure and tests after refactoring
- Fixed getUserId() to always return a value, never null (sets default '1234567890')
- Updated all test files to match new config.account structure
- Fixed config-manager.test.js default config expectations
- Updated telemetry-submission.test.js and ai-services-unified.test.js mocks
- Added getTelemetryEnabled export to all config-manager mocks
- All 44 tests now passing
2025-05-30 19:40:38 -04:00
Eyal Toledano
4e9d58a1b0 feat(config): Restructure .taskmasterconfig and enhance gateway integration
Config Structure Changes and Gateway Integration

## Configuration Structure Changes
- Restructured .taskmasterconfig to use 'account' section for user settings
- Moved userId, userEmail, mode, telemetryEnabled from global to account section
- API keys remain isolated in .env file (not accessible to AI)
- Enhanced getUserId() to always return a value, never null (sets default '1234567890')

## Gateway Integration Enhancements
- Updated registerUserWithGateway() to accept both email and userId parameters
- Enhanced /auth/init endpoint integration for existing user validation
- API key updates automatically written to .env during registration process
- Improved user identification and validation flow

## Code Updates for New Structure
- Fixed config-manager.js getter functions for account section access
- Updated user-management.js to use config.account.userId/mode
- Modified telemetry-submission.js to read from account section
- Added getTelemetryEnabled() function with proper account section access
- Enhanced telemetry configuration reading with new structure

## Comprehensive Test Updates
- Updated integration tests (init-config.test.js) for new config structure
- Fixed unit tests (config-manager.test.js) with updated default config
- Updated telemetry tests (telemetry-submission.test.js) for account structure
- Added missing getTelemetryEnabled mock to ai-services-unified.test.js
- Fixed all test expectations to use config.account.* instead of config.global.*
- Removed references to deprecated config.subscription object

## Configuration Access Consistency
- Standardized configuration access patterns across entire codebase
- Clean separation: user settings in account, API keys in .env, models/global in respective sections
- All tests passing with new configuration structure
- Maintained backward compatibility during transition

Changes support enhanced telemetry system with proper user management and gateway integration while maintaining security through API key isolation.
2025-05-30 18:53:16 -04:00
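
A sketch of the `getUserId()` behavior described here; `getConfig`/`writeConfig` stand in for the config-manager's actual read/write helpers:

```javascript
// Sketch only: helper names are assumptions for the config-manager internals.
function getUserId(explicitRoot = null) {
	const config = getConfig(explicitRoot);
	config.account = config.account || {};
	if (!config.account.userId) {
		// Default noted in these commits; persisted so subsequent reads agree.
		config.account.userId = '1234567890';
		writeConfig(config, explicitRoot);
	}
	return config.account.userId;
}
```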
Eyal Toledano
e573db3b3b feat(task-90): Complete telemetry integration with init flow improvements
- Task 90.3: AI Services Integration COMPLETED with automatic submission after AI usage logging and graceful error handling
- Init Flow Enhancements: restructured to prioritize gateway selection with beautiful UI for BYOK vs Hosted modes
- Telemetry Improvements: modified submission to send FULL data to gateway while maintaining security filtering for users
- All 344 tests passing, telemetry integration ready for production
2025-05-30 16:35:40 -04:00
Eyal Toledano
75b7b93fa4 feat(task-90): Complete telemetry integration with /auth/init + fix Roo test brittleness
- Updated telemetry submission to use /auth/init endpoint instead of /api/v1/users
- Hardcoded gateway endpoint to http://localhost:4444/api/v1/telemetry for all users
- Removed unnecessary service API key complexity - simplified authentication
- Enhanced init.js with hosted gateway setup option and user registration
- Added configureTelemetrySettings() to update .taskmasterconfig with credentials
- Fixed brittle Roo integration tests that required exact string matching
- Updated tests to use flexible regex patterns supporting any quote style
- All test suites now green: 332 tests passed, 11 skipped, 0 failed
- All 11 telemetry tests passing with live gateway integration verified
- Ready for ai-services-unified.js integration in subtask 90.3
2025-05-28 22:38:18 -04:00
Eyal Toledano
6ec3a10083 feat(task-90): Complete subtask 90.2 with gateway integration and init.js enhancements
- Hardcoded gateway endpoint to http://localhost:4444/api/v1/telemetry
- Updated credential handling to use config-based approach (not env vars)
- Added registerUserWithGateway() function for user registration/lookup
- Enhanced init.js with hosted gateway setup option and configureTelemetrySettings()
- Updated all 10 tests to reflect new architecture - all passing
- Security features maintained: sensitive data filtering, Bearer token auth
- Ready for ai-services-unified.js integration in subtask 90.3
2025-05-28 21:05:25 -04:00
Eyal Toledano
8ad31ac5eb feat(task-90): Complete subtask 90.2 with secure telemetry submission service
- Implemented telemetry submission with Zod validation, retry logic, graceful error handling, and user opt-out support
- Used correct Bearer token authentication with X-User-Email header
- Successfully tested with live gateway endpoint, all 6 tests passing
- Verified security: sensitive data filtered before submission
2025-05-28 15:12:31 -04:00
Eyal Toledano
2773e347f9 feat(task-90): Complete subtask 90.2
- Implement secure telemetry submission service
- Created scripts/modules/telemetry-submission.js with submitTelemetryData function
- Implemented secure filtering: removes commandArgs and fullOutput before remote submission
- Added comprehensive validation using Zod schema for telemetry data integrity
- Implemented exponential backoff retry logic (3 attempts max) with smart retry decisions
- Added graceful error handling that never blocks execution
- Respects user opt-out preferences via config.telemetryEnabled
- Configured for localhost testing endpoint (http://localhost:4444/api/v1/telemetry) for now
- Added comprehensive test coverage with 6/6 passing tests covering all scenarios
- Includes submitTelemetryDataAsync for fire-and-forget submissions
2025-05-28 14:51:42 -04:00
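
A condensed sketch of the submission flow these commits describe — the schema fields, helper signature, and backoff constants are assumptions; the endpoint, the filtered fields, the Bearer/X-User-Email auth, and the three-attempt retry come from the commits:

```javascript
import { z } from 'zod';

// Schema fields are illustrative; the real Zod schema is richer. In the real
// module, a config.telemetryEnabled opt-out check runs before any of this.
const telemetrySchema = z
	.object({ userId: z.string(), commandName: z.string() })
	.passthrough();

async function submitTelemetryData(data, { apiKey, userEmail, maxAttempts = 3 } = {}) {
	// Strip sensitive fields before any remote submission.
	const { commandArgs, fullOutput, ...safeData } = data;

	const parsed = telemetrySchema.safeParse(safeData);
	if (!parsed.success) return { success: false, error: parsed.error };

	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		try {
			const res = await fetch('http://localhost:4444/api/v1/telemetry', {
				method: 'POST',
				headers: {
					'Content-Type': 'application/json',
					Authorization: `Bearer ${apiKey}`,
					'X-User-Email': userEmail
				},
				body: JSON.stringify(parsed.data)
			});
			if (res.ok) return { success: true };
			if (res.status < 500) break; // client errors are not worth retrying
		} catch {
			// Network failure: fall through to backoff and retry.
		}
		await new Promise((r) => setTimeout(r, 2 ** attempt * 500)); // exponential backoff
	}
	return { success: false }; // graceful: never throws, never blocks execution
}
```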
Eyal Toledano
bfc39dd377 feat(task-90): Complete subtask 90.1
- Implement secure telemetry capture with filtering
- Enhanced ai-services-unified.js to capture commandArgs and fullOutput in telemetry
- Added filterSensitiveTelemetryData() function to prevent sensitive data exposure
- Updated processMCPResponseData() to filter telemetry before sending to MCP clients
- Verified CLI displayAiUsageSummary() only shows safe fields
- Added comprehensive test coverage with 4 passing tests
- Resolved critical security issue: API keys and sensitive data now filtered from responses
2025-05-28 14:26:24 -04:00
Eyal Toledano
9e6c190af3 fix(move-task): Fix duplicate task creation when moving subtask to standalone task 2025-05-28 14:05:30 -04:00
Eyal Toledano
ab64437ad2 chore: task management 2025-05-28 11:16:54 -04:00
Eyal Toledano
cb95a07771 chore: task management 2025-05-28 11:16:09 -04:00
Eyal Toledano
c096f3fe9d Merge branch 'v017-adds' into gateway 2025-05-28 11:13:26 -04:00
Eyal Toledano
b6a3b8d385 chore: task management - moves 87,88,89 to 90,91,92 2025-05-28 10:38:33 -04:00
Eyal Toledano
78397fe0be chore: formatting 2025-05-28 10:25:39 -04:00
Eyal Toledano
f9b89dc25c fix(move): Fix move command bug that left duplicate tasks
- Fixed logic in moveTaskToNewId function that was incorrectly treating task-to-task moves as subtask creation instead of task replacement
- Updated moveTaskToNewId to properly handle replacing existing destination tasks instead of just placeholders
- The move command now correctly replaces destination tasks and cleans up properly without leaving duplicates

- Task Management: Moved task 93 (Google Vertex AI Provider) to position 88, Moved task 94 (Azure OpenAI Provider) to position 89, Updated task dependencies and regenerated task files, Cleaned up orphaned task files automatically
- All important validations remain in place: Prevents moving tasks to themselves, Prevents moving parent tasks to their own subtasks, Prevents circular dependencies
- Resolves the issue where moving tasks would leave both source and destination tasks in tasks.json and file system
2025-05-28 10:25:15 -04:00
Eyal Toledano
ca69e1294f Merge branch 'next' of github.com:eyaltoledano/claude-task-master into v017-adds 2025-05-28 10:08:14 -04:00
Eyal Toledano
ce09d9cdc3 chore: task mgmt 2025-05-28 09:56:08 -04:00
Eyal Toledano
b5c2cf47b0 chore: task management 2025-05-28 00:29:43 -04:00
Eyal Toledano
ac36e2497e chore: task management and removes mistakenly staged changes 2025-05-27 12:43:59 -04:00
Eyal Toledano
1d4b80fe6f chore: task management 2025-05-27 12:41:08 -04:00
Eyal Toledano
023f51c579 feat(research): Adds MCP tool for command
- New MCP Tool: research tool enables AI-powered research with project context
- Context Integration: Supports task IDs, file paths, custom context, and project tree
- Fuzzy Task Discovery: Automatically finds relevant tasks using semantic search
- Token Management: Detailed token counting and breakdown by context type
- Multiple Detail Levels: Support for low, medium, and high detail research responses
- Telemetry Integration: Full cost tracking and usage analytics
- Direct Function: researchDirect with comprehensive parameter validation
- Silent Mode: Prevents console output interference with MCP JSON responses
- Error Handling: Robust error handling with proper MCP response formatting

This completes subtasks 94.5 (Direct Function) and 94.6 (MCP Tool) for the research command implementation, providing a powerful research interface for integrated development environments like Cursor.

Updated documentation across taskmaster.mdc, README.md, command-reference.md, examples.md, tutorial.md, and docs/README.md to highlight research capabilities and usage patterns.
2025-05-25 20:16:48 -04:00
Eyal Toledano
1e020023ed feat(show): add comma-separated ID support for multi-task viewing
- Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations.
- New features include multiple task retrieval, smart display logic, interactive action menu with batch operations, MCP array response for AI agent efficiency, and support for mixed parent tasks and subtasks.
- Implementation includes updated CLI show command, enhanced MCP get_task tool, modified showTaskDirect function, and maintained full backward compatibility.
- Documentation updated across all relevant files.

Benefits include faster context gathering for AI agents, improved workflow with interactive batch operations, better UX with responsive layout, and enhanced API efficiency.
2025-05-25 19:39:23 -04:00
Eyal Toledano
325f5a2aa3 chore: formatting 2025-05-25 18:58:42 -04:00
Eyal Toledano
de46bfd84b chore: removes task004 chat that had like 11k lines lol. 2025-05-25 18:50:59 -04:00
Eyal Toledano
cc26c36366 feat(research): Add subtasks to fuzzy search and follow-up questions
- Enhanced fuzzy search to include subtasks in discovery
- Added interactive follow-up question functionality using inquirer
- Improved context discovery by including both tasks and subtasks
- Follow-up prompt defaults to 'n' for a quick workflow
2025-05-25 18:48:39 -04:00
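
A minimal sketch of that inquirer follow-up prompt, defaulting to no:

```javascript
import inquirer from 'inquirer';

async function askFollowUp() {
	const { followUp } = await inquirer.prompt([
		{
			type: 'confirm',
			name: 'followUp',
			message: 'Ask a follow-up question?',
			default: false // matches the quick-workflow default of 'n'
		}
	]);
	return followUp;
}
```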
Eyal Toledano
15ad34928d fix(move-task): Fix critical bugs in task move functionality
- Fixed parent-to-parent task moves where original task would remain as duplicate
- Fixed moving tasks to become subtasks of empty parents (validation errors)
- Fixed moving subtasks between different parent tasks
- Improved comma-separated batch moves with proper error handling
- Updated MCP tool to use core logic instead of custom implementation
- Resolves task duplication issues and enables proper task hierarchy reorganization
2025-05-25 18:03:43 -04:00
Eyal Toledano
f74d639110 fix(move): adjusts logic to prevent an issue when moving from parent to subtask if the target parent has no subtasks. 2025-05-25 17:49:32 -04:00
Eyal Toledano
de58e9ede5 feat(fuzzy): improves fuzzy search to introspect into subtasks as well. might still need improvement. 2025-05-25 17:33:00 -04:00
Eyal Toledano
947541e4ee docs: add context gathering rule and update existing rules
- Create comprehensive context_gathering.mdc rule documenting ContextGatherer utility patterns, FuzzyTaskSearch integration, token breakdown display, code block syntax highlighting, and enhanced result display patterns
- Update new_features.mdc to include context gathering step
- Update commands.mdc with context-aware command pattern
- Update ui.mdc with enhanced display patterns and syntax highlighting
- Update utilities.mdc to document new context gathering utilities
- Update glossary.mdc to include new context_gathering rule
- Establishes standardized patterns for building intelligent, context-aware commands that can leverage project knowledge for better AI assistance
2025-05-25 17:19:28 -04:00
Eyal Toledano
275cd55da7 feat: implement research command with enhanced context gathering
- Add comprehensive research command with AI-powered queries
- Implement ContextGatherer utility for reusable context extraction
- Support multiple context types: tasks, files, custom text, project tree
- Add fuzzy search integration for automatic task discovery
- Implement detailed token breakdown display with syntax highlighting
- Add enhanced UI with boxed output and code block formatting
- Support different detail levels (low, medium, high) for responses
- Include project-specific context for more relevant AI responses
- Add token counting with gpt-tokens library integration
- Create reusable patterns for future context-aware commands
- Task 94.4 completed
2025-05-25 03:51:51 -04:00
Eyal Toledano
67ac212973 chore: task management 2025-05-25 01:26:14 -04:00
Eyal Toledano
235371ff47 chore: task management and small bug fix. 2025-05-25 01:03:58 -04:00
257 changed files with 31432 additions and 15601 deletions

View File

@@ -0,0 +1,44 @@
---
'task-master-ai': minor
---
Add comprehensive AI-powered research command with intelligent context gathering and interactive follow-ups.
The new `research` command provides AI-powered research capabilities that automatically gather relevant project context to answer your questions. The command intelligently selects context from multiple sources and supports interactive follow-up questions in CLI mode.
**Key Features:**
- **Intelligent Task Discovery**: Automatically finds relevant tasks and subtasks using fuzzy search based on your query keywords, supplementing any explicitly provided task IDs
- **Multi-Source Context**: Gathers context from tasks, files, project structure, and custom text to provide comprehensive answers
- **Interactive Follow-ups**: CLI users can ask follow-up questions that build on the conversation history while allowing fresh context discovery for each question
- **Flexible Detail Levels**: Choose from low (concise), medium (balanced), or high (comprehensive) response detail levels
- **Token Transparency**: Displays detailed token breakdown showing context size, sources, and estimated costs
- **Enhanced Display**: Syntax-highlighted code blocks and structured output with clear visual separation
**Usage Examples:**
```bash
# Basic research with auto-discovered context
task-master research "How should I implement user authentication?"
# Research with specific task context
task-master research "What's the best approach for this?" --id=15,23.2
# Research with file context and project tree
task-master research "How does the current auth system work?" --files=src/auth.js,config/auth.json --tree
# Research with custom context and low detail
task-master research "Quick implementation steps?" --context="Using JWT tokens" --detail=low
```
**Context Sources:**
- **Tasks**: Automatically discovers relevant tasks/subtasks via fuzzy search, plus any explicitly specified via `--id`
- **Files**: Include specific files via `--files` for code-aware responses
- **Project Tree**: Add `--tree` to include project structure overview
- **Custom Context**: Provide additional context via `--context` for domain-specific information
**Interactive Features (CLI only):**
- Follow-up questions that maintain conversation history
- Fresh fuzzy search for each follow-up to discover newly relevant tasks
- Cumulative context building across the conversation
- Clean visual separation between exchanges
The research command integrates with the existing AI service layer and supports all configured AI providers. MCP integration provides the same functionality for programmatic access without interactive features.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix Cursor deeplink installation by providing copy-paste instructions for GitHub compatibility

View File

@@ -0,0 +1,13 @@
---
'task-master-ai': patch
---
Fix critical bugs in task move functionality:
- **Fixed moving tasks to become subtasks of empty parents**: When moving a task to become a subtask of a parent that had no existing subtasks (e.g., task 89 → task 98.1), the operation would fail with validation errors.
- **Fixed moving subtasks between parents**: Subtasks can now be properly moved between different parent tasks, including to parents that previously had no subtasks.
- **Improved comma-separated batch moves**: Multiple tasks can now be moved simultaneously using comma-separated IDs (e.g., "88,90" → "92,93") with proper error handling and atomic operations.
These fixes enable proper task hierarchy reorganization for corner cases that were previously broken.

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
improve findTasks algorithm for resolving tasks path

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix update tool on MCP giving `No valid tasks found`

View File

@@ -1,39 +0,0 @@
---
"task-master-ai": patch
---
Enhanced add-task fuzzy search intelligence and improved user experience
**Smarter Task Discovery:**
- Remove hardcoded category system that always matched "Task management"
- Eliminate arbitrary limits on fuzzy search results (5→25 high relevance, 3→10 medium relevance, 8→20 detailed tasks)
- Improve semantic weighting in Fuse.js search (details=3, description=2, title=1.5) for better relevance
- Generate context-driven task recommendations based on true semantic similarity
**Enhanced Terminal Experience:**
- Fix duplicate banner display issue that was "eating" terminal history (closes #553)
- Remove console.clear() and redundant displayBanner() calls from UI functions
- Preserve command history for better development workflow
- Streamline banner display across all commands (list, next, show, set-status, clear-subtasks, dependency commands)
**Visual Improvements:**
- Replace emoji complexity indicators with clean filled circle characters (●) for professional appearance
- Improve consistency and readability of task complexity display
**AI Provider Compatibility:**
- Change generateObject mode from 'tool' to 'auto' for better cross-provider compatibility
- Add qwen3-235b-a22b:free model support (closes #687)
- Add smart warnings for free OpenRouter models with limitations (rate limits, restricted context, no tool_use)
**Technical Improvements:**
- Enhanced context generation in add-task to rely on semantic similarity rather than rigid pattern matching
- Improved dependency analysis and common pattern detection
- Better handling of task relationships and relevance scoring
- More intelligent task suggestion algorithms
The add-task system now provides truly relevant task context based on semantic understanding rather than arbitrary categories and limits, while maintaining a cleaner and more professional terminal experience.
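
A small sketch of the Fuse.js weighting described above (the threshold and sample data are assumptions):

```javascript
import Fuse from 'fuse.js';

const tasks = [
	{ id: 1, title: 'Set up auth', description: 'Add login flow', details: 'Use JWT tokens with refresh rotation.' }
	// ...in practice, loaded from tasks.json
];

// Semantic weighting from the changeset: details > description > title.
const fuse = new Fuse(tasks, {
	keys: [
		{ name: 'details', weight: 3 },
		{ name: 'description', weight: 2 },
		{ name: 'title', weight: 1.5 }
	],
	includeScore: true,
	threshold: 0.4 // assumed cutoff; tune for relevance
});

const results = fuse.search('implement user authentication');
console.log(results.map((r) => ({ id: r.item.id, score: r.score })));
```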

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---
Add AWS bedrock support

View File

@@ -0,0 +1,13 @@
---
'task-master-ai': minor
---
# Add Google Vertex AI Provider Integration
- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
- Modified AI services unified system to integrate the provider
- Added documentation for Vertex AI setup and configuration
- Updated environment variable examples for Vertex AI support
- Implemented specialized error handling for Vertex-specific issues
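
As a hedged sketch of the provider shape, assuming the Vercel AI SDK's Vertex adapter and the repo's base-provider pattern (method and field names are assumptions):

```javascript
import { createVertex } from '@ai-sdk/google-vertex';
import { BaseAIProvider } from './base-provider.js'; // path assumed

export class VertexAIProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Google Vertex AI';
	}

	getClient(params) {
		// Vertex authenticates via a GCP project and location rather than a flat API key.
		const { projectId, location = 'us-central1' } = params;
		if (!projectId) {
			throw new Error('Vertex AI requires a Google Cloud project ID');
		}
		return createVertex({ project: projectId, location });
	}
}
```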

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---
Add support for Azure

View File

@@ -0,0 +1,17 @@
---
'task-master-ai': minor
---
Add comprehensive `research` MCP tool for AI-powered research queries
- **New MCP Tool**: `research` tool enables AI-powered research with project context
- **Context Integration**: Supports task IDs, file paths, custom context, and project tree
- **Fuzzy Task Discovery**: Automatically finds relevant tasks using semantic search
- **Token Management**: Detailed token counting and breakdown by context type
- **Multiple Detail Levels**: Support for low, medium, and high detail research responses
- **Telemetry Integration**: Full cost tracking and usage analytics
- **Direct Function**: `researchDirect` with comprehensive parameter validation
- **Silent Mode**: Prevents console output interference with MCP JSON responses
- **Error Handling**: Robust error handling with proper MCP response formatting
This completes subtasks 94.5 (Direct Function) and 94.6 (MCP Tool) for the research command implementation, providing a powerful research interface for integrated development environments like Cursor.
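
A hedged sketch of the direct-function wrapper pattern — the core research function and error codes are illustrative; the silent-mode helpers are the ones documented elsewhere in this diff:

```javascript
import { enableSilentMode, disableSilentMode } from '../../../../scripts/modules/utils.js';

// Placeholder for the core research logic (name and shape assumed).
async function performResearch(args, context) {
	return { answer: '...' };
}

export async function researchDirect(args, log, context = {}) {
	if (!args.query || typeof args.query !== 'string') {
		return {
			success: false,
			error: { code: 'MISSING_PARAMETER', message: 'query is required and must be a string' }
		};
	}
	enableSilentMode(); // keep console output out of the MCP JSON response
	try {
		const result = await performResearch(args, context);
		return { success: true, data: result };
	} catch (error) {
		log.error(`Research failed: ${error.message}`);
		return { success: false, error: { code: 'RESEARCH_ERROR', message: error.message } };
	} finally {
		disableSilentMode();
	}
}
```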

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---
Fixes issue with expand CLI command "Complexity report not found"
- Closes #735
- Closes #728

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Increased minimum required node version to > 18 (was > 14)

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fix double .taskmaster directory paths in file resolution utilities
- Closes #636

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---
Renamed baseUrl to baseURL

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Add one-click MCP server installation for Cursor

View File

@@ -1,11 +0,0 @@
{
"mode": "exit",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.16.1"
},
"changesets": [
"pink-houses-lay",
"polite-areas-shave"
]
}

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Fix max_tokens error when trying to use claude-sonnet-4 and claude-opus-4

View File

@@ -0,0 +1,7 @@
---
'task-master-ai': minor
---
Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
- Some users were having issues where the MCP wasn't able to detect the location of their project root, you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
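
For example, in a project's `.env` (the path is illustrative):

```bash
# Point Task Master at the project root when auto-detection fails.
TASK_MASTER_PROJECT_ROOT=/absolute/path/to/your/project
```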

View File

@@ -0,0 +1,19 @@
---
'task-master-ai': minor
---
Enhanced get-task/show command to support comma-separated task IDs for efficient batch operations
**New Features:**
- **Multiple Task Retrieval**: Pass comma-separated IDs to get/show multiple tasks at once (e.g., `task-master show 1,3,5` or MCP `get_task` with `id: "1,3,5"`)
- **Smart Display Logic**: Single ID shows detailed view, multiple IDs show compact summary table with interactive options
- **Batch Action Menu**: Interactive menu for multiple tasks with copy-paste ready commands for common operations (mark as done/in-progress, expand all, view dependencies, etc.)
- **MCP Array Response**: MCP tool returns structured array of task objects for efficient AI agent context gathering
**Benefits:**
- **Faster Context Gathering**: AI agents can collect multiple tasks/subtasks in one call instead of iterating
- **Improved Workflow**: Interactive batch operations reduce repetitive command execution
- **Better UX**: Responsive layout adapts to terminal width, maintains consistency with existing UI patterns
- **API Efficiency**: RESTful array responses in MCP format enable more sophisticated integrations
This enhancement maintains full backward compatibility while significantly improving efficiency for both human users and AI agents working with multiple tasks.
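
A minimal sketch of the id parsing this relies on (the helper name matches the one referenced in the command pattern later in this diff):

```javascript
// Accepts "1,3,5" or mixed parent/subtask ids like "15,23.2".
function parseTaskIds(idArg) {
	return idArg
		.split(',')
		.map((id) => id.trim())
		.filter(Boolean);
}

parseTaskIds('1,3,5'); // → ['1', '3', '5']
```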

View File

@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---
Fix add-task MCP command causing an error

View File

@@ -1,22 +0,0 @@
---
"task-master-ai": patch
---
Add sync-readme command for a task export to GitHub README
Introduces a new `sync-readme` command that exports your task list to your project's README.md file.
**Features:**
- **Flexible filtering**: Supports `--status` filtering (e.g., pending, done) and `--with-subtasks` flag
- **Smart content management**: Automatically replaces existing exports or appends to new READMEs
- **Metadata display**: Shows export timestamp, subtask inclusion status, and filter settings
**Usage:**
- `task-master sync-readme` - Export tasks without subtasks
- `task-master sync-readme --with-subtasks` - Include subtasks in export
- `task-master sync-readme --status=pending` - Only export pending tasks
- `task-master sync-readme --status=done --with-subtasks` - Export completed tasks with subtasks
Perfect for showcasing project progress on GitHub. Experimental. Open to feedback.

View File

@@ -1,19 +1,44 @@
{
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
"mcpServers": {
"task-master-ai-tm": {
"command": "node",
"args": [
"./mcp-server/server.js"
],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
},
"task-master-ai": {
"command": "npx",
"args": [
"-y",
"--package=task-master-ai",
"task-master-ai"
],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
},
"env": {
"TASKMASTER_TELEMETRY_API_KEY": "339a81c9-5b9c-4d60-92d8-cba2ee2a8cc3",
"TASKMASTER_TELEMETRY_USER_EMAIL": "user_1748640077834@taskmaster.dev"
}
}

View File

@@ -50,6 +50,7 @@ This rule guides AI assistants on how to view, configure, and interact with the
- **Key Locations** (See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) - Configuration Management):
- **MCP/Cursor:** Set keys in the `env` section of `.cursor/mcp.json`.
- **CLI:** Set keys in a `.env` file in the project root.
- As the AI agent, you do not have access to read the .env -- so do not attempt to recreate it!
- **Provider List & Keys:**
- **`anthropic`**: Requires `ANTHROPIC_API_KEY`.
- **`google`**: Requires `GOOGLE_API_KEY`.

View File

@@ -1,6 +1,7 @@
---
description: Guidelines for interacting with the unified AI service layer.
globs: scripts/modules/ai-services-unified.js, scripts/modules/task-manager/*.js, scripts/modules/commands.js
alwaysApply: false
---
# AI Services Layer Guidelines
@@ -91,7 +92,7 @@ This document outlines the architecture and usage patterns for interacting with
* ✅ **DO**: Centralize **all** LLM calls through `generateTextService` or `generateObjectService`.
* ✅ **DO**: Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service.
* ✅ **DO**: Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP).
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP). FYI: As the AI agent, you do not have access to read the .env -- so do not attempt to recreate it!
* ✅ **DO**: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
* ✅ **DO**: Use `generateTextService` and implement robust manual JSON parsing (with Zod validation *after* parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
* ❌ **DON'T**: Import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

View File

@@ -39,12 +39,12 @@ alwaysApply: false
- **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)):
- Exports `generateTextService`, `generateObjectService`.
- Handles provider/model selection based on `role` and `.taskmasterconfig`.
- Resolves API keys (from `.env` or `session.env`).
- Resolves API keys (from `.env` or `session.env`). As the AI agent, you do not have access to read the .env -- so do not attempt to recreate it!
- Implements fallback and retry logic.
- Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
- Telemetry data generated by the AI service layer is propagated upwards through core logic, direct functions, and MCP tools. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the detailed integration pattern.
- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
- **[`src/ai-providers/*.js`](mdc:src/ai-providers): Provider-Specific Implementations**
- **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.
- **Responsibilities**: Interact directly with Vercel AI SDK adapters.
@@ -63,7 +63,7 @@ alwaysApply: false
- API Key Resolution (`resolveEnvVariable`).
- Silent Mode Control (`enableSilentMode`, `disableSilentMode`).
- **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration**
- **[`mcp-server/`](mdc:mcp-server): MCP Server Integration**
- **Purpose**: Provides MCP interface using FastMCP.
- **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)):
- Registers tools (`mcp-server/src/tools/*.js`). Tool `execute` methods **should be wrapped** with the `withNormalizedProjectRoot` HOF (from `tools/utils.js`) to ensure consistent path handling.

View File

@@ -329,6 +329,60 @@ When implementing commands that delete or remove data (like `remove-task` or `re
};
```
## Context-Aware Command Pattern
For AI-powered commands that benefit from project context, follow the research command pattern:
- **Context Integration**:
- ✅ DO: Use `ContextGatherer` utility for multi-source context extraction
- ✅ DO: Support task IDs, file paths, custom context, and project tree
- ✅ DO: Implement fuzzy search for automatic task discovery
- ✅ DO: Display detailed token breakdown for transparency
```javascript
// ✅ DO: Follow this pattern for context-aware commands
programInstance
.command('research')
.description('Perform AI-powered research queries with project context')
.argument('<prompt>', 'Research prompt to investigate')
.option('-i, --id <ids>', 'Comma-separated task/subtask IDs to include as context')
.option('-f, --files <paths>', 'Comma-separated file paths to include as context')
.option('-c, --context <text>', 'Additional custom context')
.option('--tree', 'Include project file tree structure')
.option('-d, --detail <level>', 'Output detail level: low, medium, high', 'medium')
.action(async (prompt, options) => {
// 1. Parameter validation and parsing
const taskIds = options.id ? parseTaskIds(options.id) : [];
const filePaths = options.files ? parseFilePaths(options.files) : [];
// 2. Initialize context gatherer
const projectRoot = findProjectRoot() || '.';
const tasksPath = path.join(projectRoot, 'tasks', 'tasks.json');
const gatherer = new ContextGatherer(projectRoot, tasksPath);
// 3. Auto-discover relevant tasks if none specified
if (taskIds.length === 0) {
const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
const discoveredIds = fuzzySearch.getTaskIds(
fuzzySearch.findRelevantTasks(prompt)
);
taskIds.push(...discoveredIds);
}
// 4. Gather context with token breakdown
const contextResult = await gatherer.gather({
tasks: taskIds,
files: filePaths,
customContext: options.context,
includeProjectTree: options.projectTree,
format: 'research',
includeTokenCounts: true
});
// 5. Display token breakdown and execute AI call
// Implementation continues...
});
```
## Error Handling
- **Exception Management**:

View File

@@ -0,0 +1,268 @@
---
description: Standardized patterns for gathering and processing context from multiple sources in Task Master commands, particularly for AI-powered features.
globs:
alwaysApply: false
---
# Context Gathering Patterns and Utilities
This document outlines the standardized patterns for gathering and processing context from multiple sources in Task Master commands, particularly for AI-powered features.
## Core Context Gathering Utility
The `ContextGatherer` class (`scripts/modules/utils/contextGatherer.js`) provides a centralized, reusable utility for extracting context from multiple sources:
### **Key Features**
- **Multi-source Context**: Tasks, files, custom text, project file tree
- **Token Counting**: Detailed breakdown using `gpt-tokens` library
- **Format Support**: Different output formats (research, chat, system-prompt)
- **Error Handling**: Graceful handling of missing files, invalid task IDs
- **Performance**: File size limits, depth limits for tree generation
### **Usage Pattern**
```javascript
import { ContextGatherer } from '../utils/contextGatherer.js';
// Initialize with project paths
const gatherer = new ContextGatherer(projectRoot, tasksPath);
// Gather context with detailed token breakdown
const result = await gatherer.gather({
tasks: ['15', '16.2'], // Task and subtask IDs
files: ['src/api.js', 'README.md'], // File paths
customContext: 'Additional context text',
includeProjectTree: true, // Include file tree
format: 'research', // Output format
includeTokenCounts: true // Get detailed token breakdown
});
// Access results
const contextString = result.context;
const tokenBreakdown = result.tokenBreakdown;
```
### **Token Breakdown Structure**
```javascript
{
customContext: { tokens: 150, characters: 800 },
tasks: [
{ id: '15', type: 'task', title: 'Task Title', tokens: 245, characters: 1200 },
{ id: '16.2', type: 'subtask', title: 'Subtask Title', tokens: 180, characters: 900 }
],
files: [
{ path: 'src/api.js', tokens: 890, characters: 4500, size: '4.5 KB' }
],
projectTree: { tokens: 320, characters: 1600 },
total: { tokens: 1785, characters: 8000 }
}
```
## Fuzzy Search Integration
The `FuzzyTaskSearch` class (`scripts/modules/utils/fuzzyTaskSearch.js`) provides intelligent task discovery:
### **Key Features**
- **Semantic Matching**: Uses Fuse.js for similarity scoring
- **Purpose Categories**: Pattern-based task categorization
- **Relevance Scoring**: High/medium/low relevance thresholds
- **Context-Aware**: Different search configurations for different use cases
### **Usage Pattern**
```javascript
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
// Initialize with tasks data and context
const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
// Find relevant tasks
const searchResults = fuzzySearch.findRelevantTasks(query, {
maxResults: 8,
includeRecent: true,
includeCategoryMatches: true
});
// Get task IDs for context gathering
const taskIds = fuzzySearch.getTaskIds(searchResults);
```
## Implementation Patterns for Commands
### **1. Context-Aware Command Structure**
```javascript
// In command action handler
async function commandAction(prompt, options) {
// 1. Parameter validation and parsing
const taskIds = options.id ? parseTaskIds(options.id) : [];
const filePaths = options.files ? parseFilePaths(options.files) : [];
// 2. Initialize context gatherer
const projectRoot = findProjectRoot() || '.';
const tasksPath = path.join(projectRoot, 'tasks', 'tasks.json');
const gatherer = new ContextGatherer(projectRoot, tasksPath);
// 3. Auto-discover relevant tasks if none specified
if (taskIds.length === 0) {
const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
const discoveredIds = fuzzySearch.getTaskIds(
fuzzySearch.findRelevantTasks(prompt)
);
taskIds.push(...discoveredIds);
}
// 4. Gather context with token breakdown
const contextResult = await gatherer.gather({
tasks: taskIds,
files: filePaths,
customContext: options.context,
includeProjectTree: options.projectTree,
format: 'research',
includeTokenCounts: true
});
// 5. Display token breakdown (for CLI)
if (outputFormat === 'text') {
displayDetailedTokenBreakdown(contextResult.tokenBreakdown);
}
// 6. Use context in AI call
const aiResult = await generateTextService(role, session, systemPrompt, userPrompt);
// 7. Display results with enhanced formatting
displayResults(aiResult, prompt, options.detail, contextResult.tokenBreakdown);
}
```
### **2. Token Display Pattern**
```javascript
function displayDetailedTokenBreakdown(tokenBreakdown, systemTokens, userTokens) {
const sections = [];
// Build context breakdown
if (tokenBreakdown.tasks?.length > 0) {
const taskDetails = tokenBreakdown.tasks.map(task =>
`${task.type === 'subtask' ? ' ' : ''}${task.id}: ${task.tokens.toLocaleString()}`
).join('\n');
sections.push(`Tasks (${tokenBreakdown.tasks.reduce((sum, t) => sum + t.tokens, 0).toLocaleString()}):\n${taskDetails}`);
}
if (tokenBreakdown.files?.length > 0) {
const fileDetails = tokenBreakdown.files.map(file =>
` ${file.path}: ${file.tokens.toLocaleString()} (${file.size})`
).join('\n');
sections.push(`Files (${tokenBreakdown.files.reduce((sum, f) => sum + f.tokens, 0).toLocaleString()}):\n${fileDetails}`);
}
// Add prompts breakdown (these arguments may be omitted by some callers)
if (systemTokens != null && userTokens != null) {
  sections.push(`Prompts: system ${systemTokens.toLocaleString()}, user ${userTokens.toLocaleString()}`);
}
// Display in clean box
const content = sections.join('\n\n');
console.log(boxen(content, {
title: chalk.cyan('Token Usage'),
padding: { top: 1, bottom: 1, left: 2, right: 2 },
borderStyle: 'round',
borderColor: 'cyan'
}));
}
```
### **3. Enhanced Result Display Pattern**
```javascript
function displayResults(result, query, detailLevel, tokenBreakdown) {
// Header with query info
const header = boxen(
chalk.green.bold('Research Results') + '\n\n' +
chalk.gray('Query: ') + chalk.white(query) + '\n' +
chalk.gray('Detail Level: ') + chalk.cyan(detailLevel),
{
padding: { top: 1, bottom: 1, left: 2, right: 2 },
margin: { top: 1, bottom: 0 },
borderStyle: 'round',
borderColor: 'green'
}
);
console.log(header);
// Process and highlight code blocks
const processedResult = processCodeBlocks(result);
// Main content in clean box
const contentBox = boxen(processedResult, {
padding: { top: 1, bottom: 1, left: 2, right: 2 },
margin: { top: 0, bottom: 1 },
borderStyle: 'single',
borderColor: 'gray'
});
console.log(contentBox);
console.log(chalk.green('✓ Research complete'));
}
```
## Code Block Enhancement
### **Syntax Highlighting Pattern**
```javascript
import { highlight } from 'cli-highlight';
function processCodeBlocks(text) {
return text.replace(/```(\w+)?\n([\s\S]*?)```/g, (match, language, code) => {
try {
const highlighted = highlight(code.trim(), {
language: language || 'javascript',
theme: 'default'
});
return `\n${highlighted}\n`;
} catch (error) {
return `\n${code.trim()}\n`;
}
});
}
```
## Integration Guidelines
### **When to Use Context Gathering**
- ✅ **DO**: Use for AI-powered commands that benefit from project context
- ✅ **DO**: Use when users might want to reference specific tasks or files
- ✅ **DO**: Use for research, analysis, or generation commands
- ❌ **DON'T**: Use for simple CRUD operations that don't need AI context
### **Performance Considerations**
- ✅ **DO**: Set reasonable file size limits (50KB default)
- ✅ **DO**: Limit project tree depth (3-5 levels)
- ✅ **DO**: Provide token counts to help users understand context size
- ✅ **DO**: Allow users to control what context is included
### **Error Handling**
- ✅ **DO**: Gracefully handle missing files with warnings
- ✅ **DO**: Validate task IDs and provide helpful error messages
- ✅ **DO**: Continue processing even if some context sources fail
- ✅ **DO**: Provide fallback behavior when context gathering fails
### **Future Command Integration**
Commands that should consider adopting this pattern:
- `analyze-complexity` - Could benefit from file context
- `expand-task` - Could use related task context
- `update-task` - Could reference similar tasks for consistency
- `add-task` - Could use project context for better task generation
## Export Patterns
### **Context Gatherer Module**
```javascript
export {
ContextGatherer,
createContextGatherer // Factory function
};
```
### **Fuzzy Search Module**
```javascript
export {
FuzzyTaskSearch,
PURPOSE_CATEGORIES,
RELEVANCE_THRESHOLDS
};
```
This context gathering system provides a foundation for building more intelligent, context-aware commands that can leverage project knowledge to provide better AI-powered assistance.

View File

@@ -104,7 +104,7 @@ Task Master offers two primary ways to interact:
Taskmaster configuration is managed through two main mechanisms:
1. **`.taskmaster/config.json` File (Primary):**
1. **`.taskmasterconfig` File (Primary):**
* Located in the project root directory.
* Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
* **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.

View File

@@ -0,0 +1,367 @@
---
description: Git workflow integrated with Task Master for feature development and collaboration
globs: "**/*"
alwaysApply: true
---
# Git Workflow with Task Master Integration
## **Branch Strategy**
### **Main Branch Protection**
- **main** branch contains production-ready code
- All feature development happens on task-specific branches
- Direct commits to main are prohibited
- All changes merge via Pull Requests
### **Task Branch Naming**
```bash
# ✅ DO: Use consistent task branch naming
task-001 # For Task 1
task-004 # For Task 4
task-015 # For Task 15
# ❌ DON'T: Use inconsistent naming
feature/user-auth
fix-database-issue
random-branch-name
```
## **Workflow Overview**
```mermaid
flowchart TD
A[Start: On main branch] --> B[Pull latest changes]
B --> C[Create task branch<br/>git checkout -b task-XXX]
C --> D[Set task status: in-progress]
D --> E[Get task context & expand if needed]
E --> F[Identify next subtask]
F --> G[Set subtask: in-progress]
G --> H[Research & collect context<br/>update_subtask with findings]
H --> I[Implement subtask]
I --> J[Update subtask with completion]
J --> K[Set subtask: done]
K --> L[Git commit subtask]
L --> M{More subtasks?}
M -->|Yes| F
M -->|No| N[Run final tests]
N --> O[Commit tests if added]
O --> P[Push task branch]
P --> Q[Create Pull Request]
Q --> R[Human review & merge]
R --> S[Switch to main & pull]
S --> T[Delete task branch]
T --> U[Ready for next task]
style A fill:#e1f5fe
style C fill:#f3e5f5
style G fill:#fff3e0
style L fill:#e8f5e8
style Q fill:#fce4ec
style R fill:#f1f8e9
style U fill:#e1f5fe
```
## **Complete Task Development Workflow**
### **Phase 1: Task Preparation**
```bash
# 1. Ensure you're on main branch and pull latest
git checkout main
git pull origin main
# 2. Check current branch status
git branch # Verify you're on main
# 3. Create task-specific branch
git checkout -b task-004 # For Task 4
# 4. Set task status in Task Master
# Use: set_task_status tool or `task-master set-status --id=4 --status=in-progress`
```
### **Phase 2: Task Analysis & Planning**
```bash
# 5. Get task context and expand if needed
# Use: get_task tool or `task-master show 4`
# Use: expand_task tool or `task-master expand --id=4 --research --force` (if complex)
# 6. Identify next subtask to work on
# Use: next_task tool or `task-master next`
```
### **Phase 3: Subtask Implementation Loop**
For each subtask, follow this pattern:
```bash
# 7. Mark subtask as in-progress
# Use: set_task_status tool or `task-master set-status --id=4.1 --status=in-progress`
# 8. Gather context and research (if needed)
# Use: update_subtask tool with research flag or:
# `task-master update-subtask --id=4.1 --prompt="Research findings..." --research`
# 9. Collect code context through AI exploration
# Document findings in subtask using update_subtask
# 10. Implement the subtask
# Write code, tests, documentation
# 11. Update subtask with completion details
# Use: update_subtask tool or:
# `task-master update-subtask --id=4.1 --prompt="Implementation complete..."`
# 12. Mark subtask as done
# Use: set_task_status tool or `task-master set-status --id=4.1 --status=done`
# 13. Commit the subtask implementation
git add .
git commit -m "feat(task-4): Complete subtask 4.1 - [Subtask Title]
- Implementation details
- Key changes made
- Any important notes
Subtask 4.1: [Brief description of what was accomplished]
Relates to Task 4: [Main task title]"
```
### **Phase 4: Task Completion**
```bash
# 14. When all subtasks are complete, run final testing
# Create test file if needed, ensure all tests pass
npm test # or jest, or manual testing
# 15. If tests were added/modified, commit them
git add .
git commit -m "test(task-4): Add comprehensive tests for Task 4
- Unit tests for core functionality
- Integration tests for API endpoints
- All tests passing
Task 4: [Main task title] - Testing complete"
# 16. Push the task branch
git push origin task-004
# 17. Create Pull Request
# Title: "Task 4: [Task Title]"
# Description should include:
# - Task overview
# - Subtasks completed
# - Testing approach
# - Any breaking changes or considerations
```
### **Phase 5: PR Merge & Cleanup**
```bash
# 18. Human reviews and merges PR into main
# 19. Switch back to main and pull merged changes
git checkout main
git pull origin main
# 20. Delete the feature branch (optional cleanup)
git branch -d task-004
git push origin --delete task-004
```
## **Commit Message Standards**
### **Subtask Commits**
```bash
# ✅ DO: Consistent subtask commit format
git commit -m "feat(task-4): Complete subtask 4.1 - Initialize Express server
- Set up Express.js with TypeScript configuration
- Added CORS and body parsing middleware
- Implemented health check endpoints
- Basic error handling middleware
Subtask 4.1: Initialize project with npm and install dependencies
Relates to Task 4: Setup Express.js Server Project"
# ❌ DON'T: Vague or inconsistent commits
git commit -m "fixed stuff"
git commit -m "working on task"
```
### **Test Commits**
```bash
# ✅ DO: Separate test commits when substantial
git commit -m "test(task-4): Add comprehensive tests for Express server setup
- Unit tests for middleware configuration
- Integration tests for health check endpoints
- Mock tests for database connection
- All tests passing with 95% coverage
Task 4: Setup Express.js Server Project - Testing complete"
```
### **Commit Type Prefixes**
- `feat(task-X):` - New feature implementation
- `fix(task-X):` - Bug fixes
- `test(task-X):` - Test additions/modifications
- `docs(task-X):` - Documentation updates
- `refactor(task-X):` - Code refactoring
- `chore(task-X):` - Build/tooling changes
## **Task Master Commands Integration**
### **Essential Commands for Git Workflow**
```bash
# Task management
task-master show <id> # Get task/subtask details
task-master next # Find next task to work on
task-master set-status --id=<id> --status=<status>
task-master update-subtask --id=<id> --prompt="..." --research
# Task expansion (for complex tasks)
task-master expand --id=<id> --research --force
# Progress tracking
task-master list # View all tasks and status
task-master list --status=in-progress # View active tasks
```
### **MCP Tool Equivalents**
When using Cursor or other MCP-integrated tools:
- `get_task` instead of `task-master show`
- `next_task` instead of `task-master next`
- `set_task_status` instead of `task-master set-status`
- `update_subtask` instead of `task-master update-subtask`
## **Branch Management Rules**
### **Branch Protection**
```bash
# ✅ DO: Always work on task branches
git checkout -b task-005
# Make changes
git commit -m "..."
git push origin task-005
# ❌ DON'T: Commit directly to main
git checkout main
git commit -m "..." # NEVER do this
```
### **Keeping Branches Updated**
```bash
# ✅ DO: Regularly sync with main (for long-running tasks)
git checkout task-005
git fetch origin
git rebase origin/main # or merge if preferred
# Resolve any conflicts and continue
```
## **Pull Request Guidelines**
### **PR Title Format**
```
Task <ID>: <Task Title>
Examples:
Task 4: Setup Express.js Server Project
Task 7: Implement User Authentication
Task 12: Add Stripe Payment Integration
```
### **PR Description Template**
```markdown
## Task Overview
Brief description of the main task objective.
## Subtasks Completed
- [x] 4.1: Initialize project with npm and install dependencies
- [x] 4.2: Configure TypeScript, ESLint and Prettier
- [x] 4.3: Create basic Express app with middleware and health check route
## Implementation Details
- Key architectural decisions made
- Important code changes
- Any deviations from original plan
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests passing
- [ ] Manual testing completed
## Breaking Changes
List any breaking changes or migration requirements.
## Related Tasks
Mention any dependent tasks or follow-up work needed.
```
## **Conflict Resolution**
### **Task Conflicts**
```bash
# If multiple people work on overlapping tasks:
# 1. Use Task Master's move functionality to reorganize
task-master move --from=5 --to=25 # Move conflicting task
# 2. Update task dependencies
task-master add-dependency --id=6 --depends-on=5
# 3. Coordinate through PR comments and task updates
```
### **Code Conflicts**
```bash
# Standard Git conflict resolution
git fetch origin
git rebase origin/main
# Resolve conflicts in files
git add .
git rebase --continue
```
## **Emergency Procedures**
### **Hotfixes**
```bash
# For urgent production fixes:
git checkout main
git pull origin main
git checkout -b hotfix-urgent-issue
# Make minimal fix
git commit -m "hotfix: Fix critical production issue
- Specific fix description
- Minimal impact change
- Requires immediate deployment"
git push origin hotfix-urgent-issue
# Create emergency PR for immediate review
```
### **Task Abandonment**
```bash
# If task needs to be abandoned or significantly changed:
# 1. Update task status
task-master set-status --id=<id> --status=cancelled
# 2. Clean up branch
git checkout main
git branch -D task-<id>
git push origin --delete task-<id>
# 3. Document reasoning in task
task-master update-task --id=<id> --prompt="Task cancelled due to..."
```
---
**References:**
- [Task Master Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Architecture Guidelines](mdc:.cursor/rules/architecture.mdc)
- [Task Master Commands](mdc:.cursor/rules/taskmaster.mdc)

View File

@@ -24,17 +24,22 @@ alwaysApply: false
The standard pattern for adding a feature follows this workflow:
1. **Core Logic**: Implement the business logic in the appropriate module (e.g., [`task-manager.js`](mdc:scripts/modules/task-manager.js)).
2. **AI Integration (If Applicable)**:
2. **Context Gathering (If Applicable)**:
- For AI-powered commands that benefit from project context, use the standardized context gathering patterns from [`context_gathering.mdc`](mdc:.cursor/rules/context_gathering.mdc).
- Import `ContextGatherer` and `FuzzyTaskSearch` utilities for reusable context extraction.
- Support multiple context types: tasks, files, custom text, project tree.
- Implement detailed token breakdown display for transparency.
3. **AI Integration (If Applicable)**:
- Import necessary service functions (e.g., `generateTextService`, `streamTextService`) from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js).
- Prepare parameters (`role`, `session`, `systemPrompt`, `prompt`).
- Call the service function.
- Handle the response (direct text or stream object).
- **Important**: Prefer `generateTextService` for calls sending large context (like stringified JSON) where incremental display is not needed. See [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc) for detailed usage patterns and cautions.
3. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js) following [`ui.mdc`](mdc:.cursor/rules/ui.mdc).
4. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js) following [`commands.mdc`](mdc:.cursor/rules/commands.mdc).
5. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
6. **Configuration**: Update configuration settings or add new ones in [`config-manager.js`](mdc:scripts/modules/config-manager.js) and ensure getters/setters are appropriate. Update documentation in [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). Update the `.taskmasterconfig` structure if needed.
7. **Documentation**: Update help text and documentation in [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).
4. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js) following [`ui.mdc`](mdc:.cursor/rules/ui.mdc). Consider enhanced formatting with syntax highlighting for code blocks.
5. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js) following [`commands.mdc`](mdc:.cursor/rules/commands.mdc).
6. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
7. **Configuration**: Update configuration settings or add new ones in [`config-manager.js`](mdc:scripts/modules/config-manager.js) and ensure getters/setters are appropriate. Update documentation in [`utilities.mdc`](mdc:.cursor/rules/utilities.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). Update the `.taskmasterconfig` structure if needed.
8. **Documentation**: Update help text and documentation in [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) and [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc).
## Critical Checklist for New Features

View File

@@ -36,7 +36,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in scripts/example_prd.txt.
### 2. Parse PRD (`parse_prd`)
@@ -45,12 +45,12 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
* `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to 'tasks/tasks.json'.` (CLI: `-o, --output <file>`)
* `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
* `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `scripts/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
---
@@ -77,10 +77,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter` (example below).
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them.
* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80.
* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
* **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
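For example, a typical CLI sequence using those commands might look like this (the model ID is a placeholder, not a recommendation):
```bash
# View current configuration and available models
task-master models

# Assign a custom OpenRouter model to the main role
task-master models --set-main=vendor/some-model --openrouter
```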
---
@@ -112,9 +112,10 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master show [id] [options]`
* **Description:** `Display detailed information for a specific Taskmaster task or subtask by its ID.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
* `id`: `Required. The ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to view. Supports a comma-separated list of IDs to fetch multiple tasks at once.` (CLI: `[id]` positional or `-i, --id <id>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Understand the full details, implementation notes, and test strategy for a specific task before starting work.
* **CRITICAL INFORMATION:** If you need to collect information from multiple tasks, pass comma-separated IDs (e.g., 1,2,3) to receive an array of tasks. Do not fetch tasks one at a time when you need several; that is wasteful. See the example below.
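For instance, a single call can replace three separate lookups (IDs are illustrative):
```bash
task-master show 1,2,3   # returns an array containing all three tasks
```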
---
@@ -348,7 +349,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
* `output`: `Where to save the complexity analysis report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-o, --output <file>`)
* `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`)
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
* `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
@@ -361,11 +362,48 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* `file`: `Path to the complexity report (default: 'scripts/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
---
## AI-Powered Research
### 25. Research (`research`)
* **MCP Tool:** `research`
* **CLI Command:** `task-master research [options]`
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
* **Key Parameters/Options:**
* `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
* `taskIds`: `Comma-separated list of task/subtask IDs for context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
* `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
* `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
* `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
* `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
* `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
* Get fresh information beyond knowledge cutoff dates
* Research latest best practices, library updates, security patches
* Find implementation examples for specific technologies
* Validate approaches against current industry standards
* Get contextual advice based on project files and tasks
* **When to Consider Using Research:**
* **Before implementing any task** - Research current best practices
* **When encountering new technologies** - Get up-to-date implementation guidance (libraries, APIs, etc.)
* **For security-related tasks** - Find latest security recommendations
* **When updating dependencies** - Research breaking changes and migration guides
* **For performance optimization** - Get current performance best practices
* **When debugging complex issues** - Research known solutions and workarounds
* **Research + Action Pattern** (sketched below):
* Use `research` to gather fresh information
* Use `update_subtask` to commit findings with timestamps
* Use `update_task` to incorporate research into task details
* Use `add_task` with research flag for informed task creation
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.
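A minimal sketch of that pattern from the CLI (the query, IDs, and findings are illustrative):
```bash
# 1. Gather fresh information with task context
task-master research "What are the latest best practices for React Query v5?" --id=15,16.2 --tree

# 2. Persist the findings on the relevant subtask
task-master update-subtask --id=16.2 --prompt="Research findings: React Query v5 migration steps..."
```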
---
## File Management
### 24. Generate Task Files (`generate`)
@@ -382,7 +420,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
## Environment Variables Configuration (Updated)
Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
Taskmaster primarily uses the **`.taskmasterconfig`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
@@ -395,12 +433,12 @@ Environment variables are used **only** for sensitive API keys related to AI pro
* `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
* `OPENROUTER_API_KEY`
* `XAI_API_KEY`
* `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):**
* `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (Optional/Provider Specific inside .taskmasterconfig):**
* `AZURE_OPENAI_ENDPOINT`
* `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool.
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via `task-master models` command or `models` MCP tool.
---

View File

@@ -0,0 +1,408 @@
---
description:
globs:
alwaysApply: true
---
# Test Workflow & Development Process
## **Test-Driven Development (TDD) Integration**
### **Core TDD Cycle with Jest**
```bash
# 1. Start development with watch mode
npm run test:watch
# 2. Write failing test first
# Create test file: src/utils/newFeature.test.ts
# Write test that describes expected behavior
# 3. Implement minimum code to make test pass
# 4. Refactor while keeping tests green
# 5. Add edge cases and error scenarios
```
### **TDD Workflow Per Subtask**
```bash
# When starting a new subtask:
task-master set-status --id=4.1 --status=in-progress
# Begin TDD cycle:
npm run test:watch # Keep running during development
# Document TDD progress in subtask:
task-master update-subtask --id=4.1 --prompt="TDD Progress:
- Written 3 failing tests for core functionality
- Implemented basic feature, tests now passing
- Adding edge case tests for error handling"
# Complete subtask with test summary:
task-master update-subtask --id=4.1 --prompt="Implementation complete:
- Feature implemented with 8 unit tests
- Coverage: 95% statements, 88% branches
- All tests passing, TDD cycle complete"
```
## **Testing Commands & Usage**
### **Development Commands**
```bash
# Primary development command - use during coding
npm run test:watch # Watch mode with Jest
npm run test:watch -- --testNamePattern="auth" # Watch specific tests
# Targeted testing during development
npm run test:unit # Run only unit tests
npm run test:unit -- --coverage # Unit tests with coverage
# Integration testing when APIs are ready
npm run test:integration # Run integration tests
npm run test:integration -- --detectOpenHandles # Debug hanging tests
# End-to-end testing for workflows
npm run test:e2e # Run E2E tests
npm run test:e2e -- --timeout=30000 # Extended timeout for E2E
```
### **Quality Assurance Commands**
```bash
# Full test suite with coverage (before commits)
npm run test:coverage # Complete coverage analysis
# All tests (CI/CD pipeline)
npm test # Run all test projects
# Specific test file execution
npm test -- auth.test.ts # Run specific test file
npm test -- --testNamePattern="should handle errors" # Run specific tests
```
## **Test Implementation Patterns**
### **Unit Test Development**
```typescript
// ✅ DO: Follow established patterns from auth.test.ts
describe('FeatureName', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    // Setup mocks with proper typing
  });

  describe('functionName', () => {
    it('should handle normal case', () => {
      // Test implementation with specific assertions
    });

    it('should throw error for invalid input', async () => {
      // Error scenario testing
      await expect(functionName(invalidInput))
        .rejects.toThrow('Specific error message');
    });
  });
});
```
### **Integration Test Development**
```typescript
// ✅ DO: Use supertest for API endpoint testing
import request from 'supertest';
import { app } from '../../src/app';
// Assumed locations for the shared Prisma client, fixtures, and setup helpers:
import { prisma } from '../../src/db';
import { createTestUser } from '../fixtures/users';
import { integrationTestUtils } from '../setup';

describe('POST /api/auth/register', () => {
  beforeEach(async () => {
    await integrationTestUtils.cleanupTestData();
  });

  it('should register user successfully', async () => {
    const userData = createTestUser();

    const response = await request(app)
      .post('/api/auth/register')
      .send(userData)
      .expect(201);

    expect(response.body).toMatchObject({
      id: expect.any(String),
      email: userData.email
    });

    // Verify database state
    const user = await prisma.user.findUnique({
      where: { email: userData.email }
    });
    expect(user).toBeTruthy();
  });
});
```
### **E2E Test Development**
```typescript
// ✅ DO: Test complete user workflows
import request from 'supertest';
import { app } from '../../src/app';
import { createTestUser } from '../fixtures/users'; // assumed fixture location

describe('User Authentication Flow', () => {
  it('should complete registration → login → protected access', async () => {
    // Step 1: Register
    const userData = createTestUser();
    await request(app)
      .post('/api/auth/register')
      .send(userData)
      .expect(201);

    // Step 2: Login
    const loginResponse = await request(app)
      .post('/api/auth/login')
      .send({ email: userData.email, password: userData.password })
      .expect(200);
    const { token } = loginResponse.body;

    // Step 3: Access protected resource
    await request(app)
      .get('/api/profile')
      .set('Authorization', `Bearer ${token}`)
      .expect(200);
  }, 30000); // Extended timeout for E2E
});
```
## **Mocking & Test Utilities**
### **Established Mocking Patterns**
```typescript
// ✅ DO: Use established bcrypt mocking pattern
jest.mock('bcrypt');
import bcrypt from 'bcrypt';

const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;

// ✅ DO: Use Prisma mocking for unit tests
jest.mock('@prisma/client', () => ({
  PrismaClient: jest.fn().mockImplementation(() => ({
    user: {
      create: jest.fn(),
      findUnique: jest.fn(),
    },
    $connect: jest.fn(),
    $disconnect: jest.fn(),
  })),
}));
```
### **Test Fixtures Usage**
```typescript
// ✅ DO: Use centralized test fixtures
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';

describe('User Service', () => {
  it('should handle admin user creation', async () => {
    const userData = createTestUser(adminUser);
    // Test implementation
  });

  it('should reject invalid user data', async () => {
    const userData = createTestUser(invalidUser);
    // Error testing
  });
});
```
## **Coverage Standards & Monitoring**
### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% utils, 85% middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change (see the configuration sketch below)
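A minimal sketch of how these standards might be declared, assuming Jest's standard `coverageThreshold` option in [`jest.config.js`](mdc:jest.config.js) (the path keys are illustrative):
```javascript
// jest.config.js (excerpt) - thresholds mirroring the standards above
module.exports = {
  coverageThreshold: {
    global: { lines: 80, functions: 80, branches: 70 },
    './src/utils/': { lines: 90, functions: 90 },
    './src/middleware/': { lines: 85, functions: 85 }
  }
};
```
Jest fails the coverage run when any threshold is unmet, which keeps these standards enforceable rather than aspirational.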
### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage
# View detailed HTML report
open coverage/lcov-report/index.html
# Coverage files generated:
# - coverage/lcov-report/index.html # Detailed HTML report
# - coverage/lcov.info # LCOV format for IDE integration
# - coverage/coverage-final.json # JSON format for tooling
```
### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
  it('should return true for valid input', () => {
    expect(validateInput('valid')).toBe(true);
  });

  it('should return false for various invalid inputs', () => {
    expect(validateInput('')).toBe(false);        // Empty string
    expect(validateInput(null)).toBe(false);      // Null value
    expect(validateInput(undefined)).toBe(false); // Undefined
  });

  it('should throw for unexpected input types', () => {
    expect(() => validateInput(123)).toThrow('Invalid input type');
  });
});
```
## **Testing During Development Phases**
### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress
# 2. Begin TDD cycle
npm run test:watch
# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"
# 4. Verify coverage before completion
npm run test:coverage
# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```
### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration
# Update integration test templates
# Replace placeholder tests with real endpoint calls
# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```
### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage # Verify all tests pass with coverage
npm run test:unit # Quick unit test verification
npm run test:integration # Integration test verification (if applicable)
# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y
- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established
Task X: Feature Y - Testing complete"
```
## **Error Handling & Debugging**
### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';

it('should debug complex operation', () => {
  testUtils.withConsole(() => {
    // Console output visible only for this test
    console.log('Debug info:', complexData);
    service.complexOperation();
  });
});

// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
  const promise = service.asyncOperation();

  // Test intermediate state
  expect(service.isProcessing()).toBe(true);

  const result = await promise;
  expect(result).toBe('expected');
  expect(service.isProcessing()).toBe(false);
});
```
### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles
# Memory leaks in tests
npm run test:unit -- --logHeapUsage
# Slow tests identification
npm run test:coverage -- --verbose
# Mock not working properly
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```
## **Continuous Integration Integration**
### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
  run: |
    npm ci
    npm run test:coverage

- name: Upload coverage reports
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage/lcov.info
```
### **Pre-commit Hooks**
```bash
# Setup pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"
# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```
## **Test Maintenance & Evolution**
### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module (a starter skeleton follows below)
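A starter skeleton following those steps (the module under test and its import path are hypothetical):
```typescript
// tests/unit/newFeature.test.ts
import { createTestUser } from '../fixtures/users';
import { newFeature } from '../../src/utils/newFeature'; // hypothetical module under test

describe('newFeature', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should handle a typical user', () => {
    const userData = createTestUser();
    expect(newFeature(userData)).toBeDefined();
  });
});
```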
### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities
### **Test Performance Optimization**
- **Parallel execution**: Jest runs tests in parallel by default
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible
---
**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)
- [Auth Test Example](mdc:src/utils/auth.test.ts)

View File

@@ -150,4 +150,91 @@ alwaysApply: false
));
```
Refer to [`ui.js`](mdc:scripts/modules/ui.js) for implementation examples and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines.
## Enhanced Display Patterns
### **Token Breakdown Display**
- Use detailed, granular token breakdowns for AI-powered commands
- Display context sources with individual token counts
- Show both token count and character count for transparency
```javascript
// ✅ DO: Display detailed token breakdown
import boxen from 'boxen';
import chalk from 'chalk';

function displayDetailedTokenBreakdown(tokenBreakdown, systemTokens, userTokens) {
  const sections = [];

  if (tokenBreakdown.tasks?.length > 0) {
    const taskDetails = tokenBreakdown.tasks.map(task =>
      `${task.type === 'subtask' ? '  ' : ''}${task.id}: ${task.tokens.toLocaleString()}`
    ).join('\n');
    sections.push(`Tasks (${tokenBreakdown.tasks.reduce((sum, t) => sum + t.tokens, 0).toLocaleString()}):\n${taskDetails}`);
  }

  const content = sections.join('\n\n');
  console.log(boxen(content, {
    title: chalk.cyan('Token Usage'),
    padding: { top: 1, bottom: 1, left: 2, right: 2 },
    borderStyle: 'round',
    borderColor: 'cyan'
  }));
}
```
### **Code Block Syntax Highlighting**
- Use `cli-highlight` library for syntax highlighting in terminal output
- Process code blocks in AI responses for better readability
```javascript
// ✅ DO: Enhance code blocks with syntax highlighting
import { highlight } from 'cli-highlight';

function processCodeBlocks(text) {
  return text.replace(/```(\w+)?\n([\s\S]*?)```/g, (match, language, code) => {
    try {
      const highlighted = highlight(code.trim(), {
        language: language || 'javascript',
        theme: 'default'
      });
      return `\n${highlighted}\n`;
    } catch (error) {
      // Fall back to the raw code if highlighting fails
      return `\n${code.trim()}\n`;
    }
  });
}
```
### **Multi-Section Result Display**
- Use separate boxes for headers, content, and metadata
- Maintain consistent styling across different result types
```javascript
// ✅ DO: Use structured result display (uses boxen/chalk and processCodeBlocks from above)
function displayResults(result, query, detailLevel) {
  // Header with query info
  const header = boxen(
    chalk.green.bold('Research Results') + '\n\n' +
      chalk.gray('Query: ') + chalk.white(query) + '\n' +
      chalk.gray('Detail Level: ') + chalk.cyan(detailLevel),
    {
      padding: { top: 1, bottom: 1, left: 2, right: 2 },
      margin: { top: 1, bottom: 0 },
      borderStyle: 'round',
      borderColor: 'green'
    }
  );
  console.log(header);

  // Process and display main content
  const processedResult = processCodeBlocks(result);
  const contentBox = boxen(processedResult, {
    padding: { top: 1, bottom: 1, left: 2, right: 2 },
    margin: { top: 0, bottom: 1 },
    borderStyle: 'single',
    borderColor: 'gray'
  });
  console.log(contentBox);

  console.log(chalk.green('✓ Operation complete'));
}
```
Refer to [`ui.js`](mdc:scripts/modules/ui.js) for implementation examples, [`context_gathering.mdc`](mdc:.cursor/rules/context_gathering.mdc) for context display patterns, and [`new_features.mdc`](mdc:.cursor/rules/new_features.mdc) for integration guidelines.

View File

@@ -46,7 +46,7 @@ alwaysApply: false
- **Location**:
- **Core CLI Utilities**: Place utilities used primarily by the core `task-master` CLI logic and command modules (`scripts/modules/*`) into [`scripts/modules/utils.js`](mdc:scripts/modules/utils.js).
- **MCP Server Utilities**: Place utilities specifically designed to support the MCP server implementation into the appropriate subdirectories within `mcp-server/src/`.
- Path/Core Logic Helpers: [`mcp-server/src/core/utils/`](mdc:mcp-server/src/core/utils/) (e.g., `path-utils.js`).
- Path/Core Logic Helpers: [`mcp-server/src/core/utils/`](mdc:mcp-server/src/core/utils) (e.g., `path-utils.js`).
- Tool Execution/Response Helpers: [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
## Documentation Standards
@@ -110,7 +110,7 @@ Taskmaster configuration (excluding API keys) is primarily managed through the `
- ✅ DO: Use appropriate icons for different log levels
- ✅ DO: Respect the configured log level
- ❌ DON'T: Add direct console.log calls outside the logging utility
- **Note on Passed Loggers**: When a logger object (like the FastMCP `log` object) is passed *as a parameter* (e.g., as `mcpLog`) into core Task Master functions, the receiving function often expects specific methods (`.info`, `.warn`, `.error`, etc.) to be directly callable on that object (e.g., `mcpLog[level](...)`). If the passed logger doesn't have this exact structure, a wrapper object may be needed. See the **Handling Logging Context (`mcpLog`)** section in [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for the standard pattern used in direct functions.
- **Logger Wrapper Pattern**:
- ✅ DO: Use the logger wrapper pattern when passing loggers to prevent `mcpLog[level] is not a function` errors:
@@ -548,4 +548,56 @@ export {
};
```
Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) and [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for more context on MCP server architecture and integration.
## Context Gathering Utilities
### **ContextGatherer** (`scripts/modules/utils/contextGatherer.js`)
- **Multi-Source Context Extraction**:
- ✅ DO: Use for AI-powered commands that need project context
- ✅ DO: Support tasks, files, custom text, and project tree context
- ✅ DO: Implement detailed token counting with `gpt-tokens` library
- ✅ DO: Provide multiple output formats (research, chat, system-prompt)
```javascript
// ✅ DO: Use ContextGatherer for consistent context extraction
import { ContextGatherer } from '../utils/contextGatherer.js';

const gatherer = new ContextGatherer(projectRoot, tasksPath);
const result = await gatherer.gather({
  tasks: ['15', '16.2'],
  files: ['src/api.js'],
  customContext: 'Additional context',
  includeProjectTree: true,
  format: 'research',
  includeTokenCounts: true
});
```
### **FuzzyTaskSearch** (`scripts/modules/utils/fuzzyTaskSearch.js`)
- **Intelligent Task Discovery**:
- ✅ DO: Use for automatic task relevance detection
- ✅ DO: Configure search parameters based on use case context
- ✅ DO: Implement purpose-based categorization for better matching
- ✅ DO: Sort results by relevance score and task ID
```javascript
// ✅ DO: Use FuzzyTaskSearch for intelligent task discovery
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';

const fuzzySearch = new FuzzyTaskSearch(tasksData.tasks, 'research');
const searchResults = fuzzySearch.findRelevantTasks(query, {
  maxResults: 8,
  includeRecent: true,
  includeCategoryMatches: true
});
const taskIds = fuzzySearch.getTaskIds(searchResults);
```
- **Integration Guidelines**:
- ✅ DO: Use fuzzy search to supplement user-provided task IDs (see the sketch below)
- ✅ DO: Display discovered task IDs to users for transparency
- ✅ DO: Sort discovered task IDs numerically for better readability
- ❌ DON'T: Replace explicit user task selections with fuzzy results
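Continuing the example above, a small sketch of these guidelines (variable names are illustrative):
```javascript
// Supplement explicit user selections with fuzzy matches - never replace them
const userTaskIds = ['15', '16.2'];
const discoveredIds = fuzzySearch.getTaskIds(searchResults);

// Dedupe, then sort numerically for readability
const allTaskIds = [...new Set([...userTaskIds, ...discoveredIds])].sort(
  (a, b) => parseFloat(a) - parseFloat(b)
);

// Surface the discovered IDs so the user knows what was added
console.log(`Including related tasks: ${discoveredIds.join(', ')}`);
```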
Refer to [`context_gathering.mdc`](mdc:.cursor/rules/context_gathering.mdc) for detailed implementation patterns, and to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) and [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for more context on MCP server architecture and integration.

.gitignore
View File

@@ -19,13 +19,26 @@ npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
tests/e2e/_runs/
tests/e2e/log/
# Coverage directory used by tools like istanbul
coverage
coverage/
*.lcov
# Jest cache
.jest/
# Test temporary files and directories
tests/temp/
tests/e2e/_runs/
tests/e2e/log/
tests/**/*.log
tests/**/coverage/
# Test database files (if any)
tests/**/*.db
tests/**/*.sqlite
tests/**/*.sqlite3
# Optional npm cache directory
.npm
@@ -64,3 +77,17 @@ dev-debug.log
# NPMRC
.npmrc
# Added by Claude Task Master
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# OS specific
# Task files
tasks.json
tasks/

View File

@@ -1,33 +0,0 @@
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-sonnet-4-20250514",
      "maxTokens": 50000,
      "temperature": 0.2
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219",
      "maxTokens": 128000,
      "temperature": 0.2
    }
  },
  "global": {
    "userId": "1234567890",
    "logLevel": "info",
    "debug": false,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Taskmaster",
    "ollamaBaseURL": "http://localhost:11434/api",
    "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com",
    "azureBaseURL": "https://your-endpoint.azure.com/"
  }
}

View File

@@ -1,528 +0,0 @@
# Claude Task Master - Product Requirements Document
<PRD>
# Technical Architecture
## System Components
1. **Task Management Core**
- Tasks.json file structure (single source of truth)
- Task model with dependencies, priorities, and metadata
- Task state management system
- Task file generation subsystem
2. **AI Integration Layer**
- Anthropic Claude API integration
- Perplexity API integration (optional)
- Prompt engineering components
- Response parsing and processing
3. **Command Line Interface**
- Command parsing and execution
- Interactive user input handling
- Display and formatting utilities
- Status reporting and feedback system
4. **Cursor AI Integration**
- Cursor rules documentation
- Agent interaction patterns
- Workflow guideline specifications
## Data Models
### Task Model
```json
{
  "id": 1,
  "title": "Task Title",
  "description": "Brief task description",
  "status": "pending|done|deferred",
  "dependencies": [0],
  "priority": "high|medium|low",
  "details": "Detailed implementation instructions",
  "testStrategy": "Verification approach details",
  "subtasks": [
    {
      "id": 1,
      "title": "Subtask Title",
      "description": "Subtask description",
      "status": "pending|done|deferred",
      "dependencies": [],
      "acceptanceCriteria": "Verification criteria"
    }
  ]
}
```
### Tasks Collection Model
```json
{
  "meta": {
    "projectName": "Project Name",
    "version": "1.0.0",
    "prdSource": "path/to/prd.txt",
    "createdAt": "ISO-8601 timestamp",
    "updatedAt": "ISO-8601 timestamp"
  },
  "tasks": [
    // Array of Task objects
  ]
}
```
### Task File Format
```
# Task ID: <id>
# Title: <title>
# Status: <status>
# Dependencies: <comma-separated list of dependency IDs>
# Priority: <priority>
# Description: <brief description>
# Details:
<detailed implementation notes>
# Test Strategy:
<verification approach>
# Subtasks:
1. <subtask title> - <subtask description>
```
## APIs and Integrations
1. **Anthropic Claude API**
- Authentication via API key
- Prompt construction and streaming
- Response parsing and extraction
- Error handling and retries
2. **Perplexity API (via OpenAI client)**
- Authentication via API key
- Research-oriented prompt construction
- Enhanced contextual response handling
- Fallback mechanisms to Claude
3. **File System API**
- Reading/writing tasks.json
- Managing individual task files
- Command execution logging
- Debug logging system
## Infrastructure Requirements
1. **Node.js Runtime**
- Version 14.0.0 or higher
- ES Module support
- File system access rights
- Command execution capabilities
2. **Configuration Management**
- Environment variable handling
- .env file support
- Configuration validation
- Sensible defaults with overrides
3. **Development Environment**
- Git repository
- NPM package management
- Cursor editor integration
- Command-line terminal access
# Development Roadmap
## Phase 1: Core Task Management System
1. **Task Data Structure**
- Design and implement the tasks.json structure
- Create task model validation
- Implement basic task operations (create, read, update)
- Develop file system interactions
2. **Command Line Interface Foundation**
- Implement command parsing with Commander.js
- Create help documentation
- Implement colorized console output
- Add logging system with configurable levels
3. **Basic Task Operations**
- Implement task listing functionality
- Create task status update capability
- Add dependency tracking
- Implement priority management
4. **Task File Generation**
- Create task file templates
- Implement generation from tasks.json
- Add bi-directional synchronization
- Implement proper file naming and organization
## Phase 2: AI Integration
1. **Claude API Integration**
- Implement API authentication
- Create prompt templates for PRD parsing
- Design response handlers
- Add error management and retries
2. **PRD Parsing System**
- Implement PRD file reading
- Create PRD to task conversion logic
- Add intelligent dependency inference
- Implement priority assignment logic
3. **Task Expansion With Claude**
- Create subtask generation prompts
- Implement subtask creation workflow
- Add context-aware expansion capabilities
- Implement parent-child relationship management
4. **Implementation Drift Handling**
- Add capability to update future tasks
- Implement task rewriting based on new context
- Create dependency chain updates
- Preserve completed work while updating future tasks
## Phase 3: Advanced Features
1. **Perplexity Integration**
- Implement Perplexity API authentication
- Create research-oriented prompts
- Add fallback to Claude when unavailable
- Implement response quality comparison logic
2. **Research-Backed Subtask Generation**
- Create specialized research prompts
- Implement context enrichment
- Add domain-specific knowledge incorporation
- Create more detailed subtask generation
3. **Batch Operations**
- Implement multi-task status updates
- Add bulk subtask generation
- Create task filtering and querying
- Implement advanced dependency management
4. **Project Initialization**
- Create project templating system
- Implement interactive setup
- Add environment configuration
- Create documentation generation
## Phase 4: Cursor AI Integration
1. **Cursor Rules Implementation**
- Create dev_workflow.mdc documentation
- Implement cursor_rules.mdc
- Add self_improve.mdc
- Design rule integration documentation
2. **Agent Workflow Guidelines**
- Document task discovery workflow
- Create task selection guidelines
- Implement implementation guidance
- Add verification procedures
3. **Agent Command Integration**
- Document command syntax for agents
- Create example interactions
- Implement agent response patterns
- Add context management for agents
4. **User Documentation**
- Create detailed README
- Add scripts documentation
- Implement example workflows
- Create troubleshooting guides
# Logical Dependency Chain
## Foundation Layer
1. **Task Data Structure**
- Must be implemented first as all other functionality depends on this
- Defines the core data model for the entire system
- Establishes the single source of truth concept
2. **Command Line Interface**
- Built on top of the task data structure
- Provides the primary user interaction mechanism
- Required for all subsequent operations to be accessible
3. **Basic Task Operations**
- Depends on both task data structure and CLI
- Provides the fundamental operations for task management
- Enables the minimal viable workflow
## Functional Layer
4. **Task File Generation**
- Depends on task data structure and basic operations
- Creates the individual task files for reference
- Enables the file-based workflow complementing tasks.json
5. **Claude API Integration**
- Independent of most previous components but needs the task data structure
- Provides the AI capabilities that enhance the system
- Gateway to advanced task generation features
6. **PRD Parsing System**
- Depends on Claude API integration and task data structure
- Enables the initial task generation workflow
- Creates the starting point for new projects
## Enhancement Layer
7. **Task Expansion With Claude**
- Depends on Claude API integration and basic task operations
- Enhances existing tasks with more detailed subtasks
- Improves the implementation guidance
8. **Implementation Drift Handling**
- Depends on Claude API integration and task operations
- Addresses a key challenge in AI-driven development
- Maintains the relevance of task planning as implementation evolves
9. **Perplexity Integration**
- Can be developed in parallel with other features after Claude integration
- Enhances the quality of generated content
- Provides research-backed improvements
## Advanced Layer
10. **Research-Backed Subtask Generation**
- Depends on Perplexity integration and task expansion
- Provides higher quality, more contextual subtasks
- Enhances the value of the task breakdown
11. **Batch Operations**
- Depends on basic task operations
- Improves efficiency for managing multiple tasks
- Quality-of-life enhancement for larger projects
12. **Project Initialization**
- Depends on most previous components being stable
- Provides a smooth onboarding experience
- Creates a complete project setup in one step
## Integration Layer
13. **Cursor Rules Implementation**
- Can be developed in parallel after basic functionality
- Provides the guidance for Cursor AI agent
- Enhances the AI-driven workflow
14. **Agent Workflow Guidelines**
- Depends on Cursor rules implementation
- Structures how the agent interacts with the system
- Ensures consistent agent behavior
15. **Agent Command Integration**
- Depends on agent workflow guidelines
- Provides specific command patterns for the agent
- Optimizes the agent-user interaction
16. **User Documentation**
- Should be developed alongside all features
- Must be completed before release
- Ensures users can effectively use the system
# Risks and Mitigations
## Technical Challenges
### API Reliability
**Risk**: Anthropic or Perplexity API could have downtime, rate limiting, or breaking changes.
**Mitigation**:
- Implement robust error handling with exponential backoff (sketched below)
- Add fallback mechanisms (Claude fallback for Perplexity)
- Cache important responses to reduce API dependency
- Support offline mode for critical functions
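A minimal sketch of the backoff idea, assuming a generic async API call (all names are illustrative):
```javascript
// Retry a flaky API call with exponential backoff plus jitter
async function withBackoff(apiCall, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await apiCall();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error; // out of retries
      const delayMs = Math.min(1000 * 2 ** attempt, 30000) + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```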
### Model Output Variability
**Risk**: AI models may produce inconsistent or unexpected outputs.
**Mitigation**:
- Design robust prompt templates with strict output formatting requirements
- Implement response validation and error detection
- Add self-correction mechanisms and retries with improved prompts
- Allow manual editing of generated content
### Node.js Version Compatibility
**Risk**: Differences in Node.js versions could cause unexpected behavior.
**Mitigation**:
- Clearly document minimum Node.js version requirements
- Use transpilers if needed for compatibility
- Test across multiple Node.js versions
- Handle version-specific features gracefully
## MVP Definition
### Feature Prioritization
**Risk**: Including too many features in the MVP could delay release and adoption.
**Mitigation**:
- Define MVP as core task management + basic Claude integration
- Ensure each phase delivers a complete, usable product
- Implement feature flags for easy enabling/disabling of features
- Get early user feedback to validate feature importance
### Scope Creep
**Risk**: The project could expand beyond its original intent, becoming too complex.
**Mitigation**:
- Maintain a strict definition of what the tool is and isn't
- Focus on task management for AI-driven development
- Evaluate new features against core value proposition
- Implement extensibility rather than building every feature
### User Expectations
**Risk**: Users might expect a full project management solution rather than a task tracking system.
**Mitigation**:
- Clearly communicate the tool's purpose and limitations
- Provide integration points with existing project management tools
- Focus on the unique value of AI-driven development
- Document specific use cases and example workflows
## Resource Constraints
### Development Capacity
**Risk**: Limited development resources could delay implementation.
**Mitigation**:
- Phase implementation to deliver value incrementally
- Focus on core functionality first
- Leverage open source libraries where possible
- Design for extensibility to allow community contributions
### AI Cost Management
**Risk**: Excessive API usage could lead to high costs.
**Mitigation**:
- Implement token usage tracking and reporting
- Add configurable limits to prevent unexpected costs
- Cache responses where appropriate
- Optimize prompts for token efficiency
- Support local LLM options in the future
### Documentation Overhead
**Risk**: Complexity of the system requires extensive documentation that is time-consuming to maintain.
**Mitigation**:
- Use AI to help generate and maintain documentation
- Create self-documenting commands and features
- Implement progressive documentation (basic to advanced)
- Build help directly into the CLI
# Appendix
## AI Prompt Engineering Specifications
### PRD Parsing Prompt Structure
```
You are assisting with transforming a Product Requirements Document (PRD) into a structured set of development tasks.
Given the following PRD, create a comprehensive list of development tasks that would be needed to implement the described product.
For each task:
1. Assign a short, descriptive title
2. Write a concise description
3. Identify dependencies (which tasks must be completed before this one)
4. Assign a priority (high, medium, low)
5. Include detailed implementation notes
6. Describe a test strategy to verify completion
Structure the tasks in a logical order of implementation.
PRD:
{prd_content}
```
### Task Expansion Prompt Structure
```
You are helping to break down a development task into more manageable subtasks.
Main task:
Title: {task_title}
Description: {task_description}
Details: {task_details}
Please create {num_subtasks} specific subtasks that together would accomplish this main task.
For each subtask, provide:
1. A clear, actionable title
2. A concise description
3. Any dependencies on other subtasks
4. Specific acceptance criteria to verify completion
Additional context:
{additional_context}
```
### Research-Backed Expansion Prompt Structure
```
You are a technical researcher and developer helping to break down a software development task into detailed, well-researched subtasks.
Main task:
Title: {task_title}
Description: {task_description}
Details: {task_details}
Research the latest best practices, technologies, and implementation patterns for this type of task. Then create {num_subtasks} specific, actionable subtasks that together would accomplish the main task.
For each subtask:
1. Provide a clear, specific title
2. Write a detailed description including technical approach
3. Identify dependencies on other subtasks
4. Include specific acceptance criteria
5. Reference any relevant libraries, tools, or resources that should be used
Consider security, performance, maintainability, and user experience in your recommendations.
```
## Task File System Specification
### Directory Structure
```
/
├── .cursor/
│   └── rules/
│       ├── dev_workflow.mdc
│       ├── cursor_rules.mdc
│       └── self_improve.mdc
├── scripts/
│   ├── dev.js
│   └── README.md
├── tasks/
│   ├── task_001.txt
│   ├── task_002.txt
│   └── ...
├── .env
├── .env.example
├── .gitignore
├── package.json
├── README.md
└── tasks.json
```
### Task ID Specification
- Main tasks: Sequential integers (1, 2, 3, ...)
- Subtasks: Parent ID + dot + sequential integer (1.1, 1.2, 2.1, ...); see the parsing sketch below
- ID references: Used in dependencies, command parameters
- ID ordering: Implies suggested implementation order
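A tiny helper illustrating the scheme (hypothetical, not part of the specification):
```javascript
// Parse "1" or "1.2" into parent/subtask components per the ID spec
function parseTaskId(id) {
  const [parent, subtask] = String(id).split('.');
  return { parent: Number(parent), subtask: subtask ? Number(subtask) : null };
}

parseTaskId('2.1'); // => { parent: 2, subtask: 1 }
```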
## Command-Line Interface Specification
### Global Options
- `--help`: Display help information
- `--version`: Display version information
- `--file=<file>`: Specify an alternative tasks.json file
- `--quiet`: Reduce output verbosity
- `--debug`: Increase output verbosity
- `--json`: Output in JSON format (for programmatic use)
### Command Structure
- `node scripts/dev.js <command> [options]` (example invocation below)
- All commands operate on tasks.json by default
- Commands follow consistent parameter naming
- Common parameter styles: `--id=<id>`, `--status=<status>`, `--prompt="<text>"`
- Boolean flags: `--all`, `--force`, `--with-subtasks`
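An illustrative invocation combining these conventions (the command name is an example, not a normative list):
```bash
node scripts/dev.js set-status --id=3 --status=done --file=custom-tasks.json
```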
## API Integration Specifications
### Anthropic API Configuration
- Authentication: ANTHROPIC_API_KEY environment variable
- Model selection: MODEL environment variable
- Default model: claude-3-7-sonnet-20250219
- Maximum tokens: MAX_TOKENS environment variable (default: 4000)
- Temperature: TEMPERATURE environment variable (default: 0.7)
### Perplexity API Configuration
- Authentication: PERPLEXITY_API_KEY environment variable
- Model selection: PERPLEXITY_MODEL environment variable
- Default model: sonar-medium-online
- Connection: Via OpenAI client
- Fallback: Use Claude if Perplexity unavailable (sample `.env` below)
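A sample `.env` sketch covering these settings (all key values are placeholders):
```bash
# .env - AI provider configuration
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7
PERPLEXITY_API_KEY=pplx-xxxxxxxx
PERPLEXITY_MODEL=sonar-medium-online
```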
</PRD>

View File

@@ -1,357 +0,0 @@
{
"meta": {
"generatedAt": "2025-05-22T05:48:33.026Z",
"tasksAnalyzed": 6,
"totalTasks": 88,
"analysisCount": 43,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": true
},
"complexityAnalysis": [
{
"taskId": 24,
"taskTitle": "Implement AI-Powered Test Generation Command",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of the AI-powered test generation command into detailed subtasks covering: command structure setup, AI prompt engineering, test file generation logic, integration with Claude API, and comprehensive error handling.",
"reasoning": "This task involves complex integration with an AI service (Claude), requires sophisticated prompt engineering, and needs to generate structured code files. The existing 3 subtasks are a good start but could be expanded to include more detailed steps for AI integration, error handling, and test file formatting."
},
{
"taskId": 26,
"taskTitle": "Implement Context Foundation for AI Operations",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the context foundation appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or integration with existing systems.",
"reasoning": "This task involves creating a foundation for context integration with several well-defined components. The existing 4 subtasks cover the main implementation areas (context-file flag, cursor rules integration, context extraction utility, and command handler updates). The complexity is moderate as it requires careful integration with existing systems but has clear requirements."
},
{
"taskId": 27,
"taskTitle": "Implement Context Enhancements for AI Operations",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing context enhancements appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or performance optimization.",
"reasoning": "This task builds upon the foundation from Task #26 and adds more sophisticated context handling features. The 4 existing subtasks cover the main implementation areas (code context extraction, task history context, PRD context integration, and context formatting). The complexity is higher than the foundation task due to the need for intelligent context selection and optimization."
},
{
"taskId": 28,
"taskTitle": "Implement Advanced ContextManager System",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the advanced ContextManager system appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or backward compatibility with previous context implementations.",
"reasoning": "This task represents the most complex phase of the context implementation, requiring a sophisticated class design, optimization algorithms, and integration with multiple systems. The 5 existing subtasks cover the core implementation areas, but the complexity is high due to the need for intelligent context prioritization, token management, and performance monitoring."
},
{
"taskId": 40,
"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the 'plan' command appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or integration with existing task management workflows.",
"reasoning": "This task involves creating a new command that leverages AI to generate implementation plans. The existing 4 subtasks cover the main implementation areas (retrieving task content, generating plans with AI, formatting in XML, and error handling). The complexity is moderate as it builds on existing patterns for task updates but requires careful AI integration."
},
{
"taskId": 41,
"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
"complexityScore": 8,
"recommendedSubtasks": 10,
"expansionPrompt": "The current 10 subtasks for implementing the visual task dependency graph appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large graphs or additional visualization options.",
"reasoning": "This task involves creating a sophisticated visualization system for terminal display, which is inherently complex due to layout algorithms, ASCII/Unicode rendering, and handling complex dependency relationships. The 10 existing subtasks cover all major aspects of implementation, from CLI interface to accessibility features."
},
{
"taskId": 42,
"taskTitle": "Implement MCP-to-MCP Communication Protocol",
"complexityScore": 9,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the MCP-to-MCP communication protocol appear well-structured. Consider if any additional subtasks are needed for security hardening, performance optimization, or comprehensive documentation.",
"reasoning": "This task involves designing and implementing a complex communication protocol between different MCP tools and servers. It requires sophisticated adapter patterns, client-server architecture, and handling of multiple operational modes. The complexity is very high due to the need for standardization, security, and backward compatibility."
},
{
"taskId": 44,
"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing task automation with webhooks appear comprehensive. Consider if any additional subtasks are needed for security testing, rate limiting implementation, or webhook monitoring tools.",
"reasoning": "This task involves creating a sophisticated event system with webhooks for integration with external services. The complexity is high due to the need for secure authentication, reliable delivery mechanisms, and handling of various webhook formats and protocols. The existing subtasks cover the main implementation areas but security and monitoring could be emphasized more."
},
{
"taskId": 45,
"taskTitle": "Implement GitHub Issue Import Feature",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the GitHub issue import feature appear well-structured. Consider if any additional subtasks are needed for handling GitHub API rate limiting, caching, or supporting additional issue metadata.",
"reasoning": "This task involves integrating with the GitHub API to import issues as tasks. The complexity is moderate as it requires API authentication, data mapping, and error handling. The existing 5 subtasks cover the main implementation areas from design to end-to-end implementation."
},
{
"taskId": 46,
"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the ICE analysis command appear comprehensive. Consider if any additional subtasks are needed for visualization of ICE scores or integration with other prioritization methods.",
"reasoning": "This task involves creating an AI-powered analysis system for task prioritization using the ICE methodology. The complexity is high due to the need for sophisticated scoring algorithms, AI integration, and report generation. The existing subtasks cover the main implementation areas from algorithm design to integration with existing systems."
},
{
"taskId": 47,
"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the task suggestion actions card workflow appear well-structured. Consider if any additional subtasks are needed for user testing, accessibility improvements, or performance optimization.",
"reasoning": "This task involves redesigning the UI workflow for task expansion and management. The complexity is moderate as it requires careful UX design and state management but builds on existing components. The 6 existing subtasks cover the main implementation areas from design to testing."
},
{
"taskId": 48,
"taskTitle": "Refactor Prompts into Centralized Structure",
"complexityScore": 4,
"recommendedSubtasks": 3,
"expansionPrompt": "The current 3 subtasks for refactoring prompts into a centralized structure appear appropriate. Consider if any additional subtasks are needed for prompt versioning, documentation, or testing.",
"reasoning": "This task involves a straightforward refactoring to improve code organization. The complexity is relatively low as it primarily involves moving code rather than creating new functionality. The 3 existing subtasks cover the main implementation areas from directory structure to integration."
},
{
"taskId": 49,
"taskTitle": "Implement Code Quality Analysis Command",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the code quality analysis command appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large codebases or integration with existing code quality tools.",
"reasoning": "This task involves creating a sophisticated code analysis system with pattern recognition, best practice verification, and AI-powered recommendations. The complexity is high due to the need for code parsing, complex analysis algorithms, and integration with AI services. The existing subtasks cover the main implementation areas from algorithm design to user interface."
},
{
"taskId": 50,
"taskTitle": "Implement Test Coverage Tracking System by Task",
"complexityScore": 9,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the test coverage tracking system appear well-structured. Consider if any additional subtasks are needed for integration with CI/CD systems, performance optimization, or visualization tools.",
"reasoning": "This task involves creating a complex system that maps test coverage to specific tasks and subtasks. The complexity is very high due to the need for sophisticated data structures, integration with coverage tools, and AI-powered test generation. The existing subtasks are comprehensive and cover the main implementation areas from data structure design to AI integration."
},
{
"taskId": 51,
"taskTitle": "Implement Perplexity Research Command",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the Perplexity research command appear comprehensive. Consider if any additional subtasks are needed for caching optimization, result formatting, or integration with other research tools.",
"reasoning": "This task involves creating a new command that integrates with the Perplexity AI API for research. The complexity is moderate as it requires API integration, context extraction, and result formatting. The 5 existing subtasks cover the main implementation areas from API client to caching system."
},
{
"taskId": 52,
"taskTitle": "Implement Task Suggestion Command for CLI",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the task suggestion command appear well-structured. Consider if any additional subtasks are needed for suggestion quality evaluation, user feedback collection, or integration with existing task workflows.",
"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions using AI. The complexity is moderate as it requires AI integration, context collection, and interactive CLI interfaces. The existing subtasks cover the main implementation areas from data collection to user interface."
},
{
"taskId": 53,
"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the subtask suggestion feature appear comprehensive. Consider if any additional subtasks are needed for suggestion quality metrics, user feedback collection, or performance optimization.",
"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for parent tasks. The complexity is moderate as it builds on existing task management systems but requires sophisticated AI integration and context analysis. The 6 existing subtasks cover the main implementation areas from validation to testing."
},
{
"taskId": 55,
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing positional arguments support appear well-structured. Consider if any additional subtasks are needed for backward compatibility testing, documentation updates, or user experience improvements.",
"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. The complexity is moderate as it requires careful handling of different argument styles and edge cases. The 5 existing subtasks cover the main implementation areas from analysis to documentation."
},
{
"taskId": 57,
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the CLI user experience appear comprehensive. Consider if any additional subtasks are needed for accessibility testing, internationalization, or performance optimization.",
"reasoning": "This task involves a significant overhaul of the CLI interface to improve user experience. The complexity is high due to the breadth of changes (logging, visual elements, interactive components, etc.) and the need for consistent design across all commands. The 6 existing subtasks cover the main implementation areas from log management to help systems."
},
{
"taskId": 60,
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing the mentor system appear well-structured. Consider if any additional subtasks are needed for mentor personality consistency, discussion quality evaluation, or performance optimization with multiple mentors.",
"reasoning": "This task involves creating a sophisticated mentor simulation system with round-table discussions. The complexity is high due to the need for personality simulation, complex LLM integration, and structured discussion management. The 7 existing subtasks cover the main implementation areas from architecture to testing."
},
{
"taskId": 62,
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
"complexityScore": 4,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the --simple flag appear comprehensive. Consider if any additional subtasks are needed for user experience testing or documentation updates.",
"reasoning": "This task involves adding a simple flag option to bypass AI processing for updates. The complexity is relatively low as it primarily involves modifying existing command handlers and adding a flag. The 8 existing subtasks are very detailed and cover all aspects of implementation from command parsing to testing."
},
{
"taskId": 63,
"taskTitle": "Add pnpm Support for the Taskmaster Package",
"complexityScore": 5,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for adding pnpm support appear comprehensive. Consider if any additional subtasks are needed for CI/CD integration, performance comparison, or documentation updates.",
"reasoning": "This task involves ensuring the package works correctly with pnpm as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 8 existing subtasks cover all major aspects from documentation to binary verification."
},
{
"taskId": 64,
"taskTitle": "Add Yarn Support for Taskmaster Installation",
"complexityScore": 5,
"recommendedSubtasks": 9,
"expansionPrompt": "The current 9 subtasks for adding Yarn support appear comprehensive. Consider if any additional subtasks are needed for performance testing, CI/CD integration, or compatibility with different Yarn versions.",
"reasoning": "This task involves ensuring the package works correctly with Yarn as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 9 existing subtasks are very detailed and cover all aspects from configuration to testing."
},
{
"taskId": 65,
"taskTitle": "Add Bun Support for Taskmaster Installation",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for adding Bun support appear well-structured. Consider if any additional subtasks are needed for handling Bun-specific issues, performance testing, or documentation updates.",
"reasoning": "This task involves adding support for the newer Bun package manager. The complexity is slightly higher than the other package manager tasks due to Bun's differences from Node.js and potential compatibility issues. The 6 existing subtasks cover the main implementation areas from research to documentation."
},
{
"taskId": 67,
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing JSON output and Cursor keybindings appear well-structured. Consider if any additional subtasks are needed for testing across different operating systems, documentation updates, or user experience improvements.",
"reasoning": "This task involves two distinct features: adding JSON output to CLI commands and creating a keybindings installation command. The complexity is moderate as it requires careful handling of different output formats and OS-specific file paths. The 5 existing subtasks cover the main implementation areas for both features."
},
{
"taskId": 68,
"taskTitle": "Ability to create tasks without parsing PRD",
"complexityScore": 3,
"recommendedSubtasks": 2,
"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
},
{
"taskId": 72,
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing PDF generation appear comprehensive. Consider if any additional subtasks are needed for handling large projects, additional visualization options, or integration with existing reporting tools.",
"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependency visualization. The complexity is high due to the need for PDF generation, data collection, and visualization integration. The 6 existing subtasks cover the main implementation areas from library selection to export options."
},
{
"taskId": 75,
"taskTitle": "Integrate Google Search Grounding for Research Role",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for integrating Google Search Grounding appear well-structured. Consider if any additional subtasks are needed for testing with different query types, error handling, or performance optimization.",
"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for research roles. The complexity is moderate as it requires careful integration with the existing AI service architecture and conditional logic. The 4 existing subtasks cover the main implementation areas from service layer modification to testing."
},
{
"taskId": 76,
"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for developing the E2E test framework appear comprehensive. Consider if any additional subtasks are needed for test result reporting, CI/CD integration, or performance benchmarking.",
"reasoning": "This task involves creating a sophisticated end-to-end testing framework for the MCP server. The complexity is high due to the need for subprocess management, protocol handling, and robust test case definition. The 7 existing subtasks cover the main implementation areas from architecture to documentation."
},
{
"taskId": 77,
"taskTitle": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
"complexityScore": 7,
"recommendedSubtasks": 18,
"expansionPrompt": "The current 18 subtasks for implementing AI usage telemetry appear very comprehensive. Consider if any additional subtasks are needed for security hardening, privacy compliance, or user feedback collection.",
"reasoning": "This task involves creating a telemetry system to track AI usage metrics. The complexity is high due to the need for secure data transmission, comprehensive data collection, and integration across multiple commands. The 18 existing subtasks are extremely detailed and cover all aspects of implementation from core utility to provider-specific updates."
},
{
"taskId": 80,
"taskTitle": "Implement Unique User ID Generation and Storage During Installation",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing unique user ID generation appear well-structured. Consider if any additional subtasks are needed for privacy compliance, security auditing, or integration with the telemetry system.",
"reasoning": "This task involves generating and storing a unique user identifier during installation. The complexity is relatively low as it primarily involves UUID generation and configuration file management. The 5 existing subtasks cover the main implementation areas from script structure to documentation."
},
{
"taskId": 81,
"taskTitle": "Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the comprehensive local telemetry system appear well-structured. Consider if any additional subtasks are needed for data migration, storage optimization, or visualization tools.",
"reasoning": "This task involves expanding the telemetry system to capture additional metrics and implement local storage with future server integration capability. The complexity is high due to the breadth of data collection, storage requirements, and privacy considerations. The 6 existing subtasks cover the main implementation areas from data collection to user-facing benefits."
},
{
"taskId": 82,
"taskTitle": "Update supported-models.json with token limit fields",
"complexityScore": 3,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on researching accurate token limit values for each model and ensuring backward compatibility.",
"reasoning": "This task involves a simple update to the supported-models.json file to include new token limit fields. The complexity is low as it primarily involves research and data entry. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 83,
"taskTitle": "Update config-manager.js defaults and getters",
"complexityScore": 4,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on updating the DEFAULTS object and related getter functions while maintaining backward compatibility.",
"reasoning": "This task involves updating the config-manager.js module to replace maxTokens with more specific token limit fields. The complexity is relatively low as it primarily involves modifying existing code rather than creating new functionality. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 84,
"taskTitle": "Implement token counting utility",
"complexityScore": 5,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 69,
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the task 'Enhance Analyze Complexity for Specific Task IDs' into 6 subtasks focusing on: 1) Core logic modification to accept ID parameters, 2) Report merging functionality, 3) CLI interface updates, 4) MCP tool integration, 5) Documentation updates, and 6) Comprehensive testing across all components.",
"reasoning": "This task involves modifying existing functionality across multiple components (core logic, CLI, MCP) with complex logic for filtering tasks and merging reports. The implementation requires careful handling of different parameter combinations and edge cases. The task has interdependent components that need to work together seamlessly, and the report merging functionality adds significant complexity."
},
{
"taskId": 70,
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the 'diagram' command implementation into 5 subtasks: 1) Command interface and parameter handling, 2) Task data extraction and transformation to Mermaid syntax, 3) Diagram rendering with status color coding, 4) Output formatting and file export functionality, and 5) Error handling and edge case management.",
"reasoning": "This task requires implementing a new feature rather than modifying existing code, which reduces complexity from integration challenges. However, it involves working with visualization logic, dependency mapping, and multiple output formats. The color coding based on status and handling of dependency relationships adds moderate complexity. The task is well-defined but requires careful attention to diagram formatting and error handling."
},
{
"taskId": 85,
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the update of ai-services-unified.js for dynamic token limits into subtasks such as: (1) Import and integrate the token counting utility, (2) Refactor _unifiedServiceRunner to calculate and enforce dynamic token limits, (3) Update error handling for token limit violations, (4) Add and verify logging for token usage, (5) Write and execute tests for various prompt and model scenarios.",
"reasoning": "This task involves significant code changes to a core function, integration of a new utility, dynamic logic for multiple models, and robust error handling. It also requires comprehensive testing for edge cases and integration, making it moderately complex and best managed by splitting into focused subtasks."
},
{
"taskId": 86,
"taskTitle": "Update .taskmasterconfig schema and user guide",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "Expand this task into subtasks: (1) Draft a migration guide for users, (2) Update user documentation to explain new config fields, (3) Modify schema validation logic in config-manager.js, (4) Test and validate backward compatibility and error messaging.",
"reasoning": "The task spans documentation, schema changes, migration guidance, and validation logic. While not algorithmically complex, it requires careful coordination and thorough testing to ensure a smooth user transition and robust validation."
},
{
"taskId": 87,
"taskTitle": "Implement validation and error handling",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "Decompose this task into: (1) Add validation logic for model and config loading, (2) Implement error handling and fallback mechanisms, (3) Enhance logging and reporting for token usage, (4) Develop helper functions for configuration suggestions and improvements.",
"reasoning": "This task is primarily about adding validation, error handling, and logging. While important for robustness, the logic is straightforward and can be modularized into a few clear subtasks."
},
{
"taskId": 89,
"taskTitle": "Introduce Prioritize Command with Enhanced Priority Levels",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement the prioritize command with all required flags and shorthands, (2) Update CLI output and help documentation for new priority levels, (3) Ensure backward compatibility with existing commands, (4) Add error handling for invalid inputs, (5) Write and run tests for all command scenarios.",
"reasoning": "This CLI feature requires command parsing, updating internal logic for new priority levels, documentation, and robust error handling. The complexity is moderate due to the need for backward compatibility and comprehensive testing."
},
{
"taskId": 90,
"taskTitle": "Implement Subtask Progress Analyzer and Reporting System",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the analyzer implementation into: (1) Design and implement progress tracking logic, (2) Develop status validation and issue detection, (3) Build the reporting system with multiple output formats, (4) Integrate analyzer with the existing task management system, (5) Optimize for performance and scalability, (6) Write unit, integration, and performance tests.",
"reasoning": "This is a complex, multi-faceted feature involving data analysis, reporting, integration, and performance optimization. It touches many parts of the system and requires careful design, making it one of the most complex tasks in the list."
},
{
"taskId": 91,
"taskTitle": "Implement Move Command for Tasks and Subtasks",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement move logic for tasks and subtasks, (2) Handle edge cases (invalid ids, non-existent parents, circular dependencies), (3) Update CLI to support move command with flags, (4) Ensure data integrity and update relationships, (5) Write and execute tests for various move scenarios.",
"reasoning": "Moving tasks and subtasks requires careful handling of hierarchical data, edge cases, and data integrity. The command must be robust and user-friendly, necessitating multiple focused subtasks for safe implementation."
}
]
}


@@ -1,87 +0,0 @@
# Task ID: 45
# Title: Implement GitHub Issue Import Feature
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Add a '--from-github' flag to the add-task command that accepts a GitHub issue URL and automatically generates a corresponding task with relevant details.
# Details:
Implement a new flag '--from-github' for the add-task command that allows users to create tasks directly from GitHub issues. The implementation should:
1. Accept a GitHub issue URL as an argument (e.g., 'taskmaster add-task --from-github https://github.com/owner/repo/issues/123')
2. Parse the URL to extract the repository owner, name, and issue number
3. Use the GitHub API to fetch the issue details including:
- Issue title (to be used as task title)
- Issue description (to be used as task description)
- Issue labels (to be potentially used as tags)
- Issue assignees (for reference)
- Issue status (open/closed)
4. Generate a well-formatted task with this information
5. Include a reference link back to the original GitHub issue
6. Handle authentication for private repositories using GitHub tokens from environment variables or config file
7. Implement proper error handling for:
- Invalid URLs
- Non-existent issues
- API rate limiting
- Authentication failures
- Network issues
8. Allow users to override or supplement the imported details with additional command-line arguments
9. Add appropriate documentation in help text and user guide
# Test Strategy:
Testing should cover the following scenarios:
1. Unit tests:
- Test URL parsing functionality with valid and invalid GitHub issue URLs
- Test GitHub API response parsing with mocked API responses
- Test error handling for various failure cases
2. Integration tests:
- Test with real GitHub public issues (use well-known repositories)
- Test with both open and closed issues
- Test with issues containing various elements (labels, assignees, comments)
3. Error case tests:
- Invalid URL format
- Non-existent repository
- Non-existent issue number
- API rate limit exceeded
- Authentication failures for private repos
4. End-to-end tests:
- Verify that a task created from a GitHub issue contains all expected information
- Verify that the task can be properly managed after creation
- Test the interaction with other flags and commands
Create mock GitHub API responses for testing to avoid hitting rate limits during development and testing. Use environment variables to configure test credentials if needed.
# Subtasks:
## 1. Design GitHub API integration architecture [pending]
### Dependencies: None
### Description: Create a technical design document outlining the architecture for GitHub API integration, including authentication flow, rate limiting considerations, and error handling strategies.
### Details:
Document should include: API endpoints to be used, authentication method (OAuth vs Personal Access Token), data flow diagrams, and security considerations. Research GitHub API rate limits and implement appropriate throttling mechanisms.
## 2. Implement GitHub URL parsing and validation [pending]
### Dependencies: 45.1
### Description: Create a module to parse and validate GitHub issue URLs, extracting repository owner, repository name, and issue number.
### Details:
Handle various GitHub URL formats (e.g., github.com/owner/repo/issues/123, github.com/owner/repo/pull/123). Implement validation to ensure the URL points to a valid issue or pull request. Return structured data with owner, repo, and issue number for valid URLs.
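A minimal sketch of this parsing step (the exported name and return shape are illustrative, not part of the spec):
```javascript
// Matches issue and PR URLs such as https://github.com/owner/repo/issues/123
const ISSUE_URL_RE =
  /^https?:\/\/(?:www\.)?github\.com\/([^/]+)\/([^/]+)\/(issues|pull)\/(\d+)(?:[/?#].*)?$/;

function parseGithubIssueUrl(url) {
  const match = ISSUE_URL_RE.exec(String(url).trim());
  if (!match) return null; // caller treats null as an invalid URL
  const [, owner, repo, kind, number] = match;
  return { owner, repo, type: kind === 'pull' ? 'pull' : 'issue', issueNumber: Number(number) };
}

module.exports = { parseGithubIssueUrl };
```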
## 3. Develop GitHub API client for issue fetching [pending]
### Dependencies: 45.1, 45.2
### Description: Create a service to authenticate with GitHub and fetch issue details using the GitHub REST API.
### Details:
Implement authentication using GitHub Personal Access Tokens or OAuth. Handle API responses, including error cases (rate limiting, authentication failures, not found). Extract relevant issue data: title, description, labels, assignees, and comments.
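A rough sketch of the fetch step against the GitHub REST API (assumes Node 18+ for the global `fetch`; the error mapping shown is illustrative):
```javascript
async function fetchGithubIssue({ owner, repo, issueNumber }, token = process.env.GITHUB_TOKEN) {
  const url = `https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}`;
  const headers = { Accept: 'application/vnd.github+json' };
  if (token) headers.Authorization = `Bearer ${token}`; // required for private repos
  const res = await fetch(url, { headers });
  if (res.status === 404) throw new Error(`Issue not found: ${owner}/${repo}#${issueNumber}`);
  if (res.status === 403 && res.headers.get('x-ratelimit-remaining') === '0') {
    throw new Error('GitHub API rate limit exceeded; retry later or set a token');
  }
  if (!res.ok) throw new Error(`GitHub API error: HTTP ${res.status}`);
  const issue = await res.json();
  return {
    title: issue.title,
    body: issue.body ?? '',
    labels: (issue.labels ?? []).map((l) => (typeof l === 'string' ? l : l.name)),
    assignees: (issue.assignees ?? []).map((a) => a.login),
    state: issue.state,
    htmlUrl: issue.html_url
  };
}
```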
## 4. Create task formatter for GitHub issues [pending]
### Dependencies: 45.3
### Description: Develop a formatter to convert GitHub issue data into the application's task format.
### Details:
Map GitHub issue fields to task fields (title, description, etc.). Convert GitHub markdown to the application's supported format. Handle special GitHub features like issue references and user mentions. Generate appropriate tags based on GitHub labels.
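Continuing the sketch above, the core mapping can stay small (the task field names are assumptions about the application's task format):
```javascript
function githubIssueToTask(issue) {
  return {
    title: issue.title,
    // keep a reference link back to the original issue, per the task details
    description: `${issue.body}\n\nImported from: ${issue.htmlUrl}`,
    tags: issue.labels,
    status: issue.state === 'closed' ? 'done' : 'pending'
  };
}
```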
## 5. Implement end-to-end import flow with UI [pending]
### Dependencies: 45.4
### Description: Create the user interface and workflow for importing GitHub issues, including progress indicators and error handling.
### Details:
Design and implement UI for URL input and import confirmation. Show loading states during API calls. Display meaningful error messages for various failure scenarios. Allow users to review and modify imported task details before saving. Add automated tests for the entire import flow.


@@ -1,34 +0,0 @@
# Task ID: 82
# Title: Update supported-models.json with token limit fields
# Status: pending
# Dependencies: None
# Priority: high
# Description: Modify the supported-models.json file to include contextWindowTokens and maxOutputTokens fields for each model, replacing the ambiguous max_tokens field.
# Details:
For each model entry in supported-models.json:
1. Add `contextWindowTokens` field representing the total context window (input + output tokens)
2. Add `maxOutputTokens` field representing the maximum tokens the model can generate
3. Remove or deprecate the ambiguous `max_tokens` field if present
Research and populate accurate values for each model from official documentation:
- For OpenAI models (e.g., gpt-4o): contextWindowTokens=128000, maxOutputTokens=16384
- For Anthropic models (e.g., Claude 3.7): contextWindowTokens=200000, maxOutputTokens=8192
- For other providers, find official documentation or use reasonable defaults
Example entry:
```json
{
"id": "claude-3-7-sonnet-20250219",
"swe_score": 0.623,
"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
"allowed_roles": ["main", "fallback"],
"contextWindowTokens": 200000,
"maxOutputTokens": 8192
}
```
# Test Strategy:
1. Validate JSON syntax after changes
2. Verify all models have the new fields with reasonable values
3. Check that the values align with official documentation from each provider
4. Ensure backward compatibility by maintaining any fields other systems might depend on


@@ -1,95 +0,0 @@
# Task ID: 83
# Title: Update config-manager.js defaults and getters
# Status: pending
# Dependencies: 82
# Priority: high
# Description: Modify the config-manager.js module to replace maxTokens with maxInputTokens and maxOutputTokens in the DEFAULTS object and update related getter functions.
# Details:
1. Update the `DEFAULTS` object in config-manager.js:
```javascript
const DEFAULTS = {
// ... existing defaults
main: {
// Replace maxTokens with these two fields
maxInputTokens: 16000, // Example default
maxOutputTokens: 4000, // Example default
temperature: 0.7
// ... other fields
},
research: {
maxInputTokens: 16000,
maxOutputTokens: 4000,
temperature: 0.7
// ... other fields
},
fallback: {
maxInputTokens: 8000,
maxOutputTokens: 2000,
temperature: 0.7
// ... other fields
}
// ... rest of DEFAULTS
};
```
2. Update `getParametersForRole` function to return the new fields:
```javascript
function getParametersForRole(role, explicitRoot = null) {
const config = _getConfig(explicitRoot);
return {
maxInputTokens: config[role]?.maxInputTokens,
maxOutputTokens: config[role]?.maxOutputTokens,
temperature: config[role]?.temperature
// ... any other parameters
};
}
```
3. Add a new function to get model capabilities:
```javascript
function getModelCapabilities(providerName, modelId) {
const models = MODEL_MAP[providerName?.toLowerCase()];
const model = models?.find(m => m.id === modelId);
return {
contextWindowTokens: model?.contextWindowTokens,
maxOutputTokens: model?.maxOutputTokens
};
}
```
4. Deprecate or update the role-specific maxTokens getters:
```javascript
// Either remove these or update them to return maxInputTokens
function getMainMaxTokens(explicitRoot = null) {
console.warn('getMainMaxTokens is deprecated. Use getParametersForRole("main") instead.');
return getParametersForRole("main", explicitRoot).maxInputTokens;
}
// Same for getResearchMaxTokens and getFallbackMaxTokens
```
5. Export the new functions:
```javascript
module.exports = {
// ... existing exports
getParametersForRole,
getModelCapabilities
};
```
# Test Strategy:
1. Unit test the updated getParametersForRole function with various configurations
2. Verify the new getModelCapabilities function returns correct values
3. Test with both default and custom configurations
4. Ensure backward compatibility by checking that existing code using the old getters still works (with warnings)
# Subtasks:
## 1. Update config-manager.js with specific token limit fields [pending]
### Dependencies: None
### Description: Modify the DEFAULTS object in config-manager.js to replace maxTokens with more specific token limit fields (maxInputTokens, maxOutputTokens, maxTotalTokens) and update related getter functions while maintaining backward compatibility.
### Details:
1. Replace maxTokens in the DEFAULTS object with maxInputTokens, maxOutputTokens, and maxTotalTokens
2. Update any getter functions that reference maxTokens to handle both old and new configurations
3. Ensure backward compatibility so existing code using maxTokens continues to work
4. Update any related documentation or comments to reflect the new token limit fields
5. Test the changes to verify both new specific token limits and legacy maxTokens usage work correctly


@@ -1,93 +0,0 @@
# Task ID: 84
# Title: Implement token counting utility
# Status: pending
# Dependencies: 82
# Priority: high
# Description: Create a utility function to count tokens for prompts based on the model being used, primarily using tiktoken for OpenAI and Anthropic models with character-based fallbacks for other providers.
# Details:
1. Install the tiktoken package:
```bash
npm install tiktoken
```
2. Create a new file `scripts/modules/token-counter.js`:
```javascript
const tiktoken = require('tiktoken');

/**
 * Count tokens for a given text and model
 * @param {string} text - The text to count tokens for
 * @param {string} provider - The AI provider (e.g., 'openai', 'anthropic')
 * @param {string} modelId - The model ID
 * @returns {number} - Estimated token count
 */
function countTokens(text, provider, modelId) {
  if (!text) return 0;
  // Convert to lowercase for case-insensitive matching
  const providerLower = provider?.toLowerCase();
  try {
    // OpenAI models
    if (providerLower === 'openai') {
      let encoding;
      try {
        encoding = tiktoken.encoding_for_model(modelId);
      } catch (e) {
        // encoding_for_model throws on unknown models; most OpenAI
        // chat models use cl100k_base, so fall back to it explicitly
        encoding = tiktoken.get_encoding('cl100k_base');
      }
      const count = encoding.encode(text).length;
      encoding.free(); // tiktoken encodings are WASM-backed and must be freed
      return count;
    }
    // Anthropic models - cl100k_base is a reasonable approximation
    if (providerLower === 'anthropic') {
      try {
        const encoding = tiktoken.get_encoding('cl100k_base');
        const count = encoding.encode(text).length;
        encoding.free();
        return count;
      } catch (e) {
        // Fallback to Anthropic's character-based estimation
        return Math.ceil(text.length / 3.5); // ~3.5 chars per token for English
      }
    }
    // For other providers, use character-based estimation as fallback;
    // different providers may have different tokenization schemes
    return Math.ceil(text.length / 4); // General fallback estimate
  } catch (error) {
    console.warn(`Token counting error: ${error.message}. Using character-based estimate.`);
    return Math.ceil(text.length / 4); // Fallback if tiktoken fails
  }
}

module.exports = { countTokens };
```
3. Add tests for the token counter in `tests/token-counter.test.js`:
```javascript
const { countTokens } = require('../scripts/modules/token-counter');

describe('Token Counter', () => {
  test('counts tokens for OpenAI models', () => {
    const text = 'Hello, world! This is a test.';
    const count = countTokens(text, 'openai', 'gpt-4');
    expect(count).toBeGreaterThan(0);
    expect(typeof count).toBe('number');
  });

  test('counts tokens for Anthropic models', () => {
    const text = 'Hello, world! This is a test.';
    const count = countTokens(text, 'anthropic', 'claude-3-7-sonnet-20250219');
    expect(count).toBeGreaterThan(0);
    expect(typeof count).toBe('number');
  });

  test('handles empty text', () => {
    expect(countTokens('', 'openai', 'gpt-4')).toBe(0);
    expect(countTokens(null, 'openai', 'gpt-4')).toBe(0);
  });
});
```
# Test Strategy:
1. Unit test the countTokens function with various inputs and models
2. Compare token counts with known examples from OpenAI and Anthropic documentation
3. Test edge cases: empty strings, very long texts, non-English texts
4. Test fallback behavior when tiktoken fails or is not applicable


@@ -1,104 +0,0 @@
# Task ID: 85
# Title: Update ai-services-unified.js for dynamic token limits
# Status: pending
# Dependencies: 83, 84
# Priority: medium
# Description: Modify the _unifiedServiceRunner function in ai-services-unified.js to use the new token counting utility and dynamically adjust output token limits based on input length.
# Details:
1. Import the token counter in `ai-services-unified.js`:
```javascript
const { countTokens } = require('./token-counter');
const { getParametersForRole, getModelCapabilities } = require('./config-manager');
```
2. Update the `_unifiedServiceRunner` function to implement dynamic token limit adjustment:
```javascript
async function _unifiedServiceRunner({
serviceType,
provider,
modelId,
systemPrompt,
prompt,
temperature,
currentRole,
effectiveProjectRoot,
// ... other parameters
}) {
// Get role parameters with new token limits
const roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
// Get model capabilities
const modelCapabilities = getModelCapabilities(provider, modelId);
// Count tokens in the prompts
const systemPromptTokens = countTokens(systemPrompt, provider, modelId);
const userPromptTokens = countTokens(prompt, provider, modelId);
const totalPromptTokens = systemPromptTokens + userPromptTokens;
// Validate against input token limits
if (totalPromptTokens > roleParams.maxInputTokens) {
throw new Error(
`Prompt (${totalPromptTokens} tokens) exceeds configured max input tokens (${roleParams.maxInputTokens}) for role '${currentRole}'.`
);
}
// Validate against model's absolute context window
if (modelCapabilities.contextWindowTokens && totalPromptTokens > modelCapabilities.contextWindowTokens) {
throw new Error(
`Prompt (${totalPromptTokens} tokens) exceeds model's context window (${modelCapabilities.contextWindowTokens}) for ${modelId}.`
);
}
// Calculate available output tokens
// If model has a combined context window, we need to subtract input tokens
let availableOutputTokens = roleParams.maxOutputTokens;
// If model has a context window constraint, ensure we don't exceed it
if (modelCapabilities.contextWindowTokens) {
const remainingContextTokens = modelCapabilities.contextWindowTokens - totalPromptTokens;
availableOutputTokens = Math.min(availableOutputTokens, remainingContextTokens);
}
// Also respect the model's absolute max output limit
if (modelCapabilities.maxOutputTokens) {
availableOutputTokens = Math.min(availableOutputTokens, modelCapabilities.maxOutputTokens);
}
// Guard against a prompt that consumes the entire context window
if (availableOutputTokens <= 0) {
throw new Error(
`Prompt (${totalPromptTokens} tokens) leaves no room for output within the model's context window for ${modelId}.`
);
}
// Prepare API call parameters (apiKey, messages, baseUrl, schema, objectName,
// and restApiParams come from the elided parts of the function)
const callParams = {
apiKey,
modelId,
maxTokens: availableOutputTokens, // Use dynamically calculated output limit
temperature: roleParams.temperature,
messages,
baseUrl,
...(serviceType === 'generateObject' && { schema, objectName }),
...restApiParams
};
// Log token usage information
console.debug(`Token usage: ${totalPromptTokens} input tokens, ${availableOutputTokens} max output tokens`);
// Rest of the function remains the same...
}
```
3. Update the error handling to provide clear messages about token limits:
```javascript
try {
// Existing code...
} catch (error) {
if (error.message.includes('tokens')) {
// Token-related errors should be clearly identified
console.error(`Token limit error: ${error.message}`);
}
throw error;
}
```
# Test Strategy:
1. Test with prompts of various lengths to verify dynamic adjustment
2. Test with different models to ensure model-specific limits are respected
3. Verify error messages are clear when limits are exceeded
4. Test edge cases: very short prompts, prompts near the limit
5. Integration test with actual API calls to verify the calculated limits work in practice


@@ -1,107 +0,0 @@
# Task ID: 86
# Title: Update .taskmasterconfig schema and user guide
# Status: pending
# Dependencies: 83
# Priority: medium
# Description: Create a migration guide for users to update their .taskmasterconfig files and document the new token limit configuration options.
# Details:
1. Create a migration script or guide for users to update their existing `.taskmasterconfig` files:
```javascript
// Example migration snippet for .taskmasterconfig
{
"main": {
// Before:
// "maxTokens": 16000,
// After:
"maxInputTokens": 16000,
"maxOutputTokens": 4000,
"temperature": 0.7
},
"research": {
"maxInputTokens": 16000,
"maxOutputTokens": 4000,
"temperature": 0.7
},
"fallback": {
"maxInputTokens": 8000,
"maxOutputTokens": 2000,
"temperature": 0.7
}
}
```
2. Update the user documentation to explain the new token limit fields:
````markdown
# Token Limit Configuration
Task Master now provides more granular control over token limits with separate settings for input and output tokens:
- `maxInputTokens`: Maximum number of tokens allowed in the input prompt (system prompt + user prompt)
- `maxOutputTokens`: Maximum number of tokens the model should generate in its response
## Benefits
- More precise control over token usage
- Better cost management
- Reduced likelihood of hitting model context limits
- Dynamic adjustment to maximize output space based on input length
## Migration from Previous Versions
If you're upgrading from a previous version, you'll need to update your `.taskmasterconfig` file:
1. Replace the single `maxTokens` field with separate `maxInputTokens` and `maxOutputTokens` fields
2. Recommended starting values:
- Set `maxInputTokens` to your previous `maxTokens` value
- Set `maxOutputTokens` to approximately 1/4 of your model's context window
## Example Configuration
```json
{
"main": {
"maxInputTokens": 16000,
"maxOutputTokens": 4000,
"temperature": 0.7
}
}
```
````
3. Update the schema validation in `config-manager.js` to validate the new fields:
```javascript
function _validateConfig(config) {
// ... existing validation
// Validate token limits for each role
['main', 'research', 'fallback'].forEach(role => {
if (config[role]) {
// Check if old maxTokens is present and warn about migration
if (config[role].maxTokens !== undefined) {
console.warn(`Warning: 'maxTokens' in ${role} role is deprecated. Please use 'maxInputTokens' and 'maxOutputTokens' instead.`);
}
// Validate new token limit fields
if (config[role].maxInputTokens !== undefined && (!Number.isInteger(config[role].maxInputTokens) || config[role].maxInputTokens <= 0)) {
throw new Error(`Invalid maxInputTokens for ${role} role: must be a positive integer`);
}
if (config[role].maxOutputTokens !== undefined && (!Number.isInteger(config[role].maxOutputTokens) || config[role].maxOutputTokens <= 0)) {
throw new Error(`Invalid maxOutputTokens for ${role} role: must be a positive integer`);
}
}
});
return config;
}
```
# Test Strategy:
1. Verify documentation is clear and provides migration steps
2. Test the validation logic with various config formats
3. Test backward compatibility with old config format
4. Ensure error messages are helpful when validation fails


@@ -1,119 +0,0 @@
# Task ID: 87
# Title: Implement validation and error handling
# Status: pending
# Dependencies: 85
# Priority: low
# Description: Add comprehensive validation and error handling for token limits throughout the system, including helpful error messages and graceful fallbacks.
# Details:
1. Add validation when loading models in `config-manager.js`:
```javascript
function _validateModelMap(modelMap) {
// Validate each provider's models
Object.entries(modelMap).forEach(([provider, models]) => {
models.forEach(model => {
// Check for required token limit fields
if (!model.contextWindowTokens) {
console.warn(`Warning: Model ${model.id} from ${provider} is missing contextWindowTokens field`);
}
if (!model.maxOutputTokens) {
console.warn(`Warning: Model ${model.id} from ${provider} is missing maxOutputTokens field`);
}
});
});
return modelMap;
}
```
2. Add validation when setting up a model in the CLI:
```javascript
function validateModelConfig(modelConfig, modelCapabilities) {
const issues = [];
// Check if input tokens exceed model's context window
if (modelConfig.maxInputTokens > modelCapabilities.contextWindowTokens) {
issues.push(`maxInputTokens (${modelConfig.maxInputTokens}) exceeds model's context window (${modelCapabilities.contextWindowTokens})`);
}
// Check if output tokens exceed model's maximum
if (modelConfig.maxOutputTokens > modelCapabilities.maxOutputTokens) {
issues.push(`maxOutputTokens (${modelConfig.maxOutputTokens}) exceeds model's maximum output tokens (${modelCapabilities.maxOutputTokens})`);
}
// Check if combined tokens exceed context window
if (modelConfig.maxInputTokens + modelConfig.maxOutputTokens > modelCapabilities.contextWindowTokens) {
issues.push(`Combined maxInputTokens and maxOutputTokens (${modelConfig.maxInputTokens + modelConfig.maxOutputTokens}) exceeds model's context window (${modelCapabilities.contextWindowTokens})`);
}
return issues;
}
```
3. Add graceful fallbacks in `ai-services-unified.js`:
```javascript
// Fallback for missing token limits
if (!roleParams.maxInputTokens) {
console.warn(`Warning: maxInputTokens not specified for role '${currentRole}'. Using default value.`);
roleParams.maxInputTokens = 8000; // Reasonable default
}
if (!roleParams.maxOutputTokens) {
console.warn(`Warning: maxOutputTokens not specified for role '${currentRole}'. Using default value.`);
roleParams.maxOutputTokens = 2000; // Reasonable default
}
// Fallback for missing model capabilities
if (!modelCapabilities.contextWindowTokens) {
console.warn(`Warning: contextWindowTokens not specified for model ${modelId}. Using conservative estimate.`);
modelCapabilities.contextWindowTokens = roleParams.maxInputTokens + roleParams.maxOutputTokens;
}
if (!modelCapabilities.maxOutputTokens) {
console.warn(`Warning: maxOutputTokens not specified for model ${modelId}. Using role configuration.`);
modelCapabilities.maxOutputTokens = roleParams.maxOutputTokens;
}
```
4. Add detailed logging for token usage:
```javascript
// availableOutputTokens is passed in as a parameter rather than referenced
// from an outer scope, so the function is self-contained
function logTokenUsage(provider, modelId, inputTokens, outputTokens, availableOutputTokens, role) {
  const inputCost = calculateTokenCost(provider, modelId, 'input', inputTokens);
  const outputCost = calculateTokenCost(provider, modelId, 'output', outputTokens);
  console.info(`Token usage for ${role} role with ${provider}/${modelId}:`);
  console.info(`- Input: ${inputTokens.toLocaleString()} tokens ($${inputCost.toFixed(6)})`);
  console.info(`- Output: ${outputTokens.toLocaleString()} tokens ($${outputCost.toFixed(6)})`);
  console.info(`- Total cost: $${(inputCost + outputCost).toFixed(6)}`);
  console.info(`- Available output tokens: ${availableOutputTokens.toLocaleString()}`);
}
```
5. Add a helper function to suggest configuration improvements:
```javascript
function suggestTokenConfigImprovements(roleParams, modelCapabilities, promptTokens) {
const suggestions = [];
// If prompt is using less than 50% of allowed input
if (promptTokens < roleParams.maxInputTokens * 0.5) {
suggestions.push(`Consider reducing maxInputTokens from ${roleParams.maxInputTokens} to save on potential costs`);
}
// If output tokens are very limited due to large input
const availableOutput = Math.min(
roleParams.maxOutputTokens,
modelCapabilities.contextWindowTokens - promptTokens
);
if (availableOutput < roleParams.maxOutputTokens * 0.5) {
suggestions.push(`Available output tokens (${availableOutput}) are significantly less than configured maxOutputTokens (${roleParams.maxOutputTokens}) due to large input`);
}
return suggestions;
}
```
# Test Strategy:
1. Test validation functions with valid and invalid configurations
2. Verify fallback behavior works correctly when configuration is missing
3. Test error messages are clear and actionable
4. Test logging functions provide useful information
5. Verify suggestion logic provides helpful recommendations


@@ -1,57 +0,0 @@
# Task ID: 88
# Title: Enhance Add-Task Functionality to Consider All Task Dependencies
# Status: done
# Dependencies: None
# Priority: medium
# Description: Improve the add-task feature to accurately account for all dependencies among tasks, ensuring proper task ordering and execution.
# Details:
1. Review current implementation of add-task functionality.
2. Identify existing mechanisms for handling task dependencies.
3. Modify add-task to recursively analyze and incorporate all dependencies.
4. Ensure that dependencies are resolved in the correct order during task execution.
5. Update documentation to reflect changes in dependency handling.
6. Consider edge cases such as circular dependencies and handle them appropriately.
7. Optimize performance to ensure efficient dependency resolution, especially for projects with a large number of tasks.
8. Integrate with existing validation and error handling mechanisms (from Task 87) to provide clear feedback if dependencies cannot be resolved.
9. Test thoroughly with various dependency scenarios to ensure robustness.
# Test Strategy:
1. Create test cases with simple linear dependencies to verify correct ordering.
2. Develop test cases with complex, nested dependencies to ensure recursive resolution works correctly.
3. Include tests for edge cases such as circular dependencies, verifying appropriate error messages are displayed.
4. Measure performance with large sets of tasks and dependencies to ensure efficiency.
5. Conduct integration testing with other components that rely on task dependencies.
6. Perform manual code reviews to validate implementation against requirements.
7. Execute automated tests to verify no regressions in existing functionality.
# Subtasks:
## 1. Review Current Add-Task Implementation and Identify Dependency Mechanisms [done]
### Dependencies: None
### Description: Examine the existing add-task functionality to understand how task dependencies are currently handled.
### Details:
Conduct a code review of the add-task feature. Document any existing mechanisms for handling task dependencies.
## 2. Modify Add-Task to Recursively Analyze Dependencies [done]
### Dependencies: 88.1
### Description: Update the add-task functionality to recursively analyze and incorporate all task dependencies.
### Details:
Implement a recursive algorithm that identifies and incorporates all dependencies for a given task. Ensure it handles nested dependencies correctly.
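A minimal sketch of a cycle-safe collector, assuming each task carries a `dependencies` array of ids; it returns the transitive closure including the starting id:
```javascript
function collectDependencies(tasks, taskId, seen = new Set()) {
  if (seen.has(taskId)) return seen; // already visited; this also breaks cycles
  seen.add(taskId);
  const task = tasks.find((t) => t.id === taskId);
  for (const depId of task?.dependencies ?? []) {
    collectDependencies(tasks, depId, seen);
  }
  return seen;
}
```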
## 3. Ensure Correct Order of Dependency Resolution [done]
### Dependencies: 88.2
### Description: Modify the add-task functionality to ensure that dependencies are resolved in the correct order during task execution.
### Details:
Implement logic to sort and execute tasks based on their dependency order. Handle cases where multiple tasks depend on each other.
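One way to get a safe execution order is Kahn's algorithm, sketched here under the same assumed task shape; it also surfaces circular dependencies naturally:
```javascript
function sortByDependencies(tasks) {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  // indegree = number of (known) dependencies each task still waits on
  const indegree = new Map(tasks.map((t) => [t.id, 0]));
  for (const t of tasks) {
    for (const dep of t.dependencies ?? []) {
      if (byId.has(dep)) indegree.set(t.id, indegree.get(t.id) + 1);
    }
  }
  const queue = tasks.filter((t) => indegree.get(t.id) === 0).map((t) => t.id);
  const ordered = [];
  while (queue.length > 0) {
    const id = queue.shift();
    ordered.push(byId.get(id));
    for (const t of tasks) {
      if ((t.dependencies ?? []).includes(id)) {
        indegree.set(t.id, indegree.get(t.id) - 1);
        if (indegree.get(t.id) === 0) queue.push(t.id);
      }
    }
  }
  if (ordered.length !== tasks.length) throw new Error('Circular dependency detected');
  return ordered;
}
```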
## 4. Integrate with Existing Validation and Error Handling [done]
### Dependencies: 88.3
### Description: Update the add-task functionality to integrate with existing validation and error handling mechanisms (from Task 87).
### Details:
Modify the code to provide clear feedback if dependencies cannot be resolved. Ensure that circular dependencies are detected and handled appropriately.
## 5. Optimize Performance for Large Projects [done]
### Dependencies: 88.4
### Description: Optimize the add-task functionality to ensure efficient dependency resolution, especially for projects with a large number of tasks.
### Details:
Profile and optimize the recursive dependency analysis algorithm. Implement caching or other performance improvements as needed.


@@ -1,67 +0,0 @@
# Task ID: 90
# Title: Implement Subtask Progress Analyzer and Reporting System
# Status: pending
# Dependencies: 1, 3
# Priority: medium
# Description: Develop a subtask analyzer that monitors the progress of all subtasks, validates their status, and generates comprehensive reports for users to track project advancement.
# Details:
The subtask analyzer should be implemented with the following components and considerations:
1. Progress Tracking Mechanism:
- Create a function to scan the task data structure and identify all tasks with subtasks
- Implement logic to determine the completion status of each subtask
- Calculate overall progress percentages for tasks with multiple subtasks (a short sketch follows these details)
2. Status Validation:
- Develop validation rules to check if subtasks are progressing according to expected timelines
- Implement detection for stalled or blocked subtasks
- Create alerts for subtasks that are behind schedule or have dependency issues
3. Reporting System:
- Design a structured report format that clearly presents:
- Overall project progress
- Task-by-task breakdown with subtask status
- Highlighted issues or blockers
- Support multiple output formats (console, JSON, exportable text)
- Include visual indicators for progress (e.g., progress bars in CLI)
4. Integration Points:
- Hook into the existing task management system
- Ensure the analyzer can be triggered via CLI commands
- Make the reporting feature accessible through the main command interface
5. Performance Considerations:
- Optimize for large task lists with many subtasks
- Implement caching if necessary to avoid redundant calculations
- Ensure reports generate quickly even for complex project structures
The implementation should follow the existing code style and patterns, leveraging the task data structure already in place. The analyzer should be non-intrusive to existing functionality while providing valuable insights to users.
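As a rough illustration of the progress-tracking and visual-indicator pieces above (assuming each subtask has a `status` field where `'done'` means complete):
```javascript
function taskProgress(task) {
  const subs = task.subtasks ?? [];
  if (subs.length === 0) return task.status === 'done' ? 1 : 0;
  return subs.filter((s) => s.status === 'done').length / subs.length;
}

function renderProgressBar(fraction, width = 20) {
  const filled = Math.round(fraction * width);
  return `[${'#'.repeat(filled)}${'-'.repeat(width - filled)}] ${Math.round(fraction * 100)}%`;
}

// e.g. renderProgressBar(taskProgress(task)) -> "[##########----------] 50%"
```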
# Test Strategy:
Testing for the subtask analyzer should include:
1. Unit Tests:
- Test the progress calculation logic with various task/subtask configurations
- Verify status validation correctly identifies issues in different scenarios
- Ensure report generation produces consistent and accurate output
- Test edge cases (empty subtasks, all complete, all incomplete, mixed states)
2. Integration Tests:
- Verify the analyzer correctly integrates with the existing task data structure
- Test CLI command integration and parameter handling
- Ensure reports reflect actual changes to task/subtask status
3. Performance Tests:
- Benchmark report generation with large task sets (100+ tasks with multiple subtasks)
- Verify memory usage remains reasonable during analysis
4. User Acceptance Testing:
- Create sample projects with various subtask configurations
- Generate reports and verify they provide clear, actionable information
- Confirm visual indicators accurately represent progress
5. Regression Testing:
- Verify that the analyzer doesn't interfere with existing task management functionality
- Ensure backward compatibility with existing task data structures
Documentation should be updated to include examples of how to use the new analyzer and interpret the reports. Success criteria include accurate progress tracking, clear reporting, and performance that scales with project size.


@@ -1,49 +0,0 @@
# Task ID: 91
# Title: Implement Move Command for Tasks and Subtasks
# Status: done
# Dependencies: 1, 3
# Priority: medium
# Description: Introduce a 'move' command to enable moving tasks or subtasks to a different id, facilitating conflict resolution by allowing teams to assign new ids as needed.
# Details:
The move command will consist of three core components: 1) Core Logic Function in scripts/modules/task-manager/move-task.js, 2) Direct Function Wrapper in mcp-server/src/core/direct-functions/move-task.js, and 3) MCP Tool in mcp-server/src/tools/move-task.js. The command will accept source and destination IDs, handling various scenarios including moving tasks to become subtasks, subtasks to become tasks, and subtasks between different parents. The implementation will handle edge cases such as invalid ids, non-existent parents, circular dependencies, and will properly update all dependencies.
# Test Strategy:
Testing will follow a three-tier approach: 1) Unit tests for core functionality including moving tasks to subtasks, subtasks to tasks, subtasks between parents, dependency handling, and validation error cases; 2) Integration tests for the direct function with mock MCP environment and task file regeneration; 3) End-to-end tests for the full MCP tool call path. This will verify all scenarios including moving a task to a new id, moving a subtask under a different parent while preserving its hierarchy, and handling errors for invalid operations.
# Subtasks:
## 1. Design and implement core move logic [done]
### Dependencies: None
### Description: Create the fundamental logic for moving tasks and subtasks within the task management system hierarchy
### Details:
Implement the core logic function in scripts/modules/task-manager/move-task.js with the signature that accepts tasksPath, sourceId, destinationId, and generateFiles parameters. Develop functions to handle all movement operations including task-to-subtask, subtask-to-task, and subtask-to-subtask conversions. Implement validation for source and destination IDs, and ensure proper updating of parent-child relationships and dependencies.
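A small sketch of the id-resolution step this logic needs, using the dotted-id convention already used in these task files ('5.2' addresses subtask 2 of task 5); the data shape is an assumption:
```javascript
function resolveId(data, id) {
  const [taskId, subtaskId] = String(id).split('.').map(Number);
  const task = data.tasks.find((t) => t.id === taskId);
  if (!task) return null; // invalid source/destination id
  if (subtaskId === undefined) return { kind: 'task', task };
  const subtask = (task.subtasks ?? []).find((s) => s.id === subtaskId);
  return subtask ? { kind: 'subtask', parent: task, subtask } : null;
}
```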
## 2. Implement edge case handling [done]
### Dependencies: 91.1
### Description: Develop robust error handling for all potential edge cases in the move operation
### Details:
Create validation functions to detect invalid task IDs, non-existent parent tasks, and circular dependencies. Handle special cases such as moving a task to become the first/last subtask, reordering within the same parent, preventing moving a task to itself, and preventing moving a parent to its own subtask. Implement proper error messages and status codes for each edge case, and ensure system stability if a move operation fails.
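One way to detect the parent-to-own-subtask case is to walk the ancestor chain of the destination; a sketch, assuming a hypothetical `getParentId` helper over the task data:

```js
// Hypothetical cycle check: a move is illegal if the destination sits
// underneath the source (or is the source itself).
function wouldCreateCycle(tasks, sourceId, destinationId) {
  let current = destinationId;
  while (current != null) {
    if (current === sourceId) return true;
    current = getParentId(tasks, current); // assumed helper: returns parent ID or null
  }
  return false;
}
```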
## 3. Update CLI interface for move commands [done]
### Dependencies: 91.1
### Description: Extend the command-line interface to support the new move functionality with appropriate flags and options
### Details:
Create the Direct Function Wrapper in mcp-server/src/core/direct-functions/move-task.js to adapt the core logic for MCP, handling path resolution and parameter validation. Implement silent mode to prevent console output interfering with JSON responses. Create the MCP Tool in mcp-server/src/tools/move-task.js that exposes the functionality to Cursor, handles project root resolution, and includes proper Zod parameter definitions. Update MCP tool definition in .cursor/mcp.json and register the tool in mcp-server/src/tools/index.js.
## 4. Ensure data integrity during moves [done]
### Dependencies: 91.1, 91.2
### Description: Implement safeguards to maintain data consistency and update all relationships during move operations
### Details:
Implement dependency handling logic to update dependencies when converting between task/subtask, add appropriate parent dependencies when needed, and validate no circular dependencies are created. Create transaction-like operations to ensure atomic moves that either complete fully or roll back. Implement functions to update all affected task relationships after a move, and add verification steps to confirm data integrity post-move.
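A sketch of one transaction-like approach — snapshot the tasks file, attempt the mutation, and restore the snapshot on failure; the names here are illustrative:

```js
const fs = require('fs');

// Snapshot/rollback wrapper: the move either completes fully or the
// original tasks.json contents are restored.
async function withRollback(tasksPath, mutate) {
  const snapshot = await fs.promises.readFile(tasksPath, 'utf8');
  try {
    return await mutate();
  } catch (err) {
    await fs.promises.writeFile(tasksPath, snapshot);
    throw err;
  }
}
```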
## 5. Create comprehensive test suite [done]
### Dependencies: 91.1, 91.2, 91.3, 91.4
### Description: Develop and execute tests covering all move scenarios and edge cases
### Details:
Create unit tests for core functionality including moving tasks to subtasks, subtasks to tasks, subtasks between parents, dependency handling, and validation error cases. Implement integration tests for the direct function with mock MCP environment and task file regeneration. Develop end-to-end tests for the full MCP tool call path. Ensure tests cover all identified edge cases and potential failure points, and verify data integrity after moves.
## 6. Export and integrate the move function [done]
### Dependencies: 91.1
### Description: Ensure the move function is properly exported and integrated with existing code
### Details:
Export the move function in scripts/modules/task-manager.js. Update task-master-core.js to include the direct function. Reuse validation logic from add-subtask.js and remove-subtask.js where appropriate. Follow silent mode implementation pattern from other direct functions and match parameter naming conventions in MCP tools.

View File

@@ -1,94 +0,0 @@
# Task ID: 92
# Title: Implement Project Root Environment Variable Support in MCP Configuration
# Status: review
# Dependencies: 1, 3, 17
# Priority: medium
# Description: Add support for a 'TASK_MASTER_PROJECT_ROOT' environment variable in MCP configuration, allowing it to be set in both mcp.json and .env, with precedence over other methods. This will define the root directory for the MCP server and take precedence over all other project root resolution methods. The implementation should be backward compatible with existing workflows that don't use this variable.
# Details:
Update the MCP server configuration system to support the TASK_MASTER_PROJECT_ROOT environment variable as the standard way to specify the project root directory. This provides better namespacing and avoids conflicts with other tools that might use a generic PROJECT_ROOT variable. Implement a clear precedence order for project root resolution:
1. TASK_MASTER_PROJECT_ROOT environment variable (from shell or .env file)
2. 'projectRoot' key in mcp_config.toml or mcp.json configuration files
3. Existing resolution logic (CLI args, current working directory, etc.)
Modify the configuration loading logic to check for these sources in the specified order, ensuring backward compatibility. All MCP tools and components should use this standardized project root resolution logic. The TASK_MASTER_PROJECT_ROOT environment variable will be required because path resolution is delegated to the MCP client implementation, ensuring consistent behavior across different environments.
Implementation steps:
1. Identify all code locations where project root is determined (initialization, utility functions)
2. Update configuration loaders to check for TASK_MASTER_PROJECT_ROOT in environment variables
3. Add support for 'projectRoot' in configuration files as a fallback
4. Refactor project root resolution logic to follow the new precedence rules
5. Ensure all MCP tools and functions use the updated resolution logic
6. Add comprehensive error handling for cases where TASK_MASTER_PROJECT_ROOT is not set or invalid
7. Implement validation to ensure the specified directory exists and is accessible
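A compact sketch of the precedence order above; the function name and config shape are assumptions for illustration:

```js
// Resolve the project root in the documented order of precedence.
function resolveProjectRoot({ cliArg, fileConfig } = {}) {
  if (process.env.TASK_MASTER_PROJECT_ROOT) {
    return process.env.TASK_MASTER_PROJECT_ROOT; // 1. environment variable
  }
  if (fileConfig && fileConfig.projectRoot) {
    return fileConfig.projectRoot; // 2. 'projectRoot' in mcp_config.toml / mcp.json
  }
  return cliArg || process.cwd(); // 3. existing resolution logic
}
```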
# Test Strategy:
1. Write unit tests to verify that the config loader correctly reads project root from environment variables and configuration files with the expected precedence:
- Test TASK_MASTER_PROJECT_ROOT environment variable takes precedence when set
- Test 'projectRoot' in configuration files is used when environment variable is absent
- Test fallback to existing resolution logic when neither is specified
2. Add integration tests to ensure that the MCP server and all tools use the correct project root:
- Test server startup with TASK_MASTER_PROJECT_ROOT set to various valid and invalid paths
- Test configuration file loading from the specified project root
- Test path resolution for resources relative to the project root
3. Test backward compatibility:
- Verify existing workflows function correctly without the new variables
- Ensure no regression in projects not using the new configuration options
4. Manual testing:
- Set TASK_MASTER_PROJECT_ROOT in shell environment and verify correct behavior
- Set TASK_MASTER_PROJECT_ROOT in .env file and verify it's properly loaded
- Configure 'projectRoot' in configuration files and test precedence
- Test with invalid or non-existent directories to verify error handling
# Subtasks:
## 1. Update configuration loader to check for TASK_MASTER_PROJECT_ROOT environment variable [pending]
### Dependencies: None
### Description: Modify the configuration loading system to check for the TASK_MASTER_PROJECT_ROOT environment variable as the primary source for project root directory. Ensure proper error handling if the variable is set but points to a non-existent or inaccessible directory.
### Details:
## 2. Add support for 'projectRoot' in configuration files [pending]
### Dependencies: None
### Description: Implement support for a 'projectRoot' key in mcp_config.toml and mcp.json configuration files as a fallback when the environment variable is not set. Update the configuration parser to recognize and validate this field.
### Details:
## 3. Refactor project root resolution logic with clear precedence rules [pending]
### Dependencies: None
### Description: Create a unified project root resolution function that follows the precedence order: 1) TASK_MASTER_PROJECT_ROOT environment variable, 2) 'projectRoot' in config files, 3) existing resolution methods. Ensure this function is used consistently throughout the codebase.
### Details:
## 4. Update all MCP tools to use the new project root resolution [pending]
### Dependencies: None
### Description: Identify all MCP tools and components that need to access the project root and update them to use the new resolution logic. Ensure consistent behavior across all parts of the system.
### Details:
## 5. Add comprehensive tests for the new project root resolution [pending]
### Dependencies: None
### Description: Create unit and integration tests to verify the correct behavior of the project root resolution logic under various configurations and edge cases.
### Details:
## 6. Update documentation with new configuration options [pending]
### Dependencies: None
### Description: Update the project documentation to clearly explain the new TASK_MASTER_PROJECT_ROOT environment variable, the 'projectRoot' configuration option, and the precedence rules. Include examples of different configuration scenarios.
### Details:
## 7. Implement validation for project root directory [pending]
### Dependencies: None
### Description: Add validation to ensure the specified project root directory exists and has the necessary permissions. Provide clear error messages when validation fails.
### Details:
## 8. Implement support for loading environment variables from .env files [pending]
### Dependencies: None
### Description: Add functionality to load the TASK_MASTER_PROJECT_ROOT variable from .env files in the workspace, following best practices for environment variable management in MCP servers.
### Details:
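A minimal sketch, assuming the widely used dotenv package is acceptable here:

```js
// Load .env from the workspace so TASK_MASTER_PROJECT_ROOT set there
// behaves the same as a shell-level environment variable.
const path = require('path');
const dotenv = require('dotenv');

dotenv.config({ path: path.join(process.cwd(), '.env') });
const projectRoot = process.env.TASK_MASTER_PROJECT_ROOT; // undefined if unset
```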

View File

@@ -1,149 +0,0 @@
# Task ID: 95
# Title: Implement .taskmaster Directory Structure
# Status: done
# Dependencies: 1, 3, 4, 17
# Priority: high
# Description: Consolidate all Task Master-managed files in user projects into a clean, centralized .taskmaster/ directory structure to improve organization and keep user project directories clean, based on GitHub issue #275.
# Details:
This task involves restructuring how Task Master organizes files within user projects to improve maintainability and keep project directories clean:
1. Create a new `.taskmaster/` directory structure in user projects:
- Move task files from `tasks/` to `.taskmaster/tasks/`
- Move PRD files from `scripts/` to `.taskmaster/docs/`
- Move analysis reports to `.taskmaster/reports/`
- Move configuration from `.taskmasterconfig` to `.taskmaster/config.json`
- Create `.taskmaster/templates/` for user templates
2. Update all Task Master code that creates/reads user files:
- Modify task file generation to use `.taskmaster/tasks/`
- Update PRD file handling to use `.taskmaster/docs/`
- Adjust report generation to save to `.taskmaster/reports/`
- Update configuration loading to look for `.taskmaster/config.json`
- Modify any path resolution logic in Task Master's codebase
3. Ensure backward compatibility during migration:
- Implement path fallback logic that checks both old and new locations
- Add deprecation warnings when old paths are detected
- Create a migration command to help users transition to the new structure
- Preserve existing user data during migration
4. Update the project initialization process:
- Modify the init command to create the new `.taskmaster/` directory structure
- Update default file creation to use new paths
5. Benefits of the new structure:
- Keeps user project directories clean and organized
- Clearly separates Task Master files from user project files
- Makes it easier to add Task Master to .gitignore if desired
- Provides logical grouping of different file types
6. Test thoroughly to ensure all functionality works with the new structure:
- Verify all Task Master commands work with the new paths
- Ensure backward compatibility functions correctly
- Test migration process preserves all user data
7. Update documentation:
- Update README.md to reflect the new user file structure
- Add migration guide for existing users
- Document the benefits of the cleaner organization
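A sketch of the fallback logic from step 3, with illustrative names — prefer the new location, warn on the legacy one:

```js
const fs = require('fs');
const path = require('path');

// Check the new config location first, then fall back to the legacy file
// with a deprecation warning; returns null when neither exists.
function findConfigPath(projectRoot) {
  const newPath = path.join(projectRoot, '.taskmaster', 'config.json');
  if (fs.existsSync(newPath)) return newPath;

  const legacyPath = path.join(projectRoot, '.taskmasterconfig');
  if (fs.existsSync(legacyPath)) {
    console.warn(
      'Deprecated: found .taskmasterconfig in the project root; run `task-master migrate` to move to .taskmaster/config.json.'
    );
    return legacyPath;
  }
  return null;
}
```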
# Test Strategy:
1. Unit Testing:
- Create unit tests for path resolution that verify both new and old paths work
- Test configuration loading with both `.taskmasterconfig` and `.taskmaster/config.json`
- Verify the migration command correctly moves files and preserves content
- Test file creation in all new subdirectories
2. Integration Testing:
- Run all existing integration tests with the new directory structure
- Verify that all Task Master commands function correctly with new paths
- Test backward compatibility by running commands with old file structure
3. Migration Testing:
- Test the migration process on sample projects with existing tasks and files
- Verify all tasks, PRDs, reports, and configurations are correctly moved
- Ensure no data loss occurs during migration
- Test migration with partial existing structures (e.g., only tasks/ exists)
4. User Workflow Testing:
- Test complete workflows: init → create tasks → generate reports → update PRDs
- Verify all generated files go to correct locations in `.taskmaster/`
- Test that user project directories remain clean
5. Manual Testing:
- Perform end-to-end testing with the new structure
- Create, update, and delete tasks using the new structure
- Generate reports and verify they're saved to `.taskmaster/reports/`
6. Documentation Verification:
- Review all documentation to ensure it accurately reflects the new user file structure
- Verify the migration guide provides clear instructions
7. Regression Testing:
- Run the full test suite to ensure no regressions were introduced
- Verify existing user projects continue to work during transition period
# Subtasks:
## 1. Create .taskmaster directory structure [done]
### Dependencies: None
### Description: Create the new .taskmaster directory and move existing files to their new locations
### Details:
Create a new .taskmaster/ directory in the project root. Move the tasks/ directory to .taskmaster/tasks/. Move the scripts/ directory to .taskmaster/scripts/. Move the .taskmasterconfig file to .taskmaster/config.json. Ensure proper file permissions are maintained during the move.
<info added on 2025-05-29T15:03:56.912Z>
Create the new .taskmaster/ directory structure in user projects with subdirectories for tasks/, docs/, reports/, and templates/. Move the existing .taskmasterconfig file to .taskmaster/config.json. Since this project is also a Task Master user, move this project's current user files (tasks.json, PRD files, etc.) to the new .taskmaster/ structure to test the implementation. This subtask focuses on user project directory structure, not Task Master source code relocation.
</info added on 2025-05-29T15:03:56.912Z>
## 2. Update Task Master code for new user file paths [done]
### Dependencies: 95.1
### Description: Modify all Task Master code that creates or reads user project files to use the new .taskmaster structure
### Details:
Update Task Master's file handling code to use the new paths: tasks in .taskmaster/tasks/, PRD files in .taskmaster/docs/, reports in .taskmaster/reports/, and config in .taskmaster/config.json. Modify path resolution logic throughout the Task Master codebase to reference the new user file locations.
## 3. Update task file generation system [done]
### Dependencies: 95.1
### Description: Modify the task file generation system to use the new directory structure
### Details:
Update the task file generation system to create and read task files from .taskmaster/tasks/ instead of tasks/. Ensure all template paths are updated. Modify any path resolution logic specific to task file handling.
## 4. Implement backward compatibility logic [done]
### Dependencies: 95.2, 95.3
### Description: Add fallback mechanisms to support both old and new file locations during transition
### Details:
Implement path fallback logic that checks both old and new locations when files aren't found. Add deprecation warnings when old paths are used, informing users about the new structure. Ensure error messages are clear about the transition.
## 5. Create migration command for users [done]
### Dependencies: 95.1, 95.4
### Description: Develop a Task Master command to help users transition their existing projects to the new structure
### Details:
Create a 'taskmaster migrate' command that automatically moves user files from old locations to the new .taskmaster structure. Move tasks/ to .taskmaster/tasks/, scripts/prd.txt to .taskmaster/docs/, reports to .taskmaster/reports/, and .taskmasterconfig to .taskmaster/config.json. Include backup functionality and validation to ensure migration completed successfully.
## 6. Update project initialization process [done]
### Dependencies: 95.1
### Description: Modify the init command to create the new directory structure for new projects
### Details:
Update the init command to create the .taskmaster directory and its subdirectories (tasks/, docs/, reports/, templates/). Modify default file creation to use the new paths. Ensure new projects are created with the correct structure from the start.
## 7. Update PRD and report file handling [done]
### Dependencies: 95.2, 95.6
### Description: Modify PRD file creation and report generation to use the new directory structure
### Details:
Update PRD file handling to create and read files from .taskmaster/docs/ instead of scripts/. Modify report generation (like task-complexity-report.json) to save to .taskmaster/reports/. Ensure all file operations use the new paths consistently.
## 8. Update documentation and create migration guide [done]
### Dependencies: 95.5, 95.6, 95.7
### Description: Update all documentation to reflect the new directory structure and provide migration guidance
### Details:
Update README.md and other documentation to reflect the new .taskmaster structure for user projects. Create a comprehensive migration guide explaining the benefits of the new structure and how to migrate existing projects. Include examples of the new directory layout and explain how it keeps user project directories clean.
## 9. Add templates directory support [done]
### Dependencies: 95.2, 95.6
### Description: Implement support for user templates in the .taskmaster/templates/ directory
### Details:
Create functionality to support user-defined templates in .taskmaster/templates/. Allow users to store custom task templates, PRD templates, or other reusable files. Update Task Master commands to recognize and use templates from this directory when available.
## 10. Verify clean user project directories [done]
### Dependencies: 95.8, 95.9
### Description: Ensure the new structure keeps user project root directories clean and organized
### Details:
Validate that after implementing the new structure, user project root directories only contain their actual project files plus the single .taskmaster/ directory. Verify that no Task Master files are created outside of .taskmaster/. Test that users can easily add .taskmaster/ to .gitignore if they choose to exclude Task Master files from version control.

View File

@@ -1,37 +0,0 @@
# Task ID: 96
# Title: Create Export Command for On-Demand Task File and PDF Generation
# Status: pending
# Dependencies: 2, 4, 95
# Priority: medium
# Description: Develop an 'export' CLI command that generates task files and comprehensive PDF exports on-demand, replacing automatic file generation and providing users with flexible export options.
# Details:
Implement a new 'export' command in the CLI that supports two primary modes: (1) generating individual task files on-demand (superseding the current automatic generation system), and (2) producing a comprehensive PDF export. The PDF should include: a first page with the output of 'tm list --with-subtasks', followed by individual pages for each task (using 'tm show <task_id>') and each subtask (using 'tm show <subtask_id>'). Integrate PDF generation using a robust library (e.g., pdfkit, Puppeteer, or jsPDF) to ensure high-quality output and proper pagination. Refactor or disable any existing automatic file generation logic to avoid performance overhead. Ensure the command supports flexible output paths and options for exporting only files, only PDF, or both. Update documentation and help output to reflect the new export capabilities. Consider concurrency and error handling for large projects. Ensure the export process is efficient and does not block the main CLI thread unnecessarily.
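As a rough shape for the PDF mode, here is a sketch using pdfkit (one of the candidate libraries named above); the function name and inputs are assumptions:

```js
const fs = require('fs');
const PDFDocument = require('pdfkit');

// Assemble the export PDF: first page is the full task list, then one
// page per task/subtask rendering.
function exportPdf(outputPath, listOutput, taskPages) {
  const doc = new PDFDocument();
  doc.pipe(fs.createWriteStream(outputPath));
  doc.fontSize(10).text(listOutput); // output of `tm list --with-subtasks`
  for (const pageText of taskPages) {
    doc.addPage();
    doc.fontSize(10).text(pageText); // output of `tm show <task_id|subtask_id>`
  }
  doc.end();
}
```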
# Test Strategy:
1. Run the 'export' command with various options and verify that task files are generated only on-demand, not automatically.
2. Generate a PDF export and confirm that the first page contains the correct 'tm list --with-subtasks' output, and that each subsequent page accurately reflects the output of 'tm show <task_id>' and 'tm show <subtask_id>' for all tasks and subtasks.
3. Test exporting in projects with large numbers of tasks and subtasks to ensure performance and correctness.
4. Attempt exports with invalid paths or missing data to verify robust error handling.
5. Confirm that no automatic file generation occurs during normal task operations.
6. Review CLI help output and documentation for accuracy regarding the new export functionality.
# Subtasks:
## 1. Remove Automatic Task File Generation from Task Operations [pending]
### Dependencies: None
### Description: Eliminate all calls to generateTaskFiles() from task operations such as add-task, remove-task, set-status, and similar commands to prevent unnecessary performance overhead.
### Details:
Audit the codebase for any automatic invocations of generateTaskFiles() and remove or refactor them to ensure task files are not generated automatically during task operations.
## 2. Implement Export Command Infrastructure with On-Demand Task File Generation [pending]
### Dependencies: 96.1
### Description: Develop the CLI 'export' command infrastructure, enabling users to generate task files on-demand by invoking the preserved generateTaskFiles function only when requested.
### Details:
Create the export command with options for output paths and modes (files, PDF, or both). Ensure generateTaskFiles is only called within this command and not elsewhere.
## 3. Implement Comprehensive PDF Export Functionality [pending]
### Dependencies: 96.2
### Description: Add PDF export capability to the export command, generating a structured PDF with a first page listing all tasks and subtasks, followed by individual pages for each task and subtask, using a robust PDF library.
### Details:
Integrate a PDF generation library (e.g., pdfkit, Puppeteer, or jsPDF). Ensure the PDF includes the output of 'tm list --with-subtasks' on the first page, and uses 'tm show <task_id>' and 'tm show <subtask_id>' for subsequent pages. Handle pagination, concurrency, and error handling for large projects.
## 4. Update Documentation, Tests, and CLI Help for Export Workflow [pending]
### Dependencies: 96.2, 96.3
### Description: Revise all relevant documentation, automated tests, and CLI help output to reflect the new export-based workflow and available options.
### Details:
Update user guides, README files, and CLI help text. Add or modify tests to cover the new export command and its options. Ensure all documentation accurately describes the new workflow and usage.

View File

@@ -1,47 +0,0 @@
<context>
# Overview
[Provide a high-level overview of your product here. Explain what problem it solves, who it's for, and why it's valuable.]
# Core Features
[List and describe the main features of your product. For each feature, include:
- What it does
- Why it's important
- How it works at a high level]
# User Experience
[Describe the user journey and experience. Include:
- User personas
- Key user flows
- UI/UX considerations]
</context>
<PRD>
# Technical Architecture
[Outline the technical implementation details:
- System components
- Data models
- APIs and integrations
- Infrastructure requirements]
# Development Roadmap
[Break down the development process into phases:
- MVP requirements
- Future enhancements
- Do not think about timelines whatsoever -- all that matters is scope and detailing exactly what needs to be built in each phase so it can later be cut up into tasks]
# Logical Dependency Chain
[Define the logical order of development:
- Which features need to be built first (foundation)
- Getting as quickly as possible to a usable/visible front end that works
- Properly pacing and scoping each feature so it is atomic but can also be built upon and improved as development progresses]
# Risks and Mitigations
[Identify potential risks and how they'll be addressed:
- Technical challenges
- Figuring out the MVP that we can build upon
- Resource constraints]
# Appendix
[Include any additional information:
- Research findings
- Technical specifications]
</PRD>

.taskmasterconfig Normal file
View File

@@ -0,0 +1,37 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/"
},
"account": {
"userId": "1234567890",
"email": "",
"mode": "byok",
"telemetryEnabled": true
}
}

View File

@@ -1,149 +1,5 @@
# task-master-ai
## 0.16.2-rc.0
### Patch Changes
- [#655](https://github.com/eyaltoledano/claude-task-master/pull/655) [`edaa5fe`](https://github.com/eyaltoledano/claude-task-master/commit/edaa5fe0d56e0e4e7c4370670a7a388eebd922ac) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix double .taskmaster directory paths in file resolution utilities
- Closes #636
- [#671](https://github.com/eyaltoledano/claude-task-master/pull/671) [`86ea6d1`](https://github.com/eyaltoledano/claude-task-master/commit/86ea6d1dbc03eeb39f524f565b50b7017b1d2c9c) Thanks [@joedanz](https://github.com/joedanz)! - Add one-click MCP server installation for Cursor
## 0.16.1
### Patch Changes
- [#641](https://github.com/eyaltoledano/claude-task-master/pull/641) [`ad61276`](https://github.com/eyaltoledano/claude-task-master/commit/ad612763ffbdd35aa1b593c9613edc1dc27a8856) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bedrock issues
- [#648](https://github.com/eyaltoledano/claude-task-master/pull/648) [`9b4168b`](https://github.com/eyaltoledano/claude-task-master/commit/9b4168bb4e4dfc2f4fb0cf6bd5f81a8565879176) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP tool calls logging errors
- [#641](https://github.com/eyaltoledano/claude-task-master/pull/641) [`ad61276`](https://github.com/eyaltoledano/claude-task-master/commit/ad612763ffbdd35aa1b593c9613edc1dc27a8856) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Update rules for new directory structure
- [#648](https://github.com/eyaltoledano/claude-task-master/pull/648) [`9b4168b`](https://github.com/eyaltoledano/claude-task-master/commit/9b4168bb4e4dfc2f4fb0cf6bd5f81a8565879176) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug in expand_all mcp tool
- [#641](https://github.com/eyaltoledano/claude-task-master/pull/641) [`ad61276`](https://github.com/eyaltoledano/claude-task-master/commit/ad612763ffbdd35aa1b593c9613edc1dc27a8856) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP crashing after certain commands due to console logs
## 0.16.0
### Minor Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add AWS bedrock support
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - # Add Google Vertex AI Provider Integration
- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
- Modified AI services unified system to integrate the provider
- Added documentation for Vertex AI setup and configuration
- Updated environment variable examples for Vertex AI support
- Implemented specialized error handling for Vertex-specific issues
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for Azure
- [#612](https://github.com/eyaltoledano/claude-task-master/pull/612) [`669b744`](https://github.com/eyaltoledano/claude-task-master/commit/669b744ced454116a7b29de6c58b4b8da977186a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Increased minimum required node version to > 18 (was > 14)
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Renamed baseUrl to baseURL
- [#604](https://github.com/eyaltoledano/claude-task-master/pull/604) [`80735f9`](https://github.com/eyaltoledano/claude-task-master/commit/80735f9e60c7dda7207e169697f8ac07b6733634) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
- Some users were having issues where the MCP wasn't able to detect the location of their project root, you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
- [#619](https://github.com/eyaltoledano/claude-task-master/pull/619) [`3f64202`](https://github.com/eyaltoledano/claude-task-master/commit/3f64202c9feef83f2bf383c79e4367d337c37e20) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Consolidate Task Master files into unified .taskmaster directory structure
This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.
**New Directory Structure:**
- `.taskmaster/tasks/` - Task files (previously `tasks/`)
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
- `.taskmaster/templates/` - Template files like example PRD
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)
**Migration & Backward Compatibility:**
- Existing projects continue to work with legacy file locations
- New projects use the consolidated structure automatically
- Run `task-master migrate` to move existing projects to the new structure
- All CLI commands and MCP tools automatically detect and use appropriate file locations
**Benefits:**
- Cleaner project root with Task Master files organized in one location
- Reduced file scatter across multiple directories
- Improved project navigation and maintenance
- Consistent file organization across all Task Master projects
This change maintains full backward compatibility while providing a migration path to the improved structure.
### Patch Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens error when trying to use claude-sonnet-4 and claude-opus-4
- [#625](https://github.com/eyaltoledano/claude-task-master/pull/625) [`2d520de`](https://github.com/eyaltoledano/claude-task-master/commit/2d520de2694da3efe537b475ca52baf3c869edda) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix add-task MCP command causing an error
## 0.16.0-rc.0
### Minor Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add AWS bedrock support
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - # Add Google Vertex AI Provider Integration
- Implemented `VertexAIProvider` class extending BaseAIProvider
- Added authentication and configuration handling for Vertex AI
- Updated configuration manager with Vertex-specific getters
- Modified AI services unified system to integrate the provider
- Added documentation for Vertex AI setup and configuration
- Updated environment variable examples for Vertex AI support
- Implemented specialized error handling for Vertex-specific issues
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for Azure
- [#612](https://github.com/eyaltoledano/claude-task-master/pull/612) [`669b744`](https://github.com/eyaltoledano/claude-task-master/commit/669b744ced454116a7b29de6c58b4b8da977186a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Increased minimum required node version to > 18 (was > 14)
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Renamed baseUrl to baseURL
- [#604](https://github.com/eyaltoledano/claude-task-master/pull/604) [`80735f9`](https://github.com/eyaltoledano/claude-task-master/commit/80735f9e60c7dda7207e169697f8ac07b6733634) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add TASK_MASTER_PROJECT_ROOT env variable supported in mcp.json and .env for project root resolution
- Some users were having issues where the MCP wasn't able to detect the location of their project root, you can now set the `TASK_MASTER_PROJECT_ROOT` environment variable to the root of your project.
- [#619](https://github.com/eyaltoledano/claude-task-master/pull/619) [`3f64202`](https://github.com/eyaltoledano/claude-task-master/commit/3f64202c9feef83f2bf383c79e4367d337c37e20) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Consolidate Task Master files into unified .taskmaster directory structure
This release introduces a new consolidated directory structure that organizes all Task Master files under a single `.taskmaster/` directory for better project organization and cleaner workspace management.
**New Directory Structure:**
- `.taskmaster/tasks/` - Task files (previously `tasks/`)
- `.taskmaster/docs/` - Documentation including PRD files (previously `scripts/`)
- `.taskmaster/reports/` - Complexity analysis reports (previously `scripts/`)
- `.taskmaster/templates/` - Template files like example PRD
- `.taskmaster/config.json` - Configuration (previously `.taskmasterconfig`)
**Migration & Backward Compatibility:**
- Existing projects continue to work with legacy file locations
- New projects use the consolidated structure automatically
- Run `task-master migrate` to move existing projects to the new structure
- All CLI commands and MCP tools automatically detect and use appropriate file locations
**Benefits:**
- Cleaner project root with Task Master files organized in one location
- Reduced file scatter across multiple directories
- Improved project navigation and maintenance
- Consistent file organization across all Task Master projects
This change maintains full backward compatibility while providing a migration path to the improved structure.
### Patch Changes
- [#607](https://github.com/eyaltoledano/claude-task-master/pull/607) [`6a8a68e`](https://github.com/eyaltoledano/claude-task-master/commit/6a8a68e1a3f34dcdf40b355b4602a08d291f8e38) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens error when trying to use claude-sonnet-4 and claude-opus-4
- [#597](https://github.com/eyaltoledano/claude-task-master/pull/597) [`2d520de`](https://github.com/eyaltoledano/claude-task-master/commit/2d520de2694da3efe537b475ca52baf3c869edda) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Fix add-task MCP command causing an error
## 0.15.0
### Minor Changes

View File

@@ -170,7 +170,7 @@ Example:
### Prerequisites
- Node.js 18+
- Node.js 14+
- npm or yarn
### Environment Setup

README.md
View File

@@ -2,37 +2,13 @@
[![CI](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg)](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml) [![npm version](https://badge.fury.io/js/task-master-ai.svg)](https://badge.fury.io/js/task-master-ai) [![Discord](https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat)](https://discord.gg/taskmasterai) [![License: MIT with Commons Clause](https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg)](LICENSE)
[![NPM Downloads](https://img.shields.io/npm/d18m/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai) [![NPM Downloads](https://img.shields.io/npm/dm/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai) [![NPM Downloads](https://img.shields.io/npm/dw/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai)
### By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom)
## By [@eyaltoledano](https://x.com/eyaltoledano), [@RalphEcom](https://x.com/RalphEcom) & [@jasonzhou1993](https://x.com/jasonzhou1993)
[![Twitter Follow](https://img.shields.io/twitter/follow/eyaltoledano)](https://x.com/eyaltoledano)
[![Twitter Follow](https://img.shields.io/twitter/follow/RalphEcom)](https://x.com/RalphEcom)
[![Twitter Follow](https://img.shields.io/twitter/follow/jasonzhou1993)](https://x.com/jasonzhou1993)
[![Twitter Follow](https://img.shields.io/twitter/follow/eyaltoledano?style=flat)](https://x.com/eyaltoledano)
[![Twitter Follow](https://img.shields.io/twitter/follow/RalphEcom?style=flat)](https://x.com/RalphEcom)
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
## Documentation
For more detailed information, check out the documentation in the `docs` directory:
- [Configuration Guide](docs/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](docs/tutorial.md) - Step-by-step guide to getting started with Task Master
- [Command Reference](docs/command-reference.md) - Complete list of all available commands
- [Task Structure](docs/task-structure.md) - Understanding the task format and features
- [Example Interactions](docs/examples.md) - Common Cursor AI interaction examples
- [Migration Guide](docs/migration-guide.md) - Guide to migrating to the new project structure
#### Quick Install for Cursor 1.0+ (One-Click)
📋 Click the copy button (top-right of code block) then paste into your browser:
```text
cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQo=
```
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
## Requirements
Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
@@ -63,57 +39,55 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
| **Cursor** | Global | `~/.cursor/mcp.json` | `%USERPROFILE%\.cursor\mcp.json` | `mcpServers` |
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
| **VSCode** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
##### Manual Configuration
##### Cursor & Windsurf (`mcpServers`)
###### Cursor & Windsurf (`mcpServers`)
```json
```jsonc
{
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
}
}
}
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
}
}
}
}
```
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
###### VSCode (`servers` + `type`)
##### VSCode (`servers` + `type`)
```json
```jsonc
{
"servers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
},
"type": "stdio"
}
}
"servers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
},
"type": "stdio"
}
}
}
```
@@ -125,7 +99,7 @@ Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable
#### 3. (Optional) Configure the models you want to use
In your editor's AI chat pane, say:
In your editors AI chat pane, say:
```txt
Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively.
@@ -135,21 +109,15 @@ Change the main, research and fallback models to <model_name>, <model_name> and
#### 4. Initialize Task Master
In your editor's AI chat pane, say:
In your editors AI chat pane, say:
```txt
Initialize taskmaster-ai in my project
```
#### 5. Make sure you have a PRD (Recommended)
#### 5. Make sure you have a PRD in `<project_folder>/scripts/prd.txt`
For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt`
For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate`
An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`.
> [!NOTE]
> While a PRD is recommended for complex projects, you can always create individual tasks by asking "Can you help me implement [description of what you want to do]?" in chat.
An example of a PRD is located in `<project_folder>/scripts/example_prd.txt`.
**Always start with a detailed PRD.**
@@ -160,9 +128,12 @@ The more detailed your PRD, the better the generated tasks will be.
Use your AI assistant to:
- Parse requirements: `Can you parse my PRD at scripts/prd.txt?`
- Plan next step: `What's the next task I should work on?`
- Plan next step: `Whats the next task I should work on?`
- Implement a task: `Can you help me implement task 3?`
- View multiple tasks: `Can you show me tasks 1, 3, and 5?`
- Expand a task: `Can you help me expand task 4?`
- **Research fresh information**: `Research the latest best practices for implementing JWT authentication with Node.js`
- **Research with context**: `Research React Query v5 migration strategies for our current API implementation in src/api.js`
[More examples on how to use Task Master in chat](docs/examples.md)
@@ -205,13 +176,29 @@ task-master list
# Show the next task to work on
task-master next
# Show specific task(s) - supports comma-separated IDs
task-master show 1,3,5
# Research fresh information with project context
task-master research "What are the latest best practices for JWT authentication?"
# Generate task files
task-master generate
```
## Documentation
For more detailed information, check out the documentation in the `docs` directory:
- [Configuration Guide](docs/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](docs/tutorial.md) - Step-by-step guide to getting started with Task Master
- [Command Reference](docs/command-reference.md) - Complete list of all available commands
- [Task Structure](docs/task-structure.md) - Understanding the task format and features
- [Example Interactions](docs/examples.md) - Common Cursor AI interaction examples
## Troubleshooting
### If `task-master init` doesn't respond
### If `task-master init` doesn't respond:
Try running it with Node directly:

View File

@@ -7,7 +7,7 @@
```bash
# Project Setup
task-master init # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt # Generate tasks from PRD document
task-master parse-prd scripts/prd.txt # Generate tasks from PRD document
task-master models --setup # Configure AI models interactively
# Daily Development Workflow
@@ -39,10 +39,10 @@ task-master generate # Update task markd
### Core Files
- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmasterconfig` - AI model configuration (use `task-master models` to modify)
- `scripts/prd.txt` - Product Requirements Document for parsing
- `tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage
### Claude Code Integration Files
@@ -56,24 +56,20 @@ task-master generate # Update task markd
```
project/
├── .taskmaster/
│   ├── tasks/                      # Task files directory
│   │   ├── tasks.json              # Main task database
│   │   ├── task-1.md               # Individual task files
│   │   └── task-2.md
│   ├── docs/                       # Documentation directory
│   │   └── prd.txt                 # Product requirements
│   ├── reports/                    # Analysis reports directory
│   │   └── task-complexity-report.json
│   ├── templates/                  # Template files
│   │   └── example_prd.txt         # Example PRD template
│   └── config.json                 # AI models & settings
├── tasks/
│   ├── tasks.json                  # Main task database
│   ├── task-1.md                   # Individual task files
│   └── task-2.md
├── scripts/
│   ├── prd.txt                     # Product requirements
│   └── task-complexity-report.json
├── .claude/
│   ├── settings.json               # Claude Code configuration
│   └── commands/                   # Custom slash commands
├── .env                            # API keys
├── .mcp.json                       # MCP configuration
└── CLAUDE.md                       # This file - auto-loaded by Claude Code
│   ├── settings.json               # Claude Code configuration
│   └── commands/                   # Custom slash commands
├── .taskmasterconfig               # AI models & settings
├── .env                            # API keys
├── .mcp.json                       # MCP configuration
└── CLAUDE.md                       # This file - auto-loaded by Claude Code
```
## MCP Integration
@@ -82,23 +78,23 @@ Task Master provides an MCP server that Claude Code can connect to. Configure in
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
}
```
@@ -139,7 +135,7 @@ complexity_report; // = task-master complexity-report
task-master init
# Create or obtain PRD, then parse it
task-master parse-prd .taskmaster/docs/prd.txt
task-master parse-prd scripts/prd.txt
# Analyze complexity and expand tasks
task-master analyze-complexity --research
@@ -212,14 +208,14 @@ Add to `.claude/settings.json`:
```json
{
"allowedTools": [
"Edit",
"Bash(task-master *)",
"Bash(git commit:*)",
"Bash(git add:*)",
"Bash(npm run *)",
"mcp__task_master_ai__*"
]
"allowedTools": [
"Edit",
"Bash(task-master *)",
"Bash(git commit:*)",
"Bash(git add:*)",
"Bash(npm run *)",
"mcp__task_master_ai__*"
]
}
```
@@ -272,15 +268,15 @@ task-master models --set-fallback gpt-4o-mini
```json
{
"id": "1.2",
"title": "Implement user authentication",
"description": "Set up JWT-based auth system",
"status": "pending",
"priority": "high",
"dependencies": ["1.1"],
"details": "Use bcrypt for hashing, JWT for tokens...",
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
"subtasks": []
"id": "1.2",
"title": "Implement user authentication",
"description": "Set up JWT-based auth system",
"status": "pending",
"priority": "high",
"dependencies": ["1.1"],
"details": "Use bcrypt for hashing, JWT for tokens...",
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
"subtasks": []
}
```
@@ -388,7 +384,7 @@ These commands make AI calls and may take up to a minute:
### File Management
- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmaster/config.json` - use `task-master models`
- Never manually edit `.taskmasterconfig` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- Run `task-master generate` after manual changes to tasks.json

View File

@@ -1,32 +0,0 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20240620",
"maxTokens": 8192,
"temperature": 0.1
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseURL": "http://localhost:11434/api",
"azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/",
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com"
}
}

View File

@@ -5,5 +5,5 @@ OPENAI_API_KEY="your_openai_api_key_here" # Optional, for OpenAI/Ope
GOOGLE_API_KEY="your_google_api_key_here" # Optional, for Google Gemini models.
MISTRAL_API_KEY="your_mistral_key_here" # Optional, for Mistral AI models.
XAI_API_KEY="YOUR_XAI_KEY_HERE" # Optional, for xAI AI models.
AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmaster/config.json).
AZURE_OPENAI_API_KEY="your_azure_key_here" # Optional, for Azure OpenAI models (requires endpoint in .taskmasterconfig).
OLLAMA_API_KEY="your_ollama_api_key_here" # Optional: For remote Ollama servers that require authentication.

View File

@@ -20,7 +20,7 @@ In an AI-driven development process—particularly with tools like [Cursor](http
Task Master configuration is now managed through two primary methods:
1. **`.taskmaster/config.json` File (Project Root - Primary)**
1. **`.taskmasterconfig` File (Project Root - Primary)**
- Stores AI model selections (`main`, `research`, `fallback`), model parameters (`maxTokens`, `temperature`), `logLevel`, `defaultSubtasks`, `defaultPriority`, `projectName`, etc.
- Managed using the `task-master models --setup` command or the `models` MCP tool.
@@ -192,7 +192,7 @@ Notes:
## AI Integration (Updated)
- The script now uses a unified AI service layer (`ai-services-unified.js`).
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmaster/config.json` based on the requested `role` (`main` or `research`).
- Model selection (e.g., Claude vs. Perplexity for `--research`) is determined by the configuration in `.taskmasterconfig` based on the requested `role` (`main` or `research`).
- API keys are automatically resolved from your `.env` file (for CLI) or MCP session environment.
- To use the research capabilities (e.g., `expand --research`), ensure you have:
1. Configured a model for the `research` role using `task-master models --setup` (Perplexity models are recommended).
@@ -357,25 +357,25 @@ The output report structure is:
```json
{
"meta": {
"generatedAt": "2023-06-15T12:34:56.789Z",
"tasksAnalyzed": 20,
"thresholdScore": 5,
"projectName": "Your Project Name",
"usedResearch": true
},
"complexityAnalysis": [
{
"taskId": 8,
"taskTitle": "Develop Implementation Drift Handling",
"complexityScore": 9.5,
"recommendedSubtasks": 6,
"expansionPrompt": "Create subtasks that handle detecting...",
"reasoning": "This task requires sophisticated logic...",
"expansionCommand": "task-master expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
}
// More tasks sorted by complexity score (highest first)
]
"meta": {
"generatedAt": "2023-06-15T12:34:56.789Z",
"tasksAnalyzed": 20,
"thresholdScore": 5,
"projectName": "Your Project Name",
"usedResearch": true
},
"complexityAnalysis": [
{
"taskId": 8,
"taskTitle": "Develop Implementation Drift Handling",
"complexityScore": 9.5,
"recommendedSubtasks": 6,
"expansionPrompt": "Create subtasks that handle detecting...",
"reasoning": "This task requires sophisticated logic...",
"expansionCommand": "task-master expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research"
}
// More tasks sorted by complexity score (highest first)
]
}
```

View File

@@ -9,7 +9,7 @@ Welcome to the Task Master documentation. Use the links below to navigate to the
## Reference
- [Command Reference](command-reference.md) - Complete list of all available commands
- [Command Reference](command-reference.md) - Complete list of all available commands (including research and multi-task viewing)
- [Task Structure](task-structure.md) - Understanding the task format and features
## Examples & Licensing

View File

@@ -43,10 +43,28 @@ task-master show <id>
# or
task-master show --id=<id>
# View multiple tasks with comma-separated IDs
task-master show 1,3,5
task-master show 44,55
# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
# Mix parent tasks and subtasks
task-master show 44,44.1,55,55.2
```
**Multiple Task Display:**
- **Single ID**: Shows detailed task view with full implementation details
- **Multiple IDs**: Shows compact summary table with interactive action menu
- **Action Menu**: Provides copy-paste ready commands for batch operations:
- Mark all as in-progress/done
- Show next available task
- Expand all tasks (generate subtasks)
- View dependency relationships
- Generate task files
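The single-versus-multiple behavior comes down to how the comma-separated ID argument is parsed. A minimal sketch of that parsing (the real logic lives in the `show` command and `showTaskDirect`, shown later in this diff) looks like:

```js
// One ID yields the detailed view; several yield the compact summary
// table with the interactive action menu.
function parseShowIds(idArg) {
	const ids = idArg
		.split(',')
		.map((id) => id.trim())
		.filter((id) => id.length > 0);
	return { ids, isMultiple: ids.length > 1 };
}

parseShowIds('44,44.1,55,55.2');
// => { ids: ['44', '44.1', '55', '55.2'], isMultiple: true }
```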
## Update Tasks
```bash
@@ -262,3 +280,42 @@ task-master models --setup
```
Configuration is stored in `.taskmasterconfig` in your project root. API keys are still managed via `.env` or MCP configuration. Use `task-master models` without flags to see available built-in models. Use `--setup` for a guided experience.
## Research Fresh Information
```bash
# Perform AI-powered research with fresh, up-to-date information
task-master research "What are the latest best practices for JWT authentication in Node.js?"
# Research with specific task context
task-master research "How to implement OAuth 2.0?" --id=15,16
# Research with file context for code-aware suggestions
task-master research "How can I optimize this API implementation?" --files=src/api.js,src/auth.js
# Research with custom context and project tree
task-master research "Best practices for error handling" --context="We're using Express.js" --tree
# Research with different detail levels
task-master research "React Query v5 migration guide" --detail=high
# Save research results to a file
task-master research "Database optimization techniques" --save=research/db-optimization.md
```
**The research command is a powerful tool that provides:**
- **Fresh information beyond AI knowledge cutoffs**
- **Project-aware context** from your tasks and files
- **Automatic task discovery** using fuzzy search
- **Multiple detail levels** (low, medium, high)
- **Token counting and cost tracking**
- **Interactive follow-up questions**
**Use research frequently to:**
- Get current best practices before implementing features
- Research new technologies and libraries
- Find solutions to complex problems
- Validate your implementation approaches
- Stay updated with latest security recommendations
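For programmatic callers, the same capability is exposed through the MCP direct function `researchDirect` (added later in this diff); the CLI flags map onto its arguments. An illustrative invocation, assuming the import path below matches your checkout:

```js
// Illustrative call; the import path is an assumption about repo layout.
import { researchDirect } from './mcp-server/src/core/direct-functions/research.js';

const result = await researchDirect(
	{
		query: 'Latest best practices for JWT authentication in Node.js',
		taskIds: '15,16', // --id=15,16
		filePaths: 'src/api.js', // --files=src/api.js
		detailLevel: 'high', // --detail=high
		includeProjectTree: true, // --tree
		projectRoot: process.cwd()
	},
	console, // any logger with info/warn/error methods
	{ session: undefined }
);
```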

View File

@@ -2,82 +2,73 @@
Taskmaster uses two primary methods for configuration:
1. **`.taskmaster/config.json` File (Recommended - New Structure)**
1. **`.taskmasterconfig` File (Project Root - Recommended for most settings)**
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
- **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
- **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
- **Location:** This file is created in the root directory of your project when you run the `task-master models --setup` interactive setup. You typically do this during the initialization sequence. Do not manually edit this file beyond adjusting Temperature and Max Tokens depending on your model.
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
- **Example Structure:**
```json
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2,
"baseURL": "https://api.anthropic.com/v1"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1,
"baseURL": "https://api.perplexity.ai/v1"
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Your Project Name",
"ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/",
"vertexProjectId": "your-gcp-project-id",
"vertexLocation": "us-central1"
}
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 64000,
"temperature": 0.2,
"baseURL": "https://api.anthropic.com/v1"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1,
"baseURL": "https://api.perplexity.ai/v1"
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet",
"maxTokens": 64000,
"temperature": 0.2
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Your Project Name",
"ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/",
"vertexProjectId": "your-gcp-project-id",
"vertexLocation": "us-central1"
}
}
```
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
2. **Environment Variables (`.env` file or MCP `env` block - For API Keys Only)**
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key (also used for Vertex AI provider).
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
- For projects that haven't migrated to the new structure yet.
- **Location:** Project root directory.
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key (also used for Vertex AI provider).
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmasterconfig`**, not environment variables.
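To make the split concrete, here is a minimal sketch of how the two sources combine, assuming `dotenv` loads the `.env` file for CLI usage:

```js
// Model settings come from the config file; API keys come only from the
// environment (.env for CLI, or the MCP `env` block).
import 'dotenv/config';
import fs from 'fs';

const config = JSON.parse(fs.readFileSync('.taskmasterconfig', 'utf8'));
const { provider, modelId, maxTokens, temperature } = config.models.main;
const apiKey = process.env.ANTHROPIC_API_KEY; // never stored in the config file
```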
## Example `.env` File (for API Keys)
@@ -103,9 +94,8 @@ PERPLEXITY_API_KEY=pplx-your-key-here
### Configuration Errors
- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair the file.
- For new projects, config will be created at `.taskmaster/config.json`. For legacy projects, you may want to use `task-master migrate` to move to the new structure.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.
- If Task Master reports errors about missing configuration or cannot find `.taskmasterconfig`, run `task-master models --setup` in your project root to create or repair the file.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in `.taskmasterconfig`.
### If `task-master init` doesn't respond:

View File

@@ -5,7 +5,7 @@ Here are some common interactions with Cursor AI when using Task Master:
## Starting a new project
```
I've just initialized a new project with Claude Task Master. I have a PRD at .taskmaster/docs/prd.txt.
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
@@ -21,6 +21,20 @@ What's the next task I should work on? Please consider dependencies and prioriti
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```
## Viewing multiple tasks
```
Can you show me tasks 1, 3, and 5 so I can understand their relationship?
```
```
I need to see the status of tasks 44, 55, and their subtasks. Can you show me those?
```
```
Show me tasks 10, 12, and 15 and give me some batch actions I can perform on them.
```
## Managing subtasks
```
@@ -109,3 +123,56 @@ Please add a new task to implement user profile image uploads using Cloudinary,
```
(Agent runs: `task-master add-task --prompt="Implement user profile image uploads using Cloudinary" --research`)
## Research-Driven Development
### Getting Fresh Information
```
Research the latest best practices for implementing JWT authentication in Node.js applications.
```
(Agent runs: `task-master research "Latest best practices for JWT authentication in Node.js"`)
### Research with Project Context
```
I'm working on task 15 which involves API optimization. Can you research current best practices for our specific implementation?
```
(Agent runs: `task-master research "API optimization best practices" --id=15 --files=src/api.js`)
### Research Before Implementation
```
Before I implement task 8 (React Query integration), can you research the latest React Query v5 patterns and any breaking changes?
```
(Agent runs: `task-master research "React Query v5 patterns and breaking changes" --id=8`)
### Research and Update Pattern
```
Research the latest security recommendations for Express.js applications and update our authentication task with the findings.
```
(Agent runs:
1. `task-master research "Latest Express.js security recommendations" --id=12`
2. `task-master update-subtask --id=12.3 --prompt="Updated with latest security findings: [research results]"`)
### Research for Debugging
```
I'm having issues with our WebSocket implementation in task 20. Can you research common WebSocket problems and solutions?
```
(Agent runs: `task-master research "Common WebSocket implementation problems and solutions" --id=20 --files=src/websocket.js`)
### Research Technology Comparisons
```
We need to choose between Redis and Memcached for caching. Can you research the current recommendations for our use case?
```
(Agent runs: `task-master research "Redis vs Memcached 2024 comparison for session caching" --tree`)

View File

@@ -1,235 +0,0 @@
# Migration Guide: New .taskmaster Directory Structure
## Overview
Task Master v0.16.0 introduces a new `.taskmaster/` directory structure to keep your project directories clean and organized. This guide explains the benefits of the new structure and how to migrate existing projects.
## What's New
### Before (Legacy Structure)
```
your-project/
├── tasks/ # Task files
│ ├── tasks.json
│ ├── task-1.txt
│ └── task-2.txt
├── scripts/ # PRD and reports
│ ├── prd.txt
│ ├── example_prd.txt
│ └── task-complexity-report.json
├── .taskmasterconfig # Configuration
└── ... (your project files)
```
### After (New Structure)
```
your-project/
├── .taskmaster/ # Consolidated Task Master files
│ ├── config.json # Configuration (was .taskmasterconfig)
│ ├── tasks/ # Task files
│ │ ├── tasks.json
│ │ ├── task-1.txt
│ │ └── task-2.txt
│ ├── docs/ # Project documentation
│ │ └── prd.txt
│ ├── reports/ # Generated reports
│ │ └── task-complexity-report.json
│ └── templates/ # Example/template files
│ └── example_prd.txt
└── ... (your project files)
```
## Benefits of the New Structure
**Cleaner Project Root**: No more scattered Task Master files
**Better Organization**: Logical separation of tasks, docs, reports, and templates
**Hidden by Default**: `.taskmaster/` directory is hidden from most file browsers
**Future-Proof**: Centralized location for Task Master extensions
**Backward Compatible**: Existing projects continue to work until migrated
## Migration Options
### Option 1: Automatic Migration (Recommended)
Task Master provides a built-in migration command that handles everything automatically:
#### CLI Migration
```bash
# Dry run to see what would be migrated
task-master migrate --dry-run
# Perform the migration with backup
task-master migrate --backup
# Force migration (overwrites existing files)
task-master migrate --force
# Clean up legacy files after migration
task-master migrate --cleanup
```
#### MCP Migration (Cursor/AI Editors)
Ask your AI assistant:
```
Please migrate my Task Master project to the new .taskmaster directory structure
```
### Option 2: Manual Migration
If you prefer to migrate manually:
1. **Create the new directory structure:**
```bash
mkdir -p .taskmaster/{tasks,docs,reports,templates}
```
2. **Move your files:**
```bash
# Move tasks
mv tasks/* .taskmaster/tasks/
# Move configuration
mv .taskmasterconfig .taskmaster/config.json
# Move PRD and documentation
mv scripts/prd.txt .taskmaster/docs/
mv scripts/example_prd.txt .taskmaster/templates/
# Move reports (if they exist)
mv scripts/task-complexity-report.json .taskmaster/reports/ 2>/dev/null || true
```
3. **Clean up empty directories:**
```bash
rmdir tasks scripts 2>/dev/null || true
```
## What Gets Migrated
The migration process handles these file types:
### Tasks Directory → `.taskmaster/tasks/`
- `tasks.json`
- Individual task text files (`.txt`)
### Scripts Directory → Multiple Destinations
- **PRD files** → `.taskmaster/docs/`
- `prd.txt`, `requirements.txt`, etc.
- **Example/Template files** → `.taskmaster/templates/`
- `example_prd.txt`, template files
- **Reports** → `.taskmaster/reports/`
- `task-complexity-report.json`
### Configuration
- `.taskmasterconfig` → `.taskmaster/config.json`
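The mapping above boils down to a small lookup table. A hypothetical sketch (the real `task-master migrate` command additionally handles pattern matching, backups, and edge cases):

```js
// Legacy location -> new .taskmaster/ location, per the list above.
const MIGRATION_MAP = {
	'tasks/tasks.json': '.taskmaster/tasks/tasks.json',
	'.taskmasterconfig': '.taskmaster/config.json',
	'scripts/prd.txt': '.taskmaster/docs/prd.txt',
	'scripts/example_prd.txt': '.taskmaster/templates/example_prd.txt',
	'scripts/task-complexity-report.json':
		'.taskmaster/reports/task-complexity-report.json'
};
```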
## After Migration
Once migrated, Task Master will:
✅ **Automatically use** the new directory structure
✅ **Show deprecation warnings** when legacy files are detected
✅ **Create new files** in the proper locations
✅ **Fall back gracefully** to legacy locations if new ones don't exist
### Verification
After migration, verify everything works:
1. **List your tasks:**
```bash
task-master list
```
2. **Check your configuration:**
```bash
task-master models
```
3. **Generate new task files:**
```bash
task-master generate
```
## Troubleshooting
### Migration Issues
**Q: Migration says "no files to migrate"**
A: Your project may already be using the new structure or have no Task Master files to migrate.
**Q: Migration fails with permission errors**
A: Ensure you have write permissions in your project directory.
**Q: Some files weren't migrated**
A: Check the migration output - some files may not match the expected patterns. You can migrate these manually.
### Working with Legacy Projects
If you're working with an older project that hasn't been migrated:
- Task Master will continue to work with the old structure
- You'll see deprecation warnings in the output
- New files will still be created in legacy locations
- Use the migration command when ready to upgrade
### New Project Initialization
New projects automatically use the new structure:
```bash
task-master init # Creates .taskmaster/ structure
```
## Path Changes for Developers
If you're developing tools or scripts that interact with Task Master files:
### Configuration File
- **Old:** `.taskmasterconfig`
- **New:** `.taskmaster/config.json`
- **Fallback:** Task Master checks both locations
### Tasks File
- **Old:** `tasks/tasks.json`
- **New:** `.taskmaster/tasks/tasks.json`
- **Fallback:** Task Master checks both locations
### Reports
- **Old:** `scripts/task-complexity-report.json`
- **New:** `.taskmaster/reports/task-complexity-report.json`
- **Fallback:** Task Master checks both locations
### PRD Files
- **Old:** `scripts/prd.txt`
- **New:** `.taskmaster/docs/prd.txt`
- **Fallback:** Task Master checks both locations
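The "checks both locations" behavior can be approximated with a small helper. This is a sketch under the assumption of plain Node `fs`; Task Master's own path utilities are more involved:

```js
import fs from 'fs';
import path from 'path';

// Prefer the new location, fall back to the legacy one, and default to
// the new location when neither exists (for newly created files).
function resolveWithFallback(projectRoot, newRel, legacyRel) {
	const newPath = path.join(projectRoot, newRel);
	if (fs.existsSync(newPath)) return newPath;
	const legacyPath = path.join(projectRoot, legacyRel);
	if (fs.existsSync(legacyPath)) return legacyPath; // legacy fallback
	return newPath;
}

const configPath = resolveWithFallback(
	process.cwd(),
	'.taskmaster/config.json',
	'.taskmasterconfig'
);
```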
## Need Help?
If you encounter issues during migration:
1. **Check the logs:** Add `--debug` flag for detailed output
2. **Backup first:** Always use `--backup` option for safety
3. **Test with dry-run:** Use `--dry-run` to preview changes
4. **Ask for help:** Use our Discord community or GitHub issues
---
_This migration guide applies to Task Master v3.x and later. For older versions, please upgrade to the latest version first._

View File

@@ -1,4 +1,4 @@
# Available Models as of June 8, 2025
# Available Models as of May 27, 2025
## Main Models
@@ -24,7 +24,6 @@
| google | gemini-2.5-flash-preview-04-17 | — | — | — |
| google | gemini-2.0-flash | 0.754 | 0.15 | 0.6 |
| google | gemini-2.0-flash-lite | — | — | — |
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
@@ -71,8 +70,6 @@
| perplexity | sonar-pro | — | 3 | 15 |
| perplexity | sonar | — | 1 | 1 |
| perplexity | deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
| xai | grok-3 | — | 3 | 15 |
| xai | grok-3-fast | — | 5 | 25 |

View File

@@ -20,22 +20,22 @@ npm i -g task-master-ai
```json
{
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
}
}
}
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
}
}
}
}
```
@@ -60,12 +60,12 @@ The AI will:
- Set up initial configuration files
- Guide you through the rest of the process
5. Place your PRD document in the `.taskmaster/docs/` directory (e.g., `.taskmaster/docs/prd.txt`)
5. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
6. **Use natural language commands** to interact with Task Master:
```
Can you parse my PRD at .taskmaster/docs/prd.txt?
Can you parse my PRD at scripts/prd.txt?
What's the next task I should work on?
Can you help me implement task 3?
```
@@ -132,7 +132,7 @@ If you're not using MCP, you can still set up Cursor integration:
1. After initializing your project, open it in Cursor
2. The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
3. Place your PRD document in the `.taskmaster/docs/` directory (e.g., `.taskmaster/docs/prd.txt`)
3. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
4. Open Cursor's AI chat and switch to Agent mode
### Alternative MCP Setup in Cursor
@@ -155,13 +155,13 @@ Once configured, you can interact with Task Master's task management commands di
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at .taskmaster/docs/prd.txt.
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```
The agent will execute:
```bash
task-master parse-prd .taskmaster/docs/prd.txt
task-master parse-prd scripts/prd.txt
```
This will:
@@ -198,10 +198,15 @@ Ask the agent to list available tasks:
What tasks are available to work on next?
```
```
Can you show me tasks 1, 3, and 5 to understand their current status?
```
The agent will:
- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Run `task-master show 1,3,5` to display multiple tasks with interactive options
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
@@ -221,6 +226,21 @@ You can ask:
Let's implement task 3. What does it involve?
```
### 2.1. Viewing Multiple Tasks
For efficient context gathering and batch operations:
```
Show me tasks 5, 7, and 9 so I can plan my implementation approach.
```
The agent will:
- Run `task-master show 5,7,9` to display a compact summary table
- Show task status, priority, and progress indicators
- Provide an interactive action menu with batch operations
- Allow you to perform group actions like marking multiple tasks as in-progress
### 3. Task Verification
Before marking a task as complete, verify it according to:
@@ -377,7 +397,7 @@ task-master expand --id=5 --research
### Starting a new project
```
I've just initialized a new project with Claude Task Master. I have a PRD at .taskmaster/docs/prd.txt.
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
@@ -423,3 +443,55 @@ Can you analyze the complexity of our tasks to help me understand which ones nee
```
Can you show me the complexity report in a more readable format?
```
### Research-Driven Development
Task Master includes a powerful research tool that provides fresh, up-to-date information beyond the AI's knowledge cutoff. This is particularly valuable for:
#### Getting Current Best Practices
```
Before implementing task 5 (authentication), research the latest JWT security recommendations.
```
The agent will execute:
```bash
task-master research "Latest JWT security recommendations 2024" --id=5
```
#### Research with Project Context
```
Research React Query v5 migration strategies for our current API implementation.
```
The agent will execute:
```bash
task-master research "React Query v5 migration strategies" --files=src/api.js,src/hooks.js
```
#### Research and Update Pattern
A powerful workflow is to research first, then update tasks with findings:
```
Research the latest Node.js performance optimization techniques and update task 12 with the findings.
```
The agent will:
1. Run research: `task-master research "Node.js performance optimization 2024" --id=12`
2. Update the task: `task-master update-subtask --id=12.2 --prompt="Updated with latest performance findings: [research results]"`
#### When to Use Research
- **Before implementing any new technology**
- **When encountering security-related tasks**
- **For performance optimization tasks**
- **When debugging complex issues**
- **Before making architectural decisions**
- **When updating dependencies**
The research tool automatically includes relevant project context and provides fresh information that can significantly improve implementation quality.

View File

@@ -1,52 +1,52 @@
export default {
// Use Node.js environment for testing
testEnvironment: 'node',
// Use Node.js environment for testing
testEnvironment: "node",
// Automatically clear mock calls between every test
clearMocks: true,
// Automatically clear mock calls between every test
clearMocks: true,
// Indicates whether the coverage information should be collected while executing the test
collectCoverage: false,
// Indicates whether the coverage information should be collected while executing the test
collectCoverage: false,
// The directory where Jest should output its coverage files
coverageDirectory: 'coverage',
// The directory where Jest should output its coverage files
coverageDirectory: "coverage",
// A list of paths to directories that Jest should use to search for files in
roots: ['<rootDir>/tests'],
// A list of paths to directories that Jest should use to search for files in
roots: ["<rootDir>/tests"],
// The glob patterns Jest uses to detect test files
testMatch: ['**/__tests__/**/*.js', '**/?(*.)+(spec|test).js'],
// The glob patterns Jest uses to detect test files
testMatch: ["**/__tests__/**/*.js", "**/?(*.)+(spec|test).js"],
// Transform files
transform: {},
// Transform files
transform: {},
// Disable transformations for node_modules
transformIgnorePatterns: ['/node_modules/'],
// Disable transformations for node_modules
transformIgnorePatterns: ["/node_modules/"],
// Set moduleNameMapper for absolute paths
moduleNameMapper: {
'^@/(.*)$': '<rootDir>/$1'
},
// Set moduleNameMapper for absolute paths
moduleNameMapper: {
"^@/(.*)$": "<rootDir>/$1",
},
// Setup module aliases
moduleDirectories: ['node_modules', '<rootDir>'],
// Setup module aliases
moduleDirectories: ["node_modules", "<rootDir>"],
// Configure test coverage thresholds
coverageThreshold: {
global: {
branches: 80,
functions: 80,
lines: 80,
statements: 80
}
},
// Configure test coverage thresholds
coverageThreshold: {
global: {
branches: 80,
functions: 80,
lines: 80,
statements: 80,
},
},
// Generate coverage report in these formats
coverageReporters: ['text', 'lcov'],
// Generate coverage report in these formats
coverageReporters: ["text", "lcov"],
// Verbose output
verbose: true,
// Verbose output
verbose: true,
// Setup file
setupFilesAfterEnv: ['<rootDir>/tests/setup.js']
// Setup file
setupFilesAfterEnv: ["<rootDir>/tests/setup.js"],
};

View File

@@ -1,4 +1,4 @@
# Taskmaster AI Installation Guide
``# Taskmaster AI Installation Guide
This guide helps AI assistants install and configure Taskmaster for users in their development projects.

View File

@@ -28,7 +28,8 @@ export async function complexityReportDirect(args, log) {
log.error('complexityReportDirect called without reportPath');
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: 'reportPath is required' }
error: { code: 'MISSING_ARGUMENT', message: 'reportPath is required' },
fromCache: false
};
}
@@ -110,7 +111,8 @@ export async function complexityReportDirect(args, log) {
error: {
code: 'UNEXPECTED_ERROR',
message: error.message
}
},
fromCache: false
};
}
}
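The hunks above (and the similar ones throughout the direct functions below) all add `fromCache: false` to early-return error objects. A hedged helper capturing that envelope might look like this; it is hypothetical, not part of the codebase:

```js
// Standardized direct-function error result with the cache flag.
function errorResult(code, message) {
	return {
		success: false,
		error: { code, message },
		fromCache: false // these results are never served from cache
	};
}
```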

View File

@@ -60,8 +60,7 @@ export async function expandAllTasksDirect(args, log, context = {}) {
useResearch,
additionalContext,
forceFlag,
{ session, mcpLog, projectRoot },
'json'
{ session, mcpLog, projectRoot }
);
// Core function now returns a summary object including the *aggregated* telemetryData

View File

@@ -29,7 +29,7 @@ import { createLogWrapper } from '../../tools/utils.js';
* @param {Object} log - Logger object
* @param {Object} context - Context object containing session
* @param {Object} [context.session] - MCP Session object
* @returns {Promise<Object>} - Task expansion result { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - Task expansion result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
export async function expandTaskDirect(args, log, context = {}) {
const { session } = context; // Extract session
@@ -54,7 +54,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
@@ -72,7 +73,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'INPUT_VALIDATION_ERROR',
message: 'Task ID is required'
}
},
fromCache: false
};
}
@@ -103,7 +105,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'INVALID_TASKS_FILE',
message: `No valid tasks found in ${tasksPath}. readJSON returned: ${JSON.stringify(data)}`
}
},
fromCache: false
};
}
@@ -118,7 +121,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'TASK_NOT_FOUND',
message: `Task with ID ${taskId} not found`
}
},
fromCache: false
};
}
@@ -129,7 +133,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'TASK_COMPLETED',
message: `Task ${taskId} is already marked as ${task.status} and cannot be expanded`
}
},
fromCache: false
};
}
@@ -146,7 +151,8 @@ export async function expandTaskDirect(args, log, context = {}) {
task,
subtasksAdded: 0,
hasExistingSubtasks
}
},
fromCache: false
};
}
@@ -226,7 +232,8 @@ export async function expandTaskDirect(args, log, context = {}) {
subtasksAdded,
hasExistingSubtasks,
telemetryData: coreResult.telemetryData
}
},
fromCache: false
};
} catch (error) {
// Make sure to restore normal logging even if there's an error
@@ -238,7 +245,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'CORE_FUNCTION_ERROR',
message: error.message || 'Failed to expand task'
}
},
fromCache: false
};
}
} catch (error) {
@@ -248,7 +256,8 @@ export async function expandTaskDirect(args, log, context = {}) {
error: {
code: 'CORE_FUNCTION_ERROR',
message: error.message || 'Failed to expand task'
}
},
fromCache: false
};
}
}

View File

@@ -28,7 +28,8 @@ export async function generateTaskFilesDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
if (!outputDir) {
@@ -36,7 +37,8 @@ export async function generateTaskFilesDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -63,7 +65,8 @@ export async function generateTaskFilesDirect(args, log) {
log.error(`Error in generateTaskFiles: ${genError.message}`);
return {
success: false,
error: { code: 'GENERATE_FILES_ERROR', message: genError.message }
error: { code: 'GENERATE_FILES_ERROR', message: genError.message },
fromCache: false
};
}
@@ -76,7 +79,8 @@ export async function generateTaskFilesDirect(args, log) {
outputDir: resolvedOutputDir,
taskFiles:
'Individual task files have been generated in the output directory'
}
},
fromCache: false // This operation always modifies state and should never be cached
};
} catch (error) {
// Make sure to restore normal logging if an outer error occurs
@@ -88,7 +92,8 @@ export async function generateTaskFilesDirect(args, log) {
error: {
code: 'GENERATE_TASKS_ERROR',
message: error.message || 'Unknown error generating task files'
}
},
fromCache: false
};
}
}

View File

@@ -41,7 +41,8 @@ export async function initializeProjectDirect(args, log, context = {}) {
code: 'INVALID_TARGET_DIRECTORY',
message: `Cannot initialize project: Invalid target directory '${targetDirectory}' received. Please ensure a valid workspace/folder is open or specified.`,
details: `Received args.projectRoot: ${args.projectRoot}` // Show what was received
}
},
fromCache: false
};
}
@@ -74,7 +75,7 @@ export async function initializeProjectDirect(args, log, context = {}) {
resultData = {
message: 'Project initialized successfully.',
next_step:
'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in .taskmaster/docs/ directory). You can create a prd.txt file by asking the user about their idea, and then using the .taskmaster/templates/example_prd.txt file as a template to generate a prd.txt file in .taskmaster/docs/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in .taskmaster/docs/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
'Now that the project is initialized, the next step is to create the tasks by parsing a PRD. This will create the tasks folder and the initial task files (tasks folder will be created when parse-prd is run). The parse-prd tool will require a prd.txt file as input (typically found in the scripts/ directory at the project root). You can create a prd.txt file by asking the user about their idea, and then using the scripts/example_prd.txt file as a template to generate a prd.txt file in scripts/. You may skip all of this if the user already has a prd.txt file. You can THEN use the parse-prd tool to create the tasks. So: step 1 after initialization is to create a prd.txt file in scripts/prd.txt or confirm the user already has one. Step 2 is to use the parse-prd tool to create the tasks. Do not bother looking for tasks after initialization, just use the parse-prd tool to create the tasks after creating a prd.txt from which to parse the tasks. You do NOT need to reinitialize the project to parse-prd.',
...result
};
success = true;
@@ -96,8 +97,8 @@ export async function initializeProjectDirect(args, log, context = {}) {
}
if (success) {
return { success: true, data: resultData };
return { success: true, data: resultData, fromCache: false };
} else {
return { success: false, error: errorResult };
return { success: false, error: errorResult, fromCache: false };
}
}

View File

@@ -14,7 +14,7 @@ import {
*
* @param {Object} args - Command arguments (now expecting tasksJsonPath explicitly).
* @param {Object} log - Logger object.
* @returns {Promise<Object>} - Task list result { success: boolean, data?: any, error?: { code: string, message: string } }.
* @returns {Promise<Object>} - Task list result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }.
*/
export async function listTasksDirect(args, log) {
// Destructure the explicit tasksJsonPath from args
@@ -27,7 +27,8 @@ export async function listTasksDirect(args, log) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}

View File

@@ -3,7 +3,7 @@
*/
import { moveTask } from '../../../../scripts/modules/task-manager.js';
import { findTasksPath } from '../utils/path-utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
import {
enableSilentMode,
disableSilentMode
@@ -13,10 +13,11 @@ import {
* Move a task or subtask to a new position
* @param {Object} args - Function arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file
* @param {string} args.sourceId - ID of the task/subtask to move (e.g., '5' or '5.2')
* @param {string} args.destinationId - ID of the destination (e.g., '7' or '7.3')
* @param {string} args.sourceId - ID of the task/subtask to move (e.g., '5' or '5.2' or '5,6,7')
* @param {string} args.destinationId - ID of the destination (e.g., '7' or '7.3' or '7,8,9')
* @param {string} args.file - Alternative path to the tasks.json file
* @param {string} args.projectRoot - Project root directory
* @param {boolean} args.generateFiles - Whether to regenerate task files after moving (default: true)
* @param {Object} log - Logger object
* @returns {Promise<{success: boolean, data?: Object, error?: Object}>}
*/
@@ -58,18 +59,19 @@ export async function moveTaskDirect(args, log, context = {}) {
}
};
}
tasksPath = findTasksPath(args, log);
tasksPath = findTasksJsonPath(args, log);
}
// Enable silent mode to prevent console output during MCP operation
enableSilentMode();
// Call the core moveTask function, always generate files
// Call the core moveTask function with file generation control
const generateFiles = args.generateFiles !== false; // Default to true
const result = await moveTask(
tasksPath,
args.sourceId,
args.destinationId,
true
generateFiles
);
// Restore console output
@@ -78,7 +80,7 @@ export async function moveTaskDirect(args, log, context = {}) {
return {
success: true,
data: {
movedTask: result.movedTask,
...result,
message: `Successfully moved task/subtask ${args.sourceId} to ${args.destinationId}`
}
};

View File

@@ -19,7 +19,7 @@ import {
* @param {Object} args - Command arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {Object} log - Logger object
* @returns {Promise<Object>} - Next task result { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - Next task result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: boolean }
*/
export async function nextTaskDirect(args, log) {
// Destructure expected args
@@ -32,7 +32,8 @@ export async function nextTaskDirect(args, log) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
@@ -120,7 +121,7 @@ export async function nextTaskDirect(args, log) {
// Use the caching utility
try {
const result = await coreNextTaskAction();
log.info('nextTaskDirect completed.');
log.info(`nextTaskDirect completed.`);
return result;
} catch (error) {
log.error(`Unexpected error during nextTask: ${error.message}`);
@@ -129,7 +130,8 @@ export async function nextTaskDirect(args, log) {
error: {
code: 'UNEXPECTED_ERROR',
message: error.message
}
},
fromCache: false
};
}
}

View File

@@ -13,8 +13,6 @@ import {
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
import { getDefaultNumTasks } from '../../../../scripts/modules/config-manager.js';
import { resolvePrdPath, resolveProjectPath } from '../utils/path-utils.js';
import { TASKMASTER_TASKS_FILE } from '../../../../src/constants/paths.js';
/**
* Direct function wrapper for parsing PRD documents and generating tasks.
@@ -51,20 +49,7 @@ export async function parsePRDDirect(args, log, context = {}) {
}
};
}
// Resolve input path using path utilities
let inputPath;
if (inputArg) {
try {
inputPath = resolvePrdPath({ input: inputArg, projectRoot }, session);
} catch (error) {
logWrapper.error(`Error resolving PRD path: ${error.message}`);
return {
success: false,
error: { code: 'FILE_NOT_FOUND', message: error.message }
};
}
} else {
if (!inputArg) {
logWrapper.error('parsePRDDirect called without input path');
return {
success: false,
@@ -72,13 +57,11 @@ export async function parsePRDDirect(args, log, context = {}) {
};
}
// Resolve output path - use new path utilities for default
// Resolve input and output paths relative to projectRoot
const inputPath = path.resolve(projectRoot, inputArg);
const outputPath = outputArg
? path.isAbsolute(outputArg)
? outputArg
: path.resolve(projectRoot, outputArg)
: resolveProjectPath(TASKMASTER_TASKS_FILE, args) ||
path.resolve(projectRoot, TASKMASTER_TASKS_FILE);
? path.resolve(projectRoot, outputArg)
: path.resolve(projectRoot, 'tasks', 'tasks.json'); // Default output path
// Check if input file exists
if (!fs.existsSync(inputPath)) {
@@ -96,12 +79,17 @@ export async function parsePRDDirect(args, log, context = {}) {
logWrapper.info(`Creating output directory: ${outputDir}`);
fs.mkdirSync(outputDir, { recursive: true });
}
} catch (error) {
const errorMsg = `Failed to create output directory ${outputDir}: ${error.message}`;
logWrapper.error(errorMsg);
} catch (dirError) {
logWrapper.error(
`Failed to create output directory ${outputDir}: ${dirError.message}`
);
// Return an error response immediately if dir creation fails
return {
success: false,
error: { code: 'DIRECTORY_CREATE_FAILED', message: errorMsg }
error: {
code: 'DIRECTORY_CREATION_ERROR',
message: `Failed to create output directory: ${dirError.message}`
}
};
}
@@ -109,7 +97,7 @@ export async function parsePRDDirect(args, log, context = {}) {
if (numTasksArg) {
numTasks =
typeof numTasksArg === 'string' ? parseInt(numTasksArg, 10) : numTasksArg;
if (Number.isNaN(numTasks) || numTasks <= 0) {
if (isNaN(numTasks) || numTasks <= 0) {
// Ensure positive number
numTasks = getDefaultNumTasks(projectRoot); // Fallback to default if parsing fails or invalid
logWrapper.warn(

View File

@@ -21,7 +21,7 @@ import {
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file.
* @param {string} args.id - The ID(s) of the task(s) or subtask(s) to remove (comma-separated for multiple).
* @param {Object} log - Logger object
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string } }
* @returns {Promise<Object>} - Remove task result { success: boolean, data?: any, error?: { code: string, message: string }, fromCache: false }
*/
export async function removeTaskDirect(args, log) {
// Destructure expected args
@@ -35,7 +35,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'MISSING_ARGUMENT',
message: 'tasksJsonPath is required'
}
},
fromCache: false
};
}
@@ -47,7 +48,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'INPUT_VALIDATION_ERROR',
message: 'Task ID is required'
}
},
fromCache: false
};
}
@@ -66,7 +68,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'INVALID_TASKS_FILE',
message: `No valid tasks found in ${tasksJsonPath}`
}
},
fromCache: false
};
}
@@ -80,7 +83,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'INVALID_TASK_ID',
message: `The following tasks were not found: ${invalidTasks.join(', ')}`
}
},
fromCache: false
};
}
@@ -129,7 +133,8 @@ export async function removeTaskDirect(args, log) {
details: failedRemovals
.map((r) => `${r.taskId}: ${r.error}`)
.join('; ')
}
},
fromCache: false
};
}
@@ -142,7 +147,8 @@ export async function removeTaskDirect(args, log) {
failed: failedRemovals.length,
results: results,
tasksPath: tasksJsonPath
}
},
fromCache: false
};
} catch (error) {
// Ensure silent mode is disabled even if an outer error occurs
@@ -155,7 +161,8 @@ export async function removeTaskDirect(args, log) {
error: {
code: 'UNEXPECTED_ERROR',
message: error.message
}
},
fromCache: false
};
}
}

View File

@@ -0,0 +1,159 @@
/**
* research.js
* Direct function implementation for AI-powered research queries
*/
import { performResearch } from '../../../../scripts/modules/task-manager.js';
import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
/**
* Direct function wrapper for performing AI-powered research with project context.
*
* @param {Object} args - Command arguments
* @param {string} args.query - Research query/prompt (required)
* @param {string} [args.taskIds] - Comma-separated list of task/subtask IDs for context
* @param {string} [args.filePaths] - Comma-separated list of file paths for context
* @param {string} [args.customContext] - Additional custom context text
* @param {boolean} [args.includeProjectTree=false] - Include project file tree in context
* @param {string} [args.detailLevel='medium'] - Detail level: 'low', 'medium', 'high'
* @param {string} [args.projectRoot] - Project root path
* @param {Object} log - Logger object
* @param {Object} context - Additional context (session)
* @returns {Promise<Object>} - Result object { success: boolean, data?: any, error?: { code: string, message: string } }
*/
export async function researchDirect(args, log, context = {}) {
// Destructure expected args
const {
query,
taskIds,
filePaths,
customContext,
includeProjectTree = false,
detailLevel = 'medium',
projectRoot
} = args;
const { session } = context; // Destructure session from context
// Enable silent mode to prevent console logs from interfering with JSON response
enableSilentMode();
// Create logger wrapper using the utility
const mcpLog = createLogWrapper(log);
try {
// Check required parameters
if (!query || typeof query !== 'string' || query.trim().length === 0) {
log.error('Missing or invalid required parameter: query');
disableSilentMode();
return {
success: false,
error: {
code: 'MISSING_PARAMETER',
message:
'The query parameter is required and must be a non-empty string'
}
};
}
// Parse comma-separated task IDs if provided
const parsedTaskIds = taskIds
? taskIds
.split(',')
.map((id) => id.trim())
.filter((id) => id.length > 0)
: [];
// Parse comma-separated file paths if provided
const parsedFilePaths = filePaths
? filePaths
.split(',')
.map((path) => path.trim())
.filter((path) => path.length > 0)
: [];
// Validate detail level
const validDetailLevels = ['low', 'medium', 'high'];
if (!validDetailLevels.includes(detailLevel)) {
log.error(`Invalid detail level: ${detailLevel}`);
disableSilentMode();
return {
success: false,
error: {
code: 'INVALID_PARAMETER',
message: `Detail level must be one of: ${validDetailLevels.join(', ')}`
}
};
}
log.info(
`Performing research query: "${query.substring(0, 100)}${query.length > 100 ? '...' : ''}", ` +
`taskIds: [${parsedTaskIds.join(', ')}], ` +
`filePaths: [${parsedFilePaths.join(', ')}], ` +
`detailLevel: ${detailLevel}, ` +
`includeProjectTree: ${includeProjectTree}, ` +
`projectRoot: ${projectRoot}`
);
// Prepare options for the research function
const researchOptions = {
taskIds: parsedTaskIds,
filePaths: parsedFilePaths,
customContext: customContext || '',
includeProjectTree,
detailLevel,
projectRoot
};
// Prepare context for the research function
const researchContext = {
session,
mcpLog,
commandName: 'research',
outputType: 'mcp'
};
// Call the performResearch function
const result = await performResearch(
query.trim(),
researchOptions,
researchContext,
'json', // outputFormat - use 'json' to suppress CLI UI
false // allowFollowUp - disable for MCP calls
);
// Restore normal logging
disableSilentMode();
return {
success: true,
data: {
query: result.query,
result: result.result,
contextSize: result.contextSize,
contextTokens: result.contextTokens,
tokenBreakdown: result.tokenBreakdown,
systemPromptTokens: result.systemPromptTokens,
userPromptTokens: result.userPromptTokens,
totalInputTokens: result.totalInputTokens,
detailLevel: result.detailLevel,
telemetryData: result.telemetryData
}
};
} catch (error) {
// Make sure to restore normal logging even if there's an error
disableSilentMode();
log.error(`Error in researchDirect: ${error.message}`);
return {
success: false,
error: {
code: error.code || 'RESEARCH_ERROR',
message: error.message
}
};
}
}

View File

@@ -29,7 +29,8 @@ export async function setTaskStatusDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -40,7 +41,8 @@ export async function setTaskStatusDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_TASK_ID', message: errorMessage }
error: { code: 'MISSING_TASK_ID', message: errorMessage },
fromCache: false
};
}
@@ -50,7 +52,8 @@ export async function setTaskStatusDirect(args, log) {
log.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_STATUS', message: errorMessage }
error: { code: 'MISSING_STATUS', message: errorMessage },
fromCache: false
};
}
@@ -79,7 +82,8 @@ export async function setTaskStatusDirect(args, log) {
taskId,
status: newStatus,
tasksPath: tasksPath // Return the path used
}
},
fromCache: false // This operation always modifies state and should never be cached
};
// If the task was completed, attempt to fetch the next task
@@ -122,7 +126,8 @@ export async function setTaskStatusDirect(args, log) {
error: {
code: 'SET_STATUS_ERROR',
message: error.message || 'Unknown error setting task status'
}
},
fromCache: false
};
} finally {
// ALWAYS restore normal logging in finally block
@@ -140,7 +145,8 @@ export async function setTaskStatusDirect(args, log) {
error: {
code: 'SET_STATUS_ERROR',
message: error.message || 'Unknown error setting task status'
}
},
fromCache: false
};
}
}

View File

@@ -8,14 +8,14 @@ import {
readComplexityReport,
readJSON
} from '../../../../scripts/modules/utils.js';
import { findTasksPath } from '../utils/path-utils.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
/**
* Direct function wrapper for getting task details.
*
* @param {Object} args - Command arguments.
* @param {string} args.id - Task ID to show.
* @param {string} [args.file] - Optional path to the tasks file (passed to findTasksPath).
* @param {string} [args.file] - Optional path to the tasks file (passed to findTasksJsonPath).
* @param {string} args.reportPath - Explicit path to the complexity report file.
* @param {string} [args.status] - Optional status to filter subtasks by.
* @param {string} args.projectRoot - Absolute path to the project root directory (already normalized by tool).
@@ -37,7 +37,7 @@ export async function showTaskDirect(args, log) {
let tasksJsonPath;
try {
// Use the projectRoot passed directly from args
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: projectRoot, file: file },
log
);
@@ -66,32 +66,91 @@ export async function showTaskDirect(args, log) {
const complexityReport = readComplexityReport(reportPath);
const { task, originalSubtaskCount } = findTaskById(
tasksData.tasks,
id,
complexityReport,
status
);
// Parse comma-separated IDs
const taskIds = id
.split(',')
.map((taskId) => taskId.trim())
.filter((taskId) => taskId.length > 0);
if (!task) {
if (taskIds.length === 0) {
return {
success: false,
error: {
code: 'TASK_NOT_FOUND',
message: `Task or subtask with ID ${id} not found`
code: 'INVALID_TASK_ID',
message: 'No valid task IDs provided'
}
};
}
log.info(`Successfully retrieved task ${id}.`);
// Handle single task ID (existing behavior)
if (taskIds.length === 1) {
const { task, originalSubtaskCount } = findTaskById(
tasksData.tasks,
taskIds[0],
complexityReport,
status
);
const returnData = { ...task };
if (originalSubtaskCount !== null) {
returnData._originalSubtaskCount = originalSubtaskCount;
returnData._subtaskFilter = status;
if (!task) {
return {
success: false,
error: {
code: 'TASK_NOT_FOUND',
message: `Task or subtask with ID ${taskIds[0]} not found`
}
};
}
log.info(`Successfully retrieved task ${taskIds[0]}.`);
const returnData = { ...task };
if (originalSubtaskCount !== null) {
returnData._originalSubtaskCount = originalSubtaskCount;
returnData._subtaskFilter = status;
}
return { success: true, data: returnData };
}
return { success: true, data: returnData };
// Handle multiple task IDs
const foundTasks = [];
const notFoundIds = [];
taskIds.forEach((taskId) => {
const { task, originalSubtaskCount } = findTaskById(
tasksData.tasks,
taskId,
complexityReport,
status
);
if (task) {
const taskData = { ...task };
if (originalSubtaskCount !== null) {
taskData._originalSubtaskCount = originalSubtaskCount;
taskData._subtaskFilter = status;
}
foundTasks.push(taskData);
} else {
notFoundIds.push(taskId);
}
});
log.info(
`Successfully retrieved ${foundTasks.length} of ${taskIds.length} requested tasks.`
);
// Return multiple tasks with metadata
return {
success: true,
data: {
tasks: foundTasks,
requestedIds: taskIds,
foundCount: foundTasks.length,
notFoundIds: notFoundIds,
isMultiple: true
}
};
} catch (error) {
log.error(`Error showing task ${id}: ${error.message}`);
return {

View File

@@ -42,7 +42,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -53,7 +54,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_SUBTASK_ID', message: errorMessage }
error: { code: 'INVALID_SUBTASK_ID', message: errorMessage },
fromCache: false
};
}
@@ -63,7 +65,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_PROMPT', message: errorMessage }
error: { code: 'MISSING_PROMPT', message: errorMessage },
fromCache: false
};
}
@@ -74,7 +77,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
log.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_SUBTASK_ID_TYPE', message: errorMessage }
error: { code: 'INVALID_SUBTASK_ID_TYPE', message: errorMessage },
fromCache: false
};
}
@@ -84,7 +88,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
log.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_SUBTASK_ID_FORMAT', message: errorMessage }
error: { code: 'INVALID_SUBTASK_ID_FORMAT', message: errorMessage },
fromCache: false
};
}
@@ -123,7 +128,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
logWrapper.error(message);
return {
success: false,
error: { code: 'SUBTASK_NOT_FOUND', message: message }
error: { code: 'SUBTASK_NOT_FOUND', message: message },
fromCache: false
};
}
@@ -140,7 +146,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
tasksPath,
useResearch,
telemetryData: coreResult.telemetryData
}
},
fromCache: false
};
} catch (error) {
logWrapper.error(`Error updating subtask by ID: ${error.message}`);
@@ -149,7 +156,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
error: {
code: 'UPDATE_SUBTASK_CORE_ERROR',
message: error.message || 'Unknown error updating subtask'
}
},
fromCache: false
};
} finally {
if (!wasSilent && isSilentMode()) {
@@ -166,7 +174,8 @@ export async function updateSubtaskByIdDirect(args, log, context = {}) {
error: {
code: 'DIRECT_FUNCTION_SETUP_ERROR',
message: error.message || 'Unknown setup error'
}
},
fromCache: false
};
}
}

View File

@@ -42,7 +42,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_ARGUMENT', message: errorMessage }
error: { code: 'MISSING_ARGUMENT', message: errorMessage },
fromCache: false
};
}
@@ -53,7 +54,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_TASK_ID', message: errorMessage }
error: { code: 'MISSING_TASK_ID', message: errorMessage },
fromCache: false
};
}
@@ -63,7 +65,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'MISSING_PROMPT', message: errorMessage }
error: { code: 'MISSING_PROMPT', message: errorMessage },
fromCache: false
};
}
@@ -81,7 +84,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
logWrapper.error(errorMessage);
return {
success: false,
error: { code: 'INVALID_TASK_ID', message: errorMessage }
error: { code: 'INVALID_TASK_ID', message: errorMessage },
fromCache: false
};
}
}
@@ -133,7 +137,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
taskId: taskId,
updated: false,
telemetryData: coreResult?.telemetryData
}
},
fromCache: false
};
}
@@ -150,7 +155,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
updated: true,
updatedTask: coreResult.updatedTask,
telemetryData: coreResult.telemetryData
}
},
fromCache: false
};
} catch (error) {
logWrapper.error(`Error updating task by ID: ${error.message}`);
@@ -159,7 +165,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
error: {
code: 'UPDATE_TASK_CORE_ERROR',
message: error.message || 'Unknown error updating task'
}
},
fromCache: false
};
} finally {
if (!wasSilent && isSilentMode()) {
@@ -174,7 +181,8 @@ export async function updateTaskByIdDirect(args, log, context = {}) {
error: {
code: 'DIRECT_FUNCTION_SETUP_ERROR',
message: error.message || 'Unknown setup error'
}
},
fromCache: false
};
}
}

View File

@@ -21,7 +21,7 @@ import {
*/
export async function updateTasksDirect(args, log, context = {}) {
const { session } = context;
const { from, prompt, research, tasksJsonPath, projectRoot } = args;
const { from, prompt, research, file: fileArg, projectRoot } = args;
// Create the standard logger wrapper
const logWrapper = createLogWrapper(log);
@@ -60,15 +60,20 @@ export async function updateTasksDirect(args, log, context = {}) {
};
}
// Resolve tasks file path
const tasksFile = fileArg
? path.resolve(projectRoot, fileArg)
: path.resolve(projectRoot, 'tasks', 'tasks.json');
logWrapper.info(
`Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksJsonPath}, ProjectRoot: ${projectRoot}`
`Updating tasks via direct function. From: ${from}, Research: ${research}, File: ${tasksFile}, ProjectRoot: ${projectRoot}`
);
enableSilentMode(); // Enable silent mode
try {
// Call the core updateTasks function
const result = await updateTasks(
tasksJsonPath,
tasksFile,
from,
prompt,
research,
@@ -88,7 +93,7 @@ export async function updateTasksDirect(args, log, context = {}) {
success: true,
data: {
message: `Successfully updated ${result.updatedTasks.length} tasks.`,
tasksPath: tasksJsonPath,
tasksFile,
updatedCount: result.updatedTasks.length,
telemetryData: result.telemetryData
}
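The file-resolution rule this hunk introduces, as a standalone sketch (assumes Node's path module, which the surrounding file already imports):

import path from 'path';

// Explicit absolute paths win because path.resolve returns an absolute input unchanged;
// relative paths resolve against the project root; no argument falls back to the default.
function resolveTasksFile(projectRoot, fileArg) {
  return fileArg
    ? path.resolve(projectRoot, fileArg)
    : path.resolve(projectRoot, 'tasks', 'tasks.json');
}

// resolveTasksFile('/proj', 'custom/tasks.json') -> '/proj/custom/tasks.json'
// resolveTasksFile('/proj', '/tmp/t.json')       -> '/tmp/t.json'
// resolveTasksFile('/proj', undefined)           -> '/proj/tasks/tasks.json'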

View File

@@ -31,9 +31,10 @@ import { removeTaskDirect } from './direct-functions/remove-task.js';
import { initializeProjectDirect } from './direct-functions/initialize-project.js';
import { modelsDirect } from './direct-functions/models.js';
import { moveTaskDirect } from './direct-functions/move-task.js';
import { researchDirect } from './direct-functions/research.js';
// Re-export utility functions
export { findTasksPath } from './utils/path-utils.js';
export { findTasksJsonPath } from './utils/path-utils.js';
// Use Map for potential future enhancements like introspection or dynamic dispatch
export const directFunctions = new Map([
@@ -62,7 +63,8 @@ export const directFunctions = new Map([
['removeTaskDirect', removeTaskDirect],
['initializeProjectDirect', initializeProjectDirect],
['modelsDirect', modelsDirect],
['moveTaskDirect', moveTaskDirect]
['moveTaskDirect', moveTaskDirect],
['researchDirect', researchDirect]
]);
// Re-export all direct function implementations
@@ -92,5 +94,6 @@ export {
removeTaskDirect,
initializeProjectDirect,
modelsDirect,
moveTaskDirect
moveTaskDirect,
researchDirect
};
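Because directFunctions is a Map keyed by function name, callers can dispatch generically. A hypothetical dispatcher sketch; the registered names are real exports above, but this helper is not part of the diff.

async function callDirectFunction(name, args, log, context = {}) {
  const fn = directFunctions.get(name);
  if (!fn) {
    return {
      success: false,
      error: { code: 'UNKNOWN_FUNCTION', message: `No direct function registered as '${name}'` }
    };
  }
  return fn(args, log, context); // all direct functions share this (args, log, context) signature
}

// e.g. await callDirectFunction('researchDirect', { query: '...' }, log, { session });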

View File

@@ -1,220 +1,436 @@
/**
* path-utils.js
* Utility functions for file path operations in Task Master
*
* This module provides robust path resolution for both:
* 1. PACKAGE PATH: Where task-master code is installed
* (global node_modules OR local ./node_modules/task-master OR direct from repo)
* 2. PROJECT PATH: Where user's tasks.json resides (typically user's project root)
*/
import path from 'path';
import {
findTasksPath as coreFindTasksPath,
findPRDPath as coreFindPrdPath,
findComplexityReportPath as coreFindComplexityReportPath,
findProjectRoot as coreFindProjectRoot,
normalizeProjectRoot
} from '../../../../src/utils/path-utils.js';
import { PROJECT_MARKERS } from '../../../../src/constants/paths.js';
import fs from 'fs';
import { fileURLToPath } from 'url';
import os from 'os';
// Store last found project root to improve performance on subsequent calls (primarily for CLI)
export let lastFoundProjectRoot = null;
// Project marker files that indicate a potential project root
export const PROJECT_MARKERS = [
// Task Master specific
'tasks.json',
'tasks/tasks.json',
// Common version control
'.git',
'.svn',
// Common package files
'package.json',
'pyproject.toml',
'Gemfile',
'go.mod',
'Cargo.toml',
// Common IDE/editor folders
'.cursor',
'.vscode',
'.idea',
// Common dependency directories (check if directory)
'node_modules',
'venv',
'.venv',
// Common config files
'.env',
'.eslintrc',
'tsconfig.json',
'babel.config.js',
'jest.config.js',
'webpack.config.js',
// Common CI/CD files
'.github/workflows',
'.gitlab-ci.yml',
'.circleci/config.yml'
];
/**
* MCP-specific path utilities that extend core path utilities with session support
* This module handles session-specific path resolution for the MCP server
* Gets the path to the task-master package installation directory
* NOTE: This might become unnecessary if CLI fallback in MCP utils is removed.
* @returns {string} - Absolute path to the package installation directory
*/
export function getPackagePath() {
// When running from source, __dirname is the directory containing this file
// When running from npm, we need to find the package root
const thisFilePath = fileURLToPath(import.meta.url);
const thisFileDir = path.dirname(thisFilePath);
/**
* Silent logger for MCP context to prevent console output
*/
const silentLogger = {
info: () => {},
warn: () => {},
error: () => {},
debug: () => {},
success: () => {}
};
/**
* Cache for last found project root to improve performance
*/
export const lastFoundProjectRoot = null;
/**
* Find PRD file with MCP support
* @param {string} [explicitPath] - Explicit path to PRD file (highest priority)
* @param {Object} [args] - Arguments object for context
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to PRD file or null if not found
*/
export function findPrdPath(explicitPath, args = null, log = silentLogger) {
return coreFindPrdPath(explicitPath, args, log);
// Navigate from core/utils up to the package root
// In dev: /path/to/task-master/mcp-server/src/core/utils -> /path/to/task-master
// In npm: /path/to/node_modules/task-master/mcp-server/src/core/utils -> /path/to/node_modules/task-master
return path.resolve(thisFileDir, '../../../../');
}
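A worked example of the four-level walk, under the directory layout the comments assume:

// fileURLToPath(import.meta.url)            -> /repo/task-master/mcp-server/src/core/utils/path-utils.js
// path.dirname(thisFilePath)                -> /repo/task-master/mcp-server/src/core/utils
// path.resolve(thisFileDir, '../../../../') -> /repo/task-master (the package root)
// The same arithmetic holds under node_modules/task-master when installed from npm.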
/**
* Resolve tasks.json path from arguments
* Prioritizes explicit path parameter, then uses fallback logic
* @param {Object} args - Arguments object containing projectRoot and optional file path
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to tasks.json or null if not found
* Finds the absolute path to the tasks.json file based on project root and arguments.
* @param {Object} args - Command arguments, potentially including 'projectRoot' and 'file'.
* @param {Object} log - Logger object.
* @returns {string} - Absolute path to the tasks.json file.
* @throws {Error} - If tasks.json cannot be found.
*/
export function resolveTasksPath(args, log = silentLogger) {
// Get explicit path from args.file if provided
const explicitPath = args?.file;
const rawProjectRoot = args?.projectRoot;
export function findTasksJsonPath(args, log) {
// PRECEDENCE ORDER for finding tasks.json:
// 1. Explicitly provided `projectRoot` in args (Highest priority, expected in MCP context)
// 2. Previously found/cached `lastFoundProjectRoot` (primarily for CLI performance)
// 3. Search upwards from current working directory (`process.cwd()`) - CLI usage
// If explicit path is provided and absolute, use it directly
if (explicitPath && path.isAbsolute(explicitPath)) {
return explicitPath;
// 1. If project root is explicitly provided (e.g., from MCP session), use it directly
if (args.projectRoot) {
const projectRoot = args.projectRoot;
log.info(`Using explicitly provided project root: ${projectRoot}`);
try {
// This will throw if tasks.json isn't found within this root
return findTasksJsonInDirectory(projectRoot, args.file, log);
} catch (error) {
// Include debug info in error
const debugInfo = {
projectRoot,
currentDir: process.cwd(),
serverDir: path.dirname(process.argv[1]),
possibleProjectRoot: path.resolve(
path.dirname(process.argv[1]),
'../..'
),
lastFoundProjectRoot,
searchedPaths: error.message
};
error.message = `Tasks file not found in any of the expected locations relative to project root "${projectRoot}" (from session).\nDebug Info: ${JSON.stringify(debugInfo, null, 2)}`;
throw error;
}
}
// Normalize project root if provided
const projectRoot = rawProjectRoot
? normalizeProjectRoot(rawProjectRoot)
: null;
// --- Fallback logic primarily for CLI or when projectRoot isn't passed ---
// If explicit path is relative, resolve it relative to normalized projectRoot
if (explicitPath && projectRoot) {
return path.resolve(projectRoot, explicitPath);
// 2. If we have a last known project root that worked, try it first
if (lastFoundProjectRoot) {
log.info(`Trying last known project root: ${lastFoundProjectRoot}`);
try {
// Use the cached root
const tasksPath = findTasksJsonInDirectory(
lastFoundProjectRoot,
args.file,
log
);
return tasksPath; // Return if found in cached root
} catch (error) {
log.info(
`Task file not found in last known project root, continuing search.`
);
// Continue with search if not found in cache
}
}
// Use core findTasksPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindTasksPath(explicitPath, { projectRoot }, log);
}
// 3. Start search from current directory (most common CLI scenario)
const startDir = process.cwd();
log.info(
`Searching for tasks.json starting from current directory: ${startDir}`
);
// Fallback to core function without projectRoot context
return coreFindTasksPath(explicitPath, null, log);
// Try to find tasks.json by walking up the directory tree from cwd
try {
// This will throw if not found in the CWD tree
return findTasksJsonWithParentSearch(startDir, args.file, log);
} catch (error) {
// If all attempts fail, augment and throw the original error from CWD search
error.message = `${error.message}\n\nPossible solutions:\n1. Run the command from your project directory containing tasks.json\n2. Use --project-root=/path/to/project to specify the project location (if using CLI)\n3. Ensure the project root is correctly passed from the client (if using MCP)\n\nCurrent working directory: ${startDir}\nLast known project root: ${lastFoundProjectRoot}\nProject root from args: ${args.projectRoot}`;
throw error;
}
}
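A usage sketch of the precedence order documented above; paths are hypothetical, and console stands in for any logger with info/warn/error methods.

const log = console;

// MCP context: the client supplies projectRoot, so resolution is direct.
const mcpPath = findTasksJsonPath(
  { projectRoot: '/home/user/my-app', file: 'tasks/tasks.json' }, // file is optional
  log
);

// CLI context: no projectRoot, so the cached root is tried first, then an upward
// walk from process.cwd(). Throws with debug info if nothing is found.
const cliPath = findTasksJsonPath({}, log);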
/**
* Resolve PRD path from arguments
* @param {Object} args - Arguments object containing projectRoot and optional input path
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to PRD file or null if not found
* Check if a directory contains any project marker files or directories
* @param {string} dirPath - Directory to check
* @returns {boolean} - True if the directory contains any project markers
*/
export function resolvePrdPath(args, log = silentLogger) {
// Get explicit path from args.input if provided
const explicitPath = args?.input;
const rawProjectRoot = args?.projectRoot;
// If explicit path is provided and absolute, use it directly
if (explicitPath && path.isAbsolute(explicitPath)) {
return explicitPath;
}
// Normalize project root if provided
const projectRoot = rawProjectRoot
? normalizeProjectRoot(rawProjectRoot)
: null;
// If explicit path is relative, resolve it relative to normalized projectRoot
if (explicitPath && projectRoot) {
return path.resolve(projectRoot, explicitPath);
}
// Use core findPRDPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindPrdPath(explicitPath, { projectRoot }, log);
}
// Fallback to core function without projectRoot context
return coreFindPrdPath(explicitPath, null, log);
function hasProjectMarkers(dirPath) {
return PROJECT_MARKERS.some((marker) => {
const markerPath = path.join(dirPath, marker);
// Check if the marker exists as either a file or directory
return fs.existsSync(markerPath);
});
}
/**
* Resolve complexity report path from arguments
* @param {Object} args - Arguments object containing projectRoot and optional complexityReport path
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to complexity report or null if not found
* Search for tasks.json in a specific directory
* @param {string} dirPath - Directory to search in
* @param {string} explicitFilePath - Optional explicit file path relative to dirPath
* @param {Object} log - Logger object
* @returns {string} - Absolute path to tasks.json
* @throws {Error} - If tasks.json cannot be found
*/
export function resolveComplexityReportPath(args, log = silentLogger) {
// Get explicit path from args.complexityReport if provided
const explicitPath = args?.complexityReport;
const rawProjectRoot = args?.projectRoot;
function findTasksJsonInDirectory(dirPath, explicitFilePath, log) {
const possiblePaths = [];
// If explicit path is provided and absolute, use it directly
if (explicitPath && path.isAbsolute(explicitPath)) {
return explicitPath;
// 1. If a file is explicitly provided relative to dirPath
if (explicitFilePath) {
possiblePaths.push(path.resolve(dirPath, explicitFilePath));
}
// Normalize project root if provided
const projectRoot = rawProjectRoot
? normalizeProjectRoot(rawProjectRoot)
: null;
// 2. Check the standard locations relative to dirPath
possiblePaths.push(
path.join(dirPath, 'tasks.json'),
path.join(dirPath, 'tasks', 'tasks.json')
);
// If explicit path is relative, resolve it relative to normalized projectRoot
if (explicitPath && projectRoot) {
return path.resolve(projectRoot, explicitPath);
log.info(`Checking potential task file paths: ${possiblePaths.join(', ')}`);
// Find the first existing path
for (const p of possiblePaths) {
log.info(`Checking if exists: ${p}`);
const exists = fs.existsSync(p);
log.info(`Path ${p} exists: ${exists}`);
if (exists) {
log.info(`Found tasks file at: ${p}`);
// Store the project root for future use
lastFoundProjectRoot = dirPath;
return p;
}
}
// Use core findComplexityReportPath with explicit path and normalized projectRoot context
if (projectRoot) {
return coreFindComplexityReportPath(explicitPath, { projectRoot }, log);
}
// Fallback to core function without projectRoot context
return coreFindComplexityReportPath(explicitPath, null, log);
// If no file was found, throw an error
const error = new Error(
`Tasks file not found in any of the expected locations relative to ${dirPath}: ${possiblePaths.join(', ')}`
);
error.code = 'TASKS_FILE_NOT_FOUND';
throw error;
}
/**
* Resolve any project-relative path from arguments
* @param {string} relativePath - Relative path to resolve
* @param {Object} args - Arguments object containing projectRoot
* @returns {string} - Resolved absolute path
* Recursively search for tasks.json in the given directory and parent directories
* Also looks for project markers to identify potential project roots
* @param {string} startDir - Directory to start searching from
* @param {string} explicitFilePath - Optional explicit file path
* @param {Object} log - Logger object
* @returns {string} - Absolute path to tasks.json
* @throws {Error} - If tasks.json cannot be found in any parent directory
*/
export function resolveProjectPath(relativePath, args) {
// Ensure we have a projectRoot from args
if (!args?.projectRoot) {
throw new Error('projectRoot is required in args to resolve project paths');
function findTasksJsonWithParentSearch(startDir, explicitFilePath, log) {
let currentDir = startDir;
const rootDir = path.parse(currentDir).root;
// Keep traversing up until we hit the root directory
while (currentDir !== rootDir) {
// First check for tasks.json directly
try {
return findTasksJsonInDirectory(currentDir, explicitFilePath, log);
} catch (error) {
// If tasks.json not found but the directory has project markers,
// log it as a potential project root (helpful for debugging)
if (hasProjectMarkers(currentDir)) {
log.info(`Found project markers in ${currentDir}, but no tasks.json`);
}
// Move up to parent directory
const parentDir = path.dirname(currentDir);
// Check if we've reached the root
if (parentDir === currentDir) {
break;
}
log.info(
`Tasks file not found in ${currentDir}, searching in parent directory: ${parentDir}`
);
currentDir = parentDir;
}
}
// Normalize the project root to prevent double .taskmaster paths
const projectRoot = normalizeProjectRoot(args.projectRoot);
// If we've searched all the way to the root and found nothing
const error = new Error(
`Tasks file not found in ${startDir} or any parent directory.`
);
error.code = 'TASKS_FILE_NOT_FOUND';
throw error;
}
// If already absolute, return as-is
if (path.isAbsolute(relativePath)) {
return relativePath;
// Note: findTasksWithNpmConsideration is not used by findTasksJsonPath and might be legacy or used elsewhere.
// If confirmed unused, it could potentially be removed in a separate cleanup.
function findTasksWithNpmConsideration(startDir, log) {
// First try our recursive parent search from cwd
try {
return findTasksJsonWithParentSearch(startDir, null, log);
} catch (error) {
// If that fails, try looking relative to the executable location
const execPath = process.argv[1];
const execDir = path.dirname(execPath);
log.info(`Looking for tasks file relative to executable at: ${execDir}`);
try {
return findTasksJsonWithParentSearch(execDir, null, log);
} catch (secondError) {
// If that also fails, check standard locations in user's home directory
const homeDir = os.homedir();
log.info(`Looking for tasks file in home directory: ${homeDir}`);
try {
// Check standard locations in home dir
return findTasksJsonInDirectory(
path.join(homeDir, '.task-master'),
null,
log
);
} catch (thirdError) {
// If all approaches fail, throw the original error
throw error;
}
}
}
}
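Both search helpers above are instances of the same upward-walk pattern. A minimal standalone sketch, assuming the path and fs imports already present in this file:

function walkUp(startDir, predicate) {
  let dir = startDir;
  while (true) {
    if (predicate(dir)) return dir; // found a directory satisfying the test
    const parent = path.dirname(dir);
    if (parent === dir) return null; // dirname of the root is the root itself
    dir = parent;
  }
}

// e.g. walkUp(process.cwd(), (d) => fs.existsSync(path.join(d, 'tasks.json')));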
/**
* Finds potential PRD document files based on common naming patterns
* @param {string} projectRoot - The project root directory
* @param {string|null} explicitPath - Optional explicit path provided by the user
* @param {Object} log - Logger object
* @returns {string|null} - The path to the first found PRD file, or null if none found
*/
export function findPRDDocumentPath(projectRoot, explicitPath, log) {
// If explicit path is provided, check if it exists
if (explicitPath) {
const fullPath = path.isAbsolute(explicitPath)
? explicitPath
: path.resolve(projectRoot, explicitPath);
if (fs.existsSync(fullPath)) {
log.info(`Using provided PRD document path: ${fullPath}`);
return fullPath;
} else {
log.warn(
`Provided PRD document path not found: ${fullPath}, will search for alternatives`
);
}
}
// Resolve relative to normalized projectRoot
return path.resolve(projectRoot, relativePath);
// Common locations and file patterns for PRD documents
const commonLocations = [
'', // Project root
'scripts/'
];
const commonFileNames = ['PRD.md', 'prd.md', 'PRD.txt', 'prd.txt'];
// Check all possible combinations
for (const location of commonLocations) {
for (const fileName of commonFileNames) {
const potentialPath = path.join(projectRoot, location, fileName);
if (fs.existsSync(potentialPath)) {
log.info(`Found PRD document at: ${potentialPath}`);
return potentialPath;
}
}
}
log.warn(`No PRD document found in common locations within ${projectRoot}`);
return null;
}
export function findComplexityReportPath(projectRoot, explicitPath, log) {
// If explicit path is provided, check if it exists
if (explicitPath) {
const fullPath = path.isAbsolute(explicitPath)
? explicitPath
: path.resolve(projectRoot, explicitPath);
if (fs.existsSync(fullPath)) {
log.info(`Using provided complexity report path: ${fullPath}`);
return fullPath;
} else {
log.warn(
`Provided complexity report path not found: ${fullPath}, will search for alternatives`
);
}
}
// Common locations and file names for complexity reports
const commonLocations = [
'', // Project root
'scripts/'
];
const commonFileNames = [
'complexity-report.json',
'task-complexity-report.json'
];
// Check all possible combinations
for (const location of commonLocations) {
for (const fileName of commonFileNames) {
const potentialPath = path.join(projectRoot, location, fileName);
if (fs.existsSync(potentialPath)) {
log.info(`Found complexity report at: ${potentialPath}`);
return potentialPath;
}
}
}
log.warn(`No complexity report found in common locations within ${projectRoot}`);
return null;
}
/**
* Find project root using core utility
* @param {string} [startDir] - Directory to start searching from
* @returns {string|null} - Project root path or null if not found
* Resolves the tasks output directory path
* @param {string} projectRoot - The project root directory
* @param {string|null} explicitPath - Optional explicit output path provided by the user
* @param {Object} log - Logger object
* @returns {string} - The resolved tasks directory path
*/
export function findProjectRoot(startDir) {
return coreFindProjectRoot(startDir);
}
export function resolveTasksOutputPath(projectRoot, explicitPath, log) {
// If explicit path is provided, use it
if (explicitPath) {
const outputPath = path.isAbsolute(explicitPath)
? explicitPath
: path.resolve(projectRoot, explicitPath);
// MAIN EXPORTS FOR MCP TOOLS - these are the functions MCP tools should use
log.info(`Using provided tasks output path: ${outputPath}`);
return outputPath;
}
/**
* Find tasks.json path from arguments - primary MCP function
* @param {Object} args - Arguments object containing projectRoot and optional file path
* @param {Object} [log] - Log function to prevent console logging
* @returns {string|null} - Resolved path to tasks.json or null if not found
*/
export function findTasksPath(args, log = silentLogger) {
return resolveTasksPath(args, log);
// Default output path: tasks/tasks.json in the project root
const defaultPath = path.resolve(projectRoot, 'tasks', 'tasks.json');
log.info(`Using default tasks output path: ${defaultPath}`);
// Ensure the directory exists
const outputDir = path.dirname(defaultPath);
if (!fs.existsSync(outputDir)) {
log.info(`Creating tasks directory: ${outputDir}`);
fs.mkdirSync(outputDir, { recursive: true });
}
return defaultPath;
}
/**
* Find complexity report path from arguments - primary MCP function
* @param {Object} args - Arguments object containing projectRoot and optional complexityReport path
* @param {Object} [log] - Log function to prevent console logging
* @returns {string|null} - Resolved path to complexity report or null if not found
* Resolves various file paths needed for MCP operations based on project root
* @param {string} projectRoot - The project root directory
* @param {Object} args - Command arguments that may contain explicit paths
* @param {Object} log - Logger object
* @returns {Object} - An object containing resolved paths
*/
export function findComplexityReportPath(args, log = silentLogger) {
return resolveComplexityReportPath(args, log);
export function resolveProjectPaths(projectRoot, args, log) {
const prdPath = findPRDDocumentPath(projectRoot, args.input, log);
const tasksJsonPath = resolveTasksOutputPath(projectRoot, args.output, log);
// You can add more path resolutions here as needed
return {
projectRoot,
prdPath,
tasksJsonPath
// Add additional path properties as needed
};
}
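Usage sketch with hypothetical values; output falls back through resolveTasksOutputPath above, which also creates the tasks directory when missing.

const { prdPath, tasksJsonPath } = resolveProjectPaths(
  '/home/user/my-app',
  { input: 'scripts/PRD.md', output: undefined },
  console // any logger with info/warn methods
);
// prdPath       -> '/home/user/my-app/scripts/PRD.md' (if present, else the common-location search runs)
// tasksJsonPath -> '/home/user/my-app/tasks/tasks.json'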
/**
* Find PRD path - primary MCP function
* @param {string} [explicitPath] - Explicit path to PRD file
* @param {Object} [args] - Arguments object for context (not used in current implementation)
* @param {Object} [log] - Logger object to prevent console logging
* @returns {string|null} - Resolved path to PRD file or null if not found
*/
export function findPRDPath(explicitPath, args = null, log = silentLogger) {
return findPrdPath(explicitPath, args, log);
}
// Legacy aliases for backward compatibility - DEPRECATED
export const findTasksJsonPath = findTasksPath;
export const findComplexityReportJsonPath = findComplexityReportPath;
// Re-export PROJECT_MARKERS for MCP tools that import it from this module
export { PROJECT_MARKERS };

View File

@@ -7,10 +7,11 @@ import { z } from 'zod';
import {
handleApiResult,
createErrorResponse,
getProjectRootFromSession,
withNormalizedProjectRoot
} from './utils.js';
import { addDependencyDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the addDependency tool with the MCP server
@@ -43,7 +44,7 @@ export function registerAddDependencyTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { addSubtaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the addSubtask tool with the MCP server
@@ -67,7 +67,7 @@ export function registerAddSubtaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { addTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the addTask tool with the MCP server
@@ -70,7 +70,7 @@ export function registerAddTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -12,8 +12,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { analyzeTaskComplexityDirect } from '../core/task-master-core.js'; // Assuming core functions are exported via task-master-core.js
import { findTasksPath } from '../core/utils/path-utils.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the analyze_project_complexity tool
@@ -42,7 +41,7 @@ export function registerAnalyzeProjectComplexityTool(server) {
.string()
.optional()
.describe(
`Output file path relative to project root (default: ${COMPLEXITY_REPORT_FILE}).`
'Output file path relative to project root (default: scripts/task-complexity-report.json).'
),
file: z
.string()
@@ -81,7 +80,7 @@ export function registerAnalyzeProjectComplexityTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
@@ -95,7 +94,11 @@ export function registerAnalyzeProjectComplexityTool(server) {
const outputPath = args.output
? path.resolve(args.projectRoot, args.output)
: path.resolve(args.projectRoot, COMPLEXITY_REPORT_FILE);
: path.resolve(
args.projectRoot,
'scripts',
'task-complexity-report.json'
);
log.info(`${toolName}: Report output path: ${outputPath}`);
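With the COMPLEXITY_REPORT_FILE constant removed, the default output path is inlined here. A worked example of the branches, using a hypothetical project root and recalling that path.resolve returns an absolute input unchanged:

// projectRoot = '/proj'
// args.output = 'reports/complexity.json' -> '/proj/reports/complexity.json'
// args.output = '/tmp/report.json'        -> '/tmp/report.json'
// args.output undefined                   -> '/proj/scripts/task-complexity-report.json'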

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { clearSubtasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the clearSubtasks tool with the MCP server
@@ -48,7 +48,7 @@ export function registerClearSubtasksTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,8 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { complexityReportDirect } from '../core/task-master-core.js';
import { COMPLEXITY_REPORT_FILE } from '../../../src/constants/paths.js';
import { findComplexityReportPath } from '../core/utils/path-utils.js';
import path from 'path';
/**
* Register the complexityReport tool with the MCP server
@@ -26,7 +25,7 @@ export function registerComplexityReportTool(server) {
.string()
.optional()
.describe(
`Path to the report file (default: ${COMPLEXITY_REPORT_FILE})`
'Path to the report file (default: scripts/task-complexity-report.json)'
),
projectRoot: z
.string()
@@ -38,18 +37,14 @@ export function registerComplexityReportTool(server) {
`Getting complexity report with args: ${JSON.stringify(args)}`
);
const pathArgs = {
projectRoot: args.projectRoot,
complexityReport: args.file
};
const reportPath = findComplexityReportPath(pathArgs, log);
if (!reportPath) {
return createErrorResponse(
'No complexity report found. Run task-master analyze-complexity first.'
);
}
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
const reportPath = args.file
? path.resolve(args.projectRoot, args.file)
: path.resolve(
args.projectRoot,
'scripts',
'task-complexity-report.json'
);
const result = await complexityReportDirect(
{
@@ -59,7 +54,9 @@ export function registerComplexityReportTool(server) {
);
if (result.success) {
log.info('Successfully retrieved complexity report');
log.info(
`Successfully retrieved complexity report${result.fromCache ? ' (from cache)' : ''}`
);
} else {
log.error(
`Failed to retrieve complexity report: ${result.error.message}`

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { expandAllTasksDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the expandAll tool with the MCP server
@@ -67,7 +67,7 @@ export function registerExpandAllTool(server) {
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { expandTaskDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the expand-task tool with the MCP server
@@ -54,7 +54,7 @@ export function registerExpandTaskTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { fixDependenciesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
/**
* Register the fixDependencies tool with the MCP server
@@ -33,7 +33,7 @@ export function registerFixDependenciesTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -10,7 +10,7 @@ import {
withNormalizedProjectRoot
} from './utils.js';
import { generateTaskFilesDirect } from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';
import path from 'path';
/**
@@ -39,7 +39,7 @@ export function registerGenerateTool(server) {
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);

View File

@@ -11,7 +11,7 @@ import {
} from './utils.js';
import { showTaskDirect } from '../core/task-master-core.js';
import {
findTasksPath,
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
@@ -44,7 +44,11 @@ export function registerShowTaskTool(server) {
name: 'get_task',
description: 'Get detailed information about a specific task',
parameters: z.object({
id: z.string().describe('Task ID to get'),
id: z
.string()
.describe(
'Task ID(s) to get (can be comma-separated for multiple tasks)'
),
status: z
.string()
.optional()
@@ -66,7 +70,7 @@ export function registerShowTaskTool(server) {
'Absolute path to the project root directory (Optional, usually from session)'
)
}),
execute: withNormalizedProjectRoot(async (args, { log }) => {
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
const { id, file, status, projectRoot } = args;
try {
@@ -77,7 +81,7 @@ export function registerShowTaskTool(server) {
// Resolve the path to tasks.json using the NORMALIZED projectRoot from args
let tasksJsonPath;
try {
tasksJsonPath = findTasksPath(
tasksJsonPath = findTasksJsonPath(
{ projectRoot: projectRoot, file: file },
log
);
@@ -94,10 +98,8 @@ export function registerShowTaskTool(server) {
let complexityReportPath;
try {
complexityReportPath = findComplexityReportPath(
{
projectRoot: projectRoot,
complexityReport: args.complexityReport
},
projectRoot,
args.complexityReport,
log
);
} catch (error) {
@@ -112,11 +114,14 @@ export function registerShowTaskTool(server) {
status: status,
projectRoot: projectRoot
},
log
log,
{ session }
);
if (result.success) {
log.info(`Successfully retrieved task details for ID: ${args.id}`);
log.info(
`Successfully retrieved task details for ID: ${args.id}${result.fromCache ? ' (from cache)' : ''}`
);
} else {
log.error(`Failed to get task: ${result.error.message}`);
}
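The widened id parameter pairs with the multi-ID result shape at the top of this diff. A hypothetical sketch of the splitting that showTaskDirect is expected to perform; the exact implementation is not shown in this diff.

function parseTaskIds(idArg) {
  return String(idArg)
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean); // '15, 16,' -> ['15', '16']
}

// parseTaskIds('15') -> ['15'] (single lookup); parseTaskIds('15,16') -> multi-ID path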

View File

@@ -11,8 +11,8 @@ import {
} from './utils.js';
import { listTasksDirect } from '../core/task-master-core.js';
import {
resolveTasksPath,
resolveComplexityReportPath
findTasksJsonPath,
findComplexityReportPath
} from '../core/utils/path-utils.js';
/**
@@ -55,10 +55,13 @@ export function registerListTasksTool(server) {
try {
log.info(`Getting tasks with filters: ${JSON.stringify(args)}`);
// Resolve the path to tasks.json using new path utilities
// Use args.projectRoot directly (guaranteed by withNormalizedProjectRoot)
let tasksJsonPath;
try {
tasksJsonPath = resolveTasksPath(args, log);
tasksJsonPath = findTasksJsonPath(
{ projectRoot: args.projectRoot, file: args.file },
log
);
} catch (error) {
log.error(`Error finding tasks.json: ${error.message}`);
return createErrorResponse(
@@ -69,13 +72,14 @@ export function registerListTasksTool(server) {
// Resolve the path to complexity report
let complexityReportPath;
try {
complexityReportPath = resolveComplexityReportPath(args, session);
complexityReportPath = findComplexityReportPath(
args.projectRoot,
args.complexityReport,
log
);
} catch (error) {
log.error(`Error finding complexity report: ${error.message}`);
// This is optional, so we don't fail the operation
complexityReportPath = null;
}
const result = await listTasksDirect(
{
tasksJsonPath: tasksJsonPath,
@@ -87,7 +91,7 @@ export function registerListTasksTool(server) {
);
log.info(
`Retrieved ${result.success ? result.data?.tasks?.length || 0 : 0} tasks`
`Retrieved ${result.success ? result.data?.tasks?.length || 0 : 0} tasks${result.fromCache ? ' (from cache)' : ''}`
);
return handleApiResult(result, log, 'Error getting tasks');
} catch (error) {

View File

@@ -29,6 +29,7 @@ import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
import { registerMoveTaskTool } from './move-task.js';
import { registerResearchTool } from './research.js';
/**
* Register all Task Master tools with the MCP server
@@ -74,6 +75,9 @@ export function registerTaskMasterTools(server) {
registerRemoveDependencyTool(server);
registerValidateDependenciesTool(server);
registerFixDependenciesTool(server);
// Group 7: AI-Powered Features
registerResearchTool(server);
} catch (error) {
logger.error(`Error registering Task Master tools: ${error.message}`);
throw error;

Some files were not shown because too many files have changed in this diff.