Compare commits: v0.15.0-rc...fix/ollama
1 commit: db6fdebdcd
5 .changeset/beige-doodles-type.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Resolve all issues related to MCP
@@ -1,15 +0,0 @@
---
'task-master-ai': minor
---

Added comprehensive Ollama model validation and interactive setup support

- **Interactive Setup Enhancement**: Added "Custom Ollama model" option to `task-master models --setup`, matching the existing OpenRouter functionality
- **Live Model Validation**: When setting Ollama models, Taskmaster now validates against the local Ollama instance by querying the `/api/tags` endpoint
- **Configurable Endpoints**: Uses the `ollamaBaseUrl` from `.taskmasterconfig` (with role-specific `baseUrl` overrides supported)
- **Robust Error Handling**:
  - Detects when the Ollama server is not running and provides clear error messages
  - Validates that the model exists and lists available alternatives when it is not found
  - Graceful fallback behavior for connection issues
- **Full Platform Support**: Both MCP server tools and CLI commands support the new validation
- **Improved User Experience**: Clear feedback during model validation with informative success/error messages
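For reference, the validation flow described above can be exercised by hand; a minimal sketch, assuming a default local Ollama install and `jq` on the PATH:

```bash
# List the models the local Ollama instance reports; this is the same /api/tags
# endpoint the changeset says Taskmaster queries during validation.
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'

# Then run the interactive setup and pick "Custom Ollama model";
# Taskmaster validates the chosen model against that list.
task-master models --setup
```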
@@ -1,9 +0,0 @@
---
'task-master-ai': minor
---

Adds and updates supported AI models with costs:

- Added new OpenRouter models: GPT-4.1 series, O3, Codex Mini, Llama 4 Maverick, Llama 4 Scout, Qwen3-235b
- Added Mistral models: Devstral Small, Mistral Nemo
- Updated Ollama models with latest variants: Devstral, Qwen3, Mistral-small3.1, Llama3.3
- Updated Gemini model to latest 2.5 Flash preview version
@@ -1,15 +0,0 @@
---
'task-master-ai': minor
---

Add `--research` flag to parse-prd command, enabling enhanced task generation from PRD files. When used, Taskmaster leverages the research model to:

- Research current technologies and best practices relevant to the project
- Identify technical challenges and security concerns not explicitly mentioned in the PRD
- Include specific library recommendations with version numbers
- Provide more detailed implementation guidance based on industry standards
- Create more accurate dependency relationships between tasks

This results in higher quality, more actionable tasks with minimal additional effort.

*NOTE*: This is an experimental feature. Research models don't typically do well at structured output, so you may see some failures when using research mode; please share your feedback so we can improve this.
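A minimal usage sketch, assuming the PRD lives at `scripts/prd.txt` (the path used elsewhere in these docs):

```bash
# Generate tasks from the PRD, letting the research model enrich the output
task-master parse-prd scripts/prd.txt --research
```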
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Adjusts the default main model to Claude Sonnet 4 and the default fallback to Claude Sonnet 3.7.
9 .changeset/floppy-plants-marry.md (new file)
@@ -0,0 +1,9 @@
---
'task-master-ai': patch
---

Fix CLI --force flag for parse-prd command

Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.

- Fixes #477
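With the fix in place, a sketch of the intended non-interactive usage (the PRD path is illustrative):

```bash
# Re-parse a PRD and overwrite the existing tasks.json without a confirmation prompt
task-master parse-prd scripts/prd.txt --force
```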
5 .changeset/forty-plums-stay.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---

.taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.
11 .changeset/free-bikes-smile.md (new file)
@@ -0,0 +1,11 @@
---
'task-master-ai': minor
---

Add Ollama as a supported AI provider.

- You can now add it by running `task-master models --setup` and selecting it.
- Ollama is a local model provider, so no API key is required.
- Ollama models are available at `http://localhost:11434/api` by default.
- You can change the default URL by setting the `OLLAMA_BASE_URL` environment variable or by adding a `baseUrl` property to the `ollama` model role in `.taskmasterconfig`.
- If you want to use a custom API key, you can set it in the `OLLAMA_API_KEY` environment variable.
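A quick sketch of pointing Taskmaster at a non-default Ollama endpoint, assuming Ollama runs on another host (the address below is illustrative):

```bash
# Override the default http://localhost:11434/api endpoint for this shell
export OLLAMA_BASE_URL="http://192.168.1.50:11434/api"

# Then pick Ollama during interactive model setup; no API key is needed
task-master models --setup
```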
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Adds llms-install.md to the root to enable AI agents to programmatically install the Taskmaster MCP server. This is specifically being introduced for the Cline MCP marketplace and will be adjusted over time for other MCP clients as needed.
@@ -1,9 +0,0 @@
---
'task-master-ai': minor
---

This change significantly enhances the `add-task` command's intelligence. When you add a new task, Taskmaster now automatically:

- Analyzes your existing tasks to find those most relevant to your new task's description.
- Provides the AI with detailed context from these relevant tasks.

This results in newly created tasks being more accurately placed within your project's dependency structure, saving you time and sparing you from updating tasks just to fix dependencies, all without significantly increasing AI costs. You'll get smarter, more connected tasks right from the start.
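A usage sketch (the prompt text is illustrative):

```bash
# New tasks are now created with context pulled from the most relevant existing tasks
task-master add-task --prompt="Add rate limiting to the public API" --research
```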
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Adds AGENTS.md to power Claude Code integration more natively, based on Anthropic's best practices and Claude-specific MCP client behaviours. Also adds advanced workflows that tie Taskmaster commands together into one Claude workflow.
5 .changeset/many-wasps-sell.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Task Master no longer tells you to update when you're already up to date
@@ -1,7 +0,0 @@
---
'task-master-ai': minor
---

Enhance analyze-complexity to support analyzing specific task IDs.

- You can now analyze individual tasks or selected task groups by using the new `--id` option with comma-separated IDs, or `--from` and `--to` options to specify a range of tasks.
- The feature intelligently merges analysis results with existing reports, allowing incremental analysis while preserving previous results.
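A sketch of the two selection styles described above (the IDs are illustrative):

```bash
# Analyze a hand-picked set of tasks
task-master analyze-complexity --id=3,5,8 --research

# Analyze a contiguous range of tasks
task-master analyze-complexity --from=10 --to=15
```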
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Fixes an issue with force/append flag combinations for parse-prd.
5 .changeset/nice-lies-cover.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Adds cost information to AI commands using input/output tokens and model costs.
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

You can now add tasks to a newly initialized project without having to parse a PRD. This automatically creates the missing tasks.json file and creates the first task. Lets you vibe if you want to vibe.
26 .changeset/pre.json (new file)
@@ -0,0 +1,26 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.14.0-rc.0"
  },
  "changesets": [
    "beige-doodles-type",
    "floppy-plants-marry",
    "forty-plums-stay",
    "free-bikes-smile",
    "many-wasps-sell",
    "nice-lies-cover",
    "red-oranges-attend",
    "red-suns-wash",
    "sharp-dingos-melt",
    "six-cloths-happen",
    "slow-singers-swim",
    "small-toys-fly",
    "social-masks-fold",
    "soft-zoos-flow",
    "ten-ways-mate",
    "tricky-wombats-spend",
    "wide-eyes-relax"
  ]
}
5 .changeset/red-oranges-attend.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server
5 .changeset/red-suns-wash.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Add src directory to exports
5 .changeset/sharp-dingos-melt.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix the error handling of task status settings
7 .changeset/six-cloths-happen.md (new file)
@@ -0,0 +1,7 @@
---
'task-master-ai': patch
---

Remove caching layer from MCP direct functions for task listing, next task, and complexity report

- Fixes issues users were having where they were getting stale data
5 .changeset/slow-singers-swim.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix for issue #409 LOG_LEVEL Pydantic validation error
7 .changeset/small-toys-fly.md (new file)
@@ -0,0 +1,7 @@
---
'task-master-ai': patch
---

Small fixes

- `next` command no longer incorrectly suggests that subtasks be broken down into subtasks in the CLI
- fixes the `append` flag so it properly works in the CLI
5 .changeset/social-masks-fold.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': minor
---

Display task complexity scores in task lists, next task, and task details views.
7 .changeset/soft-zoos-flow.md (new file)
@@ -0,0 +1,7 @@
---
'task-master-ai': patch
---

Fix initial .env.example to work out of the box

- Closes #419
@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---

Fixes an issue where the research fallback would attempt to make API calls without checking for a valid API key first. This ensures proper error handling when the main task generation and first fallback both fail. Closes #421 and #519.
5 .changeset/ten-ways-mate.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix default fallback model and maxTokens in Taskmaster initialization
5 .changeset/tricky-wombats-spend.md (new file)
@@ -0,0 +1,5 @@
---
'task-master-ai': patch
---

Fix bug when updating tasks on the MCP server (#412)
@@ -1,6 +0,0 @@
---
'task-master-ai': minor
---

Add next task to set task status response

Status: DONE
11 .changeset/wide-eyes-relax.md (new file)
@@ -0,0 +1,11 @@
---
'task-master-ai': patch
---

Fix duplicate output on CLI help screen

- Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
- Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
- Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
- Ensures a clean, branded help experience with no repeated content.
- Fixes #339
@@ -1,29 +0,0 @@
---
'task-master-ai': minor
---

Add move command to enable moving tasks and subtasks within the task hierarchy. This new command supports moving standalone tasks to become subtasks, subtasks to become standalone tasks, and moving subtasks between different parents. The implementation handles circular dependencies, validation, and proper updating of parent-child relationships.

**Usage:**

- CLI command: `task-master move --from=<id> --to=<id>`
- MCP tool: `move_task` with parameters:
  - `from`: ID of task/subtask to move (e.g., "5" or "5.2")
  - `to`: ID of destination (e.g., "7" or "7.3")
  - `file` (optional): Custom path to tasks.json

**Example scenarios:**

- Move task to become subtask: `--from="5" --to="7"`
- Move subtask to standalone task: `--from="5.2" --to="7"`
- Move subtask to different parent: `--from="5.2" --to="7.3"`
- Reorder subtask within same parent: `--from="5.2" --to="5.4"`
- Move multiple tasks at once: `--from="10,11,12" --to="16,17,18"`
- Move task to new ID: `--from="5" --to="25"` (creates a new task with ID 25)

**Multiple Task Support:**

The command supports moving multiple tasks simultaneously by providing comma-separated lists for both `--from` and `--to` parameters. The number of source and destination IDs must match. This is particularly useful for resolving merge conflicts in task files when multiple team members have created tasks on different branches.

**Validation Features:**

- Allows moving tasks to new, non-existent IDs (automatically creates placeholders)
- Prevents moving to existing task IDs that already contain content (to avoid overwriting)
- Validates source tasks exist before attempting to move them
- Ensures proper parent-child relationships are maintained
@@ -49,7 +49,6 @@ Task Master offers two primary ways to interact:
 - Maintain valid dependency structure with `add_dependency`/`remove_dependency` tools or `task-master add-dependency`/`remove-dependency` commands, `validate_dependencies` / `task-master validate-dependencies`, and `fix_dependencies` / `task-master fix-dependencies` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) when needed
 - Respect dependency chains and task priorities when selecting work
 - Report progress regularly using `get_tasks` / `task-master list`
-- Reorganize tasks as needed using `move_task` / `task-master move --from=<id> --to=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to change task hierarchy or ordering

 ## Task Complexity Analysis
@@ -155,25 +154,6 @@ Taskmaster configuration is managed through two main mechanisms:
 - Task files are automatically regenerated after dependency changes
 - Dependencies are visualized with status indicators in task listings and files

-## Task Reorganization
-
-- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy
-- This command supports several use cases:
-  - Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
-  - Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
-  - Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
-  - Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
-  - Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
-  - Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
-- The system includes validation to prevent data loss:
-  - Allows moving to non-existent IDs by creating placeholder tasks
-  - Prevents moving to existing task IDs that have content (to avoid overwriting)
-  - Validates source tasks exist before attempting to move them
-- The system maintains proper parent-child relationships and dependency integrity
-- Task files are automatically regenerated after the move operation
-- This provides greater flexibility in organizing and refining your task structure as project understanding evolves
-- This is especially useful when dealing with potential merge conflicts arising from teams creating tasks on separate branches. Solve these conflicts very easily by moving your tasks and keeping theirs.

 ## Iterative Subtask Implementation

 Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:
@@ -269,36 +269,11 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
 * **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.

-### 17. Move Task (`move_task`)
-
-* **MCP Tool:** `move_task`
-* **CLI Command:** `task-master move [options]`
-* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
-* **Key Parameters/Options:**
-    * `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
-    * `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
-    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
-* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
-    * Moving a task to become a subtask
-    * Moving a subtask to become a standalone task
-    * Moving a subtask to a different parent
-    * Reordering subtasks within the same parent
-    * Moving a task to a new, non-existent ID (automatically creates placeholders)
-    * Moving multiple tasks at once with comma-separated IDs
-* **Validation Features:**
-    * Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
-    * Prevents moving to existing task IDs that already have content (to avoid overwriting)
-    * Validates that source tasks exist before attempting to move them
-    * Maintains proper parent-child relationships
-* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
-* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
-* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.

 ---

 ## Dependency Management

-### 18. Add Dependency (`add_dependency`)
+### 17. Add Dependency (`add_dependency`)

 * **MCP Tool:** `add_dependency`
 * **CLI Command:** `task-master add-dependency [options]`
@@ -309,7 +284,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
 * **Usage:** Establish the correct order of execution between tasks.

-### 19. Remove Dependency (`remove_dependency`)
+### 18. Remove Dependency (`remove_dependency`)

 * **MCP Tool:** `remove_dependency`
 * **CLI Command:** `task-master remove-dependency [options]`

@@ -320,7 +295,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
 * **Usage:** Update task relationships when the order of execution changes.

-### 20. Validate Dependencies (`validate_dependencies`)
+### 19. Validate Dependencies (`validate_dependencies`)

 * **MCP Tool:** `validate_dependencies`
 * **CLI Command:** `task-master validate-dependencies [options]`

@@ -329,7 +304,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
 * **Usage:** Audit the integrity of your task dependencies.

-### 21. Fix Dependencies (`fix_dependencies`)
+### 20. Fix Dependencies (`fix_dependencies`)

 * **MCP Tool:** `fix_dependencies`
 * **CLI Command:** `task-master fix-dependencies [options]`

@@ -342,7 +317,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 ## Analysis & Reporting

-### 22. Analyze Project Complexity (`analyze_project_complexity`)
+### 21. Analyze Project Complexity (`analyze_project_complexity`)

 * **MCP Tool:** `analyze_project_complexity`
 * **CLI Command:** `task-master analyze-complexity [options]`

@@ -355,7 +330,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 * **Usage:** Used before breaking down tasks to identify which ones need the most attention.
 * **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

-### 23. View Complexity Report (`complexity_report`)
+### 22. View Complexity Report (`complexity_report`)

 * **MCP Tool:** `complexity_report`
 * **CLI Command:** `task-master complexity-report [options]`

@@ -368,7 +343,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
 ## File Management

-### 24. Generate Task Files (`generate`)
+### 23. Generate Task Files (`generate`)

 * **MCP Tool:** `generate`
 * **CLI Command:** `task-master generate [options]`
@@ -2,8 +2,8 @@
   "models": {
     "main": {
       "provider": "anthropic",
-      "modelId": "claude-sonnet-4-20250514",
+      "modelId": "claude-3-7-sonnet-20250219",
-      "maxTokens": 50000,
+      "maxTokens": 100000,
       "temperature": 0.2
     },
     "research": {
@@ -15,7 +15,7 @@
     "fallback": {
       "provider": "anthropic",
       "modelId": "claude-3-7-sonnet-20250219",
-      "maxTokens": 128000,
+      "maxTokens": 8192,
       "temperature": 0.2
     }
   },
60 CHANGELOG.md
@@ -1,65 +1,5 @@
 # task-master-ai

-## 0.14.0
-
-### Minor Changes
-
-- [#521](https://github.com/eyaltoledano/claude-task-master/pull/521) [`ed17cb0`](https://github.com/eyaltoledano/claude-task-master/commit/ed17cb0e0a04dedde6c616f68f24f3660f68dd04) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - .taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.
-
-- [#536](https://github.com/eyaltoledano/claude-task-master/pull/536) [`f4a83ec`](https://github.com/eyaltoledano/claude-task-master/commit/f4a83ec047b057196833e3a9b861d4bceaec805d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Ollama as a supported AI provider.
-
-  - You can now add it by running `task-master models --setup` and selecting it.
-  - Ollama is a local model provider, so no API key is required.
-  - Ollama models are available at `http://localhost:11434/api` by default.
-  - You can change the default URL by setting the `OLLAMA_BASE_URL` environment variable or by adding a `baseUrl` property to the `ollama` model role in `.taskmasterconfig`.
-  - If you want to use a custom API key, you can set it in the `OLLAMA_API_KEY` environment variable.
-
-- [#528](https://github.com/eyaltoledano/claude-task-master/pull/528) [`58b417a`](https://github.com/eyaltoledano/claude-task-master/commit/58b417a8ce697e655f749ca4d759b1c20014c523) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Display task complexity scores in task lists, next task, and task details views.
-
-### Patch Changes
-
-- [#402](https://github.com/eyaltoledano/claude-task-master/pull/402) [`01963af`](https://github.com/eyaltoledano/claude-task-master/commit/01963af2cb6f77f43b2ad8a6e4a838ec205412bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Resolve all issues related to MCP
-
-- [#478](https://github.com/eyaltoledano/claude-task-master/pull/478) [`4117f71`](https://github.com/eyaltoledano/claude-task-master/commit/4117f71c18ee4d321a9c91308d00d5d69bfac61e) Thanks [@joedanz](https://github.com/joedanz)! - Fix CLI --force flag for parse-prd command
-
-  Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.
-
-  - Fixes #477
-
-- [#511](https://github.com/eyaltoledano/claude-task-master/pull/511) [`17294ff`](https://github.com/eyaltoledano/claude-task-master/commit/17294ff25918d64278674e558698a1a9ad785098) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Task Master no longer tells you to update when you're already up to date
-
-- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`2b3ae8b`](https://github.com/eyaltoledano/claude-task-master/commit/2b3ae8bf89dc471c4ce92f3a12ded57f61faa449) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds cost information to AI commands using input/output tokens and model costs.
-
-- [#402](https://github.com/eyaltoledano/claude-task-master/pull/402) [`01963af`](https://github.com/eyaltoledano/claude-task-master/commit/01963af2cb6f77f43b2ad8a6e4a838ec205412bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server
-
-- [#402](https://github.com/eyaltoledano/claude-task-master/pull/402) [`01963af`](https://github.com/eyaltoledano/claude-task-master/commit/01963af2cb6f77f43b2ad8a6e4a838ec205412bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add src directory to exports
-
-- [#523](https://github.com/eyaltoledano/claude-task-master/pull/523) [`da317f2`](https://github.com/eyaltoledano/claude-task-master/commit/da317f2607ca34db1be78c19954996f634c40923) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix the error handling of task status settings
-
-- [#527](https://github.com/eyaltoledano/claude-task-master/pull/527) [`a8dabf4`](https://github.com/eyaltoledano/claude-task-master/commit/a8dabf44856713f488960224ee838761716bba26) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove caching layer from MCP direct functions for task listing, next task, and complexity report
-
-  - Fixes issues users were having where they were getting stale data
-
-- [#417](https://github.com/eyaltoledano/claude-task-master/pull/417) [`a1f8d52`](https://github.com/eyaltoledano/claude-task-master/commit/a1f8d52474fdbdf48e17a63e3f567a6d63010d9f) Thanks [@ksylvan](https://github.com/ksylvan)! - Fix for issue #409 LOG_LEVEL Pydantic validation error
-
-- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`0288311`](https://github.com/eyaltoledano/claude-task-master/commit/0288311965ae2a343ebee4a0c710dde94d2ae7e7) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Small fixes - `next` command no longer incorrectly suggests that subtasks be broken down into subtasks in the CLI - fixes the `append` flag so it properly works in the CLI
-
-- [#501](https://github.com/eyaltoledano/claude-task-master/pull/501) [`0a61184`](https://github.com/eyaltoledano/claude-task-master/commit/0a611843b56a856ef0a479dc34078326e05ac3a8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix initial .env.example to work out of the box
-
-  - Closes #419
-
-- [#435](https://github.com/eyaltoledano/claude-task-master/pull/435) [`a96215a`](https://github.com/eyaltoledano/claude-task-master/commit/a96215a359b25061fd3b3f3c7b10e8ac0390c062) Thanks [@lebsral](https://github.com/lebsral)! - Fix default fallback model and maxTokens in Taskmaster initialization
-
-- [#517](https://github.com/eyaltoledano/claude-task-master/pull/517) [`e96734a`](https://github.com/eyaltoledano/claude-task-master/commit/e96734a6cc6fec7731de72eb46b182a6e3743d02) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug when updating tasks on the MCP server (#412)
-
-- [#496](https://github.com/eyaltoledano/claude-task-master/pull/496) [`efce374`](https://github.com/eyaltoledano/claude-task-master/commit/efce37469bc58eceef46763ba32df1ed45242211) Thanks [@joedanz](https://github.com/joedanz)! - Fix duplicate output on CLI help screen
-
-  - Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
-  - Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
-  - Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
-  - Ensures a clean, branded help experience with no repeated content.
-  - Fixes #339

 ## 0.14.0-rc.1

 ### Minor Changes
413 assets/AGENTS.md (deleted)
@@ -1,413 +0,0 @@
# Task Master AI - Claude Code Integration Guide

## Essential Commands

### Core Workflow Commands

```bash
# Project Setup
task-master init                                   # Initialize Task Master in current project
task-master parse-prd scripts/prd.txt              # Generate tasks from PRD document
task-master models --setup                         # Configure AI models interactively

# Daily Development Workflow
task-master list                                   # Show all tasks with status
task-master next                                   # Get next available task to work on
task-master show <id>                              # View detailed task information (e.g., task-master show 1.2)
task-master set-status --id=<id> --status=done     # Mark task complete

# Task Management
task-master add-task --prompt="description" --research   # Add new task with AI assistance
task-master expand --id=<id> --research --force          # Break task into subtasks
task-master update-task --id=<id> --prompt="changes"     # Update specific task
task-master update --from=<id> --prompt="changes"        # Update multiple tasks from ID onwards
task-master update-subtask --id=<id> --prompt="notes"    # Add implementation notes to subtask

# Analysis & Planning
task-master analyze-complexity --research          # Analyze task complexity
task-master complexity-report                      # View complexity analysis
task-master expand --all --research                # Expand all eligible tasks

# Dependencies & Organization
task-master add-dependency --id=<id> --depends-on=<id>   # Add task dependency
task-master move --from=<id> --to=<id>                   # Reorganize task hierarchy
task-master validate-dependencies                        # Check for dependency issues
task-master generate                                     # Update task markdown files (usually auto-called)
```

## Key Files & Project Structure

### Core Files

- `tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmasterconfig` - AI model configuration (use `task-master models` to modify)
- `scripts/prd.txt` - Product Requirements Document for parsing
- `tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage

### Claude Code Integration Files

- `CLAUDE.md` - Auto-loaded context for Claude Code (this file)
- `.claude/settings.json` - Claude Code tool allowlist and preferences
- `.claude/commands/` - Custom slash commands for repeated workflows
- `.mcp.json` - MCP server configuration (project-specific)

### Directory Structure

```
project/
├── tasks/
│   ├── tasks.json           # Main task database
│   ├── task-1.md            # Individual task files
│   └── task-2.md
├── scripts/
│   ├── prd.txt              # Product requirements
│   └── task-complexity-report.json
├── .claude/
│   ├── settings.json        # Claude Code configuration
│   └── commands/            # Custom slash commands
├── .taskmasterconfig        # AI models & settings
├── .env                     # API keys
├── .mcp.json                # MCP configuration
└── CLAUDE.md                # This file - auto-loaded by Claude Code
```
## MCP Integration

Task Master provides an MCP server that Claude Code can connect to. Configure in `.mcp.json`:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "your_key_here",
        "PERPLEXITY_API_KEY": "your_key_here",
        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
        "XAI_API_KEY": "XAI_API_KEY_HERE",
        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
        "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
        "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
      }
    }
  }
}
```

### Essential MCP Tools

```javascript
help; // = shows available taskmaster commands
// Project setup
initialize_project; // = task-master init
parse_prd; // = task-master parse-prd

// Daily workflow
get_tasks; // = task-master list
next_task; // = task-master next
get_task; // = task-master show <id>
set_task_status; // = task-master set-status

// Task management
add_task; // = task-master add-task
expand_task; // = task-master expand
update_task; // = task-master update-task
update_subtask; // = task-master update-subtask
update; // = task-master update

// Analysis
analyze_project_complexity; // = task-master analyze-complexity
complexity_report; // = task-master complexity-report
```
## Claude Code Workflow Integration

### Standard Development Workflow

#### 1. Project Initialization

```bash
# Initialize Task Master
task-master init

# Create or obtain PRD, then parse it
task-master parse-prd scripts/prd.txt

# Analyze complexity and expand tasks
task-master analyze-complexity --research
task-master expand --all --research
```

If tasks already exist, another PRD can be parsed (with new information only!) using `parse-prd` with the `--append` flag. This will add the generated tasks to the existing list of tasks.

#### 2. Daily Development Loop

```bash
# Start each session
task-master next                  # Find next available task
task-master show <id>             # Review task details

# During implementation, check in code context into the tasks and subtasks
task-master update-subtask --id=<id> --prompt="implementation notes..."

# Complete tasks
task-master set-status --id=<id> --status=done
```

#### 3. Multi-Claude Workflows

For complex projects, use multiple Claude Code sessions:

```bash
# Terminal 1: Main implementation
cd project && claude

# Terminal 2: Testing and validation
cd project-test-worktree && claude

# Terminal 3: Documentation updates
cd project-docs-worktree && claude
```

### Custom Slash Commands

Create `.claude/commands/taskmaster-next.md`:

```markdown
Find the next available Task Master task and show its details.

Steps:

1. Run `task-master next` to get the next task
2. If a task is available, run `task-master show <id>` for full details
3. Provide a summary of what needs to be implemented
4. Suggest the first implementation step
```

Create `.claude/commands/taskmaster-complete.md`:

```markdown
Complete a Task Master task: $ARGUMENTS

Steps:

1. Review the current task with `task-master show $ARGUMENTS`
2. Verify all implementation is complete
3. Run any tests related to this task
4. Mark as complete: `task-master set-status --id=$ARGUMENTS --status=done`
5. Show the next available task with `task-master next`
```

## Tool Allowlist Recommendations

Add to `.claude/settings.json`:

```json
{
  "allowedTools": [
    "Edit",
    "Bash(task-master *)",
    "Bash(git commit:*)",
    "Bash(git add:*)",
    "Bash(npm run *)",
    "mcp__task_master_ai__*"
  ]
}
```
## Configuration & Setup
|
|
||||||
|
|
||||||
### API Keys Required
|
|
||||||
|
|
||||||
At least **one** of these API keys must be configured:
|
|
||||||
|
|
||||||
- `ANTHROPIC_API_KEY` (Claude models) - **Recommended**
|
|
||||||
- `PERPLEXITY_API_KEY` (Research features) - **Highly recommended**
|
|
||||||
- `OPENAI_API_KEY` (GPT models)
|
|
||||||
- `GOOGLE_API_KEY` (Gemini models)
|
|
||||||
- `MISTRAL_API_KEY` (Mistral models)
|
|
||||||
- `OPENROUTER_API_KEY` (Multiple models)
|
|
||||||
- `XAI_API_KEY` (Grok models)
|
|
||||||
|
|
||||||
An API key is required for any provider used across any of the 3 roles defined in the `models` command.
|
|
||||||
|
|
||||||
### Model Configuration
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Interactive setup (recommended)
|
|
||||||
task-master models --setup
|
|
||||||
|
|
||||||
# Set specific models
|
|
||||||
task-master models --set-main claude-3-5-sonnet-20241022
|
|
||||||
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
|
|
||||||
task-master models --set-fallback gpt-4o-mini
|
|
||||||
```
|
|
||||||
|
|
||||||
## Task Structure & IDs
|
|
||||||
|
|
||||||
### Task ID Format
|
|
||||||
|
|
||||||
- Main tasks: `1`, `2`, `3`, etc.
|
|
||||||
- Subtasks: `1.1`, `1.2`, `2.1`, etc.
|
|
||||||
- Sub-subtasks: `1.1.1`, `1.1.2`, etc.
|
|
||||||
|
|
||||||
### Task Status Values
|
|
||||||
|
|
||||||
- `pending` - Ready to work on
|
|
||||||
- `in-progress` - Currently being worked on
|
|
||||||
- `done` - Completed and verified
|
|
||||||
- `deferred` - Postponed
|
|
||||||
- `cancelled` - No longer needed
|
|
||||||
- `blocked` - Waiting on external factors
|
|
||||||
|
|
||||||
### Task Fields
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"id": "1.2",
|
|
||||||
"title": "Implement user authentication",
|
|
||||||
"description": "Set up JWT-based auth system",
|
|
||||||
"status": "pending",
|
|
||||||
"priority": "high",
|
|
||||||
"dependencies": ["1.1"],
|
|
||||||
"details": "Use bcrypt for hashing, JWT for tokens...",
|
|
||||||
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
|
|
||||||
"subtasks": []
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Claude Code Best Practices with Task Master
|
|
||||||
|
|
||||||
### Context Management
|
|
||||||
|
|
||||||
- Use `/clear` between different tasks to maintain focus
|
|
||||||
- This CLAUDE.md file is automatically loaded for context
|
|
||||||
- Use `task-master show <id>` to pull specific task context when needed
|
|
||||||
|
|
||||||
### Iterative Implementation
|
|
||||||
|
|
||||||
1. `task-master show <subtask-id>` - Understand requirements
|
|
||||||
2. Explore codebase and plan implementation
|
|
||||||
3. `task-master update-subtask --id=<id> --prompt="detailed plan"` - Log plan
|
|
||||||
4. `task-master set-status --id=<id> --status=in-progress` - Start work
|
|
||||||
5. Implement code following logged plan
|
|
||||||
6. `task-master update-subtask --id=<id> --prompt="what worked/didn't work"` - Log progress
|
|
||||||
7. `task-master set-status --id=<id> --status=done` - Complete task
|
|
||||||
|
|
||||||
### Complex Workflows with Checklists
|
|
||||||
|
|
||||||
For large migrations or multi-step processes:
|
|
||||||
|
|
||||||
1. Create a markdown PRD file describing the new changes: `touch task-migration-checklist.md` (prds can be .txt or .md)
|
|
||||||
2. Use Taskmaster to parse the new prd with `task-master parse-prd --append` (also available in MCP)
|
|
||||||
3. Use Taskmaster to expand the newly generated tasks into subtasks. Consdier using `analyze-complexity` with the correct --to and --from IDs (the new ids) to identify the ideal subtask amounts for each task. Then expand them.
|
|
||||||
4. Work through items systematically, checking them off as completed
|
|
||||||
5. Use `task-master update-subtask` to log progress on each task/subtask and/or updating/researching them before/during implementation if getting stuck
|
|
||||||
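A sketch of that checklist flow end to end; the file name and ID range are illustrative:

```bash
# 1. Describe the new work in a small PRD
touch task-migration-checklist.md        # then fill it in

# 2. Append the resulting tasks to the existing list
task-master parse-prd task-migration-checklist.md --append

# 3. Size the new tasks (assuming they landed as IDs 20-24), then expand them
task-master analyze-complexity --from=20 --to=24 --research
task-master expand --all --research
```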
### Git Integration

Task Master works well with `gh` CLI:

```bash
# Create PR for completed task
gh pr create --title "Complete task 1.2: User authentication" --body "Implements JWT auth system as specified in task 1.2"

# Reference task in commits
git commit -m "feat: implement JWT auth (task 1.2)"
```

### Parallel Development with Git Worktrees

```bash
# Create worktrees for parallel task development
git worktree add ../project-auth feature/auth-system
git worktree add ../project-api feature/api-refactor

# Run Claude Code in each worktree
cd ../project-auth && claude    # Terminal 1: Auth work
cd ../project-api && claude     # Terminal 2: API work
```

## Troubleshooting

### AI Commands Failing

```bash
# Check API keys are configured
cat .env                        # For CLI usage

# Verify model configuration
task-master models

# Test with different model
task-master models --set-fallback gpt-4o-mini
```

### MCP Connection Issues

- Check `.mcp.json` configuration
- Verify Node.js installation
- Use `--mcp-debug` flag when starting Claude Code
- Use CLI as fallback if MCP unavailable

### Task File Sync Issues

```bash
# Regenerate task files from tasks.json
task-master generate

# Fix dependency issues
task-master fix-dependencies
```

DO NOT RE-INITIALIZE. That will not do anything beyond re-adding the same Taskmaster core files.

## Important Notes

### AI-Powered Operations

These commands make AI calls and may take up to a minute:

- `parse_prd` / `task-master parse-prd`
- `analyze_project_complexity` / `task-master analyze-complexity`
- `expand_task` / `task-master expand`
- `expand_all` / `task-master expand --all`
- `add_task` / `task-master add-task`
- `update` / `task-master update`
- `update_task` / `task-master update-task`
- `update_subtask` / `task-master update-subtask`

### File Management

- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmasterconfig` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- Run `task-master generate` after manual changes to tasks.json

### Claude Code Session Management

- Use `/clear` frequently to maintain focused context
- Create custom slash commands for repeated Task Master workflows
- Configure tool allowlist to streamline permissions
- Use headless mode for automation: `claude -p "task-master next"`

### Multi-Task Updates

- Use `update --from=<id>` to update multiple future tasks (see the sketch below)
- Use `update-task --id=<id>` for single task updates
- Use `update-subtask --id=<id>` for implementation logging
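A sketch of the three update granularities, with illustrative IDs and prompts:

```bash
# Rewrite task 4 and everything after it to reflect a stack change
task-master update --from=4 --prompt="Switch the storage layer to MongoDB"

# Adjust a single task
task-master update-task --id=6 --prompt="Also cover token refresh"

# Log implementation notes against a subtask
task-master update-subtask --id=6.2 --prompt="Used bcrypt for hashing as planned"
```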
### Research Mode

- Add `--research` flag for research-based AI enhancement
- Requires a research model API key like Perplexity (`PERPLEXITY_API_KEY`) in environment
- Provides more informed task creation and updates
- Recommended for complex technical tasks

---

_This guide ensures Claude Code has immediate access to Task Master's essential functionality for agentic development workflows._

(One file's diff is suppressed because it is too large.)
@@ -187,32 +187,6 @@ task-master validate-dependencies
 task-master fix-dependencies
 ```

-## Move Tasks
-
-```bash
-# Move a task or subtask to a new position
-task-master move --from=<id> --to=<id>
-
-# Examples:
-# Move task to become a subtask
-task-master move --from=5 --to=7
-
-# Move subtask to become a standalone task
-task-master move --from=5.2 --to=7
-
-# Move subtask to a different parent
-task-master move --from=5.2 --to=7.3
-
-# Reorder subtasks within the same parent
-task-master move --from=5.2 --to=5.4
-
-# Move a task to a new ID position (creates placeholder if doesn't exist)
-task-master move --from=5 --to=25
-
-# Move multiple tasks at once (must have the same number of IDs)
-task-master move --from=10,11,12 --to=16,17,18
-```

 ## Add a New Task

 ```bash
@@ -30,7 +30,7 @@ I need to regenerate the subtasks for task 3 with a different approach. Can you
 ## Handling changes

 ```
-I've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
+We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
 ```

 ## Completing work

@@ -40,34 +40,6 @@ I've finished implementing the authentication system described in task 2. All te
 Please mark it as complete and tell me what I should work on next.
 ```

-## Reorganizing tasks
-
-```
-I think subtask 5.2 would fit better as part of task 7. Can you move it there?
-```
-
-(Agent runs: `task-master move --from=5.2 --to=7.3`)
-
-```
-Task 8 should actually be a subtask of task 4. Can you reorganize this?
-```
-
-(Agent runs: `task-master move --from=8 --to=4.1`)
-
-```
-I just merged the main branch and there's a conflict in tasks.json. My teammates created tasks 10-15 on their branch while I created tasks 10-12 on my branch. Can you help me resolve this by moving my tasks?
-```
-
-(Agent runs:
-
-```bash
-task-master move --from=10 --to=16
-task-master move --from=11 --to=17
-task-master move --from=12 --to=18
-```
-
-)

 ## Analyzing complexity

 ```
@@ -268,61 +268,7 @@ task-master update --from=4 --prompt="Update to use MongoDB, researching best pr
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.

-### 6. Reorganizing Tasks
+### 6. Breaking Down Complex Tasks
-
-If you need to reorganize your task structure:
-
-```
-I think subtask 5.2 would fit better as part of task 7 instead. Can you move it there?
-```
-
-The agent will execute:
-
-```bash
-task-master move --from=5.2 --to=7.3
-```
-
-You can reorganize tasks in various ways:
-
-- Moving a standalone task to become a subtask: `--from=5 --to=7`
-- Moving a subtask to become a standalone task: `--from=5.2 --to=7`
-- Moving a subtask to a different parent: `--from=5.2 --to=7.3`
-- Reordering subtasks within the same parent: `--from=5.2 --to=5.4`
-- Moving a task to a new ID position: `--from=5 --to=25` (even if task 25 doesn't exist yet)
-- Moving multiple tasks at once: `--from=10,11,12 --to=16,17,18` (must have same number of IDs, Taskmaster will look through each position)
-
-When moving tasks to new IDs:
-
-- The system automatically creates placeholder tasks for non-existent destination IDs
-- This prevents accidental data loss during reorganization
-- Any tasks that depend on moved tasks will have their dependencies updated
-- When moving a parent task, all its subtasks are automatically moved with it and renumbered
-
-This is particularly useful as your project understanding evolves and you need to refine your task structure.
-
-### 7. Resolving Merge Conflicts with Tasks
-
-When working with a team, you might encounter merge conflicts in your tasks.json file if multiple team members create tasks on different branches. The move command makes resolving these conflicts straightforward:
-
-```
-I just merged the main branch and there's a conflict with tasks.json. My teammates created tasks 10-15 while I created tasks 10-12 on my branch. Can you help me resolve this?
-```
-
-The agent will help you:
-
-1. Keep your teammates' tasks (10-15)
-2. Move your tasks to new positions to avoid conflicts:
-
-```bash
-# Move your tasks to new positions (e.g., 16-18)
-task-master move --from=10 --to=16
-task-master move --from=11 --to=17
-task-master move --from=12 --to=18
-```
-
-This approach preserves everyone's work while maintaining a clean task structure, making it much easier to handle task conflicts than trying to manually merge JSON files.
-
-### 8. Breaking Down Complex Tasks

For complex tasks that need more granularity:
131
llms-install.md
@@ -1,131 +0,0 @@
# Taskmaster AI Installation Guide

This guide helps AI assistants install and configure Taskmaster for users in their development projects.

## What is Taskmaster?

Taskmaster is an AI-driven task management system designed for development workflows. It helps break down projects into manageable tasks, track dependencies, and maintain development momentum through structured, AI-enhanced planning.

## Installation Steps

### Step 1: Add MCP Configuration

Add the following configuration to the user's MCP settings file (`.cursor/mcp.json` for Cursor, or equivalent for other editors):

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "user_will_add_their_key_here",
        "PERPLEXITY_API_KEY": "user_will_add_their_key_here",
        "OPENAI_API_KEY": "user_will_add_their_key_here",
        "GOOGLE_API_KEY": "user_will_add_their_key_here",
        "MISTRAL_API_KEY": "user_will_add_their_key_here",
        "OPENROUTER_API_KEY": "user_will_add_their_key_here",
        "XAI_API_KEY": "user_will_add_their_key_here"
      }
    }
  }
}
```

### Step 2: API Key Requirements

Inform the user they need **at least one** API key from the following providers:

- **Anthropic** (for Claude models) - Recommended
- **OpenAI** (for GPT models)
- **Google** (for Gemini models)
- **Perplexity** (for research features) - Highly recommended
- **Mistral** (for Mistral models)
- **OpenRouter** (access to multiple models)
- **xAI** (for Grok models)

The user will be able to define 3 separate roles (can be the same provider or separate providers) for main AI operations, research operations (research providers/models only), and a fallback model in case of errors.

### Step 3: Initialize Project

Once the MCP server is configured and API keys are added, initialize Taskmaster in the user's project:

> Can you initialize Task Master in my project?

This will run the `initialize_project` tool to set up the basic file structure.

### Step 4: Create Initial Tasks

Users have two options for creating initial tasks:

**Option A: Parse a PRD (Recommended)**
If they have a Product Requirements Document:

> Can you parse my PRD file at [path/to/prd.txt] to generate initial tasks?

If the user does not have a PRD, the AI agent can help them create one and store it in scripts/prd.txt for parsing.

**Option B: Start from scratch**

> Can you help me add my first task: [describe the task]

## Common Usage Patterns

### Daily Workflow

> What's the next task I should work on?
> Can you show me the details for task [ID]?
> Can you mark task [ID] as done?

### Task Management

> Can you break down task [ID] into subtasks?
> Can you add a new task: [description]
> Can you analyze the complexity of my tasks?

### Project Organization

> Can you show me all my pending tasks?
> Can you move task [ID] to become a subtask of [parent ID]?
> Can you update task [ID] with this new information: [details]

## Verification Steps

After installation, verify everything is working:

1. **Check MCP Connection**: The AI should be able to access Task Master tools
2. **Test Basic Commands**: Try `get_tasks` to list current tasks
3. **Verify API Keys**: Ensure AI-powered commands work (like `add_task`)

Note: An API key fallback exists that allows the MCP server to read API keys from `.env` instead of the MCP JSON config. It is recommended to have keys in both places in case the MCP server is unable to read keys from its environment for whatever reason.

When adding keys to `.env` only, the `models` tool will explain that the keys are not OK for MCP. Despite this, the fallback should kick in and the API keys will be read from the `.env` file.
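For reference, a minimal `.env` along these lines is usually enough for the fallback to work. The variable names match the MCP config shown in Step 1; the values below are placeholders, not real keys:

```bash
# .env in the project root – placeholder values, replace with your own keys
ANTHROPIC_API_KEY=your_anthropic_key_here
PERPLEXITY_API_KEY=your_perplexity_key_here
```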
## Troubleshooting

**If MCP server doesn't start:**

- Verify the JSON configuration is valid (see the check below)
- Check that Node.js is installed
- Ensure API keys are properly formatted
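The first two checks can be approximated from a terminal. This is only a sketch and assumes the Cursor config path `.cursor/mcp.json` used in Step 1:

```bash
# Confirm Node.js is installed and that the MCP config parses as valid JSON
node --version
node -e "JSON.parse(require('fs').readFileSync('.cursor/mcp.json', 'utf8')); console.log('mcp.json is valid JSON')"
```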
**If AI commands fail:**

- Verify at least one API key is configured
- Check API key permissions and quotas
- Try using a different model via the `models` tool

## CLI Fallback

Taskmaster is also available via CLI commands, by installing with `npm install task-master-ai@latest` in a terminal. Running `task-master help` will show all available commands, which offer a 1:1 experience with the MCP server. As the AI agent, you should refer to the system prompts and rules provided to you to identify Taskmaster-specific rules that help you understand how and when to use it.
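In practice the fallback flow described above amounts to something like:

```bash
# Install the CLI and list the available commands
npm install task-master-ai@latest
task-master help   # or: npx task-master help if the binary is not on your PATH
```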
## Next Steps

Once installed, users can:

- Create new tasks with `add-task` or parse a PRD (scripts/prd.txt) into tasks with `parse-prd` (see the example below)
- Set up model preferences with `models` tool
- Expand tasks into subtasks with `expand-all` and `expand-task`
- Explore advanced features like research mode and complexity analysis

For detailed documentation, refer to the Task Master docs directory.
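As a rough sketch of the first step, using the default PRD location and the `--num-tasks` flag that appear in the parse-prd help text elsewhere in this diff:

```bash
# Generate initial tasks from a PRD stored at the default location
task-master parse-prd scripts/prd.txt --num-tasks 10
```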
@@ -18,9 +18,6 @@ import { createLogWrapper } from '../../tools/utils.js'; // Import the new utili
 * @param {string} args.outputPath - Explicit absolute path to save the report.
 * @param {string|number} [args.threshold] - Minimum complexity score to recommend expansion (1-10)
 * @param {boolean} [args.research] - Use Perplexity AI for research-backed complexity analysis
- * @param {string} [args.ids] - Comma-separated list of task IDs to analyze
- * @param {number} [args.from] - Starting task ID in a range to analyze
- * @param {number} [args.to] - Ending task ID in a range to analyze
 * @param {string} [args.projectRoot] - Project root path.
 * @param {Object} log - Logger object
 * @param {Object} [context={}] - Context object containing session data
@@ -29,16 +26,7 @@ import { createLogWrapper } from '../../tools/utils.js'; // Import the new utili
 */
export async function analyzeTaskComplexityDirect(args, log, context = {}) {
	const { session } = context;
-	const {
-		tasksJsonPath,
-		outputPath,
-		threshold,
-		research,
-		projectRoot,
-		ids,
-		from,
-		to
-	} = args;
+	const { tasksJsonPath, outputPath, threshold, research, projectRoot } = args;

	const logWrapper = createLogWrapper(log);

@@ -70,14 +58,6 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
	log.info(`Analyzing task complexity from: ${tasksPath}`);
	log.info(`Output report will be saved to: ${resolvedOutputPath}`);

-	if (ids) {
-		log.info(`Analyzing specific task IDs: ${ids}`);
-	} else if (from || to) {
-		const fromStr = from !== undefined ? from : 'first';
-		const toStr = to !== undefined ? to : 'last';
-		log.info(`Analyzing tasks in range: ${fromStr} to ${toStr}`);
-	}
-
	if (research) {
		log.info('Using research role for complexity analysis');
	}
@@ -88,10 +68,7 @@ export async function analyzeTaskComplexityDirect(args, log, context = {}) {
		output: outputPath,
		threshold: threshold,
		research: research === true, // Ensure boolean
-		projectRoot: projectRoot, // Pass projectRoot here
-		id: ids, // Pass the ids parameter to the core function as 'id'
-		from: from, // Pass from parameter
-		to: to // Pass to parameter
+		projectRoot: projectRoot // Pass projectRoot here
	};
	// --- End Initial Checks ---
@@ -1,99 +0,0 @@
/**
 * Direct function wrapper for moveTask
 */

import { moveTask } from '../../../../scripts/modules/task-manager.js';
import { findTasksJsonPath } from '../utils/path-utils.js';
import {
	enableSilentMode,
	disableSilentMode
} from '../../../../scripts/modules/utils.js';

/**
 * Move a task or subtask to a new position
 * @param {Object} args - Function arguments
 * @param {string} args.tasksJsonPath - Explicit path to the tasks.json file
 * @param {string} args.sourceId - ID of the task/subtask to move (e.g., '5' or '5.2')
 * @param {string} args.destinationId - ID of the destination (e.g., '7' or '7.3')
 * @param {string} args.file - Alternative path to the tasks.json file
 * @param {string} args.projectRoot - Project root directory
 * @param {Object} log - Logger object
 * @returns {Promise<{success: boolean, data?: Object, error?: Object}>}
 */
export async function moveTaskDirect(args, log, context = {}) {
	const { session } = context;

	// Validate required parameters
	if (!args.sourceId) {
		return {
			success: false,
			error: {
				message: 'Source ID is required',
				code: 'MISSING_SOURCE_ID'
			}
		};
	}

	if (!args.destinationId) {
		return {
			success: false,
			error: {
				message: 'Destination ID is required',
				code: 'MISSING_DESTINATION_ID'
			}
		};
	}

	try {
		// Find tasks.json path if not provided
		let tasksPath = args.tasksJsonPath || args.file;
		if (!tasksPath) {
			if (!args.projectRoot) {
				return {
					success: false,
					error: {
						message:
							'Project root is required if tasksJsonPath is not provided',
						code: 'MISSING_PROJECT_ROOT'
					}
				};
			}
			tasksPath = findTasksJsonPath(args, log);
		}

		// Enable silent mode to prevent console output during MCP operation
		enableSilentMode();

		// Call the core moveTask function, always generate files
		const result = await moveTask(
			tasksPath,
			args.sourceId,
			args.destinationId,
			true
		);

		// Restore console output
		disableSilentMode();

		return {
			success: true,
			data: {
				movedTask: result.movedTask,
				message: `Successfully moved task/subtask ${args.sourceId} to ${args.destinationId}`
			}
		};
	} catch (error) {
		// Restore console output in case of error
		disableSilentMode();

		log.error(`Failed to move task: ${error.message}`);

		return {
			success: false,
			error: {
				message: error.message,
				code: 'MOVE_TASK_ERROR'
			}
		};
	}
}
@@ -31,7 +31,6 @@ export async function parsePRDDirect(args, log, context = {}) {
		numTasks: numTasksArg,
		force,
		append,
-		research,
		projectRoot
	} = args;

@@ -115,14 +114,8 @@ export async function parsePRDDirect(args, log, context = {}) {
		}
	}

-	if (research) {
-		logWrapper.info(
-			'Research mode enabled. Using Perplexity AI for enhanced PRD analysis.'
-		);
-	}
-
	logWrapper.info(
-		`Parsing PRD via direct function. Input: ${inputPath}, Output: ${outputPath}, NumTasks: ${numTasks}, Force: ${force}, Append: ${append}, Research: ${research}, ProjectRoot: ${projectRoot}`
+		`Parsing PRD via direct function. Input: ${inputPath}, Output: ${outputPath}, NumTasks: ${numTasks}, Force: ${force}, Append: ${append}, ProjectRoot: ${projectRoot}`
	);

	const wasSilent = isSilentMode();
@@ -142,7 +135,6 @@ export async function parsePRDDirect(args, log, context = {}) {
			projectRoot,
			force,
			append,
-			research,
			commandName: 'parse-prd',
			outputType: 'mcp'
		},
@@ -9,7 +9,7 @@ import {
	disableSilentMode,
	isSilentMode
} from '../../../../scripts/modules/utils.js';
-import { nextTaskDirect } from './next-task.js';
/**
 * Direct function wrapper for setTaskStatus with error handling.
 *
@@ -19,7 +19,7 @@ import { nextTaskDirect } from './next-task.js';
 */
export async function setTaskStatusDirect(args, log) {
	// Destructure expected args, including the resolved tasksJsonPath
-	const { tasksJsonPath, id, status, complexityReportPath } = args;
+	const { tasksJsonPath, id, status } = args;
	try {
		log.info(`Setting task status with args: ${JSON.stringify(args)}`);

@@ -85,39 +85,6 @@ export async function setTaskStatusDirect(args, log) {
			},
			fromCache: false // This operation always modifies state and should never be cached
		};
-
-		// If the task was completed, attempt to fetch the next task
-		if (result.data.status === 'done') {
-			try {
-				log.info(`Attempting to fetch next task for task ${taskId}`);
-				const nextResult = await nextTaskDirect(
-					{
-						tasksJsonPath: tasksJsonPath,
-						reportPath: complexityReportPath
-					},
-					log
-				);
-
-				if (nextResult.success) {
-					log.info(
-						`Successfully retrieved next task: ${nextResult.data.nextTask}`
-					);
-					result.data = {
-						...result.data,
-						nextTask: nextResult.data.nextTask,
-						isNextSubtask: nextResult.data.isSubtask,
-						nextSteps: nextResult.data.nextSteps
-					};
-				} else {
-					log.warn(
-						`Failed to retrieve next task: ${nextResult.error?.message || 'Unknown error'}`
-					);
-				}
-			} catch (nextErr) {
-				log.error(`Error retrieving next task: ${nextErr.message}`);
-			}
-		}
-
		return result;
	} catch (error) {
		log.error(`Error setting task status: ${error.message}`);
@@ -30,7 +30,6 @@ import { addDependencyDirect } from './direct-functions/add-dependency.js';
import { removeTaskDirect } from './direct-functions/remove-task.js';
import { initializeProjectDirect } from './direct-functions/initialize-project.js';
import { modelsDirect } from './direct-functions/models.js';
-import { moveTaskDirect } from './direct-functions/move-task.js';

// Re-export utility functions
export { findTasksJsonPath } from './utils/path-utils.js';
@@ -61,8 +60,7 @@ export const directFunctions = new Map([
	['addDependencyDirect', addDependencyDirect],
	['removeTaskDirect', removeTaskDirect],
	['initializeProjectDirect', initializeProjectDirect],
-	['modelsDirect', modelsDirect],
-	['moveTaskDirect', moveTaskDirect]
+	['modelsDirect', modelsDirect]
]);

// Re-export all direct function implementations
@@ -91,6 +89,5 @@ export {
	addDependencyDirect,
	removeTaskDirect,
	initializeProjectDirect,
-	modelsDirect,
-	moveTaskDirect
+	modelsDirect
};
@@ -49,24 +49,6 @@ export function registerAnalyzeProjectComplexityTool(server) {
				.describe(
					'Path to the tasks file relative to project root (default: tasks/tasks.json).'
				),
-			ids: z
-				.string()
-				.optional()
-				.describe(
-					'Comma-separated list of task IDs to analyze specifically (e.g., "1,3,5").'
-				),
-			from: z.coerce
-				.number()
-				.int()
-				.positive()
-				.optional()
-				.describe('Starting task ID in a range to analyze.'),
-			to: z.coerce
-				.number()
-				.int()
-				.positive()
-				.optional()
-				.describe('Ending task ID in a range to analyze.'),
			projectRoot: z
				.string()
				.describe('The directory of the project. Must be an absolute path.')
@@ -125,10 +107,7 @@ export function registerAnalyzeProjectComplexityTool(server) {
					outputPath: outputPath,
					threshold: args.threshold,
					research: args.research,
-					projectRoot: args.projectRoot,
-					ids: args.ids,
-					from: args.from,
-					to: args.to
+					projectRoot: args.projectRoot
				},
				log,
				{ session }
@@ -28,7 +28,6 @@ import { registerAddDependencyTool } from './add-dependency.js';
import { registerRemoveTaskTool } from './remove-task.js';
import { registerInitializeProjectTool } from './initialize-project.js';
import { registerModelsTool } from './models.js';
-import { registerMoveTaskTool } from './move-task.js';

/**
 * Register all Task Master tools with the MCP server
@@ -62,7 +61,6 @@ export function registerTaskMasterTools(server) {
	registerRemoveTaskTool(server);
	registerRemoveSubtaskTool(server);
	registerClearSubtasksTool(server);
-	registerMoveTaskTool(server);

	// Group 5: Task Analysis & Expansion
	registerAnalyzeProjectComplexityTool(server);
@@ -1,129 +0,0 @@
/**
 * tools/move-task.js
 * Tool for moving tasks or subtasks to a new position
 */

import { z } from 'zod';
import {
	handleApiResult,
	createErrorResponse,
	withNormalizedProjectRoot
} from './utils.js';
import { moveTaskDirect } from '../core/task-master-core.js';
import { findTasksJsonPath } from '../core/utils/path-utils.js';

/**
 * Register the moveTask tool with the MCP server
 * @param {Object} server - FastMCP server instance
 */
export function registerMoveTaskTool(server) {
	server.addTool({
		name: 'move_task',
		description: 'Move a task or subtask to a new position',
		parameters: z.object({
			from: z
				.string()
				.describe(
					'ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated to move multiple tasks (e.g., "5,6,7")'
				),
			to: z
				.string()
				.describe(
					'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
				),
			file: z.string().optional().describe('Custom path to tasks.json file'),
			projectRoot: z
				.string()
				.optional()
				.describe(
					'Root directory of the project (typically derived from session)'
				)
		}),
		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
			try {
				// Find tasks.json path if not provided
				let tasksJsonPath = args.file;

				if (!tasksJsonPath) {
					tasksJsonPath = findTasksJsonPath(args, log);
				}

				// Parse comma-separated IDs
				const fromIds = args.from.split(',').map((id) => id.trim());
				const toIds = args.to.split(',').map((id) => id.trim());

				// Validate matching IDs count
				if (fromIds.length !== toIds.length) {
					return createErrorResponse(
						'The number of source and destination IDs must match',
						'MISMATCHED_ID_COUNT'
					);
				}

				// If moving multiple tasks
				if (fromIds.length > 1) {
					const results = [];
					// Move tasks one by one, only generate files on the last move
					for (let i = 0; i < fromIds.length; i++) {
						const fromId = fromIds[i];
						const toId = toIds[i];

						// Skip if source and destination are the same
						if (fromId === toId) {
							log.info(`Skipping ${fromId} -> ${toId} (same ID)`);
							continue;
						}

						const shouldGenerateFiles = i === fromIds.length - 1;
						const result = await moveTaskDirect(
							{
								sourceId: fromId,
								destinationId: toId,
								tasksJsonPath,
								projectRoot: args.projectRoot
							},
							log,
							{ session }
						);

						if (!result.success) {
							log.error(
								`Failed to move ${fromId} to ${toId}: ${result.error.message}`
							);
						} else {
							results.push(result.data);
						}
					}

					return {
						success: true,
						data: {
							moves: results,
							message: `Successfully moved ${results.length} tasks`
						}
					};
				} else {
					// Moving a single task
					return handleApiResult(
						await moveTaskDirect(
							{
								sourceId: args.from,
								destinationId: args.to,
								tasksJsonPath,
								projectRoot: args.projectRoot
							},
							log,
							{ session }
						),
						log
					);
				}
			} catch (error) {
				return createErrorResponse(
					`Failed to move task: ${error.message}`,
					'MOVE_TASK_ERROR'
				);
			}
		})
	});
}
@@ -49,13 +49,6 @@ export function registerParsePRDTool(server) {
				.optional()
				.default(false)
				.describe('Append generated tasks to existing file.'),
-			research: z
-				.boolean()
-				.optional()
-				.default(false)
-				.describe(
-					'Use the research model for research-backed task generation, providing more comprehensive, accurate and up-to-date task details.'
-				),
			projectRoot: z
				.string()
				.describe('The directory of the project. Must be an absolute path.')
@@ -75,7 +68,6 @@ export function registerParsePRDTool(server) {
					numTasks: args.numTasks,
					force: args.force,
					append: args.append,
-					research: args.research,
					projectRoot: args.projectRoot
				},
				log,
@@ -9,14 +9,8 @@ import {
	createErrorResponse,
	withNormalizedProjectRoot
} from './utils.js';
-import {
-	setTaskStatusDirect,
-	nextTaskDirect
-} from '../core/task-master-core.js';
-import {
-	findTasksJsonPath,
-	findComplexityReportPath
-} from '../core/utils/path-utils.js';
+import { setTaskStatusDirect } from '../core/task-master-core.js';
+import { findTasksJsonPath } from '../core/utils/path-utils.js';
import { TASK_STATUS_OPTIONS } from '../../../src/constants/task-status.js';

/**
@@ -39,12 +33,6 @@ export function registerSetTaskStatusTool(server) {
					"New status to set (e.g., 'pending', 'done', 'in-progress', 'review', 'deferred', 'cancelled'."
				),
			file: z.string().optional().describe('Absolute path to the tasks file'),
-			complexityReport: z
-				.string()
-				.optional()
-				.describe(
-					'Path to the complexity report file (relative to project root or absolute)'
-				),
			projectRoot: z
				.string()
				.describe('The directory of the project. Must be an absolute path.')
@@ -67,23 +55,11 @@ export function registerSetTaskStatusTool(server) {
				);
			}

-			let complexityReportPath;
-			try {
-				complexityReportPath = findComplexityReportPath(
-					args.projectRoot,
-					args.complexityReport,
-					log
-				);
-			} catch (error) {
-				log.error(`Error finding complexity report: ${error.message}`);
-			}
-
			const result = await setTaskStatusDirect(
				{
					tasksJsonPath: tasksJsonPath,
					id: args.id,
-					status: args.status,
-					complexityReportPath
+					status: args.status
				},
				log
			);
6
package-lock.json
generated
@@ -1,12 +1,12 @@
{
	"name": "task-master-ai",
-	"version": "0.14.0",
+	"version": "0.13.2",
	"lockfileVersion": 3,
	"requires": true,
	"packages": {
		"": {
			"name": "task-master-ai",
-			"version": "0.14.0",
+			"version": "0.13.2",
			"license": "MIT WITH Commons-Clause",
			"dependencies": {
				"@ai-sdk/anthropic": "^1.2.10",
@@ -28,7 +28,7 @@
			"express": "^4.21.2",
			"fastmcp": "^1.20.5",
			"figlet": "^1.8.0",
-			"fuse.js": "^7.1.0",
+			"fuse.js": "^7.0.0",
			"gradient-string": "^3.0.0",
			"helmet": "^8.1.0",
			"inquirer": "^12.5.0",
@@ -1,6 +1,6 @@
{
	"name": "task-master-ai",
-	"version": "0.14.0",
+	"version": "0.14.0-rc.1",
	"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
	"main": "index.js",
	"type": "module",
@@ -58,7 +58,7 @@
		"express": "^4.21.2",
		"fastmcp": "^1.20.5",
		"figlet": "^1.8.0",
-		"fuse.js": "^7.1.0",
+		"fuse.js": "^7.0.0",
		"gradient-string": "^3.0.0",
		"helmet": "^8.1.0",
		"inquirer": "^12.5.0",
@@ -18,10 +18,9 @@ import {
	getUserId,
	MODEL_MAP,
	getDebugFlag,
-	getBaseUrlForRole,
-	isApiKeySet
+	getBaseUrlForRole
} from './config-manager.js';
-import { log, resolveEnvVariable, findProjectRoot } from './utils.js';
+import { log, resolveEnvVariable, isSilentMode } from './utils.js';

import * as anthropic from '../../src/ai-providers/anthropic.js';
import * as perplexity from '../../src/ai-providers/perplexity.js';
@@ -323,7 +322,11 @@ async function _unifiedServiceRunner(serviceType, params) {
		});
	}

-	const effectiveProjectRoot = projectRoot || findProjectRoot();
+	// Determine the effective project root (passed in or detected if needed by config getters)
+	const { findProjectRoot: detectProjectRoot } = await import('./utils.js'); // Dynamically import if needed
+	const effectiveProjectRoot = projectRoot || detectProjectRoot();

+	// Get userId from config - ensure effectiveProjectRoot is passed
	const userId = getUserId(effectiveProjectRoot);

	let sequence;
@@ -359,6 +362,8 @@ async function _unifiedServiceRunner(serviceType, params) {
		try {
			log('info', `New AI service call with role: ${currentRole}`);

+			// 1. Get Config: Provider, Model, Parameters for the current role
+			// Pass effectiveProjectRoot to config getters
			if (currentRole === 'main') {
				providerName = getMainProvider(effectiveProjectRoot);
				modelId = getMainModelId(effectiveProjectRoot);
@@ -391,24 +396,11 @@ async function _unifiedServiceRunner(serviceType, params) {
				continue;
			}

-			// Check if API key is set for the current provider and role (excluding 'ollama')
-			if (providerName?.toLowerCase() !== 'ollama') {
-				if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
-					log(
-						'warn',
-						`Skipping role '${currentRole}' (Provider: ${providerName}): API key not set or invalid.`
-					);
-					lastError =
-						lastError ||
-						new Error(
-							`API key for provider '${providerName}' (role: ${currentRole}) is not set.`
-						);
-					continue; // Skip to the next role in the sequence
-				}
-			}
-
+			// Pass effectiveProjectRoot to getParametersForRole
			roleParams = getParametersForRole(currentRole, effectiveProjectRoot);
			baseUrl = getBaseUrlForRole(currentRole, effectiveProjectRoot);

+			// 2. Get Provider Function Set
			providerFnSet = PROVIDER_FUNCTIONS[providerName?.toLowerCase()];
			if (!providerFnSet) {
				log(
@@ -421,6 +413,7 @@ async function _unifiedServiceRunner(serviceType, params) {
				continue;
			}

+			// Use the original service type to get the function
			providerApiFn = providerFnSet[serviceType];
			if (typeof providerApiFn !== 'function') {
				log(
@@ -435,12 +428,15 @@ async function _unifiedServiceRunner(serviceType, params) {
				continue;
			}

+			// 3. Resolve API Key (will throw if required and missing)
+			// Pass effectiveProjectRoot to _resolveApiKey
			apiKey = _resolveApiKey(
				providerName?.toLowerCase(),
				session,
				effectiveProjectRoot
			);

+			// 4. Construct Messages Array
			const messages = [];
			if (systemPrompt) {
				messages.push({ role: 'system', content: systemPrompt });
@@ -465,11 +461,14 @@ async function _unifiedServiceRunner(serviceType, params) {
			// }

			if (prompt) {
+				// Ensure prompt exists before adding
				messages.push({ role: 'user', content: prompt });
			} else {
+				// Throw an error if the prompt is missing, as it's essential
				throw new Error('User prompt content is missing.');
			}

+			// 5. Prepare call parameters (using messages array)
			const callParams = {
				apiKey,
				modelId,
@@ -481,6 +480,7 @@ async function _unifiedServiceRunner(serviceType, params) {
				...restApiParams
			};

+			// 6. Attempt the call with retries
			providerResponse = await _attemptProviderCallWithRetries(
				providerApiFn,
				callParams,
@@ -489,6 +489,8 @@ async function _unifiedServiceRunner(serviceType, params) {
				currentRole
			);

+			// --- Log Telemetry & Capture Data ---
+			// Use providerResponse which contains the usage data directly for text/object
			if (userId && providerResponse && providerResponse.usage) {
				try {
					telemetryData = await logAiUsage({
@@ -510,22 +512,26 @@ async function _unifiedServiceRunner(serviceType, params) {
					`Cannot log telemetry for ${commandName} (${providerName}/${modelId}): AI result missing 'usage' data. (May be expected for streams)`
				);
			}
+			// --- End Log Telemetry ---

+			// --- Extract the correct main result based on serviceType ---
			let finalMainResult;
			if (serviceType === 'generateText') {
				finalMainResult = providerResponse.text;
			} else if (serviceType === 'generateObject') {
				finalMainResult = providerResponse.object;
			} else if (serviceType === 'streamText') {
-				finalMainResult = providerResponse;
+				finalMainResult = providerResponse; // Return the whole stream object
			} else {
				log(
					'error',
					`Unknown serviceType in _unifiedServiceRunner: ${serviceType}`
				);
-				finalMainResult = providerResponse;
+				finalMainResult = providerResponse; // Default to returning the whole object as fallback
			}
+			// --- End Main Result Extraction ---

+			// Return a composite object including the extracted main result and telemetry data
			return {
				mainResult: finalMainResult,
				telemetryData: telemetryData
@@ -558,7 +564,9 @@ async function _unifiedServiceRunner(serviceType, params) {
		}
	}

+	// If loop completes, all roles failed
	log('error', `All roles in the sequence [${sequence.join(', ')}] failed.`);
+	// Throw a new error with the cleaner message from the last failure
	throw new Error(lastCleanErrorMessage);
}
@@ -9,7 +9,6 @@ import chalk from 'chalk';
import boxen from 'boxen';
import fs from 'fs';
import https from 'https';
-import http from 'http';
import inquirer from 'inquirer';
import ora from 'ora'; // Import ora

@@ -31,8 +30,7 @@ import {
	updateSubtaskById,
	removeTask,
	findTaskById,
-	taskExists,
-	moveTask
+	taskExists
} from './task-manager.js';

import {
@@ -49,8 +47,7 @@ import {
	writeConfig,
	ConfigurationError,
	isConfigFilePresent,
-	getAvailableModels,
-	getBaseUrlForRole
+	getAvailableModels
} from './config-manager.js';

import {
@@ -155,64 +152,6 @@ async function runInteractiveSetup(projectRoot) {
	});
}
-
-// Helper function to fetch Ollama models (duplicated for CLI context)
-function fetchOllamaModelsCLI(baseUrl = 'http://localhost:11434/api') {
-	return new Promise((resolve) => {
-		try {
-			// Parse the base URL to extract hostname, port, and base path
-			const url = new URL(baseUrl);
-			const isHttps = url.protocol === 'https:';
-			const port = url.port || (isHttps ? 443 : 80);
-			const basePath = url.pathname.endsWith('/')
-				? url.pathname.slice(0, -1)
-				: url.pathname;
-
-			const options = {
-				hostname: url.hostname,
-				port: parseInt(port, 10),
-				path: `${basePath}/tags`,
-				method: 'GET',
-				headers: {
-					Accept: 'application/json'
-				}
-			};
-
-			const requestLib = isHttps ? https : http;
-			const req = requestLib.request(options, (res) => {
-				let data = '';
-				res.on('data', (chunk) => {
-					data += chunk;
-				});
-				res.on('end', () => {
-					if (res.statusCode === 200) {
-						try {
-							const parsedData = JSON.parse(data);
-							resolve(parsedData.models || []); // Return the array of models
-						} catch (e) {
-							console.error('Error parsing Ollama response:', e);
-							resolve(null); // Indicate failure
-						}
-					} else {
-						console.error(
-							`Ollama API request failed with status code: ${res.statusCode}`
-						);
-						resolve(null); // Indicate failure
-					}
-				});
-			});
-
-			req.on('error', (e) => {
-				console.error('Error fetching Ollama models:', e);
-				resolve(null); // Indicate failure
-			});
-			req.end();
-		} catch (e) {
-			console.error('Error parsing Ollama base URL:', e);
-			resolve(null); // Indicate failure
-		}
-	});
-}
-
// Helper to get choices and default index for a role
const getPromptData = (role, allowNone = false) => {
	const currentModel = currentModels[role]; // Use the fetched data
@@ -240,11 +179,6 @@ async function runInteractiveSetup(projectRoot) {
			value: '__CUSTOM_OPENROUTER__'
		};
-
-		const customOllamaOption = {
-			name: '* Custom Ollama model', // Symbol updated
-			value: '__CUSTOM_OLLAMA__'
-		};

		let choices = [];
		let defaultIndex = 0; // Default to 'Cancel'
@@ -290,7 +224,6 @@ async function runInteractiveSetup(projectRoot) {
		}
		commonPrefix.push(cancelOption);
		commonPrefix.push(customOpenRouterOption);
-		commonPrefix.push(customOllamaOption);

		let prefixLength = commonPrefix.length; // Initial prefix length

@@ -421,47 +354,6 @@ async function runInteractiveSetup(projectRoot) {
					setupSuccess = false;
					return true; // Continue setup, but mark as failed
				}
-			} else if (selectedValue === '__CUSTOM_OLLAMA__') {
-				isCustomSelection = true;
-				const { customId } = await inquirer.prompt([
-					{
-						type: 'input',
-						name: 'customId',
-						message: `Enter the custom Ollama Model ID for the ${role} role:`
-					}
-				]);
-				if (!customId) {
-					console.log(chalk.yellow('No custom ID entered. Skipping role.'));
-					return true; // Continue setup, but don't set this role
-				}
-				modelIdToSet = customId;
-				providerHint = 'ollama';
-				// Get the Ollama base URL from config for this role
-				const ollamaBaseUrl = getBaseUrlForRole(role, projectRoot);
-				// Validate against live Ollama list
-				const ollamaModels = await fetchOllamaModelsCLI(ollamaBaseUrl);
-				if (ollamaModels === null) {
-					console.error(
-						chalk.red(
-							`Error: Unable to connect to Ollama server at ${ollamaBaseUrl}. Please ensure Ollama is running and try again.`
-						)
-					);
-					setupSuccess = false;
-					return true; // Continue setup, but mark as failed
-				} else if (!ollamaModels.some((m) => m.model === modelIdToSet)) {
-					console.error(
-						chalk.red(
-							`Error: Model ID "${modelIdToSet}" not found in the Ollama instance. Please verify the model is pulled and available.`
-						)
-					);
-					console.log(
-						chalk.yellow(
-							`You can check available models with: curl ${ollamaBaseUrl}/tags`
-						)
-					);
-					setupSuccess = false;
-					return true; // Continue setup, but mark as failed
-				}
			} else if (
				selectedValue &&
				typeof selectedValue === 'object' &&
@@ -615,10 +507,6 @@ function registerCommands(programInstance) {
			'--append',
			'Append new tasks to existing tasks.json instead of overwriting'
		)
-		.option(
-			'-r, --research',
-			'Use Perplexity AI for research-backed task generation, providing more comprehensive and accurate task breakdown'
-		)
		.action(async (file, options) => {
			// Use input option if file argument not provided
			const inputFile = file || options.input;
@@ -627,7 +515,6 @@ function registerCommands(programInstance) {
			const outputPath = options.output;
			const force = options.force || false;
			const append = options.append || false;
-			const research = options.research || false;
			let useForce = force;
			let useAppend = append;

@@ -660,8 +547,7 @@ function registerCommands(programInstance) {
				spinner = ora('Parsing PRD and generating tasks...\n').start();
				await parsePRD(defaultPrdPath, outputPath, numTasks, {
					append: useAppend, // Changed key from useAppend to append
-					force: useForce, // Changed key from useForce to force
-					research: research
+					force: useForce // Changed key from useForce to force
				});
				spinner.succeed('Tasks generated successfully!');
				return;
@@ -685,15 +571,13 @@ function registerCommands(programInstance) {
					' -o, --output <file> Output file path (default: "tasks/tasks.json")\n' +
					' -n, --num-tasks <number> Number of tasks to generate (default: 10)\n' +
					' -f, --force Skip confirmation when overwriting existing tasks\n' +
-					' --append Append new tasks to existing tasks.json instead of overwriting\n' +
-					' -r, --research Use Perplexity AI for research-backed task generation\n\n' +
+					' --append Append new tasks to existing tasks.json instead of overwriting\n\n' +
					chalk.cyan('Example:') +
					'\n' +
					' task-master parse-prd requirements.txt --num-tasks 15\n' +
					' task-master parse-prd --input=requirements.txt\n' +
					' task-master parse-prd --force\n' +
-					' task-master parse-prd requirements_v2.txt --append\n' +
-					' task-master parse-prd requirements.txt --research\n\n' +
+					' task-master parse-prd requirements_v2.txt --append\n\n' +
					chalk.yellow('Note: This command will:') +
					'\n' +
					' 1. Look for a PRD file at scripts/prd.txt by default\n' +
@@ -721,19 +605,11 @@ function registerCommands(programInstance) {
				if (append) {
					console.log(chalk.blue('Appending to existing tasks...'));
				}
-				if (research) {
-					console.log(
-						chalk.blue(
-							'Using Perplexity AI for research-backed task generation'
-						)
-					);
-				}
-
				spinner = ora('Parsing PRD and generating tasks...\n').start();
				await parsePRD(inputFile, outputPath, numTasks, {
-					append: useAppend,
-					force: useForce,
-					research: research
+					useAppend: useAppend,
+					useForce: useForce
				});
				spinner.succeed('Tasks generated successfully!');
			} catch (error) {
@@ -1155,8 +1031,6 @@ function registerCommands(programInstance) {
	// set-status command
	programInstance
		.command('set-status')
-		.alias('mark')
-		.alias('set')
		.description('Set the status of a task')
		.option(
			'-i, --id <id>',
@@ -1337,12 +1211,6 @@ function registerCommands(programInstance) {
			'-r, --research',
			'Use Perplexity AI for research-backed complexity analysis'
		)
-		.option(
-			'-i, --id <ids>',
-			'Comma-separated list of specific task IDs to analyze (e.g., "1,3,5")'
-		)
-		.option('--from <id>', 'Starting task ID in a range to analyze')
-		.option('--to <id>', 'Ending task ID in a range to analyze')
		.action(async (options) => {
			const tasksPath = options.file || 'tasks/tasks.json';
			const outputPath = options.output;
@@ -1353,16 +1221,6 @@ function registerCommands(programInstance) {
			console.log(chalk.blue(`Analyzing task complexity from: ${tasksPath}`));
			console.log(chalk.blue(`Output report will be saved to: ${outputPath}`));

-			if (options.id) {
-				console.log(chalk.blue(`Analyzing specific task IDs: ${options.id}`));
-			} else if (options.from || options.to) {
-				const fromStr = options.from ? options.from : 'first';
-				const toStr = options.to ? options.to : 'last';
-				console.log(
-					chalk.blue(`Analyzing tasks in range: ${fromStr} to ${toStr}`)
-				);
-			}
-
			if (useResearch) {
				console.log(
					chalk.blue(
@@ -2491,135 +2349,6 @@ Examples:
		return; // Stop execution here
	});
-
-	// move-task command
-	programInstance
-		.command('move')
-		.description('Move a task or subtask to a new position')
-		.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
-		.option(
-			'--from <id>',
-			'ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated to move multiple tasks (e.g., "5,6,7")'
-		)
-		.option(
-			'--to <id>',
-			'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
-		)
-		.action(async (options) => {
-			const tasksPath = options.file;
-			const sourceId = options.from;
-			const destinationId = options.to;
-
-			if (!sourceId || !destinationId) {
-				console.error(
-					chalk.red('Error: Both --from and --to parameters are required')
-				);
-				console.log(
-					chalk.yellow(
-						'Usage: task-master move --from=<sourceId> --to=<destinationId>'
-					)
-				);
-				process.exit(1);
-			}
-
-			// Check if we're moving multiple tasks (comma-separated IDs)
-			const sourceIds = sourceId.split(',').map((id) => id.trim());
-			const destinationIds = destinationId.split(',').map((id) => id.trim());
-
-			// Validate that the number of source and destination IDs match
-			if (sourceIds.length !== destinationIds.length) {
-				console.error(
-					chalk.red(
-						'Error: The number of source and destination IDs must match'
-					)
-				);
-				console.log(
-					chalk.yellow('Example: task-master move --from=5,6,7 --to=10,11,12')
-				);
-				process.exit(1);
-			}
-
-			// If moving multiple tasks
-			if (sourceIds.length > 1) {
-				console.log(
-					chalk.blue(
-						`Moving multiple tasks: ${sourceIds.join(', ')} to ${destinationIds.join(', ')}...`
-					)
-				);
-
-				try {
-					// Read tasks data once to validate destination IDs
-					const tasksData = readJSON(tasksPath);
-					if (!tasksData || !tasksData.tasks) {
-						console.error(
-							chalk.red(`Error: Invalid or missing tasks file at ${tasksPath}`)
-						);
-						process.exit(1);
-					}
-
-					// Move tasks one by one
-					for (let i = 0; i < sourceIds.length; i++) {
-						const fromId = sourceIds[i];
-						const toId = destinationIds[i];
-
-						// Skip if source and destination are the same
-						if (fromId === toId) {
-							console.log(
-								chalk.yellow(`Skipping ${fromId} -> ${toId} (same ID)`)
-							);
-							continue;
-						}
-
-						console.log(
-							chalk.blue(`Moving task/subtask ${fromId} to ${toId}...`)
-						);
-						try {
-							await moveTask(
-								tasksPath,
-								fromId,
-								toId,
-								i === sourceIds.length - 1
-							);
-							console.log(
-								chalk.green(
-									`✓ Successfully moved task/subtask ${fromId} to ${toId}`
-								)
-							);
-						} catch (error) {
-							console.error(
-								chalk.red(`Error moving ${fromId} to ${toId}: ${error.message}`)
-							);
-							// Continue with the next task rather than exiting
-						}
-					}
-				} catch (error) {
-					console.error(chalk.red(`Error: ${error.message}`));
-					process.exit(1);
-				}
-			} else {
-				// Moving a single task (existing logic)
-				console.log(
-					chalk.blue(`Moving task/subtask ${sourceId} to ${destinationId}...`)
-				);
-
-				try {
-					const result = await moveTask(
-						tasksPath,
-						sourceId,
-						destinationId,
-						true
-					);
-					console.log(
-						chalk.green(
-							`✓ Successfully moved task/subtask ${sourceId} to ${destinationId}`
-						)
-					);
-				} catch (error) {
-					console.error(chalk.red(`Error: ${error.message}`));
-					process.exit(1);
-				}
-			}
-		});
-
	return programInstance;
}

@@ -1,19 +1,5 @@
 {
 "anthropic": [
-{
-"id": "claude-sonnet-4-20250514",
-"swe_score": 0.727,
-"cost_per_1m_tokens": { "input": 3.0, "output": 15.0 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 120000
-},
-{
-"id": "claude-opus-4-20250514",
-"swe_score": 0.725,
-"cost_per_1m_tokens": { "input": 15.0, "output": 75.0 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 120000
-},
 {
 "id": "claude-3-7-sonnet-20250219",
 "swe_score": 0.623,
@@ -205,43 +191,43 @@
 ],
 "ollama": [
 {
-"id": "devstral:latest",
+"id": "gemma3:27b",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
 },
 {
-"id": "qwen3:latest",
+"id": "gemma3:12b",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
 },
 {
-"id": "qwen3:14b",
+"id": "qwq",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
 },
 {
-"id": "qwen3:32b",
+"id": "deepseek-r1",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
 },
 {
-"id": "mistral-small3.1:latest",
+"id": "mistral-small3.1",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
 },
 {
-"id": "llama3.3:latest",
+"id": "llama3.3",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
 },
 {
-"id": "phi4:latest",
+"id": "phi4",
 "swe_score": 0,
 "cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"]
@@ -249,16 +235,9 @@
 ],
 "openrouter": [
 {
-"id": "google/gemini-2.5-flash-preview-05-20",
+"id": "google/gemini-2.0-flash-001",
 "swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
+"cost_per_1m_tokens": { "input": 0.1, "output": 0.4 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 1048576
-},
-{
-"id": "google/gemini-2.5-flash-preview-05-20:thinking",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.15, "output": 3.5 },
 "allowed_roles": ["main", "fallback"],
 "max_tokens": 1048576
 },
@@ -284,25 +263,40 @@
 "max_tokens": 64000
 },
 {
-"id": "openai/gpt-4.1",
+"id": "deepseek/deepseek-r1:free",
 "swe_score": 0,
-"cost_per_1m_tokens": { "input": 2, "output": 8 },
+"cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"],
-"max_tokens": 1000000
+"max_tokens": 163840
+},
+
+{
+"id": "microsoft/mai-ds-r1:free",
+"swe_score": 0,
+"cost_per_1m_tokens": { "input": 0, "output": 0 },
+"allowed_roles": ["main", "fallback"],
+"max_tokens": 163840
 },
 {
-"id": "openai/gpt-4.1-mini",
+"id": "google/gemini-2.5-pro-preview-03-25",
 "swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.4, "output": 1.6 },
+"cost_per_1m_tokens": { "input": 1.25, "output": 10 },
 "allowed_roles": ["main", "fallback"],
-"max_tokens": 1000000
+"max_tokens": 65535
 },
 {
-"id": "openai/gpt-4.1-nano",
+"id": "google/gemini-2.5-flash-preview",
 "swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.1, "output": 0.4 },
+"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
-"allowed_roles": ["main", "fallback"],
+"allowed_roles": ["main"],
-"max_tokens": 1000000
+"max_tokens": 65535
+},
+{
+"id": "google/gemini-2.5-flash-preview:thinking",
+"swe_score": 0,
+"cost_per_1m_tokens": { "input": 0.15, "output": 3.5 },
+"allowed_roles": ["main"],
+"max_tokens": 65535
 },
 {
 "id": "openai/o3",
@@ -311,20 +305,6 @@
 "allowed_roles": ["main", "fallback"],
 "max_tokens": 200000
 },
-{
-"id": "openai/codex-mini",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 1.5, "output": 6 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 100000
-},
-{
-"id": "openai/gpt-4o-mini",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.15, "output": 0.6 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 100000
-},
 {
 "id": "openai/o4-mini",
 "swe_score": 0.45,
@@ -354,18 +334,46 @@
 "max_tokens": 1048576
 },
 {
-"id": "meta-llama/llama-4-maverick",
+"id": "google/gemma-3-12b-it:free",
 "swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.18, "output": 0.6 },
+"cost_per_1m_tokens": { "input": 0, "output": 0 },
 "allowed_roles": ["main", "fallback"],
-"max_tokens": 1000000
+"max_tokens": 131072
 },
 {
-"id": "meta-llama/llama-4-scout",
+"id": "google/gemma-3-12b-it",
 "swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.08, "output": 0.3 },
+"cost_per_1m_tokens": { "input": 50, "output": 100 },
 "allowed_roles": ["main", "fallback"],
-"max_tokens": 1000000
+"max_tokens": 131072
+},
+{
+"id": "google/gemma-3-27b-it:free",
+"swe_score": 0,
+"cost_per_1m_tokens": { "input": 0, "output": 0 },
+"allowed_roles": ["main", "fallback"],
+"max_tokens": 96000
+},
+{
+"id": "google/gemma-3-27b-it",
+"swe_score": 0,
+"cost_per_1m_tokens": { "input": 100, "output": 200 },
+"allowed_roles": ["main", "fallback"],
+"max_tokens": 131072
+},
+{
+"id": "qwen/qwq-32b:free",
+"swe_score": 0,
+"cost_per_1m_tokens": { "input": 0, "output": 0 },
+"allowed_roles": ["main", "fallback"],
+"max_tokens": 40000
+},
+{
+"id": "qwen/qwq-32b",
+"swe_score": 0,
+"cost_per_1m_tokens": { "input": 150, "output": 200 },
+"allowed_roles": ["main", "fallback"],
+"max_tokens": 131072
 },
 {
 "id": "qwen/qwen-max",
@@ -381,13 +389,6 @@
 "allowed_roles": ["main", "fallback"],
 "max_tokens": 1000000
 },
-{
-"id": "qwen/qwen3-235b-a22b",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.14, "output": 2 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 24000
-},
 {
 "id": "mistralai/mistral-small-3.1-24b-instruct:free",
 "swe_score": 0,
@@ -402,20 +403,6 @@
 "allowed_roles": ["main", "fallback"],
 "max_tokens": 128000
 },
-{
-"id": "mistralai/devstral-small",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.1, "output": 0.3 },
-"allowed_roles": ["main"],
-"max_tokens": 110000
-},
-{
-"id": "mistralai/mistral-nemo",
-"swe_score": 0,
-"cost_per_1m_tokens": { "input": 0.03, "output": 0.07 },
-"allowed_roles": ["main", "fallback"],
-"max_tokens": 100000
-},
 {
 "id": "thudm/glm-4-32b:free",
 "swe_score": 0,
@@ -23,7 +23,6 @@ import updateSubtaskById from './task-manager/update-subtask-by-id.js';
 import removeTask from './task-manager/remove-task.js';
 import taskExists from './task-manager/task-exists.js';
 import isTaskDependentOn from './task-manager/is-task-dependent.js';
-import moveTask from './task-manager/move-task.js';
 import { readComplexityReport } from './utils.js';
 // Export task manager functions
 export {
@@ -47,6 +46,5 @@ export {
 findTaskById,
 taskExists,
 isTaskDependentOn,
-moveTask,
 readComplexityReport
 };
@@ -3,7 +3,6 @@ import chalk from 'chalk';
 import boxen from 'boxen';
 import Table from 'cli-table3';
 import { z } from 'zod';
-import Fuse from 'fuse.js'; // Import Fuse.js for advanced fuzzy search

 import {
 displayBanner,
@@ -28,13 +27,7 @@ const AiTaskDataSchema = z.object({
 .describe('In-depth implementation details, considerations, and guidance'),
 testStrategy: z
 .string()
-.describe('Detailed approach for verifying task completion'),
-dependencies: z
-.array(z.number())
-.optional()
-.describe(
-'Array of task IDs that this task depends on (must be completed before this task can start)'
-)
+.describe('Detailed approach for verifying task completion')
 });

 /**
@@ -99,81 +92,12 @@ async function addTask(
 }
 };

-/**
-* Recursively builds a dependency graph for a given task
-* @param {Array} tasks - All tasks from tasks.json
-* @param {number} taskId - ID of the task to analyze
-* @param {Set} visited - Set of already visited task IDs
-* @param {Map} depthMap - Map of task ID to its depth in the graph
-* @param {number} depth - Current depth in the recursion
-* @return {Object} Dependency graph data
-*/
-function buildDependencyGraph(
-tasks,
-taskId,
-visited = new Set(),
-depthMap = new Map(),
-depth = 0
-) {
-// Skip if we've already visited this task or it doesn't exist
-if (visited.has(taskId)) {
-return null;
-}
-
-// Find the task
-const task = tasks.find((t) => t.id === taskId);
-if (!task) {
-return null;
-}
-
-// Mark as visited
-visited.add(taskId);
-
-// Update depth if this is a deeper path to this task
-if (!depthMap.has(taskId) || depth < depthMap.get(taskId)) {
-depthMap.set(taskId, depth);
-}
-
-// Process dependencies
-const dependencyData = [];
-if (task.dependencies && task.dependencies.length > 0) {
-for (const depId of task.dependencies) {
-const depData = buildDependencyGraph(
-tasks,
-depId,
-visited,
-depthMap,
-depth + 1
-);
-if (depData) {
-dependencyData.push(depData);
-}
-}
-}
-
-return {
-id: task.id,
-title: task.title,
-description: task.description,
-status: task.status,
-dependencies: dependencyData
-};
-}
-
 try {
 // Read the existing tasks
-let data = readJSON(tasksPath);
-
-// If tasks.json doesn't exist or is invalid, create a new one
+const data = readJSON(tasksPath);
 if (!data || !data.tasks) {
-report('tasks.json not found or invalid. Creating a new one.', 'info');
-// Create default tasks data structure
-data = {
-tasks: []
-};
-// Ensure the directory exists and write the new file
-writeJSON(tasksPath, data);
-report('Created new tasks.json file with empty tasks array.', 'info');
+report('Invalid or missing tasks.json.', 'error');
+throw new Error('Invalid or missing tasks.json.');
 }

 // Find the highest task ID to determine the next ID
@@ -213,29 +137,6 @@ async function addTask(
 // Ensure dependencies are numbers
 const numericDependencies = dependencies.map((dep) => parseInt(dep, 10));

-// Build dependency graphs for explicitly specified dependencies
-const dependencyGraphs = [];
-const allRelatedTaskIds = new Set();
-const depthMap = new Map();
-
-// First pass: build a complete dependency graph for each specified dependency
-for (const depId of numericDependencies) {
-const graph = buildDependencyGraph(
-data.tasks,
-depId,
-new Set(),
-depthMap
-);
-if (graph) {
-dependencyGraphs.push(graph);
-}
-}
-
-// Second pass: build a set of all related task IDs for flat analysis
-for (const [taskId, depth] of depthMap.entries()) {
-allRelatedTaskIds.add(taskId);
-}
-
 let taskData;

 // Check if manual task data is provided
@@ -262,644 +163,36 @@ async function addTask(

 // Create context string for task creation prompt
 let contextTasks = '';
-
-// Create a dependency map for better understanding of the task relationships
-const taskMap = {};
-data.tasks.forEach((t) => {
-// For each task, only include id, title, description, and dependencies
-taskMap[t.id] = {
-id: t.id,
-title: t.title,
-description: t.description,
-dependencies: t.dependencies || [],
-status: t.status
-};
-});
-
-// CLI-only feedback for the dependency analysis
-if (outputFormat === 'text') {
-console.log(
-boxen(chalk.cyan.bold('Task Context Analysis') + '\n', {
-padding: { top: 0, bottom: 0, left: 1, right: 1 },
-margin: { top: 0, bottom: 0 },
-borderColor: 'cyan',
-borderStyle: 'round'
-})
-);
-}
-
-// Initialize variables that will be used in either branch
-let uniqueDetailedTasks = [];
-let dependentTasks = [];
-let promptCategory = null;
-
 if (numericDependencies.length > 0) {
-// If specific dependencies were provided, focus on them
+const dependentTasks = data.tasks.filter((t) =>
-// Get all tasks that were found in the dependency graph
-dependentTasks = Array.from(allRelatedTaskIds)
-.map((id) => data.tasks.find((t) => t.id === id))
-.filter(Boolean);
-
-// Sort by depth in the dependency chain
-dependentTasks.sort((a, b) => {
-const depthA = depthMap.get(a.id) || 0;
-const depthB = depthMap.get(b.id) || 0;
-return depthA - depthB; // Lowest depth (root dependencies) first
-});
-
-// Limit the number of detailed tasks to avoid context explosion
-uniqueDetailedTasks = dependentTasks.slice(0, 8);
-
-contextTasks = `\nThis task relates to a dependency structure with ${dependentTasks.length} related tasks in the chain.\n\nDirect dependencies:`;
-const directDeps = data.tasks.filter((t) =>
 numericDependencies.includes(t.id)
 );
-contextTasks += `\n${directDeps.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`).join('\n')}`;
-
-// Add an overview of indirect dependencies if present
+contextTasks = `\nThis task depends on the following tasks:\n${dependentTasks
+.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
+.join('\n')}`;
-const indirectDeps = dependentTasks.filter(
-(t) => !numericDependencies.includes(t.id)
-);
-if (indirectDeps.length > 0) {
-contextTasks += `\n\nIndirect dependencies (dependencies of dependencies):`;
-contextTasks += `\n${indirectDeps
-.slice(0, 5)
-.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
-.join('\n')}`;
-if (indirectDeps.length > 5) {
-contextTasks += `\n- ... and ${indirectDeps.length - 5} more indirect dependencies`;
-}
-}
-
-// Add more details about each dependency, prioritizing direct dependencies
-contextTasks += `\n\nDetailed information about dependencies:`;
-for (const depTask of uniqueDetailedTasks) {
-const depthInfo = depthMap.get(depTask.id)
-? ` (depth: ${depthMap.get(depTask.id)})`
-: '';
-const isDirect = numericDependencies.includes(depTask.id)
-? ' [DIRECT DEPENDENCY]'
-: '';
-
-contextTasks += `\n\n------ Task ${depTask.id}${isDirect}${depthInfo}: ${depTask.title} ------\n`;
-contextTasks += `Description: ${depTask.description}\n`;
-contextTasks += `Status: ${depTask.status || 'pending'}\n`;
-contextTasks += `Priority: ${depTask.priority || 'medium'}\n`;
-
-// List its dependencies
-if (depTask.dependencies && depTask.dependencies.length > 0) {
-const depDeps = depTask.dependencies.map((dId) => {
-const depDepTask = data.tasks.find((t) => t.id === dId);
-return depDepTask
-? `Task ${dId}: ${depDepTask.title}`
-: `Task ${dId}`;
-});
-contextTasks += `Dependencies: ${depDeps.join(', ')}\n`;
-} else {
-contextTasks += `Dependencies: None\n`;
-}
-
-// Add implementation details but truncate if too long
-if (depTask.details) {
-const truncatedDetails =
-depTask.details.length > 400
-? depTask.details.substring(0, 400) + '... (truncated)'
-: depTask.details;
-contextTasks += `Implementation Details: ${truncatedDetails}\n`;
-}
-}
-
-// Add dependency chain visualization
-if (dependencyGraphs.length > 0) {
-contextTasks += '\n\nDependency Chain Visualization:';
-
-// Helper function to format dependency chain as text
-function formatDependencyChain(
-node,
-prefix = '',
-isLast = true,
-depth = 0
-) {
-if (depth > 3) return ''; // Limit depth to avoid excessive nesting
-
-const connector = isLast ? '└── ' : '├── ';
-const childPrefix = isLast ? ' ' : '│ ';
-
-let result = `\n${prefix}${connector}Task ${node.id}: ${node.title}`;
-
-if (node.dependencies && node.dependencies.length > 0) {
-for (let i = 0; i < node.dependencies.length; i++) {
-const isLastChild = i === node.dependencies.length - 1;
-result += formatDependencyChain(
-node.dependencies[i],
-prefix + childPrefix,
-isLastChild,
-depth + 1
-);
-}
-}
-
-return result;
-}
-
-// Format each dependency graph
-for (const graph of dependencyGraphs) {
-contextTasks += formatDependencyChain(graph);
-}
-}
-
-// Show dependency analysis in CLI mode
-if (outputFormat === 'text') {
-if (directDeps.length > 0) {
-console.log(chalk.gray(` Explicitly specified dependencies:`));
-directDeps.forEach((t) => {
-console.log(
-chalk.yellow(` • Task ${t.id}: ${truncate(t.title, 50)}`)
-);
-});
-}
-
-if (indirectDeps.length > 0) {
-console.log(
-chalk.gray(
-`\n Indirect dependencies (${indirectDeps.length} total):`
-)
-);
-indirectDeps.slice(0, 3).forEach((t) => {
-const depth = depthMap.get(t.id) || 0;
-console.log(
-chalk.cyan(
-` • Task ${t.id} [depth ${depth}]: ${truncate(t.title, 45)}`
-)
-);
-});
-if (indirectDeps.length > 3) {
-console.log(
-chalk.cyan(
-` • ... and ${indirectDeps.length - 3} more indirect dependencies`
-)
-);
-}
-}
-
-// Visualize the dependency chain
-if (dependencyGraphs.length > 0) {
-console.log(chalk.gray(`\n Dependency chain visualization:`));
-
-// Convert dependency graph to ASCII art for terminal
-function visualizeDependencyGraph(
-node,
-prefix = '',
-isLast = true,
-depth = 0
-) {
-if (depth > 2) return; // Limit depth for display
-
-const connector = isLast ? '└── ' : '├── ';
-const childPrefix = isLast ? ' ' : '│ ';
-
-console.log(
-chalk.blue(
-` ${prefix}${connector}Task ${node.id}: ${truncate(node.title, 40)}`
-)
-);
-
-if (node.dependencies && node.dependencies.length > 0) {
-for (let i = 0; i < node.dependencies.length; i++) {
-const isLastChild = i === node.dependencies.length - 1;
-visualizeDependencyGraph(
-node.dependencies[i],
-prefix + childPrefix,
-isLastChild,
-depth + 1
-);
-}
-}
-}
-
-// Visualize each dependency graph
-for (const graph of dependencyGraphs) {
-visualizeDependencyGraph(graph);
-}
-}
-
-console.log(); // Add spacing
-}
 } else {
-// If no dependencies provided, use Fuse.js to find semantically related tasks
-// Create fuzzy search index for all tasks
-const searchOptions = {
-includeScore: true, // Return match scores
-threshold: 0.4, // Lower threshold = stricter matching (range 0-1)
-keys: [
-{ name: 'title', weight: 2 }, // Title is most important
-{ name: 'description', weight: 1.5 }, // Description is next
-{ name: 'details', weight: 0.8 }, // Details is less important
-// Search dependencies to find tasks that depend on similar things
-{ name: 'dependencyTitles', weight: 0.5 }
-],
-// Sort matches by score (lower is better)
-shouldSort: true,
-// Allow searching in nested properties
-useExtendedSearch: true,
-// Return up to 15 matches
-limit: 15
-};
-
-// Prepare task data with dependencies expanded as titles for better semantic search
-const searchableTasks = data.tasks.map((task) => {
-// Get titles of this task's dependencies if they exist
-const dependencyTitles =
-task.dependencies?.length > 0
-? task.dependencies
-.map((depId) => {
-const depTask = data.tasks.find((t) => t.id === depId);
-return depTask ? depTask.title : '';
-})
-.filter((title) => title)
-.join(' ')
-: '';
-
-return {
-...task,
-dependencyTitles
-};
-});
-
-// Create search index using Fuse.js
-const fuse = new Fuse(searchableTasks, searchOptions);
-
-// Extract significant words and phrases from the prompt
-const promptWords = prompt
-.toLowerCase()
-.replace(/[^\w\s-]/g, ' ') // Replace non-alphanumeric chars with spaces
-.split(/\s+/)
-.filter((word) => word.length > 3); // Words at least 4 chars
-
-// Use the user's prompt for fuzzy search
-const fuzzyResults = fuse.search(prompt);
-
-// Also search for each significant word to catch different aspects
-let wordResults = [];
-for (const word of promptWords) {
-if (word.length > 5) {
-// Only use significant words
-const results = fuse.search(word);
-if (results.length > 0) {
-wordResults.push(...results);
-}
-}
-}
-
-// Merge and deduplicate results
-const mergedResults = [...fuzzyResults];
-
-// Add word results that aren't already in fuzzyResults
-for (const wordResult of wordResults) {
-if (!mergedResults.some((r) => r.item.id === wordResult.item.id)) {
-mergedResults.push(wordResult);
-}
-}
-
-// Group search results by relevance
-const highRelevance = mergedResults
-.filter((result) => result.score < 0.25)
-.map((result) => result.item);
-
-const mediumRelevance = mergedResults
-.filter((result) => result.score >= 0.25 && result.score < 0.4)
-.map((result) => result.item);
-
-// Get recent tasks (newest first)
 const recentTasks = [...data.tasks]
 .sort((a, b) => b.id - a.id)
-.slice(0, 5);
-
-// Combine high relevance, medium relevance, and recent tasks
+.slice(0, 3);
+if (recentTasks.length > 0) {
+contextTasks = `\nRecent tasks in the project:\n${recentTasks
-// Prioritize high relevance first
-const allRelevantTasks = [...highRelevance];
-
-// Add medium relevance if not already included
-for (const task of mediumRelevance) {
-if (!allRelevantTasks.some((t) => t.id === task.id)) {
-allRelevantTasks.push(task);
-}
-}
-
-// Add recent tasks if not already included
-for (const task of recentTasks) {
-if (!allRelevantTasks.some((t) => t.id === task.id)) {
-allRelevantTasks.push(task);
-}
-}
-
-// Get top N results for context
-const relatedTasks = allRelevantTasks.slice(0, 8);
-
-// Also look for tasks with similar purposes or categories
-const purposeCategories = [
-{ pattern: /(command|cli|flag)/i, label: 'CLI commands' },
-{ pattern: /(task|subtask|add)/i, label: 'Task management' },
-{ pattern: /(dependency|depend)/i, label: 'Dependency handling' },
-{ pattern: /(AI|model|prompt)/i, label: 'AI integration' },
-{ pattern: /(UI|display|show)/i, label: 'User interface' },
-{ pattern: /(schedule|time|cron)/i, label: 'Scheduling' }, // Added scheduling category
-{ pattern: /(config|setting|option)/i, label: 'Configuration' } // Added configuration category
-];
-
-promptCategory = purposeCategories.find((cat) =>
-cat.pattern.test(prompt)
-);
-const categoryTasks = promptCategory
-? data.tasks
-.filter(
-(t) =>
-promptCategory.pattern.test(t.title) ||
-promptCategory.pattern.test(t.description) ||
-(t.details && promptCategory.pattern.test(t.details))
-)
-.filter((t) => !relatedTasks.some((rt) => rt.id === t.id))
-.slice(0, 3)
-: [];
-
-// Format basic task overviews
-if (relatedTasks.length > 0) {
-contextTasks = `\nRelevant tasks identified by semantic similarity:\n${relatedTasks
-.map((t, i) => {
-const relevanceMarker = i < highRelevance.length ? '⭐ ' : '';
-return `- ${relevanceMarker}Task ${t.id}: ${t.title} - ${t.description}`;
-})
-.join('\n')}`;
-}
-
-if (categoryTasks.length > 0) {
-contextTasks += `\n\nTasks related to ${promptCategory.label}:\n${categoryTasks
 .map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
 .join('\n')}`;
 }
-
-if (
-recentTasks.length > 0 &&
-!contextTasks.includes('Recently created tasks')
-) {
-contextTasks += `\n\nRecently created tasks:\n${recentTasks
-.filter((t) => !relatedTasks.some((rt) => rt.id === t.id))
-.slice(0, 3)
-.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
-.join('\n')}`;
-}
-
-// Add detailed information about the most relevant tasks
-const allDetailedTasks = [
-...relatedTasks.slice(0, 5),
-...categoryTasks.slice(0, 2)
-];
-uniqueDetailedTasks = Array.from(
-new Map(allDetailedTasks.map((t) => [t.id, t])).values()
-).slice(0, 8);
-
-if (uniqueDetailedTasks.length > 0) {
-contextTasks += `\n\nDetailed information about relevant tasks:`;
-for (const task of uniqueDetailedTasks) {
-contextTasks += `\n\n------ Task ${task.id}: ${task.title} ------\n`;
-contextTasks += `Description: ${task.description}\n`;
-contextTasks += `Status: ${task.status || 'pending'}\n`;
-contextTasks += `Priority: ${task.priority || 'medium'}\n`;
-if (task.dependencies && task.dependencies.length > 0) {
-// Format dependency list with titles
-const depList = task.dependencies.map((depId) => {
-const depTask = data.tasks.find((t) => t.id === depId);
-return depTask
-? `Task ${depId} (${depTask.title})`
-: `Task ${depId}`;
-});
-contextTasks += `Dependencies: ${depList.join(', ')}\n`;
-}
-// Add implementation details but truncate if too long
-if (task.details) {
-const truncatedDetails =
-task.details.length > 400
-? task.details.substring(0, 400) + '... (truncated)'
-: task.details;
-contextTasks += `Implementation Details: ${truncatedDetails}\n`;
-}
-}
-}
-
-// Add a concise view of the task dependency structure
-contextTasks += '\n\nSummary of task dependencies in the project:';
-
-// Get pending/in-progress tasks that might be most relevant based on fuzzy search
-// Prioritize tasks from our similarity search
-const relevantTaskIds = new Set(uniqueDetailedTasks.map((t) => t.id));
-const relevantPendingTasks = data.tasks
-.filter(
-(t) =>
-(t.status === 'pending' || t.status === 'in-progress') &&
-// Either in our relevant set OR has relevant words in title/description
-(relevantTaskIds.has(t.id) ||
-promptWords.some(
-(word) =>
-t.title.toLowerCase().includes(word) ||
-t.description.toLowerCase().includes(word)
-))
-)
-.slice(0, 10);
-
-for (const task of relevantPendingTasks) {
-const depsStr =
-task.dependencies && task.dependencies.length > 0
-? task.dependencies.join(', ')
-: 'None';
-contextTasks += `\n- Task ${task.id}: depends on [${depsStr}]`;
-}
-
-// Additional analysis of common patterns
-const similarPurposeTasks = promptCategory
-? data.tasks.filter(
-(t) =>
-promptCategory.pattern.test(t.title) ||
-promptCategory.pattern.test(t.description)
-)
-: [];
-
-let commonDeps = []; // Initialize commonDeps
-
-if (similarPurposeTasks.length > 0) {
-contextTasks += `\n\nCommon patterns for ${promptCategory ? promptCategory.label : 'similar'} tasks:`;
-
-// Collect dependencies from similar purpose tasks
-const similarDeps = similarPurposeTasks
-.filter((t) => t.dependencies && t.dependencies.length > 0)
-.map((t) => t.dependencies)
-.flat();
-
-// Count frequency of each dependency
-const depCounts = {};
-similarDeps.forEach((dep) => {
-depCounts[dep] = (depCounts[dep] || 0) + 1;
-});
-
-// Get most common dependencies for similar tasks
-commonDeps = Object.entries(depCounts)
-.sort((a, b) => b[1] - a[1])
-.slice(0, 5);
-
-if (commonDeps.length > 0) {
-contextTasks += '\nMost common dependencies for similar tasks:';
-commonDeps.forEach(([depId, count]) => {
-const depTask = data.tasks.find((t) => t.id === parseInt(depId));
-if (depTask) {
-contextTasks += `\n- Task ${depId} (used by ${count} similar tasks): ${depTask.title}`;
-}
-});
-}
-}
-
-// Show fuzzy search analysis in CLI mode
-if (outputFormat === 'text') {
-console.log(
-chalk.gray(
-` Fuzzy search across ${data.tasks.length} tasks using full prompt and ${promptWords.length} keywords`
-)
-);
-
-if (highRelevance.length > 0) {
-console.log(
-chalk.gray(`\n High relevance matches (score < 0.25):`)
-);
-highRelevance.slice(0, 5).forEach((t) => {
-console.log(
-chalk.yellow(` • ⭐ Task ${t.id}: ${truncate(t.title, 50)}`)
-);
-});
-}
-
-if (mediumRelevance.length > 0) {
-console.log(
-chalk.gray(`\n Medium relevance matches (score < 0.4):`)
-);
-mediumRelevance.slice(0, 3).forEach((t) => {
-console.log(
-chalk.green(` • Task ${t.id}: ${truncate(t.title, 50)}`)
-);
-});
-}
-
-if (promptCategory && categoryTasks.length > 0) {
-console.log(
-chalk.gray(`\n Tasks related to ${promptCategory.label}:`)
-);
-categoryTasks.forEach((t) => {
-console.log(
-chalk.magenta(` • Task ${t.id}: ${truncate(t.title, 50)}`)
-);
-});
-}
-
-// Show dependency patterns
-if (commonDeps && commonDeps.length > 0) {
-console.log(
-chalk.gray(`\n Common dependency patterns for similar tasks:`)
-);
-commonDeps.slice(0, 3).forEach(([depId, count]) => {
-const depTask = data.tasks.find((t) => t.id === parseInt(depId));
-if (depTask) {
-console.log(
-chalk.blue(
-` • Task ${depId} (${count}x): ${truncate(depTask.title, 45)}`
-)
-);
-}
-});
-}
-
-// Add information about which tasks will be provided in detail
-if (uniqueDetailedTasks.length > 0) {
-console.log(
-chalk.gray(
-`\n Providing detailed context for ${uniqueDetailedTasks.length} most relevant tasks:`
-)
-);
-uniqueDetailedTasks.forEach((t) => {
-const isHighRelevance = highRelevance.some(
-(ht) => ht.id === t.id
-);
-const relevanceIndicator = isHighRelevance ? '⭐ ' : '';
-console.log(
-chalk.cyan(
-` • ${relevanceIndicator}Task ${t.id}: ${truncate(t.title, 40)}`
-)
-);
-});
-}
-
-console.log(); // Add spacing
-}
 }

-// DETERMINE THE ACTUAL COUNT OF DETAILED TASKS BEING USED FOR AI CONTEXT
-let actualDetailedTasksCount = 0;
-if (numericDependencies.length > 0) {
-// In explicit dependency mode, we used 'uniqueDetailedTasks' derived from 'dependentTasks'
-// Ensure 'uniqueDetailedTasks' from THAT scope is used or re-evaluate.
-// For simplicity, let's assume 'dependentTasks' reflects the detailed tasks.
-actualDetailedTasksCount = dependentTasks.length;
-} else {
-// In fuzzy search mode, 'uniqueDetailedTasks' from THIS scope is correct.
-actualDetailedTasksCount = uniqueDetailedTasks
-? uniqueDetailedTasks.length
-: 0;
-}
-
-// Add a visual transition to show we're moving to AI generation
-console.log(
-boxen(
-chalk.white.bold('AI Task Generation') +
-`\n\n${chalk.gray('Analyzing context and generating task details using AI...')}` +
-`\n${chalk.cyan('Context size: ')}${chalk.yellow(contextTasks.length.toLocaleString())} characters` +
-`\n${chalk.cyan('Dependency detection: ')}${chalk.yellow(numericDependencies.length > 0 ? 'Explicit dependencies' : 'Auto-discovery mode')}` +
-`\n${chalk.cyan('Detailed tasks: ')}${chalk.yellow(
-numericDependencies.length > 0
-? dependentTasks.length // Use length of tasks from explicit dependency path
-: uniqueDetailedTasks.length // Use length of tasks from fuzzy search path
-)}` +
-(promptCategory
-? `\n${chalk.cyan('Category detected: ')}${chalk.yellow(promptCategory.label)}`
-: ''),
-{
-padding: { top: 0, bottom: 1, left: 1, right: 1 },
-margin: { top: 1, bottom: 0 },
-borderColor: 'white',
-borderStyle: 'round'
-}
-)
-);
-console.log(); // Add spacing
-
-// System Prompt - Enhanced for dependency awareness
+// System Prompt
 const systemPrompt =
-"You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema. Pay special attention to dependencies between tasks, ensuring the new task correctly references any tasks it depends on.\n\n" +
-'When determining dependencies for a new task, follow these principles:\n' +
-'1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n' +
-'2. Prioritize task dependencies that are semantically related to the functionality being built.\n' +
-'3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n' +
-'4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n' +
-'5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n' +
-"6. Pay special attention to foundation tasks (1-5) but don't automatically include them without reason.\n" +
-'7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\n' +
-'The dependencies array should contain task IDs (numbers) of prerequisite tasks.\n';
+"You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema.";

 // Task Structure Description (for user prompt)
 const taskStructureDesc = `
 {
 "title": "Task title goes here",
 "description": "A concise one or two sentence description of what the task involves",
-"details": "Detailed implementation steps, considerations, code examples, or technical approach",
-"testStrategy": "Specific steps to verify correct implementation and functionality",
-"dependencies": [1, 3] // Example: IDs of tasks that must be completed before this task
-}
-`;
+"details": "In-depth implementation details, considerations, and guidance.",
+"testStrategy": "Detailed approach for verifying task completion."
+}`;

 // Add any manually provided details to the prompt for context
 let contextFromArgs = '';
@@ -913,18 +206,15 @@ async function addTask(
 contextFromArgs += `\n- Additional Test Strategy Context: "${manualTaskData.testStrategy}"`;

 // User Prompt
-const userPrompt = `You are generating the details for Task #${newTaskId}. Based on the user's request: "${prompt}", create a comprehensive new task for a software development project.
+const userPrompt = `Create a comprehensive new task (Task #${newTaskId}) for a software development project based on this description: "${prompt}"

 ${contextTasks}
 ${contextFromArgs ? `\nConsider these additional details provided by the user:${contextFromArgs}` : ''}
-
-Based on the information about existing tasks provided above, include appropriate dependencies in the "dependencies" array. Only include task IDs that this new task directly depends on.

 Return your answer as a single JSON object matching the schema precisely:
 ${taskStructureDesc}

-Make sure the details and test strategy are comprehensive and specific. DO NOT include the task ID in the title.
-`;
+Make sure the details and test strategy are thorough and specific.`;

 // Start the loading indicator - only for text mode
 if (outputFormat === 'text') {
@@ -997,32 +287,11 @@ async function addTask(
 details: taskData.details || '',
 testStrategy: taskData.testStrategy || '',
 status: 'pending',
-dependencies: taskData.dependencies?.length
-? taskData.dependencies
-: numericDependencies, // Use AI-suggested dependencies if available, fallback to manually specified
+dependencies: numericDependencies, // Use validated numeric dependencies
 priority: effectivePriority,
 subtasks: [] // Initialize with empty subtasks array
 };

-// Additional check: validate all dependencies in the AI response
-if (taskData.dependencies?.length) {
-const allValidDeps = taskData.dependencies.every((depId) => {
-const numDepId = parseInt(depId, 10);
-return !isNaN(numDepId) && data.tasks.some((t) => t.id === numDepId);
-});
-
-if (!allValidDeps) {
-report(
-'AI suggested invalid dependencies. Filtering them out...',
-'warn'
-);
-newTask.dependencies = taskData.dependencies.filter((depId) => {
-const numDepId = parseInt(depId, 10);
-return !isNaN(numDepId) && data.tasks.some((t) => t.id === numDepId);
-});
-}
-}
-
 // Add the task to the tasks array
 data.tasks.push(newTask);

@@ -1071,72 +340,6 @@ async function addTask(
 }
 };

-// Check if AI added new dependencies that weren't explicitly provided
-const aiAddedDeps = newTask.dependencies.filter(
-(dep) => !numericDependencies.includes(dep)
-);
-
-// Check if AI removed any dependencies that were explicitly provided
-const aiRemovedDeps = numericDependencies.filter(
-(dep) => !newTask.dependencies.includes(dep)
-);
-
-// Get task titles for dependencies to display
-const depTitles = {};
-newTask.dependencies.forEach((dep) => {
-const depTask = data.tasks.find((t) => t.id === dep);
-if (depTask) {
-depTitles[dep] = truncate(depTask.title, 30);
-}
-});
-
-// Prepare dependency display string
-let dependencyDisplay = '';
-if (newTask.dependencies.length > 0) {
-dependencyDisplay = chalk.white('Dependencies:') + '\n';
-newTask.dependencies.forEach((dep) => {
-const isAiAdded = aiAddedDeps.includes(dep);
-const depType = isAiAdded ? chalk.yellow(' (AI suggested)') : '';
-dependencyDisplay +=
-chalk.white(
-` - ${dep}: ${depTitles[dep] || 'Unknown task'}${depType}`
-) + '\n';
-});
-} else {
-dependencyDisplay = chalk.white('Dependencies: None') + '\n';
-}
-
-// Add info about removed dependencies if any
-if (aiRemovedDeps.length > 0) {
-dependencyDisplay +=
-chalk.gray('\nUser-specified dependencies that were not used:') +
-'\n';
-aiRemovedDeps.forEach((dep) => {
-const depTask = data.tasks.find((t) => t.id === dep);
-const title = depTask ? truncate(depTask.title, 30) : 'Unknown task';
-dependencyDisplay += chalk.gray(` - ${dep}: ${title}`) + '\n';
-});
-}
-
-// Add dependency analysis summary
-let dependencyAnalysis = '';
-if (aiAddedDeps.length > 0 || aiRemovedDeps.length > 0) {
-dependencyAnalysis =
-'\n' + chalk.white.bold('Dependency Analysis:') + '\n';
-if (aiAddedDeps.length > 0) {
-dependencyAnalysis +=
-chalk.green(
-`AI identified ${aiAddedDeps.length} additional dependencies`
-) + '\n';
-}
-if (aiRemovedDeps.length > 0) {
-dependencyAnalysis +=
-chalk.yellow(
-`AI excluded ${aiRemovedDeps.length} user-provided dependencies`
-) + '\n';
-}
-}
-
 // Show success message box
 console.log(
 boxen(
@@ -1149,9 +352,11 @@ async function addTask(
 chalk.white(
 `Priority: ${chalk[getPriorityColor(newTask.priority)](newTask.priority)}`
 ) +
-'\n\n' +
-dependencyDisplay +
-dependencyAnalysis +
+'\n' +
+(numericDependencies.length > 0
+? chalk.white(`Dependencies: ${numericDependencies.join(', ')}`) +
+'\n'
+: '') +
 '\n' +
 chalk.white.bold('Next Steps:') +
 '\n' +
@@ -1,7 +1,6 @@
 import chalk from 'chalk';
 import boxen from 'boxen';
 import readline from 'readline';
-import fs from 'fs';

 import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';

@@ -52,9 +51,6 @@ Do not include any explanatory text, markdown formatting, or code block markers
 * @param {string|number} [options.threshold] - Complexity threshold
 * @param {boolean} [options.research] - Use research role
 * @param {string} [options.projectRoot] - Project root path (for MCP/env fallback).
-* @param {string} [options.id] - Comma-separated list of task IDs to analyze specifically
-* @param {number} [options.from] - Starting task ID in a range to analyze
-* @param {number} [options.to] - Ending task ID in a range to analyze
 * @param {Object} [options._filteredTasksData] - Pre-filtered task data (internal use)
 * @param {number} [options._originalTaskCount] - Original task count (internal use)
 * @param {Object} context - Context object, potentially containing session and mcpLog
@@ -69,15 +65,6 @@ async function analyzeTaskComplexity(options, context = {}) {
 const thresholdScore = parseFloat(options.threshold || '5');
 const useResearch = options.research || false;
 const projectRoot = options.projectRoot;
-// New parameters for task ID filtering
-const specificIds = options.id
-? options.id
-.split(',')
-.map((id) => parseInt(id.trim(), 10))
-.filter((id) => !isNaN(id))
-: null;
-const fromId = options.from !== undefined ? parseInt(options.from, 10) : null;
-const toId = options.to !== undefined ? parseInt(options.to, 10) : null;

 const outputFormat = mcpLog ? 'json' : 'text';

@@ -101,14 +88,13 @@ async function analyzeTaskComplexity(options, context = {}) {
   reportLog(`Reading tasks from ${tasksPath}...`, 'info');
   let tasksData;
   let originalTaskCount = 0;
-  let originalData = null;

   if (options._filteredTasksData) {
     tasksData = options._filteredTasksData;
     originalTaskCount = options._originalTaskCount || tasksData.tasks.length;
     if (!options._originalTaskCount) {
       try {
-        originalData = readJSON(tasksPath);
+        const originalData = readJSON(tasksPath);
         if (originalData && originalData.tasks) {
           originalTaskCount = originalData.tasks.length;
         }
@@ -117,80 +103,22 @@ async function analyzeTaskComplexity(options, context = {}) {
       }
     }
   } else {
-    originalData = readJSON(tasksPath);
+    tasksData = readJSON(tasksPath);
     if (
-      !originalData ||
-      !originalData.tasks ||
-      !Array.isArray(originalData.tasks) ||
-      originalData.tasks.length === 0
+      !tasksData ||
+      !tasksData.tasks ||
+      !Array.isArray(tasksData.tasks) ||
+      tasksData.tasks.length === 0
     ) {
       throw new Error('No tasks found in the tasks file');
     }
-    originalTaskCount = originalData.tasks.length;
-
-    // Filter tasks based on active status
+    originalTaskCount = tasksData.tasks.length;
     const activeStatuses = ['pending', 'blocked', 'in-progress'];
-    let filteredTasks = originalData.tasks.filter((task) =>
+    const filteredTasks = tasksData.tasks.filter((task) =>
       activeStatuses.includes(task.status?.toLowerCase() || 'pending')
     );
-
-    // Apply ID filtering if specified
-    if (specificIds && specificIds.length > 0) {
-      reportLog(
-        `Filtering tasks by specific IDs: ${specificIds.join(', ')}`,
-        'info'
-      );
-      filteredTasks = filteredTasks.filter((task) =>
-        specificIds.includes(task.id)
-      );
-
-      if (outputFormat === 'text') {
-        if (filteredTasks.length === 0 && specificIds.length > 0) {
-          console.log(
-            chalk.yellow(
-              `Warning: No active tasks found with IDs: ${specificIds.join(', ')}`
-            )
-          );
-        } else if (filteredTasks.length < specificIds.length) {
-          const foundIds = filteredTasks.map((t) => t.id);
-          const missingIds = specificIds.filter(
-            (id) => !foundIds.includes(id)
-          );
-          console.log(
-            chalk.yellow(
-              `Warning: Some requested task IDs were not found or are not active: ${missingIds.join(', ')}`
-            )
-          );
-        }
-      }
-    }
-    // Apply range filtering if specified
-    else if (fromId !== null || toId !== null) {
-      const effectiveFromId = fromId !== null ? fromId : 1;
-      const effectiveToId =
-        toId !== null
-          ? toId
-          : Math.max(...originalData.tasks.map((t) => t.id));
-
-      reportLog(
-        `Filtering tasks by ID range: ${effectiveFromId} to ${effectiveToId}`,
-        'info'
-      );
-      filteredTasks = filteredTasks.filter(
-        (task) => task.id >= effectiveFromId && task.id <= effectiveToId
-      );
-
-      if (outputFormat === 'text' && filteredTasks.length === 0) {
-        console.log(
-          chalk.yellow(
-            `Warning: No active tasks found in range: ${effectiveFromId}-${effectiveToId}`
-          )
-        );
-      }
-    }
-
     tasksData = {
-      ...originalData,
+      ...tasksData,
      tasks: filteredTasks,
      _originalTaskCount: originalTaskCount
    };
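A self-contained sketch of the status and range filtering shown in this hunk, run against fabricated sample tasks; the data below is invented for illustration only.

```js
const tasks = [
  { id: 1, status: 'done' },
  { id: 2, status: 'pending' },
  { id: 3, status: 'in-progress' },
  { id: 4 } // missing status is treated as 'pending'
];
const activeStatuses = ['pending', 'blocked', 'in-progress'];
let filteredTasks = tasks.filter((task) =>
  activeStatuses.includes(task.status?.toLowerCase() || 'pending')
);
// Range filter from the removed branch, with fromId = 3 and toId = 4:
filteredTasks = filteredTasks.filter((t) => t.id >= 3 && t.id <= 4);
console.log(filteredTasks.map((t) => t.id)); // [3, 4]
```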
@@ -201,18 +129,7 @@ async function analyzeTaskComplexity(options, context = {}) {
     `Found ${originalTaskCount} total tasks in the task file.`,
     'info'
   );
-
-  // Updated messaging to reflect filtering logic
-  if (specificIds || fromId !== null || toId !== null) {
-    const filterMsg = specificIds
-      ? `Analyzing ${tasksData.tasks.length} tasks with specific IDs: ${specificIds.join(', ')}`
-      : `Analyzing ${tasksData.tasks.length} tasks in range: ${fromId || 1} to ${toId || 'end'}`;
-
-    reportLog(filterMsg, 'info');
-    if (outputFormat === 'text') {
-      console.log(chalk.blue(filterMsg));
-    }
-  } else if (skippedCount > 0) {
+  if (skippedCount > 0) {
     const skipMessage = `Skipping ${skippedCount} tasks marked as done/cancelled/deferred. Analyzing ${tasksData.tasks.length} active tasks.`;
     reportLog(skipMessage, 'info');
     if (outputFormat === 'text') {
@@ -220,59 +137,7 @@ async function analyzeTaskComplexity(options, context = {}) {
     }
   }

-  // Check for existing report before doing analysis
-  let existingReport = null;
-  let existingAnalysisMap = new Map(); // For quick lookups by task ID
-  try {
-    if (fs.existsSync(outputPath)) {
-      existingReport = readJSON(outputPath);
-      reportLog(`Found existing complexity report at ${outputPath}`, 'info');
-
-      if (
-        existingReport &&
-        existingReport.complexityAnalysis &&
-        Array.isArray(existingReport.complexityAnalysis)
-      ) {
-        // Create lookup map of existing analysis entries
-        existingReport.complexityAnalysis.forEach((item) => {
-          existingAnalysisMap.set(item.taskId, item);
-        });
-        reportLog(
-          `Existing report contains ${existingReport.complexityAnalysis.length} task analyses`,
-          'info'
-        );
-      }
-    }
-  } catch (readError) {
-    reportLog(
-      `Warning: Could not read existing report: ${readError.message}`,
-      'warn'
-    );
-    existingReport = null;
-    existingAnalysisMap.clear();
-  }
-
   if (tasksData.tasks.length === 0) {
-    // If using ID filtering but no matching tasks, return existing report or empty
-    if (existingReport && (specificIds || fromId !== null || toId !== null)) {
-      reportLog(
-        `No matching tasks found for analysis. Keeping existing report.`,
-        'info'
-      );
-      if (outputFormat === 'text') {
-        console.log(
-          chalk.yellow(
-            `No matching tasks found for analysis. Keeping existing report.`
-          )
-        );
-      }
-      return {
-        report: existingReport,
-        telemetryData: null
-      };
-    }
-
-    // Otherwise create empty report
     const emptyReport = {
       meta: {
         generatedAt: new Date().toISOString(),
@@ -281,9 +146,9 @@ async function analyzeTaskComplexity(options, context = {}) {
         projectName: getProjectName(session),
         usedResearch: useResearch
       },
-      complexityAnalysis: existingReport?.complexityAnalysis || []
+      complexityAnalysis: []
     };
-    reportLog(`Writing complexity report to ${outputPath}...`, 'info');
+    reportLog(`Writing empty complexity report to ${outputPath}...`, 'info');
     writeJSON(outputPath, emptyReport);
     reportLog(
       `Task complexity analysis complete. Report written to ${outputPath}`,
@@ -331,13 +196,9 @@ async function analyzeTaskComplexity(options, context = {}) {
         )
       );
     }
-    return {
-      report: emptyReport,
-      telemetryData: null
-    };
+    return emptyReport;
   }

-  // Continue with regular analysis path
   const prompt = generateInternalComplexityAnalysisPrompt(tasksData);
   const systemPrompt =
     'You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.';
@@ -465,47 +326,15 @@ async function analyzeTaskComplexity(options, context = {}) {
     }
   }

-  // Merge with existing report
-  let finalComplexityAnalysis = [];
-
-  if (existingReport && Array.isArray(existingReport.complexityAnalysis)) {
-    // Create a map of task IDs that we just analyzed
-    const analyzedTaskIds = new Set(
-      complexityAnalysis.map((item) => item.taskId)
-    );
-
-    // Keep existing entries that weren't in this analysis run
-    const existingEntriesNotAnalyzed =
-      existingReport.complexityAnalysis.filter(
-        (item) => !analyzedTaskIds.has(item.taskId)
-      );
-
-    // Combine with new analysis
-    finalComplexityAnalysis = [
-      ...existingEntriesNotAnalyzed,
-      ...complexityAnalysis
-    ];
-
-    reportLog(
-      `Merged ${complexityAnalysis.length} new analyses with ${existingEntriesNotAnalyzed.length} existing entries`,
-      'info'
-    );
-  } else {
-    // No existing report or invalid format, just use the new analysis
-    finalComplexityAnalysis = complexityAnalysis;
-  }
-
   const report = {
     meta: {
       generatedAt: new Date().toISOString(),
       tasksAnalyzed: tasksData.tasks.length,
-      totalTasks: originalTaskCount,
-      analysisCount: finalComplexityAnalysis.length,
       thresholdScore: thresholdScore,
       projectName: getProjectName(session),
       usedResearch: useResearch
     },
-    complexityAnalysis: finalComplexityAnalysis
+    complexityAnalysis: complexityAnalysis
   };
   reportLog(`Writing complexity report to ${outputPath}...`, 'info');
   writeJSON(outputPath, report);
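The merge strategy removed in this hunk reduces to a Set lookup keyed on taskId; a small stand-alone sketch with made-up analysis entries follows.

```js
const existingEntries = [
  { taskId: 1, complexityScore: 4 },
  { taskId: 2, complexityScore: 7 }
];
const newAnalysis = [{ taskId: 2, complexityScore: 6 }];

// Keep prior entries whose taskId was not re-analyzed, then append the new run.
const analyzedTaskIds = new Set(newAnalysis.map((item) => item.taskId));
const merged = [
  ...existingEntries.filter((item) => !analyzedTaskIds.has(item.taskId)),
  ...newAnalysis
];
console.log(merged); // [{ taskId: 1, ... }, { taskId: 2, complexityScore: 6 }]
```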
@@ -521,7 +350,6 @@ async function analyzeTaskComplexity(options, context = {}) {
       `Task complexity analysis complete. Report written to ${outputPath}`
     )
   );
-  // Calculate statistics specifically for this analysis run
   const highComplexity = complexityAnalysis.filter(
     (t) => t.complexityScore >= 8
   ).length;
@@ -533,25 +361,18 @@ async function analyzeTaskComplexity(options, context = {}) {
   ).length;
   const totalAnalyzed = complexityAnalysis.length;

-  console.log('\nCurrent Analysis Summary:');
+  console.log('\nComplexity Analysis Summary:');
   console.log('----------------------------');
-  console.log(`Tasks analyzed in this run: ${totalAnalyzed}`);
+  console.log(
+    `Active tasks sent for analysis: ${tasksData.tasks.length}`
+  );
+  console.log(`Tasks successfully analyzed: ${totalAnalyzed}`);
   console.log(`High complexity tasks: ${highComplexity}`);
   console.log(`Medium complexity tasks: ${mediumComplexity}`);
   console.log(`Low complexity tasks: ${lowComplexity}`);
-
-  if (existingReport) {
-    console.log('\nUpdated Report Summary:');
-    console.log('----------------------------');
-    console.log(
-      `Total analyses in report: ${finalComplexityAnalysis.length}`
-    );
-    console.log(
-      `Analyses from previous runs: ${finalComplexityAnalysis.length - totalAnalyzed}`
-    );
-    console.log(`New/updated analyses: ${totalAnalyzed}`);
-  }
-
+  console.log(
+    `Sum verification: ${highComplexity + mediumComplexity + lowComplexity} (should equal ${totalAnalyzed})`
+  );
   console.log(`Research-backed analysis: ${useResearch ? 'Yes' : 'No'}`);
   console.log(
     `\nSee ${outputPath} for the full report and expansion commands.`
@@ -35,53 +35,6 @@ function generateTaskFiles(tasksPath, outputDir, options = {}) {
     log('info', `Validating and fixing dependencies`);
     validateAndFixDependencies(data, tasksPath);

-    // Get valid task IDs from tasks.json
-    const validTaskIds = data.tasks.map((task) => task.id);
-
-    // Cleanup orphaned task files
-    log('info', 'Checking for orphaned task files to clean up...');
-    try {
-      // Get all task files in the output directory
-      const files = fs.readdirSync(outputDir);
-      const taskFilePattern = /^task_(\d+)\.txt$/;
-
-      // Filter for task files and check if they match a valid task ID
-      const orphanedFiles = files.filter((file) => {
-        const match = file.match(taskFilePattern);
-        if (match) {
-          const fileTaskId = parseInt(match[1], 10);
-          return !validTaskIds.includes(fileTaskId);
-        }
-        return false;
-      });
-
-      // Delete orphaned files
-      if (orphanedFiles.length > 0) {
-        log(
-          'info',
-          `Found ${orphanedFiles.length} orphaned task files to remove`
-        );
-
-        orphanedFiles.forEach((file) => {
-          const filePath = path.join(outputDir, file);
-          try {
-            fs.unlinkSync(filePath);
-            log('info', `Removed orphaned task file: ${file}`);
-          } catch (err) {
-            log(
-              'warn',
-              `Failed to remove orphaned task file ${file}: ${err.message}`
-            );
-          }
-        });
-      } else {
-        log('info', 'No orphaned task files found');
-      }
-    } catch (err) {
-      log('warn', `Error cleaning up orphaned task files: ${err.message}`);
-      // Continue with file generation even if cleanup fails
-    }
-
     // Generate task files
     log('info', 'Generating individual task files...');
     data.tasks.forEach((task) => {
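For reference, the orphan detection that this hunk deletes reduces to a regex filter over file names; a sketch with invented file names and IDs is below.

```js
const validTaskIds = [1, 2];
const files = ['task_001.txt', 'task_002.txt', 'task_007.txt', 'README.md'];
const taskFilePattern = /^task_(\d+)\.txt$/;

const orphanedFiles = files.filter((file) => {
  const match = file.match(taskFilePattern);
  return match ? !validTaskIds.includes(parseInt(match[1], 10)) : false;
});
console.log(orphanedFiles); // ['task_007.txt']
```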
@@ -6,7 +6,6 @@
 import path from 'path';
 import fs from 'fs';
 import https from 'https';
-import http from 'http';
 import {
   getMainModelId,
   getResearchModelId,
@@ -20,8 +19,7 @@ import {
   getConfig,
   writeConfig,
   isConfigFilePresent,
-  getAllProviders,
-  getBaseUrlForRole
+  getAllProviders
 } from '../config-manager.js';

 /**
@@ -70,68 +68,6 @@ function fetchOpenRouterModels() {
   });
 }

-/**
- * Fetches the list of models from Ollama instance.
- * @param {string} baseUrl - The base URL for the Ollama API (e.g., "http://localhost:11434/api")
- * @returns {Promise<Array|null>} A promise that resolves with the list of model objects or null if fetch fails.
- */
-function fetchOllamaModels(baseUrl = 'http://localhost:11434/api') {
-  return new Promise((resolve) => {
-    try {
-      // Parse the base URL to extract hostname, port, and base path
-      const url = new URL(baseUrl);
-      const isHttps = url.protocol === 'https:';
-      const port = url.port || (isHttps ? 443 : 80);
-      const basePath = url.pathname.endsWith('/')
-        ? url.pathname.slice(0, -1)
-        : url.pathname;
-
-      const options = {
-        hostname: url.hostname,
-        port: parseInt(port, 10),
-        path: `${basePath}/tags`,
-        method: 'GET',
-        headers: {
-          Accept: 'application/json'
-        }
-      };
-
-      const requestLib = isHttps ? https : http;
-      const req = requestLib.request(options, (res) => {
-        let data = '';
-        res.on('data', (chunk) => {
-          data += chunk;
-        });
-        res.on('end', () => {
-          if (res.statusCode === 200) {
-            try {
-              const parsedData = JSON.parse(data);
-              resolve(parsedData.models || []); // Return the array of models
-            } catch (e) {
-              console.error('Error parsing Ollama response:', e);
-              resolve(null); // Indicate failure
-            }
-          } else {
-            console.error(
-              `Ollama API request failed with status code: ${res.statusCode}`
-            );
-            resolve(null); // Indicate failure
-          }
-        });
-      });
-
-      req.on('error', (e) => {
-        console.error('Error fetching Ollama models:', e);
-        resolve(null); // Indicate failure
-      });
-      req.end();
-    } catch (e) {
-      console.error('Error parsing Ollama base URL:', e);
-      resolve(null); // Indicate failure
-    }
-  });
-}
-
 /**
  * Get the current model configuration
  * @param {Object} [options] - Options for the operation
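The deleted fetchOllamaModels helper wrapped a plain GET against Ollama's /api/tags endpoint and resolved to the `models` array or null. A rough equivalent using global fetch (Node 18+) is sketched below; the default URL and error handling mirror the removed code, but treat this as illustrative rather than what the project ships.

```js
async function listOllamaModels(baseUrl = 'http://localhost:11434/api') {
  try {
    const res = await fetch(`${baseUrl}/tags`, {
      headers: { Accept: 'application/json' }
    });
    if (!res.ok) return null; // non-200, mirrors the status-code check above
    const data = await res.json();
    return data.models || []; // Ollama responds with { models: [...] }
  } catch {
    return null; // server unreachable, like the req.on('error') path
  }
}
```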
@@ -480,29 +416,10 @@ async function setModel(role, modelId, options = {}) {
         );
       }
     } else if (providerHint === 'ollama') {
-      // Check Ollama ONLY because hint was ollama
-      report('info', `Checking Ollama for ${modelId} (as hinted)...`);
-
-      // Get the Ollama base URL from config
-      const ollamaBaseUrl = getBaseUrlForRole(role, projectRoot);
-      const ollamaModels = await fetchOllamaModels(ollamaBaseUrl);
-
-      if (ollamaModels === null) {
-        // Connection failed - server probably not running
-        throw new Error(
-          `Unable to connect to Ollama server at ${ollamaBaseUrl}. Please ensure Ollama is running and try again.`
-        );
-      } else if (ollamaModels.some((m) => m.model === modelId)) {
-        determinedProvider = 'ollama';
-        warningMessage = `Warning: Custom Ollama model '${modelId}' set. Ensure your Ollama server is running and has pulled this model. Taskmaster cannot guarantee compatibility.`;
-        report('warn', warningMessage);
-      } else {
-        // Server is running but model not found
-        const tagsUrl = `${ollamaBaseUrl}/tags`;
-        throw new Error(
-          `Model ID "${modelId}" not found in the Ollama instance. Please verify the model is pulled and available. You can check available models with: curl ${tagsUrl}`
-        );
-      }
+      // Hinted as Ollama - set provider directly WITHOUT checking OpenRouter
+      determinedProvider = 'ollama';
+      warningMessage = `Warning: Custom Ollama model '${modelId}' set. Ensure your Ollama server is running and has pulled this model. Taskmaster cannot guarantee compatibility.`;
+      report('warn', warningMessage);
     } else {
       // Invalid provider hint - should not happen
       throw new Error(`Invalid provider hint received: ${providerHint}`);
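A hypothetical call into setModel() as it appears in this hunk; whether providerHint travels through the options object is inferred from the surrounding code, and the model ID and path are placeholders.

```js
const result = await setModel('main', 'devstral:latest', {
  providerHint: 'ollama',
  projectRoot: '/path/to/project' // placeholder
});
// Old side of this hunk: rejected unless the model appeared in /api/tags.
// New side: always accepted, with only a compatibility warning logged.
```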
@@ -1,571 +0,0 @@
|
|||||||
import path from 'path';
|
|
||||||
import { log, readJSON, writeJSON } from '../utils.js';
|
|
||||||
import { isTaskDependentOn } from '../task-manager.js';
|
|
||||||
import generateTaskFiles from './generate-task-files.js';
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Move a task or subtask to a new position
|
|
||||||
* @param {string} tasksPath - Path to tasks.json file
|
|
||||||
* @param {string} sourceId - ID of the task/subtask to move (e.g., '5' or '5.2')
|
|
||||||
* @param {string} destinationId - ID of the destination (e.g., '7' or '7.3')
|
|
||||||
* @param {boolean} generateFiles - Whether to regenerate task files after moving
|
|
||||||
* @returns {Object} Result object with moved task details
|
|
||||||
*/
|
|
||||||
async function moveTask(
|
|
||||||
tasksPath,
|
|
||||||
sourceId,
|
|
||||||
destinationId,
|
|
||||||
generateFiles = true
|
|
||||||
) {
|
|
||||||
try {
|
|
||||||
log('info', `Moving task/subtask ${sourceId} to ${destinationId}...`);
|
|
||||||
|
|
||||||
// Read the existing tasks
|
|
||||||
const data = readJSON(tasksPath);
|
|
||||||
if (!data || !data.tasks) {
|
|
||||||
throw new Error(`Invalid or missing tasks file at ${tasksPath}`);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Parse source ID to determine if it's a task or subtask
|
|
||||||
const isSourceSubtask = sourceId.includes('.');
|
|
||||||
let sourceTask,
|
|
||||||
sourceParentTask,
|
|
||||||
sourceSubtask,
|
|
||||||
sourceTaskIndex,
|
|
||||||
sourceSubtaskIndex;
|
|
||||||
|
|
||||||
// Parse destination ID to determine the target
|
|
||||||
const isDestinationSubtask = destinationId.includes('.');
|
|
||||||
let destTask, destParentTask, destSubtask, destTaskIndex, destSubtaskIndex;
|
|
||||||
|
|
||||||
// Validate source exists
|
|
||||||
if (isSourceSubtask) {
|
|
||||||
// Source is a subtask
|
|
||||||
const [parentIdStr, subtaskIdStr] = sourceId.split('.');
|
|
||||||
const parentIdNum = parseInt(parentIdStr, 10);
|
|
||||||
const subtaskIdNum = parseInt(subtaskIdStr, 10);
|
|
||||||
|
|
||||||
sourceParentTask = data.tasks.find((t) => t.id === parentIdNum);
|
|
||||||
if (!sourceParentTask) {
|
|
||||||
throw new Error(`Source parent task with ID ${parentIdNum} not found`);
|
|
||||||
}
|
|
||||||
|
|
||||||
if (
|
|
||||||
!sourceParentTask.subtasks ||
|
|
||||||
sourceParentTask.subtasks.length === 0
|
|
||||||
) {
|
|
||||||
throw new Error(`Source parent task ${parentIdNum} has no subtasks`);
|
|
||||||
}
|
|
||||||
|
|
||||||
sourceSubtaskIndex = sourceParentTask.subtasks.findIndex(
|
|
||||||
(st) => st.id === subtaskIdNum
|
|
||||||
);
|
|
||||||
if (sourceSubtaskIndex === -1) {
|
|
||||||
throw new Error(`Source subtask ${sourceId} not found`);
|
|
||||||
}
|
|
||||||
|
|
||||||
sourceSubtask = { ...sourceParentTask.subtasks[sourceSubtaskIndex] };
|
|
||||||
} else {
|
|
||||||
// Source is a task
|
|
||||||
const sourceIdNum = parseInt(sourceId, 10);
|
|
||||||
sourceTaskIndex = data.tasks.findIndex((t) => t.id === sourceIdNum);
|
|
||||||
if (sourceTaskIndex === -1) {
|
|
||||||
throw new Error(`Source task with ID ${sourceIdNum} not found`);
|
|
||||||
}
|
|
||||||
|
|
||||||
sourceTask = { ...data.tasks[sourceTaskIndex] };
|
|
||||||
}
|
|
||||||
|
|
||||||
// Validate destination exists
|
|
||||||
if (isDestinationSubtask) {
|
|
||||||
// Destination is a subtask (target will be the parent of this subtask)
|
|
||||||
const [parentIdStr, subtaskIdStr] = destinationId.split('.');
|
|
||||||
const parentIdNum = parseInt(parentIdStr, 10);
|
|
||||||
const subtaskIdNum = parseInt(subtaskIdStr, 10);
|
|
||||||
|
|
||||||
destParentTask = data.tasks.find((t) => t.id === parentIdNum);
|
|
||||||
if (!destParentTask) {
|
|
||||||
throw new Error(
|
|
||||||
`Destination parent task with ID ${parentIdNum} not found`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
if (!destParentTask.subtasks || destParentTask.subtasks.length === 0) {
|
|
||||||
throw new Error(
|
|
||||||
`Destination parent task ${parentIdNum} has no subtasks`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
destSubtaskIndex = destParentTask.subtasks.findIndex(
|
|
||||||
(st) => st.id === subtaskIdNum
|
|
||||||
);
|
|
||||||
if (destSubtaskIndex === -1) {
|
|
||||||
throw new Error(`Destination subtask ${destinationId} not found`);
|
|
||||||
}
|
|
||||||
|
|
||||||
destSubtask = destParentTask.subtasks[destSubtaskIndex];
|
|
||||||
} else {
|
|
||||||
// Destination is a task
|
|
||||||
const destIdNum = parseInt(destinationId, 10);
|
|
||||||
destTaskIndex = data.tasks.findIndex((t) => t.id === destIdNum);
|
|
||||||
|
|
||||||
if (destTaskIndex === -1) {
|
|
||||||
// Create placeholder for destination if it doesn't exist
|
|
||||||
log('info', `Creating placeholder for destination task ${destIdNum}`);
|
|
||||||
const newTask = {
|
|
||||||
id: destIdNum,
|
|
||||||
title: `Task ${destIdNum}`,
|
|
||||||
description: '',
|
|
||||||
status: 'pending',
|
|
||||||
priority: 'medium',
|
|
||||||
details: '',
|
|
||||||
testStrategy: ''
|
|
||||||
};
|
|
||||||
|
|
||||||
// Find correct position to insert the new task
|
|
||||||
let insertIndex = 0;
|
|
||||||
while (
|
|
||||||
insertIndex < data.tasks.length &&
|
|
||||||
data.tasks[insertIndex].id < destIdNum
|
|
||||||
) {
|
|
||||||
insertIndex++;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Insert the new task at the appropriate position
|
|
||||||
data.tasks.splice(insertIndex, 0, newTask);
|
|
||||||
destTaskIndex = insertIndex;
|
|
||||||
destTask = data.tasks[destTaskIndex];
|
|
||||||
} else {
|
|
||||||
destTask = data.tasks[destTaskIndex];
|
|
||||||
|
|
||||||
// Check if destination task is already a "real" task with content
|
|
||||||
// Only allow moving to destination IDs that don't have meaningful content
|
|
||||||
if (
|
|
||||||
destTask.title !== `Task ${destTask.id}` ||
|
|
||||||
destTask.description !== '' ||
|
|
||||||
destTask.details !== ''
|
|
||||||
) {
|
|
||||||
throw new Error(
|
|
||||||
`Cannot move to task ID ${destIdNum} as it already contains content. Choose a different destination ID.`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Validate that we aren't trying to move a task to itself
|
|
||||||
if (sourceId === destinationId) {
|
|
||||||
throw new Error('Cannot move a task/subtask to itself');
|
|
||||||
}
|
|
||||||
|
|
||||||
// Prevent moving a parent to its own subtask
|
|
||||||
if (!isSourceSubtask && isDestinationSubtask) {
|
|
||||||
const destParentId = parseInt(destinationId.split('.')[0], 10);
|
|
||||||
if (parseInt(sourceId, 10) === destParentId) {
|
|
||||||
throw new Error('Cannot move a parent task to one of its own subtasks');
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check for circular dependency when moving tasks
|
|
||||||
if (!isSourceSubtask && !isDestinationSubtask) {
|
|
||||||
const sourceIdNum = parseInt(sourceId, 10);
|
|
||||||
const destIdNum = parseInt(destinationId, 10);
|
|
||||||
|
|
||||||
// Check if destination is dependent on source
|
|
||||||
if (isTaskDependentOn(data.tasks, destTask, sourceIdNum)) {
|
|
||||||
throw new Error(
|
|
||||||
`Cannot move task ${sourceId} to task ${destinationId} as it would create a circular dependency`
|
|
||||||
);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
let movedTask;
|
|
||||||
|
|
||||||
// Handle different move scenarios
|
|
||||||
if (!isSourceSubtask && !isDestinationSubtask) {
|
|
||||||
// Check if destination is a placeholder we just created
|
|
||||||
if (
|
|
||||||
destTask.title === `Task ${destTask.id}` &&
|
|
||||||
destTask.description === '' &&
|
|
||||||
destTask.details === ''
|
|
||||||
) {
|
|
||||||
// Case 0: Move task to a new position/ID (destination is a placeholder)
|
|
||||||
movedTask = moveTaskToNewId(
|
|
||||||
data,
|
|
||||||
sourceTask,
|
|
||||||
sourceTaskIndex,
|
|
||||||
destTask,
|
|
||||||
destTaskIndex
|
|
||||||
);
|
|
||||||
} else {
|
|
||||||
// Case 1: Move standalone task to become a subtask of another task
|
|
||||||
movedTask = moveTaskToTask(data, sourceTask, sourceTaskIndex, destTask);
|
|
||||||
}
|
|
||||||
} else if (!isSourceSubtask && isDestinationSubtask) {
|
|
||||||
// Case 2: Move standalone task to become a subtask at a specific position
|
|
||||||
movedTask = moveTaskToSubtaskPosition(
|
|
||||||
data,
|
|
||||||
sourceTask,
|
|
||||||
sourceTaskIndex,
|
|
||||||
destParentTask,
|
|
||||||
destSubtaskIndex
|
|
||||||
);
|
|
||||||
} else if (isSourceSubtask && !isDestinationSubtask) {
|
|
||||||
// Case 3: Move subtask to become a standalone task
|
|
||||||
movedTask = moveSubtaskToTask(
|
|
||||||
data,
|
|
||||||
sourceSubtask,
|
|
||||||
sourceParentTask,
|
|
||||||
sourceSubtaskIndex,
|
|
||||||
destTask
|
|
||||||
);
|
|
||||||
} else if (isSourceSubtask && isDestinationSubtask) {
|
|
||||||
// Case 4: Move subtask to another parent or position
|
|
||||||
// First check if it's the same parent
|
|
||||||
const sourceParentId = parseInt(sourceId.split('.')[0], 10);
|
|
||||||
const destParentId = parseInt(destinationId.split('.')[0], 10);
|
|
||||||
|
|
||||||
if (sourceParentId === destParentId) {
|
|
||||||
// Case 4a: Move subtask within the same parent (reordering)
|
|
||||||
movedTask = reorderSubtask(
|
|
||||||
sourceParentTask,
|
|
||||||
sourceSubtaskIndex,
|
|
||||||
destSubtaskIndex
|
|
||||||
);
|
|
||||||
} else {
|
|
||||||
// Case 4b: Move subtask to a different parent
|
|
||||||
movedTask = moveSubtaskToAnotherParent(
|
|
||||||
sourceSubtask,
|
|
||||||
sourceParentTask,
|
|
||||||
sourceSubtaskIndex,
|
|
||||||
destParentTask,
|
|
||||||
destSubtaskIndex
|
|
||||||
);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Write the updated tasks back to the file
|
|
||||||
writeJSON(tasksPath, data);
|
|
||||||
|
|
||||||
// Generate task files if requested
|
|
||||||
if (generateFiles) {
|
|
||||||
log('info', 'Regenerating task files...');
|
|
||||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
|
|
||||||
}
|
|
||||||
|
|
||||||
return movedTask;
|
|
||||||
} catch (error) {
|
|
||||||
log('error', `Error moving task/subtask: ${error.message}`);
|
|
||||||
throw error;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Move a standalone task to become a subtask of another task
|
|
||||||
* @param {Object} data - Tasks data object
|
|
||||||
* @param {Object} sourceTask - Source task to move
|
|
||||||
* @param {number} sourceTaskIndex - Index of source task in data.tasks
|
|
||||||
* @param {Object} destTask - Destination task
|
|
||||||
* @returns {Object} Moved task object
|
|
||||||
*/
|
|
||||||
function moveTaskToTask(data, sourceTask, sourceTaskIndex, destTask) {
|
|
||||||
// Initialize subtasks array if it doesn't exist
|
|
||||||
if (!destTask.subtasks) {
|
|
||||||
destTask.subtasks = [];
|
|
||||||
}
|
|
||||||
|
|
||||||
// Find the highest subtask ID to determine the next ID
|
|
||||||
const highestSubtaskId =
|
|
||||||
destTask.subtasks.length > 0
|
|
||||||
? Math.max(...destTask.subtasks.map((st) => st.id))
|
|
||||||
: 0;
|
|
||||||
const newSubtaskId = highestSubtaskId + 1;
|
|
||||||
|
|
||||||
// Create the new subtask from the source task
|
|
||||||
const newSubtask = {
|
|
||||||
...sourceTask,
|
|
||||||
id: newSubtaskId,
|
|
||||||
parentTaskId: destTask.id
|
|
||||||
};
|
|
||||||
|
|
||||||
// Add to destination's subtasks
|
|
||||||
destTask.subtasks.push(newSubtask);
|
|
||||||
|
|
||||||
// Remove the original task from the tasks array
|
|
||||||
data.tasks.splice(sourceTaskIndex, 1);
|
|
||||||
|
|
||||||
log(
|
|
||||||
'info',
|
|
||||||
`Moved task ${sourceTask.id} to become subtask ${destTask.id}.${newSubtaskId}`
|
|
||||||
);
|
|
||||||
|
|
||||||
return newSubtask;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Move a standalone task to become a subtask at a specific position
|
|
||||||
* @param {Object} data - Tasks data object
|
|
||||||
* @param {Object} sourceTask - Source task to move
|
|
||||||
* @param {number} sourceTaskIndex - Index of source task in data.tasks
|
|
||||||
* @param {Object} destParentTask - Destination parent task
|
|
||||||
* @param {number} destSubtaskIndex - Index of the subtask before which to insert
|
|
||||||
* @returns {Object} Moved task object
|
|
||||||
*/
|
|
||||||
function moveTaskToSubtaskPosition(
|
|
||||||
data,
|
|
||||||
sourceTask,
|
|
||||||
sourceTaskIndex,
|
|
||||||
destParentTask,
|
|
||||||
destSubtaskIndex
|
|
||||||
) {
|
|
||||||
// Initialize subtasks array if it doesn't exist
|
|
||||||
if (!destParentTask.subtasks) {
|
|
||||||
destParentTask.subtasks = [];
|
|
||||||
}
|
|
||||||
|
|
||||||
// Find the highest subtask ID to determine the next ID
|
|
||||||
const highestSubtaskId =
|
|
||||||
destParentTask.subtasks.length > 0
|
|
||||||
? Math.max(...destParentTask.subtasks.map((st) => st.id))
|
|
||||||
: 0;
|
|
||||||
const newSubtaskId = highestSubtaskId + 1;
|
|
||||||
|
|
||||||
// Create the new subtask from the source task
|
|
||||||
const newSubtask = {
|
|
||||||
...sourceTask,
|
|
||||||
id: newSubtaskId,
|
|
||||||
parentTaskId: destParentTask.id
|
|
||||||
};
|
|
||||||
|
|
||||||
// Insert at specific position
|
|
||||||
destParentTask.subtasks.splice(destSubtaskIndex + 1, 0, newSubtask);
|
|
||||||
|
|
||||||
// Remove the original task from the tasks array
|
|
||||||
data.tasks.splice(sourceTaskIndex, 1);
|
|
||||||
|
|
||||||
log(
|
|
||||||
'info',
|
|
||||||
`Moved task ${sourceTask.id} to become subtask ${destParentTask.id}.${newSubtaskId}`
|
|
||||||
);
|
|
||||||
|
|
||||||
return newSubtask;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Move a subtask to become a standalone task
|
|
||||||
* @param {Object} data - Tasks data object
|
|
||||||
* @param {Object} sourceSubtask - Source subtask to move
|
|
||||||
* @param {Object} sourceParentTask - Parent task of the source subtask
|
|
||||||
* @param {number} sourceSubtaskIndex - Index of source subtask in parent's subtasks
|
|
||||||
* @param {Object} destTask - Destination task (for position reference)
|
|
||||||
* @returns {Object} Moved task object
|
|
||||||
*/
|
|
||||||
function moveSubtaskToTask(
|
|
||||||
data,
|
|
||||||
sourceSubtask,
|
|
||||||
sourceParentTask,
|
|
||||||
sourceSubtaskIndex,
|
|
||||||
destTask
|
|
||||||
) {
|
|
||||||
// Find the highest task ID to determine the next ID
|
|
||||||
const highestId = Math.max(...data.tasks.map((t) => t.id));
|
|
||||||
const newTaskId = highestId + 1;
|
|
||||||
|
|
||||||
// Create the new task from the subtask
|
|
||||||
const newTask = {
|
|
||||||
...sourceSubtask,
|
|
||||||
id: newTaskId,
|
|
||||||
priority: sourceParentTask.priority || 'medium' // Inherit priority from parent
|
|
||||||
};
|
|
||||||
delete newTask.parentTaskId;
|
|
||||||
|
|
||||||
// Add the parent task as a dependency if not already present
|
|
||||||
if (!newTask.dependencies) {
|
|
||||||
newTask.dependencies = [];
|
|
||||||
}
|
|
||||||
if (!newTask.dependencies.includes(sourceParentTask.id)) {
|
|
||||||
newTask.dependencies.push(sourceParentTask.id);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Find the destination index to insert the new task
|
|
||||||
const destTaskIndex = data.tasks.findIndex((t) => t.id === destTask.id);
|
|
||||||
|
|
||||||
// Insert the new task after the destination task
|
|
||||||
data.tasks.splice(destTaskIndex + 1, 0, newTask);
|
|
||||||
|
|
||||||
// Remove the subtask from the parent
|
|
||||||
sourceParentTask.subtasks.splice(sourceSubtaskIndex, 1);
|
|
||||||
|
|
||||||
// If parent has no more subtasks, remove the subtasks array
|
|
||||||
if (sourceParentTask.subtasks.length === 0) {
|
|
||||||
delete sourceParentTask.subtasks;
|
|
||||||
}
|
|
||||||
|
|
||||||
log(
|
|
||||||
'info',
|
|
||||||
`Moved subtask ${sourceParentTask.id}.${sourceSubtask.id} to become task ${newTaskId}`
|
|
||||||
);
|
|
||||||
|
|
||||||
return newTask;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Reorder a subtask within the same parent
|
|
||||||
* @param {Object} parentTask - Parent task containing the subtask
|
|
||||||
* @param {number} sourceIndex - Current index of the subtask
|
|
||||||
* @param {number} destIndex - Destination index for the subtask
|
|
||||||
* @returns {Object} Moved subtask object
|
|
||||||
*/
|
|
||||||
function reorderSubtask(parentTask, sourceIndex, destIndex) {
|
|
||||||
// Get the subtask to move
|
|
||||||
const subtask = parentTask.subtasks[sourceIndex];
|
|
||||||
|
|
||||||
// Remove the subtask from its current position
|
|
||||||
parentTask.subtasks.splice(sourceIndex, 1);
|
|
||||||
|
|
||||||
// Insert the subtask at the new position
|
|
||||||
// If destIndex was after sourceIndex, it's now one less because we removed an item
|
|
||||||
const adjustedDestIndex = sourceIndex < destIndex ? destIndex - 1 : destIndex;
|
|
||||||
parentTask.subtasks.splice(adjustedDestIndex, 0, subtask);
|
|
||||||
|
|
||||||
log(
|
|
||||||
'info',
|
|
||||||
`Reordered subtask ${parentTask.id}.${subtask.id} within parent task ${parentTask.id}`
|
|
||||||
);
|
|
||||||
|
|
||||||
return subtask;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Move a subtask to a different parent
|
|
||||||
* @param {Object} sourceSubtask - Source subtask to move
|
|
||||||
* @param {Object} sourceParentTask - Parent task of the source subtask
|
|
||||||
* @param {number} sourceSubtaskIndex - Index of source subtask in parent's subtasks
|
|
||||||
* @param {Object} destParentTask - Destination parent task
|
|
||||||
* @param {number} destSubtaskIndex - Index of the subtask before which to insert
|
|
||||||
* @returns {Object} Moved subtask object
|
|
||||||
*/
|
|
||||||
function moveSubtaskToAnotherParent(
|
|
||||||
sourceSubtask,
|
|
||||||
sourceParentTask,
|
|
||||||
sourceSubtaskIndex,
|
|
||||||
destParentTask,
|
|
||||||
destSubtaskIndex
|
|
||||||
) {
|
|
||||||
// Find the highest subtask ID in the destination parent
|
|
||||||
const highestSubtaskId =
|
|
||||||
destParentTask.subtasks.length > 0
|
|
||||||
? Math.max(...destParentTask.subtasks.map((st) => st.id))
|
|
||||||
: 0;
|
|
||||||
const newSubtaskId = highestSubtaskId + 1;
|
|
||||||
|
|
||||||
// Create the new subtask with updated parent reference
|
|
||||||
const newSubtask = {
|
|
||||||
...sourceSubtask,
|
|
||||||
id: newSubtaskId,
|
|
||||||
parentTaskId: destParentTask.id
|
|
||||||
};
|
|
||||||
|
|
||||||
// If the subtask depends on its original parent, keep that dependency
|
|
||||||
if (!newSubtask.dependencies) {
|
|
||||||
newSubtask.dependencies = [];
|
|
||||||
}
|
|
||||||
if (!newSubtask.dependencies.includes(sourceParentTask.id)) {
|
|
||||||
newSubtask.dependencies.push(sourceParentTask.id);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Insert at the destination position
|
|
||||||
destParentTask.subtasks.splice(destSubtaskIndex + 1, 0, newSubtask);
|
|
||||||
|
|
||||||
// Remove the subtask from the original parent
|
|
||||||
sourceParentTask.subtasks.splice(sourceSubtaskIndex, 1);
|
|
||||||
|
|
||||||
// If original parent has no more subtasks, remove the subtasks array
|
|
||||||
if (sourceParentTask.subtasks.length === 0) {
|
|
||||||
delete sourceParentTask.subtasks;
|
|
||||||
}
|
|
||||||
|
|
||||||
log(
|
|
||||||
'info',
|
|
||||||
`Moved subtask ${sourceParentTask.id}.${sourceSubtask.id} to become subtask ${destParentTask.id}.${newSubtaskId}`
|
|
||||||
);
|
|
||||||
|
|
||||||
return newSubtask;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Move a standalone task to a new ID position
|
|
||||||
* @param {Object} data - Tasks data object
|
|
||||||
* @param {Object} sourceTask - Source task to move
|
|
||||||
* @param {number} sourceTaskIndex - Index of source task in data.tasks
|
|
||||||
* @param {Object} destTask - Destination placeholder task
|
|
||||||
* @param {number} destTaskIndex - Index of destination task in data.tasks
|
|
||||||
* @returns {Object} Moved task object
|
|
||||||
*/
|
|
||||||
function moveTaskToNewId(
|
|
||||||
data,
|
|
||||||
sourceTask,
|
|
||||||
sourceTaskIndex,
|
|
||||||
destTask,
|
|
||||||
destTaskIndex
|
|
||||||
) {
|
|
||||||
// Create a copy of the source task with the new ID
|
|
||||||
const movedTask = {
|
|
||||||
...sourceTask,
|
|
||||||
id: destTask.id
|
|
||||||
};
|
|
||||||
|
|
||||||
// Get numeric IDs for comparison
|
|
||||||
const sourceIdNum = parseInt(sourceTask.id, 10);
|
|
||||||
const destIdNum = parseInt(destTask.id, 10);
|
|
||||||
|
|
||||||
// Handle subtasks if present
|
|
||||||
if (sourceTask.subtasks && sourceTask.subtasks.length > 0) {
|
|
||||||
// Update subtasks to reference the new parent ID if needed
|
|
||||||
movedTask.subtasks = sourceTask.subtasks.map((subtask) => ({
|
|
||||||
...subtask,
|
|
||||||
parentTaskId: destIdNum
|
|
||||||
}));
|
|
||||||
}
|
|
||||||
|
|
||||||
// Update any dependencies in other tasks that referenced the old ID
|
|
||||||
data.tasks.forEach((task) => {
|
|
||||||
if (task.dependencies && task.dependencies.includes(sourceIdNum)) {
|
|
||||||
// Replace the old ID with the new ID
|
|
||||||
const depIndex = task.dependencies.indexOf(sourceIdNum);
|
|
||||||
task.dependencies[depIndex] = destIdNum;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Also check for subtask dependencies that might reference this task
|
|
||||||
if (task.subtasks && task.subtasks.length > 0) {
|
|
||||||
task.subtasks.forEach((subtask) => {
|
|
||||||
if (
|
|
||||||
subtask.dependencies &&
|
|
||||||
subtask.dependencies.includes(sourceIdNum)
|
|
||||||
) {
|
|
||||||
const depIndex = subtask.dependencies.indexOf(sourceIdNum);
|
|
||||||
subtask.dependencies[depIndex] = destIdNum;
|
|
||||||
}
|
|
||||||
});
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
// Remove the original task from its position
|
|
||||||
data.tasks.splice(sourceTaskIndex, 1);
|
|
||||||
|
|
||||||
// If we're moving to a position after the original, adjust the destination index
|
|
||||||
// since removing the original shifts everything down by 1
|
|
||||||
const adjustedDestIndex =
|
|
||||||
sourceTaskIndex < destTaskIndex ? destTaskIndex - 1 : destTaskIndex;
|
|
||||||
|
|
||||||
// Remove the placeholder destination task
|
|
||||||
data.tasks.splice(adjustedDestIndex, 1);
|
|
||||||
|
|
||||||
// Insert the moved task at the destination position
|
|
||||||
data.tasks.splice(adjustedDestIndex, 0, movedTask);
|
|
||||||
|
|
||||||
log('info', `Moved task ${sourceIdNum} to new ID ${destIdNum}`);
|
|
||||||
|
|
||||||
return movedTask;
|
|
||||||
}
|
|
||||||
|
|
||||||
export default moveTask;
|
|
||||||
@@ -50,7 +50,6 @@ const prdResponseSchema = z.object({
  * @param {Object} options - Additional options
  * @param {boolean} [options.force=false] - Whether to overwrite existing tasks.json.
  * @param {boolean} [options.append=false] - Append to existing tasks file.
- * @param {boolean} [options.research=false] - Use research model for enhanced PRD analysis.
  * @param {Object} [options.reportProgress] - Function to report progress (optional, likely unused).
  * @param {Object} [options.mcpLog] - MCP logger object (optional).
  * @param {Object} [options.session] - Session object from MCP server (optional).
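A hypothetical parsePRD call matching the JSDoc above; the paths and task count are placeholders, and the research option is the one this compare removes.

```js
await parsePRD('scripts/prd.txt', 'tasks/tasks.json', 10, {
  force: true,
  append: false,
  projectRoot: '/path/to/project' // placeholder
});
```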
@@ -64,8 +63,7 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
     session,
     projectRoot,
     force = false,
-    append = false,
-    research = false
+    append = false
   } = options;
   const isMCP = !!mcpLog;
   const outputFormat = isMCP ? 'json' : 'text';
@@ -92,9 +90,7 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
     }
   };

-  report(
-    `Parsing PRD file: ${prdPath}, Force: ${force}, Append: ${append}, Research: ${research}`
-  );
+  report(`Parsing PRD file: ${prdPath}, Force: ${force}, Append: ${append}`);

   let existingTasks = [];
   let nextId = 1;
@@ -152,22 +148,8 @@ async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
     throw new Error(`Input file ${prdPath} is empty or could not be read.`);
   }

-  // Research-specific enhancements to the system prompt
-  const researchPromptAddition = research
-    ? `\nBefore breaking down the PRD into tasks, you will:
-1. Research and analyze the latest technologies, libraries, frameworks, and best practices that would be appropriate for this project
-2. Identify any potential technical challenges, security concerns, or scalability issues not explicitly mentioned in the PRD without discarding any explicit requirements or going overboard with complexity -- always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches
-3. Consider current industry standards and evolving trends relevant to this project (this step aims to solve LLM hallucinations and out of date information due to training data cutoff dates)
-4. Evaluate alternative implementation approaches and recommend the most efficient path
-5. Include specific library versions, helpful APIs, and concrete implementation guidance based on your research
-6. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches
-
-Your task breakdown should incorporate this research, resulting in more detailed implementation guidance, more accurate dependency mapping, and more precise technology recommendations than would be possible from the PRD text alone, while maintaining all explicit requirements and best practices and all details and nuances of the PRD.`
-    : '';
-
-  // Base system prompt for PRD parsing
-  const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.${researchPromptAddition}
-
+  // Build system prompt for PRD parsing
+  const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.
 Analyze the provided PRD content and generate approximately ${numTasks} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
 Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
 Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
@@ -194,13 +176,13 @@ Guidelines:
 5. Include clear validation/testing approach for each task
 6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than ${nextId} if applicable)
 7. Assign priority (high/medium/low) based on criticality and dependency order
-8. Include detailed implementation guidance in the "details" field${research ? ', with specific libraries and version recommendations based on your research' : ''}
+8. Include detailed implementation guidance in the "details" field
 9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance
 10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements
-11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches${research ? '\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research' : ''}`;
+11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches`;

   // Build user prompt with PRD content
-  const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n
+  const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:\n\n${prdContent}\n\n

 Return your response in this format:
 {
@@ -222,14 +204,11 @@ Guidelines:
 }`;

   // Call the unified AI service
-  report(
-    `Calling AI service to generate tasks from PRD${research ? ' with research-backed analysis' : ''}...`,
-    'info'
-  );
+  report('Calling AI service to generate tasks from PRD...', 'info');

   // Call generateObjectService with the CORRECT schema and additional telemetry params
   aiServiceResponse = await generateObjectService({
-    role: research ? 'research' : 'main', // Use research role if flag is set
+    role: 'main',
     session: session,
     projectRoot: projectRoot,
     schema: prdResponseSchema,
@@ -245,9 +224,7 @@ Guidelines:
   if (!fs.existsSync(tasksDir)) {
     fs.mkdirSync(tasksDir, { recursive: true });
   }
-  logFn.success(
-    `Successfully parsed PRD via AI service${research ? ' with research-backed analysis' : ''}.`
-  );
+  logFn.success('Successfully parsed PRD via AI service.\n');

   // Validate and Process Tasks
   // const generatedData = aiServiceResponse?.mainResult?.object;
@@ -317,7 +294,7 @@ Guidelines:
   // Write the final tasks to the file
   writeJSON(tasksPath, outputData);
   report(
-    `Successfully ${append ? 'appended' : 'generated'} ${processedNewTasks.length} tasks in ${tasksPath}${research ? ' with research-backed analysis' : ''}`,
+    `Successfully ${append ? 'appended' : 'generated'} ${processedNewTasks.length} tasks in ${tasksPath}`,
     'success'
   );

@@ -329,7 +306,7 @@ Guidelines:
     console.log(
       boxen(
         chalk.green(
-          `Successfully generated ${processedNewTasks.length} new tasks${research ? ' with research-backed analysis' : ''}. Total tasks in ${tasksPath}: ${finalTasks.length}`
+          `Successfully generated ${processedNewTasks.length} new tasks. Total tasks in ${tasksPath}: ${finalTasks.length}`
         ),
         { padding: 1, borderColor: 'green', borderStyle: 'round' }
       )
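A condensed sketch of what this file's hunks change overall: the research flag used to splice an extra instruction block into the system prompt and switch the AI role, and after this compare neither happens. The strings below are abridged paraphrases, not verbatim code from either branch, and `research` is the boolean option that is removed.

```js
// Old behaviour (removed above).
const researchPromptAddition = research
  ? '\nBefore breaking down the PRD into tasks, research current libraries...'
  : '';
const systemPrompt = `You are an AI assistant specialized in analyzing PRDs...${researchPromptAddition}`;
const role = research ? 'research' : 'main';

// New behaviour: fixed prompt, role hard-coded to 'main'.
```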
@@ -1,12 +1,10 @@
 {
   "meta": {
-    "generatedAt": "2025-05-22T05:48:33.026Z",
-    "tasksAnalyzed": 6,
-    "totalTasks": 88,
-    "analysisCount": 43,
+    "generatedAt": "2025-05-17T22:29:22.179Z",
+    "tasksAnalyzed": 40,
     "thresholdScore": 5,
     "projectName": "Taskmaster",
-    "usedResearch": true
+    "usedResearch": false
   },
   "complexityAnalysis": [
     {
@@ -217,6 +215,22 @@
|
|||||||
"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
|
"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
|
||||||
"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
|
"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"taskId": 69,
|
||||||
|
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
|
||||||
|
"complexityScore": 5,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "The current 4 subtasks for enhancing the analyze-complexity feature appear well-structured. Consider if any additional subtasks are needed for performance optimization with large task sets or visualization improvements.",
|
||||||
|
"reasoning": "This task involves modifying the existing analyze-complexity feature to support analyzing specific task IDs and updating reports. The complexity is moderate as it requires careful handling of report merging and filtering logic. The 4 existing subtasks cover the main implementation areas from core logic to testing."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"taskId": 70,
|
||||||
|
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
|
||||||
|
"complexityScore": 6,
|
||||||
|
"recommendedSubtasks": 4,
|
||||||
|
"expansionPrompt": "The current 4 subtasks for implementing the 'diagram' command appear well-structured. Consider if any additional subtasks are needed for handling large dependency graphs, additional output formats, or integration with existing visualization tools.",
|
||||||
|
"reasoning": "This task involves creating a new command that generates Mermaid diagrams to visualize task dependencies. The complexity is moderate as it requires parsing task relationships, generating proper Mermaid syntax, and handling various output options. The 4 existing subtasks cover the main implementation areas from interface design to documentation."
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"taskId": 72,
|
"taskId": 72,
|
||||||
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
|
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
|
||||||
@@ -289,69 +303,29 @@
|
|||||||
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
|
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
|
||||||
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
|
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"taskId": 69,
|
|
||||||
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
|
|
||||||
"complexityScore": 7,
|
|
||||||
"recommendedSubtasks": 6,
|
|
||||||
"expansionPrompt": "Break down the task 'Enhance Analyze Complexity for Specific Task IDs' into 6 subtasks focusing on: 1) Core logic modification to accept ID parameters, 2) Report merging functionality, 3) CLI interface updates, 4) MCP tool integration, 5) Documentation updates, and 6) Comprehensive testing across all components.",
|
|
||||||
"reasoning": "This task involves modifying existing functionality across multiple components (core logic, CLI, MCP) with complex logic for filtering tasks and merging reports. The implementation requires careful handling of different parameter combinations and edge cases. The task has interdependent components that need to work together seamlessly, and the report merging functionality adds significant complexity."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 70,
|
|
||||||
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
|
|
||||||
"complexityScore": 6,
|
|
||||||
"recommendedSubtasks": 5,
|
|
||||||
"expansionPrompt": "Break down the 'diagram' command implementation into 5 subtasks: 1) Command interface and parameter handling, 2) Task data extraction and transformation to Mermaid syntax, 3) Diagram rendering with status color coding, 4) Output formatting and file export functionality, and 5) Error handling and edge case management.",
|
|
||||||
"reasoning": "This task requires implementing a new feature rather than modifying existing code, which reduces complexity from integration challenges. However, it involves working with visualization logic, dependency mapping, and multiple output formats. The color coding based on status and handling of dependency relationships adds moderate complexity. The task is well-defined but requires careful attention to diagram formatting and error handling."
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"taskId": 85,
|
"taskId": 85,
|
||||||
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
|
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
|
||||||
"complexityScore": 7,
|
"complexityScore": 6,
|
||||||
"recommendedSubtasks": 5,
|
"recommendedSubtasks": 1,
|
||||||
"expansionPrompt": "Break down the update of ai-services-unified.js for dynamic token limits into subtasks such as: (1) Import and integrate the token counting utility, (2) Refactor _unifiedServiceRunner to calculate and enforce dynamic token limits, (3) Update error handling for token limit violations, (4) Add and verify logging for token usage, (5) Write and execute tests for various prompt and model scenarios.",
|
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing dynamic token limit adjustment based on input length and model capabilities.",
|
||||||
"reasoning": "This task involves significant code changes to a core function, integration of a new utility, dynamic logic for multiple models, and robust error handling. It also requires comprehensive testing for edge cases and integration, making it moderately complex and best managed by splitting into focused subtasks."
|
"reasoning": "This task involves modifying the AI service runner to use the new token counting utility and dynamically adjust output token limits. The complexity is moderate to high as it requires careful integration with existing code and handling various edge cases. No subtasks are necessary as the task is well-defined and focused."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 86,
|
"taskId": 86,
|
||||||
"taskTitle": "Update .taskmasterconfig schema and user guide",
|
"taskTitle": "Update .taskmasterconfig schema and user guide",
|
||||||
"complexityScore": 6,
|
"complexityScore": 4,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 1,
|
||||||
"expansionPrompt": "Expand this task into subtasks: (1) Draft a migration guide for users, (2) Update user documentation to explain new config fields, (3) Modify schema validation logic in config-manager.js, (4) Test and validate backward compatibility and error messaging.",
|
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on creating clear migration guidance and updating documentation.",
|
||||||
"reasoning": "The task spans documentation, schema changes, migration guidance, and validation logic. While not algorithmically complex, it requires careful coordination and thorough testing to ensure a smooth user transition and robust validation."
|
"reasoning": "This task involves creating a migration guide for users to update their configuration files and documenting the new token limit options. The complexity is relatively low as it primarily involves documentation and schema validation. No subtasks are necessary as the task is well-defined and focused."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"taskId": 87,
|
"taskId": 87,
|
||||||
"taskTitle": "Implement validation and error handling",
|
"taskTitle": "Implement validation and error handling",
|
||||||
"complexityScore": 5,
|
"complexityScore": 5,
|
||||||
"recommendedSubtasks": 4,
|
"recommendedSubtasks": 1,
|
||||||
"expansionPrompt": "Decompose this task into: (1) Add validation logic for model and config loading, (2) Implement error handling and fallback mechanisms, (3) Enhance logging and reporting for token usage, (4) Develop helper functions for configuration suggestions and improvements.",
|
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on comprehensive validation and helpful error messages throughout the system.",
|
||||||
"reasoning": "This task is primarily about adding validation, error handling, and logging. While important for robustness, the logic is straightforward and can be modularized into a few clear subtasks."
|
"reasoning": "This task involves adding validation and error handling for token limits throughout the system. The complexity is moderate as it requires careful integration with multiple components and creating helpful error messages. No subtasks are necessary as the task is well-defined and focused."
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 89,
|
|
||||||
"taskTitle": "Introduce Prioritize Command with Enhanced Priority Levels",
|
|
||||||
"complexityScore": 6,
|
|
||||||
"recommendedSubtasks": 5,
|
|
||||||
"expansionPrompt": "Expand this task into: (1) Implement the prioritize command with all required flags and shorthands, (2) Update CLI output and help documentation for new priority levels, (3) Ensure backward compatibility with existing commands, (4) Add error handling for invalid inputs, (5) Write and run tests for all command scenarios.",
|
|
||||||
"reasoning": "This CLI feature requires command parsing, updating internal logic for new priority levels, documentation, and robust error handling. The complexity is moderate due to the need for backward compatibility and comprehensive testing."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 90,
|
|
||||||
"taskTitle": "Implement Subtask Progress Analyzer and Reporting System",
|
|
||||||
"complexityScore": 8,
|
|
||||||
"recommendedSubtasks": 6,
|
|
||||||
"expansionPrompt": "Break down the analyzer implementation into: (1) Design and implement progress tracking logic, (2) Develop status validation and issue detection, (3) Build the reporting system with multiple output formats, (4) Integrate analyzer with the existing task management system, (5) Optimize for performance and scalability, (6) Write unit, integration, and performance tests.",
|
|
||||||
"reasoning": "This is a complex, multi-faceted feature involving data analysis, reporting, integration, and performance optimization. It touches many parts of the system and requires careful design, making it one of the most complex tasks in the list."
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"taskId": 91,
|
|
||||||
"taskTitle": "Implement Move Command for Tasks and Subtasks",
|
|
||||||
"complexityScore": 7,
|
|
||||||
"recommendedSubtasks": 5,
|
|
||||||
"expansionPrompt": "Expand this task into: (1) Implement move logic for tasks and subtasks, (2) Handle edge cases (invalid ids, non-existent parents, circular dependencies), (3) Update CLI to support move command with flags, (4) Ensure data integrity and update relationships, (5) Write and execute tests for various move scenarios.",
|
|
||||||
"reasoning": "Moving tasks and subtasks requires careful handling of hierarchical data, edge cases, and data integrity. The command must be robust and user-friendly, necessitating multiple focused subtasks for safe implementation."
|
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -58,50 +58,6 @@ Testing approach:
- Test parameter validation (missing ID, invalid ID format)
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store
<info added on 2025-05-23T21:02:03.909Z>
|
|
||||||
## Updated Implementation Approach
|
|
||||||
|
|
||||||
Based on code review findings, the implementation approach needs to be revised:
|
|
||||||
|
|
||||||
1. Implement the command in `scripts/modules/commands.js` instead of creating a new file
|
|
||||||
2. Add command registration in the `registerCommands()` function (around line 482)
|
|
||||||
3. Follow existing command structure pattern:
|
|
||||||
```javascript
|
|
||||||
programInstance
|
|
||||||
.command('generate-test')
|
|
||||||
.description('Generate test cases for a task using AI')
|
|
||||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
|
||||||
.option('-i, --id <id>', 'Task ID parameter')
|
|
||||||
.option('-p, --prompt <text>', 'Additional prompt context')
|
|
||||||
.option('-r, --research', 'Use research model')
|
|
||||||
.action(async (options) => {
|
|
||||||
// Implementation
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
4. Use the following utilities:
|
|
||||||
- `findProjectRoot()` for resolving project paths
|
|
||||||
- `findTaskById()` for retrieving task data
|
|
||||||
- `chalk` for formatted console output
|
|
||||||
|
|
||||||
5. Implement error handling following the pattern:
|
|
||||||
```javascript
|
|
||||||
try {
|
|
||||||
// Implementation
|
|
||||||
} catch (error) {
|
|
||||||
console.error(chalk.red(`Error generating test: ${error.message}`));
|
|
||||||
if (error.details) {
|
|
||||||
console.error(chalk.red(error.details));
|
|
||||||
}
|
|
||||||
process.exit(1);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
6. Required imports:
|
|
||||||
- chalk for colored output
|
|
||||||
- path for file path operations
|
|
||||||
- findProjectRoot and findTaskById from './utils.js'
|
|
||||||
</info added on 2025-05-23T21:02:03.909Z>
|
|
||||||
|
|
||||||
## 2. Implement AI prompt construction and FastMCP integration [pending]
|
## 2. Implement AI prompt construction and FastMCP integration [pending]
|
||||||
### Dependencies: 24.1
|
### Dependencies: 24.1
|
||||||
@@ -120,50 +76,6 @@ Testing approach:
- Test FastMCP integration with mocked responses
- Test error handling for FastMCP failures
- Test response processing with sample FastMCP outputs
<info added on 2025-05-23T21:04:33.890Z>
|
|
||||||
## AI Integration Implementation
|
|
||||||
|
|
||||||
### AI Service Integration
|
|
||||||
- Use the unified AI service layer, not FastMCP directly
|
|
||||||
- Implement with `generateObjectService` from '../ai-services-unified.js'
|
|
||||||
- Define Zod schema for structured test generation output:
|
|
||||||
- testContent: Complete Jest test file content
|
|
||||||
- fileName: Suggested filename for the test file
|
|
||||||
- mockRequirements: External dependencies that need mocking
|
|
||||||
|
|
||||||
### Prompt Construction
|
|
||||||
- Create system prompt defining AI's role as test generator
|
|
||||||
- Build user prompt with task context (ID, title, description, details)
|
|
||||||
- Include test strategy and subtasks context in the prompt
|
|
||||||
- Follow patterns from add-task.js for prompt structure
|
|
||||||
|
|
||||||
### Task Analysis
|
|
||||||
- Retrieve task data using `findTaskById()` from utils.js
|
|
||||||
- Build context by analyzing task description, details, and testStrategy
|
|
||||||
- Examine project structure for import patterns
|
|
||||||
- Parse specific testing requirements from task.testStrategy field
|
|
||||||
|
|
||||||
### File System Operations
|
|
||||||
- Determine output path in same directory as tasks.json
|
|
||||||
- Generate standardized filename based on task ID
|
|
||||||
- Use fs.writeFileSync for writing test content to file
|
|
||||||
|
|
||||||
### Error Handling & UI
|
|
||||||
- Implement try/catch blocks for AI service calls
|
|
||||||
- Display user-friendly error messages with chalk
|
|
||||||
- Use loading indicators during AI processing
|
|
||||||
- Support both research and main AI models
|
|
||||||
|
|
||||||
### Telemetry
|
|
||||||
- Pass through telemetryData from AI service response
|
|
||||||
- Display AI usage summary for CLI output
|
|
||||||
|
|
||||||
### Required Dependencies
|
|
||||||
- generateObjectService from ai-services-unified.js
|
|
||||||
- UI components (loading indicators, display functions)
|
|
||||||
- Zod for schema validation
|
|
||||||
- Chalk for formatted console output
|
|
||||||
</info added on 2025-05-23T21:04:33.890Z>
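To make the structured-output piece above concrete, here is a minimal sketch of the Zod schema and service call; the schema field names mirror the bullet list above, the helper name `generateTestObject` is hypothetical, and any `generateObjectService` parameters beyond role/session/projectRoot/schema are assumptions rather than the final signature.

```javascript
import { z } from 'zod';
import { generateObjectService } from '../ai-services-unified.js';

// Assumed output schema mirroring the fields listed above (names are placeholders)
const testGenerationSchema = z.object({
	testContent: z.string().describe('Complete Jest test file content'),
	fileName: z.string().describe('Suggested filename for the test file'),
	mockRequirements: z
		.array(z.string())
		.describe('External dependencies that need mocking')
});

// Hypothetical helper following the add-task.js prompt/call pattern
async function generateTestObject(task, { useResearch, session, projectRoot }) {
	const systemPrompt =
		'You are an expert software engineer who writes comprehensive Jest tests.';
	const userPrompt = `Generate tests for task ${task.id}: ${task.title}\n\nDetails:\n${task.details || ''}\n\nTest strategy:\n${task.testStrategy || ''}`;

	// Parameter names below (other than role/session/projectRoot/schema) are assumptions
	return generateObjectService({
		role: useResearch ? 'research' : 'main',
		session,
		projectRoot,
		schema: testGenerationSchema,
		systemPrompt,
		prompt: userPrompt
	});
}
```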
|
|
||||||
|
|
||||||
## 3. Implement test file generation and output [pending]
|
## 3. Implement test file generation and output [pending]
|
||||||
### Dependencies: 24.2
|
### Dependencies: 24.2
|
||||||
@@ -185,419 +97,4 @@ Testing approach:
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
<info added on 2025-05-23T21:06:32.457Z>
|
|
||||||
## Detailed Implementation Guidelines
|
|
||||||
|
|
||||||
### File Naming Convention Implementation
|
|
||||||
```javascript
|
|
||||||
function generateTestFileName(taskId, isSubtask = false) {
|
|
||||||
if (isSubtask) {
|
|
||||||
// For subtasks like "24.1", generate "task_024_001.test.js"
|
|
||||||
const [parentId, subtaskId] = taskId.split('.');
|
|
||||||
return `task_${parentId.padStart(3, '0')}_${subtaskId.padStart(3, '0')}.test.js`;
|
|
||||||
} else {
|
|
||||||
// For parent tasks like "24", generate "task_024.test.js"
|
|
||||||
return `task_${taskId.toString().padStart(3, '0')}.test.js`;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### File Location Strategy
|
|
||||||
- Place generated test files in the `tasks/` directory alongside task files
|
|
||||||
- This ensures co-location with task documentation and simplifies implementation
|
|
||||||
|
|
||||||
### File Content Structure Template
|
|
||||||
```javascript
|
|
||||||
/**
|
|
||||||
* Test file for Task ${taskId}: ${taskTitle}
|
|
||||||
* Generated automatically by Task Master
|
|
||||||
*/
|
|
||||||
|
|
||||||
import { jest } from '@jest/globals';
|
|
||||||
// Additional imports based on task requirements
|
|
||||||
|
|
||||||
describe('Task ${taskId}: ${taskTitle}', () => {
|
|
||||||
beforeEach(() => {
|
|
||||||
// Setup code
|
|
||||||
});
|
|
||||||
|
|
||||||
afterEach(() => {
|
|
||||||
// Cleanup code
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should ${testDescription}', () => {
|
|
||||||
// Test implementation
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Code Formatting Standards
|
|
||||||
- Follow project's .prettierrc configuration:
|
|
||||||
- Tab width: 2 spaces (useTabs: true)
|
|
||||||
- Print width: 80 characters
|
|
||||||
- Semicolons: Required (semi: true)
|
|
||||||
- Quotes: Single quotes (singleQuote: true)
|
|
||||||
- Trailing commas: None (trailingComma: "none")
|
|
||||||
- Bracket spacing: True
|
|
||||||
- Arrow parens: Always
|
|
||||||
|
|
||||||
### File System Operations Implementation
|
|
||||||
```javascript
|
|
||||||
import fs from 'fs';
|
|
||||||
import path from 'path';
|
|
||||||
|
|
||||||
// Determine output path
|
|
||||||
const tasksDir = path.dirname(tasksPath); // Same directory as tasks.json
|
|
||||||
const fileName = generateTestFileName(task.id, isSubtask);
|
|
||||||
const filePath = path.join(tasksDir, fileName);
|
|
||||||
|
|
||||||
// Ensure directory exists
|
|
||||||
if (!fs.existsSync(tasksDir)) {
|
|
||||||
fs.mkdirSync(tasksDir, { recursive: true });
|
|
||||||
}
|
|
||||||
|
|
||||||
// Write test file with proper error handling
|
|
||||||
try {
|
|
||||||
fs.writeFileSync(filePath, formattedTestContent, 'utf8');
|
|
||||||
} catch (error) {
|
|
||||||
throw new Error(`Failed to write test file: ${error.message}`);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Error Handling for File Operations
|
|
||||||
```javascript
|
|
||||||
try {
|
|
||||||
// File writing operation
|
|
||||||
fs.writeFileSync(filePath, testContent, 'utf8');
|
|
||||||
} catch (error) {
|
|
||||||
if (error.code === 'ENOENT') {
|
|
||||||
throw new Error(`Directory does not exist: ${path.dirname(filePath)}`);
|
|
||||||
} else if (error.code === 'EACCES') {
|
|
||||||
throw new Error(`Permission denied writing to: ${filePath}`);
|
|
||||||
} else if (error.code === 'ENOSPC') {
|
|
||||||
throw new Error('Insufficient disk space to write test file');
|
|
||||||
} else {
|
|
||||||
throw new Error(`Failed to write test file: ${error.message}`);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### User Feedback Implementation
|
|
||||||
```javascript
|
|
||||||
// Success feedback
|
|
||||||
console.log(chalk.green('✅ Test file generated successfully:'));
|
|
||||||
console.log(chalk.cyan(` File: ${fileName}`));
|
|
||||||
console.log(chalk.cyan(` Location: ${filePath}`));
|
|
||||||
console.log(chalk.gray(` Size: ${testContent.length} characters`));
|
|
||||||
|
|
||||||
// Additional info
|
|
||||||
if (mockRequirements && mockRequirements.length > 0) {
|
|
||||||
console.log(chalk.yellow(` Mocks needed: ${mockRequirements.join(', ')}`));
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Content Validation Requirements
|
|
||||||
1. Jest Syntax Validation:
|
|
||||||
- Ensure proper describe/test structure
|
|
||||||
- Validate import statements
|
|
||||||
- Check for balanced brackets and parentheses
|
|
||||||
|
|
||||||
2. Code Quality Checks:
|
|
||||||
- Verify no syntax errors
|
|
||||||
- Ensure proper indentation
|
|
||||||
- Check for required imports
|
|
||||||
|
|
||||||
3. Test Completeness:
|
|
||||||
- At least one test case
|
|
||||||
- Proper test descriptions
|
|
||||||
- Appropriate assertions
|
|
||||||
|
|
||||||
### Required Dependencies
|
|
||||||
```javascript
|
|
||||||
import fs from 'fs';
|
|
||||||
import path from 'path';
|
|
||||||
import chalk from 'chalk';
|
|
||||||
import { log } from '../utils.js';
|
|
||||||
```
|
|
||||||
|
|
||||||
### Integration with Existing Patterns
|
|
||||||
Follow the pattern from `generate-task-files.js`:
|
|
||||||
1. Read task data using existing utilities
|
|
||||||
2. Process content with proper formatting
|
|
||||||
3. Write files with error handling
|
|
||||||
4. Provide feedback to user
|
|
||||||
5. Return success data for MCP integration
|
|
||||||
</info added on 2025-05-23T21:06:32.457Z>
|
|
||||||
<info added on 2025-05-23T21:18:25.369Z>
|
|
||||||
## Corrected Implementation Approach
|
|
||||||
|
|
||||||
### Updated File Location Strategy
|
|
||||||
|
|
||||||
**CORRECTION**: Tests should go in `/tests/` directory, not `/tasks/` directory.
|
|
||||||
|
|
||||||
Based on Jest configuration analysis:
|
|
||||||
- Jest is configured with `roots: ['<rootDir>/tests']`
|
|
||||||
- Test pattern: `**/?(*.)+(spec|test).js`
|
|
||||||
- Current test structure has `/tests/unit/`, `/tests/integration/`, etc.
|
|
||||||
|
|
||||||
### Recommended Directory Structure:
|
|
||||||
```
|
|
||||||
tests/
|
|
||||||
├── unit/ # Manual unit tests
|
|
||||||
├── integration/ # Manual integration tests
|
|
||||||
├── generated/ # AI-generated tests
|
|
||||||
│ ├── tasks/ # Generated task tests
|
|
||||||
│ │ ├── task_024.test.js
|
|
||||||
│ │ └── task_024_001.test.js
|
|
||||||
│ └── README.md # Explains generated tests
|
|
||||||
└── fixtures/ # Test fixtures
|
|
||||||
```
|
|
||||||
|
|
||||||
### Updated File Path Logic:
|
|
||||||
```javascript
|
|
||||||
// Determine output path - place in tests/generated/tasks/
|
|
||||||
const projectRoot = findProjectRoot() || '.';
|
|
||||||
const testsDir = path.join(projectRoot, 'tests', 'generated', 'tasks');
|
|
||||||
const fileName = generateTestFileName(task.id, isSubtask);
|
|
||||||
const filePath = path.join(testsDir, fileName);
|
|
||||||
|
|
||||||
// Ensure directory structure exists
|
|
||||||
if (!fs.existsSync(testsDir)) {
|
|
||||||
fs.mkdirSync(testsDir, { recursive: true });
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Testing Framework Configuration
|
|
||||||
|
|
||||||
The generate-test command should read the configured testing framework from `.taskmasterconfig`:
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
// Read testing framework from config
|
|
||||||
const config = getConfig(projectRoot);
|
|
||||||
const testingFramework = config.testingFramework || 'jest'; // Default to Jest
|
|
||||||
|
|
||||||
// Generate different templates based on framework
|
|
||||||
switch (testingFramework) {
|
|
||||||
case 'jest':
|
|
||||||
return generateJestTest(task, context);
|
|
||||||
case 'mocha':
|
|
||||||
return generateMochaTest(task, context);
|
|
||||||
case 'vitest':
|
|
||||||
return generateVitestTest(task, context);
|
|
||||||
default:
|
|
||||||
throw new Error(`Unsupported testing framework: ${testingFramework}`);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Framework-Specific Templates
|
|
||||||
|
|
||||||
**Jest Template** (current):
|
|
||||||
```javascript
|
|
||||||
/**
|
|
||||||
* Test file for Task ${taskId}: ${taskTitle}
|
|
||||||
* Generated automatically by Task Master
|
|
||||||
*/
|
|
||||||
|
|
||||||
import { jest } from '@jest/globals';
|
|
||||||
// Task-specific imports
|
|
||||||
|
|
||||||
describe('Task ${taskId}: ${taskTitle}', () => {
|
|
||||||
beforeEach(() => {
|
|
||||||
jest.clearAllMocks();
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should ${testDescription}', () => {
|
|
||||||
// Test implementation
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
**Mocha Template**:
|
|
||||||
```javascript
|
|
||||||
/**
|
|
||||||
* Test file for Task ${taskId}: ${taskTitle}
|
|
||||||
* Generated automatically by Task Master
|
|
||||||
*/
|
|
||||||
|
|
||||||
import { expect } from 'chai';
|
|
||||||
import sinon from 'sinon';
|
|
||||||
// Task-specific imports
|
|
||||||
|
|
||||||
describe('Task ${taskId}: ${taskTitle}', () => {
|
|
||||||
beforeEach(() => {
|
|
||||||
sinon.restore();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should ${testDescription}', () => {
|
|
||||||
// Test implementation
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
**Vitest Template**:
|
|
||||||
```javascript
|
|
||||||
/**
|
|
||||||
* Test file for Task ${taskId}: ${taskTitle}
|
|
||||||
* Generated automatically by Task Master
|
|
||||||
*/
|
|
||||||
|
|
||||||
import { describe, test, expect, vi, beforeEach } from 'vitest';
|
|
||||||
// Task-specific imports
|
|
||||||
|
|
||||||
describe('Task ${taskId}: ${taskTitle}', () => {
|
|
||||||
beforeEach(() => {
|
|
||||||
vi.clearAllMocks();
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should ${testDescription}', () => {
|
|
||||||
// Test implementation
|
|
||||||
});
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### AI Prompt Enhancement for Mocking
|
|
||||||
|
|
||||||
To address the mocking challenge, enhance the AI prompt with project context:
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
const systemPrompt = `You are an expert at generating comprehensive test files. When generating tests, pay special attention to mocking external dependencies correctly.
|
|
||||||
|
|
||||||
CRITICAL MOCKING GUIDELINES:
|
|
||||||
1. Analyze the task requirements to identify external dependencies (APIs, databases, file system, etc.)
|
|
||||||
2. Mock external dependencies at the module level, not inline
|
|
||||||
3. Use the testing framework's mocking utilities (jest.mock(), sinon.stub(), vi.mock())
|
|
||||||
4. Create realistic mock data that matches the expected API responses
|
|
||||||
5. Test both success and error scenarios for mocked dependencies
|
|
||||||
6. Ensure mocks are cleared between tests to prevent test pollution
|
|
||||||
|
|
||||||
Testing Framework: ${testingFramework}
|
|
||||||
Project Structure: ${projectStructureContext}
|
|
||||||
`;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Integration with Future Features
|
|
||||||
|
|
||||||
This primitive command design enables:
|
|
||||||
1. **Automatic test generation**: `task-master add-task --with-test`
|
|
||||||
2. **Batch test generation**: `task-master generate-tests --all`
|
|
||||||
3. **Framework-agnostic**: Support multiple testing frameworks
|
|
||||||
4. **Smart mocking**: LLM analyzes dependencies and generates appropriate mocks
|
|
||||||
|
|
||||||
### Updated Implementation Requirements:
|
|
||||||
|
|
||||||
1. **Read testing framework** from `.taskmasterconfig`
|
|
||||||
2. **Create tests directory structure** if it doesn't exist
|
|
||||||
3. **Generate framework-specific templates** based on configuration
|
|
||||||
4. **Enhanced AI prompts** with mocking best practices
|
|
||||||
5. **Project structure analysis** for better import resolution
|
|
||||||
6. **Mock dependency detection** from task requirements
|
|
||||||
</info added on 2025-05-23T21:18:25.369Z>
|
|
||||||
|
|
||||||
## 4. Implement MCP tool integration for generate-test command [pending]
|
|
||||||
### Dependencies: 24.3
|
|
||||||
### Description: Create MCP server tool support for the generate-test command to enable integration with Claude Code and other MCP clients.
|
|
||||||
### Details:
|
|
||||||
Implementation steps:
|
|
||||||
1. Create direct function wrapper in mcp-server/src/core/direct-functions/
|
|
||||||
2. Create MCP tool registration in mcp-server/src/tools/
|
|
||||||
3. Add tool to the main tools index
|
|
||||||
4. Implement proper parameter validation and error handling
|
|
||||||
5. Ensure telemetry data is properly passed through
|
|
||||||
6. Add tool to MCP server registration
|
|
||||||
|
|
||||||
The MCP tool should support the same parameters as the CLI command:
|
|
||||||
- id: Task ID to generate tests for
|
|
||||||
- file: Path to tasks.json file
|
|
||||||
- research: Whether to use research model
|
|
||||||
- prompt: Additional context for test generation
|
|
||||||
|
|
||||||
Follow the existing pattern from other MCP tools like add-task.js and expand-task.js.
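As a rough sketch of what that registration could look like, assuming the FastMCP-style `addTool` API and a `generateTestDirect` direct-function wrapper as described in steps 1–2; the file path, import paths, and helper names here are hypothetical, not the final implementation.

```javascript
// mcp-server/src/tools/generate-test.js (hypothetical path and names)
import { z } from 'zod';
import { handleApiResult, createErrorResponse } from './utils.js';
import { generateTestDirect } from '../core/task-master-core.js';

export function registerGenerateTestTool(server) {
	server.addTool({
		name: 'generate_test',
		description: 'Generate test cases for a task using AI',
		parameters: z.object({
			id: z.string().describe('Task ID to generate tests for'),
			file: z.string().optional().describe('Path to the tasks.json file'),
			research: z.boolean().optional().describe('Use the research model'),
			prompt: z
				.string()
				.optional()
				.describe('Additional context for test generation'),
			projectRoot: z.string().describe('Absolute path to the project root')
		}),
		execute: async (args, { log, session }) => {
			try {
				// Delegate to the direct function so CLI and MCP share one code path
				const result = await generateTestDirect(args, log, { session });
				return handleApiResult(result, log);
			} catch (error) {
				log.error(`generate_test failed: ${error.message}`);
				return createErrorResponse(error.message);
			}
		}
	});
}
```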
|
|
||||||
|
|
||||||
## 5. Add testing framework configuration to project initialization [pending]
|
|
||||||
### Dependencies: 24.3
|
|
||||||
### Description: Enhance the init.js process to let users choose their preferred testing framework (Jest, Mocha, Vitest, etc.) and store this choice in .taskmasterconfig for use by the generate-test command.
|
|
||||||
### Details:
|
|
||||||
Implementation requirements:
|
|
||||||
|
|
||||||
1. **Add Testing Framework Prompt to init.js**:
|
|
||||||
- Add interactive prompt asking users to choose testing framework
|
|
||||||
- Support Jest (default), Mocha + Chai, Vitest, Ava, Jasmine
|
|
||||||
- Include brief descriptions of each framework
|
|
||||||
- Allow --testing-framework flag for non-interactive mode
|
|
||||||
|
|
||||||
2. **Update .taskmasterconfig Template**:
|
|
||||||
- Add testingFramework field to configuration file
|
|
||||||
- Include default dependencies for each framework
|
|
||||||
- Store framework-specific configuration options
|
|
||||||
|
|
||||||
3. **Framework-Specific Setup**:
|
|
||||||
- Generate appropriate config files (jest.config.js, vitest.config.ts, etc.)
|
|
||||||
- Add framework dependencies to package.json suggestions
|
|
||||||
- Create sample test file for the chosen framework
|
|
||||||
|
|
||||||
4. **Integration Points**:
|
|
||||||
- Ensure generate-test command reads testingFramework from config
|
|
||||||
- Add validation to prevent conflicts between framework choices
|
|
||||||
- Support switching frameworks later via models command or separate config command
|
|
||||||
|
|
||||||
This makes the generate-test command truly framework-agnostic and sets up the foundation for --with-test flags in other commands.
|
|
||||||
<info added on 2025-05-23T21:22:02.048Z>
|
|
||||||
# Implementation Plan for Testing Framework Integration
|
|
||||||
|
|
||||||
## Code Structure
|
|
||||||
|
|
||||||
### 1. Update init.js
|
|
||||||
- Add testing framework prompt after addAliases prompt
|
|
||||||
- Implement framework selection with descriptions
|
|
||||||
- Support non-interactive mode with --testing-framework flag
|
|
||||||
- Create setupTestingFramework() function to handle framework-specific setup
|
|
||||||
|
|
||||||
### 2. Create New Module Files
|
|
||||||
- Create `scripts/modules/testing-frameworks.js` for framework templates and setup
|
|
||||||
- Add sample test generators for each supported framework
|
|
||||||
- Implement config file generation for each framework
|
|
||||||
|
|
||||||
### 3. Update Configuration Templates
|
|
||||||
- Modify `assets/.taskmasterconfig` to include testing fields:
|
|
||||||
```json
|
|
||||||
"testingFramework": "{{testingFramework}}",
|
|
||||||
"testingConfig": {
|
|
||||||
"framework": "{{testingFramework}}",
|
|
||||||
"setupFiles": [],
|
|
||||||
"testDirectory": "tests",
|
|
||||||
"testPattern": "**/*.test.js",
|
|
||||||
"coverage": {
|
|
||||||
"enabled": false,
|
|
||||||
"threshold": 80
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. Create Framework-Specific Templates
|
|
||||||
- `assets/jest.config.template.js`
|
|
||||||
- `assets/vitest.config.template.ts`
|
|
||||||
- `assets/.mocharc.template.json`
|
|
||||||
- `assets/ava.config.template.js`
|
|
||||||
- `assets/jasmine.json.template`
|
|
||||||
|
|
||||||
### 5. Update commands.js
|
|
||||||
- Add `--testing-framework <framework>` option to init command
|
|
||||||
- Add validation for supported frameworks
|
|
||||||
|
|
||||||
## Error Handling
|
|
||||||
- Validate selected framework against supported list
|
|
||||||
- Handle existing config files gracefully with warning/overwrite prompt
|
|
||||||
- Provide recovery options if framework setup fails
|
|
||||||
- Add conflict detection for multiple testing frameworks
|
|
||||||
|
|
||||||
## Integration Points
|
|
||||||
- Ensure generate-test command reads testingFramework from config
|
|
||||||
- Prepare for future --with-test flag in other commands
|
|
||||||
- Support framework switching via config command
|
|
||||||
|
|
||||||
## Testing Requirements
|
|
||||||
- Unit tests for framework selection logic
|
|
||||||
- Integration tests for config file generation
|
|
||||||
- Validation tests for each supported framework
|
|
||||||
</info added on 2025-05-23T21:22:02.048Z>
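To illustrate step 1 of the plan, a small sketch of the framework prompt, assuming inquirer (already used by init.js) and honoring a non-interactive `--testing-framework` flag; the choice descriptions and helper name are placeholders.

```javascript
// Hypothetical addition to init.js, placed after the addAliases prompt
import inquirer from 'inquirer';

const SUPPORTED_FRAMEWORKS = ['jest', 'mocha', 'vitest', 'ava', 'jasmine'];

async function promptForTestingFramework(options = {}) {
	// Non-interactive mode: honor --testing-framework if provided and valid
	if (options.testingFramework) {
		if (!SUPPORTED_FRAMEWORKS.includes(options.testingFramework)) {
			throw new Error(
				`Unsupported testing framework: ${options.testingFramework}`
			);
		}
		return options.testingFramework;
	}

	const { testingFramework } = await inquirer.prompt([
		{
			type: 'list',
			name: 'testingFramework',
			message: 'Which testing framework should generated tests target?',
			choices: [
				{ name: 'Jest (default, widely used, zero-config)', value: 'jest' },
				{ name: 'Mocha + Chai (flexible, bring-your-own assertions)', value: 'mocha' },
				{ name: 'Vitest (fast, Vite-native, Jest-compatible API)', value: 'vitest' },
				{ name: 'Ava (minimal, concurrent runner)', value: 'ava' },
				{ name: 'Jasmine (batteries-included BDD)', value: 'jasmine' }
			],
			default: 'jest'
		}
	]);
	return testingFramework;
}
```

The returned value would then be written into `.taskmasterconfig` and passed to a `setupTestingFramework()` helper for the framework-specific file generation described above.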
|
|
||||||
|
|
||||||
|
|||||||
@@ -77,263 +77,48 @@ This implementation should include:
### Description: Design and implement the command-line interface for the dependency graph tool, including argument parsing and help documentation.
### Details:
Define commands for input file specification, output options, filtering, and other user-configurable parameters.
<info added on 2025-05-23T21:02:26.442Z>
|
|
||||||
Implement a new 'diagram' command (with 'graph' alias) in commands.js following the Commander.js pattern. The command should:
|
|
||||||
|
|
||||||
1. Import diagram-generator.js module functions for generating visual representations
|
|
||||||
2. Support multiple visualization types with --type option:
|
|
||||||
- dependencies: show task dependency relationships
|
|
||||||
- subtasks: show task/subtask hierarchy
|
|
||||||
- flow: show task workflow
|
|
||||||
- gantt: show timeline visualization
|
|
||||||
|
|
||||||
3. Include the following options:
|
|
||||||
- --task <id>: Filter diagram to show only specified task and its relationships
|
|
||||||
- --mermaid: Output raw Mermaid markdown for external rendering
|
|
||||||
- --visual: Render diagram directly in terminal
|
|
||||||
- --format <format>: Output format (text, svg, png)
|
|
||||||
|
|
||||||
4. Implement proper error handling and validation:
|
|
||||||
- Validate task IDs using existing taskExists() function
|
|
||||||
- Handle invalid option combinations
|
|
||||||
- Provide descriptive error messages
|
|
||||||
|
|
||||||
5. Integrate with UI components:
|
|
||||||
- Use ui.js display functions for consistent output formatting
|
|
||||||
- Apply chalk coloring for terminal output
|
|
||||||
- Use boxen formatting consistent with other commands
|
|
||||||
|
|
||||||
6. Handle file operations:
|
|
||||||
- Resolve file paths using findProjectRoot() pattern
|
|
||||||
- Support saving diagrams to files when appropriate
|
|
||||||
|
|
||||||
7. Include comprehensive help text following the established pattern in other commands
|
|
||||||
</info added on 2025-05-23T21:02:26.442Z>
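A compact sketch of the registration described above, using the Commander.js `programInstance` pattern already present in commands.js; `displayTaskDiagram` is the hypothetical ui.js entry point from this plan, not an existing function.

```javascript
// Hypothetical registration inside registerCommands() in commands.js
programInstance
	.command('diagram')
	.alias('graph')
	.description('Generate Mermaid diagrams for tasks and dependencies')
	.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
	.option(
		'-t, --type <type>',
		'Diagram type: dependencies, subtasks, flow, gantt',
		'dependencies'
	)
	.option('--task <id>', 'Limit the diagram to one task and its relationships')
	.option('--mermaid', 'Output raw Mermaid markdown for external rendering')
	.option('--visual', 'Render the diagram directly in the terminal')
	.option('--format <format>', 'Output format: text, svg, png', 'text')
	.action(async (options) => {
		try {
			// displayTaskDiagram is the hypothetical ui.js helper described in subtask 41.3
			await displayTaskDiagram(options.file, options.type, options);
		} catch (error) {
			console.error(chalk.red(`Error generating diagram: ${error.message}`));
			process.exit(1);
		}
	});
```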
|
|
||||||
|
|
||||||
## 2. Graph Layout Algorithms [pending]
|
## 2. Graph Layout Algorithms [pending]
|
||||||
### Dependencies: 41.1
|
### Dependencies: 41.1
|
||||||
### Description: Develop or integrate algorithms to compute optimal node and edge placement for clear and readable graph layouts in a terminal environment.
|
### Description: Develop or integrate algorithms to compute optimal node and edge placement for clear and readable graph layouts in a terminal environment.
|
||||||
### Details:
|
### Details:
|
||||||
Consider topological sorting, hierarchical, and force-directed layouts suitable for ASCII/Unicode rendering.
|
Consider topological sorting, hierarchical, and force-directed layouts suitable for ASCII/Unicode rendering.
|
||||||
<info added on 2025-05-23T21:02:49.434Z>
|
|
||||||
Create a new diagram-generator.js module in the scripts/modules/ directory following Task Master's module architecture pattern. The module should include:
|
|
||||||
|
|
||||||
1. Core functions for generating Mermaid diagrams:
|
|
||||||
- generateDependencyGraph(tasks, options) - creates flowchart showing task dependencies
|
|
||||||
- generateSubtaskDiagram(task, options) - creates hierarchy diagram for subtasks
|
|
||||||
- generateProjectFlow(tasks, options) - creates overall project workflow
|
|
||||||
- generateGanttChart(tasks, options) - creates timeline visualization
|
|
||||||
|
|
||||||
2. Integration with existing Task Master data structures:
|
|
||||||
- Use the same task object format from task-manager.js
|
|
||||||
- Leverage dependency analysis from dependency-manager.js
|
|
||||||
- Support complexity scores from analyze-complexity functionality
|
|
||||||
- Handle both main tasks and subtasks with proper ID notation (parentId.subtaskId)
|
|
||||||
|
|
||||||
3. Layout algorithm considerations for Mermaid:
|
|
||||||
- Topological sorting for dependency flows
|
|
||||||
- Hierarchical layouts for subtask trees
|
|
||||||
- Circular dependency detection and highlighting
|
|
||||||
- Terminal width-aware formatting for ASCII fallback
|
|
||||||
|
|
||||||
4. Export functions following the existing module pattern at the bottom of the file
|
|
||||||
</info added on 2025-05-23T21:02:49.434Z>
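As a sketch of the core generator, assuming tasks follow the existing `{ id, title, status, dependencies }` shape; cycle highlighting and subtask notation are omitted here for brevity.

```javascript
// Hypothetical core of diagram-generator.js: tasks -> Mermaid flowchart text
function generateDependencyGraph(tasks, options = {}) {
	const lines = ['flowchart TD'];

	for (const task of tasks) {
		// One node per task; label kept short so the graph stays readable
		const label = `${task.id}: ${String(task.title).slice(0, 40)}`;
		lines.push(`  t${task.id}["${label.replace(/"/g, "'")}"]`);

		// One edge per dependency (prerequisite --> dependent task)
		for (const depId of task.dependencies || []) {
			lines.push(`  t${depId} --> t${task.id}`);
		}
	}

	// Optional status styling: completed nodes rendered green
	if (options.colorByStatus) {
		for (const task of tasks) {
			if (task.status === 'done') {
				lines.push(`  style t${task.id} fill:#c6f6d5,stroke:#2f855a`);
			}
		}
	}

	return lines.join('\n');
}
```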
|
|
||||||
|
|
||||||
## 3. ASCII/Unicode Rendering Engine [pending]
|
## 3. ASCII/Unicode Rendering Engine [pending]
|
||||||
### Dependencies: 41.2
|
### Dependencies: 41.2
|
||||||
### Description: Implement rendering logic to display the dependency graph using ASCII and Unicode characters in the terminal.
|
### Description: Implement rendering logic to display the dependency graph using ASCII and Unicode characters in the terminal.
|
||||||
### Details:
|
### Details:
|
||||||
Support for various node and edge styles, and ensure compatibility with different terminal types.
|
Support for various node and edge styles, and ensure compatibility with different terminal types.
|
||||||
<info added on 2025-05-23T21:03:10.001Z>
|
|
||||||
Extend ui.js with diagram display functions that integrate with Task Master's existing UI patterns:
|
|
||||||
|
|
||||||
1. Implement core diagram display functions:
|
|
||||||
- displayTaskDiagram(tasksPath, diagramType, options) as the main entry point
|
|
||||||
- displayMermaidCode(mermaidCode, title) for formatted code output with boxen
|
|
||||||
- displayDiagramLegend() to explain symbols and colors
|
|
||||||
|
|
||||||
2. Ensure UI consistency by:
|
|
||||||
- Using established chalk color schemes (blue/green/yellow/red)
|
|
||||||
- Applying boxen for consistent component formatting
|
|
||||||
- Following existing display function patterns (displayTaskById, displayComplexityReport)
|
|
||||||
- Utilizing cli-table3 for any diagram metadata tables
|
|
||||||
|
|
||||||
3. Address terminal rendering challenges:
|
|
||||||
- Implement ASCII/Unicode fallback when Mermaid rendering isn't available
|
|
||||||
- Respect terminal width constraints using process.stdout.columns
|
|
||||||
- Integrate with loading indicators via startLoadingIndicator/stopLoadingIndicator
|
|
||||||
|
|
||||||
4. Update task file generation to include Mermaid diagram sections in individual task files
|
|
||||||
|
|
||||||
5. Support both CLI and MCP output formats through the outputFormat parameter
|
|
||||||
</info added on 2025-05-23T21:03:10.001Z>
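For the UI side, a small sketch of `displayMermaidCode` using the chalk/boxen conventions already present in ui.js; the box title, colors, and width cap are placeholder choices for this plan.

```javascript
// Hypothetical ui.js helper: print Mermaid code in a consistent boxen frame
import chalk from 'chalk';
import boxen from 'boxen';

function displayMermaidCode(mermaidCode, title = 'Task Diagram (Mermaid)') {
	// Respect terminal width so the frame does not wrap awkwardly
	const width = process.stdout.columns || 80;
	console.log(
		boxen(chalk.cyan(mermaidCode), {
			padding: 1,
			borderColor: 'blue',
			borderStyle: 'round',
			title,
			width: Math.min(width, 100)
		})
	);
	console.log(
		chalk.gray('Paste this into https://mermaid.live or any Mermaid renderer.')
	);
}
```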
|
|
||||||
|
|
||||||
## 4. Color Coding Support [pending]
|
## 4. Color Coding Support [pending]
|
||||||
### Dependencies: 41.3
|
### Dependencies: 41.3
|
||||||
### Description: Add color coding to nodes and edges to visually distinguish types, statuses, or other attributes in the graph.
|
### Description: Add color coding to nodes and edges to visually distinguish types, statuses, or other attributes in the graph.
|
||||||
### Details:
|
### Details:
|
||||||
Use ANSI escape codes for color; provide options for colorblind-friendly palettes.
|
Use ANSI escape codes for color; provide options for colorblind-friendly palettes.
|
||||||
<info added on 2025-05-23T21:03:35.762Z>
|
|
||||||
Integrate color coding with Task Master's existing status system:
|
|
||||||
|
|
||||||
1. Extend getStatusWithColor() in ui.js to support diagram contexts:
|
|
||||||
- Add 'diagram' parameter to determine rendering context
|
|
||||||
- Modify color intensity for better visibility in graph elements
|
|
||||||
|
|
||||||
2. Implement Task Master's established color scheme using ANSI codes:
|
|
||||||
- Green (\x1b[32m) for 'done'/'completed' tasks
|
|
||||||
- Yellow (\x1b[33m) for 'pending' tasks
|
|
||||||
- Orange (\x1b[38;5;208m) for 'in-progress' tasks
|
|
||||||
- Red (\x1b[31m) for 'blocked' tasks
|
|
||||||
- Gray (\x1b[90m) for 'deferred'/'cancelled' tasks
|
|
||||||
- Magenta (\x1b[35m) for 'review' tasks
|
|
||||||
|
|
||||||
3. Create diagram-specific color functions:
|
|
||||||
- getDependencyLineColor(fromTaskStatus, toTaskStatus) - color dependency arrows based on relationship status
|
|
||||||
- getNodeBorderColor(task) - style node borders using priority/complexity indicators
|
|
||||||
- getSubtaskGroupColor(parentTask) - visually group related subtasks
|
|
||||||
|
|
||||||
4. Integrate complexity visualization:
|
|
||||||
- Use getComplexityWithColor() for node background or border thickness
|
|
||||||
- Map complexity scores to visual weight in the graph
|
|
||||||
|
|
||||||
5. Ensure accessibility:
|
|
||||||
- Add text-based indicators (symbols like ✓, ⚠, ⏳) alongside colors
|
|
||||||
- Implement colorblind-friendly palettes as user-selectable option
|
|
||||||
- Include shape variations for different statuses
|
|
||||||
|
|
||||||
6. Follow existing ANSI patterns:
|
|
||||||
- Maintain consistency with terminal UI color usage
|
|
||||||
- Reuse color constants from the codebase
|
|
||||||
|
|
||||||
7. Support graceful degradation:
|
|
||||||
- Check terminal capabilities using existing detection
|
|
||||||
- Provide monochrome fallbacks with distinctive patterns
|
|
||||||
- Use bold/underline as alternatives when colors unavailable
|
|
||||||
</info added on 2025-05-23T21:03:35.762Z>
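A small sketch of the status-to-style mapping described above, pairing each color with a text symbol so the information survives monochrome terminals; the exact palette and symbol set are assumptions.

```javascript
// Hypothetical mapping used by the diagram renderer (color + symbol per status)
import chalk from 'chalk';

const STATUS_STYLES = {
	done: { color: chalk.green, symbol: '✓' },
	completed: { color: chalk.green, symbol: '✓' },
	pending: { color: chalk.yellow, symbol: '⏳' },
	'in-progress': { color: chalk.hex('#FFA500'), symbol: '▶' },
	blocked: { color: chalk.red, symbol: '⚠' },
	deferred: { color: chalk.gray, symbol: '…' },
	cancelled: { color: chalk.gray, symbol: '✗' },
	review: { color: chalk.magenta, symbol: '?' }
};

function formatStatusForDiagram(status) {
	const style = STATUS_STYLES[status] || { color: chalk.white, symbol: '•' };
	// Symbol first so the status stays readable when color is stripped
	return style.color(`${style.symbol} ${status}`);
}
```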
|
|
||||||
|
|
||||||
## 5. Circular Dependency Detection [pending]
|
## 5. Circular Dependency Detection [pending]
|
||||||
### Dependencies: 41.2
|
### Dependencies: 41.2
|
||||||
### Description: Implement algorithms to detect and highlight circular dependencies within the graph.
|
### Description: Implement algorithms to detect and highlight circular dependencies within the graph.
|
||||||
### Details:
|
### Details:
|
||||||
Clearly mark cycles in the rendered output and provide warnings or errors as appropriate.
|
Clearly mark cycles in the rendered output and provide warnings or errors as appropriate.
|
||||||
<info added on 2025-05-23T21:04:20.125Z>
|
|
||||||
Integrate with Task Master's existing circular dependency detection:
|
|
||||||
|
|
||||||
1. Import the dependency detection logic from dependency-manager.js module
|
|
||||||
2. Utilize the findCycles function from utils.js or dependency-manager.js
|
|
||||||
3. Extend validateDependenciesCommand functionality to highlight cycles in diagrams
|
|
||||||
|
|
||||||
Visual representation in Mermaid diagrams:
|
|
||||||
- Apply red/bold styling to nodes involved in dependency cycles
|
|
||||||
- Add warning annotations to cyclic edges
|
|
||||||
- Implement cycle path highlighting with distinctive line styles
|
|
||||||
|
|
||||||
Integration with validation workflow:
|
|
||||||
- Execute dependency validation before diagram generation
|
|
||||||
- Display cycle warnings consistent with existing CLI error messaging
|
|
||||||
- Utilize chalk.red and boxen for error highlighting following established patterns
|
|
||||||
|
|
||||||
Add diagram legend entries that explain cycle notation and warnings
|
|
||||||
|
|
||||||
Ensure detection of cycles in both:
|
|
||||||
- Main task dependencies
|
|
||||||
- Subtask dependencies within parent tasks
|
|
||||||
|
|
||||||
Follow Task Master's error handling patterns for graceful cycle reporting and user notification
|
|
||||||
</info added on 2025-05-23T21:04:20.125Z>
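For reference, a minimal DFS-based cycle finder in the spirit of the existing `findCycles` helper; the real implementation in dependency-manager.js/utils.js should be reused rather than duplicated, so this is illustrative only.

```javascript
// Illustrative cycle detection over { id, dependencies } task objects
function findDependencyCycles(tasks) {
	const byId = new Map(tasks.map((t) => [String(t.id), t]));
	const visiting = new Set();
	const visited = new Set();
	const cycles = [];

	function visit(id, path) {
		if (visiting.has(id)) {
			// Back-edge found: record the cycle portion of the current path
			cycles.push([...path.slice(path.indexOf(id)), id]);
			return;
		}
		if (visited.has(id) || !byId.has(id)) return;

		visiting.add(id);
		for (const dep of byId.get(id).dependencies || []) {
			visit(String(dep), [...path, id]);
		}
		visiting.delete(id);
		visited.add(id);
	}

	for (const task of tasks) visit(String(task.id), []);
	return cycles; // each entry is an array of IDs forming a cycle
}
```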
|
|
||||||
|
|
||||||
## 6. Filtering and Search Functionality [pending]
|
## 6. Filtering and Search Functionality [pending]
|
||||||
### Dependencies: 41.1, 41.2
|
### Dependencies: 41.1, 41.2
|
||||||
### Description: Enable users to filter nodes and edges by criteria such as name, type, or dependency depth.
|
### Description: Enable users to filter nodes and edges by criteria such as name, type, or dependency depth.
|
||||||
### Details:
|
### Details:
|
||||||
Support command-line flags for filtering and interactive search if feasible.
|
Support command-line flags for filtering and interactive search if feasible.
|
||||||
<info added on 2025-05-23T21:04:57.811Z>
|
|
||||||
Implement MCP tool integration for task dependency visualization:
|
|
||||||
|
|
||||||
1. Create task_diagram.js in mcp-server/src/tools/ following existing tool patterns
|
|
||||||
2. Implement taskDiagramDirect.js in mcp-server/src/core/direct-functions/
|
|
||||||
3. Use Zod schema for parameter validation:
|
|
||||||
- diagramType (dependencies, subtasks, flow, gantt)
|
|
||||||
- taskId (optional string)
|
|
||||||
- format (mermaid, text, json)
|
|
||||||
- includeComplexity (boolean)
|
|
||||||
|
|
||||||
4. Structure response data with:
|
|
||||||
- mermaidCode for client-side rendering
|
|
||||||
- metadata (nodeCount, edgeCount, cycleWarnings)
|
|
||||||
- support for both task-specific and project-wide diagrams
|
|
||||||
|
|
||||||
5. Integrate with session management and project root handling
|
|
||||||
6. Implement error handling using handleApiResult pattern
|
|
||||||
7. Register the tool in tools/index.js
|
|
||||||
|
|
||||||
Maintain compatibility with existing command-line flags for filtering and interactive search.
|
|
||||||
</info added on 2025-05-23T21:04:57.811Z>
|
|
||||||
|
|
||||||
## 7. Accessibility Features [pending]
|
## 7. Accessibility Features [pending]
|
||||||
### Dependencies: 41.3, 41.4
|
### Dependencies: 41.3, 41.4
|
||||||
### Description: Ensure the tool is accessible, including support for screen readers, high-contrast modes, and keyboard navigation.
|
### Description: Ensure the tool is accessible, including support for screen readers, high-contrast modes, and keyboard navigation.
|
||||||
### Details:
|
### Details:
|
||||||
Provide alternative text output and ensure color is not the sole means of conveying information.
|
Provide alternative text output and ensure color is not the sole means of conveying information.
|
||||||
<info added on 2025-05-23T21:05:54.584Z>
|
|
||||||
# Accessibility and Export Integration
|
|
||||||
|
|
||||||
## Accessibility Features
|
|
||||||
- Provide alternative text output for visual elements
|
|
||||||
- Ensure color is not the sole means of conveying information
|
|
||||||
- Support keyboard navigation through the dependency graph
|
|
||||||
- Add screen reader compatible node descriptions
|
|
||||||
|
|
||||||
## Export Integration
|
|
||||||
- Extend generateTaskFiles function in task-manager.js to include Mermaid diagram sections
|
|
||||||
- Add Mermaid code blocks to task markdown files under ## Diagrams header
|
|
||||||
- Follow existing task file generation patterns and markdown structure
|
|
||||||
- Support multiple diagram types per task file:
|
|
||||||
* Task dependencies (prerequisite relationships)
|
|
||||||
* Subtask hierarchy visualization
|
|
||||||
* Task flow context in project workflow
|
|
||||||
- Integrate with existing fs module file writing operations
|
|
||||||
- Add diagram export options to the generate command in commands.js
|
|
||||||
- Support SVG and PNG export using Mermaid CLI when available
|
|
||||||
- Implement error handling for diagram generation failures
|
|
||||||
- Reference exported diagrams in task markdown with proper paths
|
|
||||||
- Update CLI generate command with options like --include-diagrams
|
|
||||||
</info added on 2025-05-23T21:05:54.584Z>
|
|
||||||
|
|
||||||
## 8. Performance Optimization [pending]
|
## 8. Performance Optimization [pending]
|
||||||
### Dependencies: 41.2, 41.3, 41.4, 41.5, 41.6
|
### Dependencies: 41.2, 41.3, 41.4, 41.5, 41.6
|
||||||
### Description: Profile and optimize the tool for large graphs to ensure responsive rendering and low memory usage.
|
### Description: Profile and optimize the tool for large graphs to ensure responsive rendering and low memory usage.
|
||||||
### Details:
|
### Details:
|
||||||
Implement lazy loading, efficient data structures, and parallel processing where appropriate.
|
Implement lazy loading, efficient data structures, and parallel processing where appropriate.
|
||||||
<info added on 2025-05-23T21:06:14.533Z>
|
|
||||||
# Mermaid Library Integration and Terminal-Specific Handling
|
|
||||||
|
|
||||||
## Package Dependencies
|
|
||||||
- Add mermaid package as an optional dependency in package.json for generating raw Mermaid diagram code
|
|
||||||
- Consider mermaid-cli for SVG/PNG conversion capabilities
|
|
||||||
- Evaluate terminal-image or similar libraries for terminals with image support
|
|
||||||
- Explore ascii-art-ansi or box-drawing character libraries for text-only terminals
|
|
||||||
|
|
||||||
## Terminal Capability Detection
|
|
||||||
- Leverage existing terminal detection from ui.js to assess rendering capabilities
|
|
||||||
- Implement detection for:
|
|
||||||
- iTerm2 and other terminals with image protocol support
|
|
||||||
- Terminals with Unicode/extended character support
|
|
||||||
- Basic terminals requiring pure ASCII output
|
|
||||||
|
|
||||||
## Rendering Strategy with Fallbacks
|
|
||||||
1. Primary: Generate raw Mermaid code for user copy/paste
|
|
||||||
2. Secondary: Render simplified ASCII tree/flow representation using box characters
|
|
||||||
3. Tertiary: Present dependencies in tabular format for minimal terminals
|
|
||||||
|
|
||||||
## Implementation Approach
|
|
||||||
- Use dynamic imports for optional rendering libraries to maintain lightweight core
|
|
||||||
- Implement graceful degradation when optional packages aren't available
|
|
||||||
- Follow Task Master's philosophy of minimal dependencies
|
|
||||||
- Ensure performance optimization through lazy loading where appropriate
|
|
||||||
- Design modular rendering components that can be swapped based on terminal capabilities
|
|
||||||
</info added on 2025-05-23T21:06:14.533Z>
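To illustrate the graceful-degradation idea, a sketch of lazily probing for an optional renderer and falling back to raw Mermaid text; the optional package name and the image pipeline are placeholders, not chosen dependencies.

```javascript
// Hypothetical renderer selection with graceful fallback to raw Mermaid text
async function canRenderImages() {
	try {
		await import('terminal-image'); // optional dependency, placeholder name
		return true;
	} catch {
		return false;
	}
}

async function renderDiagram(mermaidCode, options = {}) {
	if (options.format === 'image' && (await canRenderImages())) {
		// Image pipeline (e.g. mermaid-cli -> PNG -> terminal image) would go here;
		// this sketch simply falls through to text output.
	}
	// Primary and fallback path: raw Mermaid code for copy/paste or external rendering
	return mermaidCode;
}
```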
|
|
||||||
|
|
||||||
## 9. Documentation [pending]
|
## 9. Documentation [pending]
|
||||||
### Dependencies: 41.1, 41.2, 41.3, 41.4, 41.5, 41.6, 41.7, 41.8
|
### Dependencies: 41.1, 41.2, 41.3, 41.4, 41.5, 41.6, 41.7, 41.8
|
||||||
@@ -346,28 +131,4 @@ Include examples, troubleshooting, and contribution guidelines.
### Description: Develop automated tests for all major features, including CLI parsing, layout correctness, rendering, color coding, filtering, and cycle detection.
### Details:
Include unit, integration, and regression tests; validate accessibility and performance claims.
<info added on 2025-05-23T21:08:36.329Z>
|
|
||||||
# Documentation Tasks for Visual Task Dependency Graph
|
|
||||||
|
|
||||||
## User Documentation
|
|
||||||
1. Update README.md with diagram command documentation following existing command reference format
|
|
||||||
2. Add examples to CLI command help text in commands.js matching patterns from other commands
|
|
||||||
3. Create docs/diagrams.md with detailed usage guide including:
|
|
||||||
- Command examples for each diagram type
|
|
||||||
- Mermaid code samples and output
|
|
||||||
- Terminal compatibility notes
|
|
||||||
- Integration with task workflow examples
|
|
||||||
- Troubleshooting section for common diagram rendering issues
|
|
||||||
- Accessibility features and terminal fallback options
|
|
||||||
|
|
||||||
## Developer Documentation
|
|
||||||
1. Update MCP tool documentation to include the new task_diagram tool
|
|
||||||
2. Add JSDoc comments to all new functions following existing code standards
|
|
||||||
3. Create contributor documentation for extending diagram types
|
|
||||||
4. Update API documentation for any new MCP interface endpoints
|
|
||||||
|
|
||||||
## Integration Documentation
|
|
||||||
1. Document integration with existing commands (analyze-complexity, generate, etc.)
|
|
||||||
2. Provide examples showing how diagrams complement other Task Master features
|
|
||||||
</info added on 2025-05-23T21:08:36.329Z>
|
|
||||||
|
|
||||||
|
|||||||
@@ -3,89 +3,56 @@
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create an interactive REPL-style chat interface for AI-powered research that maintains conversation context, integrates project information, and provides session management capabilities.
# Description: Create a command that allows users to quickly research topics using Perplexity AI, with options to include task context or custom prompts.
# Details:
Develop an interactive REPL-style chat interface for AI-powered research that allows users to have ongoing research conversations with context awareness. The system should:
Develop a new command called 'research' that integrates with Perplexity AI's API to fetch information on specified topics. The command should:

1. Create an interactive REPL using inquirer that:
1. Accept the following parameters:
- Maintains conversation history and context
- A search query string (required)
- Provides a natural chat-like experience
- A task or subtask ID for context (optional)
- Supports special commands with the '/' prefix
- A custom prompt to guide the research (optional)

2. Integrate with the existing ai-services-unified.js using research mode:
2. When a task/subtask ID is provided, extract relevant information from it to enrich the research query with context.
- Leverage our unified AI service architecture
- Configure appropriate system prompts for research context
- Handle streaming responses for real-time feedback

3. Support multiple context sources:
3. Implement proper API integration with Perplexity, including authentication and rate limiting handling.
- Task/subtask IDs for project context
- File paths for code or document context
- Custom prompts for specific research directions
- Project file tree for system context

4. Implement chat commands including:
4. Format and display the research results in a readable format in the terminal, with options to:
- `/save` - Save conversation to file
- Save the results to a file
- `/task` - Associate with or load context from a task
- Copy results to clipboard
- `/help` - Show available commands and usage
- Generate a summary of key points
- `/exit` - End the research session
- `/copy` - Copy last response to clipboard
- `/summary` - Generate summary of conversation
- `/detail` - Adjust research depth level

5. Create session management capabilities:
5. Cache research results to avoid redundant API calls for the same queries.
- Generate and track unique session IDs
- Save/load sessions automatically
- Browse and switch between previous sessions
- Export sessions to portable formats

6. Design a consistent UI using ui.js patterns:
6. Provide a configuration option to set the depth/detail level of research (quick overview vs. comprehensive).
- Color-coded messages for user/AI distinction
- Support for markdown rendering in terminal
- Progressive display of AI responses
- Clear visual hierarchy and readability

7. Follow the "taskmaster way":
7. Handle errors gracefully, especially network issues or API limitations.
- Create something new and exciting
- Focus on usefulness and practicality
- Avoid over-engineering
- Maintain consistency with existing patterns

The REPL should feel like a natural conversation while providing powerful research capabilities that integrate seamlessly with the rest of the system.
The command should follow the existing CLI structure and maintain consistency with other commands in the system.

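As a rough illustration of how the research command described above could be registered, assuming the Commander-based command definitions used elsewhere in commands.js; the two helper functions are stubs standing in for the real context extraction and AI service calls, not actual Taskmaster APIs:

```javascript
import { Command } from 'commander';

// Stubs standing in for the real context-extraction and AI-service calls (hypothetical).
async function loadTaskContext(taskId) {
  return `Context for task ${taskId}`;
}
async function runResearch(query, { context, prompt }) {
  return `Results for "${query}"${context ? ' (with task context)' : ''}${prompt ? ` guided by: ${prompt}` : ''}`;
}

const program = new Command();

program
  .command('research')
  .description('Research a topic with AI, optionally using a task/subtask for context')
  .argument('<query>', 'search query string (required)')
  .option('-i, --id <taskId>', 'task or subtask ID to pull context from')
  .option('-p, --prompt <text>', 'custom prompt to guide the research')
  .action(async (query, options) => {
    const context = options.id ? await loadTaskContext(options.id) : null;
    const results = await runResearch(query, { context, prompt: options.prompt });
    console.log(results);
  });

await program.parseAsync(process.argv);
```
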
# Test Strategy:
1. Unit tests:
- Test the REPL command parsing and execution
- Test the command with various combinations of parameters (query only, query+task, query+custom prompt, all parameters)
- Mock AI service responses to test different scenarios
- Mock the Perplexity API responses to test different scenarios (successful response, error response, rate limiting)
- Verify context extraction and integration from various sources
- Verify that task context is correctly extracted and incorporated into the research query
- Test session serialization and deserialization

2. Integration tests:
- Test actual AI service integration with the REPL
- Test actual API calls to Perplexity with valid credentials (using a test account)
- Verify session persistence across application restarts
- Verify the caching mechanism works correctly for repeated queries
- Test conversation state management with long interactions
- Test error handling with intentionally invalid requests
- Verify context switching between different tasks and files

3. User acceptance testing:
- Have team members use the REPL for real research needs
- Have team members use the command for real research needs and provide feedback
- Test the conversation flow and command usability
- Verify the command works in different network environments
- Verify the UI is intuitive and responsive
- Test the command with very long queries and responses
- Test with various terminal sizes and environments

4. Performance testing:
- Measure and optimize response time for queries
- Test behavior with large conversation histories
- Test behavior under poor network conditions
- Verify performance with complex context sources
- Test under poor network conditions

5. Specific test scenarios:
Validate that the research results are properly formatted, readable, and that all output options (save, copy) function correctly.
- Verify markdown rendering for complex formatting
- Test streaming display with various response lengths
- Verify export features create properly formatted files
- Test session recovery from simulated crashes
- Validate handling of special characters and unicode

# Subtasks:
## 1. Create Perplexity API Client Service [cancelled]
## 1. Create Perplexity API Client Service [pending]
### Dependencies: None
### Description: Develop a service module that handles all interactions with the Perplexity AI API, including authentication, request formatting, and response handling.
### Details:
@@ -105,9 +72,6 @@ Testing approach:
- Test error handling with simulated network failures
- Verify caching mechanism works correctly
- Test with various query types and options
<info added on 2025-05-23T21:06:45.726Z>
DEPRECATION NOTICE: This subtask is no longer needed and has been marked for removal. Instead of creating a new Perplexity service, we will leverage the existing ai-services-unified.js with research mode. This approach allows us to maintain a unified architecture for AI services rather than implementing a separate service specifically for Perplexity.
</info added on 2025-05-23T21:06:45.726Z>

## 2. Implement Task Context Extraction Logic [pending]
### Dependencies: None
@@ -130,37 +94,6 @@ Testing approach:
- Test with various task structures and content types
- Verify error handling for missing or invalid tasks
- Test the quality of extracted context with sample queries
<info added on 2025-05-23T21:11:44.560Z>
Updated Implementation Approach:

REFACTORED IMPLEMENTATION:
1. Extract the fuzzy search logic from add-task.js (lines ~240-400) into `utils/contextExtractor.js`
2. Implement a reusable `TaskContextExtractor` class with the following methods:
- `extractTaskContext(taskId)` - Base context extraction
- `performFuzzySearch(query, options)` - Enhanced Fuse.js implementation
- `getRelevanceScore(task, query)` - Scoring mechanism from add-task.js
- `detectPurposeCategories(task)` - Category classification logic
- `findRelatedTasks(taskId)` - Identify dependencies and relationships
- `aggregateMultiQueryContext(queries)` - Support for multiple search terms

3. Add configurable context depth levels:
- Minimal: Just task title and description
- Standard: Include details and immediate relationships
- Comprehensive: Full context with all dependencies and related tasks

4. Implement context formatters:
- `formatForSystemPrompt(context)` - Structured for AI system instructions
- `formatForChatContext(context)` - Conversational format for chat
- `formatForResearchQuery(context, query)` - Optimized for research commands

5. Add caching layer for performance optimization:
- Implement LRU cache for expensive fuzzy search results
- Cache invalidation on task updates

6. Ensure backward compatibility with existing context extraction requirements

This approach leverages our existing sophisticated search logic rather than rebuilding from scratch, while making it more flexible and reusable across the application.
</info added on 2025-05-23T21:11:44.560Z>

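A compact sketch of the `TaskContextExtractor` shape proposed above, assuming Fuse.js (already used by the fuzzy search in add-task.js) and a tasks.json of the form `{ tasks: [...] }` with id, title, description, details, and dependencies; the method bodies are simplified placeholders rather than the final implementation:

```javascript
import fs from 'fs';
import Fuse from 'fuse.js';

export class TaskContextExtractor {
  constructor(tasksPath = 'tasks/tasks.json') {
    this.tasks = JSON.parse(fs.readFileSync(tasksPath, 'utf8')).tasks;
    this.fuse = new Fuse(this.tasks, {
      keys: ['title', 'description', 'details'],
      includeScore: true,
      threshold: 0.4 // lower = stricter matching
    });
  }

  // Base context extraction for a single task ID
  extractTaskContext(taskId) {
    const task = this.tasks.find((t) => String(t.id) === String(taskId));
    if (!task) return null;
    return { title: task.title, description: task.description, details: task.details };
  }

  // Fuzzy search across the task list; returns the most relevant tasks with scores
  performFuzzySearch(query, { limit = 5 } = {}) {
    return this.fuse
      .search(query)
      .slice(0, limit)
      .map((r) => ({ task: r.item, score: r.score }));
  }

  // Related tasks via declared dependencies
  findRelatedTasks(taskId) {
    const task = this.tasks.find((t) => String(t.id) === String(taskId));
    return task?.dependencies?.map((depId) => this.extractTaskContext(depId)) ?? [];
  }
}
```
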
## 3. Build Research Command CLI Interface [pending]
### Dependencies: 51.1, 51.2
@@ -187,40 +120,6 @@ Testing approach:
- Verify command validation logic works correctly
- Test with various combinations of options
- Ensure proper error messages for invalid inputs
<info added on 2025-05-23T21:09:08.478Z>
Implementation details:
1. Create a new module `repl/research-chat.js` for the interactive research experience
2. Implement REPL-style chat interface using inquirer with:
- Persistent conversation history management
- Context-aware prompting system
- Command parsing for special instructions
3. Implement REPL commands:
- `/save` - Save conversation to file
- `/task` - Associate with or load context from a task
- `/help` - Show available commands and usage
- `/exit` - End the research session
- `/copy` - Copy last response to clipboard
- `/summary` - Generate summary of conversation
- `/detail` - Adjust research depth level
4. Create context initialization system:
- Task/subtask context loading
- File content integration
- System prompt configuration
5. Integrate with ai-services-unified.js research mode
6. Implement conversation state management:
- Track message history
- Maintain context window
- Handle context pruning for long conversations
7. Design consistent UI patterns using ui.js library
8. Add entry point in main CLI application

Testing approach:
- Test REPL command parsing and execution
- Verify context initialization with various inputs
- Test conversation state management
- Ensure proper error handling and recovery
- Validate UI consistency across different terminal environments
</info added on 2025-05-23T21:09:08.478Z>

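A minimal sketch of the `repl/research-chat.js` loop outlined above, assuming the inquirer package for prompting; the AI call is stubbed so nothing is assumed about the unified service's real API:

```javascript
import inquirer from 'inquirer';

// Placeholder: in the real module this would call ai-services-unified.js in research mode.
async function askResearchModel(history, userInput) {
  return `(research answer for: ${userInput})`;
}

export async function startResearchChat() {
  const history = []; // { role: 'user' | 'assistant', content: string }
  for (;;) {
    const { line } = await inquirer.prompt([
      { type: 'input', name: 'line', message: 'research>' }
    ]);
    if (line.startsWith('/')) {
      if (line === '/exit') break;
      if (line === '/help') {
        console.log('Commands: /save, /task <id>, /copy, /summary, /detail <level>, /exit');
        continue;
      }
      // Other commands (/save, /task, ...) would be dispatched to a command registry here.
      continue;
    }
    history.push({ role: 'user', content: line });
    const answer = await askResearchModel(history, line);
    history.push({ role: 'assistant', content: answer });
    console.log(answer);
  }
}
```
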
## 4. Implement Results Processing and Output Formatting [pending]
### Dependencies: 51.1, 51.3
@@ -246,45 +145,8 @@ Testing approach:
- Verify file saving functionality creates proper files with correct content
- Test clipboard functionality
- Verify summarization produces useful results
<info added on 2025-05-23T21:10:00.181Z>
Implementation details:
1. Create a new module `utils/chatFormatter.js` for REPL interface formatting
2. Implement terminal output formatting for conversational display:
- Color-coded messages distinguishing user inputs and AI responses
- Proper text wrapping and indentation for readability
- Support for markdown rendering in terminal
- Visual indicators for system messages and status updates
3. Implement streaming/progressive display of AI responses:
- Character-by-character or chunk-by-chunk display
- Cursor animations during response generation
- Ability to interrupt long responses
4. Design chat history visualization:
- Scrollable history with clear message boundaries
- Timestamp display options
- Session identification
5. Create specialized formatters for different content types:
- Code blocks with syntax highlighting
- Bulleted and numbered lists
- Tables and structured data
- Citations and references
6. Implement export functionality:
- Save conversations to markdown or text files
- Export individual responses
- Copy responses to clipboard
7. Adapt existing ui.js patterns for conversational context:
- Maintain consistent styling while supporting chat flow
- Handle multi-turn context appropriately

Testing approach:
## 5. Implement Caching and Results Management System [pending]
- Test streaming display with various response lengths and speeds
- Verify markdown rendering accuracy for complex formatting
- Test history navigation and scrolling functionality
- Verify export features create properly formatted files
- Test display on various terminal sizes and configurations
- Verify handling of special characters and unicode
</info added on 2025-05-23T21:10:00.181Z>

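To make the color-coding and progressive-display points concrete, here is a small sketch assuming the chalk package; the streaming source is any async iterable of text chunks, and the formatting rules are illustrative only:

```javascript
import chalk from 'chalk';

// Color-code user vs. AI messages; real wrapping logic would measure terminal width.
export function formatMessage(role, text) {
  const label = role === 'user' ? chalk.cyan('you >') : chalk.green('ai  >');
  return `${label} ${text.split('\n').join('\n      ')}`;
}

// Progressively print whatever chunks the AI service streams back.
export async function streamResponse(chunks) {
  process.stdout.write(chalk.green('ai  > '));
  let full = '';
  for await (const chunk of chunks) {
    process.stdout.write(chunk); // chunk-by-chunk display
    full += chunk;
  }
  process.stdout.write('\n');
  return full; // returned so it can be stored in history or exported
}
```
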
## 5. Implement Caching and Results Management System [cancelled]
### Dependencies: 51.1, 51.4
### Description: Create a persistent caching system for research results and implement functionality to manage, retrieve, and reference previous research.
### Details:
@@ -311,142 +173,4 @@ Testing approach:
- Test history management commands
- Verify task association functionality
- Test with large cache sizes to ensure performance
<info added on 2025-05-23T21:10:28.544Z>
Implementation details:
1. Create a session management system for the REPL experience:
- Generate and track unique session IDs
- Store conversation history with timestamps
- Maintain context and state between interactions
2. Implement session persistence:
- Save sessions to disk automatically
- Load previous sessions on startup
- Handle graceful recovery from crashes
3. Build session browser and selector:
- List available sessions with preview
- Filter sessions by date, topic, or content
- Enable quick switching between sessions
4. Implement conversation state serialization:
- Capture full conversation context
- Preserve user preferences per session
- Handle state migration during updates
5. Add session sharing capabilities:
- Export sessions to portable formats
- Import sessions from files
- Generate shareable links (if applicable)
6. Create session management commands:
- Create new sessions
- Clone existing sessions
- Archive or delete old sessions

Testing approach:
- Verify session persistence across application restarts
- Test session recovery from simulated crashes
- Validate state serialization with complex conversations
- Ensure session switching maintains proper context
- Test session import/export functionality
- Verify performance with large conversation histories
</info added on 2025-05-23T21:10:28.544Z>

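A small sketch of the session persistence described above, using only built-in Node APIs; the directory name and record shape are illustrative rather than Taskmaster's actual layout:

```javascript
import fs from 'fs';
import path from 'path';
import crypto from 'crypto';

const SESSION_DIR = path.join('.taskmaster', 'research-sessions'); // assumed location

export function createSession(topic = 'untitled') {
  fs.mkdirSync(SESSION_DIR, { recursive: true });
  return { id: crypto.randomUUID(), topic, createdAt: new Date().toISOString(), messages: [] };
}

// Saved after every exchange so a crashed session can be recovered on the next startup.
export function saveSession(session) {
  const file = path.join(SESSION_DIR, `${session.id}.json`);
  fs.writeFileSync(file, JSON.stringify(session, null, 2));
}

export function listSessions() {
  if (!fs.existsSync(SESSION_DIR)) return [];
  return fs
    .readdirSync(SESSION_DIR)
    .filter((f) => f.endsWith('.json'))
    .map((f) => JSON.parse(fs.readFileSync(path.join(SESSION_DIR, f), 'utf8')));
}

export function loadSession(id) {
  return JSON.parse(fs.readFileSync(path.join(SESSION_DIR, `${id}.json`), 'utf8'));
}
```
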
## 6. Implement Project Context Generation [pending]
### Dependencies: 51.2
### Description: Create functionality to generate and include project-level context such as file trees, repository structure, and codebase insights for more informed research.
### Details:
Implementation details:
1. Create a new module `utils/projectContextGenerator.js` for project-level context extraction
2. Implement file tree generation functionality:
- Scan project directory structure recursively
- Filter out irrelevant files (node_modules, .git, etc.)
- Format file tree for AI consumption
- Include file counts and structure statistics
3. Add code analysis capabilities:
- Extract key imports and dependencies
- Identify main modules and their relationships
- Generate high-level architecture overview
4. Implement context summarization:
- Create concise project overview
- Identify key technologies and patterns
- Summarize project purpose and structure
5. Add caching for expensive operations:
- Cache file tree with invalidation on changes
- Store analysis results with TTL
6. Create integration with research REPL:
- Add project context to system prompts
- Support `/project` command to refresh context
- Allow selective inclusion of project components

Testing approach:
- Test file tree generation with various project structures
- Verify filtering logic works correctly
- Test context summarization quality
- Measure performance impact of context generation
- Verify caching mechanism effectiveness

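The file-tree step in the plan above can be sketched with built-in Node APIs; the ignore list and indentation format are illustrative:

```javascript
import fs from 'fs';
import path from 'path';

const IGNORED = new Set(['node_modules', '.git', 'dist', 'coverage']);

export function buildFileTree(dir, prefix = '', lines = [], stats = { files: 0, dirs: 0 }) {
  const entries = fs
    .readdirSync(dir, { withFileTypes: true })
    .filter((e) => !IGNORED.has(e.name))
    .sort((a, b) => a.name.localeCompare(b.name));
  for (const entry of entries) {
    if (entry.isDirectory()) {
      stats.dirs += 1;
      lines.push(`${prefix}${entry.name}/`);
      buildFileTree(path.join(dir, entry.name), prefix + '  ', lines, stats);
    } else {
      stats.files += 1;
      lines.push(`${prefix}${entry.name}`);
    }
  }
  return { tree: lines.join('\n'), stats };
}

// Example: const { tree, stats } = buildFileTree('.');
// The tree string plus stats can then be appended to the research system prompt.
```
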
## 7. Create REPL Command System [pending]
### Dependencies: 51.3
### Description: Implement a flexible command system for the research REPL that allows users to control the conversation flow, manage sessions, and access additional functionality.
### Details:
Implementation details:
1. Create a new module `repl/commands.js` for REPL command handling
2. Implement a command parser that:
- Detects commands starting with `/`
- Parses arguments and options
- Handles quoted strings and special characters
3. Create a command registry system:
- Register command handlers with descriptions
- Support command aliases
- Enable command discovery and help
4. Implement core commands:
- `/save [filename]` - Save conversation
- `/task <taskId>` - Load task context
- `/file <path>` - Include file content
- `/help [command]` - Show help
- `/exit` - End session
- `/copy [n]` - Copy nth response
- `/summary` - Generate conversation summary
- `/detail <level>` - Set detail level
- `/clear` - Clear conversation
- `/project` - Refresh project context
- `/session <id|new>` - Switch/create session
5. Add command completion and suggestions
6. Implement error handling for invalid commands
7. Create a help system with examples

Testing approach:
- Test command parsing with various inputs
- Verify command execution and error handling
- Test command completion functionality
- Verify help system provides useful information
- Test with complex command sequences

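A rough sketch of the command registry and parser described above; only `/help` and `/exit` are registered here, and the handler signature (parsed args plus a shared session object) is an assumption rather than the module's final API:

```javascript
const registry = new Map();

export function registerCommand(name, { description, aliases = [], handler }) {
  registry.set(name, { description, handler });
  for (const alias of aliases) registry.set(alias, { description, handler, aliasOf: name });
}

// Parses '/task 51.2 --detail high' into { name: 'task', args: ['51.2', '--detail', 'high'] }.
// Quoted strings are kept together ("two words" stays one argument).
export function parseCommand(line) {
  const tokens = line.slice(1).match(/"[^"]*"|\S+/g) ?? [];
  const [name, ...args] = tokens.map((t) => t.replace(/^"|"$/g, ''));
  return { name, args };
}

export async function runCommand(line, session) {
  const { name, args } = parseCommand(line);
  const entry = registry.get(name);
  if (!entry) {
    console.log(`Unknown command: /${name}. Try /help.`);
    return;
  }
  await entry.handler(args, session);
}

registerCommand('help', {
  description: 'Show available commands',
  handler: () => {
    for (const [name, { description, aliasOf }] of registry) {
      if (!aliasOf) console.log(`/${name} - ${description}`);
    }
  }
});

registerCommand('exit', {
  description: 'End the research session',
  aliases: ['quit'],
  handler: (_args, session) => {
    session.done = true;
  }
});
```
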
## 8. Integrate with AI Services Unified [pending]
### Dependencies: 51.3, 51.4
### Description: Integrate the research REPL with the existing ai-services-unified.js to leverage the unified AI service architecture with research mode.
### Details:
Implementation details:
1. Update `repl/research-chat.js` to integrate with ai-services-unified.js
2. Configure research mode in AI service:
- Set appropriate system prompts
- Configure temperature and other parameters
- Enable streaming responses
3. Implement context management:
- Format conversation history for AI context
- Include task and project context
- Handle context window limitations
4. Add support for different research styles:
- Exploratory research with broader context
- Focused research with specific questions
- Comparative analysis between concepts
5. Implement response handling:
- Process streaming chunks
- Format and display responses
- Handle errors and retries
6. Add configuration options for AI service selection
7. Implement fallback mechanisms for service unavailability

Testing approach:
- Test integration with mocked AI services
- Verify context formatting and management
- Test streaming response handling
- Verify error handling and recovery
- Test with various research styles and queries

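Because ai-services-unified.js's exact interface isn't shown here, the sketch below keeps the service call abstract: any function returning an async iterable of text chunks can be passed in, and the wrapper adds the progressive display, retry, and fallback behavior listed above:

```javascript
// Hedged sketch: streamFn and fallbackFn are injected by the caller, so nothing about the
// unified AI service's real signatures is assumed.
export async function streamWithRetry(streamFn, fallbackFn, { retries = 2 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      let full = '';
      for await (const chunk of await streamFn()) {
        process.stdout.write(chunk); // progressive display
        full += chunk;
      }
      process.stdout.write('\n');
      return full;
    } catch (err) {
      console.error(`Research request failed (attempt ${attempt + 1}): ${err.message}`);
    }
  }
  // Fall back to a non-streaming call (or a secondary provider) when streaming keeps failing.
  return fallbackFn ? fallbackFn() : null;
}
```
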
@@ -1,6 +1,6 @@
# Task ID: 63
# Title: Add pnpm Support for the Taskmaster Package
# Status: done
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement full support for pnpm as an alternative package manager in the Taskmaster application, ensuring users have the exact same experience as with npm when installing and managing the package. The installation process, including any CLI prompts or web interfaces, must serve the exact same content and user experience regardless of whether npm or pnpm is used. The project uses 'module' as the package type, defines binaries 'task-master' and 'task-master-mcp', and its core logic resides in 'scripts/modules/'. The 'init' command (via scripts/init.js) creates the directory structure (.cursor/rules, scripts, tasks), copies templates (.env.example, .gitignore, rule files, dev.js), manages package.json merging, and sets up MCP config (.cursor/mcp.json). All dependencies are standard npm dependencies listed in package.json, and manual modifications are being removed.
@@ -88,49 +88,49 @@ This implementation should maintain full feature parity and identical user exper
Success criteria: Taskmaster should install and function identically regardless of whether it was installed via npm or pnpm, with no degradation in functionality, performance, or user experience. All binaries should be properly linked, and the directory structure should be correctly created.

# Subtasks:
## 1. Update Documentation for pnpm Support [done]
## 1. Update Documentation for pnpm Support [pending]
### Dependencies: None
### Description: Revise installation and usage documentation to include pnpm commands and instructions for installing and managing Taskmaster with pnpm. Clearly state that the installation process, including any website or UI shown, is identical to npm. Ensure documentation reflects the use of 'module' package type, binaries, and the init process as defined in scripts/init.js.
### Details:
Add pnpm installation commands (e.g., `pnpm add taskmaster`) and update all relevant sections in the README and official docs to reflect pnpm as a supported package manager. Document that any installation website or prompt is the same as with npm. Include notes on the 'module' package type, binaries, and the directory/template setup performed by scripts/init.js.

## 2. Ensure Package Scripts Compatibility with pnpm [done]
## 2. Ensure Package Scripts Compatibility with pnpm [pending]
### Dependencies: 63.1
### Description: Review and update package.json scripts to ensure they work seamlessly with pnpm's execution model. Confirm that any scripts responsible for showing a website or prompt during install behave identically with pnpm and npm. Ensure compatibility with 'module' package type and correct binary definitions.
### Details:
Test all scripts using `pnpm run <script>`, address any pnpm-specific path or execution differences, and modify scripts as needed for compatibility. Pay special attention to any scripts that trigger a website or prompt during installation, ensuring they serve the same content as npm. Validate that scripts/init.js and binaries are referenced correctly for ESM ('module') projects.

## 3. Generate and Validate pnpm Lockfile [done]
## 3. Generate and Validate pnpm Lockfile [pending]
### Dependencies: 63.2
### Description: Install dependencies using pnpm to create a pnpm-lock.yaml file and ensure it accurately reflects the project's dependency tree, considering the 'module' package type.
### Details:
Run `pnpm install` to generate the lockfile, check it into version control, and verify that dependency resolution is correct and consistent. Ensure that all dependencies listed in package.json are resolved as expected for an ESM project.

## 4. Test Taskmaster Installation and Operation with pnpm [done]
## 4. Test Taskmaster Installation and Operation with pnpm [pending]
### Dependencies: 63.3
### Description: Thoroughly test Taskmaster's installation and CLI operation when installed via pnpm, both globally and locally. Confirm that any website or UI shown during installation is identical to npm. Validate that binaries and the init process (scripts/init.js) work as expected.
### Details:
Perform global (`pnpm add -g taskmaster`) and local installations, verify CLI commands, and check for any pnpm-specific issues or incompatibilities. Ensure any installation UIs or websites appear identical to npm installations, including any website or prompt shown during install. Test that binaries 'task-master' and 'task-master-mcp' are linked and that scripts/init.js creates the correct structure and templates.

## 5. Integrate pnpm into CI/CD Pipeline [done]
## 5. Integrate pnpm into CI/CD Pipeline [pending]
### Dependencies: 63.4
### Description: Update CI/CD workflows to include pnpm in the test matrix, ensuring all tests pass when dependencies are installed with pnpm. Confirm that tests cover the 'module' package type, binaries, and init process.
### Details:
Modify GitHub Actions or other CI configurations to use pnpm/action-setup, run tests with pnpm, and cache pnpm dependencies for efficiency. Ensure that CI covers CLI commands, binary linking, and the directory/template setup performed by scripts/init.js.

## 6. Verify Installation UI/Website Consistency [done]
## 6. Verify Installation UI/Website Consistency [pending]
### Dependencies: 63.4
### Description: Ensure any installation UIs, websites, or interactive prompts—including any website or prompt shown during install—appear and function identically when installing with pnpm compared to npm. Confirm that the experience is consistent for the 'module' package type and the init process.
### Details:
Identify all user-facing elements during the installation process, including any website or prompt shown during install, and verify they are consistent across package managers. If a website is shown during installation, ensure it appears the same regardless of package manager used. Validate that any prompts or UIs triggered by scripts/init.js are identical.

## 7. Test init.js Script with pnpm [done]
## 7. Test init.js Script with pnpm [pending]
### Dependencies: 63.4
### Description: Verify that the scripts/init.js file works correctly when Taskmaster is installed via pnpm, creating the proper directory structure and copying all required templates as defined in the project structure.
### Details:
Test the init command to ensure it properly creates .cursor/rules, scripts, and tasks directories, copies templates (.env.example, .gitignore, rule files, dev.js), handles package.json merging, and sets up MCP config (.cursor/mcp.json) as per scripts/init.js.

## 8. Verify Binary Links with pnpm [done]
## 8. Verify Binary Links with pnpm [pending]
### Dependencies: 63.4
### Description: Ensure that the task-master and task-master-mcp binaries are properly defined in package.json, linked, and executable when installed via pnpm, in both global and local installations.
### Details:

@@ -1,6 +1,6 @@
# Task ID: 64
# Title: Add Yarn Support for Taskmaster Installation
# Status: done
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement full support for installing and managing Taskmaster using Yarn package manager, ensuring users have the exact same experience as with npm or pnpm. The installation process, including any CLI prompts or web interfaces, must serve the exact same content and user experience regardless of whether npm, pnpm, or Yarn is used. The project uses 'module' as the package type, defines binaries 'task-master' and 'task-master-mcp', and its core logic resides in 'scripts/modules/'. The 'init' command (via scripts/init.js) creates the directory structure (.cursor/rules, scripts, tasks), copies templates (.env.example, .gitignore, rule files, dev.js), manages package.json merging, and sets up MCP config (.cursor/mcp.json). All dependencies are standard npm dependencies listed in package.json, and manual modifications are being removed.
@@ -74,55 +74,55 @@ Testing should verify complete Yarn support through the following steps:
All tests should pass with the same results as when using npm, with identical user experience throughout the installation and usage process.

# Subtasks:
## 1. Update package.json for Yarn Compatibility [done]
## 1. Update package.json for Yarn Compatibility [pending]
### Dependencies: None
### Description: Modify the package.json file to ensure all dependencies, scripts, and configurations are compatible with Yarn's installation and resolution methods. Confirm that any scripts responsible for showing a website or prompt during install behave identically with Yarn and npm. Ensure compatibility with 'module' package type and correct binary definitions.
### Details:
Review and update dependency declarations, script syntax, and any package manager-specific fields to avoid conflicts or unsupported features when using Yarn. Pay special attention to any scripts that trigger a website or prompt during installation, ensuring they serve the same content as npm. Validate that scripts/init.js and binaries are referenced correctly for ESM ('module') projects.

## 2. Add Yarn-Specific Configuration Files [done]
## 2. Add Yarn-Specific Configuration Files [pending]
### Dependencies: 64.1
### Description: Introduce Yarn-specific configuration files such as .yarnrc.yml if needed to optimize Yarn behavior and ensure consistent installs for 'module' package type and binary definitions.
### Details:
Determine if Yarn v2+ (Berry) or classic requires additional configuration for the project, and add or update .yarnrc.yml or .yarnrc files accordingly. Ensure configuration supports ESM and binary linking.

## 3. Test and Fix Yarn Compatibility for Scripts and CLI [done]
## 3. Test and Fix Yarn Compatibility for Scripts and CLI [pending]
### Dependencies: 64.2
### Description: Ensure all scripts, post-install hooks, and CLI commands function correctly when Taskmaster is installed and managed via Yarn. Confirm that any website or UI shown during installation is identical to npm. Validate that binaries and the init process (scripts/init.js) work as expected.
### Details:
Test all lifecycle scripts, post-install actions, and CLI commands using Yarn. Address any issues related to environment variables, script execution, or dependency hoisting. Ensure any website or prompt shown during install is the same as with npm. Validate that binaries 'task-master' and 'task-master-mcp' are linked and that scripts/init.js creates the correct structure and templates.

## 4. Update Documentation for Yarn Installation and Usage [done]
## 4. Update Documentation for Yarn Installation and Usage [pending]
### Dependencies: 64.3
### Description: Revise installation and usage documentation to include clear instructions for installing and managing Taskmaster with Yarn. Clearly state that the installation process, including any website or UI shown, is identical to npm. Ensure documentation reflects the use of 'module' package type, binaries, and the init process as defined in scripts/init.js. If the installation process includes a website component or requires account setup, document the steps users must follow. If not, explicitly state that no website or account setup is required.
### Details:
Add Yarn-specific installation commands, troubleshooting tips, and notes on version compatibility to the README and any relevant docs. Document that any installation website or prompt is the same as with npm. Include notes on the 'module' package type, binaries, and the directory/template setup performed by scripts/init.js. If website or account setup is required during installation, provide clear instructions; otherwise, confirm and document that no such steps are needed.

## 5. Implement and Test Package Manager Detection Logic [done]
## 5. Implement and Test Package Manager Detection Logic [pending]
### Dependencies: 64.4
### Description: Update or add logic in the codebase to detect Yarn installations and handle Yarn-specific behaviors, ensuring feature parity across package managers. Ensure detection logic works for 'module' package type and binary definitions.
### Details:
Modify detection logic to recognize Yarn (classic and berry), handle lockfile generation, and resolve any Yarn-specific package resolution or hoisting issues. Ensure detection logic supports ESM and binary linking.

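One way to sketch the detection logic described in this subtask: npm, pnpm, Yarn, and Bun all set `npm_config_user_agent` when they run lifecycle scripts (e.g. "yarn/1.22.19 npm/? node/v18.17.0 ..."), so the active package manager can usually be read from its first token. Exact handling in Taskmaster may differ; this is illustrative only:

```javascript
export function detectPackageManager() {
  const agent = process.env.npm_config_user_agent ?? '';
  const [tool] = agent.split(' ');          // e.g. "pnpm/8.6.0"
  const [name, version] = tool.split('/');  // e.g. ["pnpm", "8.6.0"]
  if (['npm', 'pnpm', 'yarn', 'bun'].includes(name)) {
    return { name, version };
  }
  // Fallback: running directly under the Bun runtime exposes process.versions.bun
  if (process.versions.bun) return { name: 'bun', version: process.versions.bun };
  return { name: 'npm', version: null }; // default assumption
}
```
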
## 6. Verify Installation UI/Website Consistency [done]
## 6. Verify Installation UI/Website Consistency [pending]
### Dependencies: 64.3
### Description: Ensure any installation UIs, websites, or interactive prompts—including any website or prompt shown during install—appear and function identically when installing with Yarn compared to npm. Confirm that the experience is consistent for the 'module' package type and the init process. If the installation process includes a website or account setup, verify that all required website actions (e.g., account creation, login) are consistent and documented. If not, confirm and document that no website or account setup is needed.
### Details:
Identify all user-facing elements during the installation process, including any website or prompt shown during install, and verify they are consistent across package managers. If a website is shown during installation or account setup is required, ensure it appears and functions the same regardless of package manager used, and document the steps. If not, confirm and document that no website or account setup is needed. Validate that any prompts or UIs triggered by scripts/init.js are identical.

## 7. Test init.js Script with Yarn [done]
## 7. Test init.js Script with Yarn [pending]
### Dependencies: 64.3
### Description: Verify that the scripts/init.js file works correctly when Taskmaster is installed via Yarn, creating the proper directory structure and copying all required templates as defined in the project structure.
### Details:
Test the init command to ensure it properly creates .cursor/rules, scripts, and tasks directories, copies templates (.env.example, .gitignore, rule files, dev.js), handles package.json merging, and sets up MCP config (.cursor/mcp.json) as per scripts/init.js.

## 8. Verify Binary Links with Yarn [done]
## 8. Verify Binary Links with Yarn [pending]
### Dependencies: 64.3
### Description: Ensure that the task-master and task-master-mcp binaries are properly defined in package.json, linked, and executable when installed via Yarn, in both global and local installations.
### Details:
Check that the binaries defined in package.json are correctly linked in node_modules/.bin when installed with Yarn, and that they can be executed without errors. Validate that binaries work for ESM ('module') projects and are accessible after both global and local installs.

## 9. Test Website Account Setup with Yarn [done]
## 9. Test Website Account Setup with Yarn [pending]
### Dependencies: 64.6
### Description: If the installation process includes a website component, verify that account setup, registration, or any other user-specific configurations work correctly when Taskmaster is installed via Yarn. If no website or account setup is required, confirm and document this explicitly.
### Details:

@@ -1,6 +1,6 @@
|
|||||||
# Task ID: 65
|
# Task ID: 65
|
||||||
# Title: Add Bun Support for Taskmaster Installation
|
# Title: Add Bun Support for Taskmaster Installation
|
||||||
# Status: done
|
# Status: pending
|
||||||
# Dependencies: None
|
# Dependencies: None
|
||||||
# Priority: medium
|
# Priority: medium
|
||||||
# Description: Implement full support for installing and managing Taskmaster using the Bun package manager, ensuring the installation process and user experience are identical to npm, pnpm, and Yarn.
|
# Description: Implement full support for installing and managing Taskmaster using the Bun package manager, ensuring the installation process and user experience are identical to npm, pnpm, and Yarn.
|
||||||
@@ -11,37 +11,37 @@ Update the Taskmaster installation scripts and documentation to support Bun as a
|
|||||||
1. Install Taskmaster using Bun on macOS, Linux, and Windows (including WSL and PowerShell), following the updated documentation. 2. Run the full installation and initialization process, verifying that the directory structure, templates, and MCP config are set up identically to npm, pnpm, and Yarn. 3. Execute all CLI commands (including 'init') and confirm functional parity. 4. If a website or account setup is required, test these flows for consistency; if not, confirm and document this. 5. Check for Bun-specific issues (e.g., install hangs) and verify that troubleshooting steps are effective. 6. Ensure the documentation is clear, accurate, and up to date for all supported platforms.
|
1. Install Taskmaster using Bun on macOS, Linux, and Windows (including WSL and PowerShell), following the updated documentation. 2. Run the full installation and initialization process, verifying that the directory structure, templates, and MCP config are set up identically to npm, pnpm, and Yarn. 3. Execute all CLI commands (including 'init') and confirm functional parity. 4. If a website or account setup is required, test these flows for consistency; if not, confirm and document this. 5. Check for Bun-specific issues (e.g., install hangs) and verify that troubleshooting steps are effective. 6. Ensure the documentation is clear, accurate, and up to date for all supported platforms.
|
||||||
|
|
||||||
# Subtasks:
|
# Subtasks:
|
||||||
## 1. Research Bun compatibility requirements [done]
|
## 1. Research Bun compatibility requirements [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Investigate Bun's JavaScript runtime environment and identify key differences from Node.js that may affect Taskmaster's installation and operation.
|
### Description: Investigate Bun's JavaScript runtime environment and identify key differences from Node.js that may affect Taskmaster's installation and operation.
|
||||||
### Details:
|
### Details:
|
||||||
Research Bun's package management, module resolution, and API compatibility with Node.js. Document any potential issues or limitations that might affect Taskmaster. Identify required changes to make Taskmaster compatible with Bun's execution model.
|
Research Bun's package management, module resolution, and API compatibility with Node.js. Document any potential issues or limitations that might affect Taskmaster. Identify required changes to make Taskmaster compatible with Bun's execution model.
|
||||||
|
|
||||||
## 2. Update installation scripts for Bun compatibility [done]
|
## 2. Update installation scripts for Bun compatibility [pending]
|
||||||
### Dependencies: 65.1
|
### Dependencies: 65.1
|
||||||
### Description: Modify the existing installation scripts to detect and support Bun as a runtime environment.
|
### Description: Modify the existing installation scripts to detect and support Bun as a runtime environment.
|
||||||
### Details:
|
### Details:
|
||||||
Add Bun detection logic to installation scripts. Update package management commands to use Bun equivalents where needed. Ensure all dependencies are compatible with Bun. Modify any Node.js-specific code to work with Bun's runtime.
|
Add Bun detection logic to installation scripts. Update package management commands to use Bun equivalents where needed. Ensure all dependencies are compatible with Bun. Modify any Node.js-specific code to work with Bun's runtime.
|
||||||
|
|
||||||
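As a rough illustration of the Bun detection mentioned above — a minimal sketch assuming a Node/Bun install script; the helper name and command strings are illustrative, not Taskmaster's actual code:

```js
// Hypothetical helper for the installation scripts: detect the runtime and
// pick matching package-manager commands. Bun exposes its version on
// process.versions.bun; Node.js does not.
function detectRuntime() {
  if (typeof process !== 'undefined' && process.versions && process.versions.bun) {
    return { name: 'bun', installCmd: 'bun install', runCmd: 'bun run' };
  }
  return { name: 'node', installCmd: 'npm install', runCmd: 'npm run' };
}

const runtime = detectRuntime();
console.log(`Detected runtime: ${runtime.name} (install via "${runtime.installCmd}")`);
```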
## 3. Create Bun-specific installation path [done]
|
## 3. Create Bun-specific installation path [pending]
|
||||||
### Dependencies: 65.2
|
### Dependencies: 65.2
|
||||||
### Description: Implement a dedicated installation flow for Bun users that optimizes for Bun's capabilities.
|
### Description: Implement a dedicated installation flow for Bun users that optimizes for Bun's capabilities.
|
||||||
### Details:
|
### Details:
|
||||||
Create a Bun-specific installation script that leverages Bun's performance advantages. Update any environment detection logic to properly identify Bun environments. Ensure proper path resolution and environment variable handling for Bun.
|
Create a Bun-specific installation script that leverages Bun's performance advantages. Update any environment detection logic to properly identify Bun environments. Ensure proper path resolution and environment variable handling for Bun.
|
||||||
|
|
||||||
## 4. Test Taskmaster installation with Bun [done]
|
## 4. Test Taskmaster installation with Bun [pending]
|
||||||
### Dependencies: 65.3
|
### Dependencies: 65.3
|
||||||
### Description: Perform comprehensive testing of the installation process using Bun across different operating systems.
|
### Description: Perform comprehensive testing of the installation process using Bun across different operating systems.
|
||||||
### Details:
|
### Details:
|
||||||
Test installation on Windows, macOS, and Linux using Bun. Verify that all Taskmaster features work correctly when installed via Bun. Document any issues encountered and implement fixes as needed.
|
Test installation on Windows, macOS, and Linux using Bun. Verify that all Taskmaster features work correctly when installed via Bun. Document any issues encountered and implement fixes as needed.
|
||||||
|
|
||||||
## 5. Test Taskmaster operation with Bun [done]
|
## 5. Test Taskmaster operation with Bun [pending]
|
||||||
### Dependencies: 65.4
|
### Dependencies: 65.4
|
||||||
### Description: Ensure all Taskmaster functionality works correctly when running under Bun.
|
### Description: Ensure all Taskmaster functionality works correctly when running under Bun.
|
||||||
### Details:
|
### Details:
|
||||||
Test all Taskmaster commands and features when running with Bun. Compare performance metrics between Node.js and Bun. Identify and fix any runtime issues specific to Bun. Ensure all plugins and extensions are compatible.
|
Test all Taskmaster commands and features when running with Bun. Compare performance metrics between Node.js and Bun. Identify and fix any runtime issues specific to Bun. Ensure all plugins and extensions are compatible.
|
||||||
|
|
||||||
## 6. Update documentation for Bun support [done]
|
## 6. Update documentation for Bun support [pending]
|
||||||
### Dependencies: 65.4, 65.5
|
### Dependencies: 65.4, 65.5
|
||||||
### Description: Update all relevant documentation to include information about installing and running Taskmaster with Bun.
|
### Description: Update all relevant documentation to include information about installing and running Taskmaster with Bun.
|
||||||
### Details:
|
### Details:
|
||||||
|
|||||||
@@ -15,92 +15,29 @@ This task has two main components:\n\n1. Add `--json` flag to all relevant CLI c
|
|||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Modify the command handlers for `task-master next` and `task-master show <id>` to recognize and handle a `--json` flag. When the flag is present, output the raw data received from MCP tools directly as JSON.
|
### Description: Modify the command handlers for `task-master next` and `task-master show <id>` to recognize and handle a `--json` flag. When the flag is present, output the raw data received from MCP tools directly as JSON.
|
||||||
### Details:
|
### Details:
|
||||||
1. Update the CLI argument parser to add the `--json` boolean flag to both commands
|
Use a CLI argument parsing library (e.g., argparse, click, commander) to add the `--json` boolean flag. In the command execution logic, check if the flag is set. If true, serialize the data object (before any human-readable formatting) into a JSON string and print it to stdout. If false, proceed with the existing formatting logic. Focus on these two commands first to establish the pattern.
|
||||||
2. Create a `formatAsJson` utility function in `src/utils/output.js` that takes a data object and returns a properly formatted JSON string
|
|
||||||
3. In the command handler functions (`src/commands/next.js` and `src/commands/show.js`), add a conditional check for the `--json` flag
|
|
||||||
4. If the flag is set, call the `formatAsJson` function with the raw data object and print the result
|
|
||||||
5. If the flag is not set, continue with the existing human-readable formatting logic
|
|
||||||
6. Ensure proper error handling for JSON serialization failures
|
|
||||||
7. Update the command help text in both files to document the new flag
|
|
||||||
|
|
||||||
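A minimal sketch of the flag handling described above, using Commander as the argument parser; `getNextTask` and `renderTaskForHumans` are hypothetical stand-ins for Taskmaster's real data and formatting helpers:

```js
import { Command } from 'commander';

// Hypothetical stand-ins for the real MCP-backed data fetch and formatter.
async function getNextTask() {
  return { id: 42, title: 'Example task', status: 'pending' };
}
function renderTaskForHumans(task) {
  console.log(`Next task #${task.id}: ${task.title} [${task.status}]`);
}

const program = new Command();

program
  .command('next')
  .description('Show the next task to work on')
  .option('--json', 'Output raw task data as JSON instead of formatted text')
  .action(async (options) => {
    const task = await getNextTask();
    if (options.json) {
      // Emit the raw data object before any human-readable formatting is applied.
      console.log(JSON.stringify(task, null, 2));
      return;
    }
    renderTaskForHumans(task);
  });

program.parse(process.argv);
```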
## 2. Extend JSON Output to All Relevant Commands and Ensure Schema Consistency [pending]
|
## 2. Extend JSON Output to All Relevant Commands and Ensure Schema Consistency [pending]
|
||||||
### Dependencies: 67.1
|
### Dependencies: 67.1
|
||||||
### Description: Apply the JSON output pattern established in subtask 1 to all other relevant Taskmaster CLI commands that display data (e.g., `list`, `status`, etc.). Ensure the JSON structure is consistent where applicable (e.g., task objects should have the same fields). Add help text mentioning the `--json` flag for each modified command.
|
### Description: Apply the JSON output pattern established in subtask 1 to all other relevant Taskmaster CLI commands that display data (e.g., `list`, `status`, etc.). Ensure the JSON structure is consistent where applicable (e.g., task objects should have the same fields). Add help text mentioning the `--json` flag for each modified command.
|
||||||
### Details:
|
### Details:
|
||||||
1. Create a JSON schema definition file at `src/schemas/task.json` to define the standard structure for task objects
|
Identify all commands that output structured data. Refactor the JSON output logic into a reusable utility function if possible. Define a standard schema for common data types like tasks. Update the help documentation for each command to include the `--json` flag description. Ensure error outputs are also handled appropriately (e.g., potentially outputting JSON error objects).
|
||||||
2. Modify the following command files to support the `--json` flag:
|
|
||||||
- `src/commands/list.js`
|
|
||||||
- `src/commands/status.js`
|
|
||||||
- `src/commands/search.js`
|
|
||||||
- `src/commands/summary.js`
|
|
||||||
3. Refactor the `formatAsJson` utility to handle different data types (single task, task array, status object, etc.)
|
|
||||||
4. Add a `validateJsonSchema` function in `src/utils/validation.js` to ensure output conforms to defined schemas
|
|
||||||
5. Update each command's help text documentation to include the `--json` flag description
|
|
||||||
6. Implement consistent error handling for JSON output (using a standard error object format)
|
|
||||||
7. For list-type commands, ensure array outputs are properly formatted as JSON arrays
|
|
||||||
|
|
||||||
## 3. Create `install-keybindings` Command Structure and OS Detection [pending]
|
## 3. Create `install-keybindings` Command Structure and OS Detection [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Set up the basic structure for the new `task-master install-keybindings` command. Implement logic to detect the user's operating system (Linux, macOS, Windows) and determine the default path to Cursor's `keybindings.json` file.
|
### Description: Set up the basic structure for the new `task-master install-keybindings` command. Implement logic to detect the user's operating system (Linux, macOS, Windows) and determine the default path to Cursor's `keybindings.json` file.
|
||||||
### Details:
|
### Details:
|
||||||
1. Create a new command file at `src/commands/install-keybindings.js`
|
Add a new command entry point using the CLI framework. Use standard library functions (e.g., `os.platform()` in Node, `platform.system()` in Python) to detect the OS. Define constants or a configuration map for the default `keybindings.json` paths for each supported OS. Handle cases where the path might vary (e.g., different installation methods for Cursor). Add basic help text for the new command.
|
||||||
2. Register the command in the main CLI entry point (`src/index.js`)
|
|
||||||
3. Implement OS detection using `os.platform()` in Node.js
|
|
||||||
4. Define the following path constants in `src/config/paths.js`:
|
|
||||||
- Windows: `%APPDATA%\Cursor\User\keybindings.json`
|
|
||||||
- macOS: `~/Library/Application Support/Cursor/User/keybindings.json`
|
|
||||||
- Linux: `~/.config/Cursor/User/keybindings.json`
|
|
||||||
5. Create a `getCursorKeybindingsPath()` function that returns the appropriate path based on detected OS
|
|
||||||
6. Add path override capability via a `--path` command line option
|
|
||||||
7. Implement proper error handling for unsupported operating systems
|
|
||||||
8. Add detailed help text explaining the command's purpose and options
|
|
||||||
|
|
||||||
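A possible shape for the path-resolution helper, assuming the default locations listed above and Node's `os`/`path` modules; the `--path` override and richer error reporting are omitted for brevity:

```js
import os from 'os';
import path from 'path';

// Resolve the default Cursor keybindings.json location per platform.
function getCursorKeybindingsPath() {
  const home = os.homedir();
  switch (os.platform()) {
    case 'win32':
      // %APPDATA%\Cursor\User\keybindings.json
      return path.join(
        process.env.APPDATA ?? path.join(home, 'AppData', 'Roaming'),
        'Cursor', 'User', 'keybindings.json'
      );
    case 'darwin':
      return path.join(home, 'Library', 'Application Support', 'Cursor', 'User', 'keybindings.json');
    case 'linux':
      return path.join(home, '.config', 'Cursor', 'User', 'keybindings.json');
    default:
      throw new Error(`Unsupported platform: ${os.platform()}`);
  }
}

console.log(getCursorKeybindingsPath());
```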
## 4. Implement Keybinding File Handling and Backup Logic [pending]
|
## 4. Implement Keybinding File Handling and Backup Logic [pending]
|
||||||
### Dependencies: 67.3
|
### Dependencies: 67.3
|
||||||
### Description: Implement the core logic within the `install-keybindings` command to read the target `keybindings.json` file. If it exists, create a backup. If it doesn't exist, create a new file with an empty JSON array `[]`. Prepare the structure to add new keybindings.
|
### Description: Implement the core logic within the `install-keybindings` command to read the target `keybindings.json` file. If it exists, create a backup. If it doesn't exist, create a new file with an empty JSON array `[]`. Prepare the structure to add new keybindings.
|
||||||
### Details:
|
### Details:
|
||||||
1. Create a `KeybindingsManager` class in `src/utils/keybindings.js` with the following methods:
|
Use file system modules to check for file existence, read, write, and copy files. Implement a backup mechanism (e.g., copy `keybindings.json` to `keybindings.json.bak`). Handle potential file I/O errors gracefully (e.g., permissions issues). Parse the existing JSON content; if parsing fails, report an error and potentially abort. Ensure the file is created with `[]` if it's missing.
|
||||||
- `checkFileExists(path)`: Verify if the keybindings file exists
|
|
||||||
- `createBackup(path)`: Copy existing file to `keybindings.json.bak`
|
|
||||||
- `readKeybindings(path)`: Read and parse the JSON file
|
|
||||||
- `writeKeybindings(path, data)`: Serialize and write data to the file
|
|
||||||
- `createEmptyFile(path)`: Create a new file with `[]` content
|
|
||||||
2. In the command handler, use these methods to:
|
|
||||||
- Check if the target file exists
|
|
||||||
- Create a backup if it does (with timestamp in filename)
|
|
||||||
- Read existing keybindings or create an empty file
|
|
||||||
- Parse the JSON content with proper error handling
|
|
||||||
3. Add a `--no-backup` flag to skip backup creation
|
|
||||||
4. Implement verbose logging with a `--verbose` flag
|
|
||||||
5. Handle all potential file system errors (permissions, disk space, etc.)
|
|
||||||
6. Add a `--dry-run` option that shows what would be done without making changes
|
|
||||||
|
|
||||||
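One way the read/backup/create flow could look — a sketch using Node's `fs` module; the backup naming and error messages are assumptions, and JSONC comments in Cursor's file are not handled here:

```js
import fs from 'fs';

// Read keybindings.json, backing it up first; create it with [] if missing.
function loadKeybindings(filePath) {
  if (!fs.existsSync(filePath)) {
    // No file yet: start from an empty keybindings array.
    fs.writeFileSync(filePath, '[]\n', 'utf8');
    return [];
  }
  // Back the existing file up with a timestamp before touching it.
  const stamp = new Date().toISOString().replace(/[:.]/g, '-');
  fs.copyFileSync(filePath, `${filePath}.${stamp}.bak`);

  const raw = fs.readFileSync(filePath, 'utf8');
  try {
    const parsed = JSON.parse(raw);
    if (!Array.isArray(parsed)) throw new Error('keybindings.json is not a JSON array');
    return parsed;
  } catch (err) {
    throw new Error(`Could not parse ${filePath}: ${err.message}`);
  }
}
```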
## 5. Add Taskmaster Keybindings, Prevent Duplicates, and Support Customization [pending]
|
## 5. Add Taskmaster Keybindings, Prevent Duplicates, and Support Customization [pending]
|
||||||
### Dependencies: 67.4
|
### Dependencies: 67.4
|
||||||
### Description: Define the specific Taskmaster keybindings (e.g., next task to clipboard, status update, open agent chat) and implement the logic to merge them into the user's `keybindings.json` data. Prevent adding duplicate keybindings (based on command ID or key combination). Add support for custom key combinations via command flags.
|
### Description: Define the specific Taskmaster keybindings (e.g., next task to clipboard, status update, open agent chat) and implement the logic to merge them into the user's `keybindings.json` data. Prevent adding duplicate keybindings (based on command ID or key combination). Add support for custom key combinations via command flags.
|
||||||
### Details:
|
### Details:
|
||||||
1. Define default Taskmaster keybindings in `src/config/default-keybindings.js` as an array of objects with:
|
Define the desired keybindings as a list of JSON objects following Cursor's format. Before adding, iterate through the existing keybindings (parsed in subtask 4) to check if a Taskmaster keybinding with the same command or key combination already exists. If not, append the new keybinding to the list. Add command-line flags (e.g., `--next-key='ctrl+alt+n'`) to allow users to override default key combinations. Serialize the updated list back to JSON and write it to the `keybindings.json` file.
|
||||||
- `key`: Default key combination (e.g., `"ctrl+alt+n"`)
|
|
||||||
- `command`: Cursor command ID (e.g., `"taskmaster.nextTask"`)
|
|
||||||
- `when`: Context when keybinding is active (e.g., `"editorTextFocus"`)
|
|
||||||
- `args`: Any command arguments as an object
|
|
||||||
- `description`: Human-readable description of what the keybinding does
|
|
||||||
2. Implement the following keybindings:
|
|
||||||
- Next task to clipboard: `ctrl+alt+n`
|
|
||||||
- Update task status: `ctrl+alt+u`
|
|
||||||
- Open agent chat with task context: `ctrl+alt+a`
|
|
||||||
- Show task details: `ctrl+alt+d`
|
|
||||||
3. Add command-line options to customize each keybinding:
|
|
||||||
- `--next-key="ctrl+alt+n"`
|
|
||||||
- `--update-key="ctrl+alt+u"`
|
|
||||||
- `--agent-key="ctrl+alt+a"`
|
|
||||||
- `--details-key="ctrl+alt+d"`
|
|
||||||
4. Implement a `mergeKeybindings(existing, new)` function that:
|
|
||||||
- Checks for duplicates based on command ID
|
|
||||||
- Checks for key combination conflicts
|
|
||||||
- Warns about conflicts but allows override with `--force` flag
|
|
||||||
- Preserves existing non-Taskmaster keybindings
|
|
||||||
5. Add a `--reset` flag to remove all existing Taskmaster keybindings before adding new ones
|
|
||||||
6. Add a `--list` option to display currently installed Taskmaster keybindings
|
|
||||||
7. Implement an `--uninstall` option to remove all Taskmaster keybindings
|
|
||||||
|
|
||||||
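A hedged sketch of the duplicate-prevention merge described above; the keybinding object shape follows Cursor's `key`/`command`/`when` format, while the `--force` semantics and default entries are assumptions:

```js
// Keep all existing entries; only append Taskmaster keybindings whose command
// and key combination are not already taken (unless force is set).
function mergeKeybindings(existing, additions, { force = false } = {}) {
  const existingCommands = new Set(existing.map((kb) => kb.command));
  const existingKeys = new Set(existing.map((kb) => kb.key));
  const merged = [...existing];

  for (const kb of additions) {
    if (existingCommands.has(kb.command)) continue; // already installed
    if (existingKeys.has(kb.key) && !force) {
      console.warn(`Key "${kb.key}" is already bound; skipping ${kb.command} (use --force to override)`);
      continue;
    }
    merged.push(kb);
  }
  return merged;
}

const defaults = [
  { key: 'ctrl+alt+n', command: 'taskmaster.nextTask', when: 'editorTextFocus' }
];
console.log(mergeKeybindings([], defaults));
```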
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
# Task ID: 68
|
# Task ID: 68
|
||||||
# Title: Ability to create tasks without parsing PRD
|
# Title: Ability to create tasks without parsing PRD
|
||||||
# Status: done
|
# Status: pending
|
||||||
# Dependencies: None
|
# Dependencies: None
|
||||||
# Priority: medium
|
# Priority: medium
|
||||||
# Description: Which just means that when we create a task, if there's no tasks.json, we should create it calling the same function that is done by parse-prd. this lets taskmaster be used without a prd as a starding point.
# Description: When a task is created and no tasks.json exists, create it by calling the same function used by parse-prd. This lets Taskmaster be used without a PRD as a starting point.
|
# Description: When a task is created and no tasks.json exists, create it by calling the same function used by parse-prd. This lets Taskmaster be used without a PRD as a starting point.
|
# Description: Which just means that when we create a task, if there's no tasks.json, we should create it calling the same function that is done by parse-prd. this lets taskmaster be used without a prd as a starding point.
|
||||||
@@ -11,13 +11,13 @@
|
|||||||
|
|
||||||
|
|
||||||
# Subtasks:
|
# Subtasks:
|
||||||
## 1. Design task creation form without PRD [done]
|
## 1. Design task creation form without PRD [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Create a user interface form that allows users to manually input task details without requiring a PRD document
|
### Description: Create a user interface form that allows users to manually input task details without requiring a PRD document
|
||||||
### Details:
|
### Details:
|
||||||
Design a form with fields for task title, description, priority, assignee, due date, and other relevant task attributes. Include validation to ensure required fields are completed. The form should be intuitive and provide clear guidance on how to create a task manually.
|
Design a form with fields for task title, description, priority, assignee, due date, and other relevant task attributes. Include validation to ensure required fields are completed. The form should be intuitive and provide clear guidance on how to create a task manually.
|
||||||
|
|
||||||
## 2. Implement task saving functionality [done]
|
## 2. Implement task saving functionality [pending]
|
||||||
### Dependencies: 68.1
|
### Dependencies: 68.1
|
||||||
### Description: Develop the backend functionality to save manually created tasks to the database
|
### Description: Develop the backend functionality to save manually created tasks to the database
|
||||||
### Details:
|
### Details:
|
||||||
|
|||||||
@@ -1,95 +1,83 @@
|
|||||||
# Task ID: 69
|
# Task ID: 69
|
||||||
# Title: Enhance Analyze Complexity for Specific Task IDs
|
# Title: Enhance Analyze Complexity for Specific Task IDs
|
||||||
# Status: done
|
# Status: pending
|
||||||
# Dependencies: None
|
# Dependencies: None
|
||||||
# Priority: medium
|
# Priority: medium
|
||||||
# Description: Modify the analyze-complexity feature (CLI and MCP) to allow analyzing only specified task IDs or ranges, and append/update results in the report.
|
# Description: Modify the analyze-complexity feature (CLI and MCP) to allow analyzing only specified task IDs and append/update results in the report.
|
||||||
# Details:
|
# Details:
|
||||||
|
|
||||||
Implementation Plan:
|
Implementation Plan:
|
||||||
|
|
||||||
1. **Core Logic (`scripts/modules/task-manager/analyze-task-complexity.js`)**
|
1. **Core Logic (`scripts/modules/task-manager/analyze-task-complexity.js`):**
|
||||||
* Modify function signature to accept optional parameters: `options.ids` (string, comma-separated IDs) and range parameters `options.from` and `options.to`.
|
* Modify the function signature to accept an optional `options.ids` parameter (string, comma-separated IDs).
|
||||||
* If `options.ids` is present:
|
* If `options.ids` is present:
|
||||||
* Parse the `ids` string into an array of target IDs.
|
* Parse the `ids` string into an array of target IDs.
|
||||||
* Filter `tasksData.tasks` to include only tasks matching the target IDs.
|
* Filter `tasksData.tasks` to *only* include tasks matching the target IDs. Use this filtered list for analysis.
|
||||||
* Handle cases where provided IDs don't exist in `tasks.json`.
|
* Handle cases where provided IDs don't exist in `tasks.json`.
|
||||||
* If range parameters (`options.from` and `options.to`) are present:
|
* If `options.ids` is *not* present: Continue with existing logic (filtering by active status).
|
||||||
* Parse these values into integers.
|
* **Report Handling:**
|
||||||
* Filter tasks within the specified ID range (inclusive).
|
* Before generating the analysis, check if the `outputPath` report file exists.
|
||||||
* If neither `options.ids` nor range parameters are present: Continue with existing logic (filtering by active status).
|
* If it exists, read the existing `complexityAnalysis` array.
|
||||||
* Maintain existing logic for skipping completed tasks.
|
* Generate the new analysis *only* for the target tasks (filtered by ID or status).
|
||||||
* **Report Handling:**
|
* Merge the results: Remove any entries from the *existing* array that match the IDs analyzed in the *current run*. Then, append the *new* analysis results to the array.
|
||||||
* Before generating analysis, check if the `outputPath` report file exists.
|
* Update the `meta` section (`generatedAt`, `tasksAnalyzed` should reflect *this run*).
|
||||||
* If it exists:
|
* Write the *merged* `complexityAnalysis` array and updated `meta` back to the report file.
|
||||||
* Read the existing `complexityAnalysis` array.
|
* If the report file doesn't exist, create it as usual.
|
||||||
* Generate new analysis only for target tasks (filtered by ID or range).
|
* **Prompt Generation:** Ensure `generateInternalComplexityAnalysisPrompt` receives the correctly filtered list of tasks.
|
||||||
* Merge results: Remove entries from the existing array that match IDs analyzed in the current run, then append new analysis results to the array.
|
|
||||||
* Update the `meta` section (`generatedAt`, `tasksAnalyzed`).
|
|
||||||
* Write merged `complexityAnalysis` and updated `meta` back to report file.
|
|
||||||
* If the report file doesn't exist: Create it as usual.
|
|
||||||
* **Prompt Generation:** Ensure `generateInternalComplexityAnalysisPrompt` receives correctly filtered list of tasks.
|
|
||||||
|
|
||||||
2. **CLI (`scripts/modules/commands.js`)**
|
2. **CLI (`scripts/modules/commands.js`):**
|
||||||
* Add new options to the `analyze-complexity` command:
|
* Add a new option `--id <ids>` to the `analyze-complexity` command definition. Description: "Comma-separated list of specific task IDs to analyze".
|
||||||
* `--id/-i <ids>`: "Comma-separated list of specific task IDs to analyze"
|
* In the `.action` handler:
|
||||||
* `--from/-f <startId>`: "Start ID for range analysis (inclusive)"
|
* Check if `options.id` is provided.
|
||||||
* `--to/-t <endId>`: "End ID for range analysis (inclusive)"
|
* If yes, pass `options.id` (as the comma-separated string) to the `analyzeTaskComplexity` core function via the `options` object.
|
||||||
* In the `.action` handler:
|
* Update user feedback messages to indicate specific task analysis.
|
||||||
* Check if `options.id`, `options.from`, or `options.to` are provided.
|
|
||||||
* If yes, pass appropriate values to the `analyzeTaskComplexity` core function via the `options` object.
|
|
||||||
* Update user feedback messages to indicate specific task analysis.
|
|
||||||
|
|
||||||
3. **MCP Tool (`mcp-server/src/tools/analyze.js`)**
|
3. **MCP Tool (`mcp-server/src/tools/analyze.js`):**
|
||||||
* Add new optional parameters to Zod schema for `analyze_project_complexity` tool:
|
* Add a new optional parameter `ids: z.string().optional().describe("Comma-separated list of task IDs to analyze specifically")` to the Zod schema for the `analyze_project_complexity` tool.
|
||||||
* `ids: z.string().optional().describe("Comma-separated list of task IDs to analyze specifically")`
|
* In the `execute` method, pass `args.ids` to the `analyzeTaskComplexityDirect` function within its `args` object.
|
||||||
* `from: z.number().optional().describe("Start ID for range analysis (inclusive)")`
|
|
||||||
* `to: z.number().optional().describe("End ID for range analysis (inclusive)")`
|
|
||||||
* In the `execute` method, pass `args.ids`, `args.from`, and `args.to` to the `analyzeTaskComplexityDirect` function within its `args` object.
|
|
||||||
|
|
||||||
4. **Direct Function (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`)**
|
4. **Direct Function (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**
|
||||||
* Update function to receive `ids`, `from`, and `to` values within the `args` object.
|
* Update the function to receive the `ids` string within the `args` object.
|
||||||
* Pass these values along to the core `analyzeTaskComplexity` function within its `options` object.
|
* Pass the `ids` string along to the core `analyzeTaskComplexity` function within its `options` object.
|
||||||
|
|
||||||
|
5. **Documentation:** Update relevant rule files (`commands.mdc`, `taskmaster.mdc`) to reflect the new `--id` option/parameter.
|
||||||
|
|
||||||
5. **Documentation:** Update relevant rule files (`commands.mdc`, `taskmaster.mdc`) to reflect new `--id/-i`, `--from/-f`, and `--to/-t` options/parameters.
|
|
||||||
|
|
||||||
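A minimal sketch of the report-merging step outlined in the plan above; the `complexityAnalysis` and `meta` keys come from the plan, while the `taskId` field name and the function signature are assumptions:

```js
// Drop old entries for the task IDs analyzed in this run, then append the
// fresh analysis results and refresh the meta section.
function mergeComplexityReport(existingReport, newAnalysis, analyzedIds) {
  const ids = new Set(analyzedIds.map(Number));
  const kept = (existingReport?.complexityAnalysis ?? []).filter(
    (entry) => !ids.has(Number(entry.taskId))
  );
  return {
    meta: {
      ...(existingReport?.meta ?? {}),
      generatedAt: new Date().toISOString(),
      tasksAnalyzed: newAnalysis.length // reflects this run only
    },
    complexityAnalysis: [...kept, ...newAnalysis]
  };
}
```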
# Test Strategy:
|
# Test Strategy:
|
||||||
|
|
||||||
1. **CLI:**
|
1. **CLI:**
|
||||||
* Run `task-master analyze-complexity -i=<id1>` (where report doesn't exist). Verify report created with only task id1.
|
* Run `task-master analyze-complexity --id=<id1>` (where report doesn't exist). Verify report created with only task id1.
|
||||||
* Run `task-master analyze-complexity -i=<id2>` (where report exists). Verify report updated, containing analysis for both id1 and id2 (id2 replaces any previous id2 analysis).
|
* Run `task-master analyze-complexity --id=<id2>` (where report exists). Verify report updated, containing analysis for both id1 and id2 (id2 replaces any previous id2 analysis).
|
||||||
* Run `task-master analyze-complexity -i=<id1>,<id3>`. Verify report updated, containing id1, id2, id3.
|
* Run `task-master analyze-complexity --id=<id1>,<id3>`. Verify report updated, containing id1, id2, id3.
|
||||||
* Run `task-master analyze-complexity -f=50 -t=60`. Verify report created/updated with tasks in the range 50-60.
|
* Run `task-master analyze-complexity` (no id). Verify it analyzes *all* active tasks and updates the report accordingly, merging with previous specific analyses.
|
||||||
* Run `task-master analyze-complexity` (no flags). Verify it analyzes all active tasks and updates the report accordingly, merging with previous specific analyses.
|
* Test with invalid/non-existent IDs.
|
||||||
* Test with invalid/non-existent IDs or ranges.
|
2. **MCP:**
|
||||||
* Verify that completed tasks are still skipped in all scenarios, maintaining existing behavior.
|
* Call `analyze_project_complexity` tool with `ids: "<id1>"`. Verify report creation/update.
|
||||||
2. **MCP:**
|
* Call `analyze_project_complexity` tool with `ids: "<id1>,<id2>"`. Verify report merging.
|
||||||
* Call `analyze_project_complexity` tool with `ids: "<id1>"`. Verify report creation/update.
|
* Call `analyze_project_complexity` tool without `ids`. Verify full analysis and merging.
|
||||||
* Call `analyze_project_complexity` tool with `ids: "<id1>,<id2>,<id3>"`. Verify report created/updated with multiple specific tasks.
|
3. Verify report `meta` section is updated correctly on each run.
|
||||||
* Call `analyze_project_complexity` tool with `from: 50, to: 60`. Verify report created/updated for tasks in range.
|
|
||||||
* Call `analyze_project_complexity` tool without parameters. Verify full analysis and merging.
|
|
||||||
3. Verify report `meta` section is updated correctly on each run.
|
|
||||||
|
|
||||||
# Subtasks:
|
# Subtasks:
|
||||||
## 1. Modify core complexity analysis logic [done]
|
## 1. Modify core complexity analysis logic [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Update the core complexity analysis function to accept specific task IDs or ranges as input parameters
|
### Description: Update the core complexity analysis function to accept specific task IDs as input parameters
|
||||||
### Details:
|
### Details:
|
||||||
Refactor the existing complexity analysis module to allow filtering by task IDs or ranges. This involves modifying the data processing pipeline to filter tasks before analysis, ensuring the complexity metrics are calculated only for the specified tasks while maintaining context awareness.
|
Refactor the existing complexity analysis module to allow filtering by task IDs. This involves modifying the data processing pipeline to filter tasks before analysis, ensuring the complexity metrics are calculated only for the specified tasks while maintaining context awareness.
|
||||||
|
|
||||||
## 2. Update CLI interface for task-specific complexity analysis [done]
|
## 2. Update CLI interface for task-specific complexity analysis [pending]
|
||||||
### Dependencies: 69.1
|
### Dependencies: 69.1
|
||||||
### Description: Extend the CLI to accept task IDs or ranges as parameters for the complexity analysis command
|
### Description: Extend the CLI to accept task IDs as parameters for the complexity analysis command
|
||||||
### Details:
|
### Details:
|
||||||
Add new flags `--id/-i`, `--from/-f`, and `--to/-t` to the CLI that allow users to specify task IDs or ranges for targeted complexity analysis. Update the command parser, help documentation, and ensure proper validation of the provided values.
|
Add a new flag or parameter to the CLI that allows users to specify task IDs for targeted complexity analysis. Update the command parser, help documentation, and ensure proper validation of the provided task IDs.
|
||||||
|
|
||||||
## 3. Integrate task-specific analysis with MCP tool [done]
|
## 3. Integrate task-specific analysis with MCP tool [pending]
|
||||||
### Dependencies: 69.1
|
### Dependencies: 69.1
|
||||||
### Description: Update the MCP tool interface to support analyzing complexity for specific tasks or ranges
|
### Description: Update the MCP tool interface to support analyzing complexity for specific tasks
|
||||||
### Details:
|
### Details:
|
||||||
Modify the MCP tool's API endpoints and UI components to allow users to select specific tasks or ranges for complexity analysis. Ensure the UI provides clear feedback about which tasks are being analyzed and update the visualization components to properly display partial analysis results.
|
Modify the MCP tool's API endpoints and UI components to allow users to select specific tasks for complexity analysis. Ensure the UI provides clear feedback about which tasks are being analyzed and update the visualization components to properly display partial analysis results.
|
||||||
|
|
||||||
## 4. Create comprehensive tests for task-specific complexity analysis [done]
|
## 4. Create comprehensive tests for task-specific complexity analysis [pending]
|
||||||
### Dependencies: 69.1, 69.2, 69.3
|
### Dependencies: 69.1, 69.2, 69.3
|
||||||
### Description: Develop test cases to verify the correct functioning of task-specific complexity analysis
|
### Description: Develop test cases to verify the correct functioning of task-specific complexity analysis
|
||||||
### Details:
|
### Details:
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
# Task ID: 77
|
# Task ID: 77
|
||||||
# Title: Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)
|
# Title: Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)
|
||||||
# Status: done
|
# Status: in-progress
|
||||||
# Dependencies: None
|
# Dependencies: None
|
||||||
# Priority: medium
|
# Priority: medium
|
||||||
# Description: Capture detailed AI usage data (tokens, costs, models, commands) within Taskmaster and send this telemetry to an external, closed-source analytics backend for usage analysis, profitability measurement, and pricing optimization.
|
# Description: Capture detailed AI usage data (tokens, costs, models, commands) within Taskmaster and send this telemetry to an external, closed-source analytics backend for usage analysis, profitability measurement, and pricing optimization.
|
||||||
@@ -536,13 +536,13 @@ async function callAiService(params) {
|
|||||||
### Details:
|
### Details:
|
||||||
Update the provider functions in `src/ai-providers/google.js` to ensure they return telemetry-compatible results:\n\n1. **`generateGoogleText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateGoogleObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamGoogleText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
Update the provider functions in `src/ai-providers/google.js` to ensure they return telemetry-compatible results:\n\n1. **`generateGoogleText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateGoogleObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamGoogleText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
||||||
|
|
||||||
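As a hedged sketch of the provider pattern these subtasks describe (not the repo's actual function signature): call the Vercel AI SDK and map its usage fields onto `{ inputTokens, outputTokens }`. The exact usage field names depend on the installed `ai` version, so both spellings are checked:

```js
import { generateText } from 'ai';
import { createGoogleGenerativeAI } from '@ai-sdk/google';

// Illustrative text-generation wrapper returning telemetry-compatible usage.
export async function generateGoogleText({ apiKey, modelId, messages }) {
  const google = createGoogleGenerativeAI({ apiKey });
  const result = await generateText({ model: google(modelId), messages });
  return {
    text: result.text,
    usage: {
      inputTokens: result.usage?.promptTokens ?? result.usage?.inputTokens,
      outputTokens: result.usage?.completionTokens ?? result.usage?.outputTokens
    }
  };
}
```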
## 14. Update openai.js for Telemetry Compatibility [done]
|
## 14. Update openai.js for Telemetry Compatibility [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Modify src/ai-providers/openai.js functions to return usage data.
|
### Description: Modify src/ai-providers/openai.js functions to return usage data.
|
||||||
### Details:
|
### Details:
|
||||||
Update the provider functions in `src/ai-providers/openai.js` to ensure they return telemetry-compatible results:\n\n1. **`generateOpenAIText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateOpenAIObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamOpenAIText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
Update the provider functions in `src/ai-providers/openai.js` to ensure they return telemetry-compatible results:\n\n1. **`generateOpenAIText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateOpenAIObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamOpenAIText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
||||||
|
|
||||||
## 15. Update openrouter.js for Telemetry Compatibility [done]
|
## 15. Update openrouter.js for Telemetry Compatibility [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Modify src/ai-providers/openrouter.js functions to return usage data.
|
### Description: Modify src/ai-providers/openrouter.js functions to return usage data.
|
||||||
### Details:
|
### Details:
|
||||||
@@ -554,13 +554,13 @@ Update the provider functions in `src/ai-providers/openrouter.js` to ensure they
|
|||||||
### Details:
|
### Details:
|
||||||
Update the provider functions in `src/ai-providers/perplexity.js` to ensure they return telemetry-compatible results:\n\n1. **`generatePerplexityText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generatePerplexityObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamPerplexityText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
Update the provider functions in `src/ai-providers/perplexity.js` to ensure they return telemetry-compatible results:\n\n1. **`generatePerplexityText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generatePerplexityObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamPerplexityText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
||||||
|
|
||||||
## 17. Update xai.js for Telemetry Compatibility [done]
|
## 17. Update xai.js for Telemetry Compatibility [pending]
|
||||||
### Dependencies: None
|
### Dependencies: None
|
||||||
### Description: Modify src/ai-providers/xai.js functions to return usage data.
|
### Description: Modify src/ai-providers/xai.js functions to return usage data.
|
||||||
### Details:
|
### Details:
|
||||||
Update the provider functions in `src/ai-providers/xai.js` to ensure they return telemetry-compatible results:\n\n1. **`generateXaiText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateXaiObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamXaiText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
Update the provider functions in `src/ai-providers/xai.js` to ensure they return telemetry-compatible results:\n\n1. **`generateXaiText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateXaiObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamXaiText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
|
||||||
|
|
||||||
## 18. Create dedicated telemetry transmission module [done]
|
## 18. Create dedicated telemetry transmission module [pending]
|
||||||
### Dependencies: 77.1, 77.3
|
### Dependencies: 77.1, 77.3
|
||||||
### Description: Implement a separate module for handling telemetry transmission logic
|
### Description: Implement a separate module for handling telemetry transmission logic
|
||||||
### Details:
|
### Details:
|
||||||
|
|||||||
144
tasks/task_081.txt
Normal file
144
tasks/task_081.txt
Normal file
@@ -0,0 +1,144 @@
|
|||||||
|
# Task ID: 81
|
||||||
|
# Title: Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability
|
||||||
|
# Status: pending
|
||||||
|
# Dependencies: None
|
||||||
|
# Priority: medium
|
||||||
|
# Description: Expand the existing telemetry system to capture additional metrics about feature usage, performance, and user behavior patterns, implementing local storage and aggregation of telemetry data with the capability for future server integration.
|
||||||
|
# Details:
|
||||||
|
This task builds upon the existing telemetry infrastructure (Tasks #77 and #80) to provide more comprehensive insights into how users interact with the application, while storing data locally until a server endpoint becomes available.
|
||||||
|
|
||||||
|
Key implementation details:
|
||||||
|
1. Identify and implement additional telemetry data points:
|
||||||
|
- Command execution frequency and timing metrics
|
||||||
|
- Feature usage patterns (which commands/features are most/least used)
|
||||||
|
- Performance metrics (execution time, memory usage, etc.)
|
||||||
|
- Error rates and types
|
||||||
|
- Session duration and activity patterns
|
||||||
|
- System environment information (OS, Node version, etc.)
|
||||||
|
|
||||||
|
2. Implement a local telemetry storage system:
|
||||||
|
- Create a robust local storage mechanism to hold telemetry data indefinitely
|
||||||
|
- Implement data aggregation to combine similar events and reduce storage size
|
||||||
|
- Add data retention policies to prevent excessive local storage usage
|
||||||
|
- Implement configurable storage limits and cleanup procedures
|
||||||
|
- Design the storage format to be compatible with future server transmission
|
||||||
|
|
||||||
|
3. Add privacy-preserving mechanisms:
|
||||||
|
- Ensure all personally identifiable information is properly anonymized
|
||||||
|
- Implement data minimization principles (only collect what's necessary)
|
||||||
|
- Add user-configurable telemetry levels (basic, enhanced, full)
|
||||||
|
- Provide clear documentation on what data is collected and how it's used
|
||||||
|
|
||||||
|
4. Design for future server integration:
|
||||||
|
- Create a pluggable transmission architecture that can be connected to a server later
|
||||||
|
- Define API contracts and data formats for future server endpoints
|
||||||
|
- Add configuration options for server URLs and authentication that will be used later
|
||||||
|
- Implement feature flags to easily enable server transmission when available
|
||||||
|
|
||||||
|
5. Add telemetry debugging capabilities:
|
||||||
|
- Create a developer mode to view telemetry data being collected
|
||||||
|
- Implement logging of telemetry events (when in debug mode)
|
||||||
|
- Add commands to export telemetry data for manual analysis
|
||||||
|
- Create visualization tools for local telemetry data
|
||||||
|
|
||||||
|
6. Focus on user-facing benefits:
|
||||||
|
- Implement personal usage dashboards showing the user's own patterns
|
||||||
|
- Add productivity insights based on collected telemetry
|
||||||
|
- Create features that allow users to optimize their workflow based on their usage data
|
||||||
|
- Ensure all telemetry collection provides immediate value to the user
|
||||||
|
|
||||||
|
# Test Strategy:
|
||||||
|
The testing strategy for the expanded telemetry system should be comprehensive and cover all aspects of the implementation:
|
||||||
|
|
||||||
|
1. Unit Tests:
|
||||||
|
- Test each telemetry collection function in isolation
|
||||||
|
- Verify proper anonymization of sensitive data
|
||||||
|
- Test aggregation logic with various input scenarios
|
||||||
|
- Validate local storage mechanisms with different data volumes
|
||||||
|
- Test data retention and cleanup policies
|
||||||
|
|
||||||
|
2. Integration Tests:
|
||||||
|
- Verify telemetry data is properly stored locally
|
||||||
|
- Test the complete flow from data collection to local storage
|
||||||
|
- Validate that the storage format is suitable for future server transmission
|
||||||
|
- Test different application states (startup, shutdown, crash recovery)
|
||||||
|
- Verify proper handling of storage failures
|
||||||
|
|
||||||
|
3. End-to-End Tests:
|
||||||
|
- Create automated E2E tests that perform various user actions and verify telemetry is captured
|
||||||
|
- Test with simulated long-term usage to verify storage efficiency
|
||||||
|
- Verify that aggregated data accurately represents the performed actions
|
||||||
|
|
||||||
|
4. Performance Tests:
|
||||||
|
- Measure the performance impact of the expanded telemetry system
|
||||||
|
- Test with large volumes of telemetry data to ensure efficient handling
|
||||||
|
- Verify memory usage remains within acceptable limits
|
||||||
|
- Test CPU utilization during telemetry collection and storage operations
|
||||||
|
|
||||||
|
5. Manual Testing:
|
||||||
|
- Verify telemetry debug mode correctly displays collected data
|
||||||
|
- Test different telemetry level configurations
|
||||||
|
- Manually verify the accuracy of collected metrics
|
||||||
|
- Test the export functionality and analyze the exported data
|
||||||
|
- Validate that user-facing insights and dashboards provide accurate and useful information
|
||||||
|
|
||||||
|
6. Privacy Compliance Testing:
|
||||||
|
- Verify no PII is stored without proper anonymization
|
||||||
|
- Test opt-out functionality works correctly
|
||||||
|
- Ensure telemetry levels properly restrict data collection as configured
|
||||||
|
|
||||||
|
7. Regression Testing:
|
||||||
|
- Verify existing functionality continues to work with the expanded telemetry
|
||||||
|
- Ensure the system is designed to be compatible with future server integration
|
||||||
|
|
||||||
|
8. User Experience Testing:
|
||||||
|
- Test the usability of personal dashboards and insights features
|
||||||
|
- Gather feedback on the usefulness of telemetry-based recommendations
|
||||||
|
- Verify users can easily understand their own usage patterns
|
||||||
|
|
||||||
|
# Subtasks:
|
||||||
|
## 1. Implement Additional Telemetry Data Collection Points [pending]
|
||||||
|
### Dependencies: None
|
||||||
|
### Description: Extend the telemetry system to capture new metrics including command execution frequency, feature usage patterns, performance metrics, error rates, session data, and system environment information. [Updated: 5/8/2025]
|
||||||
|
### Details:
|
||||||
|
Create new telemetry event types and collection points throughout the codebase. Implement hooks in the command execution pipeline to track timing and frequency. Add performance monitoring for key operations using high-resolution timers. Capture system environment data at startup. Implement error tracking that records error types and frequencies. Add session tracking with start/end events and periodic heartbeats.
|
||||||
|
<info added on 2025-05-08T22:57:23.259Z>
|
||||||
|
This is a test note added via the MCP tool. The telemetry collection system should be thoroughly tested before implementation.
|
||||||
|
</info added on 2025-05-08T22:57:23.259Z>
|
||||||
|
<info added on 2025-05-08T22:59:29.818Z>
|
||||||
|
For future server integration, Prometheus time-series database with its companion storage solutions (like Cortex or Thanos) would be an excellent choice for handling our telemetry data. The local telemetry collection system should be designed with compatible data structures and metrics formatting that will allow seamless export to Prometheus once server-side infrastructure is in place. This approach would provide powerful querying capabilities, visualization options through Grafana, and scalable long-term storage. Consider implementing the OpenMetrics format locally to ensure compatibility with the Prometheus ecosystem.
|
||||||
|
</info added on 2025-05-08T22:59:29.818Z>
|
||||||
|
<info added on 2025-05-08T23:02:59.692Z>
|
||||||
|
Prometheus would be an excellent choice for server-side telemetry storage and analysis. When designing the local telemetry collection system, we should structure our metrics and events to be compatible with Prometheus' data model (time series with key-value pairs). This would allow for straightforward export to Prometheus once server infrastructure is established. For long-term storage, companion solutions like Cortex or Thanos could extend Prometheus' capabilities, enabling historical analysis and scalable retention. Additionally, adopting the OpenMetrics format locally would ensure seamless integration with the broader Prometheus ecosystem, including visualization through Grafana dashboards.
|
||||||
|
</info added on 2025-05-08T23:02:59.692Z>
|
||||||
|
|
||||||
|
## 2. Build Robust Local Telemetry Storage System [pending]
|
||||||
|
### Dependencies: None
|
||||||
|
### Description: Create a persistent local storage mechanism to hold telemetry data indefinitely with aggregation capabilities to combine similar events and reduce storage requirements.
|
||||||
|
### Details:
|
||||||
|
Implement a persistent local store using SQLite or similar lightweight database. Create data schemas for different telemetry types. Develop aggregation functions that can combine similar events (e.g., multiple instances of the same command) into summary statistics. Implement data retention policies to prevent excessive storage usage. Add serialization/deserialization for telemetry objects. Design the storage format to be compatible with future server transmission needs.
|
||||||
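One possible shape for such a local store, sketched with `better-sqlite3` (any lightweight embedded database would do); the table schema and daily aggregation bucket are illustrative, not an agreed format:

```js
import Database from 'better-sqlite3';

const db = new Database('telemetry.sqlite');
db.exec(`
  CREATE TABLE IF NOT EXISTS events (
    name     TEXT NOT NULL,
    day      TEXT NOT NULL,   -- YYYY-MM-DD bucket used for aggregation
    count    INTEGER NOT NULL DEFAULT 0,
    total_ms REAL NOT NULL DEFAULT 0,
    PRIMARY KEY (name, day)
  )
`);

// Aggregate similar events instead of storing every occurrence verbatim.
export function recordEvent(name, durationMs = 0) {
  const day = new Date().toISOString().slice(0, 10);
  db.prepare(`
    INSERT INTO events (name, day, count, total_ms) VALUES (?, ?, 1, ?)
    ON CONFLICT(name, day) DO UPDATE SET
      count = count + 1,
      total_ms = total_ms + excluded.total_ms
  `).run(name, day, durationMs);
}
```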
|
|
||||||
|
## 3. Design Server Transmission Architecture for Future Implementation [pending]
|
||||||
|
### Dependencies: None
|
||||||
|
### Description: Create a pluggable architecture for future server transmission capabilities while maintaining local-only functionality for now.
|
||||||
|
### Details:
|
||||||
|
Design a modular transmission system with clear interfaces that can be implemented later when a server becomes available. Define data formats and API contracts for future server endpoints. Add configuration options for server URLs and authentication that will be used in the future. Implement feature flags to easily enable server transmission when available. Create a transmission queue design that can be activated later. Document the architecture for future implementation.
|
||||||
|
|
||||||
|
## 4. Implement Privacy Controls and User Configuration [pending]
|
||||||
|
### Dependencies: None
|
||||||
|
### Description: Add privacy-preserving mechanisms including data anonymization, minimization principles, and user-configurable telemetry levels.
|
||||||
|
### Details:
|
||||||
|
Create a telemetry sanitization layer that removes or hashes PII before storage. Implement three telemetry levels (basic, enhanced, full) with clear documentation of what each includes. Add user settings UI for controlling telemetry levels. Create a first-run experience that explains telemetry and requests user consent. Implement runtime filtering of telemetry events based on user settings.
|
||||||
|
|
||||||
|
## 5. Add Telemetry Debugging and Local Analysis Tools [pending]
|
||||||
|
### Dependencies: None
|
||||||
|
### Description: Create developer tools for debugging telemetry including a developer mode to view collected data, logging capabilities, and local data analysis features.
|
||||||
|
### Details:
|
||||||
|
Implement a developer console command to toggle telemetry debug mode. Create a UI panel that displays collected telemetry data when in debug mode. Add detailed logging of telemetry events to the application log when debugging is enabled. Create commands to export telemetry data in various formats (JSON, CSV) for manual analysis. Implement basic visualization tools for local telemetry data to help users understand their own usage patterns.
|
||||||
|
|
||||||
|
## 6. Develop User-Facing Telemetry Benefits [pending]
|
||||||
|
### Dependencies: 81.1, 81.2
|
||||||
|
### Description: Create features that provide immediate value to users based on their telemetry data, focusing on personal insights and workflow optimization.
|
||||||
|
### Details:
|
||||||
|
Implement a personal usage dashboard that visualizes the user's command usage patterns, feature adoption, and productivity trends. Create a 'productivity insights' feature that offers personalized recommendations based on usage patterns. Add workflow optimization suggestions that help users discover more efficient ways to use the application. Develop weekly/monthly usage reports that users can view to track their own progress. Ensure all telemetry collection has a direct benefit to the user in the absence of server-side analysis.
|
||||||
|
|
||||||
@@ -1,57 +0,0 @@
|
|||||||
# Task ID: 88
|
|
||||||
# Title: Enhance Add-Task Functionality to Consider All Task Dependencies
|
|
||||||
# Status: done
|
|
||||||
# Dependencies: None
|
|
||||||
# Priority: medium
|
|
||||||
# Description: Improve the add-task feature to accurately account for all dependencies among tasks, ensuring proper task ordering and execution.
|
|
||||||
# Details:
|
|
||||||
1. Review current implementation of add-task functionality.
|
|
||||||
2. Identify existing mechanisms for handling task dependencies.
|
|
||||||
3. Modify add-task to recursively analyze and incorporate all dependencies.
|
|
||||||
4. Ensure that dependencies are resolved in the correct order during task execution.
|
|
||||||
5. Update documentation to reflect changes in dependency handling.
|
|
||||||
6. Consider edge cases such as circular dependencies and handle them appropriately.
|
|
||||||
7. Optimize performance to ensure efficient dependency resolution, especially for projects with a large number of tasks.
|
|
||||||
8. Integrate with existing validation and error handling mechanisms (from Task 87) to provide clear feedback if dependencies cannot be resolved.
|
|
||||||
9. Test thoroughly with various dependency scenarios to ensure robustness.
|
|
||||||
|
|
||||||
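A sketch of the recursive dependency resolution and circular-dependency handling described in the list above; the `{ id, dependencies }` task shape mirrors tasks.json, everything else is illustrative:

```js
// Depth-first collection of a task's full dependency closure, with cycle detection.
// Returned IDs are ordered so that dependencies come before the task itself.
function collectDependencies(taskId, tasksById, seen = new Set(), stack = new Set()) {
  if (stack.has(taskId)) {
    throw new Error(`Circular dependency detected involving task ${taskId}`);
  }
  if (seen.has(taskId)) return [];

  stack.add(taskId);
  const task = tasksById.get(taskId);
  const resolved = [];
  for (const depId of task?.dependencies ?? []) {
    resolved.push(...collectDependencies(depId, tasksById, seen, stack));
  }
  stack.delete(taskId);
  seen.add(taskId);
  resolved.push(taskId);
  return resolved;
}

const tasks = new Map([
  [1, { id: 1, dependencies: [] }],
  [2, { id: 2, dependencies: [1] }],
  [3, { id: 3, dependencies: [1, 2] }]
]);
console.log(collectDependencies(3, tasks)); // [1, 2, 3]
```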
# Test Strategy:
|
|
||||||
1. Create test cases with simple linear dependencies to verify correct ordering.
|
|
||||||
2. Develop test cases with complex, nested dependencies to ensure recursive resolution works correctly.
|
|
||||||
3. Include tests for edge cases such as circular dependencies, verifying appropriate error messages are displayed.
|
|
||||||
4. Measure performance with large sets of tasks and dependencies to ensure efficiency.
|
|
||||||
5. Conduct integration testing with other components that rely on task dependencies.
|
|
||||||
6. Perform manual code reviews to validate implementation against requirements.
|
|
||||||
7. Execute automated tests to verify no regressions in existing functionality.
|
|
||||||
|
|
||||||
# Subtasks:
|
|
||||||
## 1. Review Current Add-Task Implementation and Identify Dependency Mechanisms [done]
|
|
||||||
### Dependencies: None
|
|
||||||
### Description: Examine the existing add-task functionality to understand how task dependencies are currently handled.
|
|
||||||
### Details:
|
|
||||||
Conduct a code review of the add-task feature. Document any existing mechanisms for handling task dependencies.
|
|
||||||
|
|
||||||
## 2. Modify Add-Task to Recursively Analyze Dependencies [done]
|
|
||||||
### Dependencies: 88.1
|
|
||||||
### Description: Update the add-task functionality to recursively analyze and incorporate all task dependencies.
|
|
||||||
### Details:
|
|
||||||
Implement a recursive algorithm that identifies and incorporates all dependencies for a given task. Ensure it handles nested dependencies correctly.
|
|
||||||
|
|
||||||
## 3. Ensure Correct Order of Dependency Resolution [done]
|
|
||||||
### Dependencies: 88.2
|
|
||||||
### Description: Modify the add-task functionality to ensure that dependencies are resolved in the correct order during task execution.
|
|
||||||
### Details:
|
|
||||||
Implement logic to sort and execute tasks based on their dependency order. Handle cases where multiple tasks depend on each other.
|
|
||||||
|
|
||||||
## 4. Integrate with Existing Validation and Error Handling [done]
|
|
||||||
### Dependencies: 88.3
|
|
||||||
### Description: Update the add-task functionality to integrate with existing validation and error handling mechanisms (from Task 87).
|
|
||||||
### Details:
|
|
||||||
Modify the code to provide clear feedback if dependencies cannot be resolved. Ensure that circular dependencies are detected and handled appropriately.
|
|
||||||
|
|
||||||
## 5. Optimize Performance for Large Projects [done]
|
|
||||||
### Dependencies: 88.4
|
|
||||||
### Description: Optimize the add-task functionality to ensure efficient dependency resolution, especially for projects with a large number of tasks.
|
|
||||||
### Details:
|
|
||||||
Profile and optimize the recursive dependency analysis algorithm. Implement caching or other performance improvements as needed.
|
|
||||||
|
|
||||||
@@ -1,23 +0,0 @@
# Task ID: 89
# Title: Introduce Prioritize Command with Enhanced Priority Levels
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement a prioritize command with --up/--down/--priority/--id flags and shorthand equivalents (-u/-d/-p/-i). Add 'lowest' and 'highest' priority levels, updating CLI output accordingly.
# Details:
The new prioritize command should allow users to adjust task priorities using the specified flags. The --up and --down flags will modify the priority relative to the current level, while --priority sets an absolute priority. The --id flag specifies which task to prioritize. Shorthand equivalents (-u/-d/-p/-i) should be supported for user convenience.

The priority levels should now include 'lowest', 'low', 'medium', 'high', and 'highest'. The CLI output should be updated to reflect these new priority levels accurately.

Considerations:
- Ensure backward compatibility with existing commands and configurations.
- Update the help documentation to include the new command and its usage.
- Implement proper error handling for invalid priority levels or missing flags.

# Test Strategy:
To verify task completion, perform the following tests:
1. Test each flag (--up, --down, --priority, --id) individually and in combination to ensure they function as expected.
2. Verify that shorthand equivalents (-u, -d, -p, -i) work correctly.
3. Check that the new priority levels ('lowest' and 'highest') are recognized and displayed properly in CLI output.
4. Test error handling for invalid inputs (e.g., non-existent task IDs, invalid priority levels).
5. Ensure that the help command displays accurate information about the new prioritize command.
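The relative adjustment described in the Details section is essentially an index shift over an ordered scale. A minimal sketch, assuming a `PRIORITY_LEVELS` constant and a `bumpPriority` helper (both hypothetical names, not existing Taskmaster code):

```js
// Illustrative sketch only - names and structure are assumptions.
const PRIORITY_LEVELS = ['lowest', 'low', 'medium', 'high', 'highest'];

// steps > 0 for --up, steps < 0 for --down; clamps at the ends of the scale.
function bumpPriority(current, steps) {
	const index = PRIORITY_LEVELS.indexOf(current);
	if (index === -1) {
		throw new Error(`Invalid priority level: ${current}`);
	}
	const next = Math.min(PRIORITY_LEVELS.length - 1, Math.max(0, index + steps));
	return PRIORITY_LEVELS[next];
}

// e.g. bumpPriority('medium', 1) === 'high'; bumpPriority('highest', 1) === 'highest'
```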
@@ -1,67 +0,0 @@
# Task ID: 90
# Title: Implement Subtask Progress Analyzer and Reporting System
# Status: pending
# Dependencies: 1, 3
# Priority: medium
# Description: Develop a subtask analyzer that monitors the progress of all subtasks, validates their status, and generates comprehensive reports for users to track project advancement.
# Details:
The subtask analyzer should be implemented with the following components and considerations:

1. Progress Tracking Mechanism:
   - Create a function to scan the task data structure and identify all tasks with subtasks
   - Implement logic to determine the completion status of each subtask
   - Calculate overall progress percentages for tasks with multiple subtasks

2. Status Validation:
   - Develop validation rules to check if subtasks are progressing according to expected timelines
   - Implement detection for stalled or blocked subtasks
   - Create alerts for subtasks that are behind schedule or have dependency issues

3. Reporting System:
   - Design a structured report format that clearly presents:
     - Overall project progress
     - Task-by-task breakdown with subtask status
     - Highlighted issues or blockers
   - Support multiple output formats (console, JSON, exportable text)
   - Include visual indicators for progress (e.g., progress bars in CLI)

4. Integration Points:
   - Hook into the existing task management system
   - Ensure the analyzer can be triggered via CLI commands
   - Make the reporting feature accessible through the main command interface

5. Performance Considerations:
   - Optimize for large task lists with many subtasks
   - Implement caching if necessary to avoid redundant calculations
   - Ensure reports generate quickly even for complex project structures

The implementation should follow the existing code style and patterns, leveraging the task data structure already in place. The analyzer should be non-intrusive to existing functionality while providing valuable insights to users.

# Test Strategy:
Testing for the subtask analyzer should include:

1. Unit Tests:
   - Test the progress calculation logic with various task/subtask configurations
   - Verify status validation correctly identifies issues in different scenarios
   - Ensure report generation produces consistent and accurate output
   - Test edge cases (empty subtasks, all complete, all incomplete, mixed states)

2. Integration Tests:
   - Verify the analyzer correctly integrates with the existing task data structure
   - Test CLI command integration and parameter handling
   - Ensure reports reflect actual changes to task/subtask status

3. Performance Tests:
   - Benchmark report generation with large task sets (100+ tasks with multiple subtasks)
   - Verify memory usage remains reasonable during analysis

4. User Acceptance Testing:
   - Create sample projects with various subtask configurations
   - Generate reports and verify they provide clear, actionable information
   - Confirm visual indicators accurately represent progress

5. Regression Testing:
   - Verify that the analyzer doesn't interfere with existing task management functionality
   - Ensure backward compatibility with existing task data structures

Documentation should be updated to include examples of how to use the new analyzer and interpret the reports. Success criteria include accurate progress tracking, clear reporting, and performance that scales with project size.
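The progress-tracking mechanism in item 1 of the Details amounts to a small aggregation pass over the task list. A rough sketch, under the assumption that tasks carry a `subtasks` array whose entries have a `status` field (illustrative only, not the planned implementation):

```js
// Illustrative sketch only - assumes tasks look like { id, subtasks: [{ id, status }] }.
function summarizeSubtaskProgress(tasks) {
	return tasks
		.filter((task) => Array.isArray(task.subtasks) && task.subtasks.length > 0)
		.map((task) => {
			const total = task.subtasks.length;
			const done = task.subtasks.filter((st) => st.status === 'done').length;
			return {
				taskId: task.id,
				done,
				total,
				percentComplete: Math.round((done / total) * 100)
			};
		});
}
```

A console or JSON reporter (item 3) would then only need to render these per-task summaries, which keeps the analyzer non-intrusive to the existing task data structure.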
@@ -1,49 +0,0 @@
# Task ID: 91
# Title: Implement Move Command for Tasks and Subtasks
# Status: done
# Dependencies: 1, 3
# Priority: medium
# Description: Introduce a 'move' command to enable moving tasks or subtasks to a different id, facilitating conflict resolution by allowing teams to assign new ids as needed.
# Details:
The move command will consist of three core components: 1) Core Logic Function in scripts/modules/task-manager/move-task.js, 2) Direct Function Wrapper in mcp-server/src/core/direct-functions/move-task.js, and 3) MCP Tool in mcp-server/src/tools/move-task.js. The command will accept source and destination IDs, handling various scenarios including moving tasks to become subtasks, subtasks to become tasks, and subtasks between different parents. The implementation will handle edge cases such as invalid ids, non-existent parents, circular dependencies, and will properly update all dependencies.

# Test Strategy:
Testing will follow a three-tier approach: 1) Unit tests for core functionality including moving tasks to subtasks, subtasks to tasks, subtasks between parents, dependency handling, and validation error cases; 2) Integration tests for the direct function with mock MCP environment and task file regeneration; 3) End-to-end tests for the full MCP tool call path. This will verify all scenarios including moving a task to a new id, moving a subtask under a different parent while preserving its hierarchy, and handling errors for invalid operations.

# Subtasks:

## 1. Design and implement core move logic [done]
### Dependencies: None
### Description: Create the fundamental logic for moving tasks and subtasks within the task management system hierarchy
### Details:
Implement the core logic function in scripts/modules/task-manager/move-task.js with the signature that accepts tasksPath, sourceId, destinationId, and generateFiles parameters. Develop functions to handle all movement operations including task-to-subtask, subtask-to-task, and subtask-to-subtask conversions. Implement validation for source and destination IDs, and ensure proper updating of parent-child relationships and dependencies.

## 2. Implement edge case handling [done]
### Dependencies: 91.1
### Description: Develop robust error handling for all potential edge cases in the move operation
### Details:
Create validation functions to detect invalid task IDs, non-existent parent tasks, and circular dependencies. Handle special cases such as moving a task to become the first/last subtask, reordering within the same parent, preventing moving a task to itself, and preventing moving a parent to its own subtask. Implement proper error messages and status codes for each edge case, and ensure system stability if a move operation fails.

## 3. Update CLI interface for move commands [done]
### Dependencies: 91.1
### Description: Extend the command-line interface to support the new move functionality with appropriate flags and options
### Details:
Create the Direct Function Wrapper in mcp-server/src/core/direct-functions/move-task.js to adapt the core logic for MCP, handling path resolution and parameter validation. Implement silent mode to prevent console output interfering with JSON responses. Create the MCP Tool in mcp-server/src/tools/move-task.js that exposes the functionality to Cursor, handles project root resolution, and includes proper Zod parameter definitions. Update MCP tool definition in .cursor/mcp.json and register the tool in mcp-server/src/tools/index.js.

## 4. Ensure data integrity during moves [done]
### Dependencies: 91.1, 91.2
### Description: Implement safeguards to maintain data consistency and update all relationships during move operations
### Details:
Implement dependency handling logic to update dependencies when converting between task/subtask, add appropriate parent dependencies when needed, and validate no circular dependencies are created. Create transaction-like operations to ensure atomic moves that either complete fully or roll back. Implement functions to update all affected task relationships after a move, and add verification steps to confirm data integrity post-move.

## 5. Create comprehensive test suite [done]
### Dependencies: 91.1, 91.2, 91.3, 91.4
### Description: Develop and execute tests covering all move scenarios and edge cases
### Details:
Create unit tests for core functionality including moving tasks to subtasks, subtasks to tasks, subtasks between parents, dependency handling, and validation error cases. Implement integration tests for the direct function with mock MCP environment and task file regeneration. Develop end-to-end tests for the full MCP tool call path. Ensure tests cover all identified edge cases and potential failure points, and verify data integrity after moves.

## 6. Export and integrate the move function [done]
### Dependencies: 91.1
### Description: Ensure the move function is properly exported and integrated with existing code
### Details:
Export the move function in scripts/modules/task-manager.js. Update task-master-core.js to include the direct function. Reuse validation logic from add-subtask.js and remove-subtask.js where appropriate. Follow silent mode implementation pattern from other direct functions and match parameter naming conventions in MCP tools.
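Subtask 2 calls out the trickiest validation: a task must not be moved onto itself or under one of its own descendants. A minimal sketch of that check, assuming tasks nest via a `subtasks` array (the `isDescendant` and `validateMove` names are hypothetical, not the code shipped in scripts/modules/task-manager/move-task.js):

```js
// Illustrative sketch only - helper and field names are assumptions.
// Guards against the "move a parent under its own subtask" case from subtask 2.
function isDescendant(tasks, ancestorId, candidateId) {
	const ancestor = tasks.find((t) => String(t.id) === String(ancestorId));
	if (!ancestor || !Array.isArray(ancestor.subtasks)) return false;
	return ancestor.subtasks.some(
		(st) =>
			String(st.id) === String(candidateId) ||
			isDescendant(ancestor.subtasks, st.id, candidateId)
	);
}

function validateMove(tasks, sourceId, destinationId) {
	if (String(sourceId) === String(destinationId)) {
		throw new Error(`Cannot move task ${sourceId} onto itself`);
	}
	if (isDescendant(tasks, sourceId, destinationId)) {
		throw new Error(
			`Cannot move task ${sourceId} under its own subtask ${destinationId}`
		);
	}
}
```

Running a check like this before any mutation pairs naturally with the transaction-like, all-or-nothing write described in subtask 4.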
460	tasks/tasks.json
File diff suppressed because one or more lines are too long
@@ -10,7 +10,6 @@ const mockGetFallbackModelId = jest.fn();
 const mockGetParametersForRole = jest.fn();
 const mockGetUserId = jest.fn();
 const mockGetDebugFlag = jest.fn();
-const mockIsApiKeySet = jest.fn();
 
 // --- Mock MODEL_MAP Data ---
 // Provide a simplified structure sufficient for cost calculation tests
@@ -30,12 +29,6 @@ const mockModelMap = {
 			id: 'test-research-model',
 			cost_per_1m_tokens: { input: 1, output: 1, currency: 'USD' }
 		}
-	],
-	openai: [
-		{
-			id: 'test-openai-model',
-			cost_per_1m_tokens: { input: 2, output: 6, currency: 'USD' }
-		}
 	]
 	// Add other providers/models if needed for specific tests
 };
@@ -52,8 +45,7 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
 	getUserId: mockGetUserId,
 	getDebugFlag: mockGetDebugFlag,
 	MODEL_MAP: mockModelMap,
-	getBaseUrlForRole: mockGetBaseUrlForRole,
-	isApiKeySet: mockIsApiKeySet
+	getBaseUrlForRole: mockGetBaseUrlForRole
 }));
 
 // Mock AI Provider Modules
@@ -75,24 +67,7 @@ jest.unstable_mockModule('../../src/ai-providers/perplexity.js', () => ({
 	generatePerplexityObject: mockGeneratePerplexityObject
 }));
 
-const mockGenerateOpenAIText = jest.fn();
-const mockStreamOpenAIText = jest.fn();
-const mockGenerateOpenAIObject = jest.fn();
-jest.unstable_mockModule('../../src/ai-providers/openai.js', () => ({
-	generateOpenAIText: mockGenerateOpenAIText,
-	streamOpenAIText: mockStreamOpenAIText,
-	generateOpenAIObject: mockGenerateOpenAIObject
-}));
-
-// Mock ollama provider (for special case testing - API key is optional)
-const mockGenerateOllamaText = jest.fn();
-const mockStreamOllamaText = jest.fn();
-const mockGenerateOllamaObject = jest.fn();
-jest.unstable_mockModule('../../src/ai-providers/ollama.js', () => ({
-	generateOllamaText: mockGenerateOllamaText,
-	streamOllamaText: mockStreamOllamaText,
-	generateOllamaObject: mockGenerateOllamaObject
-}));
-
+// ... Mock other providers (google, openai, etc.) similarly ...
+
 // Mock utils logger, API key resolver, AND findProjectRoot
 const mockLog = jest.fn();
@@ -137,8 +112,6 @@ describe('Unified AI Services', () => {
 		mockResolveEnvVariable.mockImplementation((key) => {
 			if (key === 'ANTHROPIC_API_KEY') return 'mock-anthropic-key';
 			if (key === 'PERPLEXITY_API_KEY') return 'mock-perplexity-key';
-			if (key === 'OPENAI_API_KEY') return 'mock-openai-key';
-			if (key === 'OLLAMA_API_KEY') return 'mock-ollama-key';
 			return null;
 		});
 
@@ -146,7 +119,6 @@ describe('Unified AI Services', () => {
 		mockFindProjectRoot.mockReturnValue(fakeProjectRoot);
 		mockGetDebugFlag.mockReturnValue(false);
 		mockGetUserId.mockReturnValue('test-user-id'); // Add default mock for getUserId
-		mockIsApiKeySet.mockReturnValue(true); // Default to true for most tests
 	});
 
 	describe('generateTextService', () => {
@@ -361,256 +333,13 @@ describe('Unified AI Services', () => {
|
|||||||
expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(1);
|
expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(1);
|
||||||
});
|
});
|
||||||
|
|
||||||
// New tests for API key checking and fallback sequence
|
// Add more tests for edge cases:
|
||||||
// These tests verify that:
|
// - Missing API keys (should throw from _resolveApiKey)
|
||||||
// 1. The system checks if API keys are set before trying to use a provider
|
// - Unsupported provider configured (should skip and log)
|
||||||
// 2. If a provider's API key is missing, it skips to the next provider in the fallback sequence
|
// - Missing provider/model config for a role (should skip and log)
|
||||||
// 3. The system throws an appropriate error if all providers' API keys are missing
|
// - Missing prompt
|
||||||
// 4. Ollama is a special case where API key is optional and not checked
|
// - Different initial roles (research, fallback)
|
||||||
// 5. Session context is correctly used for API key checks
|
// - generateObjectService (mock schema, check object result)
|
||||||
|
// - streamTextService (more complex to test, might need stream helpers)
|
||||||
test('should skip provider with missing API key and try next in fallback sequence', async () => {
|
|
||||||
// Setup isApiKeySet to return false for anthropic but true for perplexity
|
|
||||||
mockIsApiKeySet.mockImplementation((provider, session, root) => {
|
|
||||||
if (provider === 'anthropic') return false; // Main provider has no key
|
|
||||||
return true; // Other providers have keys
|
|
||||||
});
|
|
||||||
|
|
||||||
// Mock perplexity text response (since we'll skip anthropic)
|
|
||||||
mockGeneratePerplexityText.mockResolvedValue({
|
|
||||||
text: 'Perplexity response (skipped to research)',
|
|
||||||
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 }
|
|
||||||
});
|
|
||||||
|
|
||||||
const params = {
|
|
||||||
role: 'main',
|
|
||||||
prompt: 'Skip main provider test',
|
|
||||||
session: { env: {} }
|
|
||||||
};
|
|
||||||
|
|
||||||
const result = await generateTextService(params);
|
|
||||||
|
|
||||||
// Should have gotten the perplexity response
|
|
||||||
expect(result.mainResult).toBe(
|
|
||||||
'Perplexity response (skipped to research)'
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should check API keys
|
|
||||||
expect(mockIsApiKeySet).toHaveBeenCalledWith(
|
|
||||||
'anthropic',
|
|
||||||
params.session,
|
|
||||||
fakeProjectRoot
|
|
||||||
);
|
|
||||||
expect(mockIsApiKeySet).toHaveBeenCalledWith(
|
|
||||||
'perplexity',
|
|
||||||
params.session,
|
|
||||||
fakeProjectRoot
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should log a warning
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'warn',
|
|
||||||
expect.stringContaining(
|
|
||||||
`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should NOT call anthropic provider
|
|
||||||
expect(mockGenerateAnthropicText).not.toHaveBeenCalled();
|
|
||||||
|
|
||||||
// Should call perplexity provider
|
|
||||||
expect(mockGeneratePerplexityText).toHaveBeenCalledTimes(1);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should skip multiple providers with missing API keys and use first available', async () => {
|
|
||||||
// Setup: Main and fallback providers have no keys, only research has a key
|
|
||||||
mockIsApiKeySet.mockImplementation((provider, session, root) => {
|
|
||||||
if (provider === 'anthropic') return false; // Main and fallback are both anthropic
|
|
||||||
if (provider === 'perplexity') return true; // Research has a key
|
|
||||||
return false;
|
|
||||||
});
|
|
||||||
|
|
||||||
// Define different providers for testing multiple skips
|
|
||||||
mockGetFallbackProvider.mockReturnValue('openai'); // Different from main
|
|
||||||
mockGetFallbackModelId.mockReturnValue('test-openai-model');
|
|
||||||
|
|
||||||
// Mock isApiKeySet to return false for both main and fallback
|
|
||||||
mockIsApiKeySet.mockImplementation((provider, session, root) => {
|
|
||||||
if (provider === 'anthropic') return false; // Main provider has no key
|
|
||||||
if (provider === 'openai') return false; // Fallback provider has no key
|
|
||||||
return true; // Research provider has a key
|
|
||||||
});
|
|
||||||
|
|
||||||
// Mock perplexity text response (since we'll skip to research)
|
|
||||||
mockGeneratePerplexityText.mockResolvedValue({
|
|
||||||
text: 'Research response after skipping main and fallback',
|
|
||||||
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 }
|
|
||||||
});
|
|
||||||
|
|
||||||
const params = {
|
|
||||||
role: 'main',
|
|
||||||
prompt: 'Skip multiple providers test',
|
|
||||||
session: { env: {} }
|
|
||||||
};
|
|
||||||
|
|
||||||
const result = await generateTextService(params);
|
|
||||||
|
|
||||||
// Should have gotten the perplexity (research) response
|
|
||||||
expect(result.mainResult).toBe(
|
|
||||||
'Research response after skipping main and fallback'
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should check API keys for all three roles
|
|
||||||
expect(mockIsApiKeySet).toHaveBeenCalledWith(
|
|
||||||
'anthropic',
|
|
||||||
params.session,
|
|
||||||
fakeProjectRoot
|
|
||||||
);
|
|
||||||
expect(mockIsApiKeySet).toHaveBeenCalledWith(
|
|
||||||
'openai',
|
|
||||||
params.session,
|
|
||||||
fakeProjectRoot
|
|
||||||
);
|
|
||||||
expect(mockIsApiKeySet).toHaveBeenCalledWith(
|
|
||||||
'perplexity',
|
|
||||||
params.session,
|
|
||||||
fakeProjectRoot
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should log warnings for both skipped providers
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'warn',
|
|
||||||
expect.stringContaining(
|
|
||||||
`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'warn',
|
|
||||||
expect.stringContaining(
|
|
||||||
`Skipping role 'fallback' (Provider: openai): API key not set or invalid.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should NOT call skipped providers
|
|
||||||
expect(mockGenerateAnthropicText).not.toHaveBeenCalled();
|
|
||||||
expect(mockGenerateOpenAIText).not.toHaveBeenCalled();
|
|
||||||
|
|
||||||
// Should call perplexity provider
|
|
||||||
expect(mockGeneratePerplexityText).toHaveBeenCalledTimes(1);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should throw error if all providers in sequence have missing API keys', async () => {
|
|
||||||
// Mock all providers to have missing API keys
|
|
||||||
mockIsApiKeySet.mockReturnValue(false);
|
|
||||||
|
|
||||||
const params = {
|
|
||||||
role: 'main',
|
|
||||||
prompt: 'All API keys missing test',
|
|
||||||
session: { env: {} }
|
|
||||||
};
|
|
||||||
|
|
||||||
// Should throw error since all providers would be skipped
|
|
||||||
await expect(generateTextService(params)).rejects.toThrow(
|
|
||||||
'AI service call failed for all configured roles'
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should log warnings for all skipped providers
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'warn',
|
|
||||||
expect.stringContaining(
|
|
||||||
`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'warn',
|
|
||||||
expect.stringContaining(
|
|
||||||
`Skipping role 'fallback' (Provider: anthropic): API key not set or invalid.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'warn',
|
|
||||||
expect.stringContaining(
|
|
||||||
`Skipping role 'research' (Provider: perplexity): API key not set or invalid.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should log final error
|
|
||||||
expect(mockLog).toHaveBeenCalledWith(
|
|
||||||
'error',
|
|
||||||
expect.stringContaining(
|
|
||||||
'All roles in the sequence [main, fallback, research] failed.'
|
|
||||||
)
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should NOT call any providers
|
|
||||||
expect(mockGenerateAnthropicText).not.toHaveBeenCalled();
|
|
||||||
expect(mockGeneratePerplexityText).not.toHaveBeenCalled();
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should not check API key for Ollama provider and try to use it', async () => {
|
|
||||||
// Setup: Set main provider to ollama
|
|
||||||
mockGetMainProvider.mockReturnValue('ollama');
|
|
||||||
mockGetMainModelId.mockReturnValue('llama3');
|
|
||||||
|
|
||||||
// Mock Ollama text generation to succeed
|
|
||||||
mockGenerateOllamaText.mockResolvedValue({
|
|
||||||
text: 'Ollama response (no API key required)',
|
|
||||||
usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 }
|
|
||||||
});
|
|
||||||
|
|
||||||
const params = {
|
|
||||||
role: 'main',
|
|
||||||
prompt: 'Ollama special case test',
|
|
||||||
session: { env: {} }
|
|
||||||
};
|
|
||||||
|
|
||||||
const result = await generateTextService(params);
|
|
||||||
|
|
||||||
// Should have gotten the Ollama response
|
|
||||||
expect(result.mainResult).toBe('Ollama response (no API key required)');
|
|
||||||
|
|
||||||
// isApiKeySet shouldn't be called for Ollama
|
|
||||||
// Note: This is indirect - the code just doesn't check isApiKeySet for ollama
|
|
||||||
// so we're verifying ollama provider was called despite isApiKeySet being mocked to false
|
|
||||||
mockIsApiKeySet.mockReturnValue(false); // Should be ignored for Ollama
|
|
||||||
|
|
||||||
// Should call Ollama provider
|
|
||||||
expect(mockGenerateOllamaText).toHaveBeenCalledTimes(1);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should correctly use the provided session for API key check', async () => {
|
|
||||||
// Mock custom session object with env vars
|
|
||||||
const customSession = { env: { ANTHROPIC_API_KEY: 'session-api-key' } };
|
|
||||||
|
|
||||||
// Setup API key check to verify the session is passed correctly
|
|
||||||
mockIsApiKeySet.mockImplementation((provider, session, root) => {
|
|
||||||
// Only return true if the correct session was provided
|
|
||||||
return session === customSession;
|
|
||||||
});
|
|
||||||
|
|
||||||
// Mock the anthropic response
|
|
||||||
mockGenerateAnthropicText.mockResolvedValue({
|
|
||||||
text: 'Anthropic response with session key',
|
|
||||||
usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 }
|
|
||||||
});
|
|
||||||
|
|
||||||
const params = {
|
|
||||||
role: 'main',
|
|
||||||
prompt: 'Session API key test',
|
|
||||||
session: customSession
|
|
||||||
};
|
|
||||||
|
|
||||||
const result = await generateTextService(params);
|
|
||||||
|
|
||||||
// Should check API key with the custom session
|
|
||||||
expect(mockIsApiKeySet).toHaveBeenCalledWith(
|
|
||||||
'anthropic',
|
|
||||||
customSession,
|
|
||||||
fakeProjectRoot
|
|
||||||
);
|
|
||||||
|
|
||||||
// Should have gotten the anthropic response
|
|
||||||
expect(result.mainResult).toBe('Anthropic response with session key');
|
|
||||||
});
|
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|||||||
@@ -1,878 +0,0 @@
|
|||||||
// @ts-check
|
|
||||||
/**
|
|
||||||
* Module to test the config-manager.js functionality
|
|
||||||
* This file uses ES module syntax (.mjs) to properly handle imports
|
|
||||||
*/
|
|
||||||
|
|
||||||
import fs from 'fs';
|
|
||||||
import path from 'path';
|
|
||||||
import { jest } from '@jest/globals';
|
|
||||||
import { fileURLToPath } from 'url';
|
|
||||||
import { sampleTasks } from '../fixtures/sample-tasks.js';
|
|
||||||
|
|
||||||
// Disable chalk's color detection which can cause fs.readFileSync calls
|
|
||||||
process.env.FORCE_COLOR = '0';
|
|
||||||
|
|
||||||
// --- Read REAL supported-models.json data BEFORE mocks ---
|
|
||||||
const __filename = fileURLToPath(import.meta.url); // Get current file path
|
|
||||||
const __dirname = path.dirname(__filename); // Get current directory
|
|
||||||
const realSupportedModelsPath = path.resolve(
|
|
||||||
__dirname,
|
|
||||||
'../../scripts/modules/supported-models.json'
|
|
||||||
);
|
|
||||||
let REAL_SUPPORTED_MODELS_CONTENT;
|
|
||||||
let REAL_SUPPORTED_MODELS_DATA;
|
|
||||||
try {
|
|
||||||
REAL_SUPPORTED_MODELS_CONTENT = fs.readFileSync(
|
|
||||||
realSupportedModelsPath,
|
|
||||||
'utf-8'
|
|
||||||
);
|
|
||||||
REAL_SUPPORTED_MODELS_DATA = JSON.parse(REAL_SUPPORTED_MODELS_CONTENT);
|
|
||||||
} catch (err) {
|
|
||||||
console.error(
|
|
||||||
'FATAL TEST SETUP ERROR: Could not read or parse real supported-models.json',
|
|
||||||
err
|
|
||||||
);
|
|
||||||
REAL_SUPPORTED_MODELS_CONTENT = '{}'; // Default to empty object on error
|
|
||||||
REAL_SUPPORTED_MODELS_DATA = {};
|
|
||||||
process.exit(1); // Exit if essential test data can't be loaded
|
|
||||||
}
|
|
||||||
|
|
||||||
// --- Define Mock Function Instances ---
|
|
||||||
const mockFindProjectRoot = jest.fn();
|
|
||||||
const mockLog = jest.fn();
|
|
||||||
const mockResolveEnvVariable = jest.fn();
|
|
||||||
|
|
||||||
// --- Mock fs functions directly instead of the whole module ---
|
|
||||||
const mockExistsSync = jest.fn();
|
|
||||||
const mockReadFileSync = jest.fn();
|
|
||||||
const mockWriteFileSync = jest.fn();
|
|
||||||
|
|
||||||
// Instead of mocking the entire fs module, mock just the functions we need
|
|
||||||
fs.existsSync = mockExistsSync;
|
|
||||||
fs.readFileSync = mockReadFileSync;
|
|
||||||
fs.writeFileSync = mockWriteFileSync;
|
|
||||||
|
|
||||||
// --- Test Data (Keep as is, ensure DEFAULT_CONFIG is accurate) ---
|
|
||||||
const MOCK_PROJECT_ROOT = '/mock/project';
|
|
||||||
const MOCK_CONFIG_PATH = path.join(MOCK_PROJECT_ROOT, '.taskmasterconfig');
|
|
||||||
|
|
||||||
// Updated DEFAULT_CONFIG reflecting the implementation
|
|
||||||
const DEFAULT_CONFIG = {
|
|
||||||
models: {
|
|
||||||
main: {
|
|
||||||
provider: 'anthropic',
|
|
||||||
modelId: 'claude-3-7-sonnet-20250219',
|
|
||||||
maxTokens: 64000,
|
|
||||||
temperature: 0.2
|
|
||||||
},
|
|
||||||
research: {
|
|
||||||
provider: 'perplexity',
|
|
||||||
modelId: 'sonar-pro',
|
|
||||||
maxTokens: 8700,
|
|
||||||
temperature: 0.1
|
|
||||||
},
|
|
||||||
fallback: {
|
|
||||||
provider: 'anthropic',
|
|
||||||
modelId: 'claude-3-5-sonnet',
|
|
||||||
maxTokens: 64000,
|
|
||||||
temperature: 0.2
|
|
||||||
}
|
|
||||||
},
|
|
||||||
global: {
|
|
||||||
logLevel: 'info',
|
|
||||||
debug: false,
|
|
||||||
defaultSubtasks: 5,
|
|
||||||
defaultPriority: 'medium',
|
|
||||||
projectName: 'Task Master',
|
|
||||||
ollamaBaseUrl: 'http://localhost:11434/api'
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
// Other test data (VALID_CUSTOM_CONFIG, PARTIAL_CONFIG, INVALID_PROVIDER_CONFIG)
|
|
||||||
const VALID_CUSTOM_CONFIG = {
|
|
||||||
models: {
|
|
||||||
main: {
|
|
||||||
provider: 'openai',
|
|
||||||
modelId: 'gpt-4o',
|
|
||||||
maxTokens: 4096,
|
|
||||||
temperature: 0.5
|
|
||||||
},
|
|
||||||
research: {
|
|
||||||
provider: 'google',
|
|
||||||
modelId: 'gemini-1.5-pro-latest',
|
|
||||||
maxTokens: 8192,
|
|
||||||
temperature: 0.3
|
|
||||||
},
|
|
||||||
fallback: {
|
|
||||||
provider: 'anthropic',
|
|
||||||
modelId: 'claude-3-opus-20240229',
|
|
||||||
maxTokens: 100000,
|
|
||||||
temperature: 0.4
|
|
||||||
}
|
|
||||||
},
|
|
||||||
global: {
|
|
||||||
logLevel: 'debug',
|
|
||||||
defaultPriority: 'high',
|
|
||||||
projectName: 'My Custom Project'
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
const PARTIAL_CONFIG = {
|
|
||||||
models: {
|
|
||||||
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
|
|
||||||
},
|
|
||||||
global: {
|
|
||||||
projectName: 'Partial Project'
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
const INVALID_PROVIDER_CONFIG = {
|
|
||||||
models: {
|
|
||||||
main: { provider: 'invalid-provider', modelId: 'some-model' },
|
|
||||||
research: {
|
|
||||||
provider: 'perplexity',
|
|
||||||
modelId: 'llama-3-sonar-large-32k-online'
|
|
||||||
}
|
|
||||||
},
|
|
||||||
global: {
|
|
||||||
logLevel: 'warn'
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
// Define spies globally to be restored in afterAll
|
|
||||||
let consoleErrorSpy;
|
|
||||||
let consoleWarnSpy;
|
|
||||||
|
|
||||||
beforeAll(() => {
|
|
||||||
// Set up console spies
|
|
||||||
consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
|
|
||||||
consoleWarnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
|
|
||||||
});
|
|
||||||
|
|
||||||
afterAll(() => {
|
|
||||||
// Restore all spies
|
|
||||||
jest.restoreAllMocks();
|
|
||||||
});
|
|
||||||
|
|
||||||
describe('Config Manager Module', () => {
|
|
||||||
// Declare variables for imported module
|
|
||||||
let configManager;
|
|
||||||
|
|
||||||
// Reset mocks before each test for isolation
|
|
||||||
beforeEach(async () => {
|
|
||||||
// Clear all mock calls and reset implementations between tests
|
|
||||||
jest.clearAllMocks();
|
|
||||||
// Reset the external mock instances for utils
|
|
||||||
mockFindProjectRoot.mockReset();
|
|
||||||
mockLog.mockReset();
|
|
||||||
mockResolveEnvVariable.mockReset();
|
|
||||||
mockExistsSync.mockReset();
|
|
||||||
mockReadFileSync.mockReset();
|
|
||||||
mockWriteFileSync.mockReset();
|
|
||||||
|
|
||||||
// --- Mock Dependencies BEFORE importing the module under test ---
|
|
||||||
// Mock the 'utils.js' module using doMock (applied at runtime)
|
|
||||||
jest.doMock('../../scripts/modules/utils.js', () => ({
|
|
||||||
__esModule: true, // Indicate it's an ES module mock
|
|
||||||
findProjectRoot: mockFindProjectRoot, // Use the mock function instance
|
|
||||||
log: mockLog, // Use the mock function instance
|
|
||||||
resolveEnvVariable: mockResolveEnvVariable // Use the mock function instance
|
|
||||||
}));
|
|
||||||
|
|
||||||
// Dynamically import the module under test AFTER mocking dependencies
|
|
||||||
configManager = await import('../../scripts/modules/config-manager.js');
|
|
||||||
|
|
||||||
// --- Default Mock Implementations ---
|
|
||||||
mockFindProjectRoot.mockReturnValue(MOCK_PROJECT_ROOT); // Default for utils.findProjectRoot
|
|
||||||
mockExistsSync.mockReturnValue(true); // Assume files exist by default
|
|
||||||
|
|
||||||
// Default readFileSync: Return REAL models content, mocked config, or throw error
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
const baseName = path.basename(filePath);
|
|
||||||
if (baseName === 'supported-models.json') {
|
|
||||||
// Return the REAL file content stringified
|
|
||||||
return REAL_SUPPORTED_MODELS_CONTENT;
|
|
||||||
} else if (filePath === MOCK_CONFIG_PATH) {
|
|
||||||
// Still mock the .taskmasterconfig reads
|
|
||||||
return JSON.stringify(DEFAULT_CONFIG); // Default behavior
|
|
||||||
}
|
|
||||||
// Throw for unexpected reads - helps catch errors
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call in test: ${filePath}`);
|
|
||||||
});
|
|
||||||
|
|
||||||
// Default writeFileSync: Do nothing, just allow calls
|
|
||||||
mockWriteFileSync.mockImplementation(() => {});
|
|
||||||
});
|
|
||||||
|
|
||||||
// --- Validation Functions ---
|
|
||||||
describe('Validation Functions', () => {
|
|
||||||
// Tests for validateProvider and validateProviderModelCombination
|
|
||||||
test('validateProvider should return true for valid providers', () => {
|
|
||||||
expect(configManager.validateProvider('openai')).toBe(true);
|
|
||||||
expect(configManager.validateProvider('anthropic')).toBe(true);
|
|
||||||
expect(configManager.validateProvider('google')).toBe(true);
|
|
||||||
expect(configManager.validateProvider('perplexity')).toBe(true);
|
|
||||||
expect(configManager.validateProvider('ollama')).toBe(true);
|
|
||||||
expect(configManager.validateProvider('openrouter')).toBe(true);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('validateProvider should return false for invalid providers', () => {
|
|
||||||
expect(configManager.validateProvider('invalid-provider')).toBe(false);
|
|
||||||
expect(configManager.validateProvider('grok')).toBe(false); // Not in mock map
|
|
||||||
expect(configManager.validateProvider('')).toBe(false);
|
|
||||||
expect(configManager.validateProvider(null)).toBe(false);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('validateProviderModelCombination should validate known good combinations', () => {
|
|
||||||
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
|
|
||||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
expect(
|
|
||||||
configManager.validateProviderModelCombination('openai', 'gpt-4o')
|
|
||||||
).toBe(true);
|
|
||||||
expect(
|
|
||||||
configManager.validateProviderModelCombination(
|
|
||||||
'anthropic',
|
|
||||||
'claude-3-5-sonnet-20241022'
|
|
||||||
)
|
|
||||||
).toBe(true);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('validateProviderModelCombination should return false for known bad combinations', () => {
|
|
||||||
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
|
|
||||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
expect(
|
|
||||||
configManager.validateProviderModelCombination(
|
|
||||||
'openai',
|
|
||||||
'claude-3-opus-20240229'
|
|
||||||
)
|
|
||||||
).toBe(false);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('validateProviderModelCombination should return true for ollama/openrouter (empty lists in map)', () => {
|
|
||||||
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
|
|
||||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
expect(
|
|
||||||
configManager.validateProviderModelCombination('ollama', 'any-model')
|
|
||||||
).toBe(false);
|
|
||||||
expect(
|
|
||||||
configManager.validateProviderModelCombination(
|
|
||||||
'openrouter',
|
|
||||||
'any/model'
|
|
||||||
)
|
|
||||||
).toBe(false);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('validateProviderModelCombination should return true for providers not in map', () => {
|
|
||||||
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
|
|
||||||
configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
// The implementation returns true if the provider isn't in the map
|
|
||||||
expect(
|
|
||||||
configManager.validateProviderModelCombination(
|
|
||||||
'unknown-provider',
|
|
||||||
'some-model'
|
|
||||||
)
|
|
||||||
).toBe(true);
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
// --- getConfig Tests ---
|
|
||||||
describe('getConfig Tests', () => {
|
|
||||||
test('should return default config if .taskmasterconfig does not exist', () => {
|
|
||||||
// Arrange
|
|
||||||
mockExistsSync.mockReturnValue(false);
|
|
||||||
// findProjectRoot mock is set in beforeEach
|
|
||||||
|
|
||||||
// Act: Call getConfig with explicit root
|
|
||||||
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); // Force reload
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(config).toEqual(DEFAULT_CONFIG);
|
|
||||||
expect(mockFindProjectRoot).not.toHaveBeenCalled(); // Explicit root provided
|
|
||||||
expect(mockExistsSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
|
|
||||||
expect(mockReadFileSync).not.toHaveBeenCalled(); // No read if file doesn't exist
|
|
||||||
expect(consoleWarnSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining('not found at provided project root')
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
test.skip('should use findProjectRoot and return defaults if file not found', () => {
|
|
||||||
// TODO: Fix mock interaction, findProjectRoot isn't being registered as called
|
|
||||||
// Arrange
|
|
||||||
mockExistsSync.mockReturnValue(false);
|
|
||||||
// findProjectRoot mock is set in beforeEach
|
|
||||||
|
|
||||||
// Act: Call getConfig without explicit root
|
|
||||||
const config = configManager.getConfig(null, true); // Force reload
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(mockFindProjectRoot).toHaveBeenCalled(); // Should be called now
|
|
||||||
expect(mockExistsSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
|
|
||||||
expect(config).toEqual(DEFAULT_CONFIG);
|
|
||||||
expect(mockReadFileSync).not.toHaveBeenCalled();
|
|
||||||
expect(consoleWarnSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining('not found at derived root')
|
|
||||||
); // Adjusted expected warning
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should read and merge valid config file with defaults', () => {
|
|
||||||
// Arrange: Override readFileSync for this test
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH)
|
|
||||||
return JSON.stringify(VALID_CUSTOM_CONFIG);
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
// Provide necessary models for validation within getConfig
|
|
||||||
return JSON.stringify({
|
|
||||||
openai: [{ id: 'gpt-4o' }],
|
|
||||||
google: [{ id: 'gemini-1.5-pro-latest' }],
|
|
||||||
perplexity: [{ id: 'sonar-pro' }],
|
|
||||||
anthropic: [
|
|
||||||
{ id: 'claude-3-opus-20240229' },
|
|
||||||
{ id: 'claude-3-5-sonnet' },
|
|
||||||
{ id: 'claude-3-7-sonnet-20250219' },
|
|
||||||
{ id: 'claude-3-5-sonnet' }
|
|
||||||
],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
});
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); // Force reload
|
|
||||||
|
|
||||||
// Assert: Construct expected merged config
|
|
||||||
const expectedMergedConfig = {
|
|
||||||
models: {
|
|
||||||
main: {
|
|
||||||
...DEFAULT_CONFIG.models.main,
|
|
||||||
...VALID_CUSTOM_CONFIG.models.main
|
|
||||||
},
|
|
||||||
research: {
|
|
||||||
...DEFAULT_CONFIG.models.research,
|
|
||||||
...VALID_CUSTOM_CONFIG.models.research
|
|
||||||
},
|
|
||||||
fallback: {
|
|
||||||
...DEFAULT_CONFIG.models.fallback,
|
|
||||||
...VALID_CUSTOM_CONFIG.models.fallback
|
|
||||||
}
|
|
||||||
},
|
|
||||||
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global }
|
|
||||||
};
|
|
||||||
expect(config).toEqual(expectedMergedConfig);
|
|
||||||
expect(mockExistsSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
|
|
||||||
expect(mockReadFileSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should merge defaults for partial config file', () => {
|
|
||||||
// Arrange
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH)
|
|
||||||
return JSON.stringify(PARTIAL_CONFIG);
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
return JSON.stringify({
|
|
||||||
openai: [{ id: 'gpt-4-turbo' }],
|
|
||||||
perplexity: [{ id: 'sonar-pro' }],
|
|
||||||
anthropic: [
|
|
||||||
{ id: 'claude-3-7-sonnet-20250219' },
|
|
||||||
{ id: 'claude-3-5-sonnet' }
|
|
||||||
],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
});
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
|
|
||||||
// Assert: Construct expected merged config
|
|
||||||
const expectedMergedConfig = {
|
|
||||||
models: {
|
|
||||||
main: {
|
|
||||||
...DEFAULT_CONFIG.models.main,
|
|
||||||
...PARTIAL_CONFIG.models.main
|
|
||||||
},
|
|
||||||
research: { ...DEFAULT_CONFIG.models.research },
|
|
||||||
fallback: { ...DEFAULT_CONFIG.models.fallback }
|
|
||||||
},
|
|
||||||
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global }
|
|
||||||
};
|
|
||||||
expect(config).toEqual(expectedMergedConfig);
|
|
||||||
expect(mockReadFileSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should handle JSON parsing error and return defaults', () => {
|
|
||||||
// Arrange
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH) return 'invalid json';
|
|
||||||
// Mock models read needed for initial load before parse error
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
return JSON.stringify({
|
|
||||||
anthropic: [{ id: 'claude-3-7-sonnet-20250219' }],
|
|
||||||
perplexity: [{ id: 'sonar-pro' }],
|
|
||||||
fallback: [{ id: 'claude-3-5-sonnet' }],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
});
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(config).toEqual(DEFAULT_CONFIG);
|
|
||||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining('Error reading or parsing')
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should handle file read error and return defaults', () => {
|
|
||||||
// Arrange
|
|
||||||
const readError = new Error('Permission denied');
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH) throw readError;
|
|
||||||
// Mock models read needed for initial load before read error
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
return JSON.stringify({
|
|
||||||
anthropic: [{ id: 'claude-3-7-sonnet-20250219' }],
|
|
||||||
perplexity: [{ id: 'sonar-pro' }],
|
|
||||||
fallback: [{ id: 'claude-3-5-sonnet' }],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
});
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(config).toEqual(DEFAULT_CONFIG);
|
|
||||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining(
|
|
||||||
`Permission denied. Using default configuration.`
|
|
||||||
)
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should validate provider and fallback to default if invalid', () => {
|
|
||||||
// Arrange
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH)
|
|
||||||
return JSON.stringify(INVALID_PROVIDER_CONFIG);
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
return JSON.stringify({
|
|
||||||
perplexity: [{ id: 'llama-3-sonar-large-32k-online' }],
|
|
||||||
anthropic: [
|
|
||||||
{ id: 'claude-3-7-sonnet-20250219' },
|
|
||||||
{ id: 'claude-3-5-sonnet' }
|
|
||||||
],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
});
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(consoleWarnSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining(
|
|
||||||
'Warning: Invalid main provider "invalid-provider"'
|
|
||||||
)
|
|
||||||
);
|
|
||||||
const expectedMergedConfig = {
|
|
||||||
models: {
|
|
||||||
main: { ...DEFAULT_CONFIG.models.main },
|
|
||||||
research: {
|
|
||||||
...DEFAULT_CONFIG.models.research,
|
|
||||||
...INVALID_PROVIDER_CONFIG.models.research
|
|
||||||
},
|
|
||||||
fallback: { ...DEFAULT_CONFIG.models.fallback }
|
|
||||||
},
|
|
||||||
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global }
|
|
||||||
};
|
|
||||||
expect(config).toEqual(expectedMergedConfig);
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
// --- writeConfig Tests ---
|
|
||||||
describe('writeConfig', () => {
|
|
||||||
test('should write valid config to file', () => {
|
|
||||||
// Arrange (Default mocks are sufficient)
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
mockWriteFileSync.mockImplementation(() => {}); // Ensure it doesn't throw
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const success = configManager.writeConfig(
|
|
||||||
VALID_CUSTOM_CONFIG,
|
|
||||||
MOCK_PROJECT_ROOT
|
|
||||||
);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(success).toBe(true);
|
|
||||||
expect(mockWriteFileSync).toHaveBeenCalledWith(
|
|
||||||
MOCK_CONFIG_PATH,
|
|
||||||
JSON.stringify(VALID_CUSTOM_CONFIG, null, 2) // writeConfig stringifies
|
|
||||||
);
|
|
||||||
expect(consoleErrorSpy).not.toHaveBeenCalled();
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should return false and log error if write fails', () => {
|
|
||||||
// Arrange
|
|
||||||
const mockWriteError = new Error('Disk full');
|
|
||||||
mockWriteFileSync.mockImplementation(() => {
|
|
||||||
throw mockWriteError;
|
|
||||||
});
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const success = configManager.writeConfig(
|
|
||||||
VALID_CUSTOM_CONFIG,
|
|
||||||
MOCK_PROJECT_ROOT
|
|
||||||
);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(success).toBe(false);
|
|
||||||
expect(mockWriteFileSync).toHaveBeenCalled();
|
|
||||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining(`Disk full`)
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
test.skip('should return false if project root cannot be determined', () => {
|
|
||||||
// TODO: Fix mock interaction or function logic, returns true unexpectedly in test
|
|
||||||
// Arrange: Override mock for this specific test
|
|
||||||
mockFindProjectRoot.mockReturnValue(null);
|
|
||||||
|
|
||||||
// Act: Call without explicit root
|
|
||||||
const success = configManager.writeConfig(VALID_CUSTOM_CONFIG);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(success).toBe(false); // Function should return false if root is null
|
|
||||||
expect(mockFindProjectRoot).toHaveBeenCalled();
|
|
||||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
|
||||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
|
||||||
expect.stringContaining('Could not determine project root')
|
|
||||||
);
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
// --- Getter Functions ---
|
|
||||||
describe('Getter Functions', () => {
|
|
||||||
test('getMainProvider should return provider from config', () => {
|
|
||||||
// Arrange: Set up readFileSync to return VALID_CUSTOM_CONFIG
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH)
|
|
||||||
return JSON.stringify(VALID_CUSTOM_CONFIG);
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
return JSON.stringify({
|
|
||||||
openai: [{ id: 'gpt-4o' }],
|
|
||||||
google: [{ id: 'gemini-1.5-pro-latest' }],
|
|
||||||
anthropic: [
|
|
||||||
{ id: 'claude-3-opus-20240229' },
|
|
||||||
{ id: 'claude-3-7-sonnet-20250219' },
|
|
||||||
{ id: 'claude-3-5-sonnet' }
|
|
||||||
],
|
|
||||||
perplexity: [{ id: 'sonar-pro' }],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
}); // Added perplexity
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const provider = configManager.getMainProvider(MOCK_PROJECT_ROOT);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(provider).toBe(VALID_CUSTOM_CONFIG.models.main.provider);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('getLogLevel should return logLevel from config', () => {
|
|
||||||
// Arrange: Set up readFileSync to return VALID_CUSTOM_CONFIG
|
|
||||||
mockReadFileSync.mockImplementation((filePath) => {
|
|
||||||
if (filePath === MOCK_CONFIG_PATH)
|
|
||||||
return JSON.stringify(VALID_CUSTOM_CONFIG);
|
|
||||||
if (path.basename(filePath) === 'supported-models.json') {
|
|
||||||
// Provide enough mock model data for validation within getConfig
|
|
||||||
return JSON.stringify({
|
|
||||||
openai: [{ id: 'gpt-4o' }],
|
|
||||||
google: [{ id: 'gemini-1.5-pro-latest' }],
|
|
||||||
anthropic: [
|
|
||||||
{ id: 'claude-3-opus-20240229' },
|
|
||||||
{ id: 'claude-3-7-sonnet-20250219' },
|
|
||||||
{ id: 'claude-3-5-sonnet' }
|
|
||||||
],
|
|
||||||
perplexity: [{ id: 'sonar-pro' }],
|
|
||||||
ollama: [],
|
|
||||||
openrouter: []
|
|
||||||
});
|
|
||||||
}
|
|
||||||
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
|
|
||||||
});
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const logLevel = configManager.getLogLevel(MOCK_PROJECT_ROOT);
|
|
||||||
|
|
||||||
// Assert
|
|
||||||
expect(logLevel).toBe(VALID_CUSTOM_CONFIG.global.logLevel);
|
|
||||||
});
|
|
||||||
|
|
||||||
// Add more tests for other getters (getResearchProvider, getProjectName, etc.)
|
|
||||||
});
|
|
||||||
|
|
||||||
// --- isConfigFilePresent Tests ---
|
|
||||||
describe('isConfigFilePresent', () => {
|
|
||||||
test('should return true if config file exists', () => {
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
expect(configManager.isConfigFilePresent(MOCK_PROJECT_ROOT)).toBe(true);
|
|
||||||
expect(mockExistsSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
|
|
||||||
});
|
|
||||||
|
|
||||||
test('should return false if config file does not exist', () => {
|
|
||||||
mockExistsSync.mockReturnValue(false);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
expect(configManager.isConfigFilePresent(MOCK_PROJECT_ROOT)).toBe(false);
|
|
||||||
expect(mockExistsSync).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
|
|
||||||
});
|
|
||||||
|
|
||||||
test.skip('should use findProjectRoot if explicitRoot is not provided', () => {
|
|
||||||
// TODO: Fix mock interaction, findProjectRoot isn't being registered as called
|
|
||||||
mockExistsSync.mockReturnValue(true);
|
|
||||||
// findProjectRoot mock set in beforeEach
|
|
||||||
expect(configManager.isConfigFilePresent()).toBe(true);
|
|
||||||
expect(mockFindProjectRoot).toHaveBeenCalled(); // Should be called now
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
// --- getAllProviders Tests ---
|
|
||||||
describe('getAllProviders', () => {
|
|
||||||
test('should return list of providers from supported-models.json', () => {
|
|
||||||
// Arrange: Ensure config is loaded with real data
|
|
||||||
configManager.getConfig(null, true); // Force load using the mock that returns real data
|
|
||||||
|
|
||||||
// Act
|
|
||||||
const providers = configManager.getAllProviders();
|
|
||||||
// Assert
|
|
||||||
// Assert against the actual keys in the REAL loaded data
|
|
||||||
const expectedProviders = Object.keys(REAL_SUPPORTED_MODELS_DATA);
|
|
||||||
expect(providers).toEqual(expect.arrayContaining(expectedProviders));
|
|
||||||
expect(providers.length).toBe(expectedProviders.length);
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
// Add tests for getParametersForRole if needed
|
|
||||||
|
|
||||||
// Note: Tests for setMainModel, setResearchModel were removed as the functions were removed in the implementation.
|
|
||||||
// If similar setter functions exist, add tests for them following the writeConfig pattern.
|
|
||||||
|
|
||||||
	// --- isApiKeySet Tests ---
	describe('isApiKeySet', () => {
		const mockSession = { env: {} }; // Mock session for MCP context

		// Test cases: [providerName, envVarName, keyValue, expectedResult, testName]
		const testCases = [
			// Valid Keys
			['anthropic', 'ANTHROPIC_API_KEY', 'sk-valid-key', true, 'valid Anthropic key'],
			['openai', 'OPENAI_API_KEY', 'sk-another-valid-key', true, 'valid OpenAI key'],
			['perplexity', 'PERPLEXITY_API_KEY', 'pplx-valid', true, 'valid Perplexity key'],
			['google', 'GOOGLE_API_KEY', 'google-valid-key', true, 'valid Google key'],
			['mistral', 'MISTRAL_API_KEY', 'mistral-valid-key', true, 'valid Mistral key'],
			['openrouter', 'OPENROUTER_API_KEY', 'or-valid-key', true, 'valid OpenRouter key'],
			['xai', 'XAI_API_KEY', 'xai-valid-key', true, 'valid XAI key'],
			['azure', 'AZURE_OPENAI_API_KEY', 'azure-valid-key', true, 'valid Azure key'],

			// Ollama (special case - no key needed)
			['ollama', 'OLLAMA_API_KEY', undefined, true, 'Ollama provider (no key needed)'], // OLLAMA_API_KEY might not be in keyMap

			// Invalid / Missing Keys
			['anthropic', 'ANTHROPIC_API_KEY', undefined, false, 'missing Anthropic key'],
			['anthropic', 'ANTHROPIC_API_KEY', null, false, 'null Anthropic key'],
			['openai', 'OPENAI_API_KEY', '', false, 'empty OpenAI key'],
			['perplexity', 'PERPLEXITY_API_KEY', ' ', false, 'whitespace Perplexity key'],

			// Placeholder Keys
			['google', 'GOOGLE_API_KEY', 'YOUR_GOOGLE_API_KEY_HERE', false, 'placeholder Google key (YOUR_..._HERE)'],
			['mistral', 'MISTRAL_API_KEY', 'MISTRAL_KEY_HERE', false, 'placeholder Mistral key (..._KEY_HERE)'],
			['openrouter', 'OPENROUTER_API_KEY', 'ENTER_OPENROUTER_KEY_HERE', false, 'placeholder OpenRouter key (general ...KEY_HERE)'],

			// Unknown provider
			['unknownprovider', 'UNKNOWN_KEY', 'any-key', false, 'unknown provider']
		];

		testCases.forEach(
			([providerName, envVarName, keyValue, expectedResult, testName]) => {
				test(`should return ${expectedResult} for ${testName} (CLI context)`, () => {
					// CLI context (resolveEnvVariable uses process.env or .env via projectRoot)
					mockResolveEnvVariable.mockImplementation((key) => {
						return key === envVarName ? keyValue : undefined;
					});
					expect(
						configManager.isApiKeySet(providerName, null, MOCK_PROJECT_ROOT)
					).toBe(expectedResult);
					if (providerName !== 'ollama' && providerName !== 'unknownprovider') {
						// Ollama and unknown don't try to resolve
						expect(mockResolveEnvVariable).toHaveBeenCalledWith(
							envVarName,
							null,
							MOCK_PROJECT_ROOT
						);
					}
				});

				test(`should return ${expectedResult} for ${testName} (MCP context)`, () => {
					// MCP context (resolveEnvVariable uses session.env)
					const mcpSession = { env: { [envVarName]: keyValue } };
					mockResolveEnvVariable.mockImplementation((key, sessionArg) => {
						return sessionArg && sessionArg.env ? sessionArg.env[key] : undefined;
					});
					expect(
						configManager.isApiKeySet(providerName, mcpSession, null)
					).toBe(expectedResult);
					if (providerName !== 'ollama' && providerName !== 'unknownprovider') {
						expect(mockResolveEnvVariable).toHaveBeenCalledWith(
							envVarName,
							mcpSession,
							null
						);
					}
				});
			}
		);

		test('isApiKeySet should log a warning for an unknown provider', () => {
			mockLog.mockClear(); // Clear previous log calls
			configManager.isApiKeySet('nonexistentprovider');
			expect(mockLog).toHaveBeenCalledWith(
				'warn',
				expect.stringContaining('Unknown provider name: nonexistentprovider')
			);
		});

		test('isApiKeySet should handle provider names case-insensitively for keyMap lookup', () => {
			mockResolveEnvVariable.mockReturnValue('a-valid-key');
			expect(
				configManager.isApiKeySet('Anthropic', null, MOCK_PROJECT_ROOT)
			).toBe(true);
			expect(mockResolveEnvVariable).toHaveBeenCalledWith(
				'ANTHROPIC_API_KEY',
				null,
				MOCK_PROJECT_ROOT
			);

			mockResolveEnvVariable.mockReturnValue('another-valid-key');
			expect(configManager.isApiKeySet('OPENAI', null, MOCK_PROJECT_ROOT)).toBe(
				true
			);
			expect(mockResolveEnvVariable).toHaveBeenCalledWith(
				'OPENAI_API_KEY',
				null,
				MOCK_PROJECT_ROOT
			);
		});
	});
});