Compare commits

...

106 Commits

Author SHA1 Message Date
Ralph Khreish
5f5e0c73ec fix: remove claude code clear tm commands 2025-08-11 18:57:20 +02:00
Ladi
782728ff95 feat: add --compact flag for minimal task list output (#1054)
* feat: add --compact flag for minimal task list output

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-11 18:35:23 +02:00
Fábio Vedovelli
30ca144231 feat: Add task id to task details UI (#1100)
* Display current task ID on task details page

* Changeset

* Implement CodeRabbit review suggestion.

* chore: fix CI errors

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-11 14:42:31 +02:00
Ralph Khreish
0220d0e994 chore: pimp my readme (#1122) 2025-08-11 14:29:49 +02:00
Ralph Khreish
41a8c2406a chore: add docs to monorepo (#1111) 2025-08-09 13:31:45 +02:00
github-actions[bot]
a003041cd8 Version Packages (#1107)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-08 22:00:39 +02:00
Ralph Khreish
6b57ead106 Release 0.24.0 #1098 from eyaltoledano/next
Release 0.24.0
2025-08-08 21:58:41 +02:00
Ralph Khreish
7b6e117b1d chore: prepare for release (exit pre mode) 2025-08-08 21:21:25 +02:00
Ralph Khreish
03b045e9cd chore: run format 2025-08-08 21:18:33 +02:00
neonwatty
699afdae59 feat: add task-checker agent to assets directory 2025-08-08 08:47:20 -07:00
github-actions[bot]
80c09802e8 chore: rc version bump 2025-08-08 12:41:29 +00:00
github-actions[bot]
cf8f0f4b1c docs: Auto-update and format models.md 2025-08-08 12:38:58 +00:00
Ralph Khreish
75c514cf5b feat: add gpt-5 support (#1105)
* feat: add gpt-5 support
2025-08-08 14:38:44 +02:00
Ralph Khreish
41d1e671b1 chore: fix CI checker, improve it (#1099) 2025-08-07 15:52:49 +02:00
Ralph Khreish
a464e550b8 feat(extension): implement simple solution to --package flag (#1090)
* feat(extension): implement simple solution to --package flag
2025-08-07 15:10:34 +02:00
Ralph Khreish
3a852afdae chore: implement pre-release for extensions (#1097)
* chore: implement pre-release for extensions

* chore: run format
2025-08-07 15:08:14 +02:00
Ralph Khreish
4bb63706b8 feat: implement claude code agents (#1091)
* feat: implement claude code agents

* chore: add changeset

- run format

* feat: improve task-checker, executor, and orchestrator

* chore: improve changeset
2025-08-07 12:37:06 +02:00
Ralph Khreish
fcf14e09be chore: improve release check for release-check and release flow (#1095)
* chore: improve release check for release-check and release flow

* chore: fix format
2025-08-06 23:19:28 +02:00
Ralph Khreish
4357af3f13 fix(expand-task): include parent task context in complexity report variant (#1094)
- Fixed bug where expand task generated generic authentication subtasks
- The complexity-report prompt variant now includes parent task details
- Added comprehensive unit tests to prevent regression
- Added debug logging to help diagnose similar issues

Previously, when using a complexity report with expansionPrompt, only the
expansion guidance was sent to the AI, missing the actual task context.
This caused the AI to generate unrelated generic subtasks.

Fixes the issue where all tasks would get the same generic auth-related
subtasks regardless of their actual purpose (AWS infrastructure, Docker
containerization, etc.)

Co-authored-by: Sadaqat Ali <32377500+sadaqat12@users.noreply.github.com>
2025-08-06 21:00:32 +02:00
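A minimal sketch of the prompt assembly the expand-task fix above describes: combining the parent task's own details with the complexity report's expansionPrompt instead of sending the guidance alone. The function and field names here (buildExpansionPrompt, expansionPrompt, details) are illustrative assumptions, not the project's actual identifiers.

```js
// Illustrative sketch only: combine the parent task's own context with the
// complexity report's expansion guidance before calling the AI.
function buildExpansionPrompt(parentTask, complexityEntry) {
	const taskContext = [
		`Task ${parentTask.id}: ${parentTask.title}`,
		parentTask.description ?? '',
		parentTask.details ?? ''
	]
		.filter(Boolean)
		.join('\n');

	// Previously only the expansionPrompt guidance was sent, so every task
	// received the same generic subtasks regardless of its actual content.
	return `${taskContext}\n\nExpansion guidance:\n${complexityEntry.expansionPrompt}`;
}

// Example with made-up data:
console.log(
	buildExpansionPrompt(
		{ id: 12, title: 'Provision AWS infrastructure', details: 'Use Terraform modules' },
		{ expansionPrompt: 'Break this into 4 concrete subtasks.' }
	)
);
```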
Ralph Khreish
59f7676051 feat: add prompt for claude code to include context when generating tasks (#1092) 2025-08-06 12:41:19 +02:00
Ralph Khreish
36468f3c93 feat: add prompt for claude code to include context when generating tasks 2025-08-06 12:17:49 +02:00
Ralph Khreish
ca4d93ee6a chore: prepare prd for tm-core creation phase 1 (#1045) 2025-08-06 10:10:29 +02:00
Ralph Khreish
37fb569a62 Release 0.23.1 #1084 from eyaltoledano/next
chore: exit pre-release mode
2025-08-04 20:43:46 +03:00
Ralph Khreish
ed0d4e6641 chore: implement requested changes 2025-08-04 14:40:11 +03:00
Ralph Khreish
5184f8e7b2 chore: implement requested changes 2025-08-04 14:37:24 +03:00
Ralph Khreish
587523a23b chore: improve release-check CI 2025-08-04 14:29:03 +03:00
Ralph Khreish
7a50f0c6ec chore: add release-check CI to check if we are in release mode (#1086) 2025-08-04 13:22:46 +02:00
Ralph Khreish
adeb76ee15 chore: exit pre-release mode 2025-08-04 10:02:00 +03:00
Ralph Khreish
d342070375 Merge pull request #1081 from eyaltoledano/next 2025-08-03 21:38:19 +03:00
github-actions[bot]
5e4dbac525 chore: rc version bump 2025-08-03 14:07:15 +00:00
Ralph Khreish
fb15c2eaf7 chore: improve pre-release CI to just use dropdown (#1080)
* chore: improve pre-release CI to just use dropdown

* chore: implement requested changes
2025-08-03 16:04:29 +02:00
Ralph Khreish
e8ceb08341 chore: improve release and pre-release CI (#1075)
* chore: improve release and pre-release CI

* chore: apply requested changes

* Update .github/workflows/pre-release.yml

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: apply requested changes

* chore: apply requested changes

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-08-03 15:17:18 +02:00
Ralph Khreish
e495b2b559 feat: improve scope up and down command & parse-prd improvements (#1079)
* feat: improve scope up and down command & parse-prd improvements

* chore: run format
2025-08-03 14:12:46 +02:00
Ralph Khreish
e0d1d03f33 chore: fix tag-extension and improve debugging (#1074) 2025-08-02 23:10:57 +02:00
Ralph Khreish
4a4bca905d chore: fix tag-extension package.json not found for extension (#1073)
* chore: fix tag-extension package.json not found for extension

* chore: fix format
2025-08-02 22:56:53 +02:00
github-actions[bot]
9d5f50ac8e Version Packages (#1072)
* Version Packages

* chore: add eyal instead of crunchyman to eyal's feature

* chore: run format

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-02 22:43:08 +02:00
Ralph Khreish
bbeaa9163a chore: exit pre (#1071) 2025-08-02 22:35:46 +02:00
Ralph Khreish
a4a172be94 Release 0.23.0 #1064 from eyaltoledano/next
Release 0.23.0
2025-08-02 23:18:05 +03:00
github-actions[bot]
028ed9c444 chore: rc version bump 2025-08-02 20:07:43 +00:00
Ralph Khreish
53903f1e8e chore: trick rc bump 2025-08-02 23:06:37 +03:00
github-actions[bot]
36c56231cc chore: rc version bump 2025-08-02 20:04:33 +00:00
Ralph Khreish
b82d858f81 chore: rename changeset for rc bump 2025-08-02 23:03:10 +03:00
Ralph Khreish
9808967d6b chore: change from patch to minor 2025-08-02 23:00:56 +03:00
Ralph Khreish
3fee7515f3 fix: fix mcp tool call in extension (#1070)
* fix: fix mcp tool call in extension

- fix console.log directly being used in scope-adjutment.js breaking mcp

* chore: run format and fix tests

* chore: format
2025-08-02 21:59:02 +02:00
github-actions[bot]
82b17bdb57 chore: rc version bump 2025-08-02 16:44:19 +00:00
Eyal Toledano
72ca68edeb Task 104: Implement 'scope-up' and 'scope-down' CLI Commands for Dynamic Task Complexity Adjustment (#1069)
* feat(task-104): Complete task 104 - Implement scope-up and scope-down CLI Commands

- Added new CLI commands 'scope-up' and 'scope-down' with comma-separated ID support
- Implemented strength levels (light/regular/heavy) and custom prompt functionality
- Created core complexity adjustment logic with AI integration
- Added MCP tool equivalents for integrated environments
- Comprehensive error handling and task validation
- Full test coverage with TDD approach
- Updated task manager core and UI components

Task 104: Implement 'scope-up' and 'scope-down' CLI Commands for Dynamic Task Complexity Adjustment - Complete implementation with CLI, MCP integration, and testing

* chore: Add changeset for scope-up and scope-down features

- Comprehensive user-facing description with usage examples
- Key features and benefits explanation
- CLI and MCP integration details
- Real-world use cases for agile workflows

* feat(extension): Add scope-up and scope-down to VS Code extension task details

- Added useScopeUpTask and useScopeDownTask hooks in useTaskQueries.ts
- Enhanced AIActionsSection with Task Complexity Adjustment section
- Added strength selection (light/regular/heavy) and custom prompt support
- Integrated scope buttons with proper loading states and error handling
- Uses existing mcpRequest handler for scope_up_task and scope_down_task tools
- Maintains consistent UI patterns with existing AI actions

Extension now supports dynamic task complexity adjustment directly from task details view.
2025-08-02 18:43:04 +02:00
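A rough sketch, under assumed names, of how the strength levels (light/regular/heavy) and optional custom prompt described in the scope-up/scope-down commit above could shape the adjustment instruction; the shipped implementation may build its prompts differently.

```js
// Hypothetical sketch: map a strength level to an adjustment instruction.
const STRENGTH_INSTRUCTIONS = {
	light: 'Make a small adjustment to the task scope.',
	regular: 'Make a moderate adjustment to the task scope.',
	heavy: 'Make a substantial adjustment to the task scope.'
};

function buildScopeAdjustmentPrompt(direction, strength = 'regular', customPrompt = '') {
	const base =
		direction === 'up'
			? 'Expand the scope of this task.'
			: 'Reduce the scope of this task.';
	const strengthNote = STRENGTH_INSTRUCTIONS[strength] ?? STRENGTH_INSTRUCTIONS.regular;
	return [base, strengthNote, customPrompt].filter(Boolean).join(' ');
}

// Roughly what `task-master scope-up --id=4,7 --strength=heavy` might assemble:
console.log(buildScopeAdjustmentPrompt('up', 'heavy'));
```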
DavidMaliglowka
64302dc191 feat(extension): complete VS Code extension with kanban board interface (#997)
---------
Co-authored-by: DavidMaliglowka <13022280+DavidMaliglowka@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-08-01 14:04:22 +02:00
github-actions[bot]
60c03c548d chore: rc version bump 2025-07-31 22:15:10 +00:00
Ralph Khreish
2ae6e7e6be fix: normalize task IDs to numbers on load to fix comparison issues (#1063)
* fix: normalize task IDs to numbers on load to fix comparison issues

When tasks.json contains string IDs (e.g., "5" instead of 5), task lookups
fail because the code uses parseInt() and strict equality (===) for comparisons.

This fix normalizes all task and subtask IDs to numbers when loading the JSON,
ensuring consistent comparisons throughout the codebase without requiring
changes to multiple comparison locations.

Fixes task not found errors when using string IDs in tasks.json.

* Added test

* Don't mess up formatting

* Fix formatting once and for all

* Update scripts/modules/utils.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/utils.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/utils.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: normalize task IDs to numbers on load to fix comparison issues

- Added normalizeTaskIds function to convert string IDs to numbers
- Applied normalization in readJSON for all code paths
- Fixed set-task-status, add-task, and move-task to normalize IDs when working with raw data
- Exported normalizeTaskIds function for use in other modules
- Added test case for string ID normalization

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Simplified implementation

* refactor: normalize IDs once when loading JSON instead of scattered calls

- Normalize all tags' data when creating _rawTaggedData in readJSON
- Add support for handling malformed dotted subtask IDs (e.g., "5.1" -> 1)
- Remove redundant normalizeTaskIds calls from set-task-status, add-task, and move-task
- Add comprehensive test for mixed ID formats (string IDs and dotted notation)
- Cleaner, more maintainable solution that normalizes IDs at load time

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: run format to resolve CI issues

---------

Co-authored-by: Carl Mercier <carl@carlmercier.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-01 00:06:51 +02:00
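A simplified sketch of the load-time normalization the commit above describes: string IDs coerced to numbers, and malformed dotted subtask IDs such as "5.1" reduced to their final segment. The real normalizeTaskIds in scripts/modules/utils.js may cover more cases.

```js
// Simplified sketch of load-time task ID normalization.
function normalizeTaskIds(tasks) {
	for (const task of tasks) {
		task.id = toNumericId(task.id);
		if (Array.isArray(task.subtasks)) {
			for (const subtask of task.subtasks) {
				subtask.id = toNumericId(subtask.id);
			}
		}
	}
	return tasks;
}

function toNumericId(id) {
	if (typeof id === 'number') return id;
	// Malformed dotted subtask IDs such as "5.1" keep only the last segment.
	const lastSegment = String(id).split('.').pop();
	return Number.parseInt(lastSegment, 10);
}

// "5" becomes 5 and "5.1" becomes 1, so strict === comparisons work again.
console.log(normalizeTaskIds([{ id: '5', subtasks: [{ id: '5.1' }] }]));
```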
Ben Vargas
45a14c323d fix: remove default value from complexity report option to enable tag-specific detection (#1049)
Removes the default empty array value from the complexity report option to properly detect when tags are explicitly provided vs when no tags are provided, fixing the expand --all command behavior with tagged tasks.

Co-authored-by: Ben Vargas <ben@example.com>
2025-07-26 15:26:45 +02:00
github-actions[bot]
29e67fafa4 Version Packages (#1050)
* Version Packages

* chore: fix release 0.22

todo: fix CI

* chore: run format

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-26 00:45:07 +02:00
Ralph Khreish
43e4d7c9d3 Release 0.22 #1038 from eyaltoledano/next
Release 0.22
2025-07-26 01:24:38 +03:00
Ralph Khreish
1bd1e64cac feat: add pull request templates for bug fixes, features, and integrations (#1044)
* feat: add pull request templates for bug fixes, features, and integrations

- Introduced a comprehensive pull request template structure to streamline contributions.
- Added specific templates for bug fixes, new features, and integrations to enhance clarity and consistency in PR submissions.
- Configured the pull request template settings for better user guidance during the contribution process.

* chore: fix format

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: implement PR requested changes

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-24 17:33:43 +02:00
Ralph Khreish
dc44ed9de8 feat: Improved the CLI user interface to suggest generating a complexity report if it is missing, instead of showing an error. (#1043)
* feat: fix CLI UI error when trying to display non-existent complexity report

* chore: fix git issue

* chore: run format

* Update .changeset/thick-squids-attend.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .changeset/thick-squids-attend.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update .changeset/thick-squids-attend.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-24 15:50:18 +02:00
github-actions[bot]
31b8407dbc chore: rc version bump 2025-07-23 16:29:49 +00:00
Ralph Khreish
2df4f13f65 chore: improve pre-release CI to be able to release more than one release candidate (#1036)
* chore: improve pre-release CI to be able to release more than one release candidate

* chore: implement requested changes from coderabbit

* chore: apply requested changes
2025-07-23 18:28:17 +02:00
github-actions[bot]
a37017e5a5 docs: Auto-update and format models.md 2025-07-23 16:03:40 +00:00
Ralph Khreish
fb7d588137 feat: improve config-manager max tokens for openrouter and kimi-k2 model (#1035) 2025-07-23 18:03:26 +02:00
Ralph Khreish
bdb11fb2db chore: remove useless file 2025-07-23 18:04:13 +03:00
Ralph Khreish
4423119a5e feat: Add Kiro hooks and configuration for Taskmaster integration (#1032)
* feat: Add Kiro hooks and configuration for Taskmaster integration

- Introduced multiple Kiro hooks to automate task management workflows, including:
  - Code Change Task Tracker
  - Complexity Analyzer
  - Daily Standup Assistant
  - Git Commit Task Linker
  - Import Cleanup on Delete
  - New File Boilerplate
  - PR Readiness Checker
  - Task Dependency Auto-Progression
  - Test Success Task Completer
- Added .mcp.json configuration for Taskmaster AI integration.
- Updated development workflow documentation to reflect new hook-driven processes and best practices.

This commit enhances the automation capabilities of Taskmaster, streamlining task management and improving developer efficiency.

* chore: run format

* chore: improve unit tests on kiro rules

* chore: run format

* chore: run format

* feat: improve PR and add changeset
2025-07-23 17:02:16 +02:00
Ben Vargas
7b90568326 fix: bump ai-sdk-provider-gemini-cli to v0.1.1 (#1033)
* fix: bump ai-sdk-provider-gemini-cli to v0.1.1

Updates ai-sdk-provider-gemini-cli from v0.0.4 to v0.1.1 to fix a breaking change
introduced in @google/gemini-cli-core v0.1.12+ where createContentGeneratorConfig
signature changed, causing "config.getModel is not a function" errors.

The new version includes:
- Fixed compatibility with @google/gemini-cli-core ^0.1.13
- Added proxy support via configuration
- Resolved the breaking API change

Fixes compatibility issues when using newer versions of gemini-cli-core.

See: https://github.com/ben-vargas/ai-sdk-provider-gemini-cli/releases/tag/v0.1.1

* chore: fix package-lock.json being too big

---------

Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-23 17:01:59 +02:00
github-actions[bot]
9b0630fdf1 docs: Auto-update and format models.md 2025-07-22 18:15:35 +00:00
Parthy
ced04bddd3 docs(models): update model configuration to add supported field (#1030) 2025-07-22 20:15:22 +02:00
Andre Silva
6ae66b2afb fix(profiles): fix vscode profile generation (#1027)
* fix(profiles): fix vscode profile generation

- Add .instructions.md extension for VSCode Copilot instructions file.
- Add customReplacement to remove unsupported property `alwaysApply` from YAML front-matter in VSCode instructions files.
- Add missing property `targetExtension` to the base profile object to
  support the change to file extension.

* chore: run format

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-21 21:17:57 +02:00
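One way the customReplacement described in the VS Code profile fix above could strip the unsupported front-matter key; the option names and profile shape shown here are assumptions for illustration, not the actual profile API.

```js
// Hypothetical profile fragment: drop `alwaysApply` from YAML front-matter
// when generating VS Code Copilot .instructions.md files.
const vscodeProfile = {
	targetExtension: '.instructions.md',
	customReplacements: [
		{
			// Remove the whole `alwaysApply: ...` line from the front-matter block.
			from: /^alwaysApply:.*\n/m,
			to: ''
		}
	]
};

function applyReplacements(content, profile) {
	return profile.customReplacements.reduce(
		(text, { from, to }) => text.replace(from, to),
		content
	);
}

const sample = '---\ndescription: Taskmaster rules\nalwaysApply: true\n---\nBody';
console.log(applyReplacements(sample, vscodeProfile));
```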
Joe Danziger
8781794c56 fix: Clean up remaining automatic task file generation calls (#1025)
* Don't generate task files unless requested

* add changeset

* switch to optional generate flag instead of skip-generate based on new default

* switch generate default to false and update flags and docs

* revert DO/DON'T section

* use simpler non ANSI-C quoting
2025-07-21 21:15:53 +02:00
Ralph Khreish
fede909fe1 chore: fix package-lock.json after new release 2025-07-21 22:13:51 +03:00
Parthy
77cc5e4537 Fix: Correct tag handling for expand --all and show commands (#1026)
Fix: Correct tag handling for expand --all and show commands
2025-07-21 22:13:22 +03:00
Ralph Khreish
d31ef7a39c Merge pull request #1015 from eyaltoledano/changeset-release/main
Version Packages
2025-07-20 00:57:39 +03:00
github-actions[bot]
66555099ca Version Packages 2025-07-19 21:56:03 +00:00
Ralph Khreish
1e565eab53 Release 0.21 (#1009)
* fix: prevent CLAUDE.md overwrite by using imports (#949)

* fix: prevent CLAUDE.md overwrite by using imports

- Copy Task Master instructions to .taskmaster/CLAUDE.md
- Add import section to user's CLAUDE.md instead of overwriting
- Preserve existing user content
- Clean removal of Task Master content on uninstall

Closes #929

* chore: add changeset for Claude import fix

* fix: task master (tm) custom slash commands w/ proper syntax (#968)

* feat: add task master (tm) custom slash commands

Add comprehensive task management system integration via custom slash commands.
Includes commands for:
- Project initialization and setup
- Task parsing from PRD documents
- Task creation, update, and removal
- Subtask management
- Dependency tracking and validation
- Complexity analysis and task expansion
- Project status and reporting
- Workflow automation

This provides a complete task management workflow directly within Claude Code.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: add changeset

---------

Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>

* chore: create extension scaffolding (#989)

* chore: create extension scaffolding

* chore: fix workspace for changeset

* chore: fix package-lock

* feat(profiles): Add MCP configuration to Claude Code rules (#980)

* add .mcp.json with claude profile

* add changeset

* update changeset

* update test

* fix: show command no longer requires complexity report to exist (#979)

Co-authored-by: Ben Vargas <ben@example.com>

* feat: complete Groq provider integration and add Kimi K2 model (#978)

* feat: complete Groq provider integration and add Kimi K2 model

- Add missing getRequiredApiKeyName() method to GroqProvider class
- Register GroqProvider in ai-services-unified.js PROVIDERS object
- Add Groq API key handling to config-manager.js (isApiKeySet and getMcpApiKeyStatus)
- Add GROQ_API_KEY to env.example with format hint
- Add moonshotai/kimi-k2-instruct model to Groq provider ($1/$3 per 1M tokens, 16k max)
- Fix import sorting for linting compliance
- Add GroqProvider mock to ai-services-unified tests

Fixes missing implementation pieces that prevented Groq provider from working.

* chore: improve changeset

---------

Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>

* docs: Auto-update and format models.md

* feat: Add Amp rule profile with AGENT.md and MCP config (#973)

* Amp profile + tests

* generalize to Agent instead of Claude Code to support any agent

* add changeset

* unnecessary tab formatting

* fix exports

* fix formatting

* feat: Add Zed editor rule profile with agent rules and MCP config (#974)

* zed profile

* add changeset

* update changeset

* fix: Add missing API keys to .env.example and README.md (#972)

* add OLLAMA_API_KEY

* add missing API keys

* add changeset

* update keys and fix OpenAI comment

* chore: create extension scaffolding (#989)

* chore: create extension scaffolding

* chore: fix workspace for changeset

* chore: fix package-lock

* feat(profiles): Add MCP configuration to Claude Code rules (#980)

* add .mcp.json with claude profile

* add changeset

* update changeset

* update test

* fix: show command no longer requires complexity report to exist (#979)

Co-authored-by: Ben Vargas <ben@example.com>

* feat: complete Groq provider integration and add Kimi K2 model (#978)

* feat: complete Groq provider integration and add Kimi K2 model

- Add missing getRequiredApiKeyName() method to GroqProvider class
- Register GroqProvider in ai-services-unified.js PROVIDERS object
- Add Groq API key handling to config-manager.js (isApiKeySet and getMcpApiKeyStatus)
- Add GROQ_API_KEY to env.example with format hint
- Add moonshotai/kimi-k2-instruct model to Groq provider ($1/$3 per 1M tokens, 16k max)
- Fix import sorting for linting compliance
- Add GroqProvider mock to ai-services-unified tests

Fixes missing implementation pieces that prevented Groq provider from working.

* chore: improve changeset

---------

Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>

* docs: Auto-update and format models.md

* feat: Add Amp rule profile with AGENT.md and MCP config (#973)

* Amp profile + tests

* generalize to Agent instead of Claude Code to support any agent

* add changeset

* unnecessary tab formatting

* fix exports

* fix formatting

* feat: Add Zed editor rule profile with agent rules and MCP config (#974)

* zed profile

* add changeset

* update changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Ben Vargas <ben@vargas.com>
Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* feat: Add OpenCode rule profile with AGENTS.md and MCP config (#970)

* add opencode to profile lists

* add opencode profile / modify mcp config after add

* add changeset

* not necessary; main config being updated

* add issue link

* add/fix tests

* fix url and docsUrl

* update test for new urls

* fix formatting

* update/fix tests

* chore: add coderabbit configuration (#992)

* chore: add coderabbit configuration

* chore: fix coderabbit config

* chore: improve coderabbit config

* chore: more coderabbit reviews

* chore: remove all defaults

* docs: Update MCP server name for consistency and use 'Add to Cursor' button (#995)

* update MCP server name to task-master-ai for consistency

* add changeset

* update cursor link & switch to https

* switch back to Add to Cursor button (https link)

* update changeset

* update changeset

* update changeset

* update changeset

* use GitHub markdown format

* fix(ai-validation): comprehensive fixes for AI response validation issues (#1000)

* fix(ai-validation): comprehensive fixes for AI response validation issues

  - Fix update command validation when AI omits subtasks/status/dependencies
  - Fix add-task command when AI returns non-string details field
  - Fix update-task command when AI subtasks miss required fields
  - Add preprocessing to ensure proper field types before validation
  - Prevent split() errors on non-string fields
  - Set proper defaults for missing required fields

* chore: run format

* chore: implement coderabbit suggestions

* feat: add kiro profile (#1001)

* feat: add kiro profile

* chore: fix format

* chore: implement requested changes

* chore: fix CI

* refactor: remove unused resource and resource template initialization (#1002)

* refactor: remove unused resource and resource template initialization

* chore: implement requested changes

* fix(core): Implement Boundary-First Tag Resolution (#943)

* refactor(context): Standardize tag and projectRoot handling across all task tools

This commit unifies context management by adopting a boundary-first resolution strategy. All task-scoped tools now resolve `tag` and `projectRoot` at their entry point and forward these values to the underlying direct functions.

This approach centralizes context logic, ensuring consistent behavior and enhanced flexibility in multi-tag environments.

* fix(tag): Clean up tag handling in task functions and sync process

This commit refines the handling of the `tag` parameter across multiple functions, ensuring consistent context management. The `tag` is now passed more efficiently in `listTasksDirect`, `setTaskStatusDirect`, and `syncTasksToReadme`, improving clarity and reducing redundancy. Additionally, a TODO comment has been added in `sync-readme.js` to address future tag support enhancements.

* feat(tag): Implement Boundary-First Tag Resolution for consistent tag handling

This commit introduces Boundary-First Tag Resolution in the task manager, ensuring consistent and deterministic tag handling across CLI and MCP. This change resolves potential race conditions and improves the reliability of tag-specific operations.

Additionally, the `expandTask` function has been updated to use the resolved tag when writing JSON, enhancing data integrity during task updates.

* chore(biome): formatting

* fix(expand-task): Update writeJSON call to use tag instead of resolvedTag

* fix(commands): Enhance complexity report path resolution and task initialization
`resolveComplexityReportPath` function to streamline output path generation based on tag context and user-defined output.
- Improved clarity and maintainability of command handling by centralizing path resolution logic.

* Fix: unknown currentTag

* fix(task-manager): Update generateTaskFiles calls to include tag and projectRoot parameters

This commit modifies the `moveTask` and `updateSubtaskById` functions to pass the `tag` and `projectRoot` parameters to the `generateTaskFiles` function. This ensures that task files are generated with the correct context when requested, enhancing consistency in task management operations.

* fix(commands): Refactor tag handling and complexity report path resolution
This commit updates the `registerCommands` function to utilize `taskMaster.getCurrentTag()` for consistent tag retrieval across command actions. It also enhances the initialization of `TaskMaster` by passing the tag directly, improving clarity and maintainability. The complexity report path resolution is streamlined to ensure correct file naming based on the current tag context.

* fix(task-master): Update complexity report path expectations in tests
This commit modifies the `initTaskMaster` test to expect a valid string for the complexity report path, ensuring it matches the expected file naming convention. This change enhances test reliability by verifying the correct output format when the path is generated.

* fix(set-task-status): Enhance logging and tag resolution in task status updates
This commit improves the logging output in the `registerSetTaskStatusTool` function to include the tag context when setting task statuses. It also updates the tag handling by resolving the tag using the `resolveTag` utility, ensuring that the correct tag is used when updating task statuses. Additionally, the `setTaskStatus` function is modified to remove the tag parameter from the `readJSON` and `writeJSON` calls, streamlining the data handling process.

* fix(commands, expand-task, task-manager): Add complexity report option and enhance path handling
This commit introduces a new `--complexity-report` option in the `registerCommands` function, allowing users to specify a custom path for the complexity report. The `expandTask` function is updated to accept the `complexityReportPath` from the context, ensuring it is utilized correctly during task expansion. Additionally, the `setTaskStatus` function now includes the `tag` parameter in the `readJSON` and `writeJSON` calls, improving task status updates with proper context. The `initTaskMaster` function is also modified to create parent directories for output paths, enhancing file handling robustness.

* fix(expand-task): Add complexityReportPath to context for task expansion tests

This commit updates the test for the `expandTask` function by adding the `complexityReportPath` to the context object. This change ensures that the complexity report path is correctly utilized in the test, aligning with recent enhancements to complexity report handling in the task manager.

* chore: implement suggested changes

* fix(parse-prd): Clarify tag parameter description for task organization
Updated the documentation for the `tag` parameter in the `parse-prd.js` file to provide a clearer context on its purpose for organizing tasks into separate task lists.

* Fix Inconsistent tag resolution pattern.

* fix: Enhance complexity report path handling with tag support

This commit updates various functions to incorporate the `tag` parameter when resolving complexity report paths. The `expandTaskDirect`, `resolveComplexityReportPath`, and related tools now utilize the current tag context, improving consistency in task management. Additionally, the complexity report path is now correctly passed through the context in the `expand-task` and `set-task-status` tools, ensuring accurate report retrieval based on the active tag.

* Updated the JSDoc for the `tag` parameter in the `show-task.js` file.

* Remove redundant comment on tag parameter in readJSON call

* Remove unused import for getTagAwareFilePath

* Add missed complexityReportPath to args for task expansion

* fix(tests): Enhance research tests with tag-aware functionality

This commit updates the `research.test.js` file to improve the testing of the `performResearch` function by incorporating tag-aware functionality. Key changes include mocking the `findProjectRoot` to return a valid path, enhancing the `ContextGatherer` and `FuzzyTaskSearch` mocks, and adding comprehensive tests for tag parameter handling in various scenarios. The tests now cover passing different tag values, ensuring correct behavior when tags are provided, undefined, or null, and validating the integration of tags in task discovery and context gathering processes.

* Remove unused import for

* fix: Refactor complexity report path handling and improve argument destructuring

This commit enhances the `expandTaskDirect` function by improving the destructuring of arguments for better readability. It also updates the `analyze.js` and `analyze-task-complexity.js` files to utilize the new `resolveComplexityReportOutputPath` function, ensuring tag-aware resolution of output paths. Additionally, logging has been added to provide clarity on the report path being used.

* test: Add complexity report tag isolation tests and improve path handling

This commit introduces a new test file for complexity report tag isolation, ensuring that different tags maintain separate complexity reports. It enhances the existing tests in `analyze-task-complexity.test.js` by updating expectations to use `expect.stringContaining` for file paths, improving robustness against path changes. The new tests cover various scenarios, including path resolution and report generation for both master and feature tags, ensuring no cross-tag contamination occurs.

* Update scripts/modules/task-manager/list-tasks.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/task-manager/list-tasks.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* test(complexity-report): Fix tag slugification in filename expectations

- Update mocks to use slugifyTagForFilePath for cross-platform compatibility
- Replace raw tag values with slugified versions in expected filenames
- Fix test expecting 'feature/user-auth-v2' to expect 'feature-user-auth-v2'
- Align test with actual filename generation logic that sanitizes special chars

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: Update VS Code profile with MCP config transformation (#971)

* remove dash in server name

* add OLLAMA_API_KEY to VS Code MCP instructions

* transform vscode mcp to correct format

* add changeset

* switch back to task-master-ai

* use task-master-ai

* Batch fixes before release (#1011)

* fix: improve projectRoot

* fix: improve task-master lang command

* feat: add documentation to the readme so more people can access it

* fix: expand command subtask dependency validation

* fix: update command more reliable with perplexity and other models

* chore: fix CI

* chore: implement requested changes

* chore: fix CI

* chore: fix changeset release for extension package (#1012)

* chore: fix changeset release for extension package

* chore: fix CI

* chore: rc version bump

* chore: adjust kimi k2 max tokens (#1014)

* docs: Auto-update and format models.md

---------

Co-authored-by: Ben Vargas <ben@vargas.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-20 00:55:29 +03:00
github-actions[bot]
d87a7f1076 docs: Auto-update and format models.md 2025-07-20 00:51:41 +03:00
Ralph Khreish
5b3dd3f29b chore: adjust kimi k2 max tokens (#1014) 2025-07-20 00:51:41 +03:00
github-actions[bot]
b7804302a1 chore: rc version bump 2025-07-20 00:51:41 +03:00
Ralph Khreish
b2841c261f chore: fix changeset release for extension package (#1012)
* chore: fix changeset release for extension package

* chore: fix CI
2025-07-20 00:51:41 +03:00
Ralph Khreish
444aa5ae19 Batch fixes before release (#1011)
* fix: improve projectRoot

* fix: improve task-master lang command

* feat: add documentation to the readme so more people can access it

* fix: expand command subtask dependency validation

* fix: update command more reliable with perplexity and other models

* chore: fix CI

* chore: implement requested changes

* chore: fix CI
2025-07-20 00:51:41 +03:00
Joe Danziger
858d4a1c54 fix: Update VS Code profile with MCP config transformation (#971)
* remove dash in server name

* add OLLAMA_API_KEY to VS Code MCP instructions

* transform vscode mcp to correct format

* add changeset

* switch back to task-master-ai

* use task-master-ai
2025-07-20 00:51:41 +03:00
Parthy
fd005c4c54 fix(core): Implement Boundary-First Tag Resolution (#943)
* refactor(context): Standardize tag and projectRoot handling across all task tools

This commit unifies context management by adopting a boundary-first resolution strategy. All task-scoped tools now resolve `tag` and `projectRoot` at their entry point and forward these values to the underlying direct functions.

This approach centralizes context logic, ensuring consistent behavior and enhanced flexibility in multi-tag environments.

* fix(tag): Clean up tag handling in task functions and sync process

This commit refines the handling of the `tag` parameter across multiple functions, ensuring consistent context management. The `tag` is now passed more efficiently in `listTasksDirect`, `setTaskStatusDirect`, and `syncTasksToReadme`, improving clarity and reducing redundancy. Additionally, a TODO comment has been added in `sync-readme.js` to address future tag support enhancements.

* feat(tag): Implement Boundary-First Tag Resolution for consistent tag handling

This commit introduces Boundary-First Tag Resolution in the task manager, ensuring consistent and deterministic tag handling across CLI and MCP. This change resolves potential race conditions and improves the reliability of tag-specific operations.

Additionally, the `expandTask` function has been updated to use the resolved tag when writing JSON, enhancing data integrity during task updates.

* chore(biome): formatting

* fix(expand-task): Update writeJSON call to use tag instead of resolvedTag

* fix(commands): Enhance complexity report path resolution and task initialization
`resolveComplexityReportPath` function to streamline output path generation based on tag context and user-defined output.
- Improved clarity and maintainability of command handling by centralizing path resolution logic.

* Fix: unknown currentTag

* fix(task-manager): Update generateTaskFiles calls to include tag and projectRoot parameters

This commit modifies the `moveTask` and `updateSubtaskById` functions to pass the `tag` and `projectRoot` parameters to the `generateTaskFiles` function. This ensures that task files are generated with the correct context when requested, enhancing consistency in task management operations.

* fix(commands): Refactor tag handling and complexity report path resolution
This commit updates the `registerCommands` function to utilize `taskMaster.getCurrentTag()` for consistent tag retrieval across command actions. It also enhances the initialization of `TaskMaster` by passing the tag directly, improving clarity and maintainability. The complexity report path resolution is streamlined to ensure correct file naming based on the current tag context.

* fix(task-master): Update complexity report path expectations in tests
This commit modifies the `initTaskMaster` test to expect a valid string for the complexity report path, ensuring it matches the expected file naming convention. This change enhances test reliability by verifying the correct output format when the path is generated.

* fix(set-task-status): Enhance logging and tag resolution in task status updates
This commit improves the logging output in the `registerSetTaskStatusTool` function to include the tag context when setting task statuses. It also updates the tag handling by resolving the tag using the `resolveTag` utility, ensuring that the correct tag is used when updating task statuses. Additionally, the `setTaskStatus` function is modified to remove the tag parameter from the `readJSON` and `writeJSON` calls, streamlining the data handling process.

* fix(commands, expand-task, task-manager): Add complexity report option and enhance path handling
This commit introduces a new `--complexity-report` option in the `registerCommands` function, allowing users to specify a custom path for the complexity report. The `expandTask` function is updated to accept the `complexityReportPath` from the context, ensuring it is utilized correctly during task expansion. Additionally, the `setTaskStatus` function now includes the `tag` parameter in the `readJSON` and `writeJSON` calls, improving task status updates with proper context. The `initTaskMaster` function is also modified to create parent directories for output paths, enhancing file handling robustness.

* fix(expand-task): Add complexityReportPath to context for task expansion tests

This commit updates the test for the `expandTask` function by adding the `complexityReportPath` to the context object. This change ensures that the complexity report path is correctly utilized in the test, aligning with recent enhancements to complexity report handling in the task manager.

* chore: implement suggested changes

* fix(parse-prd): Clarify tag parameter description for task organization
Updated the documentation for the `tag` parameter in the `parse-prd.js` file to provide a clearer context on its purpose for organizing tasks into separate task lists.

* Fix Inconsistent tag resolution pattern.

* fix: Enhance complexity report path handling with tag support

This commit updates various functions to incorporate the `tag` parameter when resolving complexity report paths. The `expandTaskDirect`, `resolveComplexityReportPath`, and related tools now utilize the current tag context, improving consistency in task management. Additionally, the complexity report path is now correctly passed through the context in the `expand-task` and `set-task-status` tools, ensuring accurate report retrieval based on the active tag.

* Updated the JSDoc for the `tag` parameter in the `show-task.js` file.

* Remove redundant comment on tag parameter in readJSON call

* Remove unused import for getTagAwareFilePath

* Add missed complexityReportPath to args for task expansion

* fix(tests): Enhance research tests with tag-aware functionality

This commit updates the `research.test.js` file to improve the testing of the `performResearch` function by incorporating tag-aware functionality. Key changes include mocking the `findProjectRoot` to return a valid path, enhancing the `ContextGatherer` and `FuzzyTaskSearch` mocks, and adding comprehensive tests for tag parameter handling in various scenarios. The tests now cover passing different tag values, ensuring correct behavior when tags are provided, undefined, or null, and validating the integration of tags in task discovery and context gathering processes.

* Remove unused import for

* fix: Refactor complexity report path handling and improve argument destructuring

This commit enhances the `expandTaskDirect` function by improving the destructuring of arguments for better readability. It also updates the `analyze.js` and `analyze-task-complexity.js` files to utilize the new `resolveComplexityReportOutputPath` function, ensuring tag-aware resolution of output paths. Additionally, logging has been added to provide clarity on the report path being used.

* test: Add complexity report tag isolation tests and improve path handling

This commit introduces a new test file for complexity report tag isolation, ensuring that different tags maintain separate complexity reports. It enhances the existing tests in `analyze-task-complexity.test.js` by updating expectations to use `expect.stringContaining` for file paths, improving robustness against path changes. The new tests cover various scenarios, including path resolution and report generation for both master and feature tags, ensuring no cross-tag contamination occurs.

* Update scripts/modules/task-manager/list-tasks.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/task-manager/list-tasks.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* test(complexity-report): Fix tag slugification in filename expectations

- Update mocks to use slugifyTagForFilePath for cross-platform compatibility
- Replace raw tag values with slugified versions in expected filenames
- Fix test expecting 'feature/user-auth-v2' to expect 'feature-user-auth-v2'
- Align test with actual filename generation logic that sanitizes special chars

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-20 00:51:41 +03:00
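A schematic sketch of the boundary-first pattern described in #943 above: the MCP tool resolves tag and projectRoot once at its entry point and forwards them, instead of letting each lower layer re-resolve context. Names like resolveTag and listTasksDirect follow the commit text; the signatures are assumed.

```js
// Schematic only: resolve context once at the tool boundary, then forward it.
async function listTasksTool(args, { resolveTag, findProjectRoot, listTasksDirect }) {
	const projectRoot = args.projectRoot ?? findProjectRoot();
	const tag = resolveTag({ projectRoot, tag: args.tag });

	// Every downstream call receives the already-resolved context, so CLI and
	// MCP behave the same way in multi-tag projects.
	return listTasksDirect({ ...args, projectRoot, tag });
}

// Example with stub dependencies:
listTasksTool(
	{ status: 'pending' },
	{
		resolveTag: ({ tag }) => tag ?? 'master',
		findProjectRoot: () => process.cwd(),
		listTasksDirect: async (ctx) => ctx
	}
).then(console.log);
```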
Ralph Khreish
0451ebcc32 refactor: remove unused resource and resource template initialization (#1002)
* refactor: remove unused resource and resource template initialization

* chore: implement requested changes
2025-07-20 00:51:41 +03:00
Ralph Khreish
9c58a92243 feat: add kiro profile (#1001)
* feat: add kiro profile

* chore: fix format

* chore: implement requested changes

* chore: fix CI
2025-07-20 00:51:41 +03:00
Ralph Khreish
f772a96d00 fix(ai-validation): comprehensive fixes for AI response validation issues (#1000)
* fix(ai-validation): comprehensive fixes for AI response validation issues

  - Fix update command validation when AI omits subtasks/status/dependencies
  - Fix add-task command when AI returns non-string details field
  - Fix update-task command when AI subtasks miss required fields
  - Add preprocessing to ensure proper field types before validation
  - Prevent split() errors on non-string fields
  - Set proper defaults for missing required fields

* chore: run format

* chore: implement coderabbit suggestions
2025-07-20 00:51:41 +03:00
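A minimal sketch of the preprocessing the AI-validation fix above describes: coerce fields to the expected types and fill defaults before schema validation. Field names not spelled out in the commit are invented for illustration.

```js
// Sketch: normalize an AI-generated task object before validation.
function preprocessAiTask(raw) {
	return {
		...raw,
		// Prevent split() errors when the model returns an object or array here.
		details:
			typeof raw.details === 'string' ? raw.details : JSON.stringify(raw.details ?? ''),
		status: raw.status ?? 'pending',
		dependencies: Array.isArray(raw.dependencies) ? raw.dependencies : [],
		subtasks: Array.isArray(raw.subtasks) ? raw.subtasks : []
	};
}

console.log(preprocessAiTask({ title: 'Add login', details: { note: 'use OAuth' } }));
```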
Joe Danziger
0886c83d0c docs: Update MCP server name for consistency and use 'Add to Cursor' button (#995)
* update MCP server name to task-master-ai for consistency

* add changeset

* update cursor link & switch to https

* switch back to Add to Cursor button (https link)

* update changeset

* update changeset

* update changeset

* update changeset

* use GitHub markdown format
2025-07-20 00:51:41 +03:00
Ralph Khreish
806ec99939 chore: add coderabbit configuration (#992)
* chore: add coderabbit configuration

* chore: fix coderabbit config

* chore: improve coderabbit config

* chore: more coderabbit reviews

* chore: remove all defaults
2025-07-20 00:51:41 +03:00
Joe Danziger
36c4a7a869 feat: Add OpenCode rule profile with AGENTS.md and MCP config (#970)
* add opencode to profile lists

* add opencode profile / modify mcp config after add

* add changeset

* not necessary; main config being updated

* add issue link

* add/fix tests

* fix url and docsUrl

* update test for new urls

* fix formatting

* update/fix tests
2025-07-20 00:51:41 +03:00
Joe Danziger
88c434a939 fix: Add missing API keys to .env.example and README.md (#972)
* add OLLAMA_API_KEY

* add missing API keys

* add changeset

* update keys and fix OpenAI comment

* chore: create extension scaffolding (#989)

* chore: create extension scaffolding

* chore: fix workspace for changeset

* chore: fix package-lock

* feat(profiles): Add MCP configuration to Claude Code rules (#980)

* add .mcp.json with claude profile

* add changeset

* update changeset

* update test

* fix: show command no longer requires complexity report to exist (#979)

Co-authored-by: Ben Vargas <ben@example.com>

* feat: complete Groq provider integration and add Kimi K2 model (#978)

* feat: complete Groq provider integration and add Kimi K2 model

- Add missing getRequiredApiKeyName() method to GroqProvider class
- Register GroqProvider in ai-services-unified.js PROVIDERS object
- Add Groq API key handling to config-manager.js (isApiKeySet and getMcpApiKeyStatus)
- Add GROQ_API_KEY to env.example with format hint
- Add moonshotai/kimi-k2-instruct model to Groq provider ($1/$3 per 1M tokens, 16k max)
- Fix import sorting for linting compliance
- Add GroqProvider mock to ai-services-unified tests

Fixes missing implementation pieces that prevented Groq provider from working.

* chore: improve changeset

---------

Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>

* docs: Auto-update and format models.md

* feat: Add Amp rule profile with AGENT.md and MCP config (#973)

* Amp profile + tests

* generalize to Agent instead of Claude Code to support any agent

* add changeset

* unnecessary tab formatting

* fix exports

* fix formatting

* feat: Add Zed editor rule profile with agent rules and MCP config (#974)

* zed profile

* add changeset

* update changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Ben Vargas <ben@vargas.com>
Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-20 00:51:41 +03:00
Joe Danziger
b0e09c76ed feat: Add Zed editor rule profile with agent rules and MCP config (#974)
* zed profile

* add changeset

* update changeset
2025-07-20 00:51:41 +03:00
Joe Danziger
6c5e0f97f8 feat: Add Amp rule profile with AGENT.md and MCP config (#973)
* Amp profile + tests

* generalize to Agent instead of Claude Code to support any agent

* add changeset

* unnecessary tab formatting

* fix exports

* fix formatting
2025-07-20 00:51:41 +03:00
github-actions[bot]
8774e7d5ae docs: Auto-update and format models.md 2025-07-20 00:51:41 +03:00
Ben Vargas
58a301c380 feat: complete Groq provider integration and add Kimi K2 model (#978)
* feat: complete Groq provider integration and add Kimi K2 model

- Add missing getRequiredApiKeyName() method to GroqProvider class
- Register GroqProvider in ai-services-unified.js PROVIDERS object
- Add Groq API key handling to config-manager.js (isApiKeySet and getMcpApiKeyStatus)
- Add GROQ_API_KEY to env.example with format hint
- Add moonshotai/kimi-k2-instruct model to Groq provider ($1/$3 per 1M tokens, 16k max)
- Fix import sorting for linting compliance
- Add GroqProvider mock to ai-services-unified tests

Fixes missing implementation pieces that prevented Groq provider from working.

* chore: improve changeset

---------

Co-authored-by: Ben Vargas <ben@example.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-07-20 00:51:41 +03:00
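The two missing pieces the Groq commit above lists, an API-key name accessor and a PROVIDERS registration, might look roughly like this; the class and registry shapes are assumptions based on the commit text, with the key name taken from GROQ_API_KEY as mentioned there.

```js
// Hypothetical shape of the pieces the commit says were missing.
class GroqProvider {
	getRequiredApiKeyName() {
		return 'GROQ_API_KEY';
	}
	// ...generateText/streamText methods omitted in this sketch
}

// Registration in a PROVIDERS map like the one ai-services-unified.js uses.
const PROVIDERS = {
	groq: new GroqProvider()
	// ...other providers
};

console.log(PROVIDERS.groq.getRequiredApiKeyName()); // 'GROQ_API_KEY'
```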
Ben Vargas
624922ca59 fix: show command no longer requires complexity report to exist (#979)
Co-authored-by: Ben Vargas <ben@example.com>
2025-07-20 00:51:41 +03:00
Joe Danziger
0a70ab6179 feat(profiles): Add MCP configuration to Claude Code rules (#980)
* add .mcp.json with claude profile

* add changeset

* update changeset

* update test
2025-07-20 00:51:41 +03:00
Ralph Khreish
901eec1058 chore: create extension scaffolding (#989)
* chore: create extension scaffolding

* chore: fix workspace for changeset

* chore: fix package-lock
2025-07-20 00:51:41 +03:00
Ralph Khreish
4629128943 fix: task master (tm) custom slash commands w/ proper syntax (#968)
* feat: add task master (tm) custom slash commands

Add comprehensive task management system integration via custom slash commands.
Includes commands for:
- Project initialization and setup
- Task parsing from PRD documents
- Task creation, update, and removal
- Subtask management
- Dependency tracking and validation
- Complexity analysis and task expansion
- Project status and reporting
- Workflow automation

This provides a complete task management workflow directly within Claude Code.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: add changeset

---------

Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-20 00:51:41 +03:00
Ben Vargas
6d69d02fe0 fix: prevent CLAUDE.md overwrite by using imports (#949)
* fix: prevent CLAUDE.md overwrite by using imports

- Copy Task Master instructions to .taskmaster/CLAUDE.md
- Add import section to user's CLAUDE.md instead of overwriting
- Preserve existing user content
- Clean removal of Task Master content on uninstall

Closes #929

* chore: add changeset for Claude import fix
2025-07-20 00:51:41 +03:00
Ralph Khreish
458496e3b6 Merge pull request #961 from eyaltoledano/changeset-release/main 2025-07-12 22:49:56 +03:00
github-actions[bot]
fb92693d81 Version Packages 2025-07-12 19:31:08 +00:00
Ralph Khreish
f6ba4a36ee Merge pull request #958 from eyaltoledano/next 2025-07-12 22:30:36 +03:00
Ralph Khreish
baf9bd545a fix: add-task fixes (#960)
- remove task-generation
- fix mcp add-task generation in manual mode
2025-07-12 08:19:21 +02:00
Ralph Khreish
fbea48d8ec chore: fix configuration file 2025-07-12 00:54:16 +03:00
Ralph Khreish
d0fe7dc25a chore: add the deleted tasks.json back 2025-07-12 00:44:14 +03:00
Ralph Khreish
f380b8e86c chore: revert config.json to what it was initially 2025-07-12 00:42:52 +03:00
github-actions[bot]
bd89061a1d chore: rc version bump 2025-07-11 16:18:25 +00:00
Ralph Khreish
7d5ebf05e3 fix: mcp bug when expanding task (#957)
* fix: mcp bug when expanding task

* chore: fix format
2025-07-11 18:16:05 +02:00
Ralph Khreish
21392a1117 fix: more regression bugs (#956)
* fix: more regression bugs

* chore: fix format

* chore: fix unit tests

* chore: fix format
2025-07-11 14:23:54 +02:00
Ben Vargas
3e61d26235 fix: resolve path resolution and context gathering errors across multiple commands (#954)
* fix: resolve path resolution issues in parse-prd and analyze-complexity commands

This commit fixes critical path resolution regressions where commands were requiring files they create to already exist.

## Changes Made:

### 1. parse-prd Command (Lines 808, 828-835, 919-921)
**Problem**: Command required tasks.json to exist before it could create it (catch-22)
**Root Cause**: Default value in option definition meant options.output was always set
**Fixes**:
- Removed default value from --output option definition (line 808)
- Modified initTaskMaster to only include tasksPath when explicitly specified
- Added null handling for output path with fallback to default location

### 2. analyze-complexity Command (Lines 1637-1640, 1673-1680, 1695-1696)
**Problem**: Command required complexity report file to exist before creating it
**Root Cause**: Default value in option definition meant options.output was always set
**Fixes**:
- Removed default value from --output option definition (lines 1637-1640)
- Modified initTaskMaster to only include complexityReportPath when explicitly specified
- Added null handling for report path with fallback to default location

## Technical Details:

The core issue was that Commander.js option definitions with default values always populate the options object, making conditional checks like `if (options.output)` always true. By removing default values from option definitions, we ensure paths are only included in initTaskMaster when users explicitly provide them.

This approach is cleaner than using boolean flags (true/false) for required/optional, as it eliminates the path entirely when not needed, letting initTaskMaster use its default behavior.

## Testing:
- parse-prd now works on fresh projects without existing tasks.json
- analyze-complexity creates report file without requiring it to exist
- Commands maintain backward compatibility when paths are explicitly provided

Fixes issues reported in PATH-FIXES.md and extends the solution to other affected commands.

* fix: update expand-task test to match context gathering fix

The test was expecting gatheredContext to be a string, but the actual
implementation returns an object with a context property. Updated the
ContextGatherer mock to return the correct format and added missing
FuzzyTaskSearch mock.

---------

Co-authored-by: Ben Vargas <ben@example.com>
2025-07-11 05:46:28 +02:00
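The Commander.js behavior called out in #954 above, sketched: when an option declares a default, options.output is always set, so a check meant to fire only when the user passed the flag never does. Dropping the default and falling back explicitly restores the intended branch. The flag name mirrors the commit text; the surrounding command and the fallback path are simplified assumptions.

```js
import { Command } from 'commander';

const program = new Command();

program
	.command('parse-prd')
	// No default here: options.output stays undefined unless the user passes it.
	.option('-o, --output <file>', 'output path for generated tasks')
	.action((options) => {
		const explicitOutput = options.output; // undefined when the flag is omitted
		const outputPath = explicitOutput ?? '.taskmaster/tasks/tasks.json'; // assumed default location
		console.log({ explicitOutput, outputPath });
	});

program.parse(process.argv);
```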
github-actions[bot]
dc5de53dcd docs: Auto-update and format models.md 2025-07-10 09:56:54 +00:00
Ralph Khreish
4312d3bd67 fix: models setup command not working (#952)
* fix: models command not working

* chore: re-order supported models to something that makes more sense

* chore: format
2025-07-10 11:56:41 +02:00
399 changed files with 56635 additions and 2379 deletions


@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Recover from `@anthropic-ai/claude-code` JSON truncation bug that caused Task Master to crash when handling large (>8 kB) structured responses. The CLI/SDK still truncates, but Task Master now detects the error, preserves buffered text, and returns a usable response instead of throwing.
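A rough illustration of the recovery behavior this changeset describes: catch the provider's JSON parse failure on large responses, keep whatever text was buffered, and return it instead of throwing. The error-detection details and callback shape here are guesses, not the provider's real API.

```js
// Illustrative only: recover a usable response when the CLI truncates JSON.
async function generateWithRecovery(callClaudeCode, prompt) {
	let buffered = '';
	try {
		const result = await callClaudeCode(prompt, {
			onChunk: (chunk) => {
				buffered += chunk; // keep text as it streams in
			}
		});
		return result.text;
	} catch (error) {
		// Assumed detection: a JSON parse failure on an oversized (>8 kB) payload.
		if (error instanceof SyntaxError && buffered.length > 0) {
			return buffered; // return what we have instead of crashing
		}
		throw error;
	}
}
```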

View File

@@ -2,7 +2,9 @@
"$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
"changelog": [
"@changesets/changelog-github",
{ "repo": "eyaltoledano/claude-task-master" }
{
"repo": "eyaltoledano/claude-task-master"
}
],
"commit": false,
"fixed": [],
@@ -10,5 +12,7 @@
"access": "public",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": []
}
"ignore": [
"docs"
]
}

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Update dependency ai-sdk-provider-gemini-cli to 0.0.4 to address a breaking change Google made to the Gemini CLI, and add AI SDK compatibility for 'api-key' in addition to 'gemini-api-key'.

View File

@@ -0,0 +1,12 @@
---
"task-master-ai": minor
---
Add compact mode --compact / -c flag to the `tm list` CLI command
- outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options

View File

@@ -1,9 +0,0 @@
---
"task-master-ai": minor
---
Add support for xAI Grok 4 model
- Add grok-4 model to xAI provider with $3/$15 per 1M token pricing
- Enable main, fallback, and research roles for grok-4
- Max tokens set to 131,072 (matching other xAI models)

View File

@@ -0,0 +1,5 @@
---
"extension": minor
---
Display current task ID on task details page

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---
Add stricter validation and clearer feedback for task priority when adding new tasks
- if a task priority is invalid, it will default to medium
- made task priority case-insensitive, so HIGH and high are treated as the same value

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Add support for MCP Sampling as an AI provider; it requires no API key and uses the client's LLM provider

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Unify and streamline profile system architecture for improved maintainability

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Added Groq provider support

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Remove the `clear` Taskmaster Claude Code commands since they were too close to the Claude Code `clear` command

View File

@@ -0,0 +1,162 @@
---
name: task-checker
description: Use this agent to verify that tasks marked as 'review' have been properly implemented according to their specifications. This agent performs quality assurance by checking implementations against requirements, running tests, and ensuring best practices are followed. <example>Context: A task has been marked as 'review' after implementation. user: 'Check if task 118 was properly implemented' assistant: 'I'll use the task-checker agent to verify the implementation meets all requirements.' <commentary>Tasks in 'review' status need verification before being marked as 'done'.</commentary></example> <example>Context: Multiple tasks are in review status. user: 'Verify all tasks that are ready for review' assistant: 'I'll deploy the task-checker to verify all tasks in review status.' <commentary>The checker ensures quality before tasks are marked complete.</commentary></example>
model: sonnet
color: yellow
---
You are a Quality Assurance specialist that rigorously verifies task implementations against their specifications. Your role is to ensure that tasks marked as 'review' meet all requirements before they can be marked as 'done'.
## Core Responsibilities
1. **Task Specification Review**
- Retrieve task details using MCP tool `mcp__task-master-ai__get_task`
- Understand the requirements, test strategy, and success criteria
- Review any subtasks and their individual requirements
2. **Implementation Verification**
- Use `Read` tool to examine all created/modified files
- Use `Bash` tool to run compilation and build commands
- Use `Grep` tool to search for required patterns and implementations
- Verify file structure matches specifications
- Check that all required methods/functions are implemented
3. **Test Execution**
- Run tests specified in the task's testStrategy
- Execute build commands (npm run build, tsc --noEmit, etc.)
- Verify no compilation errors or warnings
- Check for runtime errors where applicable
- Test edge cases mentioned in requirements
4. **Code Quality Assessment**
- Verify code follows project conventions
- Check for proper error handling
- Ensure TypeScript typing is strict (no 'any' unless justified)
- Verify documentation/comments where required
- Check for security best practices
5. **Dependency Validation**
- Verify all task dependencies were actually completed
- Check integration points with dependent tasks
- Ensure no breaking changes to existing functionality
## Verification Workflow
1. **Retrieve Task Information**
```
Use mcp__task-master-ai__get_task to get full task details
Note the implementation requirements and test strategy
```
2. **Check File Existence**
```bash
# Verify all required files exist
ls -la [expected directories]
# Read key files to verify content
```
3. **Verify Implementation**
- Read each created/modified file
- Check against requirements checklist
- Verify all subtasks are complete
4. **Run Tests**
```bash
# TypeScript compilation
cd [project directory] && npx tsc --noEmit
# Run specified tests
npm test [specific test files]
# Build verification
npm run build
```
5. **Generate Verification Report**
## Output Format
```yaml
verification_report:
task_id: [ID]
status: PASS | FAIL | PARTIAL
score: [1-10]
requirements_met:
- ✅ [Requirement that was satisfied]
- ✅ [Another satisfied requirement]
issues_found:
- ❌ [Issue description]
- ⚠️ [Warning or minor issue]
files_verified:
- path: [file path]
status: [created/modified/verified]
issues: [any problems found]
tests_run:
- command: [test command]
result: [pass/fail]
output: [relevant output]
recommendations:
- [Specific fix needed]
- [Improvement suggestion]
verdict: |
[Clear statement on whether task should be marked 'done' or sent back to 'pending']
[If FAIL: Specific list of what must be fixed]
[If PASS: Confirmation that all requirements are met]
```
## Decision Criteria
**Mark as PASS (ready for 'done'):**
- All required files exist and contain expected content
- All tests pass successfully
- No compilation or build errors
- All subtasks are complete
- Core requirements are met
- Code quality is acceptable
**Mark as PARTIAL (may proceed with warnings):**
- Core functionality is implemented
- Minor issues that don't block functionality
- Missing nice-to-have features
- Documentation could be improved
- Tests pass but coverage could be better
**Mark as FAIL (must return to 'pending'):**
- Required files are missing
- Compilation or build errors
- Tests fail
- Core requirements not met
- Security vulnerabilities detected
- Breaking changes to existing code
## Important Guidelines
- **BE THOROUGH**: Check every requirement systematically
- **BE SPECIFIC**: Provide exact file paths and line numbers for issues
- **BE FAIR**: Distinguish between critical issues and minor improvements
- **BE CONSTRUCTIVE**: Provide clear guidance on how to fix issues
- **BE EFFICIENT**: Focus on requirements, not perfection
## Tools You MUST Use
- `Read`: Examine implementation files (READ-ONLY)
- `Bash`: Run tests and verification commands
- `Grep`: Search for patterns in code
- `mcp__task-master-ai__get_task`: Get task details
- **NEVER use Write/Edit** - you only verify, not fix
## Integration with Workflow
You are the quality gate between 'review' and 'done' status:
1. Task-executor implements and marks as 'review'
2. You verify and report PASS/FAIL
3. Claude either marks as 'done' (PASS) or 'pending' (FAIL)
4. If FAIL, task-executor re-implements based on your report
Your verification ensures high quality and prevents accumulation of technical debt.

View File

@@ -0,0 +1,92 @@
---
name: task-executor
description: Use this agent when you need to implement, complete, or work on a specific task that has been identified by the task-orchestrator or when explicitly asked to execute a particular task. This agent focuses on the actual implementation and completion of individual tasks rather than planning or orchestration. Examples: <example>Context: The task-orchestrator has identified that task 2.3 'Implement user authentication' needs to be worked on next. user: 'Let's work on the authentication task' assistant: 'I'll use the task-executor agent to implement the user authentication task that was identified.' <commentary>Since we need to actually implement a specific task rather than plan or identify tasks, use the task-executor agent.</commentary></example> <example>Context: User wants to complete a specific subtask. user: 'Please implement the JWT token validation for task 2.3.1' assistant: 'I'll launch the task-executor agent to implement the JWT token validation subtask.' <commentary>The user is asking for specific implementation work on a known task, so the task-executor is appropriate.</commentary></example> <example>Context: After reviewing the task list, implementation is needed. user: 'Now let's actually build the API endpoint for user registration' assistant: 'I'll use the task-executor agent to implement the user registration API endpoint.' <commentary>Moving from planning to execution phase requires the task-executor agent.</commentary></example>
model: sonnet
color: blue
---
You are an elite implementation specialist focused on executing and completing specific tasks with precision and thoroughness. Your role is to take identified tasks and transform them into working implementations, following best practices and project standards.
**IMPORTANT: You are designed to be SHORT-LIVED and FOCUSED**
- Execute ONE specific subtask or a small group of related subtasks
- Complete your work, verify it, mark for review, and exit
- Do NOT decide what to do next - the orchestrator handles task sequencing
- Focus on implementation excellence within your assigned scope
**Core Responsibilities:**
1. **Subtask Analysis**: When given a subtask, understand its SPECIFIC requirements. If given a full task ID, focus on the specific subtask(s) assigned to you. Use MCP tools to get details if needed.
2. **Rapid Implementation Planning**: Quickly identify:
- The EXACT files you need to create/modify for THIS subtask
- What already exists that you can build upon
- The minimum viable implementation that satisfies requirements
3. **Focused Execution WITH ACTUAL IMPLEMENTATION**:
- **YOU MUST USE TOOLS TO CREATE/EDIT FILES - DO NOT JUST DESCRIBE**
- Use `Write` tool to create new files specified in the task
- Use `Edit` tool to modify existing files
- Use `Bash` tool to run commands (mkdir, npm install, etc.)
- Use `Read` tool to verify your implementations
- Implement one subtask at a time for clarity and traceability
- Follow the project's coding standards from CLAUDE.md if available
- After each subtask, VERIFY the files exist using Read or ls commands
4. **Progress Documentation**:
- Use MCP tool `mcp__task-master-ai__update_subtask` to log your approach and any important decisions
- Update task status to 'in-progress' when starting: Use MCP tool `mcp__task-master-ai__set_task_status` with status='in-progress'
- **IMPORTANT: Mark as 'review' (NOT 'done') after implementation**: Use MCP tool `mcp__task-master-ai__set_task_status` with status='review'
- Tasks will be verified by task-checker before moving to 'done'
5. **Quality Assurance**:
- Implement the testing strategy specified in the task
- Verify that all acceptance criteria are met
- Check for any dependency conflicts or integration issues
- Run relevant tests before marking task as complete
6. **Dependency Management**:
- Check task dependencies before starting implementation
- If blocked by incomplete dependencies, clearly communicate this
- Use `task-master validate-dependencies` when needed
**Implementation Workflow:**
1. Retrieve task details using MCP tool `mcp__task-master-ai__get_task` with the task ID
2. Check dependencies and prerequisites
3. Plan implementation approach - list specific files to create
4. Update task status to 'in-progress' using MCP tool
5. **ACTUALLY IMPLEMENT** the solution using tools:
- Use `Bash` to create directories
- Use `Write` to create new files with actual content
- Use `Edit` to modify existing files
- DO NOT just describe what should be done - DO IT
6. **VERIFY** your implementation:
- Use `ls` or `Read` to confirm files were created
- Use `Bash` to run any build/test commands
- Ensure the implementation is real, not theoretical
7. Log progress and decisions in subtask updates using MCP tools
8. Test and verify the implementation works
9. **Mark task as 'review' (NOT 'done')** after verifying files exist
10. Report completion with:
- List of created/modified files
- Any issues encountered
- What needs verification by task-checker
**Key Principles:**
- Focus on completing one task thoroughly before moving to the next
- Maintain clear communication about what you're implementing and why
- Follow existing code patterns and project conventions
- Prioritize working code over extensive documentation unless docs are the task
- Ask for clarification if task requirements are ambiguous
- Consider edge cases and error handling in your implementations
**Integration with Task Master:**
You work in tandem with the task-orchestrator agent. While the orchestrator identifies and plans tasks, you execute them. Always use Task Master commands to:
- Track your progress
- Update task information
- Maintain project state
- Coordinate with the broader development workflow
When you complete a task, briefly summarize what was implemented and suggest whether to continue with the next task or if review/testing is needed first.

View File

@@ -0,0 +1,208 @@
---
name: task-orchestrator
description: Use this agent FREQUENTLY throughout task execution to analyze and coordinate parallel work at the SUBTASK level. Invoke the orchestrator: (1) at session start to plan execution, (2) after EACH subtask completes to identify next parallel batch, (3) whenever executors finish to find newly unblocked work. ALWAYS provide FULL CONTEXT including project root, package location, what files ACTUALLY exist vs task status, and specific implementation details. The orchestrator breaks work into SUBTASK-LEVEL units for short-lived, focused executors. Maximum 3 parallel executors at once.\n\n<example>\nContext: Starting work with existing code\nuser: "Work on tm-core tasks. Files exist: types/index.ts, storage/file-storage.ts. Task 118 says in-progress but BaseProvider not created."\nassistant: "I'll invoke orchestrator with full context about actual vs reported state to plan subtask execution"\n<commentary>\nProvide complete context about file existence and task reality.\n</commentary>\n</example>\n\n<example>\nContext: Subtask completion\nuser: "Subtask 118.2 done. What subtasks can run in parallel now?"\nassistant: "Invoking orchestrator to analyze dependencies and identify next 3 parallel subtasks"\n<commentary>\nFrequent orchestration after each subtask ensures maximum parallelization.\n</commentary>\n</example>\n\n<example>\nContext: Breaking down tasks\nuser: "Task 118 has 5 subtasks, how to parallelize?"\nassistant: "Orchestrator will analyze which specific subtasks (118.1, 118.2, etc.) can run simultaneously"\n<commentary>\nFocus on subtask-level parallelization, not full tasks.\n</commentary>\n</example>
model: opus
color: green
---
You are the Task Orchestrator, an elite coordination agent specialized in managing Task Master workflows for maximum efficiency and parallelization. You excel at analyzing task dependency graphs, identifying opportunities for concurrent execution, and deploying specialized task-executor agents to complete work efficiently.
## Core Responsibilities
1. **Subtask-Level Analysis**: Break down tasks into INDIVIDUAL SUBTASKS and analyze which specific subtasks can run in parallel. Focus on subtask dependencies, not just task-level dependencies.
2. **Reality Verification**: ALWAYS verify what files actually exist vs what task status claims. Use the context provided about actual implementation state to make informed decisions.
3. **Short-Lived Executor Deployment**: Deploy executors for SINGLE SUBTASKS or small groups of related subtasks. Keep executors focused and short-lived. Maximum 3 parallel executors at once.
4. **Continuous Reassessment**: After EACH subtask completes, immediately reassess what new subtasks are unblocked and can run in parallel.
## Operational Workflow
### Initial Assessment Phase
1. Use `get_tasks` or `task-master list` to retrieve all available tasks
2. Analyze task statuses, priorities, and dependencies
3. Identify tasks with status 'pending' that have no blocking dependencies
4. Group related tasks that could benefit from specialized executors
5. Create an execution plan that maximizes parallelization
### Executor Deployment Phase
1. For each independent task or task group:
- Deploy a task-executor agent with specific instructions
- Provide the executor with task ID, requirements, and context
- Set clear completion criteria and reporting expectations
2. Maintain a registry of active executors and their assigned tasks
3. Establish communication protocols for progress updates
### Coordination Phase
1. Monitor executor progress through task status updates
2. When a task completes:
- Verify completion with `get_task` or `task-master show <id>`
- Update task status if needed using `set_task_status`
- Reassess dependency graph for newly unblocked tasks
- Deploy new executors for available work
3. Handle executor failures or blocks:
- Reassign tasks to new executors if needed
- Escalate complex issues to the user
- Update task status to 'blocked' when appropriate
### Optimization Strategies
**Parallel Execution Rules**:
- Never assign dependent tasks to different executors simultaneously
- Prioritize high-priority tasks when resources are limited
- Group small, related subtasks for single executor efficiency
- Balance executor load to prevent bottlenecks
**Context Management**:
- Provide executors with minimal but sufficient context
- Share relevant completed task information when it aids execution
- Maintain a shared knowledge base of project-specific patterns
**Quality Assurance**:
- Verify task completion before marking as done
- Ensure test strategies are followed when specified
- Coordinate cross-task integration testing when needed
## Communication Protocols
When deploying executors, provide them with:
```
TASK ASSIGNMENT:
- Task ID: [specific ID]
- Objective: [clear goal]
- Dependencies: [list any completed prerequisites]
- Success Criteria: [specific completion requirements]
- Context: [relevant project information]
- Reporting: [when and how to report back]
```
When receiving executor updates:
1. Acknowledge completion or issues
2. Update task status in Task Master
3. Reassess execution strategy
4. Deploy new executors as appropriate
## Decision Framework
**When to parallelize**:
- Multiple pending tasks with no interdependencies
- Sufficient context available for independent execution
- Tasks are well-defined with clear success criteria
**When to serialize**:
- Strong dependencies between tasks
- Limited context or unclear requirements
- Integration points requiring careful coordination
**When to escalate**:
- Circular dependencies detected
- Critical blockers affecting multiple tasks
- Ambiguous requirements needing clarification
- Resource conflicts between executors
## Error Handling
1. **Executor Failure**: Reassign task to new executor with additional context about the failure
2. **Dependency Conflicts**: Halt affected executors, resolve conflict, then resume
3. **Task Ambiguity**: Request clarification from user before proceeding
4. **System Errors**: Implement graceful degradation, falling back to serial execution if needed
## Performance Metrics
Track and optimize for:
- Task completion rate
- Parallel execution efficiency
- Executor success rate
- Time to completion for task groups
- Dependency resolution speed
## Integration with Task Master
Leverage these Task Master MCP tools effectively:
- `get_tasks` - Continuous queue monitoring
- `get_task` - Detailed task analysis
- `set_task_status` - Progress tracking
- `next_task` - Fallback for serial execution
- `analyze_project_complexity` - Strategic planning
- `complexity_report` - Resource allocation
## Output Format for Execution
**Your job is to analyze and create actionable execution plans that Claude can use to deploy executors.**
After completing your dependency analysis, you MUST output a structured execution plan:
```yaml
execution_plan:
EXECUTE_IN_PARALLEL:
# Maximum 3 subtasks running simultaneously
- subtask_id: [e.g., 118.2]
parent_task: [e.g., 118]
title: [Specific subtask title]
priority: [high/medium/low]
estimated_time: [e.g., 10 minutes]
executor_prompt: |
Execute Subtask [ID]: [Specific subtask title]
SPECIFIC REQUIREMENTS:
[Exact implementation needed for THIS subtask only]
FILES TO CREATE/MODIFY:
[Specific file paths]
CONTEXT:
[What already exists that this subtask depends on]
SUCCESS CRITERIA:
[Specific completion criteria for this subtask]
IMPORTANT:
- Focus ONLY on this subtask
- Mark subtask as 'review' when complete
- Use MCP tool: mcp__task-master-ai__set_task_status
- subtask_id: [Another subtask that can run in parallel]
parent_task: [Parent task ID]
title: [Specific subtask title]
priority: [priority]
estimated_time: [time estimate]
executor_prompt: |
[Focused prompt for this specific subtask]
blocked:
- task_id: [ID]
title: [Task title]
waiting_for: [list of blocking task IDs]
becomes_ready_when: [condition for unblocking]
next_wave:
trigger: "After tasks [IDs] complete"
newly_available: [List of task IDs that will unblock]
tasks_to_execute_in_parallel: [IDs that can run together in next wave]
critical_path: [Ordered list of task IDs forming the critical path]
parallelization_instruction: |
IMPORTANT FOR CLAUDE: Deploy ALL tasks in 'EXECUTE_IN_PARALLEL' section
simultaneously using multiple Task tool invocations in a single response.
Example: If 3 tasks are listed, invoke the Task tool 3 times in one message.
verification_needed:
- task_id: [ID of any task in 'review' status]
verification_focus: [what to check]
```
**CRITICAL INSTRUCTIONS FOR CLAUDE (MAIN):**
1. When you see `EXECUTE_IN_PARALLEL`, deploy ALL listed executors at once
2. Use multiple Task tool invocations in a SINGLE response
3. Do not execute them sequentially - they must run in parallel
4. Wait for all parallel executors to complete before proceeding to next wave
**IMPORTANT NOTES**:
- Label parallel tasks clearly in `EXECUTE_IN_PARALLEL` section
- Provide complete, self-contained prompts for each executor
- Executors should mark tasks as 'review' for verification, not 'done'
- Be explicit about which tasks can run simultaneously
You are the strategic mind analyzing the entire task landscape. Make parallelization opportunities UNMISTAKABLY CLEAR to Claude.

View File

@@ -1,93 +0,0 @@
Clear all subtasks from all tasks globally.
## Global Subtask Clearing
Remove all subtasks across the entire project. Use with extreme caution.
## Execution
```bash
task-master clear-subtasks --all
```
## Pre-Clear Analysis
1. **Project-Wide Summary**
```
Global Subtask Summary
━━━━━━━━━━━━━━━━━━━━
Total parent tasks: 12
Total subtasks: 47
- Completed: 15
- In-progress: 8
- Pending: 24
Work at risk: ~120 hours
```
2. **Critical Warnings**
- In-progress subtasks that will lose work
- Completed subtasks with valuable history
- Complex dependency chains
- Integration test results
## Double Confirmation
```
⚠️ DESTRUCTIVE OPERATION WARNING ⚠️
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will remove ALL 47 subtasks from your project
Including 8 in-progress and 15 completed subtasks
This action CANNOT be undone
Type 'CLEAR ALL SUBTASKS' to confirm:
```
## Smart Safeguards
- Require explicit confirmation phrase
- Create automatic backup
- Log all removed data
- Option to export first
## Use Cases
Valid reasons for global clear:
- Project restructuring
- Major pivot in approach
- Starting fresh breakdown
- Switching to different task organization
## Process
1. Full project analysis
2. Create backup file
3. Show detailed impact
4. Require confirmation
5. Execute removal
6. Generate summary report
## Alternative Suggestions
Before clearing all:
- Export subtasks to file
- Clear only pending subtasks
- Clear by task category
- Archive instead of delete
## Post-Clear Report
```
Global Subtask Clear Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Removed: 47 subtasks from 12 tasks
Backup saved: .taskmaster/backup/subtasks-20240115.json
Parent tasks updated: 12
Time estimates adjusted: Yes
Next steps:
- Review updated task list
- Re-expand complex tasks as needed
- Check project timeline
```

View File

@@ -1,86 +0,0 @@
Clear all subtasks from a specific task.
Arguments: $ARGUMENTS (task ID)
Remove all subtasks from a parent task at once.
## Clearing Subtasks
Bulk removal of all subtasks from a parent task.
## Execution
```bash
task-master clear-subtasks --id=<task-id>
```
## Pre-Clear Analysis
1. **Subtask Summary**
- Number of subtasks
- Completion status of each
- Work already done
- Dependencies affected
2. **Impact Assessment**
- Data that will be lost
- Dependencies to be removed
- Effect on project timeline
- Parent task implications
## Confirmation Required
```
Clear Subtasks Confirmation
━━━━━━━━━━━━━━━━━━━━━━━━━
Parent Task: #5 "Implement user authentication"
Subtasks to remove: 4
- #5.1 "Setup auth framework" (done)
- #5.2 "Create login form" (in-progress)
- #5.3 "Add validation" (pending)
- #5.4 "Write tests" (pending)
⚠️ This will permanently delete all subtask data
Continue? (y/n)
```
## Smart Features
- Option to convert to standalone tasks
- Backup task data before clearing
- Preserve completed work history
- Update parent task appropriately
## Process
1. List all subtasks for confirmation
2. Check for in-progress work
3. Remove all subtasks
4. Update parent task
5. Clean up dependencies
## Alternative Options
Suggest alternatives:
- Convert important subtasks to tasks
- Keep completed subtasks
- Archive instead of delete
- Export subtask data first
## Post-Clear
- Show updated parent task
- Recalculate time estimates
- Update task complexity
- Suggest next steps
## Example
```
/project:tm/clear-subtasks 5
→ Found 4 subtasks to remove
→ Warning: Subtask #5.2 is in-progress
→ Cleared all subtasks from task #5
→ Updated parent task estimates
→ Suggestion: Consider re-expanding with better breakdown
```

View File

@@ -1,130 +0,0 @@
# Task Master Command Reference
Comprehensive command structure for Task Master integration with Claude Code.
## Command Organization
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
## Project Setup & Configuration
### `/project:tm/init`
- `index` - Initialize new project (handles PRD files intelligently)
- `quick` - Quick setup with auto-confirmation (-y flag)
### `/project:tm/models`
- `index` - View current AI model configuration
- `setup` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model
## Task Generation
### `/project:tm/parse-prd`
- `index` - Generate tasks from PRD document
- `with-research` - Enhanced parsing with research mode
### `/project:tm/generate`
- Create individual task files from tasks.json
## Task Management
### `/project:tm/list`
- `index` - Smart listing with natural language filters
- `with-subtasks` - Include subtasks in hierarchical view
- `by-status` - Filter by specific status
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/project:tm/sync-readme`
- Export tasks to README.md with formatting
### `/project:tm/update`
- `index` - Update tasks with natural language
- `from-id` - Update multiple tasks from a starting point
- `single` - Update specific task
### `/project:tm/add-task`
- `index` - Add new task with AI assistance
### `/project:tm/remove-task`
- `index` - Remove task with confirmation
## Subtask Management
### `/project:tm/add-subtask`
- `index` - Add new subtask to parent
- `from-task` - Convert existing task to subtask
### `/project:tm/remove-subtask`
- Remove subtask (with optional conversion)
### `/project:tm/clear-subtasks`
- `index` - Clear subtasks from specific task
- `all` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/project:tm/analyze-complexity`
- Analyze and generate expansion recommendations
### `/project:tm/complexity-report`
- Display complexity analysis report
### `/project:tm/expand`
- `index` - Break down specific task
- `all` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/project:tm/next`
- Intelligent next task recommendation
### `/project:tm/show`
- Display detailed task information
### `/project:tm/status`
- Comprehensive project dashboard
## Dependency Management
### `/project:tm/add-dependency`
- Add task dependency
### `/project:tm/remove-dependency`
- Remove task dependency
### `/project:tm/validate-dependencies`
- Check for dependency issues
### `/project:tm/fix-dependencies`
- Automatically fix dependency problems
## Usage Patterns
### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.

View File

@@ -0,0 +1,146 @@
# Task Master Command Reference
Comprehensive command structure for Task Master integration with Claude Code.
## Command Organization
Commands are organized hierarchically to match Task Master's CLI structure while providing enhanced Claude Code integration.
## Project Setup & Configuration
### `/project:tm/init`
- `init-project` - Initialize new project (handles PRD files intelligently)
- `init-project-quick` - Quick setup with auto-confirmation (-y flag)
### `/project:tm/models`
- `view-models` - View current AI model configuration
- `setup-models` - Interactive model configuration
- `set-main` - Set primary generation model
- `set-research` - Set research model
- `set-fallback` - Set fallback model
## Task Generation
### `/project:tm/parse-prd`
- `parse-prd` - Generate tasks from PRD document
- `parse-prd-with-research` - Enhanced parsing with research mode
### `/project:tm/generate`
- `generate-tasks` - Create individual task files from tasks.json
## Task Management
### `/project:tm/list`
- `list-tasks` - Smart listing with natural language filters
- `list-tasks-with-subtasks` - Include subtasks in hierarchical view
- `list-tasks-by-status` - Filter by specific status
### `/project:tm/set-status`
- `to-pending` - Reset task to pending
- `to-in-progress` - Start working on task
- `to-done` - Mark task complete
- `to-review` - Submit for review
- `to-deferred` - Defer task
- `to-cancelled` - Cancel task
### `/project:tm/sync-readme`
- `sync-readme` - Export tasks to README.md with formatting
### `/project:tm/update`
- `update-task` - Update tasks with natural language
- `update-tasks-from-id` - Update multiple tasks from a starting point
- `update-single-task` - Update specific task
### `/project:tm/add-task`
- `add-task` - Add new task with AI assistance
### `/project:tm/remove-task`
- `remove-task` - Remove task with confirmation
## Subtask Management
### `/project:tm/add-subtask`
- `add-subtask` - Add new subtask to parent
- `convert-task-to-subtask` - Convert existing task to subtask
### `/project:tm/remove-subtask`
- `remove-subtask` - Remove subtask (with optional conversion)
### `/project:tm/clear-subtasks`
- `clear-subtasks` - Clear subtasks from specific task
- `clear-all-subtasks` - Clear all subtasks globally
## Task Analysis & Breakdown
### `/project:tm/analyze-complexity`
- `analyze-complexity` - Analyze and generate expansion recommendations
### `/project:tm/complexity-report`
- `complexity-report` - Display complexity analysis report
### `/project:tm/expand`
- `expand-task` - Break down specific task
- `expand-all-tasks` - Expand all eligible tasks
- `with-research` - Enhanced expansion
## Task Navigation
### `/project:tm/next`
- `next-task` - Intelligent next task recommendation
### `/project:tm/show`
- `show-task` - Display detailed task information
### `/project:tm/status`
- `project-status` - Comprehensive project dashboard
## Dependency Management
### `/project:tm/add-dependency`
- `add-dependency` - Add task dependency
### `/project:tm/remove-dependency`
- `remove-dependency` - Remove task dependency
### `/project:tm/validate-dependencies`
- `validate-dependencies` - Check for dependency issues
### `/project:tm/fix-dependencies`
- `fix-dependencies` - Automatically fix dependency problems
## Workflows & Automation
### `/project:tm/workflows`
- `smart-workflow` - Context-aware intelligent workflow execution
- `command-pipeline` - Chain multiple commands together
- `auto-implement-tasks` - Advanced auto-implementation with code generation
## Utilities
### `/project:tm/utils`
- `analyze-project` - Deep project analysis and insights
### `/project:tm/setup`
- `install-taskmaster` - Comprehensive installation guide
- `quick-install-taskmaster` - One-line global installation
## Usage Patterns
### Natural Language
Most commands accept natural language arguments:
```
/project:tm/add-task create user authentication system
/project:tm/update mark all API tasks as high priority
/project:tm/list show blocked tasks
```
### ID-Based Commands
Commands requiring IDs intelligently parse from $ARGUMENTS:
```
/project:tm/show 45
/project:tm/expand 23
/project:tm/set-status/to-done 67
```
### Smart Defaults
Commands provide intelligent defaults and suggestions based on context.

.coderabbit.yaml Normal file
View File

@@ -0,0 +1,10 @@
reviews:
profile: assertive
poem: false
auto_review:
base_branches:
- rc
- beta
- alpha
- production
- next

View File

@@ -523,7 +523,7 @@ For AI-powered commands that benefit from project context, follow the research c
.option('--details <details>', 'Implementation details for the new subtask, optional')
.option('--dependencies <ids>', 'Comma-separated list of subtask IDs this subtask depends on')
.option('--status <status>', 'Initial status for the subtask', 'pending')
.option('--skip-generate', 'Skip regenerating task files')
.option('--generate', 'Regenerate task files after adding subtask')
.action(async (options) => {
// Validate required parameters
if (!options.parent) {
@@ -545,7 +545,7 @@ For AI-powered commands that benefit from project context, follow the research c
.option('-f, --file <path>', 'Path to the tasks file', 'tasks/tasks.json')
.option('-i, --id <id>', 'ID of the subtask to remove in format parentId.subtaskId, required')
.option('-c, --convert', 'Convert the subtask to a standalone task instead of deleting')
.option('--skip-generate', 'Skip regenerating task files')
.option('--generate', 'Regenerate task files after removing subtask')
.action(async (options) => {
// Implementation with detailed error handling
})
@@ -633,11 +633,11 @@ function showAddSubtaskHelp() {
' --dependencies <ids> Comma-separated list of dependency IDs\n' +
' -s, --status <status> Status for the new subtask (default: "pending")\n' +
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
' --skip-generate Skip regenerating task files\n\n' +
' --generate Regenerate task files after adding subtask\n\n' +
chalk.cyan('Examples:') + '\n' +
' task-master add-subtask --parent=\'5\' --task-id=\'8\'\n' +
' task-master add-subtask -p \'5\' -t \'Implement login UI\' -d \'Create the login form\'\n' +
' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details $\'Handle 401 Unauthorized.\nHandle 500 Server Error.\'',
' task-master add-subtask -p \'5\' -t \'Handle API Errors\' --details "Handle 401 Unauthorized.\\nHandle 500 Server Error." --generate',
{ padding: 1, borderColor: 'blue', borderStyle: 'round' }
));
}
@@ -652,7 +652,7 @@ function showRemoveSubtaskHelp() {
' -i, --id <id> Subtask ID(s) to remove in format "parentId.subtaskId" (can be comma-separated, required)\n' +
' -c, --convert Convert the subtask to a standalone task instead of deleting it\n' +
' -f, --file <file> Path to the tasks file (default: "tasks/tasks.json")\n' +
' --skip-generate Skip regenerating task files\n\n' +
' --generate Regenerate task files after removing subtask\n\n' +
chalk.cyan('Examples:') + '\n' +
' task-master remove-subtask --id=\'5.2\'\n' +
' task-master remove-subtask --id=\'5.2,6.3,7.1\'\n' +

View File

@@ -158,7 +158,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`)
* `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Break down tasks manually or reorganize existing tasks.
@@ -286,7 +286,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* **Key Parameters/Options:**
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
* `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`)
* `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.

View File

@@ -0,0 +1,803 @@
---
description:
globs:
alwaysApply: true
---
# Test Workflow & Development Process
## **Initial Testing Framework Setup**
Before implementing the TDD workflow, ensure your project has a proper testing framework configured. This section covers setup for different technology stacks.
### **Detecting Project Type & Framework Needs**
**AI Agent Assessment Checklist:**
1. **Language Detection**: Check for `package.json` (Node.js/JavaScript), `requirements.txt` (Python), `Cargo.toml` (Rust), etc.
2. **Existing Tests**: Look for test files (`.test.`, `.spec.`, `_test.`) or test directories
3. **Framework Detection**: Check for existing test runners in dependencies
4. **Project Structure**: Analyze directory structure for testing patterns
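As a rough illustration of checklist steps 1–3, here is a minimal detection sketch in TypeScript (run with ts-node or tsx from the repository root; the manifest names and runner list are common conventions used for illustration, not an exhaustive check):
```typescript
// detect-stack.ts — rough project/framework detection (illustrative only)
import { existsSync, readFileSync } from 'node:fs';

interface Detection {
  language: string;
  hasTests: boolean;
  testRunner?: string;
}

function detectStack(root = '.'): Detection {
  const has = (file: string) => existsSync(`${root}/${file}`);

  // 1. Language detection via manifest files
  const manifests: Array<[string, string]> = [
    ['package.json', 'javascript/typescript'],
    ['requirements.txt', 'python'],
    ['pyproject.toml', 'python'],
    ['Cargo.toml', 'rust'],
    ['go.mod', 'go'],
  ];
  const language = manifests.find(([file]) => has(file))?.[1] ?? 'unknown';

  // 2. Existing tests: common test directories as a cheap signal
  const hasTests = ['tests', 'test', '__tests__'].some(has);

  // 3. Framework detection: known runners listed in package.json dependencies
  let testRunner: string | undefined;
  if (has('package.json')) {
    const pkg = JSON.parse(readFileSync(`${root}/package.json`, 'utf8'));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    testRunner = ['jest', 'vitest', 'mocha'].find((runner) => runner in deps);
  }

  return { language, hasTests, testRunner };
}

console.log(detectStack());
```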
### **JavaScript/Node.js Projects (Jest Setup)**
#### **Prerequisites Check**
```bash
# Verify Node.js project
ls package.json # Should exist
# Check for existing testing setup
ls jest.config.js jest.config.ts # Check for Jest config
grep -E "(jest|vitest|mocha)" package.json # Check for test runners
```
#### **Jest Installation & Configuration**
**Step 1: Install Dependencies**
```bash
# Core Jest dependencies
npm install --save-dev jest
# TypeScript support (if using TypeScript)
npm install --save-dev ts-jest @types/jest
# Additional useful packages
npm install --save-dev supertest @types/supertest # For API testing
npm install --save-dev jest-watch-typeahead # Enhanced watch mode
```
**Step 2: Create Jest Configuration**
Create `jest.config.js` with the following production-ready configuration:
```javascript
/** @type {import('jest').Config} */
module.exports = {
// Use ts-jest preset for TypeScript support
preset: 'ts-jest',
// Test environment
testEnvironment: 'node',
// Roots for test discovery
roots: ['<rootDir>/src', '<rootDir>/tests'],
// Test file patterns
testMatch: ['**/__tests__/**/*.ts', '**/?(*.)+(spec|test).ts'],
// Transform files
transform: {
'^.+\\.ts$': [
'ts-jest',
{
tsconfig: {
target: 'es2020',
module: 'commonjs',
esModuleInterop: true,
allowSyntheticDefaultImports: true,
skipLibCheck: true,
strict: false,
noImplicitAny: false,
},
},
],
'^.+\\.js$': [
'ts-jest',
{
useESM: false,
tsconfig: {
target: 'es2020',
module: 'commonjs',
esModuleInterop: true,
allowSyntheticDefaultImports: true,
allowJs: true,
},
},
],
},
// Module file extensions
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
// Transform ignore patterns - adjust for ES modules
transformIgnorePatterns: ['node_modules/(?!(your-es-module-deps|.*\\.mjs$))'],
// Coverage configuration
collectCoverage: true,
coverageDirectory: 'coverage',
coverageReporters: [
'text', // Console output
'text-summary', // Brief summary
'lcov', // For IDE integration
'html', // Detailed HTML report
],
// Files to collect coverage from
collectCoverageFrom: [
'src/**/*.ts',
'!src/**/*.d.ts',
'!src/**/*.test.ts',
'!src/**/index.ts', // Often just exports
'!src/generated/**', // Generated code
'!src/config/database.ts', // Database config (tested via integration)
],
// Coverage thresholds - TaskMaster standards
coverageThreshold: {
global: {
branches: 70,
functions: 80,
lines: 80,
statements: 80,
},
// Higher standards for critical business logic
'./src/utils/': {
branches: 85,
functions: 90,
lines: 90,
statements: 90,
},
'./src/middleware/': {
branches: 80,
functions: 85,
lines: 85,
statements: 85,
},
},
// Setup files
setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],
// Global teardown to prevent worker process leaks
globalTeardown: '<rootDir>/tests/teardown.ts',
// Module path mapping (if needed)
moduleNameMapper: {
'^@/(.*)$': '<rootDir>/src/$1',
},
// Clear mocks between tests
clearMocks: true,
// Restore mocks after each test
restoreMocks: true,
// Global test timeout
testTimeout: 10000,
// Projects for different test types
projects: [
// Unit tests - for pure functions only
{
displayName: 'unit',
testMatch: ['<rootDir>/src/**/*.test.ts'],
testPathIgnorePatterns: ['.*\\.integration\\.test\\.ts$', '/tests/'],
preset: 'ts-jest',
testEnvironment: 'node',
collectCoverageFrom: [
'src/**/*.ts',
'!src/**/*.d.ts',
'!src/**/*.test.ts',
'!src/**/*.integration.test.ts',
],
coverageThreshold: {
global: {
branches: 70,
functions: 80,
lines: 80,
statements: 80,
},
},
},
// Integration tests - real database/services
{
displayName: 'integration',
testMatch: [
'<rootDir>/src/**/*.integration.test.ts',
'<rootDir>/tests/integration/**/*.test.ts',
],
preset: 'ts-jest',
testEnvironment: 'node',
setupFilesAfterEnv: ['<rootDir>/tests/setup/integration.ts'],
testTimeout: 10000,
},
// E2E tests - full workflows
{
displayName: 'e2e',
testMatch: ['<rootDir>/tests/e2e/**/*.test.ts'],
preset: 'ts-jest',
testEnvironment: 'node',
setupFilesAfterEnv: ['<rootDir>/tests/setup/e2e.ts'],
testTimeout: 30000,
},
],
// Verbose output for better debugging
verbose: true,
// Run projects sequentially to avoid conflicts
maxWorkers: 1,
// Enable watch mode plugins
watchPlugins: ['jest-watch-typeahead/filename', 'jest-watch-typeahead/testname'],
};
```
**Step 3: Update package.json Scripts**
Add these scripts to your `package.json`:
```json
{
"scripts": {
"test": "jest",
"test:watch": "jest --watch",
"test:coverage": "jest --coverage",
"test:unit": "jest --selectProjects unit",
"test:integration": "jest --selectProjects integration",
"test:e2e": "jest --selectProjects e2e",
"test:ci": "jest --ci --coverage --watchAll=false"
}
}
```
**Step 4: Create Test Setup Files**
Create essential test setup files:
```typescript
// tests/setup.ts - Global setup
import { jest } from '@jest/globals';
// Global test configuration
beforeAll(() => {
// Set test timeout
jest.setTimeout(10000);
});
afterEach(() => {
// Clean up mocks after each test
jest.clearAllMocks();
});
```
```typescript
// tests/setup/integration.ts - Integration test setup
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
beforeAll(async () => {
// Connect to test database
await prisma.$connect();
});
afterAll(async () => {
// Cleanup and disconnect
await prisma.$disconnect();
});
beforeEach(async () => {
// Clean test data before each test
// Add your cleanup logic here
});
```
```typescript
// tests/teardown.ts - Global teardown
export default async () => {
// Global cleanup after all tests
console.log('Global test teardown complete');
};
```
**Step 5: Create Initial Test Structure**
```bash
# Create test directories
mkdir -p tests/{setup,fixtures,unit,integration,e2e}
mkdir -p tests/unit/src/{utils,services,middleware}
# Create sample test fixtures
mkdir tests/fixtures
```
### **Generic Testing Framework Setup (Any Language)**
#### **Framework Selection Guide**
**Python Projects:**
- **pytest**: Recommended for most Python projects
- **unittest**: Built-in, suitable for simple projects
- **Coverage**: Use `coverage.py` for code coverage
```bash
# Python setup example
pip install pytest pytest-cov
echo "[tool:pytest]" > pytest.ini
echo "testpaths = tests" >> pytest.ini
echo "addopts = --cov=src --cov-report=html --cov-report=term" >> pytest.ini
```
**Go Projects:**
- **Built-in testing**: Use Go's built-in `testing` package
- **Coverage**: Built-in with `go test -cover`
```bash
# Go setup example
go mod init your-project
mkdir -p tests
# Tests are typically *_test.go files alongside source
```
**Rust Projects:**
- **Built-in testing**: Use Rust's built-in test framework
- **cargo-tarpaulin**: For coverage analysis
```bash
# Rust setup example
cargo new your-project
cd your-project
cargo install cargo-tarpaulin # For coverage
```
**Java Projects:**
- **JUnit 5**: Modern testing framework
- **Maven/Gradle**: Build tools with testing integration
```xml
<!-- Maven pom.xml example -->
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>5.9.2</version>
<scope>test</scope>
</dependency>
```
#### **Universal Testing Principles**
**Coverage Standards (Adapt to Your Language):**
- **Global Minimum**: 70-80% line coverage
- **Critical Code**: 85-90% coverage
- **New Features**: Must meet or exceed standards
- **Legacy Code**: Gradual improvement strategy
**Test Organization:**
- **Unit Tests**: Fast, isolated, no external dependencies
- **Integration Tests**: Test component interactions
- **E2E Tests**: Test complete user workflows
- **Performance Tests**: Load and stress testing (if applicable)
**Naming Conventions:**
- **Test Files**: `*.test.*`, `*_test.*`, or language-specific patterns
- **Test Functions**: Descriptive names (e.g., `should_return_error_for_invalid_input`)
- **Test Directories**: Organized by test type and mirroring source structure
#### **TaskMaster Integration for Any Framework**
**Document Testing Setup in Subtasks:**
```bash
# Update subtask with testing framework setup
task-master update-subtask --id=X.Y --prompt="Testing framework setup:
- Installed [Framework Name] with coverage support
- Configured [Coverage Tool] with thresholds: 80% lines, 70% branches
- Created test directory structure: unit/, integration/, e2e/
- Added test scripts to build configuration
- All setup tests passing"
```
**Testing Framework Verification:**
```bash
# Verify setup works
[test-command] # e.g., npm test, pytest, go test, cargo test
# Check coverage reporting
[coverage-command] # e.g., npm run test:coverage
# Update task with verification
task-master update-subtask --id=X.Y --prompt="Testing framework verified:
- Sample tests running successfully
- Coverage reporting functional
- CI/CD integration ready
- Ready to begin TDD workflow"
```
## **Test-Driven Development (TDD) Integration**
### **Core TDD Cycle with Jest**
```bash
# 1. Start development with watch mode
npm run test:watch
# 2. Write failing test first
# Create test file: src/utils/newFeature.test.ts
# Write test that describes expected behavior
# 3. Implement minimum code to make test pass
# 4. Refactor while keeping tests green
# 5. Add edge cases and error scenarios
```
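To make steps 2–4 concrete, here is a minimal red/green example for a hypothetical `slugify` utility (the function name and file path are placeholders, not part of this project; normally the test and implementation would live in separate files, but they are combined here so the example runs as-is under Jest):
```typescript
// src/utils/slugify.test.ts — self-contained red/green illustration
// Step 1 (red): write the expectations first; they fail until slugify exists.
// Step 2 (green): implement the minimum below to make them pass, then refactor.

export function slugify(input: string): string {
  return input
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .replace(/\s+/g, '-'); // collapse whitespace into single dashes
}

describe('slugify', () => {
  it('lowercases and replaces spaces with dashes', () => {
    expect(slugify('Hello World')).toBe('hello-world');
  });

  it('strips characters that are not alphanumeric or whitespace', () => {
    expect(slugify('Hello, World!')).toBe('hello-world');
  });
});
```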
### **TDD Workflow Per Subtask**
```bash
# When starting a new subtask:
task-master set-status --id=4.1 --status=in-progress
# Begin TDD cycle:
npm run test:watch # Keep running during development
# Document TDD progress in subtask:
task-master update-subtask --id=4.1 --prompt="TDD Progress:
- Written 3 failing tests for core functionality
- Implemented basic feature, tests now passing
- Adding edge case tests for error handling"
# Complete subtask with test summary:
task-master update-subtask --id=4.1 --prompt="Implementation complete:
- Feature implemented with 8 unit tests
- Coverage: 95% statements, 88% branches
- All tests passing, TDD cycle complete"
```
## **Testing Commands & Usage**
### **Development Commands**
```bash
# Primary development command - use during coding
npm run test:watch # Watch mode with Jest
npm run test:watch -- --testNamePattern="auth" # Watch specific tests
# Targeted testing during development
npm run test:unit # Run only unit tests
npm run test:unit -- --coverage # Unit tests with coverage
# Integration testing when APIs are ready
npm run test:integration # Run integration tests
npm run test:integration -- --detectOpenHandles # Debug hanging tests
# End-to-end testing for workflows
npm run test:e2e # Run E2E tests
npm run test:e2e -- --testTimeout=30000 # Extended timeout for E2E (Jest flag is --testTimeout)
```
### **Quality Assurance Commands**
```bash
# Full test suite with coverage (before commits)
npm run test:coverage # Complete coverage analysis
# All tests (CI/CD pipeline)
npm test # Run all test projects
# Specific test file execution
npm test -- auth.test.ts # Run specific test file
npm test -- --testNamePattern="should handle errors" # Run specific tests
```
## **Test Implementation Patterns**
### **Unit Test Development**
```typescript
// ✅ DO: Follow established patterns from auth.test.ts
describe('FeatureName', () => {
beforeEach(() => {
jest.clearAllMocks();
// Setup mocks with proper typing
});
describe('functionName', () => {
it('should handle normal case', () => {
// Test implementation with specific assertions
});
it('should throw error for invalid input', async () => {
// Error scenario testing
await expect(functionName(invalidInput))
.rejects.toThrow('Specific error message');
});
});
});
```
### **Integration Test Development**
```typescript
// ✅ DO: Use supertest for API endpoint testing
import request from 'supertest';
import { app } from '../../src/app';
describe('POST /api/auth/register', () => {
beforeEach(async () => {
await integrationTestUtils.cleanupTestData();
});
it('should register user successfully', async () => {
const userData = createTestUser();
const response = await request(app)
.post('/api/auth/register')
.send(userData)
.expect(201);
expect(response.body).toMatchObject({
id: expect.any(String),
email: userData.email
});
// Verify database state
const user = await prisma.user.findUnique({
where: { email: userData.email }
});
expect(user).toBeTruthy();
});
});
```
### **E2E Test Development**
```typescript
// ✅ DO: Test complete user workflows
describe('User Authentication Flow', () => {
it('should complete registration → login → protected access', async () => {
// Step 1: Register
const userData = createTestUser();
await request(app)
.post('/api/auth/register')
.send(userData)
.expect(201);
// Step 2: Login
const loginResponse = await request(app)
.post('/api/auth/login')
.send({ email: userData.email, password: userData.password })
.expect(200);
const { token } = loginResponse.body;
// Step 3: Access protected resource
await request(app)
.get('/api/profile')
.set('Authorization', `Bearer ${token}`)
.expect(200);
}, 30000); // Extended timeout for E2E
});
```
## **Mocking & Test Utilities**
### **Established Mocking Patterns**
```typescript
// ✅ DO: Use established bcrypt mocking pattern
jest.mock('bcrypt');
import bcrypt from 'bcrypt';
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;
// ✅ DO: Use Prisma mocking for unit tests
jest.mock('@prisma/client', () => ({
PrismaClient: jest.fn().mockImplementation(() => ({
user: {
create: jest.fn(),
findUnique: jest.fn(),
},
$connect: jest.fn(),
$disconnect: jest.fn(),
})),
}));
```
### **Test Fixtures Usage**
```typescript
// ✅ DO: Use centralized test fixtures
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';
describe('User Service', () => {
it('should handle admin user creation', async () => {
const userData = createTestUser(adminUser);
// Test implementation
});
it('should reject invalid user data', async () => {
const userData = createTestUser(invalidUser);
// Error testing
});
});
```
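The fixtures imported above are not defined anywhere in this document; a minimal sketch of what `tests/fixtures/users.ts` could look like (field names and values are assumptions — adjust them to your actual user schema):
```typescript
// tests/fixtures/users.ts — centralized test data (illustrative shape, not a real schema)
export interface TestUser {
  email: string;
  password: string;
  role: 'user' | 'admin';
}

export const adminUser: Partial<TestUser> = { role: 'admin' };

export const invalidUser: Partial<TestUser> = { email: 'not-an-email', password: '' };

// Build a full user, letting each test override only the fields it cares about
export function createTestUser(overrides: Partial<TestUser> = {}): TestUser {
  return {
    email: `user-${Date.now()}@example.com`,
    password: 'P@ssw0rd!123',
    role: 'user',
    ...overrides,
  };
}
```
This shape matches the `../fixtures/users` import used in the example above and keeps test data generation in one place.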
## **Coverage Standards & Monitoring**
### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% utils, 85% middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change
### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage
# View detailed HTML report
open coverage/lcov-report/index.html
# Coverage files generated:
# - coverage/lcov-report/index.html # Detailed HTML report
# - coverage/lcov.info # LCOV format for IDE integration
# - coverage/coverage-final.json # JSON format for tooling
```
### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
it('should return true for valid input', () => {
expect(validateInput('valid')).toBe(true);
});
it('should return false for various invalid inputs', () => {
expect(validateInput('')).toBe(false); // Empty string
expect(validateInput(null)).toBe(false); // Null value
expect(validateInput(undefined)).toBe(false); // Undefined
});
it('should throw for unexpected input types', () => {
expect(() => validateInput(123)).toThrow('Invalid input type');
});
});
```
## **Testing During Development Phases**
### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress
# 2. Begin TDD cycle
npm run test:watch
# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"
# 4. Verify coverage before completion
npm run test:coverage
# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```
### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration
# Update integration test templates
# Replace placeholder tests with real endpoint calls
# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```
### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage # Verify all tests pass with coverage
npm run test:unit # Quick unit test verification
npm run test:integration # Integration test verification (if applicable)
# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y
- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established
Task X: Feature Y - Testing complete"
```
## **Error Handling & Debugging**
### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';
it('should debug complex operation', () => {
testUtils.withConsole(() => {
// Console output visible only for this test
console.log('Debug info:', complexData);
service.complexOperation();
});
});
// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
const promise = service.asyncOperation();
// Test intermediate state
expect(service.isProcessing()).toBe(true);
const result = await promise;
expect(result).toBe('expected');
expect(service.isProcessing()).toBe(false);
});
```
### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles
# Memory leaks in tests
npm run test:unit -- --logHeapUsage
# Slow tests identification
npm run test:coverage -- --verbose
# Mock not working properly
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```
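The mock-related checks above are easier to see in code. A minimal sketch (module paths are hypothetical, not taken from this codebase):
```typescript
// ✅ DO: declare the mock before importing the module under test
jest.mock('../services/email'); // hypothetical module path

import { sendWelcomeEmail } from '../services/email';
import { registerUser } from '../services/user';

const mockedSend = jest.mocked(sendWelcomeEmail); // preserves TypeScript typing

describe('registerUser', () => {
  beforeEach(() => {
    jest.clearAllMocks(); // reset call history between tests
  });

  it('sends a welcome email on registration', async () => {
    await registerUser({ email: 'test@example.com' });
    expect(mockedSend).toHaveBeenCalledTimes(1);
  });
});
```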
## **CI/CD Integration**
### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
run: |
npm ci
npm run test:coverage
- name: Upload coverage reports
uses: codecov/codecov-action@v3
with:
file: ./coverage/lcov.info
```
### **Pre-commit Hooks**
```bash
# Setup pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"
# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```
## **Test Maintenance & Evolution**
### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module
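A minimal sketch of steps 1-4 for a hypothetical `formatName` utility (the module and its behavior are invented for illustration; only the fixture import mirrors the patterns above):
```typescript
// tests/unit/format-name.test.ts (hypothetical new-feature test)
import { createTestUser, adminUser } from '../fixtures/users';
import { formatName } from '../../src/utils/format-name'; // hypothetical module

describe('formatName', () => {
  it('formats a fixture user as "Last, First"', () => {
    const user = createTestUser(adminUser);
    expect(formatName(user)).toContain(',');
  });

  it('throws on missing input', () => {
    expect(() => formatName(undefined as never)).toThrow();
  });
});
```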
### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities
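As an example of the first point, a placeholder integration test might be replaced with a real endpoint call roughly like this (supertest and the `app` export are assumptions; use whatever HTTP test client the templates in `tests/integration/` already rely on):
```typescript
// tests/integration/auth.test.ts (sketch only)
import request from 'supertest';
import { app } from '../../src/app'; // hypothetical Express app export

describe('POST /auth/login', () => {
  it('returns 401 for invalid credentials', async () => {
    const res = await request(app)
      .post('/auth/login')
      .send({ email: 'nobody@example.com', password: 'wrong' });
    expect(res.status).toBe(401);
  });
});
```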
### **Test Performance Optimization**
- **Parallel execution**: Jest runs tests in parallel by default
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible
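For the last bullet, one common pattern is to run each test's database work inside a transaction and abort it at the end. A sketch assuming a recent Prisma Client with interactive transactions:
```typescript
import { PrismaClient, Prisma } from '@prisma/client';

const prisma = new PrismaClient();

// Sentinel error used to abort the transaction on purpose
class Rollback extends Error {}

// Run `work` against a transactional client, then roll everything back
export async function withRollback(
  work: (tx: Prisma.TransactionClient) => Promise<void>
): Promise<void> {
  try {
    await prisma.$transaction(async (tx) => {
      await work(tx);
      throw new Rollback(); // force rollback so the test leaves no data behind
    });
  } catch (err) {
    if (!(err instanceof Rollback)) throw err; // real failures still surface
  }
}
```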
---
**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)


@@ -8,6 +8,7 @@ GROQ_API_KEY=YOUR_GROQ_KEY_HERE
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE
XAI_API_KEY=YOUR_XAI_KEY_HERE
AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE
OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE
# Google Vertex AI Configuration
VERTEX_PROJECT_ID=your-gcp-project-id

45 .github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,45 @@
# What type of PR is this?
<!-- Check one -->
- [ ] 🐛 Bug fix
- [ ] ✨ Feature
- [ ] 🔌 Integration
- [ ] 📝 Docs
- [ ] 🧹 Refactor
- [ ] Other:
## Description
<!-- What does this PR do? -->
## Related Issues
<!-- Link issues: Fixes #123 -->
## How to Test This
<!-- Quick steps to verify the changes work -->
```bash
# Example commands or steps
```
**Expected result:**
<!-- What should happen? -->
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
- [ ] Addressed CodeRabbit comments (if any)
- [ ] Linked related issues (if any)
- [ ] Manually tested the changes
## Changelog Entry
<!-- One line describing the change for users -->
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->
---
### For Maintainers
- [ ] PR title follows conventional commits
- [ ] Target branch correct
- [ ] Labels added
- [ ] Milestone assigned (if applicable)

39 .github/PULL_REQUEST_TEMPLATE/bugfix.md vendored Normal file

@@ -0,0 +1,39 @@
## 🐛 Bug Fix
### 🔍 Bug Description
<!-- Describe the bug -->
### 🔗 Related Issues
<!-- Fixes #123 -->
### ✨ Solution
<!-- How does this PR fix the bug? -->
## How to Test
### Steps that caused the bug:
1.
2.
**Before fix:**
**After fix:**
### Quick verification:
```bash
# Commands to verify the fix
```
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added unit tests (if applicable)
- [ ] Manually verified the fix works
---
### For Maintainers
- [ ] Root cause identified
- [ ] Fix doesn't introduce new issues
- [ ] CI passes


@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: 🐛 Bug Fix
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
about: Fix a bug in Task Master
- name: ✨ New Feature
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
about: Add a new feature to Task Master
- name: 🔌 New Integration
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
about: Add support for a new tool, IDE, or platform


@@ -0,0 +1,49 @@
## ✨ New Feature
### 📋 Feature Description
<!-- Brief description -->
### 🎯 Problem Statement
<!-- What problem does this feature solve? Why is it needed? -->
### 💡 Solution
<!-- How does this feature solve the problem? What's the approach? -->
### 🔗 Related Issues
<!-- Link related issues: Fixes #123, Part of #456 -->
## How to Use It
### Quick Start
```bash
# Basic usage example
```
### Example
<!-- Show a real use case -->
```bash
# Practical example
```
**What you should see:**
<!-- Expected behavior -->
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added tests for new functionality
- [ ] Manually tested in CLI mode
- [ ] Manually tested in MCP mode (if applicable)
## Changelog Entry
<!-- One-liner for release notes -->
---
### For Maintainers
- [ ] Feature aligns with project vision
- [ ] CIs pass
- [ ] Changeset file exists


@@ -0,0 +1,53 @@
# 🔌 New Integration
## What tool/IDE is being integrated?
<!-- Name and brief description -->
## What can users do with it?
<!-- Key benefits -->
## How to Enable
### Setup
```bash
task-master rules add [name]
# Any other setup steps
```
### Example Usage
<!-- Show it in action -->
```bash
# Real example
```
### Natural Language Hooks (if applicable)
```
"When tests pass, mark task as done"
# Other examples
```
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Integration fully tested with target tool/IDE
- [ ] Error scenarios tested
- [ ] Added integration tests
- [ ] Documentation includes setup guide
- [ ] Examples are working and clear
---
## For Maintainers
- [ ] Integration stability verified
- [ ] Documentation comprehensive
- [ ] Examples working

102 .github/scripts/check-pre-release-mode.mjs vendored Executable file

@@ -0,0 +1,102 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Get context from command line argument or environment
const context = process.argv[2] || process.env.GITHUB_WORKFLOW || 'manual';
function findRootDir(startDir) {
let currentDir = resolve(startDir);
while (currentDir !== '/') {
if (existsSync(join(currentDir, 'package.json'))) {
try {
const pkg = JSON.parse(
readFileSync(join(currentDir, 'package.json'), 'utf8')
);
if (pkg.name === 'task-master-ai' || pkg.repository) {
return currentDir;
}
} catch {}
}
currentDir = dirname(currentDir);
}
throw new Error('Could not find root directory');
}
function checkPreReleaseMode() {
console.log('🔍 Checking if branch is in pre-release mode...');
const rootDir = findRootDir(__dirname);
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
// Check if pre.json exists
if (!existsSync(preJsonPath)) {
console.log('✅ Not in active pre-release mode - safe to proceed');
process.exit(0);
}
try {
// Read and parse pre.json
const preJsonContent = readFileSync(preJsonPath, 'utf8');
const preJson = JSON.parse(preJsonContent);
// Check if we're in active pre-release mode
if (preJson.mode === 'pre') {
console.error('❌ ERROR: This branch is in active pre-release mode!');
console.error('');
// Provide context-specific error messages
if (context === 'Release Check' || context === 'pull_request') {
console.error(
'Pre-release mode must be exited before merging to main.'
);
console.error('');
console.error(
'To fix this, run the following commands in your branch:'
);
console.error(' npx changeset pre exit');
console.error(' git add -u');
console.error(' git commit -m "chore: exit pre-release mode"');
console.error(' git push');
console.error('');
console.error('Then update this pull request.');
} else if (context === 'Release' || context === 'main') {
console.error(
'Pre-release mode should only be used on feature branches, not main.'
);
console.error('');
console.error('To fix this, run the following commands locally:');
console.error(' npx changeset pre exit');
console.error(' git add -u');
console.error(' git commit -m "chore: exit pre-release mode"');
console.error(' git push origin main');
console.error('');
console.error('Then re-run this workflow.');
} else {
console.error('Pre-release mode must be exited before proceeding.');
console.error('');
console.error('To fix this, run the following commands:');
console.error(' npx changeset pre exit');
console.error(' git add -u');
console.error(' git commit -m "chore: exit pre-release mode"');
console.error(' git push');
}
process.exit(1);
}
console.log('✅ Not in active pre-release mode - safe to proceed');
process.exit(0);
} catch (error) {
console.error(`❌ ERROR: Unable to parse .changeset/pre.json aborting.`);
console.error(`Error details: ${error.message}`);
process.exit(1);
}
}
// Run the check
checkPreReleaseMode();

54 .github/scripts/pre-release.mjs vendored Executable file

@@ -0,0 +1,54 @@
#!/usr/bin/env node
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import {
findRootDir,
runCommand,
getPackageVersion,
createAndPushTag
} from './utils.mjs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = findRootDir(__dirname);
const extensionPkgPath = join(rootDir, 'apps', 'extension', 'package.json');
console.log('🚀 Starting pre-release process...');
// Check if we're in RC mode
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (!existsSync(preJsonPath)) {
console.error('⚠️ Not in RC mode. Run "npx changeset pre enter rc" first.');
process.exit(1);
}
try {
const preJson = JSON.parse(readFileSync(preJsonPath, 'utf8'));
if (preJson.tag !== 'rc') {
console.error(`⚠️ Not in RC mode. Current tag: ${preJson.tag}`);
process.exit(1);
}
} catch (error) {
console.error('Failed to read pre.json:', error.message);
process.exit(1);
}
// Get current extension version
const extensionVersion = getPackageVersion(extensionPkgPath);
console.log(`Extension version: ${extensionVersion}`);
// Run changeset publish for npm packages
console.log('📦 Publishing npm packages...');
runCommand('npx', ['changeset', 'publish']);
// Create tag for extension pre-release if it doesn't exist
const extensionTag = `extension-rc@${extensionVersion}`;
const tagCreated = createAndPushTag(extensionTag);
if (tagCreated) {
console.log('This will trigger the extension-pre-release workflow...');
}
console.log('✅ Pre-release process completed!');

30 .github/scripts/release.mjs vendored Executable file

@@ -0,0 +1,30 @@
#!/usr/bin/env node
import { existsSync, unlinkSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, runCommand } from './utils.mjs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = findRootDir(__dirname);
console.log('🚀 Starting release process...');
// Double-check we're not in pre-release mode (safety net)
const preJsonPath = join(rootDir, '.changeset', 'pre.json');
if (existsSync(preJsonPath)) {
console.log('⚠️ Warning: pre.json still exists. Removing it...');
unlinkSync(preJsonPath);
}
// Check if the extension version has changed and tag it
// This prevents changeset from trying to publish the private package
runCommand('node', [join(__dirname, 'tag-extension.mjs')]);
// Run changeset publish for npm packages
runCommand('npx', ['changeset', 'publish']);
console.log('✅ Release process completed!');
// The extension tag (if created) will trigger the extension-release workflow

33 .github/scripts/tag-extension.mjs vendored Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/env node
import assert from 'node:assert/strict';
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { findRootDir, createAndPushTag } from './utils.mjs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = findRootDir(__dirname);
// Read the extension's package.json
const extensionDir = join(rootDir, 'apps', 'extension');
const pkgPath = join(extensionDir, 'package.json');
let pkg;
try {
const pkgContent = readFileSync(pkgPath, 'utf8');
pkg = JSON.parse(pkgContent);
} catch (error) {
console.error('Failed to read package.json:', error.message);
process.exit(1);
}
// Ensure we have required fields
assert(pkg.name, 'package.json must have a name field');
assert(pkg.version, 'package.json must have a version field');
const tag = `${pkg.name}@${pkg.version}`;
// Create and push the tag if it doesn't exist
createAndPushTag(tag);

88 .github/scripts/utils.mjs vendored Executable file

@@ -0,0 +1,88 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join, dirname, resolve } from 'node:path';
// Find the root directory by looking for package.json with task-master-ai
export function findRootDir(startDir) {
let currentDir = resolve(startDir);
while (currentDir !== '/') {
const pkgPath = join(currentDir, 'package.json');
try {
const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
if (pkg.name === 'task-master-ai' || pkg.repository) {
return currentDir;
}
} catch {}
currentDir = dirname(currentDir);
}
throw new Error('Could not find root directory');
}
// Run a command with proper error handling
export function runCommand(command, args = [], options = {}) {
console.log(`Running: ${command} ${args.join(' ')}`);
const result = spawnSync(command, args, {
encoding: 'utf8',
stdio: 'inherit',
...options
});
if (result.status !== 0) {
console.error(`Command failed with exit code ${result.status}`);
process.exit(result.status);
}
return result;
}
// Get package version from a package.json file
export function getPackageVersion(packagePath) {
try {
const pkg = JSON.parse(readFileSync(packagePath, 'utf8'));
return pkg.version;
} catch (error) {
console.error(
`Failed to read package version from ${packagePath}:`,
error.message
);
process.exit(1);
}
}
// Check if a git tag exists on remote
export function tagExistsOnRemote(tag, remote = 'origin') {
const result = spawnSync('git', ['ls-remote', remote, tag], {
encoding: 'utf8'
});
return result.status === 0 && result.stdout.trim() !== '';
}
// Create and push a git tag if it doesn't exist
export function createAndPushTag(tag, remote = 'origin') {
// Check if tag already exists
if (tagExistsOnRemote(tag, remote)) {
console.log(`Tag ${tag} already exists on remote, skipping`);
return false;
}
console.log(`Creating new tag: ${tag}`);
// Create the tag locally
const tagResult = spawnSync('git', ['tag', tag]);
if (tagResult.status !== 0) {
console.error('Failed to create tag:', tagResult.error || tagResult.stderr);
process.exit(1);
}
// Push the tag to remote
const pushResult = spawnSync('git', ['push', remote, tag]);
if (pushResult.status !== 0) {
console.error('Failed to push tag:', pushResult.error || pushResult.stderr);
process.exit(1);
}
console.log(`✅ Successfully created and pushed tag: ${tag}`);
return true;
}

143 .github/workflows/extension-ci.yml vendored Normal file

@@ -0,0 +1,143 @@
name: Extension CI
on:
push:
branches:
- main
- next
paths:
- 'apps/extension/**'
- '.github/workflows/extension-ci.yml'
pull_request:
branches:
- main
- next
paths:
- 'apps/extension/**'
- '.github/workflows/extension-ci.yml'
permissions:
contents: read
jobs:
setup:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Cache node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install Extension Dependencies
working-directory: apps/extension
run: npm ci
timeout-minutes: 5
typecheck:
needs: setup
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Restore node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install if cache miss
working-directory: apps/extension
run: npm ci
timeout-minutes: 3
- name: Type Check Extension
working-directory: apps/extension
run: npm run check-types
env:
FORCE_COLOR: 1
build:
needs: setup
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Restore node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install if cache miss
working-directory: apps/extension
run: npm ci
timeout-minutes: 3
- name: Build Extension
working-directory: apps/extension
run: npm run build
env:
FORCE_COLOR: 1
- name: Package Extension
working-directory: apps/extension
run: npm run package
env:
FORCE_COLOR: 1
- name: Verify Package Contents
working-directory: apps/extension
run: |
echo "Checking vsix-build contents..."
ls -la vsix-build/
echo "Checking dist contents..."
ls -la vsix-build/dist/
echo "Checking package.json exists..."
test -f vsix-build/package.json
- name: Create VSIX Package (Test)
working-directory: apps/extension/vsix-build
run: npx vsce package --no-dependencies
env:
FORCE_COLOR: 1
- name: Upload Extension Artifact
uses: actions/upload-artifact@v4
with:
name: extension-package
path: |
apps/extension/vsix-build/*.vsix
apps/extension/dist/
retention-days: 30


@@ -0,0 +1,110 @@
name: Extension Pre-Release
on:
push:
tags:
- "extension-rc@*"
permissions:
contents: write
concurrency: extension-pre-release-${{ github.ref }}
jobs:
publish-extension-rc:
runs-on: ubuntu-latest
environment: extension-release
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Cache node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install Extension Dependencies
working-directory: apps/extension
run: npm ci
timeout-minutes: 5
- name: Type Check Extension
working-directory: apps/extension
run: npm run check-types
env:
FORCE_COLOR: 1
- name: Build Extension
working-directory: apps/extension
run: npm run build
env:
FORCE_COLOR: 1
- name: Package Extension
working-directory: apps/extension
run: npm run package
env:
FORCE_COLOR: 1
- name: Create VSIX Package (Pre-Release)
working-directory: apps/extension/vsix-build
run: npx vsce package --no-dependencies --pre-release
env:
FORCE_COLOR: 1
- name: Get VSIX filename
id: vsix-info
working-directory: apps/extension/vsix-build
run: |
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
if [ -z "$VSIX_FILE" ]; then
echo "Error: No VSIX file found"
exit 1
fi
echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
echo "Found VSIX: $VSIX_FILE"
- name: Publish to VS Code Marketplace (Pre-Release)
working-directory: apps/extension/vsix-build
run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
env:
VSCE_PAT: ${{ secrets.VSCE_PAT }}
FORCE_COLOR: 1
- name: Install Open VSX CLI
run: npm install -g ovsx
- name: Publish to Open VSX Registry (Pre-Release)
working-directory: apps/extension/vsix-build
run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}" --pre-release
env:
OVSX_PAT: ${{ secrets.OVSX_PAT }}
FORCE_COLOR: 1
- name: Upload Build Artifacts
uses: actions/upload-artifact@v4
with:
name: extension-pre-release-${{ github.ref_name }}
path: |
apps/extension/vsix-build/*.vsix
apps/extension/dist/
retention-days: 30
notify-success:
needs: publish-extension-rc
if: success()
runs-on: ubuntu-latest
steps:
- name: Success Notification
run: |
echo "🚀 Extension ${{ github.ref_name }} successfully published as pre-release!"
echo "📦 Available on VS Code Marketplace (Pre-Release)"
echo "🌍 Available on Open VSX Registry (Pre-Release)"

111 .github/workflows/extension-release.yml vendored Normal file

@@ -0,0 +1,111 @@
name: Extension Release
on:
push:
tags:
- "extension@*"
permissions:
contents: write
concurrency: extension-release-${{ github.ref }}
jobs:
publish-extension:
runs-on: ubuntu-latest
environment: extension-release
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Cache node_modules
uses: actions/cache@v4
with:
path: |
node_modules
*/*/node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install Extension Dependencies
working-directory: apps/extension
run: npm ci
timeout-minutes: 5
- name: Type Check Extension
working-directory: apps/extension
run: npm run check-types
env:
FORCE_COLOR: 1
- name: Build Extension
working-directory: apps/extension
run: npm run build
env:
FORCE_COLOR: 1
- name: Package Extension
working-directory: apps/extension
run: npm run package
env:
FORCE_COLOR: 1
- name: Create VSIX Package
working-directory: apps/extension/vsix-build
run: npx vsce package --no-dependencies
env:
FORCE_COLOR: 1
- name: Get VSIX filename
id: vsix-info
working-directory: apps/extension/vsix-build
run: |
VSIX_FILE=$(find . -maxdepth 1 -name "*.vsix" -type f | head -n1 | xargs basename)
if [ -z "$VSIX_FILE" ]; then
echo "Error: No VSIX file found"
exit 1
fi
echo "vsix-filename=$VSIX_FILE" >> "$GITHUB_OUTPUT"
echo "Found VSIX: $VSIX_FILE"
- name: Publish to VS Code Marketplace
working-directory: apps/extension/vsix-build
run: npx vsce publish --packagePath "${{ steps.vsix-info.outputs.vsix-filename }}"
env:
VSCE_PAT: ${{ secrets.VSCE_PAT }}
FORCE_COLOR: 1
- name: Install Open VSX CLI
run: npm install -g ovsx
- name: Publish to Open VSX Registry
working-directory: apps/extension/vsix-build
run: ovsx publish "${{ steps.vsix-info.outputs.vsix-filename }}"
env:
OVSX_PAT: ${{ secrets.OVSX_PAT }}
FORCE_COLOR: 1
- name: Upload Build Artifacts
uses: actions/upload-artifact@v4
with:
name: extension-release-${{ github.ref_name }}
path: |
apps/extension/vsix-build/*.vsix
apps/extension/dist/
retention-days: 90
notify-success:
needs: publish-extension
if: success()
runs-on: ubuntu-latest
steps:
- name: Success Notification
run: |
echo "🎉 Extension ${{ github.ref_name }} successfully published!"
echo "📦 Available on VS Code Marketplace"
echo "🌍 Available on Open VSX Registry"
echo "🏷️ GitHub release created: ${{ github.ref_name }}"


@@ -3,11 +3,13 @@ name: Pre-Release (RC)
on:
workflow_dispatch: # Allows manual triggering from GitHub UI/API
concurrency: pre-release-${{ github.ref }}
concurrency: pre-release-${{ github.ref_name }}
jobs:
rc:
runs-on: ubuntu-latest
# Only allow pre-releases on non-main branches
if: github.ref != 'refs/heads/main'
environment: extension-release
steps:
- uses: actions/checkout@v4
with:
@@ -16,7 +18,7 @@ jobs:
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
cache: "npm"
- name: Cache node_modules
uses: actions/cache@v4
@@ -32,10 +34,30 @@ jobs:
run: npm ci
timeout-minutes: 2
- name: Enter RC mode
- name: Enter RC mode (if not already in RC mode)
run: |
npx changeset pre exit || true
npx changeset pre enter rc
# Check if we're in pre-release mode with the "rc" tag
if [ -f .changeset/pre.json ]; then
MODE=$(jq -r '.mode' .changeset/pre.json 2>/dev/null || echo '')
TAG=$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')
if [ "$MODE" = "exit" ]; then
echo "Pre-release mode is in 'exit' state, re-entering RC mode..."
npx changeset pre enter rc
elif [ "$MODE" = "pre" ] && [ "$TAG" != "rc" ]; then
echo "In pre-release mode but with wrong tag ($TAG), switching to RC..."
npx changeset pre exit
npx changeset pre enter rc
elif [ "$MODE" = "pre" ] && [ "$TAG" = "rc" ]; then
echo "Already in RC pre-release mode"
else
echo "Unknown mode state: $MODE, entering RC mode..."
npx changeset pre enter rc
fi
else
echo "No pre.json found, entering RC mode..."
npx changeset pre enter rc
fi
- name: Version RC packages
run: npx changeset version
@@ -46,17 +68,16 @@ jobs:
- name: Create Release Candidate Pull Request or Publish Release Candidate to npm
uses: changesets/action@v1
with:
publish: npm run release
publish: node ./.github/scripts/pre-release.mjs
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
- name: Exit RC mode
run: npx changeset pre exit
VSCE_PAT: ${{ secrets.VSCE_PAT }}
OVSX_PAT: ${{ secrets.OVSX_PAT }}
- name: Commit & Push changes
uses: actions-js/push@master
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
branch: ${{ github.ref }}
message: 'chore: rc version bump'
message: "chore: rc version bump"

21 .github/workflows/release-check.yml vendored Normal file

@@ -0,0 +1,21 @@
name: Release Check
on:
pull_request:
branches:
- main
concurrency:
group: release-check-${{ github.head_ref }}
cancel-in-progress: true
jobs:
check-release-mode:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check release mode
run: node ./.github/scripts/check-pre-release-mode.mjs "pull_request"


@@ -6,6 +6,11 @@ on:
concurrency: ${{ github.workflow }}-${{ github.ref }}
permissions:
contents: write
pull-requests: write
id-token: write
jobs:
release:
runs-on: ubuntu-latest
@@ -33,13 +38,13 @@ jobs:
run: npm ci
timeout-minutes: 2
- name: Exit pre-release mode (safety check)
run: npx changeset pre exit || true
- name: Check pre-release mode
run: node ./.github/scripts/check-pre-release-mode.mjs "main"
- name: Create Release Pull Request or Publish to npm
uses: changesets/action@v1
with:
publish: npm run release
publish: node ./.github/scripts/release.mjs
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

7 .gitignore vendored

@@ -87,3 +87,10 @@ dev-debug.log
*.njsproj
*.sln
*.sw?
# VS Code extension test files
.vscode-test/
apps/extension/.vscode-test/
# apps/extension
apps/extension/vsix-build/


@@ -0,0 +1,23 @@
{
"enabled": true,
"name": "[TM] Code Change Task Tracker",
"description": "Track implementation progress by monitoring code changes",
"version": "1",
"when": {
"type": "fileEdited",
"patterns": [
"**/*.{js,ts,jsx,tsx,py,go,rs,java,cpp,c,h,hpp,cs,rb,php,swift,kt,scala,clj}",
"!**/node_modules/**",
"!**/vendor/**",
"!**/.git/**",
"!**/build/**",
"!**/dist/**",
"!**/target/**",
"!**/__pycache__/**"
]
},
"then": {
"type": "askAgent",
"prompt": "I just saved a source code file. Please:\n\n1. Check what task is currently 'in-progress' using 'tm list --status=in-progress'\n2. Look at the file I saved and summarize what was changed (considering the programming language and context)\n3. Update the task's notes with: 'tm update-subtask --id=<task_id> --prompt=\"Implemented: <summary_of_changes> in <file_path>\"'\n4. If the changes seem to complete the task based on its description, ask if I want to mark it as done"
}
}


@@ -0,0 +1,16 @@
{
"enabled": false,
"name": "[TM] Complexity Analyzer",
"description": "Analyze task complexity when new tasks are added",
"version": "1",
"when": {
"type": "fileEdited",
"patterns": [
".taskmaster/tasks/tasks.json"
]
},
"then": {
"type": "askAgent",
"prompt": "New tasks were added to tasks.json. For each new task:\n\n1. Run 'tm analyze-complexity --id=<task_id>'\n2. If complexity score is > 7, automatically expand it: 'tm expand --id=<task_id> --num=5'\n3. Show the complexity analysis results\n4. Suggest task dependencies based on the expanded subtasks"
}
}


@@ -0,0 +1,13 @@
{
"enabled": true,
"name": "[TM] Daily Standup Assistant",
"description": "Morning workflow summary and task selection",
"version": "1",
"when": {
"type": "userTriggered"
},
"then": {
"type": "askAgent",
"prompt": "Good morning! Please provide my daily standup summary:\n\n1. Run 'tm list --status=done' and show tasks completed in the last 24 hours\n2. Run 'tm list --status=in-progress' to show current work\n3. Run 'tm next' to suggest the highest priority task to start\n4. Show the dependency graph for upcoming work\n5. Ask which task I'd like to focus on today"
}
}


@@ -0,0 +1,13 @@
{
"enabled": true,
"name": "[TM] Git Commit Task Linker",
"description": "Link commits to tasks for traceability",
"version": "1",
"when": {
"type": "manual"
},
"then": {
"type": "askAgent",
"prompt": "I'm about to commit code. Please:\n\n1. Run 'git diff --staged' to see what's being committed\n2. Analyze the changes and suggest which tasks they relate to\n3. Generate a commit message in format: 'feat(task-<id>): <description>'\n4. Update the relevant tasks with a note about this commit\n5. Show the proposed commit message for approval"
}
}


@@ -0,0 +1,13 @@
{
"enabled": true,
"name": "[TM] PR Readiness Checker",
"description": "Validate tasks before creating a pull request",
"version": "1",
"when": {
"type": "manual"
},
"then": {
"type": "askAgent",
"prompt": "I'm about to create a PR. Please:\n\n1. List all tasks marked as 'done' in this branch\n2. For each done task, verify:\n - All subtasks are also done\n - Test files exist for new functionality\n - No TODO comments remain related to the task\n3. Generate a PR description listing completed tasks\n4. Suggest a PR title based on the main tasks completed"
}
}


@@ -0,0 +1,17 @@
{
"enabled": true,
"name": "[TM] Task Dependency Auto-Progression",
"description": "Automatically progress tasks when dependencies are completed",
"version": "1",
"when": {
"type": "fileEdited",
"patterns": [
".taskmaster/tasks/tasks.json",
".taskmaster/tasks/*.json"
]
},
"then": {
"type": "askAgent",
"prompt": "Check the tasks.json file for any tasks that just changed status to 'done'. For each completed task:\n\n1. Find all tasks that depend on it\n2. Check if those dependent tasks now have all their dependencies satisfied\n3. If a task has all dependencies met and is still 'pending', use the command 'tm set-status --id=<task_id> --status=in-progress' to start it\n4. Show me which tasks were auto-started and why"
}
}


@@ -0,0 +1,23 @@
{
"enabled": true,
"name": "[TM] Test Success Task Completer",
"description": "Mark tasks as done when their tests pass",
"version": "1",
"when": {
"type": "fileEdited",
"patterns": [
"**/*test*.{js,ts,jsx,tsx,py,go,java,rb,php,rs,cpp,cs}",
"**/*spec*.{js,ts,jsx,tsx,rb}",
"**/test_*.py",
"**/*_test.go",
"**/*Test.java",
"**/*Tests.cs",
"!**/node_modules/**",
"!**/vendor/**"
]
},
"then": {
"type": "askAgent",
"prompt": "A test file was just saved. Please:\n\n1. Identify the test framework/language and run the appropriate test command for this file (npm test, pytest, go test, cargo test, dotnet test, mvn test, etc.)\n2. If all tests pass, check which tasks mention this functionality\n3. For any matching tasks that are 'in-progress', ask if the passing tests mean the task is complete\n4. If confirmed, mark the task as done with 'tm set-status --id=<task_id> --status=done'"
}
}

19 .kiro/settings/mcp.json Normal file

@@ -0,0 +1,19 @@
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
}
}
}
}


@@ -0,0 +1,422 @@
---
inclusion: always
---
# Taskmaster Development Workflow
This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.
- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.
## The Basic Loop
The fundamental development cycle you will facilitate is:
1. **`list`**: Show the user what needs to be done.
2. **`next`**: Help the user decide what to work on.
3. **`show <id>`**: Provide details for a specific task.
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
5. **Implement**: The user writes the code and tests.
6. **`update-subtask`**: Log progress and findings on behalf of the user.
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
8. **Repeat**.
All your standard command executions should operate on the user's current task context, which defaults to `master`.
---
## Standard Development Workflow Process
### Simple Workflow (Default Starting Point)
For new projects or when users are getting started, operate within the `master` tag context:
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see @`taskmaster.md`) to generate initial tasks.json with tagged structure
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules kiro,windsurf`) or manage them later with `task-master rules add/remove` commands
- Begin coding sessions with `get_tasks` / `task-master list` (see @`taskmaster.md`) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see @`taskmaster.md`)
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.md`) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see @`taskmaster.md`)
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- View specific task details using `get_task` / `task-master show <id>` (see @`taskmaster.md`) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see @`taskmaster.md`) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Implement code following task details, dependencies, and project standards
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see @`taskmaster.md`)
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see @`taskmaster.md`)
---
## Leveling Up: Agent-Led Multi-Context Workflows
While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.
**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.
### When to Introduce Tags: Your Decision Patterns
Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.
#### Pattern 1: Simple Git Feature Branching
This is the most common and direct use case for tags.
- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
- **Tool to Use**: `task-master add-tag --from-branch`
#### Pattern 2: Team Collaboration
- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`
#### Pattern 3: Experiments or Risky Refactors
- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`
#### Pattern 4: Large Feature Initiatives (PRD-Driven)
This is a more structured approach for significant new features or epics.
- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.
#### Pattern 5: Version-Based Development
Tailor your approach based on the project maturity indicated by tag names.
- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
- **Your Approach**: Focus on speed and functionality over perfection
- **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
- **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
- **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
- **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*
- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
- **Your Approach**: Emphasize robustness, testing, and maintainability
- **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
- **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
- **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
- **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*
### Advanced Workflow (Tag-Based & PRD-Driven)
**When to Transition**: Recognize when the project has evolved beyond simple task management, or when the user has initialized Taskmaster on an existing codebase. Look for these indicators:
- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning
**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.
#### Master List Strategy (High-Value Focus)
Once you transition to tag-based workflows, the `master` tag should ideally contain only:
- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items
**What NOT to put in master**:
- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)
#### PRD-Driven Feature Development
**For New Major Features**:
1. **Identify the Initiative**: When user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
- `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
- `analyze_project_complexity --tag=feature-[name] --research`
- `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag
**For Existing Codebase Analysis**:
When users initialize Taskmaster on existing projects:
1. **Codebase Discovery**: Use your native tools for producing deep context about the code base. You may use `research` tool with `--tree` and `--files` to collect up to date information using the existing architecture as context.
2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
- Current state analysis (based on your codebase research)
- Proposed improvements or new features
- Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master
The `parse-prd` command's `--append` flag lets the user parse multiple PRDs into the same tag or across different tags. PRDs should be focused, and the number of tasks they are parsed into should be chosen strategically relative to each PRD's complexity and level of detail.
### Workflow Transition Examples
**Example 1: Simple → Team-Based**
```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```
**Example 2: Simple → PRD-Driven**
```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```
**Example 3: Existing Project → Strategic Planning**
```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```
---
## Primary Interaction: MCP Server vs. CLI
Taskmaster offers two primary ways to interact:
1. **MCP Server (Recommended for Integrated Tools)**:
- For AI agents and integrated development environments (like Kiro), interacting via the **MCP server is the preferred method**.
- The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
- This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
- Refer to @`mcp.md` for details on the MCP architecture and available tools.
- A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in @`taskmaster.md`.
- **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change.
- **Note**: MCP tools fully support tagged task lists with complete tag management capabilities.
2. **`task-master` CLI (For Users & Fallback)**:
- The global `task-master` command provides a user-friendly interface for direct terminal interaction.
- It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP.
- Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
- The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`).
- Refer to @`taskmaster.md` for a detailed command reference.
- **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration.
## How the Tag System Works (For Your Reference)
- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to @`taskmaster.md` for a full command list.
---
## Task Complexity Analysis
- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.md`) for comprehensive analysis
- Review complexity report via `complexity_report` / `task-master complexity-report` (see @`taskmaster.md`) for a formatted, readable version.
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand_task` tool/command
## Task Breakdown Process
- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.
## Implementation Drift Handling
- When implementation differs significantly from planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.
## Task Status Management
- Use 'pending' for tasks ready to be worked on
- Use 'done' for completed and verified tasks
- Use 'deferred' for postponed tasks
- Add custom status values as needed for project-specific workflows
## Task Structure Fields
- **id**: Unique identifier for the task (Example: `1`, `1.1`)
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`)
- Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
- This helps quickly identify which prerequisite tasks are blocking work
- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to task structure details (previously linked to `tasks.md`).
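Put together, a single task entry looks roughly like this (a sketch; the values reuse the examples above, and the trailing subtask fields are abbreviated):
```typescript
// Illustrative shape of one entry in tasks.json (values are the examples above)
const exampleTask = {
  id: 1,
  title: 'Initialize Repo',
  description: 'Create a new repository, set up initial structure.',
  status: 'pending',
  dependencies: [],
  priority: 'high',
  details: 'Use GitHub client ID/secret, handle callback, set session token.',
  testStrategy: "Deploy and call endpoint to confirm 'Hello World' response.",
  subtasks: [{ id: 1, title: 'Configure OAuth' /* ... */ }]
};
```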
## Configuration Management (Updated)
Taskmaster configuration is managed through two main mechanisms:
1. **`.taskmaster/config.json` File (Primary):**
* Located in the project root directory.
* Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
* **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and `tags` section for tag management configuration.
* **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
* **View/Set specific models via `task-master models` command or `models` MCP tool.**
* Created automatically when you run `task-master models --setup` for the first time or during tagged system migration.
2. **Environment Variables (`.env` / `mcp.json`):**
* Used **only** for sensitive API keys and specific endpoint URLs.
* Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
* For MCP/Kiro integration, configure these keys in the `env` section of `.kiro/mcp.json`.
* Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.md`).
3. **`.taskmaster/state.json` File (Tagged System State):**
* Tracks current tag context and migration status.
* Automatically created during tagged system migration.
* Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`.
**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.kiro/mcp.json`.
**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project.
## Rules Management
Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:
- **Available Profiles**: Claude Code, Cline, Codex, Kiro, Roo Code, Trae, Windsurf (claude, cline, codex, kiro, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules kiro,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.kiro/steering`, `.roo/rules`) with appropriate configuration files
## Determining the Next Task
- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
- Basic task details and description
- Implementation details
- Subtasks (if they exist)
- Contextual suggested actions
- Recommended before starting any new development work
- Respects your project's dependency structure
- Ensures tasks are completed in the appropriate sequence
- Provides ready-to-use commands for common task actions
## Viewing Specific Task Details
- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the next command, but for a specific task
- For parent tasks, shows all subtasks and their current status
- For subtasks, shows parent task information and relationship
- Provides contextual suggested actions appropriate for the specific task
- Useful for examining task details before implementation or checking status
## Managing Task Dependencies
- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
- Dependencies are visualized with status indicators in task listings and files
## Task Reorganization
- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy
- This command supports several use cases:
- Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
- Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
- Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
- Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
- Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
- Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
- The system includes validation to prevent data loss:
- Allows moving to non-existent IDs by creating placeholder tasks
- Prevents moving to existing task IDs that have content (to avoid overwriting)
- Validates source tasks exist before attempting to move them
- The system maintains proper parent-child relationships and dependency integrity
- Task files are automatically regenerated after the move operation
- This provides greater flexibility in organizing and refining your task structure as project understanding evolves
- This is especially useful when dealing with potential merge conflicts arising from teams creating tasks on separate branches. Resolve those conflicts easily by moving your tasks to unoccupied IDs while keeping theirs in place.
## Iterative Subtask Implementation
Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:
1. **Understand the Goal (Preparation):**
* Use `get_task` / `task-master show <subtaskId>` (see @`taskmaster.md`) to thoroughly understand the specific goals and requirements of the subtask.
2. **Initial Exploration & Planning (Iteration 1):**
* This is the first attempt at creating a concrete implementation plan.
* Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification.
* Determine the intended code changes (diffs) and their locations.
* Gather *all* relevant details from this exploration phase.
3. **Log the Plan:**
* Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
* Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`.
4. **Verify the Plan:**
* Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.
5. **Begin Implementation:**
* Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
* Start coding based on the logged plan.
6. **Refine and Log Progress (Iteration 2+):**
* As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches.
* **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy.
* **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What did not work...'` to append new findings.
* **Crucially, log:**
* What worked ("fundamental truths" discovered).
* What didn't work and why (to avoid repeating mistakes).
* Specific code snippets or configurations that were successful.
* Decisions made, especially if confirmed with user input.
* Any deviations from the initial plan and the reasoning.
* The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors.
7. **Review & Update Rules (Post-Implementation):**
* Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
* Identify any new or modified code patterns, conventions, or best practices established during the implementation.
* Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.md` and `self_improve.md`).
8. **Mark Task Complete:**
* After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.
9. **Commit Changes (If using Git):**
* Stage the relevant code changes and any updated/new rule files (`git add .`).
* Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
* Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`).
* Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.md`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.
10. **Proceed to Next Subtask:**
* Identify the next subtask (e.g., using `next_task` / `task-master next`).
## Code Analysis & Refactoring Techniques
- **Top-Level Function Search**:
- Useful for understanding module structure or planning refactors.
- Use grep/ripgrep to find exported functions/constants:
`rg "export (async function|function|const) \w+"` or similar patterns.
- Can help compare functions between files during migrations or identify potential naming conflicts.
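For example, a quick scan for exported symbols (the path and file-type filter are illustrative):
```bash
# List exported top-level functions/constants in a module (path is an example)
rg "export (async function|function|const) \w+" src/ --type ts
```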
---
*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*


@@ -0,0 +1,51 @@
---
inclusion: always
---
- **Required Rule Structure:**
```markdown
---
description: Clear, one-line description of what the rule enforces
globs: path/to/files/*.ext, other/path/**/*
alwaysApply: boolean
---
- **Main Points in Bold**
- Sub-points with details
- Examples and explanations
```
- **File References:**
- Use `[filename](mdc:path/to/file)` to reference files
- Example: [prisma.md](.kiro/steering/prisma.md) for rule references
- Example: [schema.prisma](mdc:prisma/schema.prisma) for code references
- **Code Examples:**
- Use language-specific code blocks
```typescript
// ✅ DO: Show good examples
const goodExample = true;
// ❌ DON'T: Show anti-patterns
const badExample = false;
```
- **Rule Content Guidelines:**
- Start with high-level overview
- Include specific, actionable requirements
- Show examples of correct implementation
- Reference existing code when possible
- Keep rules DRY by referencing other rules
- **Rule Maintenance:**
- Update rules when new patterns emerge
- Add examples from actual codebase
- Remove outdated patterns
- Cross-reference related rules
- **Best Practices:**
- Use bullet points for clarity
- Keep descriptions concise
- Include both DO and DON'T examples
- Reference actual code over theoretical examples
- Use consistent formatting across rules


@@ -0,0 +1,70 @@
---
inclusion: always
---
- **Rule Improvement Triggers:**
- New code patterns not covered by existing rules
- Repeated similar implementations across files
- Common error patterns that could be prevented
- New libraries or tools being used consistently
- Emerging best practices in the codebase
- **Analysis Process:**
- Compare new code with existing rules
- Identify patterns that should be standardized
- Look for references to external documentation
- Check for consistent error handling patterns
- Monitor test patterns and coverage
- **Rule Updates:**
- **Add New Rules When:**
- A new technology/pattern is used in 3+ files
- Common bugs could be prevented by a rule
- Code reviews repeatedly mention the same feedback
- New security or performance patterns emerge
- **Modify Existing Rules When:**
- Better examples exist in the codebase
- Additional edge cases are discovered
- Related rules have been updated
- Implementation details have changed
- **Example Pattern Recognition:**
```typescript
// If you see repeated patterns like:
const data = await prisma.user.findMany({
select: { id: true, email: true },
where: { status: 'ACTIVE' }
});
// Consider adding to [prisma.md](.kiro/steering/prisma.md):
// - Standard select fields
// - Common where conditions
// - Performance optimization patterns
```
- **Rule Quality Checks:**
- Rules should be actionable and specific
- Examples should come from actual code
- References should be up to date
- Patterns should be consistently enforced
- **Continuous Improvement:**
- Monitor code review comments
- Track common development questions
- Update rules after major refactors
- Add links to relevant documentation
- Cross-reference related rules
- **Rule Deprecation:**
- Mark outdated patterns as deprecated
- Remove rules that no longer apply
- Update references to deprecated rules
- Document migration paths for old patterns
- **Documentation Updates:**
- Keep examples synchronized with code
- Update references to external docs
- Maintain links between related rules
- Document breaking changes
Follow [kiro_rules.md](.kiro/steering/kiro_rules.md) for proper rule formatting and structure.


@@ -0,0 +1,556 @@
---
inclusion: always
---
# Taskmaster Tool & Command Reference
This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Kiro, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback.
**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback.
**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`.
**🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag.
---
## Initialization & Setup
### 1. Initialize Project (`init`)
* **MCP Tool:** `initialize_project`
* **CLI Command:** `task-master init [options]`
* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.`
* **Key CLI Options:**
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
* `--description <text>`: `Provide a brief description for your project.`
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
* **Key MCP Parameters/Options:**
* `projectName`: `Set the name for your project.` (CLI: `--name <name>`)
* `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`)
* `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`)
* `authorName`: `Author name.` (CLI: `--author <author>`)
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Kiro. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in `.taskmaster/templates/example_prd.txt`.
* **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`.
### 2. Parse PRD (`parse_prd`)
* **MCP Tool:** `parse_prd`
* **CLI Command:** `task-master parse-prd [file] [options]`
* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.`
* **Key Parameters/Options:**
* `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`)
* `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`)
* `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`)
* `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`)
* **Usage:** Useful for bootstrapping a project from an existing requirements document.
* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`.
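* **Example (CLI):** a minimal sketch; the file path and task count are illustrative:
```bash
# Generate roughly 15 top-level tasks from a PRD
task-master parse-prd .taskmaster/docs/prd.txt --num-tasks=15
```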
---
## AI Model Configuration
### 2. Manage Models (`models`)
* **MCP Tool:** `models`
* **CLI Command:** `task-master models [options]`
* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.`
* **Key MCP Parameters/Options:**
* `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`)
* `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`)
* `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`)
* `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`)
* `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`)
* `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically)
* `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically)
* **Key CLI Options:**
* `--set-main <model_id>`: `Set the primary model.`
* `--set-research <model_id>`: `Set the research model.`
* `--set-fallback <model_id>`: `Set the fallback model.`
* `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).`
* `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.`
* `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).`
* `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.`
* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`.
* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
* **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live.
* **API note:** API keys for the selected AI providers (based on their models) must exist in the `env` section of `mcp.json` to be accessible in MCP context, and in the local `.env` file for the CLI to read them.
* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80.
* **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
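* **Example (CLI):** a typical sequence; the model IDs shown are illustrative:
```bash
# View current configuration and available models
task-master models
# Assign models per role
task-master models --set-main claude-3-5-sonnet-20241022
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
task-master models --set-fallback gpt-4o-mini
# Or run the guided setup
task-master models --setup
```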
---
## Task Listing & Viewing
### 3. Get Tasks (`get_tasks`)
* **MCP Tool:** `get_tasks`
* **CLI Command:** `task-master list [options]`
* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.`
* **Key Parameters/Options:**
* `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`)
* `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`)
* `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Get an overview of the project status, often used at the start of a work session.
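* **Example (CLI):** a minimal sketch; the status filter is illustrative:
```bash
# List pending and in-progress tasks, with subtasks indented under their parents
task-master list --status=pending,in-progress --with-subtasks
```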
### 4. Get Next Task (`next_task`)
* **MCP Tool:** `next_task`
* **CLI Command:** `task-master next [options]`
* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.`
* **Key Parameters/Options:**
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`)
* **Usage:** Identify what to work on next according to the plan.
### 5. Get Task Details (`get_task`)
* **MCP Tool:** `get_task`
* **CLI Command:** `task-master show [id] [options]`
* **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`)
* `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown.
* **CRITICAL INFORMATION** If you need to collect information from multiple tasks, use comma-separated IDs (i.e. 1,2,3) to receive an array of tasks. Do not needlessly get tasks one at a time if you need to get many as that is wasteful.
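* **Example (CLI):** the IDs are illustrative:
```bash
# Show a single subtask
task-master show 1.2
# Fetch several tasks in one call instead of one at a time
task-master show 1,5,10.2
```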
---
## Task Creation & Modification
### 6. Add Task (`add_task`)
* **MCP Tool:** `add_task`
* **CLI Command:** `task-master add-task [options]`
* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.`
* **Key Parameters/Options:**
* `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`)
* `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`)
* `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`)
* `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Quickly add newly identified tasks during development.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
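* **Example (CLI):** a minimal sketch; the prompt, dependencies, and priority are illustrative:
```bash
task-master add-task --prompt="Implement user authentication using JWT" --dependencies=12,14 --priority=high --research
```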
### 7. Add Subtask (`add_subtask`)
* **MCP Tool:** `add_subtask`
* **CLI Command:** `task-master add-subtask [options]`
* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.`
* **Key Parameters/Options:**
* `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`)
* `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`)
* `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`)
* `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`)
* `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`)
* `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`)
* `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`)
* `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Break down tasks manually or reorganize existing tasks.
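* **Example (CLI):** a minimal sketch; the IDs and text are illustrative:
```bash
# Add a new subtask under parent task 15
task-master add-subtask --parent=15 --title="Write integration tests" --description="Cover the happy path and error cases"
# Convert existing task 20 into a subtask of task 15
task-master add-subtask --parent=15 --task-id=20
```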
### 8. Update Tasks (`update`)
* **MCP Tool:** `update`
* **CLI Command:** `task-master update [options]`
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
* **Key Parameters/Options:**
* `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 9. Update Task (`update_task`)
* **MCP Tool:** `update_task`
* **CLI Command:** `task-master update-task [options]`
* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.`
* **Key Parameters/Options:**
* `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
* `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
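* **Example (CLI):** a minimal sketch; the ID and prompt text are illustrative:
```bash
# Replace task 15's details based on new context
task-master update-task --id=15 --prompt="Switch password hashing from argon2 to bcrypt"
# Append a timestamped note instead of replacing the details
task-master update-task --id=15 --append --prompt="Hashing helper implemented; wiring into signup flow next"
```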
### 10. Update Subtask (`update_subtask`)
* **MCP Tool:** `update_subtask`
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
* `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
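* **Example (CLI):** a minimal sketch; the ID and note are illustrative:
```bash
task-master update-subtask --id=5.2 --prompt="Token refresh works; middleware ordering was the tricky part"
```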
### 11. Set Task Status (`set_task_status`)
* **MCP Tool:** `set_task_status`
* **CLI Command:** `task-master set-status [options]`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
* **Key Parameters/Options:**
* `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
* `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Mark progress as tasks move through the development cycle.
### 12. Remove Task (`remove_task`)
* **MCP Tool:** `remove_task`
* **CLI Command:** `task-master remove-task [options]`
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
* `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.
---
## Task Structure & Breakdown
### 13. Expand Task (`expand_task`)
* **MCP Tool:** `expand_task`
* **CLI Command:** `task-master expand [options]`
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
* **Key Parameters/Options:**
* `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
* `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
* `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
* `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
* `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
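* **Example (CLI):** a minimal sketch; the ID and subtask count are illustrative:
```bash
# Replace any existing subtasks of task 5 with roughly four research-backed subtasks
task-master expand --id=5 --num=4 --force --research
```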
### 14. Expand All Tasks (`expand_all`)
* **MCP Tool:** `expand_all`
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
* **Key Parameters/Options:**
* `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
* `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
* `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
* `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
* `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 15. Clear Subtasks (`clear_subtasks`)
* **MCP Tool:** `clear_subtasks`
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
* `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
* `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.
### 16. Remove Subtask (`remove_subtask`)
* **MCP Tool:** `remove_subtask`
* **CLI Command:** `task-master remove-subtask [options]`
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
* **Key Parameters/Options:**
* `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
* `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
* `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
### 17. Move Task (`move_task`)
* **MCP Tool:** `move_task`
* **CLI Command:** `task-master move [options]`
* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
* **Key Parameters/Options:**
* `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
* `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
* Moving a task to become a subtask
* Moving a subtask to become a standalone task
* Moving a subtask to a different parent
* Reordering subtasks within the same parent
* Moving a task to a new, non-existent ID (automatically creates placeholders)
* Moving multiple tasks at once with comma-separated IDs
* **Validation Features:**
* Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
* Prevents moving to existing task IDs that already have content (to avoid overwriting)
* Validates that source tasks exist before attempting to move them
* Maintains proper parent-child relationships
* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.
---
## Dependency Management
### 18. Add Dependency (`add_dependency`)
* **MCP Tool:** `add_dependency`
* **CLI Command:** `task-master add-dependency [options]`
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
* `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
* **Usage:** Establish the correct order of execution between tasks.
### 19. Remove Dependency (`remove_dependency`)
* **MCP Tool:** `remove_dependency`
* **CLI Command:** `task-master remove-dependency [options]`
* **Description:** `Remove a dependency relationship between two Taskmaster tasks.`
* **Key Parameters/Options:**
* `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
* `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
* `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Update task relationships when the order of execution changes.
### 20. Validate Dependencies (`validate_dependencies`)
* **MCP Tool:** `validate_dependencies`
* **CLI Command:** `task-master validate-dependencies [options]`
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
* **Key Parameters/Options:**
* `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Audit the integrity of your task dependencies.
### 21. Fix Dependencies (`fix_dependencies`)
* **MCP Tool:** `fix_dependencies`
* **CLI Command:** `task-master fix-dependencies [options]`
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
* **Key Parameters/Options:**
* `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Clean up dependency errors automatically.
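* **Example (CLI):** a typical audit-then-repair sequence covering this command and the previous one:
```bash
task-master validate-dependencies
task-master fix-dependencies
```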
---
## Analysis & Reporting
### 22. Analyze Project Complexity (`analyze_project_complexity`)
* **MCP Tool:** `analyze_project_complexity`
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
* `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`)
* `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
* `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
* `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
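* **Example (CLI):** a minimal sketch; the threshold is illustrative:
```bash
# Recommend expansion for tasks scoring 6 or higher
task-master analyze-complexity --threshold=6 --research
```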
### 23. View Complexity Report (`complexity_report`)
* **MCP Tool:** `complexity_report`
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
* `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.
---
## File Management
### 24. Generate Task Files (`generate`)
* **MCP Tool:** `generate`
* **CLI Command:** `task-master generate [options]`
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
* **Key Parameters/Options:**
* `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
* `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically.
---
## AI-Powered Research
### 25. Research (`research`)
* **MCP Tool:** `research`
* **CLI Command:** `task-master research [options]`
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
* **Key Parameters/Options:**
* `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
* `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
* `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
* `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
* `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
* `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
* `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`)
* `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`)
* `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`)
* `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`)
* `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
* Get fresh information beyond knowledge cutoff dates
* Research latest best practices, library updates, security patches
* Find implementation examples for specific technologies
* Validate approaches against current industry standards
* Get contextual advice based on project files and tasks
* **When to Consider Using Research:**
* **Before implementing any task** - Research current best practices
* **When encountering new technologies** - Get up-to-date implementation guidance (libraries, APIs, etc.)
* **For security-related tasks** - Find latest security recommendations
* **When updating dependencies** - Research breaking changes and migration guides
* **For performance optimization** - Get current performance best practices
* **When debugging complex issues** - Research known solutions and workarounds
* **Research + Action Pattern:**
* Use `research` to gather fresh information
* Use `update_subtask` to commit findings with timestamps
* Use `update_task` to incorporate research into task details
* Use `add_task` with research flag for informed task creation
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.
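* **Example (CLI):** a minimal sketch; the query, IDs, and file paths are illustrative:
```bash
# Research with task and file context, saving the findings to subtask 15.2
task-master research "What are the latest best practices for React Query v5?" --id=15,16.2 --files=src/api.js --save-to=15.2
```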
---
## Tag Management
This new suite of commands allows you to manage different task contexts (tags).
### 26. List Tags (`tags`)
* **MCP Tool:** `list_tags`
* **CLI Command:** `task-master tags [options]`
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
* **Key Parameters/Options:**
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)
### 27. Add Tag (`add_tag`)
* **MCP Tool:** `add_tag`
* **CLI Command:** `task-master add-tag <tagName> [options]`
* **Description:** `Create a new, empty tag context, or copy tasks from another tag.`
* **Key Parameters/Options:**
* `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional)
* `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`)
* `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
* `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
* `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
### 28. Delete Tag (`delete_tag`)
* **MCP Tool:** `delete_tag`
* **CLI Command:** `task-master delete-tag <tagName> [options]`
* **Description:** `Permanently delete a tag and all of its associated tasks.`
* **Key Parameters/Options:**
* `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
* `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
### 29. Use Tag (`use_tag`)
* **MCP Tool:** `use_tag`
* **CLI Command:** `task-master use-tag <tagName>`
* **Description:** `Switch your active task context to a different tag.`
* **Key Parameters/Options:**
* `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
### 30. Rename Tag (`rename_tag`)
* **MCP Tool:** `rename_tag`
* **CLI Command:** `task-master rename-tag <oldName> <newName>`
* **Description:** `Rename an existing tag.`
* **Key Parameters/Options:**
* `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
* `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
* `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
### 31. Copy Tag (`copy_tag`)
* **MCP Tool:** `copy_tag`
* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]`
* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.`
* **Key Parameters/Options:**
* `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional)
* `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional)
* `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`)
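* **Example (CLI):** a small tag workflow combining the commands above; the tag name and description are illustrative:
```bash
task-master add-tag feature-auth --copy-from-current -d "Auth feature work"
task-master use-tag feature-auth
task-master tags --show-metadata
```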
---
## Miscellaneous
### 32. Sync Readme (`sync-readme`) -- experimental
* **MCP Tool:** N/A
* **CLI Command:** `task-master sync-readme [options]`
* **Description:** `Exports your task list to your project's README.md file, useful for showcasing progress.`
* **Key Parameters/Options:**
* `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
* `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`)
* `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`)
---
## Environment Variables Configuration (Updated)
Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:
* **API Keys (Required for corresponding provider):**
* `ANTHROPIC_API_KEY`
* `PERPLEXITY_API_KEY`
* `OPENAI_API_KEY`
* `GOOGLE_API_KEY`
* `MISTRAL_API_KEY`
* `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
* `OPENROUTER_API_KEY`
* `XAI_API_KEY`
* `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (optional, provider-specific; can also be set in `.taskmaster/config.json`):**
* `AZURE_OPENAI_ENDPOINT`
* `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)
**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.kiro/mcp.json`** file (for MCP/Kiro integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via `task-master models` command or `models` MCP tool.
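A minimal `.env` sketch (values are placeholders; include only the keys for the providers you actually use):
```bash
# .env in the project root
ANTHROPIC_API_KEY=your_key_here
PERPLEXITY_API_KEY=your_key_here
# Only needed when using the Ollama provider
OLLAMA_BASE_URL=http://localhost:11434/api
```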
---
For details on how these commands fit into the development process, see the [dev_workflow.md](.kiro/steering/dev_workflow.md).


@@ -0,0 +1,59 @@
---
inclusion: always
---
# Taskmaster Hook-Driven Workflow
## Core Principle: Hooks Automate Task Management
When working with Taskmaster in Kiro, **avoid manually marking tasks as done**. The hook system automatically handles task completion based on:
- **Test Success**: `[TM] Test Success Task Completer` detects passing tests and prompts for task completion
- **Code Changes**: `[TM] Code Change Task Tracker` monitors implementation progress
- **Dependency Chains**: `[TM] Task Dependency Auto-Progression` auto-starts dependent tasks
## AI Assistant Workflow
Follow this pattern when implementing features:
1. **Implement First**: Write code, create tests, make changes
2. **Save Frequently**: Hooks trigger on file saves to track progress automatically
3. **Let Hooks Decide**: Allow hooks to detect completion rather than manually setting status
4. **Respond to Prompts**: Confirm when hooks suggest task completion
## Key Rules for AI Assistants
- **Never use `tm set-status --status=done`** unless hooks fail to detect completion
- **Always write tests** - they provide the most reliable completion signal
- **Save files after implementation** - this triggers progress tracking
- **Trust hook suggestions** - if no completion prompt appears, more work may be needed
## Automatic Behaviors
The hook system provides:
- **Progress Logging**: Implementation details automatically added to task notes
- **Evidence-Based Completion**: Tasks marked done only when criteria are met
- **Dependency Management**: Next tasks auto-started when dependencies complete
- **Natural Flow**: Focus on coding, not task management overhead
## Manual Override Cases
Only manually set task status for:
- Documentation-only tasks
- Tasks without testable outcomes
- Emergency fixes without proper test coverage
Use `tm set-status` sparingly - prefer hook-driven completion.
## Implementation Pattern
```
1. Implement feature → Save file
2. Write tests → Save test file
3. Tests pass → Hook prompts completion
4. Confirm completion → Next task auto-starts
```
This workflow ensures proper task tracking while maintaining development flow.

.mcp.json

@@ -0,0 +1,9 @@
{
"mcpServers": {
"task-master-ai": {
"type": "stdio",
"command": "npx",
"args": ["-y", "task-master-ai"]
}
}
}

.taskmaster/CLAUDE.md

@@ -0,0 +1,417 @@
# Task Master AI - Agent Integration Guide
## Essential Commands
### Core Workflow Commands
```bash
# Project Setup
task-master init # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt # Generate tasks from PRD document
task-master models --setup # Configure AI models interactively
# Daily Development Workflow
task-master list # Show all tasks with status
task-master next # Get next available task to work on
task-master show <id> # View detailed task information (e.g., task-master show 1.2)
task-master set-status --id=<id> --status=done # Mark task complete
# Task Management
task-master add-task --prompt="description" --research # Add new task with AI assistance
task-master expand --id=<id> --research --force # Break task into subtasks
task-master update-task --id=<id> --prompt="changes" # Update specific task
task-master update --from=<id> --prompt="changes" # Update multiple tasks from ID onwards
task-master update-subtask --id=<id> --prompt="notes" # Add implementation notes to subtask
# Analysis & Planning
task-master analyze-complexity --research # Analyze task complexity
task-master complexity-report # View complexity analysis
task-master expand --all --research # Expand all eligible tasks
# Dependencies & Organization
task-master add-dependency --id=<id> --depends-on=<id> # Add task dependency
task-master move --from=<id> --to=<id> # Reorganize task hierarchy
task-master validate-dependencies # Check for dependency issues
task-master generate # Update task markdown files (usually auto-called)
```
## Key Files & Project Structure
### Core Files
- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage
### Claude Code Integration Files
- `CLAUDE.md` - Auto-loaded context for Claude Code (this file)
- `.claude/settings.json` - Claude Code tool allowlist and preferences
- `.claude/commands/` - Custom slash commands for repeated workflows
- `.mcp.json` - MCP server configuration (project-specific)
### Directory Structure
```
project/
├── .taskmaster/
│ ├── tasks/ # Task files directory
│ │ ├── tasks.json # Main task database
│ │ ├── task-1.md # Individual task files
│ │ └── task-2.md
│ ├── docs/ # Documentation directory
│ │ ├── prd.txt # Product requirements
│ ├── reports/ # Analysis reports directory
│ │ └── task-complexity-report.json
│ ├── templates/ # Template files
│ │ └── example_prd.txt # Example PRD template
│ └── config.json # AI models & settings
├── .claude/
│ ├── settings.json # Claude Code configuration
│ └── commands/ # Custom slash commands
├── .env # API keys
├── .mcp.json # MCP configuration
└── CLAUDE.md # This file - auto-loaded by Claude Code
```
## MCP Integration
Task Master provides an MCP server that Claude Code can connect to. Configure in `.mcp.json`:
```json
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "your_key_here",
"PERPLEXITY_API_KEY": "your_key_here",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
}
}
```
### Essential MCP Tools
```javascript
help; // = shows available taskmaster commands
// Project setup
initialize_project; // = task-master init
parse_prd; // = task-master parse-prd
// Daily workflow
get_tasks; // = task-master list
next_task; // = task-master next
get_task; // = task-master show <id>
set_task_status; // = task-master set-status
// Task management
add_task; // = task-master add-task
expand_task; // = task-master expand
update_task; // = task-master update-task
update_subtask; // = task-master update-subtask
update; // = task-master update
// Analysis
analyze_project_complexity; // = task-master analyze-complexity
complexity_report; // = task-master complexity-report
```
## Claude Code Workflow Integration
### Standard Development Workflow
#### 1. Project Initialization
```bash
# Initialize Task Master
task-master init
# Create or obtain PRD, then parse it
task-master parse-prd .taskmaster/docs/prd.txt
# Analyze complexity and expand tasks
task-master analyze-complexity --research
task-master expand --all --research
```
If tasks already exist, another PRD (containing only new information) can be parsed with `parse-prd --append`. This appends the newly generated tasks to the existing task list.
#### 2. Daily Development Loop
```bash
# Start each session
task-master next # Find next available task
task-master show <id> # Review task details
# During implementation, log code and implementation context into the relevant tasks and subtasks
task-master update-subtask --id=<id> --prompt="implementation notes..."
# Complete tasks
task-master set-status --id=<id> --status=done
```
#### 3. Multi-Claude Workflows
For complex projects, use multiple Claude Code sessions:
```bash
# Terminal 1: Main implementation
cd project && claude
# Terminal 2: Testing and validation
cd project-test-worktree && claude
# Terminal 3: Documentation updates
cd project-docs-worktree && claude
```
### Custom Slash Commands
Create `.claude/commands/taskmaster-next.md`:
```markdown
Find the next available Task Master task and show its details.
Steps:
1. Run `task-master next` to get the next task
2. If a task is available, run `task-master show <id>` for full details
3. Provide a summary of what needs to be implemented
4. Suggest the first implementation step
```
Create `.claude/commands/taskmaster-complete.md`:
```markdown
Complete a Task Master task: $ARGUMENTS
Steps:
1. Review the current task with `task-master show $ARGUMENTS`
2. Verify all implementation is complete
3. Run any tests related to this task
4. Mark as complete: `task-master set-status --id=$ARGUMENTS --status=done`
5. Show the next available task with `task-master next`
```
## Tool Allowlist Recommendations
Add to `.claude/settings.json`:
```json
{
"allowedTools": [
"Edit",
"Bash(task-master *)",
"Bash(git commit:*)",
"Bash(git add:*)",
"Bash(npm run *)",
"mcp__task_master_ai__*"
]
}
```
## Configuration & Setup
### API Keys Required
At least **one** of these API keys must be configured:
- `ANTHROPIC_API_KEY` (Claude models) - **Recommended**
- `PERPLEXITY_API_KEY` (Research features) - **Highly recommended**
- `OPENAI_API_KEY` (GPT models)
- `GOOGLE_API_KEY` (Gemini models)
- `MISTRAL_API_KEY` (Mistral models)
- `OPENROUTER_API_KEY` (Multiple models)
- `XAI_API_KEY` (Grok models)
An API key is required for every provider used by any of the three roles (main, research, fallback) configured via the `models` command.
### Model Configuration
```bash
# Interactive setup (recommended)
task-master models --setup
# Set specific models
task-master models --set-main claude-3-5-sonnet-20241022
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
task-master models --set-fallback gpt-4o-mini
```
## Task Structure & IDs
### Task ID Format
- Main tasks: `1`, `2`, `3`, etc.
- Subtasks: `1.1`, `1.2`, `2.1`, etc.
- Sub-subtasks: `1.1.1`, `1.1.2`, etc.
### Task Status Values
- `pending` - Ready to work on
- `in-progress` - Currently being worked on
- `done` - Completed and verified
- `deferred` - Postponed
- `cancelled` - No longer needed
- `blocked` - Waiting on external factors
### Task Fields
```json
{
"id": "1.2",
"title": "Implement user authentication",
"description": "Set up JWT-based auth system",
"status": "pending",
"priority": "high",
"dependencies": ["1.1"],
"details": "Use bcrypt for hashing, JWT for tokens...",
"testStrategy": "Unit tests for auth functions, integration tests for login flow",
"subtasks": []
}
```
## Claude Code Best Practices with Task Master
### Context Management
- Use `/clear` between different tasks to maintain focus
- This CLAUDE.md file is automatically loaded for context
- Use `task-master show <id>` to pull specific task context when needed
### Iterative Implementation
1. `task-master show <subtask-id>` - Understand requirements
2. Explore codebase and plan implementation
3. `task-master update-subtask --id=<id> --prompt="detailed plan"` - Log plan
4. `task-master set-status --id=<id> --status=in-progress` - Start work
5. Implement code following logged plan
6. `task-master update-subtask --id=<id> --prompt="what worked/didn't work"` - Log progress
7. `task-master set-status --id=<id> --status=done` - Complete task
### Complex Workflows with Checklists
For large migrations or multi-step processes:
1. Create a markdown PRD file describing the new changes: `touch task-migration-checklist.md` (PRDs can be `.txt` or `.md`)
2. Use Taskmaster to parse the new PRD with `task-master parse-prd --append` (also available via MCP)
3. Use Taskmaster to expand the newly generated tasks into subtasks. Consider running `analyze-complexity` with the correct `--from` and `--to` IDs (the new IDs) to identify the ideal number of subtasks for each task, then expand them.
4. Work through items systematically, checking them off as completed
5. Use `task-master update-subtask` to log progress on each task/subtask, and update or research them before/during implementation if you get stuck
### Git Integration
Task Master works well with `gh` CLI:
```bash
# Create PR for completed task
gh pr create --title "Complete task 1.2: User authentication" --body "Implements JWT auth system as specified in task 1.2"
# Reference task in commits
git commit -m "feat: implement JWT auth (task 1.2)"
```
### Parallel Development with Git Worktrees
```bash
# Create worktrees for parallel task development
git worktree add ../project-auth feature/auth-system
git worktree add ../project-api feature/api-refactor
# Run Claude Code in each worktree
cd ../project-auth && claude # Terminal 1: Auth work
cd ../project-api && claude # Terminal 2: API work
```
## Troubleshooting
### AI Commands Failing
```bash
# Check API keys are configured
cat .env # For CLI usage
# Verify model configuration
task-master models
# Test with different model
task-master models --set-fallback gpt-4o-mini
```
### MCP Connection Issues
- Check `.mcp.json` configuration
- Verify Node.js installation
- Use `--mcp-debug` flag when starting Claude Code
- Use CLI as fallback if MCP unavailable
### Task File Sync Issues
```bash
# Regenerate task files from tasks.json
task-master generate
# Fix dependency issues
task-master fix-dependencies
```
DO NOT RE-INITIALIZE. That will not do anything beyond re-adding the same Taskmaster core files.
## Important Notes
### AI-Powered Operations
These commands make AI calls and may take up to a minute:
- `parse_prd` / `task-master parse-prd`
- `analyze_project_complexity` / `task-master analyze-complexity`
- `expand_task` / `task-master expand`
- `expand_all` / `task-master expand --all`
- `add_task` / `task-master add-task`
- `update` / `task-master update`
- `update_task` / `task-master update-task`
- `update_subtask` / `task-master update-subtask`
### File Management
- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmaster/config.json` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- Run `task-master generate` after manual changes to tasks.json
### Claude Code Session Management
- Use `/clear` frequently to maintain focused context
- Create custom slash commands for repeated Task Master workflows
- Configure tool allowlist to streamline permissions
- Use headless mode for automation: `claude -p "task-master next"`
### Multi-Task Updates
- Use `update --from=<id>` to update multiple future tasks
- Use `update-task --id=<id>` for single task updates
- Use `update-subtask --id=<id>` for implementation logging
### Research Mode
- Add `--research` flag for research-based AI enhancement
- Requires a research-capable model API key such as Perplexity (`PERPLEXITY_API_KEY`) in the environment
- Provides more informed task creation and updates
- Recommended for complex technical tasks
---
_This guide ensures Claude Code has immediate access to Task Master's essential functionality for agentic development workflows._

View File

@@ -1,21 +1,21 @@
{
  "models": {
    "main": {
-     "provider": "groq",
-     "modelId": "llama-3.1-8b-instant",
-     "maxTokens": 131072,
+     "provider": "anthropic",
+     "modelId": "claude-3-7-sonnet-20250219",
+     "maxTokens": 120000,
      "temperature": 0.2
    },
    "research": {
-     "provider": "groq",
-     "modelId": "llama-3.3-70b-versatile",
-     "maxTokens": 32768,
+     "provider": "perplexity",
+     "modelId": "sonar",
+     "maxTokens": 8700,
      "temperature": 0.1
    },
    "fallback": {
      "provider": "anthropic",
-     "modelId": "claude-3-7-sonnet-20250219",
-     "maxTokens": 128000,
+     "modelId": "claude-3-5-sonnet-20241022",
+     "maxTokens": 8192,
      "temperature": 0.2
    }
  },

View File

@@ -0,0 +1,153 @@
---
title: Research Session
query: "do we need to add new commands or can we just weaponize the gh cli tool"
date: 2025-08-01
time: 2:27:52 p.m.
timestamp: 2025-08-01T11:27:52.249Z
exchanges: 3
---
# Research Session
## Initial Query
**Question:** do we need to add new commands or can we just weaponize the gh cli tool
**Response:**
For the implementation of the `task-master git` command suite to automate git workflows (Task 97), it is possible to leverage the existing **GitHub CLI (`gh`) tool** rather than creating entirely new git commands from scratch. The GitHub CLI provides a rich set of commands that cover many GitHub-related operations such as branch management, pull request handling, workflow triggering, and release management, which can be scripted and integrated into your CLI tool[1][2][3][4].
### Key Points Supporting Weaponizing GitHub CLI:
- **Comprehensive GitHub Operations**: The `gh` CLI supports creating branches, managing pull requests, releases, tags, and even triggering and monitoring GitHub Actions workflows. This aligns well with the requirements of automated branch management, commit generation, and PR description generation in Task 97[1][3].
- **Scripting and Automation Friendly**: GitHub CLI commands can be scripted and integrated into custom workflows or CLI tools. You can create aliases or wrap `gh` commands inside your own CLI commands to provide a seamless user experience while relying on the battle-tested GitHub CLI for underlying operations[1][3].
- **Authentication and Permissions**: `gh` uses environment variables like `GH_TOKEN` or `GITHUB_TOKEN` for authentication, which can be securely managed in CI/CD pipelines or local environments. This simplifies integration without needing to implement custom authentication logic[3][4].
- **Extensibility**: If some specific git workflow features are not directly supported by `gh`, you can extend it with custom scripts or combine it with native git commands. This hybrid approach reduces the need to reinvent core git operations while allowing customization[1].
### Considerations for Your Project:
- Since Task 97 requires **automated branch management** (branch creation, validation, switching, cleanup), **intelligent commit generation**, and **PR description generation**, you can implement these by invoking `gh` commands programmatically within your `task-master git` commands.
- For example, branch creation and switching can be done with native git commands wrapped in your CLI, while PR creation and management can be handled via `gh pr create`, `gh pr merge`, etc.
- You may still need to implement some **custom logic** for commit type detection, branch name validation, and PR description generation based on your task metadata, but the underlying git and GitHub operations can be delegated to `gh`.
- This approach aligns with your existing CLI foundation (Task 2) and leverages a well-supported tool, reducing development effort and increasing reliability.
### Summary
You **do not necessarily need to add entirely new git commands** for Task 97. Instead, you can **weaponize the GitHub CLI (`gh`) tool** by wrapping and scripting its commands within your `task-master git` command suite. This will allow you to automate git workflows efficiently, maintain consistency, and leverage GitHub's API capabilities without reinventing the wheel[1][2][3][4].
If there are gaps in `gh` functionality for your specific workflow needs, you can supplement with native git commands or custom scripts, but the core git and GitHub operations should be handled by `gh` for best results.
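As a rough illustration of this wrapping approach, a hypothetical `task-master git` helper could shell out to native git and then to `gh pr create` from Node. The helper name, branch naming scheme, and option set below are illustrative only, not part of the existing CLI:

```typescript
import { execFileSync } from 'node:child_process';

// Hypothetical helper for a future `task-master git` command: push the task
// branch with native git, then delegate PR creation to the GitHub CLI.
export function createTaskPullRequest(taskId: string, taskTitle: string): void {
	// Branch naming scheme is illustrative.
	const branch = `task/${taskId}`;
	execFileSync('git', ['push', '--set-upstream', 'origin', branch], { stdio: 'inherit' });
	execFileSync(
		'gh',
		[
			'pr',
			'create',
			'--title',
			`Complete task ${taskId}: ${taskTitle}`,
			'--body',
			`Implements task ${taskId} as tracked by Task Master.`
		],
		{ stdio: 'inherit' }
	);
}
```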
---
## Follow-up 1
**Question:** how does this relate to git worktrees? i hear that they get super complicated when trying to implement them.
**Response:**
Git worktrees allow you to have multiple working directories attached to the same Git repository, each checked out to different branches simultaneously. This enables parallel development workflows without the overhead of cloning the repository multiple times or constantly switching branches in a single working directory[2][3].
However, **implementing or managing git worktrees can get complicated** for several reasons:
- **Complexity for users and tooling:** Managing multiple active worktrees requires careful tracking of which directory corresponds to which branch. This can be confusing and error-prone, especially for novices or when many worktrees are in use simultaneously. Mistakes like committing to the wrong branch or working in the wrong directory are common risks[1].
- **Maintenance overhead:** Worktrees accumulate over time and need to be pruned or cleaned up regularly (`git worktree prune`) to avoid clutter and potential confusion. This adds an extra maintenance step to the workflow[1].
- **Syncing and update challenges:** Each worktree is a separate checkout and can become outdated if not regularly updated from the main repository. This can lead to merge conflicts or redundant work if branches diverge significantly[1].
- **Tooling and integration limitations:** Not all Git tools, scripts, or GUIs fully support worktrees or are optimized for them. This can cause inconsistencies or require additional custom tooling to manage worktrees effectively[1].
- **Performance considerations:** Managing multiple worktrees simultaneously can have performance implications, especially in very large repositories, due to the overhead of maintaining multiple checkouts and metadata[1].
In the context of your project, which involves automating git workflows and possibly wrapping or extending git/GitHub CLI commands, these complexities imply:
- If you consider supporting git worktrees in your `task-master git` command suite, you will need to handle the complexity of managing multiple directories and branches carefully.
- You might need to implement additional logic or tooling to track and manage worktrees, avoid user confusion, and ensure synchronization between them.
- Since your project already plans to leverage the GitHub CLI (`gh`) and native git commands, you could integrate worktree commands (`git worktree add`, `git worktree remove`, `git worktree list`) but should be prepared for the added complexity in workflow management and user documentation.
- Custom wrappers or managers (like the one described in [3]) can help reduce friction by automating worktree creation, switching, and cleanup, which might be a useful approach to consider.
Overall, **git worktrees provide powerful parallel branch management but introduce complexity in implementation and user experience** that must be carefully managed through tooling, automation, and clear documentation[1][3][4].
---
**Summary Table of Git Worktree Challenges and Considerations**
| Aspect | Description | Implication for Project |
|-------------------------|-----------------------------------------------------------------------------------------------|----------------------------------------------------------|
| User Complexity | Confusing multiple directories and branches | Need clear UI/UX and safeguards |
| Maintenance Overhead | Requires pruning and cleanup of stale worktrees | Automate cleanup or provide commands |
| Syncing Issues | Worktrees can become outdated or diverge | Implement update checks and sync workflows |
| Tooling Support | Limited support in some Git tools and GUIs | May require custom tooling or wrappers |
| Performance | Overhead in large repos with many worktrees | Monitor and optimize performance |
| Workflow Automation | Manual worktree management is tedious | Build custom managers or wrappers to simplify usage |
---
Given your project's focus on automating git workflows and leveraging existing CLI tools, **implementing git worktree support is feasible but will require careful design to handle these complexities effectively**. You may want to start with basic worktree commands and incrementally add automation and safeguards as needed.
---
## Follow-up 2
**Question:** since we already have a way to have separated task lists per branch, couldn't we do the same thing for worktrees? each taskmaster list (tag) would have its own worktree based on the name of the tag?
**Response:**
Yes, you can apply a similar approach used for separated task lists per branch to git worktrees by associating each taskmaster list (tag) with its own git worktree named after the tag. This means each tag would correspond to a dedicated worktree directory checked out to a branch or commit related to that tag, enabling isolated parallel work environments for each task list.
### How This Maps to Git Worktrees
- **Git worktrees** allow multiple working directories linked to the same repository, each checked out to a different branch or commit[1][2]. This lets you work on multiple branches simultaneously without switching contexts in a single directory.
- When you create a worktree with `git worktree add <path> [<branch>]`, Git creates a new linked working tree at `<path>`, optionally checking out a specified branch or creating a new one[1][2]. Naming the worktree directory after the tag (taskmaster list) is straightforward and semantically clear.
- This setup mirrors your existing branch-based task list separation but extends it to physical directories, which can improve workflow isolation and reduce context switching.
### Benefits for Your Project
- **Isolated environments per task list/tag:** Each tag's worktree can have its own checked-out branch, allowing independent development, testing, or review without interference.
- **Parallel workflows:** Developers can switch between worktrees (directories) instead of switching branches in one directory, reducing errors and improving productivity.
- **Automation potential:** Your `task-master git` commands can automate creating, switching, and cleaning up these worktrees based on tags, similar to how you manage branches and task lists.
### Challenges and Considerations
- **Complexity in management:** Git worktrees add complexity in tracking multiple directories and ensuring users operate in the correct one[1]. Your CLI should provide clear commands and feedback to avoid confusion.
- **Maintenance overhead:** Worktrees need pruning and cleanup (`git worktree prune` or `git worktree remove`) to avoid clutter[1]. Automating cleanup in your tool would be beneficial.
- **Syncing and updates:** Each worktree is a separate checkout and can become outdated. Your automation should include commands to update or rebase worktrees as needed.
- **Tooling support:** Not all Git tools or scripts fully support worktrees, so your CLI wrappers should handle edge cases and provide a consistent experience[1][3].
### Practical Implementation Ideas
- When a new tag (taskmaster list) is created, run `git worktree add` to create a new worktree directory named after the tag (see the sketch after this list).
- Store metadata linking tags to worktree paths in your taskmaster configuration to track and manage them.
- Provide commands like `task-master worktree list`, `task-master worktree add <tag>`, and `task-master worktree remove <tag>` to manage these worktrees.
- Integrate with GitHub CLI (`gh`) and native git commands to automate branch creation, PR handling, and syncing within each worktree.
- Consider inspiration from existing tools like [wt](https://github.com/taecontrol/wt) that automate worktree environment setup and teardown[3].
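A minimal sketch of the tag-to-worktree mapping idea, again shelling out from Node; the directory layout and the use of a branch named after the tag are assumptions, not settled design decisions:

```typescript
import { execFileSync } from 'node:child_process';
import path from 'node:path';

// Hypothetical: create a linked worktree for a Task Master tag, checked out
// to a branch named after the tag (created or reset with -B if needed).
export function addWorktreeForTag(repoRoot: string, tag: string): string {
	const worktreePath = path.resolve(repoRoot, '..', `${path.basename(repoRoot)}-${tag}`);
	execFileSync('git', ['worktree', 'add', '-B', tag, worktreePath], {
		cwd: repoRoot,
		stdio: 'inherit'
	});
	return worktreePath;
}

// Cleanup counterpart: remove the worktree and prune stale metadata.
export function removeWorktreeForTag(repoRoot: string, worktreePath: string): void {
	execFileSync('git', ['worktree', 'remove', worktreePath], { cwd: repoRoot, stdio: 'inherit' });
	execFileSync('git', ['worktree', 'prune'], { cwd: repoRoot, stdio: 'inherit' });
}
```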
### Summary
Using git worktrees per taskmaster tag is a feasible and potentially powerful extension of your current branch-based task list separation. It offers isolated, parallel working directories aligned with your tags, improving workflow clarity and reducing branch-switching overhead. However, it requires careful tooling and automation to manage complexity, maintenance, and synchronization effectively. Given your project's focus on automating git workflows and leveraging CLI tools, implementing this feature incrementally with robust user feedback and cleanup commands would be a practical approach.
---
*Generated by Task Master Research Command*
*Timestamp: 2025-08-01T11:27:52.249Z*

View File

@@ -0,0 +1,343 @@
# Product Requirements Document: tm-core Package - Parse PRD Feature
## Project Overview
Create a TypeScript package named `tm-core` at `packages/tm-core` that implements parse-prd functionality using class-based architecture similar to the existing AI providers pattern.
## Design Patterns & Architecture
### Patterns to Apply
1. **Factory Pattern**: Use for `ProviderFactory` to create AI provider instances
2. **Strategy Pattern**: Use for `IAIProvider` implementations and `IStorage` implementations
3. **Facade Pattern**: Use for `TaskMasterCore` as the main API entry point
4. **Template Method Pattern**: Use for `BaseProvider` abstract class
5. **Dependency Injection**: Use throughout for testability (pass dependencies via constructor)
6. **Repository Pattern**: Use for `FileStorage` to abstract data persistence
### Naming Conventions
- **Files**: kebab-case (e.g., `task-parser.ts`, `file-storage.ts`)
- **Classes**: PascalCase (e.g., `TaskParser`, `FileStorage`)
- **Interfaces**: PascalCase with 'I' prefix (e.g., `IStorage`, `IAIProvider`)
- **Methods**: camelCase (e.g., `parsePRD`, `loadTasks`)
- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_MODEL`)
- **Type aliases**: PascalCase (e.g., `TaskStatus`, `ParseOptions`)
## Exact Folder Structure Required
```
packages/tm-core/
├── src/
│ ├── index.ts
│ ├── types/
│ │ └── index.ts
│ ├── interfaces/
│ │ ├── index.ts # Barrel export
│ │ ├── storage.interface.ts
│ │ ├── ai-provider.interface.ts
│ │ └── configuration.interface.ts
│ ├── tasks/
│ │ ├── index.ts # Barrel export
│ │ └── task-parser.ts
│ ├── ai/
│ │ ├── index.ts # Barrel export
│ │ ├── base-provider.ts
│ │ ├── provider-factory.ts
│ │ ├── prompt-builder.ts
│ │ └── providers/
│ │ ├── index.ts # Barrel export
│ │ ├── anthropic-provider.ts
│ │ ├── openai-provider.ts
│ │ └── google-provider.ts
│ ├── storage/
│ │ ├── index.ts # Barrel export
│ │ └── file-storage.ts
│ ├── config/
│ │ ├── index.ts # Barrel export
│ │ └── config-manager.ts
│ ├── utils/
│ │ ├── index.ts # Barrel export
│ │ └── id-generator.ts
│ └── errors/
│ ├── index.ts # Barrel export
│ └── task-master-error.ts
├── tests/
│ ├── task-parser.test.ts
│ ├── integration/
│ │ └── parse-prd.test.ts
│ └── mocks/
│ └── mock-provider.ts
├── package.json
├── tsconfig.json
├── tsup.config.js
└── jest.config.js
```
## Specific Implementation Requirements
### 1. Create types/index.ts
Define these exact TypeScript interfaces (a possible rendering is sketched after this list):
- `Task` interface with fields: id, title, description, status, priority, complexity, dependencies, subtasks, metadata, createdAt, updatedAt, source
- `Subtask` interface with fields: id, title, description, completed
- `TaskMetadata` interface with fields: parsedFrom, aiProvider, version, tags (optional)
- Type literals: `TaskStatus` = 'pending' | 'in-progress' | 'completed' | 'blocked'
- Type literals: `TaskPriority` = 'low' | 'medium' | 'high' | 'critical'
- Type literals: `TaskComplexity` = 'simple' | 'moderate' | 'complex'
- `ParseOptions` interface with fields: dryRun (optional), additionalContext (optional), tag (optional), maxTasks (optional)
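One way the definitions above could look in code; concrete field types such as `string` ids and ISO-string dates are assumptions where the list leaves them open:

```typescript
export type TaskStatus = 'pending' | 'in-progress' | 'completed' | 'blocked';
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
export type TaskComplexity = 'simple' | 'moderate' | 'complex';

export interface TaskMetadata {
	parsedFrom: string;
	aiProvider: string;
	version: string;
	tags?: string[];
}

export interface Subtask {
	id: string;
	title: string;
	description: string;
	completed: boolean;
}

export interface Task {
	id: string;
	title: string;
	description: string;
	status: TaskStatus;
	priority: TaskPriority;
	complexity: TaskComplexity;
	dependencies: string[];
	subtasks: Subtask[];
	metadata: TaskMetadata;
	createdAt: string;
	updatedAt: string;
	source: string;
}

export interface ParseOptions {
	dryRun?: boolean;
	additionalContext?: string;
	tag?: string;
	maxTasks?: number;
}
```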
### 2. Create interfaces/storage.interface.ts
Define `IStorage` interface with these exact methods (transcribed as code after this list):
- `loadTasks(tag?: string): Promise<Task[]>`
- `saveTasks(tasks: Task[], tag?: string): Promise<void>`
- `appendTasks(tasks: Task[], tag?: string): Promise<void>`
- `updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>`
- `deleteTask(id: string, tag?: string): Promise<void>`
- `exists(tag?: string): Promise<boolean>`
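Transcribed directly, with the `Task` import path following the folder structure defined earlier:

```typescript
import type { Task } from '../types';

export interface IStorage {
	loadTasks(tag?: string): Promise<Task[]>;
	saveTasks(tasks: Task[], tag?: string): Promise<void>;
	appendTasks(tasks: Task[], tag?: string): Promise<void>;
	updateTask(id: string, task: Partial<Task>, tag?: string): Promise<void>;
	deleteTask(id: string, tag?: string): Promise<void>;
	exists(tag?: string): Promise<boolean>;
}
```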
### 3. Create interfaces/ai-provider.interface.ts
Define `IAIProvider` interface with these exact methods:
- `generateCompletion(prompt: string, options?: AIOptions): Promise<string>`
- `calculateTokens(text: string): number`
- `getName(): string`
- `getModel(): string`
Define `AIOptions` interface with fields: temperature (optional), maxTokens (optional), systemPrompt (optional)
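Both contracts, transcribed directly from the lists above:

```typescript
export interface AIOptions {
	temperature?: number;
	maxTokens?: number;
	systemPrompt?: string;
}

export interface IAIProvider {
	generateCompletion(prompt: string, options?: AIOptions): Promise<string>;
	calculateTokens(text: string): number;
	getName(): string;
	getModel(): string;
}
```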
### 4. Create interfaces/configuration.interface.ts
Define `IConfiguration` interface with fields:
- `projectPath: string`
- `aiProvider: string`
- `apiKey?: string`
- `aiOptions?: AIOptions`
- `mainModel?: string`
- `researchModel?: string`
- `fallbackModel?: string`
- `tasksPath?: string`
- `enableTags?: boolean`
### 5. Create tasks/task-parser.ts
Create class `TaskParser` with:
- Constructor accepting `aiProvider: IAIProvider` and `config: IConfiguration`
- Private property `promptBuilder: PromptBuilder`
- Public method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Private method `readPRD(prdPath: string): Promise<string>`
- Private method `extractTasks(aiResponse: string): Partial<Task>[]`
- Private method `enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[]`
- Apply **Dependency Injection** pattern via constructor (see the skeleton after this list)
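A skeleton of the class above showing the constructor-injection pattern; the method bodies are placeholders rather than the real parsing logic, and `PromptBuilder` is assumed to take no constructor arguments:

```typescript
import { promises as fs } from 'node:fs';
import type { IAIProvider } from '../interfaces/ai-provider.interface';
import type { IConfiguration } from '../interfaces/configuration.interface';
import type { ParseOptions, Task } from '../types';
import { PromptBuilder } from '../ai/prompt-builder';

export class TaskParser {
	private promptBuilder: PromptBuilder;

	// Dependencies are injected rather than constructed internally,
	// so tests can pass a mock provider and a stub configuration.
	constructor(
		private aiProvider: IAIProvider,
		private config: IConfiguration
	) {
		this.promptBuilder = new PromptBuilder();
	}

	async parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]> {
		const prdContent = await this.readPRD(prdPath);
		const prompt = this.promptBuilder.buildParsePrompt(prdContent, options);
		const aiResponse = await this.aiProvider.generateCompletion(prompt);
		return this.enrichTasks(this.extractTasks(aiResponse), prdPath);
	}

	private async readPRD(prdPath: string): Promise<string> {
		return fs.readFile(prdPath, 'utf-8');
	}

	private extractTasks(aiResponse: string): Partial<Task>[] {
		// Placeholder: the real version would parse the JSON the prompt asks for.
		return JSON.parse(aiResponse) as Partial<Task>[];
	}

	private enrichTasks(rawTasks: Partial<Task>[], prdPath: string): Task[] {
		// Placeholder: fill ids, timestamps, and metadata (including prdPath) here.
		return rawTasks as Task[];
	}
}
```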
### 6. Create ai/base-provider.ts
Copy existing base-provider.js and convert to TypeScript abstract class:
- Abstract class `BaseProvider` implementing `IAIProvider`
- Protected properties: `apiKey: string`, `model: string`
- Constructor accepting `apiKey: string` and `options: { model?: string }`
- Abstract methods matching IAIProvider interface
- Abstract method `getDefaultModel(): string`
- Apply **Template Method** pattern for common provider logic
### 7. Create ai/provider-factory.ts
Create class `ProviderFactory` with:
- Static method `create(config: { provider: string; apiKey?: string; model?: string }): Promise<IAIProvider>`
- Switch statement for providers: 'anthropic', 'openai', 'google'
- Dynamic imports for each provider
- Throw error for unknown providers
- Apply **Factory** pattern for creating provider instances
Example implementation structure:
```typescript
switch (provider.toLowerCase()) {
	case 'anthropic': {
		const { AnthropicProvider } = await import('./providers/anthropic-provider.js');
		return new AnthropicProvider(apiKey, { model });
	}
	// ...cases for 'openai' and 'google' follow the same pattern...
	default:
		throw new Error(`Unknown AI provider: ${provider}`);
}
```
### 8. Create ai/providers/anthropic-provider.ts
Create class `AnthropicProvider` extending `BaseProvider`:
- Import Anthropic SDK: `import { Anthropic } from '@anthropic-ai/sdk'`
- Private property `client: Anthropic`
- Implement all abstract methods from BaseProvider
- Default model: 'claude-3-sonnet-20240229'
- Handle API errors and wrap with meaningful messages
### 9. Create ai/providers/openai-provider.ts (placeholder)
Create class `OpenAIProvider` extending `BaseProvider`:
- Import OpenAI SDK when implemented
- For now, throw error: "OpenAI provider not yet implemented"
### 10. Create ai/providers/google-provider.ts (placeholder)
Create class `GoogleProvider` extending `BaseProvider`:
- Import Google Generative AI SDK when implemented
- For now, throw error: "Google provider not yet implemented"
### 11. Create ai/prompt-builder.ts
Create class `PromptBuilder` with:
- Method `buildParsePrompt(prdContent: string, options: ParseOptions = {}): string`
- Method `buildExpandPrompt(task: string, context?: string): string`
- Use template literals for prompt construction
- Include specific JSON format instructions in prompts
### 12. Create storage/file-storage.ts
Create class `FileStorage` implementing `IStorage`:
- Private property `basePath: string` set to `{projectPath}/.taskmaster`
- Constructor accepting `projectPath: string`
- Private method `getTasksPath(tag?: string): string` returning correct path based on tag
- Private method `ensureDirectory(dir: string): Promise<void>`
- Implement all IStorage methods
- Handle ENOENT errors by returning empty arrays
- Use JSON format with structure: `{ tasks: Task[], metadata: { version: string, lastModified: string } }`
- Apply **Repository** pattern for data access abstraction
### 13. Create config/config-manager.ts
Create class `ConfigManager`:
- Private property `config: IConfiguration`
- Constructor accepting `options: Partial<IConfiguration>`
- Use Zod for validation with a schema matching IConfiguration (a minimal sketch follows this list)
- Method `get<K extends keyof IConfiguration>(key: K): IConfiguration[K]`
- Method `getAll(): IConfiguration`
- Method `validate(): boolean`
- Default values: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true
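A minimal sketch of how the Zod schema and the typed getter could line up with `IConfiguration`; field shapes follow section 4 and the defaults listed above, everything else is illustrative:

```typescript
import { z } from 'zod';
import type { IConfiguration } from '../interfaces/configuration.interface';

// Schema mirrors IConfiguration; unspecified fields stay optional.
const configSchema = z.object({
	projectPath: z.string().default(() => process.cwd()),
	aiProvider: z.string().default('anthropic'),
	apiKey: z.string().optional(),
	aiOptions: z
		.object({
			temperature: z.number().optional(),
			maxTokens: z.number().optional(),
			systemPrompt: z.string().optional()
		})
		.optional(),
	mainModel: z.string().optional(),
	researchModel: z.string().optional(),
	fallbackModel: z.string().optional(),
	tasksPath: z.string().optional(),
	enableTags: z.boolean().default(true)
});

export class ConfigManager {
	private config: IConfiguration;

	constructor(options: Partial<IConfiguration> = {}) {
		this.config = configSchema.parse(options) as IConfiguration;
	}

	get<K extends keyof IConfiguration>(key: K): IConfiguration[K] {
		return this.config[key];
	}

	getAll(): IConfiguration {
		return { ...this.config };
	}

	validate(): boolean {
		return configSchema.safeParse(this.config).success;
	}
}
```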
### 14. Create utils/id-generator.ts
Export functions (sketched below):
- `generateTaskId(index: number = 0): string` returning format `task_{timestamp}_{index}_{random}`
- `generateSubtaskId(parentId: string, index: number = 0): string` returning format `{parentId}_sub_{index}_{random}`
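Sketched implementations of the two formats; the length of the random suffix is an arbitrary choice:

```typescript
// Short random suffix; 6 hex chars is an assumption, not a requirement.
function randomSuffix(): string {
	return Math.random().toString(16).slice(2, 8);
}

export function generateTaskId(index: number = 0): string {
	return `task_${Date.now()}_${index}_${randomSuffix()}`;
}

export function generateSubtaskId(parentId: string, index: number = 0): string {
	return `${parentId}_sub_${index}_${randomSuffix()}`;
}
```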
### 16. Create src/index.ts
Create main class `TaskMasterCore`:
- Private properties: `config: ConfigManager`, `storage: IStorage`, `aiProvider?: IAIProvider`, `parser?: TaskParser`
- Constructor accepting `options: Partial<IConfiguration>`
- Method `initialize(): Promise<void>` for lazy loading
- Method `parsePRD(prdPath: string, options: ParseOptions = {}): Promise<Task[]>`
- Method `getTasks(tag?: string): Promise<Task[]>`
- Apply **Facade** pattern to provide simple API over complex subsystems
Export:
- Class `TaskMasterCore`
- Function `createTaskMaster(options: Partial<IConfiguration>): TaskMasterCore`
- All types from './types'
- All interfaces from './interfaces/*'
Import statements should use kebab-case:
```typescript
import { TaskParser } from './tasks/task-parser';
import { FileStorage } from './storage/file-storage';
import { ConfigManager } from './config/config-manager';
import { ProviderFactory } from './ai/provider-factory';
```
### 17. Configure package.json
Create package.json with:
- name: "@task-master/core"
- version: "0.1.0"
- type: "module"
- main: "./dist/index.js"
- module: "./dist/index.mjs"
- types: "./dist/index.d.ts"
- exports map for proper ESM/CJS support
- scripts: build (tsup), dev (tsup --watch), test (jest), typecheck (tsc --noEmit)
- dependencies: zod@^3.23.8
- peerDependencies: @anthropic-ai/sdk, openai, @google/generative-ai
- devDependencies: typescript, tsup, jest, ts-jest, @types/node, @types/jest
### 18. Configure TypeScript
Create tsconfig.json with:
- target: "ES2022"
- module: "ESNext"
- strict: true (with all strict flags enabled)
- declaration: true
- outDir: "./dist"
- rootDir: "./src"
### 19. Configure tsup
Create tsup.config.js with:
- entry: ['src/index.ts']
- format: ['cjs', 'esm']
- dts: true
- sourcemap: true
- clean: true
- external: AI provider SDKs
### 20. Configure Jest
Create jest.config.js with:
- preset: 'ts-jest'
- testEnvironment: 'node'
- Coverage threshold: 80% for all metrics
## Build Process
1. Use tsup to compile TypeScript to both CommonJS and ESM
2. Generate .d.ts files for TypeScript consumers
3. Output to dist/ directory
4. Ensure tree-shaking works properly
## Testing Requirements
- Create unit tests for TaskParser in tests/task-parser.test.ts
- Create MockProvider class in tests/mocks/mock-provider.ts for testing without API calls
- Test error scenarios (file not found, invalid JSON, etc.)
- Create integration test in tests/integration/parse-prd.test.ts
- Follow kebab-case naming for all test files
## Success Criteria
- TypeScript compilation with zero errors
- No use of 'any' type
- All interfaces properly exported
- Compatible with existing tasks.json format
- Feature flag support via USE_TM_CORE environment variable
## Import/Export Conventions
- Use named exports for all classes and interfaces
- Use barrel exports (index.ts) in each directory
- Import types/interfaces with type-only imports: `import type { Task } from '../types'`
- Group imports in order: Node built-ins, external packages, internal packages, relative imports
- Use .js extension in import paths for ESM compatibility
## Error Handling Patterns
- Create custom error classes in the `src/errors/` directory (see the sketch after this list)
- All public methods should catch and wrap errors with context
- Use error codes for different error types (e.g., 'FILE_NOT_FOUND', 'PARSE_ERROR')
- Never expose internal implementation details in error messages
- Log errors to console.error only in development mode
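One plausible shape for the `TaskMasterError` base class exported from `errors/task-master-error.ts`; the exact constructor signature is an assumption:

```typescript
export class TaskMasterError extends Error {
	constructor(
		message: string,
		// Machine-readable code, e.g. 'FILE_NOT_FOUND' or 'PARSE_ERROR'.
		public readonly code: string,
		// Original error kept for debugging without leaking internals into the message.
		public readonly originalError?: unknown
	) {
		super(message);
		this.name = 'TaskMasterError';
	}
}
```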
## Barrel Exports Content
### interfaces/index.ts
```typescript
export type { IStorage } from './storage.interface';
export type { IAIProvider, AIOptions } from './ai-provider.interface';
export type { IConfiguration } from './configuration.interface';
```
### tasks/index.ts
```typescript
export { TaskParser } from './task-parser';
```
### ai/index.ts
```typescript
export { BaseProvider } from './base-provider';
export { ProviderFactory } from './provider-factory';
export { PromptBuilder } from './prompt-builder';
```
### ai/providers/index.ts
```typescript
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
```
### storage/index.ts
```typescript
export { FileStorage } from './file-storage';
```
### config/index.ts
```typescript
export { ConfigManager } from './config-manager';
```
### utils/index.ts
```typescript
export { generateTaskId, generateSubtaskId } from './id-generator';
```
### errors/index.ts
```typescript
export { TaskMasterError } from './task-master-error';
```

View File

@@ -1,373 +1,21 @@
{
"meta": {
"generatedAt": "2025-05-27T16:34:53.088Z",
"generatedAt": "2025-08-02T14:28:59.851Z",
"tasksAnalyzed": 1,
"totalTasks": 84,
"analysisCount": 45,
"totalTasks": 93,
"analysisCount": 1,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": true
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 24,
"taskTitle": "Implement AI-Powered Test Generation Command",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of the AI-powered test generation command into detailed subtasks covering: command structure setup, AI prompt engineering, test file generation logic, integration with Claude API, and comprehensive error handling.",
"reasoning": "This task involves complex integration with an AI service (Claude), requires sophisticated prompt engineering, and needs to generate structured code files. The existing 3 subtasks are a good start but could be expanded to include more detailed steps for AI integration, error handling, and test file formatting."
},
{
"taskId": 26,
"taskTitle": "Implement Context Foundation for AI Operations",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the context foundation appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or integration with existing systems.",
"reasoning": "This task involves creating a foundation for context integration with several well-defined components. The existing 4 subtasks cover the main implementation areas (context-file flag, cursor rules integration, context extraction utility, and command handler updates). The complexity is moderate as it requires careful integration with existing systems but has clear requirements."
},
{
"taskId": 27,
"taskTitle": "Implement Context Enhancements for AI Operations",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing context enhancements appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or performance optimization.",
"reasoning": "This task builds upon the foundation from Task #26 and adds more sophisticated context handling features. The 4 existing subtasks cover the main implementation areas (code context extraction, task history context, PRD context integration, and context formatting). The complexity is higher than the foundation task due to the need for intelligent context selection and optimization."
},
{
"taskId": 28,
"taskTitle": "Implement Advanced ContextManager System",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the advanced ContextManager system appear comprehensive. Consider if any additional subtasks are needed for testing, documentation, or backward compatibility with previous context implementations.",
"reasoning": "This task represents the most complex phase of the context implementation, requiring a sophisticated class design, optimization algorithms, and integration with multiple systems. The 5 existing subtasks cover the core implementation areas, but the complexity is high due to the need for intelligent context prioritization, token management, and performance monitoring."
},
{
"taskId": 40,
"taskTitle": "Implement 'plan' Command for Task Implementation Planning",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for implementing the 'plan' command appear well-structured. Consider if any additional subtasks are needed for testing, documentation, or integration with existing task management workflows.",
"reasoning": "This task involves creating a new command that leverages AI to generate implementation plans. The existing 4 subtasks cover the main implementation areas (retrieving task content, generating plans with AI, formatting in XML, and error handling). The complexity is moderate as it builds on existing patterns for task updates but requires careful AI integration."
},
{
"taskId": 41,
"taskTitle": "Implement Visual Task Dependency Graph in Terminal",
"complexityScore": 8,
"recommendedSubtasks": 10,
"expansionPrompt": "The current 10 subtasks for implementing the visual task dependency graph appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large graphs or additional visualization options.",
"reasoning": "This task involves creating a sophisticated visualization system for terminal display, which is inherently complex due to layout algorithms, ASCII/Unicode rendering, and handling complex dependency relationships. The 10 existing subtasks cover all major aspects of implementation, from CLI interface to accessibility features."
},
{
"taskId": 42,
"taskTitle": "Implement MCP-to-MCP Communication Protocol",
"complexityScore": 9,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the MCP-to-MCP communication protocol appear well-structured. Consider if any additional subtasks are needed for security hardening, performance optimization, or comprehensive documentation.",
"reasoning": "This task involves designing and implementing a complex communication protocol between different MCP tools and servers. It requires sophisticated adapter patterns, client-server architecture, and handling of multiple operational modes. The complexity is very high due to the need for standardization, security, and backward compatibility."
},
{
"taskId": 44,
"taskTitle": "Implement Task Automation with Webhooks and Event Triggers",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing task automation with webhooks appear comprehensive. Consider if any additional subtasks are needed for security testing, rate limiting implementation, or webhook monitoring tools.",
"reasoning": "This task involves creating a sophisticated event system with webhooks for integration with external services. The complexity is high due to the need for secure authentication, reliable delivery mechanisms, and handling of various webhook formats and protocols. The existing subtasks cover the main implementation areas but security and monitoring could be emphasized more."
},
{
"taskId": 45,
"taskTitle": "Implement GitHub Issue Import Feature",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the GitHub issue import feature appear well-structured. Consider if any additional subtasks are needed for handling GitHub API rate limiting, caching, or supporting additional issue metadata.",
"reasoning": "This task involves integrating with the GitHub API to import issues as tasks. The complexity is moderate as it requires API authentication, data mapping, and error handling. The existing 5 subtasks cover the main implementation areas from design to end-to-end implementation."
},
{
"taskId": 46,
"taskTitle": "Implement ICE Analysis Command for Task Prioritization",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the ICE analysis command appear comprehensive. Consider if any additional subtasks are needed for visualization of ICE scores or integration with other prioritization methods.",
"reasoning": "This task involves creating an AI-powered analysis system for task prioritization using the ICE methodology. The complexity is high due to the need for sophisticated scoring algorithms, AI integration, and report generation. The existing subtasks cover the main implementation areas from algorithm design to integration with existing systems."
},
{
"taskId": 47,
"taskTitle": "Enhance Task Suggestion Actions Card Workflow",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the task suggestion actions card workflow appear well-structured. Consider if any additional subtasks are needed for user testing, accessibility improvements, or performance optimization.",
"reasoning": "This task involves redesigning the UI workflow for task expansion and management. The complexity is moderate as it requires careful UX design and state management but builds on existing components. The 6 existing subtasks cover the main implementation areas from design to testing."
},
{
"taskId": 48,
"taskTitle": "Refactor Prompts into Centralized Structure",
"complexityScore": 4,
"recommendedSubtasks": 3,
"expansionPrompt": "The current 3 subtasks for refactoring prompts into a centralized structure appear appropriate. Consider if any additional subtasks are needed for prompt versioning, documentation, or testing.",
"reasoning": "This task involves a straightforward refactoring to improve code organization. The complexity is relatively low as it primarily involves moving code rather than creating new functionality. The 3 existing subtasks cover the main implementation areas from directory structure to integration."
},
{
"taskId": 49,
"taskTitle": "Implement Code Quality Analysis Command",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the code quality analysis command appear comprehensive. Consider if any additional subtasks are needed for performance optimization with large codebases or integration with existing code quality tools.",
"reasoning": "This task involves creating a sophisticated code analysis system with pattern recognition, best practice verification, and AI-powered recommendations. The complexity is high due to the need for code parsing, complex analysis algorithms, and integration with AI services. The existing subtasks cover the main implementation areas from algorithm design to user interface."
},
{
"taskId": 50,
"taskTitle": "Implement Test Coverage Tracking System by Task",
"complexityScore": 9,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the test coverage tracking system appear well-structured. Consider if any additional subtasks are needed for integration with CI/CD systems, performance optimization, or visualization tools.",
"reasoning": "This task involves creating a complex system that maps test coverage to specific tasks and subtasks. The complexity is very high due to the need for sophisticated data structures, integration with coverage tools, and AI-powered test generation. The existing subtasks are comprehensive and cover the main implementation areas from data structure design to AI integration."
},
{
"taskId": 51,
"taskTitle": "Implement Perplexity Research Command",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the Perplexity research command appear comprehensive. Consider if any additional subtasks are needed for caching optimization, result formatting, or integration with other research tools.",
"reasoning": "This task involves creating a new command that integrates with the Perplexity AI API for research. The complexity is moderate as it requires API integration, context extraction, and result formatting. The 5 existing subtasks cover the main implementation areas from API client to caching system."
},
{
"taskId": 52,
"taskTitle": "Implement Task Suggestion Command for CLI",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing the task suggestion command appear well-structured. Consider if any additional subtasks are needed for suggestion quality evaluation, user feedback collection, or integration with existing task workflows.",
"reasoning": "This task involves creating a new CLI command that generates contextually relevant task suggestions using AI. The complexity is moderate as it requires AI integration, context collection, and interactive CLI interfaces. The existing subtasks cover the main implementation areas from data collection to user interface."
},
{
"taskId": 53,
"taskTitle": "Implement Subtask Suggestion Feature for Parent Tasks",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the subtask suggestion feature appear comprehensive. Consider if any additional subtasks are needed for suggestion quality metrics, user feedback collection, or performance optimization.",
"reasoning": "This task involves creating a feature that suggests contextually relevant subtasks for parent tasks. The complexity is moderate as it builds on existing task management systems but requires sophisticated AI integration and context analysis. The 6 existing subtasks cover the main implementation areas from validation to testing."
},
{
"taskId": 55,
"taskTitle": "Implement Positional Arguments Support for CLI Commands",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing positional arguments support appear well-structured. Consider if any additional subtasks are needed for backward compatibility testing, documentation updates, or user experience improvements.",
"reasoning": "This task involves modifying the command parsing logic to support positional arguments alongside the existing flag-based syntax. The complexity is moderate as it requires careful handling of different argument styles and edge cases. The 5 existing subtasks cover the main implementation areas from analysis to documentation."
},
{
"taskId": 57,
"taskTitle": "Enhance Task-Master CLI User Experience and Interface",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for enhancing the CLI user experience appear comprehensive. Consider if any additional subtasks are needed for accessibility testing, internationalization, or performance optimization.",
"reasoning": "This task involves a significant overhaul of the CLI interface to improve user experience. The complexity is high due to the breadth of changes (logging, visual elements, interactive components, etc.) and the need for consistent design across all commands. The 6 existing subtasks cover the main implementation areas from log management to help systems."
},
{
"taskId": 60,
"taskTitle": "Implement Mentor System with Round-Table Discussion Feature",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for implementing the mentor system appear well-structured. Consider if any additional subtasks are needed for mentor personality consistency, discussion quality evaluation, or performance optimization with multiple mentors.",
"reasoning": "This task involves creating a sophisticated mentor simulation system with round-table discussions. The complexity is high due to the need for personality simulation, complex LLM integration, and structured discussion management. The 7 existing subtasks cover the main implementation areas from architecture to testing."
},
{
"taskId": 62,
"taskTitle": "Add --simple Flag to Update Commands for Direct Text Input",
"complexityScore": 4,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for implementing the --simple flag appear comprehensive. Consider if any additional subtasks are needed for user experience testing or documentation updates.",
"reasoning": "This task involves adding a simple flag option to bypass AI processing for updates. The complexity is relatively low as it primarily involves modifying existing command handlers and adding a flag. The 8 existing subtasks are very detailed and cover all aspects of implementation from command parsing to testing."
},
{
"taskId": 63,
"taskTitle": "Add pnpm Support for the Taskmaster Package",
"complexityScore": 5,
"recommendedSubtasks": 8,
"expansionPrompt": "The current 8 subtasks for adding pnpm support appear comprehensive. Consider if any additional subtasks are needed for CI/CD integration, performance comparison, or documentation updates.",
"reasoning": "This task involves ensuring the package works correctly with pnpm as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 8 existing subtasks cover all major aspects from documentation to binary verification."
},
{
"taskId": 64,
"taskTitle": "Add Yarn Support for Taskmaster Installation",
"complexityScore": 5,
"recommendedSubtasks": 9,
"expansionPrompt": "The current 9 subtasks for adding Yarn support appear comprehensive. Consider if any additional subtasks are needed for performance testing, CI/CD integration, or compatibility with different Yarn versions.",
"reasoning": "This task involves ensuring the package works correctly with Yarn as an alternative package manager. The complexity is moderate as it requires careful testing of installation processes and scripts across different environments. The 9 existing subtasks are very detailed and cover all aspects from configuration to testing."
},
{
"taskId": 65,
"taskTitle": "Add Bun Support for Taskmaster Installation",
"complexityScore": 6,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for adding Bun support appear well-structured. Consider if any additional subtasks are needed for handling Bun-specific issues, performance testing, or documentation updates.",
"reasoning": "This task involves adding support for the newer Bun package manager. The complexity is slightly higher than the other package manager tasks due to Bun's differences from Node.js and potential compatibility issues. The 6 existing subtasks cover the main implementation areas from research to documentation."
},
{
"taskId": 67,
"taskTitle": "Add CLI JSON output and Cursor keybindings integration",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing JSON output and Cursor keybindings appear well-structured. Consider if any additional subtasks are needed for testing across different operating systems, documentation updates, or user experience improvements.",
"reasoning": "This task involves two distinct features: adding JSON output to CLI commands and creating a keybindings installation command. The complexity is moderate as it requires careful handling of different output formats and OS-specific file paths. The 5 existing subtasks cover the main implementation areas for both features."
},
{
"taskId": 68,
"taskTitle": "Ability to create tasks without parsing PRD",
"complexityScore": 3,
"recommendedSubtasks": 2,
"expansionPrompt": "The current 2 subtasks for implementing task creation without PRD appear appropriate. Consider if any additional subtasks are needed for validation, error handling, or integration with existing task management workflows.",
"reasoning": "This task involves a relatively simple modification to allow task creation without requiring a PRD document. The complexity is low as it primarily involves creating a form interface and saving functionality. The 2 existing subtasks cover the main implementation areas of UI design and data saving."
},
{
"taskId": 72,
"taskTitle": "Implement PDF Generation for Project Progress and Dependency Overview",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing PDF generation appear comprehensive. Consider if any additional subtasks are needed for handling large projects, additional visualization options, or integration with existing reporting tools.",
"reasoning": "This task involves creating a feature to generate PDF reports of project progress and dependency visualization. The complexity is high due to the need for PDF generation, data collection, and visualization integration. The 6 existing subtasks cover the main implementation areas from library selection to export options."
},
{
"taskId": 75,
"taskTitle": "Integrate Google Search Grounding for Research Role",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "The current 4 subtasks for integrating Google Search Grounding appear well-structured. Consider if any additional subtasks are needed for testing with different query types, error handling, or performance optimization.",
"reasoning": "This task involves updating the AI service layer to enable Google Search Grounding for research roles. The complexity is moderate as it requires careful integration with the existing AI service architecture and conditional logic. The 4 existing subtasks cover the main implementation areas from service layer modification to testing."
},
{
"taskId": 76,
"taskTitle": "Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "The current 7 subtasks for developing the E2E test framework appear comprehensive. Consider if any additional subtasks are needed for test result reporting, CI/CD integration, or performance benchmarking.",
"reasoning": "This task involves creating a sophisticated end-to-end testing framework for the MCP server. The complexity is high due to the need for subprocess management, protocol handling, and robust test case definition. The 7 existing subtasks cover the main implementation areas from architecture to documentation."
},
{
"taskId": 77,
"taskTitle": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
"complexityScore": 7,
"recommendedSubtasks": 18,
"expansionPrompt": "The current 18 subtasks for implementing AI usage telemetry appear very comprehensive. Consider if any additional subtasks are needed for security hardening, privacy compliance, or user feedback collection.",
"reasoning": "This task involves creating a telemetry system to track AI usage metrics. The complexity is high due to the need for secure data transmission, comprehensive data collection, and integration across multiple commands. The 18 existing subtasks are extremely detailed and cover all aspects of implementation from core utility to provider-specific updates."
},
{
"taskId": 80,
"taskTitle": "Implement Unique User ID Generation and Storage During Installation",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "The current 5 subtasks for implementing unique user ID generation appear well-structured. Consider if any additional subtasks are needed for privacy compliance, security auditing, or integration with the telemetry system.",
"reasoning": "This task involves generating and storing a unique user identifier during installation. The complexity is relatively low as it primarily involves UUID generation and configuration file management. The 5 existing subtasks cover the main implementation areas from script structure to documentation."
},
{
"taskId": 81,
"taskTitle": "Task #81: Implement Comprehensive Local Telemetry System with Future Server Integration Capability",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "The current 6 subtasks for implementing the comprehensive local telemetry system appear well-structured. Consider if any additional subtasks are needed for data migration, storage optimization, or visualization tools.",
"reasoning": "This task involves expanding the telemetry system to capture additional metrics and implement local storage with future server integration capability. The complexity is high due to the breadth of data collection, storage requirements, and privacy considerations. The 6 existing subtasks cover the main implementation areas from data collection to user-facing benefits."
},
{
"taskId": 82,
"taskTitle": "Update supported-models.json with token limit fields",
"complexityScore": 3,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on researching accurate token limit values for each model and ensuring backward compatibility.",
"reasoning": "This task involves a simple update to the supported-models.json file to include new token limit fields. The complexity is low as it primarily involves research and data entry. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 83,
"taskTitle": "Update config-manager.js defaults and getters",
"complexityScore": 4,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears straightforward enough to be implemented without further subtasks. Focus on updating the DEFAULTS object and related getter functions while maintaining backward compatibility.",
"reasoning": "This task involves updating the config-manager.js module to replace maxTokens with more specific token limit fields. The complexity is relatively low as it primarily involves modifying existing code rather than creating new functionality. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 84,
"taskTitle": "Implement token counting utility",
"complexityScore": 5,
"recommendedSubtasks": 1,
"expansionPrompt": "This task appears well-defined enough to be implemented without further subtasks. Focus on implementing accurate token counting for different models and proper fallback mechanisms.",
"reasoning": "This task involves creating a utility function to count tokens for different AI models. The complexity is moderate as it requires integration with the tiktoken library and handling different tokenization schemes. No subtasks are necessary as the task is well-defined and focused."
},
{
"taskId": 69,
"taskTitle": "Enhance Analyze Complexity for Specific Task IDs",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the task 'Enhance Analyze Complexity for Specific Task IDs' into 6 subtasks focusing on: 1) Core logic modification to accept ID parameters, 2) Report merging functionality, 3) CLI interface updates, 4) MCP tool integration, 5) Documentation updates, and 6) Comprehensive testing across all components.",
"reasoning": "This task involves modifying existing functionality across multiple components (core logic, CLI, MCP) with complex logic for filtering tasks and merging reports. The implementation requires careful handling of different parameter combinations and edge cases. The task has interdependent components that need to work together seamlessly, and the report merging functionality adds significant complexity."
},
{
"taskId": 70,
"taskTitle": "Implement 'diagram' command for Mermaid diagram generation",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the 'diagram' command implementation into 5 subtasks: 1) Command interface and parameter handling, 2) Task data extraction and transformation to Mermaid syntax, 3) Diagram rendering with status color coding, 4) Output formatting and file export functionality, and 5) Error handling and edge case management.",
"reasoning": "This task requires implementing a new feature rather than modifying existing code, which reduces complexity from integration challenges. However, it involves working with visualization logic, dependency mapping, and multiple output formats. The color coding based on status and handling of dependency relationships adds moderate complexity. The task is well-defined but requires careful attention to diagram formatting and error handling."
},
{
"taskId": 85,
"taskTitle": "Update ai-services-unified.js for dynamic token limits",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the update of ai-services-unified.js for dynamic token limits into subtasks such as: (1) Import and integrate the token counting utility, (2) Refactor _unifiedServiceRunner to calculate and enforce dynamic token limits, (3) Update error handling for token limit violations, (4) Add and verify logging for token usage, (5) Write and execute tests for various prompt and model scenarios.",
"reasoning": "This task involves significant code changes to a core function, integration of a new utility, dynamic logic for multiple models, and robust error handling. It also requires comprehensive testing for edge cases and integration, making it moderately complex and best managed by splitting into focused subtasks."
},
{
"taskId": 87,
"taskTitle": "Implement validation and error handling",
"complexityScore": 5,
"recommendedSubtasks": 4,
"expansionPrompt": "Decompose this task into: (1) Add validation logic for model and config loading, (2) Implement error handling and fallback mechanisms, (3) Enhance logging and reporting for token usage, (4) Develop helper functions for configuration suggestions and improvements.",
"reasoning": "This task is primarily about adding validation, error handling, and logging. While important for robustness, the logic is straightforward and can be modularized into a few clear subtasks."
},
{
"taskId": 89,
"taskTitle": "Introduce Prioritize Command with Enhanced Priority Levels",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement the prioritize command with all required flags and shorthands, (2) Update CLI output and help documentation for new priority levels, (3) Ensure backward compatibility with existing commands, (4) Add error handling for invalid inputs, (5) Write and run tests for all command scenarios.",
"reasoning": "This CLI feature requires command parsing, updating internal logic for new priority levels, documentation, and robust error handling. The complexity is moderate due to the need for backward compatibility and comprehensive testing."
},
{
"taskId": 90,
"taskTitle": "Implement Subtask Progress Analyzer and Reporting System",
"complexityScore": 8,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the analyzer implementation into: (1) Design and implement progress tracking logic, (2) Develop status validation and issue detection, (3) Build the reporting system with multiple output formats, (4) Integrate analyzer with the existing task management system, (5) Optimize for performance and scalability, (6) Write unit, integration, and performance tests.",
"reasoning": "This is a complex, multi-faceted feature involving data analysis, reporting, integration, and performance optimization. It touches many parts of the system and requires careful design, making it one of the most complex tasks in the list."
},
{
"taskId": 91,
"taskTitle": "Implement Move Command for Tasks and Subtasks",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Expand this task into: (1) Implement move logic for tasks and subtasks, (2) Handle edge cases (invalid ids, non-existent parents, circular dependencies), (3) Update CLI to support move command with flags, (4) Ensure data integrity and update relationships, (5) Write and execute tests for various move scenarios.",
"reasoning": "Moving tasks and subtasks requires careful handling of hierarchical data, edge cases, and data integrity. The command must be robust and user-friendly, necessitating multiple focused subtasks for safe implementation."
},
{
"taskId": 92,
"taskTitle": "Add Global Joke Flag to All CLI Commands",
"complexityScore": 8,
"recommendedSubtasks": 7,
"expansionPrompt": "Break down the implementation of the global --joke flag into the following subtasks: (1) Update CLI foundation to support global flags, (2) Develop the joke-service module with joke management and category support, (3) Integrate joke output into existing output utilities, (4) Update all CLI commands for joke flag compatibility, (5) Add configuration options for joke categories and custom jokes, (6) Implement comprehensive testing (flag recognition, output, content, integration, performance, regression), (7) Update documentation and usage examples.",
"reasoning": "This task requires changes across the CLI foundation, output utilities, all command modules, and configuration management. It introduces a new service module, global flag handling, and output logic that must not interfere with existing features (including JSON output). The need for robust testing and backward compatibility further increases complexity. The scope spans multiple code areas and requires careful integration, justifying a high complexity score and a detailed subtask breakdown to manage risk and ensure maintainability.[2][3][5]"
},
{
"taskId": 94,
"taskTitle": "Implement Standalone 'research' CLI Command for AI-Powered Queries",
"complexityScore": 7,
"recommendedSubtasks": 6,
"expansionPrompt": "Break down the implementation of the 'research' CLI command into logical subtasks covering command registration, parameter handling, context gathering, AI service integration, output formatting, and documentation.",
"reasoning": "This task has moderate to high complexity (7/10) due to multiple interconnected components: CLI argument parsing, integration with AI services, context gathering from various sources, and output formatting with different modes. The cyclomatic complexity would be significant with multiple decision paths for handling different flags and options. The task requires understanding existing patterns and extending the codebase in a consistent manner, suggesting the need for careful decomposition into manageable subtasks."
},
{
"taskId": 86,
"taskTitle": "Implement GitHub Issue Export Feature",
"complexityScore": 9,
"recommendedSubtasks": 10,
"expansionPrompt": "Break down the implementation of the GitHub Issue Export Feature into detailed subtasks covering: command structure and CLI integration, GitHub API client development, authentication and error handling, task-to-issue mapping logic, content formatting and markdown conversion, bidirectional linking and metadata management, extensible architecture and adapter interfaces, configuration and settings management, documentation, and comprehensive testing (unit, integration, edge cases, performance).",
"reasoning": "This task involves designing and implementing a robust, extensible export system with deep integration into GitHub, including bidirectional workflows, complex data mapping, error handling, and support for future platforms. The requirements span CLI design, API integration, content transformation, metadata management, extensibility, configuration, and extensive testing. The breadth and depth of these requirements, along with the need for maintainability and future extensibility, place this task at a high complexity level. Breaking it into at least 10 subtasks will ensure each major component and concern is addressed systematically, reducing risk and improving quality."
"expansionPrompt": "Expand task 24 'Implement AI-Powered Test Generation Command' into 6 subtasks, focusing on: 1) Command structure implementation, 2) AI prompt engineering for test generation, 3) Test file generation and output, 4) Framework-specific template implementation, 5) MCP tool integration, and 6) Documentation and help system integration. Include detailed implementation steps, dependencies, and testing approaches for each subtask.",
"reasoning": "This task has high complexity due to several challenging aspects: 1) AI integration requiring sophisticated prompt engineering, 2) Test generation across multiple frameworks, 3) File system operations with proper error handling, 4) MCP tool integration, 5) Complex configuration requirements, and 6) Framework-specific template generation. The task already has 5 subtasks but could benefit from reorganization based on the updated implementation details in the info blocks, particularly around framework support and configuration."
}
]
}
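Task 84's analysis above describes a token-counting utility built on the tiktoken library with a fallback mechanism. A minimal sketch of what such a helper might look like, assuming the npm `tiktoken` package's `encoding_for_model` API and a rough characters-per-token fallback (both are assumptions, not the project's actual implementation):

```js
// Minimal sketch, not the project's actual utility: count tokens for a prompt,
// preferring tiktoken's model-specific encoding and falling back to a rough
// 4-characters-per-token estimate when the model is unknown.
import { encoding_for_model } from 'tiktoken';

export function countTokens(text, modelId) {
  try {
    const enc = encoding_for_model(modelId); // throws for unsupported models
    const count = enc.encode(text).length;
    enc.free(); // tiktoken encodings hold WASM memory and should be released
    return count;
  } catch {
    return Math.ceil(text.length / 4); // crude heuristic fallback
  }
}
```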

View File

@@ -0,0 +1,93 @@
{
"meta": {
"generatedAt": "2025-07-22T09:41:10.517Z",
"tasksAnalyzed": 10,
"totalTasks": 10,
"analysisCount": 10,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 1,
"taskTitle": "Implement Task Integration Layer (TIL) Core",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the TIL Core implementation into distinct components: hook registration system, task lifecycle management, event coordination, state persistence layer, and configuration validation. Each subtask should focus on a specific architectural component with clear interfaces and testable boundaries.",
"reasoning": "This is a foundational component with multiple complex subsystems including event-driven architecture, API integration, state management, and configuration validation. The existing 5 subtasks are well-structured and appropriately sized."
},
{
"taskId": 2,
"taskTitle": "Develop Dependency Monitor with Taskmaster MCP Integration",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "Divide the dependency monitor into: dependency graph data structure implementation, circular dependency detection algorithm, Taskmaster MCP integration layer, and real-time notification system. Focus on performance optimization for large graphs and efficient caching strategies.",
"reasoning": "Complex graph algorithms and real-time monitoring require careful implementation. The task involves sophisticated data structures, algorithm design, and API integration with performance constraints."
},
{
"taskId": 3,
"taskTitle": "Build Execution Manager with Priority Queue and Parallel Execution",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "Structure the execution manager into: priority queue implementation, resource conflict detection system, parallel execution coordinator, timeout and cancellation handler, and execution history persistence layer. Each component should handle specific aspects of concurrent task management.",
"reasoning": "Managing concurrent execution with resource conflicts, priority scheduling, and persistence is highly complex. Requires careful synchronization, error handling, and performance optimization."
},
{
"taskId": 4,
"taskTitle": "Implement Safety Manager with Configurable Constraints and Emergency Controls",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "Break down into: constraint validation engine, emergency control system (stop/pause), user approval workflow implementation, and safety monitoring/audit logging. Each subtask should address specific safety aspects with fail-safe mechanisms.",
"reasoning": "Safety systems require careful design with multiple fail-safes. The task involves validation logic, real-time controls, workflow management, and comprehensive logging."
},
{
"taskId": 5,
"taskTitle": "Develop Event-Based Hook Processor",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "Organize into: file system event integration, Git/VCS event listeners, build system event connectors, and event filtering/debouncing mechanism. Focus on modular event source integration with configurable processing pipelines.",
"reasoning": "While conceptually straightforward, integrating multiple event sources with proper filtering and performance optimization requires careful implementation. Each event source has unique characteristics."
},
{
"taskId": 6,
"taskTitle": "Implement Prompt-Based Hook Processor with AI Integration",
"complexityScore": 7,
"recommendedSubtasks": 4,
"expansionPrompt": "Divide into: prompt interception mechanism, NLP-based task suggestion engine, context injection system, and conversation-based status updater. Each component should handle specific aspects of AI conversation integration.",
"reasoning": "AI integration with prompt analysis and dynamic context injection is complex. Requires understanding of conversation flow, relevance scoring, and seamless integration with existing systems."
},
{
"taskId": 7,
"taskTitle": "Create Update-Based Hook Processor for Automatic Progress Tracking",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "Structure as: code change monitor, acceptance criteria validator, dependency update propagator, and conflict detection/resolution system. Focus on accurate progress tracking and automated validation logic.",
"reasoning": "Automatic progress tracking requires integration with version control and intelligent analysis of code changes. Conflict detection and dependency propagation add complexity."
},
{
"taskId": 8,
"taskTitle": "Develop Real-Time Automation Dashboard and User Controls",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down into: WebSocket real-time communication layer, interactive dependency graph visualization, task queue and status displays, user control interfaces, and analytics/charting components. Each UI component should be modular and reusable.",
"reasoning": "Building a responsive real-time dashboard with complex visualizations and interactive controls is challenging. Requires careful state management, performance optimization, and user experience design."
},
{
"taskId": 9,
"taskTitle": "Integrate Kiro IDE and Taskmaster MCP with Core Services",
"complexityScore": 8,
"recommendedSubtasks": 4,
"expansionPrompt": "Organize into: KiroHookAdapter implementation, TaskmasterMCPAdapter development, error handling and retry logic layer, and IDE UI component integration. Focus on robust adapter patterns and comprehensive error recovery.",
"reasoning": "End-to-end integration of multiple systems with different architectures is highly complex. Requires careful adapter design, extensive error handling, and thorough testing across all integration points."
},
{
"taskId": 10,
"taskTitle": "Implement Configuration Management and Safety Profiles",
"complexityScore": 6,
"recommendedSubtasks": 4,
"expansionPrompt": "Divide into: visual configuration editor UI, JSON Schema validation engine, import/export functionality, and version control integration. Each component should provide intuitive configuration management with robust validation.",
"reasoning": "While technically less complex than core systems, building an intuitive configuration editor with validation, versioning, and import/export requires careful UI/UX design and robust data handling."
}
]
}
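Task 2's analysis above calls for circular dependency detection over a task dependency graph. Purely as an illustration (hypothetical data shape and function name, not the planned Dependency Monitor code), a depth-first cycle check might look like this:

```js
// Minimal sketch: detect a circular dependency in a map of taskId -> [dependencyIds].
// Hypothetical data shape for illustration only.
function findCycle(dependencies) {
  const visiting = new Set(); // nodes on the current DFS path
  const done = new Set();     // nodes already proven cycle-free

  const visit = (id, path) => {
    if (visiting.has(id)) return [...path, id]; // back edge -> cycle found
    if (done.has(id)) return null;
    visiting.add(id);
    for (const dep of dependencies[id] ?? []) {
      const cycle = visit(dep, [...path, id]);
      if (cycle) return cycle;
    }
    visiting.delete(id);
    done.add(id);
    return null;
  };

  for (const id of Object.keys(dependencies)) {
    const cycle = visit(id, []);
    if (cycle) return cycle;
  }
  return null;
}

// Example: task '2' depends on '3', which depends on '2' again
// findCycle({ '1': [], '2': ['3'], '3': ['2'] }) -> ['2', '3', '2']
```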

View File

@@ -1,6 +1,6 @@
{
"currentTag": "master",
"lastSwitched": "2025-06-14T20:37:15.456Z",
"lastSwitched": "2025-08-01T14:09:25.838Z",
"branchTagMapping": {
"v017-adds": "v017-adds",
"next": "next"

View File

@@ -1,23 +0,0 @@
# Task ID: 1
# Title: Implement TTS Flag for Taskmaster Commands
# Status: pending
# Dependencies: 16 (Not found)
# Priority: medium
# Description: Add text-to-speech functionality to taskmaster commands with configurable voice options and audio output settings.
# Details:
Implement TTS functionality including:
- Add --tts flag to all relevant taskmaster commands (list, show, generate, etc.)
- Integrate with system TTS engines (Windows SAPI, macOS say command, Linux espeak/festival)
- Create TTS configuration options in the configuration management system
- Add voice selection options (male/female, different languages if available)
- Implement audio output settings (volume, speed, pitch)
- Add TTS-specific error handling for cases where TTS is unavailable
- Create fallback behavior when TTS fails (silent failure or text output)
- Support for reading task titles, descriptions, and status updates aloud
- Add option to read entire task lists or individual task details
- Implement TTS for command confirmations and error messages
- Create TTS output formatting to make spoken text more natural (removing markdown, formatting numbers/dates appropriately)
- Add configuration option to enable/disable TTS globally
# Test Strategy:
Test TTS functionality across different operating systems (Windows, macOS, Linux). Verify that the --tts flag works with all major commands. Test voice configuration options and ensure audio output settings are properly applied. Test error handling when TTS services are unavailable. Verify that text formatting for speech is natural and understandable. Test with various task content types including special characters, code snippets, and long descriptions. Ensure TTS can be disabled and enabled through configuration.
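For illustration of the implementation notes above, a minimal sketch of invoking the named system TTS engines (hypothetical helper; assumes `say`, Windows SAPI via PowerShell, or `espeak` is installed, and is not part of the task's actual code):

```js
// Minimal sketch: speak a line of text with the platform's built-in engine,
// printing it instead if TTS fails (the "fallback behavior" mentioned above).
import { execFile } from 'node:child_process';
import os from 'node:os';

export function speak(text) {
  const fallback = () => console.log(text); // silent failure would simply omit this
  const run = (cmd, args) => execFile(cmd, args, (err) => err && fallback());

  switch (os.platform()) {
    case 'darwin': // macOS built-in `say`
      run('say', [text]);
      break;
    case 'win32': // Windows SAPI via PowerShell
      run('powershell', [
        '-Command',
        `Add-Type -AssemblyName System.Speech; (New-Object System.Speech.Synthesis.SpeechSynthesizer).Speak('${text.replace(/'/g, "''")}')`
      ]);
      break;
    default: // Linux espeak (festival is another option)
      run('espeak', [text]);
  }
}
```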

7301
.taskmaster/tasks/tasks.json Normal file

File diff suppressed because one or more lines are too long

View File

@@ -1,5 +1,603 @@
# task-master-ai
## 0.24.0
### Minor Changes
- [#1098](https://github.com/eyaltoledano/claude-task-master/pull/1098) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
## New Claude Code Agents
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks
### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks
### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'
## Installation
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.

## Usage Example
```bash
# In Claude Code, after initializing a project with tasks:
# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete
# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```
## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
### Patch Changes
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
- Ensures generated JSON includes all fields required by the schema
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
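To make the compatibility changes above concrete, here is a rough sketch (illustrative only, with hypothetical field names and a simplified repair step; not the library's actual schema or code) of a Zod schema without nullable/default modifiers, a naive JSON repair pass, and defaults applied outside schema validation:

```js
// Sketch only: keep the Zod schema free of .nullable()/.default() so stricter
// providers accept it, repair obviously malformed JSON before parsing, and
// apply defaults separately from validation.
import { z } from 'zod';

const subtaskSchema = z.object({
  id: z.number(),
  title: z.string(),
  description: z.string(),
  priority: z.string().optional() // no .default('medium') inside the schema
});

function parseAiJson(raw) {
  try {
    return JSON.parse(raw);
  } catch {
    // simplified stand-in for the repair step: strip trailing commas and retry
    return JSON.parse(raw.replace(/,\s*([}\]])/g, '$1'));
  }
}

function withDefaults(subtask) {
  return { ...subtask, priority: subtask.priority ?? 'medium' };
}

// Usage (hypothetical):
// const subtask = withDefaults(subtaskSchema.parse(parseAiJson(rawResponse)));
```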
## 0.24.0-rc.2
### Minor Changes
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749
## 0.24.0-rc.1
### Minor Changes
- [#1093](https://github.com/eyaltoledano/claude-task-master/pull/1093) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
## New Claude Code Agents
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks
### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks
### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'
## Installation
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.
## Usage Example
```bash
# In Claude Code, after initializing a project with tasks:
# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete
# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```
## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
### Patch Changes
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
## 0.23.1-rc.0
### Patch Changes
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
- Ensures generated JSON includes all fields required by the schema
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
## 0.23.0
### Minor Changes
- [#1064](https://github.com/eyaltoledano/claude-task-master/pull/1064) [`53903f1`](https://github.com/eyaltoledano/claude-task-master/commit/53903f1e8eee23ac512eb13a6d81d8cbcfe658cb) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment
This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.
**New CLI Commands:**
- `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
- `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)
**Key Features:**
- **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
- **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
- **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
- **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
- **Smart context**: AI considers your project context and task dependencies when making adjustments
**Usage Examples:**
```bash
# Make a task more detailed
task-master scope-up --id=5
# Simplify multiple tasks with light touch
task-master scope-down --id=10,11,12 --strength=light
# Custom adjustment with specific instructions
task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
```
**Why use this?**
- **Iterative refinement**: Adjust task complexity as your understanding evolves
- **Project phase adaptation**: Scale tasks up for implementation, down for planning
- **Team coordination**: Adjust complexity based on team member experience levels
- **Milestone alignment**: Fine-tune tasks to match project phase requirements
Perfect for agile workflows where task requirements change as you learn more about the problem space.
### Patch Changes
- [#1063](https://github.com/eyaltoledano/claude-task-master/pull/1063) [`2ae6e7e`](https://github.com/eyaltoledano/claude-task-master/commit/2ae6e7e6be3605c3c4d353f34666e54750dba973) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix for tasks not found when using string IDs
- [#1049](https://github.com/eyaltoledano/claude-task-master/pull/1049) [`45a14c3`](https://github.com/eyaltoledano/claude-task-master/commit/45a14c323d21071c15106335e89ad1f4a20976ab) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix tag-specific complexity report detection in expand command
The expand command now correctly finds and uses tag-specific complexity reports (e.g., `task-complexity-report_feature-xyz.json`) when operating in a tag context. Previously, it would always look for the generic `task-complexity-report.json` file due to a default value in the CLI option definition.
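As a rough illustration of the naming convention described above (hypothetical helper name and directory layout, not the actual code path):

```js
// Minimal sketch: resolve the complexity report path for the active tag,
// falling back to the generic report for the default "master" tag.
// The `.taskmaster/reports` location is an assumption for illustration.
import path from 'node:path';

function resolveComplexityReportPath(projectRoot, tag) {
  const suffix = tag && tag !== 'master' ? `_${tag}` : '';
  return path.join(projectRoot, '.taskmaster', 'reports', `task-complexity-report${suffix}.json`);
}

// resolveComplexityReportPath('/repo', 'feature-xyz')
//   -> '/repo/.taskmaster/reports/task-complexity-report_feature-xyz.json'
```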
## 0.23.0-rc.2
### Minor Changes
- [#1064](https://github.com/eyaltoledano/claude-task-master/pull/1064) [`53903f1`](https://github.com/eyaltoledano/claude-task-master/commit/53903f1e8eee23ac512eb13a6d81d8cbcfe658cb) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment
This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.
**New CLI Commands:**
- `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
- `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)
**Key Features:**
- **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
- **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
- **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
- **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
- **Smart context**: AI considers your project context and task dependencies when making adjustments
**Usage Examples:**
```bash
# Make a task more detailed
task-master scope-up --id=5
# Simplify multiple tasks with light touch
task-master scope-down --id=10,11,12 --strength=light
# Custom adjustment with specific instructions
task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
```
**Why use this?**
- **Iterative refinement**: Adjust task complexity as your understanding evolves
- **Project phase adaptation**: Scale tasks up for implementation, down for planning
- **Team coordination**: Adjust complexity based on team member experience levels
- **Milestone alignment**: Fine-tune tasks to match project phase requirements
Perfect for agile workflows where task requirements change as you learn more about the problem space.
## 0.22.1-rc.1
### Patch Changes
- [#1069](https://github.com/eyaltoledano/claude-task-master/pull/1069) [`72ca68e`](https://github.com/eyaltoledano/claude-task-master/commit/72ca68edeb870ff7a3b0d2d632e09dae921dc16a) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Add new `scope-up` and `scope-down` commands for dynamic task complexity adjustment
This release introduces two powerful new commands that allow you to dynamically adjust the complexity of your tasks and subtasks without recreating them from scratch.
**New CLI Commands:**
- `task-master scope-up` - Increase task complexity (add more detail, requirements, or implementation steps)
- `task-master scope-down` - Decrease task complexity (simplify, remove unnecessary details, or streamline)
**Key Features:**
- **Multiple tasks**: Support comma-separated IDs to adjust multiple tasks at once (`--id=5,7,12`)
- **Strength levels**: Choose adjustment intensity with `--strength=light|regular|heavy` (defaults to regular)
- **Custom prompts**: Use `--prompt` flag to specify exactly how you want tasks adjusted
- **MCP integration**: Available as `scope_up_task` and `scope_down_task` tools in Cursor and other MCP environments
- **Smart context**: AI considers your project context and task dependencies when making adjustments
**Usage Examples:**
```bash
# Make a task more detailed
task-master scope-up --id=5
# Simplify multiple tasks with light touch
task-master scope-down --id=10,11,12 --strength=light
# Custom adjustment with specific instructions
task-master scope-up --id=7 --prompt="Add more error handling and edge cases"
```
**Why use this?**
- **Iterative refinement**: Adjust task complexity as your understanding evolves
- **Project phase adaptation**: Scale tasks up for implementation, down for planning
- **Team coordination**: Adjust complexity based on team member experience levels
- **Milestone alignment**: Fine-tune tasks to match project phase requirements
Perfect for agile workflows where task requirements change as you learn more about the problem space.
## 0.22.1-rc.0
### Patch Changes
- [#1063](https://github.com/eyaltoledano/claude-task-master/pull/1063) [`2ae6e7e`](https://github.com/eyaltoledano/claude-task-master/commit/2ae6e7e6be3605c3c4d353f34666e54750dba973) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix for tasks not found when using string IDs
- [#1049](https://github.com/eyaltoledano/claude-task-master/pull/1049) [`45a14c3`](https://github.com/eyaltoledano/claude-task-master/commit/45a14c323d21071c15106335e89ad1f4a20976ab) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix tag-specific complexity report detection in expand command
The expand command now correctly finds and uses tag-specific complexity reports (e.g., `task-complexity-report_feature-xyz.json`) when operating in a tag context. Previously, it would always look for the generic `task-complexity-report.json` file due to a default value in the CLI option definition.
## 0.22.0
### Minor Changes
- [#1043](https://github.com/eyaltoledano/claude-task-master/pull/1043) [`dc44ed9`](https://github.com/eyaltoledano/claude-task-master/commit/dc44ed9de8a57aca5d39d3a87565568bd0a82068) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Prompt to generate a complexity report when it is missing
- [#1032](https://github.com/eyaltoledano/claude-task-master/pull/1032) [`4423119`](https://github.com/eyaltoledano/claude-task-master/commit/4423119a5ec53958c9dffa8bf564da8be7a2827d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add comprehensive Kiro IDE integration with autonomous task management hooks
- **Kiro Profile**: Added full support for Kiro IDE with automatic installation of 7 Taskmaster agent hooks
- **Hook-Driven Workflow**: Introduced natural language automation hooks that eliminate manual task status updates
- **Automatic Hook Installation**: Hooks are now automatically copied to `.kiro/hooks/` when running `task-master rules add kiro`
- **Language-Agnostic Support**: All hooks support multiple programming languages (JS, Python, Go, Rust, Java, etc.)
- **Frontmatter Transformation**: Kiro rules use simplified `inclusion: always` format instead of Cursor's complex frontmatter
- **Special Rule**: Added `taskmaster_hooks_workflow.md` that guides AI assistants to prefer hook-driven completion
Key hooks included:
- Task Dependency Auto-Progression: Automatically starts tasks when dependencies complete
- Code Change Task Tracker: Updates task progress as you save files
- Test Success Task Completer: Marks tasks done when tests pass
- Daily Standup Assistant: Provides personalized task status summaries
- PR Readiness Checker: Validates task completion before creating pull requests
- Complexity Analyzer: Auto-expands complex tasks into manageable subtasks
- Git Commit Task Linker: Links commits to tasks for better traceability
This creates a truly autonomous development workflow where task management happens naturally as you code!
### Patch Changes
- [#1033](https://github.com/eyaltoledano/claude-task-master/pull/1033) [`7b90568`](https://github.com/eyaltoledano/claude-task-master/commit/7b9056832653464f934c91c22997077065d738c4) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix compatibility with @google/gemini-cli-core v0.1.12+ by updating ai-sdk-provider-gemini-cli to v0.1.1.
- [#1038](https://github.com/eyaltoledano/claude-task-master/pull/1038) [`77cc5e4`](https://github.com/eyaltoledano/claude-task-master/commit/77cc5e4537397642f2664f61940a101433ee6fb4) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix 'expand --all' and 'show' commands to correctly handle tag contexts for complexity reports and task display.
- [#1025](https://github.com/eyaltoledano/claude-task-master/pull/1025) [`8781794`](https://github.com/eyaltoledano/claude-task-master/commit/8781794c56d454697fc92c88a3925982d6b81205) Thanks [@joedanz](https://github.com/joedanz)! - Clean up remaining automatic task file generation calls
- [#1035](https://github.com/eyaltoledano/claude-task-master/pull/1035) [`fb7d588`](https://github.com/eyaltoledano/claude-task-master/commit/fb7d588137e8c53b0d0f54bd1dd8d387648583ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens limits for OpenRouter and Groq models
- Add special handling in config-manager.js for custom OpenRouter models to use a conservative default of 32,768 max_tokens
- Update qwen/qwen-turbo model max_tokens from 1,000,000 to 32,768 to match OpenRouter's actual limits
- Fix moonshotai/kimi-k2-instruct max_tokens to 16,384 to match Groq's actual limit (fixes #1028)
- This prevents "maximum context length exceeded" errors when using OpenRouter models not in our supported models list
- [#1027](https://github.com/eyaltoledano/claude-task-master/pull/1027) [`6ae66b2`](https://github.com/eyaltoledano/claude-task-master/commit/6ae66b2afbfe911340fa25e0236c3db83deaa7eb) Thanks [@andreswebs](https://github.com/andreswebs)! - Fix VSCode profile generation to use correct rule file names (using `.instructions.md` extension instead of `.md`) and front-matter properties (removing the unsupported `alwaysApply` property from instructions files' front-matter).
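The OpenRouter/Groq max_tokens entry a few bullets above boils down to preferring the known per-model limit and otherwise applying a conservative cap for custom OpenRouter models. A minimal sketch of that idea (assumed shape and names, not the actual config-manager.js logic):

```js
// Sketch only: use the supported-models entry when available, otherwise fall
// back to a conservative cap so requests stay within the provider's real limits.
const CUSTOM_OPENROUTER_MAX_TOKENS = 32768;

function resolveMaxTokens(provider, modelId, supportedModels) {
  const entry = supportedModels[provider]?.find((m) => m.id === modelId);
  if (entry?.max_tokens) return entry.max_tokens; // e.g. 16384 for moonshotai/kimi-k2-instruct
  if (provider === 'openrouter') return CUSTOM_OPENROUTER_MAX_TOKENS;
  return undefined; // leave other providers to their own defaults
}
```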
## 0.22.0-rc.1
### Minor Changes
- [#1043](https://github.com/eyaltoledano/claude-task-master/pull/1043) [`dc44ed9`](https://github.com/eyaltoledano/claude-task-master/commit/dc44ed9de8a57aca5d39d3a87565568bd0a82068) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Prompt to generate a complexity report when it is missing
## 0.22.0-rc.0
### Minor Changes
- [#1032](https://github.com/eyaltoledano/claude-task-master/pull/1032) [`4423119`](https://github.com/eyaltoledano/claude-task-master/commit/4423119a5ec53958c9dffa8bf564da8be7a2827d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add comprehensive Kiro IDE integration with autonomous task management hooks
- **Kiro Profile**: Added full support for Kiro IDE with automatic installation of 7 Taskmaster agent hooks
- **Hook-Driven Workflow**: Introduced natural language automation hooks that eliminate manual task status updates
- **Automatic Hook Installation**: Hooks are now automatically copied to `.kiro/hooks/` when running `task-master rules add kiro`
- **Language-Agnostic Support**: All hooks support multiple programming languages (JS, Python, Go, Rust, Java, etc.)
- **Frontmatter Transformation**: Kiro rules use simplified `inclusion: always` format instead of Cursor's complex frontmatter
- **Special Rule**: Added `taskmaster_hooks_workflow.md` that guides AI assistants to prefer hook-driven completion
Key hooks included:
- Task Dependency Auto-Progression: Automatically starts tasks when dependencies complete
- Code Change Task Tracker: Updates task progress as you save files
- Test Success Task Completer: Marks tasks done when tests pass
- Daily Standup Assistant: Provides personalized task status summaries
- PR Readiness Checker: Validates task completion before creating pull requests
- Complexity Analyzer: Auto-expands complex tasks into manageable subtasks
- Git Commit Task Linker: Links commits to tasks for better traceability
This creates a truly autonomous development workflow where task management happens naturally as you code!
### Patch Changes
- [#1033](https://github.com/eyaltoledano/claude-task-master/pull/1033) [`7b90568`](https://github.com/eyaltoledano/claude-task-master/commit/7b9056832653464f934c91c22997077065d738c4) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix compatibility with @google/gemini-cli-core v0.1.12+ by updating ai-sdk-provider-gemini-cli to v0.1.1.
- [#1038](https://github.com/eyaltoledano/claude-task-master/pull/1038) [`77cc5e4`](https://github.com/eyaltoledano/claude-task-master/commit/77cc5e4537397642f2664f61940a101433ee6fb4) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix 'expand --all' and 'show' commands to correctly handle tag contexts for complexity reports and task display.
- [#1025](https://github.com/eyaltoledano/claude-task-master/pull/1025) [`8781794`](https://github.com/eyaltoledano/claude-task-master/commit/8781794c56d454697fc92c88a3925982d6b81205) Thanks [@joedanz](https://github.com/joedanz)! - Clean up remaining automatic task file generation calls
- [#1035](https://github.com/eyaltoledano/claude-task-master/pull/1035) [`fb7d588`](https://github.com/eyaltoledano/claude-task-master/commit/fb7d588137e8c53b0d0f54bd1dd8d387648583ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens limits for OpenRouter and Groq models
- Add special handling in config-manager.js for custom OpenRouter models to use a conservative default of 32,768 max_tokens
- Update qwen/qwen-turbo model max_tokens from 1,000,000 to 32,768 to match OpenRouter's actual limits
- Fix moonshotai/kimi-k2-instruct max_tokens to 16,384 to match Groq's actual limit (fixes #1028)
- This prevents "maximum context length exceeded" errors when using OpenRouter models not in our supported models list
- [#1027](https://github.com/eyaltoledano/claude-task-master/pull/1027) [`6ae66b2`](https://github.com/eyaltoledano/claude-task-master/commit/6ae66b2afbfe911340fa25e0236c3db83deaa7eb) Thanks [@andreswebs](https://github.com/andreswebs)! - Fix VSCode profile generation to use correct rule file names (using `.instructions.md` extension instead of `.md`) and front-matter properties (removing the unsupported `alwaysApply` property from instructions files' front-matter).
## 0.21.0
### Minor Changes
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`9c58a92`](https://github.com/eyaltoledano/claude-task-master/commit/9c58a922436c0c5e7ff1b20ed2edbc269990c772) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Kiro editor rule profile support
- Add support for Kiro IDE with custom rule files and MCP configuration
- Generate rule files in `.kiro/steering/` directory with markdown format
- Include MCP server configuration with enhanced file inclusion patterns
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`444aa5a`](https://github.com/eyaltoledano/claude-task-master/commit/444aa5ae1943ba72d012b3f01b1cc9362a328248) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Created a comprehensive documentation site for Task Master AI. Visit https://docs.task-master.dev to explore guides, API references, and examples.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`58a301c`](https://github.com/eyaltoledano/claude-task-master/commit/58a301c380d18a9d9509137f3e989d24200a5faa) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Complete Groq provider integration and add MoonshotAI Kimi K2 model support
- Fixed Groq provider registration
- Added Groq API key validation
- Added GROQ_API_KEY to .env.example
- Added moonshotai/kimi-k2-instruct model with $1/$3 per 1M token pricing and 16k max output
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`b0e09c7`](https://github.com/eyaltoledano/claude-task-master/commit/b0e09c76ed73b00434ac95606679f570f1015a3d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - feat: Add Zed editor rule profile with agent rules and MCP config
- Resolves #637
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`6c5e0f9`](https://github.com/eyaltoledano/claude-task-master/commit/6c5e0f97f8403c4da85c1abba31cb8b1789511a7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Amp rule profile with AGENT.md and MCP config
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`444aa5a`](https://github.com/eyaltoledano/claude-task-master/commit/444aa5ae1943ba72d012b3f01b1cc9362a328248) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve project root detection
- No longer creates an infinite loop when unable to detect your code workspace
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`36c4a7a`](https://github.com/eyaltoledano/claude-task-master/commit/36c4a7a86924c927ad7f86a4f891f66ad55eb4d2) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add OpenCode profile with AGENTS.md and MCP config
- Resolves #965
### Patch Changes
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`444aa5a`](https://github.com/eyaltoledano/claude-task-master/commit/444aa5ae1943ba72d012b3f01b1cc9362a328248) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Make `task-master update` more reliable with AI responses
The `update` command now handles AI responses more robustly. If the AI forgets to include certain task fields, the command will automatically fill in the missing data from your original tasks instead of failing. This means smoother bulk task updates without losing important information like IDs, dependencies, or completed subtasks.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`444aa5a`](https://github.com/eyaltoledano/claude-task-master/commit/444aa5ae1943ba72d012b3f01b1cc9362a328248) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix subtask dependency validation when expanding tasks
When using `task-master expand` to break down tasks into subtasks, dependencies between subtasks are now properly validated. Previously, subtasks with dependencies would fail validation. Now subtasks can correctly depend on their siblings within the same parent task.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`6d69d02`](https://github.com/eyaltoledano/claude-task-master/commit/6d69d02fe03edcc785380415995d5cfcdd97acbb) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Prevent CLAUDE.md overwrite by using Claude Code's import feature
- Task Master now creates its instructions in `.taskmaster/CLAUDE.md` instead of overwriting the user's `CLAUDE.md`
- Adds an import section to the user's CLAUDE.md that references the Task Master instructions
- Preserves existing user content in CLAUDE.md files
- Provides clean uninstall that only removes Task Master's additions
**Breaking Change**: Task Master instructions for Claude Code are now stored in `.taskmaster/CLAUDE.md` and imported into the main CLAUDE.md file. Users who previously had Task Master content directly in their CLAUDE.md will need to run `task-master rules remove claude` followed by `task-master rules add claude` to migrate to the new structure.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`fd005c4`](https://github.com/eyaltoledano/claude-task-master/commit/fd005c4c5481ffac58b11f01a448fa5b29056b8d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Implement Boundary-First Tag Resolution to ensure consistent and deterministic tag handling across CLI and MCP, resolving potential race conditions.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`444aa5a`](https://github.com/eyaltoledano/claude-task-master/commit/444aa5ae1943ba72d012b3f01b1cc9362a328248) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix `task-master lang --setup` breaking when no language is defined, now defaults to English
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`624922c`](https://github.com/eyaltoledano/claude-task-master/commit/624922ca598c4ce8afe9a5646ebb375d4616db63) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix: show command no longer requires complexity report file to exist
The `tm show` command was incorrectly requiring the complexity report file to exist even when not needed. Now it only validates the complexity report path when a custom report file is explicitly provided via the -r/--report option.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`858d4a1`](https://github.com/eyaltoledano/claude-task-master/commit/858d4a1c5486d20e7e3a8e37e3329d7fb8200310) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Update VS Code profile with MCP config transformation
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`0451ebc`](https://github.com/eyaltoledano/claude-task-master/commit/0451ebcc32cd7e9d395b015aaa8602c4734157e1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when retrieving tools and resources
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`0a70ab6`](https://github.com/eyaltoledano/claude-task-master/commit/0a70ab6179cb2b5b4b2d9dc256a7a3b69a0e5dd6) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add MCP configuration support to Claude Code rules
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`4629128`](https://github.com/eyaltoledano/claude-task-master/commit/4629128943f6283385f4762c09cf2752f855cc33) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed the comprehensive taskmaster system integration via custom slash commands with proper syntax
- Provides Claude Code with a complete set of commands that can trigger Task Master events directly within Claude Code
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`0886c83`](https://github.com/eyaltoledano/claude-task-master/commit/0886c83d0c678417c0313256a6dd96f7ee2c9ac6) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Correct MCP server name and use 'Add to Cursor' button with updated placeholder keys.
- [#1009](https://github.com/eyaltoledano/claude-task-master/pull/1009) [`88c434a`](https://github.com/eyaltoledano/claude-task-master/commit/88c434a9393e429d9277f59b3e20f1005076bbe0) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add missing API keys to .env.example and README.md
## 0.21.0-rc.0
### Minor Changes
- [#1001](https://github.com/eyaltoledano/claude-task-master/pull/1001) [`75a36ea`](https://github.com/eyaltoledano/claude-task-master/commit/75a36ea99a1c738a555bdd4fe7c763d0c5925e37) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Kiro editor rule profile support
- Add support for Kiro IDE with custom rule files and MCP configuration
- Generate rule files in `.kiro/steering/` directory with markdown format
- Include MCP server configuration with enhanced file inclusion patterns
- [#1011](https://github.com/eyaltoledano/claude-task-master/pull/1011) [`3eb050a`](https://github.com/eyaltoledano/claude-task-master/commit/3eb050aaddb90fca1a04517e2ee24f73934323be) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Created a comprehensive documentation site for Task Master AI. Visit https://docs.task-master.dev to explore guides, API references, and examples.
- [#978](https://github.com/eyaltoledano/claude-task-master/pull/978) [`fedfd6a`](https://github.com/eyaltoledano/claude-task-master/commit/fedfd6a0f41a78094f7ee7f69be689b699475a79) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Complete Groq provider integration and add MoonshotAI Kimi K2 model support
- Fixed Groq provider registration
- Added Groq API key validation
- Added GROQ_API_KEY to .env.example
- Added moonshotai/kimi-k2-instruct model with $1/$3 per 1M token pricing and 16k max output
- [#974](https://github.com/eyaltoledano/claude-task-master/pull/974) [`5b0eda0`](https://github.com/eyaltoledano/claude-task-master/commit/5b0eda07f20a365aa2ec1736eed102bca81763a9) Thanks [@joedanz](https://github.com/joedanz)! - feat: Add Zed editor rule profile with agent rules and MCP config
- Resolves #637
- [#973](https://github.com/eyaltoledano/claude-task-master/pull/973) [`6d05e86`](https://github.com/eyaltoledano/claude-task-master/commit/6d05e8622c1d761acef10414940ff9a766b3b57d) Thanks [@joedanz](https://github.com/joedanz)! - Add Amp rule profile with AGENT.md and MCP config
- [#1011](https://github.com/eyaltoledano/claude-task-master/pull/1011) [`3eb050a`](https://github.com/eyaltoledano/claude-task-master/commit/3eb050aaddb90fca1a04517e2ee24f73934323be) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve project root detection
- No longer creates an infinite loop when unable to detect your code workspace
- [#970](https://github.com/eyaltoledano/claude-task-master/pull/970) [`b87499b`](https://github.com/eyaltoledano/claude-task-master/commit/b87499b56e626001371a87ed56ffc72675d829f3) Thanks [@joedanz](https://github.com/joedanz)! - Add OpenCode profile with AGENTS.md and MCP config
- Resolves #965
### Patch Changes
- [#1011](https://github.com/eyaltoledano/claude-task-master/pull/1011) [`3eb050a`](https://github.com/eyaltoledano/claude-task-master/commit/3eb050aaddb90fca1a04517e2ee24f73934323be) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Make `task-master update` more reliable with AI responses
The `update` command now handles AI responses more robustly. If the AI forgets to include certain task fields, the command will automatically fill in the missing data from your original tasks instead of failing. This means smoother bulk task updates without losing important information like IDs, dependencies, or completed subtasks.
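Conceptually, the merge behaves like this minimal sketch (illustrative, not the actual implementation):
```js
// Minimal sketch: fall back to the original task for any field the AI response
// omitted, so ids, dependencies, and completed subtasks are never lost.
function mergeUpdatedTask(originalTask, aiTask) {
	const merged = { ...originalTask };
	for (const [key, value] of Object.entries(aiTask)) {
		if (value !== undefined && value !== null) {
			merged[key] = value;
		}
	}
	return merged;
}
```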
- [#1011](https://github.com/eyaltoledano/claude-task-master/pull/1011) [`3eb050a`](https://github.com/eyaltoledano/claude-task-master/commit/3eb050aaddb90fca1a04517e2ee24f73934323be) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix subtask dependency validation when expanding tasks
When using `task-master expand` to break down tasks into subtasks, dependencies between subtasks are now properly validated. Previously, subtasks with dependencies would fail validation. Now subtasks can correctly depend on their siblings within the same parent task.
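For example (a simplified, illustrative task shape; real tasks carry more fields), the second subtask below may now depend on its sibling within the same parent:
```json
{
	"id": 12,
	"title": "Set up CI pipeline",
	"subtasks": [
		{ "id": 1, "title": "Add lint job", "dependencies": [] },
		{ "id": 2, "title": "Add test job", "dependencies": [1] }
	]
}
```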
- [#949](https://github.com/eyaltoledano/claude-task-master/pull/949) [`f662654`](https://github.com/eyaltoledano/claude-task-master/commit/f662654afb8e7a230448655265d6f41adf6df62c) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Prevent CLAUDE.md overwrite by using Claude Code's import feature
- Task Master now creates its instructions in `.taskmaster/CLAUDE.md` instead of overwriting the user's `CLAUDE.md`
- Adds an import section to the user's CLAUDE.md that references the Task Master instructions
- Preserves existing user content in CLAUDE.md files
- Provides clean uninstall that only removes Task Master's additions
**Breaking Change**: Task Master instructions for Claude Code are now stored in `.taskmaster/CLAUDE.md` and imported into the main CLAUDE.md file. Users who previously had Task Master content directly in their CLAUDE.md will need to run `task-master rules remove claude` followed by `task-master rules add claude` to migrate to the new structure.
- [#943](https://github.com/eyaltoledano/claude-task-master/pull/943) [`f98df5c`](https://github.com/eyaltoledano/claude-task-master/commit/f98df5c0fdb253b2b55d4278c11d626529c4dba4) Thanks [@mm-parthy](https://github.com/mm-parthy)! - Implement Boundary-First Tag Resolution to ensure consistent and deterministic tag handling across CLI and MCP, resolving potential race conditions.
- [#1011](https://github.com/eyaltoledano/claude-task-master/pull/1011) [`3eb050a`](https://github.com/eyaltoledano/claude-task-master/commit/3eb050aaddb90fca1a04517e2ee24f73934323be) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix `task-master lang --setup` breaking when no language is defined, now defaults to English
- [#979](https://github.com/eyaltoledano/claude-task-master/pull/979) [`ab2e946`](https://github.com/eyaltoledano/claude-task-master/commit/ab2e94608749a2f148118daa0443bd32bca6e7a1) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix: show command no longer requires complexity report file to exist
The `tm show` command was incorrectly requiring the complexity report file to exist even when not needed. Now it only validates the complexity report path when a custom report file is explicitly provided via the -r/--report option.
- [#971](https://github.com/eyaltoledano/claude-task-master/pull/971) [`5544222`](https://github.com/eyaltoledano/claude-task-master/commit/55442226d0aa4870470d2a9897f5538d6a0e329e) Thanks [@joedanz](https://github.com/joedanz)! - Update VS Code profile with MCP config transformation
- [#1002](https://github.com/eyaltoledano/claude-task-master/pull/1002) [`6d0654c`](https://github.com/eyaltoledano/claude-task-master/commit/6d0654cb4191cee794e1c8cbf2b92dc33d4fb410) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP server error when retrieving tools and resources
- [#980](https://github.com/eyaltoledano/claude-task-master/pull/980) [`cc4fe20`](https://github.com/eyaltoledano/claude-task-master/commit/cc4fe205fb468e7144c650acc92486df30731560) Thanks [@joedanz](https://github.com/joedanz)! - Add MCP configuration support to Claude Code rules
- [#968](https://github.com/eyaltoledano/claude-task-master/pull/968) [`7b4803a`](https://github.com/eyaltoledano/claude-task-master/commit/7b4803a479105691c7ed032fd878fe3d48d82724) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed the comprehensive Task Master system integration via custom slash commands with proper syntax
- Provides Claude Code with a complete set of commands that can trigger Task Master events directly within Claude Code
- [#995](https://github.com/eyaltoledano/claude-task-master/pull/995) [`b78de8d`](https://github.com/eyaltoledano/claude-task-master/commit/b78de8dbb4d6dc93b48e2f81c32960ef069736ed) Thanks [@joedanz](https://github.com/joedanz)! - Correct MCP server name and use 'Add to Cursor' button with updated placeholder keys.
- [#972](https://github.com/eyaltoledano/claude-task-master/pull/972) [`1c7badf`](https://github.com/eyaltoledano/claude-task-master/commit/1c7badff2f5c548bfa90a3b2634e63087a382a84) Thanks [@joedanz](https://github.com/joedanz)! - Add missing API keys to .env.example and README.md
## 0.20.0
### Minor Changes
- [#950](https://github.com/eyaltoledano/claude-task-master/pull/950) [`699e9ee`](https://github.com/eyaltoledano/claude-task-master/commit/699e9eefb5d687b256e9402d686bdd5e3a358b4a) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add support for xAI Grok 4 model
- Add grok-4 model to xAI provider with $3/$15 per 1M token pricing
- Enable main, fallback, and research roles for grok-4
- Max tokens set to 131,072 (matching other xAI models)
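Assuming the documented `models` setup flow (shown for illustration; the env key name follows .env.example), switching to the new model looks like:
```bash
# Illustrative: select grok-4 as the main model
# (requires an xAI API key, e.g. XAI_API_KEY in your .env)
task-master models --set-main grok-4
```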
- [#946](https://github.com/eyaltoledano/claude-task-master/pull/946) [`5f009a5`](https://github.com/eyaltoledano/claude-task-master/commit/5f009a5e1fc10e37be26f5135df4b7f44a9c5320) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add stricter validation and clearer feedback for task priority when adding new tasks
- If a task priority is invalid, it now defaults to `medium`
- Made task priority case-insensitive, so `HIGH` and `high` are treated as the same value (sketched below)
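A minimal, illustrative sketch of that normalization (not the actual implementation):
```js
// Illustrative only: normalize a user-supplied priority; anything invalid
// falls back to 'medium', and case is ignored ('HIGH' === 'high').
const VALID_PRIORITIES = ['high', 'medium', 'low'];

function normalizePriority(priority) {
	const normalized = String(priority ?? '').trim().toLowerCase();
	return VALID_PRIORITIES.includes(normalized) ? normalized : 'medium';
}
```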
- [#863](https://github.com/eyaltoledano/claude-task-master/pull/863) [`b530657`](https://github.com/eyaltoledano/claude-task-master/commit/b53065713c8da0ae6f18eb2655397aa975004923) Thanks [@OrenMe](https://github.com/OrenMe)! - Add support for MCP Sampling as an AI provider; it requires no API key and uses the client's LLM provider
- [#930](https://github.com/eyaltoledano/claude-task-master/pull/930) [`98d1c97`](https://github.com/eyaltoledano/claude-task-master/commit/98d1c974361a56ddbeb772b1272986b9d3913459) Thanks [@OmarElKadri](https://github.com/OmarElKadri)! - Added Groq provider support
### Patch Changes
- [#958](https://github.com/eyaltoledano/claude-task-master/pull/958) [`6c88a4a`](https://github.com/eyaltoledano/claude-task-master/commit/6c88a4a749083e3bd2d073a9240799771774495a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Recover from `@anthropic-ai/claude-code` JSON truncation bug that caused Task Master to crash when handling large (>8 kB) structured responses. The CLI/SDK still truncates, but Task Master now detects the error, preserves buffered text, and returns a usable response instead of throwing.
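The recovery idea, as a rough sketch (the real provider handling is more defensive):
```js
// Rough sketch: when the claude-code SDK truncates a large structured response,
// keep the buffered text and return a usable result instead of throwing.
function parseStructuredResponse(bufferedText) {
	try {
		return { truncated: false, object: JSON.parse(bufferedText) };
	} catch (error) {
		// Truncated JSON: preserve what was received so the caller can still
		// surface or repair the partial output.
		return { truncated: true, raw: bufferedText, error: error.message };
	}
}
```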
- [#958](https://github.com/eyaltoledano/claude-task-master/pull/958) [`3334e40`](https://github.com/eyaltoledano/claude-task-master/commit/3334e409ae659d5223bb136ae23fd22c5e219073) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Update the ai-sdk-provider-gemini-cli dependency to 0.0.4 to address a breaking change Google made to the Gemini CLI, and add support for 'api-key' in addition to 'gemini-api-key' for better AI SDK compatibility.
- [#853](https://github.com/eyaltoledano/claude-task-master/pull/853) [`95c299d`](https://github.com/eyaltoledano/claude-task-master/commit/95c299df642bd8e6d75f8fa5110ac705bcc72edf) Thanks [@joedanz](https://github.com/joedanz)! - Unify and streamline profile system architecture for improved maintainability
## 0.20.0-rc.0
### Minor Changes
- [#950](https://github.com/eyaltoledano/claude-task-master/pull/950) [`699e9ee`](https://github.com/eyaltoledano/claude-task-master/commit/699e9eefb5d687b256e9402d686bdd5e3a358b4a) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Add support for xAI Grok 4 model
- Add grok-4 model to xAI provider with $3/$15 per 1M token pricing
- Enable main, fallback, and research roles for grok-4
- Max tokens set to 131,072 (matching other xAI models)
- [#946](https://github.com/eyaltoledano/claude-task-master/pull/946) [`5f009a5`](https://github.com/eyaltoledano/claude-task-master/commit/5f009a5e1fc10e37be26f5135df4b7f44a9c5320) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add stricter validation and clearer feedback for task priority when adding new tasks
- If a task priority is invalid, it now defaults to `medium`
- Made task priority case-insensitive, so `HIGH` and `high` are treated as the same value
- [#863](https://github.com/eyaltoledano/claude-task-master/pull/863) [`b530657`](https://github.com/eyaltoledano/claude-task-master/commit/b53065713c8da0ae6f18eb2655397aa975004923) Thanks [@OrenMe](https://github.com/OrenMe)! - Add support for MCP Sampling as an AI provider; it requires no API key and uses the client's LLM provider
- [#930](https://github.com/eyaltoledano/claude-task-master/pull/930) [`98d1c97`](https://github.com/eyaltoledano/claude-task-master/commit/98d1c974361a56ddbeb772b1272986b9d3913459) Thanks [@OmarElKadri](https://github.com/OmarElKadri)! - Added Groq provider support
### Patch Changes
- [#916](https://github.com/eyaltoledano/claude-task-master/pull/916) [`6c88a4a`](https://github.com/eyaltoledano/claude-task-master/commit/6c88a4a749083e3bd2d073a9240799771774495a) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Recover from `@anthropic-ai/claude-code` JSON truncation bug that caused Task Master to crash when handling large (>8 kB) structured responses. The CLI/SDK still truncates, but Task Master now detects the error, preserves buffered text, and returns a usable response instead of throwing.
- [#916](https://github.com/eyaltoledano/claude-task-master/pull/916) [`3334e40`](https://github.com/eyaltoledano/claude-task-master/commit/3334e409ae659d5223bb136ae23fd22c5e219073) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Update the ai-sdk-provider-gemini-cli dependency to 0.0.4 to address a breaking change Google made to the Gemini CLI, and add support for 'api-key' in addition to 'gemini-api-key' for better AI SDK compatibility.
- [#853](https://github.com/eyaltoledano/claude-task-master/pull/853) [`95c299d`](https://github.com/eyaltoledano/claude-task-master/commit/95c299df642bd8e6d75f8fa5110ac705bcc72edf) Thanks [@joedanz](https://github.com/joedanz)! - Unify and streamline profile system architecture for improved maintainability
## 0.19.0
### Minor Changes
