claude-task-master/assets/rules/dev_workflow.mdc
Ralph Khreish b40139ca05 Release 0.18.0 (#840)
* Update SWE scores (#657)

* docs: Auto-update and format models.md

* feat: Flexible brand rules management (#460)

* chore(docs): update docs and rules related to model management.

* feat(ai): Add OpenRouter AI provider support

Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform.

- Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject).

- Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic.

- Verified config-manager.js handles OpenRouter API key checks correctly.

- Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY.

- Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint.

* feat(cli): Add --status/-s filter flag to show command and get-task MCP tool

Implements the ability to filter subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag. This is also available in the MCP context.

- Modified `commands.js` to add the `--status` option to the `show` command definition.

- Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering.

- Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation.

- Updated MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter.

- Added changeset entry.

* fix(tasks): Improve next task logic to be subtask-aware

* fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs

- Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs.

- Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs.

- Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly.

- Fixes a bug in the CLI confirmation prompt where task/subtask titles were not being displayed correctly.

- Updates the `remove_task` MCP tool description to reflect the new multi-ID capability.

This addresses the previously known issue where only the first ID in a comma-separated list was processed.

Closes #140

* Update README.md (#342)

* Update Discord badge (#337)

* refactor(init): Improve robustness and dependencies; Update template deps for AI SDKs; Silence npm install in MCP; Improve conditional model setup logic; Refactor init.js flags; Tweak Getting Started text; Fix MCP server launch command; Update default model in config template

* Refactor: Improve MCP logging, update E2E & tests

Refactors MCP server logging and updates testing infrastructure.

- MCP Server:

  - Replaced manual logger wrappers with centralized `createLogWrapper` utility.

  - Updated direct function calls to use `{ session, mcpLog }` context.

  - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools.

  - Adjusted MCP tool import paths and parameter descriptions.

- Documentation:

  - Modified `docs/configuration.md`.

  - Modified `docs/tutorial.md`.

- Testing:

  - E2E Script (`run_e2e.sh`):

    - Removed `set -e`.

    - Added LLM analysis function (`analyze_log_with_llm`) & integration.

    - Adjusted test run directory creation timing.

    - Added debug echo statements.

  - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`.

  - Modified Fixtures: Updated `scripts/task-complexity-report.json`.

- Dev Scripts:

  - Modified `scripts/dev.js`.

* chore(tests): Passes tests for merge candidate
- Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup'
- The E2E script now works as designed, provided all provider API keys are present in a `.env` file in the project root
- Fixes the entire test suite to make sure it passes with the new architecture.
- Fixes dependency command to properly show there is a validation failure if there is one.
- Refactored config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json
- Fixed rule-transformer.test.js assertion syntax and transformation logic, adjusting a replacement whose search pattern was too broad.
- Skip unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to SIGABRT crash. These tests trigger a native crash (SIGABRT), likely stemming from a conflict between internal chalk usage within the functions and Jest's test environment, possibly related to ESM module handling.

* chore(wtf): removes chai. not sure how that even made it in here. also removes duplicate test in scripts/.

* fix: ensure API key detection properly reads .env in MCP context

Problem:
- Task Master model configuration wasn't properly checking for API keys in the project's .env file when running through MCP
- The isApiKeySet function was only checking session.env and process.env but not inspecting the .env file directly
- This caused incorrect API key status reporting in MCP tools even when keys were properly set in .env

Solution:
- Modified resolveEnvVariable function in utils.js to properly read from .env file at projectRoot
- Updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable
- Enhanced the key detection logic to have consistent behavior between CLI and MCP contexts
- Maintains the correct precedence: session.env → .env file → process.env

Testing:
- Verified working correctly with both MCP and CLI tools
- API keys properly detected in .env file in both contexts
- Deleted .cursor/mcp.json to confirm introspection of .env as fallback works

* fix(update): pass projectRoot through update command flow

Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context.

* fix(analyze-complexity): pass projectRoot through analyze-complexity flow

Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity.

* fix(add-task): pass projectRoot and fix logging/refs

Modified add-task core, direct function, and tool to pass projectRoot for .env API key fallback. Fixed logFn reference error and removed deprecated reportProgress call in core addTask function. Verified working.

* fix(parse-prd): pass projectRoot and fix schema/logging

Modified parse-prd core, direct function, and tool to pass projectRoot for .env API key fallback. Corrected Zod schema used in generateObjectService call. Fixed logFn reference error in core parsePRD. Updated unit test mock for utils.js.

* fix(update-task): pass projectRoot and adjust parsing

Modified update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in core function to prioritize `{...}` extraction, resolving parsing errors. Fixed ReferenceError by correctly destructuring projectRoot.

* fix(update-subtask): pass projectRoot and allow updating done subtasks

Modified update-subtask-by-id core, direct function, and tool to pass projectRoot for .env API key fallback. Removed check preventing appending details to completed subtasks.

* fix(mcp, expand): pass projectRoot through expand/expand-all flows

Problem: expand_task & expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution. Also fixed a ReferenceError: wasSilent is not defined in expandTaskDirect.

Solution: Modified core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed wasSilent scope in expandTaskDirect.

Verification: Tested expand_task successfully in MCP using .env keys. Reviewed expand_all flow for correct projectRoot propagation.

* chore: prettier

* fix(expand-all): add projectRoot to expandAllTasksDirect invocation.

* fix(update-tasks): Improve AI response parsing for 'update' command

Refactors the JSON array parsing logic in the core `update-tasks` function.

The previous logic primarily relied on extracting content from markdown
code blocks (json or javascript), which proved brittle when the AI
response included comments or non-JSON text within the block, leading to
parsing errors for the `update` command.

This change modifies the parsing strategy to first attempt extracting
content directly between the outermost '[' and ']' brackets. This is
more robust as it targets the expected array structure directly. If
bracket extraction fails, it falls back to looking for a strict json
code block, then prefix stripping, before attempting a raw parse.

This approach aligns with the parsing strategy already used successfully
for single-object responses and resolves the parsing errors previously
observed with the `update` command.

* refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization

Added HOF to mcp tools utils to normalize projectRoot from args/session. Refactored get-task tool to use HOF. Updated relevant documentation.

* refactor(mcp): apply withNormalizedProjectRoot HOF to update tool

Problem: The `update` MCP tool previously handled project root acquisition and path resolution within its `execute` method, leading to potential inconsistencies and repetition.

Solution: Refactored the `update` tool to use the new `withNormalizedProjectRoot` Higher-Order Function (HOF) from the MCP tool utilities.

Specific Changes:
- Imported the `withNormalizedProjectRoot` HOF.
- Updated the Zod schema for the `projectRoot` parameter to be optional, as the HOF handles deriving it from the session if not provided.
- Wrapped the entire `execute` function body with the `withNormalizedProjectRoot` HOF.
- Removed the manual project root resolution call from within the `execute` function body.
- Destructured `projectRoot` from the `args` object received by the wrapped `execute` function, ensuring it is the normalized path provided by the HOF.
- Used the normalized `projectRoot` variable when resolving the tasks file path and when passing arguments to the direct function.

This change standardizes project root handling for the `update` tool, simplifies its `execute` method, and ensures consistent path normalization. It serves as the pattern for refactoring other MCP tools.

* fix: apply withNormalizedProjectRoot to all tools to fix projectRoot issues on Linux and Windows

* fix: add rest of tools that need wrapper

* chore: cleanup tools to stop using rootFolder and remove unused imports

* chore: more cleanup

* refactor: Improve update-subtask, consolidate utils, update config

This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration.

**Major Changes:**

1.  **Refactor updateSubtaskById:**
    - Switched from generateTextService to generateObjectService for structured AI responses, using a Zod schema (subtaskSchema) for validation.
    - Revised prompts to have the AI generate relevant content based on user request and context (parent/sibling tasks), while explicitly preventing AI from handling timestamp/tag formatting.
    - Implemented **local timestamp generation (new Date().toISOString()) and formatting** (using <info added on ...> tags) within the function *after* receiving the AI response. This ensures reliable and correctly formatted details are appended.
    - Corrected logic to append only the locally formatted, AI-generated content block to the existing subtask.details.

2.  **Consolidate MCP Utilities:**
    - Moved/consolidated the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js.
    - Updated MCP tools (like update-subtask.js) to import withNormalizedProjectRoot from the new location.

3.  **Refactor Project Initialization:**
    - Deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js file.
    - Updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js).

**Other Changes:**

-   Updated .taskmasterconfig fallback model to claude-3-7-sonnet-20250219.
-   Clarified model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js).

* fix: displayBanner logging when silentMode is active (#385)

* fix: improve error handling, test options, and model configuration

- Enhance error validation in parse-prd.js and update-tasks.js
- Fix bug where mcpLog was incorrectly passed as logWrapper
- Improve error messages and response formatting
- Add --skip-verification flag to E2E tests
- Update MCP server config that ships with init to match new API key structure
- Fix task force/append handling in parse-prd command
- Increase column width in update-tasks display

* chore: fixes parse prd to show loading indicator in cli.

* fix(parse-prd): suggested fix for mcpLog was incorrect. reverting to my previously working code.

* chore(init): No longer ships a README with task-master init (commented out for now). Init now checks for task-master-ai instead of task-master-mcp; this should prevent the init sequence from needlessly adding a duplicate task-master-mcp server entry to mcp.json, which many people likely ran into.

* chore: restores 3.7 sonnet as the main role.

* fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files.

* chore: removes tasks json backup that was temporarily created.

* fix(next): adjusts mcp tool response to correctly return the next task/subtask. Also adds nextSteps to the next task response.

* chore: prettier

* chore: readme typos

* fix(config): restores sonnet 3.7 as default main role.

* Version Packages

* hotfix: move production package to "dependencies" (#399)

* Version Packages

* Fix: issues with 0.13.0 not working (#402)

* Exit prerelease mode and version packages

* hotfix: move production package to "dependencies"

* Enter prerelease mode and version packages

* Enter prerelease mode and version packages

* chore: cleanup

* chore: improve pre.json and add pre-release workflow

* chore: fix package.json

* chore: cleanup

* chore: improve pre-release workflow

* chore: allow github actions to commit

* extract fileMap and conversionConfig into brand profile

* extract into brand profile

* add windsurf profile

* add remove brand rules function

* fix regex

* add rules command to add/remove rules for a specific brand

* fix post processing for roo

* allow multiples

* add cursor profile

* update test for new structure

* move rules to assets

* use assets/rules for rules files

* use standardized setupMCP function

* fix formatting

* fix formatting

* add logging

* fix escapes

* default to cursor

* allow init with certain rulesets; no more .windsurfrules

* update docs

* update log msg

* fix formatting

* keep mdc extension for cursor

* don't rewrite .mdc to .md inside the files

* fix roo init (add modes)

* fix cursor init (don't use roo transformation by default)

* use more generic function names

* update docs

* fix formatting

* update function names

* add changeset

* add rules to mcp initialize project

* register tool with mcp server

* update docs

* add integration test

* fix cursor initialization

* rule selection

* fix formatting

* fix MCP - remove yes flag

* add import

* update roo tests

* add/update tests

* remove test

* add rules command test

* update MCP responses, centralize rules profiles & helpers

* fix logging and MCP response messages

* fix formatting

* incorrect test

* fix tests

* update fileMap

* fix file extension transformations

* fix formatting

* add rules command test

* test already covered

* fix formatting

* move renaming logic into profiles

* make sure dir is deleted (DS_Store)

* add confirmation for rules removal

* add force flag for rules remove

* use force flag for test

* remove yes parameter

* fix formatting

* import brand profiles from rule-transformer.js

* update comment

* add interactive rules setup

* optimize

* only copy rules specifically listed in fileMap

* update comment

* add cline profile

* add brandDir to remove ambiguity and support Cline

* specify whether to create mcp config and filename

* add mcpConfigName value for path

* fix formatting

* remove rules just for this repository - only include rules to be distributed

* update error message

* update "brand rules" to "rules"

* update to minor

* remove comment

* remove comments

* move to /src/utils

* optimize imports

* move rules-setup.js to /src/utils

* move rule-transformer.js to /src/utils

* move confirmation to /src/ui/confirm.js

* default to all rules

* use profile js for mcp config settings

* only run rules interactive setup if not provided via command line

* update comments

* initialize with all brands if nothing specified

* update var name

* clean up

* enumerate brands for brand rules

* update instructions

* add test to check for brand profiles

* fix quotes

* update semantics and terminology from 'brand rules' to 'rules profiles'

* fix formatting

* fix formatting

* update function name and remove copying of cursor rules, now handled by rules transformer

* update comment

* rename to mcp-config-setup.js

* use enums for rules actions

* add aggregate reporting for rules add command

* add missing log message

* use simpler path

* use base profile with modifications for each brand

* use displayName and don't select any defaults in setup

* add confirmation if removing ALL rules profiles, and add --force flag on rules remove

* Use profile-detection instead of rules-detection

* add newline at end of mcp config

* add proper formatting for mcp.json

* update rules

* update rules

* update rules

* add checks for other rules and other profile folder items before removing

* update confirmation for rules remove

* update docs

* update changeset

* fix for filepath at bottom of rule

* Update cline profile and add test; adjust other rules tests

* update changeset

* update changeset

* clarify init for all profiles if not specified

* update rule text

* revert text

* use "rule profiles" instead of "rules profiles"

* use standard tool mappings for windsurf

* add Trae support

* update changeset

* update wording

* update to 'rule profile'

* remove unneeded exports to optimize loc

* combine to /src/utils/profiles.js; add codex and claude code profiles

* rename function and add boxen

* add claude and codex integration tests

* organize tests into profiles folder

* mock fs for transformer tests

* update UI

* add cline and trae integration tests

* update test

* update function name

* update formatting

* Update change set with new profiles

* move profile integration tests to subdirectory

* properly create temp directories in /tmp folder

* fix formatting

* use taskmaster subfolder for the 2 TM rules

* update wording

* ensure subdirectory exists

* update rules from next

* update from next

* update taskmaster rule

* add details on new rules command and init

* fix mcp init

* fix MCP path to assets

* remove duplication

* remove duplication

* MCP server path fixes for rules command

* fix for CLI roo rules add/remove

* update tests

* fix formatting

* fix pattern for interactive rule profiles setup

* restore comments

* restore comments

* restore comments

* remove unused import, fix quotes

* add missing integration tests

* add VS Code profile and tests

* update docs and rules to include vscode profile

* add rules subdirectory support per-profile

* move profiles to /src

* fix formatting

* rename to remove ambiguity

* use --setup for rules interactive setup

* Fix Cursor deeplink installation with copy-paste instructions (#723)

* change roo boomerang to orchestrator; update tests that don't use modes

* fix newline

* chore: cleanup

---------

Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix: providers config for azure, bedrock, and vertex (#822)

* fix: providers config for azure, bedrock, and vertex

* chore: improve changelog

* chore: fix CI

* fix: switch to ESM export to avoid mixed format (#633)

* fix: switch to ESM export to avoid mixed format

The CLI entrypoint was using `module.exports` alongside ESM `import` statements,
resulting in an invalid mixed module format. Replaced the CommonJS export with
a proper ESM `export` to maintain consistency and prevent module resolution issues.

* chore: add changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>

* fix: Fix external provider support (#726)

* fix(bedrock): improve AWS credential handling and add model definitions (#826)

* fix(bedrock): improve AWS credential handling and add model definitions

- Change error to warning when AWS credentials are missing in environment
- Allow fallback to system configuration (aws config files or instance profiles)
- Remove hardcoded region and profile parameters in Bedrock client
- Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock
- Update config manager to properly handle Bedrock provider

* chore: cleanup and format and small refactor

---------

Co-authored-by: Ray Krueger <raykrueger@gmail.com>

* docs: Auto-update and format models.md

* Version Packages

* chore: fix package.json

* Fix/expand command tag corruption (#827)

* fix(expand): Fix tag corruption in expand command

- Fix tag parameter passing through the MCP expand-task flow
- Add tag parameter to the direct function and tool registration
- Fix contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs
- Add comprehensive test coverage for tag handling in expand-task
- Ensure the tagged task structure is preserved during expansion
- Prevent corruption when tag is undefined

Fixes the expand command causing tag corruption in tagged task lists. All existing tests pass, and new test coverage was added.

* test(e2e): Add comprehensive tag-aware expand testing to verify tag corruption fix

- Add new test section for feature-expand tag creation and testing
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that the master tag remains intact and the feature-expand tag receives subtasks correctly
- Fix file path references to use the correct .taskmaster/tasks/tasks.json location
- Fix config file check to use .taskmaster/config.json instead of .taskmasterconfig

All tag corruption verification tests pass in the E2E run.

* fix(changeset): Update E2E test improvements changeset to properly reflect tag corruption fix verification

* chore(changeset): combine duplicate changesets for expand tag corruption fix

Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate
the expand command fix and its comprehensive E2E testing enhancements
into a single changeset entry.

* Delete .changeset/eighty-breads-wonder.md

* Version Packages

* chore: fix package.json

* fix(expand): Enhance context handling in expandAllTasks function
- Added `tag` to context destructuring for better context management.
- Updated `readJSON` call to include `contextTag` for improved data integrity.
- Ensured the correct tag is passed during task expansion to prevent tag corruption.

---------

Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Add pyproject.toml as project root marker (#804)

* feat: Add pyproject.toml as project root marker

- Added 'pyproject.toml' to the project markers array in findProjectRoot()
- Enables Task Master to recognize Python projects using pyproject.toml
- Improves project root detection for modern Python development workflows
- Maintains compatibility with existing Node.js and Git-based detection

* chore: add changeset

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>

* feat: add Claude Code provider support

Implements Claude Code as a new AI provider that uses the Claude Code CLI
without requiring API keys. This enables users to leverage Claude models
through their local Claude Code installation.

Key changes:
- Add complete AI SDK v1 implementation for Claude Code provider
  - Custom SDK with streaming/non-streaming support
  - Session management for conversation continuity
  - JSON extraction for object generation mode
  - Support for advanced settings (maxTurns, allowedTools, etc.)

- Integrate Claude Code into Task Master's provider system
  - Update ai-services-unified.js to handle keyless authentication
  - Add provider to supported-models.json with opus/sonnet models
  - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000)

- Fix maxTokens configuration issue
  - Add max_tokens property to getAvailableModels() output
  - Update setModel() to properly handle claude-code models
  - Create update-config-tokens.js utility for init process

- Add comprehensive documentation
  - User guide with configuration examples
  - Advanced settings explanation and future integration options

The implementation maintains full backward compatibility with existing
providers while adding seamless Claude Code support to all Task Master
commands.

* fix(docs): correct invalid commands in claude-code usage examples

- Remove non-existent 'do', 'estimate', and 'analyze' commands
- Replace with actual Task Master commands: next, show, set-status
- Use correct syntax for parse-prd and analyze-complexity

* feat: make @anthropic-ai/claude-code an optional dependency

This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality.

Changes:
- Added @anthropic-ai/claude-code to optionalDependencies in package.json
- Implemented lazy loading in language-model.js to only import the SDK when actually used
- Updated documentation to explain the optional installation requirement
- Applied formatting fixes to ensure code consistency

Benefits:
- Users without Claude Code subscriptions don't need to install the dependency
- Reduces package size for users who don't use Claude Code
- Prevents installation failures if the package is unavailable
- Provides clear error messages when the package is needed but not installed

The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present.

* test: add comprehensive tests for ClaudeCodeProvider

Addresses code review feedback about missing automated tests for the ClaudeCodeProvider.

## Changes

- Added unit tests for ClaudeCodeProvider class covering constructor, validateAuth, and getClient methods
- Added unit tests for ClaudeCodeLanguageModel testing lazy loading behavior and error handling
- Added integration tests verifying optional dependency behavior when @anthropic-ai/claude-code is not installed

## Test Coverage

1. **Unit Tests**:
   - ClaudeCodeProvider: Basic functionality, no API key requirement, client creation
   - ClaudeCodeLanguageModel: Model initialization, lazy loading, error messages, warning generation

2. **Integration Tests**:
   - Optional dependency behavior when package is not installed
   - Clear error messages for users about missing package
   - Provider instantiation works but usage fails gracefully

All tests pass and provide comprehensive coverage for the claude-code provider implementation.

* revert: remove maxTokens update functionality from init

This functionality was out of scope for the Claude Code provider PR.
The automatic updating of maxTokens values in config.json during
initialization is a general improvement that should be in a separate PR.

Additionally, Claude Code ignores maxTokens and temperature parameters
anyway, making this change irrelevant for the Claude Code integration.

Removed:
- scripts/modules/update-config-tokens.js
- Import and usage in scripts/init.js

* docs: add Claude Code support information to README

- Added Claude Code to the list of supported providers in Requirements section
- Noted that Claude Code requires no API key but needs Claude Code CLI
- Added example of configuring claude-code/sonnet model
- Created dedicated Claude Code Support section with key information
- Added link to detailed Claude Code setup documentation

This ensures users are aware of the Claude Code option as a no-API-key
alternative for using Claude models.

* style: apply biome formatting to test files

* fix(models): add missing --claude-code flag to models command

The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via CLI. While the backend already supported claude-code as a provider hint, there was no command-line flag to trigger it.

Changes:
- Added --claude-code option to models command alongside existing provider flags
- Updated provider flags validation to include claudeCode option
- Added claude-code to providerHint logic for all three model roles (main, research, fallback)
- Updated error message to include --claude-code in list of mutually exclusive flags
- Added example usage in help text

This allows users to properly set Claude Code models using commands like:
  task-master models --set-main sonnet --claude-code
  task-master models --set-main opus --claude-code

Without this flag, users would get "Model ID not found" errors when trying to set claude-code models, as the system couldn't determine the correct provider for generic model names like "sonnet" or "opus".

* chore: add changeset for Claude Code provider feature

* docs: Auto-update and format models.md

* readme: add troubleshooting note for MCP tools not working

* Feature/compatibleapisupport (#830)

* add compatible platform api support

* Adjust the code according to the suggestions

* Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English.

* feat: Add support for compatible API endpoints via baseURL

* chore: Add changeset for compatible API support

* chore: cleanup

* chore: improve changeset

* fix: package-lock.json

* fix: package-lock.json

---------

Co-authored-by: He-Xun <1226807142@qq.com>

* Rename Roo Code "Boomerang" role to "Orchestrator" (#831)

* feat: Enhanced project initialization with Git worktree detection (#743)

* Fix Cursor deeplink installation with copy-paste instructions (#723)

* detect git worktree

* add changeset

* add aliases and git flags

* add changeset

* rename and update test

* add store tasks in git functionality

* update changeset

* fix newline

* remove unused import

* update command wording

* update command option text

* fix: update task by id (#834)

* store tasks in git by default (#835)

* Call rules interactive setup during init (#833)

* chore: rc version bump

* feat: Claude Code slash commands for Task Master (#774)

* Fix Cursor deeplink installation with copy-paste instructions (#723)

* fix: expand-task (#755)

* docs: Update o3 model price (#751)

* docs: Auto-update and format models.md

* docs: Auto-update and format models.md

* feat: Add Claude Code task master commands

Adds Task Master slash commands for Claude Code under /project:tm/ namespace

---------

Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>

* feat: make more compatible with "o" family models (#839)

* docs: Auto-update and format models.md

* docs: Add comprehensive Azure OpenAI configuration documentation (#837)

* docs: Add comprehensive Azure OpenAI configuration documentation

- Add detailed Azure OpenAI configuration section with prerequisites, authentication, and setup options
- Include both global and per-model baseURL configuration examples
- Add comprehensive troubleshooting guide for common Azure OpenAI issues
- Update environment variables section with Azure OpenAI examples
- Add Azure OpenAI models to all model tables (Main, Research, Fallback)
- Include prominent Azure configuration example in main documentation
- Fix azureBaseURL format to use correct Azure OpenAI endpoint structure

Addresses common Azure OpenAI setup challenges and provides clear guidance for new users.

* refactor: Move Azure models from docs/models.md to scripts/modules/supported-models.json

- Remove Azure model entries from documentation tables
- Add Azure provider section to supported-models.json with gpt-4o, gpt-4o-mini, and gpt-4-1
- Maintain consistency with existing model configuration structure

* docs: Auto-update and format models.md

* Version Packages

* chore: format fix

---------

Co-authored-by: Riccardo (Ricky) Esclapon <32306488+ries9112@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Nathan Marley <nathan@glowberrylabs.com>
Co-authored-by: Ray Krueger <raykrueger@gmail.com>
Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: ejones40 <ethan.jones@fortyau.com>
Co-authored-by: Ben Vargas <ben@vargas.com>
Co-authored-by: V4G4X <34249137+V4G4X@users.noreply.github.com>
Co-authored-by: He-Xun <1226807142@qq.com>
Co-authored-by: neno <github@meaning.systems>
Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
Co-authored-by: Jitesh Thakur <56656484+Jitha-afk@users.noreply.github.com>
2025-06-21 13:54:17 -07:00


---
description: Guide for using Taskmaster to manage task-driven development workflows
globs: **/*
alwaysApply: true
---
# Taskmaster Development Workflow
This guide outlines the standard process for using Taskmaster to manage software development projects. It is written as a set of instructions for you, the AI agent.
- **Your Default Stance**: For most projects, the user can work directly within the `master` task context. Your initial actions should operate on this default context unless a clear pattern for multi-context work emerges.
- **Your Goal**: Your role is to elevate the user's workflow by intelligently introducing advanced features like **Tagged Task Lists** when you detect the appropriate context. Do not force tags on the user; suggest them as a helpful solution to a specific need.
## The Basic Loop
The fundamental development cycle you will facilitate is:
1. **`list`**: Show the user what needs to be done.
2. **`next`**: Help the user decide what to work on.
3. **`show <id>`**: Provide details for a specific task.
4. **`expand <id>`**: Break down a complex task into smaller, manageable subtasks.
5. **Implement**: The user writes the code and tests.
6. **`update-subtask`**: Log progress and findings on behalf of the user.
7. **`set-status`**: Mark tasks and subtasks as `done` as work is completed.
8. **Repeat**.
All your standard command executions should operate on the user's current task context, which defaults to `master`.
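In CLI terms, a single pass through this loop might look like the following (a sketch; the task ID `3` and prompt text are illustrative):
```
task-master list                                   # 1. see what needs to be done
task-master next                                   # 2. pick what to work on
task-master show 3                                 # 3. review details for task 3
task-master expand --id=3 --research               # 4. break it into subtasks
# ... implement and test ...
task-master update-subtask --id=3.1 --prompt="Implemented the parser; edge case X still open"
task-master set-status --id=3.1 --status=done      # 7. mark the subtask done
```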
---
## Standard Development Workflow Process
### Simple Workflow (Default Starting Point)
For new projects or when users are getting started, operate within the `master` tag context:
- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input='<prd-file.txt>'` (see @`taskmaster.mdc`) to generate initial tasks.json with tagged structure
- Configure rule sets during initialization with `--rules` flag (e.g., `task-master init --rules cursor,windsurf`) or manage them later with `task-master rules add/remove` commands
- Begin coding sessions with `get_tasks` / `task-master list` (see @`taskmaster.mdc`) to see current tasks, status, and IDs
- Determine the next task to work on using `next_task` / `task-master next` (see @`taskmaster.mdc`)
- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.mdc`) before breaking down tasks
- Review complexity report using `complexity_report` / `task-master complexity-report` (see @`taskmaster.mdc`)
- Select tasks based on dependencies (all marked 'done'), priority level, and ID order
- View specific task details using `get_task` / `task-master show <id>` (see @`taskmaster.mdc`) to understand implementation requirements
- Break down complex tasks using `expand_task` / `task-master expand --id=<id> --force --research` (see @`taskmaster.mdc`) with appropriate flags like `--force` (to replace existing subtasks) and `--research`
- Implement code following task details, dependencies, and project standards
- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see @`taskmaster.mdc`)
- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see @`taskmaster.mdc`)
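Putting these together, a first session on a new project might look like this (a sketch; the PRD path and task IDs are illustrative):
```
task-master init --rules cursor,windsurf
task-master parse-prd --input='scripts/prd.txt'
task-master analyze-complexity --research
task-master complexity-report
task-master next
task-master show 1
task-master expand --id=1 --force --research
# ... implement ...
task-master set-status --id=1.1 --status=done
```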
---
## Leveling Up: Agent-Led Multi-Context Workflows
While the basic workflow is powerful, your primary opportunity to add value is by identifying when to introduce **Tagged Task Lists**. These patterns are your tools for creating a more organized and efficient development environment for the user, especially if you detect agentic or parallel development happening across the same session.
**Critical Principle**: Most users should never see a difference in their experience. Only introduce advanced workflows when you detect clear indicators that the project has evolved beyond simple task management.
### When to Introduce Tags: Your Decision Patterns
Here are the patterns to look for. When you detect one, you should propose the corresponding workflow to the user.
#### Pattern 1: Simple Git Feature Branching
This is the most common and direct use case for tags.
- **Trigger**: The user creates a new git branch (e.g., `git checkout -b feature/user-auth`).
- **Your Action**: Propose creating a new tag that mirrors the branch name to isolate the feature's tasks from `master`.
- **Your Suggested Prompt**: *"I see you've created a new branch named 'feature/user-auth'. To keep all related tasks neatly organized and separate from your main list, I can create a corresponding task tag for you. This helps prevent merge conflicts in your `tasks.json` file later. Shall I create the 'feature-user-auth' tag?"*
- **Tool to Use**: `task-master add-tag --from-branch`
#### Pattern 2: Team Collaboration
- **Trigger**: The user mentions working with teammates (e.g., "My teammate Alice is handling the database schema," or "I need to review Bob's work on the API.").
- **Your Action**: Suggest creating a separate tag for the user's work to prevent conflicts with shared master context.
- **Your Suggested Prompt**: *"Since you're working with Alice, I can create a separate task context for your work to avoid conflicts. This way, Alice can continue working with the master list while you have your own isolated context. When you're ready to merge your work, we can coordinate the tasks back to master. Shall I create a tag for your current work?"*
- **Tool to Use**: `task-master add-tag my-work --copy-from-current --description="My tasks while collaborating with Alice"`
#### Pattern 3: Experiments or Risky Refactors
- **Trigger**: The user wants to try something that might not be kept (e.g., "I want to experiment with switching our state management library," or "Let's refactor the old API module, but I want to keep the current tasks as a reference.").
- **Your Action**: Propose creating a sandboxed tag for the experimental work.
- **Your Suggested Prompt**: *"This sounds like a great experiment. To keep these new tasks separate from our main plan, I can create a temporary 'experiment-zustand' tag for this work. If we decide not to proceed, we can simply delete the tag without affecting the main task list. Sound good?"*
- **Tool to Use**: `task-master add-tag experiment-zustand --description="Exploring Zustand migration"`
#### Pattern 4: Large Feature Initiatives (PRD-Driven)
This is a more structured approach for significant new features or epics.
- **Trigger**: The user describes a large, multi-step feature that would benefit from a formal plan.
- **Your Action**: Propose a comprehensive, PRD-driven workflow.
- **Your Suggested Prompt**: *"This sounds like a significant new feature. To manage this effectively, I suggest we create a dedicated task context for it. Here's the plan: I'll create a new tag called 'feature-xyz', then we can draft a Product Requirements Document (PRD) together to scope the work. Once the PRD is ready, I'll automatically generate all the necessary tasks within that new tag. How does that sound?"*
- **Your Implementation Flow**:
1. **Create an empty tag**: `task-master add-tag feature-xyz --description "Tasks for the new XYZ feature"`. You can also start by creating a git branch if applicable, and then create the tag from that branch.
2. **Collaborate & Create PRD**: Work with the user to create a detailed PRD file (e.g., `.taskmaster/docs/feature-xyz-prd.txt`).
3. **Parse PRD into the new tag**: `task-master parse-prd .taskmaster/docs/feature-xyz-prd.txt --tag feature-xyz`
4. **Prepare the new task list**: Follow up by suggesting `analyze-complexity` and `expand-all` for the newly created tasks within the `feature-xyz` tag.
#### Pattern 5: Version-Based Development
Tailor your approach based on the project maturity indicated by tag names.
- **Prototype/MVP Tags** (`prototype`, `mvp`, `poc`, `v0.x`):
- **Your Approach**: Focus on speed and functionality over perfection
- **Task Generation**: Create tasks that emphasize "get it working" over "get it perfect"
- **Complexity Level**: Lower complexity, fewer subtasks, more direct implementation paths
- **Research Prompts**: Include context like "This is a prototype - prioritize speed and basic functionality over optimization"
- **Example Prompt Addition**: *"Since this is for the MVP, I'll focus on tasks that get core functionality working quickly rather than over-engineering."*
- **Production/Mature Tags** (`v1.0+`, `production`, `stable`):
- **Your Approach**: Emphasize robustness, testing, and maintainability
- **Task Generation**: Include comprehensive error handling, testing, documentation, and optimization
- **Complexity Level**: Higher complexity, more detailed subtasks, thorough implementation paths
- **Research Prompts**: Include context like "This is for production - prioritize reliability, performance, and maintainability"
- **Example Prompt Addition**: *"Since this is for production, I'll ensure tasks include proper error handling, testing, and documentation."*
### Advanced Workflow (Tag-Based & PRD-Driven)
**When to Transition**: Recognize when the project has evolved beyond simple task management (or when the user has initialized Taskmaster on an existing codebase). Look for these indicators:
- User mentions teammates or collaboration needs
- Project has grown to 15+ tasks with mixed priorities
- User creates feature branches or mentions major initiatives
- User initializes Taskmaster on an existing, complex codebase
- User describes large features that would benefit from dedicated planning
**Your Role in Transition**: Guide the user to a more sophisticated workflow that leverages tags for organization and PRDs for comprehensive planning.
#### Master List Strategy (High-Value Focus)
Once you transition to tag-based workflows, the `master` tag should ideally contain only:
- **High-level deliverables** that provide significant business value
- **Major milestones** and epic-level features
- **Critical infrastructure** work that affects the entire project
- **Release-blocking** items
**What NOT to put in master**:
- Detailed implementation subtasks (these go in feature-specific tags' parent tasks)
- Refactoring work (create dedicated tags like `refactor-auth`)
- Experimental features (use `experiment-*` tags)
- Team member-specific tasks (use person-specific tags)
#### PRD-Driven Feature Development
**For New Major Features**:
1. **Identify the Initiative**: When user describes a significant feature
2. **Create Dedicated Tag**: `add_tag feature-[name] --description="[Feature description]"`
3. **Collaborative PRD Creation**: Work with user to create comprehensive PRD in `.taskmaster/docs/feature-[name]-prd.txt`
4. **Parse & Prepare**:
- `parse_prd .taskmaster/docs/feature-[name]-prd.txt --tag=feature-[name]`
- `analyze_project_complexity --tag=feature-[name] --research`
- `expand_all --tag=feature-[name] --research`
5. **Add Master Reference**: Create a high-level task in `master` that references the feature tag
**For Existing Codebase Analysis**:
When users initialize Taskmaster on existing projects:
1. **Codebase Discovery**: Use your native tools to build deep context about the codebase. You may use the `research` tool with `--tree` and `--files` to collect up-to-date information with the existing architecture as context.
2. **Collaborative Assessment**: Work with user to identify improvement areas, technical debt, or new features
3. **Strategic PRD Creation**: Co-author PRDs that include:
- Current state analysis (based on your codebase research)
- Proposed improvements or new features
- Implementation strategy considering existing code
4. **Tag-Based Organization**: Parse PRDs into appropriate tags (`refactor-api`, `feature-dashboard`, `tech-debt`, etc.)
5. **Master List Curation**: Keep only the most valuable initiatives in master
The `parse-prd` command's `--append` flag enables the user to parse multiple PRDs within or across tags. PRDs should be focused, and the number of tasks they are parsed into should be chosen strategically relative to the PRD's complexity and level of detail.
### Workflow Transition Examples
**Example 1: Simple → Team-Based**
```
User: "Alice is going to help with the API work"
Your Response: "Great! To avoid conflicts, I'll create a separate task context for your work. Alice can continue with the master list while you work in your own context. When you're ready to merge, we can coordinate the tasks back together."
Action: add_tag my-api-work --copy-from-current --description="My API tasks while collaborating with Alice"
```
**Example 2: Simple → PRD-Driven**
```
User: "I want to add a complete user dashboard with analytics, user management, and reporting"
Your Response: "This sounds like a major feature that would benefit from detailed planning. Let me create a dedicated context for this work and we can draft a PRD together to ensure we capture all requirements."
Actions:
1. add_tag feature-dashboard --description="User dashboard with analytics and management"
2. Collaborate on PRD creation
3. parse_prd dashboard-prd.txt --tag=feature-dashboard
4. Add high-level "User Dashboard" task to master
```
**Example 3: Existing Project → Strategic Planning**
```
User: "I just initialized Taskmaster on my existing React app. It's getting messy and I want to improve it."
Your Response: "Let me research your codebase to understand the current architecture, then we can create a strategic plan for improvements."
Actions:
1. research "Current React app architecture and improvement opportunities" --tree --files=src/
2. Collaborate on improvement PRD based on findings
3. Create tags for different improvement areas (refactor-components, improve-state-management, etc.)
4. Keep only major improvement initiatives in master
```
---
## Primary Interaction: MCP Server vs. CLI
Taskmaster offers two primary ways to interact:
1. **MCP Server (Recommended for Integrated Tools)**:
- For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**.
- The MCP server exposes Taskmaster functionality through a set of tools (e.g., `get_tasks`, `add_subtask`).
- This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing.
- Refer to @`mcp.mdc` for details on the MCP architecture and available tools.
- A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in @`taskmaster.mdc`.
- **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change.
- **Note**: MCP tools fully support tagged task lists with complete tag management capabilities.
2. **`task-master` CLI (For Users & Fallback)**:
- The global `task-master` command provides a user-friendly interface for direct terminal interaction.
- It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP.
- Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
- The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`).
- Refer to @`taskmaster.mdc` for a detailed command reference.
- **Tagged Task Lists**: CLI fully supports the new tagged system with seamless migration.
## How the Tag System Works (For Your Reference)
- **Data Structure**: Tasks are organized into separate contexts (tags) like "master", "feature-branch", or "v2.0".
- **Silent Migration**: Existing projects automatically migrate to use a "master" tag with zero disruption.
- **Context Isolation**: Tasks in different tags are completely separate. Changes in one tag do not affect any other tag.
- **Manual Control**: The user is always in control. There is no automatic switching. You facilitate switching by using `use-tag <name>`.
- **Full CLI & MCP Support**: All tag management commands are available through both the CLI and MCP tools for you to use. Refer to @`taskmaster.mdc` for a full command list.
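For quick reference, the core tag operations described above map to commands like these (a sketch; tag names are illustrative):
```
task-master add-tag feature-user-auth --description="Tasks for the user auth feature"
task-master add-tag --from-branch                  # create a tag named after the current git branch
task-master add-tag my-work --copy-from-current    # copy the current context into a new tag
task-master use-tag feature-user-auth              # switch the active context
task-master list                                   # subsequent commands operate on the active tag
```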
---
## Task Complexity Analysis
- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see @`taskmaster.mdc`) for comprehensive analysis
- Review complexity report via `complexity_report` / `task-master complexity-report` (see @`taskmaster.mdc`) for a formatted, readable version.
- Focus on tasks with highest complexity scores (8-10) for detailed breakdown
- Use analysis results to determine appropriate subtask allocation
- Note that reports are automatically used by the `expand_task` tool/command
## Task Breakdown Process
- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found; otherwise it generates a default number of subtasks.
- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations.
- Add `--research` flag to leverage Perplexity AI for research-backed expansion.
- Add `--force` flag to clear existing subtasks before generating new ones (default is to append).
- Use `--prompt="<context>"` to provide additional context when needed.
- Review and adjust generated subtasks as necessary.
- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`.
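For example, these flags combine as follows (a sketch; the ID is illustrative):
```
task-master expand --id=8 --num=5 --research --force --prompt="Focus on the migration path"
task-master expand --all --research
task-master clear-subtasks --id=8     # wipe subtasks entirely before re-expanding
```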
## Implementation Drift Handling
- When implementation differs significantly from planned approach
- When future tasks need modification due to current implementation choices
- When new dependencies or requirements emerge
- Use `update` / `task-master update --from=<futureTaskId> --prompt='<explanation>\nUpdate context...' --research` to update multiple future tasks.
- Use `update_task` / `task-master update-task --id=<taskId> --prompt='<explanation>\nUpdate context...' --research` to update a single specific task.
## Task Status Management
- Use 'pending' for tasks ready to be worked on
- Use 'done' for completed and verified tasks
- Use 'deferred' for postponed tasks
- Add custom status values as needed for project-specific workflows
## Task Structure Fields
- **id**: Unique identifier for the task (Example: `1`, `1.1`)
- **title**: Brief, descriptive title (Example: `"Initialize Repo"`)
- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`)
- Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
- This helps quickly identify which prerequisite tasks are blocking work
- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`)
- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
- Refer to task structure details (previously linked to `tasks.mdc`).
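A minimal task entry using these fields might look like this (illustrative values drawn from the examples above):
```
{
  "id": 1,
  "title": "Initialize Repo",
  "description": "Create a new repository, set up initial structure.",
  "status": "pending",
  "dependencies": [],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Deploy and call endpoint to confirm 'Hello World' response.",
  "subtasks": [
    { "id": 1, "title": "Configure OAuth", "status": "pending", "dependencies": [] }
  ]
}
```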
## Configuration Management (Updated)
Taskmaster configuration is managed through two main mechanisms:
1. **`.taskmaster/config.json` File (Primary):**
* Located in the project root directory.
* Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc.
* **Tagged System Settings**: Includes `global.defaultTag` (defaults to "master") and `tags` section for tag management configuration.
* **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing.
* **View/Set specific models via `task-master models` command or `models` MCP tool.**
* Created automatically when you run `task-master models --setup` for the first time or during tagged system migration.
2. **Environment Variables (`.env` / `mcp.json`):**
* Used **only** for sensitive API keys and specific endpoint URLs.
* Place API keys (one per provider) in a `.env` file in the project root for CLI usage.
* For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`.
* Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`).
3. **`.taskmaster/state.json` File (Tagged System State):**
* Tracks current tag context and migration status.
* Automatically created during tagged system migration.
* Contains: `currentTag`, `lastSwitched`, `migrationNoticeShown`.
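As a rough sketch of how these pieces fit together (illustrative values only; the exact `config.json` schema is created and managed by `task-master models --setup`, so treat field names beyond those listed above as assumptions):
```
// .taskmaster/config.json (managed via `task-master models --setup`)
{
  "models": {
    "main": { "provider": "anthropic", "modelId": "claude-3-7-sonnet-20250219" },
    "research": { "provider": "perplexity", "modelId": "<research-model-id>" },
    "fallback": { "provider": "anthropic", "modelId": "claude-3-7-sonnet-20250219" }
  },
  "global": { "defaultTag": "master", "projectName": "my-project" }
}

// .env (project root, CLI usage) -- API keys only
ANTHROPIC_API_KEY=sk-...
PERPLEXITY_API_KEY=pplx-...
OPENROUTER_API_KEY=sk-or-...

// .cursor/mcp.json -- keys go in the env section of the task-master-ai server entry
{ "mcpServers": { "task-master-ai": { "env": { "ANTHROPIC_API_KEY": "sk-..." } } } }

// .taskmaster/state.json (tagged system state)
{ "currentTag": "master", "lastSwitched": "2025-06-21T00:00:00.000Z", "migrationNoticeShown": true }
```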
**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `TASKMASTER_LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool.
**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`.
**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project.
## Rules Management
Taskmaster supports multiple AI coding assistant rule sets that can be configured during project initialization or managed afterward:
- **Available Profiles**: Claude Code, Cline, Codex, Cursor, Roo Code, Trae, Windsurf (claude, cline, codex, cursor, roo, trae, windsurf)
- **During Initialization**: Use `task-master init --rules cursor,windsurf` to specify which rule sets to include
- **After Initialization**: Use `task-master rules add <profiles>` or `task-master rules remove <profiles>` to manage rule sets
- **Interactive Setup**: Use `task-master rules setup` to launch an interactive prompt for selecting rule profiles
- **Default Behavior**: If no `--rules` flag is specified during initialization, all available rule profiles are included
- **Rule Structure**: Each profile creates its own directory (e.g., `.cursor/rules`, `.roo/rules`) with appropriate configuration files
## Determining the Next Task
- Run `next_task` / `task-master next` to show the next task to work on.
- The command identifies tasks with all dependencies satisfied
- Tasks are prioritized by priority level, dependency count, and ID
- The command shows comprehensive task information including:
- Basic task details and description
- Implementation details
- Subtasks (if they exist)
- Contextual suggested actions
- Recommended before starting any new development work
- Respects your project's dependency structure
- Ensures tasks are completed in the appropriate sequence
- Provides ready-to-use commands for common task actions
## Viewing Specific Task Details
- Run `get_task` / `task-master show <id>` to view a specific task.
- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1)
- Displays comprehensive information similar to the `next` command, but for a specific task
- For parent tasks, shows all subtasks and their current status
- For subtasks, shows parent task information and relationship
- Provides contextual suggested actions appropriate for the specific task
- Useful for examining task details before implementation or checking status
## Managing Task Dependencies
- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency.
- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency.
- The system prevents circular dependencies and duplicate dependency entries
- Dependencies are checked for existence before being added or removed
- Task files are automatically regenerated after dependency changes
- Dependencies are visualized with status indicators in task listings and files
## Task Reorganization
- Use `move_task` / `task-master move --from=<id> --to=<id>` to move tasks or subtasks within the hierarchy
- This command supports several use cases:
- Moving a standalone task to become a subtask (e.g., `--from=5 --to=7`)
- Moving a subtask to become a standalone task (e.g., `--from=5.2 --to=7`)
- Moving a subtask to a different parent (e.g., `--from=5.2 --to=7.3`)
- Reordering subtasks within the same parent (e.g., `--from=5.2 --to=5.4`)
- Moving a task to a new, non-existent ID position (e.g., `--from=5 --to=25`)
- Moving multiple tasks at once using comma-separated IDs (e.g., `--from=10,11,12 --to=16,17,18`)
- The system includes validation to prevent data loss:
- Allows moving to non-existent IDs by creating placeholder tasks
- Prevents moving to existing task IDs that have content (to avoid overwriting)
- Validates source tasks exist before attempting to move them
- The system maintains proper parent-child relationships and dependency integrity
- Task files are automatically regenerated after the move operation
- This provides greater flexibility in organizing and refining your task structure as project understanding evolves
- This is especially useful when dealing with merge conflicts that arise when teams create tasks on separate branches: resolve them by moving your conflicting tasks to new IDs and keeping theirs.
## Iterative Subtask Implementation
Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation:
1. **Understand the Goal (Preparation):**
* Use `get_task` / `task-master show <subtaskId>` (see the command reference, previously linked to `taskmaster.mdc`) to thoroughly understand the specific goals and requirements of the subtask.
2. **Initial Exploration & Planning (Iteration 1):**
* This is the first attempt at creating a concrete implementation plan.
* Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification.
* Determine the intended code changes (diffs) and their locations.
* Gather *all* relevant details from this exploration phase.
3. **Log the Plan:**
* Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`.
* Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details` (see the sketch after this list).
4. **Verify the Plan:**
* Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details.
5. **Begin Implementation:**
* Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`.
* Start coding based on the logged plan.
6. **Refine and Log Progress (Iteration 2+):**
* As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches.
* **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy.
* **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<update details>\n- What worked...\n- What did not work...'` to append new findings (avoid unescaped apostrophes inside the single-quoted prompt).
* **Crucially, log:**
* What worked ("fundamental truths" discovered).
* What didn't work and why (to avoid repeating mistakes).
* Specific code snippets or configurations that were successful.
* Decisions made, especially if confirmed with user input.
* Any deviations from the initial plan and the reasoning.
* The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors.
7. **Review & Update Rules (Post-Implementation):**
* Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history.
* Identify any new or modified code patterns, conventions, or best practices established during the implementation.
* Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`).
8. **Mark Task Complete:**
* After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`.
9. **Commit Changes (If using Git):**
* Stage the relevant code changes and any updated/new rule files (`git add .`).
* Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments.
* Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>' -m 'Details about changes... Updated rule Y for pattern Z'`; passing `-m` twice keeps the subject and body separate).
* Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one.
10. **Proceed to Next Subtask:**
* Identify the next subtask (e.g., using `next_task` / `task-master next`).
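To make steps 3 and 6 above concrete, here is a hypothetical subtask as it might appear after a plan and one progress update have been logged with `update-subtask`. The timestamp format, file paths, and exact layout of the appended text are invented for illustration; the point is that `details` accumulates a dated implementation journal rather than being overwritten:

```json
{
  "id": 2,
  "title": "Configure OAuth callback",
  "status": "in-progress",
  "details": "[2024-06-01] Plan: modify src/auth/routes.js (callback handler), set a session token on success; expected diff touches ~30 lines.\n[2024-06-02] Update: session-token approach worked; storing the token in a plain cookie did not (CSRF issues), switched to a signed cookie."
}
```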
## Code Analysis & Refactoring Techniques
- **Top-Level Function Search**:
- Useful for understanding module structure or planning refactors.
- Use grep/ripgrep to find exported functions/constants:
`rg "export (async function|function|const) \w+"` or similar patterns.
- Can help compare functions between files during migrations or identify potential naming conflicts.
---
*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.*