Release 0.18.0 (#840)
* Update SWE scores (#657)
* docs: Auto-update and format models.md
* feat: Flexible brand rules management (#460)
* chore(docs): update docs and rules related to model management.
* feat(ai): Add OpenRouter AI provider support
  Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform.
  - Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject).
  - Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic.
  - Verified config-manager.js handles OpenRouter API key checks correctly.
  - Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY.
  - Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint.
* feat(cli): Add --status/-s filter flag to show command and get-task MCP tool
  Implements the ability to filter subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag. This is also available in the MCP context.
  - Modified `commands.js` to add the `--status` option to the `show` command definition.
  - Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering.
  - Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation.
  - Updated MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter.
  - Added changeset entry.
* fix(tasks): Improve next task logic to be subtask-aware
* fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs
  - Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs.
  - Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs.
  - Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly.
  - Fixes a bug in the CLI confirmation prompt where task/subtask titles were not being displayed correctly.
  - Updates the `remove_task` MCP tool description to reflect the new multi-ID capability.
  This addresses the previously known issue where only the first ID in a comma-separated list was processed. Closes #140
* Update README.md (#342)
* Update Discord badge (#337)
* refactor(init): Improve robustness and dependencies; Update template deps for AI SDKs; Silence npm install in MCP; Improve conditional model setup logic; Refactor init.js flags; Tweak Getting Started text; Fix MCP server launch command; Update default model in config template
* Refactor: Improve MCP logging, update E2E & tests
  Refactors MCP server logging and updates testing infrastructure.
  - MCP Server:
    - Replaced manual logger wrappers with centralized `createLogWrapper` utility.
    - Updated direct function calls to use `{ session, mcpLog }` context.
    - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools.
    - Adjusted MCP tool import paths and parameter descriptions.
  - Documentation:
    - Modified `docs/configuration.md`.
    - Modified `docs/tutorial.md`.
  - Testing:
    - E2E Script (`run_e2e.sh`):
      - Removed `set -e`.
      - Added LLM analysis function (`analyze_log_with_llm`) & integration.
      - Adjusted test run directory creation timing.
      - Added debug echo statements.
    - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`.
    - Modified Fixtures: Updated `scripts/task-complexity-report.json`.
  - Dev Scripts:
    - Modified `scripts/dev.js`.
* chore(tests): Passes tests for merge candidate
  - Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup'.
  - E2E script has been perfected and works as designed, provided all provider API keys are present in the .env at the root.
  - Fixes the entire test suite to make sure it passes with the new architecture.
  - Fixes dependency command to properly show a validation failure if there is one.
  - Refactored config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json.
  - Fixed rule-transformer.test.js assertion syntax and transformation logic, adjusting a replacement whose search pattern was too broad.
  - Skip unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to a SIGABRT crash. These tests trigger a native crash (SIGABRT), likely stemming from a conflict between internal chalk usage within the functions and Jest's test environment, possibly related to ESM module handling.
* chore(wtf): removes chai. not sure how that even made it in here. also removes duplicate test in scripts/.
* fix: ensure API key detection properly reads .env in MCP context
  Problem:
  - Task Master model configuration wasn't properly checking for API keys in the project's .env file when running through MCP.
  - The isApiKeySet function was only checking session.env and process.env but not inspecting the .env file directly.
  - This caused incorrect API key status reporting in MCP tools even when keys were properly set in .env.
  Solution:
  - Modified resolveEnvVariable function in utils.js to properly read from the .env file at projectRoot.
  - Updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable.
  - Enhanced the key detection logic to have consistent behavior between CLI and MCP contexts.
  - Maintains the correct precedence: session.env → .env file → process.env (an illustrative sketch of this precedence follows the change list below).
  Testing:
  - Verified working correctly with both MCP and CLI tools.
  - API keys properly detected in the .env file in both contexts.
  - Deleted .cursor/mcp.json to confirm introspection of .env as fallback works.
* fix(update): pass projectRoot through update command flow
  Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context.
* fix(analyze-complexity): pass projectRoot through analyze-complexity flow
  Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity.
* fix(add-task): pass projectRoot and fix logging/refs
  Modified add-task core, direct function, and tool to pass projectRoot for .env API key fallback. Fixed logFn reference error and removed deprecated reportProgress call in core addTask function. Verified working.
* fix(parse-prd): pass projectRoot and fix schema/logging
  Modified parse-prd core, direct function, and tool to pass projectRoot for .env API key fallback. Corrected Zod schema used in generateObjectService call. Fixed logFn reference error in core parsePRD. Updated unit test mock for utils.js.
* fix(update-task): pass projectRoot and adjust parsing
  Modified update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in core function to prioritize `{...}` extraction, resolving parsing errors. Fixed ReferenceError by correctly destructuring projectRoot.
* fix(update-subtask): pass projectRoot and allow updating done subtasks
  Modified update-subtask-by-id core, direct function, and tool to pass projectRoot for .env API key fallback. Removed check preventing appending details to completed subtasks.
* fix(mcp, expand): pass projectRoot through expand/expand-all flows
  Problem: expand_task & expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution. Also fixed a ReferenceError: wasSilent is not defined in expandTaskDirect.
  Solution: Modified core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed wasSilent scope in expandTaskDirect.
  Verification: Tested expand_task successfully in MCP using .env keys. Reviewed expand_all flow for correct projectRoot propagation.
* chore: prettier
* fix(expand-all): add projectRoot to expandAllTasksDirect invocation.
* fix(update-tasks): Improve AI response parsing for 'update' command
  Refactors the JSON array parsing logic for the update command. The previous logic primarily relied on extracting content from markdown code blocks (json or javascript), which proved brittle when the AI response included comments or non-JSON text within the block, leading to parsing errors for the command. This change modifies the parsing strategy to first attempt extracting content directly between the outermost '[' and ']' brackets. This is more robust as it targets the expected array structure directly. If bracket extraction fails, it falls back to looking for a strict json code block, then prefix stripping, before attempting a raw parse. This approach aligns with the successful parsing strategy used for single-object responses and resolves the parsing errors previously observed with the command.
* refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization
  Added the HOF to the MCP tools utils to normalize projectRoot from args/session. Refactored get-task tool to use the HOF. Updated relevant documentation. (A sketch of this pattern follows the change list below.)
* refactor(mcp): apply withNormalizedProjectRoot HOF to update tool
  Problem: The MCP tool previously handled project root acquisition and path resolution within its method, leading to potential inconsistencies and repetition.
  Solution: Refactored the tool to utilize the new Higher-Order Function (HOF).
  Specific Changes:
  - Imported the HOF.
  - Updated the Zod schema for the projectRoot parameter to be optional, as the HOF handles deriving it from the session if not provided.
  - Wrapped the entire function body with the HOF.
  - Removed the manual call from within the function body.
  - Destructured the projectRoot from the object received by the wrapped function, ensuring it's the normalized path provided by the HOF.
  - Used the normalized variable when resolving paths and when passing arguments onward.
  This change standardizes project root handling for the tool, simplifies its method, and ensures consistent path normalization. This serves as the pattern for refactoring other MCP tools.
* fix: apply withNormalizedProjectRoot to all tools to fix projectRoot issues on Linux and Windows
* fix: add rest of tools that need wrapper
* chore: cleanup tools to stop using rootFolder and remove unused imports
* chore: more cleanup
* refactor: Improve update-subtask, consolidate utils, update config
  This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration.
  **Major Changes:**
  1. **Refactor updateSubtaskById:**
     - Switched from generateTextService to generateObjectService for structured AI responses, using a Zod schema (subtaskSchema) for validation.
     - Revised prompts to have the AI generate relevant content based on user request and context (parent/sibling tasks), while explicitly preventing the AI from handling timestamp/tag formatting.
     - Implemented **local timestamp generation (new Date().toISOString()) and formatting** (using <info added on ...> tags) within the function *after* receiving the AI response. This ensures reliable and correctly formatted details are appended.
     - Corrected logic to append only the locally formatted, AI-generated content block to the existing subtask.details.
  2. **Consolidate MCP Utilities:**
     - Moved/consolidated the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js.
     - Updated MCP tools (like update-subtask.js) to import withNormalizedProjectRoot from the new location.
  3. **Refactor Project Initialization:**
     - Deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js file.
     - Updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js).
  **Other Changes:**
  - Updated .taskmasterconfig fallback model to claude-3-7-sonnet-20250219.
  - Clarified model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js).
* fix: displayBanner logging when silentMode is active (#385)
* fix: improve error handling, test options, and model configuration
  - Enhance error validation in parse-prd.js and update-tasks.js
  - Fix bug where mcpLog was incorrectly passed as logWrapper
  - Improve error messages and response formatting
  - Add --skip-verification flag to E2E tests
  - Update MCP server config that ships with init to match new API key structure
  - Fix task force/append handling in parse-prd command
  - Increase column width in update-tasks display
* chore: fixes parse prd to show loading indicator in cli.
* fix(parse-prd): suggested fix for mcpLog was incorrect. reverting to my previously working code.
* chore(init): No longer ships readme with task-master init (commented out for now). No longer looking for task-master-mcp; instead checks for task-master-ai. This should prevent the init sequence from needlessly adding another MCP server entry for task-master-mcp to the mcp.json, which a ton of people probably ran into.
* chore: restores 3.7 sonnet as the main role.
* fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files.
* chore: removes tasks json backup that was temporarily created.
* fix(next): adjusts mcp tool response to correctly return the next task/subtask. Also adds nextSteps to the next task response.
* chore: prettier
* chore: readme typos
* fix(config): restores sonnet 3.7 as default main role.
* Version Packages
* hotfix: move production package to "dependencies" (#399)
* Version Packages
* Fix: issues with 0.13.0 not working (#402)
* Exit prerelease mode and version packages
* hotfix: move production package to "dependencies"
* Enter prerelease mode and version packages
* Enter prerelease mode and version packages
* chore: cleanup
* chore: improve pre.json and add pre-release workflow
* chore: fix package.json
* chore: cleanup
* chore: improve pre-release workflow
* chore: allow github actions to commit
* extract fileMap and conversionConfig into brand profile
* extract into brand profile
* add windsurf profile
* add remove brand rules function
* fix regex
* add rules command to add/remove rules for a specific brand
* fix post processing for roo
* allow multiples
* add cursor profile
* update test for new structure
* move rules to assets
* use assets/rules for rules files
* use standardized setupMCP function
* fix formatting
* fix formatting
* add logging
* fix escapes
* default to cursor
* allow init with certain rulesets; no more .windsurfrules
* update docs
* update log msg
* fix formatting
* keep mdc extension for cursor
* don't rewrite .mdc to .md inside the files
* fix roo init (add modes)
* fix cursor init (don't use roo transformation by default)
* use more generic function names
* update docs
* fix formatting
* update function names
* add changeset
* add rules to mcp initialize project
* register tool with mcp server
* update docs
* add integration test
* fix cursor initialization
* rule selection
* fix formatting
* fix MCP - remove yes flag
* add import
* update roo tests
* add/update tests
* remove test
* add rules command test
* update MCP responses, centralize rules profiles & helpers
* fix logging and MCP response messages
* fix formatting
* incorrect test
* fix tests
* update fileMap
* fix file extension transformations
* fix formatting
* add rules command test
* test already covered
* fix formatting
* move renaming logic into profiles
* make sure dir is deleted (DS_Store)
* add confirmation for rules removal
* add force flag for rules remove
* use force flag for test
* remove yes parameter
* fix formatting
* import brand profiles from rule-transformer.js
* update comment
* add interactive rules setup
* optimize
* only copy rules specifically listed in fileMap
* update comment
* add cline profile
* add brandDir to remove ambiguity and support Cline
* specify whether to create mcp config and filename
* add mcpConfigName value for path
* fix formatting
* remove rules just for this repository - only include rules to be distributed
* update error message
* update "brand rules" to "rules"
* update to minor
* remove comment
* remove comments
* move to /src/utils
* optimize imports
* move rules-setup.js to /src/utils
* move rule-transformer.js to /src/utils
* move confirmation to /src/ui/confirm.js
* default to all rules
* use profile js for mcp config settings
* only run rules interactive setup if not provided via command line
* update comments
* initialize with all brands if nothing specified
* update var name
* clean up
* enumerate brands for brand rules
* update instructions
* add test to check for brand profiles
* fix quotes
* update semantics and terminology from 'brand rules' to 'rules profiles'
* fix formatting
* fix formatting
* update function name and remove copying of cursor rules, now handled by rules transformer
* update comment
* rename to mcp-config-setup.js
* use enums for rules actions
* add aggregate reporting for rules add command
* add missing log message
* use simpler path
* use base profile with modifications for each brand
* use displayName and don't select any defaults in setup
* add confirmation if removing ALL rules profiles, and add --force flag on rules remove
* Use profile-detection instead of rules-detection
* add newline at end of mcp config
* add proper formatting for mcp.json
* update rules
* update rules
* update rules
* add checks for other rules and other profile folder items before removing
* update confirmation for rules remove
* update docs
* update changeset
* fix for filepath at bottom of rule
* Update cline profile and add test; adjust other rules tests
* update changeset
* update changeset
* clarify init for all profiles if not specified
* update rule text
* revert text
* use "rule profiles" instead of "rules profiles"
* use standard tool mappings for windsurf
* add Trae support
* update changeset
* update wording
* update to 'rule profile'
* remove unneeded exports to optimize loc
* combine to /src/utils/profiles.js; add codex and claude code profiles
* rename function and add boxen
* add claude and codex integration tests
* organize tests into profiles folder
* mock fs for transformer tests
* update UI
* add cline and trae integration tests
* update test
* update function name
* update formatting
* Update change set with new profiles
* move profile integration tests to subdirectory
* properly create temp directories in /tmp folder
* fix formatting
* use taskmaster subfolder for the 2 TM rules
* update wording
* ensure subdirectory exists
* update rules from next
* update from next
* update taskmaster rule
* add details on new rules command and init
* fix mcp init
* fix MCP path to assets
* remove duplication
* remove duplication
* MCP server path fixes for rules command
* fix for CLI roo rules add/remove
* update tests
* fix formatting
* fix pattern for interactive rule profiles setup
* restore comments
* restore comments
* restore comments
* remove unused import, fix quotes
* add missing integration tests
* add VS Code profile and tests
* update docs and rules to include vscode profile
* add rules subdirectory support per-profile
* move profiles to /src
* fix formatting
* rename to remove ambiguity
* use --setup for rules interactive setup
* Fix Cursor deeplink installation with copy-paste instructions (#723)
* change roo boomerang to orchestrator; update tests that don't use modes
* fix newline
* chore: cleanup
---------
Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* fix: providers config for azure, bedrock, and vertex (#822)
* fix: providers config for azure, bedrock, and vertex
* chore: improve changelog
* chore: fix CI
* fix: switch to ESM export to avoid mixed format (#633)
* fix: switch to ESM export to avoid mixed format
  The CLI entrypoint was using `module.exports` alongside ESM `import` statements, resulting in an invalid mixed module format. Replaced the CommonJS export with a proper ESM `export` to maintain consistency and prevent module resolution issues. (A minimal before/after sketch follows the change list below.)
* chore: add changeset
---------
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
* fix: Fix external provider support (#726)
* fix(bedrock): improve AWS credential handling and add model definitions (#826)
* fix(bedrock): improve AWS credential handling and add model definitions
  - Change error to warning when AWS credentials are missing in environment
  - Allow fallback to system configuration (aws config files or instance profiles)
  - Remove hardcoded region and profile parameters in Bedrock client
  - Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock
  - Update config manager to properly handle Bedrock provider
* chore: cleanup and format and small refactor
---------
Co-authored-by: Ray Krueger <raykrueger@gmail.com>
* docs: Auto-update and format models.md
* Version Packages
* chore: fix package.json
* Fix/expand command tag corruption (#827)
* fix(expand): Fix tag corruption in expand command
  - Fix tag parameter passing through MCP expand-task flow
  - Add tag parameter to direct function and tool registration
  - Fix contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs
  - Add comprehensive test coverage for tag handling in expand-task
  - Ensures tagged task structure is preserved during expansion
  - Prevents corruption when tag is undefined
  Fixes expand command causing tag corruption in tagged task lists. All existing tests pass and new test coverage added.
* test(e2e): Add comprehensive tag-aware expand testing to verify tag corruption fix
  - Add new test section for feature-expand tag creation and testing
  - Verify tag preservation during expand, force expand, and expand --all operations
  - Test that master tag remains intact and feature-expand tag receives subtasks correctly
  - Fix file path references to use correct .taskmaster/tasks/tasks.json location
  - Fix config file check to use .taskmaster/config.json instead of .taskmasterconfig
  - All tag corruption verification tests pass successfully in E2E test
* fix(changeset): Update E2E test improvements changeset to properly reflect tag corruption fix verification
* chore(changeset): combine duplicate changesets for expand tag corruption fix
  Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate the expand command fix and its comprehensive E2E testing enhancements into a single changeset entry.
* Delete .changeset/eighty-breads-wonder.md
* Version Packages
* chore: fix package.json
* fix(expand): Enhance context handling in expandAllTasks function
  - Added `tag` to context destructuring for better context management.
  - Updated `readJSON` call to include `contextTag` for improved data integrity.
  - Ensured the correct tag is passed during task expansion to prevent tag corruption.
---------
Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Add pyproject.toml as project root marker (#804)
* feat: Add pyproject.toml as project root marker
  - Added 'pyproject.toml' to the project markers array in findProjectRoot()
  - Enables Task Master to recognize Python projects using pyproject.toml
  - Improves project root detection for modern Python development workflows
  - Maintains compatibility with existing Node.js and Git-based detection
  (An illustrative sketch of this marker-based lookup follows the change list below.)
* chore: add changeset
---------
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
* feat: add Claude Code provider support
  Implements Claude Code as a new AI provider that uses the Claude Code CLI without requiring API keys. This enables users to leverage Claude models through their local Claude Code installation.
  Key changes:
  - Add complete AI SDK v1 implementation for Claude Code provider
    - Custom SDK with streaming/non-streaming support
    - Session management for conversation continuity
    - JSON extraction for object generation mode
    - Support for advanced settings (maxTurns, allowedTools, etc.)
  - Integrate Claude Code into Task Master's provider system
    - Update ai-services-unified.js to handle keyless authentication
    - Add provider to supported-models.json with opus/sonnet models
    - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000)
  - Fix maxTokens configuration issue
    - Add max_tokens property to getAvailableModels() output
    - Update setModel() to properly handle claude-code models
    - Create update-config-tokens.js utility for init process
  - Add comprehensive documentation
    - User guide with configuration examples
    - Advanced settings explanation and future integration options
  The implementation maintains full backward compatibility with existing providers while adding seamless Claude Code support to all Task Master commands.
* fix(docs): correct invalid commands in claude-code usage examples
  - Remove non-existent 'do', 'estimate', and 'analyze' commands
  - Replace with actual Task Master commands: next, show, set-status
  - Use correct syntax for parse-prd and analyze-complexity
* feat: make @anthropic-ai/claude-code an optional dependency
  This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality.
  Changes:
  - Added @anthropic-ai/claude-code to optionalDependencies in package.json
  - Implemented lazy loading in language-model.js to only import the SDK when actually used
  - Updated documentation to explain the optional installation requirement
  - Applied formatting fixes to ensure code consistency
  Benefits:
  - Users without Claude Code subscriptions don't need to install the dependency
  - Reduces package size for users who don't use Claude Code
  - Prevents installation failures if the package is unavailable
  - Provides clear error messages when the package is needed but not installed
  The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present.
* test: add comprehensive tests for ClaudeCodeProvider
  Addresses code review feedback about missing automated tests for the ClaudeCodeProvider.
  ## Changes
  - Added unit tests for ClaudeCodeProvider class covering constructor, validateAuth, and getClient methods
  - Added unit tests for ClaudeCodeLanguageModel testing lazy loading behavior and error handling
  - Added integration tests verifying optional dependency behavior when @anthropic-ai/claude-code is not installed
  ## Test Coverage
  1. **Unit Tests**:
     - ClaudeCodeProvider: Basic functionality, no API key requirement, client creation
     - ClaudeCodeLanguageModel: Model initialization, lazy loading, error messages, warning generation
  2. **Integration Tests**:
     - Optional dependency behavior when package is not installed
     - Clear error messages for users about missing package
     - Provider instantiation works but usage fails gracefully
  All tests pass and provide comprehensive coverage for the claude-code provider implementation.
* revert: remove maxTokens update functionality from init
  This functionality was out of scope for the Claude Code provider PR. The automatic updating of maxTokens values in config.json during initialization is a general improvement that should be in a separate PR. Additionally, Claude Code ignores maxTokens and temperature parameters anyway, making this change irrelevant for the Claude Code integration.
  Removed:
  - scripts/modules/update-config-tokens.js
  - Import and usage in scripts/init.js
* docs: add Claude Code support information to README
  - Added Claude Code to the list of supported providers in Requirements section
  - Noted that Claude Code requires no API key but needs the Claude Code CLI
  - Added example of configuring claude-code/sonnet model
  - Created dedicated Claude Code Support section with key information
  - Added link to detailed Claude Code setup documentation
  This ensures users are aware of the Claude Code option as a no-API-key alternative for using Claude models.
* style: apply biome formatting to test files
* fix(models): add missing --claude-code flag to models command
  The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via CLI. While the backend already supported claude-code as a provider hint, there was no command-line flag to trigger it.
  Changes:
  - Added --claude-code option to models command alongside existing provider flags
  - Updated provider flags validation to include claudeCode option
  - Added claude-code to providerHint logic for all three model roles (main, research, fallback)
  - Updated error message to include --claude-code in list of mutually exclusive flags
  - Added example usage in help text
  This allows users to properly set Claude Code models using commands like:
    task-master models --set-main sonnet --claude-code
    task-master models --set-main opus --claude-code
  Without this flag, users would get "Model ID not found" errors when trying to set claude-code models, as the system couldn't determine the correct provider for generic model names like "sonnet" or "opus". (An illustrative provider-hint sketch follows the change list below.)
* chore: add changeset for Claude Code provider feature
* docs: Auto-update and format models.md
* readme: add troubleshooting note for MCP tools not working
* Feature/compatibleapisupport (#830)
* add compatible platform api support
* Adjust the code according to the suggestions
* Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English.
* feat: Add support for compatible API endpoints via baseURL (an illustrative baseURL sketch follows the change list below)
* chore: Add changeset for compatible API support
* chore: cleanup
* chore: improve changeset
* fix: package-lock.json
* fix: package-lock.json
---------
Co-authored-by: He-Xun <1226807142@qq.com>
* Rename Roo Code "Boomerang" role to "Orchestrator" (#831)
* feat: Enhanced project initialization with Git worktree detection (#743)
* Fix Cursor deeplink installation with copy-paste instructions (#723)
* detect git worktree
* add changeset
* add aliases and git flags
* add changeset
* rename and update test
* add store tasks in git functionality
* update changeset
* fix newline
* remove unused import
* update command wording
* update command option text
* fix: update task by id (#834)
* store tasks in git by default (#835)
* Call rules interactive setup during init (#833)
* chore: rc version bump
* feat: Claude Code slash commands for Task Master (#774)
* Fix Cursor deeplink installation with copy-paste instructions (#723)
* fix: expand-task (#755)
* docs: Update o3 model price (#751)
* docs: Auto-update and format models.md
* docs: Auto-update and format models.md
* feat: Add Claude Code task master commands
  Adds Task Master slash commands for Claude Code under the /project:tm/ namespace.
---------
Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
* feat: make more compatible with "o" family models (#839)
* docs: Auto-update and format models.md
* docs: Add comprehensive Azure OpenAI configuration documentation (#837)
* docs: Add comprehensive Azure OpenAI configuration documentation
  - Add detailed Azure OpenAI configuration section with prerequisites, authentication, and setup options
  - Include both global and per-model baseURL configuration examples
  - Add comprehensive troubleshooting guide for common Azure OpenAI issues
  - Update environment variables section with Azure OpenAI examples
  - Add Azure OpenAI models to all model tables (Main, Research, Fallback)
  - Include prominent Azure configuration example in main documentation
  - Fix azureBaseURL format to use correct Azure OpenAI endpoint structure
  Addresses common Azure OpenAI setup challenges and provides clear guidance for new users.
* refactor: Move Azure models from docs/models.md to scripts/modules/supported-models.json
  - Remove Azure model entries from documentation tables
  - Add Azure provider section to supported-models.json with gpt-4o, gpt-4o-mini, and gpt-4-1
  - Maintain consistency with existing model configuration structure
* docs: Auto-update and format models.md
* Version Packages
* chore: format fix
---------
Co-authored-by: Riccardo (Ricky) Esclapon <32306488+ries9112@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Joe Danziger <joe@ticc.net>
Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Nathan Marley <nathan@glowberrylabs.com>
Co-authored-by: Ray Krueger <raykrueger@gmail.com>
Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: ejones40 <ethan.jones@fortyau.com>
Co-authored-by: Ben Vargas <ben@vargas.com>
Co-authored-by: V4G4X <34249137+V4G4X@users.noreply.github.com>
Co-authored-by: He-Xun <1226807142@qq.com>
Co-authored-by: neno <github@meaning.systems>
Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com>
Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com>
Co-authored-by: Jitesh Thakur <56656484+Jitha-afk@users.noreply.github.com>
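The API-key detection fix above relies on the precedence session.env → <projectRoot>/.env → process.env. The sketch below is illustrative only; the helper name and the naive .env parsing are assumptions for clarity, not Task Master's actual resolveEnvVariable implementation.

```js
import fs from 'node:fs';
import path from 'node:path';

// Illustrative sketch: resolve an API key with the precedence session.env -> .env file -> process.env.
function resolveKey(keyName, session, projectRoot) {
  if (session?.env?.[keyName]) return session.env[keyName];

  // Fall back to the project's .env file (simplified parsing, assumption for illustration).
  const envPath = path.join(projectRoot ?? process.cwd(), '.env');
  if (fs.existsSync(envPath)) {
    for (const line of fs.readFileSync(envPath, 'utf8').split('\n')) {
      const match = line.match(/^\s*([\w.]+)\s*=\s*(.*?)\s*$/);
      if (match && match[1] === keyName) return match[2].replace(/^['"]|['"]$/g, '');
    }
  }

  // Finally, fall back to the process environment.
  return process.env[keyName];
}
```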
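The withNormalizedProjectRoot HOF mentioned in the MCP refactors above wraps a tool's execute function so every tool receives a normalized, absolute projectRoot regardless of how it was supplied. A minimal sketch of the pattern, assuming a hypothetical session field name:

```js
import path from 'node:path';

// Minimal sketch of the HOF pattern: derive projectRoot from args or the session,
// normalize it once, and hand the wrapped execute function a consistent absolute path
// (so behavior matches on Linux and Windows).
export function withNormalizedProjectRoot(execute) {
  return async (args, context) => {
    const rawRoot =
      args.projectRoot ?? context?.session?.projectRoot ?? process.cwd(); // session field is an assumption
    const projectRoot = path.resolve(rawRoot);
    return execute({ ...args, projectRoot }, context);
  };
}
```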
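The mixed-module fix above replaced a CommonJS export in an ESM entrypoint. A minimal before/after sketch (the module path and exported name are hypothetical):

```js
// Before: invalid mix of ESM import and CommonJS export in the same file.
// import { runCLI } from './cli.js';
// module.exports = { runCLI };

// After: consistent ESM export.
import { runCLI } from './cli.js'; // './cli.js' and runCLI are hypothetical names
export { runCLI };
```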
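The pyproject.toml change above extends marker-based project-root detection. A minimal sketch of that kind of lookup; the marker list and function shape are illustrative, not the exact findProjectRoot implementation:

```js
import fs from 'node:fs';
import path from 'node:path';

// Walk up from startDir until a directory contains one of the marker files.
const PROJECT_MARKERS = ['package.json', 'pyproject.toml', '.git'];

function findProjectRoot(startDir = process.cwd()) {
  let dir = path.resolve(startDir);
  while (true) {
    if (PROJECT_MARKERS.some((m) => fs.existsSync(path.join(dir, m)))) return dir;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the filesystem root without finding a marker
    dir = parent;
  }
}
```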
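The --claude-code flag exists because generic model IDs like "sonnet" or "opus" are ambiguous without a provider hint. An illustrative sketch of that resolution idea; the data and function here are made up for illustration and are not the models command's actual code:

```js
// Illustrative only: resolve a generic model id using an optional provider hint.
const SUPPORTED = {
  'claude-code': ['opus', 'sonnet'],
  anthropic: ['claude-3-7-sonnet-20250219']
};

function resolveProvider(modelId, providerHint) {
  const candidates = providerHint ? [providerHint] : Object.keys(SUPPORTED);
  return candidates.find((p) => SUPPORTED[p]?.includes(modelId)) ?? null;
}

console.log(resolveProvider('sonnet', 'claude-code')); // 'claude-code'
console.log(resolveProvider('sonnet')); // falls back to scanning all providers
```

Without the hint, a bare "sonnet" cannot be attributed to the claude-code provider unambiguously, which is why users previously hit "Model ID not found" errors.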
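Compatible-API support and the Azure documentation work above both hinge on pointing a provider at a custom baseURL. A sketch using the Vercel AI SDK's OpenAI provider; the endpoint URL is a placeholder and Task Master's own config keys are not shown here:

```js
import { createOpenAI } from '@ai-sdk/openai';

// Point an OpenAI-compatible provider at a custom endpoint (Azure-hosted, gateway, or self-hosted).
const compatible = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://my-gateway.example.com/v1' // placeholder endpoint
});

const model = compatible('gpt-4o'); // returns a language model bound to that endpoint
```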
src/ai-providers/bedrock.js
@@ -21,18 +21,10 @@ export class BedrockAIProvider extends BaseAIProvider {
 	 */
 	getClient(params) {
 		try {
-			const {
-				profile = process.env.AWS_PROFILE || 'default',
-				region = process.env.AWS_DEFAULT_REGION || 'us-east-1',
-				baseURL
-			} = params;
-
-			const credentialProvider = fromNodeProviderChain({ profile });
+			const credentialProvider = fromNodeProviderChain();
 
 			return createAmazonBedrock({
-				region,
-				credentialProvider,
-				...(baseURL && { baseURL })
+				credentialProvider
 			});
 		} catch (error) {
 			this.handleError('client initialization', error);
src/ai-providers/claude-code.js (new file, 47 lines)
@@ -0,0 +1,47 @@
/**
 * src/ai-providers/claude-code.js
 *
 * Implementation for interacting with Claude models via Claude Code CLI
 * using a custom AI SDK implementation.
 */

import { createClaudeCode } from './custom-sdk/claude-code/index.js';
import { BaseAIProvider } from './base-provider.js';

export class ClaudeCodeProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Claude Code';
	}

	/**
	 * Override validateAuth to skip API key validation for Claude Code
	 * @param {object} params - Parameters to validate
	 */
	validateAuth(params) {
		// Claude Code doesn't require an API key
		// No validation needed
	}

	/**
	 * Creates and returns a Claude Code client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} [params.baseURL] - Optional custom API endpoint (not used by Claude Code)
	 * @returns {Function} Claude Code client function
	 * @throws {Error} If initialization fails
	 */
	getClient(params) {
		try {
			// Claude Code doesn't use API keys or base URLs
			// Just return the provider factory
			return createClaudeCode({
				defaultSettings: {
					// Add any default settings if needed
					// These can be overridden per request
				}
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}
}
src/ai-providers/custom-sdk/claude-code/errors.js (new file, 126 lines)
@@ -0,0 +1,126 @@
/**
 * @fileoverview Error handling utilities for Claude Code provider
 */

import { APICallError, LoadAPIKeyError } from '@ai-sdk/provider';

/**
 * @typedef {import('./types.js').ClaudeCodeErrorMetadata} ClaudeCodeErrorMetadata
 */

/**
 * Create an API call error with Claude Code specific metadata
 * @param {Object} params - Error parameters
 * @param {string} params.message - Error message
 * @param {string} [params.code] - Error code
 * @param {number} [params.exitCode] - Process exit code
 * @param {string} [params.stderr] - Standard error output
 * @param {string} [params.promptExcerpt] - Excerpt of the prompt
 * @param {boolean} [params.isRetryable=false] - Whether the error is retryable
 * @returns {APICallError}
 */
export function createAPICallError({
	message,
	code,
	exitCode,
	stderr,
	promptExcerpt,
	isRetryable = false
}) {
	/** @type {ClaudeCodeErrorMetadata} */
	const metadata = {
		code,
		exitCode,
		stderr,
		promptExcerpt
	};

	return new APICallError({
		message,
		isRetryable,
		url: 'claude-code-cli://command',
		requestBodyValues: promptExcerpt ? { prompt: promptExcerpt } : undefined,
		data: metadata
	});
}

/**
 * Create an authentication error
 * @param {Object} params - Error parameters
 * @param {string} params.message - Error message
 * @returns {LoadAPIKeyError}
 */
export function createAuthenticationError({ message }) {
	return new LoadAPIKeyError({
		message:
			message ||
			'Authentication failed. Please ensure Claude Code CLI is properly authenticated.'
	});
}

/**
 * Create a timeout error
 * @param {Object} params - Error parameters
 * @param {string} params.message - Error message
 * @param {string} [params.promptExcerpt] - Excerpt of the prompt
 * @param {number} params.timeoutMs - Timeout in milliseconds
 * @returns {APICallError}
 */
export function createTimeoutError({ message, promptExcerpt, timeoutMs }) {
	// Store timeoutMs in metadata for potential use by error handlers
	/** @type {ClaudeCodeErrorMetadata & { timeoutMs: number }} */
	const metadata = {
		code: 'TIMEOUT',
		promptExcerpt,
		timeoutMs
	};

	return new APICallError({
		message,
		isRetryable: true,
		url: 'claude-code-cli://command',
		requestBodyValues: promptExcerpt ? { prompt: promptExcerpt } : undefined,
		data: metadata
	});
}

/**
 * Check if an error is an authentication error
 * @param {unknown} error - Error to check
 * @returns {boolean}
 */
export function isAuthenticationError(error) {
	if (error instanceof LoadAPIKeyError) return true;
	if (
		error instanceof APICallError &&
		/** @type {ClaudeCodeErrorMetadata} */ (error.data)?.exitCode === 401
	)
		return true;
	return false;
}

/**
 * Check if an error is a timeout error
 * @param {unknown} error - Error to check
 * @returns {boolean}
 */
export function isTimeoutError(error) {
	if (
		error instanceof APICallError &&
		/** @type {ClaudeCodeErrorMetadata} */ (error.data)?.code === 'TIMEOUT'
	)
		return true;
	return false;
}

/**
 * Get error metadata from an error
 * @param {unknown} error - Error to extract metadata from
 * @returns {ClaudeCodeErrorMetadata|undefined}
 */
export function getErrorMetadata(error) {
	if (error instanceof APICallError && error.data) {
		return /** @type {ClaudeCodeErrorMetadata} */ (error.data);
	}
	return undefined;
}
src/ai-providers/custom-sdk/claude-code/index.js (new file, 83 lines)
@@ -0,0 +1,83 @@
/**
 * @fileoverview Claude Code provider factory and exports
 */

import { NoSuchModelError } from '@ai-sdk/provider';
import { ClaudeCodeLanguageModel } from './language-model.js';

/**
 * @typedef {import('./types.js').ClaudeCodeSettings} ClaudeCodeSettings
 * @typedef {import('./types.js').ClaudeCodeModelId} ClaudeCodeModelId
 * @typedef {import('./types.js').ClaudeCodeProvider} ClaudeCodeProvider
 * @typedef {import('./types.js').ClaudeCodeProviderSettings} ClaudeCodeProviderSettings
 */

/**
 * Create a Claude Code provider using the official SDK
 * @param {ClaudeCodeProviderSettings} [options={}] - Provider configuration options
 * @returns {ClaudeCodeProvider} Claude Code provider instance
 */
export function createClaudeCode(options = {}) {
	/**
	 * Create a language model instance
	 * @param {ClaudeCodeModelId} modelId - Model ID
	 * @param {ClaudeCodeSettings} [settings={}] - Model settings
	 * @returns {ClaudeCodeLanguageModel}
	 */
	const createModel = (modelId, settings = {}) => {
		return new ClaudeCodeLanguageModel({
			id: modelId,
			settings: {
				...options.defaultSettings,
				...settings
			}
		});
	};

	/**
	 * Provider function
	 * @param {ClaudeCodeModelId} modelId - Model ID
	 * @param {ClaudeCodeSettings} [settings] - Model settings
	 * @returns {ClaudeCodeLanguageModel}
	 */
	const provider = function (modelId, settings) {
		if (new.target) {
			throw new Error(
				'The Claude Code model function cannot be called with the new keyword.'
			);
		}

		return createModel(modelId, settings);
	};

	provider.languageModel = createModel;
	provider.chat = createModel; // Alias for languageModel

	// Add textEmbeddingModel method that throws NoSuchModelError
	provider.textEmbeddingModel = (modelId) => {
		throw new NoSuchModelError({
			modelId,
			modelType: 'textEmbeddingModel'
		});
	};

	return /** @type {ClaudeCodeProvider} */ (provider);
}

/**
 * Default Claude Code provider instance
 */
export const claudeCode = createClaudeCode();

// Provider exports
export { ClaudeCodeLanguageModel } from './language-model.js';

// Error handling exports
export {
	isAuthenticationError,
	isTimeoutError,
	getErrorMetadata,
	createAPICallError,
	createAuthenticationError,
	createTimeoutError
} from './errors.js';
src/ai-providers/custom-sdk/claude-code/json-extractor.js (new file, 59 lines)
@@ -0,0 +1,59 @@
/**
 * @fileoverview Extract JSON from Claude's response, handling markdown blocks and other formatting
 */

/**
 * Extract JSON from Claude's response
 * @param {string} text - The text to extract JSON from
 * @returns {string} - The extracted JSON string
 */
export function extractJson(text) {
	// Remove markdown code blocks if present
	let jsonText = text.trim();

	// Remove ```json blocks
	jsonText = jsonText.replace(/^```json\s*/gm, '');
	jsonText = jsonText.replace(/^```\s*/gm, '');
	jsonText = jsonText.replace(/```\s*$/gm, '');

	// Remove common TypeScript/JavaScript patterns
	jsonText = jsonText.replace(/^const\s+\w+\s*=\s*/, ''); // Remove "const varName = "
	jsonText = jsonText.replace(/^let\s+\w+\s*=\s*/, ''); // Remove "let varName = "
	jsonText = jsonText.replace(/^var\s+\w+\s*=\s*/, ''); // Remove "var varName = "
	jsonText = jsonText.replace(/;?\s*$/, ''); // Remove trailing semicolons

	// Try to extract JSON object or array
	const objectMatch = jsonText.match(/{[\s\S]*}/);
	const arrayMatch = jsonText.match(/\[[\s\S]*\]/);

	if (objectMatch) {
		jsonText = objectMatch[0];
	} else if (arrayMatch) {
		jsonText = arrayMatch[0];
	}

	// First try to parse as valid JSON
	try {
		JSON.parse(jsonText);
		return jsonText;
	} catch {
		// If it's not valid JSON, it might be a JavaScript object literal
		// Try to convert it to valid JSON
		try {
			// This is a simple conversion that handles basic cases
			// Replace unquoted keys with quoted keys
			const converted = jsonText
				.replace(/([{,]\s*)([a-zA-Z_$][a-zA-Z0-9_$]*)\s*:/g, '$1"$2":')
				// Replace single quotes with double quotes
				.replace(/'/g, '"');

			// Validate the converted JSON
			JSON.parse(converted);
			return converted;
		} catch {
			// If all else fails, return the original text
			// The AI SDK will handle the error appropriately
			return text;
		}
	}
}
src/ai-providers/custom-sdk/claude-code/language-model.js (new file, 458 lines; listing truncated)
@@ -0,0 +1,458 @@
/**
 * @fileoverview Claude Code Language Model implementation
 */

import { NoSuchModelError } from '@ai-sdk/provider';
import { generateId } from '@ai-sdk/provider-utils';
import { convertToClaudeCodeMessages } from './message-converter.js';
import { extractJson } from './json-extractor.js';
import { createAPICallError, createAuthenticationError } from './errors.js';

let query;
let AbortError;

async function loadClaudeCodeModule() {
	if (!query || !AbortError) {
		try {
			const mod = await import('@anthropic-ai/claude-code');
			query = mod.query;
			AbortError = mod.AbortError;
		} catch (err) {
			throw new Error(
				"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
			);
		}
	}
}

/**
 * @typedef {import('./types.js').ClaudeCodeSettings} ClaudeCodeSettings
 * @typedef {import('./types.js').ClaudeCodeModelId} ClaudeCodeModelId
 * @typedef {import('./types.js').ClaudeCodeLanguageModelOptions} ClaudeCodeLanguageModelOptions
 */

const modelMap = {
	opus: 'opus',
	sonnet: 'sonnet'
};

export class ClaudeCodeLanguageModel {
	specificationVersion = 'v1';
	defaultObjectGenerationMode = 'json';
	supportsImageUrls = false;
	supportsStructuredOutputs = false;

	/** @type {ClaudeCodeModelId} */
	modelId;

	/** @type {ClaudeCodeSettings} */
	settings;

	/** @type {string|undefined} */
	sessionId;

	/**
	 * @param {ClaudeCodeLanguageModelOptions} options
	 */
	constructor(options) {
		this.modelId = options.id;
		this.settings = options.settings ?? {};

		// Validate model ID format
		if (
			!this.modelId ||
			typeof this.modelId !== 'string' ||
			this.modelId.trim() === ''
		) {
			throw new NoSuchModelError({
				modelId: this.modelId,
				modelType: 'languageModel'
			});
		}
	}

	get provider() {
		return 'claude-code';
	}

	/**
	 * Get the model name for Claude Code CLI
	 * @returns {string}
	 */
	getModel() {
		const mapped = modelMap[this.modelId];
		return mapped ?? this.modelId;
	}

	/**
	 * Generate unsupported parameter warnings
	 * @param {Object} options - Generation options
	 * @returns {Array} Warnings array
	 */
	generateUnsupportedWarnings(options) {
		const warnings = [];
		const unsupportedParams = [];

		// Check for unsupported parameters
		if (options.temperature !== undefined)
			unsupportedParams.push('temperature');
		if (options.maxTokens !== undefined) unsupportedParams.push('maxTokens');
		if (options.topP !== undefined) unsupportedParams.push('topP');
		if (options.topK !== undefined) unsupportedParams.push('topK');
		if (options.presencePenalty !== undefined)
			unsupportedParams.push('presencePenalty');
		if (options.frequencyPenalty !== undefined)
			unsupportedParams.push('frequencyPenalty');
		if (options.stopSequences !== undefined && options.stopSequences.length > 0)
			unsupportedParams.push('stopSequences');
		if (options.seed !== undefined) unsupportedParams.push('seed');

		if (unsupportedParams.length > 0) {
			// Add a warning for each unsupported parameter
			for (const param of unsupportedParams) {
				warnings.push({
					type: 'unsupported-setting',
					setting: param,
					details: `Claude Code CLI does not support the ${param} parameter. It will be ignored.`
				});
			}
		}

		return warnings;
	}

	/**
	 * Generate text using Claude Code
	 * @param {Object} options - Generation options
	 * @returns {Promise<Object>}
	 */
	async doGenerate(options) {
		await loadClaudeCodeModule();
		const { messagesPrompt } = convertToClaudeCodeMessages(
			options.prompt,
			options.mode
		);

		const abortController = new AbortController();
		if (options.abortSignal) {
			options.abortSignal.addEventListener('abort', () =>
				abortController.abort()
			);
		}

		const queryOptions = {
			model: this.getModel(),
			abortController,
			resume: this.sessionId,
			pathToClaudeCodeExecutable: this.settings.pathToClaudeCodeExecutable,
			customSystemPrompt: this.settings.customSystemPrompt,
			appendSystemPrompt: this.settings.appendSystemPrompt,
			maxTurns: this.settings.maxTurns,
			maxThinkingTokens: this.settings.maxThinkingTokens,
			cwd: this.settings.cwd,
			executable: this.settings.executable,
			executableArgs: this.settings.executableArgs,
			permissionMode: this.settings.permissionMode,
			permissionPromptToolName: this.settings.permissionPromptToolName,
			continue: this.settings.continue,
			allowedTools: this.settings.allowedTools,
			disallowedTools: this.settings.disallowedTools,
			mcpServers: this.settings.mcpServers
		};

		let text = '';
		let usage = { promptTokens: 0, completionTokens: 0 };
		let finishReason = 'stop';
		let costUsd;
		let durationMs;
		let rawUsage;
		const warnings = this.generateUnsupportedWarnings(options);

		try {
			const response = query({
				prompt: messagesPrompt,
				options: queryOptions
			});

			for await (const message of response) {
				if (message.type === 'assistant') {
					text += message.message.content
						.map((c) => (c.type === 'text' ? c.text : ''))
						.join('');
				} else if (message.type === 'result') {
					this.sessionId = message.session_id;
					costUsd = message.total_cost_usd;
					durationMs = message.duration_ms;

					if ('usage' in message) {
						rawUsage = message.usage;
						usage = {
							promptTokens:
								(message.usage.cache_creation_input_tokens ?? 0) +
								(message.usage.cache_read_input_tokens ?? 0) +
								(message.usage.input_tokens ?? 0),
							completionTokens: message.usage.output_tokens ?? 0
						};
					}

					if (message.subtype === 'error_max_turns') {
						finishReason = 'length';
					} else if (message.subtype === 'error_during_execution') {
						finishReason = 'error';
					}
				} else if (message.type === 'system' && message.subtype === 'init') {
					this.sessionId = message.session_id;
				}
			}
		} catch (error) {
			if (error instanceof AbortError) {
				throw options.abortSignal?.aborted ? options.abortSignal.reason : error;
			}

			// Check for authentication errors
			if (
				error.message?.includes('not logged in') ||
				error.message?.includes('authentication') ||
				error.exitCode === 401
			) {
				throw createAuthenticationError({
					message:
						error.message ||
						'Authentication failed. Please ensure Claude Code CLI is properly authenticated.'
				});
			}

			// Wrap other errors with API call error
			throw createAPICallError({
				message: error.message || 'Claude Code CLI error',
				code: error.code,
				exitCode: error.exitCode,
				stderr: error.stderr,
				promptExcerpt: messagesPrompt.substring(0, 200),
				isRetryable: error.code === 'ENOENT' || error.code === 'ECONNREFUSED'
			});
		}

		// Extract JSON if in object-json mode
		if (options.mode?.type === 'object-json' && text) {
			text = extractJson(text);
		}

		return {
			text: text || undefined,
			usage,
			finishReason,
			rawCall: {
				rawPrompt: messagesPrompt,
				rawSettings: queryOptions
			},
			warnings: warnings.length > 0 ? warnings : undefined,
			response: {
				id: generateId(),
				timestamp: new Date(),
				modelId: this.modelId
			},
			request: {
				body: messagesPrompt
			},
			providerMetadata: {
				'claude-code': {
					...(this.sessionId !== undefined && { sessionId: this.sessionId }),
					...(costUsd !== undefined && { costUsd }),
					...(durationMs !== undefined && { durationMs }),
					...(rawUsage !== undefined && { rawUsage })
				}
			}
		};
	}

	/**
	 * Stream text using Claude Code
	 * @param {Object} options - Stream options
	 * @returns {Promise<Object>}
	 */
	async doStream(options) {
		await loadClaudeCodeModule();
		const { messagesPrompt } = convertToClaudeCodeMessages(
			options.prompt,
			options.mode
		);

		const abortController = new AbortController();
		if (options.abortSignal) {
			options.abortSignal.addEventListener('abort', () =>
				abortController.abort()
			);
		}

		const queryOptions = {
			model: this.getModel(),
			abortController,
			resume: this.sessionId,
			pathToClaudeCodeExecutable: this.settings.pathToClaudeCodeExecutable,
			customSystemPrompt: this.settings.customSystemPrompt,
			appendSystemPrompt: this.settings.appendSystemPrompt,
			maxTurns: this.settings.maxTurns,
			maxThinkingTokens: this.settings.maxThinkingTokens,
			cwd: this.settings.cwd,
			executable: this.settings.executable,
			executableArgs: this.settings.executableArgs,
			permissionMode: this.settings.permissionMode,
			permissionPromptToolName: this.settings.permissionPromptToolName,
			continue: this.settings.continue,
			allowedTools: this.settings.allowedTools,
			disallowedTools: this.settings.disallowedTools,
			mcpServers: this.settings.mcpServers
		};

		const warnings = this.generateUnsupportedWarnings(options);

		const stream = new ReadableStream({
			start: async (controller) => {
				try {
					const response = query({
						prompt: messagesPrompt,
						options: queryOptions
					});

					let usage = { promptTokens: 0, completionTokens: 0 };
					let accumulatedText = '';

					for await (const message of response) {
						if (message.type === 'assistant') {
							const text = message.message.content
								.map((c) => (c.type === 'text' ? c.text : ''))
								.join('');

							if (text) {
								accumulatedText += text;

								// In object-json mode, we need to accumulate the full text
								// and extract JSON at the end, so don't stream individual deltas
								if (options.mode?.type !== 'object-json') {
									controller.enqueue({
										type: 'text-delta',
										textDelta: text
									});
								}
							}
						} else if (message.type === 'result') {
							let rawUsage;
							if ('usage' in message) {
								rawUsage = message.usage;
								usage = {
									promptTokens:
										(message.usage.cache_creation_input_tokens ?? 0) +
										(message.usage.cache_read_input_tokens ?? 0) +
										(message.usage.input_tokens ?? 0),
									completionTokens: message.usage.output_tokens ?? 0
								};
							}

							let finishReason = 'stop';
							if (message.subtype === 'error_max_turns') {
								finishReason = 'length';
							} else if (message.subtype === 'error_during_execution') {
								finishReason = 'error';
							}

							// Store session ID in the model instance
							this.sessionId = message.session_id;

							// In object-json mode, extract JSON and send the full text at once
							if (options.mode?.type === 'object-json' && accumulatedText) {
								const extractedJson = extractJson(accumulatedText);
								controller.enqueue({
									type: 'text-delta',
									textDelta: extractedJson
								});
							}

							controller.enqueue({
								type: 'finish',
								finishReason,
								usage,
								providerMetadata: {
									'claude-code': {
										sessionId: message.session_id,
										...(message.total_cost_usd !== undefined && {
											costUsd: message.total_cost_usd
										}),
										...(message.duration_ms !== undefined && {
											durationMs: message.duration_ms
										}),
										...(rawUsage !== undefined && { rawUsage })
									}
								}
							});
						} else if (
							message.type === 'system' &&
							message.subtype === 'init'
						) {
							// Store session ID for future use
							this.sessionId = message.session_id;

							// Emit response metadata when session is initialized
							controller.enqueue({
								type: 'response-metadata',
								id: message.session_id,
								timestamp: new Date(),
								modelId: this.modelId
							});
						}
					}

					controller.close();
				} catch (error) {
					let errorToEmit;

					if (error instanceof AbortError) {
						errorToEmit = options.abortSignal?.aborted
							? options.abortSignal.reason
							: error;
					} else if (
						error.message?.includes('not logged in') ||
						error.message?.includes('authentication') ||
						error.exitCode === 401
					) {
						errorToEmit = createAuthenticationError({
							message:
								error.message ||
								'Authentication failed. Please ensure Claude Code CLI is properly authenticated.'
						});
					} else {
						errorToEmit = createAPICallError({
							message: error.message || 'Claude Code CLI error',
							code: error.code,
							exitCode: error.exitCode,
							stderr: error.stderr,
							promptExcerpt: messagesPrompt.substring(0, 200),
							isRetryable:
								error.code === 'ENOENT' || error.code === 'ECONNREFUSED'
|
||||
});
|
||||
}
|
||||
|
||||
// Emit error as a stream part
|
||||
controller.enqueue({
|
||||
type: 'error',
|
||||
error: errorToEmit
|
||||
});
|
||||
|
||||
controller.close();
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
return {
|
||||
stream,
|
||||
rawCall: {
|
||||
rawPrompt: messagesPrompt,
|
||||
rawSettings: queryOptions
|
||||
},
|
||||
warnings: warnings.length > 0 ? warnings : undefined,
|
||||
request: {
|
||||
body: messagesPrompt
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
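Usage sketch (illustrative, not part of the diff): one way a caller could consume the stream parts emitted by doStream above. The `model` instance, prompt shape, and direct doStream call are assumptions for the example; only the part types ('text-delta', 'finish', 'error') come from the implementation.

// Hypothetical consumer of the ReadableStream returned by doStream().
// `model` is assumed to be an instance of the language model class above.
const { stream } = await model.doStream({
	prompt: [{ role: 'user', content: 'Summarize the open tasks.' }],
	mode: { type: 'regular' }
});

const reader = stream.getReader();
let fullText = '';
for (;;) {
	const { done, value } = await reader.read();
	if (done) break;
	if (value.type === 'text-delta') fullText += value.textDelta;
	else if (value.type === 'finish') console.log('usage:', value.usage);
	else if (value.type === 'error') throw value.error;
}
console.log(fullText);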
src/ai-providers/custom-sdk/claude-code/message-converter.js (new file, 139 lines)
@@ -0,0 +1,139 @@
/**
 * @fileoverview Converts AI SDK prompt format to Claude Code message format
 */

/**
 * Convert AI SDK prompt to Claude Code messages format
 * @param {Array} prompt - AI SDK prompt array
 * @param {Object} [mode] - Generation mode
 * @param {string} mode.type - Mode type ('regular', 'object-json', 'object-tool')
 * @returns {{messagesPrompt: string, systemPrompt?: string}}
 */
export function convertToClaudeCodeMessages(prompt, mode) {
	const messages = [];
	let systemPrompt;

	for (const message of prompt) {
		switch (message.role) {
			case 'system':
				systemPrompt = message.content;
				break;

			case 'user':
				if (typeof message.content === 'string') {
					messages.push(message.content);
				} else {
					// Handle multi-part content
					const textParts = message.content
						.filter((part) => part.type === 'text')
						.map((part) => part.text)
						.join('\n');

					if (textParts) {
						messages.push(textParts);
					}

					// Note: Image parts are not supported by Claude Code CLI
					const imageParts = message.content.filter(
						(part) => part.type === 'image'
					);
					if (imageParts.length > 0) {
						console.warn(
							'Claude Code CLI does not support image inputs. Images will be ignored.'
						);
					}
				}
				break;

			case 'assistant':
				if (typeof message.content === 'string') {
					messages.push(`Assistant: ${message.content}`);
				} else {
					const textParts = message.content
						.filter((part) => part.type === 'text')
						.map((part) => part.text)
						.join('\n');

					if (textParts) {
						messages.push(`Assistant: ${textParts}`);
					}

					// Handle tool calls if present
					const toolCalls = message.content.filter(
						(part) => part.type === 'tool-call'
					);
					if (toolCalls.length > 0) {
						// For now, we'll just note that tool calls were made
						messages.push(`Assistant: [Tool calls made]`);
					}
				}
				break;

			case 'tool':
				// Tool results could be included in the conversation
				messages.push(
					`Tool Result (${message.content[0].toolName}): ${JSON.stringify(
						message.content[0].result
					)}`
				);
				break;
		}
	}

	// For the SDK, we need to provide a single prompt string
	// Format the conversation history properly

	// Combine system prompt with messages
	let finalPrompt = '';

	// Add system prompt at the beginning if present
	if (systemPrompt) {
		finalPrompt = systemPrompt;
	}

	if (messages.length === 0) {
		return { messagesPrompt: finalPrompt, systemPrompt };
	}

	// Format messages
	const formattedMessages = [];
	for (let i = 0; i < messages.length; i++) {
		const msg = messages[i];
		// Check if this is a user or assistant message based on content
		if (msg.startsWith('Assistant:') || msg.startsWith('Tool Result')) {
			formattedMessages.push(msg);
		} else {
			// User messages
			formattedMessages.push(`Human: ${msg}`);
		}
	}

	// Combine system prompt with messages
	if (finalPrompt) {
		finalPrompt = finalPrompt + '\n\n' + formattedMessages.join('\n\n');
	} else {
		finalPrompt = formattedMessages.join('\n\n');
	}

	// For JSON mode, add explicit instruction to ensure JSON output
	if (mode?.type === 'object-json') {
		// Make the JSON instruction even more explicit
		finalPrompt = `${finalPrompt}

CRITICAL INSTRUCTION: You MUST respond with ONLY valid JSON. Follow these rules EXACTLY:
1. Start your response with an opening brace {
2. End your response with a closing brace }
3. Do NOT include any text before the opening brace
4. Do NOT include any text after the closing brace
5. Do NOT use markdown code blocks or backticks
6. Do NOT include explanations or commentary
7. The ENTIRE response must be valid JSON that can be parsed with JSON.parse()

Begin your response with { and end with }`;
	}

	return {
		messagesPrompt: finalPrompt,
		systemPrompt
	};
}
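Usage sketch (illustrative, not part of the diff): the shape convertToClaudeCodeMessages produces for a small conversation, assuming string message contents.

import { convertToClaudeCodeMessages } from './message-converter.js';

const { messagesPrompt, systemPrompt } = convertToClaudeCodeMessages(
	[
		{ role: 'system', content: 'You are terse.' },
		{ role: 'user', content: 'List three fruits.' },
		{ role: 'assistant', content: 'Apple, pear, plum.' },
		{ role: 'user', content: 'Now return them as JSON.' }
	],
	{ type: 'object-json' }
);

// systemPrompt === 'You are terse.'
// messagesPrompt begins with the system prompt, followed by
// 'Human: List three fruits.', 'Assistant: Apple, pear, plum.' and
// 'Human: Now return them as JSON.', with the CRITICAL INSTRUCTION block
// appended because mode.type is 'object-json'.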
src/ai-providers/custom-sdk/claude-code/types.js (new file, 73 lines)
@@ -0,0 +1,73 @@
/**
 * @fileoverview Type definitions for Claude Code AI SDK provider
 * These JSDoc types mirror the TypeScript interfaces from the original provider
 */

/**
 * Claude Code provider settings
 * @typedef {Object} ClaudeCodeSettings
 * @property {string} [pathToClaudeCodeExecutable='claude'] - Custom path to Claude Code CLI executable
 * @property {string} [customSystemPrompt] - Custom system prompt to use
 * @property {string} [appendSystemPrompt] - Append additional content to the system prompt
 * @property {number} [maxTurns] - Maximum number of turns for the conversation
 * @property {number} [maxThinkingTokens] - Maximum thinking tokens for the model
 * @property {string} [cwd] - Working directory for CLI operations
 * @property {'bun'|'deno'|'node'} [executable='node'] - JavaScript runtime to use
 * @property {string[]} [executableArgs] - Additional arguments for the JavaScript runtime
 * @property {'default'|'acceptEdits'|'bypassPermissions'|'plan'} [permissionMode='default'] - Permission mode for tool usage
 * @property {string} [permissionPromptToolName] - Custom tool name for permission prompts
 * @property {boolean} [continue] - Continue the most recent conversation
 * @property {string} [resume] - Resume a specific session by ID
 * @property {string[]} [allowedTools] - Tools to explicitly allow during execution (e.g., ['Read', 'LS', 'Bash(git log:*)'])
 * @property {string[]} [disallowedTools] - Tools to disallow during execution (e.g., ['Write', 'Edit', 'Bash(rm:*)'])
 * @property {Object.<string, MCPServerConfig>} [mcpServers] - MCP server configuration
 * @property {boolean} [verbose] - Enable verbose logging for debugging
 */

/**
 * MCP Server configuration
 * @typedef {Object} MCPServerConfig
 * @property {'stdio'|'sse'} [type='stdio'] - Server type
 * @property {string} command - Command to execute (for stdio type)
 * @property {string[]} [args] - Arguments for the command
 * @property {Object.<string, string>} [env] - Environment variables
 * @property {string} url - URL for SSE type servers
 * @property {Object.<string, string>} [headers] - Headers for SSE type servers
 */

/**
 * Model ID type - either 'opus', 'sonnet', or any string
 * @typedef {'opus'|'sonnet'|string} ClaudeCodeModelId
 */

/**
 * Language model options
 * @typedef {Object} ClaudeCodeLanguageModelOptions
 * @property {ClaudeCodeModelId} id - The model ID
 * @property {ClaudeCodeSettings} [settings] - Optional settings
 */

/**
 * Error metadata for Claude Code errors
 * @typedef {Object} ClaudeCodeErrorMetadata
 * @property {string} [code] - Error code
 * @property {number} [exitCode] - Process exit code
 * @property {string} [stderr] - Standard error output
 * @property {string} [promptExcerpt] - Excerpt of the prompt that caused the error
 */

/**
 * Claude Code provider interface
 * @typedef {Object} ClaudeCodeProvider
 * @property {function(ClaudeCodeModelId, ClaudeCodeSettings=): Object} languageModel - Create a language model
 * @property {function(ClaudeCodeModelId, ClaudeCodeSettings=): Object} chat - Alias for languageModel
 * @property {function(string): never} textEmbeddingModel - Throws NoSuchModelError (not supported)
 */

/**
 * Claude Code provider settings
 * @typedef {Object} ClaudeCodeProviderSettings
 * @property {ClaudeCodeSettings} [defaultSettings] - Default settings to use for all models
 */

export {}; // This ensures the file is treated as a module
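Illustrative settings object (not part of the diff) exercising a few of the ClaudeCodeSettings fields documented above; the chosen values are assumptions, and the tool patterns mirror the examples in the typedef.

/** @type {import('./types.js').ClaudeCodeSettings} */
const exampleSettings = {
	pathToClaudeCodeExecutable: 'claude',
	maxTurns: 4,
	permissionMode: 'plan',
	allowedTools: ['Read', 'LS', 'Bash(git log:*)'],
	disallowedTools: ['Write', 'Edit', 'Bash(rm:*)'],
	mcpServers: {
		'task-master-ai': {
			type: 'stdio',
			command: 'npx',
			args: ['-y', '--package=task-master-ai', 'task-master-ai']
		}
	}
};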
@@ -13,3 +13,4 @@ export { OllamaAIProvider } from './ollama.js';
export { BedrockAIProvider } from './bedrock.js';
export { AzureProvider } from './azure.js';
export { VertexAIProvider } from './google-vertex.js';
export { ClaudeCodeProvider } from './claude-code.js';
src/constants/profiles.js (new file, 59 lines)
@@ -0,0 +1,59 @@
/**
 * @typedef {'claude' | 'cline' | 'codex' | 'cursor' | 'roo' | 'trae' | 'windsurf' | 'vscode'} RulesProfile
 */

/**
 * Available rule profiles for project initialization and rules command
 *
 * ⚠️ SINGLE SOURCE OF TRUTH: This is the authoritative list of all supported rule profiles.
 * This constant is used directly throughout the codebase (previously aliased as PROFILE_NAMES).
 *
 * @type {RulesProfile[]}
 * @description Defines possible rule profile sets:
 * - claude: Claude Code integration
 * - cline: Cline IDE rules
 * - codex: Codex integration
 * - cursor: Cursor IDE rules
 * - roo: Roo Code IDE rules
 * - trae: Trae IDE rules
 * - vscode: VS Code with GitHub Copilot integration
 * - windsurf: Windsurf IDE rules
 *
 * To add a new rule profile:
 * 1. Add the profile name to this array
 * 2. Create a profile file in src/profiles/{profile}.js
 * 3. Export it as {profile}Profile in src/profiles/index.js
 */
export const RULE_PROFILES = [
	'claude',
	'cline',
	'codex',
	'cursor',
	'roo',
	'trae',
	'vscode',
	'windsurf'
];

/**
 * Centralized enum for all supported Roo agent modes
 * @type {string[]}
 * @description Available Roo Code IDE modes for rule generation
 */
export const ROO_MODES = [
	'architect',
	'ask',
	'orchestrator',
	'code',
	'debug',
	'test'
];

/**
 * Check if a given rule profile is valid
 * @param {string} rulesProfile - The rule profile to check
 * @returns {boolean} True if the rule profile is valid, false otherwise
 */
export function isValidRulesProfile(rulesProfile) {
	return RULE_PROFILES.includes(rulesProfile);
}
src/constants/providers.js (new file, 33 lines)
@@ -0,0 +1,33 @@
/**
 * Provider validation constants
 * Defines which providers should be validated against the supported-models.json file
 */

// Providers that have predefined model lists and should be validated
export const VALIDATED_PROVIDERS = [
	'anthropic',
	'openai',
	'google',
	'perplexity',
	'xai',
	'mistral'
];

// Custom providers object for easy named access
export const CUSTOM_PROVIDERS = {
	AZURE: 'azure',
	VERTEX: 'vertex',
	BEDROCK: 'bedrock',
	OPENROUTER: 'openrouter',
	OLLAMA: 'ollama',
	CLAUDE_CODE: 'claude-code'
};

// Custom providers array (for backward compatibility and iteration)
export const CUSTOM_PROVIDERS_ARRAY = Object.values(CUSTOM_PROVIDERS);

// All known providers (for reference)
export const ALL_PROVIDERS = [
	...VALIDATED_PROVIDERS,
	...CUSTOM_PROVIDERS_ARRAY
];
src/constants/rules-actions.js (new file, 25 lines)
@@ -0,0 +1,25 @@
/**
 * @typedef {'add' | 'remove'} RulesAction
 */

/**
 * Individual rules action constants
 */
export const RULES_ACTIONS = {
	ADD: 'add',
	REMOVE: 'remove'
};

/**
 * Special rules command (not a CRUD operation)
 */
export const RULES_SETUP_ACTION = 'setup';

/**
 * Check if a given action is a valid rules action
 * @param {string} action - The action to check
 * @returns {boolean} True if the action is valid, false otherwise
 */
export function isValidRulesAction(action) {
	return Object.values(RULES_ACTIONS).includes(action);
}
src/profiles/base-profile.js (new file, 249 lines)
@@ -0,0 +1,249 @@
// Base profile factory for rule-transformer
import path from 'path';

/**
 * Creates a standardized profile configuration for different editors
 * @param {Object} editorConfig - Editor-specific configuration
 * @param {string} editorConfig.name - Profile name (e.g., 'cursor', 'vscode')
 * @param {string} [editorConfig.displayName] - Display name for the editor (defaults to name)
 * @param {string} editorConfig.url - Editor website URL
 * @param {string} editorConfig.docsUrl - Editor documentation URL
 * @param {string} editorConfig.profileDir - Directory for profile configuration
 * @param {string} [editorConfig.rulesDir] - Directory for rules files (defaults to profileDir/rules)
 * @param {boolean} [editorConfig.mcpConfig=true] - Whether to create MCP configuration
 * @param {string} [editorConfig.mcpConfigName='mcp.json'] - Name of MCP config file
 * @param {string} [editorConfig.fileExtension='.mdc'] - Source file extension
 * @param {string} [editorConfig.targetExtension='.md'] - Target file extension
 * @param {Object} [editorConfig.toolMappings={}] - Tool name mappings
 * @param {Array} [editorConfig.customReplacements=[]] - Custom text replacements
 * @param {Object} [editorConfig.customFileMap={}] - Custom file name mappings
 * @param {boolean} [editorConfig.supportsRulesSubdirectories=false] - Whether to use taskmaster/ subdirectory for taskmaster-specific rules (only Cursor uses this by default)
 * @param {Function} [editorConfig.onAdd] - Lifecycle hook for profile addition
 * @param {Function} [editorConfig.onRemove] - Lifecycle hook for profile removal
 * @param {Function} [editorConfig.onPostConvert] - Lifecycle hook for post-conversion
 * @returns {Object} - Complete profile configuration
 */
export function createProfile(editorConfig) {
	const {
		name,
		displayName = name,
		url,
		docsUrl,
		profileDir,
		rulesDir = `${profileDir}/rules`,
		mcpConfig = true,
		mcpConfigName = 'mcp.json',
		fileExtension = '.mdc',
		targetExtension = '.md',
		toolMappings = {},
		customReplacements = [],
		customFileMap = {},
		supportsRulesSubdirectories = false,
		onAdd,
		onRemove,
		onPostConvert
	} = editorConfig;

	const mcpConfigPath = `${profileDir}/${mcpConfigName}`;

	// Standard file mapping with custom overrides
	// Use taskmaster subdirectory only if profile supports it
	const taskmasterPrefix = supportsRulesSubdirectories ? 'taskmaster/' : '';
	const defaultFileMap = {
		'cursor_rules.mdc': `${name.toLowerCase()}_rules${targetExtension}`,
		'dev_workflow.mdc': `${taskmasterPrefix}dev_workflow${targetExtension}`,
		'self_improve.mdc': `self_improve${targetExtension}`,
		'taskmaster.mdc': `${taskmasterPrefix}taskmaster${targetExtension}`
	};

	const fileMap = { ...defaultFileMap, ...customFileMap };

	// Base global replacements that work for all editors
	const baseGlobalReplacements = [
		// Handle URLs in any context
		{ from: /cursor\.so/gi, to: url },
		{ from: /cursor\s*\.\s*so/gi, to: url },
		{ from: /https?:\/\/cursor\.so/gi, to: `https://${url}` },
		{ from: /https?:\/\/www\.cursor\.so/gi, to: `https://www.${url}` },

		// Handle tool references
		{ from: /\bedit_file\b/gi, to: toolMappings.edit_file || 'edit_file' },
		{
			from: /\bsearch tool\b/gi,
			to: `${toolMappings.search || 'search'} tool`
		},
		{ from: /\bSearch Tool\b/g, to: `${toolMappings.search || 'Search'} Tool` },

		// Handle basic terms with proper case handling
		{
			from: /\bcursor\b/gi,
			to: (match) =>
				match.charAt(0) === 'C' ? displayName : name.toLowerCase()
		},
		{ from: /Cursor/g, to: displayName },
		{ from: /CURSOR/g, to: displayName.toUpperCase() },

		// Handle file extensions if different
		...(targetExtension !== fileExtension
			? [
					{
						from: new RegExp(`\\${fileExtension}(?!\\])\\b`, 'g'),
						to: targetExtension
					}
				]
			: []),

		// Handle documentation URLs
		{ from: /docs\.cursor\.com/gi, to: docsUrl },

		// Custom editor-specific replacements
		...customReplacements
	];

	// Standard tool mappings
	const defaultToolMappings = {
		search: 'search',
		read_file: 'read_file',
		edit_file: 'edit_file',
		create_file: 'create_file',
		run_command: 'run_command',
		terminal_command: 'terminal_command',
		use_mcp: 'use_mcp',
		switch_mode: 'switch_mode',
		...toolMappings
	};

	// Create conversion config
	const conversionConfig = {
		// Profile name replacements
		profileTerms: [
			{ from: /cursor\.so/g, to: url },
			{ from: /\[cursor\.so\]/g, to: `[${url}]` },
			{ from: /href="https:\/\/cursor\.so/g, to: `href="https://${url}` },
			{ from: /\(https:\/\/cursor\.so/g, to: `(https://${url}` },
			{
				from: /\bcursor\b/gi,
				to: (match) => (match === 'Cursor' ? displayName : name.toLowerCase())
			},
			{ from: /Cursor/g, to: displayName }
		],

		// File extension replacements
		fileExtensions:
			targetExtension !== fileExtension
				? [
						{
							from: new RegExp(`\\${fileExtension}\\b`, 'g'),
							to: targetExtension
						}
					]
				: [],

		// Documentation URL replacements
		docUrls: [
			{
				from: new RegExp(`https:\\/\\/docs\\.cursor\\.com\\/[^\\s)'\"]+`, 'g'),
				to: (match) => match.replace('docs.cursor.com', docsUrl)
			},
			{
				from: new RegExp(`https:\\/\\/${docsUrl}\\/`, 'g'),
				to: `https://${docsUrl}/`
			}
		],

		// Tool references - direct replacements
		toolNames: defaultToolMappings,

		// Tool references in context - more specific replacements
		toolContexts: Object.entries(defaultToolMappings).flatMap(
			([original, mapped]) => [
				{
					from: new RegExp(`\\b${original} tool\\b`, 'g'),
					to: `${mapped} tool`
				},
				{ from: new RegExp(`\\bthe ${original}\\b`, 'g'), to: `the ${mapped}` },
				{ from: new RegExp(`\\bThe ${original}\\b`, 'g'), to: `The ${mapped}` },
				{
					from: new RegExp(`\\bCursor ${original}\\b`, 'g'),
					to: `${displayName} ${mapped}`
				}
			]
		),

		// Tool group and category names
		toolGroups: [
			{ from: /\bSearch tools\b/g, to: 'Read Group tools' },
			{ from: /\bEdit tools\b/g, to: 'Edit Group tools' },
			{ from: /\bRun tools\b/g, to: 'Command Group tools' },
			{ from: /\bMCP servers\b/g, to: 'MCP Group tools' },
			{ from: /\bSearch Group\b/g, to: 'Read Group' },
			{ from: /\bEdit Group\b/g, to: 'Edit Group' },
			{ from: /\bRun Group\b/g, to: 'Command Group' }
		],

		// File references in markdown links
		fileReferences: {
			pathPattern: /\[(.+?)\]\(mdc:\.cursor\/rules\/(.+?)\.mdc\)/g,
			replacement: (match, text, filePath) => {
				const baseName = path.basename(filePath, '.mdc');
				const newFileName =
					fileMap[`${baseName}.mdc`] || `${baseName}${targetExtension}`;
				// Update the link text to match the new filename (strip directory path for display)
				const newLinkText = path.basename(newFileName);
				// For Cursor, keep the mdc: protocol; for others, use standard relative paths
				if (name.toLowerCase() === 'cursor') {
					return `[${newLinkText}](mdc:${rulesDir}/${newFileName})`;
				} else {
					return `[${newLinkText}](${rulesDir}/${newFileName})`;
				}
			}
		}
	};

	function getTargetRuleFilename(sourceFilename) {
		if (fileMap[sourceFilename]) {
			return fileMap[sourceFilename];
		}
		return targetExtension !== fileExtension
			? sourceFilename.replace(
					new RegExp(`\\${fileExtension}$`),
					targetExtension
				)
			: sourceFilename;
	}

	return {
		profileName: name, // Use name for programmatic access (tests expect this)
		displayName: displayName, // Keep displayName for UI purposes
		profileDir,
		rulesDir,
		mcpConfig,
		mcpConfigName,
		mcpConfigPath,
		supportsRulesSubdirectories,
		fileMap,
		globalReplacements: baseGlobalReplacements,
		conversionConfig,
		getTargetRuleFilename,
		// Optional lifecycle hooks
		...(onAdd && { onAddRulesProfile: onAdd }),
		...(onRemove && { onRemoveRulesProfile: onRemove }),
		...(onPostConvert && { onPostConvertRulesProfile: onPostConvert })
	};
}

// Common tool mappings for editors that share similar tool sets
export const COMMON_TOOL_MAPPINGS = {
	// Most editors (Cursor, Cline, Windsurf) keep original tool names
	STANDARD: {},

	// Roo Code uses different tool names
	ROO_STYLE: {
		edit_file: 'apply_diff',
		search: 'search_files',
		create_file: 'write_to_file',
		run_command: 'execute_command',
		terminal_command: 'execute_command',
		use_mcp: 'use_mcp_tool'
	}
};
src/profiles/claude.js (new file, 59 lines)
@@ -0,0 +1,59 @@
// Claude Code profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { isSilentMode, log } from '../../scripts/modules/utils.js';

// Lifecycle functions for Claude Code profile
function onAddRulesProfile(targetDir, assetsDir) {
	// Use the provided assets directory to find the source file
	const sourceFile = path.join(assetsDir, 'AGENTS.md');
	const destFile = path.join(targetDir, 'CLAUDE.md');

	if (fs.existsSync(sourceFile)) {
		try {
			fs.copyFileSync(sourceFile, destFile);
			log('debug', `[Claude] Copied AGENTS.md to ${destFile}`);
		} catch (err) {
			log('error', `[Claude] Failed to copy AGENTS.md: ${err.message}`);
		}
	}
}

function onRemoveRulesProfile(targetDir) {
	const claudeFile = path.join(targetDir, 'CLAUDE.md');
	if (fs.existsSync(claudeFile)) {
		try {
			fs.rmSync(claudeFile, { force: true });
			log('debug', `[Claude] Removed CLAUDE.md from ${claudeFile}`);
		} catch (err) {
			log('error', `[Claude] Failed to remove CLAUDE.md: ${err.message}`);
		}
	}
}

function onPostConvertRulesProfile(targetDir, assetsDir) {
	onAddRulesProfile(targetDir, assetsDir);
}

// Simple filename function
function getTargetRuleFilename(sourceFilename) {
	return sourceFilename;
}

// Simple profile configuration - bypasses base-profile system
export const claudeProfile = {
	profileName: 'claude',
	displayName: 'Claude Code',
	profileDir: '.', // Root directory
	rulesDir: '.', // No rules directory needed
	mcpConfig: false, // No MCP config needed
	mcpConfigName: null,
	mcpConfigPath: null,
	conversionConfig: {},
	fileMap: {},
	globalReplacements: [],
	getTargetRuleFilename,
	onAddRulesProfile,
	onRemoveRulesProfile,
	onPostConvertRulesProfile
};
src/profiles/cline.js (new file, 20 lines)
@@ -0,0 +1,20 @@
// Cline conversion profile for rule-transformer
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';

// Create and export cline profile using the base factory
export const clineProfile = createProfile({
	name: 'cline',
	displayName: 'Cline',
	url: 'cline.bot',
	docsUrl: 'docs.cline.bot',
	profileDir: '.clinerules',
	rulesDir: '.clinerules',
	mcpConfig: false,
	mcpConfigName: 'cline_mcp_settings.json',
	fileExtension: '.mdc',
	targetExtension: '.md',
	toolMappings: COMMON_TOOL_MAPPINGS.STANDARD, // Cline uses standard tool names
	customFileMap: {
		'cursor_rules.mdc': 'cline_rules.md'
	}
});
src/profiles/codex.js (new file, 59 lines)
@@ -0,0 +1,59 @@
// Codex profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { isSilentMode, log } from '../../scripts/modules/utils.js';

// Lifecycle functions for Codex profile
function onAddRulesProfile(targetDir, assetsDir) {
	// Use the provided assets directory to find the source file
	const sourceFile = path.join(assetsDir, 'AGENTS.md');
	const destFile = path.join(targetDir, 'AGENTS.md');

	if (fs.existsSync(sourceFile)) {
		try {
			fs.copyFileSync(sourceFile, destFile);
			log('debug', `[Codex] Copied AGENTS.md to ${destFile}`);
		} catch (err) {
			log('error', `[Codex] Failed to copy AGENTS.md: ${err.message}`);
		}
	}
}

function onRemoveRulesProfile(targetDir) {
	const agentsFile = path.join(targetDir, 'AGENTS.md');
	if (fs.existsSync(agentsFile)) {
		try {
			fs.rmSync(agentsFile, { force: true });
			log('debug', `[Codex] Removed AGENTS.md from ${agentsFile}`);
		} catch (err) {
			log('error', `[Codex] Failed to remove AGENTS.md: ${err.message}`);
		}
	}
}

function onPostConvertRulesProfile(targetDir, assetsDir) {
	onAddRulesProfile(targetDir, assetsDir);
}

// Simple filename function
function getTargetRuleFilename(sourceFilename) {
	return sourceFilename;
}

// Simple profile configuration - bypasses base-profile system
export const codexProfile = {
	profileName: 'codex',
	displayName: 'Codex',
	profileDir: '.', // Root directory
	rulesDir: '.', // No rules directory needed
	mcpConfig: false, // No MCP config needed
	mcpConfigName: null,
	mcpConfigPath: null,
	conversionConfig: {},
	fileMap: {},
	globalReplacements: [],
	getTargetRuleFilename,
	onAddRulesProfile,
	onRemoveRulesProfile,
	onPostConvertRulesProfile
};
src/profiles/cursor.js (new file, 21 lines)
@@ -0,0 +1,21 @@
// Cursor conversion profile for rule-transformer
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';

// Create and export cursor profile using the base factory
export const cursorProfile = createProfile({
	name: 'cursor',
	displayName: 'Cursor',
	url: 'cursor.so',
	docsUrl: 'docs.cursor.com',
	profileDir: '.cursor',
	rulesDir: '.cursor/rules',
	mcpConfig: true,
	mcpConfigName: 'mcp.json',
	fileExtension: '.mdc',
	targetExtension: '.mdc', // Cursor keeps .mdc extension
	toolMappings: COMMON_TOOL_MAPPINGS.STANDARD,
	supportsRulesSubdirectories: true,
	customFileMap: {
		'cursor_rules.mdc': 'cursor_rules.mdc' // Keep the same name for cursor
	}
});
src/profiles/index.js (new file, 9 lines)
@@ -0,0 +1,9 @@
// Profile exports for centralized importing
export { claudeProfile } from './claude.js';
export { clineProfile } from './cline.js';
export { codexProfile } from './codex.js';
export { cursorProfile } from './cursor.js';
export { rooProfile } from './roo.js';
export { traeProfile } from './trae.js';
export { vscodeProfile } from './vscode.js';
export { windsurfProfile } from './windsurf.js';
src/profiles/roo.js (new file, 129 lines)
@@ -0,0 +1,129 @@
// Roo Code conversion profile for rule-transformer
import path from 'path';
import fs from 'fs';
import { isSilentMode, log } from '../../scripts/modules/utils.js';
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';
import { ROO_MODES } from '../constants/profiles.js';

// Lifecycle functions for Roo profile
function onAddRulesProfile(targetDir, assetsDir) {
	// Use the provided assets directory to find the roocode directory
	const sourceDir = path.join(assetsDir, 'roocode');

	if (!fs.existsSync(sourceDir)) {
		log('error', `[Roo] Source directory does not exist: ${sourceDir}`);
		return;
	}

	copyRecursiveSync(sourceDir, targetDir);
	log('debug', `[Roo] Copied roocode directory to ${targetDir}`);

	const rooModesDir = path.join(sourceDir, '.roo');

	// Copy .roomodes to project root
	const roomodesSrc = path.join(sourceDir, '.roomodes');
	const roomodesDest = path.join(targetDir, '.roomodes');
	if (fs.existsSync(roomodesSrc)) {
		try {
			fs.copyFileSync(roomodesSrc, roomodesDest);
			log('debug', `[Roo] Copied .roomodes to ${roomodesDest}`);
		} catch (err) {
			log('error', `[Roo] Failed to copy .roomodes: ${err.message}`);
		}
	}

	for (const mode of ROO_MODES) {
		const src = path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`);
		const dest = path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`);
		if (fs.existsSync(src)) {
			try {
				const destDir = path.dirname(dest);
				if (!fs.existsSync(destDir)) fs.mkdirSync(destDir, { recursive: true });
				fs.copyFileSync(src, dest);
				log('debug', `[Roo] Copied ${mode}-rules to ${dest}`);
			} catch (err) {
				log('error', `[Roo] Failed to copy ${src} to ${dest}: ${err.message}`);
			}
		}
	}
}

function copyRecursiveSync(src, dest) {
	const exists = fs.existsSync(src);
	const stats = exists && fs.statSync(src);
	const isDirectory = exists && stats.isDirectory();
	if (isDirectory) {
		if (!fs.existsSync(dest)) fs.mkdirSync(dest, { recursive: true });
		fs.readdirSync(src).forEach((childItemName) => {
			copyRecursiveSync(
				path.join(src, childItemName),
				path.join(dest, childItemName)
			);
		});
	} else {
		fs.copyFileSync(src, dest);
	}
}

function onRemoveRulesProfile(targetDir) {
	const roomodesPath = path.join(targetDir, '.roomodes');
	if (fs.existsSync(roomodesPath)) {
		try {
			fs.rmSync(roomodesPath, { force: true });
			log('debug', `[Roo] Removed .roomodes from ${roomodesPath}`);
		} catch (err) {
			log('error', `[Roo] Failed to remove .roomodes: ${err.message}`);
		}
	}

	const rooDir = path.join(targetDir, '.roo');
	if (fs.existsSync(rooDir)) {
		fs.readdirSync(rooDir).forEach((entry) => {
			if (entry.startsWith('rules-')) {
				const modeDir = path.join(rooDir, entry);
				try {
					fs.rmSync(modeDir, { recursive: true, force: true });
					log('debug', `[Roo] Removed ${entry} directory from ${modeDir}`);
				} catch (err) {
					log('error', `[Roo] Failed to remove ${modeDir}: ${err.message}`);
				}
			}
		});
		if (fs.readdirSync(rooDir).length === 0) {
			try {
				fs.rmSync(rooDir, { recursive: true, force: true });
				log('debug', `[Roo] Removed empty .roo directory from ${rooDir}`);
			} catch (err) {
				log('error', `[Roo] Failed to remove .roo directory: ${err.message}`);
			}
		}
	}
}

function onPostConvertRulesProfile(targetDir, assetsDir) {
	onAddRulesProfile(targetDir, assetsDir);
}

// Create and export roo profile using the base factory
export const rooProfile = createProfile({
	name: 'roo',
	displayName: 'Roo Code',
	url: 'roocode.com',
	docsUrl: 'docs.roocode.com',
	profileDir: '.roo',
	rulesDir: '.roo/rules',
	mcpConfig: true,
	mcpConfigName: 'mcp.json',
	fileExtension: '.mdc',
	targetExtension: '.md',
	toolMappings: COMMON_TOOL_MAPPINGS.ROO_STYLE,
	customFileMap: {
		'cursor_rules.mdc': 'roo_rules.md'
	},
	onAdd: onAddRulesProfile,
	onRemove: onRemoveRulesProfile,
	onPostConvert: onPostConvertRulesProfile
});

// Export lifecycle functions separately to avoid naming conflicts
export { onAddRulesProfile, onRemoveRulesProfile, onPostConvertRulesProfile };
src/profiles/trae.js (new file, 17 lines)
@@ -0,0 +1,17 @@
// Trae conversion profile for rule-transformer
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';

// Create and export trae profile using the base factory
export const traeProfile = createProfile({
	name: 'trae',
	displayName: 'Trae',
	url: 'trae.ai',
	docsUrl: 'docs.trae.ai',
	profileDir: '.trae',
	rulesDir: '.trae/rules',
	mcpConfig: false,
	mcpConfigName: 'trae_mcp_settings.json',
	fileExtension: '.mdc',
	targetExtension: '.md',
	toolMappings: COMMON_TOOL_MAPPINGS.STANDARD // Trae uses standard tool names
});
src/profiles/vscode.js (new file, 41 lines)
@@ -0,0 +1,41 @@
// VS Code conversion profile for rule-transformer
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';

// Create and export vscode profile using the base factory
export const vscodeProfile = createProfile({
	name: 'vscode',
	displayName: 'VS Code',
	url: 'code.visualstudio.com',
	docsUrl: 'code.visualstudio.com/docs',
	profileDir: '.vscode', // MCP config location
	rulesDir: '.github/instructions', // VS Code instructions location
	mcpConfig: true,
	mcpConfigName: 'mcp.json',
	fileExtension: '.mdc',
	targetExtension: '.md',
	toolMappings: COMMON_TOOL_MAPPINGS.STANDARD, // VS Code uses standard tool names
	customFileMap: {
		'cursor_rules.mdc': 'vscode_rules.md' // Rename cursor_rules to vscode_rules
	},
	customReplacements: [
		// Core VS Code directory structure changes
		{ from: /\.cursor\/rules/g, to: '.github/instructions' },
		{ from: /\.cursor\/mcp\.json/g, to: '.vscode/mcp.json' },

		// Fix any remaining vscode/rules references that might be created during transformation
		{ from: /\.vscode\/rules/g, to: '.github/instructions' },

		// VS Code custom instructions format - use applyTo with quoted patterns instead of globs
		{ from: /^globs:\s*(.+)$/gm, to: 'applyTo: "$1"' },

		// Essential markdown link transformations for VS Code structure
		{
			from: /\[(.+?)\]\(mdc:\.cursor\/rules\/(.+?)\.mdc\)/g,
			to: '[$1](.github/instructions/$2.md)'
		},

		// VS Code specific terminology
		{ from: /rules directory/g, to: 'instructions directory' },
		{ from: /cursor rules/gi, to: 'VS Code instructions' }
	]
});
src/profiles/windsurf.js (new file, 17 lines)
@@ -0,0 +1,17 @@
// Windsurf conversion profile for rule-transformer
import { createProfile, COMMON_TOOL_MAPPINGS } from './base-profile.js';

// Create and export windsurf profile using the base factory
export const windsurfProfile = createProfile({
	name: 'windsurf',
	displayName: 'Windsurf',
	url: 'windsurf.com',
	docsUrl: 'docs.windsurf.com',
	profileDir: '.windsurf',
	rulesDir: '.windsurf/rules',
	mcpConfig: true,
	mcpConfigName: 'mcp.json',
	fileExtension: '.mdc',
	targetExtension: '.md',
	toolMappings: COMMON_TOOL_MAPPINGS.STANDARD // Windsurf uses standard tool names
});
src/ui/confirm.js (new file, 100 lines)
@@ -0,0 +1,100 @@
import chalk from 'chalk';
import boxen from 'boxen';

/**
 * Confirm removing profile rules (destructive operation)
 * @param {string[]} profiles - Array of profile names to remove
 * @returns {Promise<boolean>} - Promise resolving to true if user confirms, false otherwise
 */
async function confirmProfilesRemove(profiles) {
	const profileList = profiles
		.map((b) => b.charAt(0).toUpperCase() + b.slice(1))
		.join(', ');
	console.log(
		boxen(
			chalk.yellow(
				`WARNING: This will selectively remove Task Master components for: ${profileList}.

What will be removed:
• Task Master specific rule files (e.g., cursor_rules.mdc, taskmaster.mdc, etc.)
• Task Master MCP server configuration (if no other MCP servers exist)

What will be preserved:
• Your existing custom rule files
• Other MCP server configurations
• The profile directory itself (unless completely empty after removal)

The .[profile] directory will only be removed if ALL of the following are true:
• All rules in the directory were Task Master rules (no custom rules)
• No other files or folders exist in the profile directory
• The MCP configuration was completely removed (no other servers)

Are you sure you want to proceed?`
			),
			{ padding: 1, borderColor: 'yellow', borderStyle: 'round' }
		)
	);
	const inquirer = await import('inquirer');
	const { confirm } = await inquirer.default.prompt([
		{
			type: 'confirm',
			name: 'confirm',
			message: 'Type y to confirm selective removal, or n to abort:',
			default: false
		}
	]);
	return confirm;
}

/**
 * Confirm removing ALL remaining profile rules (extremely critical operation)
 * @param {string[]} profiles - Array of profile names to remove
 * @param {string[]} remainingProfiles - Array of profiles that would be left after removal
 * @returns {Promise<boolean>} - Promise resolving to true if user confirms, false otherwise
 */
async function confirmRemoveAllRemainingProfiles(profiles, remainingProfiles) {
	const profileList = profiles
		.map((p) => p.charAt(0).toUpperCase() + p.slice(1))
		.join(', ');

	console.log(
		boxen(
			chalk.red.bold(
				`⚠️ CRITICAL WARNING: REMOVING ALL TASK MASTER RULE PROFILES ⚠️\n\n` +
					`You are about to remove Task Master components for: ${profileList}\n` +
					`This will leave your project with NO Task Master rule profiles remaining!\n\n` +
					`What will be removed:\n` +
					`• All Task Master specific rule files\n` +
					`• Task Master MCP server configurations\n` +
					`• Profile directories (only if completely empty after removal)\n\n` +
					`What will be preserved:\n` +
					`• Your existing custom rule files\n` +
					`• Other MCP server configurations\n` +
					`• Profile directories with custom content\n\n` +
					`This could impact Task Master functionality but will preserve your custom configurations.\n\n` +
					`Are you absolutely sure you want to proceed?`
			),
			{
				padding: 1,
				borderColor: 'red',
				borderStyle: 'double',
				title: '🚨 CRITICAL OPERATION',
				titleAlignment: 'center'
			}
		)
	);

	const inquirer = await import('inquirer');
	const { confirm } = await inquirer.default.prompt([
		{
			type: 'confirm',
			name: 'confirm',
			message:
				'Type y to confirm removing ALL Task Master rule profiles, or n to abort:',
			default: false
		}
	]);
	return confirm;
}

export { confirmProfilesRemove, confirmRemoveAllRemainingProfiles };
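Usage sketch (illustrative, not part of the diff): guarding a destructive removal behind the confirmation prompt above; the profile names are examples.

import { confirmProfilesRemove } from './confirm.js';

const proceed = await confirmProfilesRemove(['cursor', 'roo']);
if (!proceed) {
	console.log('Aborted profile rules removal.');
}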
src/utils/create-mcp-config.js (new file, 264 lines)
@@ -0,0 +1,264 @@
import fs from 'fs';
import path from 'path';
import { log } from '../../scripts/modules/utils.js';

// Return JSON with existing mcp.json formatting style
function formatJSONWithTabs(obj) {
	let json = JSON.stringify(obj, null, '\t');

	json = json.replace(
		/(\[\n\t+)([^[\]]+?)(\n\t+\])/g,
		(match, openBracket, content, closeBracket) => {
			// Only convert to single line if content doesn't contain nested objects/arrays
			if (!content.includes('{') && !content.includes('[')) {
				const singleLineContent = content
					.replace(/\n\t+/g, ' ')
					.replace(/\s+/g, ' ')
					.trim();
				return `[${singleLineContent}]`;
			}
			return match;
		}
	);

	return json;
}

// Structure matches project conventions (see scripts/init.js)
export function setupMCPConfiguration(projectDir, mcpConfigPath) {
	// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
	if (!mcpConfigPath) {
		log(
			'debug',
			'[MCP Config] No mcpConfigPath provided, skipping MCP configuration setup'
		);
		return;
	}

	// Build the full path to the MCP config file
	const mcpPath = path.join(projectDir, mcpConfigPath);
	const configDir = path.dirname(mcpPath);

	log('info', `Setting up MCP configuration at ${mcpPath}...`);

	// New MCP config to be added - references the installed package
	const newMCPServer = {
		'task-master-ai': {
			command: 'npx',
			args: ['-y', '--package=task-master-ai', 'task-master-ai'],
			env: {
				ANTHROPIC_API_KEY: 'ANTHROPIC_API_KEY_HERE',
				PERPLEXITY_API_KEY: 'PERPLEXITY_API_KEY_HERE',
				OPENAI_API_KEY: 'OPENAI_API_KEY_HERE',
				GOOGLE_API_KEY: 'GOOGLE_API_KEY_HERE',
				XAI_API_KEY: 'XAI_API_KEY_HERE',
				OPENROUTER_API_KEY: 'OPENROUTER_API_KEY_HERE',
				MISTRAL_API_KEY: 'MISTRAL_API_KEY_HERE',
				AZURE_OPENAI_API_KEY: 'AZURE_OPENAI_API_KEY_HERE',
				OLLAMA_API_KEY: 'OLLAMA_API_KEY_HERE'
			}
		}
	};

	// Create config directory if it doesn't exist
	if (!fs.existsSync(configDir)) {
		fs.mkdirSync(configDir, { recursive: true });
	}

	if (fs.existsSync(mcpPath)) {
		log(
			'info',
			'MCP configuration file already exists, checking for existing task-master-ai...'
		);
		try {
			// Read existing config
			const mcpConfig = JSON.parse(fs.readFileSync(mcpPath, 'utf8'));
			// Initialize mcpServers if it doesn't exist
			if (!mcpConfig.mcpServers) {
				mcpConfig.mcpServers = {};
			}
			// Check if any existing server configuration already has task-master-ai in its args
			const hasMCPString = Object.values(mcpConfig.mcpServers).some(
				(server) =>
					server.args &&
					Array.isArray(server.args) &&
					server.args.some(
						(arg) => typeof arg === 'string' && arg.includes('task-master-ai')
					)
			);
			if (hasMCPString) {
				log(
					'info',
					'Found existing task-master-ai MCP configuration in mcp.json, leaving untouched'
				);
				return; // Exit early, don't modify the existing configuration
			}
			// Add the task-master-ai server if it doesn't exist
			if (!mcpConfig.mcpServers['task-master-ai']) {
				mcpConfig.mcpServers['task-master-ai'] = newMCPServer['task-master-ai'];
				log(
					'info',
					'Added task-master-ai server to existing MCP configuration'
				);
			} else {
				log('info', 'task-master-ai server already configured in mcp.json');
			}
			// Write the updated configuration
			fs.writeFileSync(mcpPath, formatJSONWithTabs(mcpConfig) + '\n');
			log('success', 'Updated MCP configuration file');
		} catch (error) {
			log('error', `Failed to update MCP configuration: ${error.message}`);
			// Create a backup before potentially modifying
			const backupPath = `${mcpPath}.backup-${Date.now()}`;
			if (fs.existsSync(mcpPath)) {
				fs.copyFileSync(mcpPath, backupPath);
				log('info', `Created backup of existing mcp.json at ${backupPath}`);
			}
			// Create new configuration
			const newMCPConfig = {
				mcpServers: newMCPServer
			};
			fs.writeFileSync(mcpPath, formatJSONWithTabs(newMCPConfig) + '\n');
			log(
				'warn',
				'Created new MCP configuration file (backup of original file was created if it existed)'
			);
		}
	} else {
		// If mcp.json doesn't exist, create it
		const newMCPConfig = {
			mcpServers: newMCPServer
		};
		fs.writeFileSync(mcpPath, formatJSONWithTabs(newMCPConfig) + '\n');
		log('success', `Created MCP configuration file at ${mcpPath}`);
	}

	// Add note to console about MCP integration
	log('info', 'MCP server will use the installed task-master-ai package');
}

/**
 * Remove Task Master MCP server configuration from an existing mcp.json file
 * Only removes Task Master entries, preserving other MCP servers
 * @param {string} projectDir - Target project directory
 * @param {string} mcpConfigPath - Relative path to MCP config file (e.g., '.cursor/mcp.json')
 * @returns {Object} Result object with success status and details
 */
export function removeTaskMasterMCPConfiguration(projectDir, mcpConfigPath) {
	// Handle null mcpConfigPath (e.g., for Claude/Codex profiles)
	if (!mcpConfigPath) {
		return {
			success: true,
			removed: false,
			deleted: false,
			error: null,
			hasOtherServers: false
		};
	}

	const mcpPath = path.join(projectDir, mcpConfigPath);

	let result = {
		success: false,
		removed: false,
		deleted: false,
		error: null,
		hasOtherServers: false
	};

	if (!fs.existsSync(mcpPath)) {
		result.success = true;
		result.removed = false;
		log('debug', `[MCP Config] MCP config file does not exist: ${mcpPath}`);
		return result;
	}

	try {
		// Read existing config
		const mcpConfig = JSON.parse(fs.readFileSync(mcpPath, 'utf8'));

		if (!mcpConfig.mcpServers) {
			result.success = true;
			result.removed = false;
			log('debug', `[MCP Config] No mcpServers section found in: ${mcpPath}`);
			return result;
		}

		// Check if Task Master is configured
		const hasTaskMaster =
			mcpConfig.mcpServers['task-master-ai'] ||
			Object.values(mcpConfig.mcpServers).some(
				(server) =>
					server.args &&
					Array.isArray(server.args) &&
					server.args.some(
						(arg) => typeof arg === 'string' && arg.includes('task-master-ai')
					)
			);

		if (!hasTaskMaster) {
			result.success = true;
			result.removed = false;
			log(
				'debug',
				`[MCP Config] Task Master not found in MCP config: ${mcpPath}`
			);
			return result;
		}

		// Remove task-master-ai server
		delete mcpConfig.mcpServers['task-master-ai'];

		// Also remove any servers that have task-master-ai in their args
		Object.keys(mcpConfig.mcpServers).forEach((serverName) => {
			const server = mcpConfig.mcpServers[serverName];
			if (
				server.args &&
				Array.isArray(server.args) &&
				server.args.some(
					(arg) => typeof arg === 'string' && arg.includes('task-master-ai')
				)
			) {
				delete mcpConfig.mcpServers[serverName];
				log(
					'debug',
					`[MCP Config] Removed server '${serverName}' containing task-master-ai`
				);
			}
		});

		// Check if there are other MCP servers remaining
		const remainingServers = Object.keys(mcpConfig.mcpServers);
		result.hasOtherServers = remainingServers.length > 0;

		if (result.hasOtherServers) {
			// Write back the modified config with remaining servers
			fs.writeFileSync(mcpPath, formatJSONWithTabs(mcpConfig) + '\n');
			result.success = true;
			result.removed = true;
			result.deleted = false;
			log(
				'info',
				`[MCP Config] Removed Task Master from MCP config, preserving other servers: ${remainingServers.join(', ')}`
			);
		} else {
			// No other servers, delete the entire file
			fs.rmSync(mcpPath, { force: true });
			result.success = true;
			result.removed = true;
			result.deleted = true;
			log(
				'info',
				`[MCP Config] Removed MCP config file (no other servers remaining): ${mcpPath}`
			);
		}
	} catch (error) {
		result.error = error.message;
		log(
			'error',
			`[MCP Config] Failed to remove Task Master from MCP config: ${error.message}`
		);
	}

	return result;
}
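Usage sketch (illustrative, not part of the diff): how these helpers might be called with a profile's mcpConfigPath (e.g. '.cursor/mcp.json' from cursorProfile); the project path and the add/remove flow are assumptions.

import {
	setupMCPConfiguration,
	removeTaskMasterMCPConfiguration
} from './create-mcp-config.js';

// Adding a profile: create or extend the MCP config in the target project.
setupMCPConfiguration('/path/to/project', '.cursor/mcp.json');

// Removing a profile: strip only the Task Master entries.
const result = removeTaskMasterMCPConfiguration(
	'/path/to/project',
	'.cursor/mcp.json'
);
if (result.removed && result.hasOtherServers) {
	// Other MCP servers were preserved in .cursor/mcp.json.
}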
293
src/utils/manage-gitignore.js
Normal file
293
src/utils/manage-gitignore.js
Normal file
@@ -0,0 +1,293 @@
// Utility to manage .gitignore files with task file preferences and template merging
import fs from 'fs';
import path from 'path';

// Constants
const TASK_FILES_COMMENT = '# Task files';
const TASK_JSON_PATTERN = 'tasks.json';
const TASK_DIR_PATTERN = 'tasks/';

/**
 * Normalizes a line by removing comments and trimming whitespace
 * @param {string} line - Line to normalize
 * @returns {string} Normalized line
 */
function normalizeLine(line) {
	return line.trim().replace(/^#/, '').trim();
}

/**
 * Checks if a line is task-related (tasks.json or tasks/)
 * @param {string} line - Line to check
 * @returns {boolean} True if line is task-related
 */
function isTaskLine(line) {
	const normalized = normalizeLine(line);
	return normalized === TASK_JSON_PATTERN || normalized === TASK_DIR_PATTERN;
}

/**
 * Adjusts task-related lines in template based on storage preference
 * @param {string[]} templateLines - Array of template lines
 * @param {boolean} storeTasksInGit - Whether to comment out task lines
 * @returns {string[]} Adjusted template lines
 */
function adjustTaskLinesInTemplate(templateLines, storeTasksInGit) {
	return templateLines.map((line) => {
		if (isTaskLine(line)) {
			const normalized = normalizeLine(line);
			// Preserve original trailing whitespace from the line
			const originalTrailingSpace = line.match(/\s*$/)[0];
			return storeTasksInGit
				? `# ${normalized}${originalTrailingSpace}`
				: `${normalized}${originalTrailingSpace}`;
		}
		return line;
	});
}

/**
 * Removes existing task files section from content
 * @param {string[]} existingLines - Existing file lines
 * @returns {string[]} Lines with task section removed
 */
function removeExistingTaskSection(existingLines) {
	const cleanedLines = [];
	let inTaskSection = false;

	for (const line of existingLines) {
		// Start of task files section
		if (line.trim() === TASK_FILES_COMMENT) {
			inTaskSection = true;
			continue;
		}

		// Task lines (commented or not)
		if (isTaskLine(line)) {
			continue;
		}

		// Empty lines within task section
		if (inTaskSection && !line.trim()) {
			continue;
		}

		// End of task section (any non-empty, non-task line)
		if (inTaskSection && line.trim() && !isTaskLine(line)) {
			inTaskSection = false;
		}

		// Keep all other lines
		if (!inTaskSection) {
			cleanedLines.push(line);
		}
	}

	return cleanedLines;
}

/**
 * Filters template lines to only include new content not already present
 * @param {string[]} templateLines - Template lines
 * @param {Set<string>} existingLinesSet - Set of existing trimmed lines
 * @returns {string[]} New lines to add
 */
function filterNewTemplateLines(templateLines, existingLinesSet) {
	return templateLines.filter((line) => {
		const trimmed = line.trim();
		if (!trimmed) return false;

		// Skip task-related lines (handled separately)
		if (isTaskLine(line) || trimmed === TASK_FILES_COMMENT) {
			return false;
		}

		// Include only if not already present
		return !existingLinesSet.has(trimmed);
	});
}

/**
 * Builds the task files section based on storage preference
 * @param {boolean} storeTasksInGit - Whether to comment out task lines
 * @returns {string[]} Task files section lines
 */
function buildTaskFilesSection(storeTasksInGit) {
	const section = [TASK_FILES_COMMENT];

	if (storeTasksInGit) {
		section.push(`# ${TASK_JSON_PATTERN}`, `# ${TASK_DIR_PATTERN} `);
	} else {
		section.push(TASK_JSON_PATTERN, `${TASK_DIR_PATTERN} `);
	}

	return section;
}

/**
 * Adds a separator line if needed (avoids double spacing)
 * @param {string[]} lines - Current lines array
 */
function addSeparatorIfNeeded(lines) {
	if (lines.some((line) => line.trim())) {
		const lastLine = lines[lines.length - 1];
		if (lastLine && lastLine.trim()) {
			lines.push('');
		}
	}
}

/**
 * Validates input parameters
 * @param {string} targetPath - Path to .gitignore file
 * @param {string} content - Template content
 * @param {boolean} storeTasksInGit - Storage preference
 * @throws {Error} If validation fails
 */
function validateInputs(targetPath, content, storeTasksInGit) {
	if (!targetPath || typeof targetPath !== 'string') {
		throw new Error('targetPath must be a non-empty string');
	}

	if (!targetPath.endsWith('.gitignore')) {
		throw new Error('targetPath must end with .gitignore');
	}

	if (!content || typeof content !== 'string') {
		throw new Error('content must be a non-empty string');
	}

	if (typeof storeTasksInGit !== 'boolean') {
		throw new Error('storeTasksInGit must be a boolean');
	}
}

/**
 * Creates a new .gitignore file from template
 * @param {string} targetPath - Path to create file at
 * @param {string[]} templateLines - Adjusted template lines
 * @param {function} log - Logging function
 */
function createNewGitignoreFile(targetPath, templateLines, log) {
	try {
		fs.writeFileSync(targetPath, templateLines.join('\n'));
		if (typeof log === 'function') {
			log('success', `Created ${targetPath} with full template`);
		}
	} catch (error) {
		if (typeof log === 'function') {
			log('error', `Failed to create ${targetPath}: ${error.message}`);
		}
		throw error;
	}
}

/**
 * Merges template content with existing .gitignore file
 * @param {string} targetPath - Path to existing file
 * @param {string[]} templateLines - Adjusted template lines
 * @param {boolean} storeTasksInGit - Storage preference
 * @param {function} log - Logging function
 */
function mergeWithExistingFile(
	targetPath,
	templateLines,
	storeTasksInGit,
	log
) {
	try {
		// Read and process existing file
		const existingContent = fs.readFileSync(targetPath, 'utf8');
		const existingLines = existingContent.split('\n');

		// Remove existing task section
		const cleanedExistingLines = removeExistingTaskSection(existingLines);

		// Find new template lines to add
		const existingLinesSet = new Set(
			cleanedExistingLines.map((line) => line.trim()).filter((line) => line)
		);
		const newLines = filterNewTemplateLines(templateLines, existingLinesSet);

		// Build final content
		const finalLines = [...cleanedExistingLines];

		// Add new template content
		if (newLines.length > 0) {
			addSeparatorIfNeeded(finalLines);
			finalLines.push(...newLines);
		}

		// Add task files section
		addSeparatorIfNeeded(finalLines);
		finalLines.push(...buildTaskFilesSection(storeTasksInGit));

		// Write result
		fs.writeFileSync(targetPath, finalLines.join('\n'));

		if (typeof log === 'function') {
			const hasNewContent =
				newLines.length > 0 ? ' and merged new content' : '';
			log(
				'success',
				`Updated ${targetPath} according to user preference${hasNewContent}`
			);
		}
	} catch (error) {
		if (typeof log === 'function') {
			log(
				'error',
				`Failed to merge content with ${targetPath}: ${error.message}`
			);
		}
		throw error;
	}
}

/**
 * Manages .gitignore file creation and updates with task file preferences
 * @param {string} targetPath - Path to the .gitignore file
 * @param {string} content - Template content for .gitignore
 * @param {boolean} storeTasksInGit - Whether to store tasks in git or not
 * @param {function} log - Logging function (level, message)
 * @throws {Error} If validation or file operations fail
 */
function manageGitignoreFile(
	targetPath,
	content,
	storeTasksInGit = true,
	log = null
) {
	// Validate inputs
	validateInputs(targetPath, content, storeTasksInGit);

	// Process template with task preference
	const templateLines = content.split('\n');
	const adjustedTemplateLines = adjustTaskLinesInTemplate(
		templateLines,
		storeTasksInGit
	);

	// Handle file creation or merging
	if (!fs.existsSync(targetPath)) {
		createNewGitignoreFile(targetPath, adjustedTemplateLines, log);
	} else {
		mergeWithExistingFile(
			targetPath,
			adjustedTemplateLines,
			storeTasksInGit,
			log
		);
	}
}

export default manageGitignoreFile;
export {
	manageGitignoreFile,
	normalizeLine,
	isTaskLine,
	buildTaskFilesSection,
	TASK_FILES_COMMENT,
	TASK_JSON_PATTERN,
	TASK_DIR_PATTERN
};
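A minimal usage sketch of the exported `manageGitignoreFile` entry point (the template string and paths are illustrative, not the shipped asset). Passing `true` for `storeTasksInGit` writes the task patterns commented out so `tasks.json` and `tasks/` stay tracked; `false` leaves them active so git ignores them:

```js
import manageGitignoreFile from './src/utils/manage-gitignore.js';

// Illustrative template; the real one ships with the package assets.
const template = [
	'node_modules/',
	'.env',
	'',
	'# Task files',
	'tasks.json',
	'tasks/'
].join('\n');

manageGitignoreFile(
	'/path/to/project/.gitignore', // must end with '.gitignore'
	template,
	true, // keep task files tracked (patterns written commented out)
	(level, message) => console.log(`[${level}] ${message}`)
);
```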
src/utils/profiles.js (new file, 283 lines)
@@ -0,0 +1,283 @@
/**
 * Profiles Utility
 * Consolidated utilities for profile detection, setup, and summary generation
 */
import fs from 'fs';
import path from 'path';
import inquirer from 'inquirer';
import chalk from 'chalk';
import boxen from 'boxen';
import { log } from '../../scripts/modules/utils.js';
import { getRulesProfile } from './rule-transformer.js';
import { RULE_PROFILES } from '../constants/profiles.js';

// =============================================================================
// PROFILE DETECTION
// =============================================================================

/**
 * Detect which profiles are currently installed in the project
 * @param {string} projectRoot - Project root directory
 * @returns {string[]} Array of installed profile names
 */
export function getInstalledProfiles(projectRoot) {
	const installedProfiles = [];

	for (const profileName of RULE_PROFILES) {
		const profileConfig = getRulesProfile(profileName);
		if (!profileConfig) continue;

		// Check if the profile directory exists
		const profileDir = path.join(projectRoot, profileConfig.profileDir);
		const rulesDir = path.join(projectRoot, profileConfig.rulesDir);

		// A profile is considered installed if either the profile dir or rules dir exists
		if (fs.existsSync(profileDir) || fs.existsSync(rulesDir)) {
			installedProfiles.push(profileName);
		}
	}

	return installedProfiles;
}

/**
 * Check if removing the specified profiles would result in no profiles remaining
 * @param {string} projectRoot - Project root directory
 * @param {string[]} profilesToRemove - Array of profile names to remove
 * @returns {boolean} True if removal would result in no profiles remaining
 */
export function wouldRemovalLeaveNoProfiles(projectRoot, profilesToRemove) {
	const installedProfiles = getInstalledProfiles(projectRoot);
	const remainingProfiles = installedProfiles.filter(
		(profile) => !profilesToRemove.includes(profile)
	);

	return remainingProfiles.length === 0 && installedProfiles.length > 0;
}

// =============================================================================
// PROFILE SETUP
// =============================================================================

/**
 * Get the display name for a profile
 */
function getProfileDisplayName(name) {
	const profile = getRulesProfile(name);
	return profile?.displayName || name.charAt(0).toUpperCase() + name.slice(1);
}

// Note: Profile choices are now generated dynamically within runInteractiveProfilesSetup()
// to ensure proper alphabetical sorting and pagination configuration

/**
 * Launches an interactive prompt for selecting which rule profiles to include in your project.
 *
 * This function dynamically lists all available profiles (from RULE_PROFILES) and presents them as checkboxes.
 * The user must select at least one profile (no defaults are pre-selected). The result is an array of selected profile names.
 *
 * Used by both project initialization (init) and the CLI 'task-master rules setup' command.
 *
 * @returns {Promise<string[]>} Array of selected profile names (e.g., ['cursor', 'windsurf'])
 */
export async function runInteractiveProfilesSetup() {
	// Generate the profile list dynamically with proper display names, alphabetized
	const profileDescriptions = RULE_PROFILES.map((profileName) => {
		const displayName = getProfileDisplayName(profileName);
		const profile = getRulesProfile(profileName);

		// Determine description based on profile type
		let description;
		if (Object.keys(profile.fileMap).length === 0) {
			// Simple profiles (Claude, Codex) - specify the target file
			const targetFileName =
				profileName === 'claude' ? 'CLAUDE.md' : 'AGENTS.md';
			description = `Integration guide (${targetFileName})`;
		} else {
			// Full profiles with rules - check if they have MCP config
			const hasMcpConfig = profile.mcpConfig === true;
			if (hasMcpConfig) {
				// Special case for Roo to mention agent modes
				if (profileName === 'roo') {
					description = 'Rule profile, MCP config, and agent modes';
				} else {
					description = 'Rule profile and MCP config';
				}
			} else {
				description = 'Rule profile';
			}
		}

		return {
			profileName,
			displayName,
			description
		};
	}).sort((a, b) => a.displayName.localeCompare(b.displayName));

	const profileListText = profileDescriptions
		.map(
			({ displayName, description }) =>
				`${chalk.white('• ')}${chalk.yellow(displayName)}${chalk.white(` - ${description}`)}`
		)
		.join('\n');

	console.log(
		boxen(
			`${chalk.white.bold('Rule Profiles Setup')}\n\n${chalk.white(
				'Rule profiles help enforce best practices and conventions for Task Master.\n' +
					'Each profile provides coding guidelines tailored for specific AI coding environments.\n\n'
			)}${chalk.cyan('Available Profiles:')}\n${profileListText}`,
			{
				padding: 1,
				borderColor: 'blue',
				borderStyle: 'round',
				margin: { top: 1, bottom: 1 }
			}
		)
	);

	// Generate choices in the same order as the display text above
	const sortedChoices = profileDescriptions.map(
		({ profileName, displayName }) => ({
			name: displayName,
			value: profileName
		})
	);

	const ruleProfilesQuestion = {
		type: 'checkbox',
		name: 'ruleProfiles',
		message: 'Which rule profiles would you like to add to your project?',
		choices: sortedChoices,
		pageSize: sortedChoices.length, // Show all options without pagination
		loop: false, // Disable loop scrolling
		validate: (input) => input.length > 0 || 'You must select at least one.'
	};
	const { ruleProfiles } = await inquirer.prompt([ruleProfilesQuestion]);
	return ruleProfiles;
}

// =============================================================================
// PROFILE SUMMARY
// =============================================================================

/**
 * Generate appropriate summary message for a profile based on its type
 * @param {string} profileName - Name of the profile
 * @param {Object} addResult - Result object with success/failed counts
 * @returns {string} Formatted summary message
 */
export function generateProfileSummary(profileName, addResult) {
	const profileConfig = getRulesProfile(profileName);
	const isSimpleProfile = Object.keys(profileConfig.fileMap).length === 0;

	if (isSimpleProfile) {
		// Simple profiles like Claude and Codex only copy AGENTS.md
		const targetFileName = profileName === 'claude' ? 'CLAUDE.md' : 'AGENTS.md';
		return `Summary for ${profileName}: Integration guide copied to ${targetFileName}`;
	} else {
		return `Summary for ${profileName}: ${addResult.success} rules added, ${addResult.failed} failed.`;
	}
}

/**
 * Generate appropriate summary message for profile removal
 * @param {string} profileName - Name of the profile
 * @param {Object} removeResult - Result object from removal operation
 * @returns {string} Formatted summary message
 */
export function generateProfileRemovalSummary(profileName, removeResult) {
	const profileConfig = getRulesProfile(profileName);
	const isSimpleProfile = Object.keys(profileConfig.fileMap).length === 0;

	if (removeResult.skipped) {
		return `Summary for ${profileName}: Skipped (default or protected files)`;
	}

	if (removeResult.error && !removeResult.success) {
		return `Summary for ${profileName}: Failed to remove - ${removeResult.error}`;
	}

	if (isSimpleProfile) {
		// Simple profiles like Claude and Codex only have an integration guide
		const targetFileName = profileName === 'claude' ? 'CLAUDE.md' : 'AGENTS.md';
		return `Summary for ${profileName}: Integration guide (${targetFileName}) removed`;
	} else {
		// Full profiles have rules directories and potentially MCP configs
		const baseMessage = `Summary for ${profileName}: Rules directory removed`;
		if (removeResult.notice) {
			return `${baseMessage} (${removeResult.notice})`;
		}
		return baseMessage;
	}
}

/**
 * Categorize profiles and generate final summary statistics
 * @param {Array} addResults - Array of add result objects
 * @returns {Object} Object with categorized profiles and totals
 */
export function categorizeProfileResults(addResults) {
	const successfulProfiles = [];
	const simpleProfiles = [];
	let totalSuccess = 0;
	let totalFailed = 0;

	addResults.forEach((r) => {
		totalSuccess += r.success;
		totalFailed += r.failed;

		const profileConfig = getRulesProfile(r.profileName);
		const isSimpleProfile = Object.keys(profileConfig.fileMap).length === 0;

		if (isSimpleProfile) {
			// Simple profiles are successful if they completed without error
			simpleProfiles.push(r.profileName);
		} else if (r.success > 0) {
			// Full profiles are successful if they added rules
			successfulProfiles.push(r.profileName);
		}
	});

	return {
		successfulProfiles,
		simpleProfiles,
		allSuccessfulProfiles: [...successfulProfiles, ...simpleProfiles],
		totalSuccess,
		totalFailed
	};
}

/**
 * Categorize removal results and generate final summary statistics
 * @param {Array} removalResults - Array of removal result objects
 * @returns {Object} Object with categorized removal results
 */
export function categorizeRemovalResults(removalResults) {
	const successfulRemovals = [];
	const skippedRemovals = [];
	const failedRemovals = [];
	const removalsWithNotices = [];

	removalResults.forEach((result) => {
		if (result.success) {
			successfulRemovals.push(result.profileName);
		} else if (result.skipped) {
			skippedRemovals.push(result.profileName);
		} else if (result.error) {
			failedRemovals.push(result);
		}

		if (result.notice) {
			removalsWithNotices.push(result);
		}
	});

	return {
		successfulRemovals,
		skippedRemovals,
		failedRemovals,
		removalsWithNotices
	};
}
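A minimal sketch tying the detection and summary helpers together (the project path and result values are hypothetical; `addResults` entries use the `{ profileName, success, failed }` shape produced per profile by `convertAllRulesToProfileRules` below):

```js
import {
	getInstalledProfiles,
	wouldRemovalLeaveNoProfiles,
	generateProfileSummary,
	categorizeProfileResults
} from './src/utils/profiles.js';

const projectRoot = '/path/to/project'; // hypothetical

console.log(getInstalledProfiles(projectRoot)); // e.g. ['cursor', 'roo']
console.log(wouldRemovalLeaveNoProfiles(projectRoot, ['cursor', 'roo'])); // true if nothing would remain

const addResults = [{ profileName: 'cursor', success: 5, failed: 0 }]; // hypothetical counts
console.log(generateProfileSummary('cursor', addResults[0]));
console.log(categorizeProfileResults(addResults).totalSuccess); // 5
```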
src/utils/rule-transformer.js (new file, 504 lines)
@@ -0,0 +1,504 @@
/**
 * Rule Transformer Module
 * Handles conversion of Cursor rules to profile rules
 *
 * This module procedurally generates .{profile}/rules files from assets/rules files,
 * eliminating the need to maintain both sets of files manually.
 */
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { log } from '../../scripts/modules/utils.js';

// Import the shared MCP configuration helper
import {
	setupMCPConfiguration,
	removeTaskMasterMCPConfiguration
} from './create-mcp-config.js';

// Import profile constants (single source of truth)
import { RULE_PROFILES } from '../constants/profiles.js';

// --- Profile Imports ---
import * as profilesModule from '../profiles/index.js';

export function isValidProfile(profile) {
	return RULE_PROFILES.includes(profile);
}

/**
 * Get rule profile by name
 * @param {string} name - Profile name
 * @returns {Object|null} Profile object or null if not found
 */
export function getRulesProfile(name) {
	if (!isValidProfile(name)) {
		return null;
	}

	// Get the profile from the imported profiles module
	const profileKey = `${name}Profile`;
	const profile = profilesModule[profileKey];

	if (!profile) {
		throw new Error(
			`Profile not found: static import missing for '${name}'. Valid profiles: ${RULE_PROFILES.join(', ')}`
		);
	}

	return profile;
}

/**
 * Replace basic Cursor terms with profile equivalents
 */
function replaceBasicTerms(content, conversionConfig) {
	let result = content;

	// Apply profile term replacements
	conversionConfig.profileTerms.forEach((pattern) => {
		if (typeof pattern.to === 'function') {
			result = result.replace(pattern.from, pattern.to);
		} else {
			result = result.replace(pattern.from, pattern.to);
		}
	});

	// Apply file extension replacements
	conversionConfig.fileExtensions.forEach((pattern) => {
		result = result.replace(pattern.from, pattern.to);
	});

	return result;
}

/**
 * Replace Cursor tool references with profile tool equivalents
 */
function replaceToolReferences(content, conversionConfig) {
	let result = content;

	// Basic pattern for direct tool name replacements
	const toolNames = conversionConfig.toolNames;
	const toolReferencePattern = new RegExp(
		`\\b(${Object.keys(toolNames).join('|')})\\b`,
		'g'
	);

	// Apply direct tool name replacements
	result = result.replace(toolReferencePattern, (match, toolName) => {
		return toolNames[toolName] || toolName;
	});

	// Apply contextual tool replacements
	conversionConfig.toolContexts.forEach((pattern) => {
		result = result.replace(pattern.from, pattern.to);
	});

	// Apply tool group replacements
	conversionConfig.toolGroups.forEach((pattern) => {
		result = result.replace(pattern.from, pattern.to);
	});

	return result;
}

/**
 * Update documentation URLs to point to profile documentation
 */
function updateDocReferences(content, conversionConfig) {
	let result = content;

	// Apply documentation URL replacements
	conversionConfig.docUrls.forEach((pattern) => {
		if (typeof pattern.to === 'function') {
			result = result.replace(pattern.from, pattern.to);
		} else {
			result = result.replace(pattern.from, pattern.to);
		}
	});

	return result;
}

/**
 * Update file references in markdown links
 */
function updateFileReferences(content, conversionConfig) {
	const { pathPattern, replacement } = conversionConfig.fileReferences;
	return content.replace(pathPattern, replacement);
}

/**
 * Transform rule content to profile-specific rules
 * @param {string} content - The content to transform
 * @param {Object} conversionConfig - The conversion configuration
 * @param {Object} globalReplacements - Global text replacements
 * @returns {string} - The transformed content
 */
function transformRuleContent(content, conversionConfig, globalReplacements) {
	let result = content;

	// Apply all transformations in appropriate order
	result = updateFileReferences(result, conversionConfig);
	result = replaceBasicTerms(result, conversionConfig);
	result = replaceToolReferences(result, conversionConfig);
	result = updateDocReferences(result, conversionConfig);

	// Apply any global/catch-all replacements from the profile
	// Super aggressive failsafe pass to catch any variations we might have missed
	// This ensures critical transformations are applied even in contexts we didn't anticipate
	globalReplacements.forEach((pattern) => {
		if (typeof pattern.to === 'function') {
			result = result.replace(pattern.from, pattern.to);
		} else {
			result = result.replace(pattern.from, pattern.to);
		}
	});

	return result;
}

/**
 * Convert a Cursor rule file to a profile-specific rule file
 * @param {string} sourcePath - Path to the source .mdc file
 * @param {string} targetPath - Path to the target file
 * @param {Object} profile - The profile configuration
 * @returns {boolean} - Success status
 */
export function convertRuleToProfileRule(sourcePath, targetPath, profile) {
	const { conversionConfig, globalReplacements } = profile;
	try {
		// Read source content
		const content = fs.readFileSync(sourcePath, 'utf8');

		// Transform content
		const transformedContent = transformRuleContent(
			content,
			conversionConfig,
			globalReplacements
		);

		// Ensure target directory exists
		const targetDir = path.dirname(targetPath);
		if (!fs.existsSync(targetDir)) {
			fs.mkdirSync(targetDir, { recursive: true });
		}

		// Write transformed content
		fs.writeFileSync(targetPath, transformedContent);

		return true;
	} catch (error) {
		console.error(`Error converting rule file: ${error.message}`);
		return false;
	}
}

/**
 * Convert all Cursor rules to profile rules for a specific profile
 */
export function convertAllRulesToProfileRules(projectDir, profile) {
	// Handle simple profiles (Claude, Codex) that just copy files to root
	const isSimpleProfile = Object.keys(profile.fileMap).length === 0;
	if (isSimpleProfile) {
		// For simple profiles, just call their post-processing hook and return
		const __filename = fileURLToPath(import.meta.url);
		const __dirname = path.dirname(__filename);
		const assetsDir = path.join(__dirname, '..', '..', 'assets');

		if (typeof profile.onPostConvertRulesProfile === 'function') {
			profile.onPostConvertRulesProfile(projectDir, assetsDir);
		}
		return { success: 1, failed: 0 };
	}

	const __filename = fileURLToPath(import.meta.url);
	const __dirname = path.dirname(__filename);
	const sourceDir = path.join(__dirname, '..', '..', 'assets', 'rules');
	const targetDir = path.join(projectDir, profile.rulesDir);

	// Ensure target directory exists
	if (!fs.existsSync(targetDir)) {
		fs.mkdirSync(targetDir, { recursive: true });
	}

	// Setup MCP configuration if enabled
	if (profile.mcpConfig !== false) {
		setupMCPConfiguration(projectDir, profile.mcpConfigPath);
	}

	let success = 0;
	let failed = 0;

	// Use fileMap to determine which files to copy
	const sourceFiles = Object.keys(profile.fileMap);

	for (const sourceFile of sourceFiles) {
		try {
			const sourcePath = path.join(sourceDir, sourceFile);

			// Check if source file exists
			if (!fs.existsSync(sourcePath)) {
				log(
					'warn',
					`[Rule Transformer] Source file not found: ${sourceFile}, skipping`
				);
				continue;
			}

			const targetFilename = profile.fileMap[sourceFile];
			const targetPath = path.join(targetDir, targetFilename);

			// Ensure target subdirectory exists (for rules like taskmaster/dev_workflow.md)
			const targetFileDir = path.dirname(targetPath);
			if (!fs.existsSync(targetFileDir)) {
				fs.mkdirSync(targetFileDir, { recursive: true });
			}

			// Read source content
			let content = fs.readFileSync(sourcePath, 'utf8');

			// Apply transformations
			content = transformRuleContent(
				content,
				profile.conversionConfig,
				profile.globalReplacements
			);

			// Write to target
			fs.writeFileSync(targetPath, content, 'utf8');
			success++;

			log(
				'debug',
				`[Rule Transformer] Converted ${sourceFile} -> ${targetFilename} for ${profile.profileName}`
			);
		} catch (error) {
			failed++;
			log(
				'error',
				`[Rule Transformer] Failed to convert ${sourceFile} for ${profile.profileName}: ${error.message}`
			);
		}
	}

	// Call post-processing hook if defined (e.g., for Roo's rules-*mode* folders)
	if (typeof profile.onPostConvertRulesProfile === 'function') {
		const assetsDir = path.join(__dirname, '..', '..', 'assets');
		profile.onPostConvertRulesProfile(projectDir, assetsDir);
	}

	return { success, failed };
}

/**
 * Remove only Task Master specific files from a profile, leaving other existing rules intact
 * @param {string} projectDir - Target project directory
 * @param {Object} profile - Profile configuration
 * @returns {Object} Result object
 */
export function removeProfileRules(projectDir, profile) {
	const targetDir = path.join(projectDir, profile.rulesDir);
	const profileDir = path.join(projectDir, profile.profileDir);

	const result = {
		profileName: profile.profileName,
		success: false,
		skipped: false,
		error: null,
		filesRemoved: [],
		mcpResult: null,
		profileDirRemoved: false,
		notice: null
	};

	try {
		// Handle simple profiles (Claude, Codex) that just copy files to root
		const isSimpleProfile = Object.keys(profile.fileMap).length === 0;

		if (isSimpleProfile) {
			// For simple profiles, just call their removal hook and return
			if (typeof profile.onRemoveRulesProfile === 'function') {
				profile.onRemoveRulesProfile(projectDir);
			}
			result.success = true;
			log(
				'debug',
				`[Rule Transformer] Successfully removed ${profile.profileName} files from ${projectDir}`
			);
			return result;
		}

		// Check if profile directory exists at all (for full profiles)
		if (!fs.existsSync(profileDir)) {
			result.success = true;
			result.skipped = true;
			log(
				'debug',
				`[Rule Transformer] Profile directory does not exist: ${profileDir}`
			);
			return result;
		}

		// 1. Remove only Task Master specific files from the rules directory
		let hasOtherRulesFiles = false;
		if (fs.existsSync(targetDir)) {
			const taskmasterFiles = Object.values(profile.fileMap);
			const removedFiles = [];

			// Helper function to recursively check and remove Task Master files
			function processDirectory(dirPath, relativePath = '') {
				const items = fs.readdirSync(dirPath);

				for (const item of items) {
					const itemPath = path.join(dirPath, item);
					const relativeItemPath = relativePath
						? path.join(relativePath, item)
						: item;
					const stat = fs.statSync(itemPath);

					if (stat.isDirectory()) {
						// Recursively process subdirectory
						processDirectory(itemPath, relativeItemPath);

						// Check if directory is empty after processing and remove if so
						try {
							const remainingItems = fs.readdirSync(itemPath);
							if (remainingItems.length === 0) {
								fs.rmSync(itemPath, { recursive: true, force: true });
								log(
									'debug',
									`[Rule Transformer] Removed empty directory: ${relativeItemPath}`
								);
							}
						} catch (error) {
							// Directory might have been removed already, ignore
						}
					} else if (stat.isFile()) {
						if (taskmasterFiles.includes(relativeItemPath)) {
							// This is a Task Master file, remove it
							fs.rmSync(itemPath, { force: true });
							removedFiles.push(relativeItemPath);
							log(
								'debug',
								`[Rule Transformer] Removed Task Master file: ${relativeItemPath}`
							);
						} else {
							// This is not a Task Master file, leave it
							hasOtherRulesFiles = true;
							log(
								'debug',
								`[Rule Transformer] Preserved existing file: ${relativeItemPath}`
							);
						}
					}
				}
			}

			// Process the rules directory recursively
			processDirectory(targetDir);

			result.filesRemoved = removedFiles;

			// Only remove the rules directory if it's empty after removing Task Master files
			const remainingFiles = fs.readdirSync(targetDir);
			if (remainingFiles.length === 0) {
				fs.rmSync(targetDir, { recursive: true, force: true });
				log(
					'debug',
					`[Rule Transformer] Removed empty rules directory: ${targetDir}`
				);
			} else if (hasOtherRulesFiles) {
				result.notice = `Preserved ${remainingFiles.length} existing rule files in ${profile.rulesDir}`;
				log('info', `[Rule Transformer] ${result.notice}`);
			}
		}

		// 2. Handle MCP configuration - only remove Task Master, preserve other servers
		if (profile.mcpConfig !== false) {
			result.mcpResult = removeTaskMasterMCPConfiguration(
				projectDir,
				profile.mcpConfigPath
			);
			if (result.mcpResult.hasOtherServers) {
				if (!result.notice) {
					result.notice = 'Preserved other MCP server configurations';
				} else {
					result.notice += '; preserved other MCP server configurations';
				}
			}
		}

		// 3. Call removal hook if defined (e.g., Roo's custom cleanup)
		if (typeof profile.onRemoveRulesProfile === 'function') {
			profile.onRemoveRulesProfile(projectDir);
		}

		// 4. Only remove profile directory if:
		// - It's completely empty after all operations, AND
		// - All rules removed were Task Master rules (no existing rules preserved), AND
		// - MCP config was completely deleted (not just Task Master removed), AND
		// - No other files or folders exist in the profile directory
		if (fs.existsSync(profileDir)) {
			const remaining = fs.readdirSync(profileDir);
			const allRulesWereTaskMaster = !hasOtherRulesFiles;
			const mcpConfigCompletelyDeleted = result.mcpResult?.deleted === true;

			// Check if there are any other files or folders beyond what we expect
			const hasOtherFilesOrFolders = remaining.length > 0;

			if (
				remaining.length === 0 &&
				allRulesWereTaskMaster &&
				(profile.mcpConfig === false || mcpConfigCompletelyDeleted) &&
				!hasOtherFilesOrFolders
			) {
				fs.rmSync(profileDir, { recursive: true, force: true });
				result.profileDirRemoved = true;
				log(
					'debug',
					`[Rule Transformer] Removed profile directory: ${profileDir} (completely empty, all rules were Task Master rules, and MCP config was completely removed)`
				);
			} else {
				// Determine what was preserved and why
				const preservationReasons = [];
				if (hasOtherFilesOrFolders) {
					preservationReasons.push(
						`${remaining.length} existing files/folders`
					);
				}
				if (hasOtherRulesFiles) {
					preservationReasons.push('existing rule files');
				}
				if (result.mcpResult?.hasOtherServers) {
					preservationReasons.push('other MCP server configurations');
				}

				const preservationMessage = `Preserved ${preservationReasons.join(', ')} in ${profile.profileDir}`;

				if (!result.notice) {
					result.notice = preservationMessage;
				} else if (!result.notice.includes('Preserved')) {
					result.notice += `; ${preservationMessage.toLowerCase()}`;
				}

				log('info', `[Rule Transformer] ${preservationMessage}`);
			}
		}

		result.success = true;
		log(
			'debug',
			`[Rule Transformer] Successfully removed ${profile.profileName} Task Master files from ${projectDir}`
		);
	} catch (error) {
		result.error = error.message;
		log(
			'error',
			`[Rule Transformer] Failed to remove ${profile.profileName} rules: ${error.message}`
		);
	}

	return result;
}
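A minimal end-to-end sketch of the transformer API shown above (the project path is hypothetical): resolve a profile, install its rules and MCP config, then remove only the Task Master pieces again.

```js
import {
	getRulesProfile,
	convertAllRulesToProfileRules,
	removeProfileRules
} from './src/utils/rule-transformer.js';

const projectDir = '/path/to/project'; // hypothetical
const profile = getRulesProfile('cursor');

const addResult = convertAllRulesToProfileRules(projectDir, profile);
console.log(`${addResult.success} rules added, ${addResult.failed} failed`);

const removeResult = removeProfileRules(projectDir, profile);
if (removeResult.notice) {
	console.log(removeResult.notice); // e.g. which non-Task-Master files were preserved
}
```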