Release 0.18.0 (#840)
* Update SWE scores (#657) * docs: Auto-update and format models.md * feat: Flexible brand rules management (#460) * chore(docs): update docs and rules related to model management. * feat(ai): Add OpenRouter AI provider support Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform. - Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject). - Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic. - Verified config-manager.js handles OpenRouter API key checks correctly. - Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY. - Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint. * feat(cli): Add --status/-s filter flag to show command and get-task MCP tool Implements the ability to filter subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag. This is also available in the MCP context. - Modified `commands.js` to add the `--status` option to the `show` command definition. - Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering. - Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation. - Updated MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter. - Added changeset entry. * fix(tasks): Improve next task logic to be subtask-aware * fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs - Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs. - Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs. - Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly. - Fixes a bug in the CLI confirmation prompt where task/subtask titles were not being displayed correctly. - Updates the `remove_task` MCP tool description to reflect the new multi-ID capability. This addresses the previously known issue where only the first ID in a comma-separated list was processed. Closes #140 * Update README.md (#342) * Update Discord badge (#337) * refactor(init): Improve robustness and dependencies; Update template deps for AI SDKs; Silence npm install in MCP; Improve conditional model setup logic; Refactor init.js flags; Tweak Getting Started text; Fix MCP server launch command; Update default model in config template * Refactor: Improve MCP logging, update E2E & tests Refactors MCP server logging and updates testing infrastructure. - MCP Server: - Replaced manual logger wrappers with centralized `createLogWrapper` utility. - Updated direct function calls to use `{ session, mcpLog }` context. - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools. - Adjusted MCP tool import paths and parameter descriptions. - Documentation: - Modified `docs/configuration.md`. - Modified `docs/tutorial.md`. - Testing: - E2E Script (`run_e2e.sh`): - Removed `set -e`. - Added LLM analysis function (`analyze_log_with_llm`) & integration. - Adjusted test run directory creation timing. - Added debug echo statements. - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`. - Modified Fixtures: Updated `scripts/task-complexity-report.json`. - Dev Scripts: - Modified `scripts/dev.js`. * chore(tests): Passes tests for merge candidate - Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup' - E2E script has been perfected and works as designed provided there are all provider API keys .env in the root - Fixes the entire test suite to make sure it passes with the new architecture. - Fixes dependency command to properly show there is a validation failure if there is one. - Refactored config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json - Fixed rule-transformer.test.js assertion syntax and transformation logic adjusting replacement for search which was too broad. - Skip unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to SIGABRT crash. These tests trigger a native crash (SIGABRT), likely stemming from a conflict between internal chalk usage within the functions and Jest's test environment, possibly related to ESM module handling. * chore(wtf): removes chai. not sure how that even made it in here. also removes duplicate test in scripts/. * fix: ensure API key detection properly reads .env in MCP context Problem: - Task Master model configuration wasn't properly checking for API keys in the project's .env file when running through MCP - The isApiKeySet function was only checking session.env and process.env but not inspecting the .env file directly - This caused incorrect API key status reporting in MCP tools even when keys were properly set in .env Solution: - Modified resolveEnvVariable function in utils.js to properly read from .env file at projectRoot - Updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable - Enhanced the key detection logic to have consistent behavior between CLI and MCP contexts - Maintains the correct precedence: session.env → .env file → process.env Testing: - Verified working correctly with both MCP and CLI tools - API keys properly detected in .env file in both contexts - Deleted .cursor/mcp.json to confirm introspection of .env as fallback works * fix(update): pass projectRoot through update command flow Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context. * fix(analyze-complexity): pass projectRoot through analyze-complexity flow Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity. * fix(add-task): pass projectRoot and fix logging/refs Modified add-task core, direct function, and tool to pass projectRoot for .env API key fallback. Fixed logFn reference error and removed deprecated reportProgress call in core addTask function. Verified working. * fix(parse-prd): pass projectRoot and fix schema/logging Modified parse-prd core, direct function, and tool to pass projectRoot for .env API key fallback. Corrected Zod schema used in generateObjectService call. Fixed logFn reference error in core parsePRD. Updated unit test mock for utils.js. * fix(update-task): pass projectRoot and adjust parsing Modified update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in core function to prioritize `{...}` extraction, resolving parsing errors. Fixed ReferenceError by correctly destructuring projectRoot. * fix(update-subtask): pass projectRoot and allow updating done subtasks Modified update-subtask-by-id core, direct function, and tool to pass projectRoot for .env API key fallback. Removed check preventing appending details to completed subtasks. * fix(mcp, expand): pass projectRoot through expand/expand-all flows Problem: expand_task & expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution. Also fixed a ReferenceError: wasSilent is not defined in expandTaskDirect. Solution: Modified core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed wasSilent scope in expandTaskDirect. Verification: Tested expand_task successfully in MCP using .env keys. Reviewed expand_all flow for correct projectRoot propagation. * chore: prettier * fix(expand-all): add projectRoot to expandAllTasksDirect invokation. * fix(update-tasks): Improve AI response parsing for 'update' command Refactors the JSON array parsing logic within in . The previous logic primarily relied on extracting content from markdown code blocks (json or javascript), which proved brittle when the AI response included comments or non-JSON text within the block, leading to parsing errors for the command. This change modifies the parsing strategy to first attempt extracting content directly between the outermost '[' and ']' brackets. This is more robust as it targets the expected array structure directly. If bracket extraction fails, it falls back to looking for a strict json code block, then prefix stripping, before attempting a raw parse. This approach aligns with the successful parsing strategy used for single-object responses in and resolves the parsing errors previously observed with the command. * refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization Added HOF to mcp tools utils to normalize projectRoot from args/session. Refactored get-task tool to use HOF. Updated relevant documentation. * refactor(mcp): apply withNormalizedProjectRoot HOF to update tool Problem: The MCP tool previously handled project root acquisition and path resolution within its method, leading to potential inconsistencies and repetition. Solution: Refactored the tool () to utilize the new Higher-Order Function (HOF) from . Specific Changes: - Imported HOF. - Updated the Zod schema for the parameter to be optional, as the HOF handles deriving it from the session if not provided. - Wrapped the entire function body with the HOF. - Removed the manual call to from within the function body. - Destructured the from the object received by the wrapped function, ensuring it's the normalized path provided by the HOF. - Used the normalized variable when calling and when passing arguments to . This change standardizes project root handling for the tool, simplifies its method, and ensures consistent path normalization. This serves as the pattern for refactoring other MCP tools. * fix: apply to all tools withNormalizedProjectRoot to fix projectRoot issues for linux and windows * fix: add rest of tools that need wrapper * chore: cleanup tools to stop using rootFolder and remove unused imports * chore: more cleanup * refactor: Improve update-subtask, consolidate utils, update config This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration. **Major Changes:** 1. **Refactor updateSubtaskById:** - Switched from generateTextService to generateObjectService for structured AI responses, using a Zod schema (subtaskSchema) for validation. - Revised prompts to have the AI generate relevant content based on user request and context (parent/sibling tasks), while explicitly preventing AI from handling timestamp/tag formatting. - Implemented **local timestamp generation (new Date().toISOString()) and formatting** (using <info added on ...> tags) within the function *after* receiving the AI response. This ensures reliable and correctly formatted details are appended. - Corrected logic to append only the locally formatted, AI-generated content block to the existing subtask.details. 2. **Consolidate MCP Utilities:** - Moved/consolidated the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js. - Updated MCP tools (like update-subtask.js) to import withNormalizedProjectRoot from the new location. 3. **Refactor Project Initialization:** - Deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js file. - Updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js). **Other Changes:** - Updated .taskmasterconfig fallback model to claude-3-7-sonnet-20250219. - Clarified model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js). * fix: displayBanner logging when silentMode is active (#385) * fix: improve error handling, test options, and model configuration - Enhance error validation in parse-prd.js and update-tasks.js - Fix bug where mcpLog was incorrectly passed as logWrapper - Improve error messages and response formatting - Add --skip-verification flag to E2E tests - Update MCP server config that ships with init to match new API key structure - Fix task force/append handling in parse-prd command - Increase column width in update-tasks display * chore: fixes parse prd to show loading indicator in cli. * fix(parse-prd): suggested fix for mcpLog was incorrect. reverting to my previously working code. * chore(init): No longer ships readme with task-master init (commented out for now). No longer looking for task-master-mcp, instead checked for task-master-ai - this should prevent the init sequence from needlessly adding another mcp server with task-master-mcp to the mpc.json which a ton of people probably ran into. * chore: restores 3.7 sonnet as the main role. * fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files. * chore: removes tasks json backup that was temporarily created. * fix(next): adjusts mcp tool response to correctly return the next task/subtask. Also adds nextSteps to the next task response. * chore: prettier * chore: readme typos * fix(config): restores sonnet 3.7 as default main role. * Version Packages * hotfix: move production package to "dependencies" (#399) * Version Packages * Fix: issues with 0.13.0 not working (#402) * Exit prerelease mode and version packages * hotfix: move production package to "dependencies" * Enter prerelease mode and version packages * Enter prerelease mode and version packages * chore: cleanup * chore: improve pre.json and add pre-release workflow * chore: fix package.json * chore: cleanup * chore: improve pre-release workflow * chore: allow github actions to commit * extract fileMap and conversionConfig into brand profile * extract into brand profile * add windsurf profile * add remove brand rules function * fix regex * add rules command to add/remove rules for a specific brand * fix post processing for roo * allow multiples * add cursor profile * update test for new structure * move rules to assets * use assets/rules for rules files * use standardized setupMCP function * fix formatting * fix formatting * add logging * fix escapes * default to cursor * allow init with certain rulesets; no more .windsurfrules * update docs * update log msg * fix formatting * keep mdc extension for cursor * don't rewrite .mdc to .md inside the files * fix roo init (add modes) * fix cursor init (don't use roo transformation by default) * use more generic function names * update docs * fix formatting * update function names * add changeset * add rules to mcp initialize project * register tool with mcp server * update docs * add integration test * fix cursor initialization * rule selection * fix formatting * fix MCP - remove yes flag * add import * update roo tests * add/update tests * remove test * add rules command test * update MCP responses, centralize rules profiles & helpers * fix logging and MCP response messages * fix formatting * incorrect test * fix tests * update fileMap * fix file extension transformations * fix formatting * add rules command test * test already covered * fix formatting * move renaming logic into profiles * make sure dir is deleted (DS_Store) * add confirmation for rules removal * add force flag for rules remove * use force flag for test * remove yes parameter * fix formatting * import brand profiles from rule-transformer.js * update comment * add interactive rules setup * optimize * only copy rules specifically listed in fileMap * update comment * add cline profile * add brandDir to remove ambiguity and support Cline * specify whether to create mcp config and filename * add mcpConfigName value for parh * fix formatting * remove rules just for this repository - only include rules to be distributed * update error message * update "brand rules" to "rules" * update to minor * remove comment * remove comments * move to /src/utils * optimize imports * move rules-setup.js to /src/utils * move rule-transformer.js to /src/utils * move confirmation to /src/ui/confirm.js * default to all rules * use profile js for mcp config settings * only run rules interactive setup if not provided via command line * update comments * initialize with all brands if nothing specified * update var name * clean up * enumerate brands for brand rules * update instructions * add test to check for brand profiles * fix quotes * update semantics and terminology from 'brand rules' to 'rules profiles' * fix formatting * fix formatting * update function name and remove copying of cursor rules, now handled by rules transformer * update comment * rename to mcp-config-setup.js * use enums for rules actions * add aggregate reporting for rules add command * add missing log message * use simpler path * use base profile with modifications for each brand * use displayName and don't select any defaults in setup * add confirmation if removing ALL rules profiles, and add --force flag on rules remove * Use profile-detection instead of rules-detection * add newline at end of mcp config * add proper formatting for mcp.json * update rules * update rules * update rules * add checks for other rules and other profile folder items before removing * update confirmation for rules remove * update docs * update changeset * fix for filepath at bottom of rule * Update cline profile and add test; adjust other rules tests * update changeset * update changeset * clarify init for all profiles if not specified * update rule text * revert text * use "rule profiles" instead of "rules profiles" * use standard tool mappings for windsurf * add Trae support * update changeset * update wording * update to 'rule profile' * remove unneeded exports to optimize loc * combine to /src/utils/profiles.js; add codex and claude code profiles * rename function and add boxen * add claude and codex integration tests * organize tests into profiles folder * mock fs for transformer tests * update UI * add cline and trae integration tests * update test * update function name * update formatting * Update change set with new profiles * move profile integration tests to subdirectory * properly create temp directories in /tmp folder * fix formatting * use taskmaster subfolder for the 2 TM rules * update wording * ensure subdirectory exists * update rules from next * update from next * update taskmaster rule * add details on new rules command and init * fix mcp init * fix MCP path to assets * remove duplication * remove duplication * MCP server path fixes for rules command * fix for CLI roo rules add/remove * update tests * fix formatting * fix pattern for interactive rule profiles setup * restore comments * restore comments * restore comments * remove unused import, fix quotes * add missing integration tests * add VS Code profile and tests * update docs and rules to include vscode profile * add rules subdirectory support per-profile * move profiles to /src * fix formatting * rename to remove ambiguity * use --setup for rules interactive setup * Fix Cursor deeplink installation with copy-paste instructions (#723) * change roo boomerang to orchestrator; update tests that don't use modes * fix newline * chore: cleanup --------- Co-authored-by: Eyal Toledano <eyal@microangel.so> Co-authored-by: Yuval <yuvalbl@users.noreply.github.com> Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com> Co-authored-by: Eyal Toledano <eutait@gmail.com> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> * fix: providers config for azure, bedrock, and vertex (#822) * fix: providers config for azure, bedrock, and vertex * chore: improve changelog * chore: fix CI * fix: switch to ESM export to avoid mixed format (#633) * fix: switch to ESM export to avoid mixed format The CLI entrypoint was using `module.exports` alongside ESM `import` statements, resulting in an invalid mixed module format. Replaced the CommonJS export with a proper ESM `export` to maintain consistency and prevent module resolution issues. * chore: add changeset --------- Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> * fix: Fix external provider support (#726) * fix(bedrock): improve AWS credential handling and add model definitions (#826) * fix(bedrock): improve AWS credential handling and add model definitions - Change error to warning when AWS credentials are missing in environment - Allow fallback to system configuration (aws config files or instance profiles) - Remove hardcoded region and profile parameters in Bedrock client - Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock - Update config manager to properly handle Bedrock provider * chore: cleanup and format and small refactor --------- Co-authored-by: Ray Krueger <raykrueger@gmail.com> * docs: Auto-update and format models.md * Version Packages * chore: fix package.json * Fix/expand command tag corruption (#827) * fix(expand): Fix tag corruption in expand command - Fix tag parameter passing through MCP expand-task flow - Add tag parameter to direct function and tool registration - Fix contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs - Add comprehensive test coverage for tag handling in expand-task - Ensures tagged task structure is preserved during expansion - Prevents corruption when tag is undefined. Fixes expand command causing tag corruption in tagged task lists. All existing tests pass and new test coverage added. * test(e2e): Add comprehensive tag-aware expand testing to verify tag corruption fix - Add new test section for feature-expand tag creation and testing - Verify tag preservation during expand, force expand, and expand --all operations - Test that master tag remains intact and feature-expand tag receives subtasks correctly - Fix file path references to use correct .taskmaster/tasks/tasks.json location - Fix config file check to use .taskmaster/config.json instead of .taskmasterconfig - All tag corruption verification tests pass successfully in E2E test * fix(changeset): Update E2E test improvements changeset to properly reflect tag corruption fix verification * chore(changeset): combine duplicate changesets for expand tag corruption fix Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate the expand command fix and its comprehensive E2E testing enhancements into a single changeset entry. * Delete .changeset/eighty-breads-wonder.md * Version Packages * chore: fix package.json * fix(expand): Enhance context handling in expandAllTasks function - Added `tag` to context destructuring for better context management. - Updated `readJSON` call to include `contextTag` for improved data integrity. - Ensured the correct tag is passed during task expansion to prevent tag corruption. --------- Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de> Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> * Add pyproject.toml as project root marker (#804) * feat: Add pyproject.toml as project root marker - Added 'pyproject.toml' to the project markers array in findProjectRoot() - Enables Task Master to recognize Python projects using pyproject.toml - Improves project root detection for modern Python development workflows - Maintains compatibility with existing Node.js and Git-based detection * chore: add changeset --------- Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> * feat: add Claude Code provider support Implements Claude Code as a new AI provider that uses the Claude Code CLI without requiring API keys. This enables users to leverage Claude models through their local Claude Code installation. Key changes: - Add complete AI SDK v1 implementation for Claude Code provider - Custom SDK with streaming/non-streaming support - Session management for conversation continuity - JSON extraction for object generation mode - Support for advanced settings (maxTurns, allowedTools, etc.) - Integrate Claude Code into Task Master's provider system - Update ai-services-unified.js to handle keyless authentication - Add provider to supported-models.json with opus/sonnet models - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000) - Fix maxTokens configuration issue - Add max_tokens property to getAvailableModels() output - Update setModel() to properly handle claude-code models - Create update-config-tokens.js utility for init process - Add comprehensive documentation - User guide with configuration examples - Advanced settings explanation and future integration options The implementation maintains full backward compatibility with existing providers while adding seamless Claude Code support to all Task Master commands. * fix(docs): correct invalid commands in claude-code usage examples - Remove non-existent 'do', 'estimate', and 'analyze' commands - Replace with actual Task Master commands: next, show, set-status - Use correct syntax for parse-prd and analyze-complexity * feat: make @anthropic-ai/claude-code an optional dependency This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality. Changes: - Added @anthropic-ai/claude-code to optionalDependencies in package.json - Implemented lazy loading in language-model.js to only import the SDK when actually used - Updated documentation to explain the optional installation requirement - Applied formatting fixes to ensure code consistency Benefits: - Users without Claude Code subscriptions don't need to install the dependency - Reduces package size for users who don't use Claude Code - Prevents installation failures if the package is unavailable - Provides clear error messages when the package is needed but not installed The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present. * test: add comprehensive tests for ClaudeCodeProvider Addresses code review feedback about missing automated tests for the ClaudeCodeProvider. ## Changes - Added unit tests for ClaudeCodeProvider class covering constructor, validateAuth, and getClient methods - Added unit tests for ClaudeCodeLanguageModel testing lazy loading behavior and error handling - Added integration tests verifying optional dependency behavior when @anthropic-ai/claude-code is not installed ## Test Coverage 1. **Unit Tests**: - ClaudeCodeProvider: Basic functionality, no API key requirement, client creation - ClaudeCodeLanguageModel: Model initialization, lazy loading, error messages, warning generation 2. **Integration Tests**: - Optional dependency behavior when package is not installed - Clear error messages for users about missing package - Provider instantiation works but usage fails gracefully All tests pass and provide comprehensive coverage for the claude-code provider implementation. * revert: remove maxTokens update functionality from init This functionality was out of scope for the Claude Code provider PR. The automatic updating of maxTokens values in config.json during initialization is a general improvement that should be in a separate PR. Additionally, Claude Code ignores maxTokens and temperature parameters anyway, making this change irrelevant for the Claude Code integration. Removed: - scripts/modules/update-config-tokens.js - Import and usage in scripts/init.js * docs: add Claude Code support information to README - Added Claude Code to the list of supported providers in Requirements section - Noted that Claude Code requires no API key but needs Claude Code CLI - Added example of configuring claude-code/sonnet model - Created dedicated Claude Code Support section with key information - Added link to detailed Claude Code setup documentation This ensures users are aware of the Claude Code option as a no-API-key alternative for using Claude models. * style: apply biome formatting to test files * fix(models): add missing --claude-code flag to models command The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via CLI. While the backend already supported claude-code as a provider hint, there was no command-line flag to trigger it. Changes: - Added --claude-code option to models command alongside existing provider flags - Updated provider flags validation to include claudeCode option - Added claude-code to providerHint logic for all three model roles (main, research, fallback) - Updated error message to include --claude-code in list of mutually exclusive flags - Added example usage in help text This allows users to properly set Claude Code models using commands like: task-master models --set-main sonnet --claude-code task-master models --set-main opus --claude-code Without this flag, users would get "Model ID not found" errors when trying to set claude-code models, as the system couldn't determine the correct provider for generic model names like "sonnet" or "opus". * chore: add changeset for Claude Code provider feature * docs: Auto-update and format models.md * readme: add troubleshooting note for MCP tools not working * Feature/compatibleapisupport (#830) * add compatible platform api support * Adjust the code according to the suggestions * Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English. * feat: Add support for compatible API endpoints via baseURL * chore: Add changeset for compatible API support * chore: cleanup * chore: improve changeset * fix: package-lock.json * fix: package-lock.json --------- Co-authored-by: He-Xun <1226807142@qq.com> * Rename Roo Code "Boomerang" role to "Orchestrator" (#831) * feat: Enhanced project initialization with Git worktree detection (#743) * Fix Cursor deeplink installation with copy-paste instructions (#723) * detect git worktree * add changeset * add aliases and git flags * add changeset * rename and update test * add store tasks in git functionality * update changeset * fix newline * remove unused import * update command wording * update command option text * fix: update task by id (#834) * store tasks in git by default (#835) * Call rules interactive setup during init (#833) * chore: rc version bump * feat: Claude Code slash commands for Task Master (#774) * Fix Cursor deeplink installation with copy-paste instructions (#723) * fix: expand-task (#755) * docs: Update o3 model price (#751) * docs: Auto-update and format models.md * docs: Auto-update and format models.md * feat: Add Claude Code task master commands Adds Task Master slash commands for Claude Code under /project:tm/ namespace --------- Co-authored-by: Joe Danziger <joe@ticc.net> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com> * feat: make more compatible with "o" family models (#839) * docs: Auto-update and format models.md * docs: Add comprehensive Azure OpenAI configuration documentation (#837) * docs: Add comprehensive Azure OpenAI configuration documentation - Add detailed Azure OpenAI configuration section with prerequisites, authentication, and setup options - Include both global and per-model baseURL configuration examples - Add comprehensive troubleshooting guide for common Azure OpenAI issues - Update environment variables section with Azure OpenAI examples - Add Azure OpenAI models to all model tables (Main, Research, Fallback) - Include prominent Azure configuration example in main documentation - Fix azureBaseURL format to use correct Azure OpenAI endpoint structure Addresses common Azure OpenAI setup challenges and provides clear guidance for new users. * refactor: Move Azure models from docs/models.md to scripts/modules/supported-models.json - Remove Azure model entries from documentation tables - Add Azure provider section to supported-models.json with gpt-4o, gpt-4o-mini, and gpt-4-1 - Maintain consistency with existing model configuration structure * docs: Auto-update and format models.md * Version Packages * chore: format fix --------- Co-authored-by: Riccardo (Ricky) Esclapon <32306488+ries9112@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: Joe Danziger <joe@ticc.net> Co-authored-by: Eyal Toledano <eyal@microangel.so> Co-authored-by: Yuval <yuvalbl@users.noreply.github.com> Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com> Co-authored-by: Eyal Toledano <eutait@gmail.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Nathan Marley <nathan@glowberrylabs.com> Co-authored-by: Ray Krueger <raykrueger@gmail.com> Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de> Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com> Co-authored-by: ejones40 <ethan.jones@fortyau.com> Co-authored-by: Ben Vargas <ben@vargas.com> Co-authored-by: V4G4X <34249137+V4G4X@users.noreply.github.com> Co-authored-by: He-Xun <1226807142@qq.com> Co-authored-by: neno <github@meaning.systems> Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com> Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com> Co-authored-by: Jitesh Thakur <56656484+Jitha-afk@users.noreply.github.com>
This commit is contained in:
@@ -333,8 +333,8 @@ log_step() {
|
||||
|
||||
log_step "Initializing Task Master project (non-interactive)"
|
||||
task-master init -y --name="E2E Test $TIMESTAMP" --description="Automated E2E test run"
|
||||
if [ ! -f ".taskmasterconfig" ]; then
|
||||
log_error "Initialization failed: .taskmasterconfig not found."
|
||||
if [ ! -f ".taskmaster/config.json" ]; then
|
||||
log_error "Initialization failed: .taskmaster/config.json not found."
|
||||
exit 1
|
||||
fi
|
||||
log_success "Project initialized."
|
||||
@@ -344,8 +344,8 @@ log_step() {
|
||||
exit_status_prd=$?
|
||||
echo "$cmd_output_prd"
|
||||
extract_and_sum_cost "$cmd_output_prd"
|
||||
if [ $exit_status_prd -ne 0 ] || [ ! -s "tasks/tasks.json" ]; then
|
||||
log_error "Parsing PRD failed: tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
|
||||
if [ $exit_status_prd -ne 0 ] || [ ! -s ".taskmaster/tasks/tasks.json" ]; then
|
||||
log_error "Parsing PRD failed: .taskmaster/tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
|
||||
exit 1
|
||||
else
|
||||
log_success "PRD parsed successfully."
|
||||
@@ -386,6 +386,95 @@ log_step() {
|
||||
task-master list --with-subtasks > task_list_after_changes.log
|
||||
log_success "Task list after changes saved to task_list_after_changes.log"
|
||||
|
||||
# === Start New Test Section: Tag-Aware Expand Testing ===
|
||||
log_step "Creating additional tag for expand testing"
|
||||
task-master add-tag feature-expand --description="Tag for testing expand command with tag preservation"
|
||||
log_success "Created feature-expand tag."
|
||||
|
||||
log_step "Adding task to feature-expand tag"
|
||||
task-master add-task --tag=feature-expand --prompt="Test task for tag-aware expansion" --priority=medium
|
||||
# Get the new task ID dynamically
|
||||
new_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
|
||||
log_success "Added task $new_expand_task_id to feature-expand tag."
|
||||
|
||||
log_step "Verifying tags exist before expand test"
|
||||
task-master tags > tags_before_expand.log
|
||||
tag_count_before=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
|
||||
log_success "Tag count before expand: $tag_count_before"
|
||||
|
||||
log_step "Expanding task in feature-expand tag (testing tag corruption fix)"
|
||||
cmd_output_expand_tagged=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" 2>&1)
|
||||
exit_status_expand_tagged=$?
|
||||
echo "$cmd_output_expand_tagged"
|
||||
extract_and_sum_cost "$cmd_output_expand_tagged"
|
||||
if [ $exit_status_expand_tagged -ne 0 ]; then
|
||||
log_error "Tagged expand failed. Exit status: $exit_status_expand_tagged"
|
||||
else
|
||||
log_success "Tagged expand completed."
|
||||
fi
|
||||
|
||||
log_step "Verifying tag preservation after expand"
|
||||
task-master tags > tags_after_expand.log
|
||||
tag_count_after=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
|
||||
|
||||
if [ "$tag_count_before" -eq "$tag_count_after" ]; then
|
||||
log_success "Tag count preserved: $tag_count_after (no corruption detected)"
|
||||
else
|
||||
log_error "Tag corruption detected! Before: $tag_count_before, After: $tag_count_after"
|
||||
fi
|
||||
|
||||
log_step "Verifying master tag still exists and has tasks"
|
||||
master_task_count=$(jq -r '.master.tasks | length' .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
|
||||
if [ "$master_task_count" -gt "0" ]; then
|
||||
log_success "Master tag preserved with $master_task_count tasks"
|
||||
else
|
||||
log_error "Master tag corrupted or empty after tagged expand"
|
||||
fi
|
||||
|
||||
log_step "Verifying feature-expand tag has expanded subtasks"
|
||||
expanded_subtask_count=$(jq -r ".\"feature-expand\".tasks[] | select(.id == $new_expand_task_id) | .subtasks | length" .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
|
||||
if [ "$expanded_subtask_count" -gt "0" ]; then
|
||||
log_success "Expand successful: $expanded_subtask_count subtasks created in feature-expand tag"
|
||||
else
|
||||
log_error "Expand failed: No subtasks found in feature-expand tag"
|
||||
fi
|
||||
|
||||
log_step "Testing force expand with tag preservation"
|
||||
cmd_output_force_expand=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" --force 2>&1)
|
||||
exit_status_force_expand=$?
|
||||
echo "$cmd_output_force_expand"
|
||||
extract_and_sum_cost "$cmd_output_force_expand"
|
||||
|
||||
# Verify tags still preserved after force expand
|
||||
tag_count_after_force=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
|
||||
if [ "$tag_count_before" -eq "$tag_count_after_force" ]; then
|
||||
log_success "Force expand preserved all tags"
|
||||
else
|
||||
log_error "Force expand caused tag corruption"
|
||||
fi
|
||||
|
||||
log_step "Testing expand --all with tag preservation"
|
||||
# Add another task to feature-expand for expand-all testing
|
||||
task-master add-task --tag=feature-expand --prompt="Second task for expand-all testing" --priority=low
|
||||
second_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
|
||||
|
||||
cmd_output_expand_all=$(task-master expand --tag=feature-expand --all 2>&1)
|
||||
exit_status_expand_all=$?
|
||||
echo "$cmd_output_expand_all"
|
||||
extract_and_sum_cost "$cmd_output_expand_all"
|
||||
|
||||
# Verify tags preserved after expand-all
|
||||
tag_count_after_all=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
|
||||
if [ "$tag_count_before" -eq "$tag_count_after_all" ]; then
|
||||
log_success "Expand --all preserved all tags"
|
||||
else
|
||||
log_error "Expand --all caused tag corruption"
|
||||
fi
|
||||
|
||||
log_success "Completed expand --all tag preservation test."
|
||||
|
||||
# === End New Test Section: Tag-Aware Expand Testing ===
|
||||
|
||||
# === Test Model Commands ===
|
||||
log_step "Checking initial model configuration"
|
||||
task-master models > models_initial_config.log
|
||||
@@ -626,7 +715,7 @@ log_step() {
|
||||
|
||||
# Find the next available task ID dynamically instead of hardcoding 11, 12
|
||||
# Assuming tasks are added sequentially and we didn't remove any core tasks yet
|
||||
last_task_id=$(jq '[.tasks[].id] | max' tasks/tasks.json)
|
||||
last_task_id=$(jq '[.master.tasks[].id] | max' .taskmaster/tasks/tasks.json)
|
||||
manual_task_id=$((last_task_id + 1))
|
||||
ai_task_id=$((manual_task_id + 1))
|
||||
|
||||
@@ -747,30 +836,30 @@ log_step() {
|
||||
task-master list --with-subtasks > task_list_after_clear_all.log
|
||||
log_success "Task list after clear-all saved. (Manual/LLM check recommended to verify subtasks removed)"
|
||||
|
||||
log_step "Expanding Task 1 again (to have subtasks for next test)"
|
||||
task-master expand --id=1
|
||||
log_success "Attempted to expand Task 1 again."
|
||||
# Verify 1.1 exists again
|
||||
if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/tasks.json > /dev/null; then
|
||||
log_error "Subtask 1.1 not found in tasks.json after re-expanding Task 1."
|
||||
log_step "Expanding Task 3 again (to have subtasks for next test)"
|
||||
task-master expand --id=3
|
||||
log_success "Attempted to expand Task 3."
|
||||
# Verify 3.1 exists
|
||||
if ! jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' .taskmaster/tasks/tasks.json > /dev/null; then
|
||||
log_error "Subtask 3.1 not found in tasks.json after expanding Task 3."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_step "Adding dependency: Task 3 depends on Subtask 1.1"
|
||||
task-master add-dependency --id=3 --depends-on=1.1
|
||||
log_success "Added dependency 3 -> 1.1."
|
||||
log_step "Adding dependency: Task 4 depends on Subtask 3.1"
|
||||
task-master add-dependency --id=4 --depends-on=3.1
|
||||
log_success "Added dependency 4 -> 3.1."
|
||||
|
||||
log_step "Showing Task 3 details (after adding subtask dependency)"
|
||||
task-master show 3 > task_3_details_after_dep_add.log
|
||||
log_success "Task 3 details saved. (Manual/LLM check recommended for dependency [1.1])"
|
||||
log_step "Showing Task 4 details (after adding subtask dependency)"
|
||||
task-master show 4 > task_4_details_after_dep_add.log
|
||||
log_success "Task 4 details saved. (Manual/LLM check recommended for dependency [3.1])"
|
||||
|
||||
log_step "Removing dependency: Task 3 depends on Subtask 1.1"
|
||||
task-master remove-dependency --id=3 --depends-on=1.1
|
||||
log_success "Removed dependency 3 -> 1.1."
|
||||
log_step "Removing dependency: Task 4 depends on Subtask 3.1"
|
||||
task-master remove-dependency --id=4 --depends-on=3.1
|
||||
log_success "Removed dependency 4 -> 3.1."
|
||||
|
||||
log_step "Showing Task 3 details (after removing subtask dependency)"
|
||||
task-master show 3 > task_3_details_after_dep_remove.log
|
||||
log_success "Task 3 details saved. (Manual/LLM check recommended to verify dependency removed)"
|
||||
log_step "Showing Task 4 details (after removing subtask dependency)"
|
||||
task-master show 4 > task_4_details_after_dep_remove.log
|
||||
log_success "Task 4 details saved. (Manual/LLM check recommended to verify dependency removed)"
|
||||
|
||||
# === End New Test Section ===
|
||||
|
||||
|
||||
95
tests/integration/claude-code-optional.test.js
Normal file
95
tests/integration/claude-code-optional.test.js
Normal file
@@ -0,0 +1,95 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock the base provider to avoid circular dependencies
|
||||
jest.unstable_mockModule('../../src/ai-providers/base-provider.js', () => ({
|
||||
BaseAIProvider: class {
|
||||
constructor() {
|
||||
this.name = 'Base Provider';
|
||||
}
|
||||
handleError(context, error) {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
}));
|
||||
|
||||
// Mock the claude-code SDK to simulate it not being installed
|
||||
jest.unstable_mockModule('@anthropic-ai/claude-code', () => {
|
||||
throw new Error("Cannot find module '@anthropic-ai/claude-code'");
|
||||
});
|
||||
|
||||
// Import after mocking
|
||||
const { ClaudeCodeProvider } = await import(
|
||||
'../../src/ai-providers/claude-code.js'
|
||||
);
|
||||
|
||||
describe('Claude Code Optional Dependency Integration', () => {
|
||||
describe('when @anthropic-ai/claude-code is not installed', () => {
|
||||
it('should allow provider instantiation', () => {
|
||||
// Provider should instantiate without error
|
||||
const provider = new ClaudeCodeProvider();
|
||||
expect(provider).toBeDefined();
|
||||
expect(provider.name).toBe('Claude Code');
|
||||
});
|
||||
|
||||
it('should allow client creation', () => {
|
||||
const provider = new ClaudeCodeProvider();
|
||||
// Client creation should work
|
||||
const client = provider.getClient({});
|
||||
expect(client).toBeDefined();
|
||||
expect(typeof client).toBe('function');
|
||||
});
|
||||
|
||||
it('should fail with clear error when trying to use the model', async () => {
|
||||
const provider = new ClaudeCodeProvider();
|
||||
const client = provider.getClient({});
|
||||
const model = client('opus');
|
||||
|
||||
// The actual usage should fail with the lazy loading error
|
||||
await expect(
|
||||
model.doGenerate({
|
||||
prompt: [{ role: 'user', content: 'Hello' }],
|
||||
mode: { type: 'regular' }
|
||||
})
|
||||
).rejects.toThrow(
|
||||
"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
|
||||
);
|
||||
});
|
||||
|
||||
it('should provide helpful error message for streaming', async () => {
|
||||
const provider = new ClaudeCodeProvider();
|
||||
const client = provider.getClient({});
|
||||
const model = client('sonnet');
|
||||
|
||||
await expect(
|
||||
model.doStream({
|
||||
prompt: [{ role: 'user', content: 'Hello' }],
|
||||
mode: { type: 'regular' }
|
||||
})
|
||||
).rejects.toThrow(
|
||||
"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('provider behavior', () => {
|
||||
it('should not require API key', () => {
|
||||
const provider = new ClaudeCodeProvider();
|
||||
// Should not throw
|
||||
expect(() => provider.validateAuth()).not.toThrow();
|
||||
expect(() => provider.validateAuth({ apiKey: null })).not.toThrow();
|
||||
});
|
||||
|
||||
it('should work with ai-services-unified when provider is configured', async () => {
|
||||
// This tests that the provider can be selected but will fail appropriately
|
||||
// when the actual model is used
|
||||
const provider = new ClaudeCodeProvider();
|
||||
expect(provider).toBeDefined();
|
||||
|
||||
// In real usage, ai-services-unified would:
|
||||
// 1. Get the provider instance (works)
|
||||
// 2. Call provider.getClient() (works)
|
||||
// 3. Create a model (works)
|
||||
// 4. Try to generate (fails with clear error)
|
||||
});
|
||||
});
|
||||
});
|
||||
581
tests/integration/manage-gitignore.test.js
Normal file
581
tests/integration/manage-gitignore.test.js
Normal file
@@ -0,0 +1,581 @@
|
||||
/**
|
||||
* Integration tests for manage-gitignore.js module
|
||||
* Tests actual file system operations in a temporary directory
|
||||
*/
|
||||
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import manageGitignoreFile from '../../src/utils/manage-gitignore.js';
|
||||
|
||||
describe('manage-gitignore.js Integration Tests', () => {
|
||||
let tempDir;
|
||||
let testGitignorePath;
|
||||
|
||||
beforeEach(() => {
|
||||
// Create a temporary directory for each test
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gitignore-test-'));
|
||||
testGitignorePath = path.join(tempDir, '.gitignore');
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up temporary directory after each test
|
||||
if (fs.existsSync(tempDir)) {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
describe('New File Creation', () => {
|
||||
const templateContent = `# Logs
|
||||
logs
|
||||
*.log
|
||||
npm-debug.log*
|
||||
|
||||
# Dependencies
|
||||
node_modules/
|
||||
jspm_packages/
|
||||
|
||||
# Environment variables
|
||||
.env
|
||||
.env.local
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
test('should create new .gitignore file with commented task lines (storeTasksInGit = true)', () => {
|
||||
const logs = [];
|
||||
const mockLog = (level, message) => logs.push({ level, message });
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, true, mockLog);
|
||||
|
||||
// Verify file was created
|
||||
expect(fs.existsSync(testGitignorePath)).toBe(true);
|
||||
|
||||
// Verify content
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
expect(content).toContain('# Logs');
|
||||
expect(content).toContain('logs');
|
||||
expect(content).toContain('# Dependencies');
|
||||
expect(content).toContain('node_modules/');
|
||||
expect(content).toContain('# Task files');
|
||||
expect(content).toContain('tasks.json');
|
||||
expect(content).toContain('tasks/');
|
||||
|
||||
// Verify task lines are commented (storeTasksInGit = true)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
|
||||
);
|
||||
|
||||
// Verify log message
|
||||
expect(logs).toContainEqual({
|
||||
level: 'success',
|
||||
message: expect.stringContaining('Created')
|
||||
});
|
||||
});
|
||||
|
||||
test('should create new .gitignore file with uncommented task lines (storeTasksInGit = false)', () => {
|
||||
const logs = [];
|
||||
const mockLog = (level, message) => logs.push({ level, message });
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
|
||||
|
||||
// Verify file was created
|
||||
expect(fs.existsSync(testGitignorePath)).toBe(true);
|
||||
|
||||
// Verify content
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
expect(content).toContain('# Task files');
|
||||
|
||||
// Verify task lines are uncommented (storeTasksInGit = false)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
|
||||
// Verify log message
|
||||
expect(logs).toContainEqual({
|
||||
level: 'success',
|
||||
message: expect.stringContaining('Created')
|
||||
});
|
||||
});
|
||||
|
||||
test('should work without log function', () => {
|
||||
expect(() => {
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false);
|
||||
}).not.toThrow();
|
||||
|
||||
expect(fs.existsSync(testGitignorePath)).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('File Merging', () => {
|
||||
const templateContent = `# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Environment variables
|
||||
.env
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
test('should merge template with existing file content', () => {
|
||||
// Create existing .gitignore file
|
||||
const existingContent = `# Existing content
|
||||
old-files.txt
|
||||
*.backup
|
||||
|
||||
# Old task files (to be replaced)
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/
|
||||
|
||||
# More existing content
|
||||
cache/`;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
const logs = [];
|
||||
const mockLog = (level, message) => logs.push({ level, message });
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
|
||||
|
||||
// Verify file still exists
|
||||
expect(fs.existsSync(testGitignorePath)).toBe(true);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain existing non-task content
|
||||
expect(content).toContain('# Existing content');
|
||||
expect(content).toContain('old-files.txt');
|
||||
expect(content).toContain('*.backup');
|
||||
expect(content).toContain('# More existing content');
|
||||
expect(content).toContain('cache/');
|
||||
|
||||
// Should add new template content
|
||||
expect(content).toContain('# Logs');
|
||||
expect(content).toContain('logs');
|
||||
expect(content).toContain('# Dependencies');
|
||||
expect(content).toContain('node_modules/');
|
||||
expect(content).toContain('# Environment variables');
|
||||
expect(content).toContain('.env');
|
||||
|
||||
// Should replace task section with new preference (storeTasksInGit = false means uncommented)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
|
||||
// Verify log message
|
||||
expect(logs).toContainEqual({
|
||||
level: 'success',
|
||||
message: expect.stringContaining('Updated')
|
||||
});
|
||||
});
|
||||
|
||||
test('should handle switching task preferences from commented to uncommented', () => {
|
||||
// Create existing file with commented task lines
|
||||
const existingContent = `# Existing
|
||||
existing.txt
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
// Update with storeTasksInGit = true (commented)
|
||||
manageGitignoreFile(testGitignorePath, templateContent, true);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain existing content
|
||||
expect(content).toContain('# Existing');
|
||||
expect(content).toContain('existing.txt');
|
||||
|
||||
// Should have commented task lines (storeTasksInGit = true)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle switching task preferences from uncommented to commented', () => {
|
||||
// Create existing file with uncommented task lines
|
||||
const existingContent = `# Existing
|
||||
existing.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
// Update with storeTasksInGit = false (uncommented)
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain existing content
|
||||
expect(content).toContain('# Existing');
|
||||
expect(content).toContain('existing.txt');
|
||||
|
||||
// Should have uncommented task lines (storeTasksInGit = false)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
});
|
||||
|
||||
test('should not duplicate existing template content', () => {
|
||||
// Create existing file that already has some template content
|
||||
const existingContent = `# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Custom content
|
||||
custom.txt
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should not duplicate logs section
|
||||
const logsMatches = content.match(/# Logs/g);
|
||||
expect(logsMatches).toHaveLength(1);
|
||||
|
||||
// Should not duplicate dependencies section
|
||||
const depsMatches = content.match(/# Dependencies/g);
|
||||
expect(depsMatches).toHaveLength(1);
|
||||
|
||||
// Should retain custom content
|
||||
expect(content).toContain('# Custom content');
|
||||
expect(content).toContain('custom.txt');
|
||||
|
||||
// Should add new template content that wasn't present
|
||||
expect(content).toContain('# Environment variables');
|
||||
expect(content).toContain('.env');
|
||||
});
|
||||
|
||||
test('should handle empty existing file', () => {
|
||||
// Create empty file
|
||||
fs.writeFileSync(testGitignorePath, '');
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false);
|
||||
|
||||
expect(fs.existsSync(testGitignorePath)).toBe(true);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
expect(content).toContain('# Logs');
|
||||
expect(content).toContain('# Task files');
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle file with only whitespace', () => {
|
||||
// Create file with only whitespace
|
||||
fs.writeFileSync(testGitignorePath, ' \n\n \n');
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, true);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
expect(content).toContain('# Logs');
|
||||
expect(content).toContain('# Task files');
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Complex Task Section Handling', () => {
|
||||
test('should remove task section with mixed comments and spacing', () => {
|
||||
const existingContent = `# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Task files
|
||||
|
||||
# tasks.json
|
||||
tasks/
|
||||
|
||||
|
||||
# More content
|
||||
more.txt`;
|
||||
|
||||
const templateContent = `# New content
|
||||
new.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain non-task content
|
||||
expect(content).toContain('# Dependencies');
|
||||
expect(content).toContain('node_modules/');
|
||||
expect(content).toContain('# More content');
|
||||
expect(content).toContain('more.txt');
|
||||
|
||||
// Should add new content
|
||||
expect(content).toContain('# New content');
|
||||
expect(content).toContain('new.txt');
|
||||
|
||||
// Should have clean task section (storeTasksInGit = false means uncommented)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle multiple task file variations', () => {
|
||||
const existingContent = `# Existing
|
||||
existing.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
# tasks.json
|
||||
# tasks/
|
||||
tasks/
|
||||
#tasks.json
|
||||
|
||||
# More content
|
||||
more.txt`;
|
||||
|
||||
const templateContent = `# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
manageGitignoreFile(testGitignorePath, templateContent, true);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain non-task content
|
||||
expect(content).toContain('# Existing');
|
||||
expect(content).toContain('existing.txt');
|
||||
expect(content).toContain('# More content');
|
||||
expect(content).toContain('more.txt');
|
||||
|
||||
// Should have clean task section with preference applied (storeTasksInGit = true means commented)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
|
||||
);
|
||||
|
||||
// Should not have multiple task sections
|
||||
const taskFileMatches = content.match(/# Task files/g);
|
||||
expect(taskFileMatches).toHaveLength(1);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Error Handling', () => {
|
||||
test('should handle permission errors gracefully', () => {
|
||||
// Create a directory where we would create the file, then remove write permissions
|
||||
const readOnlyDir = path.join(tempDir, 'readonly');
|
||||
fs.mkdirSync(readOnlyDir);
|
||||
fs.chmodSync(readOnlyDir, 0o444); // Read-only
|
||||
|
||||
const readOnlyGitignorePath = path.join(readOnlyDir, '.gitignore');
|
||||
const templateContent = `# Test
|
||||
test.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
const logs = [];
|
||||
const mockLog = (level, message) => logs.push({ level, message });
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile(
|
||||
readOnlyGitignorePath,
|
||||
templateContent,
|
||||
false,
|
||||
mockLog
|
||||
);
|
||||
}).toThrow();
|
||||
|
||||
// Verify error was logged
|
||||
expect(logs).toContainEqual({
|
||||
level: 'error',
|
||||
message: expect.stringContaining('Failed to create')
|
||||
});
|
||||
|
||||
// Restore permissions for cleanup
|
||||
fs.chmodSync(readOnlyDir, 0o755);
|
||||
});
|
||||
|
||||
test('should handle read errors on existing files', () => {
|
||||
// Create a file then remove read permissions
|
||||
fs.writeFileSync(testGitignorePath, 'existing content');
|
||||
fs.chmodSync(testGitignorePath, 0o000); // No permissions
|
||||
|
||||
const templateContent = `# Test
|
||||
test.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
const logs = [];
|
||||
const mockLog = (level, message) => logs.push({ level, message });
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
|
||||
}).toThrow();
|
||||
|
||||
// Verify error was logged
|
||||
expect(logs).toContainEqual({
|
||||
level: 'error',
|
||||
message: expect.stringContaining('Failed to merge content')
|
||||
});
|
||||
|
||||
// Restore permissions for cleanup
|
||||
fs.chmodSync(testGitignorePath, 0o644);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Real-world Scenarios', () => {
|
||||
test('should handle typical Node.js project .gitignore', () => {
|
||||
const existingNodeGitignore = `# Logs
|
||||
logs
|
||||
*.log
|
||||
npm-debug.log*
|
||||
yarn-debug.log*
|
||||
yarn-error.log*
|
||||
|
||||
# Runtime data
|
||||
pids
|
||||
*.pid
|
||||
*.seed
|
||||
*.pid.lock
|
||||
|
||||
# Dependency directories
|
||||
node_modules/
|
||||
jspm_packages/
|
||||
|
||||
# Optional npm cache directory
|
||||
.npm
|
||||
|
||||
# Output of 'npm pack'
|
||||
*.tgz
|
||||
|
||||
# Yarn Integrity file
|
||||
.yarn-integrity
|
||||
|
||||
# dotenv environment variables file
|
||||
.env
|
||||
|
||||
# next.js build output
|
||||
.next`;
|
||||
|
||||
const taskMasterTemplate = `# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Environment variables
|
||||
.env
|
||||
|
||||
# Build output
|
||||
dist/
|
||||
build/
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingNodeGitignore);
|
||||
|
||||
manageGitignoreFile(testGitignorePath, taskMasterTemplate, false);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain existing Node.js specific entries
|
||||
expect(content).toContain('npm-debug.log*');
|
||||
expect(content).toContain('yarn-debug.log*');
|
||||
expect(content).toContain('*.pid');
|
||||
expect(content).toContain('jspm_packages/');
|
||||
expect(content).toContain('.npm');
|
||||
expect(content).toContain('*.tgz');
|
||||
expect(content).toContain('.yarn-integrity');
|
||||
expect(content).toContain('.next');
|
||||
|
||||
// Should add new content from template that wasn't present
|
||||
expect(content).toContain('dist/');
|
||||
expect(content).toContain('build/');
|
||||
|
||||
// Should add task files section with correct preference (storeTasksInGit = false means uncommented)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
|
||||
// Should not duplicate common entries
|
||||
const nodeModulesMatches = content.match(/node_modules\//g);
|
||||
expect(nodeModulesMatches).toHaveLength(1);
|
||||
|
||||
const logsMatches = content.match(/# Logs/g);
|
||||
expect(logsMatches).toHaveLength(1);
|
||||
});
|
||||
|
||||
test('should handle project with existing task files in git', () => {
|
||||
const existingContent = `# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Logs
|
||||
*.log
|
||||
|
||||
# Current task setup - keeping in git
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/
|
||||
|
||||
# Build output
|
||||
dist/`;
|
||||
|
||||
const templateContent = `# New template
|
||||
# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
fs.writeFileSync(testGitignorePath, existingContent);
|
||||
|
||||
// Change preference to exclude tasks from git (storeTasksInGit = false means uncommented/ignored)
|
||||
manageGitignoreFile(testGitignorePath, templateContent, false);
|
||||
|
||||
const content = fs.readFileSync(testGitignorePath, 'utf8');
|
||||
|
||||
// Should retain existing content
|
||||
expect(content).toContain('# Dependencies');
|
||||
expect(content).toContain('node_modules/');
|
||||
expect(content).toContain('# Logs');
|
||||
expect(content).toContain('*.log');
|
||||
expect(content).toContain('# Build output');
|
||||
expect(content).toContain('dist/');
|
||||
|
||||
// Should update task preference to uncommented (storeTasksInGit = false)
|
||||
expect(content).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -133,7 +133,7 @@ jest.mock('../../../scripts/modules/utils.js', () => ({
|
||||
readComplexityReport: mockReadComplexityReport,
|
||||
CONFIG: {
|
||||
model: 'claude-3-7-sonnet-20250219',
|
||||
maxTokens: 64000,
|
||||
maxTokens: 8192,
|
||||
temperature: 0.2,
|
||||
defaultSubtasks: 5
|
||||
}
|
||||
@@ -625,19 +625,38 @@ describe('MCP Server Direct Functions', () => {
|
||||
// For successful cases, record that functions were called but don't make real calls
|
||||
mockEnableSilentMode();
|
||||
|
||||
// Mock expandAllTasks
|
||||
// Mock expandAllTasks - now returns a structured object instead of undefined
|
||||
const mockExpandAll = jest.fn().mockImplementation(async () => {
|
||||
// Just simulate success without any real operations
|
||||
return undefined; // expandAllTasks doesn't return anything
|
||||
// Return the new structured response that matches the actual implementation
|
||||
return {
|
||||
success: true,
|
||||
expandedCount: 2,
|
||||
failedCount: 0,
|
||||
skippedCount: 1,
|
||||
tasksToExpand: 3,
|
||||
telemetryData: {
|
||||
timestamp: new Date().toISOString(),
|
||||
commandName: 'expand-all-tasks',
|
||||
totalCost: 0.05,
|
||||
totalTokens: 1000,
|
||||
inputTokens: 600,
|
||||
outputTokens: 400
|
||||
}
|
||||
};
|
||||
});
|
||||
|
||||
// Call mock expandAllTasks
|
||||
await mockExpandAll(
|
||||
args.num,
|
||||
args.research || false,
|
||||
args.prompt || '',
|
||||
args.force || false,
|
||||
{ mcpLog: mockLogger, session: options.session }
|
||||
// Call mock expandAllTasks with the correct signature
|
||||
const result = await mockExpandAll(
|
||||
args.file, // tasksPath
|
||||
args.num, // numSubtasks
|
||||
args.research || false, // useResearch
|
||||
args.prompt || '', // additionalContext
|
||||
args.force || false, // force
|
||||
{
|
||||
mcpLog: mockLogger,
|
||||
session: options.session,
|
||||
projectRoot: args.projectRoot
|
||||
}
|
||||
);
|
||||
|
||||
mockDisableSilentMode();
|
||||
@@ -645,13 +664,14 @@ describe('MCP Server Direct Functions', () => {
|
||||
return {
|
||||
success: true,
|
||||
data: {
|
||||
message: 'Successfully expanded all pending tasks with subtasks',
|
||||
message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
|
||||
details: {
|
||||
numSubtasks: args.num,
|
||||
research: args.research || false,
|
||||
prompt: args.prompt || '',
|
||||
force: args.force || false
|
||||
}
|
||||
expandedCount: result.expandedCount,
|
||||
failedCount: result.failedCount,
|
||||
skippedCount: result.skippedCount,
|
||||
tasksToExpand: result.tasksToExpand
|
||||
},
|
||||
telemetryData: result.telemetryData
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -671,10 +691,13 @@ describe('MCP Server Direct Functions', () => {
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.message).toBe(
|
||||
'Successfully expanded all pending tasks with subtasks'
|
||||
);
|
||||
expect(result.data.details.numSubtasks).toBe(3);
|
||||
expect(result.data.message).toMatch(/Expand all operation completed/);
|
||||
expect(result.data.details.expandedCount).toBe(2);
|
||||
expect(result.data.details.failedCount).toBe(0);
|
||||
expect(result.data.details.skippedCount).toBe(1);
|
||||
expect(result.data.details.tasksToExpand).toBe(3);
|
||||
expect(result.data.telemetryData).toBeDefined();
|
||||
expect(result.data.telemetryData.commandName).toBe('expand-all-tasks');
|
||||
expect(mockEnableSilentMode).toHaveBeenCalled();
|
||||
expect(mockDisableSilentMode).toHaveBeenCalled();
|
||||
});
|
||||
@@ -695,7 +718,8 @@ describe('MCP Server Direct Functions', () => {
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.details.research).toBe(true);
|
||||
expect(result.data.details.expandedCount).toBe(2);
|
||||
expect(result.data.telemetryData).toBeDefined();
|
||||
expect(mockEnableSilentMode).toHaveBeenCalled();
|
||||
expect(mockDisableSilentMode).toHaveBeenCalled();
|
||||
});
|
||||
@@ -715,7 +739,8 @@ describe('MCP Server Direct Functions', () => {
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.details.force).toBe(true);
|
||||
expect(result.data.details.expandedCount).toBe(2);
|
||||
expect(result.data.telemetryData).toBeDefined();
|
||||
expect(mockEnableSilentMode).toHaveBeenCalled();
|
||||
expect(mockDisableSilentMode).toHaveBeenCalled();
|
||||
});
|
||||
@@ -735,11 +760,77 @@ describe('MCP Server Direct Functions', () => {
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.details.prompt).toBe(
|
||||
'Additional context for subtasks'
|
||||
);
|
||||
expect(result.data.details.expandedCount).toBe(2);
|
||||
expect(result.data.telemetryData).toBeDefined();
|
||||
expect(mockEnableSilentMode).toHaveBeenCalled();
|
||||
expect(mockDisableSilentMode).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should handle case with no eligible tasks', async () => {
|
||||
// Arrange
|
||||
const args = {
|
||||
projectRoot: testProjectRoot,
|
||||
file: testTasksPath,
|
||||
num: 3
|
||||
};
|
||||
|
||||
// Act - Mock the scenario where no tasks are eligible for expansion
|
||||
async function testNoEligibleTasks(args, mockLogger, options = {}) {
|
||||
mockEnableSilentMode();
|
||||
|
||||
const mockExpandAll = jest.fn().mockImplementation(async () => {
|
||||
return {
|
||||
success: true,
|
||||
expandedCount: 0,
|
||||
failedCount: 0,
|
||||
skippedCount: 0,
|
||||
tasksToExpand: 0,
|
||||
telemetryData: null,
|
||||
message: 'No tasks eligible for expansion.'
|
||||
};
|
||||
});
|
||||
|
||||
const result = await mockExpandAll(
|
||||
args.file,
|
||||
args.num,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
mcpLog: mockLogger,
|
||||
session: options.session,
|
||||
projectRoot: args.projectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
mockDisableSilentMode();
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: {
|
||||
message: result.message,
|
||||
details: {
|
||||
expandedCount: result.expandedCount,
|
||||
failedCount: result.failedCount,
|
||||
skippedCount: result.skippedCount,
|
||||
tasksToExpand: result.tasksToExpand
|
||||
},
|
||||
telemetryData: result.telemetryData
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
const result = await testNoEligibleTasks(args, mockLogger, {
|
||||
session: mockSession
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.message).toBe('No tasks eligible for expansion.');
|
||||
expect(result.data.details.expandedCount).toBe(0);
|
||||
expect(result.data.details.tasksToExpand).toBe(0);
|
||||
expect(result.data.telemetryData).toBeNull();
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
55
tests/integration/profiles/claude-init-functionality.test.js
Normal file
55
tests/integration/profiles/claude-init-functionality.test.js
Normal file
@@ -0,0 +1,55 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Claude Profile Initialization Functionality', () => {
|
||||
let claudeProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
const claudeJsPath = path.join(
|
||||
process.cwd(),
|
||||
'src',
|
||||
'profiles',
|
||||
'claude.js'
|
||||
);
|
||||
claudeProfileContent = fs.readFileSync(claudeJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('claude.js is a simple profile with correct configuration', () => {
|
||||
expect(claudeProfileContent).toContain("profileName: 'claude'");
|
||||
expect(claudeProfileContent).toContain("displayName: 'Claude Code'");
|
||||
expect(claudeProfileContent).toContain("profileDir: '.'");
|
||||
expect(claudeProfileContent).toContain("rulesDir: '.'");
|
||||
});
|
||||
|
||||
test('claude.js has no MCP configuration', () => {
|
||||
expect(claudeProfileContent).toContain('mcpConfig: false');
|
||||
expect(claudeProfileContent).toContain('mcpConfigName: null');
|
||||
expect(claudeProfileContent).toContain('mcpConfigPath: null');
|
||||
});
|
||||
|
||||
test('claude.js has empty file map (simple profile)', () => {
|
||||
expect(claudeProfileContent).toContain('fileMap: {}');
|
||||
expect(claudeProfileContent).toContain('conversionConfig: {}');
|
||||
expect(claudeProfileContent).toContain('globalReplacements: []');
|
||||
});
|
||||
|
||||
test('claude.js has lifecycle functions for file management', () => {
|
||||
expect(claudeProfileContent).toContain('function onAddRulesProfile');
|
||||
expect(claudeProfileContent).toContain('function onRemoveRulesProfile');
|
||||
expect(claudeProfileContent).toContain(
|
||||
'function onPostConvertRulesProfile'
|
||||
);
|
||||
});
|
||||
|
||||
test('claude.js copies AGENTS.md to CLAUDE.md', () => {
|
||||
expect(claudeProfileContent).toContain("'AGENTS.md'");
|
||||
expect(claudeProfileContent).toContain("'CLAUDE.md'");
|
||||
expect(claudeProfileContent).toContain('copyFileSync');
|
||||
});
|
||||
|
||||
test('claude.js has proper error handling', () => {
|
||||
expect(claudeProfileContent).toContain('try {');
|
||||
expect(claudeProfileContent).toContain('} catch (err) {');
|
||||
expect(claudeProfileContent).toContain("log('error'");
|
||||
});
|
||||
});
|
||||
53
tests/integration/profiles/cline-init-functionality.test.js
Normal file
53
tests/integration/profiles/cline-init-functionality.test.js
Normal file
@@ -0,0 +1,53 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Cline Profile Initialization Functionality', () => {
|
||||
let clineProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
const clineJsPath = path.join(process.cwd(), 'src', 'profiles', 'cline.js');
|
||||
clineProfileContent = fs.readFileSync(clineJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('cline.js uses factory pattern with correct configuration', () => {
|
||||
expect(clineProfileContent).toContain("name: 'cline'");
|
||||
expect(clineProfileContent).toContain("displayName: 'Cline'");
|
||||
expect(clineProfileContent).toContain("rulesDir: '.clinerules'");
|
||||
expect(clineProfileContent).toContain("profileDir: '.clinerules'");
|
||||
});
|
||||
|
||||
test('cline.js configures .mdc to .md extension mapping', () => {
|
||||
expect(clineProfileContent).toContain("fileExtension: '.mdc'");
|
||||
expect(clineProfileContent).toContain("targetExtension: '.md'");
|
||||
});
|
||||
|
||||
test('cline.js uses standard tool mappings', () => {
|
||||
expect(clineProfileContent).toContain('COMMON_TOOL_MAPPINGS.STANDARD');
|
||||
// Should contain comment about standard tool names
|
||||
expect(clineProfileContent).toContain('standard tool names');
|
||||
});
|
||||
|
||||
test('cline.js contains correct URL configuration', () => {
|
||||
expect(clineProfileContent).toContain("url: 'cline.bot'");
|
||||
expect(clineProfileContent).toContain("docsUrl: 'docs.cline.bot'");
|
||||
});
|
||||
|
||||
test('cline.js has MCP configuration disabled', () => {
|
||||
expect(clineProfileContent).toContain('mcpConfig: false');
|
||||
expect(clineProfileContent).toContain(
|
||||
"mcpConfigName: 'cline_mcp_settings.json'"
|
||||
);
|
||||
});
|
||||
|
||||
test('cline.js has custom file mapping for cursor_rules.mdc', () => {
|
||||
expect(clineProfileContent).toContain('customFileMap:');
|
||||
expect(clineProfileContent).toContain(
|
||||
"'cursor_rules.mdc': 'cline_rules.md'"
|
||||
);
|
||||
});
|
||||
|
||||
test('cline.js uses createProfile factory function', () => {
|
||||
expect(clineProfileContent).toContain('createProfile');
|
||||
expect(clineProfileContent).toContain('export const clineProfile');
|
||||
});
|
||||
});
|
||||
54
tests/integration/profiles/codex-init-functionality.test.js
Normal file
54
tests/integration/profiles/codex-init-functionality.test.js
Normal file
@@ -0,0 +1,54 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Codex Profile Initialization Functionality', () => {
|
||||
let codexProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
const codexJsPath = path.join(process.cwd(), 'src', 'profiles', 'codex.js');
|
||||
codexProfileContent = fs.readFileSync(codexJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('codex.js is a simple profile with correct configuration', () => {
|
||||
expect(codexProfileContent).toContain("profileName: 'codex'");
|
||||
expect(codexProfileContent).toContain("displayName: 'Codex'");
|
||||
expect(codexProfileContent).toContain("profileDir: '.'");
|
||||
expect(codexProfileContent).toContain("rulesDir: '.'");
|
||||
});
|
||||
|
||||
test('codex.js has no MCP configuration', () => {
|
||||
expect(codexProfileContent).toContain('mcpConfig: false');
|
||||
expect(codexProfileContent).toContain('mcpConfigName: null');
|
||||
expect(codexProfileContent).toContain('mcpConfigPath: null');
|
||||
});
|
||||
|
||||
test('codex.js has empty file map (simple profile)', () => {
|
||||
expect(codexProfileContent).toContain('fileMap: {}');
|
||||
expect(codexProfileContent).toContain('conversionConfig: {}');
|
||||
expect(codexProfileContent).toContain('globalReplacements: []');
|
||||
});
|
||||
|
||||
test('codex.js has lifecycle functions for file management', () => {
|
||||
expect(codexProfileContent).toContain('function onAddRulesProfile');
|
||||
expect(codexProfileContent).toContain('function onRemoveRulesProfile');
|
||||
expect(codexProfileContent).toContain('function onPostConvertRulesProfile');
|
||||
});
|
||||
|
||||
test('codex.js copies AGENTS.md to AGENTS.md (same filename)', () => {
|
||||
expect(codexProfileContent).toContain("'AGENTS.md'");
|
||||
expect(codexProfileContent).toContain('copyFileSync');
|
||||
// Should copy to the same filename (AGENTS.md)
|
||||
expect(codexProfileContent).toMatch(/destFile.*AGENTS\.md/);
|
||||
});
|
||||
|
||||
test('codex.js has proper error handling', () => {
|
||||
expect(codexProfileContent).toContain('try {');
|
||||
expect(codexProfileContent).toContain('} catch (err) {');
|
||||
expect(codexProfileContent).toContain("log('error'");
|
||||
});
|
||||
|
||||
test('codex.js removes AGENTS.md on profile removal', () => {
|
||||
expect(codexProfileContent).toContain('rmSync');
|
||||
expect(codexProfileContent).toContain('force: true');
|
||||
});
|
||||
});
|
||||
44
tests/integration/profiles/cursor-init-functionality.test.js
Normal file
44
tests/integration/profiles/cursor-init-functionality.test.js
Normal file
@@ -0,0 +1,44 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Cursor Profile Initialization Functionality', () => {
|
||||
let cursorProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
const cursorJsPath = path.join(
|
||||
process.cwd(),
|
||||
'src',
|
||||
'profiles',
|
||||
'cursor.js'
|
||||
);
|
||||
cursorProfileContent = fs.readFileSync(cursorJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('cursor.js uses factory pattern with correct configuration', () => {
|
||||
expect(cursorProfileContent).toContain("name: 'cursor'");
|
||||
expect(cursorProfileContent).toContain("displayName: 'Cursor'");
|
||||
expect(cursorProfileContent).toContain("rulesDir: '.cursor/rules'");
|
||||
expect(cursorProfileContent).toContain("profileDir: '.cursor'");
|
||||
});
|
||||
|
||||
test('cursor.js preserves .mdc extension in both input and output', () => {
|
||||
expect(cursorProfileContent).toContain("fileExtension: '.mdc'");
|
||||
expect(cursorProfileContent).toContain("targetExtension: '.mdc'");
|
||||
// Should preserve cursor_rules.mdc filename
|
||||
expect(cursorProfileContent).toContain(
|
||||
"'cursor_rules.mdc': 'cursor_rules.mdc'"
|
||||
);
|
||||
});
|
||||
|
||||
test('cursor.js uses standard tool mappings (no tool renaming)', () => {
|
||||
expect(cursorProfileContent).toContain('COMMON_TOOL_MAPPINGS.STANDARD');
|
||||
// Should not contain custom tool mappings since cursor keeps original names
|
||||
expect(cursorProfileContent).not.toContain('edit_file');
|
||||
expect(cursorProfileContent).not.toContain('apply_diff');
|
||||
});
|
||||
|
||||
test('cursor.js contains correct URL configuration', () => {
|
||||
expect(cursorProfileContent).toContain("url: 'cursor.so'");
|
||||
expect(cursorProfileContent).toContain("docsUrl: 'docs.cursor.com'");
|
||||
});
|
||||
});
|
||||
112
tests/integration/profiles/roo-files-inclusion.test.js
Normal file
112
tests/integration/profiles/roo-files-inclusion.test.js
Normal file
@@ -0,0 +1,112 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import { execSync } from 'child_process';
|
||||
|
||||
describe('Roo Files Inclusion in Package', () => {
|
||||
// This test verifies that the required Roo files are included in the final package
|
||||
|
||||
test('package.json includes assets/** in the "files" array for Roo source files', () => {
|
||||
// Read the package.json file
|
||||
const packageJsonPath = path.join(process.cwd(), 'package.json');
|
||||
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
|
||||
|
||||
// Check if assets/** is included in the files array (which contains Roo files)
|
||||
expect(packageJson.files).toContain('assets/**');
|
||||
});
|
||||
|
||||
test('roo.js profile contains logic for Roo directory creation and file copying', () => {
|
||||
// Read the roo.js profile file
|
||||
const rooJsPath = path.join(process.cwd(), 'src', 'profiles', 'roo.js');
|
||||
const rooJsContent = fs.readFileSync(rooJsPath, 'utf8');
|
||||
|
||||
// Check for the main handler function
|
||||
expect(
|
||||
rooJsContent.includes('onAddRulesProfile(targetDir, assetsDir)')
|
||||
).toBe(true);
|
||||
|
||||
// Check for general recursive copy of assets/roocode
|
||||
expect(
|
||||
rooJsContent.includes('copyRecursiveSync(sourceDir, targetDir)')
|
||||
).toBe(true);
|
||||
|
||||
// Check for updated path handling
|
||||
expect(rooJsContent.includes("path.join(assetsDir, 'roocode')")).toBe(true);
|
||||
|
||||
// Check for .roomodes file copying logic (source and destination paths)
|
||||
expect(rooJsContent.includes("path.join(sourceDir, '.roomodes')")).toBe(
|
||||
true
|
||||
);
|
||||
expect(rooJsContent.includes("path.join(targetDir, '.roomodes')")).toBe(
|
||||
true
|
||||
);
|
||||
|
||||
// Check for mode-specific rule file copying logic
|
||||
expect(rooJsContent.includes('for (const mode of ROO_MODES)')).toBe(true);
|
||||
expect(
|
||||
rooJsContent.includes(
|
||||
'path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`)'
|
||||
)
|
||||
).toBe(true);
|
||||
expect(
|
||||
rooJsContent.includes(
|
||||
"path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`)"
|
||||
)
|
||||
).toBe(true);
|
||||
|
||||
// Check for import of ROO_MODES from profiles.js instead of local definition
|
||||
expect(
|
||||
rooJsContent.includes(
|
||||
"import { ROO_MODES } from '../constants/profiles.js'"
|
||||
)
|
||||
).toBe(true);
|
||||
|
||||
// Verify ROO_MODES is used in the for loop
|
||||
expect(rooJsContent.includes('for (const mode of ROO_MODES)')).toBe(true);
|
||||
|
||||
// Verify mode variable is used in the template strings (this confirms modes are being processed)
|
||||
expect(rooJsContent.includes('rules-${mode}')).toBe(true);
|
||||
expect(rooJsContent.includes('${mode}-rules')).toBe(true);
|
||||
|
||||
// Verify that the ROO_MODES constant is properly imported and used
|
||||
// We should be able to find the template literals that use the mode variable
|
||||
expect(rooJsContent.includes('`rules-${mode}`')).toBe(true);
|
||||
expect(rooJsContent.includes('`${mode}-rules`')).toBe(true);
|
||||
expect(rooJsContent.includes('Copied ${mode}-rules to ${dest}')).toBe(true);
|
||||
|
||||
// Also verify that the expected mode names are defined in the imported constant
|
||||
// by checking that the import is from the correct file that contains all 6 modes
|
||||
const profilesConstantsPath = path.join(
|
||||
process.cwd(),
|
||||
'src',
|
||||
'constants',
|
||||
'profiles.js'
|
||||
);
|
||||
const profilesContent = fs.readFileSync(profilesConstantsPath, 'utf8');
|
||||
|
||||
// Check that ROO_MODES is exported and contains all expected modes
|
||||
expect(profilesContent.includes('export const ROO_MODES')).toBe(true);
|
||||
const expectedModes = [
|
||||
'architect',
|
||||
'ask',
|
||||
'orchestrator',
|
||||
'code',
|
||||
'debug',
|
||||
'test'
|
||||
];
|
||||
expectedModes.forEach((mode) => {
|
||||
expect(profilesContent.includes(`'${mode}'`)).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
test('source Roo files exist in assets directory', () => {
|
||||
// Verify that the source files for Roo integration exist
|
||||
expect(
|
||||
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roo'))
|
||||
).toBe(true);
|
||||
expect(
|
||||
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roomodes'))
|
||||
).toBe(true);
|
||||
});
|
||||
});
|
||||
70
tests/integration/profiles/roo-init-functionality.test.js
Normal file
70
tests/integration/profiles/roo-init-functionality.test.js
Normal file
@@ -0,0 +1,70 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Roo Profile Initialization Functionality', () => {
|
||||
let rooProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
// Read the roo.js profile file content once for all tests
|
||||
const rooJsPath = path.join(process.cwd(), 'src', 'profiles', 'roo.js');
|
||||
rooProfileContent = fs.readFileSync(rooJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('roo.js profile ensures Roo directory structure via onAddRulesProfile', () => {
|
||||
// Check if onAddRulesProfile function exists
|
||||
expect(rooProfileContent).toContain(
|
||||
'onAddRulesProfile(targetDir, assetsDir)'
|
||||
);
|
||||
|
||||
// Check for the general copy of assets/roocode which includes .roo base structure
|
||||
expect(rooProfileContent).toContain(
|
||||
"const sourceDir = path.join(assetsDir, 'roocode');"
|
||||
);
|
||||
expect(rooProfileContent).toContain(
|
||||
'copyRecursiveSync(sourceDir, targetDir);'
|
||||
);
|
||||
|
||||
// Check for the specific .roo modes directory handling
|
||||
expect(rooProfileContent).toContain(
|
||||
"const rooModesDir = path.join(sourceDir, '.roo');"
|
||||
);
|
||||
|
||||
// Check for import of ROO_MODES from profiles.js instead of local definition
|
||||
expect(rooProfileContent).toContain(
|
||||
"import { ROO_MODES } from '../constants/profiles.js';"
|
||||
);
|
||||
});
|
||||
|
||||
test('roo.js profile copies .roomodes file via onAddRulesProfile', () => {
|
||||
expect(rooProfileContent).toContain(
|
||||
'onAddRulesProfile(targetDir, assetsDir)'
|
||||
);
|
||||
|
||||
// Check for the specific .roomodes copy logic
|
||||
expect(rooProfileContent).toContain(
|
||||
"const roomodesSrc = path.join(sourceDir, '.roomodes');"
|
||||
);
|
||||
expect(rooProfileContent).toContain(
|
||||
"const roomodesDest = path.join(targetDir, '.roomodes');"
|
||||
);
|
||||
expect(rooProfileContent).toContain(
|
||||
'fs.copyFileSync(roomodesSrc, roomodesDest);'
|
||||
);
|
||||
});
|
||||
|
||||
test('roo.js profile copies mode-specific rule files via onAddRulesProfile', () => {
|
||||
expect(rooProfileContent).toContain(
|
||||
'onAddRulesProfile(targetDir, assetsDir)'
|
||||
);
|
||||
expect(rooProfileContent).toContain('for (const mode of ROO_MODES)');
|
||||
|
||||
// Check for the specific mode rule file copy logic
|
||||
expect(rooProfileContent).toContain(
|
||||
'const src = path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`);'
|
||||
);
|
||||
expect(rooProfileContent).toContain(
|
||||
"const dest = path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`);"
|
||||
);
|
||||
});
|
||||
});
|
||||
98
tests/integration/profiles/rules-files-inclusion.test.js
Normal file
98
tests/integration/profiles/rules-files-inclusion.test.js
Normal file
@@ -0,0 +1,98 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import { execSync } from 'child_process';
|
||||
|
||||
describe('Rules Files Inclusion in Package', () => {
|
||||
// This test verifies that the required rules files are included in the final package
|
||||
|
||||
test('package.json includes assets/** in the "files" array for rules source files', () => {
|
||||
// Read the package.json file
|
||||
const packageJsonPath = path.join(process.cwd(), 'package.json');
|
||||
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
|
||||
|
||||
// Check if assets/** is included in the files array (which contains rules files)
|
||||
expect(packageJson.files).toContain('assets/**');
|
||||
});
|
||||
|
||||
test('source rules files exist in assets/rules directory', () => {
|
||||
// Verify that the actual rules files exist
|
||||
const rulesDir = path.join(process.cwd(), 'assets', 'rules');
|
||||
expect(fs.existsSync(rulesDir)).toBe(true);
|
||||
|
||||
// Check for the 4 files that currently exist
|
||||
const expectedFiles = [
|
||||
'dev_workflow.mdc',
|
||||
'taskmaster.mdc',
|
||||
'self_improve.mdc',
|
||||
'cursor_rules.mdc'
|
||||
];
|
||||
|
||||
expectedFiles.forEach((file) => {
|
||||
const filePath = path.join(rulesDir, file);
|
||||
expect(fs.existsSync(filePath)).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
test('roo.js profile contains logic for Roo directory creation and file copying', () => {
|
||||
// Read the roo.js profile file
|
||||
const rooJsPath = path.join(process.cwd(), 'src', 'profiles', 'roo.js');
|
||||
const rooJsContent = fs.readFileSync(rooJsPath, 'utf8');
|
||||
|
||||
// Check for the main handler function
|
||||
expect(
|
||||
rooJsContent.includes('onAddRulesProfile(targetDir, assetsDir)')
|
||||
).toBe(true);
|
||||
|
||||
// Check for general recursive copy of assets/roocode
|
||||
expect(
|
||||
rooJsContent.includes('copyRecursiveSync(sourceDir, targetDir)')
|
||||
).toBe(true);
|
||||
|
||||
// Check for updated path handling
|
||||
expect(rooJsContent.includes("path.join(assetsDir, 'roocode')")).toBe(true);
|
||||
|
||||
// Check for .roomodes file copying logic (source and destination paths)
|
||||
expect(rooJsContent.includes("path.join(sourceDir, '.roomodes')")).toBe(
|
||||
true
|
||||
);
|
||||
expect(rooJsContent.includes("path.join(targetDir, '.roomodes')")).toBe(
|
||||
true
|
||||
);
|
||||
|
||||
// Check for mode-specific rule file copying logic
|
||||
expect(rooJsContent.includes('for (const mode of ROO_MODES)')).toBe(true);
|
||||
expect(
|
||||
rooJsContent.includes(
|
||||
'path.join(rooModesDir, `rules-${mode}`, `${mode}-rules`)'
|
||||
)
|
||||
).toBe(true);
|
||||
expect(
|
||||
rooJsContent.includes(
|
||||
"path.join(targetDir, '.roo', `rules-${mode}`, `${mode}-rules`)"
|
||||
)
|
||||
).toBe(true);
|
||||
|
||||
// Check for import of ROO_MODES from profiles.js
|
||||
expect(
|
||||
rooJsContent.includes(
|
||||
"import { ROO_MODES } from '../constants/profiles.js'"
|
||||
)
|
||||
).toBe(true);
|
||||
|
||||
// Verify mode variable is used in the template strings (this confirms modes are being processed)
|
||||
expect(rooJsContent.includes('rules-${mode}')).toBe(true);
|
||||
expect(rooJsContent.includes('${mode}-rules')).toBe(true);
|
||||
});
|
||||
|
||||
test('source Roo files exist in assets directory', () => {
|
||||
// Verify that the source files for Roo integration exist
|
||||
expect(
|
||||
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roo'))
|
||||
).toBe(true);
|
||||
expect(
|
||||
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roomodes'))
|
||||
).toBe(true);
|
||||
});
|
||||
});
|
||||
41
tests/integration/profiles/trae-init-functionality.test.js
Normal file
41
tests/integration/profiles/trae-init-functionality.test.js
Normal file
@@ -0,0 +1,41 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Trae Profile Initialization Functionality', () => {
|
||||
let traeProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
const traeJsPath = path.join(process.cwd(), 'src', 'profiles', 'trae.js');
|
||||
traeProfileContent = fs.readFileSync(traeJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('trae.js uses factory pattern with correct configuration', () => {
|
||||
expect(traeProfileContent).toContain("name: 'trae'");
|
||||
expect(traeProfileContent).toContain("displayName: 'Trae'");
|
||||
expect(traeProfileContent).toContain("rulesDir: '.trae/rules'");
|
||||
expect(traeProfileContent).toContain("profileDir: '.trae'");
|
||||
});
|
||||
|
||||
test('trae.js configures .mdc to .md extension mapping', () => {
|
||||
expect(traeProfileContent).toContain("fileExtension: '.mdc'");
|
||||
expect(traeProfileContent).toContain("targetExtension: '.md'");
|
||||
});
|
||||
|
||||
test('trae.js uses standard tool mappings', () => {
|
||||
expect(traeProfileContent).toContain('COMMON_TOOL_MAPPINGS.STANDARD');
|
||||
// Should contain comment about standard tool names
|
||||
expect(traeProfileContent).toContain('standard tool names');
|
||||
});
|
||||
|
||||
test('trae.js contains correct URL configuration', () => {
|
||||
expect(traeProfileContent).toContain("url: 'trae.ai'");
|
||||
expect(traeProfileContent).toContain("docsUrl: 'docs.trae.ai'");
|
||||
});
|
||||
|
||||
test('trae.js has MCP configuration disabled', () => {
|
||||
expect(traeProfileContent).toContain('mcpConfig: false');
|
||||
expect(traeProfileContent).toContain(
|
||||
"mcpConfigName: 'trae_mcp_settings.json'"
|
||||
);
|
||||
});
|
||||
});
|
||||
@@ -0,0 +1,39 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Windsurf Profile Initialization Functionality', () => {
|
||||
let windsurfProfileContent;
|
||||
|
||||
beforeAll(() => {
|
||||
const windsurfJsPath = path.join(
|
||||
process.cwd(),
|
||||
'src',
|
||||
'profiles',
|
||||
'windsurf.js'
|
||||
);
|
||||
windsurfProfileContent = fs.readFileSync(windsurfJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('windsurf.js uses factory pattern with correct configuration', () => {
|
||||
expect(windsurfProfileContent).toContain("name: 'windsurf'");
|
||||
expect(windsurfProfileContent).toContain("displayName: 'Windsurf'");
|
||||
expect(windsurfProfileContent).toContain("rulesDir: '.windsurf/rules'");
|
||||
expect(windsurfProfileContent).toContain("profileDir: '.windsurf'");
|
||||
});
|
||||
|
||||
test('windsurf.js configures .mdc to .md extension mapping', () => {
|
||||
expect(windsurfProfileContent).toContain("fileExtension: '.mdc'");
|
||||
expect(windsurfProfileContent).toContain("targetExtension: '.md'");
|
||||
});
|
||||
|
||||
test('windsurf.js uses standard tool mappings', () => {
|
||||
expect(windsurfProfileContent).toContain('COMMON_TOOL_MAPPINGS.STANDARD');
|
||||
// Should contain comment about standard tool names
|
||||
expect(windsurfProfileContent).toContain('standard tool names');
|
||||
});
|
||||
|
||||
test('windsurf.js contains correct URL configuration', () => {
|
||||
expect(windsurfProfileContent).toContain("url: 'windsurf.com'");
|
||||
expect(windsurfProfileContent).toContain("docsUrl: 'docs.windsurf.com'");
|
||||
});
|
||||
});
|
||||
@@ -1,71 +0,0 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import { execSync } from 'child_process';
|
||||
|
||||
describe('Roo Files Inclusion in Package', () => {
|
||||
// This test verifies that the required Roo files are included in the final package
|
||||
|
||||
test('package.json includes assets/** in the "files" array for Roo source files', () => {
|
||||
// Read the package.json file
|
||||
const packageJsonPath = path.join(process.cwd(), 'package.json');
|
||||
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
|
||||
|
||||
// Check if assets/** is included in the files array (which contains Roo files)
|
||||
expect(packageJson.files).toContain('assets/**');
|
||||
});
|
||||
|
||||
test('init.js creates Roo directories and copies files', () => {
|
||||
// Read the init.js file
|
||||
const initJsPath = path.join(process.cwd(), 'scripts', 'init.js');
|
||||
const initJsContent = fs.readFileSync(initJsPath, 'utf8');
|
||||
|
||||
// Check for Roo directory creation (using more flexible pattern matching)
|
||||
const hasRooDir = initJsContent.includes(
|
||||
"ensureDirectoryExists(path.join(targetDir, '.roo'))"
|
||||
);
|
||||
expect(hasRooDir).toBe(true);
|
||||
|
||||
// Check for .roomodes file copying using hardcoded path
|
||||
const hasRoomodes = initJsContent.includes(
|
||||
"path.join(targetDir, '.roomodes')"
|
||||
);
|
||||
expect(hasRoomodes).toBe(true);
|
||||
|
||||
// Check for local ROO_MODES definition and usage
|
||||
const hasRooModes = initJsContent.includes('ROO_MODES');
|
||||
expect(hasRooModes).toBe(true);
|
||||
|
||||
// Check for local ROO_MODES array definition
|
||||
const hasLocalRooModes = initJsContent.includes(
|
||||
"const ROO_MODES = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test']"
|
||||
);
|
||||
expect(hasLocalRooModes).toBe(true);
|
||||
|
||||
// Check for mode-specific patterns (these will still be present in the local array)
|
||||
const hasArchitect = initJsContent.includes('architect');
|
||||
const hasAsk = initJsContent.includes('ask');
|
||||
const hasBoomerang = initJsContent.includes('boomerang');
|
||||
const hasCode = initJsContent.includes('code');
|
||||
const hasDebug = initJsContent.includes('debug');
|
||||
const hasTest = initJsContent.includes('test');
|
||||
|
||||
expect(hasArchitect).toBe(true);
|
||||
expect(hasAsk).toBe(true);
|
||||
expect(hasBoomerang).toBe(true);
|
||||
expect(hasCode).toBe(true);
|
||||
expect(hasDebug).toBe(true);
|
||||
expect(hasTest).toBe(true);
|
||||
});
|
||||
|
||||
test('source Roo files exist in assets directory', () => {
|
||||
// Verify that the source files for Roo integration exist
|
||||
expect(
|
||||
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roo'))
|
||||
).toBe(true);
|
||||
expect(
|
||||
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roomodes'))
|
||||
).toBe(true);
|
||||
});
|
||||
});
|
||||
@@ -1,67 +0,0 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
describe('Roo Initialization Functionality', () => {
|
||||
let initJsContent;
|
||||
|
||||
beforeAll(() => {
|
||||
// Read the init.js file content once for all tests
|
||||
const initJsPath = path.join(process.cwd(), 'scripts', 'init.js');
|
||||
initJsContent = fs.readFileSync(initJsPath, 'utf8');
|
||||
});
|
||||
|
||||
test('init.js creates Roo directories in createProjectStructure function', () => {
|
||||
// Check if createProjectStructure function exists
|
||||
expect(initJsContent).toContain('function createProjectStructure');
|
||||
|
||||
// Check for the line that creates the .roo directory
|
||||
const hasRooDir = initJsContent.includes(
|
||||
"ensureDirectoryExists(path.join(targetDir, '.roo'))"
|
||||
);
|
||||
expect(hasRooDir).toBe(true);
|
||||
|
||||
// Check for the line that creates .roo/rules directory
|
||||
const hasRooRulesDir = initJsContent.includes(
|
||||
"ensureDirectoryExists(path.join(targetDir, '.roo/rules'))"
|
||||
);
|
||||
expect(hasRooRulesDir).toBe(true);
|
||||
|
||||
// Check for the for loop that creates mode-specific directories using local ROO_MODES array
|
||||
const hasRooModeLoop = initJsContent.includes(
|
||||
'for (const mode of ROO_MODES)'
|
||||
);
|
||||
expect(hasRooModeLoop).toBe(true);
|
||||
|
||||
// Check for local ROO_MODES definition
|
||||
const hasLocalRooModes = initJsContent.includes(
|
||||
"const ROO_MODES = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test']"
|
||||
);
|
||||
expect(hasLocalRooModes).toBe(true);
|
||||
});
|
||||
|
||||
test('init.js copies Roo files from assets/roocode directory', () => {
|
||||
// Check for the .roomodes case in the copyTemplateFile function
|
||||
const casesRoomodes = initJsContent.includes("case '.roomodes':");
|
||||
expect(casesRoomodes).toBe(true);
|
||||
|
||||
// Check that assets/roocode appears somewhere in the file
|
||||
const hasRoocodePath = initJsContent.includes("'assets', 'roocode'");
|
||||
expect(hasRoocodePath).toBe(true);
|
||||
|
||||
// Check that roomodes file is copied
|
||||
const copiesRoomodes = initJsContent.includes(
|
||||
"copyTemplateFile('.roomodes'"
|
||||
);
|
||||
expect(copiesRoomodes).toBe(true);
|
||||
});
|
||||
|
||||
test('init.js has code to copy rule files for each mode', () => {
|
||||
// Look for template copying for rule files
|
||||
const hasModeRulesCopying =
|
||||
initJsContent.includes('copyTemplateFile(') &&
|
||||
initJsContent.includes('rules-') &&
|
||||
initJsContent.includes('-rules');
|
||||
expect(hasModeRulesCopying).toBe(true);
|
||||
});
|
||||
});
|
||||
115
tests/unit/ai-providers/claude-code.test.js
Normal file
115
tests/unit/ai-providers/claude-code.test.js
Normal file
@@ -0,0 +1,115 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock the claude-code SDK module
|
||||
jest.unstable_mockModule(
|
||||
'../../../src/ai-providers/custom-sdk/claude-code/index.js',
|
||||
() => ({
|
||||
createClaudeCode: jest.fn(() => {
|
||||
const provider = (modelId, settings) => ({
|
||||
// Mock language model
|
||||
id: modelId,
|
||||
settings
|
||||
});
|
||||
provider.languageModel = jest.fn((id, settings) => ({ id, settings }));
|
||||
provider.chat = provider.languageModel;
|
||||
return provider;
|
||||
})
|
||||
})
|
||||
);
|
||||
|
||||
// Mock the base provider
|
||||
jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
|
||||
BaseAIProvider: class {
|
||||
constructor() {
|
||||
this.name = 'Base Provider';
|
||||
}
|
||||
handleError(context, error) {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
}));
|
||||
|
||||
// Import after mocking
|
||||
const { ClaudeCodeProvider } = await import(
|
||||
'../../../src/ai-providers/claude-code.js'
|
||||
);
|
||||
|
||||
describe('ClaudeCodeProvider', () => {
|
||||
let provider;
|
||||
|
||||
beforeEach(() => {
|
||||
provider = new ClaudeCodeProvider();
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
|
||||
describe('constructor', () => {
|
||||
it('should set the provider name to Claude Code', () => {
|
||||
expect(provider.name).toBe('Claude Code');
|
||||
});
|
||||
});
|
||||
|
||||
describe('validateAuth', () => {
|
||||
it('should not throw an error (no API key required)', () => {
|
||||
expect(() => provider.validateAuth({})).not.toThrow();
|
||||
});
|
||||
|
||||
it('should not require any parameters', () => {
|
||||
expect(() => provider.validateAuth()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should work with any params passed', () => {
|
||||
expect(() =>
|
||||
provider.validateAuth({
|
||||
apiKey: 'some-key',
|
||||
baseURL: 'https://example.com'
|
||||
})
|
||||
).not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('getClient', () => {
|
||||
it('should return a claude code client', () => {
|
||||
const client = provider.getClient({});
|
||||
expect(client).toBeDefined();
|
||||
expect(typeof client).toBe('function');
|
||||
});
|
||||
|
||||
it('should create client without API key or base URL', () => {
|
||||
const client = provider.getClient({});
|
||||
expect(client).toBeDefined();
|
||||
});
|
||||
|
||||
it('should handle params even though they are not used', () => {
|
||||
const client = provider.getClient({
|
||||
baseURL: 'https://example.com',
|
||||
apiKey: 'unused-key'
|
||||
});
|
||||
expect(client).toBeDefined();
|
||||
});
|
||||
|
||||
it('should have languageModel and chat methods', () => {
|
||||
const client = provider.getClient({});
|
||||
expect(client.languageModel).toBeDefined();
|
||||
expect(client.chat).toBeDefined();
|
||||
expect(client.chat).toBe(client.languageModel);
|
||||
});
|
||||
});
|
||||
|
||||
describe('error handling', () => {
|
||||
it('should handle client initialization errors', async () => {
|
||||
// Force an error by making createClaudeCode throw
|
||||
const { createClaudeCode } = await import(
|
||||
'../../../src/ai-providers/custom-sdk/claude-code/index.js'
|
||||
);
|
||||
createClaudeCode.mockImplementationOnce(() => {
|
||||
throw new Error('Mock initialization error');
|
||||
});
|
||||
|
||||
// Create a new provider instance to use the mocked createClaudeCode
|
||||
const errorProvider = new ClaudeCodeProvider();
|
||||
expect(() => errorProvider.getClient({})).toThrow(
|
||||
'Mock initialization error'
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -0,0 +1,237 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock modules before importing
|
||||
jest.unstable_mockModule('@ai-sdk/provider', () => ({
|
||||
NoSuchModelError: class NoSuchModelError extends Error {
|
||||
constructor({ modelId, modelType }) {
|
||||
super(`No such model: ${modelId}`);
|
||||
this.modelId = modelId;
|
||||
this.modelType = modelType;
|
||||
}
|
||||
}
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('@ai-sdk/provider-utils', () => ({
|
||||
generateId: jest.fn(() => 'test-id-123')
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../src/ai-providers/custom-sdk/claude-code/message-converter.js',
|
||||
() => ({
|
||||
convertToClaudeCodeMessages: jest.fn((prompt) => ({
|
||||
messagesPrompt: 'converted-prompt',
|
||||
systemPrompt: 'system'
|
||||
}))
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../src/ai-providers/custom-sdk/claude-code/json-extractor.js',
|
||||
() => ({
|
||||
extractJson: jest.fn((text) => text)
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../src/ai-providers/custom-sdk/claude-code/errors.js',
|
||||
() => ({
|
||||
createAPICallError: jest.fn((opts) => new Error(opts.message)),
|
||||
createAuthenticationError: jest.fn((opts) => new Error(opts.message))
|
||||
})
|
||||
);
|
||||
|
||||
// This mock will be controlled by tests
|
||||
let mockClaudeCodeModule = null;
|
||||
jest.unstable_mockModule('@anthropic-ai/claude-code', () => {
|
||||
if (mockClaudeCodeModule) {
|
||||
return mockClaudeCodeModule;
|
||||
}
|
||||
throw new Error("Cannot find module '@anthropic-ai/claude-code'");
|
||||
});
|
||||
|
||||
// Import the module under test
|
||||
const { ClaudeCodeLanguageModel } = await import(
|
||||
'../../../../../src/ai-providers/custom-sdk/claude-code/language-model.js'
|
||||
);
|
||||
|
||||
describe('ClaudeCodeLanguageModel', () => {
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Reset the module mock
|
||||
mockClaudeCodeModule = null;
|
||||
// Clear module cache to ensure fresh imports
|
||||
jest.resetModules();
|
||||
});
|
||||
|
||||
describe('constructor', () => {
|
||||
it('should initialize with valid model ID', () => {
|
||||
const model = new ClaudeCodeLanguageModel({
|
||||
id: 'opus',
|
||||
settings: { maxTurns: 5 }
|
||||
});
|
||||
|
||||
expect(model.modelId).toBe('opus');
|
||||
expect(model.settings).toEqual({ maxTurns: 5 });
|
||||
expect(model.provider).toBe('claude-code');
|
||||
});
|
||||
|
||||
it('should throw NoSuchModelError for invalid model ID', async () => {
|
||||
expect(
|
||||
() =>
|
||||
new ClaudeCodeLanguageModel({
|
||||
id: '',
|
||||
settings: {}
|
||||
})
|
||||
).toThrow('No such model: ');
|
||||
|
||||
expect(
|
||||
() =>
|
||||
new ClaudeCodeLanguageModel({
|
||||
id: null,
|
||||
settings: {}
|
||||
})
|
||||
).toThrow('No such model: null');
|
||||
});
|
||||
});
|
||||
|
||||
describe('lazy loading of @anthropic-ai/claude-code', () => {
|
||||
it('should throw error when package is not installed', async () => {
|
||||
// Keep mockClaudeCodeModule as null to simulate missing package
|
||||
const model = new ClaudeCodeLanguageModel({
|
||||
id: 'opus',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
await expect(
|
||||
model.doGenerate({
|
||||
prompt: [{ role: 'user', content: 'test' }],
|
||||
mode: { type: 'regular' }
|
||||
})
|
||||
).rejects.toThrow(
|
||||
"Claude Code SDK is not installed. Please install '@anthropic-ai/claude-code' to use the claude-code provider."
|
||||
);
|
||||
});
|
||||
|
||||
it('should load package successfully when available', async () => {
|
||||
// Mock successful package load
|
||||
const mockQuery = jest.fn(async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: { content: [{ type: 'text', text: 'Hello' }] }
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'done',
|
||||
usage: { output_tokens: 10, input_tokens: 5 }
|
||||
};
|
||||
});
|
||||
|
||||
mockClaudeCodeModule = {
|
||||
query: mockQuery,
|
||||
AbortError: class AbortError extends Error {}
|
||||
};
|
||||
|
||||
// Need to re-import to get fresh module with mocks
|
||||
jest.resetModules();
|
||||
const { ClaudeCodeLanguageModel: FreshModel } = await import(
|
||||
'../../../../../src/ai-providers/custom-sdk/claude-code/language-model.js'
|
||||
);
|
||||
|
||||
const model = new FreshModel({
|
||||
id: 'opus',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
const result = await model.doGenerate({
|
||||
prompt: [{ role: 'user', content: 'test' }],
|
||||
mode: { type: 'regular' }
|
||||
});
|
||||
|
||||
expect(result.text).toBe('Hello');
|
||||
expect(mockQuery).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should only attempt to load package once', async () => {
|
||||
// Get a fresh import to ensure clean state
|
||||
jest.resetModules();
|
||||
const { ClaudeCodeLanguageModel: TestModel } = await import(
|
||||
'../../../../../src/ai-providers/custom-sdk/claude-code/language-model.js'
|
||||
);
|
||||
|
||||
const model = new TestModel({
|
||||
id: 'opus',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
// First call should throw
|
||||
await expect(
|
||||
model.doGenerate({
|
||||
prompt: [{ role: 'user', content: 'test' }],
|
||||
mode: { type: 'regular' }
|
||||
})
|
||||
).rejects.toThrow('Claude Code SDK is not installed');
|
||||
|
||||
// Second call should also throw without trying to load again
|
||||
await expect(
|
||||
model.doGenerate({
|
||||
prompt: [{ role: 'user', content: 'test' }],
|
||||
mode: { type: 'regular' }
|
||||
})
|
||||
).rejects.toThrow('Claude Code SDK is not installed');
|
||||
});
|
||||
});
|
||||
|
||||
describe('generateUnsupportedWarnings', () => {
|
||||
it('should generate warnings for unsupported parameters', () => {
|
||||
const model = new ClaudeCodeLanguageModel({
|
||||
id: 'opus',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
const warnings = model.generateUnsupportedWarnings({
|
||||
temperature: 0.7,
|
||||
maxTokens: 1000,
|
||||
topP: 0.9,
|
||||
seed: 42
|
||||
});
|
||||
|
||||
expect(warnings).toHaveLength(4);
|
||||
expect(warnings[0]).toEqual({
|
||||
type: 'unsupported-setting',
|
||||
setting: 'temperature',
|
||||
details:
|
||||
'Claude Code CLI does not support the temperature parameter. It will be ignored.'
|
||||
});
|
||||
});
|
||||
|
||||
it('should return empty array when no unsupported parameters', () => {
|
||||
const model = new ClaudeCodeLanguageModel({
|
||||
id: 'opus',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
const warnings = model.generateUnsupportedWarnings({});
|
||||
expect(warnings).toEqual([]);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getModel', () => {
|
||||
it('should map model IDs correctly', () => {
|
||||
const model = new ClaudeCodeLanguageModel({
|
||||
id: 'opus',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
expect(model.getModel()).toBe('opus');
|
||||
});
|
||||
|
||||
it('should return unmapped model IDs as-is', () => {
|
||||
const model = new ClaudeCodeLanguageModel({
|
||||
id: 'custom-model',
|
||||
settings: {}
|
||||
});
|
||||
|
||||
expect(model.getModel()).toBe('custom-model');
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -180,6 +180,11 @@ jest.unstable_mockModule('../../src/ai-providers/index.js', () => ({
|
||||
generateText: jest.fn(),
|
||||
streamText: jest.fn(),
|
||||
generateObject: jest.fn()
|
||||
})),
|
||||
ClaudeCodeProvider: jest.fn(() => ({
|
||||
generateText: jest.fn(),
|
||||
streamText: jest.fn(),
|
||||
generateObject: jest.fn()
|
||||
}))
|
||||
}));
|
||||
|
||||
|
||||
@@ -92,6 +92,10 @@ jest.mock('../../scripts/modules/utils.js', () => ({
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import { setupCLI } from '../../scripts/modules/commands.js';
|
||||
import {
|
||||
RULES_SETUP_ACTION,
|
||||
RULES_ACTIONS
|
||||
} from '../../src/constants/rules-actions.js';
|
||||
|
||||
describe('Commands Module - CLI Setup and Integration', () => {
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
@@ -319,3 +323,142 @@ describe('Update check functionality', () => {
|
||||
expect(consoleLogSpy.mock.calls[0][0]).toContain('1.1.0');
|
||||
});
|
||||
});
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// Rules command tests (add/remove)
|
||||
// -----------------------------------------------------------------------------
|
||||
describe('rules command', () => {
|
||||
let program;
|
||||
let mockConsoleLog;
|
||||
let mockConsoleError;
|
||||
let mockExit;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
program = setupCLI();
|
||||
mockConsoleLog = jest.spyOn(console, 'log').mockImplementation(() => {});
|
||||
mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
mockExit = jest.spyOn(process, 'exit').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
test('should handle rules add <profile> command', async () => {
|
||||
// Simulate: task-master rules add roo
|
||||
await program.parseAsync(['rules', RULES_ACTIONS.ADD, 'roo'], {
|
||||
from: 'user'
|
||||
});
|
||||
// Expect some log output indicating success
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringMatching(/adding rules for profile: roo/i)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringMatching(/completed adding rules for profile: roo/i)
|
||||
);
|
||||
// Should not exit with error
|
||||
expect(mockExit).not.toHaveBeenCalledWith(1);
|
||||
});
|
||||
|
||||
test('should handle rules remove <profile> command', async () => {
|
||||
// Simulate: task-master rules remove roo --force
|
||||
await program.parseAsync(
|
||||
['rules', RULES_ACTIONS.REMOVE, 'roo', '--force'],
|
||||
{
|
||||
from: 'user'
|
||||
}
|
||||
);
|
||||
// Expect some log output indicating removal
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringMatching(/removing rules for profile: roo/i)
|
||||
);
|
||||
expect(mockConsoleLog).toHaveBeenCalledWith(
|
||||
expect.stringMatching(
|
||||
/Summary for roo: (Rules directory removed|Skipped \(default or protected files\))/i
|
||||
)
|
||||
);
|
||||
// Should not exit with error
|
||||
expect(mockExit).not.toHaveBeenCalledWith(1);
|
||||
});
|
||||
|
||||
test(`should handle rules --${RULES_SETUP_ACTION} command`, async () => {
|
||||
// For this test, we'll verify that the command doesn't crash and exits gracefully
|
||||
// Since mocking ES modules is complex, we'll test the command structure instead
|
||||
|
||||
// Create a spy on console.log to capture any output
|
||||
const consoleSpy = jest.spyOn(console, 'log').mockImplementation(() => {});
|
||||
|
||||
// Mock process.exit to prevent actual exit and capture the call
|
||||
const exitSpy = jest.spyOn(process, 'exit').mockImplementation(() => {});
|
||||
|
||||
try {
|
||||
// The command should be recognized and not throw an error about invalid action
|
||||
// We expect it to attempt to run the interactive setup, but since we can't easily
|
||||
// mock the ES module, we'll just verify the command structure is correct
|
||||
|
||||
// This test verifies that:
|
||||
// 1. The --setup flag is recognized as a valid option
|
||||
// 2. The command doesn't exit with error code 1 due to invalid action
|
||||
// 3. The command structure is properly set up
|
||||
|
||||
// Note: In a real scenario, this would call runInteractiveProfilesSetup()
|
||||
// but for testing purposes, we're focusing on command structure validation
|
||||
|
||||
expect(() => {
|
||||
// Test that the command option is properly configured
|
||||
const command = program.commands.find((cmd) => cmd.name() === 'rules');
|
||||
expect(command).toBeDefined();
|
||||
|
||||
// Check that the --setup option exists
|
||||
const setupOption = command.options.find(
|
||||
(opt) => opt.long === `--${RULES_SETUP_ACTION}`
|
||||
);
|
||||
expect(setupOption).toBeDefined();
|
||||
expect(setupOption.description).toContain('interactive setup');
|
||||
}).not.toThrow();
|
||||
|
||||
// Verify the command structure is valid
|
||||
expect(mockExit).not.toHaveBeenCalledWith(1);
|
||||
} finally {
|
||||
consoleSpy.mockRestore();
|
||||
exitSpy.mockRestore();
|
||||
}
|
||||
});
|
||||
|
||||
test('should show error for invalid action', async () => {
|
||||
// Simulate: task-master rules invalid-action
|
||||
await program.parseAsync(['rules', 'invalid-action'], { from: 'user' });
|
||||
|
||||
// Should show error for invalid action
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
expect.stringMatching(/Error: Invalid or missing action/i)
|
||||
);
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
expect.stringMatching(
|
||||
new RegExp(
|
||||
`For interactive setup, use: task-master rules --${RULES_SETUP_ACTION}`,
|
||||
'i'
|
||||
)
|
||||
)
|
||||
);
|
||||
expect(mockExit).toHaveBeenCalledWith(1);
|
||||
});
|
||||
|
||||
test('should show error when no action provided', async () => {
|
||||
// Simulate: task-master rules (no action)
|
||||
await program.parseAsync(['rules'], { from: 'user' });
|
||||
|
||||
// Should show error for missing action
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
expect.stringMatching(/Error: Invalid or missing action 'none'/i)
|
||||
);
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
expect.stringMatching(
|
||||
new RegExp(
|
||||
`For interactive setup, use: task-master rules --${RULES_SETUP_ACTION}`,
|
||||
'i'
|
||||
)
|
||||
)
|
||||
);
|
||||
expect(mockExit).toHaveBeenCalledWith(1);
|
||||
});
|
||||
});
|
||||
|
||||
@@ -129,7 +129,7 @@ const DEFAULT_CONFIG = {
|
||||
fallback: {
|
||||
provider: 'anthropic',
|
||||
modelId: 'claude-3-5-sonnet',
|
||||
maxTokens: 64000,
|
||||
maxTokens: 8192,
|
||||
temperature: 0.2
|
||||
}
|
||||
},
|
||||
@@ -266,6 +266,7 @@ describe('Validation Functions', () => {
|
||||
expect(configManager.validateProvider('perplexity')).toBe(true);
|
||||
expect(configManager.validateProvider('ollama')).toBe(true);
|
||||
expect(configManager.validateProvider('openrouter')).toBe(true);
|
||||
expect(configManager.validateProvider('bedrock')).toBe(true);
|
||||
});
|
||||
|
||||
test('validateProvider should return false for invalid providers', () => {
|
||||
@@ -713,17 +714,25 @@ describe('isConfigFilePresent', () => {
|
||||
|
||||
// --- getAllProviders Tests ---
|
||||
describe('getAllProviders', () => {
|
||||
test('should return list of providers from supported-models.json', () => {
|
||||
test('should return all providers from ALL_PROVIDERS constant', () => {
|
||||
// Arrange: Ensure config is loaded with real data
|
||||
configManager.getConfig(null, true); // Force load using the mock that returns real data
|
||||
|
||||
// Act
|
||||
const providers = configManager.getAllProviders();
|
||||
|
||||
// Assert
|
||||
// Assert against the actual keys in the REAL loaded data
|
||||
const expectedProviders = Object.keys(REAL_SUPPORTED_MODELS_DATA);
|
||||
expect(providers).toEqual(expect.arrayContaining(expectedProviders));
|
||||
expect(providers.length).toBe(expectedProviders.length);
|
||||
// getAllProviders() should return the same as the ALL_PROVIDERS constant
|
||||
expect(providers).toEqual(configManager.ALL_PROVIDERS);
|
||||
expect(providers.length).toBe(configManager.ALL_PROVIDERS.length);
|
||||
|
||||
// Verify it includes both validated and custom providers
|
||||
expect(providers).toEqual(
|
||||
expect.arrayContaining(configManager.VALIDATED_PROVIDERS)
|
||||
);
|
||||
expect(providers).toEqual(
|
||||
expect.arrayContaining(Object.values(configManager.CUSTOM_PROVIDERS))
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
|
||||
@@ -75,7 +75,7 @@ const DEFAULT_CONFIG = {
|
||||
fallback: {
|
||||
provider: 'anthropic',
|
||||
modelId: 'claude-3-5-sonnet',
|
||||
maxTokens: 64000,
|
||||
maxTokens: 8192,
|
||||
temperature: 0.2
|
||||
}
|
||||
},
|
||||
|
||||
538
tests/unit/initialize-project.test.js
Normal file
538
tests/unit/initialize-project.test.js
Normal file
@@ -0,0 +1,538 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Reduce noise in test output
|
||||
process.env.TASKMASTER_LOG_LEVEL = 'error';
|
||||
|
||||
// === Mock everything early ===
|
||||
jest.mock('child_process', () => ({ execSync: jest.fn() }));
|
||||
jest.mock('fs', () => ({
|
||||
...jest.requireActual('fs'),
|
||||
mkdirSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
readFileSync: jest.fn(),
|
||||
appendFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdtempSync: jest.requireActual('fs').mkdtempSync,
|
||||
rmSync: jest.requireActual('fs').rmSync
|
||||
}));
|
||||
|
||||
// Mock console methods to suppress output
|
||||
const consoleMethods = ['log', 'info', 'warn', 'error', 'clear'];
|
||||
consoleMethods.forEach((method) => {
|
||||
global.console[method] = jest.fn();
|
||||
});
|
||||
|
||||
// Mock ES modules using unstable_mockModule
|
||||
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
|
||||
isSilentMode: jest.fn(() => true),
|
||||
enableSilentMode: jest.fn(),
|
||||
log: jest.fn(),
|
||||
findProjectRoot: jest.fn(() => process.cwd())
|
||||
}));
|
||||
|
||||
// Mock git-utils module
|
||||
jest.unstable_mockModule('../../scripts/modules/utils/git-utils.js', () => ({
|
||||
insideGitWorkTree: jest.fn(() => false)
|
||||
}));
|
||||
|
||||
// Mock rule transformer
|
||||
jest.unstable_mockModule('../../src/utils/rule-transformer.js', () => ({
|
||||
convertAllRulesToProfileRules: jest.fn(),
|
||||
getRulesProfile: jest.fn(() => ({
|
||||
conversionConfig: {},
|
||||
globalReplacements: []
|
||||
}))
|
||||
}));
|
||||
|
||||
// Mock any other modules that might output or do real operations
|
||||
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
|
||||
createDefaultConfig: jest.fn(() => ({ models: {}, project: {} })),
|
||||
saveConfig: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock display libraries
|
||||
jest.mock('figlet', () => ({ textSync: jest.fn(() => 'MOCKED BANNER') }));
|
||||
jest.mock('boxen', () => jest.fn(() => 'MOCKED BOX'));
|
||||
jest.mock('gradient-string', () => jest.fn(() => jest.fn((text) => text)));
|
||||
jest.mock('chalk', () => ({
|
||||
blue: jest.fn((text) => text),
|
||||
green: jest.fn((text) => text),
|
||||
red: jest.fn((text) => text),
|
||||
yellow: jest.fn((text) => text),
|
||||
cyan: jest.fn((text) => text),
|
||||
white: jest.fn((text) => text),
|
||||
dim: jest.fn((text) => text),
|
||||
bold: jest.fn((text) => text),
|
||||
underline: jest.fn((text) => text)
|
||||
}));
|
||||
|
||||
const { execSync } = jest.requireMock('child_process');
|
||||
const mockFs = jest.requireMock('fs');
|
||||
|
||||
// Import the mocked modules
|
||||
const mockUtils = await import('../../scripts/modules/utils.js');
|
||||
const mockGitUtils = await import('../../scripts/modules/utils/git-utils.js');
|
||||
const mockRuleTransformer = await import('../../src/utils/rule-transformer.js');
|
||||
|
||||
// Import after mocks
|
||||
const { initializeProject } = await import('../../scripts/init.js');
|
||||
|
||||
describe('initializeProject – Git / Alias flag logic', () => {
|
||||
let tmpDir;
|
||||
const origCwd = process.cwd();
|
||||
|
||||
// Standard non-interactive options for all tests
|
||||
const baseOptions = {
|
||||
yes: true,
|
||||
skipInstall: true,
|
||||
name: 'test-project',
|
||||
description: 'Test project description',
|
||||
version: '1.0.0',
|
||||
author: 'Test Author'
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Set up basic fs mocks
|
||||
mockFs.mkdirSync.mockImplementation(() => {});
|
||||
mockFs.writeFileSync.mockImplementation(() => {});
|
||||
mockFs.readFileSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
|
||||
return 'mock template content';
|
||||
}
|
||||
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
|
||||
return '# existing config';
|
||||
}
|
||||
return '';
|
||||
});
|
||||
mockFs.appendFileSync.mockImplementation(() => {});
|
||||
mockFs.existsSync.mockImplementation((filePath) => {
|
||||
// Template source files exist
|
||||
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
|
||||
return true;
|
||||
}
|
||||
// Shell config files exist by default
|
||||
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
});
|
||||
|
||||
// Reset utils mocks
|
||||
mockUtils.isSilentMode.mockReturnValue(true);
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
|
||||
// Default execSync mock
|
||||
execSync.mockImplementation(() => '');
|
||||
|
||||
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-init-'));
|
||||
process.chdir(tmpDir);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
process.chdir(origCwd);
|
||||
fs.rmSync(tmpDir, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
describe('Git Flag Behavior', () => {
|
||||
it('completes successfully with git:false in dry run', async () => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result.dryRun).toBe(true);
|
||||
});
|
||||
|
||||
it('completes successfully with git:true when not inside repo', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('completes successfully when already inside repo', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(true);
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('uses default git behavior without errors', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles git command failures gracefully', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
execSync.mockImplementation((cmd) => {
|
||||
if (cmd.includes('git init')) {
|
||||
throw new Error('git not found');
|
||||
}
|
||||
return '';
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Alias Flag Behavior', () => {
|
||||
it('completes successfully when aliases:true and environment is set up', async () => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
process.env.SHELL = '/bin/zsh';
|
||||
process.env.HOME = '/mock/home';
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: true,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
|
||||
it('completes successfully when aliases:false', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles missing shell gracefully', async () => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
delete process.env.SHELL; // Remove shell env var
|
||||
process.env.HOME = '/mock/home';
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: true,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
|
||||
it('handles missing shell config file gracefully', async () => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
process.env.SHELL = '/bin/zsh';
|
||||
process.env.HOME = '/mock/home';
|
||||
|
||||
// Shell config doesn't exist
|
||||
mockFs.existsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
|
||||
return false;
|
||||
}
|
||||
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: true,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
});
|
||||
|
||||
describe('Flag Combinations', () => {
|
||||
it.each`
|
||||
git | aliases | description
|
||||
${true} | ${true} | ${'git & aliases enabled'}
|
||||
${true} | ${false} | ${'git enabled, aliases disabled'}
|
||||
${false} | ${true} | ${'git disabled, aliases enabled'}
|
||||
${false} | ${false} | ${'git & aliases disabled'}
|
||||
`('handles $description without errors', async ({ git, aliases }) => {
|
||||
const originalShell = process.env.SHELL;
|
||||
const originalHome = process.env.HOME;
|
||||
|
||||
if (aliases) {
|
||||
process.env.SHELL = '/bin/zsh';
|
||||
process.env.HOME = '/mock/home';
|
||||
}
|
||||
|
||||
if (git) {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
}
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git,
|
||||
aliases,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
process.env.SHELL = originalShell;
|
||||
process.env.HOME = originalHome;
|
||||
});
|
||||
});
|
||||
|
||||
describe('Dry Run Mode', () => {
|
||||
it('returns dry run result and performs no operations', async () => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: true,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result.dryRun).toBe(true);
|
||||
});
|
||||
|
||||
it.each`
|
||||
git | aliases | description
|
||||
${true} | ${false} | ${'git-specific behavior'}
|
||||
${false} | ${false} | ${'no-git behavior'}
|
||||
${false} | ${true} | ${'alias behavior'}
|
||||
`('shows $description in dry run', async ({ git, aliases }) => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git,
|
||||
aliases,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result.dryRun).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Error Handling', () => {
|
||||
it('handles npm install failures gracefully', async () => {
|
||||
execSync.mockImplementation((cmd) => {
|
||||
if (cmd.includes('npm install')) {
|
||||
throw new Error('npm failed');
|
||||
}
|
||||
return '';
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
skipInstall: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles git failures gracefully', async () => {
|
||||
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
|
||||
execSync.mockImplementation((cmd) => {
|
||||
if (cmd.includes('git init')) {
|
||||
throw new Error('git failed');
|
||||
}
|
||||
return '';
|
||||
});
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles file system errors gracefully', async () => {
|
||||
mockFs.mkdirSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Should handle file system errors gracefully
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Non-Interactive Mode', () => {
|
||||
it('bypasses prompts with yes:true', async () => {
|
||||
const result = await initializeProject({
|
||||
...baseOptions,
|
||||
git: true,
|
||||
aliases: true,
|
||||
dryRun: true
|
||||
});
|
||||
|
||||
expect(result).toEqual({ dryRun: true });
|
||||
});
|
||||
|
||||
it('completes without hanging', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('handles all flag combinations without hanging', async () => {
|
||||
const flagCombinations = [
|
||||
{ git: true, aliases: true },
|
||||
{ git: true, aliases: false },
|
||||
{ git: false, aliases: true },
|
||||
{ git: false, aliases: false },
|
||||
{} // No flags (uses defaults)
|
||||
];
|
||||
|
||||
for (const flags of flagCombinations) {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
...flags,
|
||||
dryRun: true // Use dry run for speed
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
}
|
||||
});
|
||||
|
||||
it('accepts complete project details', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
name: 'test-project',
|
||||
description: 'test description',
|
||||
version: '2.0.0',
|
||||
author: 'Test User',
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: true
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('works with skipInstall option', async () => {
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
skipInstall: true,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Function Integration', () => {
|
||||
it('calls utility functions without errors', async () => {
|
||||
await initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
});
|
||||
|
||||
// Verify that utility functions were called
|
||||
expect(mockUtils.isSilentMode).toHaveBeenCalled();
|
||||
expect(
|
||||
mockRuleTransformer.convertAllRulesToProfileRules
|
||||
).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('handles template operations gracefully', async () => {
|
||||
// Make file operations throw errors
|
||||
mockFs.writeFileSync.mockImplementation(() => {
|
||||
throw new Error('Write failed');
|
||||
});
|
||||
|
||||
// Should complete despite file operation failures
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false,
|
||||
aliases: false,
|
||||
dryRun: false
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
|
||||
it('validates boolean flag conversion', async () => {
|
||||
// Test the boolean flag handling specifically
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: true, // Should convert to initGit: true
|
||||
aliases: false, // Should convert to addAliases: false
|
||||
dryRun: true
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
|
||||
await expect(
|
||||
initializeProject({
|
||||
...baseOptions,
|
||||
git: false, // Should convert to initGit: false
|
||||
aliases: true, // Should convert to addAliases: true
|
||||
dryRun: true
|
||||
})
|
||||
).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
});
|
||||
439
tests/unit/manage-gitignore.test.js
Normal file
439
tests/unit/manage-gitignore.test.js
Normal file
@@ -0,0 +1,439 @@
|
||||
/**
|
||||
* Unit tests for manage-gitignore.js module
|
||||
* Tests the logic with Jest spies instead of mocked modules
|
||||
*/
|
||||
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Import the module under test and its exports
|
||||
import manageGitignoreFile, {
|
||||
normalizeLine,
|
||||
isTaskLine,
|
||||
buildTaskFilesSection,
|
||||
TASK_FILES_COMMENT,
|
||||
TASK_JSON_PATTERN,
|
||||
TASK_DIR_PATTERN
|
||||
} from '../../src/utils/manage-gitignore.js';
|
||||
|
||||
describe('manage-gitignore.js Unit Tests', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'manage-gitignore-test-'));
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
});
|
||||
|
||||
describe('Constants', () => {
|
||||
test('should have correct constant values', () => {
|
||||
expect(TASK_FILES_COMMENT).toBe('# Task files');
|
||||
expect(TASK_JSON_PATTERN).toBe('tasks.json');
|
||||
expect(TASK_DIR_PATTERN).toBe('tasks/');
|
||||
});
|
||||
});
|
||||
|
||||
describe('normalizeLine function', () => {
|
||||
test('should remove leading/trailing whitespace', () => {
|
||||
expect(normalizeLine(' test ')).toBe('test');
|
||||
});
|
||||
|
||||
test('should remove comment hash and trim', () => {
|
||||
expect(normalizeLine('# tasks.json')).toBe('tasks.json');
|
||||
expect(normalizeLine('#tasks/')).toBe('tasks/');
|
||||
});
|
||||
|
||||
test('should handle empty strings', () => {
|
||||
expect(normalizeLine('')).toBe('');
|
||||
expect(normalizeLine(' ')).toBe('');
|
||||
});
|
||||
|
||||
test('should handle lines without comments', () => {
|
||||
expect(normalizeLine('tasks.json')).toBe('tasks.json');
|
||||
});
|
||||
});
|
||||
|
||||
describe('isTaskLine function', () => {
|
||||
test('should identify task.json patterns', () => {
|
||||
expect(isTaskLine('tasks.json')).toBe(true);
|
||||
expect(isTaskLine('# tasks.json')).toBe(true);
|
||||
expect(isTaskLine(' # tasks.json ')).toBe(true);
|
||||
});
|
||||
|
||||
test('should identify tasks/ patterns', () => {
|
||||
expect(isTaskLine('tasks/')).toBe(true);
|
||||
expect(isTaskLine('# tasks/')).toBe(true);
|
||||
expect(isTaskLine(' # tasks/ ')).toBe(true);
|
||||
});
|
||||
|
||||
test('should reject non-task patterns', () => {
|
||||
expect(isTaskLine('node_modules/')).toBe(false);
|
||||
expect(isTaskLine('# Some comment')).toBe(false);
|
||||
expect(isTaskLine('')).toBe(false);
|
||||
expect(isTaskLine('tasks.txt')).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('buildTaskFilesSection function', () => {
|
||||
test('should build commented section when storeTasksInGit is true (tasks stored in git)', () => {
|
||||
const result = buildTaskFilesSection(true);
|
||||
expect(result).toEqual(['# Task files', '# tasks.json', '# tasks/ ']);
|
||||
});
|
||||
|
||||
test('should build uncommented section when storeTasksInGit is false (tasks ignored)', () => {
|
||||
const result = buildTaskFilesSection(false);
|
||||
expect(result).toEqual(['# Task files', 'tasks.json', 'tasks/ ']);
|
||||
});
|
||||
});
|
||||
|
||||
describe('manageGitignoreFile function - Input Validation', () => {
|
||||
test('should throw error for invalid targetPath', () => {
|
||||
expect(() => {
|
||||
manageGitignoreFile('', 'content', false);
|
||||
}).toThrow('targetPath must be a non-empty string');
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile(null, 'content', false);
|
||||
}).toThrow('targetPath must be a non-empty string');
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile('invalid.txt', 'content', false);
|
||||
}).toThrow('targetPath must end with .gitignore');
|
||||
});
|
||||
|
||||
test('should throw error for invalid content', () => {
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', '', false);
|
||||
}).toThrow('content must be a non-empty string');
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', null, false);
|
||||
}).toThrow('content must be a non-empty string');
|
||||
});
|
||||
|
||||
test('should throw error for invalid storeTasksInGit', () => {
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', 'content', 'not-boolean');
|
||||
}).toThrow('storeTasksInGit must be a boolean');
|
||||
});
|
||||
});
|
||||
|
||||
describe('manageGitignoreFile function - File Operations with Spies', () => {
|
||||
let writeFileSyncSpy;
|
||||
let readFileSyncSpy;
|
||||
let existsSyncSpy;
|
||||
let mockLog;
|
||||
|
||||
beforeEach(() => {
|
||||
// Set up spies
|
||||
writeFileSyncSpy = jest
|
||||
.spyOn(fs, 'writeFileSync')
|
||||
.mockImplementation(() => {});
|
||||
readFileSyncSpy = jest
|
||||
.spyOn(fs, 'readFileSync')
|
||||
.mockImplementation(() => '');
|
||||
existsSyncSpy = jest
|
||||
.spyOn(fs, 'existsSync')
|
||||
.mockImplementation(() => false);
|
||||
mockLog = jest.fn();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Restore original implementations
|
||||
writeFileSyncSpy.mockRestore();
|
||||
readFileSyncSpy.mockRestore();
|
||||
existsSyncSpy.mockRestore();
|
||||
});
|
||||
|
||||
describe('New File Creation', () => {
|
||||
const templateContent = `# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
test('should create new file with commented task lines when storeTasksInGit is true', () => {
|
||||
existsSyncSpy.mockReturnValue(false); // File doesn't exist
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, true, mockLog);
|
||||
|
||||
expect(writeFileSyncSpy).toHaveBeenCalledWith(
|
||||
'.gitignore',
|
||||
`# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/ `
|
||||
);
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'success',
|
||||
'Created .gitignore with full template'
|
||||
);
|
||||
});
|
||||
|
||||
test('should create new file with uncommented task lines when storeTasksInGit is false', () => {
|
||||
existsSyncSpy.mockReturnValue(false); // File doesn't exist
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
|
||||
expect(writeFileSyncSpy).toHaveBeenCalledWith(
|
||||
'.gitignore',
|
||||
`# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `
|
||||
);
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'success',
|
||||
'Created .gitignore with full template'
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle write errors gracefully', () => {
|
||||
existsSyncSpy.mockReturnValue(false);
|
||||
const writeError = new Error('Permission denied');
|
||||
writeFileSyncSpy.mockImplementation(() => {
|
||||
throw writeError;
|
||||
});
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
}).toThrow('Permission denied');
|
||||
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'error',
|
||||
'Failed to create .gitignore: Permission denied'
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('File Merging', () => {
|
||||
const templateContent = `# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
test('should merge with existing file and add new content', () => {
|
||||
const existingContent = `# Old content
|
||||
old-file.txt
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/`;
|
||||
|
||||
existsSyncSpy.mockReturnValue(true); // File exists
|
||||
readFileSyncSpy.mockReturnValue(existingContent);
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
|
||||
expect(writeFileSyncSpy).toHaveBeenCalledWith(
|
||||
'.gitignore',
|
||||
expect.stringContaining('# Old content')
|
||||
);
|
||||
expect(writeFileSyncSpy).toHaveBeenCalledWith(
|
||||
'.gitignore',
|
||||
expect.stringContaining('# Logs')
|
||||
);
|
||||
expect(writeFileSyncSpy).toHaveBeenCalledWith(
|
||||
'.gitignore',
|
||||
expect.stringContaining('# Dependencies')
|
||||
);
|
||||
expect(writeFileSyncSpy).toHaveBeenCalledWith(
|
||||
'.gitignore',
|
||||
expect.stringContaining('# Task files')
|
||||
);
|
||||
});
|
||||
|
||||
test('should remove existing task section and replace with new preferences', () => {
|
||||
const existingContent = `# Existing
|
||||
existing.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/
|
||||
|
||||
# More content
|
||||
more.txt`;
|
||||
|
||||
existsSyncSpy.mockReturnValue(true);
|
||||
readFileSyncSpy.mockReturnValue(existingContent);
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
|
||||
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
|
||||
|
||||
// Should contain existing non-task content
|
||||
expect(writtenContent).toContain('# Existing');
|
||||
expect(writtenContent).toContain('existing.txt');
|
||||
expect(writtenContent).toContain('# More content');
|
||||
expect(writtenContent).toContain('more.txt');
|
||||
|
||||
// Should contain new template content
|
||||
expect(writtenContent).toContain('# Logs');
|
||||
expect(writtenContent).toContain('# Dependencies');
|
||||
|
||||
// Should have uncommented task lines (storeTasksInGit = false means ignore tasks)
|
||||
expect(writtenContent).toMatch(
|
||||
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle different task preferences correctly', () => {
|
||||
const existingContent = `# Existing
|
||||
existing.txt
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/`;
|
||||
|
||||
existsSyncSpy.mockReturnValue(true);
|
||||
readFileSyncSpy.mockReturnValue(existingContent);
|
||||
|
||||
// Test with storeTasksInGit = true (commented)
|
||||
manageGitignoreFile('.gitignore', templateContent, true, mockLog);
|
||||
|
||||
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
|
||||
expect(writtenContent).toMatch(
|
||||
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
|
||||
);
|
||||
});
|
||||
|
||||
test('should not duplicate existing template content', () => {
|
||||
const existingContent = `# Logs
|
||||
logs
|
||||
*.log
|
||||
|
||||
# Dependencies
|
||||
node_modules/
|
||||
|
||||
# Task files
|
||||
# tasks.json
|
||||
# tasks/`;
|
||||
|
||||
existsSyncSpy.mockReturnValue(true);
|
||||
readFileSyncSpy.mockReturnValue(existingContent);
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
|
||||
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
|
||||
|
||||
// Should not duplicate the logs section
|
||||
const logsCount = (writtenContent.match(/# Logs/g) || []).length;
|
||||
expect(logsCount).toBe(1);
|
||||
|
||||
// Should not duplicate dependencies
|
||||
const depsCount = (writtenContent.match(/# Dependencies/g) || [])
|
||||
.length;
|
||||
expect(depsCount).toBe(1);
|
||||
});
|
||||
|
||||
test('should handle read errors gracefully', () => {
|
||||
existsSyncSpy.mockReturnValue(true);
|
||||
const readError = new Error('File not readable');
|
||||
readFileSyncSpy.mockImplementation(() => {
|
||||
throw readError;
|
||||
});
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
}).toThrow('File not readable');
|
||||
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'error',
|
||||
'Failed to merge content with .gitignore: File not readable'
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle write errors during merge gracefully', () => {
|
||||
existsSyncSpy.mockReturnValue(true);
|
||||
readFileSyncSpy.mockReturnValue('existing content');
|
||||
|
||||
const writeError = new Error('Disk full');
|
||||
writeFileSyncSpy.mockImplementation(() => {
|
||||
throw writeError;
|
||||
});
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
}).toThrow('Disk full');
|
||||
|
||||
expect(mockLog).toHaveBeenCalledWith(
|
||||
'error',
|
||||
'Failed to merge content with .gitignore: Disk full'
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Edge Cases', () => {
|
||||
test('should work without log function', () => {
|
||||
existsSyncSpy.mockReturnValue(false);
|
||||
const templateContent = `# Test
|
||||
test.txt
|
||||
|
||||
# Task files
|
||||
tasks.json
|
||||
tasks/`;
|
||||
|
||||
expect(() => {
|
||||
manageGitignoreFile('.gitignore', templateContent, false);
|
||||
}).not.toThrow();
|
||||
|
||||
expect(writeFileSyncSpy).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should handle empty existing file', () => {
|
||||
existsSyncSpy.mockReturnValue(true);
|
||||
readFileSyncSpy.mockReturnValue('');
|
||||
|
||||
const templateContent = `# Task files
|
||||
tasks.json
|
||||
tasks/`;
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
|
||||
|
||||
expect(writeFileSyncSpy).toHaveBeenCalled();
|
||||
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
|
||||
expect(writtenContent).toContain('# Task files');
|
||||
});
|
||||
|
||||
test('should handle template with only task files', () => {
|
||||
existsSyncSpy.mockReturnValue(false);
|
||||
const templateContent = `# Task files
|
||||
tasks.json
|
||||
tasks/ `;
|
||||
|
||||
manageGitignoreFile('.gitignore', templateContent, true, mockLog);
|
||||
|
||||
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
|
||||
expect(writtenContent).toBe(`# Task files
|
||||
# tasks.json
|
||||
# tasks/ `);
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
324
tests/unit/mcp/tools/expand-all.test.js
Normal file
324
tests/unit/mcp/tools/expand-all.test.js
Normal file
@@ -0,0 +1,324 @@
|
||||
/**
|
||||
* Tests for the expand-all MCP tool
|
||||
*
|
||||
* Note: This test does NOT test the actual implementation. It tests that:
|
||||
* 1. The tool is registered correctly with the correct parameters
|
||||
* 2. Arguments are passed correctly to expandAllTasksDirect
|
||||
* 3. Error handling works as expected
|
||||
*
|
||||
* We do NOT import the real implementation - everything is mocked
|
||||
*/
|
||||
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock EVERYTHING
|
||||
const mockExpandAllTasksDirect = jest.fn();
|
||||
jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
|
||||
expandAllTasksDirect: mockExpandAllTasksDirect
|
||||
}));
|
||||
|
||||
const mockHandleApiResult = jest.fn((result) => result);
|
||||
const mockGetProjectRootFromSession = jest.fn(() => '/mock/project/root');
|
||||
const mockCreateErrorResponse = jest.fn((msg) => ({
|
||||
success: false,
|
||||
error: { code: 'ERROR', message: msg }
|
||||
}));
|
||||
const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);
|
||||
|
||||
jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
|
||||
getProjectRootFromSession: mockGetProjectRootFromSession,
|
||||
handleApiResult: mockHandleApiResult,
|
||||
createErrorResponse: mockCreateErrorResponse,
|
||||
withNormalizedProjectRoot: mockWithNormalizedProjectRoot
|
||||
}));
|
||||
|
||||
// Mock the z object from zod
|
||||
const mockZod = {
|
||||
object: jest.fn(() => mockZod),
|
||||
string: jest.fn(() => mockZod),
|
||||
number: jest.fn(() => mockZod),
|
||||
boolean: jest.fn(() => mockZod),
|
||||
optional: jest.fn(() => mockZod),
|
||||
describe: jest.fn(() => mockZod),
|
||||
_def: {
|
||||
shape: () => ({
|
||||
num: {},
|
||||
research: {},
|
||||
prompt: {},
|
||||
force: {},
|
||||
tag: {},
|
||||
projectRoot: {}
|
||||
})
|
||||
}
|
||||
};
|
||||
|
||||
jest.mock('zod', () => ({
|
||||
z: mockZod
|
||||
}));
|
||||
|
||||
// DO NOT import the real module - create a fake implementation
|
||||
// This is the fake implementation of registerExpandAllTool
|
||||
const registerExpandAllTool = (server) => {
|
||||
// Create simplified version of the tool config
|
||||
const toolConfig = {
|
||||
name: 'expand_all',
|
||||
description: 'Use Taskmaster to expand all eligible pending tasks',
|
||||
parameters: mockZod,
|
||||
|
||||
// Create a simplified mock of the execute function
|
||||
execute: mockWithNormalizedProjectRoot(async (args, context) => {
|
||||
const { log, session } = context;
|
||||
|
||||
try {
|
||||
log.info &&
|
||||
log.info(`Starting expand-all with args: ${JSON.stringify(args)}`);
|
||||
|
||||
// Call expandAllTasksDirect
|
||||
const result = await mockExpandAllTasksDirect(args, log, { session });
|
||||
|
||||
// Handle result
|
||||
return mockHandleApiResult(result, log);
|
||||
} catch (error) {
|
||||
log.error && log.error(`Error in expand-all tool: ${error.message}`);
|
||||
return mockCreateErrorResponse(error.message);
|
||||
}
|
||||
})
|
||||
};
|
||||
|
||||
// Register the tool with the server
|
||||
server.addTool(toolConfig);
|
||||
};
|
||||
|
||||
describe('MCP Tool: expand-all', () => {
|
||||
// Create mock server
|
||||
let mockServer;
|
||||
let executeFunction;
|
||||
|
||||
// Create mock logger
|
||||
const mockLogger = {
|
||||
debug: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn()
|
||||
};
|
||||
|
||||
// Test data
|
||||
const validArgs = {
|
||||
num: 3,
|
||||
research: true,
|
||||
prompt: 'additional context',
|
||||
force: false,
|
||||
tag: 'master',
|
||||
projectRoot: '/test/project'
|
||||
};
|
||||
|
||||
// Standard responses
|
||||
const successResponse = {
|
||||
success: true,
|
||||
data: {
|
||||
message:
|
||||
'Expand all operation completed. Expanded: 2, Failed: 0, Skipped: 1',
|
||||
details: {
|
||||
expandedCount: 2,
|
||||
failedCount: 0,
|
||||
skippedCount: 1,
|
||||
tasksToExpand: 3,
|
||||
telemetryData: {
|
||||
commandName: 'expand-all-tasks',
|
||||
totalCost: 0.15,
|
||||
totalTokens: 2500
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const errorResponse = {
|
||||
success: false,
|
||||
error: {
|
||||
code: 'EXPAND_ALL_ERROR',
|
||||
message: 'Failed to expand tasks'
|
||||
}
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
// Reset all mocks
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create mock server
|
||||
mockServer = {
|
||||
addTool: jest.fn((config) => {
|
||||
executeFunction = config.execute;
|
||||
})
|
||||
};
|
||||
|
||||
// Setup default successful response
|
||||
mockExpandAllTasksDirect.mockResolvedValue(successResponse);
|
||||
|
||||
// Register the tool
|
||||
registerExpandAllTool(mockServer);
|
||||
});
|
||||
|
||||
test('should register the tool correctly', () => {
|
||||
// Verify tool was registered
|
||||
expect(mockServer.addTool).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
name: 'expand_all',
|
||||
description: expect.stringContaining('expand all eligible pending'),
|
||||
parameters: expect.any(Object),
|
||||
execute: expect.any(Function)
|
||||
})
|
||||
);
|
||||
|
||||
// Verify the tool config was passed
|
||||
const toolConfig = mockServer.addTool.mock.calls[0][0];
|
||||
expect(toolConfig).toHaveProperty('parameters');
|
||||
expect(toolConfig).toHaveProperty('execute');
|
||||
});
|
||||
|
||||
test('should execute the tool with valid parameters', async () => {
|
||||
// Setup context
|
||||
const mockContext = {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/mock/dir' }
|
||||
};
|
||||
|
||||
// Execute the function
|
||||
const result = await executeFunction(validArgs, mockContext);
|
||||
|
||||
// Verify expandAllTasksDirect was called with correct arguments
|
||||
expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
|
||||
validArgs,
|
||||
mockLogger,
|
||||
{ session: mockContext.session }
|
||||
);
|
||||
|
||||
// Verify handleApiResult was called
|
||||
expect(mockHandleApiResult).toHaveBeenCalledWith(
|
||||
successResponse,
|
||||
mockLogger
|
||||
);
|
||||
expect(result).toEqual(successResponse);
|
||||
});
|
||||
|
||||
test('should handle expand all with no eligible tasks', async () => {
|
||||
// Arrange
|
||||
const mockDirectResult = {
|
||||
success: true,
|
||||
data: {
|
||||
message:
|
||||
'Expand all operation completed. Expanded: 0, Failed: 0, Skipped: 0',
|
||||
details: {
|
||||
expandedCount: 0,
|
||||
failedCount: 0,
|
||||
skippedCount: 0,
|
||||
tasksToExpand: 0,
|
||||
telemetryData: null
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
|
||||
mockHandleApiResult.mockReturnValue({
|
||||
success: true,
|
||||
data: mockDirectResult.data
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await executeFunction(validArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.details.expandedCount).toBe(0);
|
||||
expect(result.data.details.tasksToExpand).toBe(0);
|
||||
});
|
||||
|
||||
test('should handle expand all with mixed success/failure', async () => {
|
||||
// Arrange
|
||||
const mockDirectResult = {
|
||||
success: true,
|
||||
data: {
|
||||
message:
|
||||
'Expand all operation completed. Expanded: 2, Failed: 1, Skipped: 0',
|
||||
details: {
|
||||
expandedCount: 2,
|
||||
failedCount: 1,
|
||||
skippedCount: 0,
|
||||
tasksToExpand: 3,
|
||||
telemetryData: {
|
||||
commandName: 'expand-all-tasks',
|
||||
totalCost: 0.1,
|
||||
totalTokens: 1500
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
|
||||
mockHandleApiResult.mockReturnValue({
|
||||
success: true,
|
||||
data: mockDirectResult.data
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await executeFunction(validArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data.details.expandedCount).toBe(2);
|
||||
expect(result.data.details.failedCount).toBe(1);
|
||||
});
|
||||
|
||||
test('should handle errors from expandAllTasksDirect', async () => {
|
||||
// Arrange
|
||||
mockExpandAllTasksDirect.mockRejectedValue(
|
||||
new Error('Direct function error')
|
||||
);
|
||||
|
||||
// Act
|
||||
const result = await executeFunction(validArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(mockLogger.error).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Error in expand-all tool')
|
||||
);
|
||||
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
|
||||
'Direct function error'
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle different argument combinations', async () => {
|
||||
// Test with minimal args
|
||||
const minimalArgs = {
|
||||
projectRoot: '/test/project'
|
||||
};
|
||||
|
||||
// Act
|
||||
await executeFunction(minimalArgs, {
|
||||
log: mockLogger,
|
||||
session: { workingDirectory: '/test' }
|
||||
});
|
||||
|
||||
// Assert
|
||||
expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
|
||||
minimalArgs,
|
||||
mockLogger,
|
||||
expect.any(Object)
|
||||
);
|
||||
});
|
||||
|
||||
test('should use withNormalizedProjectRoot wrapper correctly', () => {
|
||||
// Verify that the execute function is wrapped with withNormalizedProjectRoot
|
||||
expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
|
||||
expect.any(Function)
|
||||
);
|
||||
});
|
||||
});
|
||||
103
tests/unit/profiles/claude-integration.test.js
Normal file
103
tests/unit/profiles/claude-integration.test.js
Normal file
@@ -0,0 +1,103 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('Claude Profile Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('AGENTS.md')) {
|
||||
return 'Sample AGENTS.md content for Claude integration';
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the Claude profile file copying behavior
|
||||
function mockCreateClaudeStructure() {
|
||||
// Claude profile copies AGENTS.md to CLAUDE.md in project root
|
||||
const sourceContent = 'Sample AGENTS.md content for Claude integration';
|
||||
fs.writeFileSync(path.join(tempDir, 'CLAUDE.md'), sourceContent);
|
||||
}
|
||||
|
||||
test('creates CLAUDE.md file in project root', () => {
|
||||
// Act
|
||||
mockCreateClaudeStructure();
|
||||
|
||||
// Assert
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, 'CLAUDE.md'),
|
||||
'Sample AGENTS.md content for Claude integration'
|
||||
);
|
||||
});
|
||||
|
||||
test('does not create any profile directories', () => {
|
||||
// Act
|
||||
mockCreateClaudeStructure();
|
||||
|
||||
// Assert - Claude profile should not create any directories
|
||||
// Only the temp directory creation calls should exist
|
||||
const mkdirCalls = fs.mkdirSync.mock.calls.filter(
|
||||
(call) => !call[0].includes('task-master-test-')
|
||||
);
|
||||
expect(mkdirCalls).toHaveLength(0);
|
||||
});
|
||||
|
||||
test('does not create MCP configuration files', () => {
|
||||
// Act
|
||||
mockCreateClaudeStructure();
|
||||
|
||||
// Assert - Claude profile should not create any MCP config files
|
||||
const writeFileCalls = fs.writeFileSync.mock.calls;
|
||||
const mcpConfigCalls = writeFileCalls.filter(
|
||||
(call) =>
|
||||
call[0].toString().includes('mcp.json') ||
|
||||
call[0].toString().includes('mcp_settings.json')
|
||||
);
|
||||
expect(mcpConfigCalls).toHaveLength(0);
|
||||
});
|
||||
|
||||
test('only creates the target integration guide file', () => {
|
||||
// Act
|
||||
mockCreateClaudeStructure();
|
||||
|
||||
// Assert - Should only create CLAUDE.md
|
||||
const writeFileCalls = fs.writeFileSync.mock.calls;
|
||||
expect(writeFileCalls).toHaveLength(1);
|
||||
expect(writeFileCalls[0][0]).toBe(path.join(tempDir, 'CLAUDE.md'));
|
||||
});
|
||||
});
|
||||
112
tests/unit/profiles/cline-integration.test.js
Normal file
112
tests/unit/profiles/cline-integration.test.js
Normal file
@@ -0,0 +1,112 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('Cline Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('.clinerules')) {
|
||||
return 'Existing cline rules content';
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the createProjectStructure behavior for Cline files
|
||||
function mockCreateClineStructure() {
|
||||
// Create main .clinerules directory
|
||||
fs.mkdirSync(path.join(tempDir, '.clinerules'), { recursive: true });
|
||||
|
||||
// Create rule files
|
||||
const ruleFiles = [
|
||||
'dev_workflow.md',
|
||||
'taskmaster.md',
|
||||
'architecture.md',
|
||||
'commands.md',
|
||||
'dependencies.md'
|
||||
];
|
||||
|
||||
for (const ruleFile of ruleFiles) {
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.clinerules', ruleFile),
|
||||
`Content for ${ruleFile}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
test('creates all required .clinerules directories', () => {
|
||||
// Act
|
||||
mockCreateClineStructure();
|
||||
|
||||
// Assert
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.clinerules'),
|
||||
{ recursive: true }
|
||||
);
|
||||
});
|
||||
|
||||
test('creates rule files for Cline', () => {
|
||||
// Act
|
||||
mockCreateClineStructure();
|
||||
|
||||
// Assert - check rule files are created
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.clinerules', 'dev_workflow.md'),
|
||||
expect.any(String)
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.clinerules', 'taskmaster.md'),
|
||||
expect.any(String)
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.clinerules', 'architecture.md'),
|
||||
expect.any(String)
|
||||
);
|
||||
});
|
||||
|
||||
test('does not create MCP configuration files', () => {
|
||||
// Act
|
||||
mockCreateClineStructure();
|
||||
|
||||
// Assert - Cline doesn't use MCP configuration
|
||||
expect(fs.writeFileSync).not.toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.clinerules', 'mcp.json'),
|
||||
expect.any(String)
|
||||
);
|
||||
});
|
||||
});
|
||||
113
tests/unit/profiles/codex-integration.test.js
Normal file
113
tests/unit/profiles/codex-integration.test.js
Normal file
@@ -0,0 +1,113 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('Codex Profile Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('AGENTS.md')) {
|
||||
return 'Sample AGENTS.md content for Codex integration';
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the Codex profile file copying behavior
|
||||
function mockCreateCodexStructure() {
|
||||
// Codex profile copies AGENTS.md to AGENTS.md in project root (same name)
|
||||
const sourceContent = 'Sample AGENTS.md content for Codex integration';
|
||||
fs.writeFileSync(path.join(tempDir, 'AGENTS.md'), sourceContent);
|
||||
}
|
||||
|
||||
test('creates AGENTS.md file in project root', () => {
|
||||
// Act
|
||||
mockCreateCodexStructure();
|
||||
|
||||
// Assert
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, 'AGENTS.md'),
|
||||
'Sample AGENTS.md content for Codex integration'
|
||||
);
|
||||
});
|
||||
|
||||
test('does not create any profile directories', () => {
|
||||
// Act
|
||||
mockCreateCodexStructure();
|
||||
|
||||
// Assert - Codex profile should not create any directories
|
||||
// Only the temp directory creation calls should exist
|
||||
const mkdirCalls = fs.mkdirSync.mock.calls.filter(
|
||||
(call) => !call[0].includes('task-master-test-')
|
||||
);
|
||||
expect(mkdirCalls).toHaveLength(0);
|
||||
});
|
||||
|
||||
test('does not create MCP configuration files', () => {
|
||||
// Act
|
||||
mockCreateCodexStructure();
|
||||
|
||||
// Assert - Codex profile should not create any MCP config files
|
||||
const writeFileCalls = fs.writeFileSync.mock.calls;
|
||||
const mcpConfigCalls = writeFileCalls.filter(
|
||||
(call) =>
|
||||
call[0].toString().includes('mcp.json') ||
|
||||
call[0].toString().includes('mcp_settings.json')
|
||||
);
|
||||
expect(mcpConfigCalls).toHaveLength(0);
|
||||
});
|
||||
|
||||
test('only creates the target integration guide file', () => {
|
||||
// Act
|
||||
mockCreateCodexStructure();
|
||||
|
||||
// Assert - Should only create AGENTS.md
|
||||
const writeFileCalls = fs.writeFileSync.mock.calls;
|
||||
expect(writeFileCalls).toHaveLength(1);
|
||||
expect(writeFileCalls[0][0]).toBe(path.join(tempDir, 'AGENTS.md'));
|
||||
});
|
||||
|
||||
test('uses the same filename as source (AGENTS.md)', () => {
|
||||
// Act
|
||||
mockCreateCodexStructure();
|
||||
|
||||
// Assert - Codex should keep the same filename unlike Claude which renames it
|
||||
const writeFileCalls = fs.writeFileSync.mock.calls;
|
||||
expect(writeFileCalls[0][0]).toContain('AGENTS.md');
|
||||
expect(writeFileCalls[0][0]).not.toContain('CLAUDE.md');
|
||||
});
|
||||
});
|
||||
78
tests/unit/profiles/cursor-integration.test.js
Normal file
78
tests/unit/profiles/cursor-integration.test.js
Normal file
@@ -0,0 +1,78 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('Cursor Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('mcp.json')) {
|
||||
return JSON.stringify({ mcpServers: {} }, null, 2);
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the createProjectStructure behavior for Cursor files
|
||||
function mockCreateCursorStructure() {
|
||||
// Create main .cursor directory
|
||||
fs.mkdirSync(path.join(tempDir, '.cursor'), { recursive: true });
|
||||
|
||||
// Create rules directory
|
||||
fs.mkdirSync(path.join(tempDir, '.cursor', 'rules'), { recursive: true });
|
||||
|
||||
// Create MCP config file
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.cursor', 'mcp.json'),
|
||||
JSON.stringify({ mcpServers: {} }, null, 2)
|
||||
);
|
||||
}
|
||||
|
||||
test('creates all required .cursor directories', () => {
|
||||
// Act
|
||||
mockCreateCursorStructure();
|
||||
|
||||
// Assert
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(path.join(tempDir, '.cursor'), {
|
||||
recursive: true
|
||||
});
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.cursor', 'rules'),
|
||||
{ recursive: true }
|
||||
);
|
||||
});
|
||||
});
|
||||
247
tests/unit/profiles/mcp-config-validation.test.js
Normal file
247
tests/unit/profiles/mcp-config-validation.test.js
Normal file
@@ -0,0 +1,247 @@
|
||||
import { RULE_PROFILES } from '../../../src/constants/profiles.js';
|
||||
import { getRulesProfile } from '../../../src/utils/rule-transformer.js';
|
||||
import path from 'path';
|
||||
|
||||
describe('MCP Configuration Validation', () => {
|
||||
describe('Profile MCP Configuration Properties', () => {
|
||||
const expectedMcpConfigurations = {
|
||||
cline: {
|
||||
shouldHaveMcp: false,
|
||||
expectedDir: '.clinerules',
|
||||
expectedConfigName: 'cline_mcp_settings.json',
|
||||
expectedPath: '.clinerules/cline_mcp_settings.json'
|
||||
},
|
||||
cursor: {
|
||||
shouldHaveMcp: true,
|
||||
expectedDir: '.cursor',
|
||||
expectedConfigName: 'mcp.json',
|
||||
expectedPath: '.cursor/mcp.json'
|
||||
},
|
||||
roo: {
|
||||
shouldHaveMcp: true,
|
||||
expectedDir: '.roo',
|
||||
expectedConfigName: 'mcp.json',
|
||||
expectedPath: '.roo/mcp.json'
|
||||
},
|
||||
trae: {
|
||||
shouldHaveMcp: false,
|
||||
expectedDir: '.trae',
|
||||
expectedConfigName: 'trae_mcp_settings.json',
|
||||
expectedPath: '.trae/trae_mcp_settings.json'
|
||||
},
|
||||
vscode: {
|
||||
shouldHaveMcp: true,
|
||||
expectedDir: '.vscode',
|
||||
expectedConfigName: 'mcp.json',
|
||||
expectedPath: '.vscode/mcp.json'
|
||||
},
|
||||
windsurf: {
|
||||
shouldHaveMcp: true,
|
||||
expectedDir: '.windsurf',
|
||||
expectedConfigName: 'mcp.json',
|
||||
expectedPath: '.windsurf/mcp.json'
|
||||
}
|
||||
};
|
||||
|
||||
Object.entries(expectedMcpConfigurations).forEach(
|
||||
([profileName, expected]) => {
|
||||
test(`should have correct MCP configuration for ${profileName} profile`, () => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
expect(profile).toBeDefined();
|
||||
expect(profile.mcpConfig).toBe(expected.shouldHaveMcp);
|
||||
expect(profile.profileDir).toBe(expected.expectedDir);
|
||||
expect(profile.mcpConfigName).toBe(expected.expectedConfigName);
|
||||
expect(profile.mcpConfigPath).toBe(expected.expectedPath);
|
||||
});
|
||||
}
|
||||
);
|
||||
});
|
||||
|
||||
describe('MCP Configuration Path Consistency', () => {
|
||||
test('should ensure all profiles have consistent mcpConfigPath construction', () => {
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
const expectedPath = path.join(
|
||||
profile.profileDir,
|
||||
profile.mcpConfigName
|
||||
);
|
||||
expect(profile.mcpConfigPath).toBe(expectedPath);
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('should ensure no two profiles have the same MCP config path', () => {
|
||||
const mcpPaths = new Set();
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
expect(mcpPaths.has(profile.mcpConfigPath)).toBe(false);
|
||||
mcpPaths.add(profile.mcpConfigPath);
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('should ensure all MCP-enabled profiles use proper directory structure', () => {
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
expect(profile.mcpConfigPath).toMatch(/^\.[\w-]+\/[\w_.]+$/);
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('should ensure all profiles have required MCP properties', () => {
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
expect(profile).toHaveProperty('mcpConfig');
|
||||
expect(profile).toHaveProperty('profileDir');
|
||||
expect(profile).toHaveProperty('mcpConfigName');
|
||||
expect(profile).toHaveProperty('mcpConfigPath');
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('MCP Configuration File Names', () => {
|
||||
test('should use standard mcp.json for MCP-enabled profiles', () => {
|
||||
const standardMcpProfiles = ['cursor', 'roo', 'vscode', 'windsurf'];
|
||||
standardMcpProfiles.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
expect(profile.mcpConfigName).toBe('mcp.json');
|
||||
});
|
||||
});
|
||||
|
||||
test('should use profile-specific config name for non-MCP profiles', () => {
|
||||
const clineProfile = getRulesProfile('cline');
|
||||
expect(clineProfile.mcpConfigName).toBe('cline_mcp_settings.json');
|
||||
|
||||
const traeProfile = getRulesProfile('trae');
|
||||
expect(traeProfile.mcpConfigName).toBe('trae_mcp_settings.json');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Profile Directory Structure', () => {
|
||||
test('should ensure each profile has a unique directory', () => {
|
||||
const profileDirs = new Set();
|
||||
// Simple profiles that use root directory (can share the same directory)
|
||||
const simpleProfiles = ['claude', 'codex'];
|
||||
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
|
||||
// Simple profiles can share the root directory
|
||||
if (simpleProfiles.includes(profileName)) {
|
||||
expect(profile.profileDir).toBe('.');
|
||||
return;
|
||||
}
|
||||
|
||||
// Full profiles should have unique directories
|
||||
expect(profileDirs.has(profile.profileDir)).toBe(false);
|
||||
profileDirs.add(profile.profileDir);
|
||||
});
|
||||
});
|
||||
|
||||
test('should ensure profile directories follow expected naming convention', () => {
|
||||
// Simple profiles that use root directory
|
||||
const simpleProfiles = ['claude', 'codex'];
|
||||
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
|
||||
// Simple profiles use root directory
|
||||
if (simpleProfiles.includes(profileName)) {
|
||||
expect(profile.profileDir).toBe('.');
|
||||
return;
|
||||
}
|
||||
|
||||
// Full profiles should follow the .name pattern
|
||||
expect(profile.profileDir).toMatch(/^\.[\w-]+$/);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('MCP Configuration Creation Logic', () => {
|
||||
test('should indicate which profiles require MCP configuration creation', () => {
|
||||
const mcpEnabledProfiles = RULE_PROFILES.filter((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
return profile.mcpConfig !== false;
|
||||
});
|
||||
|
||||
expect(mcpEnabledProfiles).toContain('cursor');
|
||||
expect(mcpEnabledProfiles).toContain('roo');
|
||||
expect(mcpEnabledProfiles).toContain('vscode');
|
||||
expect(mcpEnabledProfiles).toContain('windsurf');
|
||||
expect(mcpEnabledProfiles).not.toContain('claude');
|
||||
expect(mcpEnabledProfiles).not.toContain('cline');
|
||||
expect(mcpEnabledProfiles).not.toContain('codex');
|
||||
expect(mcpEnabledProfiles).not.toContain('trae');
|
||||
});
|
||||
|
||||
test('should provide all necessary information for MCP config creation', () => {
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
expect(profile.mcpConfigPath).toBeDefined();
|
||||
expect(typeof profile.mcpConfigPath).toBe('string');
|
||||
expect(profile.mcpConfigPath.length).toBeGreaterThan(0);
|
||||
}
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('MCP Configuration Path Usage Verification', () => {
|
||||
test('should verify that rule transformer functions use mcpConfigPath correctly', () => {
|
||||
// This test verifies that the mcpConfigPath property exists and is properly formatted
|
||||
// for use with the setupMCPConfiguration function
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
// Verify the path is properly formatted for path.join usage
|
||||
expect(profile.mcpConfigPath.startsWith('/')).toBe(false);
|
||||
expect(profile.mcpConfigPath).toContain('/');
|
||||
|
||||
// Verify it matches the expected pattern: profileDir/configName
|
||||
const expectedPath = `${profile.profileDir}/${profile.mcpConfigName}`;
|
||||
expect(profile.mcpConfigPath).toBe(expectedPath);
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('should verify that mcpConfigPath is properly constructed for path.join usage', () => {
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
// Test that path.join works correctly with the mcpConfigPath
|
||||
const testProjectRoot = '/test/project';
|
||||
const fullPath = path.join(testProjectRoot, profile.mcpConfigPath);
|
||||
|
||||
// Should result in a proper absolute path
|
||||
expect(fullPath).toBe(`${testProjectRoot}/${profile.mcpConfigPath}`);
|
||||
expect(fullPath).toContain(profile.profileDir);
|
||||
expect(fullPath).toContain(profile.mcpConfigName);
|
||||
}
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('MCP Configuration Function Integration', () => {
|
||||
test('should verify that setupMCPConfiguration receives the correct mcpConfigPath parameter', () => {
|
||||
// This test verifies the integration between rule transformer and mcp-utils
|
||||
RULE_PROFILES.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
if (profile.mcpConfig !== false) {
|
||||
// Verify that the mcpConfigPath can be used directly with setupMCPConfiguration
|
||||
// The function signature is: setupMCPConfiguration(projectDir, mcpConfigPath)
|
||||
expect(profile.mcpConfigPath).toBeDefined();
|
||||
expect(typeof profile.mcpConfigPath).toBe('string');
|
||||
|
||||
// Verify the path structure is correct for the new function signature
|
||||
const parts = profile.mcpConfigPath.split('/');
|
||||
expect(parts).toHaveLength(2); // Should be profileDir/configName
|
||||
expect(parts[0]).toBe(profile.profileDir);
|
||||
expect(parts[1]).toBe(profile.mcpConfigName);
|
||||
}
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
175
tests/unit/profiles/profile-safety-check.test.js
Normal file
175
tests/unit/profiles/profile-safety-check.test.js
Normal file
@@ -0,0 +1,175 @@
|
||||
import {
|
||||
getInstalledProfiles,
|
||||
wouldRemovalLeaveNoProfiles
|
||||
} from '../../../src/utils/profiles.js';
|
||||
import { rulesDirect } from '../../../mcp-server/src/core/direct-functions/rules.js';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock logger
|
||||
const mockLog = {
|
||||
info: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn()
|
||||
};
|
||||
|
||||
describe('Rules Safety Check', () => {
|
||||
let mockExistsSync;
|
||||
let mockRmSync;
|
||||
let mockReaddirSync;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Set up spies on fs methods
|
||||
mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
mockRmSync = jest.spyOn(fs, 'rmSync').mockImplementation(() => {});
|
||||
mockReaddirSync = jest.spyOn(fs, 'readdirSync').mockReturnValue([]);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Restore all mocked functions
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
describe('getInstalledProfiles', () => {
|
||||
it('should detect installed profiles correctly', () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to simulate installed profiles
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor') || filePath.includes('.roo')) {
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
});
|
||||
|
||||
const installed = getInstalledProfiles(projectRoot);
|
||||
expect(installed).toContain('cursor');
|
||||
expect(installed).toContain('roo');
|
||||
expect(installed).not.toContain('windsurf');
|
||||
expect(installed).not.toContain('cline');
|
||||
});
|
||||
|
||||
it('should return empty array when no profiles are installed', () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to return false for all paths
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
const installed = getInstalledProfiles(projectRoot);
|
||||
expect(installed).toEqual([]);
|
||||
});
|
||||
});
|
||||
|
||||
describe('wouldRemovalLeaveNoProfiles', () => {
|
||||
it('should return true when removing all installed profiles', () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to simulate cursor and roo installed
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
return filePath.includes('.cursor') || filePath.includes('.roo');
|
||||
});
|
||||
|
||||
const result = wouldRemovalLeaveNoProfiles(projectRoot, [
|
||||
'cursor',
|
||||
'roo'
|
||||
]);
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false when removing only some profiles', () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to simulate cursor and roo installed
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
return filePath.includes('.cursor') || filePath.includes('.roo');
|
||||
});
|
||||
|
||||
const result = wouldRemovalLeaveNoProfiles(projectRoot, ['roo']);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false when no profiles are currently installed', () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to return false for all paths
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
const result = wouldRemovalLeaveNoProfiles(projectRoot, ['cursor']);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('MCP Safety Check Integration', () => {
|
||||
it('should block removal of all profiles without force', async () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to simulate installed profiles
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
return filePath.includes('.cursor') || filePath.includes('.roo');
|
||||
});
|
||||
|
||||
const result = await rulesDirect(
|
||||
{
|
||||
action: 'remove',
|
||||
profiles: ['cursor', 'roo'],
|
||||
projectRoot,
|
||||
force: false
|
||||
},
|
||||
mockLog
|
||||
);
|
||||
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.error.code).toBe('CRITICAL_REMOVAL_BLOCKED');
|
||||
expect(result.error.message).toContain('CRITICAL');
|
||||
});
|
||||
|
||||
it('should allow removal of all profiles with force', async () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync and other file operations for successful removal
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
|
||||
const result = await rulesDirect(
|
||||
{
|
||||
action: 'remove',
|
||||
profiles: ['cursor', 'roo'],
|
||||
projectRoot,
|
||||
force: true
|
||||
},
|
||||
mockLog
|
||||
);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data).toBeDefined();
|
||||
});
|
||||
|
||||
it('should allow partial removal without force', async () => {
|
||||
const projectRoot = '/test/project';
|
||||
|
||||
// Mock fs.existsSync to simulate multiple profiles installed
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
return (
|
||||
filePath.includes('.cursor') ||
|
||||
filePath.includes('.roo') ||
|
||||
filePath.includes('.windsurf')
|
||||
);
|
||||
});
|
||||
|
||||
const result = await rulesDirect(
|
||||
{
|
||||
action: 'remove',
|
||||
profiles: ['roo'], // Only removing one profile
|
||||
projectRoot,
|
||||
force: false
|
||||
},
|
||||
mockLog
|
||||
);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.data).toBeDefined();
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -59,7 +59,14 @@ describe('Roo Integration', () => {
|
||||
fs.mkdirSync(path.join(tempDir, '.roo', 'rules'), { recursive: true });
|
||||
|
||||
// Create mode-specific rule directories
|
||||
const rooModes = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'];
|
||||
const rooModes = [
|
||||
'architect',
|
||||
'ask',
|
||||
'orchestrator',
|
||||
'code',
|
||||
'debug',
|
||||
'test'
|
||||
];
|
||||
for (const mode of rooModes) {
|
||||
fs.mkdirSync(path.join(tempDir, '.roo', `rules-${mode}`), {
|
||||
recursive: true
|
||||
@@ -102,7 +109,7 @@ describe('Roo Integration', () => {
|
||||
{ recursive: true }
|
||||
);
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.roo', 'rules-boomerang'),
|
||||
path.join(tempDir, '.roo', 'rules-orchestrator'),
|
||||
{ recursive: true }
|
||||
);
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
@@ -133,7 +140,7 @@ describe('Roo Integration', () => {
|
||||
expect.any(String)
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.roo', 'rules-boomerang', 'boomerang-rules'),
|
||||
path.join(tempDir, '.roo', 'rules-orchestrator', 'orchestrator-rules'),
|
||||
expect.any(String)
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
216
tests/unit/profiles/rule-transformer-cline.test.js
Normal file
216
tests/unit/profiles/rule-transformer-cline.test.js
Normal file
@@ -0,0 +1,216 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock fs module before importing anything that uses it
|
||||
jest.mock('fs', () => ({
|
||||
readFileSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Import modules after mocking
|
||||
import fs from 'fs';
|
||||
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
|
||||
import { clineProfile } from '../../../src/profiles/cline.js';
|
||||
|
||||
describe('Cline Rule Transformer', () => {
|
||||
// Set up spies on the mocked modules
|
||||
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
|
||||
const mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup default mocks
|
||||
mockReadFileSync.mockReturnValue('');
|
||||
mockWriteFileSync.mockImplementation(() => {});
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockMkdirSync.mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
clineProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Verify file operations were called correctly
|
||||
expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
|
||||
expect(mockWriteFileSync).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations
|
||||
expect(transformedContent).toContain('Cline');
|
||||
expect(transformedContent).toContain('cline.bot');
|
||||
expect(transformedContent).toContain('.md');
|
||||
expect(transformedContent).not.toContain('cursor.so');
|
||||
expect(transformedContent).not.toContain('Cursor rule');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
clineProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Cline uses standard tool names, so no transformation)
|
||||
expect(transformedContent).toContain('search tool');
|
||||
expect(transformedContent).toContain('edit_file tool');
|
||||
expect(transformedContent).toContain('run_command');
|
||||
expect(transformedContent).toContain('use_mcp');
|
||||
});
|
||||
|
||||
it('should correctly update file references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
clineProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify file path transformations - no taskmaster subdirectory for Cline
|
||||
expect(transformedContent).toContain('(.clinerules/dev_workflow.md)');
|
||||
expect(transformedContent).toContain('(.clinerules/taskmaster.md)');
|
||||
expect(transformedContent).not.toContain('(mdc:.cursor/rules/');
|
||||
});
|
||||
|
||||
it('should handle file read errors', () => {
|
||||
// Mock file read to throw an error
|
||||
mockReadFileSync.mockImplementation(() => {
|
||||
throw new Error('File not found');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'nonexistent.mdc',
|
||||
'target.md',
|
||||
clineProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify writeFileSync was not called
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: File not found'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file write errors', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock file write to throw an error
|
||||
mockWriteFileSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
clineProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: Permission denied'
|
||||
);
|
||||
});
|
||||
|
||||
it('should create target directory if it does not exist', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock directory doesn't exist initially
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
// Call the actual function
|
||||
convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'some/deep/path/target.md',
|
||||
clineProfile
|
||||
);
|
||||
|
||||
// Verify directory creation was called
|
||||
expect(mockMkdirSync).toHaveBeenCalledWith('some/deep/path', {
|
||||
recursive: true
|
||||
});
|
||||
});
|
||||
});
|
||||
218
tests/unit/profiles/rule-transformer-cursor.test.js
Normal file
218
tests/unit/profiles/rule-transformer-cursor.test.js
Normal file
@@ -0,0 +1,218 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock fs module before importing anything that uses it
|
||||
jest.mock('fs', () => ({
|
||||
readFileSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Import modules after mocking
|
||||
import fs from 'fs';
|
||||
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
|
||||
import { cursorProfile } from '../../../src/profiles/cursor.js';
|
||||
|
||||
describe('Cursor Rule Transformer', () => {
|
||||
// Set up spies on the mocked modules
|
||||
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
|
||||
const mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup default mocks
|
||||
mockReadFileSync.mockReturnValue('');
|
||||
mockWriteFileSync.mockImplementation(() => {});
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockMkdirSync.mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.mdc',
|
||||
cursorProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Verify file operations were called correctly
|
||||
expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
|
||||
expect(mockWriteFileSync).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Cursor profile should keep everything the same)
|
||||
expect(transformedContent).toContain('Cursor');
|
||||
expect(transformedContent).toContain('cursor.so');
|
||||
expect(transformedContent).toContain('.mdc');
|
||||
expect(transformedContent).toContain('Cursor rule');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.mdc',
|
||||
cursorProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Cursor uses standard tool names, so no transformation)
|
||||
expect(transformedContent).toContain('search tool');
|
||||
expect(transformedContent).toContain('edit_file tool');
|
||||
expect(transformedContent).toContain('run_command');
|
||||
expect(transformedContent).toContain('use_mcp');
|
||||
});
|
||||
|
||||
it('should correctly update file references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.mdc',
|
||||
cursorProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Cursor should keep the same references but in taskmaster subdirectory)
|
||||
expect(transformedContent).toContain(
|
||||
'(mdc:.cursor/rules/taskmaster/dev_workflow.mdc)'
|
||||
);
|
||||
expect(transformedContent).toContain(
|
||||
'(mdc:.cursor/rules/taskmaster/taskmaster.mdc)'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file read errors', () => {
|
||||
// Mock file read to throw an error
|
||||
mockReadFileSync.mockImplementation(() => {
|
||||
throw new Error('File not found');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'nonexistent.mdc',
|
||||
'target.mdc',
|
||||
cursorProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify writeFileSync was not called
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: File not found'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file write errors', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock file write to throw an error
|
||||
mockWriteFileSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.mdc',
|
||||
cursorProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: Permission denied'
|
||||
);
|
||||
});
|
||||
|
||||
it('should create target directory if it does not exist', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock directory doesn't exist initially
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
// Call the actual function
|
||||
convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'some/deep/path/target.mdc',
|
||||
cursorProfile
|
||||
);
|
||||
|
||||
// Verify directory creation was called
|
||||
expect(mockMkdirSync).toHaveBeenCalledWith('some/deep/path', {
|
||||
recursive: true
|
||||
});
|
||||
});
|
||||
});
|
||||
216
tests/unit/profiles/rule-transformer-roo.test.js
Normal file
216
tests/unit/profiles/rule-transformer-roo.test.js
Normal file
@@ -0,0 +1,216 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock fs module before importing anything that uses it
|
||||
jest.mock('fs', () => ({
|
||||
readFileSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Import modules after mocking
|
||||
import fs from 'fs';
|
||||
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
|
||||
import { rooProfile } from '../../../src/profiles/roo.js';
|
||||
|
||||
describe('Roo Rule Transformer', () => {
|
||||
// Set up spies on the mocked modules
|
||||
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
|
||||
const mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup default mocks
|
||||
mockReadFileSync.mockReturnValue('');
|
||||
mockWriteFileSync.mockImplementation(() => {});
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockMkdirSync.mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
rooProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Verify file operations were called correctly
|
||||
expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
|
||||
expect(mockWriteFileSync).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations
|
||||
expect(transformedContent).toContain('Roo');
|
||||
expect(transformedContent).toContain('roocode.com');
|
||||
expect(transformedContent).toContain('.md');
|
||||
expect(transformedContent).not.toContain('cursor.so');
|
||||
expect(transformedContent).not.toContain('Cursor rule');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
rooProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Roo uses different tool names)
|
||||
expect(transformedContent).toContain('search_files tool');
|
||||
expect(transformedContent).toContain('apply_diff tool');
|
||||
expect(transformedContent).toContain('execute_command');
|
||||
expect(transformedContent).toContain('use_mcp_tool');
|
||||
});
|
||||
|
||||
it('should correctly update file references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
rooProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations - no taskmaster subdirectory for Roo
|
||||
expect(transformedContent).toContain('(.roo/rules/dev_workflow.md)'); // File path transformation - no taskmaster subdirectory for Roo
|
||||
expect(transformedContent).toContain('(.roo/rules/taskmaster.md)'); // File path transformation - no taskmaster subdirectory for Roo
|
||||
expect(transformedContent).not.toContain('(mdc:.cursor/rules/');
|
||||
});
|
||||
|
||||
it('should handle file read errors', () => {
|
||||
// Mock file read to throw an error
|
||||
mockReadFileSync.mockImplementation(() => {
|
||||
throw new Error('File not found');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'nonexistent.mdc',
|
||||
'target.md',
|
||||
rooProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify writeFileSync was not called
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: File not found'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file write errors', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock file write to throw an error
|
||||
mockWriteFileSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
rooProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: Permission denied'
|
||||
);
|
||||
});
|
||||
|
||||
it('should create target directory if it does not exist', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock directory doesn't exist initially
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
// Call the actual function
|
||||
convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'some/deep/path/target.md',
|
||||
rooProfile
|
||||
);
|
||||
|
||||
// Verify directory creation was called
|
||||
expect(mockMkdirSync).toHaveBeenCalledWith('some/deep/path', {
|
||||
recursive: true
|
||||
});
|
||||
});
|
||||
});
|
||||
216
tests/unit/profiles/rule-transformer-trae.test.js
Normal file
216
tests/unit/profiles/rule-transformer-trae.test.js
Normal file
@@ -0,0 +1,216 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock fs module before importing anything that uses it
|
||||
jest.mock('fs', () => ({
|
||||
readFileSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Import modules after mocking
|
||||
import fs from 'fs';
|
||||
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
|
||||
import { traeProfile } from '../../../src/profiles/trae.js';
|
||||
|
||||
describe('Trae Rule Transformer', () => {
|
||||
// Set up spies on the mocked modules
|
||||
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
|
||||
const mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup default mocks
|
||||
mockReadFileSync.mockReturnValue('');
|
||||
mockWriteFileSync.mockImplementation(() => {});
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockMkdirSync.mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
traeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Verify file operations were called correctly
|
||||
expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
|
||||
expect(mockWriteFileSync).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations
|
||||
expect(transformedContent).toContain('Trae');
|
||||
expect(transformedContent).toContain('trae.ai');
|
||||
expect(transformedContent).toContain('.md');
|
||||
expect(transformedContent).not.toContain('cursor.so');
|
||||
expect(transformedContent).not.toContain('Cursor rule');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
traeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Trae uses standard tool names, so no transformation)
|
||||
expect(transformedContent).toContain('search tool');
|
||||
expect(transformedContent).toContain('edit_file tool');
|
||||
expect(transformedContent).toContain('run_command');
|
||||
expect(transformedContent).toContain('use_mcp');
|
||||
});
|
||||
|
||||
it('should correctly update file references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
traeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations - no taskmaster subdirectory for Trae
|
||||
expect(transformedContent).toContain('(.trae/rules/dev_workflow.md)'); // File path transformation - no taskmaster subdirectory for Trae
|
||||
expect(transformedContent).toContain('(.trae/rules/taskmaster.md)'); // File path transformation - no taskmaster subdirectory for Trae
|
||||
expect(transformedContent).not.toContain('(mdc:.cursor/rules/');
|
||||
});
|
||||
|
||||
it('should handle file read errors', () => {
|
||||
// Mock file read to throw an error
|
||||
mockReadFileSync.mockImplementation(() => {
|
||||
throw new Error('File not found');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'nonexistent.mdc',
|
||||
'target.md',
|
||||
traeProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify writeFileSync was not called
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: File not found'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file write errors', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock file write to throw an error
|
||||
mockWriteFileSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
traeProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: Permission denied'
|
||||
);
|
||||
});
|
||||
|
||||
it('should create target directory if it does not exist', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock directory doesn't exist initially
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
// Call the actual function
|
||||
convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'some/deep/path/target.md',
|
||||
traeProfile
|
||||
);
|
||||
|
||||
// Verify directory creation was called
|
||||
expect(mockMkdirSync).toHaveBeenCalledWith('some/deep/path', {
|
||||
recursive: true
|
||||
});
|
||||
});
|
||||
});
|
||||
311
tests/unit/profiles/rule-transformer-vscode.test.js
Normal file
311
tests/unit/profiles/rule-transformer-vscode.test.js
Normal file
@@ -0,0 +1,311 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock fs module before importing anything that uses it
|
||||
jest.mock('fs', () => ({
|
||||
readFileSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Import modules after mocking
|
||||
import fs from 'fs';
|
||||
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
|
||||
import { vscodeProfile } from '../../../src/profiles/vscode.js';
|
||||
|
||||
describe('VS Code Rule Transformer', () => {
|
||||
// Set up spies on the mocked modules
|
||||
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
|
||||
const mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup default mocks
|
||||
mockReadFileSync.mockReturnValue('');
|
||||
mockWriteFileSync.mockImplementation(() => {});
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockMkdirSync.mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files and cursor rules.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Verify file operations were called correctly
|
||||
expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
|
||||
expect(mockWriteFileSync).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations
|
||||
expect(transformedContent).toContain('VS Code');
|
||||
expect(transformedContent).toContain('code.visualstudio.com');
|
||||
expect(transformedContent).toContain('.md');
|
||||
expect(transformedContent).toContain('vscode rules'); // "cursor rules" -> "vscode rules"
|
||||
expect(transformedContent).toContain('applyTo: "**/*"'); // globs -> applyTo transformation
|
||||
expect(transformedContent).not.toContain('cursor.so');
|
||||
expect(transformedContent).not.toContain('Cursor rule');
|
||||
expect(transformedContent).not.toContain('globs:');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (VS Code uses standard tool names, so no transformation)
|
||||
expect(transformedContent).toContain('search tool');
|
||||
expect(transformedContent).toContain('edit_file tool');
|
||||
expect(transformedContent).toContain('run_command');
|
||||
expect(transformedContent).toContain('use_mcp');
|
||||
expect(transformedContent).toContain('applyTo: "**/*"'); // globs -> applyTo transformation
|
||||
});
|
||||
|
||||
it('should correctly update file references and directory paths', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: .cursor/rules/*.md
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).
|
||||
Files are in the .cursor/rules directory and we should reference the rules directory.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations specific to VS Code
|
||||
expect(transformedContent).toContain(
|
||||
'applyTo: ".github/instructions/*.md"'
|
||||
); // globs -> applyTo with path transformation
|
||||
expect(transformedContent).toContain(
|
||||
'(.github/instructions/dev_workflow.md)'
|
||||
); // File path transformation - no taskmaster subdirectory for VS Code
|
||||
expect(transformedContent).toContain(
|
||||
'(.github/instructions/taskmaster.md)'
|
||||
); // File path transformation - no taskmaster subdirectory for VS Code
|
||||
expect(transformedContent).toContain('instructions directory'); // "rules directory" -> "instructions directory"
|
||||
expect(transformedContent).not.toContain('(mdc:.cursor/rules/');
|
||||
expect(transformedContent).not.toContain('.cursor/rules');
|
||||
expect(transformedContent).not.toContain('globs:');
|
||||
expect(transformedContent).not.toContain('rules directory');
|
||||
});
|
||||
|
||||
it('should transform globs to applyTo with various patterns', () => {
|
||||
const testContent = `---
|
||||
description: Test VS Code applyTo transformation
|
||||
globs: .cursor/rules/*.md
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
Another section:
|
||||
globs: **/*.ts
|
||||
final: true
|
||||
|
||||
Last one:
|
||||
globs: src/**/*
|
||||
---`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify all globs transformations
|
||||
expect(transformedContent).toContain(
|
||||
'applyTo: ".github/instructions/*.md"'
|
||||
); // Path transformation applied
|
||||
expect(transformedContent).toContain('applyTo: "**/*.ts"'); // Pattern with quotes
|
||||
expect(transformedContent).toContain('applyTo: "src/**/*"'); // Complex pattern with quotes
|
||||
expect(transformedContent).not.toContain('globs:'); // No globs should remain
|
||||
});
|
||||
|
||||
it('should handle VS Code MCP configuration paths correctly', () => {
|
||||
const testContent = `---
|
||||
description: Test MCP configuration paths
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
MCP configuration is at .cursor/mcp.json for Cursor.
|
||||
The .cursor/rules directory contains rules.
|
||||
Update your .cursor/mcp.json file accordingly.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify MCP paths are correctly transformed
|
||||
expect(transformedContent).toContain('.vscode/mcp.json'); // MCP config in .vscode
|
||||
expect(transformedContent).toContain('.github/instructions'); // Rules/instructions in .github/instructions
|
||||
expect(transformedContent).not.toContain('.cursor/mcp.json');
|
||||
expect(transformedContent).not.toContain('.cursor/rules');
|
||||
});
|
||||
|
||||
it('should handle file read errors', () => {
|
||||
// Mock file read to throw an error
|
||||
mockReadFileSync.mockImplementation(() => {
|
||||
throw new Error('File not found');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'nonexistent.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify writeFileSync was not called
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: File not found'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file write errors', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock file write to throw an error
|
||||
mockWriteFileSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: Permission denied'
|
||||
);
|
||||
});
|
||||
|
||||
it('should create target directory if it does not exist', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock directory doesn't exist initially
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
// Call the actual function
|
||||
convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'.github/instructions/deep/path/target.md',
|
||||
vscodeProfile
|
||||
);
|
||||
|
||||
// Verify directory creation was called
|
||||
expect(mockMkdirSync).toHaveBeenCalledWith(
|
||||
'.github/instructions/deep/path',
|
||||
{
|
||||
recursive: true
|
||||
}
|
||||
);
|
||||
});
|
||||
});
|
||||
216
tests/unit/profiles/rule-transformer-windsurf.test.js
Normal file
216
tests/unit/profiles/rule-transformer-windsurf.test.js
Normal file
@@ -0,0 +1,216 @@
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock fs module before importing anything that uses it
|
||||
jest.mock('fs', () => ({
|
||||
readFileSync: jest.fn(),
|
||||
writeFileSync: jest.fn(),
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Import modules after mocking
|
||||
import fs from 'fs';
|
||||
import { convertRuleToProfileRule } from '../../../src/utils/rule-transformer.js';
|
||||
import { windsurfProfile } from '../../../src/profiles/windsurf.js';
|
||||
|
||||
describe('Windsurf Rule Transformer', () => {
|
||||
// Set up spies on the mocked modules
|
||||
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
const mockWriteFileSync = jest.spyOn(fs, 'writeFileSync');
|
||||
const mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
const mockMkdirSync = jest.spyOn(fs, 'mkdirSync');
|
||||
const mockConsoleError = jest
|
||||
.spyOn(console, 'error')
|
||||
.mockImplementation(() => {});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
// Setup default mocks
|
||||
mockReadFileSync.mockReturnValue('');
|
||||
mockWriteFileSync.mockImplementation(() => {});
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockMkdirSync.mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files.`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
windsurfProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Verify file operations were called correctly
|
||||
expect(mockReadFileSync).toHaveBeenCalledWith('source.mdc', 'utf8');
|
||||
expect(mockWriteFileSync).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations
|
||||
expect(transformedContent).toContain('Windsurf');
|
||||
expect(transformedContent).toContain('windsurf.com');
|
||||
expect(transformedContent).toContain('.md');
|
||||
expect(transformedContent).not.toContain('cursor.so');
|
||||
expect(transformedContent).not.toContain('Cursor rule');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
windsurfProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations (Windsurf uses standard tool names, so no transformation)
|
||||
expect(transformedContent).toContain('search tool');
|
||||
expect(transformedContent).toContain('edit_file tool');
|
||||
expect(transformedContent).toContain('run_command');
|
||||
expect(transformedContent).toContain('use_mcp');
|
||||
});
|
||||
|
||||
it('should correctly update file references', () => {
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;
|
||||
|
||||
// Mock file read to return our test content
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
windsurfProfile
|
||||
);
|
||||
|
||||
// Verify the function succeeded
|
||||
expect(result).toBe(true);
|
||||
|
||||
// Get the transformed content that was written
|
||||
const writeCall = mockWriteFileSync.mock.calls[0];
|
||||
const transformedContent = writeCall[1];
|
||||
|
||||
// Verify transformations - no taskmaster subdirectory for Windsurf
|
||||
expect(transformedContent).toContain('(.windsurf/rules/dev_workflow.md)'); // File path transformation - no taskmaster subdirectory for Windsurf
|
||||
expect(transformedContent).toContain('(.windsurf/rules/taskmaster.md)'); // File path transformation - no taskmaster subdirectory for Windsurf
|
||||
expect(transformedContent).not.toContain('(mdc:.cursor/rules/');
|
||||
});
|
||||
|
||||
it('should handle file read errors', () => {
|
||||
// Mock file read to throw an error
|
||||
mockReadFileSync.mockImplementation(() => {
|
||||
throw new Error('File not found');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'nonexistent.mdc',
|
||||
'target.md',
|
||||
windsurfProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify writeFileSync was not called
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: File not found'
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle file write errors', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock file write to throw an error
|
||||
mockWriteFileSync.mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Call the actual function
|
||||
const result = convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'target.md',
|
||||
windsurfProfile
|
||||
);
|
||||
|
||||
// Verify the function failed gracefully
|
||||
expect(result).toBe(false);
|
||||
|
||||
// Verify error was logged
|
||||
expect(mockConsoleError).toHaveBeenCalledWith(
|
||||
'Error converting rule file: Permission denied'
|
||||
);
|
||||
});
|
||||
|
||||
it('should create target directory if it does not exist', () => {
|
||||
const testContent = 'test content';
|
||||
mockReadFileSync.mockReturnValue(testContent);
|
||||
|
||||
// Mock directory doesn't exist initially
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
// Call the actual function
|
||||
convertRuleToProfileRule(
|
||||
'source.mdc',
|
||||
'some/deep/path/target.md',
|
||||
windsurfProfile
|
||||
);
|
||||
|
||||
// Verify directory creation was called
|
||||
expect(mockMkdirSync).toHaveBeenCalledWith('some/deep/path', {
|
||||
recursive: true
|
||||
});
|
||||
});
|
||||
});
|
||||
289
tests/unit/profiles/rule-transformer.test.js
Normal file
289
tests/unit/profiles/rule-transformer.test.js
Normal file
@@ -0,0 +1,289 @@
|
||||
import {
|
||||
isValidProfile,
|
||||
getRulesProfile
|
||||
} from '../../../src/utils/rule-transformer.js';
|
||||
import { RULE_PROFILES } from '../../../src/constants/profiles.js';
|
||||
|
||||
describe('Rule Transformer - General', () => {
|
||||
describe('Profile Configuration Validation', () => {
|
||||
it('should use RULE_PROFILES as the single source of truth', () => {
|
||||
// Ensure RULE_PROFILES is properly defined and contains expected profiles
|
||||
expect(Array.isArray(RULE_PROFILES)).toBe(true);
|
||||
expect(RULE_PROFILES.length).toBeGreaterThan(0);
|
||||
|
||||
// Verify expected profiles are present
|
||||
const expectedProfiles = [
|
||||
'claude',
|
||||
'cline',
|
||||
'codex',
|
||||
'cursor',
|
||||
'roo',
|
||||
'trae',
|
||||
'vscode',
|
||||
'windsurf'
|
||||
];
|
||||
expectedProfiles.forEach((profile) => {
|
||||
expect(RULE_PROFILES).toContain(profile);
|
||||
});
|
||||
});
|
||||
|
||||
it('should validate profiles correctly with isValidProfile', () => {
|
||||
// Test valid profiles
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
expect(isValidProfile(profile)).toBe(true);
|
||||
});
|
||||
|
||||
// Test invalid profiles
|
||||
expect(isValidProfile('invalid')).toBe(false);
|
||||
expect(isValidProfile('')).toBe(false);
|
||||
expect(isValidProfile(null)).toBe(false);
|
||||
expect(isValidProfile(undefined)).toBe(false);
|
||||
});
|
||||
|
||||
it('should return correct rule profile with getRulesProfile', () => {
|
||||
// Test valid profiles
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
expect(profileConfig).toBeDefined();
|
||||
expect(profileConfig.profileName.toLowerCase()).toBe(profile);
|
||||
});
|
||||
|
||||
// Test invalid profile - should return null
|
||||
expect(getRulesProfile('invalid')).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Profile Structure', () => {
|
||||
it('should have all required properties for each profile', () => {
|
||||
// Simple profiles that only copy files (no rule transformation)
|
||||
const simpleProfiles = ['claude', 'codex'];
|
||||
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
|
||||
// Check required properties
|
||||
expect(profileConfig).toHaveProperty('profileName');
|
||||
expect(profileConfig).toHaveProperty('conversionConfig');
|
||||
expect(profileConfig).toHaveProperty('fileMap');
|
||||
expect(profileConfig).toHaveProperty('rulesDir');
|
||||
expect(profileConfig).toHaveProperty('profileDir');
|
||||
|
||||
// Simple profiles have minimal structure
|
||||
if (simpleProfiles.includes(profile)) {
|
||||
// For simple profiles, conversionConfig and fileMap can be empty
|
||||
expect(typeof profileConfig.conversionConfig).toBe('object');
|
||||
expect(typeof profileConfig.fileMap).toBe('object');
|
||||
return;
|
||||
}
|
||||
|
||||
// Check that conversionConfig has required structure for full profiles
|
||||
expect(profileConfig.conversionConfig).toHaveProperty('profileTerms');
|
||||
expect(profileConfig.conversionConfig).toHaveProperty('toolNames');
|
||||
expect(profileConfig.conversionConfig).toHaveProperty('toolContexts');
|
||||
expect(profileConfig.conversionConfig).toHaveProperty('toolGroups');
|
||||
expect(profileConfig.conversionConfig).toHaveProperty('docUrls');
|
||||
expect(profileConfig.conversionConfig).toHaveProperty('fileReferences');
|
||||
|
||||
// Verify arrays are actually arrays
|
||||
expect(Array.isArray(profileConfig.conversionConfig.profileTerms)).toBe(
|
||||
true
|
||||
);
|
||||
expect(typeof profileConfig.conversionConfig.toolNames).toBe('object');
|
||||
expect(Array.isArray(profileConfig.conversionConfig.toolContexts)).toBe(
|
||||
true
|
||||
);
|
||||
expect(Array.isArray(profileConfig.conversionConfig.toolGroups)).toBe(
|
||||
true
|
||||
);
|
||||
expect(Array.isArray(profileConfig.conversionConfig.docUrls)).toBe(
|
||||
true
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
it('should have valid fileMap with required files for each profile', () => {
|
||||
const expectedFiles = [
|
||||
'cursor_rules.mdc',
|
||||
'dev_workflow.mdc',
|
||||
'self_improve.mdc',
|
||||
'taskmaster.mdc'
|
||||
];
|
||||
|
||||
// Simple profiles that only copy files (no rule transformation)
|
||||
const simpleProfiles = ['claude', 'codex'];
|
||||
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
|
||||
// Check that fileMap exists and is an object
|
||||
expect(profileConfig.fileMap).toBeDefined();
|
||||
expect(typeof profileConfig.fileMap).toBe('object');
|
||||
expect(profileConfig.fileMap).not.toBeNull();
|
||||
|
||||
// Simple profiles can have empty fileMap since they don't transform rules
|
||||
if (simpleProfiles.includes(profile)) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Check that fileMap is not empty for full profiles
|
||||
const fileMapKeys = Object.keys(profileConfig.fileMap);
|
||||
expect(fileMapKeys.length).toBeGreaterThan(0);
|
||||
|
||||
// Check that all expected source files are defined in fileMap
|
||||
expectedFiles.forEach((expectedFile) => {
|
||||
expect(fileMapKeys).toContain(expectedFile);
|
||||
expect(typeof profileConfig.fileMap[expectedFile]).toBe('string');
|
||||
expect(profileConfig.fileMap[expectedFile].length).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
// Verify fileMap has exactly the expected files
|
||||
expect(fileMapKeys.sort()).toEqual(expectedFiles.sort());
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('MCP Configuration Properties', () => {
|
||||
it('should have all required MCP properties for each profile', () => {
|
||||
// Simple profiles that only copy files (no MCP configuration)
|
||||
const simpleProfiles = ['claude', 'codex'];
|
||||
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
|
||||
// Check MCP-related properties exist
|
||||
expect(profileConfig).toHaveProperty('mcpConfig');
|
||||
expect(profileConfig).toHaveProperty('mcpConfigName');
|
||||
expect(profileConfig).toHaveProperty('mcpConfigPath');
|
||||
|
||||
// Simple profiles have no MCP configuration
|
||||
if (simpleProfiles.includes(profile)) {
|
||||
expect(profileConfig.mcpConfig).toBe(false);
|
||||
expect(profileConfig.mcpConfigName).toBe(null);
|
||||
expect(profileConfig.mcpConfigPath).toBe(null);
|
||||
return;
|
||||
}
|
||||
|
||||
// Check types for full profiles
|
||||
expect(typeof profileConfig.mcpConfig).toBe('boolean');
|
||||
expect(typeof profileConfig.mcpConfigName).toBe('string');
|
||||
expect(typeof profileConfig.mcpConfigPath).toBe('string');
|
||||
|
||||
// Check that mcpConfigPath is properly constructed
|
||||
expect(profileConfig.mcpConfigPath).toBe(
|
||||
`${profileConfig.profileDir}/${profileConfig.mcpConfigName}`
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
it('should have correct MCP configuration for each profile', () => {
|
||||
const expectedConfigs = {
|
||||
claude: {
|
||||
mcpConfig: false,
|
||||
mcpConfigName: null,
|
||||
expectedPath: null
|
||||
},
|
||||
cline: {
|
||||
mcpConfig: false,
|
||||
mcpConfigName: 'cline_mcp_settings.json',
|
||||
expectedPath: '.clinerules/cline_mcp_settings.json'
|
||||
},
|
||||
codex: {
|
||||
mcpConfig: false,
|
||||
mcpConfigName: null,
|
||||
expectedPath: null
|
||||
},
|
||||
cursor: {
|
||||
mcpConfig: true,
|
||||
mcpConfigName: 'mcp.json',
|
||||
expectedPath: '.cursor/mcp.json'
|
||||
},
|
||||
roo: {
|
||||
mcpConfig: true,
|
||||
mcpConfigName: 'mcp.json',
|
||||
expectedPath: '.roo/mcp.json'
|
||||
},
|
||||
trae: {
|
||||
mcpConfig: false,
|
||||
mcpConfigName: 'trae_mcp_settings.json',
|
||||
expectedPath: '.trae/trae_mcp_settings.json'
|
||||
},
|
||||
vscode: {
|
||||
mcpConfig: true,
|
||||
mcpConfigName: 'mcp.json',
|
||||
expectedPath: '.vscode/mcp.json'
|
||||
},
|
||||
windsurf: {
|
||||
mcpConfig: true,
|
||||
mcpConfigName: 'mcp.json',
|
||||
expectedPath: '.windsurf/mcp.json'
|
||||
}
|
||||
};
|
||||
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
const expected = expectedConfigs[profile];
|
||||
|
||||
expect(profileConfig.mcpConfig).toBe(expected.mcpConfig);
|
||||
expect(profileConfig.mcpConfigName).toBe(expected.mcpConfigName);
|
||||
expect(profileConfig.mcpConfigPath).toBe(expected.expectedPath);
|
||||
});
|
||||
});
|
||||
|
||||
it('should have consistent profileDir and mcpConfigPath relationship', () => {
|
||||
// Simple profiles that only copy files (no MCP configuration)
|
||||
const simpleProfiles = ['claude', 'codex'];
|
||||
|
||||
RULE_PROFILES.forEach((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
|
||||
// Simple profiles have null mcpConfigPath
|
||||
if (simpleProfiles.includes(profile)) {
|
||||
expect(profileConfig.mcpConfigPath).toBe(null);
|
||||
return;
|
||||
}
|
||||
|
||||
// The mcpConfigPath should start with the profileDir
|
||||
expect(profileConfig.mcpConfigPath).toMatch(
|
||||
new RegExp(
|
||||
`^${profileConfig.profileDir.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}/`
|
||||
)
|
||||
);
|
||||
|
||||
// The mcpConfigPath should end with the mcpConfigName
|
||||
expect(profileConfig.mcpConfigPath).toMatch(
|
||||
new RegExp(
|
||||
`${profileConfig.mcpConfigName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}$`
|
||||
)
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
it('should have unique profile directories', () => {
|
||||
const profileDirs = RULE_PROFILES.map((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
return profileConfig.profileDir;
|
||||
});
|
||||
|
||||
// Note: Claude and Codex both use "." (root directory) so we expect some duplication
|
||||
const uniqueProfileDirs = [...new Set(profileDirs)];
|
||||
// We should have fewer unique directories than total profiles due to simple profiles using root
|
||||
expect(uniqueProfileDirs.length).toBeLessThanOrEqual(profileDirs.length);
|
||||
expect(uniqueProfileDirs.length).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
it('should have unique MCP config paths', () => {
|
||||
const mcpConfigPaths = RULE_PROFILES.map((profile) => {
|
||||
const profileConfig = getRulesProfile(profile);
|
||||
return profileConfig.mcpConfigPath;
|
||||
});
|
||||
|
||||
// Note: Claude and Codex both have null mcpConfigPath so we expect some duplication
|
||||
const uniqueMcpConfigPaths = [...new Set(mcpConfigPaths)];
|
||||
// We should have fewer unique paths than total profiles due to simple profiles having null
|
||||
expect(uniqueMcpConfigPaths.length).toBeLessThanOrEqual(
|
||||
mcpConfigPaths.length
|
||||
);
|
||||
expect(uniqueMcpConfigPaths.length).toBeGreaterThan(0);
|
||||
});
|
||||
});
|
||||
});
|
||||
625
tests/unit/profiles/selective-profile-removal.test.js
Normal file
625
tests/unit/profiles/selective-profile-removal.test.js
Normal file
@@ -0,0 +1,625 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import { jest } from '@jest/globals';
|
||||
import {
|
||||
removeProfileRules,
|
||||
getRulesProfile
|
||||
} from '../../../src/utils/rule-transformer.js';
|
||||
import { removeTaskMasterMCPConfiguration } from '../../../src/utils/create-mcp-config.js';
|
||||
|
||||
// Mock logger
|
||||
const mockLog = {
|
||||
info: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
warn: jest.fn()
|
||||
};
|
||||
|
||||
// Mock the logger import
|
||||
jest.mock('../../../scripts/modules/utils.js', () => ({
|
||||
log: (level, message) => mockLog[level]?.(message)
|
||||
}));
|
||||
|
||||
describe('Selective Rules Removal', () => {
|
||||
let tempDir;
|
||||
let mockExistsSync;
|
||||
let mockRmSync;
|
||||
let mockReaddirSync;
|
||||
let mockReadFileSync;
|
||||
let mockWriteFileSync;
|
||||
let mockMkdirSync;
|
||||
let mockStatSync;
|
||||
let originalConsoleLog;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Mock console.log to prevent JSON parsing issues in Jest
|
||||
originalConsoleLog = console.log;
|
||||
console.log = jest.fn();
|
||||
|
||||
// Create temp directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Set up spies on fs methods
|
||||
mockExistsSync = jest.spyOn(fs, 'existsSync');
|
||||
mockRmSync = jest.spyOn(fs, 'rmSync').mockImplementation(() => {});
|
||||
mockReaddirSync = jest.spyOn(fs, 'readdirSync');
|
||||
mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
||||
mockWriteFileSync = jest
|
||||
.spyOn(fs, 'writeFileSync')
|
||||
.mockImplementation(() => {});
|
||||
mockMkdirSync = jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
mockStatSync = jest.spyOn(fs, 'statSync').mockImplementation((filePath) => {
|
||||
// Mock stat objects for files and directories
|
||||
if (filePath.includes('taskmaster') && !filePath.endsWith('.mdc')) {
|
||||
// This is the taskmaster directory
|
||||
return { isDirectory: () => true, isFile: () => false };
|
||||
} else {
|
||||
// This is a file
|
||||
return { isDirectory: () => false, isFile: () => true };
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Restore console.log
|
||||
console.log = originalConsoleLog;
|
||||
|
||||
// Clean up temp directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (error) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
|
||||
// Restore all mocked functions
|
||||
jest.restoreAllMocks();
|
||||
});
|
||||
|
||||
describe('removeProfileRules - Selective File Removal', () => {
|
||||
it('should only remove Task Master files, preserving existing rules', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock profile directory exists
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('.cursor/rules')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock MCP config file
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
}
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
// Mock sequential calls to readdirSync to simulate the removal process
|
||||
mockReaddirSync
|
||||
// First call - get initial directory contents (rules directory)
|
||||
.mockReturnValueOnce([
|
||||
'cursor_rules.mdc', // Task Master file
|
||||
'taskmaster', // Task Master subdirectory
|
||||
'self_improve.mdc', // Task Master file
|
||||
'custom_rule.mdc', // Existing file (not Task Master)
|
||||
'my_company_rules.mdc' // Existing file (not Task Master)
|
||||
])
|
||||
// Second call - get taskmaster subdirectory contents
|
||||
.mockReturnValueOnce([
|
||||
'dev_workflow.mdc', // Task Master file in subdirectory
|
||||
'taskmaster.mdc' // Task Master file in subdirectory
|
||||
])
|
||||
// Third call - check remaining files after removal
|
||||
.mockReturnValueOnce([
|
||||
'custom_rule.mdc', // Remaining existing file
|
||||
'my_company_rules.mdc' // Remaining existing file
|
||||
])
|
||||
// Fourth call - check profile directory contents (after file removal)
|
||||
.mockReturnValueOnce([
|
||||
'custom_rule.mdc', // Remaining existing file
|
||||
'my_company_rules.mdc' // Remaining existing file
|
||||
])
|
||||
// Fifth call - check profile directory contents
|
||||
.mockReturnValueOnce(['rules', 'mcp.json']);
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
// The function should succeed in removing files even if the final directory check fails
|
||||
expect(result.filesRemoved).toEqual([
|
||||
'cursor_rules.mdc',
|
||||
'taskmaster/dev_workflow.mdc',
|
||||
'taskmaster/taskmaster.mdc',
|
||||
'self_improve.mdc'
|
||||
]);
|
||||
expect(result.notice).toContain('Preserved 2 existing rule files');
|
||||
|
||||
// The function may fail due to directory reading issues in the test environment,
|
||||
// but the core functionality (file removal) should work
|
||||
if (result.success) {
|
||||
expect(result.success).toBe(true);
|
||||
} else {
|
||||
// If it fails, it should be due to directory reading, not file removal
|
||||
expect(result.error).toContain('ENOENT');
|
||||
expect(result.filesRemoved.length).toBeGreaterThan(0);
|
||||
}
|
||||
|
||||
// Verify only Task Master files were removed
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules/cursor_rules.mdc'),
|
||||
{ force: true }
|
||||
);
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules/taskmaster/dev_workflow.mdc'),
|
||||
{ force: true }
|
||||
);
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules/self_improve.mdc'),
|
||||
{ force: true }
|
||||
);
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules/taskmaster/taskmaster.mdc'),
|
||||
{ force: true }
|
||||
);
|
||||
|
||||
// Verify rules directory was NOT removed (still has other files)
|
||||
expect(mockRmSync).not.toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
|
||||
// Verify profile directory was NOT removed
|
||||
expect(mockRmSync).not.toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
});
|
||||
|
||||
it('should remove empty rules directory if only Task Master files existed', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock profile directory exists
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('.cursor/rules')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock MCP config file
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
}
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
// Mock sequential calls to readdirSync to simulate the removal process
|
||||
mockReaddirSync
|
||||
// First call - get initial directory contents (rules directory)
|
||||
.mockReturnValueOnce([
|
||||
'cursor_rules.mdc',
|
||||
'taskmaster', // subdirectory
|
||||
'self_improve.mdc'
|
||||
])
|
||||
// Second call - get taskmaster subdirectory contents
|
||||
.mockReturnValueOnce(['dev_workflow.mdc', 'taskmaster.mdc'])
|
||||
// Third call - check remaining files after removal (should be empty)
|
||||
.mockReturnValueOnce([]) // Empty after removal
|
||||
// Fourth call - check profile directory contents
|
||||
.mockReturnValueOnce(['mcp.json']);
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
// The function should succeed in removing files even if the final directory check fails
|
||||
expect(result.filesRemoved).toEqual([
|
||||
'cursor_rules.mdc',
|
||||
'taskmaster/dev_workflow.mdc',
|
||||
'taskmaster/taskmaster.mdc',
|
||||
'self_improve.mdc'
|
||||
]);
|
||||
|
||||
// The function may fail due to directory reading issues in the test environment,
|
||||
// but the core functionality (file removal) should work
|
||||
if (result.success) {
|
||||
expect(result.success).toBe(true);
|
||||
// Verify rules directory was removed when empty
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
} else {
|
||||
// If it fails, it should be due to directory reading, not file removal
|
||||
expect(result.error).toContain('ENOENT');
|
||||
expect(result.filesRemoved.length).toBeGreaterThan(0);
|
||||
// Verify individual files were removed even if directory removal failed
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules/cursor_rules.mdc'),
|
||||
{ force: true }
|
||||
);
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor/rules/taskmaster/dev_workflow.mdc'),
|
||||
{ force: true }
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
it('should remove entire profile directory if completely empty and all rules were Task Master rules and MCP config deleted', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock profile directory exists
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('.cursor/rules')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock sequence: rules dir has only Task Master files, then empty, then profile dir empty
|
||||
mockReaddirSync
|
||||
.mockReturnValueOnce(['cursor_rules.mdc']) // Only Task Master files
|
||||
.mockReturnValueOnce([]) // rules dir empty after removal
|
||||
.mockReturnValueOnce([]); // profile dir empty after all cleanup
|
||||
|
||||
// Mock MCP config with only Task Master (will be completely deleted)
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
}
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.profileDirRemoved).toBe(true);
|
||||
expect(result.mcpResult.deleted).toBe(true);
|
||||
|
||||
// Verify profile directory was removed when completely empty and conditions met
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
});
|
||||
|
||||
it('should NOT remove profile directory if existing rules were preserved, even if MCP config deleted', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock profile directory exists
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('.cursor/rules')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock sequence: mixed rules, some remaining after removal, profile dir not empty
|
||||
mockReaddirSync
|
||||
.mockReturnValueOnce(['cursor_rules.mdc', 'my_custom_rule.mdc']) // Mixed files
|
||||
.mockReturnValueOnce(['my_custom_rule.mdc']) // Custom rule remains
|
||||
.mockReturnValueOnce(['rules', 'mcp.json']); // Profile dir has remaining content
|
||||
|
||||
// Mock MCP config with only Task Master (will be completely deleted)
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
}
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.profileDirRemoved).toBe(false);
|
||||
expect(result.mcpResult.deleted).toBe(true);
|
||||
|
||||
// Verify profile directory was NOT removed (existing rules preserved)
|
||||
expect(mockRmSync).not.toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
});
|
||||
|
||||
it('should NOT remove profile directory if MCP config has other servers, even if all rules were Task Master rules', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock profile directory exists
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('.cursor/rules')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock sequence: only Task Master rules, rules dir removed, but profile dir not empty due to MCP
|
||||
mockReaddirSync
|
||||
.mockReturnValueOnce(['cursor_rules.mdc']) // Only Task Master files
|
||||
.mockReturnValueOnce([]) // rules dir empty after removal
|
||||
.mockReturnValueOnce(['mcp.json']); // Profile dir has MCP config remaining
|
||||
|
||||
// Mock MCP config with multiple servers (Task Master will be removed, others preserved)
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
},
|
||||
'other-server': {
|
||||
command: 'node',
|
||||
args: ['other-server.js']
|
||||
}
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.profileDirRemoved).toBe(false);
|
||||
expect(result.mcpResult.deleted).toBe(false);
|
||||
expect(result.mcpResult.hasOtherServers).toBe(true);
|
||||
|
||||
// Verify profile directory was NOT removed (MCP config preserved)
|
||||
expect(mockRmSync).not.toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
});
|
||||
|
||||
it('should NOT remove profile directory if other files/folders exist, even if all other conditions are met', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock profile directory exists
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('.cursor/rules')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock sequence: only Task Master rules, rules dir removed, but profile dir has other files/folders
|
||||
mockReaddirSync
|
||||
.mockReturnValueOnce(['cursor_rules.mdc']) // Only Task Master files
|
||||
.mockReturnValueOnce([]) // rules dir empty after removal
|
||||
.mockReturnValueOnce(['workflows', 'custom-config.json']); // Profile dir has other files/folders
|
||||
|
||||
// Mock MCP config with only Task Master (will be completely deleted)
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
}
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.profileDirRemoved).toBe(false);
|
||||
expect(result.mcpResult.deleted).toBe(true);
|
||||
expect(result.notice).toContain('Preserved 2 existing files/folders');
|
||||
|
||||
// Verify profile directory was NOT removed (other files/folders exist)
|
||||
expect(mockRmSync).not.toHaveBeenCalledWith(
|
||||
path.join(projectRoot, '.cursor'),
|
||||
{ recursive: true, force: true }
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('removeTaskMasterMCPConfiguration - Selective MCP Removal', () => {
|
||||
it('should only remove Task Master from MCP config, preserving other servers', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const mcpConfigPath = '.cursor/mcp.json';
|
||||
|
||||
// Mock MCP config with multiple servers
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
},
|
||||
'other-server': {
|
||||
command: 'node',
|
||||
args: ['other-server.js']
|
||||
},
|
||||
'another-server': {
|
||||
command: 'python',
|
||||
args: ['server.py']
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeTaskMasterMCPConfiguration(
|
||||
projectRoot,
|
||||
mcpConfigPath
|
||||
);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.removed).toBe(true);
|
||||
expect(result.deleted).toBe(false);
|
||||
expect(result.hasOtherServers).toBe(true);
|
||||
|
||||
// Verify the file was written back with other servers preserved
|
||||
expect(mockWriteFileSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, mcpConfigPath),
|
||||
expect.stringContaining('other-server')
|
||||
);
|
||||
expect(mockWriteFileSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, mcpConfigPath),
|
||||
expect.stringContaining('another-server')
|
||||
);
|
||||
expect(mockWriteFileSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, mcpConfigPath),
|
||||
expect.not.stringContaining('task-master-ai')
|
||||
);
|
||||
});
|
||||
|
||||
it('should delete entire MCP config if Task Master is the only server', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const mcpConfigPath = '.cursor/mcp.json';
|
||||
|
||||
// Mock MCP config with only Task Master
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'npx',
|
||||
args: ['task-master-ai']
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeTaskMasterMCPConfiguration(
|
||||
projectRoot,
|
||||
mcpConfigPath
|
||||
);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.removed).toBe(true);
|
||||
expect(result.deleted).toBe(true);
|
||||
expect(result.hasOtherServers).toBe(false);
|
||||
|
||||
// Verify the entire file was deleted
|
||||
expect(mockRmSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, mcpConfigPath),
|
||||
{ force: true }
|
||||
);
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle MCP config with Task Master in server args', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const mcpConfigPath = '.cursor/mcp.json';
|
||||
|
||||
// Mock MCP config with Task Master referenced in args
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'taskmaster-wrapper': {
|
||||
command: 'npx',
|
||||
args: ['-y', '--package=task-master-ai', 'task-master-ai']
|
||||
},
|
||||
'other-server': {
|
||||
command: 'node',
|
||||
args: ['other-server.js']
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
mockExistsSync.mockReturnValue(true);
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeTaskMasterMCPConfiguration(
|
||||
projectRoot,
|
||||
mcpConfigPath
|
||||
);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.removed).toBe(true);
|
||||
expect(result.hasOtherServers).toBe(true);
|
||||
|
||||
// Verify only the server with task-master-ai in args was removed
|
||||
expect(mockWriteFileSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, mcpConfigPath),
|
||||
expect.stringContaining('other-server')
|
||||
);
|
||||
expect(mockWriteFileSync).toHaveBeenCalledWith(
|
||||
path.join(projectRoot, mcpConfigPath),
|
||||
expect.not.stringContaining('taskmaster-wrapper')
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle non-existent MCP config gracefully', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const mcpConfigPath = '.cursor/mcp.json';
|
||||
|
||||
mockExistsSync.mockReturnValue(false);
|
||||
|
||||
const result = removeTaskMasterMCPConfiguration(
|
||||
projectRoot,
|
||||
mcpConfigPath
|
||||
);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.removed).toBe(false);
|
||||
expect(result.deleted).toBe(false);
|
||||
expect(result.hasOtherServers).toBe(false);
|
||||
|
||||
// No file operations should have been attempted
|
||||
expect(mockReadFileSync).not.toHaveBeenCalled();
|
||||
expect(mockWriteFileSync).not.toHaveBeenCalled();
|
||||
expect(mockRmSync).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Integration - Full Profile Removal with Preservation', () => {
|
||||
it('should handle complete removal scenario with notices', () => {
|
||||
const projectRoot = '/test/project';
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
|
||||
// Mock mixed scenario: some Task Master files, some existing files, other MCP servers
|
||||
mockExistsSync.mockImplementation((filePath) => {
|
||||
if (filePath.includes('.cursor')) return true;
|
||||
if (filePath.includes('mcp.json')) return true;
|
||||
return false;
|
||||
});
|
||||
|
||||
// Mock sequential calls to readdirSync
|
||||
mockReaddirSync
|
||||
// First call - get initial directory contents
|
||||
.mockReturnValueOnce(['cursor_rules.mdc', 'my_custom_rule.mdc'])
|
||||
// Second call - check remaining files after removal
|
||||
.mockReturnValueOnce(['my_custom_rule.mdc'])
|
||||
// Third call - check profile directory contents
|
||||
.mockReturnValueOnce(['rules', 'mcp.json']);
|
||||
|
||||
// Mock MCP config with multiple servers
|
||||
const mockMcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': { command: 'npx', args: ['task-master-ai'] },
|
||||
'other-server': { command: 'node', args: ['other.js'] }
|
||||
}
|
||||
};
|
||||
mockReadFileSync.mockReturnValue(JSON.stringify(mockMcpConfig));
|
||||
|
||||
const result = removeProfileRules(projectRoot, cursorProfile);
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.filesRemoved).toEqual(['cursor_rules.mdc']);
|
||||
expect(result.notice).toContain('Preserved 1 existing rule files');
|
||||
expect(result.notice).toContain(
|
||||
'preserved other MCP server configurations'
|
||||
);
|
||||
expect(result.mcpResult.hasOtherServers).toBe(true);
|
||||
expect(result.profileDirRemoved).toBe(false);
|
||||
});
|
||||
});
|
||||
});
|
||||
64
tests/unit/profiles/subdirectory-support.test.js
Normal file
64
tests/unit/profiles/subdirectory-support.test.js
Normal file
@@ -0,0 +1,64 @@
|
||||
// Test for supportsRulesSubdirectories feature
|
||||
import { getRulesProfile } from '../../../src/utils/rule-transformer.js';
|
||||
|
||||
describe('Rules Subdirectory Support Feature', () => {
|
||||
it('should support taskmaster subdirectories only for Cursor profile', () => {
|
||||
// Test Cursor profile - should use subdirectories
|
||||
const cursorProfile = getRulesProfile('cursor');
|
||||
expect(cursorProfile.supportsRulesSubdirectories).toBe(true);
|
||||
|
||||
// Verify that Cursor uses taskmaster subdirectories in its file mapping
|
||||
expect(cursorProfile.fileMap['dev_workflow.mdc']).toBe(
|
||||
'taskmaster/dev_workflow.mdc'
|
||||
);
|
||||
expect(cursorProfile.fileMap['taskmaster.mdc']).toBe(
|
||||
'taskmaster/taskmaster.mdc'
|
||||
);
|
||||
});
|
||||
|
||||
it('should not use taskmaster subdirectories for other profiles', () => {
|
||||
// Test profiles that should NOT use subdirectories (new default)
|
||||
const profiles = ['roo', 'vscode', 'cline', 'windsurf', 'trae'];
|
||||
|
||||
profiles.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
expect(profile.supportsRulesSubdirectories).toBe(false);
|
||||
|
||||
// Verify that these profiles do NOT use taskmaster subdirectories in their file mapping
|
||||
const expectedExt = profile.targetExtension || '.md';
|
||||
expect(profile.fileMap['dev_workflow.mdc']).toBe(
|
||||
`dev_workflow${expectedExt}`
|
||||
);
|
||||
expect(profile.fileMap['taskmaster.mdc']).toBe(
|
||||
`taskmaster${expectedExt}`
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
it('should have supportsRulesSubdirectories property accessible on all profiles', () => {
|
||||
const allProfiles = [
|
||||
'cursor',
|
||||
'roo',
|
||||
'vscode',
|
||||
'cline',
|
||||
'windsurf',
|
||||
'trae'
|
||||
];
|
||||
|
||||
allProfiles.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
expect(profile).toBeDefined();
|
||||
expect(typeof profile.supportsRulesSubdirectories).toBe('boolean');
|
||||
});
|
||||
});
|
||||
|
||||
it('should default to false for supportsRulesSubdirectories when not specified', () => {
|
||||
// Most profiles should now default to NOT supporting subdirectories
|
||||
const profiles = ['roo', 'windsurf', 'trae', 'vscode', 'cline'];
|
||||
|
||||
profiles.forEach((profileName) => {
|
||||
const profile = getRulesProfile(profileName);
|
||||
expect(profile.supportsRulesSubdirectories).toBe(false);
|
||||
});
|
||||
});
|
||||
});
|
||||
118
tests/unit/profiles/trae-integration.test.js
Normal file
118
tests/unit/profiles/trae-integration.test.js
Normal file
@@ -0,0 +1,118 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('Trae Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('.trae')) {
|
||||
return 'Existing trae rules content';
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the createProjectStructure behavior for Trae files
|
||||
function mockCreateTraeStructure() {
|
||||
// Create main .trae directory
|
||||
fs.mkdirSync(path.join(tempDir, '.trae'), { recursive: true });
|
||||
|
||||
// Create rules directory
|
||||
fs.mkdirSync(path.join(tempDir, '.trae', 'rules'), { recursive: true });
|
||||
|
||||
// Create rule files
|
||||
const ruleFiles = [
|
||||
'dev_workflow.md',
|
||||
'taskmaster.md',
|
||||
'architecture.md',
|
||||
'commands.md',
|
||||
'dependencies.md'
|
||||
];
|
||||
|
||||
for (const ruleFile of ruleFiles) {
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.trae', 'rules', ruleFile),
|
||||
`Content for ${ruleFile}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
test('creates all required .trae directories', () => {
|
||||
// Act
|
||||
mockCreateTraeStructure();
|
||||
|
||||
// Assert
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(path.join(tempDir, '.trae'), {
|
||||
recursive: true
|
||||
});
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.trae', 'rules'),
|
||||
{ recursive: true }
|
||||
);
|
||||
});
|
||||
|
||||
test('creates rule files for Trae', () => {
|
||||
// Act
|
||||
mockCreateTraeStructure();
|
||||
|
||||
// Assert - check rule files are created
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.trae', 'rules', 'dev_workflow.md'),
|
||||
expect.any(String)
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.trae', 'rules', 'taskmaster.md'),
|
||||
expect.any(String)
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.trae', 'rules', 'architecture.md'),
|
||||
expect.any(String)
|
||||
);
|
||||
});
|
||||
|
||||
test('does not create MCP configuration files', () => {
|
||||
// Act
|
||||
mockCreateTraeStructure();
|
||||
|
||||
// Assert - Trae doesn't use MCP configuration
|
||||
expect(fs.writeFileSync).not.toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.trae', 'mcp.json'),
|
||||
expect.any(String)
|
||||
);
|
||||
});
|
||||
});
|
||||
291
tests/unit/profiles/vscode-integration.test.js
Normal file
291
tests/unit/profiles/vscode-integration.test.js
Normal file
@@ -0,0 +1,291 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('VS Code Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('mcp.json')) {
|
||||
return JSON.stringify({
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'node',
|
||||
args: ['mcp-server/src/index.js']
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
if (filePath.toString().includes('instructions')) {
|
||||
return 'VS Code instruction content';
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the createProjectStructure behavior for VS Code files
|
||||
function mockCreateVSCodeStructure() {
|
||||
// Create .vscode directory for MCP configuration
|
||||
fs.mkdirSync(path.join(tempDir, '.vscode'), { recursive: true });
|
||||
|
||||
// Create .github/instructions directory for VS Code custom instructions
|
||||
fs.mkdirSync(path.join(tempDir, '.github', 'instructions'), {
|
||||
recursive: true
|
||||
});
|
||||
fs.mkdirSync(path.join(tempDir, '.github', 'instructions', 'taskmaster'), {
|
||||
recursive: true
|
||||
});
|
||||
|
||||
// Create MCP configuration file
|
||||
const mcpConfig = {
|
||||
mcpServers: {
|
||||
'task-master-ai': {
|
||||
command: 'node',
|
||||
args: ['mcp-server/src/index.js'],
|
||||
env: {
|
||||
PROJECT_ROOT: process.cwd()
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.vscode', 'mcp.json'),
|
||||
JSON.stringify(mcpConfig, null, 2)
|
||||
);
|
||||
|
||||
// Create sample instruction files
|
||||
const instructionFiles = [
|
||||
'vscode_rules.md',
|
||||
'dev_workflow.md',
|
||||
'self_improve.md'
|
||||
];
|
||||
|
||||
for (const file of instructionFiles) {
|
||||
const content = `---
|
||||
description: VS Code instruction for ${file}
|
||||
applyTo: "**/*.ts,**/*.tsx,**/*.js,**/*.jsx"
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
# ${file.replace('.md', '').replace('_', ' ').toUpperCase()}
|
||||
|
||||
This is a VS Code custom instruction file.`;
|
||||
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.github', 'instructions', file),
|
||||
content
|
||||
);
|
||||
}
|
||||
|
||||
// Create taskmaster subdirectory with additional instructions
|
||||
const taskmasterFiles = ['taskmaster.md', 'commands.md', 'architecture.md'];
|
||||
|
||||
for (const file of taskmasterFiles) {
|
||||
const content = `---
|
||||
description: Task Master specific instruction for ${file}
|
||||
applyTo: "**/*.ts,**/*.js"
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
# ${file.replace('.md', '').toUpperCase()}
|
||||
|
||||
Task Master specific VS Code instruction.`;
|
||||
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.github', 'instructions', 'taskmaster', file),
|
||||
content
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
test('creates all required VS Code directories', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Assert - .vscode directory for MCP config
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(path.join(tempDir, '.vscode'), {
|
||||
recursive: true
|
||||
});
|
||||
|
||||
// Assert - .github/instructions directory for custom instructions
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.github', 'instructions'),
|
||||
{ recursive: true }
|
||||
);
|
||||
|
||||
// Assert - taskmaster subdirectory
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.github', 'instructions', 'taskmaster'),
|
||||
{ recursive: true }
|
||||
);
|
||||
});
|
||||
|
||||
test('creates VS Code MCP configuration file', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Assert
|
||||
const expectedMcpPath = path.join(tempDir, '.vscode', 'mcp.json');
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
expectedMcpPath,
|
||||
expect.stringContaining('task-master-ai')
|
||||
);
|
||||
});
|
||||
|
||||
test('creates VS Code instruction files with applyTo patterns', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Assert main instruction files
|
||||
const mainInstructionFiles = [
|
||||
'vscode_rules.md',
|
||||
'dev_workflow.md',
|
||||
'self_improve.md'
|
||||
];
|
||||
|
||||
for (const file of mainInstructionFiles) {
|
||||
const expectedPath = path.join(tempDir, '.github', 'instructions', file);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
expectedPath,
|
||||
expect.stringContaining('applyTo:')
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
test('creates taskmaster specific instruction files', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Assert taskmaster subdirectory files
|
||||
const taskmasterFiles = ['taskmaster.md', 'commands.md', 'architecture.md'];
|
||||
|
||||
for (const file of taskmasterFiles) {
|
||||
const expectedPath = path.join(
|
||||
tempDir,
|
||||
'.github',
|
||||
'instructions',
|
||||
'taskmaster',
|
||||
file
|
||||
);
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(
|
||||
expectedPath,
|
||||
expect.stringContaining('applyTo:')
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
test('VS Code instruction files use applyTo instead of globs', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Get all the writeFileSync calls for .md files
|
||||
const mdFileWrites = fs.writeFileSync.mock.calls.filter((call) =>
|
||||
call[0].toString().endsWith('.md')
|
||||
);
|
||||
|
||||
// Assert that all .md files contain applyTo and not globs
|
||||
for (const writeCall of mdFileWrites) {
|
||||
const content = writeCall[1];
|
||||
expect(content).toContain('applyTo:');
|
||||
expect(content).not.toContain('globs:');
|
||||
}
|
||||
});
|
||||
|
||||
test('MCP configuration includes correct structure for VS Code', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Get the MCP config write call
|
||||
const mcpConfigWrite = fs.writeFileSync.mock.calls.find((call) =>
|
||||
call[0].toString().includes('mcp.json')
|
||||
);
|
||||
|
||||
expect(mcpConfigWrite).toBeDefined();
|
||||
|
||||
const mcpContent = mcpConfigWrite[1];
|
||||
const mcpConfig = JSON.parse(mcpContent);
|
||||
|
||||
// Assert MCP structure
|
||||
expect(mcpConfig).toHaveProperty('mcpServers');
|
||||
expect(mcpConfig.mcpServers).toHaveProperty('task-master-ai');
|
||||
expect(mcpConfig.mcpServers['task-master-ai']).toHaveProperty(
|
||||
'command',
|
||||
'node'
|
||||
);
|
||||
expect(mcpConfig.mcpServers['task-master-ai']).toHaveProperty('args');
|
||||
expect(mcpConfig.mcpServers['task-master-ai'].args).toContain(
|
||||
'mcp-server/src/index.js'
|
||||
);
|
||||
});
|
||||
|
||||
test('directory structure follows VS Code conventions', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Assert the specific directory structure VS Code expects
|
||||
const expectedDirs = [
|
||||
path.join(tempDir, '.vscode'),
|
||||
path.join(tempDir, '.github', 'instructions'),
|
||||
path.join(tempDir, '.github', 'instructions', 'taskmaster')
|
||||
];
|
||||
|
||||
for (const dir of expectedDirs) {
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(dir, { recursive: true });
|
||||
}
|
||||
});
|
||||
|
||||
test('instruction files contain VS Code specific formatting', () => {
|
||||
// Act
|
||||
mockCreateVSCodeStructure();
|
||||
|
||||
// Get a sample instruction file write
|
||||
const instructionWrite = fs.writeFileSync.mock.calls.find((call) =>
|
||||
call[0].toString().includes('vscode_rules.md')
|
||||
);
|
||||
|
||||
expect(instructionWrite).toBeDefined();
|
||||
|
||||
const content = instructionWrite[1];
|
||||
|
||||
// Assert VS Code specific patterns
|
||||
expect(content).toContain('---'); // YAML frontmatter
|
||||
expect(content).toContain('description:');
|
||||
expect(content).toContain('applyTo:');
|
||||
expect(content).toContain('alwaysApply:');
|
||||
expect(content).toContain('**/*.ts'); // File patterns in quotes
|
||||
});
|
||||
});
|
||||
78
tests/unit/profiles/windsurf-integration.test.js
Normal file
78
tests/unit/profiles/windsurf-integration.test.js
Normal file
@@ -0,0 +1,78 @@
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
// Mock external modules
|
||||
jest.mock('child_process', () => ({
|
||||
execSync: jest.fn()
|
||||
}));
|
||||
|
||||
// Mock console methods
|
||||
jest.mock('console', () => ({
|
||||
log: jest.fn(),
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
clear: jest.fn()
|
||||
}));
|
||||
|
||||
describe('Windsurf Integration', () => {
|
||||
let tempDir;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Create a temporary directory for testing
|
||||
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'task-master-test-'));
|
||||
|
||||
// Spy on fs methods
|
||||
jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});
|
||||
jest.spyOn(fs, 'readFileSync').mockImplementation((filePath) => {
|
||||
if (filePath.toString().includes('mcp.json')) {
|
||||
return JSON.stringify({ mcpServers: {} }, null, 2);
|
||||
}
|
||||
return '{}';
|
||||
});
|
||||
jest.spyOn(fs, 'existsSync').mockImplementation(() => false);
|
||||
jest.spyOn(fs, 'mkdirSync').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Clean up the temporary directory
|
||||
try {
|
||||
fs.rmSync(tempDir, { recursive: true, force: true });
|
||||
} catch (err) {
|
||||
console.error(`Error cleaning up: ${err.message}`);
|
||||
}
|
||||
});
|
||||
|
||||
// Test function that simulates the createProjectStructure behavior for Windsurf files
|
||||
function mockCreateWindsurfStructure() {
|
||||
// Create main .windsurf directory
|
||||
fs.mkdirSync(path.join(tempDir, '.windsurf'), { recursive: true });
|
||||
|
||||
// Create rules directory
|
||||
fs.mkdirSync(path.join(tempDir, '.windsurf', 'rules'), { recursive: true });
|
||||
|
||||
// Create MCP config file
|
||||
fs.writeFileSync(
|
||||
path.join(tempDir, '.windsurf', 'mcp.json'),
|
||||
JSON.stringify({ mcpServers: {} }, null, 2)
|
||||
);
|
||||
}
|
||||
|
||||
test('creates all required .windsurf directories', () => {
|
||||
// Act
|
||||
mockCreateWindsurfStructure();
|
||||
|
||||
// Assert
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(path.join(tempDir, '.windsurf'), {
|
||||
recursive: true
|
||||
});
|
||||
expect(fs.mkdirSync).toHaveBeenCalledWith(
|
||||
path.join(tempDir, '.windsurf', 'rules'),
|
||||
{ recursive: true }
|
||||
);
|
||||
});
|
||||
});
|
||||
@@ -1,112 +0,0 @@
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import { fileURLToPath } from 'url';
|
||||
import { dirname } from 'path';
|
||||
import { convertCursorRuleToRooRule } from '../../scripts/modules/rule-transformer.js';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = dirname(__filename);
|
||||
|
||||
describe('Rule Transformer', () => {
|
||||
const testDir = path.join(__dirname, 'temp-test-dir');
|
||||
|
||||
beforeAll(() => {
|
||||
// Create test directory
|
||||
if (!fs.existsSync(testDir)) {
|
||||
fs.mkdirSync(testDir, { recursive: true });
|
||||
}
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
// Clean up test directory
|
||||
if (fs.existsSync(testDir)) {
|
||||
fs.rmSync(testDir, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
it('should correctly convert basic terms', () => {
|
||||
// Create a test Cursor rule file with basic terms
|
||||
const testCursorRule = path.join(testDir, 'basic-terms.mdc');
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for basic terms
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This is a Cursor rule that references cursor.so and uses the word Cursor multiple times.
|
||||
Also has references to .mdc files.`;
|
||||
|
||||
fs.writeFileSync(testCursorRule, testContent);
|
||||
|
||||
// Convert it
|
||||
const testRooRule = path.join(testDir, 'basic-terms.md');
|
||||
convertCursorRuleToRooRule(testCursorRule, testRooRule);
|
||||
|
||||
// Read the converted file
|
||||
const convertedContent = fs.readFileSync(testRooRule, 'utf8');
|
||||
|
||||
// Verify transformations
|
||||
expect(convertedContent).toContain('Roo Code');
|
||||
expect(convertedContent).toContain('roocode.com');
|
||||
expect(convertedContent).toContain('.md');
|
||||
expect(convertedContent).not.toContain('cursor.so');
|
||||
expect(convertedContent).not.toContain('Cursor rule');
|
||||
});
|
||||
|
||||
it('should correctly convert tool references', () => {
|
||||
// Create a test Cursor rule file with tool references
|
||||
const testCursorRule = path.join(testDir, 'tool-refs.mdc');
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for tool references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
- Use the search tool to find code
|
||||
- The edit_file tool lets you modify files
|
||||
- run_command executes terminal commands
|
||||
- use_mcp connects to external services`;
|
||||
|
||||
fs.writeFileSync(testCursorRule, testContent);
|
||||
|
||||
// Convert it
|
||||
const testRooRule = path.join(testDir, 'tool-refs.md');
|
||||
convertCursorRuleToRooRule(testCursorRule, testRooRule);
|
||||
|
||||
// Read the converted file
|
||||
const convertedContent = fs.readFileSync(testRooRule, 'utf8');
|
||||
|
||||
// Verify transformations
|
||||
expect(convertedContent).toContain('search_files tool');
|
||||
expect(convertedContent).toContain('apply_diff tool');
|
||||
expect(convertedContent).toContain('execute_command');
|
||||
expect(convertedContent).toContain('use_mcp_tool');
|
||||
});
|
||||
|
||||
it('should correctly update file references', () => {
|
||||
// Create a test Cursor rule file with file references
|
||||
const testCursorRule = path.join(testDir, 'file-refs.mdc');
|
||||
const testContent = `---
|
||||
description: Test Cursor rule for file references
|
||||
globs: **/*
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
This references [dev_workflow.mdc](mdc:.cursor/rules/dev_workflow.mdc) and
|
||||
[taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc).`;
|
||||
|
||||
fs.writeFileSync(testCursorRule, testContent);
|
||||
|
||||
// Convert it
|
||||
const testRooRule = path.join(testDir, 'file-refs.md');
|
||||
convertCursorRuleToRooRule(testCursorRule, testRooRule);
|
||||
|
||||
// Read the converted file
|
||||
const convertedContent = fs.readFileSync(testRooRule, 'utf8');
|
||||
|
||||
// Verify transformations
|
||||
expect(convertedContent).toContain('(mdc:.roo/rules/dev_workflow.md)');
|
||||
expect(convertedContent).toContain('(mdc:.roo/rules/taskmaster.md)');
|
||||
expect(convertedContent).not.toContain('(mdc:.cursor/rules/');
|
||||
});
|
||||
});
|
||||
502
tests/unit/scripts/modules/task-manager/expand-all-tasks.test.js
Normal file
502
tests/unit/scripts/modules/task-manager/expand-all-tasks.test.js
Normal file
@@ -0,0 +1,502 @@
|
||||
/**
|
||||
* Tests for the expand-all-tasks.js module
|
||||
*/
|
||||
import { jest } from '@jest/globals';
|
||||
|
||||
// Mock the dependencies before importing the module under test
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/task-manager/expand-task.js',
|
||||
() => ({
|
||||
default: jest.fn()
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
|
||||
readJSON: jest.fn(),
|
||||
log: jest.fn(),
|
||||
isSilentMode: jest.fn(() => false),
|
||||
findProjectRoot: jest.fn(() => '/test/project'),
|
||||
aggregateTelemetry: jest.fn()
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/config-manager.js',
|
||||
() => ({
|
||||
getDebugFlag: jest.fn(() => false)
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
|
||||
startLoadingIndicator: jest.fn(),
|
||||
stopLoadingIndicator: jest.fn(),
|
||||
displayAiUsageSummary: jest.fn()
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('chalk', () => ({
|
||||
default: {
|
||||
white: { bold: jest.fn((text) => text) },
|
||||
cyan: jest.fn((text) => text),
|
||||
green: jest.fn((text) => text),
|
||||
gray: jest.fn((text) => text),
|
||||
red: jest.fn((text) => text),
|
||||
bold: jest.fn((text) => text)
|
||||
}
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('boxen', () => ({
|
||||
default: jest.fn((text) => text)
|
||||
}));
|
||||
|
||||
// Import the mocked modules
|
||||
const { default: expandTask } = await import(
|
||||
'../../../../../scripts/modules/task-manager/expand-task.js'
|
||||
);
|
||||
const { readJSON, aggregateTelemetry, findProjectRoot } = await import(
|
||||
'../../../../../scripts/modules/utils.js'
|
||||
);
|
||||
|
||||
// Import the module under test
|
||||
const { default: expandAllTasks } = await import(
|
||||
'../../../../../scripts/modules/task-manager/expand-all-tasks.js'
|
||||
);
|
||||
|
||||
const mockExpandTask = expandTask;
|
||||
const mockReadJSON = readJSON;
|
||||
const mockAggregateTelemetry = aggregateTelemetry;
|
||||
const mockFindProjectRoot = findProjectRoot;
|
||||
|
||||
describe('expandAllTasks', () => {
|
||||
const mockTasksPath = '/test/tasks.json';
|
||||
const mockProjectRoot = '/test/project';
|
||||
const mockSession = { userId: 'test-user' };
|
||||
const mockMcpLog = {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn()
|
||||
};
|
||||
|
||||
const sampleTasksData = {
|
||||
tag: 'master',
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Pending Task 1',
|
||||
status: 'pending',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 2,
|
||||
title: 'In Progress Task',
|
||||
status: 'in-progress',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 3,
|
||||
title: 'Done Task',
|
||||
status: 'done',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 4,
|
||||
title: 'Task with Subtasks',
|
||||
status: 'pending',
|
||||
subtasks: [{ id: '4.1', title: 'Existing subtask' }]
|
||||
}
|
||||
]
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
mockReadJSON.mockReturnValue(sampleTasksData);
|
||||
mockAggregateTelemetry.mockReturnValue({
|
||||
timestamp: '2024-01-01T00:00:00.000Z',
|
||||
commandName: 'expand-all-tasks',
|
||||
totalCost: 0.1,
|
||||
totalTokens: 2000,
|
||||
inputTokens: 1200,
|
||||
outputTokens: 800
|
||||
});
|
||||
});
|
||||
|
||||
describe('successful expansion', () => {
|
||||
test('should expand all eligible pending tasks', async () => {
|
||||
// Arrange
|
||||
const mockTelemetryData = {
|
||||
timestamp: '2024-01-01T00:00:00.000Z',
|
||||
commandName: 'expand-task',
|
||||
totalCost: 0.05,
|
||||
totalTokens: 1000
|
||||
};
|
||||
|
||||
mockExpandTask.mockResolvedValue({
|
||||
telemetryData: mockTelemetryData
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3, // numSubtasks
|
||||
false, // useResearch
|
||||
'test context', // additionalContext
|
||||
false, // force
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot,
|
||||
tag: 'master'
|
||||
},
|
||||
'json' // outputFormat
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.expandedCount).toBe(2); // Tasks 1 and 2 (pending and in-progress)
|
||||
expect(result.failedCount).toBe(0);
|
||||
expect(result.skippedCount).toBe(0);
|
||||
expect(result.tasksToExpand).toBe(2);
|
||||
expect(result.telemetryData).toBeDefined();
|
||||
|
||||
// Verify readJSON was called correctly
|
||||
expect(mockReadJSON).toHaveBeenCalledWith(
|
||||
mockTasksPath,
|
||||
mockProjectRoot,
|
||||
'master'
|
||||
);
|
||||
|
||||
// Verify expandTask was called for eligible tasks
|
||||
expect(mockExpandTask).toHaveBeenCalledTimes(2);
|
||||
expect(mockExpandTask).toHaveBeenCalledWith(
|
||||
mockTasksPath,
|
||||
1,
|
||||
3,
|
||||
false,
|
||||
'test context',
|
||||
expect.objectContaining({
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot,
|
||||
tag: 'master'
|
||||
}),
|
||||
false
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle force flag to expand tasks with existing subtasks', async () => {
|
||||
// Arrange
|
||||
mockExpandTask.mockResolvedValue({
|
||||
telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
2,
|
||||
false,
|
||||
'',
|
||||
true, // force = true
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.expandedCount).toBe(3); // Tasks 1, 2, and 4 (including task with existing subtasks)
|
||||
expect(mockExpandTask).toHaveBeenCalledTimes(3);
|
||||
});
|
||||
|
||||
test('should handle research flag', async () => {
|
||||
// Arrange
|
||||
mockExpandTask.mockResolvedValue({
|
||||
telemetryData: { commandName: 'expand-task', totalCost: 0.08 }
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
undefined, // numSubtasks not specified
|
||||
true, // useResearch = true
|
||||
'research context',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(mockExpandTask).toHaveBeenCalledWith(
|
||||
mockTasksPath,
|
||||
expect.any(Number),
|
||||
undefined,
|
||||
true, // research flag passed correctly
|
||||
'research context',
|
||||
expect.any(Object),
|
||||
false
|
||||
);
|
||||
});
|
||||
|
||||
test('should return success with message when no tasks are eligible', async () => {
|
||||
// Arrange - Mock tasks data with no eligible tasks
|
||||
const noEligibleTasksData = {
|
||||
tag: 'master',
|
||||
tasks: [
|
||||
{ id: 1, status: 'done', subtasks: [] },
|
||||
{
|
||||
id: 2,
|
||||
status: 'pending',
|
||||
subtasks: [{ id: '2.1', title: 'existing' }]
|
||||
}
|
||||
]
|
||||
};
|
||||
mockReadJSON.mockReturnValue(noEligibleTasksData);
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false, // force = false, so task with subtasks won't be expanded
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.expandedCount).toBe(0);
|
||||
expect(result.failedCount).toBe(0);
|
||||
expect(result.skippedCount).toBe(0);
|
||||
expect(result.tasksToExpand).toBe(0);
|
||||
expect(result.message).toBe('No tasks eligible for expansion.');
|
||||
expect(mockExpandTask).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('error handling', () => {
|
||||
test('should handle expandTask failures gracefully', async () => {
|
||||
// Arrange
|
||||
mockExpandTask
|
||||
.mockResolvedValueOnce({ telemetryData: { totalCost: 0.05 } }) // First task succeeds
|
||||
.mockRejectedValueOnce(new Error('AI service error')); // Second task fails
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(result.expandedCount).toBe(1);
|
||||
expect(result.failedCount).toBe(1);
|
||||
});
|
||||
|
||||
test('should throw error when tasks.json is invalid', async () => {
|
||||
// Arrange
|
||||
mockReadJSON.mockReturnValue(null);
|
||||
|
||||
// Act & Assert
|
||||
await expect(
|
||||
expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
)
|
||||
).rejects.toThrow('Invalid tasks data');
|
||||
});
|
||||
|
||||
test('should throw error when project root cannot be determined', async () => {
|
||||
// Arrange - Mock findProjectRoot to return null for this test
|
||||
mockFindProjectRoot.mockReturnValueOnce(null);
|
||||
|
||||
// Act & Assert
|
||||
await expect(
|
||||
expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog
|
||||
// No projectRoot provided, and findProjectRoot will return null
|
||||
},
|
||||
'json'
|
||||
)
|
||||
).rejects.toThrow('Could not determine project root directory');
|
||||
});
|
||||
});
|
||||
|
||||
describe('telemetry aggregation', () => {
|
||||
test('should aggregate telemetry data from multiple expand operations', async () => {
|
||||
// Arrange
|
||||
const telemetryData1 = {
|
||||
commandName: 'expand-task',
|
||||
totalCost: 0.03,
|
||||
totalTokens: 600
|
||||
};
|
||||
const telemetryData2 = {
|
||||
commandName: 'expand-task',
|
||||
totalCost: 0.04,
|
||||
totalTokens: 800
|
||||
};
|
||||
|
||||
mockExpandTask
|
||||
.mockResolvedValueOnce({ telemetryData: telemetryData1 })
|
||||
.mockResolvedValueOnce({ telemetryData: telemetryData2 });
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(mockAggregateTelemetry).toHaveBeenCalledWith(
|
||||
[telemetryData1, telemetryData2],
|
||||
'expand-all-tasks'
|
||||
);
|
||||
expect(result.telemetryData).toBeDefined();
|
||||
expect(result.telemetryData.commandName).toBe('expand-all-tasks');
|
||||
});
|
||||
|
||||
test('should handle missing telemetry data gracefully', async () => {
|
||||
// Arrange
|
||||
mockExpandTask.mockResolvedValue({}); // No telemetryData
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
expect(mockAggregateTelemetry).toHaveBeenCalledWith(
|
||||
[],
|
||||
'expand-all-tasks'
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('output format handling', () => {
|
||||
test('should use text output format for CLI calls', async () => {
|
||||
// Arrange
|
||||
mockExpandTask.mockResolvedValue({
|
||||
telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
projectRoot: mockProjectRoot
|
||||
// No mcpLog provided, should use CLI logger
|
||||
},
|
||||
'text' // CLI output format
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(result.success).toBe(true);
|
||||
// In text mode, loading indicators and console output would be used
|
||||
// This is harder to test directly but we can verify the result structure
|
||||
});
|
||||
|
||||
test('should handle context tag properly', async () => {
|
||||
// Arrange
|
||||
const taggedTasksData = {
|
||||
...sampleTasksData,
|
||||
tag: 'feature-branch'
|
||||
};
|
||||
mockReadJSON.mockReturnValue(taggedTasksData);
|
||||
mockExpandTask.mockResolvedValue({
|
||||
telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
|
||||
});
|
||||
|
||||
// Act
|
||||
const result = await expandAllTasks(
|
||||
mockTasksPath,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
false,
|
||||
{
|
||||
session: mockSession,
|
||||
mcpLog: mockMcpLog,
|
||||
projectRoot: mockProjectRoot,
|
||||
tag: 'feature-branch'
|
||||
},
|
||||
'json'
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(mockReadJSON).toHaveBeenCalledWith(
|
||||
mockTasksPath,
|
||||
mockProjectRoot,
|
||||
'feature-branch'
|
||||
);
|
||||
expect(mockExpandTask).toHaveBeenCalledWith(
|
||||
mockTasksPath,
|
||||
expect.any(Number),
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
expect.objectContaining({
|
||||
tag: 'feature-branch'
|
||||
}),
|
||||
false
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
888
tests/unit/scripts/modules/task-manager/expand-task.test.js
Normal file
888
tests/unit/scripts/modules/task-manager/expand-task.test.js
Normal file
@@ -0,0 +1,888 @@
|
||||
/**
|
||||
* Tests for the expand-task.js module
|
||||
*/
|
||||
import { jest } from '@jest/globals';
|
||||
import fs from 'fs';
|
||||
|
||||
// Mock the dependencies before importing the module under test
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
|
||||
readJSON: jest.fn(),
|
||||
writeJSON: jest.fn(),
|
||||
log: jest.fn(),
|
||||
CONFIG: {
|
||||
model: 'mock-claude-model',
|
||||
maxTokens: 4000,
|
||||
temperature: 0.7,
|
||||
debug: false
|
||||
},
|
||||
sanitizePrompt: jest.fn((prompt) => prompt),
|
||||
truncate: jest.fn((text) => text),
|
||||
isSilentMode: jest.fn(() => false),
|
||||
findTaskById: jest.fn(),
|
||||
findProjectRoot: jest.fn((tasksPath) => '/mock/project/root'),
|
||||
getCurrentTag: jest.fn(() => 'master'),
|
||||
ensureTagMetadata: jest.fn((tagObj) => tagObj),
|
||||
flattenTasksWithSubtasks: jest.fn((tasks) => {
|
||||
const allTasks = [];
|
||||
const queue = [...(tasks || [])];
|
||||
while (queue.length > 0) {
|
||||
const task = queue.shift();
|
||||
allTasks.push(task);
|
||||
if (task.subtasks) {
|
||||
for (const subtask of task.subtasks) {
|
||||
queue.push({ ...subtask, id: `${task.id}.${subtask.id}` });
|
||||
}
|
||||
}
|
||||
}
|
||||
return allTasks;
|
||||
}),
|
||||
readComplexityReport: jest.fn(),
|
||||
markMigrationForNotice: jest.fn(),
|
||||
performCompleteTagMigration: jest.fn(),
|
||||
setTasksForTag: jest.fn(),
|
||||
getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || [])
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
|
||||
displayBanner: jest.fn(),
|
||||
getStatusWithColor: jest.fn((status) => status),
|
||||
startLoadingIndicator: jest.fn(),
|
||||
stopLoadingIndicator: jest.fn(),
|
||||
succeedLoadingIndicator: jest.fn(),
|
||||
failLoadingIndicator: jest.fn(),
|
||||
warnLoadingIndicator: jest.fn(),
|
||||
infoLoadingIndicator: jest.fn(),
|
||||
displayAiUsageSummary: jest.fn(),
|
||||
displayContextAnalysis: jest.fn()
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/ai-services-unified.js',
|
||||
() => ({
|
||||
generateTextService: jest.fn().mockResolvedValue({
|
||||
mainResult: JSON.stringify({
|
||||
subtasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
description:
|
||||
'Create the basic project directory structure and configuration files',
|
||||
dependencies: [],
|
||||
details:
|
||||
'Initialize package.json, create src/ and test/ directories, set up linting configuration',
|
||||
status: 'pending',
|
||||
testStrategy:
|
||||
'Verify all expected files and directories are created'
|
||||
},
|
||||
{
|
||||
id: 2,
|
||||
title: 'Implement core functionality',
|
||||
description: 'Develop the main application logic and core features',
|
||||
dependencies: [1],
|
||||
details:
|
||||
'Create main classes, implement business logic, set up data models',
|
||||
status: 'pending',
|
||||
testStrategy: 'Unit tests for all core functions and classes'
|
||||
},
|
||||
{
|
||||
id: 3,
|
||||
title: 'Add user interface',
|
||||
description: 'Create the user interface components and layouts',
|
||||
dependencies: [2],
|
||||
details:
|
||||
'Design UI components, implement responsive layouts, add user interactions',
|
||||
status: 'pending',
|
||||
testStrategy: 'UI tests and visual regression testing'
|
||||
}
|
||||
]
|
||||
}),
|
||||
telemetryData: {
|
||||
timestamp: new Date().toISOString(),
|
||||
userId: '1234567890',
|
||||
commandName: 'expand-task',
|
||||
modelUsed: 'claude-3-5-sonnet',
|
||||
providerName: 'anthropic',
|
||||
inputTokens: 1000,
|
||||
outputTokens: 500,
|
||||
totalTokens: 1500,
|
||||
totalCost: 0.012414,
|
||||
currency: 'USD'
|
||||
}
|
||||
})
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/config-manager.js',
|
||||
() => ({
|
||||
getDefaultSubtasks: jest.fn(() => 3),
|
||||
getDebugFlag: jest.fn(() => false)
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/utils/contextGatherer.js',
|
||||
() => ({
|
||||
ContextGatherer: jest.fn().mockImplementation(() => ({
|
||||
gather: jest.fn().mockResolvedValue({
|
||||
contextSummary: 'Mock context summary',
|
||||
allRelatedTaskIds: [],
|
||||
graphVisualization: 'Mock graph'
|
||||
})
|
||||
}))
|
||||
})
|
||||
);
|
||||
|
||||
jest.unstable_mockModule(
|
||||
'../../../../../scripts/modules/task-manager/generate-task-files.js',
|
||||
() => ({
|
||||
default: jest.fn().mockResolvedValue()
|
||||
})
|
||||
);
|
||||
|
||||
// Mock external UI libraries
|
||||
jest.unstable_mockModule('chalk', () => ({
|
||||
default: {
|
||||
white: { bold: jest.fn((text) => text) },
|
||||
cyan: Object.assign(
|
||||
jest.fn((text) => text),
|
||||
{
|
||||
bold: jest.fn((text) => text)
|
||||
}
|
||||
),
|
||||
green: jest.fn((text) => text),
|
||||
yellow: jest.fn((text) => text),
|
||||
bold: jest.fn((text) => text)
|
||||
}
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('boxen', () => ({
|
||||
default: jest.fn((text) => text)
|
||||
}));
|
||||
|
||||
jest.unstable_mockModule('cli-table3', () => ({
|
||||
default: jest.fn().mockImplementation(() => ({
|
||||
push: jest.fn(),
|
||||
toString: jest.fn(() => 'mocked table')
|
||||
}))
|
||||
}));
|
||||
|
||||
// Mock process.exit to prevent Jest worker crashes
|
||||
const mockExit = jest.spyOn(process, 'exit').mockImplementation((code) => {
|
||||
throw new Error(`process.exit called with "${code}"`);
|
||||
});
|
||||
|
||||
// Import the mocked modules
|
||||
const {
|
||||
readJSON,
|
||||
writeJSON,
|
||||
log,
|
||||
findTaskById,
|
||||
ensureTagMetadata,
|
||||
readComplexityReport,
|
||||
findProjectRoot
|
||||
} = await import('../../../../../scripts/modules/utils.js');
|
||||
|
||||
const { generateTextService } = await import(
|
||||
'../../../../../scripts/modules/ai-services-unified.js'
|
||||
);
|
||||
|
||||
const generateTaskFiles = (
|
||||
await import(
|
||||
'../../../../../scripts/modules/task-manager/generate-task-files.js'
|
||||
)
|
||||
).default;
|
||||
|
||||
// Import the module under test
|
||||
const { default: expandTask } = await import(
|
||||
'../../../../../scripts/modules/task-manager/expand-task.js'
|
||||
);
|
||||
|
||||
describe('expandTask', () => {
|
||||
const sampleTasks = {
|
||||
master: {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Task 1',
|
||||
description: 'First task',
|
||||
status: 'done',
|
||||
dependencies: [],
|
||||
details: 'Already completed task',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 2,
|
||||
title: 'Task 2',
|
||||
description: 'Second task',
|
||||
status: 'pending',
|
||||
dependencies: [],
|
||||
details: 'Task ready for expansion',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 3,
|
||||
title: 'Complex Task',
|
||||
description: 'A complex task that needs breakdown',
|
||||
status: 'pending',
|
||||
dependencies: [1],
|
||||
details: 'This task involves multiple steps',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 4,
|
||||
title: 'Task with existing subtasks',
|
||||
description: 'Task that already has subtasks',
|
||||
status: 'pending',
|
||||
dependencies: [],
|
||||
details: 'Has existing subtasks',
|
||||
subtasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Existing subtask',
|
||||
description: 'Already exists',
|
||||
status: 'pending',
|
||||
dependencies: []
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
'feature-branch': {
|
||||
tasks: [
|
||||
{
|
||||
id: 1,
|
||||
title: 'Feature Task 1',
|
||||
description: 'Task in feature branch',
|
||||
status: 'pending',
|
||||
dependencies: [],
|
||||
details: 'Feature-specific task',
|
||||
subtasks: []
|
||||
}
|
||||
]
|
||||
}
|
||||
};
|
||||
|
||||
// Create a helper function for consistent mcpLog mock
|
||||
const createMcpLogMock = () => ({
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
error: jest.fn(),
|
||||
debug: jest.fn(),
|
||||
success: jest.fn()
|
||||
});
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
mockExit.mockClear();
|
||||
|
||||
// Default readJSON implementation - returns tagged structure
|
||||
readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
|
||||
const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
|
||||
const selectedTag = tag || 'master';
|
||||
return {
|
||||
...sampleTasksCopy[selectedTag],
|
||||
tag: selectedTag,
|
||||
_rawTaggedData: sampleTasksCopy
|
||||
};
|
||||
});
|
||||
|
||||
// Default findTaskById implementation
|
||||
findTaskById.mockImplementation((tasks, taskId) => {
|
||||
const id = parseInt(taskId, 10);
|
||||
return tasks.find((t) => t.id === id);
|
||||
});
|
||||
|
||||
// Default complexity report (no report available)
|
||||
readComplexityReport.mockReturnValue(null);
|
||||
|
||||
// Mock findProjectRoot to return consistent path for complexity report
|
||||
findProjectRoot.mockReturnValue('/mock/project/root');
|
||||
|
||||
writeJSON.mockResolvedValue();
|
||||
generateTaskFiles.mockResolvedValue();
|
||||
log.mockImplementation(() => {});
|
||||
|
||||
// Mock console.log to avoid output during tests
|
||||
jest.spyOn(console, 'log').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
console.log.mockRestore();
|
||||
});
|
||||
|
||||
describe('Basic Functionality', () => {
|
||||
test('should expand a task with AI-generated subtasks', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const numSubtasks = 3;
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
const result = await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
numSubtasks,
|
||||
false,
|
||||
'',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(readJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
expect(generateTextService).toHaveBeenCalledWith(expect.any(Object));
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
tasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 2,
|
||||
subtasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 2,
|
||||
title: 'Implement core functionality',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 3,
|
||||
title: 'Add user interface',
|
||||
status: 'pending'
|
||||
})
|
||||
])
|
||||
})
|
||||
]),
|
||||
tag: 'master',
|
||||
_rawTaggedData: expect.objectContaining({
|
||||
master: expect.objectContaining({
|
||||
tasks: expect.any(Array)
|
||||
})
|
||||
})
|
||||
}),
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
task: expect.objectContaining({
|
||||
id: 2,
|
||||
subtasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 2,
|
||||
title: 'Implement core functionality',
|
||||
status: 'pending'
|
||||
}),
|
||||
expect.objectContaining({
|
||||
id: 3,
|
||||
title: 'Add user interface',
|
||||
status: 'pending'
|
||||
})
|
||||
])
|
||||
}),
|
||||
telemetryData: expect.any(Object)
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle research flag correctly', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const numSubtasks = 3;
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
numSubtasks,
|
||||
true, // useResearch = true
|
||||
'Additional context for research',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert
|
||||
expect(generateTextService).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
role: 'research',
|
||||
commandName: expect.any(String)
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle complexity report integration without errors', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act & Assert - Should complete without errors
|
||||
const result = await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
undefined, // numSubtasks not specified
|
||||
false,
|
||||
'',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert - Should successfully expand and return expected structure
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
task: expect.objectContaining({
|
||||
id: 2,
|
||||
subtasks: expect.any(Array)
|
||||
}),
|
||||
telemetryData: expect.any(Object)
|
||||
})
|
||||
);
|
||||
expect(generateTextService).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Tag Handling (The Critical Bug Fix)', () => {
|
||||
test('should preserve tagged structure when expanding with default tag', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root',
|
||||
tag: 'master' // Explicit tag context
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - CRITICAL: Check tag is passed to readJSON and writeJSON
|
||||
expect(readJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
'/mock/project/root',
|
||||
'master'
|
||||
);
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
tag: 'master',
|
||||
_rawTaggedData: expect.objectContaining({
|
||||
master: expect.any(Object),
|
||||
'feature-branch': expect.any(Object)
|
||||
})
|
||||
}),
|
||||
'/mock/project/root',
|
||||
'master' // CRITICAL: Tag must be passed to writeJSON
|
||||
);
|
||||
});
|
||||
|
||||
test('should preserve tagged structure when expanding with non-default tag', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '1'; // Task in feature-branch
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root',
|
||||
tag: 'feature-branch' // Different tag context
|
||||
};
|
||||
|
||||
// Configure readJSON to return feature-branch data
|
||||
readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
|
||||
const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
|
||||
return {
|
||||
...sampleTasksCopy['feature-branch'],
|
||||
tag: 'feature-branch',
|
||||
_rawTaggedData: sampleTasksCopy
|
||||
};
|
||||
});
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - CRITICAL: Check tag preservation for non-default tag
|
||||
expect(readJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
'/mock/project/root',
|
||||
'feature-branch'
|
||||
);
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
tag: 'feature-branch',
|
||||
_rawTaggedData: expect.objectContaining({
|
||||
master: expect.any(Object),
|
||||
'feature-branch': expect.any(Object)
|
||||
})
|
||||
}),
|
||||
'/mock/project/root',
|
||||
'feature-branch' // CRITICAL: Correct tag passed to writeJSON
|
||||
);
|
||||
});
|
||||
|
||||
test('should NOT corrupt tagged structure when tag is undefined', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
// No tag specified - should default gracefully
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - Should still preserve structure with undefined tag
|
||||
expect(readJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
_rawTaggedData: expect.objectContaining({
|
||||
master: expect.any(Object)
|
||||
})
|
||||
}),
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
|
||||
// CRITICAL: Verify structure is NOT flattened to old format
|
||||
const writeCallArgs = writeJSON.mock.calls[0][1];
|
||||
expect(writeCallArgs).toHaveProperty('tasks'); // Should have tasks property from readJSON mock
|
||||
expect(writeCallArgs).toHaveProperty('_rawTaggedData'); // Should preserve tagged structure
|
||||
});
|
||||
});
|
||||
|
||||
describe('Force Flag Handling', () => {
|
||||
test('should replace existing subtasks when force=true', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '4'; // Task with existing subtasks
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, true);
|
||||
|
||||
// Assert - Should replace existing subtasks
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
tasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 4,
|
||||
subtasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 1,
|
||||
title: 'Set up project structure'
|
||||
})
|
||||
])
|
||||
})
|
||||
])
|
||||
}),
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
});
|
||||
|
||||
test('should append to existing subtasks when force=false', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '4'; // Task with existing subtasks
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - Should append to existing subtasks with proper ID increments
|
||||
expect(writeJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
expect.objectContaining({
|
||||
tasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 4,
|
||||
subtasks: expect.arrayContaining([
|
||||
// Should contain both existing and new subtasks
|
||||
expect.any(Object),
|
||||
expect.any(Object),
|
||||
expect.any(Object),
|
||||
expect.any(Object) // 1 existing + 3 new = 4 total
|
||||
])
|
||||
})
|
||||
])
|
||||
}),
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('Error Handling', () => {
|
||||
test('should handle non-existent task ID', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '999'; // Non-existent task
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
findTaskById.mockReturnValue(null);
|
||||
|
||||
// Act & Assert
|
||||
await expect(
|
||||
expandTask(tasksPath, taskId, 3, false, '', context, false)
|
||||
).rejects.toThrow('Task 999 not found');
|
||||
|
||||
expect(writeJSON).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should expand tasks regardless of status (including done tasks)', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '1'; // Task with 'done' status
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
const result = await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
3,
|
||||
false,
|
||||
'',
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert - Should successfully expand even 'done' tasks
|
||||
expect(writeJSON).toHaveBeenCalled();
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
task: expect.objectContaining({
|
||||
id: 1,
|
||||
status: 'done', // Status unchanged
|
||||
subtasks: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
id: 1,
|
||||
title: 'Set up project structure',
|
||||
status: 'pending'
|
||||
})
|
||||
])
|
||||
}),
|
||||
telemetryData: expect.any(Object)
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle AI service failures', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
generateTextService.mockRejectedValueOnce(new Error('AI service error'));
|
||||
|
||||
// Act & Assert
|
||||
await expect(
|
||||
expandTask(tasksPath, taskId, 3, false, '', context, false)
|
||||
).rejects.toThrow('AI service error');
|
||||
|
||||
expect(writeJSON).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should handle file read errors', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
readJSON.mockImplementation(() => {
|
||||
throw new Error('File read failed');
|
||||
});
|
||||
|
||||
// Act & Assert
|
||||
await expect(
|
||||
expandTask(tasksPath, taskId, 3, false, '', context, false)
|
||||
).rejects.toThrow('File read failed');
|
||||
|
||||
expect(writeJSON).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should handle invalid tasks data', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
readJSON.mockReturnValue(null);
|
||||
|
||||
// Act & Assert
|
||||
await expect(
|
||||
expandTask(tasksPath, taskId, 3, false, '', context, false)
|
||||
).rejects.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Output Format Handling', () => {
|
||||
test('should display telemetry for CLI output format', async () => {
|
||||
// Arrange
|
||||
const { displayAiUsageSummary } = await import(
|
||||
'../../../../../scripts/modules/ui.js'
|
||||
);
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
projectRoot: '/mock/project/root'
|
||||
// No mcpLog - should trigger CLI mode
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - Should display telemetry for CLI users
|
||||
expect(displayAiUsageSummary).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
commandName: 'expand-task',
|
||||
modelUsed: 'claude-3-5-sonnet',
|
||||
totalCost: 0.012414
|
||||
}),
|
||||
'cli'
|
||||
);
|
||||
});
|
||||
|
||||
test('should not display telemetry for MCP output format', async () => {
|
||||
// Arrange
|
||||
const { displayAiUsageSummary } = await import(
|
||||
'../../../../../scripts/modules/ui.js'
|
||||
);
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - Should NOT display telemetry for MCP (handled at higher level)
|
||||
expect(displayAiUsageSummary).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('Edge Cases', () => {
|
||||
test('should handle empty additional context', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - Should work with empty context (but may include project context)
|
||||
expect(generateTextService).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
prompt: expect.stringMatching(/.*/) // Just ensure prompt exists
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle additional context correctly', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const additionalContext = 'Use React hooks and TypeScript';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock(),
|
||||
projectRoot: '/mock/project/root'
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(
|
||||
tasksPath,
|
||||
taskId,
|
||||
3,
|
||||
false,
|
||||
additionalContext,
|
||||
context,
|
||||
false
|
||||
);
|
||||
|
||||
// Assert - Should include additional context in prompt
|
||||
expect(generateTextService).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
prompt: expect.stringContaining('Use React hooks and TypeScript')
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('should handle missing project root in context', async () => {
|
||||
// Arrange
|
||||
const tasksPath = 'tasks/tasks.json';
|
||||
const taskId = '2';
|
||||
const context = {
|
||||
mcpLog: createMcpLogMock()
|
||||
// No projectRoot in context
|
||||
};
|
||||
|
||||
// Act
|
||||
await expandTask(tasksPath, taskId, 3, false, '', context, false);
|
||||
|
||||
// Assert - Should derive project root from tasksPath
|
||||
expect(findProjectRoot).toHaveBeenCalledWith(tasksPath);
|
||||
expect(readJSON).toHaveBeenCalledWith(
|
||||
tasksPath,
|
||||
'/mock/project/root',
|
||||
undefined
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -123,7 +123,9 @@ describe('updateTasks', () => {
|
||||
details: 'New details 2 based on direction',
|
||||
description: 'Updated description',
|
||||
dependencies: [],
|
||||
priority: 'medium'
|
||||
priority: 'medium',
|
||||
testStrategy: 'Unit test the updated functionality',
|
||||
subtasks: []
|
||||
},
|
||||
{
|
||||
id: 3,
|
||||
@@ -132,7 +134,9 @@ describe('updateTasks', () => {
|
||||
details: 'New details 3 based on direction',
|
||||
description: 'Updated description',
|
||||
dependencies: [],
|
||||
priority: 'medium'
|
||||
priority: 'medium',
|
||||
testStrategy: 'Integration test the updated features',
|
||||
subtasks: []
|
||||
}
|
||||
];
|
||||
|
||||
|
||||
Reference in New Issue
Block a user