Compare commits

...

10 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ralph Khreish | 04cefd84b4 | chore: implement requested changes | 2025-09-03 13:54:18 +02:00 |
| Ralph Khreish | 8b164e5436 | chore: address PR comments | 2025-09-02 23:55:50 +02:00 |
| Ralph Khreish | bcf17bf0b8 | chore: remove random env variables that don't exist | 2025-09-02 23:46:44 +02:00 |
| Ralph Khreish | 5a284e9abd | chore: cleanup | 2025-09-02 23:43:55 +02:00 |
| Ralph Khreish | 6a04da5a4e | chore: remove templates | 2025-09-02 23:37:55 +02:00 |
| Ralph Khreish | 88105b7f37 | feat: implement oauth with remote server | 2025-09-02 23:32:40 +02:00 |
| Ralph Khreish | 930652e523 | feat: implement initial auth | 2025-09-02 20:51:50 +02:00 |
| Ralph Khreish | a03a0cb45a | chore: add logger module inside tm-core | 2025-09-02 15:54:55 +02:00 |
| Ralph Khreish | 6409e1a366 | feat: implement auth manager and auth command | 2025-09-02 15:11:27 +02:00 |
| Ralph Khreish | 19ec52181d | feat: create tm-core and apps/cli (#1093)<br>- add typescript<br>- add npm workspaces | 2025-09-01 21:44:43 +02:00 |
176 changed files with 16392 additions and 4651 deletions

View File

@@ -0,0 +1,188 @@
# Task Master Migration Roadmap
## Overview
A gradual migration from the current scripts-based architecture to a clean monorepo with separated concerns.
## Architecture Vision
```
┌─────────────────────────────────────────────────┐
│                 User Interfaces                 │
├──────────┬──────────┬──────────┬────────────────┤
│  @tm/cli │  @tm/mcp │  @tm/ext │     @tm/web    │
│   (CLI)  │   (MCP)  │ (VSCode) │    (Future)    │
└──────────┴──────────┴──────────┴────────────────┘
             ┌──────────────────────┐
             │       @tm/core       │
             │   (Business Logic)   │
             └──────────────────────┘
```
## Migration Phases
### Phase 1: Core Extraction ✅ (In Progress)
**Goal**: Move all business logic to @tm/core
- [x] Create @tm/core package structure
- [x] Move types and interfaces
- [x] Implement TaskMasterCore facade (consumer-side sketch below)
- [x] Move storage adapters
- [x] Move task services
- [ ] Move AI providers
- [ ] Move parser logic
- [ ] Complete test coverage
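
The facade this phase targets would be consumed roughly as follows. This is a minimal sketch assuming the `createTaskMaster` factory and `getTasks` method named in the task list; the option names and signatures below are assumptions, not the final API.

```typescript
// Hypothetical consumer-side view of the @tm/core facade (Phase 1 target).
// createTaskMaster and getTasks come from the task descriptions; the option
// names are illustrative.
import { createTaskMaster } from '@tm/core';

async function showOpenTasks(): Promise<void> {
  const tm = createTaskMaster({
    projectPath: process.cwd(),
    aiProvider: 'anthropic'
  });

  // All storage and AI wiring stays behind the facade.
  const tasks = await tm.getTasks();
  console.log(`Loaded ${tasks.length} tasks`);
}
```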
### Phase 2: CLI Package Creation 🚧 (Started)
**Goal**: Create @tm/cli as a thin presentation layer
- [x] Create @tm/cli package structure
- [x] Implement Command interface pattern (sketched after this list)
- [x] Create CommandRegistry
- [x] Build legacy bridge/adapter
- [x] Migrate list-tasks command
- [ ] Migrate remaining commands one by one
- [ ] Remove UI logic from core
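
A minimal sketch of the Command interface and CommandRegistry mentioned above; the exact shape is an assumption, only the pattern is the point.

```typescript
// Illustrative Command contract and registry for @tm/cli (Phase 2).
// Generic parameters and method names are assumptions, not the final API.
export interface Command<TOptions, TResult> {
  readonly name: string;
  execute(options: TOptions): Promise<TResult>;
}

export class CommandRegistry {
  private readonly commands = new Map<string, Command<unknown, unknown>>();

  register(command: Command<unknown, unknown>): void {
    this.commands.set(command.name, command);
  }

  get(name: string): Command<unknown, unknown> | undefined {
    return this.commands.get(name);
  }
}
```

Each migrated command (for example `ListTasksCommand`) then only handles presentation; the data itself comes from @tm/core.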
### Phase 3: Transitional Integration
**Goal**: Use new packages in existing scripts without breaking changes
```javascript
// scripts/modules/commands.js gradually adopts new commands
import { ListTasksCommand } from '@tm/cli';
const listCommand = new ListTasksCommand();
// Old interface remains the same
programInstance
  .command('list')
  .action(async (options) => {
    // Use new command internally
    const result = await listCommand.execute(convertOptions(options));
  });
```
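
The `convertOptions` helper is not defined in the snippet; it is just a small mapping from Commander's flag names to the new command's option object. A hypothetical sketch, with the field names on both sides invented for illustration:

```typescript
// Hypothetical flag-to-options mapping for the list command.
// LegacyListFlags mirrors Commander's parsed flags; ListTasksOptions is the
// assumed input shape of the new ListTasksCommand.
interface LegacyListFlags {
  status?: string;
  withSubtasks?: boolean;
}

interface ListTasksOptions {
  statusFilter?: string;
  includeSubtasks: boolean;
}

function convertOptions(flags: LegacyListFlags): ListTasksOptions {
  return {
    statusFilter: flags.status,
    includeSubtasks: Boolean(flags.withSubtasks)
  };
}
```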
### Phase 4: MCP Package
**Goal**: Separate MCP server as its own package
- [ ] Create @tm/mcp package
- [ ] Move MCP server code
- [ ] Use @tm/core for all logic
- [ ] MCP becomes a thin RPC layer
### Phase 5: Complete Migration
**Goal**: Remove the old scripts and leave a pure monorepo
- [ ] All commands migrated to @tm/cli
- [ ] Remove scripts/modules/task-manager/*
- [ ] Remove scripts/modules/commands.js
- [ ] Update bin/task-master.js to use @tm/cli
- [ ] Clean up dependencies
## Current Transitional Strategy
### 1. Adapter Pattern (commands-adapter.js)
```javascript
// Checks if new CLI is available and uses it
// Falls back to legacy implementation if not
export async function listTasksAdapter(...args) {
  if (cliAvailable) {
    return useNewImplementation(...args);
  }
  return useLegacyImplementation(...args);
}
```
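
The snippet does not show how `cliAvailable` is determined. One plausible approach, sketched here as an assumption rather than the actual implementation, is a guarded dynamic import so everything falls back to the legacy path when the new package is not installed or built:

```typescript
// Hypothetical availability probe for @tm/cli. If the import fails, the
// adapter keeps using the legacy implementation.
let cliAvailable = false;

export async function detectNewCli(): Promise<boolean> {
  try {
    await import('@tm/cli');
    cliAvailable = true;
  } catch {
    cliAvailable = false; // package missing or not built yet
  }
  return cliAvailable;
}
```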
### 2. Command Bridge Pattern
```javascript
// Allows new commands to work in old code
const bridge = new CommandBridge(new ListTasksCommand());
const data = await bridge.run(legacyOptions); // Legacy style
const result = await bridge.execute(newOptions); // New style
```
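
A minimal version of that bridge is sketched below; it assumes the new commands expose `execute(options)` and makes the option conversion explicit as a constructor argument, which is purely illustrative.

```typescript
// Sketch of a CommandBridge wrapper. run() accepts legacy-style options and
// converts them first; execute() passes new-style options straight through.
interface NewCommand<TOptions, TResult> {
  execute(options: TOptions): Promise<TResult>;
}

class CommandBridge<TOptions, TResult> {
  constructor(
    private readonly command: NewCommand<TOptions, TResult>,
    private readonly fromLegacy: (legacy: Record<string, unknown>) => TOptions
  ) {}

  execute(options: TOptions): Promise<TResult> {
    return this.command.execute(options);
  }

  run(legacyOptions: Record<string, unknown>): Promise<TResult> {
    return this.command.execute(this.fromLegacy(legacyOptions));
  }
}
```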
### 3. Gradual File Migration
Instead of big-bang refactoring:
1. Create new implementation in @tm/cli
2. Add adapter in commands-adapter.js
3. Update commands.js to use adapter
4. Test both paths work
5. Eventually remove adapter when all migrated
## Benefits of This Approach
1. **No Breaking Changes**: Existing CLI continues to work
2. **Incremental PRs**: Each command can be migrated separately
3. **Parallel Development**: New features can use new architecture
4. **Easy Rollback**: The new implementation can be disabled if issues arise
5. **Clear Separation**: Business logic (core) vs. presentation (cli/mcp/etc.)
## Example PR Sequence
### PR 1: Core Package Setup ✅
- Create @tm/core
- Move types and interfaces
- Basic TaskMasterCore implementation
### PR 2: CLI Package Foundation ✅
- Create @tm/cli
- Command interface and registry
- Legacy bridge utilities
### PR 3: First Command Migration
- Migrate list-tasks to new system
- Add adapter in scripts
- Test both implementations
### PR 4-N: Migrate Commands One by One
- Each PR migrates 1-2 related commands
- Small, reviewable changes
- Continuous delivery
### Final PR: Cleanup
- Remove legacy implementations
- Remove adapters
- Update documentation
## Testing Strategy
### Dual Testing During Migration
```javascript
describe('List Tasks', () => {
  it('works with legacy implementation', async () => {
    // Force legacy
    const result = await legacyListTasks(...);
    expect(result).toBeDefined();
  });

  it('works with new implementation', async () => {
    // Force new
    const command = new ListTasksCommand();
    const result = await command.execute(...);
    expect(result.success).toBe(true);
  });

  it('adapter chooses correctly', async () => {
    // Let adapter decide
    const result = await listTasksAdapter(...);
    expect(result).toBeDefined();
  });
});
```
## Success Metrics
- [ ] All commands migrated without breaking changes
- [ ] Test coverage maintained or improved
- [ ] Performance maintained or improved
- [ ] Cleaner, more maintainable codebase
- [ ] Easy to add new interfaces (web, desktop, etc.)
## Notes for Contributors
1. **Keep PRs Small**: Migrate one command at a time
2. **Test Both Paths**: Ensure legacy and new both work
3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if architecture needs adjustment
This is a living document - update as the migration progresses!

View File

@@ -0,0 +1,77 @@
{
"meta": {
"generatedAt": "2025-08-06T12:39:03.250Z",
"tasksAnalyzed": 8,
"totalTasks": 11,
"analysisCount": 8,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 118,
"taskTitle": "Create AI Provider Base Architecture",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
"reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
},
{
"taskId": 119,
"taskTitle": "Implement Provider Factory with Dynamic Imports",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
"reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
},
{
"taskId": 120,
"taskTitle": "Implement Anthropic Provider",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
"reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
},
{
"taskId": 121,
"taskTitle": "Create Prompt Builder and Task Parser",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
"reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
},
{
"taskId": 122,
"taskTitle": "Implement Configuration Management",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
"reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
},
{
"taskId": 123,
"taskTitle": "Create Utility Functions and Error Handling",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
"reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
},
{
"taskId": 124,
"taskTitle": "Implement TaskMasterCore Facade",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
"reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
},
{
"taskId": 125,
"taskTitle": "Create Placeholder Providers and Complete Testing",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
}
]
}

View File

@@ -0,0 +1,77 @@
{
"meta": {
"generatedAt": "2025-08-06T12:15:01.327Z",
"tasksAnalyzed": 8,
"totalTasks": 11,
"analysisCount": 8,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 118,
"taskTitle": "Create AI Provider Base Architecture",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the conversion of base-provider.js to TypeScript BaseProvider class: 1) Convert to TypeScript and define IAIProvider interface, 2) Implement abstract class with core properties, 3) Define abstract methods and Template Method pattern, 4) Add retry logic with exponential backoff, 5) Implement validation and logging. Focus on maintaining compatibility with existing provider pattern while adding type safety.",
"reasoning": "The codebase already has a well-established BaseAIProvider class in JavaScript. Converting to TypeScript mainly involves adding type definitions and ensuring the existing pattern is preserved. The complexity is moderate because the pattern is already proven in the codebase."
},
{
"taskId": 119,
"taskTitle": "Implement Provider Factory with Dynamic Imports",
"complexityScore": 3,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ProviderFactory implementation: 1) Set up class structure and types, 2) Implement provider selection switch statement, 3) Add dynamic imports for tree-shaking, 4) Handle provider instantiation with config, 5) Add comprehensive error handling. The existing PROVIDERS registry pattern should guide the implementation.",
"reasoning": "The codebase already uses a dual registry pattern (static PROVIDERS and dynamic ProviderRegistry). Creating a factory is straightforward as the provider registration patterns are well-established. Dynamic imports are already used in the codebase."
},
{
"taskId": 120,
"taskTitle": "Implement Anthropic Provider",
"complexityScore": 3,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement AnthropicProvider following existing patterns: 1) Create class structure with imports, 2) Implement constructor and client initialization, 3) Add generateCompletion with Claude API integration, 4) Implement token calculation and utility methods, 5) Add error handling and exports. Use the existing anthropic.js provider as reference.",
"reasoning": "AnthropicProvider already exists in the codebase with full implementation. This task essentially involves adapting the existing implementation to match the new TypeScript architecture, making it relatively straightforward."
},
{
"taskId": 121,
"taskTitle": "Create Prompt Builder and Task Parser",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Build prompt system and parser: 1) Create PromptBuilder with template methods, 2) Implement TaskParser with dependency injection, 3) Add parsePRD core logic with file reading, 4) Implement task enrichment and metadata, 5) Add comprehensive error handling. Leverage the existing prompt management system in src/prompts/.",
"reasoning": "While the codebase has a sophisticated prompt management system, creating a new PromptBuilder and TaskParser requires understanding the existing prompt templates, JSON schema validation, and integration with the AI provider system. The task involves significant new code."
},
{
"taskId": 122,
"taskTitle": "Implement Configuration Management",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ConfigManager with validation: 1) Define Zod schema for IConfiguration, 2) Implement constructor with defaults, 3) Add validate method with error handling, 4) Create type-safe get method with generics, 5) Implement getAll and finalize exports. Reference existing config-manager.js for patterns.",
"reasoning": "The codebase has an existing config-manager.js with sophisticated configuration handling. Adding Zod validation and TypeScript generics adds complexity, but the existing patterns provide a solid foundation."
},
{
"taskId": 123,
"taskTitle": "Create Utility Functions and Error Handling",
"complexityScore": 2,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement utilities and error handling: 1) Create ID generation module with unique formats, 2) Build TaskMasterError base class, 3) Add error sanitization for security, 4) Implement development-only logging, 5) Create specialized error subclasses. Keep implementation simple and focused.",
"reasoning": "This is a straightforward utility implementation task. The codebase already has error handling patterns, and ID generation is a simple algorithmic task. The main work is creating clean, reusable utilities."
},
{
"taskId": 124,
"taskTitle": "Implement TaskMasterCore Facade",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Create main facade class: 1) Set up TaskMasterCore structure with imports, 2) Implement lazy initialization logic, 3) Add parsePRD coordination method, 4) Implement getTasks and other facade methods, 5) Create factory function and exports. This ties together all other components into a cohesive API.",
"reasoning": "This is the most complex task as it requires understanding and integrating all other components. The facade must coordinate between configuration, providers, storage, and parsing while maintaining a clean API. It's the architectural keystone of the system."
},
{
"taskId": 125,
"taskTitle": "Create Placeholder Providers and Complete Testing",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement testing infrastructure: 1) Create OpenAIProvider placeholder, 2) Create GoogleProvider placeholder, 3) Build MockProvider for testing, 4) Write TaskParser unit tests, 5) Create integration tests for parse-prd flow. Follow the existing test patterns in tests/ directory.",
"reasoning": "While creating placeholder providers is simple, the testing infrastructure requires understanding Jest with ES modules, mocking patterns, and comprehensive test coverage. The existing test structure provides good examples to follow."
}
]
}

View File

@@ -1,6 +1,6 @@
{
"currentTag": "master",
"lastSwitched": "2025-08-01T14:09:25.838Z",
"lastSwitched": "2025-08-27T21:03:20.550Z",
"branchTagMapping": {
"v017-adds": "v017-adds",
"next": "next"

View File

@@ -7297,5 +7297,763 @@
"updated": "2025-07-22T09:38:19.341Z",
"description": "Tasks for cc-kiro-hooks context"
}
},
"tm-core-phase-1": {
"tasks": [
{
"id": 115,
"title": "Initialize tm-core Package Structure",
"description": "Create the initial package structure for tm-core with all required directories and configuration files",
"details": "Create the packages/tm-core directory structure with all subdirectories as specified: src/, tests/, and all nested folders. Set up package.json with proper ESM/CJS configuration, tsconfig.json with strict TypeScript settings, tsup.config.js for dual format builds, and jest.config.js for testing. Ensure all barrel export files (index.ts) are created in each directory for clean imports.",
"testStrategy": "Verify directory structure matches specification exactly, ensure all configuration files are valid JSON/JS, run 'npm install' to verify package.json is correct, run 'tsc --noEmit' to verify TypeScript configuration",
"priority": "high",
"dependencies": [],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Create tm-core directory structure and base configuration files",
"description": "Set up the packages/tm-core directory with all required subdirectories and initialize core configuration files",
"dependencies": [],
"details": "Create packages/tm-core directory with subdirectories: src/, src/types/, src/interfaces/, src/providers/, src/parsers/, src/builders/, src/utils/, src/errors/, tests/, tests/unit/, tests/integration/, tests/mocks/. Create package.json with name '@task-master/tm-core', version '1.0.0', type 'module', main/module/types fields for dual ESM/CJS support, and necessary dependencies (typescript, tsup, jest, @types/node). Set up tsconfig.json with strict mode, ES2022 target, module resolution, and proper include/exclude patterns.\n<info added on 2025-08-06T10:49:59.891Z>\nImplementation completed as specified. Directory structure verified with all paths created correctly. Package.json configured with dual ESM/CJS support using tsup build tool, exports field properly set for both formats. TypeScript configuration established with strict mode enabled, ES2022 target for modern JavaScript features, and path mappings configured for clean imports like '@/types' and '@/utils'. All configuration files are valid and ready for development.\n</info added on 2025-08-06T10:49:59.891Z>",
"status": "done",
"testStrategy": "Verify directory structure with fs.existsSync() checks, validate package.json structure with JSON.parse(), ensure tsconfig.json compiles without errors"
},
{
"id": 2,
"title": "Configure build and test infrastructure",
"description": "Set up tsup build configuration for dual format support and Jest testing configuration",
"dependencies": [
"115.1"
],
"details": "Create tsup.config.js with dual format configuration (ESM and CJS), entry points from src/index.ts, declaration files generation, and sourcemaps. Configure jest.config.js with TypeScript preset, ESM support, proper module name mapping, coverage thresholds (80%), and test environment setup. Create .gitignore for node_modules, dist, and coverage directories. Add npm scripts in package.json for build, test, test:watch, and test:coverage commands.\n<info added on 2025-08-06T10:50:49.396Z>\nBuild process successfully configured with tsup.config.ts (TypeScript configuration file instead of JavaScript) supporting dual format output and multiple entry points including submodules. Jest configuration established with comprehensive ESM support and path alias mapping. Created tests/setup.ts for centralized test environment configuration. Added ES2022 compilation target for modern JavaScript features. Enhanced .gitignore to exclude additional development-specific files beyond the basic directories.\n</info added on 2025-08-06T10:50:49.396Z>",
"status": "done",
"testStrategy": "Run 'npm run build' to verify tsup configuration works, execute 'npm test' with a simple test file to confirm Jest setup, check that both .mjs and .cjs files are generated in dist/"
},
{
"id": 3,
"title": "Create barrel export files for all directories",
"description": "Implement index.ts files in each directory to enable clean imports throughout the package",
"dependencies": [
"115.1"
],
"details": "Create index.ts in src/ that exports from all subdirectories. Create index.ts in each subdirectory (types/, interfaces/, providers/, parsers/, builders/, utils/, errors/) with appropriate exports. For now, add placeholder comments indicating what will be exported from each module. Ensure proper export syntax for TypeScript types and interfaces using 'export type' where appropriate. Structure exports to allow consumers to import like '@task-master/tm-core/types' or from the main entry point.\n<info added on 2025-08-06T10:51:56.837Z>\nImplementation complete. All barrel export files have been created successfully with:\n\n- Main src/index.ts exporting from all subdirectories with proper TypeScript syntax\n- Individual index.ts files in types/, providers/, storage/, parser/, utils/, and errors/ directories\n- Proper ES module syntax with .js extensions for TypeScript compatibility\n- Placeholder exports with @deprecated JSDoc tags to indicate future implementation\n- Clean module structure supporting both root imports and submodule imports like '@task-master/tm-core/types'\n- All files include appropriate documentation comments explaining their purpose\n</info added on 2025-08-06T10:51:56.837Z>",
"status": "done",
"testStrategy": "Compile with TypeScript to ensure all index.ts files are valid, verify no circular dependencies exist, check that imports from package root work correctly"
},
{
"id": 4,
"title": "Add development tooling and documentation",
"description": "Set up development tools, linting, and initial documentation structure",
"dependencies": [
"115.1",
"115.2"
],
"details": "Create .eslintrc.js with TypeScript plugin and recommended rules for consistent code style. Add prettier configuration for code formatting. Create README.md with package overview, installation instructions, and usage examples (marked as 'coming soon'). Add CHANGELOG.md to track version changes. Create npm scripts for linting and formatting. Add pre-commit hooks configuration if needed. Document the dual ESM/CJS support in README.\n<info added on 2025-08-06T10:53:45.056Z>\nI'll analyze the user's request and the context to determine what new information should be added to the subtask's details.Successfully completed development tooling and documentation setup. Created .eslintrc.js with TypeScript plugin and comprehensive rules including no-explicit-any, consistent-type-imports, and proper TypeScript checks. Added .prettierrc.json with sensible defaults for consistent code formatting. Created comprehensive README.md with package overview, installation instructions, usage examples for both ESM and CommonJS, modular imports, architecture description, development setup, and detailed roadmap for tasks 116-125. Added CHANGELOG.md following Keep a Changelog format with current package status and planned features. All development tooling is configured and ready for use.\n</info added on 2025-08-06T10:53:45.056Z>",
"status": "done",
"testStrategy": "Run eslint on sample TypeScript files, verify prettier formats code consistently, ensure all npm scripts execute without errors"
},
{
"id": 5,
"title": "Validate package structure and prepare for development",
"description": "Perform final validation of the package structure and ensure it's ready for implementation",
"dependencies": [
"115.1",
"115.2",
"115.3",
"115.4"
],
"details": "Run 'npm install' to ensure all dependencies are properly resolved. Execute 'tsc --noEmit' to verify TypeScript configuration is correct. Create a simple smoke test in tests/ that imports from the package to verify module resolution works. Ensure the package can be linked locally for testing in other projects. Verify that both CommonJS and ESM imports work correctly. Create a checklist in README for remaining implementation tasks based on tasks 116-125.\n<info added on 2025-08-06T11:02:21.457Z>\nSuccessfully validated package structure with comprehensive testing. All validations passed: npm install resolved dependencies without issues, TypeScript compilation (tsc --noEmit) showed no errors, and dual-format build (npm run build) successfully generated both ESM and CJS outputs with proper TypeScript declarations. Created and executed comprehensive smoke test suite covering all module imports, placeholder functionality, and type definitions - all 8 tests passing. Code quality tools (ESLint, Prettier) are properly configured and show no issues. Package is confirmed ready for local linking and supports both CommonJS and ESM import patterns. README updated with implementation checklist marking Task 115 as complete and clearly outlining remaining implementation tasks 116-125. Package structure validation is complete and development environment is fully prepared for core implementation phase.\n</info added on 2025-08-06T11:02:21.457Z>",
"status": "done",
"testStrategy": "Successfully run all build and test commands, verify package can be imported in both ESM and CJS test files, ensure TypeScript compilation produces no errors, confirm all directories contain appropriate index.ts files"
}
]
},
{
"id": 116,
"title": "Define Core TypeScript Types and Interfaces",
"description": "Create all TypeScript type definitions and interfaces for the tm-core package",
"details": "Create types/index.ts with Task, Subtask, TaskMetadata interfaces and type literals (TaskStatus, TaskPriority, TaskComplexity). Create all interface files: storage.interface.ts with IStorage methods, ai-provider.interface.ts with IAIProvider and AIOptions, configuration.interface.ts with IConfiguration. Use strict typing throughout, no 'any' types allowed. Follow naming conventions: interfaces prefixed with 'I', type literals in PascalCase.",
"testStrategy": "Compile with TypeScript to ensure no type errors, create mock implementations to verify interfaces are complete, use type checking in IDE to confirm all required properties are defined",
"priority": "high",
"dependencies": [
115
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Create Core Task and Subtask Type Definitions",
"description": "Create types/index.ts with fundamental Task and Subtask interfaces including all required properties",
"dependencies": [],
"details": "Create types/index.ts file. Define Task interface with properties: id (string), title (string), description (string), status (TaskStatus), priority (TaskPriority), dependencies (string[]), details (string), testStrategy (string), subtasks (Subtask[]). Define Subtask interface extending Task but with numeric id. Define TaskMetadata interface with version (string), lastModified (string), taskCount (number), completedCount (number). Export all interfaces.\n<info added on 2025-08-06T11:03:44.220Z>\nImplementation completed with comprehensive type system. Created all required interfaces with strict typing, added optional properties for enhanced functionality (createdAt, updatedAt, effort, actualEffort, tags). Implemented utility types for create/update operations with proper type constraints. Added filter interfaces for advanced querying. Included runtime type guards for safe type narrowing. Successfully compiled without TypeScript errors, ready for integration with storage and AI provider implementations.\n</info added on 2025-08-06T11:03:44.220Z>",
"status": "done",
"testStrategy": "Compile TypeScript files to ensure no type errors, create sample objects conforming to interfaces to verify completeness"
},
{
"id": 2,
"title": "Define Type Literals and Enums",
"description": "Create all type literal definitions for TaskStatus, TaskPriority, and TaskComplexity in the types file",
"dependencies": [
"116.1"
],
"details": "In types/index.ts, define type literals: TaskStatus = 'pending' | 'in-progress' | 'done' | 'deferred' | 'cancelled' | 'blocked'; TaskPriority = 'low' | 'medium' | 'high' | 'critical'; TaskComplexity = 'simple' | 'moderate' | 'complex' | 'very-complex'. Consider using const assertions for better type inference. Export all type literals.\n<info added on 2025-08-06T11:04:04.675Z>\nType literals were already implemented in subtask 116.1 as part of the comprehensive type system. The types/index.ts file includes all required type literals: TaskStatus with values 'pending' | 'in-progress' | 'done' | 'deferred' | 'cancelled' | 'blocked' | 'review', TaskPriority with values 'low' | 'medium' | 'high' | 'critical', and TaskComplexity with values 'simple' | 'moderate' | 'complex' | 'very-complex'. All type literals are properly exported and include comprehensive JSDoc documentation. TypeScript compilation verified the types work correctly.\n</info added on 2025-08-06T11:04:04.675Z>",
"status": "done",
"testStrategy": "Use TypeScript compiler to verify type literals work correctly, test with invalid values to ensure type checking catches errors"
},
{
"id": 3,
"title": "Create Storage Interface Definition",
"description": "Create storage.interface.ts with IStorage interface defining all storage operation methods",
"dependencies": [
"116.1"
],
"details": "Create interfaces/storage.interface.ts file. Define IStorage interface with methods: loadTasks(tag?: string): Promise<Task[]>; saveTasks(tasks: Task[], tag?: string): Promise<void>; appendTasks(tasks: Task[], tag?: string): Promise<void>; updateTask(taskId: string, updates: Partial<Task>, tag?: string): Promise<void>; deleteTask(taskId: string, tag?: string): Promise<void>; exists(tag?: string): Promise<boolean>. Import Task type from types/index.ts.\n<info added on 2025-08-06T11:05:00.573Z>\nImplementation completed successfully. Extended IStorage interface beyond original specification to include metadata operations (loadMetadata, saveMetadata), tag management (getAllTags, deleteTag, renameTag, copyTag), and lifecycle methods (initialize, close, getStats). Added StorageStats interface for monitoring storage metrics and StorageConfig interface for configuration options. Implemented BaseStorage abstract class that provides common functionality including task validation using validateTask method, tag sanitization with sanitizeTag to ensure valid filenames, and backup path generation through getBackupPath for data safety. The abstract class serves as a foundation for concrete storage implementations, reducing code duplication and ensuring consistent behavior across different storage backends. All methods properly typed with async/await patterns and comprehensive error handling considerations.\n</info added on 2025-08-06T11:05:00.573Z>",
"status": "done",
"testStrategy": "Create mock implementation of IStorage to verify all methods are properly typed, ensure Promise return types are correct"
},
{
"id": 4,
"title": "Create AI Provider Interface Definition",
"description": "Create ai-provider.interface.ts with IAIProvider interface and AIOptions type",
"dependencies": [
"116.1"
],
"details": "Create interfaces/ai-provider.interface.ts file. Define AIOptions interface with properties: temperature (number), maxTokens (number), stream (boolean), topP (number), frequencyPenalty (number). Define IAIProvider interface with methods: generateCompletion(prompt: string, options?: AIOptions): Promise<string>; calculateTokens(text: string): number; getName(): string; getModel(): string; getDefaultModel(): string; isAvailable(): Promise<boolean>.\n<info added on 2025-08-06T11:06:15.795Z>\nFile successfully updated with expanded interface implementation details including comprehensive method signatures and supporting interfaces: AIOptions with full parameter set (temperature, maxTokens, stream, model, topP, topK, frequencyPenalty, presencePenalty, stopSequences, systemPrompt), AIResponse structure with content and usage metadata, AIModel interface for model information, ProviderInfo for capability tracking, ProviderUsageStats for usage monitoring, AIProviderConfig for initialization, and additional interfaces for streaming support. Documented BaseAIProvider abstract class implementation with validation, usage tracking, and common utility methods. All interfaces properly typed with strict TypeScript patterns and async/await support. No compilation errors.\n</info added on 2025-08-06T11:06:15.795Z>",
"status": "done",
"testStrategy": "Create stub implementation to verify interface completeness, test optional parameters work correctly"
},
{
"id": 5,
"title": "Create Configuration Interface Definition",
"description": "Create configuration.interface.ts with IConfiguration interface for all config options",
"dependencies": [
"116.1",
"116.2"
],
"details": "Create interfaces/configuration.interface.ts file. Define IConfiguration interface with properties: projectPath (string), aiProvider (string), apiKeys (Record<string, string>), models (object with main, research, fallback as strings), enableTags (boolean), defaultTag (string), maxConcurrentTasks (number), retryAttempts (number), retryDelay (number). Import necessary types from types/index.ts. Ensure all properties have appropriate types with no 'any' usage.\n<info added on 2025-08-06T11:07:43.367Z>\nImplementation completed successfully. Created comprehensive configuration system with:\n\n- Core IConfiguration interface with all required properties: projectPath, aiProvider, apiKeys, models configuration, providers settings, tasks management, tags configuration, storage options, retry behavior, logging preferences, and security settings\n- Supporting interfaces for each configuration section: ModelConfig for AI model selection, ProviderConfig for API provider settings, TaskSettings for task management options, TagSettings for tag-based organization, StorageSettings for persistence configuration, RetrySettings for error handling, LoggingSettings for debugging options, SecuritySettings for API key management\n- Configuration management system with IConfigurationFactory for creating configs from various sources (file, environment, defaults) and IConfigurationManager for runtime config operations including loading, saving, validation, watching for changes, and merging configurations\n- Validation system with ConfigValidationResult interface for detailed error reporting, ConfigSchema for JSON schema validation, and EnvironmentConfig for environment variable mapping\n- DEFAULT_CONFIG_VALUES constant providing sensible defaults for all configuration options\n- All interfaces properly typed with strict TypeScript typing, no 'any' usage, proper imports from types/index\n- Successfully exported all interfaces through main index.ts for package consumers\n- TypeScript compilation confirmed passing without any type errors\n</info added on 2025-08-06T11:07:43.367Z>",
"status": "done",
"testStrategy": "Create sample configuration objects to verify interface covers all needed options, test with partial configs to ensure optional properties work"
}
]
},
{
"id": 117,
"title": "Implement Storage Layer with Repository Pattern",
"description": "Create FileStorage class implementing IStorage interface for task persistence",
"details": "Implement FileStorage class in storage/file-storage.ts following Repository pattern. Constructor accepts projectPath, private basePath property set to {projectPath}/.taskmaster. Implement all IStorage methods: loadTasks, saveTasks, appendTasks, updateTask, deleteTask, exists. Handle file operations with proper error handling (ENOENT returns empty arrays). Use JSON format with tasks array and metadata object containing version and lastModified. Create getTasksPath method to handle tag-based file paths.",
"testStrategy": "Unit test all FileStorage methods with mock file system, test error scenarios (missing files, invalid JSON), verify tag-based path generation, test concurrent operations, ensure proper directory creation",
"priority": "high",
"dependencies": [
116
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Create FileStorage class structure and constructor",
"description": "Set up the FileStorage class skeleton with proper TypeScript typing and implement the constructor that accepts projectPath parameter",
"dependencies": [],
"details": "Create storage/file-storage.ts file. Import necessary Node.js modules (fs/promises, path). Import IStorage interface and Task type from types. Define FileStorage class implementing IStorage. Create constructor accepting projectPath string parameter. Initialize private basePath property as `${projectPath}/.taskmaster`. Add private property for managing file locks if needed for concurrent operations.",
"status": "done",
"testStrategy": "Unit test constructor initialization, verify basePath is correctly set, test with various projectPath inputs including edge cases"
},
{
"id": 2,
"title": "Implement file path management and helper methods",
"description": "Create internal helper methods for managing file paths and ensuring directory structure exists",
"dependencies": [
"117.1"
],
"details": "Implement private getTasksPath(tag?: string) method that returns path to tasks file based on optional tag parameter. If tag provided, return `${basePath}/tasks/${tag}.json`, otherwise `${basePath}/tasks/tasks.json`. Create private ensureDirectoryExists() method that creates .taskmaster and tasks directories if they don't exist using fs.mkdir with recursive option. Add private method for safe JSON parsing with error handling.",
"status": "done",
"testStrategy": "Test getTasksPath with and without tags, verify directory creation works recursively, test JSON parsing with valid and invalid data"
},
{
"id": 3,
"title": "Implement read operations: loadTasks and exists",
"description": "Implement methods for reading tasks from the file system with proper error handling",
"dependencies": [
"117.2"
],
"details": "Implement loadTasks(tag?: string) method: use getTasksPath to get file path, read file using fs.readFile, parse JSON content, return tasks array from parsed data. Handle ENOENT error by returning empty array. Handle JSON parse errors appropriately. Implement exists(tag?: string) method: use fs.access to check if file exists at getTasksPath location, return boolean result.",
"status": "done",
"testStrategy": "Test loadTasks with existing files, missing files (ENOENT), corrupted JSON files. Test exists method with present and absent files"
},
{
"id": 4,
"title": "Implement write operations: saveTasks and appendTasks",
"description": "Implement methods for persisting tasks to the file system with metadata",
"dependencies": [
"117.3"
],
"details": "Implement saveTasks(tasks: Task[], tag?: string) method: ensure directory exists, create data object with tasks array and metadata object containing version (e.g., '1.0.0') and lastModified (ISO timestamp). Write to file using fs.writeFile with JSON.stringify and proper formatting. Implement appendTasks(tasks: Task[], tag?: string) method: load existing tasks, merge with new tasks (avoiding duplicates by ID), call saveTasks with merged array.",
"status": "done",
"testStrategy": "Test saveTasks creates files with correct structure, verify metadata is included, test appendTasks merges correctly without duplicates"
},
{
"id": 5,
"title": "Implement update and delete operations",
"description": "Implement methods for modifying and removing individual tasks with atomic operations",
"dependencies": [
"117.4"
],
"details": "Implement updateTask(taskId: string, updates: Partial<Task>, tag?: string) method: load tasks, find task by ID, merge updates using object spread, save updated tasks array. Return boolean indicating success. Implement deleteTask(taskId: string, tag?: string) method: load tasks, filter out task with matching ID, save filtered array. Return boolean indicating if task was found and deleted. Ensure both operations are atomic using temporary files if needed.",
"status": "done",
"testStrategy": "Test updateTask with existing and non-existing tasks, verify partial updates work correctly. Test deleteTask removes correct task, handles missing tasks gracefully"
}
]
},
{
"id": 118,
"title": "Create AI Provider Base Architecture",
"description": "Implement abstract BaseProvider class and provider interfaces using Template Method pattern",
"details": "Convert existing base-provider.js to TypeScript abstract class BaseProvider implementing IAIProvider. Add protected properties for apiKey and model. Create abstract methods: generateCompletion, calculateTokens, getName, getModel, getDefaultModel. Apply Template Method pattern for common provider logic like error handling and retry logic. Ensure proper type safety throughout.",
"testStrategy": "Create MockProvider extending BaseProvider to test abstract class functionality, verify all abstract methods are properly defined, test error handling and common logic",
"priority": "high",
"dependencies": [
116
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Convert base-provider.js to TypeScript and define IAIProvider interface",
"description": "Create the IAIProvider interface with all required method signatures and convert the existing base-provider.js file to a TypeScript file with proper type definitions",
"dependencies": [],
"details": "Create src/types/providers.ts with IAIProvider interface containing methods: generateCompletion(prompt: string, options?: CompletionOptions): Promise<CompletionResult>, calculateTokens(text: string): number, getName(): string, getModel(): string, getDefaultModel(): string. Move base-provider.js to src/providers/base-provider.ts and add initial TypeScript types.\n<info added on 2025-08-06T12:16:45.893Z>\nSince the IAIProvider interface already exists in src/interfaces/ai-provider.interface.ts with all required methods and type definitions, update the subtask to focus on converting base-provider.js to TypeScript and implementing the BaseAIProvider abstract class. The conversion should extend the existing BaseAIProvider from src/interfaces/ai-provider.interface.ts rather than creating duplicate interfaces. Ensure the implementation aligns with the comprehensive interface that includes AIOptions, AIResponse, AIModel, ProviderInfo types and methods for streaming, validation, and usage tracking.\n</info added on 2025-08-06T12:16:45.893Z>",
"status": "done",
"testStrategy": "Verify that the interface is properly defined and that TypeScript compilation succeeds without errors"
},
{
"id": 2,
"title": "Implement BaseProvider abstract class with core properties",
"description": "Create the abstract BaseProvider class implementing IAIProvider with protected properties for apiKey and model configuration",
"dependencies": [
"118.1"
],
"details": "In base-provider.ts, define abstract class BaseProvider implements IAIProvider with protected properties: apiKey: string, model: string, maxRetries: number = 3, retryDelay: number = 1000. Add constructor that accepts BaseProviderConfig interface with apiKey and optional model. Implement getModel() method to return current model.\n<info added on 2025-08-06T12:28:45.485Z>\nI've reviewed the existing BaseAIProvider interface in the interfaces file. The task requires creating a separate BaseProvider abstract class in base-provider.ts that implements the IAIProvider interface, with specific protected properties and configuration. This appears to be a deliberate architectural decision to have a more concrete base class with built-in retry logic and configuration management that all provider implementations will extend.\n</info added on 2025-08-06T12:28:45.485Z>\n<info added on 2025-08-06T13:14:24.539Z>\nSuccessfully implemented BaseProvider abstract class:\n\nIMPLEMENTED FILES:\n✅ packages/tm-core/src/providers/base-provider.ts - Created new BaseProvider abstract class\n✅ packages/tm-core/src/providers/index.ts - Updated to export BaseProvider\n\nIMPLEMENTATION DETAILS:\n- Created BaseProviderConfig interface with required apiKey and optional model\n- BaseProvider abstract class implements IAIProvider interface\n- Protected properties implemented as specified:\n - apiKey: string \n - model: string\n - maxRetries: number = 3\n - retryDelay: number = 1000\n- Constructor accepts BaseProviderConfig and sets apiKey and model (using getDefaultModel() if not provided)\n- Implemented getModel() method that returns current model\n- All IAIProvider methods declared as abstract (to be implemented by concrete providers)\n- Uses .js extension for ESM import compatibility\n- TypeScript compilation verified successful\n\nThe BaseProvider provides the foundation for concrete provider implementations with shared retry logic properties and standardized configuration.\n</info added on 2025-08-06T13:14:24.539Z>\n<info added on 2025-08-20T17:16:14.037Z>\nREFACTORING REQUIRED: The BaseProvider implementation needs to be relocated from packages/tm-core/src/providers/base-provider.ts to packages/tm-core/src/providers/ai/base-provider.ts following the new directory structure. The class must implement the Template Method pattern with the following structure:\n\n1. Keep constructor concise (under 10 lines) - only initialize apiKey and model properties\n2. Remove maxRetries and retryDelay from constructor - these should be class-level constants or configurable separately\n3. Implement all abstract methods from IAIProvider: generateCompletion, calculateTokens, getName, getModel, getDefaultModel\n4. Add protected template methods for extensibility:\n - validateInput(input: string): void - for input validation with early returns\n - prepareRequest(input: string, options?: any): any - for request preparation\n - handleResponse(response: any): string - for response processing\n - handleError(error: any): never - for consistent error handling\n5. 
Apply clean code principles: extract complex logic into small focused methods, use early returns to reduce nesting, ensure each method has single responsibility\n\nThe refactored BaseProvider will serve as a robust foundation using Template Method pattern, allowing concrete providers to override specific behaviors while maintaining consistent structure and error handling across all AI provider implementations.\n</info added on 2025-08-20T17:16:14.037Z>\n<info added on 2025-08-21T15:57:30.467Z>\nREFACTORING UPDATE: The BaseProvider implementation in packages/tm-core/src/providers/base-provider.ts is now affected by the core/ folder removal and needs its import paths updated. Since base-provider.ts imports from '../interfaces/provider.interface.js', this import remains valid as both providers/ and interfaces/ are at the same level. No changes needed to BaseProvider imports due to the flattening. The file structure reorganization maintains the relative path relationship between providers/ and interfaces/ directories.\n</info added on 2025-08-21T15:57:30.467Z>",
"status": "done",
"testStrategy": "Create a test file that attempts to instantiate BaseProvider directly (should fail) and verify that protected properties are accessible in child classes"
},
{
"id": 3,
"title": "Define abstract methods and implement Template Method pattern",
"description": "Add all abstract methods to BaseProvider and implement the Template Method pattern for common provider operations",
"dependencies": [
"118.2"
],
"details": "Add abstract methods: protected abstract generateCompletionInternal(prompt: string, options?: CompletionOptions): Promise<CompletionResult>, abstract calculateTokens(text: string): number, abstract getName(): string, abstract getDefaultModel(): string. Implement public generateCompletion() as template method that calls generateCompletionInternal() with error handling and retry logic.\n<info added on 2025-08-20T17:16:38.315Z>\nApply Template Method pattern following clean code principles:\n\nDefine abstract methods:\n- protected abstract generateCompletionInternal(prompt: string, options?: CompletionOptions): Promise<CompletionResult>\n- protected abstract calculateTokens(text: string): number\n- protected abstract getName(): string\n- protected abstract getDefaultModel(): string\n- protected abstract getMaxRetries(): number\n- protected abstract getRetryDelay(): number\n\nImplement template method generateCompletion():\n- Call validateInput() with early returns for invalid prompt/options\n- Call prepareRequest() to format the request\n- Execute generateCompletionInternal() with retry logic\n- Call handleResponse() to process the result\n- Call handleError() in catch blocks\n\nAdd protected helper methods:\n- validateInput(prompt: string, options?: CompletionOptions): ValidationResult - Check prompt length, validate options, return early on errors\n- prepareRequest(prompt: string, options?: CompletionOptions): PreparedRequest - Format prompt, merge with defaults, add metadata\n- handleResponse(result: CompletionResult): ProcessedResult - Validate response format, extract completion text, add usage metrics\n- handleError(error: unknown, attempt: number): void - Log error details, determine if retryable, throw TaskMasterError\n\nExtract retry logic helpers:\n- shouldRetry(error: unknown, attempt: number): boolean - Check error type and attempt count\n- calculateBackoffDelay(attempt: number): number - Use exponential backoff with jitter\n- isRateLimitError(error: unknown): boolean - Detect rate limit responses\n- isTimeoutError(error: unknown): boolean - Detect timeout errors\n\nUse named constants:\n- DEFAULT_MAX_RETRIES = 3\n- BASE_RETRY_DELAY_MS = 1000\n- MAX_RETRY_DELAY_MS = 32000\n- BACKOFF_MULTIPLIER = 2\n- JITTER_FACTOR = 0.1\n\nEnsure each method stays under 30 lines by extracting complex logic into focused helper methods.\n</info added on 2025-08-20T17:16:38.315Z>",
"status": "done",
"testStrategy": "Create MockProvider extending BaseProvider to verify all abstract methods must be implemented and template method properly delegates to internal methods"
},
{
"id": 4,
"title": "Implement error handling and retry logic with exponential backoff",
"description": "Add comprehensive error handling and retry mechanism with exponential backoff for API calls in the template method",
"dependencies": [
"118.3"
],
"details": "In generateCompletion() template method, wrap generateCompletionInternal() in try-catch with retry logic. Implement exponential backoff: delay * Math.pow(2, attempt). Add error types: ProviderError, RateLimitError, AuthenticationError extending Error. Log errors in development mode only. Handle specific error cases like rate limits (429), authentication errors (401), and network timeouts.",
"status": "done",
"testStrategy": "Test retry logic with MockProvider that fails N times then succeeds, verify exponential backoff timing, test different error scenarios and their handling"
},
{
"id": 5,
"title": "Add validation, logging, and completion options handling",
"description": "Implement input validation, debug logging for development, and proper handling of completion options like temperature and max tokens",
"dependencies": [
"118.4"
],
"details": "Add validatePrompt() method to check for empty/invalid prompts. Add validateOptions() to ensure temperature is between 0-2, maxTokens is positive. Implement debug logging using console.log only when NODE_ENV !== 'production'. Create CompletionOptions interface with optional temperature, maxTokens, topP, frequencyPenalty, presencePenalty. Ensure all validation errors throw descriptive ProviderError instances.",
"status": "done",
"testStrategy": "Test validation with invalid inputs (empty prompts, negative maxTokens, temperature > 2), verify logging only occurs in development, test option merging with defaults"
}
]
},
{
"id": 119,
"title": "Implement Provider Factory with Dynamic Imports",
"description": "Create ProviderFactory class using Factory pattern for AI provider instantiation",
"details": "Implement ProviderFactory class in ai/provider-factory.ts with static create method. Use switch statement for provider selection ('anthropic', 'openai', 'google'). Implement dynamic imports for each provider to enable tree-shaking. Return Promise<IAIProvider> from create method. Handle unknown providers with meaningful error messages. Ensure proper typing for configuration object.",
"testStrategy": "Test factory with each provider type, verify dynamic imports work correctly, test error handling for unknown providers, mock dynamic imports for unit testing",
"priority": "medium",
"dependencies": [
118
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create ProviderFactory class structure and types",
"description": "Set up the ProviderFactory class file with proper TypeScript types and interfaces",
"dependencies": [],
"details": "Create ai/provider-factory.ts file. Define ProviderFactory class with static create method signature. Import IAIProvider interface from base provider. Define ProviderType as union type ('anthropic' | 'openai' | 'google'). Set up proper return type as Promise<IAIProvider> for the create method to support dynamic imports.\n<info added on 2025-08-20T17:16:56.506Z>\nClean code architecture implementation: Move to src/providers/ai/provider-factory.ts. Follow Single Responsibility Principle - factory only creates providers, no other responsibilities. Create validateProviderConfig() method for provider configuration validation. Define PROVIDER_NAMES constant object with provider string values. Implement create() method with early returns pattern for better readability. Apply Dependency Inversion - factory depends on IAIProvider interface abstraction, not concrete implementations. Keep method under 40 lines following clean code practices.\n</info added on 2025-08-20T17:16:56.506Z>",
"status": "pending",
"testStrategy": "Verify file structure and type definitions compile correctly"
},
{
"id": 2,
"title": "Implement provider selection logic with switch statement",
"description": "Add the core switch statement logic to handle different provider types",
"dependencies": [
"119.1"
],
"details": "Inside the static create method, implement switch statement on provider type parameter. Add cases for 'anthropic', 'openai', and 'google'. Add default case that throws a descriptive error for unknown providers (e.g., throw new Error(`Unknown provider: ${providerType}`)). Structure each case to prepare for dynamic imports.",
"status": "pending",
"testStrategy": "Test switch statement with valid and invalid provider types, verify error messages"
},
{
"id": 3,
"title": "Add dynamic imports for each provider",
"description": "Implement dynamic import() statements for lazy loading provider modules",
"dependencies": [
"119.2"
],
"details": "In each switch case, use dynamic import() to load the provider module: for 'anthropic' case use await import('./providers/anthropic-provider'), similar for OpenAI and Google providers. Extract the default export or specific class from each dynamic import. This enables tree-shaking by only loading the selected provider.",
"status": "pending",
"testStrategy": "Mock dynamic imports in tests, verify only requested provider is loaded"
},
{
"id": 4,
"title": "Instantiate providers with configuration",
"description": "Create provider instances with proper configuration passing",
"dependencies": [
"119.3"
],
"details": "After each dynamic import, instantiate the provider class with the configuration object passed to create method. Ensure configuration object is properly typed (use IConfiguration or relevant subset). Return the instantiated provider instance. Handle any instantiation errors and wrap them with context about which provider failed.",
"status": "pending",
"testStrategy": "Test provider instantiation with various configuration objects, verify configuration is passed correctly"
},
{
"id": 5,
"title": "Add error handling and validation",
"description": "Implement comprehensive error handling for all failure scenarios",
"dependencies": [
"119.4"
],
"details": "Wrap dynamic imports in try-catch blocks to handle module loading failures. Add validation for configuration object before passing to providers. Create custom error messages that include the provider type and specific failure reason. Consider adding a ProviderFactoryError custom error class. Ensure all errors bubble up properly while maintaining async/await chain.",
"status": "pending",
"testStrategy": "Test various error scenarios: missing provider modules, invalid configurations, network failures during dynamic import"
}
]
},
{
"id": 120,
"title": "Implement Anthropic Provider",
"description": "Create AnthropicProvider class extending BaseProvider with full Anthropic SDK integration",
"details": "Create AnthropicProvider class in ai/providers/anthropic-provider.ts extending BaseProvider. Import and use @anthropic-ai/sdk. Initialize private client property in constructor. Implement all abstract methods: generateCompletion using Claude API, calculateTokens using appropriate tokenizer, getName returning 'anthropic', getModel returning current model, getDefaultModel returning 'claude-3-sonnet-20240229'. Wrap API errors with context.",
"testStrategy": "Mock Anthropic SDK for unit tests, test API error handling, verify token calculation accuracy, test with different model configurations",
"priority": "high",
"dependencies": [
118
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Set up AnthropicProvider class structure and dependencies",
"description": "Create the AnthropicProvider class file with proper imports and class structure extending BaseProvider",
"dependencies": [],
"details": "Create ai/providers/anthropic-provider.ts file. Import BaseProvider from base-provider.ts and import Anthropic from @anthropic-ai/sdk. Import necessary types including IAIProvider, ChatMessage, and ChatCompletion. Set up the class declaration extending BaseProvider with proper TypeScript typing. Add private client property declaration of type Anthropic.\n<info added on 2025-08-20T17:17:15.019Z>\nFile should be created at src/providers/ai/adapters/anthropic-provider.ts instead of ai/providers/anthropic-provider.ts. Follow clean code principles: keep constructor minimal (under 10 lines) with only client initialization. Extract API call logic into separate small methods (each under 20 lines). Use early returns in generateCompletionInternal() for better readability. Extract error mapping logic to a dedicated mapAnthropicError() method. Avoid magic strings - define constants for model names and API parameters.\n</info added on 2025-08-20T17:17:15.019Z>",
"status": "pending",
"testStrategy": "Verify file structure and imports compile without errors, ensure class properly extends BaseProvider"
},
{
"id": 2,
"title": "Implement constructor and client initialization",
"description": "Create the constructor that accepts configuration and initializes the Anthropic SDK client",
"dependencies": [
"120.1"
],
"details": "Implement constructor accepting IConfiguration parameter. Call super(config) to initialize BaseProvider. Initialize the private client property by creating new Anthropic instance with apiKey from config.apiKeys.anthropic. Add validation to ensure API key exists, throwing meaningful error if missing. Store the model configuration from config.model or use default.",
"status": "pending",
"testStrategy": "Test constructor with valid and invalid configurations, verify client initialization, test API key validation"
},
{
"id": 3,
"title": "Implement generateCompletion method with Claude API",
"description": "Implement the main generateCompletion method that calls Anthropic's Claude API and handles responses",
"dependencies": [
"120.2"
],
"details": "Implement async generateCompletion method accepting ChatMessage array. Map ChatMessage format to Anthropic's expected format (role and content). Use client.messages.create() with appropriate parameters including model, max_tokens, and messages. Transform Anthropic response format to ChatCompletion interface. Handle streaming vs non-streaming responses. Implement proper error handling wrapping API errors with context.",
"status": "pending",
"testStrategy": "Mock Anthropic SDK client.messages.create, test with various message formats, verify response transformation, test error scenarios"
},
{
"id": 4,
"title": "Implement token calculation and utility methods",
"description": "Implement calculateTokens method and other required abstract methods from BaseProvider",
"dependencies": [
"120.3"
],
"details": "Implement calculateTokens method using appropriate tokenizer (tiktoken or claude-tokenizer if available). Implement getName method returning 'anthropic' string constant. Implement getModel method returning current model from configuration. Implement getDefaultModel method returning 'claude-3-sonnet-20240229'. Add any additional helper methods for token counting or message formatting.",
"status": "pending",
"testStrategy": "Test token calculation accuracy with various input strings, verify utility methods return correct values"
},
{
"id": 5,
"title": "Add comprehensive error handling and type exports",
"description": "Implement robust error handling throughout the class and ensure proper TypeScript exports",
"dependencies": [
"120.4"
],
"details": "Wrap all Anthropic API calls in try-catch blocks. Create custom error messages that include context about the operation being performed. Handle rate limiting errors specifically. Ensure all methods have proper TypeScript return types. Export the AnthropicProvider class as default export. Add JSDoc comments for all public methods. Ensure proper error propagation maintaining stack traces.",
"status": "pending",
"testStrategy": "Test various API error scenarios, verify error messages include context, test rate limit handling, ensure TypeScript types are correctly exported"
}
]
},
{
"id": 121,
"title": "Create Prompt Builder and Task Parser",
"description": "Implement PromptBuilder class and TaskParser with Dependency Injection",
"details": "Create PromptBuilder class with buildParsePrompt and buildExpandPrompt methods using template literals. Include specific JSON format instructions. Create TaskParser class accepting IAIProvider and IConfiguration via constructor (Dependency Injection). Implement parsePRD method to read PRD file, use PromptBuilder to create prompt, call AI provider, extract tasks from response, and enrich with metadata. Handle parsing errors gracefully.",
"testStrategy": "Unit test prompt building with various inputs, mock AI provider responses, test JSON extraction logic, verify error handling for malformed responses, integration test with real PRD files",
"priority": "high",
"dependencies": [
119,
120
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create PromptBuilder Class Structure",
"description": "Implement the PromptBuilder class with template methods for generating AI prompts",
"dependencies": [],
"details": "Create src/services/prompt-builder.ts. Define PromptBuilder class with two public methods: buildParsePrompt(prdContent: string): string and buildExpandPrompt(task: Task): string. Use template literals to construct prompts with clear JSON format instructions. Include system instructions for AI to follow specific output formats. Add private helper methods for common prompt sections like JSON schema definitions and response format examples.\n<info added on 2025-08-20T17:17:31.467Z>\nRefactor to src/services/prompts/prompt-builder.ts to separate concerns. Implement buildTaskPrompt() method. Define prompt template constants: PARSE_PROMPT_TEMPLATE, EXPAND_PROMPT_TEMPLATE, TASK_PROMPT_TEMPLATE, JSON_FORMAT_INSTRUCTIONS. Move JSON schema definitions and format instructions to constants. Ensure each template uses template literals with ${} placeholders. Keep all methods under 40 lines by extracting logic into focused helper methods. Use descriptive constant names for all repeated strings or instruction blocks.\n</info added on 2025-08-20T17:17:31.467Z>",
"status": "pending",
"testStrategy": "Unit test both prompt methods with sample inputs. Verify prompt contains required JSON structure instructions. Test edge cases like empty PRD content or minimal task objects."
},
{
"id": 2,
"title": "Implement TaskParser Class with DI",
"description": "Create TaskParser class accepting IAIProvider and IConfiguration through constructor injection",
"dependencies": [
"121.1"
],
"details": "Create src/services/task-parser.ts. Define TaskParser class with constructor(private aiProvider: IAIProvider, private config: IConfiguration). Add private promptBuilder property initialized in constructor. Implement basic class structure with placeholder methods. Ensure proper TypeScript typing for all parameters and properties. Follow dependency injection pattern for testability.\n<info added on 2025-08-20T17:17:49.624Z>\nUpdate file location to src/services/tasks/task-parser.ts instead of src/services/task-parser.ts. Refactor parsePRD() method to stay under 40 lines by extracting logic into helper methods: readPRD(), validatePRD(), extractTasksFromResponse(), and enrichTasksWithMetadata(). Each helper method should be under 20 lines. Implement early returns in validation methods for cleaner code flow. Remove any file I/O operations from the parser class - delegate all storage operations to injected dependencies. Ensure clean separation of concerns with parser focused only on task parsing logic.\n</info added on 2025-08-20T17:17:49.624Z>",
"status": "pending",
"testStrategy": "Test constructor properly stores injected dependencies. Verify class instantiation with mock providers. Test TypeScript compilation with proper interface implementations."
},
{
"id": 3,
"title": "Implement parsePRD Method Core Logic",
"description": "Create the main parsePRD method that orchestrates the PRD parsing workflow",
"dependencies": [
"121.2"
],
"details": "Implement parsePRD(filePath: string): Promise<ParsedTask[]> method in TaskParser. Read PRD file using fs.promises.readFile. Use promptBuilder.buildParsePrompt() to create AI prompt. Call aiProvider.generateResponse() with constructed prompt. Extract JSON array from AI response using regex or JSON.parse. Handle potential parsing errors with try-catch blocks. Return empty array on errors after logging.",
"status": "pending",
"testStrategy": "Test with mock AI provider returning valid JSON. Test file reading with various file paths. Mock file system for controlled testing. Verify proper error logging without throwing."
},
{
"id": 4,
"title": "Add Task Enrichment and Metadata",
"description": "Enhance parsed tasks with additional metadata and validation",
"dependencies": [
"121.3"
],
"details": "After extracting tasks from AI response, enrich each task with metadata: add createdAt timestamp, set initial status to 'pending', validate required fields (id, title, description). Add priority field with default 'medium' if not provided. Ensure all tasks have valid structure before returning. Create private enrichTask(task: any): ParsedTask method for this logic. Handle missing or malformed task data gracefully.",
"status": "pending",
"testStrategy": "Test enrichment adds all required metadata. Verify validation catches malformed tasks. Test default values are applied correctly. Ensure timestamps are properly formatted."
},
{
"id": 5,
"title": "Implement Comprehensive Error Handling",
"description": "Add robust error handling throughout the TaskParser implementation",
"dependencies": [
"121.4"
],
"details": "Wrap file reading in try-catch to handle FILE_NOT_FOUND errors. Catch AI provider errors and wrap in appropriate TaskMasterError. Handle JSON parsing errors when extracting from AI response. Add specific error handling for network timeouts, rate limits, and malformed responses. Log errors with context in development mode only. Return meaningful error messages without exposing internals. Ensure all errors are properly typed as TaskMasterError instances.",
"status": "pending",
"testStrategy": "Test each error scenario separately: missing files, AI provider failures, malformed JSON, network errors. Verify proper error codes are used. Test that errors don't expose sensitive information."
}
]
},
{
"id": 122,
"title": "Implement Configuration Management",
"description": "Create ConfigManager class with Zod validation for configuration",
"details": "Implement ConfigManager in config/config-manager.ts accepting Partial<IConfiguration> in constructor. Use Zod to create validation schema matching IConfiguration interface. Implement get method with TypeScript generics for type-safe access, getAll returning full config, validate method for validation. Set defaults: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true. Handle validation errors with clear messages.",
"testStrategy": "Test with various configuration combinations, verify Zod validation catches invalid configs, test default values, ensure type safety of get method",
"priority": "medium",
"dependencies": [
116
],
"status": "in-progress",
"subtasks": [
{
"id": 1,
"title": "Create Zod validation schema for IConfiguration",
"description": "Define a Zod schema that matches the IConfiguration interface structure with proper validation rules",
"dependencies": [],
"details": "Create configSchema in config/config-manager.ts using z.object() to define validation for all IConfiguration properties. Include string validations for projectPath, enum validation for aiProvider ('anthropic', 'openai', etc.), boolean for enableTags, and any other configuration fields. Use z.string().min(1) for required strings, z.enum() for provider types, and appropriate validators for other fields.\n<info added on 2025-08-06T13:14:58.822Z>\nCompleted Zod validation schema implementation in packages/tm-core/src/config/validation.ts\n\nIMPLEMENTATION DETAILS:\n- Created comprehensive Zod schemas matching IConfiguration interface structure exactly\n- All required schemas exported as expected by config-schema.ts:\n * configurationSchema - Main configuration validation with custom refinements\n * partialConfigurationSchema - For partial updates (using base schema without refinements)\n * modelConfigSchema - Model configuration validation\n * providerConfigSchema - AI provider configuration validation \n * taskSettingsSchema - Task management settings validation\n * loggingSettingsSchema/loggingConfigSchema - Logging configuration (with legacy alias)\n * tagSettingsSchema - Tag management settings validation\n * storageSettingsSchema - Storage configuration validation\n * retrySettingsSchema - Retry/resilience settings validation\n * securitySettingsSchema - Security settings validation\n * cacheConfigSchema - Cache configuration stub (for consistency)\n\nKEY FEATURES:\n- Proper Zod validation rules applied (string lengths, number ranges, enums)\n- Custom refinements for business logic (maxRetryDelay >= retryDelay)\n- Comprehensive enum schemas for all union types\n- Legacy alias support for backwards compatibility\n- All 13 nested interface schemas implemented with appropriate constraints\n- Type exports for runtime validation\n\nVALIDATION INCLUDES:\n- String validations with min lengths for required fields\n- Enum validation for providers, priorities, complexities, log levels, etc.\n- Number range validations (min/max constraints)\n- URL validation for baseUrl fields\n- Array validations with proper item types\n- Record validations for dynamic key-value pairs\n- Optional field handling with appropriate defaults\n\nTESTED AND VERIFIED:\n- All schemas compile correctly with TypeScript\n- Import/export chain works properly through config-schema.ts\n- Basic validation tests pass for key schemas\n- No conflicts with existing IConfiguration interface structure\n</info added on 2025-08-06T13:14:58.822Z>\n<info added on 2025-08-20T17:18:12.343Z>\nCreated ConfigManager class at src/config/config-manager.ts with the following implementation:\n\nSTRUCTURE:\n- DEFAULT_CONFIG constant defined with complete default values for all configuration properties\n- Constructor validates config using validate() method (follows Fail-Fast principle)\n- Constructor kept under 15 lines as required\n- Type-safe get<K>() method using TypeScript generics for accessing specific config properties\n- getAll() method returns complete validated configuration\n- validate() method extracted for configuration validation using Zod schema\n- mergeWithDefaults() helper extracted for merging partial config with defaults\n\nKEY IMPLEMENTATION DETAILS:\n- Imports configurationSchema from src/config/schemas/config.schema.ts\n- Uses z.infer<typeof configurationSchema> for type safety\n- Validates on construction with clear error messages\n- No nested ternaries used\n- Proper error handling with ConfigValidationError\n- 
Type-safe property access with keyof IConfiguration constraint\n\nMETHODS:\n- constructor(config?: Partial<IConfiguration>) - Validates and stores config\n- get<K extends keyof IConfiguration>(key: K): IConfiguration[K] - Type-safe getter\n- getAll(): IConfiguration - Returns full config\n- private validate(config: unknown): IConfiguration - Validates using Zod\n- private mergeWithDefaults(config: Partial<IConfiguration>): IConfiguration - Merges with defaults\n\nAlso created src/config/schemas/config.schema.ts importing the configurationSchema from validation.ts for cleaner organization.\n</info added on 2025-08-20T17:18:12.343Z>",
"status": "review",
"testStrategy": "Test schema validation with valid and invalid configurations, ensure all IConfiguration fields are covered"
},
{
"id": 2,
"title": "Implement ConfigManager class constructor and storage",
"description": "Create ConfigManager class with constructor that accepts Partial<IConfiguration> and initializes configuration with defaults",
"dependencies": [
"122.1"
],
"details": "Define ConfigManager class with private config property. In constructor, merge provided partial config with defaults (projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true). Store the merged configuration internally. Ensure the class is properly typed with IConfiguration interface.",
"status": "pending",
"testStrategy": "Test constructor with various partial configs, verify defaults are applied correctly, test with empty config"
},
{
"id": 3,
"title": "Implement validate method with error handling",
"description": "Create validate method that uses Zod schema to validate configuration and provides clear error messages",
"dependencies": [
"122.1",
"122.2"
],
"details": "Implement validate(): void method that runs configSchema.parse(this.config) within try-catch block. On ZodError, transform the error into user-friendly messages that clearly indicate which fields are invalid and why. Consider creating a custom error class for configuration validation errors. The method should throw if validation fails.",
"status": "pending",
"testStrategy": "Test with invalid configs to ensure proper error messages, verify all validation rules work correctly"
},
{
"id": 4,
"title": "Implement type-safe get method with generics",
"description": "Create generic get method for retrieving individual configuration values with TypeScript type inference",
"dependencies": [
"122.2"
],
"details": "Implement get<K extends keyof IConfiguration>(key: K): IConfiguration[K] method that returns the value for a specific configuration key. Use TypeScript generics and keyof operator to ensure type safety. The method should provide proper type inference so consumers get the correct type based on the key they request.",
"status": "pending",
"testStrategy": "Test type inference with different keys, verify TypeScript catches invalid keys at compile time"
},
{
"id": 5,
"title": "Implement getAll method and finalize class",
"description": "Create getAll method to return full configuration and ensure proper exports",
"dependencies": [
"122.2",
"122.3",
"122.4"
],
"details": "Implement getAll(): IConfiguration method that returns a deep copy of the entire configuration object to prevent external mutations. Add JSDoc comments to all public methods. Export the ConfigManager class and ensure it's properly integrated with the module structure. Consider adding a static factory method if needed.",
"status": "pending",
"testStrategy": "Test getAll returns complete config, verify returned object is immutable, test integration with other modules"
}
]
},
{
"id": 123,
"title": "Create Utility Functions and Error Handling",
"description": "Implement ID generation utilities and custom error classes",
"details": "Create id-generator.ts with generateTaskId and generateSubtaskId functions using specified formats with timestamp and random components. Create TaskMasterError class extending Error in errors/task-master-error.ts with error codes (FILE_NOT_FOUND, PARSE_ERROR, etc.). Ensure errors don't expose internal details. Add development-only logging.",
"testStrategy": "Test ID generation for uniqueness and format compliance, verify error classes properly extend Error, test error message formatting, ensure no sensitive data in errors",
"priority": "low",
"dependencies": [
116
],
"status": "in-progress",
"subtasks": [
{
"id": 1,
"title": "Create ID generation utilities module",
"description": "Implement the id-generator.ts module with functions for generating unique task and subtask IDs",
"dependencies": [],
"details": "Create src/utils/id-generator.ts file. Implement generateTaskId() function that returns format 'TASK-{timestamp}-{random}' where timestamp is Date.now() and random is 4-character alphanumeric string. Implement generateSubtaskId(parentId) function that returns format '{parentId}.{sequential}' where sequential increments based on existing subtasks. Use crypto.randomBytes or Math.random for randomness. Export both functions as named exports.\n<info added on 2025-08-06T12:42:22.203Z>\nThe ID generator module has been successfully implemented with the following completed features:\n- generateTaskId() function that creates unique IDs in 'TASK-{timestamp}-{random}' format\n- generateSubtaskId() function that generates sequential subtask IDs in '{parentId}.{sequential}' format\n- Input validation functions to ensure ID integrity\n- Proper TypeScript type definitions and interfaces\n- Comprehensive JSDoc documentation with usage examples\n- All functions exported as named exports from src/utils/id-generator.ts\n</info added on 2025-08-06T12:42:22.203Z>",
"status": "done",
"testStrategy": "Test uniqueness by generating 1000 IDs and checking for duplicates, verify format compliance with regex, test subtask ID sequential numbering"
},
{
"id": 2,
"title": "Create base error class structure",
"description": "Implement the TaskMasterError base class that extends Error with proper error handling capabilities",
"dependencies": [],
"details": "Create src/errors/task-master-error.ts file. Define TaskMasterError class extending Error. Add constructor accepting (message: string, code: string, details?: any). Set this.name = 'TaskMasterError'. Create error code constants: FILE_NOT_FOUND = 'FILE_NOT_FOUND', PARSE_ERROR = 'PARSE_ERROR', VALIDATION_ERROR = 'VALIDATION_ERROR', API_ERROR = 'API_ERROR'. Override toString() to format errors appropriately. Ensure stack trace is preserved.\n<info added on 2025-08-06T13:13:11.635Z>\nCompleted TaskMasterError base class implementation:\n\nIMPLEMENTATION DETAILS:\n- TaskMasterError class fully implemented extending Error\n- Added proper prototype chain fix with Object.setPrototypeOf(this, TaskMasterError.prototype)\n- Includes all required properties: code (from ERROR_CODES), timestamp, context, cause\n- toJSON method implemented for full serialization support\n- Error sanitization implemented via getSanitizedDetails() and containsSensitiveInfo() methods\n- Error chaining with cause property fully supported\n- Additional utility methods: getUserMessage(), toString(), is(), hasCode(), withContext(), wrap()\n\nSUCCESS CRITERIA VERIFIED:\n✅ TaskMasterError class fully implemented\n✅ Extends Error with proper prototype chain fix (Object.setPrototypeOf)\n✅ Includes all required properties and methods\n✅ toJSON method for serialization\n✅ Error sanitization logic for production (containsSensitiveInfo method)\n✅ Comprehensive error context and metadata support\n\nFILE MODIFIED: packages/tm-core/src/errors/task-master-error.ts\n</info added on 2025-08-06T13:13:11.635Z>\n<info added on 2025-08-20T17:18:38.499Z>\nRefactored to follow clean code principles:\n\nCLEAN CODE IMPROVEMENTS:\n- Moved TaskMasterError class to be under 40 lines by extracting methods\n- Created separate error-codes.ts file with ERROR_CODES constant object\n- Extracted sanitizeMessage() method to handle message sanitization\n- Extracted addContext() method for adding error context\n- Extracted toJSON() method for serialization\n- Added static factory methods: fromError(), notFound(), parseError(), validationError(), apiError()\n- Improved error chaining with proper 'cause' property handling\n- Ensured user-friendly messages that hide implementation details\n- Maintained all existing functionality while improving code organization\n\nFILES CREATED/MODIFIED:\n- packages/tm-core/src/errors/error-codes.ts (new file with ERROR_CODES)\n- packages/tm-core/src/errors/task-master-error.ts (refactored to under 40 lines)\n</info added on 2025-08-20T17:18:38.499Z>",
"status": "review",
"testStrategy": "Test that error extends Error properly, verify error.name is set correctly, test toString() output format, ensure stack trace exists"
},
{
"id": 3,
"title": "Implement error sanitization and security features",
"description": "Add security features to prevent exposure of sensitive internal details in error messages",
"dependencies": [
"123.2"
],
"details": "In TaskMasterError class, add private sanitizeDetails() method that removes sensitive data like API keys, file paths beyond project root, and internal state. Implement toJSON() method that returns sanitized error object for external consumption. Add static isSafeForProduction() method to validate error messages don't contain patterns like absolute paths, environment variables, or API credentials. Store original details in private property for debugging.",
"status": "pending",
"testStrategy": "Test sanitization removes absolute paths, API keys, and sensitive patterns, verify toJSON returns safe object, test original details are preserved internally"
},
{
"id": 4,
"title": "Add development-only logging functionality",
"description": "Implement conditional logging that only operates in development environment",
"dependencies": [
"123.2",
"123.3"
],
"details": "In task-master-error.ts, add static enableDevLogging property defaulting to process.env.NODE_ENV !== 'production'. Add logError() method that console.error's full error details only when enableDevLogging is true. Include timestamp, error code, sanitized message, and full stack trace in dev logs. In production, log only error code and safe message. Create static setDevLogging(enabled: boolean) to control logging.",
"status": "pending",
"testStrategy": "Test logging output in dev vs production modes, verify sensitive data isn't logged in production, test log format includes all required fields"
},
{
"id": 5,
"title": "Create specialized error subclasses",
"description": "Implement specific error classes for different error scenarios inheriting from TaskMasterError",
"dependencies": [
"123.2",
"123.3",
"123.4"
],
"details": "Create FileNotFoundError extending TaskMasterError with code FILE_NOT_FOUND, accepting filePath parameter. Create ParseError with code PARSE_ERROR for parsing failures, accepting source and line number. Create ValidationError with code VALIDATION_ERROR for data validation, accepting field and value. Create APIError with code API_ERROR for external API failures, accepting statusCode and provider. Each should format appropriate user-friendly messages while storing technical details internally.",
"status": "pending",
"testStrategy": "Test each error class constructor and message formatting, verify inheritance chain, test that each error type has correct code, ensure specialized errors work with logging system"
}
]
},
{
"id": 124,
"title": "Implement TaskMasterCore Facade",
"description": "Create main TaskMasterCore class as Facade pattern entry point",
"details": "Create TaskMasterCore class in index.ts with private properties for config, storage, aiProvider, and parser. Implement initialize method for lazy loading of AI provider. Implement parsePRD method that coordinates parser, storage, and configuration. Implement getTasks for retrieving stored tasks. Apply Facade pattern to hide complexity. Export createTaskMaster factory function, all types and interfaces. Use proper import paths with .js extensions for ESM.",
"testStrategy": "Integration test full parse flow, test lazy initialization, verify facade properly delegates to subsystems, test with different configurations, ensure exports are correct",
"priority": "high",
"dependencies": [
117,
121,
122
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create TaskMasterCore class structure with type definitions",
"description": "Set up the main TaskMasterCore class in src/index.ts with all necessary imports, type definitions, and class structure following the Facade pattern",
"dependencies": [],
"details": "Create src/index.ts file. Import IConfiguration, ITaskStorage, IAIProvider, and IPRDParser interfaces. Define TaskMasterCore class with private properties: _config (ConfigManager), _storage (ITaskStorage), _aiProvider (IAIProvider | null), _parser (IPRDParser | null). Add constructor accepting options parameter of type Partial<IConfiguration>. Initialize _config with ConfigManager, set other properties to null for lazy loading. Import all necessary types from their respective modules using .js extensions for ESM compatibility.\n<info added on 2025-08-20T17:18:56.625Z>\nApply Facade pattern principles: simple public interface, hide subsystem complexity. Keep all methods under 30 lines by extracting logic. Implement lazy initialization pattern in initialize() method - only create dependencies when first needed. Extract createDependencies() private helper method to handle creation of storage, AI provider, and parser instances. Add createTaskMaster() factory function for convenient instance creation. Use barrel exports pattern - export all public types and interfaces that clients need (IConfiguration, ITaskStorage, IAIProvider, IPRDParser, TaskMasterCore). Follow Interface Segregation Principle - only expose methods and types that clients actually need, hide internal implementation details.\n</info added on 2025-08-20T17:18:56.625Z>",
"status": "pending",
"testStrategy": "Test class instantiation with various configuration options, verify private properties are correctly initialized, ensure TypeScript types are properly enforced"
},
{
"id": 2,
"title": "Implement initialize method for lazy loading",
"description": "Create the initialize method that handles lazy loading of AI provider and parser instances based on configuration",
"dependencies": [
"124.1"
],
"details": "Implement async initialize() method in TaskMasterCore. Check if _aiProvider is null, if so create appropriate provider based on config.aiProvider value using a factory pattern or switch statement. Similarly initialize _parser if null. Store instances in private properties for reuse. Handle provider initialization errors gracefully. Ensure method is idempotent - calling multiple times should not recreate instances. Use dynamic imports if needed for code splitting.",
"status": "pending",
"testStrategy": "Test lazy initialization occurs only once, verify correct provider is instantiated based on config, test error handling for invalid providers, ensure idempotency"
},
{
"id": 3,
"title": "Implement parsePRD method with coordination logic",
"description": "Create parsePRD method that coordinates the parser, AI provider, and storage to parse PRD content and store results",
"dependencies": [
"124.1",
"124.2"
],
"details": "Implement async parsePRD(content: string) method. First call initialize() to ensure components are loaded. Use _parser.parse() to parse the PRD content, passing the AI provider for task generation. Take the parsed tasks and use _storage.saveTasks() to persist them. Handle errors from parser or storage operations. Return the parsed tasks array. Implement proper error context and logging for debugging.",
"status": "pending",
"testStrategy": "Integration test with mock parser and storage, verify coordination between components, test error propagation from subsystems, ensure tasks are properly stored"
},
{
"id": 4,
"title": "Implement getTasks method and other facade methods",
"description": "Create getTasks method and any other necessary facade methods to retrieve and manage tasks",
"dependencies": [
"124.1"
],
"details": "Implement async getTasks() method that calls _storage.loadTasks() and returns the tasks array. Add getTask(id: string) for retrieving single task. Consider adding updateTask, deleteTask methods if needed. All methods should follow facade pattern - simple interface hiding complex operations. Add proper TypeScript return types for all methods. Handle storage not initialized scenarios.",
"status": "pending",
"testStrategy": "Test task retrieval with various scenarios, verify proper delegation to storage, test edge cases like empty task lists or invalid IDs"
},
{
"id": 5,
"title": "Create factory function and module exports",
"description": "Implement createTaskMaster factory function and set up all module exports including types and interfaces",
"dependencies": [
"124.1",
"124.2",
"124.3",
"124.4"
],
"details": "Create createTaskMaster(options?: Partial<IConfiguration>) factory function that returns a new TaskMasterCore instance. Export this as the primary entry point. Re-export all types and interfaces from submodules: ITask, IConfiguration, IAIProvider, ITaskStorage, IPRDParser, etc. Use 'export type' for type-only exports. Ensure all imports use .js extensions for ESM. Create index.d.ts if needed for better TypeScript support. Add JSDoc comments for public API.",
"status": "pending",
"testStrategy": "Test factory function creates proper instances, verify all exports are accessible, test TypeScript type inference works correctly, ensure ESM imports resolve properly"
}
]
},
{
"id": 125,
"title": "Create Placeholder Providers and Complete Testing",
"description": "Implement placeholder providers for OpenAI and Google, create comprehensive test suite",
"details": "Create OpenAIProvider and GoogleProvider classes extending BaseProvider, throwing 'not yet implemented' errors. Create MockProvider in tests/mocks for testing without API calls. Write unit tests for TaskParser, integration tests for parse-prd flow, ensure 80% code coverage. Follow kebab-case naming for test files. Test error scenarios comprehensively.",
"testStrategy": "Run full test suite with coverage report, verify all edge cases are tested, ensure mock provider behaves like real providers, test both success and failure paths",
"priority": "medium",
"dependencies": [
120,
124
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create OpenAIProvider placeholder class",
"description": "Implement OpenAIProvider class that extends BaseProvider with all required methods throwing 'not yet implemented' errors",
"dependencies": [],
"details": "Create src/providers/openai-provider.ts file. Import BaseProvider from base-provider.ts. Implement class OpenAIProvider extends BaseProvider. Override parseText() method to throw new Error('OpenAI provider not yet implemented'). Add proper TypeScript types and JSDoc comments. Export the class as default.",
"status": "pending",
"testStrategy": "Write unit test to verify OpenAIProvider extends BaseProvider correctly and throws expected error when parseText is called"
},
{
"id": 2,
"title": "Create GoogleProvider placeholder class",
"description": "Implement GoogleProvider class that extends BaseProvider with all required methods throwing 'not yet implemented' errors",
"dependencies": [],
"details": "Create src/providers/google-provider.ts file. Import BaseProvider from base-provider.ts. Implement class GoogleProvider extends BaseProvider. Override parseText() method to throw new Error('Google provider not yet implemented'). Add proper TypeScript types and JSDoc comments. Export the class as default.",
"status": "pending",
"testStrategy": "Write unit test to verify GoogleProvider extends BaseProvider correctly and throws expected error when parseText is called"
},
{
"id": 3,
"title": "Create MockProvider for testing",
"description": "Implement MockProvider class in tests/mocks directory that simulates provider behavior without making actual API calls",
"dependencies": [],
"details": "Create tests/mocks/mock-provider.ts file. Extend BaseProvider class. Implement parseText() to return predefined mock task data based on input. Add methods to configure mock responses, simulate errors, and track method calls. Include delay simulation for realistic testing. Export class and helper functions for test setup.",
"status": "pending",
"testStrategy": "Test MockProvider returns consistent mock data, can simulate different scenarios (success/failure), and properly tracks method invocations"
},
{
"id": 4,
"title": "Write unit tests for TaskParser",
"description": "Create comprehensive unit tests for TaskParser class covering all methods and edge cases",
"dependencies": [
"125.3"
],
"details": "Create tests/unit/task-parser.test.ts file. Test TaskParser constructor with different providers. Test parseFromText method with valid/invalid inputs. Test error handling for malformed responses. Use MockProvider to simulate API responses. Test task ID generation and structure validation. Ensure all public methods are covered.",
"status": "pending",
"testStrategy": "Achieve 100% code coverage for TaskParser class, test both success and failure paths, verify error messages are appropriate"
},
{
"id": 5,
"title": "Write integration tests for parse-prd flow",
"description": "Create end-to-end integration tests for the complete PRD parsing workflow",
"dependencies": [
"125.3",
"125.4"
],
"details": "Create tests/integration/parse-prd-flow.test.ts file. Test full flow from PRD input to task output. Test with MockProvider simulating successful parsing. Test error scenarios (file not found, parse errors, network failures). Test task dependency resolution. Verify output format matches expected structure. Test with different PRD formats and sizes.",
"status": "pending",
"testStrategy": "Run coverage report to ensure 80% overall coverage, verify all critical paths are tested, ensure tests are deterministic and don't depend on external services"
}
]
}
],
"metadata": {
"created": "2025-08-06T08:51:19.649Z",
"updated": "2025-08-20T21:32:21.837Z",
"description": "Tasks for tm-core-phase-1 context"
}
}
}
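
As a rough illustration of the factory described in task 119 (a switch on the provider name, one dynamic import per case, a `Promise<IAIProvider>` return type, and a descriptive error for unknown providers), a minimal sketch could look like the following. The file paths, import specifiers, and provider class names are assumptions taken from the task notes above, not the final tm-core layout.

```typescript
// Minimal sketch of the ProviderFactory from task 119; paths and class names are assumptions
import type { IAIProvider, IConfiguration } from '../interfaces/index.js';

export type ProviderType = 'anthropic' | 'openai' | 'google';

export class ProviderFactory {
	/** Create a provider lazily via dynamic import so unselected provider modules are never loaded. */
	static async create(
		provider: ProviderType,
		config: IConfiguration
	): Promise<IAIProvider> {
		switch (provider) {
			case 'anthropic': {
				const { AnthropicProvider } = await import('./adapters/anthropic-provider.js');
				return new AnthropicProvider(config);
			}
			case 'openai': {
				const { OpenAIProvider } = await import('./adapters/openai-provider.js');
				return new OpenAIProvider(config);
			}
			case 'google': {
				const { GoogleProvider } = await import('./adapters/google-provider.js');
				return new GoogleProvider(config);
			}
			default:
				throw new Error(`Unknown provider: ${String(provider)}`);
		}
	}
}
```

Dynamic `import()` is what gives the bundling benefit subtask 119.3 calls out: only the selected provider module (and its SDK) is pulled in for a given run.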

15
.vscode/settings.json vendored
View File

@@ -10,5 +10,18 @@
},
"json.format.enable": true,
"json.validate.enable": true
"json.validate.enable": true,
"typescript.tsdk": "node_modules/typescript/lib",
"[typescript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
}
}

50
apps/cli/package.json Normal file
View File

@@ -0,0 +1,50 @@
{
"name": "@tm/cli",
"version": "1.0.0",
"description": "Task Master CLI - Command line interface for task management",
"type": "module",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",
"exports": {
".": {
"types": "./src/index.ts",
"import": "./dist/index.js",
"require": "./dist/index.js"
}
},
"files": ["dist", "README.md"],
"scripts": {
"build": "tsup",
"dev": "tsup --watch",
"typecheck": "tsc --noEmit",
"lint": "biome check src",
"format": "biome format --write src",
"test": "vitest run",
"test:watch": "vitest"
},
"dependencies": {
"@tm/core": "*",
"boxen": "^7.1.1",
"chalk": "^5.3.0",
"cli-table3": "^0.6.5",
"commander": "^12.1.0",
"inquirer": "^9.2.10",
"open": "^10.2.0",
"ora": "^8.1.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@types/inquirer": "^9.0.3",
"@types/node": "^22.10.5",
"tsup": "^8.3.0",
"tsx": "^4.20.4",
"typescript": "^5.7.3",
"vitest": "^2.1.8"
},
"engines": {
"node": ">=18.0.0"
},
"keywords": ["task-master", "cli", "task-management", "productivity"],
"author": "",
"license": "MIT"
}
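
For context, the `"@tm/core": "*"` dependency resolves to the sibling workspace package rather than a published registry version, and other packages in the monorepo import this CLI through the `exports` map above. A minimal sketch of such an import, assuming the package index re-exports the command classes shown later in this diff:

```typescript
// Hypothetical import from another workspace package; the re-exported names are assumptions
import { ListTasksCommand } from '@tm/cli'; // resolved via the "." entry in "exports"

const list = new ListTasksCommand();
console.log(list.name()); // "list", set in the command's constructor
```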

View File

@@ -0,0 +1,514 @@
/**
* @fileoverview Auth command using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import inquirer from 'inquirer';
import ora, { type Ora } from 'ora';
import open from 'open';
import {
AuthManager,
AuthenticationError,
type AuthCredentials
} from '@tm/core/auth';
import * as ui from '../utils/ui.js';
/**
* Result type from auth command
*/
export interface AuthResult {
success: boolean;
action: 'login' | 'logout' | 'status' | 'refresh';
credentials?: AuthCredentials;
message?: string;
}
/**
* AuthCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core's AuthManager
*/
export class AuthCommand extends Command {
private authManager: AuthManager;
private lastResult?: AuthResult;
constructor(name?: string) {
super(name || 'auth');
// Initialize auth manager
this.authManager = AuthManager.getInstance();
// Configure the command with subcommands
this.description('Manage authentication with tryhamster.com');
// Add subcommands
this.addLoginCommand();
this.addLogoutCommand();
this.addStatusCommand();
this.addRefreshCommand();
// Default action shows help
this.action(() => {
this.help();
});
}
/**
* Add login subcommand
*/
private addLoginCommand(): void {
this.command('login')
.description('Authenticate with tryhamster.com')
.action(async () => {
await this.executeLogin();
});
}
/**
* Add logout subcommand
*/
private addLogoutCommand(): void {
this.command('logout')
.description('Logout and clear credentials')
.action(async () => {
await this.executeLogout();
});
}
/**
* Add status subcommand
*/
private addStatusCommand(): void {
this.command('status')
.description('Display authentication status')
.action(async () => {
await this.executeStatus();
});
}
/**
* Add refresh subcommand
*/
private addRefreshCommand(): void {
this.command('refresh')
.description('Refresh authentication token')
.action(async () => {
await this.executeRefresh();
});
}
/**
* Execute login command
*/
private async executeLogin(): Promise<void> {
try {
const result = await this.performInteractiveAuth();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
// Exit cleanly after successful authentication
// Small delay to ensure all output is flushed
setTimeout(() => {
process.exit(0);
}, 100);
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute logout command
*/
private async executeLogout(): Promise<void> {
try {
const result = await this.performLogout();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute status command
*/
private async executeStatus(): Promise<void> {
try {
const result = this.displayStatus();
this.setLastResult(result);
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Execute refresh command
*/
private async executeRefresh(): Promise<void> {
try {
const result = await this.refreshToken();
this.setLastResult(result);
if (!result.success) {
process.exit(1);
}
} catch (error: any) {
this.handleError(error);
process.exit(1);
}
}
/**
* Display authentication status
*/
private displayStatus(): AuthResult {
const credentials = this.authManager.getCredentials();
console.log(chalk.cyan('\n🔐 Authentication Status\n'));
if (credentials) {
console.log(chalk.green('✓ Authenticated'));
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
console.log(chalk.gray(` User ID: ${credentials.userId}`));
console.log(
chalk.gray(` Token Type: ${credentials.tokenType || 'standard'}`)
);
if (credentials.expiresAt) {
const expiresAt = new Date(credentials.expiresAt);
const now = new Date();
const hoursRemaining = Math.floor(
(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
);
if (hoursRemaining > 0) {
console.log(
chalk.gray(
` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
)
);
} else {
console.log(
chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
);
}
} else {
console.log(chalk.gray(' Expires: Never (API key)'));
}
console.log(
chalk.gray(` Saved: ${new Date(credentials.savedAt).toLocaleString()}`)
);
return {
success: true,
action: 'status',
credentials,
message: 'Authenticated'
};
} else {
console.log(chalk.yellow('✗ Not authenticated'));
console.log(
chalk.gray('\n Run "task-master auth login" to authenticate')
);
return {
success: false,
action: 'status',
message: 'Not authenticated'
};
}
}
/**
* Perform logout
*/
private async performLogout(): Promise<AuthResult> {
try {
await this.authManager.logout();
ui.displaySuccess('Successfully logged out');
return {
success: true,
action: 'logout',
message: 'Successfully logged out'
};
} catch (error) {
const message = `Failed to logout: ${(error as Error).message}`;
ui.displayError(message);
return {
success: false,
action: 'logout',
message
};
}
}
/**
* Refresh authentication token
*/
private async refreshToken(): Promise<AuthResult> {
const spinner = ora('Refreshing authentication token...').start();
try {
const credentials = await this.authManager.refreshToken();
spinner.succeed('Token refreshed successfully');
console.log(
chalk.gray(
` New expiration: ${credentials.expiresAt ? new Date(credentials.expiresAt).toLocaleString() : 'Never'}`
)
);
return {
success: true,
action: 'refresh',
credentials,
message: 'Token refreshed successfully'
};
} catch (error) {
spinner.fail('Failed to refresh token');
if ((error as AuthenticationError).code === 'NO_REFRESH_TOKEN') {
ui.displayWarning(
'No refresh token available. Please re-authenticate.'
);
} else {
ui.displayError(`Refresh failed: ${(error as Error).message}`);
}
return {
success: false,
action: 'refresh',
message: `Failed to refresh: ${(error as Error).message}`
};
}
}
/**
* Perform interactive authentication
*/
private async performInteractiveAuth(): Promise<AuthResult> {
ui.displayBanner('Task Master Authentication');
// Check if already authenticated
if (this.authManager.isAuthenticated()) {
const { continueAuth } = await inquirer.prompt([
{
type: 'confirm',
name: 'continueAuth',
message:
'You are already authenticated. Do you want to re-authenticate?',
default: false
}
]);
if (!continueAuth) {
const credentials = this.authManager.getCredentials();
ui.displaySuccess('Using existing authentication');
if (credentials) {
console.log(chalk.gray(` Email: ${credentials.email || 'N/A'}`));
console.log(chalk.gray(` User ID: ${credentials.userId}`));
}
return {
success: true,
action: 'login',
credentials: credentials || undefined,
message: 'Using existing authentication'
};
}
}
try {
// Direct browser authentication - no menu needed
const credentials = await this.authenticateWithBrowser();
ui.displaySuccess('Authentication successful!');
console.log(
chalk.gray(` Logged in as: ${credentials.email || credentials.userId}`)
);
return {
success: true,
action: 'login',
credentials,
message: 'Authentication successful'
};
} catch (error) {
this.handleAuthError(error as AuthenticationError);
return {
success: false,
action: 'login',
message: `Authentication failed: ${(error as Error).message}`
};
}
}
/**
* Authenticate with browser using OAuth 2.0 with PKCE
*/
private async authenticateWithBrowser(): Promise<AuthCredentials> {
let authSpinner: Ora | null = null;
try {
// Use AuthManager's new unified OAuth flow method with callbacks
const credentials = await this.authManager.authenticateWithOAuth({
// Callback to handle browser opening
openBrowser: async (authUrl) => {
await open(authUrl);
},
timeout: 5 * 60 * 1000, // 5 minutes
// Callback when auth URL is ready
onAuthUrl: (authUrl) => {
// Display authentication instructions
console.log(chalk.blue.bold('\n🔐 Browser Authentication\n'));
console.log(chalk.white(' Opening your browser to authenticate...'));
console.log(chalk.gray(" If the browser doesn't open, visit:"));
console.log(chalk.cyan.underline(` ${authUrl}\n`));
},
// Callback when waiting for authentication
onWaitingForAuth: () => {
authSpinner = ora({
text: 'Waiting for authentication...',
spinner: 'dots'
}).start();
},
// Callback on success
onSuccess: () => {
if (authSpinner) {
authSpinner.succeed('Authentication successful!');
}
},
// Callback on error
onError: () => {
if (authSpinner) {
authSpinner.fail('Authentication failed');
}
}
});
return credentials;
} catch (error) {
throw error;
}
}
/**
* Handle authentication errors
*/
private handleAuthError(error: AuthenticationError): void {
console.error(chalk.red(`\n✗ ${error.message}`));
switch (error.code) {
case 'NETWORK_ERROR':
ui.displayWarning(
'Please check your internet connection and try again.'
);
break;
case 'INVALID_CREDENTIALS':
ui.displayWarning('Please check your credentials and try again.');
break;
case 'AUTH_EXPIRED':
ui.displayWarning(
'Your session has expired. Please authenticate again.'
);
break;
default:
if (process.env.DEBUG) {
console.error(chalk.gray(error.stack || ''));
}
}
}
/**
* Handle general errors
*/
private handleError(error: any): void {
if (error instanceof AuthenticationError) {
this.handleAuthError(error);
} else {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
}
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: AuthResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): AuthResult | undefined {
return this.lastResult;
}
/**
* Get current authentication status (for programmatic usage)
*/
isAuthenticated(): boolean {
return this.authManager.isAuthenticated();
}
/**
* Get current credentials (for programmatic usage)
*/
getCredentials(): AuthCredentials | null {
return this.authManager.getCredentials();
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
// No resources to clean up for auth command
// But keeping method for consistency with other commands
}
/**
* Static method to register this command on an existing program
* This is for gradual migration - allows commands.js to use this
*/
static registerOn(program: Command): Command {
const authCommand = new AuthCommand();
program.addCommand(authCommand);
return authCommand;
}
/**
* Alternative registration that returns the command for chaining
* Can also configure the command name if needed
*/
static register(program: Command, name?: string): AuthCommand {
const authCommand = new AuthCommand(name);
program.addCommand(authCommand);
return authCommand;
}
}
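
The static `registerOn`/`register` helpers above are the hook the legacy commands.js can use while commands migrate package by package. A minimal sketch of that wiring, with the import path and program setup as illustrative assumptions rather than the actual commands.js code:

```typescript
// Hypothetical gradual-migration wiring; not the real commands.js
import { Command } from 'commander';
import { AuthCommand } from './commands/auth.command.js'; // path is an assumption

const program = new Command().name('task-master');
const auth = AuthCommand.register(program); // mounts auth login|logout|status|refresh
await program.parseAsync(process.argv);

// For subcommands that do not exit the process (e.g. `auth status`),
// the typed instance keeps the last result available to programmatic callers.
console.log(auth.getLastResult()?.message);
```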

View File

@@ -0,0 +1,327 @@
/**
* @fileoverview ListTasks command using Commander's native class pattern
* Extends Commander.Command for better integration with the framework
*/
import { Command } from 'commander';
import chalk from 'chalk';
import {
createTaskMasterCore,
type Task,
type TaskStatus,
type TaskMasterCore,
TASK_STATUSES,
OUTPUT_FORMATS,
STATUS_ICONS,
type OutputFormat
} from '@tm/core';
import * as ui from '../utils/ui.js';
/**
* Options interface for the list command
*/
export interface ListCommandOptions {
status?: string;
tag?: string;
withSubtasks?: boolean;
format?: OutputFormat;
silent?: boolean;
project?: string;
}
/**
* Result type from list command
*/
export interface ListTasksResult {
tasks: Task[];
total: number;
filtered: number;
tag?: string;
storageType: 'file' | 'api';
}
/**
* ListTasksCommand extending Commander's Command class
* This is a thin presentation layer over @tm/core
*/
export class ListTasksCommand extends Command {
private tmCore?: TaskMasterCore;
private lastResult?: ListTasksResult;
constructor(name?: string) {
super(name || 'list');
// Configure the command
this.description('List tasks with optional filtering')
.alias('ls')
.option('-s, --status <status>', 'Filter by status (comma-separated)')
.option('-t, --tag <tag>', 'Filter by tag')
.option('--with-subtasks', 'Include subtasks in the output')
.option(
'-f, --format <format>',
'Output format (text, json, compact)',
'text'
)
.option('--silent', 'Suppress output (useful for programmatic usage)')
.option('-p, --project <path>', 'Project root directory', process.cwd())
.action(async (options: ListCommandOptions) => {
await this.executeCommand(options);
});
}
/**
* Execute the list command
*/
private async executeCommand(options: ListCommandOptions): Promise<void> {
try {
// Validate options
if (!this.validateOptions(options)) {
process.exit(1);
}
// Initialize tm-core
await this.initializeCore(options.project || process.cwd());
// Get tasks from core
const result = await this.getTasks(options);
// Store result for programmatic access
this.setLastResult(result);
// Display results
if (!options.silent) {
this.displayResults(result, options);
}
} catch (error: any) {
const msg = error?.getSanitizedDetails?.() ?? {
message: error?.message ?? String(error)
};
console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
if (error.stack && process.env.DEBUG) {
console.error(chalk.gray(error.stack));
}
process.exit(1);
}
}
/**
* Validate command options
*/
private validateOptions(options: ListCommandOptions): boolean {
// Validate format
if (
options.format &&
!OUTPUT_FORMATS.includes(options.format as OutputFormat)
) {
console.error(chalk.red(`Invalid format: ${options.format}`));
console.error(chalk.gray(`Valid formats: ${OUTPUT_FORMATS.join(', ')}`));
return false;
}
// Validate status
if (options.status) {
const statuses = options.status.split(',').map((s: string) => s.trim());
for (const status of statuses) {
if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
console.error(chalk.red(`Invalid status: ${status}`));
console.error(
chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
);
return false;
}
}
}
return true;
}
/**
* Initialize TaskMasterCore
*/
private async initializeCore(projectRoot: string): Promise<void> {
if (!this.tmCore) {
this.tmCore = await createTaskMasterCore({ projectPath: projectRoot });
}
}
/**
* Get tasks from tm-core
*/
private async getTasks(
options: ListCommandOptions
): Promise<ListTasksResult> {
if (!this.tmCore) {
throw new Error('TaskMasterCore not initialized');
}
// Build filter
const filter =
options.status && options.status !== 'all'
? {
status: options.status
.split(',')
.map((s: string) => s.trim() as TaskStatus)
}
: undefined;
// Call tm-core
const result = await this.tmCore.getTaskList({
tag: options.tag,
filter,
includeSubtasks: options.withSubtasks
});
return result as ListTasksResult;
}
/**
* Display results based on format
*/
private displayResults(
result: ListTasksResult,
options: ListCommandOptions
): void {
const format = (options.format || 'text') as OutputFormat | 'text';
switch (format) {
case 'json':
this.displayJson(result);
break;
case 'compact':
this.displayCompact(result.tasks, options.withSubtasks);
break;
case 'text':
default:
this.displayText(result, options.withSubtasks);
break;
}
}
/**
* Display in JSON format
*/
private displayJson(data: ListTasksResult): void {
console.log(
JSON.stringify(
{
tasks: data.tasks,
metadata: {
total: data.total,
filtered: data.filtered,
tag: data.tag,
storageType: data.storageType
}
},
null,
2
)
);
}
/**
* Display in compact format
*/
private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
tasks.forEach((task) => {
const icon = STATUS_ICONS[task.status];
console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);
if (withSubtasks && task.subtasks?.length) {
task.subtasks.forEach((subtask) => {
const subIcon = STATUS_ICONS[subtask.status];
console.log(
` ${chalk.gray(`${task.id}.${subtask.id}`)} ${subIcon} ${chalk.gray(subtask.title)}`
);
});
}
});
}
/**
* Display in text format with tables
*/
private displayText(data: ListTasksResult, withSubtasks?: boolean): void {
const { tasks, total, filtered, tag, storageType } = data;
// Header
ui.displayBanner(`Task List${tag ? ` (${tag})` : ''}`);
// Statistics
console.log(chalk.blue.bold('\n📊 Statistics:\n'));
console.log(` Total tasks: ${chalk.cyan(total)}`);
console.log(` Filtered: ${chalk.cyan(filtered)}`);
if (tag) {
console.log(` Tag: ${chalk.cyan(tag)}`);
}
console.log(` Storage: ${chalk.cyan(storageType)}`);
// No tasks message
if (tasks.length === 0) {
ui.displayWarning('No tasks found matching the criteria.');
return;
}
// Task table
console.log(chalk.blue.bold(`\n📋 Tasks (${tasks.length}):\n`));
console.log(
ui.createTaskTable(tasks, {
showSubtasks: withSubtasks,
showDependencies: true
})
);
// Progress bar
const completedCount = tasks.filter(
(t: Task) => t.status === 'done'
).length;
console.log(chalk.blue.bold('\n📊 Overall Progress:\n'));
console.log(` ${ui.createProgressBar(completedCount, tasks.length)}`);
}
/**
* Set the last result for programmatic access
*/
private setLastResult(result: ListTasksResult): void {
this.lastResult = result;
}
/**
* Get the last result (for programmatic usage)
*/
getLastResult(): ListTasksResult | undefined {
return this.lastResult;
}
/**
* Clean up resources
*/
async cleanup(): Promise<void> {
if (this.tmCore) {
await this.tmCore.close();
this.tmCore = undefined;
}
}
/**
* Static method to register this command on an existing program
* This is for gradual migration - allows commands.js to use this
*/
static registerOn(program: Command): Command {
const listCommand = new ListTasksCommand();
program.addCommand(listCommand);
return listCommand;
}
/**
* Alternative registration that returns the command for chaining
* Can also configure the command name if needed
*/
static register(program: Command, name?: string): ListTasksCommand {
const listCommand = new ListTasksCommand(name);
program.addCommand(listCommand);
return listCommand;
}
}

19
apps/cli/src/index.ts Normal file
View File

@@ -0,0 +1,19 @@
/**
* @fileoverview Main entry point for @tm/cli package
* Exports all public APIs for the CLI presentation layer
*/
// Commands
export { ListTasksCommand } from './commands/list.command.js';
export { AuthCommand } from './commands/auth.command.js';
// UI utilities (for other commands to use)
export * as ui from './utils/ui.js';
// Re-export commonly used types from tm-core
export type {
Task,
TaskStatus,
TaskPriority,
TaskMasterCore
} from '@tm/core';

326
apps/cli/src/utils/ui.ts Normal file
View File

@@ -0,0 +1,326 @@
/**
* @fileoverview UI utilities for Task Master CLI
* Provides formatting, display, and visual components for the command line interface
*/
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import type { Task, TaskStatus, TaskPriority } from '@tm/core';
/**
* Get colored status display with ASCII icons (matches scripts/modules/ui.js style)
*/
export function getStatusWithColor(
status: TaskStatus,
forTable: boolean = false
): string {
const statusConfig = {
done: {
color: chalk.green,
icon: String.fromCharCode(8730),
tableIcon: String.fromCharCode(8730)
}, // √
pending: { color: chalk.yellow, icon: 'o', tableIcon: 'o' },
'in-progress': {
color: chalk.hex('#FFA500'),
icon: String.fromCharCode(9654),
tableIcon: '>'
}, // ▶
deferred: { color: chalk.gray, icon: 'x', tableIcon: 'x' },
blocked: { color: chalk.red, icon: '!', tableIcon: '!' },
review: { color: chalk.magenta, icon: '?', tableIcon: '?' },
cancelled: { color: chalk.gray, icon: 'X', tableIcon: 'X' }
};
const config = statusConfig[status] || {
color: chalk.red,
icon: 'X',
tableIcon: 'X'
};
// Use simple ASCII characters for stable display
const simpleIcons = {
done: String.fromCharCode(8730), // √
pending: 'o',
'in-progress': '>',
deferred: 'x',
blocked: '!',
review: '?',
cancelled: 'X'
};
const icon = forTable ? simpleIcons[status] || 'X' : config.icon;
return config.color(`${icon} ${status}`);
}
/**
* Get colored priority display
*/
export function getPriorityWithColor(priority: TaskPriority): string {
const priorityColors: Record<TaskPriority, (text: string) => string> = {
critical: chalk.red.bold,
high: chalk.red,
medium: chalk.yellow,
low: chalk.gray
};
const colorFn = priorityColors[priority] || chalk.white;
return colorFn(priority);
}
/**
* Get colored complexity display
*/
export function getComplexityWithColor(complexity: number | string): string {
const score =
typeof complexity === 'string' ? parseInt(complexity, 10) : complexity;
if (isNaN(score)) {
return chalk.gray('N/A');
}
if (score >= 8) {
return chalk.red.bold(`${score} (High)`);
} else if (score >= 5) {
return chalk.yellow(`${score} (Medium)`);
} else {
return chalk.green(`${score} (Low)`);
}
}
/**
* Truncate text to specified length
*/
export function truncate(text: string, maxLength: number): string {
if (text.length <= maxLength) {
return text;
}
return text.substring(0, maxLength - 3) + '...';
}
/**
* Create a progress bar
*/
export function createProgressBar(
completed: number,
total: number,
width: number = 30
): string {
if (total === 0) {
return chalk.gray('No tasks');
}
const percentage = Math.round((completed / total) * 100);
const filled = Math.round((completed / total) * width);
const empty = width - filled;
const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
}
/**
* Display a fancy banner
*/
export function displayBanner(title: string = 'Task Master'): void {
console.log(
boxen(chalk.white.bold(title), {
padding: 1,
margin: { top: 1, bottom: 1 },
borderStyle: 'round',
borderColor: 'blue',
textAlignment: 'center'
})
);
}
/**
* Display an error message (matches scripts/modules/ui.js style)
*/
export function displayError(message: string, details?: string): void {
console.error(
boxen(
chalk.red.bold('X Error: ') +
chalk.white(message) +
(details ? '\n\n' + chalk.gray(details) : ''),
{
padding: 1,
borderStyle: 'round',
borderColor: 'red'
}
)
);
}
/**
* Display a success message
*/
export function displaySuccess(message: string): void {
console.log(
boxen(
chalk.green.bold(String.fromCharCode(8730) + ' ') + chalk.white(message),
{
padding: 1,
borderStyle: 'round',
borderColor: 'green'
}
)
);
}
/**
* Display a warning message
*/
export function displayWarning(message: string): void {
console.log(
boxen(chalk.yellow.bold('⚠ ') + chalk.white(message), {
padding: 1,
borderStyle: 'round',
borderColor: 'yellow'
})
);
}
/**
* Display info message
*/
export function displayInfo(message: string): void {
console.log(
boxen(chalk.blue.bold('i ') + chalk.white(message), {
padding: 1,
borderStyle: 'round',
borderColor: 'blue'
})
);
}
/**
* Format dependencies with their status
*/
export function formatDependenciesWithStatus(
dependencies: string[] | number[],
tasks: Task[]
): string {
if (!dependencies || dependencies.length === 0) {
return chalk.gray('none');
}
const taskMap = new Map(tasks.map((t) => [t.id.toString(), t]));
return dependencies
.map((depId) => {
const task = taskMap.get(depId.toString());
if (!task) {
return chalk.red(`${depId} (not found)`);
}
const statusIcon =
task.status === 'done'
? '✓'
: task.status === 'in-progress'
? '►'
: '○';
return `${depId}${statusIcon}`;
})
.join(', ');
}
/**
* Create a task table for display
*/
export function createTaskTable(
tasks: Task[],
options?: {
showSubtasks?: boolean;
showComplexity?: boolean;
showDependencies?: boolean;
}
): string {
const {
showSubtasks = false,
showComplexity = false,
showDependencies = true
} = options || {};
// Calculate dynamic column widths based on terminal width
const terminalWidth = process.stdout.columns || 100;
const baseColWidths = showComplexity
? [8, Math.floor(terminalWidth * 0.35), 18, 12, 15, 12] // ID, Title, Status, Priority, Dependencies, Complexity
: [8, Math.floor(terminalWidth * 0.4), 18, 12, 20]; // ID, Title, Status, Priority, Dependencies
const headers = [
chalk.blue.bold('ID'),
chalk.blue.bold('Title'),
chalk.blue.bold('Status'),
chalk.blue.bold('Priority')
];
const colWidths = baseColWidths.slice(0, 4);
if (showDependencies) {
headers.push(chalk.blue.bold('Dependencies'));
colWidths.push(baseColWidths[4]);
}
if (showComplexity) {
headers.push(chalk.blue.bold('Complexity'));
colWidths.push(baseColWidths[5] || 12);
}
const table = new Table({
head: headers,
style: { head: [], border: [] },
colWidths,
wordWrap: true
});
tasks.forEach((task) => {
const row: string[] = [
chalk.cyan(task.id.toString()),
truncate(task.title, colWidths[1] - 3),
getStatusWithColor(task.status, true), // Use table version
getPriorityWithColor(task.priority)
];
if (showDependencies) {
row.push(formatDependenciesWithStatus(task.dependencies, tasks));
}
if (showComplexity && 'complexity' in task) {
row.push(getComplexityWithColor(task.complexity as number | string));
}
table.push(row);
// Add subtasks if requested
if (showSubtasks && task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
const subRow: string[] = [
chalk.gray(` └─ ${subtask.id}`),
chalk.gray(truncate(subtask.title, colWidths[1] - 6)),
chalk.gray(getStatusWithColor(subtask.status, true)),
chalk.gray(subtask.priority || 'medium')
];
if (showDependencies) {
subRow.push(
chalk.gray(
subtask.dependencies && subtask.dependencies.length > 0
? subtask.dependencies.map((dep) => String(dep)).join(', ')
: 'None'
)
);
}
if (showComplexity) {
subRow.push(chalk.gray('--'));
}
table.push(subRow);
});
}
});
return table.toString();
}

27
apps/cli/tsconfig.json Normal file
View File

@@ -0,0 +1,27 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"lib": ["ES2022"],
"moduleResolution": "bundler",
"allowSyntheticDefaultImports": true,
"esModuleInterop": true,
"strict": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"outDir": "./dist",
"rootDir": "./src",
"resolveJsonModule": true,
"allowJs": false,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"types": ["node"]
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "tests"]
}

15
apps/cli/tsup.config.ts Normal file
View File

@@ -0,0 +1,15 @@
import { defineConfig } from 'tsup';
export default defineConfig({
entry: ['src/index.ts'],
format: ['esm'],
target: 'node18',
splitting: false,
sourcemap: true,
clean: true,
dts: true,
shims: true,
esbuildOptions(options) {
options.platform = 'node';
}
});

View File

@@ -9,17 +9,9 @@
"engines": {
"vscode": "^1.93.0"
},
"categories": [
"AI",
"Visualization",
"Education",
"Other"
],
"categories": ["AI", "Visualization", "Education", "Other"],
"main": "./dist/extension.js",
"activationEvents": [
"onStartupFinished",
"workspaceContains:.taskmaster/**"
],
"activationEvents": ["onStartupFinished", "workspaceContains:.taskmaster/**"],
"contributes": {
"viewsContainers": {
"activitybar": [
@@ -147,11 +139,7 @@
},
"taskmaster.ui.theme": {
"type": "string",
"enum": [
"auto",
"light",
"dark"
],
"enum": ["auto", "light", "dark"],
"default": "auto",
"description": "UI theme preference"
},
@@ -212,12 +200,7 @@
},
"taskmaster.debug.logLevel": {
"type": "string",
"enum": [
"error",
"warn",
"info",
"debug"
],
"enum": ["error", "warn", "info", "debug"],
"default": "info",
"description": "Logging level"
},

View File

@@ -0,0 +1,131 @@
# CLI Commander Class Pattern
## Overview
We're using Commander.js's native class pattern instead of custom abstractions. This is cleaner, more maintainable, and uses the framework as designed.
## Architecture
```
@tm/core (Business Logic) @tm/cli (Presentation)
┌─────────────────────┐ ┌──────────────────────────┐
│ TaskMasterCore │◄───────────│ ListTasksCommand │
│ - getTaskList() │ │ extends Commander.Command│
│ - getTask() │ │ - display logic only │
│ - getNextTask() │ │ - formatting │
└─────────────────────┘ └──────────────────────────┘
▲ ▲
│ │
└──────── Gets Data ──────────────────┘
Displays Data
```
## Implementation
### Command Class Pattern
```typescript
// apps/cli/src/commands/list-tasks-commander.ts
export class ListTasksCommand extends Command {
constructor(name?: string) {
super(name || 'list');
this
.description('List tasks')
.option('-s, --status <status>', 'Filter by status')
.action(async (options) => {
// 1. Get data from @tm/core
const result = await this.tmCore.getTaskList(options);
// 2. Display data (presentation only)
this.displayResults(result, options);
});
}
}
```
### Main CLI Class
```typescript
// apps/cli/src/cli-commander.ts
class TaskMasterCLI extends Command {
createCommand(name?: string): Command {
switch (name) {
case 'list':
return new ListTasksCommand(name);
default:
return new Command(name);
}
}
}
```
## Integration with Existing Scripts
### Gradual Migration Path
```javascript
// scripts/modules/commands.js
// OLD WAY (keep working during migration)
program
.command('old-list')
.action(async (options) => {
await listTasksV2(...);
});
// NEW WAY (add alongside old)
import { ListTasksCommand } from '@tm/cli';
program.addCommand(new ListTasksCommand());
```
### Benefits
1. **No Custom Abstractions**: Using Commander.js as designed
2. **Clean Separation**: Business logic in core, presentation in CLI
3. **Gradual Migration**: Can migrate one command at a time
4. **Type Safety**: Full TypeScript support with Commander types
5. **Framework Native**: Better documentation, examples, and community support
## Migration Steps
1. **Phase 1**: Build command classes in @tm/cli (current)
2. **Phase 2**: Import in scripts/modules/commands.js
3. **Phase 3**: Replace old implementations one by one
4. **Phase 4**: Remove old code when all migrated
## Example Usage
### In New Code
```javascript
import { ListTasksCommand } from '@tm/cli';
const program = new Command();
program.addCommand(new ListTasksCommand());
```
### In Existing Scripts
```javascript
// Gradual adoption
const listCmd = new ListTasksCommand();
program.addCommand(listCmd);
```
### Programmatic Usage
```javascript
const listCommand = new ListTasksCommand();
await listCommand.parseAsync(['node', 'script', '--format', 'json']);
```
## POC Status
✅ **Completed**:
- ListTasksCommand extends Commander.Command
- Clean separation of concerns
- Integration examples
- Build configuration
🚧 **Next Steps**:
- Migrate more commands
- Update existing scripts to use new classes
- Remove old implementations gradually
This POC proves the pattern works and provides a clean migration path!

View File

@@ -7,6 +7,7 @@ import logger from './logger.js';
import { registerTaskMasterTools } from './tools/index.js';
import ProviderRegistry from '../../src/provider-registry/index.js';
import { MCPProvider } from './providers/mcp-provider.js';
import packageJson from '../../package.json' with { type: 'json' };
// Load environment variables
dotenv.config();
@@ -20,10 +21,6 @@ const __dirname = path.dirname(__filename);
*/
class TaskMasterMCPServer {
constructor() {
// Get version from package.json using synchronous fs
const packagePath = path.join(__dirname, '../../package.json');
const packageJson = JSON.parse(fs.readFileSync(packagePath, 'utf8'));
this.options = {
name: 'Task Master MCP Server',
version: packageJson.version

View File

@@ -8,6 +8,7 @@ import path from 'path';
import fs from 'fs';
import { contextManager } from '../core/context-manager.js'; // Import the singleton
import { fileURLToPath } from 'url';
import packageJson from '../../../package.json' with { type: 'json' };
import { getCurrentTag } from '../../../scripts/modules/utils.js';
// Import path utilities to ensure consistent path resolution
@@ -31,33 +32,12 @@ function getVersionInfo() {
return cachedVersionInfo;
}
try {
// Navigate to the project root from the tools directory
const packageJsonPath = path.join(
path.dirname(__filename),
'../../../package.json'
);
if (fs.existsSync(packageJsonPath)) {
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf-8'));
// Use the imported packageJson directly
cachedVersionInfo = {
version: packageJson.version,
name: packageJson.name
version: packageJson.version || 'unknown',
name: packageJson.name || 'task-master-ai'
};
return cachedVersionInfo;
}
cachedVersionInfo = {
version: 'unknown',
name: 'task-master-ai'
};
return cachedVersionInfo;
} catch (error) {
// Fallback version info if package.json can't be read
cachedVersionInfo = {
version: 'unknown',
name: 'task-master-ai'
};
return cachedVersionInfo;
}
}
/**

5822
package-lock.json generated

File diff suppressed because it is too large

View File

@@ -5,26 +5,31 @@
"main": "index.js",
"type": "module",
"bin": {
"task-master": "bin/task-master.js",
"task-master-mcp": "mcp-server/server.js",
"task-master-ai": "mcp-server/server.js"
"task-master": "dist/task-master.js",
"task-master-mcp": "dist/mcp-server.js",
"task-master-ai": "dist/mcp-server.js"
},
"workspaces": [
"apps/*",
"."
],
"workspaces": ["apps/*", "packages/*", "."],
"scripts": {
"build": "npm run build:packages && tsup",
"dev": "npm run build:packages && npm link && (npm run dev:packages & tsup --watch --onSuccess 'echo Build complete && npm link')",
"dev:packages": "(cd packages/tm-core && npm run dev) & (cd apps/cli && npm run dev) & wait",
"dev:core": "cd packages/tm-core && npm run dev",
"dev:cli": "cd apps/cli && npm run dev",
"build:packages": "npm run build:core && npm run build:cli",
"build:core": "cd packages/tm-core && npm run build",
"build:cli": "cd apps/cli && npm run build",
"test": "node --experimental-vm-modules node_modules/.bin/jest",
"test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
"test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
"test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
"test:e2e": "./tests/e2e/run_e2e.sh",
"test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
"prepare": "chmod +x bin/task-master.js mcp-server/server.js",
"postpack": "chmod +x dist/task-master.js dist/mcp-server.js",
"changeset": "changeset",
"release": "changeset publish",
"inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
"mcp-server": "node mcp-server/server.js",
"inspector": "npx @modelcontextprotocol/inspector node dist/mcp-server.js",
"mcp-server": "node dist/mcp-server.js",
"format-check": "biome format .",
"format": "biome format . --write"
},
@@ -104,24 +109,18 @@
"bugs": {
"url": "https://github.com/eyaltoledano/claude-task-master/issues"
},
"files": [
"scripts/**",
"assets/**",
".cursor/**",
"README-task-master.md",
"index.js",
"bin/**",
"mcp-server/**",
"src/**"
],
"files": ["dist/**", "README-task-master.md", "README.md", "LICENSE"],
"overrides": {
"node-fetch": "^2.6.12",
"whatwg-url": "^11.0.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@changesets/changelog-github": "^0.5.1",
"@changesets/cli": "^2.28.1",
"dotenv-mono": "^1.5.1",
"@types/jest": "^29.5.14",
"execa": "^8.0.1",
"ink": "^5.0.1",
@@ -130,6 +129,8 @@
"mock-fs": "^5.5.0",
"prettier": "^3.5.3",
"supertest": "^7.1.0",
"tsx": "^4.16.2"
"tsup": "^8.5.0",
"tsx": "^4.16.2",
"typescript": "^5.9.2"
}
}

83
packages/tm-core/.gitignore vendored Normal file
View File

@@ -0,0 +1,83 @@
# Dependencies
node_modules/
*.pnp
.pnp.js
# Build output
dist/
build/
*.tsbuildinfo
# Coverage reports
coverage/
*.lcov
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# nyc test coverage
.nyc_output
# Dependency directories
jspm_packages/
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

View File

@@ -0,0 +1,70 @@
# Changelog
All notable changes to the @task-master/tm-core package will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Initial package structure and configuration
- TypeScript support with strict mode
- Dual ESM/CJS build system with tsup
- Jest testing framework with TypeScript support
- ESLint and Prettier for code quality
- Modular architecture with barrel exports
- Placeholder implementations for all modules
- Comprehensive documentation and README
### Development Infrastructure
- tsup configuration for dual format builds
- Jest configuration with ESM support
- ESLint configuration with TypeScript rules
- Prettier configuration for consistent formatting
- Complete package.json with all required fields
- TypeScript configuration with strict settings
- .gitignore for development files
### Package Structure
- `src/types/` - TypeScript type definitions (placeholder)
- `src/providers/` - AI provider implementations (placeholder)
- `src/storage/` - Storage layer abstractions (placeholder)
- `src/parser/` - Task parsing utilities (placeholder)
- `src/utils/` - Common utility functions (placeholder)
- `src/errors/` - Custom error classes (placeholder)
- `tests/` - Test directories and setup
## [1.0.0] - TBD
### Planned Features
- Complete TypeScript type system
- AI provider implementations
- Storage adapters
- Task parsing capabilities
- Comprehensive utility functions
- Custom error handling
- Full test coverage
- Complete documentation
---
## Release Notes
### Version 1.0.0 (Coming Soon)
This will be the first stable release of tm-core with complete implementations of all modules. Currently, all modules contain placeholder implementations to establish the package structure and enable development of dependent packages.
### Development Status
- ✅ Package structure and configuration
- ✅ Build and test infrastructure
- ✅ Development tooling setup
- 🚧 TypeScript types implementation (Task 116)
- 🚧 AI provider system (Task 117)
- 🚧 Storage layer (Task 118)
- 🚧 Task parser (Task 119)
- 🚧 Utility functions (Task 120)
- 🚧 Error handling (Task 121)
- 🚧 Configuration system (Task 122)
- 🚧 Testing infrastructure (Task 123)
- 🚧 Documentation (Task 124)
- 🚧 Package finalization (Task 125)

View File

@@ -0,0 +1,194 @@
# GetTaskList POC Status
## ✅ What We've Accomplished
We've successfully implemented a complete end-to-end proof of concept for the `getTaskList` functionality with improved separation of concerns:
### 1. Clean Architecture Layers with Proper Separation
#### Configuration Layer (ConfigManager)
- Single source of truth for configuration
- Manages active tag and storage settings
- Handles config.json persistence
- Determines storage type (file vs API)
#### Service Layer (TaskService)
- Core business logic and operations
- `getTaskList()` method that coordinates between ConfigManager and Storage
- Handles all filtering and task processing
- Manages storage lifecycle
#### Facade Layer (TaskMasterCore)
- Simplified API for consumers
- Delegates to TaskService for operations
- Backwards compatible `listTasks()` method
- New `getTaskList()` method (preferred naming)
#### Domain Layer (Entities)
- `TaskEntity` with business logic
- Validation and status transitions
- Dependency checking (`canComplete()`)
#### Infrastructure Layer (Storage)
- `IStorage` interface for abstraction
- `FileStorage` for local files (handles 'master' tag correctly)
- `ApiStorage` for Hamster integration
- `StorageFactory` for automatic selection
- **NO business logic** - only persistence
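To make the layering concrete, here is a minimal sketch of how a single `getTaskList` call flows through these layers. The interface and method names below mirror the descriptions above and are simplified assumptions, not the exact tm-core signatures:
```typescript
// Sketch only: ConfigManager resolves the tag, Storage loads raw data,
// and the service applies business rules. Names are simplified assumptions.
interface ConfigManager {
	getActiveTag(): string;
}

interface IStorage {
	loadTasks(tag: string): Promise<Array<{ id: string; status: string }>>;
}

class TaskService {
	constructor(
		private config: ConfigManager,
		private storage: IStorage
	) {}

	async getTaskList(options: { tag?: string; status?: string[] } = {}) {
		// 1. Configuration layer decides which tag is active
		const tag = options.tag ?? this.config.getActiveTag();
		// 2. Infrastructure layer only fetches data - no business logic
		const all = await this.storage.loadTasks(tag);
		// 3. Service layer applies filtering (business logic)
		const tasks = options.status
			? all.filter((t) => options.status!.includes(t.status))
			: all;
		return { tasks, total: all.length, filtered: tasks.length, tag };
	}
}
```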
### 2. Storage Abstraction Benefits
```typescript
// Same API works with different backends
const fileCore = createTaskMasterCore(path, {
storage: { type: 'file' }
});
const apiCore = createTaskMasterCore(path, {
storage: {
type: 'api',
apiEndpoint: 'https://hamster.ai',
apiAccessToken: 'xxx'
}
});
// Identical usage
const result = await core.listTasks({
filter: { status: 'pending' }
});
```
### 3. Type Safety Throughout
- Full TypeScript implementation
- Comprehensive interfaces
- Type-safe filters and options
- Proper error types
### 4. Testing Coverage
- 50 tests passing
- Unit tests for core components
- Integration tests for listTasks
- Mock implementations for testing
## 📊 Architecture Validation
### ✅ Separation of Concerns
- **CLI** handles UI/formatting only
- **tm-core** handles business logic
- **Storage** handles persistence
- Each layer is independently testable
### ✅ Extensibility
- Easy to add new storage types (database, S3, etc.)
- New filters can be added to `TaskFilter`
- AI providers follow same pattern (BaseProvider)
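To illustrate that extensibility point, a new backend only needs to satisfy the storage interface and be registered with the factory; the two-method `IStorage` shown here is a reduced assumption of the real interface:
```typescript
// Sketch: adding a new storage backend without touching business logic.
// The reduced IStorage shape here is an assumption for illustration.
interface IStorage {
	loadTasks(tag: string): Promise<unknown[]>;
	saveTasks(tag: string, tasks: unknown[]): Promise<void>;
}

class InMemoryStorage implements IStorage {
	private store = new Map<string, unknown[]>();

	async loadTasks(tag: string): Promise<unknown[]> {
		return this.store.get(tag) ?? [];
	}

	async saveTasks(tag: string, tasks: unknown[]): Promise<void> {
		this.store.set(tag, tasks);
	}
}
// TaskService and TaskMasterCore stay unchanged; only StorageFactory
// would need to learn about the new storage type.
```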
### ✅ Error Handling
- Consistent `TaskMasterError` with codes
- Context preservation
- User-friendly messages
### ✅ Performance Considerations
- File locking for concurrent access
- Atomic writes with temp files
- Retry logic with exponential backoff
- Request timeout handling
## 🔄 Integration Path
### Current CLI Structure
```javascript
// scripts/modules/task-manager/list-tasks.js
listTasks(tasksPath, statusFilter, reportPath, withSubtasks, outputFormat, context)
// Directly reads files, handles all logic
```
### New Integration Structure
```javascript
// Using tm-core with proper separation of concerns
const tmCore = createTaskMasterCore(projectPath, config);
const result = await tmCore.getTaskList(options);
// CLI only handles formatting result for display
// Under the hood:
// 1. ConfigManager determines active tag and storage type
// 2. TaskService uses storage to fetch tasks for the tag
// 3. TaskService applies business logic and filters
// 4. Storage only handles reading/writing - no business logic
```
## 📈 Metrics
### Code Quality
- **Clean Code**: Methods under 40 lines ✅
- **Single Responsibility**: Each class has one purpose ✅
- **DRY**: No code duplication ✅
- **Type Coverage**: 100% TypeScript ✅
### Test Coverage
- **Unit Tests**: BaseProvider, TaskEntity ✅
- **Integration Tests**: Full listTasks flow ✅
- **Storage Tests**: File and API operations ✅
## 🎯 POC Success Criteria
| Criteria | Status | Notes |
|----------|--------|-------|
| Clean architecture | ✅ | Clear layer separation |
| Storage abstraction | ✅ | File + API storage working |
| Type safety | ✅ | Full TypeScript |
| Error handling | ✅ | Comprehensive error system |
| Testing | ✅ | 50 tests passing |
| Performance | ✅ | Optimized with caching, batching |
| Documentation | ✅ | Architecture docs created |
## 🚀 Next Steps
### Immediate (Complete ListTasks Integration)
1. Create npm script to test integration example
2. Add mock Hamster API for testing
3. Create migration guide for CLI
### Phase 1 Remaining Work
Based on this POC success, implement remaining operations:
- `addTask()` - Add new tasks
- `updateTask()` - Update existing tasks
- `deleteTask()` - Remove tasks
- `expandTask()` - Break into subtasks
- Tag management operations
### Phase 2 (AI Integration)
- Complete AI provider implementations
- Task generation from PRD
- Task complexity analysis
- Auto-expansion of tasks
## 💡 Lessons Learned
### What Worked Well
1. **Separation of Concerns** - ConfigManager, TaskService, and Storage have clear responsibilities
2. **Storage Factory Pattern** - Clean abstraction for multiple backends
3. **Entity Pattern** - Business logic encapsulation
4. **Template Method Pattern** - BaseProvider for AI providers
5. **Comprehensive Error Handling** - TaskMasterError with context
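For reference, the template-method point above works roughly like this; the hook and method names are assumptions for illustration rather than the actual BaseProvider API:
```typescript
// Illustration of the template-method idea: the base class owns the fixed
// workflow (validation, retries with exponential backoff), while concrete
// providers implement only the provider-specific hook.
abstract class BaseProvider {
	async generateCompletion(prompt: string): Promise<string> {
		this.validatePrompt(prompt);
		for (let attempt = 1; attempt <= 3; attempt++) {
			try {
				return await this.callApi(prompt);
			} catch (error) {
				if (attempt === 3) throw error;
				await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
			}
		}
		throw new Error('unreachable');
	}

	protected validatePrompt(prompt: string): void {
		if (!prompt.trim()) throw new Error('Prompt must not be empty');
	}

	// Anthropic, OpenAI, Perplexity, ... each implement only this hook
	protected abstract callApi(prompt: string): Promise<string>;
}
```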
### Improvements Made
1. Migrated from Jest to Vitest (faster)
2. Replaced ESLint/Prettier with Biome (unified tooling)
3. Fixed conflicting interface definitions
4. Added proper TypeScript exports
5. **Better Architecture** - Separated configuration, business logic, and persistence
6. **Proper Tag Handling** - 'master' tag maps correctly to tasks.json
7. **Clean Storage Layer** - Removed business logic from storage
## ✨ Conclusion
The ListTasks POC successfully validates our architecture. The structure is:
- **Clean and maintainable**
- **Properly abstracted**
- **Well-tested**
- **Ready for extension**
We can confidently proceed with implementing the remaining functionality following this same pattern.

226
packages/tm-core/README.md Normal file
View File

@@ -0,0 +1,226 @@
# @task-master/tm-core
Core library for Task Master AI - providing task management and orchestration capabilities with TypeScript support.
## Overview
`tm-core` is the foundational library that powers Task Master AI's task management system. It provides a comprehensive set of tools for creating, managing, and orchestrating tasks with AI integration.
## Features
- **TypeScript-first**: Built with full TypeScript support and strict type checking
- **Dual Format**: Supports both ESM and CommonJS with automatic format detection
- **Modular Architecture**: Clean separation of concerns with dedicated modules for different functionality
- **AI Provider Integration**: Pluggable AI provider system for task generation and management
- **Flexible Storage**: Abstracted storage layer supporting different persistence strategies
- **Task Parsing**: Advanced parsing capabilities for various task definition formats
- **Error Handling**: Comprehensive error system with specific error types
- **Testing**: Complete test coverage with Jest and TypeScript support
## Installation
```bash
npm install @task-master/tm-core
```
## Usage
### Basic Usage
```typescript
import { generateTaskId, PlaceholderTask } from '@task-master/tm-core';
// Generate a unique task ID
const taskId = generateTaskId();
// Create a task (coming soon - full implementation)
const task: PlaceholderTask = {
id: taskId,
title: 'My Task',
status: 'pending',
priority: 'medium'
};
```
### Modular Imports
You can import specific modules to reduce bundle size:
```typescript
// Import types only
import type { TaskId, TaskStatus } from '@task-master/tm-core/types';
// Import utilities
import { generateTaskId, formatDate } from '@task-master/tm-core/utils';
// Import providers (AI providers coming soon)
// import { AIProvider } from '@task-master/tm-core/providers';
// Import storage
import { PlaceholderStorage } from '@task-master/tm-core/storage';
// Import parsers
import { PlaceholderParser } from '@task-master/tm-core/parser';
// Import errors
import { TmCoreError, TaskNotFoundError } from '@task-master/tm-core/errors';
```
## Architecture
The library is organized into several key modules:
- **types/**: TypeScript type definitions and interfaces
- **providers/**: AI provider implementations for task generation
- **storage/**: Storage adapters for different persistence strategies
- **parser/**: Task parsing utilities for various formats
- **utils/**: Common utility functions and helpers
- **errors/**: Custom error classes and error handling
## Development
### Prerequisites
- Node.js >= 18.0.0
- npm or yarn
### Setup
```bash
# Install dependencies
npm install
# Build the library
npm run build
# Run tests
npm test
# Run tests with coverage
npm run test:coverage
# Lint code
npm run lint
# Format code
npm run format
```
### Scripts
- `build`: Build the library for both ESM and CJS formats
- `build:watch`: Build in watch mode for development
- `test`: Run the test suite
- `test:watch`: Run tests in watch mode
- `test:coverage`: Run tests with coverage reporting
- `lint`: Lint TypeScript files
- `lint:fix`: Lint and auto-fix issues
- `format`: Format code with Prettier
- `format:check`: Check code formatting
- `typecheck`: Type-check without emitting files
- `clean`: Clean build artifacts
- `dev`: Development mode with watch
## ESM and CommonJS Support
This package supports both ESM and CommonJS formats automatically:
```javascript
// ESM
import { generateTaskId } from '@task-master/tm-core';
// CommonJS
const { generateTaskId } = require('@task-master/tm-core');
```
## Roadmap
This is the initial package structure. The following features are planned for implementation:
### Task 116: TypeScript Types
- [ ] Complete type definitions for tasks, projects, and configurations
- [ ] Zod schema validation
- [ ] Generic type utilities
### Task 117: AI Provider System
- [ ] Base provider interface
- [ ] Anthropic Claude integration
- [ ] OpenAI integration
- [ ] Perplexity integration
- [ ] Provider factory and registry
### Task 118: Storage Layer
- [ ] File system storage adapter
- [ ] Memory storage adapter
- [ ] Storage interface and factory
### Task 119: Task Parser
- [ ] PRD parser implementation
- [ ] Markdown parser
- [ ] JSON task format parser
- [ ] Validation utilities
### Task 120: Utility Functions
- [ ] Task ID generation
- [ ] Date formatting
- [ ] Validation helpers
- [ ] File system utilities
### Task 121: Error Handling
- [ ] Task-specific errors
- [ ] Storage errors
- [ ] Provider errors
- [ ] Validation errors
### Task 122: Configuration System
- [ ] Configuration schema
- [ ] Default configurations
- [ ] Environment variable support
### Task 123: Testing Infrastructure
- [ ] Unit test coverage
- [ ] Integration tests
- [ ] Mock utilities
### Task 124: Documentation
- [ ] API documentation
- [ ] Usage examples
- [ ] Migration guides
### Task 125: Package Finalization
- [ ] Final testing and validation
- [ ] Release preparation
- [ ] CI/CD integration
## Implementation Checklist
### ✅ Task 115: Initialize tm-core Package Structure (COMPLETED)
- [x] Create tm-core directory structure and base configuration files
- [x] Configure build and test infrastructure
- [x] Create barrel export files for all directories
- [x] Add development tooling and documentation
- [x] Validate package structure and prepare for development
### 🚧 Remaining Implementation Tasks
- [ ] **Task 116**: TypeScript Types - Complete type definitions for tasks, projects, and configurations
- [ ] **Task 117**: AI Provider System - Base provider interface and integrations
- [ ] **Task 118**: Storage Layer - File system and memory storage adapters
- [ ] **Task 119**: Task Parser - PRD, Markdown, and JSON parsers
- [ ] **Task 120**: Utility Functions - Task ID generation, validation helpers
- [ ] **Task 121**: Error Handling - Task-specific and validation errors
- [ ] **Task 122**: Configuration System - Schema and environment support
- [ ] **Task 123**: Testing Infrastructure - Complete unit and integration tests
- [ ] **Task 124**: Documentation - API docs and usage examples
- [ ] **Task 125**: Package Finalization - Release preparation and CI/CD
## Contributing
This package is part of the Task Master AI project. Please refer to the main project's contributing guidelines.
## License
MIT - See the main project's LICENSE file for details.
## Support
For questions and support, please refer to the main Task Master AI documentation.

View File

@@ -0,0 +1,161 @@
# ListTasks Architecture - End-to-End POC
## Current Implementation Structure
```
┌─────────────────────────────────────────────────────────────┐
│ CLI Layer │
│ scripts/modules/task-manager/list-tasks.js │
│ - Complex UI rendering (tables, progress bars) │
│ - Multiple output formats (json, text, markdown, compact) │
│ - Status filtering and statistics │
└─────────────────────────────────────────────────────────────┘
│ Currently reads directly
│ from files (needs integration)
┌─────────────────────────────────────────────────────────────┐
│ tm-core Package │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ TaskMasterCore (Facade) │ │
│ │ src/task-master-core.ts │ │
│ │ │ │
│ │ - listTasks(options) │ │
│ │ • tag filtering │ │
│ │ • status filtering │ │
│ │ • include/exclude subtasks │ │
│ │ - getTask(id) │ │
│ │ - getTasksByStatus(status) │ │
│ │ - getTaskStats() │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Storage Layer (IStorage) │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ FileStorage │ │ ApiStorage │ │ │
│ │ │ │ │ (Hamster) │ │ │
│ │ └──────────────┘ └──────────────┘ │ │
│ │ │ │
│ │ StorageFactory.create() selects based on config │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Domain Layer (Entities) │ │
│ │ │ │
│ │ TaskEntity │ │
│ │ - Business logic │ │
│ │ - Validation │ │
│ │ - Status transitions │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
## ListTasks Data Flow
### 1. CLI Request
```javascript
// Current CLI (needs update to use tm-core)
listTasks(tasksPath, statusFilter, reportPath, withSubtasks, outputFormat, context)
```
### 2. TaskMasterCore Processing
```typescript
// Our new implementation
const tmCore = createTaskMasterCore(projectPath, {
storage: {
type: 'api', // or 'file'
apiEndpoint: 'https://hamster.ai/api',
apiAccessToken: 'xxx'
}
});
const result = await tmCore.listTasks({
tag: 'feature-branch',
filter: {
status: ['pending', 'in-progress'],
priority: 'high',
search: 'authentication'
},
includeSubtasks: true
});
```
### 3. Storage Selection
```typescript
// StorageFactory automatically selects storage
const storage = StorageFactory.create(config, projectPath);
// Returns either FileStorage or ApiStorage based on config
```
### 4. Data Loading
```typescript
// FileStorage
- Reads from .taskmaster/tasks/tasks.json (or tag-specific file)
- Local file system operations
// ApiStorage (Hamster)
- Makes HTTP requests to Hamster API
- Uses access token from config
- Handles retries and rate limiting
```
### 5. Entity Processing
```typescript
// Convert raw data to TaskEntity for business logic
const taskEntities = TaskEntity.fromArray(rawTasks);
// Apply filters
const filtered = applyFilters(taskEntities, filter);
// Convert back to plain objects
const tasks = filtered.map(entity => entity.toJSON());
```
### 6. Response Structure
```typescript
interface ListTasksResult {
tasks: Task[]; // Filtered tasks
total: number; // Total task count
filtered: number; // Filtered task count
tag?: string; // Tag context if applicable
}
```
## Integration Points Needed
### 1. CLI Integration
- [ ] Update `scripts/modules/task-manager/list-tasks.js` to use tm-core
- [ ] Map CLI options to TaskMasterCore options
- [ ] Handle output formatting in CLI layer
### 2. Configuration Loading
- [ ] Load `.taskmaster/config.json` for storage settings
- [ ] Support environment variables for API tokens
- [ ] Handle storage type selection
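A possible shape for the configuration-loading step above (the config schema and the environment variable name are assumptions for illustration, not the final tm-core contract):
```typescript
// Sketch: resolve storage settings from .taskmaster/config.json with an
// environment-variable override for the API token. Schema and env var
// name are assumptions for illustration.
import fs from 'node:fs';
import path from 'node:path';

interface StorageSettings {
	type: 'file' | 'api';
	apiEndpoint?: string;
	apiAccessToken?: string;
}

function loadStorageSettings(projectRoot: string): StorageSettings {
	const configPath = path.join(projectRoot, '.taskmaster', 'config.json');
	let fromFile: Partial<StorageSettings> = {};
	if (fs.existsSync(configPath)) {
		fromFile = JSON.parse(fs.readFileSync(configPath, 'utf-8')).storage ?? {};
	}
	return {
		type: fromFile.type ?? 'file',
		apiEndpoint: fromFile.apiEndpoint,
		// Environment variable takes precedence for secrets
		apiAccessToken: process.env.HAMSTER_ACCESS_TOKEN ?? fromFile.apiAccessToken
	};
}
```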
### 3. Testing Requirements
- [x] Unit tests for TaskEntity
- [x] Unit tests for BaseProvider
- [x] Integration tests for listTasks with FileStorage
- [ ] Integration tests for listTasks with ApiStorage (mock API)
- [ ] E2E tests with real Hamster API (optional)
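For the CLI integration point above, the legacy `list-tasks.js` entry could delegate to tm-core roughly as follows (a sketch that keeps only part of the legacy signature; the `createTaskMasterCore` and `getTaskList` usage follows the new `ListTasksCommand` in this change set):
```typescript
// Sketch: mapping legacy CLI arguments onto tm-core's getTaskList.
// Deriving projectPath from the legacy tasksPath argument is glossed over.
import { createTaskMasterCore, type TaskStatus } from '@tm/core';

export async function listTasks(
	projectPath: string,
	statusFilter?: string,
	withSubtasks = false
) {
	const tmCore = await createTaskMasterCore({ projectPath });
	const result = await tmCore.getTaskList({
		filter:
			statusFilter && statusFilter !== 'all'
				? {
						status: statusFilter
							.split(',')
							.map((s) => s.trim() as TaskStatus)
					}
				: undefined,
		includeSubtasks: withSubtasks
	});
	// From here on, the CLI layer only formats `result` for display.
	return result;
}
```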
## Benefits of This Architecture
1. **Storage Abstraction**: Switch between file and API storage without changing business logic
2. **Clean Separation**: UI (CLI) separate from business logic (tm-core)
3. **Testability**: Each layer can be tested independently
4. **Extensibility**: Easy to add new storage types (database, cloud, etc.)
5. **Type Safety**: Full TypeScript support throughout
6. **Error Handling**: Consistent error handling with TaskMasterError
## Next Steps
1. Create a simple CLI wrapper that uses tm-core
2. Test with file storage (existing functionality)
3. Test with mock API storage
4. Integrate with actual Hamster API when available
5. Migrate other commands (addTask, updateTask, etc.) following same pattern

View File

@@ -0,0 +1,101 @@
{
"name": "@tm/core",
"version": "1.0.0",
"description": "Core library for Task Master - TypeScript task management system",
"type": "module",
"types": "./dist/index.d.ts",
"main": "./dist/index.js",
"exports": {
".": {
"types": "./src/index.ts",
"import": "./dist/index.js",
"require": "./dist/index.js"
},
"./auth": {
"types": "./src/auth/index.ts",
"import": "./dist/auth/index.js",
"require": "./dist/auth/index.js"
},
"./storage": {
"types": "./src/storage/index.ts",
"import": "./dist/storage/index.js",
"require": "./dist/storage/index.js"
},
"./config": {
"types": "./src/config/index.ts",
"import": "./dist/config/index.js",
"require": "./dist/config/index.js"
},
"./providers": {
"types": "./src/providers/index.ts",
"import": "./dist/providers/index.js",
"require": "./dist/providers/index.js"
},
"./services": {
"types": "./src/services/index.ts",
"import": "./dist/services/index.js",
"require": "./dist/services/index.js"
},
"./errors": {
"types": "./src/errors/index.ts",
"import": "./dist/errors/index.js",
"require": "./dist/errors/index.js"
},
"./logger": {
"types": "./src/logger/index.ts",
"import": "./dist/logger/index.js",
"require": "./dist/logger/index.js"
},
"./types": {
"types": "./src/types/index.ts",
"import": "./dist/types/index.js",
"require": "./dist/types/index.js"
},
"./interfaces": {
"types": "./src/interfaces/index.ts",
"import": "./dist/interfaces/index.js",
"require": "./dist/interfaces/index.js"
},
"./utils": {
"types": "./src/utils/index.ts",
"import": "./dist/utils/index.js",
"require": "./dist/utils/index.js"
},
"./package.json": "./package.json"
},
"scripts": {
"build": "tsup",
"dev": "tsup --watch",
"test": "vitest run",
"test:watch": "vitest",
"test:coverage": "vitest run --coverage",
"lint": "biome check --write",
"lint:check": "biome check",
"lint:fix": "biome check --fix --unsafe",
"format": "biome format --write",
"format:check": "biome format",
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@supabase/supabase-js": "^2.57.0",
"chalk": "^5.3.0",
"zod": "^3.22.4"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@types/node": "^20.11.30",
"@vitest/coverage-v8": "^2.0.5",
"dotenv-mono": "^1.5.1",
"ts-node": "^10.9.2",
"tsup": "^8.0.2",
"typescript": "^5.4.3",
"vitest": "^2.0.5"
},
"engines": {
"node": ">=18.0.0"
},
"files": ["dist", "README.md", "CHANGELOG.md"],
"keywords": ["task-management", "typescript", "ai", "prd", "parser"],
"author": "Task Master AI",
"license": "MIT"
}

View File

@@ -0,0 +1,163 @@
/**
* Authentication manager for Task Master CLI
*/
import {
AuthCredentials,
OAuthFlowOptions,
AuthenticationError,
AuthConfig
} from './types';
import { CredentialStore } from './credential-store';
import { OAuthService } from './oauth-service';
import { SupabaseAuthClient } from '../clients/supabase-client';
/**
* Authentication manager class
*/
export class AuthManager {
private static instance: AuthManager;
private credentialStore: CredentialStore;
private oauthService: OAuthService;
private supabaseClient: SupabaseAuthClient;
private constructor(config?: Partial<AuthConfig>) {
this.credentialStore = new CredentialStore(config);
this.supabaseClient = new SupabaseAuthClient();
this.oauthService = new OAuthService(this.credentialStore, config);
}
/**
* Get singleton instance
*/
static getInstance(config?: Partial<AuthConfig>): AuthManager {
if (!AuthManager.instance) {
AuthManager.instance = new AuthManager(config);
}
return AuthManager.instance;
}
/**
* Get stored authentication credentials
*/
getCredentials(): AuthCredentials | null {
return this.credentialStore.getCredentials();
}
/**
* Start OAuth 2.0 Authorization Code Flow with browser handling
*/
async authenticateWithOAuth(
options: OAuthFlowOptions = {}
): Promise<AuthCredentials> {
return this.oauthService.authenticate(options);
}
/**
* Get the authorization URL (for browser opening)
*/
getAuthorizationUrl(): string | null {
return this.oauthService.getAuthorizationUrl();
}
/**
* Authenticate with API key
* Note: This would require a custom implementation or Supabase RLS policies
*/
async authenticateWithApiKey(apiKey: string): Promise<AuthCredentials> {
const token = apiKey.trim();
if (!token || token.length < 10) {
throw new AuthenticationError('Invalid API key', 'INVALID_API_KEY');
}
const authData: AuthCredentials = {
token,
tokenType: 'api_key',
userId: 'api-user',
email: undefined,
expiresAt: undefined, // API keys don't expire
savedAt: new Date().toISOString()
};
this.credentialStore.saveCredentials(authData);
return authData;
}
/**
* Refresh authentication token
*/
async refreshToken(): Promise<AuthCredentials> {
const authData = this.credentialStore.getCredentials({
allowExpired: true
});
if (!authData || !authData.refreshToken) {
throw new AuthenticationError(
'No refresh token available',
'NO_REFRESH_TOKEN'
);
}
try {
// Use Supabase client to refresh the token
const response = await this.supabaseClient.refreshSession(
authData.refreshToken
);
// Update authentication data
const newAuthData: AuthCredentials = {
...authData,
token: response.token,
refreshToken: response.refreshToken,
expiresAt: response.expiresAt,
savedAt: new Date().toISOString()
};
this.credentialStore.saveCredentials(newAuthData);
return newAuthData;
} catch (error) {
throw error;
}
}
/**
* Logout and clear credentials
*/
async logout(): Promise<void> {
try {
// First try to sign out from Supabase to revoke tokens
await this.supabaseClient.signOut();
} catch (error) {
// Log but don't throw - we still want to clear local credentials
console.warn('Failed to sign out from Supabase:', error);
}
// Always clear local credentials (removes auth.json file)
this.credentialStore.clearCredentials();
}
/**
* Check if authenticated
*/
isAuthenticated(): boolean {
return this.credentialStore.hasValidCredentials();
}
/**
* Get authorization headers
*/
getAuthHeaders(): Record<string, string> {
const authData = this.getCredentials();
if (!authData) {
throw new AuthenticationError(
'Not authenticated. Please authenticate first.',
'NOT_AUTHENTICATED'
);
}
return {
Authorization: `Bearer ${authData.token}`
};
}
}

View File

@@ -0,0 +1,37 @@
/**
* Centralized authentication configuration
*/
import os from 'os';
import path from 'path';
import { AuthConfig } from './types';
// Single base domain for all URLs
// Build-time: process.env.TM_PUBLIC_BASE_DOMAIN gets replaced by tsup's env option
// Default: https://tryhamster.com for production
const BASE_DOMAIN =
process.env.TM_PUBLIC_BASE_DOMAIN || // This gets replaced at build time by tsup
'https://tryhamster.com';
/**
* Default authentication configuration
* All URL configuration is derived from the single BASE_DOMAIN
*/
export const DEFAULT_AUTH_CONFIG: AuthConfig = {
// Base domain for all services
baseUrl: BASE_DOMAIN,
// Configuration directory and file paths
configDir: path.join(os.homedir(), '.taskmaster'),
configFile: path.join(os.homedir(), '.taskmaster', 'auth.json')
};
/**
* Get merged configuration with optional overrides
*/
export function getAuthConfig(overrides?: Partial<AuthConfig>): AuthConfig {
return {
...DEFAULT_AUTH_CONFIG,
...overrides
};
}

View File

@@ -0,0 +1,109 @@
/**
* Credential storage and management
*/
import fs from 'fs';
import { AuthCredentials, AuthenticationError, AuthConfig } from './types';
import { getAuthConfig } from './config';
import { getLogger } from '../logger';
export class CredentialStore {
private logger = getLogger('CredentialStore');
private config: AuthConfig;
constructor(config?: Partial<AuthConfig>) {
this.config = getAuthConfig(config);
}
/**
* Get stored authentication credentials
*/
getCredentials(options?: { allowExpired?: boolean }): AuthCredentials | null {
try {
if (!fs.existsSync(this.config.configFile)) {
return null;
}
const authData = JSON.parse(
fs.readFileSync(this.config.configFile, 'utf-8')
) as AuthCredentials;
// Check if token is expired
if (
authData.expiresAt &&
new Date(authData.expiresAt) < new Date() &&
!options?.allowExpired
) {
this.logger.warn('Authentication token has expired');
return null;
}
return authData;
} catch (error) {
this.logger.error(
`Failed to read auth credentials: ${(error as Error).message}`
);
return null;
}
}
/**
* Save authentication credentials
*/
saveCredentials(authData: AuthCredentials): void {
try {
// Ensure directory exists
if (!fs.existsSync(this.config.configDir)) {
fs.mkdirSync(this.config.configDir, { recursive: true, mode: 0o700 });
}
// Add timestamp
authData.savedAt = new Date().toISOString();
// Save credentials atomically with secure permissions
const tempFile = `${this.config.configFile}.tmp`;
fs.writeFileSync(tempFile, JSON.stringify(authData, null, 2), {
mode: 0o600
});
fs.renameSync(tempFile, this.config.configFile);
} catch (error) {
throw new AuthenticationError(
`Failed to save auth credentials: ${(error as Error).message}`,
'SAVE_FAILED',
error
);
}
}
/**
* Clear stored credentials
*/
clearCredentials(): void {
try {
if (fs.existsSync(this.config.configFile)) {
fs.unlinkSync(this.config.configFile);
}
} catch (error) {
throw new AuthenticationError(
`Failed to clear credentials: ${(error as Error).message}`,
'CLEAR_FAILED',
error
);
}
}
/**
* Check if credentials exist and are valid
*/
hasValidCredentials(): boolean {
const credentials = this.getCredentials({ allowExpired: false });
return credentials !== null;
}
/**
* Get configuration
*/
getConfig(): AuthConfig {
return { ...this.config };
}
}

View File

@@ -0,0 +1,21 @@
/**
* Authentication module exports
*/
export { AuthManager } from './auth-manager';
export { CredentialStore } from './credential-store';
export { OAuthService } from './oauth-service';
export type {
AuthCredentials,
OAuthFlowOptions,
AuthConfig,
CliData
} from './types';
export { AuthenticationError } from './types';
export {
DEFAULT_AUTH_CONFIG,
getAuthConfig
} from './config';

View File

@@ -0,0 +1,346 @@
/**
* OAuth 2.0 Authorization Code Flow service
*/
import http from 'http';
import { URL } from 'url';
import crypto from 'crypto';
import os from 'os';
import {
AuthCredentials,
AuthenticationError,
OAuthFlowOptions,
AuthConfig,
CliData
} from './types';
import { CredentialStore } from './credential-store';
import { SupabaseAuthClient } from '../clients/supabase-client';
import { getAuthConfig } from './config';
import { getLogger } from '../logger';
import packageJson from '../../../../package.json' with { type: 'json' };
export class OAuthService {
private logger = getLogger('OAuthService');
private credentialStore: CredentialStore;
private supabaseClient: SupabaseAuthClient;
private baseUrl: string;
private authorizationUrl: string | null = null;
private originalState: string | null = null;
private authorizationReady: Promise<void> | null = null;
private resolveAuthorizationReady: (() => void) | null = null;
constructor(
credentialStore: CredentialStore,
config: Partial<AuthConfig> = {}
) {
this.credentialStore = credentialStore;
this.supabaseClient = new SupabaseAuthClient();
const authConfig = getAuthConfig(config);
this.baseUrl = authConfig.baseUrl;
}
/**
* Start OAuth 2.0 Authorization Code Flow with browser handling
*/
async authenticate(options: OAuthFlowOptions = {}): Promise<AuthCredentials> {
const {
openBrowser,
timeout = 300000, // 5 minutes default
onAuthUrl,
onWaitingForAuth,
onSuccess,
onError
} = options;
try {
// Start the OAuth flow (starts local server)
const authPromise = this.startFlow(timeout);
// Wait for server to be ready and URL to be generated
if (this.authorizationReady) {
await this.authorizationReady;
}
// Get the authorization URL
const authUrl = this.getAuthorizationUrl();
if (!authUrl) {
throw new AuthenticationError(
'Failed to generate authorization URL',
'URL_GENERATION_FAILED'
);
}
// Notify about the auth URL
if (onAuthUrl) {
onAuthUrl(authUrl);
}
// Open browser if callback provided
if (openBrowser) {
try {
await openBrowser(authUrl);
this.logger.debug('Browser opened successfully with URL:', authUrl);
} catch (error) {
// Log the error but don't throw - user can still manually open the URL
this.logger.warn('Failed to open browser automatically:', error);
}
}
// Notify that we're waiting for authentication
if (onWaitingForAuth) {
onWaitingForAuth();
}
// Wait for authentication to complete
const credentials = await authPromise;
// Notify success
if (onSuccess) {
onSuccess(credentials);
}
return credentials;
} catch (error) {
const authError =
error instanceof AuthenticationError
? error
: new AuthenticationError(
`OAuth authentication failed: ${(error as Error).message}`,
'OAUTH_FAILED',
error
);
// Notify error
if (onError) {
onError(authError);
}
throw authError;
}
}
/**
* Start the OAuth flow (internal implementation)
*/
private async startFlow(timeout: number = 300000): Promise<AuthCredentials> {
const state = this.generateState();
// Store the original state for verification
this.originalState = state;
// Create a promise that will resolve when the server is ready
this.authorizationReady = new Promise<void>((resolve) => {
this.resolveAuthorizationReady = resolve;
});
return new Promise((resolve, reject) => {
let timeoutId: NodeJS.Timeout;
// Create local HTTP server for OAuth callback
const server = http.createServer();
// Start server on localhost only, bind to port 0 for automatic port assignment
server.listen(0, '127.0.0.1', () => {
const address = server.address();
if (!address || typeof address === 'string') {
reject(new Error('Failed to get server address'));
return;
}
const port = address.port;
const callbackUrl = `http://localhost:${port}/callback`;
// Set up request handler after we know the port
server.on('request', async (req, res) => {
const url = new URL(req.url!, `http://127.0.0.1:${port}`);
if (url.pathname === '/callback') {
await this.handleCallback(
url,
res,
server,
resolve,
reject,
timeoutId
);
} else {
// Handle other paths (favicon, etc.)
res.writeHead(404);
res.end();
}
});
// Prepare CLI data object (server handles OAuth/PKCE)
const cliData: CliData = {
callback: callbackUrl,
state: state,
name: 'Task Master CLI',
version: this.getCliVersion(),
device: os.hostname(),
user: os.userInfo().username,
platform: os.platform(),
timestamp: Date.now()
};
// Build authorization URL for web app sign-in page
const authUrl = new URL(`${this.baseUrl}/auth/sign-in`);
// Encode CLI data as base64
const cliParam = Buffer.from(JSON.stringify(cliData)).toString(
'base64'
);
// Set the single CLI parameter with all encoded data
authUrl.searchParams.append('cli', cliParam);
// Store auth URL for browser opening
this.authorizationUrl = authUrl.toString();
this.logger.info(
`OAuth session started - ${cliData.name} v${cliData.version} on port ${port}`
);
this.logger.debug('CLI data:', cliData);
// Signal that the server is ready and URL is available
if (this.resolveAuthorizationReady) {
this.resolveAuthorizationReady();
this.resolveAuthorizationReady = null;
}
});
// Set timeout for authentication
timeoutId = setTimeout(() => {
if (server.listening) {
server.close();
// Clean up the readiness promise if still pending
if (this.resolveAuthorizationReady) {
this.resolveAuthorizationReady();
this.resolveAuthorizationReady = null;
}
reject(
new AuthenticationError('Authentication timeout', 'AUTH_TIMEOUT')
);
}
}, timeout);
});
}
/**
* Handle OAuth callback
*/
private async handleCallback(
url: URL,
res: http.ServerResponse,
server: http.Server,
resolve: (value: AuthCredentials) => void,
reject: (error: any) => void,
timeoutId?: NodeJS.Timeout
): Promise<void> {
// Server now returns tokens directly instead of code
const type = url.searchParams.get('type');
const returnedState = url.searchParams.get('state');
const accessToken = url.searchParams.get('access_token');
const refreshToken = url.searchParams.get('refresh_token');
const expiresIn = url.searchParams.get('expires_in');
const error = url.searchParams.get('error');
const errorDescription = url.searchParams.get('error_description');
// Server handles displaying success/failure, just close connection
res.writeHead(200);
res.end();
if (error) {
if (server.listening) {
server.close();
}
reject(
new AuthenticationError(
errorDescription || error || 'Authentication failed',
'OAUTH_ERROR'
)
);
return;
}
// Verify state parameter for CSRF protection
if (returnedState !== this.originalState) {
if (server.listening) {
server.close();
}
reject(
new AuthenticationError('Invalid state parameter', 'INVALID_STATE')
);
return;
}
// Handle direct token response from server
if (
accessToken &&
(type === 'oauth_success' || type === 'session_transfer')
) {
try {
this.logger.info(`Received tokens via ${type}`);
// Get user info using the access token if possible
const user = await this.supabaseClient.getUser(accessToken);
// Calculate expiration time
const expiresAt = expiresIn
? new Date(Date.now() + parseInt(expiresIn) * 1000).toISOString()
: undefined;
// Save authentication data
const authData: AuthCredentials = {
token: accessToken,
refreshToken: refreshToken || undefined,
userId: user?.id || 'unknown',
email: user?.email,
expiresAt: expiresAt,
tokenType: 'standard',
savedAt: new Date().toISOString()
};
this.credentialStore.saveCredentials(authData);
if (server.listening) {
server.close();
}
// Clear timeout since authentication succeeded
if (timeoutId) {
clearTimeout(timeoutId);
}
resolve(authData);
} catch (error) {
if (server.listening) {
server.close();
}
reject(error);
}
} else {
if (server.listening) {
server.close();
}
reject(new AuthenticationError('No access token received', 'NO_TOKEN'));
}
}
/**
* Generate state for OAuth flow
*/
private generateState(): string {
return crypto.randomBytes(32).toString('base64url');
}
/**
* Get CLI version from package.json if available
*/
private getCliVersion(): string {
return packageJson.version || 'unknown';
}
/**
* Get the authorization URL (for browser opening)
*/
getAuthorizationUrl(): string | null {
return this.authorizationUrl;
}
}
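
For orientation, a minimal sketch of how a CLI login command might drive this flow. Names outside this diff are assumptions: `openUrlInBrowser` is a hypothetical helper, and `CredentialStore` is assumed to be constructible without arguments.

```typescript
// Sketch only: openUrlInBrowser is hypothetical; CredentialStore's constructor is assumed.
import { CredentialStore } from './credential-store';
import { OAuthService } from './oauth-service';

export async function loginCommand(
	openUrlInBrowser: (url: string) => Promise<void>
): Promise<void> {
	const oauth = new OAuthService(new CredentialStore());
	const credentials = await oauth.authenticate({
		openBrowser: openUrlInBrowser,
		onAuthUrl: (url) => console.log(`Sign in at: ${url}`),
		onWaitingForAuth: () => console.log('Waiting for authentication...'),
		onError: (err) => console.error(`${err.code}: ${err.message}`)
	});
	console.log(`Logged in as ${credentials.email ?? credentials.userId}`);
}
```

Keeping browser launching and all console output behind callbacks leaves the service itself free of presentation concerns; the caller decides how to surface the URL and progress messages.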


@@ -0,0 +1,85 @@
/**
* Authentication types and interfaces
*/
export interface AuthCredentials {
token: string;
refreshToken?: string;
userId: string;
email?: string;
expiresAt?: string;
tokenType?: 'standard' | 'api_key';
savedAt: string;
}
export interface OAuthFlowOptions {
/** Callback to open the browser with the auth URL. If not provided, browser won't be opened */
openBrowser?: (url: string) => Promise<void>;
/** Timeout for the OAuth flow in milliseconds. Default: 300000 (5 minutes) */
timeout?: number;
/** Callback to be invoked with the authorization URL */
onAuthUrl?: (url: string) => void;
/** Callback to be invoked when waiting for authentication */
onWaitingForAuth?: () => void;
/** Callback to be invoked on successful authentication */
onSuccess?: (credentials: AuthCredentials) => void;
/** Callback to be invoked on authentication error */
onError?: (error: AuthenticationError) => void;
}
export interface AuthConfig {
baseUrl: string;
configDir: string;
configFile: string;
}
export interface CliData {
callback: string;
state: string;
name: string;
version: string;
device?: string;
user?: string;
platform?: string;
timestamp?: number;
}
/**
* Authentication error codes
*/
export type AuthErrorCode =
| 'AUTH_TIMEOUT'
| 'AUTH_EXPIRED'
| 'OAUTH_FAILED'
| 'OAUTH_ERROR'
| 'OAUTH_CANCELED'
| 'URL_GENERATION_FAILED'
| 'INVALID_STATE'
| 'NO_TOKEN'
| 'TOKEN_EXCHANGE_FAILED'
| 'INVALID_API_KEY'
| 'INVALID_CREDENTIALS'
| 'NO_REFRESH_TOKEN'
| 'NOT_AUTHENTICATED'
| 'NETWORK_ERROR'
| 'CONFIG_MISSING'
| 'SAVE_FAILED'
| 'CLEAR_FAILED'
	| 'STORAGE_ERROR'
	| 'REFRESH_FAILED'
	| 'INVALID_RESPONSE'
	| 'NOT_SUPPORTED';
/**
* Authentication error class
*/
export class AuthenticationError extends Error {
constructor(
message: string,
public code: AuthErrorCode,
public cause?: unknown
) {
super(message);
this.name = 'AuthenticationError';
if (cause && cause instanceof Error) {
this.stack = `${this.stack}\nCaused by: ${cause.stack}`;
}
}
}
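
A small, self-contained sketch of the error contract: callers can branch on the typed `code` field instead of parsing messages.

```typescript
// Sketch: map AuthErrorCode values to user-facing messages.
function describeAuthFailure(error: unknown): string {
	if (error instanceof AuthenticationError) {
		switch (error.code) {
			case 'AUTH_TIMEOUT':
				return 'Authentication timed out. Please try again.';
			case 'AUTH_EXPIRED':
			case 'NOT_AUTHENTICATED':
				return 'Your session has expired. Please sign in again.';
			default:
				return `Authentication failed (${error.code}): ${error.message}`;
		}
	}
	return 'Unexpected error during authentication.';
}
```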


@@ -0,0 +1,5 @@
/**
* Client exports
*/
export { SupabaseAuthClient } from './supabase-client';


@@ -0,0 +1,154 @@
/**
* Supabase client for authentication
*/
import { createClient, SupabaseClient, User } from '@supabase/supabase-js';
import { AuthenticationError } from '../auth/types';
import { getLogger } from '../logger';
export class SupabaseAuthClient {
private client: SupabaseClient | null = null;
private logger = getLogger('SupabaseAuthClient');
/**
* Initialize Supabase client
*/
private getClient(): SupabaseClient {
if (!this.client) {
// Get Supabase configuration from environment - using TM_PUBLIC prefix
const supabaseUrl = process.env.TM_PUBLIC_SUPABASE_URL;
const supabaseAnonKey = process.env.TM_PUBLIC_SUPABASE_ANON_KEY;
if (!supabaseUrl || !supabaseAnonKey) {
throw new AuthenticationError(
'Supabase configuration missing. Please set TM_PUBLIC_SUPABASE_URL and TM_PUBLIC_SUPABASE_ANON_KEY environment variables.',
'CONFIG_MISSING'
);
}
this.client = createClient(supabaseUrl, supabaseAnonKey, {
auth: {
autoRefreshToken: true,
persistSession: false, // We handle persistence ourselves
detectSessionInUrl: false
}
});
}
return this.client;
}
/**
* Note: Code exchange is now handled server-side
* The server returns tokens directly to avoid PKCE issues
* This method is kept for potential future use
*/
async exchangeCodeForSession(_code: string): Promise<{
token: string;
refreshToken?: string;
userId: string;
email?: string;
expiresAt?: string;
}> {
throw new AuthenticationError(
'Code exchange is handled server-side. CLI receives tokens directly.',
'NOT_SUPPORTED'
);
}
/**
* Refresh an access token
*/
async refreshSession(refreshToken: string): Promise<{
token: string;
refreshToken?: string;
expiresAt?: string;
}> {
try {
const client = this.getClient();
this.logger.info('Refreshing session...');
// Set the session with refresh token
const { data, error } = await client.auth.refreshSession({
refresh_token: refreshToken
});
if (error) {
this.logger.error('Failed to refresh session:', error);
throw new AuthenticationError(
`Failed to refresh session: ${error.message}`,
'REFRESH_FAILED'
);
}
if (!data.session) {
throw new AuthenticationError(
'No session data returned',
'INVALID_RESPONSE'
);
}
this.logger.info('Successfully refreshed session');
return {
token: data.session.access_token,
refreshToken: data.session.refresh_token,
expiresAt: data.session.expires_at
? new Date(data.session.expires_at * 1000).toISOString()
: undefined
};
} catch (error) {
if (error instanceof AuthenticationError) {
throw error;
}
throw new AuthenticationError(
`Failed to refresh session: ${(error as Error).message}`,
'REFRESH_FAILED'
);
}
}
/**
* Get user details from token
*/
async getUser(token: string): Promise<User | null> {
try {
const client = this.getClient();
// Get user with the token
const { data, error } = await client.auth.getUser(token);
if (error) {
this.logger.warn('Failed to get user:', error);
return null;
}
return data.user;
} catch (error) {
this.logger.error('Error getting user:', error);
return null;
}
}
/**
* Sign out (revoke tokens)
* Note: This requires the user to be authenticated with the current session.
* For remote token revocation, a server-side admin API with service_role key would be needed.
*/
async signOut(): Promise<void> {
try {
const client = this.getClient();
// Sign out the current session with global scope to revoke all refresh tokens
const { error } = await client.auth.signOut({ scope: 'global' });
if (error) {
this.logger.warn('Failed to sign out:', error);
}
} catch (error) {
this.logger.error('Error during sign out:', error);
}
}
}
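
A short sketch of the refresh path, assuming the refresh token was previously persisted (for example by `CredentialStore`) and that `TM_PUBLIC_SUPABASE_URL` and `TM_PUBLIC_SUPABASE_ANON_KEY` are set in the environment.

```typescript
// Sketch: refresh an expired session using a persisted refresh token (placeholder value).
async function refreshStoredSession(storedRefreshToken: string) {
	const client = new SupabaseAuthClient();
	const session = await client.refreshSession(storedRefreshToken);
	const user = await client.getUser(session.token);
	console.log(
		`Refreshed token for ${user?.email ?? 'unknown user'}, expires at ${session.expiresAt ?? 'n/a'}`
	);
	return session;
}
```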


@@ -0,0 +1,452 @@
/**
* @fileoverview Integration tests for ConfigManager
* Tests the orchestration of all configuration services
*/
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { ConfigManager } from './config-manager.js';
import { ConfigLoader } from './services/config-loader.service.js';
import { ConfigMerger } from './services/config-merger.service.js';
import { RuntimeStateManager } from './services/runtime-state-manager.service.js';
import { ConfigPersistence } from './services/config-persistence.service.js';
import { EnvironmentConfigProvider } from './services/environment-config-provider.service.js';
// Mock all services
vi.mock('./services/config-loader.service.js');
vi.mock('./services/config-merger.service.js');
vi.mock('./services/runtime-state-manager.service.js');
vi.mock('./services/config-persistence.service.js');
vi.mock('./services/environment-config-provider.service.js');
describe('ConfigManager', () => {
let manager: ConfigManager;
const testProjectRoot = '/test/project';
const originalEnv = { ...process.env };
beforeEach(async () => {
vi.clearAllMocks();
// Clear environment variables
Object.keys(process.env).forEach((key) => {
if (key.startsWith('TASKMASTER_')) {
delete process.env[key];
}
});
// Setup default mock behaviors
vi.mocked(ConfigLoader).mockImplementation(
() =>
({
getDefaultConfig: vi.fn().mockReturnValue({
models: { main: 'default-model', fallback: 'fallback-model' },
storage: { type: 'file' },
version: '1.0.0'
}),
loadLocalConfig: vi.fn().mockResolvedValue(null),
loadGlobalConfig: vi.fn().mockResolvedValue(null),
hasLocalConfig: vi.fn().mockResolvedValue(false),
hasGlobalConfig: vi.fn().mockResolvedValue(false)
}) as any
);
vi.mocked(ConfigMerger).mockImplementation(
() =>
({
addSource: vi.fn(),
clearSources: vi.fn(),
merge: vi.fn().mockReturnValue({
models: { main: 'merged-model', fallback: 'fallback-model' },
storage: { type: 'file' }
}),
getSources: vi.fn().mockReturnValue([]),
hasSource: vi.fn().mockReturnValue(false),
removeSource: vi.fn().mockReturnValue(false)
}) as any
);
vi.mocked(RuntimeStateManager).mockImplementation(
() =>
({
loadState: vi.fn().mockResolvedValue({ activeTag: 'master' }),
saveState: vi.fn().mockResolvedValue(undefined),
getActiveTag: vi.fn().mockReturnValue('master'),
setActiveTag: vi.fn().mockResolvedValue(undefined),
getState: vi.fn().mockReturnValue({ activeTag: 'master' }),
updateMetadata: vi.fn().mockResolvedValue(undefined),
clearState: vi.fn().mockResolvedValue(undefined)
}) as any
);
vi.mocked(ConfigPersistence).mockImplementation(
() =>
({
saveConfig: vi.fn().mockResolvedValue(undefined),
configExists: vi.fn().mockResolvedValue(false),
deleteConfig: vi.fn().mockResolvedValue(undefined),
getBackups: vi.fn().mockResolvedValue([]),
restoreFromBackup: vi.fn().mockResolvedValue(undefined)
}) as any
);
vi.mocked(EnvironmentConfigProvider).mockImplementation(
() =>
({
loadConfig: vi.fn().mockReturnValue({}),
getRuntimeState: vi.fn().mockReturnValue({}),
hasEnvVar: vi.fn().mockReturnValue(false),
getAllTaskmasterEnvVars: vi.fn().mockReturnValue({}),
addMapping: vi.fn(),
getMappings: vi.fn().mockReturnValue([])
}) as any
);
// Since constructor is private, we need to use the factory method
// But for testing, we'll create a test instance using create()
manager = await ConfigManager.create(testProjectRoot);
});
afterEach(() => {
vi.restoreAllMocks();
process.env = { ...originalEnv };
});
describe('creation', () => {
it('should initialize all services when created', () => {
// Services should have been initialized during beforeEach
expect(ConfigLoader).toHaveBeenCalledWith(testProjectRoot);
expect(ConfigMerger).toHaveBeenCalled();
expect(RuntimeStateManager).toHaveBeenCalledWith(testProjectRoot);
expect(ConfigPersistence).toHaveBeenCalledWith(testProjectRoot);
expect(EnvironmentConfigProvider).toHaveBeenCalled();
});
});
describe('create (factory method)', () => {
it('should create and initialize manager', async () => {
const createdManager = await ConfigManager.create(testProjectRoot);
expect(createdManager).toBeInstanceOf(ConfigManager);
expect(createdManager.getConfig()).toBeDefined();
});
});
describe('initialization (via create)', () => {
it('should load and merge all configuration sources', () => {
// Manager was created in beforeEach, so initialization already happened
const loader = (manager as any).loader;
const merger = (manager as any).merger;
const stateManager = (manager as any).stateManager;
const envProvider = (manager as any).envProvider;
// Verify loading sequence
expect(merger.clearSources).toHaveBeenCalled();
expect(loader.getDefaultConfig).toHaveBeenCalled();
expect(loader.loadGlobalConfig).toHaveBeenCalled();
expect(loader.loadLocalConfig).toHaveBeenCalled();
expect(envProvider.loadConfig).toHaveBeenCalled();
expect(merger.merge).toHaveBeenCalled();
expect(stateManager.loadState).toHaveBeenCalled();
});
it('should add sources with correct precedence during creation', () => {
const merger = (manager as any).merger;
// Check that sources were added with correct precedence
expect(merger.addSource).toHaveBeenCalledWith(
expect.objectContaining({
name: 'defaults',
precedence: 0
})
);
// Note: local and env sources may not be added if they don't exist
// The mock setup determines what gets called
});
});
describe('configuration access', () => {
// Manager is already initialized in the main beforeEach
it('should return merged configuration', () => {
const config = manager.getConfig();
expect(config).toEqual({
models: { main: 'merged-model', fallback: 'fallback-model' },
storage: { type: 'file' }
});
});
it('should return storage configuration', () => {
const storage = manager.getStorageConfig();
			expect(storage).toEqual({ type: 'file', apiConfigured: false }); // mocked merge sets storage.type to 'file'
});
it('should return API storage configuration when configured', async () => {
// Create a new instance with API storage config
vi.mocked(ConfigMerger).mockImplementationOnce(
() =>
({
addSource: vi.fn(),
clearSources: vi.fn(),
merge: vi.fn().mockReturnValue({
storage: {
type: 'api',
apiEndpoint: 'https://api.example.com',
apiAccessToken: 'token123'
}
}),
getSources: vi.fn().mockReturnValue([]),
hasSource: vi.fn().mockReturnValue(false),
removeSource: vi.fn().mockReturnValue(false)
}) as any
);
const apiManager = await ConfigManager.create(testProjectRoot);
const storage = apiManager.getStorageConfig();
expect(storage).toEqual({
type: 'api',
apiEndpoint: 'https://api.example.com',
apiAccessToken: 'token123',
apiConfigured: true
});
});
it('should return auto storage configuration with apiConfigured flag', async () => {
// Create a new instance with auto storage config and partial API settings
vi.mocked(ConfigMerger).mockImplementationOnce(
() =>
({
addSource: vi.fn(),
clearSources: vi.fn(),
merge: vi.fn().mockReturnValue({
storage: {
type: 'auto',
apiEndpoint: 'https://api.example.com'
// No apiAccessToken - partial config
}
}),
getSources: vi.fn().mockReturnValue([])
}) as any
);
const autoManager = await ConfigManager.create(testProjectRoot);
const storage = autoManager.getStorageConfig();
expect(storage).toEqual({
type: 'auto',
apiEndpoint: 'https://api.example.com',
apiAccessToken: undefined,
apiConfigured: true // true because apiEndpoint is provided
});
});
it('should return auto storage with apiConfigured false when no API settings', async () => {
// Create a new instance with auto storage but no API settings
vi.mocked(ConfigMerger).mockImplementationOnce(
() =>
({
addSource: vi.fn(),
clearSources: vi.fn(),
merge: vi.fn().mockReturnValue({
storage: {
type: 'auto'
// No API settings at all
}
}),
getSources: vi.fn().mockReturnValue([])
}) as any
);
const autoManager = await ConfigManager.create(testProjectRoot);
const storage = autoManager.getStorageConfig();
expect(storage).toEqual({
type: 'auto',
apiEndpoint: undefined,
apiAccessToken: undefined,
apiConfigured: false // false because no API settings
});
});
it('should return model configuration', () => {
const models = manager.getModelConfig();
expect(models).toEqual({
main: 'merged-model',
fallback: 'fallback-model'
});
});
it('should return default models when not configured', () => {
// Update the mock for current instance
const merger = (manager as any).merger;
merger.merge.mockReturnValue({});
// Force re-merge
(manager as any).config = merger.merge();
const models = manager.getModelConfig();
expect(models).toEqual({
main: 'claude-3-5-sonnet-20241022',
fallback: 'gpt-4o-mini'
});
});
it('should return response language', () => {
const language = manager.getResponseLanguage();
expect(language).toBe('English');
});
it('should return custom response language', () => {
// Update config for current instance
(manager as any).config = {
custom: { responseLanguage: 'Spanish' }
};
const language = manager.getResponseLanguage();
expect(language).toBe('Spanish');
});
it('should return project root', () => {
expect(manager.getProjectRoot()).toBe(testProjectRoot);
});
it('should check if using API storage', () => {
expect(manager.isUsingApiStorage()).toBe(false);
});
it('should detect API storage', () => {
// Update config for current instance
(manager as any).config = {
storage: {
type: 'api',
apiEndpoint: 'https://api.example.com',
apiAccessToken: 'token'
}
};
expect(manager.isUsingApiStorage()).toBe(true);
});
});
describe('runtime state', () => {
// Manager is already initialized in the main beforeEach
it('should get active tag from state manager', () => {
const tag = manager.getActiveTag();
expect(tag).toBe('master');
});
it('should set active tag through state manager', async () => {
await manager.setActiveTag('feature-branch');
const stateManager = (manager as any).stateManager;
expect(stateManager.setActiveTag).toHaveBeenCalledWith('feature-branch');
});
});
describe('configuration updates', () => {
// Manager is already initialized in the main beforeEach
it('should update configuration and save', async () => {
const updates = {
models: { main: 'new-model', fallback: 'fallback-model' }
};
await manager.updateConfig(updates);
const persistence = (manager as any).persistence;
expect(persistence.saveConfig).toHaveBeenCalled();
});
it('should re-initialize after update to maintain precedence', async () => {
const merger = (manager as any).merger;
merger.clearSources.mockClear();
await manager.updateConfig({ custom: { test: 'value' } });
expect(merger.clearSources).toHaveBeenCalled();
});
it('should set response language', async () => {
await manager.setResponseLanguage('French');
const persistence = (manager as any).persistence;
expect(persistence.saveConfig).toHaveBeenCalledWith(
expect.objectContaining({
custom: { responseLanguage: 'French' }
})
);
});
it('should save configuration with options', async () => {
await manager.saveConfig();
const persistence = (manager as any).persistence;
expect(persistence.saveConfig).toHaveBeenCalledWith(expect.any(Object), {
createBackup: true,
atomic: true
});
});
});
describe('utilities', () => {
// Manager is already initialized in the main beforeEach
it('should reset configuration to defaults', async () => {
await manager.reset();
const persistence = (manager as any).persistence;
const stateManager = (manager as any).stateManager;
expect(persistence.deleteConfig).toHaveBeenCalled();
expect(stateManager.clearState).toHaveBeenCalled();
});
it('should re-initialize after reset', async () => {
const merger = (manager as any).merger;
merger.clearSources.mockClear();
await manager.reset();
expect(merger.clearSources).toHaveBeenCalled();
});
it('should get configuration sources for debugging', () => {
const merger = (manager as any).merger;
const mockSources = [{ name: 'test', config: {}, precedence: 1 }];
merger.getSources.mockReturnValue(mockSources);
const sources = manager.getConfigSources();
expect(sources).toEqual(mockSources);
});
it('should return no-op function for watch (not implemented)', () => {
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
const callback = vi.fn();
const unsubscribe = manager.watch(callback);
expect(warnSpy).toHaveBeenCalledWith(
'Configuration watching not yet implemented'
);
expect(unsubscribe).toBeInstanceOf(Function);
// Calling unsubscribe should not throw
expect(() => unsubscribe()).not.toThrow();
warnSpy.mockRestore();
});
});
describe('error handling', () => {
it('should handle missing services gracefully', async () => {
// Even if a service fails, manager should still work
const loader = (manager as any).loader;
loader.loadLocalConfig.mockRejectedValue(new Error('File error'));
// Creating a new manager should not throw even if service fails
await expect(
ConfigManager.create(testProjectRoot)
).resolves.not.toThrow();
});
});
});


@@ -0,0 +1,273 @@
/**
* @fileoverview Configuration Manager
* Orchestrates configuration services following clean architecture principles
*
* This ConfigManager delegates responsibilities to specialized services for better
* maintainability, testability, and separation of concerns.
*/
import type { PartialConfiguration } from '../interfaces/configuration.interface.js';
import { ConfigLoader } from './services/config-loader.service.js';
import {
ConfigMerger,
CONFIG_PRECEDENCE
} from './services/config-merger.service.js';
import { RuntimeStateManager } from './services/runtime-state-manager.service.js';
import { ConfigPersistence } from './services/config-persistence.service.js';
import { EnvironmentConfigProvider } from './services/environment-config-provider.service.js';
/**
* ConfigManager orchestrates all configuration services
*
* This class delegates responsibilities to specialized services:
* - ConfigLoader: Loads configuration from files
* - ConfigMerger: Merges configurations with precedence
* - RuntimeStateManager: Manages runtime state
* - ConfigPersistence: Handles file persistence
* - EnvironmentConfigProvider: Extracts env var configuration
*/
export class ConfigManager {
private projectRoot: string;
private config: PartialConfiguration = {};
private initialized = false;
// Services
private loader: ConfigLoader;
private merger: ConfigMerger;
private stateManager: RuntimeStateManager;
private persistence: ConfigPersistence;
private envProvider: EnvironmentConfigProvider;
/**
* Create and initialize a new ConfigManager instance
* This is the ONLY way to create a ConfigManager
*
* @param projectRoot - The root directory of the project
* @returns Fully initialized ConfigManager instance
*/
static async create(projectRoot: string): Promise<ConfigManager> {
const manager = new ConfigManager(projectRoot);
await manager.initialize();
return manager;
}
/**
* Private constructor - use ConfigManager.create() instead
* This ensures the ConfigManager is always properly initialized
*/
private constructor(projectRoot: string) {
this.projectRoot = projectRoot;
// Initialize services
this.loader = new ConfigLoader(projectRoot);
this.merger = new ConfigMerger();
this.stateManager = new RuntimeStateManager(projectRoot);
this.persistence = new ConfigPersistence(projectRoot);
this.envProvider = new EnvironmentConfigProvider();
}
/**
* Initialize by loading configuration from all sources
* Private - only called by the factory method
*/
private async initialize(): Promise<void> {
if (this.initialized) return;
// Clear any existing configuration sources
this.merger.clearSources();
// 1. Load default configuration (lowest precedence)
this.merger.addSource({
name: 'defaults',
config: this.loader.getDefaultConfig(),
precedence: CONFIG_PRECEDENCE.DEFAULTS
});
// 2. Load global configuration (if exists)
const globalConfig = await this.loader.loadGlobalConfig();
if (globalConfig) {
this.merger.addSource({
name: 'global',
config: globalConfig,
precedence: CONFIG_PRECEDENCE.GLOBAL
});
}
// 3. Load local project configuration
const localConfig = await this.loader.loadLocalConfig();
if (localConfig) {
this.merger.addSource({
name: 'local',
config: localConfig,
precedence: CONFIG_PRECEDENCE.LOCAL
});
}
// 4. Load environment variables (highest precedence)
const envConfig = this.envProvider.loadConfig();
if (Object.keys(envConfig).length > 0) {
this.merger.addSource({
name: 'environment',
config: envConfig,
precedence: CONFIG_PRECEDENCE.ENVIRONMENT
});
}
// 5. Merge all configurations
this.config = this.merger.merge();
// 6. Load runtime state
await this.stateManager.loadState();
this.initialized = true;
}
// ==================== Configuration Access ====================
/**
* Get full configuration
*/
getConfig(): PartialConfiguration {
return this.config;
}
/**
* Get storage configuration
*/
getStorageConfig(): {
type: 'file' | 'api' | 'auto';
apiEndpoint?: string;
apiAccessToken?: string;
apiConfigured: boolean;
} {
const storage = this.config.storage;
// Return the configured type (including 'auto')
const storageType = storage?.type || 'auto';
if (storageType === 'api' || storageType === 'auto') {
return {
type: storageType,
apiEndpoint: storage?.apiEndpoint,
apiAccessToken: storage?.apiAccessToken,
apiConfigured: Boolean(storage?.apiEndpoint || storage?.apiAccessToken)
};
}
return { type: storageType, apiConfigured: false };
}
/**
* Get model configuration
*/
getModelConfig() {
return (
this.config.models || {
main: 'claude-3-5-sonnet-20241022',
fallback: 'gpt-4o-mini'
}
);
}
/**
* Get response language setting
*/
getResponseLanguage(): string {
const customConfig = this.config.custom as any;
return customConfig?.responseLanguage || 'English';
}
/**
* Get project root path
*/
getProjectRoot(): string {
return this.projectRoot;
}
/**
* Check if using API storage
*/
isUsingApiStorage(): boolean {
return this.getStorageConfig().type === 'api';
}
// ==================== Runtime State ====================
/**
* Get the currently active tag
*/
getActiveTag(): string {
return this.stateManager.getCurrentTag();
}
/**
* Set the active tag
*/
async setActiveTag(tag: string): Promise<void> {
await this.stateManager.setCurrentTag(tag);
}
// ==================== Configuration Updates ====================
/**
* Update configuration
*/
async updateConfig(updates: PartialConfiguration): Promise<void> {
// Merge updates into current config
Object.assign(this.config, updates);
// Save to persistence
await this.persistence.saveConfig(this.config);
		// Re-initialize to respect precedence (clear the flag so initialize() actually re-runs)
		this.initialized = false;
		await this.initialize();
}
/**
* Set response language
*/
async setResponseLanguage(language: string): Promise<void> {
if (!this.config.custom) {
this.config.custom = {};
}
(this.config.custom as any).responseLanguage = language;
await this.persistence.saveConfig(this.config);
}
/**
* Save current configuration
*/
async saveConfig(): Promise<void> {
await this.persistence.saveConfig(this.config, {
createBackup: true,
atomic: true
});
}
// ==================== Utilities ====================
/**
* Reset configuration to defaults
*/
async reset(): Promise<void> {
// Clear configuration file
await this.persistence.deleteConfig();
// Clear runtime state
await this.stateManager.clearState();
// Reset internal state
this.initialized = false;
this.config = {};
// Re-initialize with defaults
await this.initialize();
}
/**
* Get configuration sources for debugging
*/
getConfigSources() {
return this.merger.getSources();
}
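	/**
	 * Watch for configuration changes (not yet implemented)
	 * Callback parameter type is assumed; returns a no-op unsubscribe so callers can use it safely today.
	 */
	watch(_callback: (config: PartialConfiguration) => void): () => void {
		console.warn('Configuration watching not yet implemented');
		return () => {};
	}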
}
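
A minimal usage sketch of the facade, assuming an async context and a real project root path:

```typescript
// Sketch: typical read/write usage of the ConfigManager facade.
const configManager = await ConfigManager.create('/path/to/project');

const storage = configManager.getStorageConfig();
if (storage.apiConfigured) {
	console.log(`API storage candidate: ${storage.apiEndpoint ?? 'endpoint not set'}`);
}

await configManager.setActiveTag('feature-branch');
console.log(`Active tag: ${configManager.getActiveTag()}`);
```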


@@ -0,0 +1,43 @@
/**
* @fileoverview Configuration module exports
* Exports the main ConfigManager and all configuration services
*/
// Export the main ConfigManager
export { ConfigManager } from './config-manager.js';
// Export all configuration services for advanced usage
export {
ConfigLoader,
ConfigMerger,
CONFIG_PRECEDENCE,
RuntimeStateManager,
ConfigPersistence,
EnvironmentConfigProvider,
type ConfigSource,
type RuntimeState,
type PersistenceOptions
} from './services/index.js';
// Re-export configuration interfaces
export type {
IConfiguration,
PartialConfiguration,
ModelConfig,
ProviderConfig,
TaskSettings,
TagSettings,
StorageSettings,
RetrySettings,
LoggingSettings,
SecuritySettings,
ConfigValidationResult,
EnvironmentConfig,
ConfigSchema,
ConfigProperty,
IConfigurationFactory,
IConfigurationManager
} from '../interfaces/configuration.interface.js';
// Re-export default values
export { DEFAULT_CONFIG_VALUES } from '../interfaces/configuration.interface.js';


@@ -0,0 +1,144 @@
/**
* @fileoverview Unit tests for ConfigLoader service
*/
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { promises as fs } from 'node:fs';
import { ConfigLoader } from './config-loader.service.js';
import { DEFAULT_CONFIG_VALUES } from '../../interfaces/configuration.interface.js';
vi.mock('node:fs', () => ({
promises: {
readFile: vi.fn(),
access: vi.fn()
}
}));
describe('ConfigLoader', () => {
let configLoader: ConfigLoader;
const testProjectRoot = '/test/project';
beforeEach(() => {
configLoader = new ConfigLoader(testProjectRoot);
vi.clearAllMocks();
});
afterEach(() => {
vi.restoreAllMocks();
});
describe('getDefaultConfig', () => {
it('should return default configuration values', () => {
const config = configLoader.getDefaultConfig();
expect(config.models).toEqual({
main: DEFAULT_CONFIG_VALUES.MODELS.MAIN,
fallback: DEFAULT_CONFIG_VALUES.MODELS.FALLBACK
});
expect(config.storage).toEqual({
type: DEFAULT_CONFIG_VALUES.STORAGE.TYPE,
encoding: DEFAULT_CONFIG_VALUES.STORAGE.ENCODING,
enableBackup: false,
maxBackups: DEFAULT_CONFIG_VALUES.STORAGE.MAX_BACKUPS,
enableCompression: false,
atomicOperations: true
});
expect(config.version).toBe(DEFAULT_CONFIG_VALUES.VERSION);
});
});
describe('loadLocalConfig', () => {
it('should load and parse local configuration file', async () => {
const mockConfig = {
models: { main: 'test-model' },
storage: { type: 'api' as const }
};
vi.mocked(fs.readFile).mockResolvedValue(JSON.stringify(mockConfig));
const result = await configLoader.loadLocalConfig();
expect(fs.readFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json',
'utf-8'
);
expect(result).toEqual(mockConfig);
});
it('should return null when config file does not exist', async () => {
const error = new Error('File not found') as any;
error.code = 'ENOENT';
vi.mocked(fs.readFile).mockRejectedValue(error);
const result = await configLoader.loadLocalConfig();
expect(result).toBeNull();
});
it('should throw TaskMasterError for other file errors', async () => {
const error = new Error('Permission denied');
vi.mocked(fs.readFile).mockRejectedValue(error);
await expect(configLoader.loadLocalConfig()).rejects.toThrow(
'Failed to load local configuration'
);
});
it('should throw error for invalid JSON', async () => {
vi.mocked(fs.readFile).mockResolvedValue('invalid json');
await expect(configLoader.loadLocalConfig()).rejects.toThrow();
});
});
describe('loadGlobalConfig', () => {
it('should return null (not implemented yet)', async () => {
const result = await configLoader.loadGlobalConfig();
expect(result).toBeNull();
});
});
describe('hasLocalConfig', () => {
it('should return true when local config exists', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
const result = await configLoader.hasLocalConfig();
expect(fs.access).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json'
);
expect(result).toBe(true);
});
it('should return false when local config does not exist', async () => {
vi.mocked(fs.access).mockRejectedValue(new Error('Not found'));
const result = await configLoader.hasLocalConfig();
expect(result).toBe(false);
});
});
describe('hasGlobalConfig', () => {
it('should check global config path', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
const result = await configLoader.hasGlobalConfig();
expect(fs.access).toHaveBeenCalledWith(
expect.stringContaining('.taskmaster/config.json')
);
expect(result).toBe(true);
});
it('should return false when global config does not exist', async () => {
vi.mocked(fs.access).mockRejectedValue(new Error('Not found'));
const result = await configLoader.hasGlobalConfig();
expect(result).toBe(false);
});
});
});


@@ -0,0 +1,124 @@
/**
* @fileoverview Configuration Loader Service
* Responsible for loading configuration from various file sources
*/
import { promises as fs } from 'node:fs';
import path from 'node:path';
import type { PartialConfiguration } from '../../interfaces/configuration.interface.js';
import { DEFAULT_CONFIG_VALUES } from '../../interfaces/configuration.interface.js';
import {
ERROR_CODES,
TaskMasterError
} from '../../errors/task-master-error.js';
/**
* ConfigLoader handles loading configuration from files
* Single responsibility: File-based configuration loading
*/
export class ConfigLoader {
private localConfigPath: string;
private globalConfigPath: string;
constructor(projectRoot: string) {
this.localConfigPath = path.join(projectRoot, '.taskmaster', 'config.json');
this.globalConfigPath = path.join(
process.env.HOME || '',
'.taskmaster',
'config.json'
);
}
/**
* Get default configuration values
*/
getDefaultConfig(): PartialConfiguration {
return {
models: {
main: DEFAULT_CONFIG_VALUES.MODELS.MAIN,
fallback: DEFAULT_CONFIG_VALUES.MODELS.FALLBACK
},
storage: {
type: DEFAULT_CONFIG_VALUES.STORAGE.TYPE,
encoding: DEFAULT_CONFIG_VALUES.STORAGE.ENCODING,
enableBackup: false,
maxBackups: DEFAULT_CONFIG_VALUES.STORAGE.MAX_BACKUPS,
enableCompression: false,
atomicOperations: true
},
version: DEFAULT_CONFIG_VALUES.VERSION
};
}
/**
* Load local project configuration
*/
async loadLocalConfig(): Promise<PartialConfiguration | null> {
try {
const configData = await fs.readFile(this.localConfigPath, 'utf-8');
return JSON.parse(configData);
} catch (error: any) {
if (error.code === 'ENOENT') {
// File doesn't exist, return null
console.debug('No local config.json found, using defaults');
return null;
}
throw new TaskMasterError(
'Failed to load local configuration',
ERROR_CODES.CONFIG_ERROR,
{ configPath: this.localConfigPath },
error
);
}
}
/**
* Load global user configuration
* @future-implementation Full implementation pending
*/
async loadGlobalConfig(): Promise<PartialConfiguration | null> {
// TODO: Implement in future PR
// For now, return null to indicate no global config
return null;
// Future implementation:
// try {
// const configData = await fs.readFile(this.globalConfigPath, 'utf-8');
// return JSON.parse(configData);
// } catch (error: any) {
// if (error.code === 'ENOENT') {
// return null;
// }
// throw new TaskMasterError(
// 'Failed to load global configuration',
// ERROR_CODES.CONFIG_ERROR,
// { configPath: this.globalConfigPath },
// error
// );
// }
}
/**
* Check if local config exists
*/
async hasLocalConfig(): Promise<boolean> {
try {
await fs.access(this.localConfigPath);
return true;
} catch {
return false;
}
}
/**
* Check if global config exists
*/
async hasGlobalConfig(): Promise<boolean> {
try {
await fs.access(this.globalConfigPath);
return true;
} catch {
return false;
}
}
}
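
A small sketch of using the loader on its own, assuming an async context (in practice ConfigManager wires it together with the merger):

```typescript
// Sketch: load the project config if present, otherwise fall back to defaults.
const loader = new ConfigLoader('/path/to/project');
const config = (await loader.loadLocalConfig()) ?? loader.getDefaultConfig();
console.log(`Storage type: ${config.storage?.type}`);
```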


@@ -0,0 +1,237 @@
/**
* @fileoverview Unit tests for ConfigMerger service
*/
import { describe, it, expect, beforeEach } from 'vitest';
import { ConfigMerger, CONFIG_PRECEDENCE } from './config-merger.service.js';
describe('ConfigMerger', () => {
let merger: ConfigMerger;
beforeEach(() => {
merger = new ConfigMerger();
});
describe('addSource', () => {
it('should add configuration source', () => {
const source = {
name: 'test',
config: { test: true },
precedence: 1
};
merger.addSource(source);
const sources = merger.getSources();
expect(sources).toHaveLength(1);
expect(sources[0]).toEqual(source);
});
it('should add multiple sources', () => {
merger.addSource({ name: 'source1', config: {}, precedence: 1 });
merger.addSource({ name: 'source2', config: {}, precedence: 2 });
expect(merger.getSources()).toHaveLength(2);
});
});
describe('clearSources', () => {
it('should remove all configuration sources', () => {
merger.addSource({ name: 'test', config: {}, precedence: 1 });
merger.clearSources();
expect(merger.getSources()).toHaveLength(0);
});
});
describe('merge', () => {
it('should merge configurations based on precedence', () => {
merger.addSource({
name: 'low',
config: { a: 1, b: 2 },
precedence: 1
});
merger.addSource({
name: 'high',
config: { a: 3, c: 4 },
precedence: 2
});
const result = merger.merge();
expect(result).toEqual({
a: 3, // High precedence wins
b: 2, // Only in low
c: 4 // Only in high
});
});
it('should deep merge nested objects', () => {
merger.addSource({
name: 'base',
config: {
models: { main: 'model1', fallback: 'model2' },
storage: { type: 'file' as const }
},
precedence: 1
});
merger.addSource({
name: 'override',
config: {
models: { main: 'model3' },
storage: { encoding: 'utf8' as const }
},
precedence: 2
});
const result = merger.merge();
expect(result).toEqual({
models: {
main: 'model3', // Overridden
fallback: 'model2' // Preserved
},
storage: {
type: 'file', // Preserved
encoding: 'utf8' // Added
}
});
});
it('should handle arrays by replacement', () => {
merger.addSource({
name: 'base',
config: { items: [1, 2, 3] },
precedence: 1
});
merger.addSource({
name: 'override',
config: { items: [4, 5] },
precedence: 2
});
const result = merger.merge();
expect(result.items).toEqual([4, 5]); // Arrays are replaced, not merged
});
it('should ignore null and undefined values', () => {
merger.addSource({
name: 'base',
config: { a: 1, b: 2 },
precedence: 1
});
merger.addSource({
name: 'override',
config: { a: null, b: undefined, c: 3 } as any,
precedence: 2
});
const result = merger.merge();
expect(result).toEqual({
a: 1, // null ignored
b: 2, // undefined ignored
c: 3 // new value added
});
});
it('should return empty object when no sources', () => {
const result = merger.merge();
expect(result).toEqual({});
});
it('should use CONFIG_PRECEDENCE constants correctly', () => {
merger.addSource({
name: 'defaults',
config: { level: 'default' },
precedence: CONFIG_PRECEDENCE.DEFAULTS
});
merger.addSource({
name: 'local',
config: { level: 'local' },
precedence: CONFIG_PRECEDENCE.LOCAL
});
merger.addSource({
name: 'environment',
config: { level: 'env' },
precedence: CONFIG_PRECEDENCE.ENVIRONMENT
});
const result = merger.merge();
expect(result.level).toBe('env'); // Highest precedence wins
});
});
describe('getSources', () => {
it('should return sources sorted by precedence (highest first)', () => {
merger.addSource({ name: 'low', config: {}, precedence: 1 });
merger.addSource({ name: 'high', config: {}, precedence: 3 });
merger.addSource({ name: 'medium', config: {}, precedence: 2 });
const sources = merger.getSources();
expect(sources[0].name).toBe('high');
expect(sources[1].name).toBe('medium');
expect(sources[2].name).toBe('low');
});
it('should return a copy of sources array', () => {
merger.addSource({ name: 'test', config: {}, precedence: 1 });
const sources1 = merger.getSources();
const sources2 = merger.getSources();
expect(sources1).not.toBe(sources2); // Different array instances
expect(sources1).toEqual(sources2); // Same content
});
});
describe('hasSource', () => {
it('should return true when source exists', () => {
merger.addSource({ name: 'test', config: {}, precedence: 1 });
expect(merger.hasSource('test')).toBe(true);
});
it('should return false when source does not exist', () => {
expect(merger.hasSource('nonexistent')).toBe(false);
});
});
describe('removeSource', () => {
it('should remove source by name and return true', () => {
merger.addSource({ name: 'test', config: {}, precedence: 1 });
merger.addSource({ name: 'keep', config: {}, precedence: 2 });
const removed = merger.removeSource('test');
expect(removed).toBe(true);
expect(merger.hasSource('test')).toBe(false);
expect(merger.hasSource('keep')).toBe(true);
});
it('should return false when source does not exist', () => {
const removed = merger.removeSource('nonexistent');
expect(removed).toBe(false);
});
it('should handle removing all sources', () => {
merger.addSource({ name: 'test1', config: {}, precedence: 1 });
merger.addSource({ name: 'test2', config: {}, precedence: 2 });
merger.removeSource('test1');
merger.removeSource('test2');
expect(merger.getSources()).toHaveLength(0);
});
});
});


@@ -0,0 +1,118 @@
/**
* @fileoverview Configuration Merger Service
* Responsible for merging configurations from multiple sources with precedence
*/
import type { PartialConfiguration } from '../../interfaces/configuration.interface.js';
/**
* Configuration source with precedence
*/
export interface ConfigSource {
/** Source name for debugging */
name: string;
/** Configuration data from this source */
config: PartialConfiguration;
/** Precedence level (higher = more important) */
precedence: number;
}
/**
* Configuration precedence levels (higher number = higher priority)
*/
export const CONFIG_PRECEDENCE = {
DEFAULTS: 0,
GLOBAL: 1, // Reserved for future implementation
LOCAL: 2,
ENVIRONMENT: 3
} as const;
/**
* ConfigMerger handles merging configurations with precedence rules
* Single responsibility: Configuration merging logic
*/
export class ConfigMerger {
private configSources: ConfigSource[] = [];
/**
* Add a configuration source
*/
addSource(source: ConfigSource): void {
this.configSources.push(source);
}
/**
* Clear all configuration sources
*/
clearSources(): void {
this.configSources = [];
}
/**
* Merge all configuration sources based on precedence
*/
merge(): PartialConfiguration {
// Sort sources by precedence (lowest first)
const sortedSources = [...this.configSources].sort(
(a, b) => a.precedence - b.precedence
);
// Merge from lowest to highest precedence
let merged: PartialConfiguration = {};
for (const source of sortedSources) {
merged = this.deepMerge(merged, source.config);
}
return merged;
}
/**
* Deep merge two configuration objects
* Higher precedence values override lower ones
*/
private deepMerge(target: any, source: any): any {
if (!source) return target;
if (!target) return source;
const result = { ...target };
for (const key in source) {
if (source[key] === null || source[key] === undefined) {
continue;
}
if (typeof source[key] === 'object' && !Array.isArray(source[key])) {
result[key] = this.deepMerge(result[key] || {}, source[key]);
} else {
result[key] = source[key];
}
}
return result;
}
/**
* Get configuration sources for debugging
*/
getSources(): ConfigSource[] {
return [...this.configSources].sort((a, b) => b.precedence - a.precedence);
}
/**
* Check if a source exists
*/
hasSource(name: string): boolean {
return this.configSources.some((source) => source.name === name);
}
/**
* Remove a source by name
*/
removeSource(name: string): boolean {
const initialLength = this.configSources.length;
this.configSources = this.configSources.filter(
(source) => source.name !== name
);
return this.configSources.length < initialLength;
}
}
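
A short sketch of the precedence rules in action, mirroring the partial configs used in the tests above:

```typescript
// Sketch: environment wins over local, which wins over defaults; untouched keys are preserved.
const merger = new ConfigMerger();
merger.addSource({
	name: 'defaults',
	config: { models: { main: 'default-model', fallback: 'fallback-model' } },
	precedence: CONFIG_PRECEDENCE.DEFAULTS
});
merger.addSource({
	name: 'local',
	config: { models: { main: 'local-model' } },
	precedence: CONFIG_PRECEDENCE.LOCAL
});
merger.addSource({
	name: 'environment',
	config: { models: { main: 'env-model' } },
	precedence: CONFIG_PRECEDENCE.ENVIRONMENT
});

console.log(merger.merge().models);
// => { main: 'env-model', fallback: 'fallback-model' }
```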


@@ -0,0 +1,316 @@
/**
* @fileoverview Unit tests for ConfigPersistence service
*/
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { promises as fs } from 'node:fs';
import { ConfigPersistence } from './config-persistence.service.js';
vi.mock('node:fs', () => ({
promises: {
readFile: vi.fn(),
writeFile: vi.fn(),
mkdir: vi.fn(),
unlink: vi.fn(),
access: vi.fn(),
readdir: vi.fn(),
rename: vi.fn()
}
}));
describe('ConfigPersistence', () => {
let persistence: ConfigPersistence;
const testProjectRoot = '/test/project';
beforeEach(() => {
persistence = new ConfigPersistence(testProjectRoot);
vi.clearAllMocks();
});
afterEach(() => {
vi.restoreAllMocks();
});
describe('saveConfig', () => {
const mockConfig = {
models: { main: 'test-model' },
storage: { type: 'file' as const }
};
it('should save configuration to file', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await persistence.saveConfig(mockConfig);
expect(fs.mkdir).toHaveBeenCalledWith('/test/project/.taskmaster', {
recursive: true
});
expect(fs.writeFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json',
JSON.stringify(mockConfig, null, 2),
'utf-8'
);
});
it('should use atomic write when specified', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
vi.mocked(fs.rename).mockResolvedValue(undefined);
await persistence.saveConfig(mockConfig, { atomic: true });
// Should write to temp file first
expect(fs.writeFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json.tmp',
JSON.stringify(mockConfig, null, 2),
'utf-8'
);
// Then rename to final location
expect(fs.rename).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json.tmp',
'/test/project/.taskmaster/config.json'
);
});
it('should create backup when requested', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
vi.mocked(fs.access).mockResolvedValue(undefined); // Config exists
vi.mocked(fs.readFile).mockResolvedValue('{"old": "config"}');
vi.mocked(fs.readdir).mockResolvedValue([]);
await persistence.saveConfig(mockConfig, { createBackup: true });
// Should create backup directory
expect(fs.mkdir).toHaveBeenCalledWith(
'/test/project/.taskmaster/backups',
{ recursive: true }
);
// Should read existing config for backup
expect(fs.readFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json',
'utf-8'
);
// Should write backup file
expect(fs.writeFile).toHaveBeenCalledWith(
expect.stringContaining('/test/project/.taskmaster/backups/config-'),
'{"old": "config"}',
'utf-8'
);
});
it('should not create backup if config does not exist', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
vi.mocked(fs.access).mockRejectedValue(new Error('Not found'));
await persistence.saveConfig(mockConfig, { createBackup: true });
// Should not read or create backup
expect(fs.readFile).not.toHaveBeenCalled();
expect(fs.writeFile).toHaveBeenCalledTimes(1); // Only the main config
});
it('should throw TaskMasterError on save failure', async () => {
vi.mocked(fs.mkdir).mockRejectedValue(new Error('Disk full'));
await expect(persistence.saveConfig(mockConfig)).rejects.toThrow(
'Failed to save configuration'
);
});
});
describe('configExists', () => {
it('should return true when config exists', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
const exists = await persistence.configExists();
expect(fs.access).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json'
);
expect(exists).toBe(true);
});
it('should return false when config does not exist', async () => {
vi.mocked(fs.access).mockRejectedValue(new Error('Not found'));
const exists = await persistence.configExists();
expect(exists).toBe(false);
});
});
describe('deleteConfig', () => {
it('should delete configuration file', async () => {
vi.mocked(fs.unlink).mockResolvedValue(undefined);
await persistence.deleteConfig();
expect(fs.unlink).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json'
);
});
it('should not throw when file does not exist', async () => {
const error = new Error('File not found') as any;
error.code = 'ENOENT';
vi.mocked(fs.unlink).mockRejectedValue(error);
await expect(persistence.deleteConfig()).resolves.not.toThrow();
});
it('should throw TaskMasterError for other errors', async () => {
vi.mocked(fs.unlink).mockRejectedValue(new Error('Permission denied'));
await expect(persistence.deleteConfig()).rejects.toThrow(
'Failed to delete configuration'
);
});
});
describe('getBackups', () => {
it('should return list of backup files sorted newest first', async () => {
vi.mocked(fs.readdir).mockResolvedValue([
'config-2024-01-01T10-00-00-000Z.json',
'config-2024-01-02T10-00-00-000Z.json',
'config-2024-01-03T10-00-00-000Z.json',
'other-file.txt'
] as any);
const backups = await persistence.getBackups();
expect(fs.readdir).toHaveBeenCalledWith(
'/test/project/.taskmaster/backups'
);
expect(backups).toEqual([
'config-2024-01-03T10-00-00-000Z.json',
'config-2024-01-02T10-00-00-000Z.json',
'config-2024-01-01T10-00-00-000Z.json'
]);
});
it('should return empty array when backup directory does not exist', async () => {
vi.mocked(fs.readdir).mockRejectedValue(new Error('Not found'));
const backups = await persistence.getBackups();
expect(backups).toEqual([]);
});
it('should filter out non-backup files', async () => {
vi.mocked(fs.readdir).mockResolvedValue([
'config-2024-01-01T10-00-00-000Z.json',
'README.md',
'.DS_Store',
'config.json',
'config-backup.json' // Wrong format
] as any);
const backups = await persistence.getBackups();
expect(backups).toEqual(['config-2024-01-01T10-00-00-000Z.json']);
});
});
describe('restoreFromBackup', () => {
const backupFile = 'config-2024-01-01T10-00-00-000Z.json';
const backupContent = '{"restored": "config"}';
it('should restore configuration from backup', async () => {
vi.mocked(fs.readFile).mockResolvedValue(backupContent);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await persistence.restoreFromBackup(backupFile);
expect(fs.readFile).toHaveBeenCalledWith(
`/test/project/.taskmaster/backups/${backupFile}`,
'utf-8'
);
expect(fs.writeFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/config.json',
backupContent,
'utf-8'
);
});
it('should throw TaskMasterError when backup file not found', async () => {
vi.mocked(fs.readFile).mockRejectedValue(new Error('File not found'));
await expect(
persistence.restoreFromBackup('nonexistent.json')
).rejects.toThrow('Failed to restore from backup');
});
it('should throw TaskMasterError on write failure', async () => {
vi.mocked(fs.readFile).mockResolvedValue(backupContent);
vi.mocked(fs.writeFile).mockRejectedValue(new Error('Disk full'));
await expect(persistence.restoreFromBackup(backupFile)).rejects.toThrow(
'Failed to restore from backup'
);
});
});
describe('backup management', () => {
it('should clean old backups when limit exceeded', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readFile).mockResolvedValue('{"old": "config"}');
vi.mocked(fs.unlink).mockResolvedValue(undefined);
// Mock 7 existing backups
vi.mocked(fs.readdir).mockResolvedValue([
'config-2024-01-01T10-00-00-000Z.json',
'config-2024-01-02T10-00-00-000Z.json',
'config-2024-01-03T10-00-00-000Z.json',
'config-2024-01-04T10-00-00-000Z.json',
'config-2024-01-05T10-00-00-000Z.json',
'config-2024-01-06T10-00-00-000Z.json',
'config-2024-01-07T10-00-00-000Z.json'
] as any);
await persistence.saveConfig({}, { createBackup: true });
// Should delete oldest backups (keeping 5)
expect(fs.unlink).toHaveBeenCalledWith(
'/test/project/.taskmaster/backups/config-2024-01-01T10-00-00-000Z.json'
);
expect(fs.unlink).toHaveBeenCalledWith(
'/test/project/.taskmaster/backups/config-2024-01-02T10-00-00-000Z.json'
);
});
it('should handle backup cleanup errors gracefully', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readFile).mockResolvedValue('{"old": "config"}');
vi.mocked(fs.readdir).mockResolvedValue(['config-old.json'] as any);
vi.mocked(fs.unlink).mockRejectedValue(new Error('Permission denied'));
// Mock console.warn to verify it's called
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
// Should not throw even if cleanup fails
await expect(
persistence.saveConfig({}, { createBackup: true })
).resolves.not.toThrow();
expect(warnSpy).toHaveBeenCalledWith(
'Failed to clean old backups:',
expect.any(Error)
);
warnSpy.mockRestore();
});
});
});


@@ -0,0 +1,186 @@
/**
* @fileoverview Configuration Persistence Service
* Handles saving and backup of configuration files
*/
import { promises as fs } from 'node:fs';
import path from 'node:path';
import type { PartialConfiguration } from '../../interfaces/configuration.interface.js';
import {
ERROR_CODES,
TaskMasterError
} from '../../errors/task-master-error.js';
/**
* Persistence options
*/
export interface PersistenceOptions {
/** Enable backup before saving */
createBackup?: boolean;
/** Maximum number of backups to keep */
maxBackups?: number;
/** Use atomic write operations */
atomic?: boolean;
}
/**
* ConfigPersistence handles all configuration file I/O operations
* Single responsibility: Configuration persistence
*/
export class ConfigPersistence {
private localConfigPath: string;
private backupDir: string;
constructor(projectRoot: string) {
this.localConfigPath = path.join(projectRoot, '.taskmaster', 'config.json');
this.backupDir = path.join(projectRoot, '.taskmaster', 'backups');
}
/**
* Save configuration to file
*/
async saveConfig(
config: PartialConfiguration,
options: PersistenceOptions = {}
): Promise<void> {
		const { createBackup = false, atomic = true, maxBackups = 5 } = options;
try {
// Create backup if requested
if (createBackup && (await this.configExists())) {
				await this.createBackup(maxBackups);
}
// Ensure directory exists
const configDir = path.dirname(this.localConfigPath);
await fs.mkdir(configDir, { recursive: true });
const jsonContent = JSON.stringify(config, null, 2);
if (atomic) {
// Atomic write: write to temp file then rename
const tempPath = `${this.localConfigPath}.tmp`;
await fs.writeFile(tempPath, jsonContent, 'utf-8');
await fs.rename(tempPath, this.localConfigPath);
} else {
// Direct write
await fs.writeFile(this.localConfigPath, jsonContent, 'utf-8');
}
} catch (error) {
throw new TaskMasterError(
'Failed to save configuration',
ERROR_CODES.CONFIG_ERROR,
{ configPath: this.localConfigPath },
error as Error
);
}
}
/**
* Create a backup of the current configuration
*/
	private async createBackup(maxBackups = 5): Promise<string> {
try {
await fs.mkdir(this.backupDir, { recursive: true });
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const backupPath = path.join(this.backupDir, `config-${timestamp}.json`);
const configContent = await fs.readFile(this.localConfigPath, 'utf-8');
await fs.writeFile(backupPath, configContent, 'utf-8');
// Clean old backups
			await this.cleanOldBackups(maxBackups);
return backupPath;
} catch (error) {
console.warn('Failed to create backup:', error);
throw error;
}
}
/**
* Clean old backup files
*/
private async cleanOldBackups(maxBackups = 5): Promise<void> {
try {
const files = await fs.readdir(this.backupDir);
const backupFiles = files
.filter((f) => f.startsWith('config-') && f.endsWith('.json'))
.sort()
.reverse();
// Remove old backups
const toDelete = backupFiles.slice(maxBackups);
for (const file of toDelete) {
await fs.unlink(path.join(this.backupDir, file));
}
} catch (error) {
console.warn('Failed to clean old backups:', error);
}
}
/**
* Check if config file exists
*/
async configExists(): Promise<boolean> {
try {
await fs.access(this.localConfigPath);
return true;
} catch {
return false;
}
}
/**
* Delete configuration file
*/
async deleteConfig(): Promise<void> {
try {
await fs.unlink(this.localConfigPath);
} catch (error: any) {
if (error.code !== 'ENOENT') {
throw new TaskMasterError(
'Failed to delete configuration',
ERROR_CODES.CONFIG_ERROR,
{ configPath: this.localConfigPath },
error
);
}
}
}
/**
* Get list of available backups
*/
async getBackups(): Promise<string[]> {
try {
const files = await fs.readdir(this.backupDir);
return files
.filter((f) => f.startsWith('config-') && f.endsWith('.json'))
.sort()
.reverse();
} catch {
return [];
}
}
/**
* Restore from a backup
*/
async restoreFromBackup(backupFile: string): Promise<void> {
const backupPath = path.join(this.backupDir, backupFile);
try {
const backupContent = await fs.readFile(backupPath, 'utf-8');
await fs.writeFile(this.localConfigPath, backupContent, 'utf-8');
} catch (error) {
throw new TaskMasterError(
'Failed to restore from backup',
ERROR_CODES.CONFIG_ERROR,
{ backupPath },
error as Error
);
}
}
}
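
A minimal usage sketch for the service above; the import path, project root, and model names are placeholders for illustration:

```typescript
import { ConfigPersistence } from './config-persistence.service.js';

async function example() {
  const persistence = new ConfigPersistence('/path/to/project');

  // Writes .taskmaster/config.json atomically, backing up any existing file
  // and pruning to the three most recent backups.
  await persistence.saveConfig(
    { models: { main: 'claude-3', fallback: 'gpt-4o-mini' } },
    { createBackup: true, maxBackups: 3 }
  );

  // Backups are sorted newest-first; restore the latest one if needed.
  const backups = await persistence.getBackups();
  if (backups.length > 0) {
    await persistence.restoreFromBackup(backups[0]);
  }
}
```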

View File

@@ -0,0 +1,348 @@
/**
* @fileoverview Unit tests for EnvironmentConfigProvider service
*/
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { EnvironmentConfigProvider } from './environment-config-provider.service.js';
describe('EnvironmentConfigProvider', () => {
let provider: EnvironmentConfigProvider;
const originalEnv = { ...process.env };
beforeEach(() => {
// Clear all TASKMASTER_ env vars
Object.keys(process.env).forEach((key) => {
if (key.startsWith('TASKMASTER_')) {
delete process.env[key];
}
});
provider = new EnvironmentConfigProvider();
});
afterEach(() => {
// Restore original environment
process.env = { ...originalEnv };
});
describe('loadConfig', () => {
it('should load configuration from environment variables', () => {
process.env.TASKMASTER_STORAGE_TYPE = 'api';
process.env.TASKMASTER_API_ENDPOINT = 'https://api.example.com';
process.env.TASKMASTER_MODEL_MAIN = 'gpt-4';
const config = provider.loadConfig();
expect(config).toEqual({
storage: {
type: 'api',
apiEndpoint: 'https://api.example.com'
},
models: {
main: 'gpt-4'
}
});
});
it('should return empty object when no env vars are set', () => {
const config = provider.loadConfig();
expect(config).toEqual({});
});
it('should skip runtime state variables', () => {
process.env.TASKMASTER_TAG = 'feature-branch';
process.env.TASKMASTER_MODEL_MAIN = 'claude-3';
const config = provider.loadConfig();
expect(config).toEqual({
models: { main: 'claude-3' }
});
expect(config).not.toHaveProperty('activeTag');
});
it('should validate storage type values', () => {
// Mock console.warn to check validation
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
process.env.TASKMASTER_STORAGE_TYPE = 'invalid';
const config = provider.loadConfig();
expect(config).toEqual({});
expect(warnSpy).toHaveBeenCalledWith(
'Invalid value for TASKMASTER_STORAGE_TYPE: invalid'
);
warnSpy.mockRestore();
});
it('should accept valid storage type values', () => {
process.env.TASKMASTER_STORAGE_TYPE = 'file';
let config = provider.loadConfig();
expect(config.storage?.type).toBe('file');
process.env.TASKMASTER_STORAGE_TYPE = 'api';
provider = new EnvironmentConfigProvider(); // Reset provider
config = provider.loadConfig();
expect(config.storage?.type).toBe('api');
process.env.TASKMASTER_STORAGE_TYPE = 'auto';
provider = new EnvironmentConfigProvider(); // Reset provider
config = provider.loadConfig();
expect(config.storage?.type).toBe('auto');
});
it('should handle nested configuration paths', () => {
process.env.TASKMASTER_MODEL_MAIN = 'model1';
process.env.TASKMASTER_MODEL_RESEARCH = 'model2';
process.env.TASKMASTER_MODEL_FALLBACK = 'model3';
const config = provider.loadConfig();
expect(config).toEqual({
models: {
main: 'model1',
research: 'model2',
fallback: 'model3'
}
});
});
it('should handle custom response language', () => {
process.env.TASKMASTER_RESPONSE_LANGUAGE = 'Spanish';
const config = provider.loadConfig();
expect(config).toEqual({
custom: {
responseLanguage: 'Spanish'
}
});
});
it('should ignore empty string values', () => {
process.env.TASKMASTER_MODEL_MAIN = '';
process.env.TASKMASTER_MODEL_FALLBACK = 'fallback-model';
const config = provider.loadConfig();
expect(config).toEqual({
models: {
fallback: 'fallback-model'
}
});
});
});
describe('getRuntimeState', () => {
it('should extract runtime state variables', () => {
process.env.TASKMASTER_TAG = 'develop';
process.env.TASKMASTER_MODEL_MAIN = 'model'; // Should not be included
const state = provider.getRuntimeState();
expect(state).toEqual({
activeTag: 'develop'
});
});
it('should return empty object when no runtime state vars', () => {
process.env.TASKMASTER_MODEL_MAIN = 'model';
const state = provider.getRuntimeState();
expect(state).toEqual({});
});
});
describe('hasEnvVar', () => {
it('should return true when env var exists', () => {
process.env.TASKMASTER_MODEL_MAIN = 'test';
expect(provider.hasEnvVar('TASKMASTER_MODEL_MAIN')).toBe(true);
});
it('should return false when env var does not exist', () => {
expect(provider.hasEnvVar('TASKMASTER_NONEXISTENT')).toBe(false);
});
it('should return false for undefined values', () => {
process.env.TASKMASTER_TEST = undefined as any;
expect(provider.hasEnvVar('TASKMASTER_TEST')).toBe(false);
});
});
describe('getAllTaskmasterEnvVars', () => {
it('should return all TASKMASTER_ prefixed variables', () => {
process.env.TASKMASTER_VAR1 = 'value1';
process.env.TASKMASTER_VAR2 = 'value2';
process.env.OTHER_VAR = 'other';
process.env.TASK_MASTER = 'wrong-prefix';
const vars = provider.getAllTaskmasterEnvVars();
expect(vars).toEqual({
TASKMASTER_VAR1: 'value1',
TASKMASTER_VAR2: 'value2'
});
});
it('should return empty object when no TASKMASTER_ vars', () => {
process.env.OTHER_VAR = 'value';
const vars = provider.getAllTaskmasterEnvVars();
expect(vars).toEqual({});
});
it('should filter out undefined values', () => {
process.env.TASKMASTER_DEFINED = 'value';
process.env.TASKMASTER_UNDEFINED = undefined as any;
const vars = provider.getAllTaskmasterEnvVars();
expect(vars).toEqual({
TASKMASTER_DEFINED: 'value'
});
});
});
describe('custom mappings', () => {
it('should use custom mappings when provided', () => {
const customMappings = [{ env: 'CUSTOM_VAR', path: ['custom', 'value'] }];
const customProvider = new EnvironmentConfigProvider(customMappings);
process.env.CUSTOM_VAR = 'test-value';
const config = customProvider.loadConfig();
expect(config).toEqual({
custom: {
value: 'test-value'
}
});
});
it('should add new mapping with addMapping', () => {
process.env.NEW_MAPPING = 'new-value';
provider.addMapping({
env: 'NEW_MAPPING',
path: ['new', 'mapping']
});
const config = provider.loadConfig();
expect(config).toHaveProperty('new.mapping', 'new-value');
});
it('should return current mappings with getMappings', () => {
const mappings = provider.getMappings();
expect(mappings).toBeInstanceOf(Array);
expect(mappings.length).toBeGreaterThan(0);
// Check for some expected mappings
const envNames = mappings.map((m) => m.env);
expect(envNames).toContain('TASKMASTER_STORAGE_TYPE');
expect(envNames).toContain('TASKMASTER_MODEL_MAIN');
expect(envNames).toContain('TASKMASTER_TAG');
});
it('should return copy of mappings array', () => {
const mappings1 = provider.getMappings();
const mappings2 = provider.getMappings();
expect(mappings1).not.toBe(mappings2); // Different instances
expect(mappings1).toEqual(mappings2); // Same content
});
});
describe('validation', () => {
it('should validate values when validator is provided', () => {
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
process.env.TASKMASTER_STORAGE_TYPE = 'database'; // Invalid
const config = provider.loadConfig();
expect(config).toEqual({});
expect(warnSpy).toHaveBeenCalledWith(
'Invalid value for TASKMASTER_STORAGE_TYPE: database'
);
warnSpy.mockRestore();
});
it('should accept values that pass validation', () => {
process.env.TASKMASTER_STORAGE_TYPE = 'file';
const config = provider.loadConfig();
expect(config.storage?.type).toBe('file');
});
it('should work with custom validators', () => {
let customProvider = new EnvironmentConfigProvider([
{
env: 'CUSTOM_NUMBER',
path: ['custom', 'number'],
validate: (v) => !isNaN(Number(v))
}
]);
process.env.CUSTOM_NUMBER = '123';
let config = customProvider.loadConfig();
expect(config.custom?.number).toBe('123');
process.env.CUSTOM_NUMBER = 'not-a-number';
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
customProvider = new EnvironmentConfigProvider([
{
env: 'CUSTOM_NUMBER',
path: ['custom', 'number'],
validate: (v) => !isNaN(Number(v))
}
]);
config = customProvider.loadConfig();
expect(config).toEqual({});
expect(warnSpy).toHaveBeenCalled();
warnSpy.mockRestore();
});
});
describe('edge cases', () => {
it('should handle special characters in values', () => {
process.env.TASKMASTER_API_ENDPOINT =
'https://api.example.com/v1?key=abc&token=xyz';
process.env.TASKMASTER_API_TOKEN = 'Bearer abc123!@#$%^&*()';
const config = provider.loadConfig();
expect(config.storage?.apiEndpoint).toBe(
'https://api.example.com/v1?key=abc&token=xyz'
);
expect(config.storage?.apiAccessToken).toBe('Bearer abc123!@#$%^&*()');
});
it('should handle whitespace in values', () => {
process.env.TASKMASTER_MODEL_MAIN = ' claude-3 ';
const config = provider.loadConfig();
// Note: We're not trimming, preserving the value as-is
expect(config.models?.main).toBe(' claude-3 ');
});
it('should handle very long values', () => {
const longValue = 'a'.repeat(10000);
process.env.TASKMASTER_API_TOKEN = longValue;
const config = provider.loadConfig();
expect(config.storage?.apiAccessToken).toBe(longValue);
});
});
});

View File

@@ -0,0 +1,166 @@
/**
* @fileoverview Environment Configuration Provider
* Extracts configuration from environment variables
*/
import type { PartialConfiguration } from '../../interfaces/configuration.interface.js';
/**
* Environment variable mapping definition
*/
interface EnvMapping {
/** Environment variable name */
env: string;
/** Path in configuration object */
path: readonly string[];
/** Optional validator function */
validate?: (value: string) => boolean;
/** Whether this is runtime state (not configuration) */
isRuntimeState?: boolean;
}
/**
* EnvironmentConfigProvider extracts configuration from environment variables
* Single responsibility: Environment variable configuration extraction
*/
export class EnvironmentConfigProvider {
/**
* Default environment variable mappings
*/
private static readonly DEFAULT_MAPPINGS: EnvMapping[] = [
{
env: 'TASKMASTER_STORAGE_TYPE',
path: ['storage', 'type'],
validate: (v: string) => ['file', 'api', 'auto'].includes(v)
},
{ env: 'TASKMASTER_API_ENDPOINT', path: ['storage', 'apiEndpoint'] },
{ env: 'TASKMASTER_API_TOKEN', path: ['storage', 'apiAccessToken'] },
{ env: 'TASKMASTER_MODEL_MAIN', path: ['models', 'main'] },
{ env: 'TASKMASTER_MODEL_RESEARCH', path: ['models', 'research'] },
{ env: 'TASKMASTER_MODEL_FALLBACK', path: ['models', 'fallback'] },
{
env: 'TASKMASTER_RESPONSE_LANGUAGE',
path: ['custom', 'responseLanguage']
}
];
/**
* Runtime state mappings (separate from configuration)
*/
private static readonly RUNTIME_STATE_MAPPINGS: EnvMapping[] = [
{ env: 'TASKMASTER_TAG', path: ['activeTag'], isRuntimeState: true }
];
private mappings: EnvMapping[];
constructor(customMappings?: EnvMapping[]) {
this.mappings = customMappings || [
...EnvironmentConfigProvider.DEFAULT_MAPPINGS,
...EnvironmentConfigProvider.RUNTIME_STATE_MAPPINGS
];
}
/**
* Load configuration from environment variables
*/
loadConfig(): PartialConfiguration {
const config: PartialConfiguration = {};
for (const mapping of this.mappings) {
// Skip runtime state variables
if (mapping.isRuntimeState) continue;
const value = process.env[mapping.env];
if (!value) continue;
// Validate value if validator is provided
if (mapping.validate && !mapping.validate(value)) {
console.warn(`Invalid value for ${mapping.env}: ${value}`);
continue;
}
// Set the value in the config object
this.setNestedProperty(config, mapping.path, value);
}
return config;
}
/**
* Get runtime state from environment variables
*/
getRuntimeState(): Record<string, string> {
const state: Record<string, string> = {};
for (const mapping of this.mappings) {
if (!mapping.isRuntimeState) continue;
const value = process.env[mapping.env];
if (value) {
const key = mapping.path[mapping.path.length - 1];
state[key] = value;
}
}
return state;
}
/**
* Helper to set a nested property in an object
*/
private setNestedProperty(
obj: any,
path: readonly string[],
value: any
): void {
const lastKey = path[path.length - 1];
const keys = path.slice(0, -1);
let current = obj;
for (const key of keys) {
if (!current[key]) {
current[key] = {};
}
current = current[key];
}
current[lastKey] = value;
}
/**
* Check if an environment variable is set
*/
hasEnvVar(envName: string): boolean {
return envName in process.env && process.env[envName] !== undefined;
}
/**
* Get all environment variables that match our prefix
*/
getAllTaskmasterEnvVars(): Record<string, string> {
const vars: Record<string, string> = {};
const prefix = 'TASKMASTER_';
for (const [key, value] of Object.entries(process.env)) {
if (key.startsWith(prefix) && value !== undefined) {
vars[key] = value;
}
}
return vars;
}
/**
* Add a custom mapping
*/
addMapping(mapping: EnvMapping): void {
this.mappings.push(mapping);
}
/**
* Get current mappings
*/
getMappings(): EnvMapping[] {
return [...this.mappings];
}
}
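
A short sketch of how the provider is intended to be consumed; the custom environment variable added at the end is hypothetical, not one of the default mappings:

```typescript
import { EnvironmentConfigProvider } from './environment-config-provider.service.js';

process.env.TASKMASTER_MODEL_MAIN = 'claude-3';
process.env.TASKMASTER_TAG = 'feature-auth';

const provider = new EnvironmentConfigProvider();
provider.loadConfig();      // { models: { main: 'claude-3' } } — runtime state vars are skipped
provider.getRuntimeState(); // { activeTag: 'feature-auth' }

// Additional variables can be wired in without touching the defaults.
provider.addMapping({
  env: 'TASKMASTER_MAX_SUBTASKS', // hypothetical variable for illustration
  path: ['tasks', 'maxSubtasks'],
  validate: (v) => !Number.isNaN(Number(v))
});
```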

View File

@@ -0,0 +1,20 @@
/**
* @fileoverview Configuration services exports
* Export all configuration-related services
*/
export { ConfigLoader } from './config-loader.service.js';
export {
ConfigMerger,
CONFIG_PRECEDENCE,
type ConfigSource
} from './config-merger.service.js';
export {
RuntimeStateManager,
type RuntimeState
} from './runtime-state-manager.service.js';
export {
ConfigPersistence,
type PersistenceOptions
} from './config-persistence.service.js';
export { EnvironmentConfigProvider } from './environment-config-provider.service.js';

View File

@@ -0,0 +1,272 @@
/**
* @fileoverview Unit tests for RuntimeStateManager service
*/
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { promises as fs } from 'node:fs';
import { RuntimeStateManager } from './runtime-state-manager.service.js';
import { DEFAULT_CONFIG_VALUES } from '../../interfaces/configuration.interface.js';
vi.mock('node:fs', () => ({
promises: {
readFile: vi.fn(),
writeFile: vi.fn(),
mkdir: vi.fn(),
unlink: vi.fn()
}
}));
describe('RuntimeStateManager', () => {
let stateManager: RuntimeStateManager;
const testProjectRoot = '/test/project';
beforeEach(() => {
stateManager = new RuntimeStateManager(testProjectRoot);
vi.clearAllMocks();
// Clear environment variables
delete process.env.TASKMASTER_TAG;
});
afterEach(() => {
vi.restoreAllMocks();
delete process.env.TASKMASTER_TAG;
});
describe('loadState', () => {
it('should load state from file', async () => {
const mockState = {
activeTag: 'feature-branch',
lastUpdated: '2024-01-01T00:00:00.000Z',
metadata: { test: 'data' }
};
vi.mocked(fs.readFile).mockResolvedValue(JSON.stringify(mockState));
const state = await stateManager.loadState();
expect(fs.readFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/state.json',
'utf-8'
);
expect(state.currentTag).toBe('feature-branch');
expect(state.metadata).toEqual({ test: 'data' });
});
it('should override with environment variable if set', async () => {
const mockState = { activeTag: 'file-tag' };
vi.mocked(fs.readFile).mockResolvedValue(JSON.stringify(mockState));
process.env.TASKMASTER_TAG = 'env-tag';
const state = await stateManager.loadState();
expect(state.currentTag).toBe('env-tag');
});
it('should use default state when file does not exist', async () => {
const error = new Error('File not found') as any;
error.code = 'ENOENT';
vi.mocked(fs.readFile).mockRejectedValue(error);
const state = await stateManager.loadState();
expect(state.currentTag).toBe(DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG);
});
it('should use environment variable when file does not exist', async () => {
const error = new Error('File not found') as any;
error.code = 'ENOENT';
vi.mocked(fs.readFile).mockRejectedValue(error);
process.env.TASKMASTER_TAG = 'env-tag';
const state = await stateManager.loadState();
expect(state.currentTag).toBe('env-tag');
});
it('should handle file read errors gracefully', async () => {
vi.mocked(fs.readFile).mockRejectedValue(new Error('Permission denied'));
const state = await stateManager.loadState();
expect(state.currentTag).toBe(DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG);
});
it('should handle invalid JSON gracefully', async () => {
vi.mocked(fs.readFile).mockResolvedValue('invalid json');
// Mock console.warn to avoid noise in tests
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
const state = await stateManager.loadState();
expect(state.currentTag).toBe(DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG);
expect(warnSpy).toHaveBeenCalled();
warnSpy.mockRestore();
});
});
describe('saveState', () => {
it('should save state to file with timestamp', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
// Set a specific state
await stateManager.setCurrentTag('test-tag');
// Verify mkdir was called
expect(fs.mkdir).toHaveBeenCalledWith('/test/project/.taskmaster', {
recursive: true
});
// Verify writeFile was called with correct data
expect(fs.writeFile).toHaveBeenCalledWith(
'/test/project/.taskmaster/state.json',
expect.stringContaining('"currentTag": "test-tag"'),
'utf-8'
);
// Verify timestamp is included
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('"lastUpdated"'),
'utf-8'
);
});
it('should throw TaskMasterError on save failure', async () => {
vi.mocked(fs.mkdir).mockRejectedValue(new Error('Disk full'));
await expect(stateManager.saveState()).rejects.toThrow(
'Failed to save runtime state'
);
});
it('should format JSON with proper indentation', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await stateManager.saveState();
const writeCall = vi.mocked(fs.writeFile).mock.calls[0];
const jsonContent = writeCall[1] as string;
// Check for 2-space indentation
expect(jsonContent).toMatch(/\n {2}/);
});
});
describe('getCurrentTag', () => {
it('should return the current tag', () => {
const tag = stateManager.getCurrentTag();
expect(tag).toBe(DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG);
});
it('should return updated tag after setCurrentTag', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await stateManager.setCurrentTag('new-tag');
expect(stateManager.getCurrentTag()).toBe('new-tag');
});
});
describe('setCurrentTag', () => {
it('should update current tag and save state', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await stateManager.setCurrentTag('feature-xyz');
expect(stateManager.getCurrentTag()).toBe('feature-xyz');
expect(fs.writeFile).toHaveBeenCalled();
});
});
describe('getState', () => {
it('should return copy of current state', () => {
const state1 = stateManager.getState();
const state2 = stateManager.getState();
expect(state1).not.toBe(state2); // Different instances
expect(state1).toEqual(state2); // Same content
expect(state1.currentTag).toBe(DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG);
});
});
describe('updateMetadata', () => {
it('should update metadata and save state', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await stateManager.updateMetadata({ key1: 'value1' });
const state = stateManager.getState();
expect(state.metadata).toEqual({ key1: 'value1' });
expect(fs.writeFile).toHaveBeenCalled();
});
it('should merge metadata with existing values', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await stateManager.updateMetadata({ key1: 'value1' });
await stateManager.updateMetadata({ key2: 'value2' });
const state = stateManager.getState();
expect(state.metadata).toEqual({
key1: 'value1',
key2: 'value2'
});
});
it('should override existing metadata values', async () => {
vi.mocked(fs.mkdir).mockResolvedValue(undefined);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
await stateManager.updateMetadata({ key1: 'value1' });
await stateManager.updateMetadata({ key1: 'value2' });
const state = stateManager.getState();
expect(state.metadata).toEqual({ key1: 'value2' });
});
});
describe('clearState', () => {
it('should delete state file and reset to defaults', async () => {
vi.mocked(fs.unlink).mockResolvedValue(undefined);
await stateManager.clearState();
expect(fs.unlink).toHaveBeenCalledWith(
'/test/project/.taskmaster/state.json'
);
expect(stateManager.getCurrentTag()).toBe(
DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG
);
expect(stateManager.getState().metadata).toBeUndefined();
});
it('should ignore ENOENT errors when file does not exist', async () => {
const error = new Error('File not found') as any;
error.code = 'ENOENT';
vi.mocked(fs.unlink).mockRejectedValue(error);
await expect(stateManager.clearState()).resolves.not.toThrow();
expect(stateManager.getCurrentTag()).toBe(
DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG
);
});
it('should throw other errors', async () => {
vi.mocked(fs.unlink).mockRejectedValue(new Error('Permission denied'));
await expect(stateManager.clearState()).rejects.toThrow(
'Permission denied'
);
});
});
});

View File

@@ -0,0 +1,161 @@
/**
* @fileoverview Runtime State Manager Service
* Manages runtime state separate from configuration
*/
import { promises as fs } from 'node:fs';
import path from 'node:path';
import {
ERROR_CODES,
TaskMasterError
} from '../../errors/task-master-error.js';
import { DEFAULT_CONFIG_VALUES } from '../../interfaces/configuration.interface.js';
/**
* Runtime state data structure
*/
export interface RuntimeState {
/** Currently active tag */
currentTag: string;
/** Last updated timestamp */
lastUpdated?: string;
/** Additional metadata */
metadata?: Record<string, unknown>;
}
/**
* RuntimeStateManager handles runtime state persistence
* Single responsibility: Runtime state management (separate from config)
*/
export class RuntimeStateManager {
private stateFilePath: string;
private currentState: RuntimeState;
constructor(projectRoot: string) {
this.stateFilePath = path.join(projectRoot, '.taskmaster', 'state.json');
this.currentState = {
currentTag: DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG
};
}
/**
* Load runtime state from disk
*/
async loadState(): Promise<RuntimeState> {
try {
const stateData = await fs.readFile(this.stateFilePath, 'utf-8');
const rawState = JSON.parse(stateData);
// Map legacy field names to current interface
const state: RuntimeState = {
currentTag:
rawState.currentTag ||
rawState.activeTag ||
DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG,
lastUpdated: rawState.lastUpdated,
metadata: rawState.metadata
};
// Apply environment variable override for current tag
if (process.env.TASKMASTER_TAG) {
state.currentTag = process.env.TASKMASTER_TAG;
}
this.currentState = state;
return state;
} catch (error: any) {
if (error.code === 'ENOENT') {
// State file doesn't exist, use defaults
console.debug('No state.json found, using default state');
// Check environment variable
if (process.env.TASKMASTER_TAG) {
this.currentState.currentTag = process.env.TASKMASTER_TAG;
}
return this.currentState;
}
console.warn('Failed to load state file:', error.message);
return this.currentState;
}
}
/**
* Save runtime state to disk
*/
async saveState(): Promise<void> {
const stateDir = path.dirname(this.stateFilePath);
try {
await fs.mkdir(stateDir, { recursive: true });
const stateToSave = {
...this.currentState,
lastUpdated: new Date().toISOString()
};
await fs.writeFile(
this.stateFilePath,
JSON.stringify(stateToSave, null, 2),
'utf-8'
);
} catch (error) {
throw new TaskMasterError(
'Failed to save runtime state',
ERROR_CODES.CONFIG_ERROR,
{ statePath: this.stateFilePath },
error as Error
);
}
}
/**
* Get the currently active tag
*/
getCurrentTag(): string {
return this.currentState.currentTag;
}
/**
* Set the current tag
*/
async setCurrentTag(tag: string): Promise<void> {
this.currentState.currentTag = tag;
await this.saveState();
}
/**
* Get current state
*/
getState(): RuntimeState {
return { ...this.currentState };
}
/**
* Update metadata
*/
async updateMetadata(metadata: Record<string, unknown>): Promise<void> {
this.currentState.metadata = {
...this.currentState.metadata,
...metadata
};
await this.saveState();
}
/**
* Clear state file
*/
async clearState(): Promise<void> {
try {
await fs.unlink(this.stateFilePath);
} catch (error: any) {
if (error.code !== 'ENOENT') {
throw error;
}
}
this.currentState = {
currentTag: DEFAULT_CONFIG_VALUES.TAGS.DEFAULT_TAG
};
}
}
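
A usage sketch, assuming a project root; the default tag comes from DEFAULT_CONFIG_VALUES and the tag name below is illustrative:

```typescript
import { RuntimeStateManager } from './runtime-state-manager.service.js';

async function example() {
  const stateManager = new RuntimeStateManager('/path/to/project');

  // Reads .taskmaster/state.json; TASKMASTER_TAG overrides the stored tag,
  // and the configured default tag is used when no file exists.
  await stateManager.loadState();

  await stateManager.setCurrentTag('feature-auth'); // persists with a lastUpdated timestamp
  console.log(stateManager.getCurrentTag());        // 'feature-auth'
}
```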

View File

@@ -0,0 +1,75 @@
/**
* @fileoverview Constants for Task Master Core
* Single source of truth for all constant values
*/
import type {
TaskStatus,
TaskPriority,
TaskComplexity
} from '../types/index.js';
/**
* Valid task status values
*/
export const TASK_STATUSES: readonly TaskStatus[] = [
'pending',
'in-progress',
'done',
'deferred',
'cancelled',
'blocked',
'review'
] as const;
/**
* Valid task priority values
*/
export const TASK_PRIORITIES: readonly TaskPriority[] = [
'low',
'medium',
'high',
'critical'
] as const;
/**
* Valid task complexity values
*/
export const TASK_COMPLEXITIES: readonly TaskComplexity[] = [
'simple',
'moderate',
'complex',
'very-complex'
] as const;
/**
* Valid output formats for task display
*/
export const OUTPUT_FORMATS = ['text', 'json', 'compact'] as const;
export type OutputFormat = (typeof OUTPUT_FORMATS)[number];
/**
* Status icons for display
*/
export const STATUS_ICONS: Record<TaskStatus, string> = {
done: '✓',
'in-progress': '►',
blocked: '⭕',
pending: '○',
deferred: '⏸',
cancelled: '✗',
review: '👁'
} as const;
/**
* Status colors for display (using chalk color names)
*/
export const STATUS_COLORS: Record<TaskStatus, string> = {
pending: 'yellow',
'in-progress': 'blue',
done: 'green',
deferred: 'gray',
cancelled: 'red',
blocked: 'magenta',
review: 'cyan'
} as const;
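
For illustration, a small sketch of how the display constants might be combined with chalk; the import paths are assumptions based on the file layout:

```typescript
import chalk from 'chalk';
import type { TaskStatus } from '../types/index.js';
import { STATUS_COLORS, STATUS_ICONS, TASK_STATUSES } from '../constants/index.js';

// Render a coloured status label, e.g. "► in-progress".
function formatStatus(status: TaskStatus): string {
  // STATUS_COLORS stores chalk color names, so look the method up dynamically.
  const colorize = (chalk as any)[STATUS_COLORS[status]] ?? ((s: string) => s);
  return `${STATUS_ICONS[status]} ${colorize(status)}`;
}

for (const status of TASK_STATUSES) {
  console.log(formatStatus(status));
}
```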

View File

@@ -0,0 +1,266 @@
/**
* @fileoverview Task entity with business rules and domain logic
*/
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
import type {
Subtask,
Task,
TaskPriority,
TaskStatus
} from '../types/index.js';
/**
* Task entity representing a task with business logic
* Encapsulates validation and state management rules
*/
export class TaskEntity implements Task {
readonly id: string;
title: string;
description: string;
status: TaskStatus;
priority: TaskPriority;
dependencies: string[];
details: string;
testStrategy: string;
subtasks: Subtask[];
// Optional properties
createdAt?: string;
updatedAt?: string;
effort?: number;
actualEffort?: number;
tags?: string[];
assignee?: string;
complexity?: Task['complexity'];
constructor(data: Task | (Omit<Task, 'id'> & { id: number | string })) {
this.validate(data);
// Always convert ID to string
this.id = String(data.id);
this.title = data.title;
this.description = data.description;
this.status = data.status;
this.priority = data.priority;
// Ensure dependency IDs are also strings
this.dependencies = (data.dependencies || []).map((dep) => String(dep));
this.details = data.details;
this.testStrategy = data.testStrategy;
// Normalize subtasks: keep numeric IDs (per the Subtask interface) and stringify parentId
this.subtasks = (data.subtasks || []).map((subtask) => ({
...subtask,
id: Number(subtask.id), // Keep subtask IDs as numbers per interface
parentId: String(subtask.parentId)
}));
// Optional properties
this.createdAt = data.createdAt;
this.updatedAt = data.updatedAt;
this.effort = data.effort;
this.actualEffort = data.actualEffort;
this.tags = data.tags;
this.assignee = data.assignee;
this.complexity = data.complexity;
}
/**
* Validate task data
*/
private validate(
data: Partial<Task> | Partial<Omit<Task, 'id'> & { id: number | string }>
): void {
if (
data.id === undefined ||
data.id === null ||
(typeof data.id !== 'string' && typeof data.id !== 'number')
) {
throw new TaskMasterError(
'Task ID is required and must be a string or number',
ERROR_CODES.VALIDATION_ERROR
);
}
if (!data.title || data.title.trim().length === 0) {
throw new TaskMasterError(
'Task title is required',
ERROR_CODES.VALIDATION_ERROR
);
}
if (!data.description || data.description.trim().length === 0) {
throw new TaskMasterError(
'Task description is required',
ERROR_CODES.VALIDATION_ERROR
);
}
if (!this.isValidStatus(data.status)) {
throw new TaskMasterError(
`Invalid task status: ${data.status}`,
ERROR_CODES.VALIDATION_ERROR
);
}
if (!this.isValidPriority(data.priority)) {
throw new TaskMasterError(
`Invalid task priority: ${data.priority}`,
ERROR_CODES.VALIDATION_ERROR
);
}
}
/**
* Check if status is valid
*/
private isValidStatus(status: any): status is TaskStatus {
return [
'pending',
'in-progress',
'done',
'deferred',
'cancelled',
'blocked',
'review'
].includes(status);
}
/**
* Check if priority is valid
*/
private isValidPriority(priority: any): priority is TaskPriority {
return ['low', 'medium', 'high', 'critical'].includes(priority);
}
/**
* Check if task can be marked as complete
*/
canComplete(): boolean {
// Cannot complete if status is already done or cancelled
if (this.status === 'done' || this.status === 'cancelled') {
return false;
}
// Cannot complete if blocked
if (this.status === 'blocked') {
return false;
}
// Check if all subtasks are complete
const allSubtasksComplete = this.subtasks.every(
(subtask) => subtask.status === 'done' || subtask.status === 'cancelled'
);
return allSubtasksComplete;
}
/**
* Mark task as complete
*/
markAsComplete(): void {
if (!this.canComplete()) {
throw new TaskMasterError(
'Task cannot be marked as complete',
ERROR_CODES.TASK_STATUS_ERROR,
{
taskId: this.id,
currentStatus: this.status,
hasIncompleteSubtasks: this.subtasks.some(
(s) => s.status !== 'done' && s.status !== 'cancelled'
)
}
);
}
this.status = 'done';
this.updatedAt = new Date().toISOString();
}
/**
* Check if task has dependencies
*/
hasDependencies(): boolean {
return this.dependencies.length > 0;
}
/**
* Check if task has subtasks
*/
hasSubtasks(): boolean {
return this.subtasks.length > 0;
}
/**
* Add a subtask
*/
addSubtask(subtask: Omit<Subtask, 'id' | 'parentId'>): void {
const nextId = this.subtasks.length + 1;
this.subtasks.push({
...subtask,
id: nextId,
parentId: this.id
});
this.updatedAt = new Date().toISOString();
}
/**
* Update task status
*/
updateStatus(newStatus: TaskStatus): void {
if (!this.isValidStatus(newStatus)) {
throw new TaskMasterError(
`Invalid status: ${newStatus}`,
ERROR_CODES.VALIDATION_ERROR
);
}
// Business rule: Cannot move from done to pending
if (this.status === 'done' && newStatus === 'pending') {
throw new TaskMasterError(
'Cannot move completed task back to pending',
ERROR_CODES.TASK_STATUS_ERROR
);
}
this.status = newStatus;
this.updatedAt = new Date().toISOString();
}
/**
* Convert entity to plain object
*/
toJSON(): Task {
return {
id: this.id,
title: this.title,
description: this.description,
status: this.status,
priority: this.priority,
dependencies: this.dependencies,
details: this.details,
testStrategy: this.testStrategy,
subtasks: this.subtasks,
createdAt: this.createdAt,
updatedAt: this.updatedAt,
effort: this.effort,
actualEffort: this.actualEffort,
tags: this.tags,
assignee: this.assignee,
complexity: this.complexity
};
}
/**
* Create TaskEntity from plain object
*/
static fromObject(data: Task): TaskEntity {
return new TaskEntity(data);
}
/**
* Create multiple TaskEntities from array
*/
static fromArray(data: Task[]): TaskEntity[] {
return data.map((task) => new TaskEntity(task));
}
}
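
A brief sketch of the entity in use; the field values are illustrative:

```typescript
import { TaskEntity } from './task.entity.js';

// Numeric IDs are normalized to strings by the constructor.
const task = new TaskEntity({
  id: 42,
  title: 'Wire up auth command',
  description: 'Add an auth command to the CLI',
  status: 'in-progress',
  priority: 'high',
  dependencies: [],
  details: '',
  testStrategy: '',
  subtasks: []
});

if (task.canComplete()) {
  task.markAsComplete(); // sets status to 'done' and stamps updatedAt
}
```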

View File

@@ -0,0 +1,68 @@
/**
* @fileoverview Custom error classes for the tm-core package
* This file exports all custom error types and error handling utilities
*/
// Export the main TaskMasterError class
export {
TaskMasterError,
ERROR_CODES,
type ErrorCode,
type ErrorContext,
type SerializableError
} from './task-master-error.js';
// Error implementations will be defined here
// export * from './task-errors.js';
// export * from './storage-errors.js';
// export * from './provider-errors.js';
// export * from './validation-errors.js';
// Placeholder exports - these will be implemented in later tasks
/**
* Base error class for all tm-core errors
* @deprecated This is a placeholder class that will be properly implemented in later tasks
*/
export class TmCoreError extends Error {
constructor(
message: string,
public code?: string
) {
super(message);
this.name = 'TmCoreError';
}
}
/**
* Error thrown when a task is not found
* @deprecated This is a placeholder class that will be properly implemented in later tasks
*/
export class TaskNotFoundError extends TmCoreError {
constructor(taskId: string) {
super(`Task not found: ${taskId}`, 'TASK_NOT_FOUND');
this.name = 'TaskNotFoundError';
}
}
/**
* Error thrown when validation fails
* @deprecated This is a placeholder class that will be properly implemented in later tasks
*/
export class ValidationError extends TmCoreError {
constructor(message: string) {
super(message, 'VALIDATION_ERROR');
this.name = 'ValidationError';
}
}
/**
* Error thrown when storage operations fail
* @deprecated This is a placeholder class that will be properly implemented in later tasks
*/
export class StorageError extends TmCoreError {
constructor(message: string) {
super(message, 'STORAGE_ERROR');
this.name = 'StorageError';
}
}

View File

@@ -0,0 +1,328 @@
/**
* @fileoverview Base error class for Task Master operations
* Provides comprehensive error handling with metadata, context, and serialization support
*/
/**
* Error codes used throughout the Task Master system
*/
export const ERROR_CODES = {
// File system errors
FILE_NOT_FOUND: 'FILE_NOT_FOUND',
FILE_READ_ERROR: 'FILE_READ_ERROR',
FILE_WRITE_ERROR: 'FILE_WRITE_ERROR',
// Parsing errors
PARSE_ERROR: 'PARSE_ERROR',
JSON_PARSE_ERROR: 'JSON_PARSE_ERROR',
YAML_PARSE_ERROR: 'YAML_PARSE_ERROR',
// Validation errors
VALIDATION_ERROR: 'VALIDATION_ERROR',
SCHEMA_VALIDATION_ERROR: 'SCHEMA_VALIDATION_ERROR',
TYPE_VALIDATION_ERROR: 'TYPE_VALIDATION_ERROR',
// API and network errors
API_ERROR: 'API_ERROR',
NETWORK_ERROR: 'NETWORK_ERROR',
AUTHENTICATION_ERROR: 'AUTHENTICATION_ERROR',
AUTHORIZATION_ERROR: 'AUTHORIZATION_ERROR',
// Task management errors
TASK_NOT_FOUND: 'TASK_NOT_FOUND',
TASK_DEPENDENCY_ERROR: 'TASK_DEPENDENCY_ERROR',
TASK_STATUS_ERROR: 'TASK_STATUS_ERROR',
// Storage errors
STORAGE_ERROR: 'STORAGE_ERROR',
DATABASE_ERROR: 'DATABASE_ERROR',
// Configuration errors
CONFIG_ERROR: 'CONFIG_ERROR',
MISSING_CONFIGURATION: 'MISSING_CONFIGURATION',
INVALID_CONFIGURATION: 'INVALID_CONFIGURATION',
// Provider errors
PROVIDER_ERROR: 'PROVIDER_ERROR',
PROVIDER_NOT_FOUND: 'PROVIDER_NOT_FOUND',
PROVIDER_INITIALIZATION_ERROR: 'PROVIDER_INITIALIZATION_ERROR',
// Generic errors
INTERNAL_ERROR: 'INTERNAL_ERROR',
INVALID_INPUT: 'INVALID_INPUT',
NOT_IMPLEMENTED: 'NOT_IMPLEMENTED',
UNKNOWN_ERROR: 'UNKNOWN_ERROR'
} as const;
export type ErrorCode = (typeof ERROR_CODES)[keyof typeof ERROR_CODES];
/**
* Error context interface for additional error metadata
*/
export interface ErrorContext {
/** Additional details about the error */
details?: any;
/** Error timestamp */
timestamp?: Date;
/** Operation that failed */
operation?: string;
/** Resource identifier related to the error */
resource?: string;
/** Stack of operations leading to the error */
operationStack?: string[];
/** User-safe message for display */
userMessage?: string;
/** Internal error identifier for debugging */
errorId?: string;
/** Additional metadata */
metadata?: Record<string, any>;
/** Allow additional properties for flexibility */
[key: string]: any;
}
/**
* Serializable error representation
*/
export interface SerializableError {
name: string;
message: string;
code: string;
context: ErrorContext;
stack?: string;
cause?: SerializableError;
}
/**
* Base error class for all Task Master operations
*
* Provides comprehensive error handling with:
* - Error codes for programmatic handling
* - Rich context and metadata support
* - Error chaining with cause property
* - Serialization for logging and transport
* - Sanitization for user-facing messages
*
* @example
* ```typescript
* try {
* // Some operation that might fail
* throw new TaskMasterError(
* 'Failed to parse task file',
* ERROR_CODES.PARSE_ERROR,
* {
* details: { filename: 'tasks.json', line: 42 },
* operation: 'parseTaskFile',
* userMessage: 'There was an error reading your task file'
* }
* );
* } catch (error) {
* console.error(error.toJSON());
* throw new TaskMasterError(
* 'Operation failed',
* ERROR_CODES.INTERNAL_ERROR,
* { operation: 'processTask' },
* error
* );
* }
* ```
*/
export class TaskMasterError extends Error {
/** Error code for programmatic handling */
public readonly code: string;
/** Rich context and metadata */
public readonly context: ErrorContext;
/** Original error that caused this error (for error chaining) */
public readonly cause?: Error;
/** Timestamp when error was created */
public readonly timestamp: Date;
/**
* Create a new TaskMasterError
*
* @param message - Human-readable error message
* @param code - Error code from ERROR_CODES
* @param context - Additional error context and metadata
* @param cause - Original error that caused this error (for chaining)
*/
constructor(
message: string,
code: string = ERROR_CODES.UNKNOWN_ERROR,
context: ErrorContext = {},
cause?: Error
) {
super(message);
// Set error name
this.name = 'TaskMasterError';
// Set properties
this.code = code;
this.cause = cause;
this.timestamp = new Date();
// Merge context with defaults
this.context = {
timestamp: this.timestamp,
...context
};
// Fix prototype chain for proper instanceof checks
Object.setPrototypeOf(this, TaskMasterError.prototype);
// Maintain proper stack trace
if (Error.captureStackTrace) {
Error.captureStackTrace(this, TaskMasterError);
}
// If we have a cause error, append its stack trace
if (cause?.stack) {
this.stack = `${this.stack}\nCaused by: ${cause.stack}`;
}
}
/**
* Get a user-friendly error message
* Falls back to the main message if no user message is provided
*/
public getUserMessage(): string {
return this.context.userMessage || this.message;
}
/**
* Get sanitized error details safe for user display
* Removes sensitive information and internal details
*/
public getSanitizedDetails(): Record<string, any> {
const { details, resource, operation } = this.context;
return {
code: this.code,
message: this.getUserMessage(),
...(resource && { resource }),
...(operation && { operation }),
...(details &&
typeof details === 'object' &&
!this.containsSensitiveInfo(details) && { details })
};
}
/**
* Check if error details contain potentially sensitive information
*/
private containsSensitiveInfo(obj: any): boolean {
if (typeof obj !== 'object' || obj === null) return false;
const sensitiveKeys = [
'password',
'token',
'key',
'secret',
'auth',
'credential'
];
const objString = JSON.stringify(obj).toLowerCase();
return sensitiveKeys.some((key) => objString.includes(key));
}
/**
* Convert error to JSON for serialization
* Includes all error information for logging and debugging
*/
public toJSON(): SerializableError {
const result: SerializableError = {
name: this.name,
message: this.message,
code: this.code,
context: this.context,
stack: this.stack
};
// Include serialized cause if present
if (this.cause) {
if (this.cause instanceof TaskMasterError) {
result.cause = this.cause.toJSON();
} else {
result.cause = {
name: this.cause.name,
message: this.cause.message,
code: ERROR_CODES.UNKNOWN_ERROR,
context: {},
stack: this.cause.stack
};
}
}
return result;
}
/**
* Convert error to string representation
* Provides formatted output for logging and debugging
*/
public toString(): string {
let result = `${this.name}[${this.code}]: ${this.message}`;
if (this.context.operation) {
result += ` (operation: ${this.context.operation})`;
}
if (this.context.resource) {
result += ` (resource: ${this.context.resource})`;
}
if (this.cause) {
result += `\nCaused by: ${this.cause.toString()}`;
}
return result;
}
/**
* Check if this error is of a specific code
*/
public is(code: string): boolean {
return this.code === code;
}
/**
* Check if this error or any error in its cause chain is of a specific code
*/
public hasCode(code: string): boolean {
if (this.is(code)) return true;
if (this.cause instanceof TaskMasterError) {
return this.cause.hasCode(code);
}
return false;
}
/**
* Create a new error with additional context
*/
public withContext(
additionalContext: Partial<ErrorContext>
): TaskMasterError {
return new TaskMasterError(
this.message,
this.code,
{ ...this.context, ...additionalContext },
this.cause
);
}
/**
* Create a new error wrapping this one as the cause
*/
public wrap(
message: string,
code: string = ERROR_CODES.INTERNAL_ERROR,
context: ErrorContext = {}
): TaskMasterError {
return new TaskMasterError(message, code, context, this);
}
}
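
A short sketch of error chaining with `wrap()` and `hasCode()`, complementing the construction example in the class JSDoc; file and operation names are illustrative:

```typescript
import { ERROR_CODES, TaskMasterError } from './task-master-error.js';

try {
  throw new TaskMasterError('tasks.json missing', ERROR_CODES.FILE_NOT_FOUND, {
    resource: 'tasks.json'
  });
} catch (err) {
  const wrapped = (err as TaskMasterError).wrap(
    'Could not load tasks',
    ERROR_CODES.STORAGE_ERROR,
    { operation: 'loadTasks' }
  );
  console.log(wrapped.hasCode(ERROR_CODES.FILE_NOT_FOUND)); // true — walks the cause chain
  console.log(wrapped.getSanitizedDetails());
}
```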

View File

@@ -0,0 +1,57 @@
/**
* @fileoverview Main entry point for the tm-core package
* This file exports all public APIs from the core Task Master library
*/
// Export main facade
export {
TaskMasterCore,
createTaskMasterCore,
type TaskMasterCoreOptions,
type ListTasksResult
} from './task-master-core';
// Re-export types
export type * from './types';
// Re-export interfaces (types only to avoid conflicts)
export type * from './interfaces';
// Re-export constants
export * from './constants';
// Re-export providers
export * from './providers';
// Re-export storage (selectively to avoid conflicts)
export {
FileStorage,
ApiStorage,
StorageFactory,
type ApiStorageConfig
} from './storage';
export { PlaceholderStorage, type StorageAdapter } from './storage';
// Re-export parser
export * from './parser';
// Re-export utilities
export * from './utils';
// Re-export errors
export * from './errors';
// Re-export entities
export { TaskEntity } from './entities/task.entity';
// Re-export authentication
export {
AuthManager,
AuthenticationError,
type AuthCredentials,
type OAuthFlowOptions,
type AuthConfig
} from './auth';
// Re-export logger
export { getLogger, createLogger, setGlobalLogger } from './logger';

View File

@@ -0,0 +1,423 @@
/**
* @fileoverview AI Provider interface definitions for the tm-core package
* This file defines the contract for all AI provider implementations
*/
/**
* Options for AI completion requests
*/
export interface AIOptions {
/** Temperature for response randomness (0.0 to 1.0) */
temperature?: number;
/** Maximum number of tokens to generate */
maxTokens?: number;
/** Whether to use streaming responses */
stream?: boolean;
/** Top-p sampling parameter (0.0 to 1.0) */
topP?: number;
/** Frequency penalty to reduce repetition (-2.0 to 2.0) */
frequencyPenalty?: number;
/** Presence penalty to encourage new topics (-2.0 to 2.0) */
presencePenalty?: number;
/** Stop sequences to halt generation */
stop?: string | string[];
/** Custom system prompt override */
systemPrompt?: string;
/** Request timeout in milliseconds */
timeout?: number;
/** Number of retry attempts on failure */
retries?: number;
}
/**
* Response from AI completion request
*/
export interface AIResponse {
/** Generated text content */
content: string;
/** Token count for the request */
inputTokens: number;
/** Token count for the response */
outputTokens: number;
/** Total tokens used */
totalTokens: number;
/** Cost in USD (if available) */
cost?: number;
/** Model used for generation */
model: string;
/** Provider name */
provider: string;
/** Response timestamp */
timestamp: string;
/** Request duration in milliseconds */
duration: number;
/** Whether the response was cached */
cached?: boolean;
/** Finish reason (completed, length, stop, etc.) */
finishReason?: string;
}
/**
* AI model information
*/
export interface AIModel {
/** Model identifier */
id: string;
/** Human-readable model name */
name: string;
/** Model description */
description?: string;
/** Maximum context length in tokens */
contextLength: number;
/** Input cost per 1K tokens in USD */
inputCostPer1K?: number;
/** Output cost per 1K tokens in USD */
outputCostPer1K?: number;
/** Whether the model supports function calling */
supportsFunctions?: boolean;
/** Whether the model supports vision/image inputs */
supportsVision?: boolean;
/** Whether the model supports streaming */
supportsStreaming?: boolean;
}
/**
* Provider capabilities and metadata
*/
export interface ProviderInfo {
/** Provider name */
name: string;
/** Provider display name */
displayName: string;
/** Provider description */
description?: string;
/** Base API URL */
baseUrl?: string;
/** Available models */
models: AIModel[];
/** Default model ID */
defaultModel: string;
/** Whether the provider requires an API key */
requiresApiKey: boolean;
/** Supported features */
features: {
streaming?: boolean;
functions?: boolean;
vision?: boolean;
embeddings?: boolean;
};
}
/**
* Interface for AI provider implementations
* All AI providers must implement this interface
*/
export interface IAIProvider {
/**
* Generate a text completion from a prompt
* @param prompt - Input prompt text
* @param options - Optional generation parameters
* @returns Promise that resolves to AI response
*/
generateCompletion(prompt: string, options?: AIOptions): Promise<AIResponse>;
/**
* Generate a streaming completion (if supported)
* @param prompt - Input prompt text
* @param options - Optional generation parameters
* @returns AsyncIterator of response chunks
*/
generateStreamingCompletion(
prompt: string,
options?: AIOptions
): AsyncIterator<Partial<AIResponse>>;
/**
* Calculate token count for given text
* @param text - Text to count tokens for
* @param model - Optional model to use for counting
* @returns Number of tokens
*/
calculateTokens(text: string, model?: string): number;
/**
* Get the provider name
* @returns Provider name string
*/
getName(): string;
/**
* Get current model being used
* @returns Current model ID
*/
getModel(): string;
/**
* Set the model to use for requests
* @param model - Model ID to use
*/
setModel(model: string): void;
/**
* Get the default model for this provider
* @returns Default model ID
*/
getDefaultModel(): string;
/**
* Check if the provider is available and configured
* @returns Promise that resolves to availability status
*/
isAvailable(): Promise<boolean>;
/**
* Get provider information and capabilities
* @returns Provider information object
*/
getProviderInfo(): ProviderInfo;
/**
* Get available models for this provider
* @returns Array of available models
*/
getAvailableModels(): AIModel[];
/**
* Validate API key or credentials
* @returns Promise that resolves to validation status
*/
validateCredentials(): Promise<boolean>;
/**
* Get usage statistics if available
* @returns Promise that resolves to usage stats or null
*/
getUsageStats(): Promise<ProviderUsageStats | null>;
/**
* Initialize the provider (set up connections, validate config, etc.)
* @returns Promise that resolves when initialization is complete
*/
initialize(): Promise<void>;
/**
* Clean up and close provider connections
* @returns Promise that resolves when cleanup is complete
*/
close(): Promise<void>;
}
/**
* Usage statistics for a provider
*/
export interface ProviderUsageStats {
/** Total requests made */
totalRequests: number;
/** Total tokens consumed */
totalTokens: number;
/** Total cost in USD */
totalCost: number;
/** Requests today */
requestsToday: number;
/** Tokens used today */
tokensToday: number;
/** Cost today */
costToday: number;
/** Average response time in milliseconds */
averageResponseTime: number;
/** Success rate (0.0 to 1.0) */
successRate: number;
/** Last request timestamp */
lastRequestAt?: string;
/** Rate limit information if available */
rateLimits?: {
requestsPerMinute: number;
tokensPerMinute: number;
requestsRemaining: number;
tokensRemaining: number;
resetTime: string;
};
}
/**
* Configuration for AI provider instances
*/
export interface AIProviderConfig {
/** API key for the provider */
apiKey: string;
/** Base URL override */
baseUrl?: string;
/** Default model to use */
model?: string;
/** Default generation options */
defaultOptions?: AIOptions;
/** Request timeout in milliseconds */
timeout?: number;
/** Maximum retry attempts */
maxRetries?: number;
/** Custom headers to include in requests */
headers?: Record<string, string>;
/** Enable request/response logging */
enableLogging?: boolean;
/** Enable usage tracking */
enableUsageTracking?: boolean;
}
/**
* Abstract base class for AI provider implementations
* Provides common functionality and enforces the interface
*/
export abstract class BaseAIProvider implements IAIProvider {
protected config: AIProviderConfig;
protected currentModel: string;
protected usageStats: ProviderUsageStats | null = null;
constructor(config: AIProviderConfig) {
this.config = config;
this.currentModel = config.model || this.getDefaultModel();
if (config.enableUsageTracking) {
this.initializeUsageTracking();
}
}
// Abstract methods that must be implemented by concrete classes
abstract generateCompletion(
prompt: string,
options?: AIOptions
): Promise<AIResponse>;
abstract generateStreamingCompletion(
prompt: string,
options?: AIOptions
): AsyncIterator<Partial<AIResponse>>;
abstract calculateTokens(text: string, model?: string): number;
abstract getName(): string;
abstract getDefaultModel(): string;
abstract isAvailable(): Promise<boolean>;
abstract getProviderInfo(): ProviderInfo;
abstract validateCredentials(): Promise<boolean>;
abstract initialize(): Promise<void>;
abstract close(): Promise<void>;
// Implemented methods with common functionality
getModel(): string {
return this.currentModel;
}
setModel(model: string): void {
const availableModels = this.getAvailableModels();
const modelExists = availableModels.some((m) => m.id === model);
if (!modelExists) {
throw new Error(
`Model "${model}" is not available for provider "${this.getName()}"`
);
}
this.currentModel = model;
}
getAvailableModels(): AIModel[] {
return this.getProviderInfo().models;
}
async getUsageStats(): Promise<ProviderUsageStats | null> {
return this.usageStats;
}
/**
* Initialize usage tracking
*/
protected initializeUsageTracking(): void {
this.usageStats = {
totalRequests: 0,
totalTokens: 0,
totalCost: 0,
requestsToday: 0,
tokensToday: 0,
costToday: 0,
averageResponseTime: 0,
successRate: 1.0
};
}
/**
* Update usage statistics after a request
* @param response - AI response to record
* @param duration - Request duration in milliseconds
* @param success - Whether the request was successful
*/
protected updateUsageStats(
response: AIResponse,
duration: number,
success: boolean
): void {
if (!this.usageStats) return;
this.usageStats.totalRequests++;
this.usageStats.totalTokens += response.totalTokens;
if (response.cost) {
this.usageStats.totalCost += response.cost;
}
// Update daily stats (simplified - would need proper date tracking)
this.usageStats.requestsToday++;
this.usageStats.tokensToday += response.totalTokens;
if (response.cost) {
this.usageStats.costToday += response.cost;
}
// Update average response time
const totalTime =
this.usageStats.averageResponseTime * (this.usageStats.totalRequests - 1);
this.usageStats.averageResponseTime =
(totalTime + duration) / this.usageStats.totalRequests;
// Update success rate
const successCount = Math.floor(
this.usageStats.successRate * (this.usageStats.totalRequests - 1)
);
const newSuccessCount = successCount + (success ? 1 : 0);
this.usageStats.successRate =
newSuccessCount / this.usageStats.totalRequests;
this.usageStats.lastRequestAt = new Date().toISOString();
}
/**
* Merge user options with default options
* @param userOptions - User-provided options
* @returns Merged options object
*/
protected mergeOptions(userOptions?: AIOptions): AIOptions {
return {
temperature: 0.7,
maxTokens: 2000,
stream: false,
topP: 1.0,
frequencyPenalty: 0.0,
presencePenalty: 0.0,
timeout: 30000,
retries: 3,
...this.config.defaultOptions,
...userOptions
};
}
/**
* Validate prompt input
* @param prompt - Prompt to validate
* @throws Error if prompt is invalid
*/
protected validatePrompt(prompt: string): void {
if (!prompt || typeof prompt !== 'string') {
throw new Error('Prompt must be a non-empty string');
}
if (prompt.trim().length === 0) {
throw new Error('Prompt cannot be empty or only whitespace');
}
}
}
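
The abstract base class leaves the provider-specific pieces open; a minimal sketch of a concrete implementation might look like the following. The "echo" provider is purely illustrative and not a real Task Master provider:

```typescript
import {
  BaseAIProvider,
  type AIOptions,
  type AIResponse,
  type ProviderInfo
} from './ai-provider.interface.js';

/** Toy provider that echoes the prompt back; it only illustrates the contract. */
class EchoProvider extends BaseAIProvider {
  getName(): string {
    return 'echo';
  }

  getDefaultModel(): string {
    return 'echo-1';
  }

  getProviderInfo(): ProviderInfo {
    return {
      name: 'echo',
      displayName: 'Echo (example)',
      models: [{ id: 'echo-1', name: 'Echo 1', contextLength: 4096 }],
      defaultModel: 'echo-1',
      requiresApiKey: false,
      features: { streaming: true }
    };
  }

  calculateTokens(text: string): number {
    return Math.ceil(text.length / 4); // rough heuristic, not a real tokenizer
  }

  async generateCompletion(prompt: string, options?: AIOptions): Promise<AIResponse> {
    this.validatePrompt(prompt);
    const opts = this.mergeOptions(options);
    const started = Date.now();
    const content = prompt.slice(0, opts.maxTokens); // stand-in for a real API call
    return {
      content,
      inputTokens: this.calculateTokens(prompt),
      outputTokens: this.calculateTokens(content),
      totalTokens: this.calculateTokens(prompt) + this.calculateTokens(content),
      model: this.getModel(),
      provider: this.getName(),
      timestamp: new Date().toISOString(),
      duration: Date.now() - started
    };
  }

  async *generateStreamingCompletion(prompt: string, options?: AIOptions) {
    // A real provider would yield chunks as they arrive from the API.
    const response = await this.generateCompletion(prompt, options);
    yield { content: response.content };
  }

  async isAvailable(): Promise<boolean> {
    return true;
  }

  async validateCredentials(): Promise<boolean> {
    return true;
  }

  async initialize(): Promise<void> {}

  async close(): Promise<void> {}
}

// Usage: const echo = new EchoProvider({ apiKey: '' });
```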

View File

@@ -0,0 +1,415 @@
/**
* @fileoverview Configuration interface definitions for the tm-core package
* This file defines the contract for configuration management
*/
import type { TaskComplexity, TaskPriority } from '../types/index';
/**
* Model configuration for different AI roles
*/
export interface ModelConfig {
/** Primary model for task generation and updates */
main: string;
/** Research model for enhanced task analysis (optional) */
research?: string;
/** Fallback model when primary fails */
fallback: string;
}
/**
* AI provider configuration
*/
export interface ProviderConfig {
/** Provider name (e.g., 'anthropic', 'openai', 'perplexity') */
name: string;
/** API key for the provider */
apiKey?: string;
/** Base URL override */
baseUrl?: string;
/** Custom configuration options */
options?: Record<string, unknown>;
/** Whether this provider is enabled */
enabled?: boolean;
}
/**
* Task generation and management settings
*/
export interface TaskSettings {
/** Default priority for new tasks */
defaultPriority: TaskPriority;
/** Default complexity for analysis */
defaultComplexity: TaskComplexity;
/** Maximum number of subtasks per task */
maxSubtasks: number;
/** Maximum number of concurrent tasks */
maxConcurrentTasks: number;
/** Enable automatic task ID generation */
autoGenerateIds: boolean;
/** Task ID prefix (e.g., 'TASK-', 'TM-') */
taskIdPrefix?: string;
/** Enable task dependency validation */
validateDependencies: boolean;
/** Enable automatic timestamps */
enableTimestamps: boolean;
/** Enable effort tracking */
enableEffortTracking: boolean;
}
/**
* Tag and context management settings
*/
export interface TagSettings {
/** Enable tag-based task organization */
enableTags: boolean;
/** Default tag for new tasks */
defaultTag: string;
/** Maximum number of tags per task */
maxTagsPerTask: number;
/** Enable automatic tag creation from Git branches */
autoCreateFromBranch: boolean;
/** Tag naming convention (kebab-case, camelCase, snake_case) */
tagNamingConvention: 'kebab-case' | 'camelCase' | 'snake_case';
}
/**
* Storage and persistence settings
*/
export interface StorageSettings {
/** Storage backend type - 'auto' detects based on auth status */
type: 'file' | 'api' | 'auto';
/** Base path for file storage */
basePath?: string;
/** API endpoint for API storage (Hamster integration) */
apiEndpoint?: string;
/** Access token for API authentication */
apiAccessToken?: string;
/** Indicates whether API is configured (has endpoint or token) */
apiConfigured?: boolean;
/** Enable automatic backups */
enableBackup: boolean;
/** Maximum number of backups to retain */
maxBackups: number;
/** Enable compression for storage */
enableCompression: boolean;
/** File encoding for text files */
encoding: BufferEncoding;
/** Enable atomic file operations */
atomicOperations: boolean;
}
/**
* Retry and resilience settings
*/
export interface RetrySettings {
/** Number of retry attempts for failed operations */
retryAttempts: number;
/** Base delay between retries in milliseconds */
retryDelay: number;
/** Maximum delay between retries in milliseconds */
maxRetryDelay: number;
/** Exponential backoff multiplier */
backoffMultiplier: number;
/** Request timeout in milliseconds */
requestTimeout: number;
/** Enable retry for network errors */
retryOnNetworkError: boolean;
/** Enable retry for rate limit errors */
retryOnRateLimit: boolean;
}
/**
* Logging and debugging settings
*/
export interface LoggingSettings {
/** Enable logging */
enabled: boolean;
/** Log level (error, warn, info, debug) */
level: 'error' | 'warn' | 'info' | 'debug';
/** Log file path (optional) */
filePath?: string;
/** Enable request/response logging */
logRequests: boolean;
/** Enable performance metrics logging */
logPerformance: boolean;
/** Enable error stack traces */
logStackTraces: boolean;
/** Maximum log file size in MB */
maxFileSize: number;
/** Maximum number of log files to retain */
maxFiles: number;
}
/**
* Security and validation settings
*/
export interface SecuritySettings {
/** Enable API key validation */
validateApiKeys: boolean;
/** Enable request rate limiting */
enableRateLimit: boolean;
/** Maximum requests per minute */
maxRequestsPerMinute: number;
/** Enable input sanitization */
sanitizeInputs: boolean;
/** Maximum prompt length in characters */
maxPromptLength: number;
/** Allowed file extensions for imports */
allowedFileExtensions: string[];
/** Enable CORS protection */
enableCors: boolean;
}
/**
* Main configuration interface for Task Master core
*/
export interface IConfiguration {
/** Project root path */
projectPath: string;
/** Current AI provider name */
aiProvider: string;
/** API keys for different providers */
apiKeys: Record<string, string>;
/** Model configuration for different roles */
models: ModelConfig;
/** Provider configurations */
providers: Record<string, ProviderConfig>;
/** Task management settings */
tasks: TaskSettings;
/** Tag and context settings */
tags: TagSettings;
/** Storage configuration */
storage: StorageSettings;
/** Retry and resilience settings */
retry: RetrySettings;
/** Logging configuration */
logging: LoggingSettings;
/** Security settings */
security: SecuritySettings;
/** Custom user-defined settings */
custom?: Record<string, unknown>;
/** Configuration version for migration purposes */
version: string;
/** Last updated timestamp */
lastUpdated: string;
}
/**
* Partial configuration for updates (all fields optional)
*/
export type PartialConfiguration = Partial<IConfiguration>;
/**
* Configuration validation result
*/
export interface ConfigValidationResult {
/** Whether the configuration is valid */
isValid: boolean;
/** Array of error messages */
errors: string[];
/** Array of warning messages */
warnings: string[];
/** Suggested fixes */
suggestions?: string[];
}
/**
* Environment variable configuration mapping
*/
export interface EnvironmentConfig {
/** Mapping of environment variables to config paths */
variables: Record<string, string>;
/** Prefix for environment variables */
prefix: string;
/** Whether to override existing config with env vars */
override: boolean;
}
/**
* Configuration schema definition for validation
*/
export interface ConfigSchema {
/** Schema for the main configuration */
properties: Record<string, ConfigProperty>;
/** Required properties */
required: string[];
/** Additional properties allowed */
additionalProperties: boolean;
}
/**
* Configuration property schema
*/
export interface ConfigProperty {
/** Property type */
type: 'string' | 'number' | 'boolean' | 'object' | 'array';
/** Property description */
description?: string;
/** Default value */
default?: unknown;
/** Allowed values for enums */
enum?: unknown[];
/** Minimum value (for numbers) */
minimum?: number;
/** Maximum value (for numbers) */
maximum?: number;
/** Pattern for string validation */
pattern?: string;
/** Nested properties (for objects) */
properties?: Record<string, ConfigProperty>;
/** Array item type (for arrays) */
items?: ConfigProperty;
/** Whether the property is required */
required?: boolean;
}
/**
* Default configuration factory
*/
export interface IConfigurationFactory {
/**
* Create a default configuration
* @param projectPath - Project root path
* @returns Default configuration object
*/
createDefault(projectPath: string): IConfiguration;
/**
* Merge configurations with precedence
* @param base - Base configuration
* @param override - Override configuration
* @returns Merged configuration
*/
merge(base: IConfiguration, override: PartialConfiguration): IConfiguration;
/**
* Validate configuration against schema
* @param config - Configuration to validate
* @returns Validation result
*/
validate(config: IConfiguration): ConfigValidationResult;
/**
* Load configuration from environment variables
* @param envConfig - Environment variable mapping
* @returns Partial configuration from environment
*/
loadFromEnvironment(envConfig: EnvironmentConfig): PartialConfiguration;
/**
* Get configuration schema
* @returns Configuration schema definition
*/
getSchema(): ConfigSchema;
}
/**
* Configuration manager interface
*/
export interface IConfigurationManager {
/**
* Load configuration from file or create default
* @param configPath - Path to configuration file
* @returns Promise that resolves to configuration
*/
load(configPath?: string): Promise<IConfiguration>;
/**
* Save configuration to file
* @param config - Configuration to save
* @param configPath - Optional path override
* @returns Promise that resolves when save is complete
*/
save(config: IConfiguration, configPath?: string): Promise<void>;
/**
* Update configuration with partial changes
* @param updates - Partial configuration updates
* @returns Promise that resolves to updated configuration
*/
update(updates: PartialConfiguration): Promise<IConfiguration>;
/**
* Get current configuration
* @returns Current configuration object
*/
getConfig(): IConfiguration;
/**
* Watch for configuration changes
* @param callback - Function to call when config changes
* @returns Function to stop watching
*/
watch(callback: (config: IConfiguration) => void): () => void;
/**
* Validate current configuration
* @returns Validation result
*/
validate(): ConfigValidationResult;
/**
* Reset configuration to defaults
* @returns Promise that resolves when reset is complete
*/
reset(): Promise<void>;
}
/**
* Constants for default configuration values
*/
export const DEFAULT_CONFIG_VALUES = {
MODELS: {
MAIN: 'claude-3-5-sonnet-20241022',
FALLBACK: 'gpt-4o-mini'
},
TASKS: {
DEFAULT_PRIORITY: 'medium' as TaskPriority,
DEFAULT_COMPLEXITY: 'moderate' as TaskComplexity,
MAX_SUBTASKS: 20,
MAX_CONCURRENT: 5,
TASK_ID_PREFIX: 'TASK-'
},
TAGS: {
DEFAULT_TAG: 'master',
MAX_TAGS_PER_TASK: 10,
NAMING_CONVENTION: 'kebab-case' as const
},
STORAGE: {
TYPE: 'auto' as const,
ENCODING: 'utf8' as BufferEncoding,
MAX_BACKUPS: 5
},
RETRY: {
ATTEMPTS: 3,
DELAY: 1000,
MAX_DELAY: 30000,
BACKOFF_MULTIPLIER: 2,
TIMEOUT: 30000
},
LOGGING: {
LEVEL: 'info' as const,
MAX_FILE_SIZE: 10,
MAX_FILES: 5
},
SECURITY: {
MAX_REQUESTS_PER_MINUTE: 60,
MAX_PROMPT_LENGTH: 100000,
ALLOWED_EXTENSIONS: ['.txt', '.md', '.json']
},
VERSION: '1.0.0'
} as const;
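
A minimal usage sketch against the `IConfigurationManager` contract defined above. The import path and the concrete manager implementation are assumptions; only the method signatures come from this file.

```typescript
// Assumed import path; only the shapes below come from configuration.interface.ts.
import type {
	IConfigurationManager,
	PartialConfiguration
} from './interfaces/configuration.interface';

async function configureProject(manager: IConfigurationManager): Promise<void> {
	// Load the project's configuration (or defaults built from DEFAULT_CONFIG_VALUES).
	const config = await manager.load();
	console.log(`Config v${config.version}, provider: ${config.aiProvider}`);

	// PartialConfiguration is a shallow Partial<IConfiguration>, so any top-level
	// section that is overridden must be supplied whole.
	const updates: PartialConfiguration = { aiProvider: 'anthropic' };
	await manager.update(updates);

	// Validation reports errors, warnings, and optional suggested fixes.
	const result = manager.validate();
	if (!result.isValid) {
		console.warn('Invalid configuration:', result.errors, result.suggestions);
	}

	// watch() returns an unsubscribe function.
	const stop = manager.watch((next) => {
		console.log('Configuration changed at', next.lastUpdated);
	});
	stop();
}
```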

View File

@@ -0,0 +1,16 @@
/**
* @fileoverview Interface definitions index for the tm-core package
* This file exports all interface definitions from their respective modules
*/
// Storage interfaces
export type * from './storage.interface';
export * from './storage.interface';
// AI Provider interfaces
export type * from './ai-provider.interface';
export * from './ai-provider.interface';
// Configuration interfaces
export type * from './configuration.interface';
export * from './configuration.interface';

View File

@@ -0,0 +1,238 @@
/**
* @fileoverview Storage interface definitions for the tm-core package
* This file defines the contract for all storage implementations
*/
import type { Task, TaskMetadata } from '../types/index';
/**
* Interface for storage operations on tasks
* All storage implementations must implement this interface
*/
export interface IStorage {
/**
* Load all tasks from storage, optionally filtered by tag
* @param tag - Optional tag to filter tasks by
* @returns Promise that resolves to an array of tasks
*/
loadTasks(tag?: string): Promise<Task[]>;
/**
* Save tasks to storage, replacing existing tasks
* @param tasks - Array of tasks to save
* @param tag - Optional tag context for the tasks
* @returns Promise that resolves when save is complete
*/
saveTasks(tasks: Task[], tag?: string): Promise<void>;
/**
* Append new tasks to existing storage without replacing
* @param tasks - Array of tasks to append
* @param tag - Optional tag context for the tasks
* @returns Promise that resolves when append is complete
*/
appendTasks(tasks: Task[], tag?: string): Promise<void>;
/**
* Update a specific task by ID
* @param taskId - ID of the task to update
* @param updates - Partial task object with fields to update
* @param tag - Optional tag context for the task
* @returns Promise that resolves when update is complete
*/
updateTask(
taskId: string,
updates: Partial<Task>,
tag?: string
): Promise<void>;
/**
* Delete a task by ID
* @param taskId - ID of the task to delete
* @param tag - Optional tag context for the task
* @returns Promise that resolves when deletion is complete
*/
deleteTask(taskId: string, tag?: string): Promise<void>;
/**
* Check if tasks exist in storage for the given tag
* @param tag - Optional tag to check existence for
* @returns Promise that resolves to boolean indicating existence
*/
exists(tag?: string): Promise<boolean>;
/**
* Load metadata about the task collection
* @param tag - Optional tag to get metadata for
* @returns Promise that resolves to task metadata
*/
loadMetadata(tag?: string): Promise<TaskMetadata | null>;
/**
* Save metadata about the task collection
* @param metadata - Metadata object to save
* @param tag - Optional tag context for the metadata
* @returns Promise that resolves when save is complete
*/
saveMetadata(metadata: TaskMetadata, tag?: string): Promise<void>;
/**
* Get all available tags in storage
* @returns Promise that resolves to array of available tags
*/
getAllTags(): Promise<string[]>;
/**
* Delete all tasks and metadata for a specific tag
* @param tag - Tag to delete
* @returns Promise that resolves when deletion is complete
*/
deleteTag(tag: string): Promise<void>;
/**
* Rename a tag (move all tasks from old tag to new tag)
* @param oldTag - Current tag name
* @param newTag - New tag name
* @returns Promise that resolves when rename is complete
*/
renameTag(oldTag: string, newTag: string): Promise<void>;
/**
* Copy all tasks from one tag to another
* @param sourceTag - Source tag to copy from
* @param targetTag - Target tag to copy to
* @returns Promise that resolves when copy is complete
*/
copyTag(sourceTag: string, targetTag: string): Promise<void>;
/**
* Initialize storage (create necessary directories, files, etc.)
* @returns Promise that resolves when initialization is complete
*/
initialize(): Promise<void>;
/**
* Clean up and close storage connections
* @returns Promise that resolves when cleanup is complete
*/
close(): Promise<void>;
/**
* Get storage statistics (file sizes, task counts, etc.)
* @returns Promise that resolves to storage statistics
*/
getStats(): Promise<StorageStats>;
}
/**
* Storage statistics interface
*/
export interface StorageStats {
/** Total number of tasks across all tags */
totalTasks: number;
/** Total number of tags */
totalTags: number;
/** Storage size in bytes */
storageSize: number;
/** Last modified timestamp */
lastModified: string;
/** Available tags with task counts */
tagStats: Array<{
tag: string;
taskCount: number;
lastModified: string;
}>;
}
/**
* Configuration options for storage implementations
*/
export interface StorageConfig {
/** Base path for storage */
basePath: string;
/** Enable backup creation */
enableBackup?: boolean;
/** Maximum number of backups to keep */
maxBackups?: number;
/** Enable compression for storage */
enableCompression?: boolean;
/** File encoding (default: utf8) */
encoding?: BufferEncoding;
/** Enable atomic writes */
atomicWrites?: boolean;
}
/**
* Base abstract class for storage implementations
* Provides common functionality and enforces the interface
*/
export abstract class BaseStorage implements IStorage {
protected config: StorageConfig;
constructor(config: StorageConfig) {
this.config = config;
}
// Abstract methods that must be implemented by concrete classes
abstract loadTasks(tag?: string): Promise<Task[]>;
abstract saveTasks(tasks: Task[], tag?: string): Promise<void>;
abstract appendTasks(tasks: Task[], tag?: string): Promise<void>;
abstract updateTask(
taskId: string,
updates: Partial<Task>,
tag?: string
): Promise<void>;
abstract deleteTask(taskId: string, tag?: string): Promise<void>;
abstract exists(tag?: string): Promise<boolean>;
abstract loadMetadata(tag?: string): Promise<TaskMetadata | null>;
abstract saveMetadata(metadata: TaskMetadata, tag?: string): Promise<void>;
abstract getAllTags(): Promise<string[]>;
abstract deleteTag(tag: string): Promise<void>;
abstract renameTag(oldTag: string, newTag: string): Promise<void>;
abstract copyTag(sourceTag: string, targetTag: string): Promise<void>;
abstract initialize(): Promise<void>;
abstract close(): Promise<void>;
abstract getStats(): Promise<StorageStats>;
/**
* Utility method to generate backup filename
* @param originalPath - Original file path
* @returns Backup file path with timestamp
*/
protected generateBackupPath(originalPath: string): string {
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const parts = originalPath.split('.');
const extension = parts.pop();
const baseName = parts.join('.');
return `${baseName}.backup.${timestamp}.${extension}`;
}
/**
* Utility method to validate task data before storage operations
* @param task - Task to validate
* @throws Error if task is invalid
*/
protected validateTask(task: Task): void {
if (!task.id) {
throw new Error('Task ID is required');
}
if (!task.title) {
throw new Error('Task title is required');
}
if (!task.description) {
throw new Error('Task description is required');
}
if (!task.status) {
throw new Error('Task status is required');
}
}
/**
* Utility method to sanitize tag names for file system safety
* @param tag - Tag name to sanitize
* @returns Sanitized tag name
*/
protected sanitizeTag(tag: string): string {
return tag.replace(/[^a-zA-Z0-9-_]/g, '-').toLowerCase();
}
}
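
A short consumer-side sketch of the `IStorage` contract above. The helper and import path are illustrative; the storage instance could be any implementation, such as the `FileStorage` or `ApiStorage` classes later in this changeset.

```typescript
import type { IStorage } from './interfaces/storage.interface';

// Illustrative helper: mark a task as done under a tag, then refresh the tag metadata.
async function completeTask(
	storage: IStorage,
	taskId: string,
	tag = 'master'
): Promise<void> {
	await storage.initialize();
	try {
		await storage.updateTask(taskId, { status: 'done' }, tag);

		const tasks = await storage.loadTasks(tag);
		const meta = await storage.loadMetadata(tag);
		if (meta) {
			await storage.saveMetadata(
				{
					...meta,
					taskCount: tasks.length,
					completedCount: tasks.filter((t) => t.status === 'done').length,
					lastModified: new Date().toISOString()
				},
				tag
			);
		}
	} finally {
		await storage.close();
	}
}
```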

View File

@@ -0,0 +1,59 @@
/**
* @fileoverview Logger factory and singleton management
*/
import { Logger, LoggerConfig } from './logger.js';
// Global logger instance
let globalLogger: Logger | null = null;
// Named logger instances
const loggers = new Map<string, Logger>();
/**
* Create a new logger instance
*/
export function createLogger(config?: LoggerConfig): Logger {
return new Logger(config);
}
/**
* Get or create a named logger instance
*/
export function getLogger(name?: string, config?: LoggerConfig): Logger {
// If no name provided, return global logger
if (!name) {
if (!globalLogger) {
globalLogger = createLogger(config);
}
return globalLogger;
}
// Check if named logger exists
if (!loggers.has(name)) {
loggers.set(
name,
createLogger({
prefix: name,
...config
})
);
}
return loggers.get(name)!;
}
/**
* Set the global logger instance
*/
export function setGlobalLogger(logger: Logger): void {
globalLogger = logger;
}
/**
* Clear all logger instances (useful for testing)
*/
export function clearLoggers(): void {
globalLogger = null;
loggers.clear();
}
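
A usage sketch of the factory (import paths are assumptions):

```typescript
import { getLogger, clearLoggers } from './logger/factory';
import { LogLevel } from './logger/logger';

// No name: returns the shared global logger (created lazily on first call).
const log = getLogger();
log.warn('falling back to file storage');

// Named loggers are created once and cached; the name doubles as the prefix.
const storageLog = getLogger('storage', { level: LogLevel.DEBUG });
storageLog.debug('loaded 12 tasks for tag "master"');
console.log(getLogger('storage') === storageLog); // true - same cached instance

// In tests, reset all instances so per-test configuration cannot leak.
clearLoggers();
```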

View File

@@ -0,0 +1,8 @@
/**
* @fileoverview Logger package for Task Master
* Provides centralized logging with support for different modes and levels
*/
export { Logger, LogLevel } from './logger.js';
export type { LoggerConfig } from './logger.js';
export { createLogger, getLogger, setGlobalLogger } from './factory.js';

View File

@@ -0,0 +1,242 @@
/**
* @fileoverview Core logger implementation
*/
import chalk from 'chalk';
export enum LogLevel {
SILENT = 0,
ERROR = 1,
WARN = 2,
INFO = 3,
DEBUG = 4
}
export interface LoggerConfig {
level?: LogLevel;
silent?: boolean;
prefix?: string;
timestamp?: boolean;
colors?: boolean;
// MCP mode silences all output
mcpMode?: boolean;
}
export class Logger {
private config: Required<LoggerConfig>;
private static readonly DEFAULT_CONFIG: Required<LoggerConfig> = {
level: LogLevel.WARN,
silent: false,
prefix: '',
timestamp: false,
colors: true,
mcpMode: false
};
constructor(config: LoggerConfig = {}) {
// Check environment variables
const envConfig: LoggerConfig = {};
// Check for MCP mode
if (
process.env.MCP_MODE === 'true' ||
process.env.TASK_MASTER_MCP === 'true'
) {
envConfig.mcpMode = true;
}
// Check for silent mode
if (
process.env.TASK_MASTER_SILENT === 'true' ||
process.env.TM_SILENT === 'true'
) {
envConfig.silent = true;
}
// Check for log level
if (process.env.TASK_MASTER_LOG_LEVEL || process.env.TM_LOG_LEVEL) {
const levelStr = (
process.env.TASK_MASTER_LOG_LEVEL ||
process.env.TM_LOG_LEVEL ||
''
).toUpperCase();
if (levelStr in LogLevel) {
envConfig.level = LogLevel[levelStr as keyof typeof LogLevel];
}
}
// Check for no colors
if (
process.env.NO_COLOR === 'true' ||
process.env.TASK_MASTER_NO_COLOR === 'true'
) {
envConfig.colors = false;
}
// Merge configs: defaults < constructor < environment
this.config = {
...Logger.DEFAULT_CONFIG,
...config,
...envConfig
};
// MCP mode overrides everything to be silent
if (this.config.mcpMode) {
this.config.silent = true;
}
}
/**
* Check if logging is enabled for a given level
*/
private shouldLog(level: LogLevel): boolean {
if (this.config.silent || this.config.mcpMode) {
return false;
}
return level <= this.config.level;
}
/**
* Format a log message
*/
private formatMessage(
level: LogLevel,
message: string,
...args: any[]
): string {
let formatted = '';
// Add timestamp if enabled
if (this.config.timestamp) {
const timestamp = new Date().toISOString();
formatted += this.config.colors
? chalk.gray(`[${timestamp}] `)
: `[${timestamp}] `;
}
// Add prefix if configured
if (this.config.prefix) {
formatted += this.config.colors
? chalk.cyan(`[${this.config.prefix}] `)
: `[${this.config.prefix}] `;
}
// Skip level indicator for cleaner output
// We can still color the message based on level
if (this.config.colors) {
switch (level) {
case LogLevel.ERROR:
message = chalk.red(message);
break;
case LogLevel.WARN:
message = chalk.yellow(message);
break;
case LogLevel.INFO:
// Info stays default color
break;
case LogLevel.DEBUG:
message = chalk.gray(message);
break;
}
}
// Add the message
formatted += message;
// Add any additional arguments
if (args.length > 0) {
formatted +=
' ' +
args
.map((arg) =>
typeof arg === 'object' ? JSON.stringify(arg, null, 2) : String(arg)
)
.join(' ');
}
return formatted;
}
/**
* Log an error message
*/
error(message: string, ...args: any[]): void {
if (!this.shouldLog(LogLevel.ERROR)) return;
console.error(this.formatMessage(LogLevel.ERROR, message, ...args));
}
/**
* Log a warning message
*/
warn(message: string, ...args: any[]): void {
if (!this.shouldLog(LogLevel.WARN)) return;
console.warn(this.formatMessage(LogLevel.WARN, message, ...args));
}
/**
* Log an info message
*/
info(message: string, ...args: any[]): void {
if (!this.shouldLog(LogLevel.INFO)) return;
console.log(this.formatMessage(LogLevel.INFO, message, ...args));
}
/**
* Log a debug message
*/
debug(message: string, ...args: any[]): void {
if (!this.shouldLog(LogLevel.DEBUG)) return;
console.log(this.formatMessage(LogLevel.DEBUG, message, ...args));
}
/**
* Log a message without any formatting (raw output)
* Useful for CLI output that should appear as-is
*/
log(message: string, ...args: any[]): void {
if (this.config.silent || this.config.mcpMode) return;
if (args.length > 0) {
console.log(message, ...args);
} else {
console.log(message);
}
}
/**
* Update logger configuration
*/
setConfig(config: Partial<LoggerConfig>): void {
this.config = {
...this.config,
...config
};
// MCP mode always overrides to silent
if (this.config.mcpMode) {
this.config.silent = true;
}
}
/**
* Get current configuration
*/
getConfig(): Readonly<Required<LoggerConfig>> {
return { ...this.config };
}
/**
* Create a child logger with a prefix
*/
child(prefix: string, config?: Partial<LoggerConfig>): Logger {
const childPrefix = this.config.prefix
? `${this.config.prefix}:${prefix}`
: prefix;
return new Logger({
...this.config,
...config,
prefix: childPrefix
});
}
}
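
A usage sketch of the `Logger` class itself, showing the precedence rules and child prefixes described above. The import path is assumed, and the behaviour shown assumes no overriding environment variables are set.

```typescript
import { Logger, LogLevel } from './logger/logger';

// Defaults < constructor < environment: TASK_MASTER_LOG_LEVEL, MCP_MODE, etc.
// would override the options passed here.
const root = new Logger({ prefix: 'tm-core', level: LogLevel.INFO, timestamp: true });

// Child loggers chain prefixes, so output is tagged "[tm-core:storage]".
const storage = root.child('storage');
storage.info('file storage initialized');
storage.debug('suppressed: DEBUG is above the configured INFO level');

// Raw output bypasses formatting but still honours silent/MCP mode.
root.log('Tasks: 12 pending, 3 done');
```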

View File

@@ -0,0 +1,39 @@
/**
* @fileoverview Task parsing functionality for the tm-core package
* This file exports all parsing-related classes and functions
*/
import type { PlaceholderTask } from '../types/index';
// Parser implementations will be defined here
// export * from './prd-parser.js';
// export * from './task-parser.js';
// export * from './markdown-parser.js';
// Placeholder exports - these will be implemented in later tasks
export interface TaskParser {
parse(content: string): Promise<PlaceholderTask[]>;
validate(content: string): Promise<boolean>;
}
/**
* @deprecated This is a placeholder class that will be properly implemented in later tasks
*/
export class PlaceholderParser implements TaskParser {
async parse(content: string): Promise<PlaceholderTask[]> {
// Simple placeholder parsing logic
const lines = content
.split('\n')
.filter((line) => line.trim().startsWith('-'));
return lines.map((line, index) => ({
id: `task-${index + 1}`,
title: line.trim().replace(/^-\s*/, ''),
status: 'pending' as const,
priority: 'medium' as const
}));
}
async validate(content: string): Promise<boolean> {
return content.trim().length > 0;
}
}
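
A quick sketch of the placeholder parser in use (import path assumed):

```typescript
import { PlaceholderParser } from './parsers/index';

const parser = new PlaceholderParser();
const content = [
	'- Set up npm workspaces',
	'- Extract storage adapters into @tm/core',
	'Lines without a leading dash are ignored'
].join('\n');

// Each "- item" line becomes a pending, medium-priority placeholder task:
// [{ id: 'task-1', title: 'Set up npm workspaces', ... }, { id: 'task-2', ... }]
const tasks = await parser.parse(content);
console.log(tasks.length); // 2
console.log(await parser.validate(content)); // true for any non-empty content
```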

View File

@@ -0,0 +1,444 @@
/**
* @fileoverview Abstract base provider with Template Method pattern for AI providers
* Provides common functionality, error handling, and retry logic
*/
import {
ERROR_CODES,
TaskMasterError
} from '../../errors/task-master-error.js';
import type {
AIOptions,
AIResponse,
IAIProvider
} from '../../interfaces/ai-provider.interface.js';
// Constants for retry logic
const DEFAULT_MAX_RETRIES = 3;
const BASE_RETRY_DELAY_MS = 1000;
const MAX_RETRY_DELAY_MS = 32000;
const BACKOFF_MULTIPLIER = 2;
const JITTER_FACTOR = 0.1;
// Constants for validation
const MIN_PROMPT_LENGTH = 1;
const MAX_PROMPT_LENGTH = 100000;
const MIN_TEMPERATURE = 0;
const MAX_TEMPERATURE = 2;
const MIN_MAX_TOKENS = 1;
const MAX_MAX_TOKENS = 100000;
/**
* Configuration for BaseProvider
*/
export interface BaseProviderConfig {
apiKey: string;
model?: string;
}
/**
* Internal completion result structure
*/
export interface CompletionResult {
content: string;
inputTokens?: number;
outputTokens?: number;
finishReason?: string;
model?: string;
}
/**
* Validation result for input validation
*/
interface ValidationResult {
valid: boolean;
error?: string;
}
/**
* Prepared request after preprocessing
*/
interface PreparedRequest {
prompt: string;
options: AIOptions;
metadata: Record<string, any>;
}
/**
* Abstract base provider implementing Template Method pattern
* Provides common error handling, retry logic, and validation
*/
export abstract class BaseProvider implements IAIProvider {
protected readonly apiKey: string;
protected model: string;
constructor(config: BaseProviderConfig) {
if (!config.apiKey) {
throw new TaskMasterError(
'API key is required',
ERROR_CODES.AUTHENTICATION_ERROR
);
}
this.apiKey = config.apiKey;
this.model = config.model || this.getDefaultModel();
}
/**
* Template method for generating completions
* Handles validation, retries, and error handling
*/
async generateCompletion(
prompt: string,
options?: AIOptions
): Promise<AIResponse> {
// Validate input
const validation = this.validateInput(prompt, options);
if (!validation.valid) {
throw new TaskMasterError(
validation.error || 'Invalid input',
ERROR_CODES.VALIDATION_ERROR
);
}
// Prepare request
const prepared = this.prepareRequest(prompt, options);
// Execute with retry logic
let lastError: Error | undefined;
const maxRetries = this.getMaxRetries();
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
const startTime = Date.now();
const result = await this.generateCompletionInternal(
prepared.prompt,
prepared.options
);
const duration = Date.now() - startTime;
return this.handleResponse(result, duration, prepared);
} catch (error) {
lastError = error as Error;
if (!this.shouldRetry(error, attempt)) {
break;
}
const delay = this.calculateBackoffDelay(attempt);
await this.sleep(delay);
}
}
// All retries failed
this.handleError(lastError || new Error('Unknown error'));
}
/**
* Validate input prompt and options
*/
protected validateInput(
prompt: string,
options?: AIOptions
): ValidationResult {
// Validate prompt
if (!prompt || typeof prompt !== 'string') {
return { valid: false, error: 'Prompt must be a non-empty string' };
}
const trimmedPrompt = prompt.trim();
if (trimmedPrompt.length < MIN_PROMPT_LENGTH) {
return { valid: false, error: 'Prompt cannot be empty' };
}
if (trimmedPrompt.length > MAX_PROMPT_LENGTH) {
return {
valid: false,
error: `Prompt exceeds maximum length of ${MAX_PROMPT_LENGTH} characters`
};
}
// Validate options if provided
if (options) {
const optionValidation = this.validateOptions(options);
if (!optionValidation.valid) {
return optionValidation;
}
}
return { valid: true };
}
/**
* Validate completion options
*/
protected validateOptions(options: AIOptions): ValidationResult {
if (options.temperature !== undefined) {
if (
options.temperature < MIN_TEMPERATURE ||
options.temperature > MAX_TEMPERATURE
) {
return {
valid: false,
error: `Temperature must be between ${MIN_TEMPERATURE} and ${MAX_TEMPERATURE}`
};
}
}
if (options.maxTokens !== undefined) {
if (
options.maxTokens < MIN_MAX_TOKENS ||
options.maxTokens > MAX_MAX_TOKENS
) {
return {
valid: false,
error: `Max tokens must be between ${MIN_MAX_TOKENS} and ${MAX_MAX_TOKENS}`
};
}
}
if (options.topP !== undefined) {
if (options.topP < 0 || options.topP > 1) {
return { valid: false, error: 'Top-p must be between 0 and 1' };
}
}
return { valid: true };
}
/**
* Prepare request for processing
*/
protected prepareRequest(
prompt: string,
options?: AIOptions
): PreparedRequest {
const defaultOptions = this.getDefaultOptions();
const mergedOptions = { ...defaultOptions, ...options };
return {
prompt: prompt.trim(),
options: mergedOptions,
metadata: {
provider: this.getName(),
model: this.model,
timestamp: new Date().toISOString()
}
};
}
/**
* Process and format the response
*/
protected handleResponse(
result: CompletionResult,
duration: number,
request: PreparedRequest
): AIResponse {
const inputTokens =
result.inputTokens || this.calculateTokens(request.prompt);
const outputTokens =
result.outputTokens || this.calculateTokens(result.content);
return {
content: result.content,
inputTokens,
outputTokens,
totalTokens: inputTokens + outputTokens,
model: result.model || this.model,
provider: this.getName(),
timestamp: request.metadata.timestamp,
duration,
finishReason: result.finishReason
};
}
/**
* Handle errors with proper wrapping
*/
protected handleError(error: unknown): never {
if (error instanceof TaskMasterError) {
throw error;
}
const errorMessage = error instanceof Error ? error.message : String(error);
const errorCode = this.getErrorCode(error);
throw new TaskMasterError(
`${this.getName()} provider error: ${errorMessage}`,
errorCode,
{
operation: 'generateCompletion',
resource: this.getName(),
details:
error instanceof Error
? {
name: error.name,
stack: error.stack,
model: this.model
}
: { error: String(error), model: this.model }
},
error instanceof Error ? error : undefined
);
}
/**
* Determine if request should be retried
*/
protected shouldRetry(error: unknown, attempt: number): boolean {
if (attempt >= this.getMaxRetries()) {
return false;
}
return this.isRetryableError(error);
}
/**
* Check if error is retryable
*/
protected isRetryableError(error: unknown): boolean {
if (this.isRateLimitError(error)) return true;
if (this.isTimeoutError(error)) return true;
if (this.isNetworkError(error)) return true;
return false;
}
/**
* Check if error is a rate limit error
*/
protected isRateLimitError(error: unknown): boolean {
if (error instanceof Error) {
const message = error.message.toLowerCase();
return (
message.includes('rate limit') ||
message.includes('too many requests') ||
message.includes('429')
);
}
return false;
}
/**
* Check if error is a timeout error
*/
protected isTimeoutError(error: unknown): boolean {
if (error instanceof Error) {
const message = error.message.toLowerCase();
return (
message.includes('timeout') ||
message.includes('timed out') ||
message.includes('econnreset')
);
}
return false;
}
/**
* Check if error is a network error
*/
protected isNetworkError(error: unknown): boolean {
if (error instanceof Error) {
const message = error.message.toLowerCase();
return (
message.includes('network') ||
message.includes('enotfound') ||
message.includes('econnrefused')
);
}
return false;
}
/**
* Calculate exponential backoff delay with jitter
*/
protected calculateBackoffDelay(attempt: number): number {
const exponentialDelay =
BASE_RETRY_DELAY_MS * BACKOFF_MULTIPLIER ** (attempt - 1);
const clampedDelay = Math.min(exponentialDelay, MAX_RETRY_DELAY_MS);
// Add jitter to prevent thundering herd
const jitter = clampedDelay * JITTER_FACTOR * (Math.random() - 0.5) * 2;
return Math.round(clampedDelay + jitter);
}
/**
* Get error code from error
*/
protected getErrorCode(error: unknown): string {
if (this.isRateLimitError(error)) return ERROR_CODES.API_ERROR;
if (this.isTimeoutError(error)) return ERROR_CODES.NETWORK_ERROR;
if (this.isNetworkError(error)) return ERROR_CODES.NETWORK_ERROR;
if (error instanceof Error && error.message.includes('401')) {
return ERROR_CODES.AUTHENTICATION_ERROR;
}
return ERROR_CODES.PROVIDER_ERROR;
}
/**
* Sleep utility for delays
*/
protected sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
/**
* Get default options for completions
*/
protected getDefaultOptions(): AIOptions {
return {
temperature: 0.7,
maxTokens: 2000,
topP: 1.0
};
}
/**
* Get maximum retry attempts
*/
protected getMaxRetries(): number {
return DEFAULT_MAX_RETRIES;
}
// Public interface methods
getModel(): string {
return this.model;
}
setModel(model: string): void {
this.model = model;
}
// Abstract methods that must be implemented by concrete providers
protected abstract generateCompletionInternal(
prompt: string,
options?: AIOptions
): Promise<CompletionResult>;
abstract calculateTokens(text: string, model?: string): number;
abstract getName(): string;
abstract getDefaultModel(): string;
// IAIProvider methods that must be implemented
abstract generateStreamingCompletion(
prompt: string,
options?: AIOptions
): AsyncIterator<Partial<AIResponse>>;
abstract isAvailable(): Promise<boolean>;
abstract getProviderInfo(): import(
'../../interfaces/ai-provider.interface.js'
).ProviderInfo;
abstract getAvailableModels(): import(
'../../interfaces/ai-provider.interface.js'
).AIModel[];
abstract validateCredentials(): Promise<boolean>;
abstract getUsageStats(): Promise<
| import('../../interfaces/ai-provider.interface.js').ProviderUsageStats
| null
>;
abstract initialize(): Promise<void>;
abstract close(): Promise<void>;
}
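
A sketch of what a concrete provider built on this base class might look like, assuming a hypothetical HTTP completion endpoint. Only the hooks that the `generateCompletion` template method actually calls are fleshed out; the remaining `IAIProvider` members are stubbed, and their return shapes (`ProviderInfo`, `AIModel`, `ProviderUsageStats`) live in `ai-provider.interface.ts`, which is not part of this excerpt. Import paths, the endpoint URL, and the response shape are all assumptions.

```typescript
import { BaseProvider, type CompletionResult } from './providers/ai/base-provider';
import type { AIOptions, AIResponse } from './interfaces/ai-provider.interface';

class ExampleProvider extends BaseProvider {
	getName(): string {
		return 'example';
	}

	getDefaultModel(): string {
		return 'example-model-v1';
	}

	// Rough heuristic (~4 characters per token); a real provider would use its tokenizer.
	calculateTokens(text: string): number {
		return Math.ceil(text.length / 4);
	}

	// Called by generateCompletion() after validation; retries, backoff, and error
	// wrapping are handled by BaseProvider, so this only performs a single request.
	protected async generateCompletionInternal(
		prompt: string,
		options?: AIOptions
	): Promise<CompletionResult> {
		const res = await fetch('https://api.example.invalid/v1/complete', {
			method: 'POST',
			headers: {
				Authorization: `Bearer ${this.apiKey}`,
				'Content-Type': 'application/json'
			},
			body: JSON.stringify({ model: this.model, prompt, ...options })
		});
		if (!res.ok) throw new Error(`HTTP ${res.status}`); // base class decides retryability
		const data = (await res.json()) as { text: string; finish_reason?: string };
		return { content: data.text, finishReason: data.finish_reason, model: this.model };
	}

	// Remaining IAIProvider members, stubbed for brevity.
	generateStreamingCompletion(): AsyncIterator<Partial<AIResponse>> {
		throw new Error('streaming not implemented in this sketch');
	}
	async isAvailable(): Promise<boolean> {
		return true;
	}
	getProviderInfo() {
		return { name: 'example' } as any;
	}
	getAvailableModels() {
		return [] as any[];
	}
	async validateCredentials(): Promise<boolean> {
		return true;
	}
	async getUsageStats() {
		return null;
	}
	async initialize(): Promise<void> {}
	async close(): Promise<void> {}
}

// Usage: validation, retry with exponential backoff, and response shaping
// all come from the base class.
const provider = new ExampleProvider({ apiKey: 'example-key' });
const reply = await provider.generateCompletion('Summarize the migration roadmap.');
console.log(reply.content, reply.totalTokens);
```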

View File

@@ -0,0 +1,14 @@
/**
* @fileoverview Barrel export for AI provider modules
*/
export { BaseProvider } from './base-provider.js';
export type { BaseProviderConfig, CompletionResult } from './base-provider.js';
// Export provider factory when implemented
// export { ProviderFactory } from './provider-factory.js';
// Export concrete providers when implemented
// export { AnthropicProvider } from './adapters/anthropic-provider.js';
// export { OpenAIProvider } from './adapters/openai-provider.js';
// export { GoogleProvider } from './adapters/google-provider.js';

View File

@@ -0,0 +1,9 @@
/**
* @fileoverview Barrel export for provider modules
*/
// Export all from AI module
export * from './ai/index.js';
// Storage providers will be exported here when implemented
// export * from './storage/index.js';

View File

@@ -0,0 +1,6 @@
/**
* Services module exports
* Provides business logic and service layer functionality
*/
export { TaskService } from './task-service';

View File

@@ -0,0 +1,354 @@
/**
* @fileoverview Task Service
* Core service for task operations - handles business logic between storage and API
*/
import type { Task, TaskFilter, TaskStatus } from '../types/index.js';
import type { IStorage } from '../interfaces/storage.interface.js';
import { ConfigManager } from '../config/config-manager.js';
import { StorageFactory } from '../storage/storage-factory.js';
import { TaskEntity } from '../entities/task.entity.js';
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
/**
* Result returned by getTaskList
*/
export interface TaskListResult {
/** The filtered list of tasks */
tasks: Task[];
/** Total number of tasks before filtering */
total: number;
/** Number of tasks after filtering */
filtered: number;
/** The tag these tasks belong to (only present if explicitly provided) */
tag?: string;
/** Storage type being used - includes 'auto' for automatic detection */
storageType: 'file' | 'api' | 'auto';
}
/**
* Options for getTaskList
*/
export interface GetTaskListOptions {
/** Optional tag override (uses active tag from config if not provided) */
tag?: string;
/** Filter criteria */
filter?: TaskFilter;
/** Include subtasks in response */
includeSubtasks?: boolean;
}
/**
* TaskService handles all task-related operations
* This is where business logic lives - it coordinates between ConfigManager and Storage
*/
export class TaskService {
private configManager: ConfigManager;
private storage: IStorage;
private initialized = false;
constructor(configManager: ConfigManager) {
this.configManager = configManager;
// Storage will be created during initialization
this.storage = null as any;
}
/**
* Initialize the service
*/
async initialize(): Promise<void> {
if (this.initialized) return;
// Create storage based on configuration
const storageConfig = this.configManager.getStorageConfig();
const projectRoot = this.configManager.getProjectRoot();
this.storage = StorageFactory.create(
{ storage: storageConfig } as any,
projectRoot
);
// Initialize storage
await this.storage.initialize();
this.initialized = true;
}
/**
* Get list of tasks
* This is the main method that retrieves tasks from storage and applies filters
*/
async getTaskList(options: GetTaskListOptions = {}): Promise<TaskListResult> {
// Determine which tag to use
const activeTag = this.configManager.getActiveTag();
const tag = options.tag || activeTag;
try {
// Load raw tasks from storage - storage only knows about tags
const rawTasks = await this.storage.loadTasks(tag);
// Convert to TaskEntity for business logic operations
const taskEntities = TaskEntity.fromArray(rawTasks);
// Apply filters if provided
let filteredEntities = taskEntities;
if (options.filter) {
filteredEntities = this.applyFilters(taskEntities, options.filter);
}
// Convert back to plain objects
let tasks = filteredEntities.map((entity) => entity.toJSON());
// Handle subtasks option
if (options.includeSubtasks === false) {
tasks = tasks.map((task) => ({
...task,
subtasks: []
}));
}
return {
tasks,
total: rawTasks.length,
filtered: filteredEntities.length,
tag: options.tag, // Only include tag if explicitly provided
storageType: this.configManager.getStorageConfig().type
};
} catch (error) {
throw new TaskMasterError(
'Failed to get task list',
ERROR_CODES.INTERNAL_ERROR,
{
operation: 'getTaskList',
tag,
hasFilter: !!options.filter
},
error as Error
);
}
}
/**
* Get a single task by ID
*/
async getTask(taskId: string, tag?: string): Promise<Task | null> {
const result = await this.getTaskList({
tag,
includeSubtasks: true
});
return result.tasks.find((t) => t.id === taskId) || null;
}
/**
* Get tasks filtered by status
*/
async getTasksByStatus(
status: TaskStatus | TaskStatus[],
tag?: string
): Promise<Task[]> {
const statuses = Array.isArray(status) ? status : [status];
const result = await this.getTaskList({
tag,
filter: { status: statuses }
});
return result.tasks;
}
/**
* Get statistics about tasks
*/
async getTaskStats(tag?: string): Promise<{
total: number;
byStatus: Record<TaskStatus, number>;
withSubtasks: number;
blocked: number;
storageType: 'file' | 'api' | 'auto';
}> {
const result = await this.getTaskList({
tag,
includeSubtasks: true
});
const stats = {
total: result.total,
byStatus: {} as Record<TaskStatus, number>,
withSubtasks: 0,
blocked: 0,
storageType: result.storageType
};
// Initialize all statuses
const allStatuses: TaskStatus[] = [
'pending',
'in-progress',
'done',
'deferred',
'cancelled',
'blocked',
'review'
];
allStatuses.forEach((status) => {
stats.byStatus[status] = 0;
});
// Count tasks
result.tasks.forEach((task) => {
stats.byStatus[task.status]++;
if (task.subtasks && task.subtasks.length > 0) {
stats.withSubtasks++;
}
if (task.status === 'blocked') {
stats.blocked++;
}
});
return stats;
}
/**
* Get next available task to work on
*/
async getNextTask(tag?: string): Promise<Task | null> {
const result = await this.getTaskList({
tag,
filter: {
status: ['pending', 'in-progress']
}
});
// Find tasks with no dependencies or all dependencies satisfied
const completedIds = new Set(
result.tasks.filter((t) => t.status === 'done').map((t) => t.id)
);
const availableTasks = result.tasks.filter((task) => {
if (task.status === 'done' || task.status === 'blocked') {
return false;
}
if (!task.dependencies || task.dependencies.length === 0) {
return true;
}
return task.dependencies.every((depId) =>
completedIds.has(depId.toString())
);
});
// Sort by priority
availableTasks.sort((a, b) => {
const priorityOrder = { critical: 0, high: 1, medium: 2, low: 3 };
const aPriority = priorityOrder[a.priority || 'medium'];
const bPriority = priorityOrder[b.priority || 'medium'];
return aPriority - bPriority;
});
return availableTasks[0] || null;
}
/**
* Apply filters to task entities
*/
private applyFilters(tasks: TaskEntity[], filter: TaskFilter): TaskEntity[] {
return tasks.filter((task) => {
// Status filter
if (filter.status) {
const statuses = Array.isArray(filter.status)
? filter.status
: [filter.status];
if (!statuses.includes(task.status)) {
return false;
}
}
// Priority filter
if (filter.priority) {
const priorities = Array.isArray(filter.priority)
? filter.priority
: [filter.priority];
if (!priorities.includes(task.priority)) {
return false;
}
}
// Tags filter
if (filter.tags && filter.tags.length > 0) {
if (
!task.tags ||
!filter.tags.some((tag) => task.tags?.includes(tag))
) {
return false;
}
}
// Assignee filter
if (filter.assignee) {
if (task.assignee !== filter.assignee) {
return false;
}
}
// Complexity filter
if (filter.complexity) {
const complexities = Array.isArray(filter.complexity)
? filter.complexity
: [filter.complexity];
if (!task.complexity || !complexities.includes(task.complexity)) {
return false;
}
}
// Search filter
if (filter.search) {
const searchLower = filter.search.toLowerCase();
const inTitle = task.title.toLowerCase().includes(searchLower);
const inDescription = task.description
.toLowerCase()
.includes(searchLower);
const inDetails = task.details.toLowerCase().includes(searchLower);
if (!inTitle && !inDescription && !inDetails) {
return false;
}
}
// Has subtasks filter
if (filter.hasSubtasks !== undefined) {
const hasSubtasks = task.subtasks.length > 0;
if (hasSubtasks !== filter.hasSubtasks) {
return false;
}
}
return true;
});
}
/**
* Get current storage type
*/
getStorageType(): 'file' | 'api' | 'auto' {
return this.configManager.getStorageConfig().type;
}
/**
* Get current active tag
*/
getActiveTag(): string {
return this.configManager.getActiveTag();
}
/**
* Set active tag
*/
async setActiveTag(tag: string): Promise<void> {
await this.configManager.setActiveTag(tag);
}
}
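
A usage sketch of the service. The `ConfigManager` construction is defined elsewhere in this changeset, so it is taken as a parameter here; import paths are assumptions.

```typescript
import { TaskService } from './services/task-service';
import type { ConfigManager } from './config/config-manager';

async function printBacklog(configManager: ConfigManager): Promise<void> {
	const service = new TaskService(configManager);
	await service.initialize(); // builds storage via StorageFactory and initializes it

	// Open, high-priority work without subtask payloads.
	const { tasks, filtered, total, storageType } = await service.getTaskList({
		filter: { status: ['pending', 'in-progress'], priority: 'high' },
		includeSubtasks: false
	});
	console.log(`${filtered}/${total} tasks match (storage: ${storageType})`);
	for (const task of tasks) {
		console.log(`${task.id} [${task.status}] ${task.title}`);
	}

	// Next actionable task: not done/blocked, dependencies satisfied, highest priority first.
	const next = await service.getNextTask();
	if (next) console.log(`Next up: ${next.id} - ${next.title}`);

	const stats = await service.getTaskStats();
	console.log(`Done: ${stats.byStatus.done}, blocked: ${stats.blocked}`);
}
```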

View File

@@ -0,0 +1,724 @@
/**
* @fileoverview API-based storage implementation for Hamster integration
* This provides storage via REST API instead of local file system
*/
import type {
IStorage,
StorageStats
} from '../interfaces/storage.interface.js';
import type { Task, TaskMetadata } from '../types/index.js';
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
/**
* API storage configuration
*/
export interface ApiStorageConfig {
/** API endpoint base URL */
endpoint: string;
/** Access token for authentication */
accessToken: string;
/** Optional project ID */
projectId?: string;
/** Request timeout in milliseconds */
timeout?: number;
/** Enable request retries */
enableRetry?: boolean;
/** Maximum retry attempts */
maxRetries?: number;
}
/**
* API response wrapper
*/
interface ApiResponse<T> {
success: boolean;
data?: T;
error?: string;
message?: string;
}
/**
* ApiStorage implementation for Hamster integration
* Fetches and stores tasks via REST API
*/
export class ApiStorage implements IStorage {
private readonly config: Required<ApiStorageConfig>;
private initialized = false;
constructor(config: ApiStorageConfig) {
this.validateConfig(config);
this.config = {
endpoint: config.endpoint.replace(/\/$/, ''), // Remove trailing slash
accessToken: config.accessToken,
projectId: config.projectId || 'default',
timeout: config.timeout || 30000,
enableRetry: config.enableRetry ?? true,
maxRetries: config.maxRetries || 3
};
}
/**
* Validate API storage configuration
*/
private validateConfig(config: ApiStorageConfig): void {
if (!config.endpoint) {
throw new TaskMasterError(
'API endpoint is required for API storage',
ERROR_CODES.MISSING_CONFIGURATION
);
}
if (!config.accessToken) {
throw new TaskMasterError(
'Access token is required for API storage',
ERROR_CODES.MISSING_CONFIGURATION
);
}
// Validate endpoint URL format
try {
new URL(config.endpoint);
} catch {
throw new TaskMasterError(
'Invalid API endpoint URL',
ERROR_CODES.INVALID_INPUT,
{ endpoint: config.endpoint }
);
}
}
/**
* Initialize the API storage
*/
async initialize(): Promise<void> {
if (this.initialized) return;
try {
// Verify API connectivity
await this.verifyConnection();
this.initialized = true;
} catch (error) {
throw new TaskMasterError(
'Failed to initialize API storage',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'initialize' },
error as Error
);
}
}
/**
* Verify API connection
*/
private async verifyConnection(): Promise<void> {
const response = await this.makeRequest<{ status: string }>('/health');
if (!response.success) {
throw new Error(`API health check failed: ${response.error}`);
}
}
/**
* Load tasks from API
*/
async loadTasks(tag?: string): Promise<Task[]> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/tasks?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/tasks`;
const response = await this.makeRequest<{ tasks: Task[] }>(endpoint);
if (!response.success) {
throw new Error(response.error || 'Failed to load tasks');
}
return response.data?.tasks || [];
} catch (error) {
throw new TaskMasterError(
'Failed to load tasks from API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'loadTasks', tag },
error as Error
);
}
}
/**
* Save tasks to API
*/
async saveTasks(tasks: Task[], tag?: string): Promise<void> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/tasks?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/tasks`;
const response = await this.makeRequest(endpoint, 'PUT', { tasks });
if (!response.success) {
throw new Error(response.error || 'Failed to save tasks');
}
} catch (error) {
throw new TaskMasterError(
'Failed to save tasks to API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'saveTasks', tag, taskCount: tasks.length },
error as Error
);
}
}
/**
* Load a single task by ID
*/
async loadTask(taskId: string, tag?: string): Promise<Task | null> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/tasks/${taskId}?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/tasks/${taskId}`;
const response = await this.makeRequest<{ task: Task }>(endpoint);
if (!response.success) {
if (response.error?.includes('not found')) {
return null;
}
throw new Error(response.error || 'Failed to load task');
}
return response.data?.task || null;
} catch (error) {
throw new TaskMasterError(
'Failed to load task from API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'loadTask', taskId, tag },
error as Error
);
}
}
/**
* Save a single task
*/
async saveTask(task: Task, tag?: string): Promise<void> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/tasks/${task.id}?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/tasks/${task.id}`;
const response = await this.makeRequest(endpoint, 'PUT', { task });
if (!response.success) {
throw new Error(response.error || 'Failed to save task');
}
} catch (error) {
throw new TaskMasterError(
'Failed to save task to API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'saveTask', taskId: task.id, tag },
error as Error
);
}
}
/**
* Delete a task
*/
async deleteTask(taskId: string, tag?: string): Promise<void> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/tasks/${taskId}?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/tasks/${taskId}`;
const response = await this.makeRequest(endpoint, 'DELETE');
if (!response.success) {
throw new Error(response.error || 'Failed to delete task');
}
} catch (error) {
throw new TaskMasterError(
'Failed to delete task from API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'deleteTask', taskId, tag },
error as Error
);
}
}
/**
* List available tags
*/
async listTags(): Promise<string[]> {
await this.ensureInitialized();
try {
const response = await this.makeRequest<{ tags: string[] }>(
`/projects/${this.config.projectId}/tags`
);
if (!response.success) {
throw new Error(response.error || 'Failed to list tags');
}
return response.data?.tags || [];
} catch (error) {
throw new TaskMasterError(
'Failed to list tags from API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'listTags' },
error as Error
);
}
}
/**
* Load metadata
*/
async loadMetadata(tag?: string): Promise<TaskMetadata | null> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/metadata?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/metadata`;
const response = await this.makeRequest<{ metadata: TaskMetadata }>(
endpoint
);
if (!response.success) {
return null;
}
return response.data?.metadata || null;
} catch (error) {
throw new TaskMasterError(
'Failed to load metadata from API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'loadMetadata', tag },
error as Error
);
}
}
/**
* Save metadata
*/
async saveMetadata(metadata: TaskMetadata, tag?: string): Promise<void> {
await this.ensureInitialized();
try {
const endpoint = tag
? `/projects/${this.config.projectId}/metadata?tag=${encodeURIComponent(tag)}`
: `/projects/${this.config.projectId}/metadata`;
const response = await this.makeRequest(endpoint, 'PUT', { metadata });
if (!response.success) {
throw new Error(response.error || 'Failed to save metadata');
}
} catch (error) {
throw new TaskMasterError(
'Failed to save metadata to API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'saveMetadata', tag },
error as Error
);
}
}
/**
* Check if storage exists
*/
async exists(): Promise<boolean> {
try {
await this.initialize();
return true;
} catch {
return false;
}
}
/**
* Append tasks to existing storage
*/
async appendTasks(tasks: Task[], tag?: string): Promise<void> {
await this.ensureInitialized();
try {
// First load existing tasks
const existingTasks = await this.loadTasks(tag);
// Append new tasks
const allTasks = [...existingTasks, ...tasks];
// Save all tasks
await this.saveTasks(allTasks, tag);
} catch (error) {
throw new TaskMasterError(
'Failed to append tasks to API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'appendTasks', tag, taskCount: tasks.length },
error as Error
);
}
}
/**
* Update a specific task
*/
async updateTask(
taskId: string,
updates: Partial<Task>,
tag?: string
): Promise<void> {
await this.ensureInitialized();
try {
// Load the task
const task = await this.loadTask(taskId, tag);
if (!task) {
throw new Error(`Task ${taskId} not found`);
}
// Merge updates
const updatedTask = { ...task, ...updates, id: taskId };
// Save updated task
await this.saveTask(updatedTask, tag);
} catch (error) {
throw new TaskMasterError(
'Failed to update task via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'updateTask', taskId, tag },
error as Error
);
}
}
/**
* Get all available tags
*/
async getAllTags(): Promise<string[]> {
return this.listTags();
}
/**
* Delete all tasks for a tag
*/
async deleteTag(tag: string): Promise<void> {
await this.ensureInitialized();
try {
const response = await this.makeRequest(
`/projects/${this.config.projectId}/tags/${encodeURIComponent(tag)}`,
'DELETE'
);
if (!response.success) {
throw new Error(response.error || 'Failed to delete tag');
}
} catch (error) {
throw new TaskMasterError(
'Failed to delete tag via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'deleteTag', tag },
error as Error
);
}
}
/**
* Rename a tag
*/
async renameTag(oldTag: string, newTag: string): Promise<void> {
await this.ensureInitialized();
try {
const response = await this.makeRequest(
`/projects/${this.config.projectId}/tags/${encodeURIComponent(oldTag)}/rename`,
'POST',
{ newTag }
);
if (!response.success) {
throw new Error(response.error || 'Failed to rename tag');
}
} catch (error) {
throw new TaskMasterError(
'Failed to rename tag via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'renameTag', oldTag, newTag },
error as Error
);
}
}
/**
* Copy a tag
*/
async copyTag(sourceTag: string, targetTag: string): Promise<void> {
await this.ensureInitialized();
try {
const response = await this.makeRequest(
`/projects/${this.config.projectId}/tags/${encodeURIComponent(sourceTag)}/copy`,
'POST',
{ targetTag }
);
if (!response.success) {
throw new Error(response.error || 'Failed to copy tag');
}
} catch (error) {
throw new TaskMasterError(
'Failed to copy tag via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'copyTag', sourceTag, targetTag },
error as Error
);
}
}
/**
* Get storage statistics
*/
async getStats(): Promise<StorageStats> {
await this.ensureInitialized();
try {
const response = await this.makeRequest<{
stats: StorageStats;
}>(`/projects/${this.config.projectId}/stats`);
if (!response.success) {
throw new Error(response.error || 'Failed to get stats');
}
// Return stats or default values
return (
response.data?.stats || {
totalTasks: 0,
totalTags: 0,
storageSize: 0,
lastModified: new Date().toISOString(),
tagStats: []
}
);
} catch (error) {
throw new TaskMasterError(
'Failed to get stats from API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'getStats' },
error as Error
);
}
}
/**
* Create backup
*/
async backup(): Promise<string> {
await this.ensureInitialized();
try {
const response = await this.makeRequest<{ backupId: string }>(
`/projects/${this.config.projectId}/backup`,
'POST'
);
if (!response.success) {
throw new Error(response.error || 'Failed to create backup');
}
return response.data?.backupId || 'unknown';
} catch (error) {
throw new TaskMasterError(
'Failed to create backup via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'backup' },
error as Error
);
}
}
/**
* Restore from backup
*/
async restore(backupPath: string): Promise<void> {
await this.ensureInitialized();
try {
const response = await this.makeRequest(
`/projects/${this.config.projectId}/restore`,
'POST',
{ backupId: backupPath }
);
if (!response.success) {
throw new Error(response.error || 'Failed to restore backup');
}
} catch (error) {
throw new TaskMasterError(
'Failed to restore backup via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'restore', backupPath },
error as Error
);
}
}
/**
* Clear all data
*/
async clear(): Promise<void> {
await this.ensureInitialized();
try {
const response = await this.makeRequest(
`/projects/${this.config.projectId}/clear`,
'POST'
);
if (!response.success) {
throw new Error(response.error || 'Failed to clear data');
}
} catch (error) {
throw new TaskMasterError(
'Failed to clear data via API',
ERROR_CODES.STORAGE_ERROR,
{ operation: 'clear' },
error as Error
);
}
}
/**
* Close connection
*/
async close(): Promise<void> {
this.initialized = false;
}
/**
* Ensure storage is initialized
*/
private async ensureInitialized(): Promise<void> {
if (!this.initialized) {
await this.initialize();
}
}
/**
* Make HTTP request to API
*/
private async makeRequest<T>(
path: string,
method: 'GET' | 'POST' | 'PUT' | 'DELETE' = 'GET',
body?: unknown
): Promise<ApiResponse<T>> {
const url = `${this.config.endpoint}${path}`;
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), this.config.timeout);
try {
const options: RequestInit = {
method,
headers: {
Authorization: `Bearer ${this.config.accessToken}`,
'Content-Type': 'application/json',
Accept: 'application/json'
},
signal: controller.signal
};
if (body && (method === 'POST' || method === 'PUT')) {
options.body = JSON.stringify(body);
}
let lastError: Error | null = null;
let attempt = 0;
while (attempt < this.config.maxRetries) {
attempt++;
try {
const response = await fetch(url, options);
const data = await response.json();
if (response.ok) {
return { success: true, data: data as T };
}
// Handle specific error codes
if (response.status === 401) {
return {
success: false,
error: 'Authentication failed - check access token'
};
}
if (response.status === 404) {
return {
success: false,
error: 'Resource not found'
};
}
if (response.status === 429) {
// Rate limited - retry with backoff
if (this.config.enableRetry && attempt < this.config.maxRetries) {
await this.delay(Math.pow(2, attempt) * 1000);
continue;
}
}
const errorData = data as any;
return {
success: false,
error:
errorData.error ||
errorData.message ||
`HTTP ${response.status}: ${response.statusText}`
};
} catch (error) {
lastError = error as Error;
// Retry on network errors
if (this.config.enableRetry && attempt < this.config.maxRetries) {
await this.delay(Math.pow(2, attempt) * 1000);
continue;
}
// Retries disabled or exhausted: stop here instead of looping again without a delay
break;
}
}
// All retries exhausted
return {
success: false,
error: lastError?.message || 'Request failed after retries'
};
} finally {
clearTimeout(timeoutId);
}
}
/**
* Delay helper for retries
*/
private delay(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
}
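
A usage sketch; the endpoint, token, and project ID are placeholders for the values that would normally come from `StorageSettings` (`apiEndpoint` / `apiAccessToken`) via the config layer, and the import path is an assumption.

```typescript
import { ApiStorage } from './storage/api-storage';

const storage = new ApiStorage({
	endpoint: 'https://hamster.example.invalid/api/v1', // placeholder endpoint
	accessToken: '<access-token>', // placeholder token
	projectId: 'demo-project',
	timeout: 15000,
	maxRetries: 2
});

await storage.initialize(); // verifies GET /health before first use
const tasks = await storage.loadTasks('master'); // GET /projects/demo-project/tasks?tag=master
console.log(`loaded ${tasks.length} tasks`);
await storage.updateTask('42', { status: 'in-progress' }, 'master'); // load + merge + PUT
await storage.close();
```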

View File

@@ -0,0 +1,170 @@
/**
* @fileoverview File operations with atomic writes and locking
*/
import { promises as fs } from 'node:fs';
import type { FileStorageData } from './format-handler.js';
/**
* Handles atomic file operations with locking mechanism
*/
export class FileOperations {
private fileLocks: Map<string, Promise<void>> = new Map();
/**
* Read and parse JSON file
*/
async readJson(filePath: string): Promise<any> {
try {
const content = await fs.readFile(filePath, 'utf-8');
return JSON.parse(content);
} catch (error: any) {
if (error.code === 'ENOENT') {
throw error; // Re-throw ENOENT for caller to handle
}
if (error instanceof SyntaxError) {
throw new Error(`Invalid JSON in file ${filePath}: ${error.message}`);
}
throw new Error(`Failed to read file ${filePath}: ${error.message}`);
}
}
/**
* Write JSON file with atomic operation and locking
*/
async writeJson(
filePath: string,
data: FileStorageData | any
): Promise<void> {
// Use file locking to prevent concurrent writes
const lockKey = filePath;
const existingLock = this.fileLocks.get(lockKey);
if (existingLock) {
await existingLock;
}
const lockPromise = this.performAtomicWrite(filePath, data);
this.fileLocks.set(lockKey, lockPromise);
try {
await lockPromise;
} finally {
this.fileLocks.delete(lockKey);
}
}
/**
* Perform atomic write operation using temporary file
*/
private async performAtomicWrite(filePath: string, data: any): Promise<void> {
const tempPath = `${filePath}.tmp`;
try {
// Write to temp file first
const content = JSON.stringify(data, null, 2);
await fs.writeFile(tempPath, content, 'utf-8');
// Atomic rename
await fs.rename(tempPath, filePath);
} catch (error: any) {
// Clean up temp file if it exists
try {
await fs.unlink(tempPath);
} catch {
// Ignore cleanup errors
}
throw new Error(`Failed to write file ${filePath}: ${error.message}`);
}
}
/**
* Check if file exists
*/
async exists(filePath: string): Promise<boolean> {
try {
await fs.access(filePath, fs.constants.F_OK);
return true;
} catch {
return false;
}
}
/**
* Get file stats
*/
async getStats(filePath: string) {
return fs.stat(filePath);
}
/**
* Read directory contents
*/
async readDir(dirPath: string): Promise<string[]> {
return fs.readdir(dirPath);
}
/**
* Create directory recursively
*/
async ensureDir(dirPath: string): Promise<void> {
try {
await fs.mkdir(dirPath, { recursive: true });
} catch (error: any) {
throw new Error(
`Failed to create directory ${dirPath}: ${error.message}`
);
}
}
/**
* Delete file
*/
async deleteFile(filePath: string): Promise<void> {
try {
await fs.unlink(filePath);
} catch (error: any) {
if (error.code !== 'ENOENT') {
throw new Error(`Failed to delete file ${filePath}: ${error.message}`);
}
}
}
/**
* Rename/move file
*/
async moveFile(oldPath: string, newPath: string): Promise<void> {
try {
await fs.rename(oldPath, newPath);
} catch (error: any) {
throw new Error(
`Failed to move file from ${oldPath} to ${newPath}: ${error.message}`
);
}
}
/**
* Copy file
*/
async copyFile(srcPath: string, destPath: string): Promise<void> {
try {
await fs.copyFile(srcPath, destPath);
} catch (error: any) {
throw new Error(
`Failed to copy file from ${srcPath} to ${destPath}: ${error.message}`
);
}
}
/**
* Clean up all pending file operations
*/
async cleanup(): Promise<void> {
const locks = Array.from(this.fileLocks.values());
if (locks.length > 0) {
await Promise.all(locks);
}
this.fileLocks.clear();
}
}
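
A usage sketch of the locking and atomic-write behaviour. Paths are illustrative; the real layout is decided by `PathResolver`, which is not shown in this excerpt.

```typescript
import { FileOperations } from './storage/file-storage/file-operations';

const ops = new FileOperations();
await ops.ensureDir('.taskmaster/tasks'); // illustrative directory

// Concurrent writes to the same path are serialized by the per-path lock,
// and each write goes through a .tmp file followed by an atomic rename.
await Promise.all([
	ops.writeJson('.taskmaster/tasks/tasks.json', { master: { tasks: [], metadata: null } }),
	ops.writeJson('.taskmaster/tasks/tasks.json', {
		master: { tasks: [{ id: '1', title: 'Example' }], metadata: null }
	})
]);

const data = await ops.readJson('.taskmaster/tasks/tasks.json');
console.log(Object.keys(data)); // ['master']
await ops.cleanup(); // waits for any pending writes and clears the lock map
```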

View File

@@ -0,0 +1,384 @@
/**
* @fileoverview Refactored file-based storage implementation for Task Master
*/
import type { Task, TaskMetadata } from '../../types/index.js';
import type {
IStorage,
StorageStats
} from '../../interfaces/storage.interface.js';
import { FormatHandler } from './format-handler.js';
import { FileOperations } from './file-operations.js';
import { PathResolver } from './path-resolver.js';
/**
* File-based storage implementation using a single tasks.json file with separated concerns
*/
export class FileStorage implements IStorage {
private formatHandler: FormatHandler;
private fileOps: FileOperations;
private pathResolver: PathResolver;
constructor(projectPath: string) {
this.formatHandler = new FormatHandler();
this.fileOps = new FileOperations();
this.pathResolver = new PathResolver(projectPath);
}
/**
* Initialize storage by creating necessary directories
*/
async initialize(): Promise<void> {
await this.fileOps.ensureDir(this.pathResolver.getTasksDir());
}
/**
* Close storage and cleanup resources
*/
async close(): Promise<void> {
await this.fileOps.cleanup();
}
/**
* Get statistics about the storage
*/
async getStats(): Promise<StorageStats> {
const filePath = this.pathResolver.getTasksPath();
try {
const stats = await this.fileOps.getStats(filePath);
const data = await this.fileOps.readJson(filePath);
const tags = this.formatHandler.extractTags(data);
let totalTasks = 0;
const tagStats = tags.map((tag) => {
const tasks = this.formatHandler.extractTasks(data, tag);
const taskCount = tasks.length;
totalTasks += taskCount;
return {
tag,
taskCount,
lastModified: stats.mtime.toISOString()
};
});
return {
totalTasks,
totalTags: tags.length,
lastModified: stats.mtime.toISOString(),
storageSize: 0, // Could calculate actual file sizes if needed
tagStats
};
} catch (error: any) {
if (error.code === 'ENOENT') {
return {
totalTasks: 0,
totalTags: 0,
lastModified: new Date().toISOString(),
storageSize: 0,
tagStats: []
};
}
throw new Error(`Failed to get storage stats: ${error.message}`);
}
}
/**
* Load tasks from the single tasks.json file for a specific tag
*/
async loadTasks(tag?: string): Promise<Task[]> {
const filePath = this.pathResolver.getTasksPath();
const resolvedTag = tag || 'master';
try {
const rawData = await this.fileOps.readJson(filePath);
return this.formatHandler.extractTasks(rawData, resolvedTag);
} catch (error: any) {
if (error.code === 'ENOENT') {
return []; // File doesn't exist, return empty array
}
throw new Error(`Failed to load tasks: ${error.message}`);
}
}
/**
* Save tasks for a specific tag in the single tasks.json file
*/
async saveTasks(tasks: Task[], tag?: string): Promise<void> {
const filePath = this.pathResolver.getTasksPath();
const resolvedTag = tag || 'master';
// Ensure directory exists
await this.fileOps.ensureDir(this.pathResolver.getTasksDir());
// Get existing data from the file
let existingData: any = {};
try {
existingData = await this.fileOps.readJson(filePath);
} catch (error: any) {
if (error.code !== 'ENOENT') {
throw new Error(`Failed to read existing tasks: ${error.message}`);
}
// File doesn't exist, start with empty data
}
// Create metadata for this tag
const metadata: TaskMetadata = {
version: '1.0.0',
lastModified: new Date().toISOString(),
taskCount: tasks.length,
completedCount: tasks.filter((t) => t.status === 'done').length,
tags: [resolvedTag]
};
// Normalize tasks
const normalizedTasks = this.normalizeTaskIds(tasks);
// Update the specific tag in the existing data structure
if (
this.formatHandler.detectFormat(existingData) === 'legacy' ||
Object.keys(existingData).some(
(key) => key !== 'tasks' && key !== 'metadata'
)
) {
// Legacy format - update/add the tag
existingData[resolvedTag] = {
tasks: normalizedTasks,
metadata
};
} else if (resolvedTag === 'master') {
// Standard format for master tag
existingData = {
tasks: normalizedTasks,
metadata
};
} else {
// Convert to legacy format when adding non-master tags
const masterTasks = existingData.tasks || [];
const masterMetadata = existingData.metadata || metadata;
existingData = {
master: {
tasks: masterTasks,
metadata: masterMetadata
},
[resolvedTag]: {
tasks: normalizedTasks,
metadata
}
};
}
// Write the updated file
await this.fileOps.writeJson(filePath, existingData);
}
/**
* Normalize task IDs - keep Task IDs as strings, Subtask IDs as numbers
*/
private normalizeTaskIds(tasks: Task[]): Task[] {
return tasks.map((task) => ({
...task,
id: String(task.id), // Task IDs are strings
dependencies: task.dependencies?.map((dep) => String(dep)) || [],
subtasks:
task.subtasks?.map((subtask) => ({
...subtask,
id: Number(subtask.id), // Subtask IDs are numbers
parentId: String(subtask.parentId) // Parent ID is string (Task ID)
})) || []
}));
}
/**
* Check if the tasks file exists
*/
async exists(_tag?: string): Promise<boolean> {
const filePath = this.pathResolver.getTasksPath();
return this.fileOps.exists(filePath);
}
/**
* Get all available tags from the single tasks.json file
*/
async getAllTags(): Promise<string[]> {
try {
const filePath = this.pathResolver.getTasksPath();
const data = await this.fileOps.readJson(filePath);
return this.formatHandler.extractTags(data);
} catch (error: any) {
if (error.code === 'ENOENT') {
return []; // File doesn't exist
}
throw new Error(`Failed to get tags: ${error.message}`);
}
}
/**
* Load metadata from the single tasks.json file for a specific tag
*/
async loadMetadata(tag?: string): Promise<TaskMetadata | null> {
const filePath = this.pathResolver.getTasksPath();
const resolvedTag = tag || 'master';
try {
const rawData = await this.fileOps.readJson(filePath);
return this.formatHandler.extractMetadata(rawData, resolvedTag);
} catch (error: any) {
if (error.code === 'ENOENT') {
return null;
}
throw new Error(`Failed to load metadata: ${error.message}`);
}
}
/**
* Save metadata (stored with tasks)
*/
async saveMetadata(_metadata: TaskMetadata, tag?: string): Promise<void> {
const tasks = await this.loadTasks(tag);
await this.saveTasks(tasks, tag);
}
/**
* Append tasks to existing storage
*/
async appendTasks(tasks: Task[], tag?: string): Promise<void> {
const existingTasks = await this.loadTasks(tag);
const allTasks = [...existingTasks, ...tasks];
await this.saveTasks(allTasks, tag);
}
/**
* Update a specific task
*/
async updateTask(
taskId: string,
updates: Partial<Task>,
tag?: string
): Promise<void> {
const tasks = await this.loadTasks(tag);
const taskIndex = tasks.findIndex((t) => t.id === taskId.toString());
if (taskIndex === -1) {
throw new Error(`Task ${taskId} not found`);
}
tasks[taskIndex] = {
...tasks[taskIndex],
...updates,
id: taskId.toString()
};
await this.saveTasks(tasks, tag);
}
/**
* Delete a task
*/
async deleteTask(taskId: string, tag?: string): Promise<void> {
const tasks = await this.loadTasks(tag);
const filteredTasks = tasks.filter((t) => t.id !== taskId);
if (filteredTasks.length === tasks.length) {
throw new Error(`Task ${taskId} not found`);
}
await this.saveTasks(filteredTasks, tag);
}
/**
* Delete a tag from the single tasks.json file
*/
async deleteTag(tag: string): Promise<void> {
const filePath = this.pathResolver.getTasksPath();
try {
const existingData = await this.fileOps.readJson(filePath);
if (this.formatHandler.detectFormat(existingData) === 'legacy') {
// Legacy format - remove the tag key
if (tag in existingData) {
delete existingData[tag];
await this.fileOps.writeJson(filePath, existingData);
} else {
throw new Error(`Tag ${tag} not found`);
}
} else if (tag === 'master') {
// Standard format - delete the entire file for master tag
await this.fileOps.deleteFile(filePath);
} else {
throw new Error(`Tag ${tag} not found in standard format`);
}
} catch (error: any) {
if (error.code === 'ENOENT') {
throw new Error(`Tag ${tag} not found - file doesn't exist`);
}
throw error;
}
}
/**
* Rename a tag within the single tasks.json file
*/
async renameTag(oldTag: string, newTag: string): Promise<void> {
const filePath = this.pathResolver.getTasksPath();
try {
const existingData = await this.fileOps.readJson(filePath);
if (this.formatHandler.detectFormat(existingData) === 'legacy') {
// Legacy format - rename the tag key
if (oldTag in existingData) {
existingData[newTag] = existingData[oldTag];
delete existingData[oldTag];
// Update metadata tags array
if (existingData[newTag].metadata) {
existingData[newTag].metadata.tags = [newTag];
}
await this.fileOps.writeJson(filePath, existingData);
} else {
throw new Error(`Tag ${oldTag} not found`);
}
} else if (oldTag === 'master') {
// Convert standard format to legacy when renaming master
const masterTasks = existingData.tasks || [];
const masterMetadata = existingData.metadata || {};
const newData = {
[newTag]: {
tasks: masterTasks,
metadata: { ...masterMetadata, tags: [newTag] }
}
};
await this.fileOps.writeJson(filePath, newData);
} else {
throw new Error(`Tag ${oldTag} not found in standard format`);
}
} catch (error: any) {
if (error.code === 'ENOENT') {
throw new Error(`Tag ${oldTag} not found - file doesn't exist`);
}
throw error;
}
}
/**
* Copy a tag within the single tasks.json file
*/
async copyTag(sourceTag: string, targetTag: string): Promise<void> {
const tasks = await this.loadTasks(sourceTag);
if (tasks.length === 0) {
throw new Error(`Source tag ${sourceTag} not found or has no tasks`);
}
await this.saveTasks(tasks, targetTag);
}
}
// Export as default for convenience
export default FileStorage;
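
A hedged sketch of the tag-aware read/write flow; the project path, tag name, and import paths are illustrative. All tags share the single tasks.json file.

```typescript
import type { Task } from '../../types/index.js';
import { FileStorage } from './file-storage/index.js';

const storage = new FileStorage('/path/to/project');
await storage.initialize(); // creates .taskmaster/tasks/ if missing

const tasks: Task[] = []; // tasks built elsewhere

// Saving a non-master tag converts the file to the legacy tagged layout
await storage.saveTasks(tasks, 'feature-branch');

const featureTasks = await storage.loadTasks('feature-branch');
const tags = await storage.getAllTags(); // e.g. ['master', 'feature-branch']

await storage.close();
```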

View File

@@ -0,0 +1,248 @@
/**
* @fileoverview Format handler for task storage files
*/
import type { Task, TaskMetadata } from '../../types/index.js';
export interface FileStorageData {
tasks: Task[];
metadata: TaskMetadata;
}
export type FileFormat = 'legacy' | 'standard';
/**
* Handles format detection and conversion between legacy and standard task file formats
*/
export class FormatHandler {
/**
* Detect the format of the raw data
*/
detectFormat(data: any): FileFormat {
if (!data || typeof data !== 'object') {
return 'standard';
}
const keys = Object.keys(data);
// Check if this uses the legacy format with tag keys
// Legacy format has keys that are not 'tasks' or 'metadata'
const hasLegacyFormat = keys.some(
(key) => key !== 'tasks' && key !== 'metadata'
);
return hasLegacyFormat ? 'legacy' : 'standard';
}
/**
* Extract tasks from data for a specific tag
*/
extractTasks(data: any, tag: string): Task[] {
if (!data) {
return [];
}
const format = this.detectFormat(data);
if (format === 'legacy') {
return this.extractTasksFromLegacy(data, tag);
}
return this.extractTasksFromStandard(data);
}
/**
* Extract tasks from legacy format
*/
private extractTasksFromLegacy(data: any, tag: string): Task[] {
// First check if the requested tag exists
if (tag in data) {
const tagData = data[tag];
return tagData?.tasks || [];
}
// If we're looking for 'master' tag but it doesn't exist, try the first available tag
const availableKeys = Object.keys(data).filter(
(key) => key !== 'tasks' && key !== 'metadata'
);
if (tag === 'master' && availableKeys.length > 0) {
const firstTag = availableKeys[0];
const tagData = data[firstTag];
return tagData?.tasks || [];
}
return [];
}
/**
* Extract tasks from standard format
*/
private extractTasksFromStandard(data: any): Task[] {
return data?.tasks || [];
}
/**
* Extract metadata from data for a specific tag
*/
extractMetadata(data: any, tag: string): TaskMetadata | null {
if (!data) {
return null;
}
const format = this.detectFormat(data);
if (format === 'legacy') {
return this.extractMetadataFromLegacy(data, tag);
}
return this.extractMetadataFromStandard(data);
}
/**
* Extract metadata from legacy format
*/
private extractMetadataFromLegacy(
data: any,
tag: string
): TaskMetadata | null {
if (tag in data) {
const tagData = data[tag];
// Generate metadata if not present in legacy format
if (!tagData?.metadata && tagData?.tasks) {
return this.generateMetadataFromTasks(tagData.tasks, tag);
}
return tagData?.metadata || null;
}
// If we're looking for 'master' tag but it doesn't exist, try the first available tag
const availableKeys = Object.keys(data).filter(
(key) => key !== 'tasks' && key !== 'metadata'
);
if (tag === 'master' && availableKeys.length > 0) {
const firstTag = availableKeys[0];
const tagData = data[firstTag];
if (!tagData?.metadata && tagData?.tasks) {
return this.generateMetadataFromTasks(tagData.tasks, firstTag);
}
return tagData?.metadata || null;
}
return null;
}
/**
* Extract metadata from standard format
*/
private extractMetadataFromStandard(data: any): TaskMetadata | null {
return data?.metadata || null;
}
/**
* Extract all available tags from the single tasks.json file
*/
extractTags(data: any): string[] {
if (!data) {
return [];
}
const format = this.detectFormat(data);
if (format === 'legacy') {
// Return all tag keys from legacy format
const keys = Object.keys(data);
return keys.filter((key) => key !== 'tasks' && key !== 'metadata');
}
// Standard format - just has 'master' tag
return ['master'];
}
/**
* Convert tasks and metadata to the appropriate format for saving
*/
convertToSaveFormat(
tasks: Task[],
metadata: TaskMetadata,
existingData: any,
tag: string
): any {
const resolvedTag = tag || 'master';
// Normalize task IDs to strings
const normalizedTasks = this.normalizeTasks(tasks);
// Check if existing file uses legacy format
if (existingData && this.detectFormat(existingData) === 'legacy') {
return this.convertToLegacyFormat(normalizedTasks, metadata, resolvedTag);
}
// Use standard format for new files
return this.convertToStandardFormat(normalizedTasks, metadata, tag);
}
/**
* Convert to legacy format
*/
private convertToLegacyFormat(
tasks: Task[],
metadata: TaskMetadata,
tag: string
): any {
return {
[tag]: {
tasks,
metadata: {
...metadata,
tags: [tag]
}
}
};
}
/**
* Convert to standard format
*/
private convertToStandardFormat(
tasks: Task[],
metadata: TaskMetadata,
tag?: string
): FileStorageData {
return {
tasks,
metadata: {
...metadata,
tags: tag ? [tag] : []
}
};
}
/**
* Normalize task IDs - keep Task IDs as strings, Subtask IDs as numbers
*/
private normalizeTasks(tasks: Task[]): Task[] {
return tasks.map((task) => ({
...task,
id: String(task.id), // Task IDs are strings
dependencies: task.dependencies?.map((dep) => String(dep)) || [],
subtasks:
task.subtasks?.map((subtask) => ({
...subtask,
id: Number(subtask.id), // Subtask IDs are numbers
parentId: String(subtask.parentId) // Parent ID is string (Task ID)
})) || []
}));
}
/**
* Generate metadata from tasks when not present
*/
private generateMetadataFromTasks(tasks: Task[], tag: string): TaskMetadata {
return {
version: '1.0.0',
lastModified: new Date().toISOString(),
taskCount: tasks.length,
completedCount: tasks.filter((t: any) => t.status === 'done').length,
tags: [tag]
};
}
}
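
Illustrative data shapes for the two layouts FormatHandler distinguishes; the tag names and values are made up.

```typescript
import { FormatHandler } from './format-handler.js';

const handler = new FormatHandler();

// Standard layout: only 'tasks' and 'metadata' at the top level
const standard = { tasks: [], metadata: { version: '1.0.0' } };
handler.detectFormat(standard); // 'standard'
handler.extractTags(standard); // ['master']

// Legacy layout: tag names are the top-level keys
const legacy = {
	master: { tasks: [] },
	'feature-branch': { tasks: [] }
};
handler.detectFormat(legacy); // 'legacy'
handler.extractTags(legacy); // ['master', 'feature-branch']
handler.extractTasks(legacy, 'feature-branch'); // that tag's task array
```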

View File

@@ -0,0 +1,14 @@
/**
* @fileoverview Exports for file storage components
*/
export {
FormatHandler,
type FileStorageData,
type FileFormat
} from './format-handler.js';
export { FileOperations } from './file-operations.js';
export { PathResolver } from './path-resolver.js';
// Main FileStorage class - primary export
export { FileStorage as default, FileStorage } from './file-storage.js';

View File

@@ -0,0 +1,42 @@
/**
* @fileoverview Path resolution utilities for single tasks.json file
*/
import path from 'node:path';
/**
* Handles path resolution for the single tasks.json file storage
*/
export class PathResolver {
private readonly basePath: string;
private readonly tasksDir: string;
private readonly tasksFilePath: string;
constructor(projectPath: string) {
this.basePath = path.join(projectPath, '.taskmaster');
this.tasksDir = path.join(this.basePath, 'tasks');
this.tasksFilePath = path.join(this.tasksDir, 'tasks.json');
}
/**
* Get the base storage directory path
*/
getBasePath(): string {
return this.basePath;
}
/**
* Get the tasks directory path
*/
getTasksDir(): string {
return this.tasksDir;
}
/**
* Get the path to the single tasks.json file
* All tags are stored in this one file
*/
getTasksPath(): string {
return this.tasksFilePath;
}
}
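
For reference, the layout PathResolver produces for an illustrative project path:

```typescript
import { PathResolver } from './path-resolver.js';

const paths = new PathResolver('/home/user/my-project');

paths.getBasePath(); // '/home/user/my-project/.taskmaster'
paths.getTasksDir(); // '/home/user/my-project/.taskmaster/tasks'
paths.getTasksPath(); // '/home/user/my-project/.taskmaster/tasks/tasks.json'
```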

View File

@@ -0,0 +1,46 @@
/**
* @fileoverview Storage layer for the tm-core package
* This file exports all storage-related classes and interfaces
*/
// Export storage implementations
export { FileStorage } from './file-storage/index.js';
export { ApiStorage, type ApiStorageConfig } from './api-storage.js';
export { StorageFactory } from './storage-factory.js';
// Export storage interface and types
export type {
IStorage,
StorageStats
} from '../interfaces/storage.interface.js';
// Placeholder exports - these will be implemented in later tasks
export interface StorageAdapter {
read(path: string): Promise<string | null>;
write(path: string, data: string): Promise<void>;
exists(path: string): Promise<boolean>;
delete(path: string): Promise<void>;
}
/**
* @deprecated This is a placeholder class that will be properly implemented in later tasks
*/
export class PlaceholderStorage implements StorageAdapter {
private data = new Map<string, string>();
async read(path: string): Promise<string | null> {
return this.data.get(path) || null;
}
async write(path: string, data: string): Promise<void> {
this.data.set(path, data);
}
async exists(path: string): Promise<boolean> {
return this.data.has(path);
}
async delete(path: string): Promise<void> {
this.data.delete(path);
}
}

View File

@@ -0,0 +1,235 @@
/**
* @fileoverview Storage factory for creating appropriate storage implementations
*/
import type { IStorage } from '../interfaces/storage.interface.js';
import type { IConfiguration } from '../interfaces/configuration.interface.js';
import { FileStorage } from './file-storage/index.js';
import { ApiStorage } from './api-storage.js';
import { ERROR_CODES, TaskMasterError } from '../errors/task-master-error.js';
import { AuthManager } from '../auth/auth-manager.js';
import { getLogger } from '../logger/index.js';
/**
* Factory for creating storage implementations based on configuration
*/
export class StorageFactory {
/**
* Create a storage implementation based on configuration
* @param config - Configuration object
* @param projectPath - Project root path (for file storage)
* @returns Storage implementation
*/
static create(
config: Partial<IConfiguration>,
projectPath: string
): IStorage {
const storageType = config.storage?.type || 'auto';
const logger = getLogger('StorageFactory');
switch (storageType) {
case 'file':
logger.debug('📁 Using local file storage');
return StorageFactory.createFileStorage(projectPath, config);
case 'api':
if (!StorageFactory.isHamsterAvailable(config)) {
// Check if authenticated via AuthManager
const authManager = AuthManager.getInstance();
if (!authManager.isAuthenticated()) {
throw new TaskMasterError(
'API storage configured but not authenticated. Run: tm auth login',
ERROR_CODES.MISSING_CONFIGURATION,
{ storageType: 'api' }
);
}
// Use auth token from AuthManager
const credentials = authManager.getCredentials();
if (credentials) {
// Merge with existing storage config, ensuring required fields
config.storage = {
...config.storage,
type: 'api' as const,
apiAccessToken: credentials.token,
apiEndpoint:
config.storage?.apiEndpoint ||
process.env.HAMSTER_API_URL ||
'https://tryhamster.com/api'
} as any; // Cast to any to bypass strict type checking for partial config
}
}
logger.info('☁️ Using API storage');
return StorageFactory.createApiStorage(config);
case 'auto':
// Auto-detect based on authentication status
const authManager = AuthManager.getInstance();
// First check if API credentials are explicitly configured
if (StorageFactory.isHamsterAvailable(config)) {
logger.info('☁️ Using API storage (configured)');
return StorageFactory.createApiStorage(config);
}
// Then check if authenticated via AuthManager
if (authManager.isAuthenticated()) {
const credentials = authManager.getCredentials();
if (credentials) {
// Configure API storage with auth credentials
config.storage = {
...config.storage,
type: 'api' as const,
apiAccessToken: credentials.token,
apiEndpoint:
config.storage?.apiEndpoint ||
process.env.HAMSTER_API_URL ||
'https://tryhamster.com/api'
} as any; // Cast to any to bypass strict type checking for partial config
logger.info('☁️ Using API storage (authenticated)');
return StorageFactory.createApiStorage(config);
}
}
// Default to file storage
logger.debug('📁 Using local file storage');
return StorageFactory.createFileStorage(projectPath, config);
default:
throw new TaskMasterError(
`Unknown storage type: ${storageType}`,
ERROR_CODES.INVALID_INPUT,
{ storageType }
);
}
}
/**
* Create file storage implementation
*/
private static createFileStorage(
projectPath: string,
config: Partial<IConfiguration>
): FileStorage {
const basePath = config.storage?.basePath || projectPath;
return new FileStorage(basePath);
}
/**
* Create API storage implementation
*/
private static createApiStorage(config: Partial<IConfiguration>): ApiStorage {
const { apiEndpoint, apiAccessToken } = config.storage || {};
if (!apiEndpoint) {
throw new TaskMasterError(
'API endpoint is required for API storage',
ERROR_CODES.MISSING_CONFIGURATION,
{ storageType: 'api' }
);
}
if (!apiAccessToken) {
throw new TaskMasterError(
'API access token is required for API storage',
ERROR_CODES.MISSING_CONFIGURATION,
{ storageType: 'api' }
);
}
return new ApiStorage({
endpoint: apiEndpoint,
accessToken: apiAccessToken,
projectId: config.projectPath,
timeout: config.retry?.requestTimeout,
enableRetry: config.retry?.retryOnNetworkError,
maxRetries: config.retry?.retryAttempts
});
}
/**
* Detect optimal storage type based on available configuration
*/
static detectOptimalStorage(config: Partial<IConfiguration>): 'file' | 'api' {
// If API credentials are provided, prefer API storage (Hamster)
if (config.storage?.apiEndpoint && config.storage?.apiAccessToken) {
return 'api';
}
// Default to file storage
return 'file';
}
/**
* Validate storage configuration
*/
static validateStorageConfig(config: Partial<IConfiguration>): {
isValid: boolean;
errors: string[];
} {
const errors: string[] = [];
const storageType = config.storage?.type;
if (!storageType) {
errors.push('Storage type is not specified');
return { isValid: false, errors };
}
switch (storageType) {
case 'api':
if (!config.storage?.apiEndpoint) {
errors.push('API endpoint is required for API storage');
}
if (!config.storage?.apiAccessToken) {
errors.push('API access token is required for API storage');
}
break;
case 'file':
// File storage doesn't require additional config
break;
default:
errors.push(`Unknown storage type: ${storageType}`);
}
return {
isValid: errors.length === 0,
errors
};
}
/**
* Check if Hamster (API storage) is available
*/
static isHamsterAvailable(config: Partial<IConfiguration>): boolean {
return !!(config.storage?.apiEndpoint && config.storage?.apiAccessToken);
}
/**
* Create a storage implementation with fallback
* Tries API storage first, falls back to file storage
*/
static async createWithFallback(
config: Partial<IConfiguration>,
projectPath: string
): Promise<IStorage> {
// Try API storage if configured
if (StorageFactory.isHamsterAvailable(config)) {
try {
const apiStorage = StorageFactory.createApiStorage(config);
await apiStorage.initialize();
return apiStorage;
} catch (error) {
const logger = getLogger('StorageFactory');
logger.warn(
'Failed to initialize API storage, falling back to file storage:',
error
);
}
}
// Fallback to file storage
return StorageFactory.createFileStorage(projectPath, config);
}
}
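
A hedged sketch of how callers might drive the factory. It assumes the nested storage config fields shown are optional in IConfiguration; the endpoint and token values are placeholders.

```typescript
import { StorageFactory } from './storage-factory.js';

// 'auto' resolves in order: explicit API credentials in config,
// then an AuthManager session, then local file storage.
const storage = StorageFactory.create({ storage: { type: 'auto' } }, '/path/to/project');
await storage.initialize();

// An explicit API configuration can be checked before use
const { isValid, errors } = StorageFactory.validateStorageConfig({
	storage: {
		type: 'api',
		apiEndpoint: 'https://tryhamster.com/api',
		apiAccessToken: 'token-from-tm-auth-login' // placeholder
	}
});
if (!isValid) console.error(errors);
```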

View File

@@ -0,0 +1,99 @@
/**
* Test file documenting subpath export usage
* This demonstrates how consumers can use granular imports for better tree-shaking
*/
import { describe, it, expect } from 'vitest';
describe('Subpath Exports', () => {
it('should allow importing from auth subpath', async () => {
// Instead of: import { AuthManager } from '@tm/core';
// Use: import { AuthManager } from '@tm/core/auth';
const authModule = await import('./auth');
expect(authModule.AuthManager).toBeDefined();
expect(authModule.AuthenticationError).toBeDefined();
});
it('should allow importing from storage subpath', async () => {
// Instead of: import { FileStorage } from '@tm/core';
// Use: import { FileStorage } from '@tm/core/storage';
const storageModule = await import('./storage');
expect(storageModule.FileStorage).toBeDefined();
expect(storageModule.ApiStorage).toBeDefined();
expect(storageModule.StorageFactory).toBeDefined();
});
it('should allow importing from config subpath', async () => {
// Instead of: import { ConfigManager } from '@tm/core';
// Use: import { ConfigManager } from '@tm/core/config';
const configModule = await import('./config');
expect(configModule.ConfigManager).toBeDefined();
});
it('should allow importing from errors subpath', async () => {
// Instead of: import { TaskMasterError } from '@tm/core';
// Use: import { TaskMasterError } from '@tm/core/errors';
const errorsModule = await import('./errors');
expect(errorsModule.TaskMasterError).toBeDefined();
expect(errorsModule.ERROR_CODES).toBeDefined();
});
it('should allow importing from logger subpath', async () => {
// Instead of: import { getLogger } from '@tm/core';
// Use: import { getLogger } from '@tm/core/logger';
const loggerModule = await import('./logger');
expect(loggerModule.getLogger).toBeDefined();
expect(loggerModule.createLogger).toBeDefined();
});
it('should allow importing from providers subpath', async () => {
// Instead of: import { BaseProvider } from '@tm/core';
// Use: import { BaseProvider } from '@tm/core/providers';
const providersModule = await import('./providers');
expect(providersModule.BaseProvider).toBeDefined();
});
it('should allow importing from services subpath', async () => {
// Instead of: import { TaskService } from '@tm/core';
// Use: import { TaskService } from '@tm/core/services';
const servicesModule = await import('./services');
expect(servicesModule.TaskService).toBeDefined();
});
it('should allow importing from utils subpath', async () => {
// Instead of: import { generateId } from '@tm/core';
// Use: import { generateId } from '@tm/core/utils';
const utilsModule = await import('./utils');
expect(utilsModule.generateId).toBeDefined();
});
});
/**
* Usage Examples for Consumers:
*
* 1. Import only authentication (smaller bundle):
* ```typescript
* import { AuthManager, AuthenticationError } from '@tm/core/auth';
* ```
*
* 2. Import only storage (no auth code bundled):
* ```typescript
* import { FileStorage, StorageFactory } from '@tm/core/storage';
* ```
*
* 3. Import only errors (minimal bundle):
* ```typescript
* import { TaskMasterError, ERROR_CODES } from '@tm/core/errors';
* ```
*
* 4. Still support convenience imports (larger bundle but better DX):
* ```typescript
* import { AuthManager, FileStorage, TaskMasterError } from '@tm/core';
* ```
*
* Benefits:
* - Better tree-shaking: unused modules are not bundled
* - Clearer dependencies: explicit about what parts of the library you use
* - Faster builds: bundlers can optimize better with granular imports
* - Smaller bundles: especially important for browser/edge deployments
*/

View File

@@ -0,0 +1,188 @@
/**
* @fileoverview TaskMasterCore facade - main entry point for tm-core functionality
*/
import { ConfigManager } from './config/config-manager.js';
import {
TaskService,
type TaskListResult as ListTasksResult,
type GetTaskListOptions
} from './services/task-service.js';
import { ERROR_CODES, TaskMasterError } from './errors/task-master-error.js';
import type { IConfiguration } from './interfaces/configuration.interface.js';
import type { Task, TaskStatus, TaskFilter } from './types/index.js';
/**
* Options for creating TaskMasterCore instance
*/
export interface TaskMasterCoreOptions {
projectPath: string;
configuration?: Partial<IConfiguration>;
}
/**
* Re-export result types from TaskService
*/
export type { TaskListResult as ListTasksResult } from './services/task-service.js';
export type { GetTaskListOptions } from './services/task-service.js';
/**
* TaskMasterCore facade class
* Provides simplified API for all tm-core operations
*/
export class TaskMasterCore {
private configManager: ConfigManager;
private taskService: TaskService;
/**
* Create and initialize a new TaskMasterCore instance
* This is the ONLY way to create a TaskMasterCore
*
* @param options - Configuration options for TaskMasterCore
* @returns Fully initialized TaskMasterCore instance
*/
static async create(options: TaskMasterCoreOptions): Promise<TaskMasterCore> {
const instance = new TaskMasterCore();
await instance.initialize(options);
return instance;
}
/**
* Private constructor - use TaskMasterCore.create() instead
* This ensures the TaskMasterCore is always properly initialized
*/
private constructor() {
// Services will be initialized in the initialize() method
this.configManager = null as any;
this.taskService = null as any;
}
/**
* Initialize by loading services
* Private - only called by the factory method
*/
private async initialize(options: TaskMasterCoreOptions): Promise<void> {
if (!options.projectPath) {
throw new TaskMasterError(
'Project path is required',
ERROR_CODES.MISSING_CONFIGURATION
);
}
try {
// Create config manager using factory method
this.configManager = await ConfigManager.create(options.projectPath);
// Apply configuration overrides if provided
if (options.configuration) {
await this.configManager.updateConfig(options.configuration);
}
// Create task service
this.taskService = new TaskService(this.configManager);
await this.taskService.initialize();
} catch (error) {
throw new TaskMasterError(
'Failed to initialize TaskMasterCore',
ERROR_CODES.INTERNAL_ERROR,
{ operation: 'initialize' },
error as Error
);
}
}
/**
* Get list of tasks with optional filtering
* @deprecated Use getTaskList() instead
*/
async listTasks(options?: {
tag?: string;
filter?: TaskFilter;
includeSubtasks?: boolean;
}): Promise<ListTasksResult> {
return this.getTaskList(options);
}
/**
* Get list of tasks with optional filtering
*/
async getTaskList(options?: GetTaskListOptions): Promise<ListTasksResult> {
return this.taskService.getTaskList(options);
}
/**
* Get a specific task by ID
*/
async getTask(taskId: string, tag?: string): Promise<Task | null> {
return this.taskService.getTask(taskId, tag);
}
/**
* Get tasks by status
*/
async getTasksByStatus(
status: TaskStatus | TaskStatus[],
tag?: string
): Promise<Task[]> {
return this.taskService.getTasksByStatus(status, tag);
}
/**
* Get task statistics
*/
async getTaskStats(tag?: string): Promise<{
total: number;
byStatus: Record<TaskStatus, number>;
withSubtasks: number;
blocked: number;
}> {
const stats = await this.taskService.getTaskStats(tag);
// Remove storageType from the return to maintain backward compatibility
const { storageType, ...restStats } = stats;
return restStats;
}
/**
* Get next available task
*/
async getNextTask(tag?: string): Promise<Task | null> {
return this.taskService.getNextTask(tag);
}
/**
* Get current storage type
*/
getStorageType(): 'file' | 'api' | 'auto' {
return this.taskService.getStorageType();
}
/**
* Get current active tag
*/
getActiveTag(): string {
return this.configManager.getActiveTag();
}
/**
* Set active tag
*/
async setActiveTag(tag: string): Promise<void> {
await this.configManager.setActiveTag(tag);
}
/**
* Close and cleanup resources
*/
async close(): Promise<void> {
// TaskService handles storage cleanup internally
}
}
/**
* Factory function to create TaskMasterCore instance
*/
export async function createTaskMasterCore(
options: TaskMasterCoreOptions
): Promise<TaskMasterCore> {
return TaskMasterCore.create(options);
}
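
A hedged end-to-end sketch of the facade. The import specifier is illustrative (consumers would use the published '@tm/core' entry), and the option shape assumed here mirrors the deprecated listTasks options.

```typescript
import { createTaskMasterCore } from '@tm/core';

const tmCore = await createTaskMasterCore({ projectPath: '/path/to/project' });

const { tasks, total, filtered } = await tmCore.getTaskList({
	filter: { status: ['pending', 'in-progress'], priority: 'high' },
	includeSubtasks: true
});

const next = await tmCore.getNextTask();
console.log(`Showing ${filtered} of ${total} tasks; next up: ${next?.title}`);

await tmCore.close();
```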

View File

@@ -0,0 +1,238 @@
/**
* Core type definitions for Task Master
*/
// ============================================================================
// Type Literals
// ============================================================================
/**
* Task status values
*/
export type TaskStatus =
| 'pending'
| 'in-progress'
| 'done'
| 'deferred'
| 'cancelled'
| 'blocked'
| 'review';
/**
* Task priority levels
*/
export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
/**
* Task complexity levels
*/
export type TaskComplexity = 'simple' | 'moderate' | 'complex' | 'very-complex';
// ============================================================================
// Core Interfaces
// ============================================================================
/**
* Placeholder task interface for temporary/minimal task objects
*/
export interface PlaceholderTask {
id: string;
title: string;
status: TaskStatus;
priority: TaskPriority;
}
/**
* Base task interface
*/
export interface Task {
id: string;
title: string;
description: string;
status: TaskStatus;
priority: TaskPriority;
dependencies: string[];
details: string;
testStrategy: string;
subtasks: Subtask[];
// Optional enhanced properties
createdAt?: string;
updatedAt?: string;
effort?: number;
actualEffort?: number;
tags?: string[];
assignee?: string;
complexity?: TaskComplexity;
}
/**
* Subtask interface extending Task with numeric ID
*/
export interface Subtask extends Omit<Task, 'id' | 'subtasks'> {
id: number;
parentId: string;
subtasks?: never; // Subtasks cannot have their own subtasks
}
/**
* Task metadata for tracking overall project state
*/
export interface TaskMetadata {
version: string;
lastModified: string;
taskCount: number;
completedCount: number;
projectName?: string;
description?: string;
tags?: string[];
}
/**
* Task collection with metadata
*/
export interface TaskCollection {
tasks: Task[];
metadata: TaskMetadata;
}
// ============================================================================
// Utility Types
// ============================================================================
/**
* Type for creating a new task (without generated fields)
*/
export type CreateTask = Omit<
Task,
'id' | 'createdAt' | 'updatedAt' | 'subtasks'
> & {
subtasks?: Omit<Subtask, 'id' | 'parentId' | 'createdAt' | 'updatedAt'>[];
};
/**
* Type for updating a task (all fields optional except ID)
*/
export type UpdateTask = Partial<Omit<Task, 'id'>> & {
id: string;
};
/**
* Type for task filters
*/
export interface TaskFilter {
status?: TaskStatus | TaskStatus[];
priority?: TaskPriority | TaskPriority[];
tags?: string[];
hasSubtasks?: boolean;
search?: string;
assignee?: string;
complexity?: TaskComplexity | TaskComplexity[];
}
/**
* Type for sort options
*/
export interface TaskSortOptions {
field: keyof Task;
direction: 'asc' | 'desc';
}
// ============================================================================
// Type Guards
// ============================================================================
/**
* Type guard to check if a value is a valid TaskStatus
*/
export function isTaskStatus(value: unknown): value is TaskStatus {
return (
typeof value === 'string' &&
[
'pending',
'in-progress',
'done',
'deferred',
'cancelled',
'blocked',
'review'
].includes(value)
);
}
/**
* Type guard to check if a value is a valid TaskPriority
*/
export function isTaskPriority(value: unknown): value is TaskPriority {
return (
typeof value === 'string' &&
['low', 'medium', 'high', 'critical'].includes(value)
);
}
/**
* Type guard to check if a value is a valid TaskComplexity
*/
export function isTaskComplexity(value: unknown): value is TaskComplexity {
return (
typeof value === 'string' &&
['simple', 'moderate', 'complex', 'very-complex'].includes(value)
);
}
/**
* Type guard to check if an object is a Task
*/
export function isTask(obj: unknown): obj is Task {
if (!obj || typeof obj !== 'object') return false;
const task = obj as Record<string, unknown>;
return (
typeof task.id === 'string' &&
typeof task.title === 'string' &&
typeof task.description === 'string' &&
isTaskStatus(task.status) &&
isTaskPriority(task.priority) &&
Array.isArray(task.dependencies) &&
typeof task.details === 'string' &&
typeof task.testStrategy === 'string' &&
Array.isArray(task.subtasks)
);
}
/**
* Type guard to check if an object is a Subtask
*/
export function isSubtask(obj: unknown): obj is Subtask {
if (!obj || typeof obj !== 'object') return false;
const subtask = obj as Record<string, unknown>;
return (
typeof subtask.id === 'number' &&
typeof subtask.parentId === 'string' &&
typeof subtask.title === 'string' &&
typeof subtask.description === 'string' &&
isTaskStatus(subtask.status) &&
isTaskPriority(subtask.priority) &&
!('subtasks' in subtask)
);
}
// ============================================================================
// Deprecated Types (for backwards compatibility)
// ============================================================================
/**
* @deprecated Use TaskStatus instead
*/
export type Status = TaskStatus;
/**
* @deprecated Use TaskPriority instead
*/
export type Priority = TaskPriority;
/**
* @deprecated Use TaskComplexity instead
*/
export type Complexity = TaskComplexity;
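
A short sketch of using the type guards to narrow untrusted input; the parse helper and import path are hypothetical.

```typescript
import { isTask, isTaskStatus, type Task } from './index.js';

// Hypothetical helper: narrow unknown input (e.g. parsed JSON) to a Task
function parseTask(raw: unknown): Task {
	if (!isTask(raw)) {
		throw new Error('Value is not a valid Task');
	}
	return raw; // narrowed to Task by the type guard
}

isTaskStatus('in-progress'); // true
isTaskStatus('started'); // false
```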

View File

@@ -0,0 +1,9 @@
/**
* @fileoverview Legacy type definitions for backwards compatibility
* These types are deprecated and will be removed in future versions
*/
/**
* @deprecated Use string directly instead. This will be removed in a future version.
*/
export type TaskId = string;

View File

@@ -0,0 +1,142 @@
/**
* @fileoverview ID generation utilities for Task Master
* Provides functions to generate unique identifiers for tasks and subtasks
*/
import { randomBytes } from 'node:crypto';
/**
* Generates a unique task ID using the format: TASK-{timestamp}-{random}
*
* @returns A unique task ID string
* @example
* ```typescript
* const taskId = generateTaskId();
* // Returns something like: "TASK-1704067200000-A7B3"
* ```
*/
export function generateTaskId(): string {
const timestamp = Date.now();
const random = generateRandomString(4);
return `TASK-${timestamp}-${random}`;
}
/**
* Generates a subtask ID using the format: {parentId}.{sequential}
*
* @param parentId - The ID of the parent task
* @param existingSubtasks - Array of existing subtask IDs to determine the next sequential number
* @returns A unique subtask ID string
* @example
* ```typescript
* const subtaskId = generateSubtaskId("TASK-123-A7B3", ["TASK-123-A7B3.1"]);
* // Returns: "TASK-123-A7B3.2"
* ```
*/
export function generateSubtaskId(
parentId: string,
existingSubtasks: string[] = []
): string {
// Find existing subtasks for this parent
const parentSubtasks = existingSubtasks.filter((id) =>
id.startsWith(`${parentId}.`)
);
// Extract sequential numbers and find the highest
const sequentialNumbers = parentSubtasks
.map((id) => {
const parts = id.split('.');
const lastPart = parts[parts.length - 1];
return Number.parseInt(lastPart, 10);
})
.filter((num) => !Number.isNaN(num))
.sort((a, b) => a - b);
// Determine the next sequential number
const nextSequential =
sequentialNumbers.length > 0 ? Math.max(...sequentialNumbers) + 1 : 1;
return `${parentId}.${nextSequential}`;
}
/**
* Generates a random alphanumeric string of specified length
* Uses crypto.randomBytes for cryptographically secure randomness
*
* @param length - The desired length of the random string
* @returns A random alphanumeric string
* @internal
*/
function generateRandomString(length: number): string {
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
const bytes = randomBytes(length);
let result = '';
for (let i = 0; i < length; i++) {
result += chars[bytes[i] % chars.length];
}
return result;
}
/**
* Validates a task ID format
*
* @param id - The ID to validate
* @returns True if the ID matches the expected task ID format
* @example
* ```typescript
* isValidTaskId("TASK-1704067200000-A7B3"); // true
* isValidTaskId("invalid-id"); // false
* ```
*/
export function isValidTaskId(id: string): boolean {
const taskIdRegex = /^TASK-\d{13}-[A-Z0-9]{4}$/;
return taskIdRegex.test(id);
}
/**
* Validates a subtask ID format
*
* @param id - The ID to validate
* @returns True if the ID matches the expected subtask ID format
* @example
* ```typescript
* isValidSubtaskId("TASK-1704067200000-A7B3.1"); // true
* isValidSubtaskId("TASK-1704067200000-A7B3.1.2"); // true (nested subtask)
* isValidSubtaskId("invalid.id"); // false
* ```
*/
export function isValidSubtaskId(id: string): boolean {
const parts = id.split('.');
if (parts.length < 2) return false;
// First part should be a valid task ID
const taskIdPart = parts[0];
if (!isValidTaskId(taskIdPart)) return false;
// Remaining parts should be positive integers
const sequentialParts = parts.slice(1);
return sequentialParts.every((part) => {
const num = Number.parseInt(part, 10);
return !Number.isNaN(num) && num > 0 && part === num.toString();
});
}
/**
* Extracts the parent task ID from a subtask ID
*
* @param subtaskId - The subtask ID
* @returns The parent task ID, or null if the input is not a valid subtask ID
* @example
* ```typescript
* getParentTaskId("TASK-1704067200000-A7B3.1.2"); // "TASK-1704067200000-A7B3"
* getParentTaskId("TASK-1704067200000-A7B3"); // null (not a subtask)
* ```
*/
export function getParentTaskId(subtaskId: string): string | null {
if (!isValidSubtaskId(subtaskId)) return null;
const parts = subtaskId.split('.');
return parts[0];
}
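
Worth noting for consumers (illustrative values): `generateSubtaskId` picks the highest existing sequence number plus one, so gaps left by deleted subtasks are not reused.

```typescript
import { generateSubtaskId, generateTaskId } from './id-generator.js';

const parent = generateTaskId(); // e.g. 'TASK-1704067200000-A7B3'

generateSubtaskId(parent, []); // `${parent}.1`
generateSubtaskId(parent, [`${parent}.1`, `${parent}.3`]); // `${parent}.4` (max + 1, the gap at .2 stays)
```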

View File

@@ -0,0 +1,32 @@
/**
* @fileoverview Utility functions for the tm-core package
* This file exports all utility functions and helper classes
*/
// Export ID generation utilities
export {
generateTaskId as generateId, // Alias for backward compatibility
generateTaskId,
generateSubtaskId,
isValidTaskId,
isValidSubtaskId,
getParentTaskId
} from './id-generator';
// Additional utility exports
/**
* Formats a date for task timestamps
* @deprecated This is a placeholder function that will be properly implemented in later tasks
*/
export function formatDate(date: Date = new Date()): string {
return date.toISOString();
}
/**
* Deep clones an object
* @deprecated This is a placeholder function that will be properly implemented in later tasks
*/
export function deepClone<T>(obj: T): T {
return JSON.parse(JSON.stringify(obj));
}
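
One caveat worth knowing while deepClone remains a JSON round-trip (values below are made up): Date values come back as ISO strings, and undefined or function properties are dropped.

```typescript
import { deepClone } from './index.js';

const original = {
	title: 'Example task',
	createdAt: new Date(),
	notes: undefined
};

const copy = deepClone(original);
// copy.createdAt is now an ISO string rather than a Date,
// and copy.notes has been dropped entirely.
```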

View File

@@ -0,0 +1,422 @@
/**
* @fileoverview End-to-end integration test for listTasks functionality
*/
import { promises as fs } from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import {
type Task,
type TaskMasterCore,
type TaskStatus,
createTaskMasterCore
} from '../../src/index';
describe('TaskMasterCore - listTasks E2E', () => {
let tmpDir: string;
let tmCore: TaskMasterCore;
// Sample tasks data
const sampleTasks: Task[] = [
{
id: '1',
title: 'Setup project',
description: 'Initialize the project structure',
status: 'done',
priority: 'high',
dependencies: [],
details: 'Create all necessary directories and config files',
testStrategy: 'Manual verification',
subtasks: [
{
id: 1,
parentId: '1',
title: 'Create directories',
description: 'Create project directories',
status: 'done',
priority: 'high',
dependencies: [],
details: 'Create src, tests, docs directories',
testStrategy: 'Check directories exist'
},
{
id: 2,
parentId: '1',
title: 'Initialize package.json',
description: 'Create package.json file',
status: 'done',
priority: 'high',
dependencies: [],
details: 'Run npm init',
testStrategy: 'Verify package.json exists'
}
],
tags: ['setup', 'infrastructure']
},
{
id: '2',
title: 'Implement core features',
description: 'Build the main functionality',
status: 'in-progress',
priority: 'high',
dependencies: ['1'],
details: 'Implement all core business logic',
testStrategy: 'Unit tests for all features',
subtasks: [],
tags: ['feature', 'core'],
assignee: 'developer1'
},
{
id: '3',
title: 'Write documentation',
description: 'Create user and developer docs',
status: 'pending',
priority: 'medium',
dependencies: ['2'],
details: 'Write comprehensive documentation',
testStrategy: 'Review by team',
subtasks: [],
tags: ['documentation'],
complexity: 'simple'
},
{
id: '4',
title: 'Performance optimization',
description: 'Optimize for speed and efficiency',
status: 'blocked',
priority: 'low',
dependencies: ['2'],
details: 'Profile and optimize bottlenecks',
testStrategy: 'Performance benchmarks',
subtasks: [],
assignee: 'developer2',
complexity: 'complex'
},
{
id: '5',
title: 'Security audit',
description: 'Review security vulnerabilities',
status: 'deferred',
priority: 'critical',
dependencies: [],
details: 'Complete security assessment',
testStrategy: 'Security scanning tools',
subtasks: [],
tags: ['security', 'audit']
}
];
beforeEach(async () => {
// Create temp directory for testing
tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'tm-core-test-'));
// Create .taskmaster/tasks directory
const tasksDir = path.join(tmpDir, '.taskmaster', 'tasks');
await fs.mkdir(tasksDir, { recursive: true });
// Write sample tasks.json
const tasksFile = path.join(tasksDir, 'tasks.json');
const tasksData = {
tasks: sampleTasks,
metadata: {
version: '1.0.0',
lastModified: new Date().toISOString(),
taskCount: sampleTasks.length,
completedCount: 1
}
};
await fs.writeFile(tasksFile, JSON.stringify(tasksData, null, 2));
// Create TaskMasterCore instance
		tmCore = await createTaskMasterCore({ projectPath: tmpDir });
});
afterEach(async () => {
// Cleanup
if (tmCore) {
await tmCore.close();
}
// Remove temp directory
await fs.rm(tmpDir, { recursive: true, force: true });
});
describe('Basic listing', () => {
it('should list all tasks', async () => {
const result = await tmCore.listTasks();
expect(result.tasks).toHaveLength(5);
expect(result.total).toBe(5);
expect(result.filtered).toBe(5);
expect(result.tag).toBeUndefined();
});
it('should include subtasks by default', async () => {
const result = await tmCore.listTasks();
const setupTask = result.tasks.find((t) => t.id === '1');
expect(setupTask?.subtasks).toHaveLength(2);
expect(setupTask?.subtasks[0].title).toBe('Create directories');
});
it('should exclude subtasks when requested', async () => {
const result = await tmCore.listTasks({ includeSubtasks: false });
const setupTask = result.tasks.find((t) => t.id === '1');
expect(setupTask?.subtasks).toHaveLength(0);
});
});
describe('Filtering', () => {
it('should filter by status', async () => {
const result = await tmCore.listTasks({
filter: { status: 'done' }
});
expect(result.filtered).toBe(1);
expect(result.tasks[0].id).toBe('1');
});
it('should filter by multiple statuses', async () => {
const result = await tmCore.listTasks({
filter: { status: ['done', 'in-progress'] }
});
expect(result.filtered).toBe(2);
const ids = result.tasks.map((t) => t.id);
expect(ids).toContain('1');
expect(ids).toContain('2');
});
it('should filter by priority', async () => {
const result = await tmCore.listTasks({
filter: { priority: 'high' }
});
expect(result.filtered).toBe(2);
});
it('should filter by tags', async () => {
const result = await tmCore.listTasks({
filter: { tags: ['setup'] }
});
expect(result.filtered).toBe(1);
expect(result.tasks[0].id).toBe('1');
});
it('should filter by assignee', async () => {
const result = await tmCore.listTasks({
filter: { assignee: 'developer1' }
});
expect(result.filtered).toBe(1);
expect(result.tasks[0].id).toBe('2');
});
it('should filter by complexity', async () => {
const result = await tmCore.listTasks({
filter: { complexity: 'complex' }
});
expect(result.filtered).toBe(1);
expect(result.tasks[0].id).toBe('4');
});
it('should filter by search term', async () => {
const result = await tmCore.listTasks({
filter: { search: 'documentation' }
});
expect(result.filtered).toBe(1);
expect(result.tasks[0].id).toBe('3');
});
it('should filter by hasSubtasks', async () => {
const withSubtasks = await tmCore.listTasks({
filter: { hasSubtasks: true }
});
expect(withSubtasks.filtered).toBe(1);
expect(withSubtasks.tasks[0].id).toBe('1');
const withoutSubtasks = await tmCore.listTasks({
filter: { hasSubtasks: false }
});
expect(withoutSubtasks.filtered).toBe(4);
});
it('should handle combined filters', async () => {
const result = await tmCore.listTasks({
filter: {
priority: ['high', 'critical'],
status: ['pending', 'deferred']
}
});
expect(result.filtered).toBe(1);
expect(result.tasks[0].id).toBe('5'); // Critical priority, deferred status
});
});
describe('Helper methods', () => {
it('should get task by ID', async () => {
const task = await tmCore.getTask('2');
expect(task).not.toBeNull();
expect(task?.title).toBe('Implement core features');
});
it('should return null for non-existent task', async () => {
const task = await tmCore.getTask('999');
expect(task).toBeNull();
});
it('should get tasks by status', async () => {
const pendingTasks = await tmCore.getTasksByStatus('pending');
expect(pendingTasks).toHaveLength(1);
expect(pendingTasks[0].id).toBe('3');
const multipleTasks = await tmCore.getTasksByStatus(['done', 'blocked']);
expect(multipleTasks).toHaveLength(2);
});
it('should get task statistics', async () => {
const stats = await tmCore.getTaskStats();
expect(stats.total).toBe(5);
expect(stats.byStatus.done).toBe(1);
expect(stats.byStatus['in-progress']).toBe(1);
expect(stats.byStatus.pending).toBe(1);
expect(stats.byStatus.blocked).toBe(1);
expect(stats.byStatus.deferred).toBe(1);
expect(stats.byStatus.cancelled).toBe(0);
expect(stats.byStatus.review).toBe(0);
expect(stats.withSubtasks).toBe(1);
expect(stats.blocked).toBe(1);
});
});
describe('Error handling', () => {
it('should handle missing tasks file gracefully', async () => {
// Create new instance with empty directory
const emptyDir = await fs.mkdtemp(path.join(os.tmpdir(), 'tm-empty-'));
			const emptyCore = await createTaskMasterCore({ projectPath: emptyDir });
try {
const result = await emptyCore.listTasks();
expect(result.tasks).toHaveLength(0);
expect(result.total).toBe(0);
expect(result.filtered).toBe(0);
} finally {
await emptyCore.close();
await fs.rm(emptyDir, { recursive: true, force: true });
}
});
it('should validate task entities', async () => {
// Write invalid task data
const invalidDir = await fs.mkdtemp(
path.join(os.tmpdir(), 'tm-invalid-')
);
const tasksDir = path.join(invalidDir, '.taskmaster', 'tasks');
await fs.mkdir(tasksDir, { recursive: true });
const invalidData = {
tasks: [
{
id: '', // Invalid: empty ID
title: 'Test',
description: 'Test',
status: 'done',
priority: 'high',
dependencies: [],
details: 'Test',
testStrategy: 'Test',
subtasks: []
}
],
metadata: {
version: '1.0.0',
lastModified: new Date().toISOString(),
taskCount: 1,
completedCount: 0
}
};
await fs.writeFile(
path.join(tasksDir, 'tasks.json'),
JSON.stringify(invalidData)
);
			const invalidCore = await createTaskMasterCore({ projectPath: invalidDir });
try {
await expect(invalidCore.listTasks()).rejects.toThrow();
} finally {
await invalidCore.close();
await fs.rm(invalidDir, { recursive: true, force: true });
}
});
});
describe('Tags support', () => {
beforeEach(async () => {
// Create tasks for a different tag
const taggedTasks = [
{
id: 'tag-1',
title: 'Tagged task',
description: 'Task with tag',
status: 'pending' as TaskStatus,
priority: 'medium' as const,
dependencies: [],
details: 'Tagged task details',
testStrategy: 'Test',
subtasks: []
}
];
const tagFile = path.join(
tmpDir,
'.taskmaster',
'tasks',
'feature-branch.json'
);
await fs.writeFile(
tagFile,
JSON.stringify({
tasks: taggedTasks,
metadata: {
version: '1.0.0',
lastModified: new Date().toISOString(),
taskCount: 1,
completedCount: 0
}
})
);
});
it('should list tasks for specific tag', async () => {
const result = await tmCore.listTasks({ tag: 'feature-branch' });
expect(result.tasks).toHaveLength(1);
expect(result.tasks[0].id).toBe('tag-1');
expect(result.tag).toBe('feature-branch');
});
it('should list default tasks when no tag specified', async () => {
const result = await tmCore.listTasks();
expect(result.tasks).toHaveLength(5);
expect(result.tasks[0].id).toBe('1');
});
});
});

View File

@@ -0,0 +1,210 @@
/**
* @fileoverview Mock provider for testing BaseProvider functionality
*/
import type {
AIModel,
AIOptions,
AIResponse,
ProviderInfo,
ProviderUsageStats
} from '../../src/interfaces/ai-provider.interface';
import {
BaseProvider,
type BaseProviderConfig,
type CompletionResult
} from '../../src/providers/ai/base-provider';
/**
* Configuration for MockProvider behavior
*/
export interface MockProviderOptions extends BaseProviderConfig {
shouldFail?: boolean;
failAfterAttempts?: number;
simulateRateLimit?: boolean;
simulateTimeout?: boolean;
responseDelay?: number;
tokenMultiplier?: number;
}
/**
* Mock provider for testing BaseProvider functionality
*/
export class MockProvider extends BaseProvider {
private attemptCount = 0;
private readonly options: MockProviderOptions;
constructor(options: MockProviderOptions) {
super(options);
this.options = options;
}
/**
* Simulate completion generation with configurable behavior
*/
protected async generateCompletionInternal(
prompt: string,
_options?: AIOptions
): Promise<CompletionResult> {
this.attemptCount++;
// Simulate delay if configured
if (this.options.responseDelay) {
await this.sleep(this.options.responseDelay);
}
// Simulate failures based on configuration
if (this.options.shouldFail) {
throw new Error('Mock provider error');
}
if (
this.options.failAfterAttempts &&
this.attemptCount <= this.options.failAfterAttempts
) {
if (this.options.simulateRateLimit) {
throw new Error('Rate limit exceeded - too many requests (429)');
}
if (this.options.simulateTimeout) {
throw new Error('Request timeout - ECONNRESET');
}
throw new Error('Temporary failure');
}
// Return successful mock response
return {
content: `Mock response to: ${prompt}`,
inputTokens: this.calculateTokens(prompt),
outputTokens: this.calculateTokens(`Mock response to: ${prompt}`),
finishReason: 'complete',
model: this.model
};
}
/**
* Simple token calculation for testing
*/
calculateTokens(text: string, _model?: string): number {
const multiplier = this.options.tokenMultiplier || 1;
// Rough approximation: 1 token per 4 characters
return Math.ceil((text.length / 4) * multiplier);
}
getName(): string {
return 'mock';
}
getDefaultModel(): string {
return 'mock-model-v1';
}
/**
* Get the number of attempts made
*/
getAttemptCount(): number {
return this.attemptCount;
}
/**
* Reset attempt counter
*/
resetAttempts(): void {
this.attemptCount = 0;
}
// Implement remaining abstract methods
	generateStreamingCompletion(
prompt: string,
_options?: AIOptions
): AsyncIterator<Partial<AIResponse>> {
// Simple mock implementation
const response: Partial<AIResponse> = {
content: `Mock streaming response to: ${prompt}`,
provider: this.getName(),
model: this.model
};
return {
async next() {
return { value: response, done: true };
}
};
}
async isAvailable(): Promise<boolean> {
return !this.options.shouldFail;
}
getProviderInfo(): ProviderInfo {
return {
name: 'mock',
displayName: 'Mock Provider',
description: 'Mock provider for testing',
models: this.getAvailableModels(),
defaultModel: this.getDefaultModel(),
requiresApiKey: true,
features: {
streaming: true,
functions: false,
vision: false,
embeddings: false
}
};
}
getAvailableModels(): AIModel[] {
return [
{
id: 'mock-model-v1',
name: 'Mock Model v1',
description: 'First mock model',
contextLength: 4096,
inputCostPer1K: 0.001,
outputCostPer1K: 0.002,
supportsStreaming: true
},
{
id: 'mock-model-v2',
name: 'Mock Model v2',
description: 'Second mock model',
contextLength: 8192,
inputCostPer1K: 0.002,
outputCostPer1K: 0.004,
supportsStreaming: true
}
];
}
async validateCredentials(): Promise<boolean> {
return this.apiKey === 'valid-key';
}
async getUsageStats(): Promise<ProviderUsageStats | null> {
return {
totalRequests: this.attemptCount,
totalTokens: 1000,
totalCost: 0.01,
requestsToday: this.attemptCount,
tokensToday: 1000,
costToday: 0.01,
averageResponseTime: 100,
successRate: 0.9,
lastRequestAt: new Date().toISOString()
};
}
async initialize(): Promise<void> {
// No-op for mock
}
async close(): Promise<void> {
// No-op for mock
}
// Override retry configuration for testing
protected getMaxRetries(): number {
return this.options.failAfterAttempts
? this.options.failAfterAttempts + 1
: 3;
}
}
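
A hedged sketch of exercising the retry path with the mock, the same scenario the BaseProvider unit tests below assert: fail twice with a 429-style error, then succeed.

```typescript
import { MockProvider } from './mock-provider.js';

const provider = new MockProvider({
	apiKey: 'test-key',
	failAfterAttempts: 2, // first two attempts throw
	simulateRateLimit: true // ...with a rate-limit style message
});

const response = await provider.generateCompletion('Summarize task 42');
console.log(response.content); // 'Mock response to: Summarize task 42'
console.log(provider.getAttemptCount()); // 3: two retried failures plus one success
```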

View File

@@ -0,0 +1,21 @@
/**
* @fileoverview Vitest test setup file
*/
import { afterAll, beforeAll, vi } from 'vitest';
// Setup any global test configuration here
// For example, increase timeout for slow CI environments
if (process.env.CI) {
// Vitest timeout is configured in vitest.config.ts
}
// Suppress console errors during tests unless explicitly testing them
const originalError = console.error;
beforeAll(() => {
console.error = vi.fn();
});
afterAll(() => {
console.error = originalError;
});

View File

@@ -0,0 +1,265 @@
/**
* @fileoverview Unit tests for BaseProvider abstract class
*/
import { beforeEach, describe, expect, it } from 'vitest';
import {
ERROR_CODES,
TaskMasterError
} from '../../src/errors/task-master-error';
import { MockProvider } from '../mocks/mock-provider';
describe('BaseProvider', () => {
describe('constructor', () => {
it('should require an API key', () => {
expect(() => {
new MockProvider({ apiKey: '' });
}).toThrow(TaskMasterError);
});
it('should initialize with provided API key and model', () => {
const provider = new MockProvider({
apiKey: 'test-key',
model: 'mock-model-v2'
});
expect(provider.getModel()).toBe('mock-model-v2');
});
it('should use default model if not provided', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
expect(provider.getModel()).toBe('mock-model-v1');
});
});
describe('generateCompletion', () => {
let provider: MockProvider;
beforeEach(() => {
provider = new MockProvider({ apiKey: 'test-key' });
});
it('should successfully generate a completion', async () => {
const response = await provider.generateCompletion('Test prompt');
expect(response).toMatchObject({
content: 'Mock response to: Test prompt',
provider: 'mock',
model: 'mock-model-v1',
inputTokens: expect.any(Number),
outputTokens: expect.any(Number),
totalTokens: expect.any(Number),
duration: expect.any(Number),
timestamp: expect.any(String)
});
});
it('should validate empty prompts', async () => {
await expect(provider.generateCompletion('')).rejects.toThrow(
'Prompt must be a non-empty string'
);
});
it('should validate prompt type', async () => {
await expect(provider.generateCompletion(null as any)).rejects.toThrow(
'Prompt must be a non-empty string'
);
});
it('should validate temperature range', async () => {
await expect(
provider.generateCompletion('Test', { temperature: 3 })
).rejects.toThrow('Temperature must be between 0 and 2');
});
it('should validate maxTokens range', async () => {
await expect(
provider.generateCompletion('Test', { maxTokens: 0 })
).rejects.toThrow('Max tokens must be between 1 and 100000');
});
it('should validate topP range', async () => {
await expect(
provider.generateCompletion('Test', { topP: 1.5 })
).rejects.toThrow('Top-p must be between 0 and 1');
});
});
describe('retry logic', () => {
it('should retry on rate limit errors', async () => {
const provider = new MockProvider({
apiKey: 'test-key',
failAfterAttempts: 2,
simulateRateLimit: true,
responseDelay: 10
});
const response = await provider.generateCompletion('Test prompt');
expect(response.content).toBe('Mock response to: Test prompt');
expect(provider.getAttemptCount()).toBe(3); // 2 failures + 1 success
});
it('should retry on timeout errors', async () => {
const provider = new MockProvider({
apiKey: 'test-key',
failAfterAttempts: 1,
simulateTimeout: true
});
const response = await provider.generateCompletion('Test prompt');
expect(response.content).toBe('Mock response to: Test prompt');
expect(provider.getAttemptCount()).toBe(2); // 1 failure + 1 success
});
it('should fail after max retries', async () => {
const provider = new MockProvider({
apiKey: 'test-key',
shouldFail: true
});
await expect(provider.generateCompletion('Test prompt')).rejects.toThrow(
'mock provider error'
);
});
it('should calculate exponential backoff delays', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
// Access protected method through type assertion
const calculateDelay = (provider as any).calculateBackoffDelay.bind(
provider
);
const delay1 = calculateDelay(1);
const delay2 = calculateDelay(2);
const delay3 = calculateDelay(3);
// Check exponential growth (with jitter, so use ranges)
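// Ranges assume a base delay of ~1000ms doubling per attempt, with roughly +/-10% jitter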
expect(delay1).toBeGreaterThanOrEqual(900);
expect(delay1).toBeLessThanOrEqual(1100);
expect(delay2).toBeGreaterThanOrEqual(1800);
expect(delay2).toBeLessThanOrEqual(2200);
expect(delay3).toBeGreaterThanOrEqual(3600);
expect(delay3).toBeLessThanOrEqual(4400);
});
});
describe('error handling', () => {
it('should wrap provider errors properly', async () => {
const provider = new MockProvider({
apiKey: 'test-key',
shouldFail: true
});
try {
await provider.generateCompletion('Test prompt');
expect.fail('Should have thrown an error');
} catch (error) {
expect(error).toBeInstanceOf(TaskMasterError);
const tmError = error as TaskMasterError;
expect(tmError.code).toBe(ERROR_CODES.PROVIDER_ERROR);
expect(tmError.context.operation).toBe('generateCompletion');
expect(tmError.context.resource).toBe('mock');
}
});
it('should identify rate limit errors correctly', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
const isRateLimitError = (provider as any).isRateLimitError.bind(
provider
);
expect(isRateLimitError(new Error('Rate limit exceeded'))).toBe(true);
expect(isRateLimitError(new Error('Too many requests'))).toBe(true);
expect(isRateLimitError(new Error('Status: 429'))).toBe(true);
expect(isRateLimitError(new Error('Some other error'))).toBe(false);
});
it('should identify timeout errors correctly', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
const isTimeoutError = (provider as any).isTimeoutError.bind(provider);
expect(isTimeoutError(new Error('Request timeout'))).toBe(true);
expect(isTimeoutError(new Error('Operation timed out'))).toBe(true);
expect(isTimeoutError(new Error('ECONNRESET'))).toBe(true);
expect(isTimeoutError(new Error('Some other error'))).toBe(false);
});
it('should identify network errors correctly', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
const isNetworkError = (provider as any).isNetworkError.bind(provider);
expect(isNetworkError(new Error('Network error'))).toBe(true);
expect(isNetworkError(new Error('ENOTFOUND'))).toBe(true);
expect(isNetworkError(new Error('ECONNREFUSED'))).toBe(true);
expect(isNetworkError(new Error('Some other error'))).toBe(false);
});
});
describe('model management', () => {
it('should get and set model', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
expect(provider.getModel()).toBe('mock-model-v1');
provider.setModel('mock-model-v2');
expect(provider.getModel()).toBe('mock-model-v2');
});
});
describe('provider information', () => {
it('should return provider info', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
const info = provider.getProviderInfo();
expect(info.name).toBe('mock');
expect(info.displayName).toBe('Mock Provider');
expect(info.requiresApiKey).toBe(true);
expect(info.models).toHaveLength(2);
});
it('should return available models', () => {
const provider = new MockProvider({ apiKey: 'test-key' });
const models = provider.getAvailableModels();
expect(models).toHaveLength(2);
expect(models[0].id).toBe('mock-model-v1');
expect(models[1].id).toBe('mock-model-v2');
});
it('should validate credentials', async () => {
const validProvider = new MockProvider({ apiKey: 'valid-key' });
const invalidProvider = new MockProvider({ apiKey: 'invalid-key' });
expect(await validProvider.validateCredentials()).toBe(true);
expect(await invalidProvider.validateCredentials()).toBe(false);
});
});
describe('template method pattern', () => {
it('should follow the template method flow', async () => {
const provider = new MockProvider({
apiKey: 'test-key',
responseDelay: 50
});
const startTime = Date.now();
const response = await provider.generateCompletion('Test prompt', {
temperature: 0.5,
maxTokens: 100
});
const endTime = Date.now();
// Verify the response was processed through the template
expect(response.content).toBeDefined();
expect(response.duration).toBeGreaterThanOrEqual(50);
expect(response.duration).toBeLessThanOrEqual(endTime - startTime + 10);
expect(response.timestamp).toBeDefined();
expect(response.provider).toBe('mock');
});
});
});

View File

@@ -0,0 +1,139 @@
/**
* Smoke tests to verify basic package functionality and imports
*/
import {
PlaceholderParser,
PlaceholderStorage,
StorageError,
TaskNotFoundError,
TmCoreError,
ValidationError,
formatDate,
generateTaskId,
isValidTaskId,
name,
version
} from '@tm/core';
import type {
PlaceholderTask,
TaskId,
TaskPriority,
TaskStatus
} from '@tm/core';
describe('tm-core smoke tests', () => {
describe('package metadata', () => {
it('should export correct package name and version', () => {
expect(name).toBe('@task-master/tm-core');
expect(version).toBe('1.0.0');
});
});
describe('utility functions', () => {
it('should generate valid task IDs', () => {
const id1 = generateTaskId();
const id2 = generateTaskId();
expect(typeof id1).toBe('string');
expect(typeof id2).toBe('string');
expect(id1).not.toBe(id2); // Should be unique
expect(isValidTaskId(id1)).toBe(true);
expect(isValidTaskId('')).toBe(false);
});
it('should format dates', () => {
const date = new Date('2023-01-01T00:00:00.000Z');
const formatted = formatDate(date);
expect(formatted).toBe('2023-01-01T00:00:00.000Z');
});
});
describe('placeholder storage', () => {
it('should perform basic storage operations', async () => {
const storage = new PlaceholderStorage();
const testPath = 'test/path';
const testData = 'test data';
// Initially should not exist
expect(await storage.exists(testPath)).toBe(false);
expect(await storage.read(testPath)).toBe(null);
// Write and verify
await storage.write(testPath, testData);
expect(await storage.exists(testPath)).toBe(true);
expect(await storage.read(testPath)).toBe(testData);
// Delete and verify
await storage.delete(testPath);
expect(await storage.exists(testPath)).toBe(false);
});
});
describe('placeholder parser', () => {
it('should parse simple task lists', async () => {
const parser = new PlaceholderParser();
const content = `
- Task 1
- Task 2
- Task 3
`;
const isValid = await parser.validate(content);
expect(isValid).toBe(true);
const tasks = await parser.parse(content);
expect(tasks).toHaveLength(3);
expect(tasks[0]?.title).toBe('Task 1');
expect(tasks[1]?.title).toBe('Task 2');
expect(tasks[2]?.title).toBe('Task 3');
tasks.forEach((task) => {
expect(task.status).toBe('pending');
expect(task.priority).toBe('medium');
});
});
});
describe('error classes', () => {
it('should create and throw custom errors', () => {
const baseError = new TmCoreError('Base error');
expect(baseError.name).toBe('TmCoreError');
expect(baseError.message).toBe('Base error');
const taskNotFound = new TaskNotFoundError('task-123');
expect(taskNotFound.name).toBe('TaskNotFoundError');
expect(taskNotFound.code).toBe('TASK_NOT_FOUND');
expect(taskNotFound.message).toContain('task-123');
const validationError = new ValidationError('Invalid data');
expect(validationError.name).toBe('ValidationError');
expect(validationError.code).toBe('VALIDATION_ERROR');
const storageError = new StorageError('Storage failed');
expect(storageError.name).toBe('StorageError');
expect(storageError.code).toBe('STORAGE_ERROR');
});
});
describe('type definitions', () => {
it('should have correct types available', () => {
// These are compile-time checks that verify types exist
const taskId: TaskId = 'test-id';
const status: TaskStatus = 'pending';
const priority: TaskPriority = 'high';
const task: PlaceholderTask = {
id: taskId,
title: 'Test Task',
status: status,
priority: priority
};
expect(task.id).toBe('test-id');
expect(task.status).toBe('pending');
expect(task.priority).toBe('high');
});
});
});

View File

@@ -0,0 +1,46 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"lib": ["ES2022"],
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"moduleResolution": "node",
"resolveJsonModule": true,
"isolatedModules": true,
"paths": {
"@/*": ["./src/*"],
"@/auth": ["./src/auth"],
"@/config": ["./src/config"],
"@/errors": ["./src/errors"],
"@/interfaces": ["./src/interfaces"],
"@/logger": ["./src/logger"],
"@/parser": ["./src/parser"],
"@/providers": ["./src/providers"],
"@/services": ["./src/services"],
"@/storage": ["./src/storage"],
"@/types": ["./src/types"],
"@/utils": ["./src/utils"]
}
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "tests", "**/*.test.ts", "**/*.spec.ts"]
}

View File

@@ -0,0 +1,52 @@
import { defineConfig } from 'tsup';
import { dotenvLoad } from 'dotenv-mono';
dotenvLoad();
// Get all TM_PUBLIC_* env variables for build-time injection
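// e.g. a hypothetical TM_PUBLIC_BASE_URL=https://example.com would be inlined wherever
// process.env.TM_PUBLIC_BASE_URL appears in the bundle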
const getBuildTimeEnvs = () => {
const envs: Record<string, string> = {};
for (const [key, value] of Object.entries(process.env)) {
if (key.startsWith('TM_PUBLIC_')) {
// Return the actual value, not JSON.stringify'd
envs[key] = value || '';
}
}
return envs;
};
export default defineConfig({
entry: {
index: 'src/index.ts',
'auth/index': 'src/auth/index.ts',
'config/index': 'src/config/index.ts',
'errors/index': 'src/errors/index.ts',
'interfaces/index': 'src/interfaces/index.ts',
'logger/index': 'src/logger/index.ts',
'parser/index': 'src/parser/index.ts',
'providers/index': 'src/providers/index.ts',
'services/index': 'src/services/index.ts',
'storage/index': 'src/storage/index.ts',
'types/index': 'src/types/index.ts',
'utils/index': 'src/utils/index.ts'
},
format: ['cjs', 'esm'],
dts: true,
sourcemap: true,
clean: true,
splitting: false,
treeshake: true,
minify: false,
target: 'es2022',
tsconfig: './tsconfig.json',
outDir: 'dist',
// Replace process.env.TM_PUBLIC_* with actual values at build time
env: getBuildTimeEnvs(),
// Automatically treat all dependencies from package.json as external
external: [
// Externalize everything in node_modules (any import path not starting with . or /)
/^[^./]/
],
esbuildOptions(options) {
options.conditions = ['module'];
}
});

View File

@@ -0,0 +1,57 @@
import path from 'node:path';
import { fileURLToPath } from 'node:url';
import { defineConfig } from 'vitest/config';
// __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
export default defineConfig({
test: {
globals: true,
environment: 'node',
include: [
'tests/**/*.test.ts',
'tests/**/*.spec.ts',
'tests/{unit,integration,e2e}/**/*.{test,spec}.ts',
'src/**/*.test.ts',
'src/**/*.spec.ts'
],
exclude: ['node_modules', 'dist', '.git', '.cache'],
coverage: {
provider: 'v8',
reporter: ['text', 'json', 'html', 'lcov'],
exclude: [
'node_modules',
'dist',
'tests',
'**/*.test.ts',
'**/*.spec.ts',
'**/*.d.ts',
'src/index.ts'
],
thresholds: {
branches: 80,
functions: 80,
lines: 80,
statements: 80
}
},
setupFiles: ['./tests/setup.ts'],
testTimeout: 10000,
clearMocks: true,
restoreMocks: true,
mockReset: true
},
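// Path aliases below mirror (a subset of) the tsconfig "paths" so Vitest resolves @/ imports in source files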
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
'@/types': path.resolve(__dirname, './src/types'),
'@/providers': path.resolve(__dirname, './src/providers'),
'@/storage': path.resolve(__dirname, './src/storage'),
'@/parser': path.resolve(__dirname, './src/parser'),
'@/utils': path.resolve(__dirname, './src/utils'),
'@/errors': path.resolve(__dirname, './src/errors')
}
}
});

Some files were not shown because too many files have changed in this diff.