Compare commits

...

24 Commits

Author SHA1 Message Date
Eyal Toledano
16e6326010 fix(config): adds missing import + task management for api key design 2025-06-05 14:20:25 -04:00
Eyal Toledano
a9c1b6bbcf fix(config-manager): Add silent mode check and improve test mocking for ensureConfigFileExists 2025-06-05 13:33:20 -04:00
Eyal Toledano
f12fc476d3 fix(init): Ensure hosted mode option available by creating .taskmasterconfig early
- Added ensureConfigFileExists() to create default config if missing
- Call early in init flows before gateway check
- Preserve email from initializeUser()
- Add comprehensive tests
2025-06-05 13:30:14 -04:00
Eyal Toledano
31178e2f43 chore: adjust .taskmasterconfig defaults 2025-06-04 19:04:17 -04:00
Eyal Toledano
3fa3be4e1b chore: fix user email, telemetryEnabled by default 2025-06-04 19:03:47 -04:00
Eyal Toledano
685365270d feat: integrate Supabase authenticated users
- Updated init.js, ai-services-unified.js, user-management.js, telemetry-submission.js, and .taskmasterconfig to support Supabase authentication flow and authenticated gateway calls
2025-06-04 18:53:28 -04:00
Eyal Toledano
58aa0992f6 feat(error-handling): Implement comprehensive gateway error handling with user-friendly messages
- Add comprehensive gateway error handler with friendly user messages
- Handle subscription status errors (inactive BYOK, subscription required)
- Handle authentication errors (invalid API keys, missing tokens)
- Handle rate limiting with retry suggestions
- Handle model availability and validation errors
- Handle network connectivity issues
- Provide actionable solutions for each error type
- Prevent duplicate error messages by returning early after showing friendly error
- Fix telemetry tests to use correct environment variable names (TASKMASTER_API_KEY)
- Fix config manager getUserId function to properly save default userId to file
- All tests now passing (34 test suites, 360 tests)
2025-06-02 12:34:47 -04:00
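As a rough, non-authoritative sketch of the pattern this commit describes (the real logic lives in scripts/modules/utils/gatewayErrorHandler.js; the function name, status mapping, and message wording below are assumptions):

```javascript
// Illustrative sketch only: maps gateway failures to friendly, actionable messages.
// Not the actual gatewayErrorHandler.js contents.
function describeGatewayError(error) {
  const status = error.status ?? null;
  const message = error.message ?? '';

  if (status === 401 || /invalid api key|missing token/i.test(message)) {
    return 'Authentication failed. Check your API key or re-run `task-master init`.';
  }
  if (status === 402 || /subscription required|inactive/i.test(message)) {
    return 'Subscription inactive or required. Renew your plan or switch to BYOK mode.';
  }
  if (status === 429) {
    return 'Rate limit reached. Wait a moment and retry the command.';
  }
  if ((status && status >= 500) || /network|fetch failed/i.test(message)) {
    return 'The gateway is unreachable. Check your connection and try again.';
  }
  return `Gateway error: ${message}`;
}
```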
Eyal Toledano
2819be51d3 feat: Implement TaskMaster AI Gateway integration with enhanced UX
- Fix Zod schema conversion, update headers, add premium telemetry display, improve user auth flow, and standardize email fields

Functionally complete on this end; what remains is mostly user-experience polish plus adding profile and upgrade/downgrade flows.

The AI commands are now working through the gateway.
2025-06-01 19:37:12 -04:00
Eyal Toledano
9b87dd23de fix(gateway/auth): Implement proper auth/init flow with automatic background userId generation
- Fix getUserId() to use a placeholder that triggers auth/init later if the auth/init endpoint is unavailable for any reason
- Add silent auth/init attempt in AI services
- Improve hosted mode error handling
- Remove fake userId/email generation from init.js
2025-05-31 19:47:18 -04:00
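The concrete version of this flow appears in the ai-services-unified.js diff near the bottom of this page; as a minimal, dependency-injected sketch (the helper signatures here are assumptions, not the real config-manager/user-management APIs):

```javascript
// Sketch of the placeholder-driven silent auth/init attempt described above.
// getUserId/initializeUser are passed in to keep the example self-contained.
async function ensureUserInitialized(projectRoot, { getUserId, initializeUser }) {
  const userId = getUserId(projectRoot); // always a value, '1234567890' when unregistered

  if (userId === '1234567890') {
    try {
      await initializeUser(projectRoot); // silent background attempt
    } catch {
      // auth/init endpoint unreachable: keep the placeholder and continue.
    }
  }
  return getUserId(projectRoot);
}
```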
Eyal Toledano
769275b3bc fix(config): Fix config structure and tests after refactoring
- Fixed getUserId() to always return value, never null (sets default '1234567890')
- Updated all test files to match new config.account structure
- Fixed config-manager.test.js default config expectations
- Updated telemetry-submission.test.js and ai-services-unified.test.js mocks
- Added getTelemetryEnabled export to all config-manager mocks
- All 44 tests now passing
2025-05-30 19:40:38 -04:00
Eyal Toledano
4e9d58a1b0 feat(config): Restructure .taskmasterconfig and enhance gateway integration
Config Structure Changes and Gateway Integration

## Configuration Structure Changes
- Restructured .taskmasterconfig to use 'account' section for user settings
- Moved userId, userEmail, mode, telemetryEnabled from global to account section
- API keys remain isolated in .env file (not accessible to AI)
- Enhanced getUserId() to always return value, never null (sets default '1234567890')

## Gateway Integration Enhancements
- Updated registerUserWithGateway() to accept both email and userId parameters
- Enhanced /auth/init endpoint integration for existing user validation
- API key updates automatically written to .env during registration process
- Improved user identification and validation flow

## Code Updates for New Structure
- Fixed config-manager.js getter functions for account section access
- Updated user-management.js to use config.account.userId/mode
- Modified telemetry-submission.js to read from account section
- Added getTelemetryEnabled() function with proper account section access
- Enhanced telemetry configuration reading with new structure

## Comprehensive Test Updates
- Updated integration tests (init-config.test.js) for new config structure
- Fixed unit tests (config-manager.test.js) with updated default config
- Updated telemetry tests (telemetry-submission.test.js) for account structure
- Added missing getTelemetryEnabled mock to ai-services-unified.test.js
- Fixed all test expectations to use config.account.* instead of config.global.*
- Removed references to deprecated config.subscription object

## Configuration Access Consistency
- Standardized configuration access patterns across entire codebase
- Clean separation: user settings in account, API keys in .env, models/global in respective sections
- All tests passing with new configuration structure
- Maintained backward compatibility during transition

Changes support enhanced telemetry system with proper user management and gateway integration while maintaining security through API key isolation.
2025-05-30 18:53:16 -04:00
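A minimal sketch of the account-section accessors this commit describes (the real getters in config-manager.js read the config from disk via projectRoot; here they take the parsed config object directly, and the defaults mirror the notes above):

```javascript
// Sketch only: illustrates the config.account access pattern with safe defaults.
function getUserId(config) {
  // Never return null; fall back to the documented placeholder.
  return config?.account?.userId || '1234567890';
}

function getTelemetryEnabled(config) {
  // Telemetry defaults to enabled unless the user has opted out.
  return config?.account?.telemetryEnabled !== false;
}

// Example using the new structure from the .taskmasterconfig diff further down:
const config = { account: { userId: '1234567890', email: '', mode: 'byok', telemetryEnabled: true } };
console.log(getUserId(config), getTelemetryEnabled(config)); // '1234567890' true
```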
Eyal Toledano
e573db3b3b feat(task-90): Complete telemetry integration with init flow improvements
- Task 90.3: AI Services Integration COMPLETED with automatic submission after AI usage logging and graceful error handling
- Init Flow Enhancements: restructured to prioritize gateway selection with beautiful UI for BYOK vs Hosted modes
- Telemetry Improvements: modified submission to send FULL data to gateway while maintaining security filtering for users
- All 344 tests passing, telemetry integration ready for production
2025-05-30 16:35:40 -04:00
Eyal Toledano
75b7b93fa4 feat(task-90): Complete telemetry integration with /auth/init + fix Roo test brittleness
- Updated telemetry submission to use /auth/init endpoint instead of /api/v1/users
- Hardcoded gateway endpoint to http://localhost:4444/api/v1/telemetry for all users
- Removed unnecessary service API key complexity - simplified authentication
- Enhanced init.js with hosted gateway setup option and user registration
- Added configureTelemetrySettings() to update .taskmasterconfig with credentials
- Fixed brittle Roo integration tests that required exact string matching
- Updated tests to use flexible regex patterns supporting any quote style
- All test suites now green: 332 tests passed, 11 skipped, 0 failed
- All 11 telemetry tests passing with live gateway integration verified
- Ready for ai-services-unified.js integration in subtask 90.3
2025-05-28 22:38:18 -04:00
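As a rough sketch of the registration/lookup call these commits describe (the registerUserWithGateway() helper is named in the surrounding commits; the exact endpoint path and request/response fields below are assumptions):

```javascript
// Sketch of user registration against the gateway's /auth/init endpoint.
// Assumes the localhost gateway base URL hardcoded in these commits.
async function registerUserWithGateway(email, userId) {
  const response = await fetch('http://localhost:4444/api/v1/auth/init', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, userId })
  });
  if (!response.ok) {
    throw new Error(`Gateway registration failed: ${response.status}`);
  }
  // Expected to return credentials (e.g. an API key) that init.js then writes
  // to .taskmasterconfig / .env; the exact shape is an assumption.
  return response.json();
}
```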
Eyal Toledano
6ec3a10083 feat(task-90): Complete subtask 90.2 with gateway integration and init.js enhancements
- Hardcoded gateway endpoint to http://localhost:4444/api/v1/telemetry
- Updated credential handling to use config-based approach (not env vars)
- Added registerUserWithGateway() function for user registration/lookup
- Enhanced init.js with hosted gateway setup option and configureTelemetrySettings()
- Updated all 10 tests to reflect new architecture - all passing
- Security features maintained: sensitive data filtering, Bearer token auth
- Ready for ai-services-unified.js integration in subtask 90.3
2025-05-28 21:05:25 -04:00
Eyal Toledano
8ad31ac5eb feat(task-90): Complete subtask 90.2 with secure telemetry submission service
- Implemented telemetry submission with Zod validation, retry logic, graceful error handling, and user opt-out support
- Used correct Bearer token authentication with X-User-Email header
- Successfully tested with live gateway endpoint, all 6 tests passing
- Verified security: sensitive data filtered before submission
2025-05-28 15:12:31 -04:00
Eyal Toledano
2773e347f9 feat(task-90): Complete subtask 90.2
- Implement secure telemetry submission service
- Created scripts/modules/telemetry-submission.js with submitTelemetryData function
- Implemented secure filtering: removes commandArgs and fullOutput before remote submission
- Added comprehensive validation using Zod schema for telemetry data integrity
- Implemented exponential backoff retry logic (3 attempts max) with smart retry decisions
- Added graceful error handling that never blocks execution
- Respects user opt-out preferences via config.telemetryEnabled
- Configured for localhost testing endpoint (http://localhost:4444/api/v1/telemetry) for now
- Added comprehensive test coverage with 6/6 passing tests covering all scenarios
- Includes submitTelemetryDataAsync for fire-and-forget submissions
2025-05-28 14:51:42 -04:00
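A condensed, non-authoritative sketch of the submission flow described above, assuming the hardcoded localhost endpoint and the Bearer/X-User-Email headers mentioned in these commits (the opt-out check and Zod validation are omitted for brevity; the real module is scripts/modules/telemetry-submission.js):

```javascript
// Sketch: filter sensitive fields, then POST with bounded exponential-backoff retries.
async function submitTelemetryData(telemetryData, { apiKey, userEmail } = {}) {
  // Strip fields that may contain sensitive data before anything leaves the machine.
  const { commandArgs, fullOutput, ...safeData } = telemetryData;

  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      const response = await fetch('http://localhost:4444/api/v1/telemetry', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${apiKey}`,
          'X-User-Email': userEmail
        },
        body: JSON.stringify(safeData)
      });
      if (response.ok) return { success: true };
    } catch {
      // Network error; fall through to the retry logic below.
    }
    if (attempt < 3) {
      // Exponential backoff: 1s, then 2s.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
  return { success: false }; // Never throw: telemetry must not block execution.
}
```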
Eyal Toledano
bfc39dd377 feat(task-90): Complete subtask 90.1
- Implement secure telemetry capture with filtering
- Enhanced ai-services-unified.js to capture commandArgs and fullOutput in telemetry
- Added filterSensitiveTelemetryData() function to prevent sensitive data exposure
- Updated processMCPResponseData() to filter telemetry before sending to MCP clients
- Verified CLI displayAiUsageSummary() only shows safe fields
- Added comprehensive test coverage with 4 passing tests
- Resolved critical security issue: API keys and sensitive data now filtered from responses
2025-05-28 14:26:24 -04:00
Eyal Toledano
9e6c190af3 fix(move-task): Fix duplicate task creation when moving subtask to standalone task 2025-05-28 14:05:30 -04:00
Eyal Toledano
ab64437ad2 chore: task management 2025-05-28 11:16:54 -04:00
Eyal Toledano
cb95a07771 chore: task management 2025-05-28 11:16:09 -04:00
Eyal Toledano
c096f3fe9d Merge branch 'v017-adds' into gateway 2025-05-28 11:13:26 -04:00
Eyal Toledano
b6a3b8d385 chore: task management - moves 87,88,89 to 90,91,92 2025-05-28 10:38:33 -04:00
Eyal Toledano
ce09d9cdc3 chore: task mgmt 2025-05-28 09:56:08 -04:00
Eyal Toledano
b5c2cf47b0 chore: task management 2025-05-28 00:29:43 -04:00
39 changed files with 16213 additions and 10563 deletions

View File

@@ -1,8 +1,29 @@
{
"mcpServers": {
"task-master-ai": {
"task-master-ai-tm": {
"command": "node",
"args": ["./mcp-server/server.js"],
"args": [
"./mcp-server/server.js"
],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
},
"task-master-ai": {
"command": "npx",
"args": [
"-y",
"--package=task-master-ai",
"task-master-ai"
],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
@@ -15,5 +36,9 @@
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
}
}
},
"env": {
"TASKMASTER_TELEMETRY_API_KEY": "339a81c9-5b9c-4d60-92d8-cba2ee2a8cc3",
"TASKMASTER_TELEMETRY_USER_EMAIL": "user_1748640077834@taskmaster.dev"
}
}

View File

@@ -50,6 +50,7 @@ This rule guides AI assistants on how to view, configure, and interact with the
- **Key Locations** (See [`dev_workflow.mdc`](mdc:.cursor/rules/dev_workflow.mdc) - Configuration Management):
- **MCP/Cursor:** Set keys in the `env` section of `.cursor/mcp.json`.
- **CLI:** Set keys in a `.env` file in the project root.
- As the AI agent, you do not have access to read the .env, so do not attempt to recreate it!
- **Provider List & Keys:**
- **`anthropic`**: Requires `ANTHROPIC_API_KEY`.
- **`google`**: Requires `GOOGLE_API_KEY`.

View File

@@ -1,6 +1,7 @@
---
description: Guidelines for interacting with the unified AI service layer.
globs: scripts/modules/ai-services-unified.js, scripts/modules/task-manager/*.js, scripts/modules/commands.js
alwaysApply: false
---
# AI Services Layer Guidelines
@@ -91,7 +92,7 @@ This document outlines the architecture and usage patterns for interacting with
* ✅ **DO**: Centralize **all** LLM calls through `generateTextService` or `generateObjectService`.
* ✅ **DO**: Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service.
* ✅ **DO**: Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP).
* ✅ **DO**: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP). FYI: As the AI agent, you do not have access to read the .env -- so do not attempt to recreate it!
* ✅ **DO**: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
* ✅ **DO**: Use `generateTextService` and implement robust manual JSON parsing (with Zod validation *after* parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
* ❌ **DON'T**: Import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

View File

@@ -39,12 +39,12 @@ alwaysApply: false
- **Responsibilities** (See also: [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc)):
- Exports `generateTextService`, `generateObjectService`.
- Handles provider/model selection based on `role` and `.taskmasterconfig`.
- Resolves API keys (from `.env` or `session.env`).
- Resolves API keys (from `.env` or `session.env`). As the AI agent, you do not have access to read the .env, so do not attempt to recreate it!
- Implements fallback and retry logic.
- Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
- Telemetry data generated by the AI service layer is propagated upwards through core logic, direct functions, and MCP tools. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the detailed integration pattern.
- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
- **[`src/ai-providers/*.js`](mdc:src/ai-providers): Provider-Specific Implementations**
- **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.
- **Responsibilities**: Interact directly with Vercel AI SDK adapters.
@@ -63,7 +63,7 @@ alwaysApply: false
- API Key Resolution (`resolveEnvVariable`).
- Silent Mode Control (`enableSilentMode`, `disableSilentMode`).
- **[`mcp-server/`](mdc:mcp-server/): MCP Server Integration**
- **[`mcp-server/`](mdc:mcp-server): MCP Server Integration**
- **Purpose**: Provides MCP interface using FastMCP.
- **Responsibilities** (See also: [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc)):
- Registers tools (`mcp-server/src/tools/*.js`). Tool `execute` methods **should be wrapped** with the `withNormalizedProjectRoot` HOF (from `tools/utils.js`) to ensure consistent path handling.

View File

@@ -0,0 +1,408 @@
---
description:
globs:
alwaysApply: true
---
# Test Workflow & Development Process
## **Test-Driven Development (TDD) Integration**
### **Core TDD Cycle with Jest**
```bash
# 1. Start development with watch mode
npm run test:watch
# 2. Write failing test first
# Create test file: src/utils/newFeature.test.ts
# Write test that describes expected behavior
# 3. Implement minimum code to make test pass
# 4. Refactor while keeping tests green
# 5. Add edge cases and error scenarios
```
### **TDD Workflow Per Subtask**
```bash
# When starting a new subtask:
task-master set-status --id=4.1 --status=in-progress
# Begin TDD cycle:
npm run test:watch # Keep running during development
# Document TDD progress in subtask:
task-master update-subtask --id=4.1 --prompt="TDD Progress:
- Written 3 failing tests for core functionality
- Implemented basic feature, tests now passing
- Adding edge case tests for error handling"
# Complete subtask with test summary:
task-master update-subtask --id=4.1 --prompt="Implementation complete:
- Feature implemented with 8 unit tests
- Coverage: 95% statements, 88% branches
- All tests passing, TDD cycle complete"
```
## **Testing Commands & Usage**
### **Development Commands**
```bash
# Primary development command - use during coding
npm run test:watch # Watch mode with Jest
npm run test:watch -- --testNamePattern="auth" # Watch specific tests
# Targeted testing during development
npm run test:unit # Run only unit tests
npm run test:unit -- --coverage # Unit tests with coverage
# Integration testing when APIs are ready
npm run test:integration # Run integration tests
npm run test:integration -- --detectOpenHandles # Debug hanging tests
# End-to-end testing for workflows
npm run test:e2e # Run E2E tests
npm run test:e2e -- --timeout=30000 # Extended timeout for E2E
```
### **Quality Assurance Commands**
```bash
# Full test suite with coverage (before commits)
npm run test:coverage # Complete coverage analysis
# All tests (CI/CD pipeline)
npm test # Run all test projects
# Specific test file execution
npm test -- auth.test.ts # Run specific test file
npm test -- --testNamePattern="should handle errors" # Run specific tests
```
## **Test Implementation Patterns**
### **Unit Test Development**
```typescript
// ✅ DO: Follow established patterns from auth.test.ts
describe('FeatureName', () => {
beforeEach(() => {
jest.clearAllMocks();
// Setup mocks with proper typing
});
describe('functionName', () => {
it('should handle normal case', () => {
// Test implementation with specific assertions
});
it('should throw error for invalid input', async () => {
// Error scenario testing
await expect(functionName(invalidInput))
.rejects.toThrow('Specific error message');
});
});
});
```
### **Integration Test Development**
```typescript
// ✅ DO: Use supertest for API endpoint testing
import request from 'supertest';
import { app } from '../../src/app';
describe('POST /api/auth/register', () => {
beforeEach(async () => {
await integrationTestUtils.cleanupTestData();
});
it('should register user successfully', async () => {
const userData = createTestUser();
const response = await request(app)
.post('/api/auth/register')
.send(userData)
.expect(201);
expect(response.body).toMatchObject({
id: expect.any(String),
email: userData.email
});
// Verify database state
const user = await prisma.user.findUnique({
where: { email: userData.email }
});
expect(user).toBeTruthy();
});
});
```
### **E2E Test Development**
```typescript
// ✅ DO: Test complete user workflows
describe('User Authentication Flow', () => {
it('should complete registration → login → protected access', async () => {
// Step 1: Register
const userData = createTestUser();
await request(app)
.post('/api/auth/register')
.send(userData)
.expect(201);
// Step 2: Login
const loginResponse = await request(app)
.post('/api/auth/login')
.send({ email: userData.email, password: userData.password })
.expect(200);
const { token } = loginResponse.body;
// Step 3: Access protected resource
await request(app)
.get('/api/profile')
.set('Authorization', `Bearer ${token}`)
.expect(200);
}, 30000); // Extended timeout for E2E
});
```
## **Mocking & Test Utilities**
### **Established Mocking Patterns**
```typescript
// ✅ DO: Use established bcrypt mocking pattern
jest.mock('bcrypt');
import bcrypt from 'bcrypt';
const mockHash = bcrypt.hash as jest.MockedFunction<typeof bcrypt.hash>;
const mockCompare = bcrypt.compare as jest.MockedFunction<typeof bcrypt.compare>;
// ✅ DO: Use Prisma mocking for unit tests
jest.mock('@prisma/client', () => ({
PrismaClient: jest.fn().mockImplementation(() => ({
user: {
create: jest.fn(),
findUnique: jest.fn(),
},
$connect: jest.fn(),
$disconnect: jest.fn(),
})),
}));
```
### **Test Fixtures Usage**
```typescript
// ✅ DO: Use centralized test fixtures
import { createTestUser, adminUser, invalidUser } from '../fixtures/users';
describe('User Service', () => {
it('should handle admin user creation', async () => {
const userData = createTestUser(adminUser);
// Test implementation
});
it('should reject invalid user data', async () => {
const userData = createTestUser(invalidUser);
// Error testing
});
});
```
## **Coverage Standards & Monitoring**
### **Coverage Thresholds**
- **Global Standards**: 80% lines/functions, 70% branches
- **Critical Code**: 90% utils, 85% middleware
- **New Features**: Must meet or exceed global thresholds
- **Legacy Code**: Gradual improvement with each change
### **Coverage Reporting & Analysis**
```bash
# Generate coverage reports
npm run test:coverage
# View detailed HTML report
open coverage/lcov-report/index.html
# Coverage files generated:
# - coverage/lcov-report/index.html # Detailed HTML report
# - coverage/lcov.info # LCOV format for IDE integration
# - coverage/coverage-final.json # JSON format for tooling
```
### **Coverage Quality Checks**
```typescript
// ✅ DO: Test all code paths
describe('validateInput', () => {
it('should return true for valid input', () => {
expect(validateInput('valid')).toBe(true);
});
it('should return false for various invalid inputs', () => {
expect(validateInput('')).toBe(false); // Empty string
expect(validateInput(null)).toBe(false); // Null value
expect(validateInput(undefined)).toBe(false); // Undefined
});
it('should throw for unexpected input types', () => {
expect(() => validateInput(123)).toThrow('Invalid input type');
});
});
```
## **Testing During Development Phases**
### **Feature Development Phase**
```bash
# 1. Start feature development
task-master set-status --id=X.Y --status=in-progress
# 2. Begin TDD cycle
npm run test:watch
# 3. Document test progress in subtask
task-master update-subtask --id=X.Y --prompt="Test development:
- Created test file with 5 failing tests
- Implemented core functionality
- Tests passing, adding error scenarios"
# 4. Verify coverage before completion
npm run test:coverage
# 5. Update subtask with final test status
task-master update-subtask --id=X.Y --prompt="Testing complete:
- 12 unit tests with full coverage
- All edge cases and error scenarios covered
- Ready for integration testing"
```
### **Integration Testing Phase**
```bash
# After API endpoints are implemented
npm run test:integration
# Update integration test templates
# Replace placeholder tests with real endpoint calls
# Document integration test results
task-master update-subtask --id=X.Y --prompt="Integration tests:
- Updated auth endpoint tests
- Database integration verified
- All HTTP status codes and responses tested"
```
### **Pre-Commit Testing Phase**
```bash
# Before committing code
npm run test:coverage # Verify all tests pass with coverage
npm run test:unit # Quick unit test verification
npm run test:integration # Integration test verification (if applicable)
# Commit pattern for test updates
git add tests/ src/**/*.test.ts
git commit -m "test(task-X): Add comprehensive tests for Feature Y
- Unit tests with 95% coverage (exceeds 90% threshold)
- Integration tests for API endpoints
- Test fixtures for data generation
- Proper mocking patterns established
Task X: Feature Y - Testing complete"
```
## **Error Handling & Debugging**
### **Test Debugging Techniques**
```typescript
// ✅ DO: Use test utilities for debugging
import { testUtils } from '../setup';
it('should debug complex operation', () => {
testUtils.withConsole(() => {
// Console output visible only for this test
console.log('Debug info:', complexData);
service.complexOperation();
});
});
// ✅ DO: Use proper async debugging
it('should handle async operations', async () => {
const promise = service.asyncOperation();
// Test intermediate state
expect(service.isProcessing()).toBe(true);
const result = await promise;
expect(result).toBe('expected');
expect(service.isProcessing()).toBe(false);
});
```
### **Common Test Issues & Solutions**
```bash
# Hanging tests (common with database connections)
npm run test:integration -- --detectOpenHandles
# Memory leaks in tests
npm run test:unit -- --logHeapUsage
# Slow tests identification
npm run test:coverage -- --verbose
# Mock not working properly
# Check: mock is declared before imports
# Check: jest.clearAllMocks() in beforeEach
# Check: TypeScript typing is correct
```
## **Continuous Integration Integration**
### **CI/CD Pipeline Testing**
```yaml
# Example GitHub Actions integration
- name: Run tests
run: |
npm ci
npm run test:coverage
- name: Upload coverage reports
uses: codecov/codecov-action@v3
with:
file: ./coverage/lcov.info
```
### **Pre-commit Hooks**
```bash
# Setup pre-commit testing (recommended)
# In package.json scripts:
"pre-commit": "npm run test:unit && npm run test:integration"
# Husky integration example:
npx husky add .husky/pre-commit "npm run test:unit"
```
## **Test Maintenance & Evolution**
### **Adding Tests for New Features**
1. **Create test file** alongside source code or in `tests/unit/`
2. **Follow established patterns** from `src/utils/auth.test.ts`
3. **Use existing fixtures** from `tests/fixtures/`
4. **Apply proper mocking** patterns for dependencies
5. **Meet coverage thresholds** for the module
### **Updating Integration/E2E Tests**
1. **Update templates** in `tests/integration/` when APIs change
2. **Modify E2E workflows** in `tests/e2e/` for new user journeys
3. **Update test fixtures** for new data requirements
4. **Maintain database cleanup** utilities
### **Test Performance Optimization**
- **Parallel execution**: Jest runs tests in parallel by default
- **Test isolation**: Use proper setup/teardown for independence
- **Mock optimization**: Mock heavy dependencies appropriately
- **Database efficiency**: Use transaction rollbacks where possible
---
**Key References:**
- [Testing Standards](mdc:.cursor/rules/tests.mdc)
- [Git Workflow](mdc:.cursor/rules/git_workflow.mdc)
- [Development Workflow](mdc:.cursor/rules/dev_workflow.mdc)
- [Jest Configuration](mdc:jest.config.js)
- [Auth Test Example](mdc:src/utils/auth.test.ts)

14
.gitignore vendored
View File

@@ -77,3 +77,17 @@ dev-debug.log
# NPMRC
.npmrc
# Added by Claude Task Master
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# OS specific
# Task files
tasks.json
tasks/

View File

@@ -3,7 +3,7 @@
"main": {
"provider": "anthropic",
"modelId": "claude-sonnet-4-20250514",
"maxTokens": 50000,
"maxTokens": 64000,
"temperature": 0.2
},
"research": {
@@ -14,8 +14,8 @@
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 128000,
"modelId": "claude-3-5-sonnet-20241022",
"maxTokens": 64000,
"temperature": 0.2
}
},
@@ -26,7 +26,12 @@
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseURL": "http://localhost:11434/api",
"userId": "1234567890",
"azureBaseURL": "https://your-endpoint.azure.com/"
},
"account": {
"userId": "1234567890",
"email": "",
"mode": "byok",
"telemetryEnabled": true
}
}

View File

@@ -1,31 +0,0 @@
{
"models": {
"main": {
"provider": "anthropic",
"modelId": "claude-3-7-sonnet-20250219",
"maxTokens": 120000,
"temperature": 0.2
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-5-sonnet-20240620",
"maxTokens": 8192,
"temperature": 0.1
}
},
"global": {
"logLevel": "info",
"debug": false,
"defaultSubtasks": 5,
"defaultPriority": "medium",
"projectName": "Taskmaster",
"ollamaBaseURL": "http://localhost:11434/api",
"azureOpenaiBaseURL": "https://your-endpoint.openai.azure.com/"
}
}

View File

@@ -1,6 +1,6 @@
export default {
// Use Node.js environment for testing
testEnvironment: 'node',
testEnvironment: "node",
// Automatically clear mock calls between every test
clearMocks: true,
@@ -9,27 +9,27 @@ export default {
collectCoverage: false,
// The directory where Jest should output its coverage files
coverageDirectory: 'coverage',
coverageDirectory: "coverage",
// A list of paths to directories that Jest should use to search for files in
roots: ['<rootDir>/tests'],
roots: ["<rootDir>/tests"],
// The glob patterns Jest uses to detect test files
testMatch: ['**/__tests__/**/*.js', '**/?(*.)+(spec|test).js'],
testMatch: ["**/__tests__/**/*.js", "**/?(*.)+(spec|test).js"],
// Transform files
transform: {},
// Disable transformations for node_modules
transformIgnorePatterns: ['/node_modules/'],
transformIgnorePatterns: ["/node_modules/"],
// Set moduleNameMapper for absolute paths
moduleNameMapper: {
'^@/(.*)$': '<rootDir>/$1'
"^@/(.*)$": "<rootDir>/$1",
},
// Setup module aliases
moduleDirectories: ['node_modules', '<rootDir>'],
moduleDirectories: ["node_modules", "<rootDir>"],
// Configure test coverage thresholds
coverageThreshold: {
@@ -37,16 +37,16 @@ export default {
branches: 80,
functions: 80,
lines: 80,
statements: 80
}
statements: 80,
},
},
// Generate coverage report in these formats
coverageReporters: ['text', 'lcov'],
coverageReporters: ["text", "lcov"],
// Verbose output
verbose: true,
// Setup file
setupFilesAfterEnv: ['<rootDir>/tests/setup.js']
setupFilesAfterEnv: ["<rootDir>/tests/setup.js"],
};

View File

@@ -3,16 +3,16 @@
* Utility functions for Task Master CLI integration
*/
import { spawnSync } from 'child_process';
import path from 'path';
import fs from 'fs';
import { contextManager } from '../core/context-manager.js'; // Import the singleton
import { spawnSync } from "child_process";
import path from "path";
import fs from "fs";
import { contextManager } from "../core/context-manager.js"; // Import the singleton
// Import path utilities to ensure consistent path resolution
import {
lastFoundProjectRoot,
PROJECT_MARKERS
} from '../core/utils/path-utils.js';
PROJECT_MARKERS,
} from "../core/utils/path-utils.js";
/**
* Get normalized project root path
@@ -77,7 +77,7 @@ function getProjectRoot(projectRootRaw, log) {
`No task-master project detected in current directory. Using ${currentDir} as project root.`
);
log.warn(
'Consider using --project-root to specify the correct project location or set TASK_MASTER_PROJECT_ROOT environment variable.'
"Consider using --project-root to specify the correct project location or set TASK_MASTER_PROJECT_ROOT environment variable."
);
return currentDir;
}
@@ -103,7 +103,7 @@ function getProjectRootFromSession(session, log) {
rootsRootsType: typeof session?.roots?.roots,
isRootsRootsArray: Array.isArray(session?.roots?.roots),
rootsRootsLength: session?.roots?.roots?.length,
firstRootsRoot: session?.roots?.roots?.[0]
firstRootsRoot: session?.roots?.roots?.[0],
})}`
);
@@ -126,16 +126,16 @@ function getProjectRootFromSession(session, log) {
if (rawRootPath) {
// Decode URI and strip file:// protocol
decodedPath = rawRootPath.startsWith('file://')
decodedPath = rawRootPath.startsWith("file://")
? decodeURIComponent(rawRootPath.slice(7))
: rawRootPath; // Assume non-file URI is already decoded? Or decode anyway? Let's decode.
if (!rawRootPath.startsWith('file://')) {
if (!rawRootPath.startsWith("file://")) {
decodedPath = decodeURIComponent(rawRootPath); // Decode even if no file://
}
// Handle potential Windows drive prefix after stripping protocol (e.g., /C:/...)
if (
decodedPath.startsWith('/') &&
decodedPath.startsWith("/") &&
/[A-Za-z]:/.test(decodedPath.substring(1, 3))
) {
decodedPath = decodedPath.substring(1); // Remove leading slash if it's like /C:/...
@@ -144,7 +144,7 @@ function getProjectRootFromSession(session, log) {
log.info(`Decoded path: ${decodedPath}`);
// Normalize slashes and resolve
const normalizedSlashes = decodedPath.replace(/\\/g, '/');
const normalizedSlashes = decodedPath.replace(/\\/g, "/");
finalPath = path.resolve(normalizedSlashes); // Resolve to absolute path for current OS
log.info(`Normalized and resolved session path: ${finalPath}`);
@@ -152,22 +152,22 @@ function getProjectRootFromSession(session, log) {
}
// Fallback Logic (remains the same)
log.warn('No project root URI found in session. Attempting fallbacks...');
log.warn("No project root URI found in session. Attempting fallbacks...");
const cwd = process.cwd();
// Fallback 1: Use server path deduction (Cursor IDE)
const serverPath = process.argv[1];
if (serverPath && serverPath.includes('mcp-server')) {
const mcpServerIndex = serverPath.indexOf('mcp-server');
if (serverPath && serverPath.includes("mcp-server")) {
const mcpServerIndex = serverPath.indexOf("mcp-server");
if (mcpServerIndex !== -1) {
const projectRoot = path.dirname(
serverPath.substring(0, mcpServerIndex)
); // Go up one level
if (
fs.existsSync(path.join(projectRoot, '.cursor')) ||
fs.existsSync(path.join(projectRoot, 'mcp-server')) ||
fs.existsSync(path.join(projectRoot, 'package.json'))
fs.existsSync(path.join(projectRoot, ".cursor")) ||
fs.existsSync(path.join(projectRoot, "mcp-server")) ||
fs.existsSync(path.join(projectRoot, "package.json"))
) {
log.info(
`Using project root derived from server path: ${projectRoot}`
@@ -202,7 +202,7 @@ function getProjectRootFromSession(session, log) {
function handleApiResult(
result,
log,
errorPrefix = 'API error',
errorPrefix = "API error",
processFunction = processMCPResponseData
) {
if (!result.success) {
@@ -223,7 +223,7 @@ function handleApiResult(
// Create the response payload including the fromCache flag
const responsePayload = {
fromCache: result.fromCache, // Get the flag from the original 'result'
data: processedData // Nest the processed data under a 'data' key
data: processedData, // Nest the processed data under a 'data' key
};
// Pass this combined payload to createContentResponse
@@ -261,10 +261,10 @@ function executeTaskMasterCommand(
// Common options for spawn
const spawnOptions = {
encoding: 'utf8',
encoding: "utf8",
cwd: cwd,
// Merge process.env with customEnv, giving precedence to customEnv
env: { ...process.env, ...(customEnv || {}) }
env: { ...process.env, ...(customEnv || {}) },
};
// Log the environment being passed (optional, for debugging)
@@ -272,13 +272,13 @@ function executeTaskMasterCommand(
// Execute the command using the global task-master CLI or local script
// Try the global CLI first
let result = spawnSync('task-master', fullArgs, spawnOptions);
let result = spawnSync("task-master", fullArgs, spawnOptions);
// If global CLI is not available, try fallback to the local script
if (result.error && result.error.code === 'ENOENT') {
log.info('Global task-master not found, falling back to local script');
if (result.error && result.error.code === "ENOENT") {
log.info("Global task-master not found, falling back to local script");
// Pass the same spawnOptions (including env) to the fallback
result = spawnSync('node', ['scripts/dev.js', ...fullArgs], spawnOptions);
result = spawnSync("node", ["scripts/dev.js", ...fullArgs], spawnOptions);
}
if (result.error) {
@@ -291,7 +291,7 @@ function executeTaskMasterCommand(
? result.stderr.trim()
: result.stdout
? result.stdout.trim()
: 'Unknown error';
: "Unknown error";
throw new Error(
`Command failed with exit code ${result.status}: ${errorOutput}`
);
@@ -300,13 +300,13 @@ function executeTaskMasterCommand(
return {
success: true,
stdout: result.stdout,
stderr: result.stderr
stderr: result.stderr,
};
} catch (error) {
log.error(`Error executing task-master command: ${error.message}`);
return {
success: false,
error: error.message
error: error.message,
};
}
}
@@ -332,7 +332,7 @@ async function getCachedOrExecute({ cacheKey, actionFn, log }) {
// Return the cached data in the same structure as a fresh result
return {
...cachedResult, // Spread the cached result to maintain its structure
fromCache: true // Just add the fromCache flag
fromCache: true, // Just add the fromCache flag
};
}
@@ -360,20 +360,38 @@ async function getCachedOrExecute({ cacheKey, actionFn, log }) {
// Return the fresh result, indicating it wasn't from cache
return {
...result,
fromCache: false
fromCache: false,
};
}
/**
* Filters sensitive fields from telemetry data before sending to users.
* Removes commandArgs and fullOutput which may contain API keys and sensitive data.
* @param {Object} telemetryData - The telemetry data object to filter.
* @returns {Object} - Filtered telemetry data safe for user exposure.
*/
function filterSensitiveTelemetryData(telemetryData) {
if (!telemetryData || typeof telemetryData !== "object") {
return telemetryData;
}
// Create a copy and remove sensitive fields
const { commandArgs, fullOutput, ...safeTelemetryData } = telemetryData;
return safeTelemetryData;
}
/**
* Recursively removes specified fields from task objects, whether single or in an array.
* Handles common data structures returned by task commands.
* Also filters sensitive telemetry data if present.
* @param {Object|Array} taskOrData - A single task object or a data object containing a 'tasks' array.
* @param {string[]} fieldsToRemove - An array of field names to remove.
* @returns {Object|Array} - The processed data with specified fields removed.
*/
function processMCPResponseData(
taskOrData,
fieldsToRemove = ['details', 'testStrategy']
fieldsToRemove = ["details", "testStrategy"]
) {
if (!taskOrData) {
return taskOrData;
@@ -381,7 +399,7 @@ function processMCPResponseData(
// Helper function to process a single task object
const processSingleTask = (task) => {
if (typeof task !== 'object' || task === null) {
if (typeof task !== "object" || task === null) {
return task;
}
@@ -392,6 +410,13 @@ function processMCPResponseData(
delete processedTask[field];
});
// Filter telemetry data if present
if (processedTask.telemetryData) {
processedTask.telemetryData = filterSensitiveTelemetryData(
processedTask.telemetryData
);
}
// Recursively process subtasks if they exist and are an array
if (processedTask.subtasks && Array.isArray(processedTask.subtasks)) {
// Use processArrayOfTasks to handle the subtasks array
@@ -406,33 +431,41 @@ function processMCPResponseData(
return tasks.map(processSingleTask);
};
// Handle top-level telemetry data filtering for any response structure
let processedData = { ...taskOrData };
if (processedData.telemetryData) {
processedData.telemetryData = filterSensitiveTelemetryData(
processedData.telemetryData
);
}
// Check if the input is a data structure containing a 'tasks' array (like from listTasks)
if (
typeof taskOrData === 'object' &&
taskOrData !== null &&
Array.isArray(taskOrData.tasks)
typeof processedData === "object" &&
processedData !== null &&
Array.isArray(processedData.tasks)
) {
return {
...taskOrData, // Keep other potential fields like 'stats', 'filter'
tasks: processArrayOfTasks(taskOrData.tasks)
...processedData, // Keep other potential fields like 'stats', 'filter'
tasks: processArrayOfTasks(processedData.tasks),
};
}
// Check if the input is likely a single task object (add more checks if needed)
else if (
typeof taskOrData === 'object' &&
taskOrData !== null &&
'id' in taskOrData &&
'title' in taskOrData
typeof processedData === "object" &&
processedData !== null &&
"id" in processedData &&
"title" in processedData
) {
return processSingleTask(taskOrData);
return processSingleTask(processedData);
}
// Check if the input is an array of tasks directly (less common but possible)
else if (Array.isArray(taskOrData)) {
return processArrayOfTasks(taskOrData);
else if (Array.isArray(processedData)) {
return processArrayOfTasks(processedData);
}
// If it doesn't match known task structures, return it as is
return taskOrData;
// If it doesn't match known task structures, return the processed data (with filtered telemetry)
return processedData;
}
/**
@@ -445,15 +478,15 @@ function createContentResponse(content) {
return {
content: [
{
type: 'text',
type: "text",
text:
typeof content === 'object'
typeof content === "object"
? // Format JSON nicely with indentation
JSON.stringify(content, null, 2)
: // Keep other content types as-is
String(content)
}
]
String(content),
},
],
};
}
@@ -466,11 +499,11 @@ function createErrorResponse(errorMessage) {
return {
content: [
{
type: 'text',
text: `Error: ${errorMessage}`
}
type: "text",
text: `Error: ${errorMessage}`,
},
],
isError: true
isError: true,
};
}
@@ -489,7 +522,7 @@ function createLogWrapper(log) {
debug: (message, ...args) =>
log.debug ? log.debug(message, ...args) : null,
// Map success to info as a common fallback
success: (message, ...args) => log.info(message, ...args)
success: (message, ...args) => log.info(message, ...args),
};
}
@@ -520,23 +553,23 @@ function normalizeProjectRoot(rawPath, log) {
}
// 2. Strip file:// prefix (handle 2 or 3 slashes)
if (pathString.startsWith('file:///')) {
if (pathString.startsWith("file:///")) {
pathString = pathString.slice(7); // Slice 7 for file:///, may leave leading / on Windows
} else if (pathString.startsWith('file://')) {
} else if (pathString.startsWith("file://")) {
pathString = pathString.slice(7); // Slice 7 for file://
}
// 3. Handle potential Windows leading slash after stripping prefix (e.g., /C:/...)
// This checks if it starts with / followed by a drive letter C: D: etc.
if (
pathString.startsWith('/') &&
pathString.startsWith("/") &&
/[A-Za-z]:/.test(pathString.substring(1, 3))
) {
pathString = pathString.substring(1); // Remove the leading slash
}
// 4. Normalize backslashes to forward slashes
pathString = pathString.replace(/\\/g, '/');
pathString = pathString.replace(/\\/g, "/");
// 5. Resolve to absolute path using server's OS convention
const resolvedPath = path.resolve(pathString);
@@ -586,7 +619,7 @@ function withNormalizedProjectRoot(executeFn) {
return async (args, context) => {
const { log, session } = context;
let normalizedRoot = null;
let rootSource = 'unknown';
let rootSource = "unknown";
try {
// PRECEDENCE ORDER:
@@ -601,7 +634,7 @@ function withNormalizedProjectRoot(executeFn) {
normalizedRoot = path.isAbsolute(envRoot)
? envRoot
: path.resolve(process.cwd(), envRoot);
rootSource = 'TASK_MASTER_PROJECT_ROOT environment variable';
rootSource = "TASK_MASTER_PROJECT_ROOT environment variable";
log.info(`Using project root from ${rootSource}: ${normalizedRoot}`);
}
// Also check session environment variables for TASK_MASTER_PROJECT_ROOT
@@ -610,13 +643,13 @@ function withNormalizedProjectRoot(executeFn) {
normalizedRoot = path.isAbsolute(envRoot)
? envRoot
: path.resolve(process.cwd(), envRoot);
rootSource = 'TASK_MASTER_PROJECT_ROOT session environment variable';
rootSource = "TASK_MASTER_PROJECT_ROOT session environment variable";
log.info(`Using project root from ${rootSource}: ${normalizedRoot}`);
}
// 2. If no environment variable, try args.projectRoot
else if (args.projectRoot) {
normalizedRoot = normalizeProjectRoot(args.projectRoot, log);
rootSource = 'args.projectRoot';
rootSource = "args.projectRoot";
log.info(`Using project root from ${rootSource}: ${normalizedRoot}`);
}
// 3. If no args.projectRoot, try session-based resolution
@@ -624,17 +657,17 @@ function withNormalizedProjectRoot(executeFn) {
const sessionRoot = getProjectRootFromSession(session, log);
if (sessionRoot) {
normalizedRoot = sessionRoot; // getProjectRootFromSession already normalizes
rootSource = 'session';
rootSource = "session";
log.info(`Using project root from ${rootSource}: ${normalizedRoot}`);
}
}
if (!normalizedRoot) {
log.error(
'Could not determine project root from environment, args, or session.'
"Could not determine project root from environment, args, or session."
);
return createErrorResponse(
'Could not determine project root. Please provide projectRoot argument or ensure TASK_MASTER_PROJECT_ROOT environment variable is set.'
"Could not determine project root. Please provide projectRoot argument or ensure TASK_MASTER_PROJECT_ROOT environment variable is set."
);
}
@@ -670,5 +703,6 @@ export {
createLogWrapper,
normalizeProjectRoot,
getRawProjectRootFromSession,
withNormalizedProjectRoot
withNormalizedProjectRoot,
filterSensitiveTelemetryData,
};

137
package-lock.json generated
View File

@@ -40,11 +40,13 @@
"jsonwebtoken": "^9.0.2",
"lru-cache": "^10.2.0",
"ollama-ai-provider": "^1.2.0",
"open": "^10.1.2",
"openai": "^4.89.0",
"ora": "^8.2.0",
"task-master-ai": "^0.15.0",
"uuid": "^11.1.0",
"zod": "^3.23.8"
"zod": "^3.23.8",
"zod-to-json-schema": "^3.24.5"
},
"bin": {
"task-master": "bin/task-master.js",
@@ -5423,6 +5425,21 @@
"dev": true,
"license": "MIT"
},
"node_modules/bundle-name": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/bundle-name/-/bundle-name-4.1.0.tgz",
"integrity": "sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==",
"license": "MIT",
"dependencies": {
"run-applescript": "^7.0.0"
},
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
@@ -6192,6 +6209,46 @@
"node": ">=0.10.0"
}
},
"node_modules/default-browser": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.2.1.tgz",
"integrity": "sha512-WY/3TUME0x3KPYdRRxEJJvXRHV4PyPoUsxtZa78lwItwRQRHhd2U9xOscaT/YTf8uCXIAjeJOFBVEh/7FtD8Xg==",
"license": "MIT",
"dependencies": {
"bundle-name": "^4.1.0",
"default-browser-id": "^5.0.0"
},
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/default-browser-id": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/default-browser-id/-/default-browser-id-5.0.0.tgz",
"integrity": "sha512-A6p/pu/6fyBcA1TRz/GqWYPViplrftcW2gZC9q79ngNCKAeR/X3gcEdXQHl4KNXV+3wgIJ1CPkJQ3IHM6lcsyA==",
"license": "MIT",
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/define-lazy-prop": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-3.0.0.tgz",
"integrity": "sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==",
"license": "MIT",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/delayed-stream": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
@@ -8030,6 +8087,21 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/is-docker": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/is-docker/-/is-docker-3.0.0.tgz",
"integrity": "sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==",
"license": "MIT",
"bin": {
"is-docker": "cli.js"
},
"engines": {
"node": "^12.20.0 || ^14.13.1 || >=16.0.0"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/is-extglob": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
@@ -8088,6 +8160,24 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/is-inside-container": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/is-inside-container/-/is-inside-container-1.0.0.tgz",
"integrity": "sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==",
"license": "MIT",
"dependencies": {
"is-docker": "^3.0.0"
},
"bin": {
"is-inside-container": "cli.js"
},
"engines": {
"node": ">=14.16"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/is-interactive": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/is-interactive/-/is-interactive-2.0.0.tgz",
@@ -8176,6 +8266,21 @@
"node": ">=0.10.0"
}
},
"node_modules/is-wsl": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-3.1.0.tgz",
"integrity": "sha512-UcVfVfaK4Sc4m7X3dUSoHoozQGBEFeDC+zVo06t98xe8CzHSZZBekNXH+tu0NalHolcJ/QAGqS46Hef7QXBIMw==",
"license": "MIT",
"dependencies": {
"is-inside-container": "^1.0.0"
},
"engines": {
"node": ">=16"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/isexe": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz",
@@ -9933,6 +10038,24 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/open": {
"version": "10.1.2",
"resolved": "https://registry.npmjs.org/open/-/open-10.1.2.tgz",
"integrity": "sha512-cxN6aIDPz6rm8hbebcP7vrQNhvRcveZoJU72Y7vskh4oIm+BZwBECnx5nTmrlres1Qapvx27Qo1Auukpf8PKXw==",
"license": "MIT",
"dependencies": {
"default-browser": "^5.2.1",
"define-lazy-prop": "^3.0.0",
"is-inside-container": "^1.0.0",
"is-wsl": "^3.1.0"
},
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/openai": {
"version": "4.89.0",
"resolved": "https://registry.npmjs.org/openai/-/openai-4.89.0.tgz",
@@ -10706,6 +10829,18 @@
"node": ">=16"
}
},
"node_modules/run-applescript": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.0.0.tgz",
"integrity": "sha512-9by4Ij99JUr/MCFBUkDKLWK3G9HVXmabKz9U5MlIAIuvuzkiOicRYs8XJLxX+xahD+mLiiCYDqF9dKAgtzKP1A==",
"license": "MIT",
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/run-async": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/run-async/-/run-async-3.0.0.tgz",

View File

@@ -70,11 +70,13 @@
"jsonwebtoken": "^9.0.2",
"lru-cache": "^10.2.0",
"ollama-ai-provider": "^1.2.0",
"open": "^10.1.2",
"openai": "^4.89.0",
"ora": "^8.2.0",
"task-master-ai": "^0.15.0",
"uuid": "^11.1.0",
"zod": "^3.23.8"
"zod": "^3.23.8",
"zod-to-json-schema": "^3.24.5"
},
"engines": {
"node": ">=18.0.0"

File diff suppressed because it is too large

View File

@@ -23,9 +23,12 @@ import {
getOllamaBaseURL,
getAzureBaseURL,
getVertexProjectId,
getVertexLocation
} from './config-manager.js';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
getVertexLocation,
getConfig,
} from "./config-manager.js";
import { log, findProjectRoot, resolveEnvVariable } from "./utils.js";
import { submitTelemetryData } from "./telemetry-submission.js";
import { isHostedMode } from "./user-management.js";
// Import provider classes
import {
@@ -38,8 +41,11 @@ import {
OllamaAIProvider,
BedrockAIProvider,
AzureProvider,
VertexAIProvider
} from '../../src/ai-providers/index.js';
VertexAIProvider,
} from "../../src/ai-providers/index.js";
import { zodToJsonSchema } from "zod-to-json-schema";
import { handleGatewayError } from "./utils/gatewayErrorHandler.js";
// Create provider instances
const PROVIDERS = {
@@ -52,36 +58,36 @@ const PROVIDERS = {
ollama: new OllamaAIProvider(),
bedrock: new BedrockAIProvider(),
azure: new AzureProvider(),
vertex: new VertexAIProvider()
vertex: new VertexAIProvider(),
};
// Helper function to get cost for a specific model
function _getCostForModel(providerName, modelId) {
if (!MODEL_MAP || !MODEL_MAP[providerName]) {
log(
'warn',
"warn",
`Provider "${providerName}" not found in MODEL_MAP. Cannot determine cost for model ${modelId}.`
);
return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
return { inputCost: 0, outputCost: 0, currency: "USD" }; // Default to zero cost
}
const modelData = MODEL_MAP[providerName].find((m) => m.id === modelId);
if (!modelData || !modelData.cost_per_1m_tokens) {
log(
'debug',
"debug",
`Cost data not found for model "${modelId}" under provider "${providerName}". Assuming zero cost.`
);
return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
return { inputCost: 0, outputCost: 0, currency: "USD" }; // Default to zero cost
}
// Ensure currency is part of the returned object, defaulting if not present
const currency = modelData.cost_per_1m_tokens.currency || 'USD';
const currency = modelData.cost_per_1m_tokens.currency || "USD";
return {
inputCost: modelData.cost_per_1m_tokens.input || 0,
outputCost: modelData.cost_per_1m_tokens.output || 0,
currency: currency
currency: currency,
};
}
@@ -91,13 +97,13 @@ const INITIAL_RETRY_DELAY_MS = 1000;
// Helper function to check if an error is retryable
function isRetryableError(error) {
const errorMessage = error.message?.toLowerCase() || '';
const errorMessage = error.message?.toLowerCase() || "";
return (
errorMessage.includes('rate limit') ||
errorMessage.includes('overloaded') ||
errorMessage.includes('service temporarily unavailable') ||
errorMessage.includes('timeout') ||
errorMessage.includes('network error') ||
errorMessage.includes("rate limit") ||
errorMessage.includes("overloaded") ||
errorMessage.includes("service temporarily unavailable") ||
errorMessage.includes("timeout") ||
errorMessage.includes("network error") ||
error.status === 429 ||
error.status >= 500
);
@@ -122,7 +128,7 @@ function _extractErrorMessage(error) {
}
// Attempt 3: Look for nested error message in response body if it's JSON string
if (typeof error?.responseBody === 'string') {
if (typeof error?.responseBody === "string") {
try {
const body = JSON.parse(error.responseBody);
if (body?.error?.message) {
@@ -134,20 +140,20 @@ function _extractErrorMessage(error) {
}
// Attempt 4: Use the top-level message if it exists
if (typeof error?.message === 'string' && error.message) {
if (typeof error?.message === "string" && error.message) {
return error.message;
}
// Attempt 5: Handle simple string errors
if (typeof error === 'string') {
if (typeof error === "string") {
return error;
}
// Fallback
return 'An unknown AI service error occurred.';
return "An unknown AI service error occurred.";
} catch (e) {
// Safety net
return 'Failed to extract error message.';
return "Failed to extract error message.";
}
}
@@ -161,17 +167,17 @@ function _extractErrorMessage(error) {
*/
function _resolveApiKey(providerName, session, projectRoot = null) {
const keyMap = {
openai: 'OPENAI_API_KEY',
anthropic: 'ANTHROPIC_API_KEY',
google: 'GOOGLE_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
ollama: 'OLLAMA_API_KEY',
bedrock: 'AWS_ACCESS_KEY_ID',
vertex: 'GOOGLE_API_KEY'
openai: "OPENAI_API_KEY",
anthropic: "ANTHROPIC_API_KEY",
google: "GOOGLE_API_KEY",
perplexity: "PERPLEXITY_API_KEY",
mistral: "MISTRAL_API_KEY",
azure: "AZURE_OPENAI_API_KEY",
openrouter: "OPENROUTER_API_KEY",
xai: "XAI_API_KEY",
ollama: "OLLAMA_API_KEY",
bedrock: "AWS_ACCESS_KEY_ID",
vertex: "GOOGLE_API_KEY",
};
const envVarName = keyMap[providerName];
@@ -184,7 +190,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
// Special handling for providers that can use alternative auth
if (providerName === 'ollama' || providerName === 'bedrock') {
if (providerName === "ollama" || providerName === "bedrock") {
return apiKey || null;
}
@@ -222,7 +228,7 @@ async function _attemptProviderCallWithRetries(
try {
if (getDebugFlag()) {
log(
'info',
"info",
`Attempt ${retries + 1}/${MAX_RETRIES + 1} calling ${fnName} (Provider: ${providerName}, Model: ${modelId}, Role: ${attemptRole})`
);
}
@@ -232,14 +238,14 @@ async function _attemptProviderCallWithRetries(
if (getDebugFlag()) {
log(
'info',
"info",
`${fnName} succeeded for role ${attemptRole} (Provider: ${providerName}) on attempt ${retries + 1}`
);
}
return result;
} catch (error) {
log(
'warn',
"warn",
`Attempt ${retries + 1} failed for role ${attemptRole} (${fnName} / ${providerName}): ${error.message}`
);
@@ -247,13 +253,13 @@ async function _attemptProviderCallWithRetries(
retries++;
const delay = INITIAL_RETRY_DELAY_MS * Math.pow(2, retries - 1);
log(
'info',
"info",
`Something went wrong on the provider side. Retrying in ${delay / 1000}s...`
);
await new Promise((resolve) => setTimeout(resolve, delay));
} else {
log(
'error',
"error",
`Something went wrong on the provider side. Max retries reached for role ${attemptRole} (${fnName} / ${providerName}).`
);
throw error;
@@ -266,6 +272,141 @@ async function _attemptProviderCallWithRetries(
);
}
/**
* Makes an AI call through the TaskMaster gateway for hosted users
* @param {string} serviceType - Type of service (generateText, generateObject, streamText)
* @param {object} callParams - Parameters for the AI call
* @param {string} providerName - AI provider name
* @param {string} modelId - Model ID
* @param {string} userId - User ID
* @param {string} commandName - Command name for tracking
* @param {string} outputType - Output type (cli, mcp)
* @param {string} projectRoot - Project root path
* @param {string} initialRole - The initial client role
* @returns {Promise<object>} AI response with usage data
*/
/**
* Calls the TaskMaster gateway for AI processing (hosted mode only).
* BYOK users don't use this function - they make direct API calls.
*/
async function _callGatewayAI(
serviceType,
callParams,
providerName,
modelId,
userId,
commandName,
outputType,
projectRoot,
initialRole
) {
// Hard-code service-level constants
const gatewayUrl = "http://localhost:4444";
const serviceId = "98fb3198-2dfc-42d1-af53-07b99e4f3bde"; // Hardcoded service ID -- if you change this, the Hosted Gateway will not work
// Get user auth info for headers
const userMgmt = await import("./user-management.js");
const config = getConfig(projectRoot);
const mode = config.account?.mode || "byok";
// Both BYOK and hosted users have the same user token
// BYOK users just don't use it for AI calls (they use their own API keys)
const userToken = await userMgmt.getUserToken(projectRoot);
const userEmail = await userMgmt.getUserEmail(projectRoot);
// Note: BYOK users will have both token and email, but won't use this function
// since they make direct API calls with their own keys
if (!userToken) {
throw new Error(
"User token not found. Run 'task-master init' to register with gateway."
);
}
const endpoint = `${gatewayUrl}/api/v1/ai/${serviceType}`;
// Extract messages from callParams and convert to gateway format
const systemPrompt =
callParams.messages?.find((m) => m.role === "system")?.content || "";
const prompt =
callParams.messages?.find((m) => m.role === "user")?.content || "";
const requestBody = {
provider: providerName,
serviceType,
role: initialRole,
messages: callParams.messages,
modelId,
commandName,
outputType,
roleParams: {
maxTokens: callParams.maxTokens,
temperature: callParams.temperature,
},
...(serviceType === "generateObject" && {
schema: zodToJsonSchema(callParams.schema),
objectName: callParams.objectName,
}),
};
const headers = {
"Content-Type": "application/json",
"X-TaskMaster-Service-ID": serviceId, // TaskMaster service ID for instance auth
Authorization: `Bearer ${userToken}`, // User-level auth
};
// Add user email header if available
if (userEmail) {
headers["X-User-Email"] = userEmail;
}
try {
const response = await fetch(endpoint, {
method: "POST",
headers,
body: JSON.stringify(requestBody),
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(
`Gateway AI call failed: ${response.status} ${errorText}`
);
}
const result = await response.json();
if (!result.success) {
throw new Error(result.error || "Gateway AI call failed");
}
// Return the AI response in the expected format
return {
text: result.data.text,
object: result.data.object,
usage: result.data.usage,
// Include any account info returned from gateway
accountInfo: result.accountInfo,
};
} catch (error) {
// Use the enhanced error handler for user-friendly messages
handleGatewayError(error, commandName);
// Throw a much cleaner error message to prevent ugly double logging
const match = error.message.match(/Gateway AI call failed: (\d+)/);
if (match) {
const statusCode = match[1];
throw new Error(
`TaskMaster gateway error (${statusCode}). See details above.`
);
} else {
throw new Error(
"TaskMaster gateway communication failed. See details above."
);
}
}
}
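For orientation, a minimal sketch of the request this helper sends, using the header and body field names from the function above; the endpoint, token, email, prompt text, and model values are illustrative placeholders, not values taken from this change:

// Hypothetical hosted-mode request assembled by _callGatewayAI (sketch only)
const exampleRequest = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-TaskMaster-Service-ID": "<service-id>",
    Authorization: "Bearer <user-token>",
    "X-User-Email": "user@example.com",
  },
  body: JSON.stringify({
    provider: "anthropic",
    serviceType: "generateText",
    role: "main",
    messages: [
      { role: "system", content: "You are a task-planning assistant." },
      { role: "user", content: "Summarize the PRD." },
    ],
    modelId: "claude-3-7-sonnet-20250219",
    commandName: "add-task",
    outputType: "cli",
    roleParams: { maxTokens: 64000, temperature: 0.2 },
  }),
};
// A successful gateway response is expected to look like:
// { success: true, data: { text, object, usage }, accountInfo: { ... } }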
/**
* Base logic for unified service functions.
* @param {string} serviceType - Type of service ('generateText', 'streamText', 'generateObject').
@@ -294,36 +435,184 @@ async function _unifiedServiceRunner(serviceType, params) {
outputType,
...restApiParams
} = params;
if (getDebugFlag()) {
log('info', `${serviceType}Service called`, {
log("info", `${serviceType}Service called`, {
role: initialRole,
commandName,
outputType,
projectRoot
projectRoot,
});
if (isHostedMode(projectRoot)) {
log("info", "Communicating with Taskmaster Gateway");
}
}
const effectiveProjectRoot = projectRoot || findProjectRoot();
const userId = getUserId(effectiveProjectRoot);
// If userId is the placeholder, try to initialize user silently
if (userId === "1234567890") {
try {
// Dynamic import to avoid circular dependency
const userMgmt = await import("./user-management.js");
const initResult = await userMgmt.initializeUser(effectiveProjectRoot);
if (initResult.success) {
// Update the config with the new userId
const { writeConfig, getConfig } = await import("./config-manager.js");
const config = getConfig(effectiveProjectRoot);
config.account.userId = initResult.userId;
writeConfig(config, effectiveProjectRoot);
log("info", "User successfully authenticated with gateway");
} else {
// Silent failure - only log at debug level during init sequence
log("debug", `Silent auth/init failed: ${initResult.error}`);
}
} catch (error) {
// Silent failure - only log at debug level during init sequence
log("debug", `Silent auth/init attempt failed: ${error.message}`);
}
}
// Add hosted mode check here
const hostedMode = isHostedMode(effectiveProjectRoot);
if (hostedMode) {
// Route the call through the TaskMaster gateway
log("info", "Routing AI call through TaskMaster gateway (hosted mode)");
try {
// Check if we have a valid userId (not placeholder)
const finalUserId = getUserId(effectiveProjectRoot); // Re-check after potential auth
if (finalUserId === "1234567890" || !finalUserId) {
throw new Error(
"Hosted mode requires user authentication. Please run 'task-master init' to register with the gateway, or switch to BYOK mode if the gateway service is unavailable."
);
}
// Get the role configuration for provider/model selection
let providerName, modelId;
if (initialRole === "main") {
providerName = getMainProvider(effectiveProjectRoot);
modelId = getMainModelId(effectiveProjectRoot);
} else if (initialRole === "research") {
providerName = getResearchProvider(effectiveProjectRoot);
modelId = getResearchModelId(effectiveProjectRoot);
} else if (initialRole === "fallback") {
providerName = getFallbackProvider(effectiveProjectRoot);
modelId = getFallbackModelId(effectiveProjectRoot);
} else {
throw new Error(`Unknown AI role: ${initialRole}`);
}
if (!providerName || !modelId) {
throw new Error(
`Configuration missing for role '${initialRole}'. Provider: ${providerName}, Model: ${modelId}`
);
}
// Get role parameters
const roleParams = getParametersForRole(
initialRole,
effectiveProjectRoot
);
// Prepare messages
const messages = [];
if (systemPrompt) {
messages.push({ role: "system", content: systemPrompt });
}
if (prompt) {
messages.push({ role: "user", content: prompt });
} else {
throw new Error("User prompt content is missing.");
}
const callParams = {
maxTokens: roleParams.maxTokens,
temperature: roleParams.temperature,
messages,
...(serviceType === "generateObject" && { schema, objectName }),
...restApiParams,
};
const gatewayResponse = await _callGatewayAI(
serviceType,
callParams,
providerName,
modelId,
finalUserId,
commandName,
outputType,
effectiveProjectRoot,
initialRole
);
// For hosted mode, we don't need to submit telemetry separately
// The gateway handles everything and returns account info
let telemetryData = null;
if (gatewayResponse.accountInfo) {
// Convert gateway account info to telemetry format for UI display
telemetryData = {
timestamp: new Date().toISOString(),
userId: finalUserId,
commandName,
modelUsed: modelId,
providerName,
inputTokens: gatewayResponse.usage?.inputTokens || 0,
outputTokens: gatewayResponse.usage?.outputTokens || 0,
totalTokens: gatewayResponse.usage?.totalTokens || 0,
totalCost: 0, // Not used in hosted mode
currency: "USD",
// Include account info for UI display
accountInfo: gatewayResponse.accountInfo,
};
}
let finalMainResult;
if (serviceType === "generateText") {
finalMainResult = gatewayResponse.text;
} else if (serviceType === "generateObject") {
finalMainResult = gatewayResponse.object;
} else if (serviceType === "streamText") {
finalMainResult = gatewayResponse; // Streaming through gateway would need special handling
} else {
finalMainResult = gatewayResponse;
}
return {
mainResult: finalMainResult,
telemetryData: telemetryData,
};
} catch (error) {
const cleanMessage = _extractErrorMessage(error);
log("error", `Gateway AI call failed: ${cleanMessage}`);
throw new Error(cleanMessage);
}
}
// For BYOK mode, continue with existing logic...
let sequence;
if (initialRole === 'main') {
sequence = ['main', 'fallback', 'research'];
} else if (initialRole === 'research') {
sequence = ['research', 'fallback', 'main'];
} else if (initialRole === 'fallback') {
sequence = ['fallback', 'main', 'research'];
if (initialRole === "main") {
sequence = ["main", "fallback", "research"];
} else if (initialRole === "research") {
sequence = ["research", "fallback", "main"];
} else if (initialRole === "fallback") {
sequence = ["fallback", "main", "research"];
} else {
log(
'warn',
"warn",
`Unknown initial role: ${initialRole}. Defaulting to main -> fallback -> research sequence.`
);
sequence = ['main', 'fallback', 'research'];
sequence = ["main", "fallback", "research"];
}
let lastError = null;
let lastCleanErrorMessage =
'AI service call failed for all configured roles.';
"AI service call failed for all configured roles.";
for (const currentRole of sequence) {
let providerName,
@@ -336,20 +625,20 @@ async function _unifiedServiceRunner(serviceType, params) {
telemetryData = null;
try {
log('info', `New AI service call with role: ${currentRole}`);
log("info", `New AI service call with role: ${currentRole}`);
if (currentRole === 'main') {
if (currentRole === "main") {
providerName = getMainProvider(effectiveProjectRoot);
modelId = getMainModelId(effectiveProjectRoot);
} else if (currentRole === 'research') {
} else if (currentRole === "research") {
providerName = getResearchProvider(effectiveProjectRoot);
modelId = getResearchModelId(effectiveProjectRoot);
} else if (currentRole === 'fallback') {
} else if (currentRole === "fallback") {
providerName = getFallbackProvider(effectiveProjectRoot);
modelId = getFallbackModelId(effectiveProjectRoot);
} else {
log(
'error',
"error",
`Unknown role encountered in _unifiedServiceRunner: ${currentRole}`
);
lastError =
@@ -359,7 +648,7 @@ async function _unifiedServiceRunner(serviceType, params) {
if (!providerName || !modelId) {
log(
'warn',
"warn",
`Skipping role '${currentRole}': Provider or Model ID not configured.`
);
lastError =
@@ -374,7 +663,7 @@ async function _unifiedServiceRunner(serviceType, params) {
provider = PROVIDERS[providerName?.toLowerCase()];
if (!provider) {
log(
'warn',
"warn",
`Skipping role '${currentRole}': Provider '${providerName}' not supported.`
);
lastError =
@@ -384,10 +673,10 @@ async function _unifiedServiceRunner(serviceType, params) {
}
// Check API key if needed
if (providerName?.toLowerCase() !== 'ollama') {
if (providerName?.toLowerCase() !== "ollama") {
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
log(
'warn',
"warn",
`Skipping role '${currentRole}' (Provider: ${providerName}): API key not set or invalid.`
);
lastError =
@@ -403,13 +692,13 @@ async function _unifiedServiceRunner(serviceType, params) {
baseURL = getBaseUrlForRole(currentRole, effectiveProjectRoot);
// For Azure, use the global Azure base URL if role-specific URL is not configured
if (providerName?.toLowerCase() === 'azure' && !baseURL) {
if (providerName?.toLowerCase() === "azure" && !baseURL) {
baseURL = getAzureBaseURL(effectiveProjectRoot);
log('debug', `Using global Azure base URL: ${baseURL}`);
} else if (providerName?.toLowerCase() === 'ollama' && !baseURL) {
log("debug", `Using global Azure base URL: ${baseURL}`);
} else if (providerName?.toLowerCase() === "ollama" && !baseURL) {
// For Ollama, use the global Ollama base URL if role-specific URL is not configured
baseURL = getOllamaBaseURL(effectiveProjectRoot);
log('debug', `Using global Ollama base URL: ${baseURL}`);
log("debug", `Using global Ollama base URL: ${baseURL}`);
}
// Get AI parameters for the current role
@@ -424,12 +713,12 @@ async function _unifiedServiceRunner(serviceType, params) {
let providerSpecificParams = {};
// Handle Vertex AI specific configuration
if (providerName?.toLowerCase() === 'vertex') {
if (providerName?.toLowerCase() === "vertex") {
// Get Vertex project ID and location
const projectId =
getVertexProjectId(effectiveProjectRoot) ||
resolveEnvVariable(
'VERTEX_PROJECT_ID',
"VERTEX_PROJECT_ID",
session,
effectiveProjectRoot
);
@@ -437,15 +726,15 @@ async function _unifiedServiceRunner(serviceType, params) {
const location =
getVertexLocation(effectiveProjectRoot) ||
resolveEnvVariable(
'VERTEX_LOCATION',
"VERTEX_LOCATION",
session,
effectiveProjectRoot
) ||
'us-central1';
"us-central1";
// Get credentials path if available
const credentialsPath = resolveEnvVariable(
'GOOGLE_APPLICATION_CREDENTIALS',
"GOOGLE_APPLICATION_CREDENTIALS",
session,
effectiveProjectRoot
);
@@ -454,18 +743,18 @@ async function _unifiedServiceRunner(serviceType, params) {
providerSpecificParams = {
projectId,
location,
...(credentialsPath && { credentials: { credentialsFromEnv: true } })
...(credentialsPath && { credentials: { credentialsFromEnv: true } }),
};
log(
'debug',
"debug",
`Using Vertex AI configuration: Project ID=${projectId}, Location=${location}`
);
}
const messages = [];
if (systemPrompt) {
messages.push({ role: 'system', content: systemPrompt });
messages.push({ role: "system", content: systemPrompt });
}
// IN THE FUTURE WHEN DOING CONTEXT IMPROVEMENTS
@@ -487,9 +776,9 @@ async function _unifiedServiceRunner(serviceType, params) {
// }
if (prompt) {
messages.push({ role: 'user', content: prompt });
messages.push({ role: "user", content: prompt });
} else {
throw new Error('User prompt content is missing.');
throw new Error("User prompt content is missing.");
}
const callParams = {
@@ -499,9 +788,9 @@ async function _unifiedServiceRunner(serviceType, params) {
temperature: roleParams.temperature,
messages,
...(baseURL && { baseURL }),
...(serviceType === 'generateObject' && { schema, objectName }),
...(serviceType === "generateObject" && { schema, objectName }),
...providerSpecificParams,
...restApiParams
...restApiParams,
};
providerResponse = await _attemptProviderCallWithRetries(
@@ -522,7 +811,9 @@ async function _unifiedServiceRunner(serviceType, params) {
modelId,
inputTokens: providerResponse.usage.inputTokens,
outputTokens: providerResponse.usage.outputTokens,
outputType
outputType,
commandArgs: callParams,
fullOutput: providerResponse,
});
} catch (telemetryError) {
// logAiUsage already logs its own errors and returns null on failure
@@ -530,21 +821,21 @@ async function _unifiedServiceRunner(serviceType, params) {
}
} else if (userId && providerResponse && !providerResponse.usage) {
log(
'warn',
"warn",
`Cannot log telemetry for ${commandName} (${providerName}/${modelId}): AI result missing 'usage' data. (May be expected for streams)`
);
}
let finalMainResult;
if (serviceType === 'generateText') {
if (serviceType === "generateText") {
finalMainResult = providerResponse.text;
} else if (serviceType === 'generateObject') {
} else if (serviceType === "generateObject") {
finalMainResult = providerResponse.object;
} else if (serviceType === 'streamText') {
} else if (serviceType === "streamText") {
finalMainResult = providerResponse;
} else {
log(
'error',
"error",
`Unknown serviceType in _unifiedServiceRunner: ${serviceType}`
);
finalMainResult = providerResponse;
@@ -552,37 +843,37 @@ async function _unifiedServiceRunner(serviceType, params) {
return {
mainResult: finalMainResult,
telemetryData: telemetryData
telemetryData: telemetryData,
};
} catch (error) {
const cleanMessage = _extractErrorMessage(error);
log(
'error',
`Service call failed for role ${currentRole} (Provider: ${providerName || 'unknown'}, Model: ${modelId || 'unknown'}): ${cleanMessage}`
"error",
`Service call failed for role ${currentRole} (Provider: ${providerName || "unknown"}, Model: ${modelId || "unknown"}): ${cleanMessage}`
);
lastError = error;
lastCleanErrorMessage = cleanMessage;
if (serviceType === 'generateObject') {
if (serviceType === "generateObject") {
const lowerCaseMessage = cleanMessage.toLowerCase();
if (
lowerCaseMessage.includes(
'no endpoints found that support tool use'
"no endpoints found that support tool use"
) ||
lowerCaseMessage.includes('does not support tool_use') ||
lowerCaseMessage.includes('tool use is not supported') ||
lowerCaseMessage.includes('tools are not supported') ||
lowerCaseMessage.includes('function calling is not supported')
lowerCaseMessage.includes("does not support tool_use") ||
lowerCaseMessage.includes("tool use is not supported") ||
lowerCaseMessage.includes("tools are not supported") ||
lowerCaseMessage.includes("function calling is not supported")
) {
const specificErrorMsg = `Model '${modelId || 'unknown'}' via provider '${providerName || 'unknown'}' does not support the 'tool use' required by generateObjectService. Please configure a model that supports tool/function calling for the '${currentRole}' role, or use generateTextService if structured output is not strictly required.`;
log('error', `[Tool Support Error] ${specificErrorMsg}`);
const specificErrorMsg = `Model '${modelId || "unknown"}' via provider '${providerName || "unknown"}' does not support the 'tool use' required by generateObjectService. Please configure a model that supports tool/function calling for the '${currentRole}' role, or use generateTextService if structured output is not strictly required.`;
log("error", `[Tool Support Error] ${specificErrorMsg}`);
throw new Error(specificErrorMsg);
}
}
}
}
log('error', `All roles in the sequence [${sequence.join(', ')}] failed.`);
log("error", `All roles in the sequence [${sequence.join(", ")}] failed.`);
throw new Error(lastCleanErrorMessage);
}
@@ -602,10 +893,10 @@ async function _unifiedServiceRunner(serviceType, params) {
*/
async function generateTextService(params) {
// Ensure default outputType if not provided
const defaults = { outputType: 'cli' };
const defaults = { outputType: "cli" };
const combinedParams = { ...defaults, ...params };
// TODO: Validate commandName exists?
return _unifiedServiceRunner('generateText', combinedParams);
return _unifiedServiceRunner("generateText", combinedParams);
}
/**
@@ -623,13 +914,13 @@ async function generateTextService(params) {
* @returns {Promise<object>} Result object containing the stream and usage data.
*/
async function streamTextService(params) {
const defaults = { outputType: 'cli' };
const defaults = { outputType: "cli" };
const combinedParams = { ...defaults, ...params };
// TODO: Validate commandName exists?
// NOTE: Telemetry for streaming might be tricky as usage data often comes at the end.
// The current implementation logs *after* the stream is returned.
// We might need to adjust how usage is captured/logged for streams.
return _unifiedServiceRunner('streamText', combinedParams);
return _unifiedServiceRunner("streamText", combinedParams);
}
/**
@@ -651,13 +942,13 @@ async function streamTextService(params) {
*/
async function generateObjectService(params) {
const defaults = {
objectName: 'generated_object',
objectName: "generated_object",
maxRetries: 3,
outputType: 'cli'
outputType: "cli",
};
const combinedParams = { ...defaults, ...params };
// TODO: Validate commandName exists?
return _unifiedServiceRunner('generateObject', combinedParams);
return _unifiedServiceRunner("generateObject", combinedParams);
}
// --- Telemetry Function ---
@@ -671,6 +962,9 @@ async function generateObjectService(params) {
* @param {string} params.modelId - The specific AI model ID used.
* @param {number} params.inputTokens - Number of input tokens.
* @param {number} params.outputTokens - Number of output tokens.
* @param {string} params.outputType - 'cli' or 'mcp'.
* @param {object} [params.commandArgs] - Original command arguments passed to the AI service.
* @param {object} [params.fullOutput] - Complete AI response output before filtering.
*/
async function logAiUsage({
userId,
@@ -679,10 +973,12 @@ async function logAiUsage({
modelId,
inputTokens,
outputTokens,
outputType
outputType,
commandArgs,
fullOutput,
}) {
try {
const isMCP = outputType === 'mcp';
const isMCP = outputType === "mcp";
const timestamp = new Date().toISOString();
const totalTokens = (inputTokens || 0) + (outputTokens || 0);
@@ -706,19 +1002,40 @@ async function logAiUsage({
outputTokens: outputTokens || 0,
totalTokens,
totalCost: parseFloat(totalCost.toFixed(6)),
currency // Add currency to the telemetry data
currency, // Add currency to the telemetry data
};
if (getDebugFlag()) {
log('info', 'AI Usage Telemetry:', telemetryData);
// Add commandArgs and fullOutput if provided (for internal telemetry only)
if (commandArgs !== undefined) {
telemetryData.commandArgs = commandArgs;
}
if (fullOutput !== undefined) {
telemetryData.fullOutput = fullOutput;
}
// TODO (Subtask 77.2): Send telemetryData securely to the external endpoint.
if (getDebugFlag()) {
log("info", "AI Usage Telemetry:", telemetryData);
}
// Subtask 90.3: Submit telemetry data to gateway
try {
const submissionResult = await submitTelemetryData(telemetryData);
if (getDebugFlag() && submissionResult.success) {
log("debug", "Telemetry data successfully submitted to gateway");
} else if (getDebugFlag() && !submissionResult.success) {
log("debug", `Telemetry submission failed: ${submissionResult.error}`);
}
} catch (submissionError) {
// Telemetry submission should never block core functionality
if (getDebugFlag()) {
log("debug", `Telemetry submission error: ${submissionError.message}`);
}
}
return telemetryData;
} catch (error) {
log('error', `Failed to log AI usage telemetry: ${error.message}`, {
error
log("error", `Failed to log AI usage telemetry: ${error.message}`, {
error,
});
// Don't re-throw; telemetry failure shouldn't block core functionality.
return null;
@@ -729,5 +1046,5 @@ export {
generateTextService,
streamTextService,
generateObjectService,
logAiUsage
logAiUsage,
};
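As a usage sketch (not part of this diff), a caller consumes these services roughly as follows; the parameter names are inferred from _unifiedServiceRunner above and the prompt values are made up:

// Sketch only: assumes the params destructured by _unifiedServiceRunner (role, prompt, etc.)
import { generateTextService } from "./ai-services-unified.js";

async function demo() {
  const { mainResult, telemetryData } = await generateTextService({
    role: "main",
    commandName: "add-task",
    outputType: "cli",
    systemPrompt: "You are a task-planning assistant.",
    prompt: "Draft a task for adding a login page.",
    projectRoot: "/path/to/project",
  });
  // Hosted mode routes through the gateway and fills telemetryData from accountInfo;
  // BYOK mode calls the provider directly and logs usage via logAiUsage.
  console.log(mainResult, telemetryData?.totalTokens);
}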

File diff suppressed because it is too large


@@ -1,8 +1,13 @@
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { fileURLToPath } from 'url';
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
import fs from "fs";
import path from "path";
import chalk from "chalk";
import { fileURLToPath } from "url";
import {
log,
findProjectRoot,
resolveEnvVariable,
isSilentMode,
} from "./utils.js";
// Calculate __dirname in ESM
const __filename = fileURLToPath(import.meta.url);
@@ -12,14 +17,14 @@ const __dirname = path.dirname(__filename);
let MODEL_MAP;
try {
const supportedModelsRaw = fs.readFileSync(
path.join(__dirname, 'supported-models.json'),
'utf-8'
path.join(__dirname, "supported-models.json"),
"utf-8"
);
MODEL_MAP = JSON.parse(supportedModelsRaw);
} catch (error) {
console.error(
chalk.red(
'FATAL ERROR: Could not load supported-models.json. Please ensure the file exists and is valid JSON.'
"FATAL ERROR: Could not load supported-models.json. Please ensure the file exists and is valid JSON."
),
error
);
@@ -27,42 +32,49 @@ try {
process.exit(1); // Exit if models can't be loaded
}
const CONFIG_FILE_NAME = '.taskmasterconfig';
const CONFIG_FILE_NAME = ".taskmasterconfig";
// Define valid providers dynamically from the loaded MODEL_MAP
const VALID_PROVIDERS = Object.keys(MODEL_MAP || {});
// Default configuration values (used if .taskmasterconfig is missing or incomplete)
const DEFAULTS = {
// Default configuration structure (updated)
const defaultConfig = {
global: {
logLevel: "info",
debug: false,
defaultSubtasks: 5,
defaultPriority: "medium",
projectName: "Taskmaster",
ollamaBaseURL: "http://localhost:11434/api",
azureBaseURL: "https://your-endpoint.azure.com/",
},
models: {
main: {
provider: 'anthropic',
modelId: 'claude-3-7-sonnet-20250219',
provider: "anthropic",
modelId: "claude-3-7-sonnet-20250219",
maxTokens: 64000,
temperature: 0.2
temperature: 0.2,
},
research: {
provider: 'perplexity',
modelId: 'sonar-pro',
provider: "perplexity",
modelId: "sonar-pro",
maxTokens: 8700,
temperature: 0.1
temperature: 0.1,
},
fallback: {
// No default fallback provider/model initially
provider: 'anthropic',
modelId: 'claude-3-5-sonnet',
provider: "anthropic",
modelId: "claude-3-5-sonnet",
maxTokens: 64000, // Default parameters if fallback IS configured
temperature: 0.2
}
temperature: 0.2,
},
},
account: {
userId: "1234567890", // Placeholder that triggers auth/init
email: "",
mode: "byok",
telemetryEnabled: true,
},
global: {
logLevel: 'info',
debug: false,
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api'
}
};
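For illustration, a .taskmasterconfig written from these defaults would look roughly like the following (it mirrors defaultConfig above; any real project file may differ):

{
  "global": {
    "logLevel": "info",
    "debug": false,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Taskmaster",
    "ollamaBaseURL": "http://localhost:11434/api",
    "azureBaseURL": "https://your-endpoint.azure.com/"
  },
  "models": {
    "main": { "provider": "anthropic", "modelId": "claude-3-7-sonnet-20250219", "maxTokens": 64000, "temperature": 0.2 },
    "research": { "provider": "perplexity", "modelId": "sonar-pro", "maxTokens": 8700, "temperature": 0.1 },
    "fallback": { "provider": "anthropic", "modelId": "claude-3-5-sonnet", "maxTokens": 64000, "temperature": 0.2 }
  },
  "account": {
    "userId": "1234567890",
    "email": "",
    "mode": "byok",
    "telemetryEnabled": true
  }
}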
// --- Internal Config Loading ---
@@ -73,16 +85,16 @@ let loadedConfigRoot = null; // Track which root loaded the config
class ConfigurationError extends Error {
constructor(message) {
super(message);
this.name = 'ConfigurationError';
this.name = "ConfigurationError";
}
}
function _loadAndValidateConfig(explicitRoot = null) {
const defaults = DEFAULTS; // Use the defined defaults
const defaults = defaultConfig; // Use the defined defaults
let rootToUse = explicitRoot;
let configSource = explicitRoot
? `explicit root (${explicitRoot})`
: 'defaults (no root provided yet)';
: "defaults (no root provided yet)";
// ---> If no explicit root, TRY to find it <---
if (!rootToUse) {
@@ -104,7 +116,7 @@ function _loadAndValidateConfig(explicitRoot = null) {
if (fs.existsSync(configPath)) {
configExists = true;
try {
const rawData = fs.readFileSync(configPath, 'utf-8');
const rawData = fs.readFileSync(configPath, "utf-8");
const parsedConfig = JSON.parse(rawData);
// Deep merge parsed config onto defaults
@@ -113,15 +125,16 @@ function _loadAndValidateConfig(explicitRoot = null) {
main: { ...defaults.models.main, ...parsedConfig?.models?.main },
research: {
...defaults.models.research,
...parsedConfig?.models?.research
...parsedConfig?.models?.research,
},
fallback:
parsedConfig?.models?.fallback?.provider &&
parsedConfig?.models?.fallback?.modelId
? { ...defaults.models.fallback, ...parsedConfig.models.fallback }
: { ...defaults.models.fallback }
: { ...defaults.models.fallback },
},
global: { ...defaults.global, ...parsedConfig?.global }
global: { ...defaults.global, ...parsedConfig?.global },
account: { ...defaults.account, ...parsedConfig?.account },
};
configSource = `file (${configPath})`; // Update source info
@@ -256,68 +269,68 @@ function getModelConfigForRole(role, explicitRoot = null) {
const roleConfig = config?.models?.[role];
if (!roleConfig) {
log(
'warn',
"warn",
`No model configuration found for role: ${role}. Returning default.`
);
return DEFAULTS.models[role] || {};
return defaultConfig.models[role] || {};
}
return roleConfig;
}
function getMainProvider(explicitRoot = null) {
return getModelConfigForRole('main', explicitRoot).provider;
return getModelConfigForRole("main", explicitRoot).provider;
}
function getMainModelId(explicitRoot = null) {
return getModelConfigForRole('main', explicitRoot).modelId;
return getModelConfigForRole("main", explicitRoot).modelId;
}
function getMainMaxTokens(explicitRoot = null) {
// Directly return value from config (which includes defaults)
return getModelConfigForRole('main', explicitRoot).maxTokens;
return getModelConfigForRole("main", explicitRoot).maxTokens;
}
function getMainTemperature(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('main', explicitRoot).temperature;
return getModelConfigForRole("main", explicitRoot).temperature;
}
function getResearchProvider(explicitRoot = null) {
return getModelConfigForRole('research', explicitRoot).provider;
return getModelConfigForRole("research", explicitRoot).provider;
}
function getResearchModelId(explicitRoot = null) {
return getModelConfigForRole('research', explicitRoot).modelId;
return getModelConfigForRole("research", explicitRoot).modelId;
}
function getResearchMaxTokens(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('research', explicitRoot).maxTokens;
return getModelConfigForRole("research", explicitRoot).maxTokens;
}
function getResearchTemperature(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('research', explicitRoot).temperature;
return getModelConfigForRole("research", explicitRoot).temperature;
}
function getFallbackProvider(explicitRoot = null) {
// Directly return value from config (will be undefined if not set)
return getModelConfigForRole('fallback', explicitRoot).provider;
return getModelConfigForRole("fallback", explicitRoot).provider;
}
function getFallbackModelId(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('fallback', explicitRoot).modelId;
return getModelConfigForRole("fallback", explicitRoot).modelId;
}
function getFallbackMaxTokens(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('fallback', explicitRoot).maxTokens;
return getModelConfigForRole("fallback", explicitRoot).maxTokens;
}
function getFallbackTemperature(explicitRoot = null) {
// Directly return value from config
return getModelConfigForRole('fallback', explicitRoot).temperature;
return getModelConfigForRole("fallback", explicitRoot).temperature;
}
// --- Global Settings Getters ---
@@ -325,7 +338,7 @@ function getFallbackTemperature(explicitRoot = null) {
function getGlobalConfig(explicitRoot = null) {
const config = getConfig(explicitRoot);
// Ensure global defaults are applied if global section is missing
return { ...DEFAULTS.global, ...(config?.global || {}) };
return { ...defaultConfig.global, ...(config?.global || {}) };
}
function getLogLevel(explicitRoot = null) {
@@ -342,13 +355,13 @@ function getDefaultSubtasks(explicitRoot = null) {
// Directly return value from config, ensure integer
const val = getGlobalConfig(explicitRoot).defaultSubtasks;
const parsedVal = parseInt(val, 10);
return isNaN(parsedVal) ? DEFAULTS.global.defaultSubtasks : parsedVal;
return isNaN(parsedVal) ? defaultConfig.global.defaultSubtasks : parsedVal;
}
function getDefaultNumTasks(explicitRoot = null) {
const val = getGlobalConfig(explicitRoot).defaultNumTasks;
const parsedVal = parseInt(val, 10);
return isNaN(parsedVal) ? DEFAULTS.global.defaultNumTasks : parsedVal;
return isNaN(parsedVal) ? defaultConfig.global.defaultNumTasks : parsedVal;
}
function getDefaultPriority(explicitRoot = null) {
@@ -388,7 +401,7 @@ function getVertexProjectId(explicitRoot = null) {
*/
function getVertexLocation(explicitRoot = null) {
// Return value from config or default
return getGlobalConfig(explicitRoot).vertexLocation || 'us-central1';
return getGlobalConfig(explicitRoot).vertexLocation || "us-central1";
}
/**
@@ -416,31 +429,31 @@ function getParametersForRole(role, explicitRoot = null) {
// Check if a model-specific max_tokens is defined and valid
if (
modelDefinition &&
typeof modelDefinition.max_tokens === 'number' &&
typeof modelDefinition.max_tokens === "number" &&
modelDefinition.max_tokens > 0
) {
const modelSpecificMaxTokens = modelDefinition.max_tokens;
// Use the minimum of the role default and the model specific limit
effectiveMaxTokens = Math.min(roleMaxTokens, modelSpecificMaxTokens);
log(
'debug',
"debug",
`Applying model-specific max_tokens (${modelSpecificMaxTokens}) for ${modelId}. Effective limit: ${effectiveMaxTokens}`
);
} else {
log(
'debug',
"debug",
`No valid model-specific max_tokens override found for ${modelId}. Using role default: ${roleMaxTokens}`
);
}
} else {
log(
'debug',
"debug",
`No model definitions found for provider ${providerName} in MODEL_MAP. Using role default maxTokens: ${roleMaxTokens}`
);
}
} catch (lookupError) {
log(
'warn',
"warn",
`Error looking up model-specific max_tokens for ${modelId}: ${lookupError.message}. Using role default: ${roleMaxTokens}`
);
// Fallback to role default on error
@@ -449,7 +462,7 @@ function getParametersForRole(role, explicitRoot = null) {
return {
maxTokens: effectiveMaxTokens,
temperature: roleTemperature
temperature: roleTemperature,
};
}
@@ -463,26 +476,26 @@ function getParametersForRole(role, explicitRoot = null) {
*/
function isApiKeySet(providerName, session = null, projectRoot = null) {
// Define the expected environment variable name for each provider
if (providerName?.toLowerCase() === 'ollama') {
if (providerName?.toLowerCase() === "ollama") {
return true; // Indicate key status is effectively "OK"
}
const keyMap = {
openai: 'OPENAI_API_KEY',
anthropic: 'ANTHROPIC_API_KEY',
google: 'GOOGLE_API_KEY',
perplexity: 'PERPLEXITY_API_KEY',
mistral: 'MISTRAL_API_KEY',
azure: 'AZURE_OPENAI_API_KEY',
openrouter: 'OPENROUTER_API_KEY',
xai: 'XAI_API_KEY',
vertex: 'GOOGLE_API_KEY' // Vertex uses the same key as Google
openai: "OPENAI_API_KEY",
anthropic: "ANTHROPIC_API_KEY",
google: "GOOGLE_API_KEY",
perplexity: "PERPLEXITY_API_KEY",
mistral: "MISTRAL_API_KEY",
azure: "AZURE_OPENAI_API_KEY",
openrouter: "OPENROUTER_API_KEY",
xai: "XAI_API_KEY",
vertex: "GOOGLE_API_KEY", // Vertex uses the same key as Google
// Add other providers as needed
};
const providerKey = providerName?.toLowerCase();
if (!providerKey || !keyMap[providerKey]) {
log('warn', `Unknown provider name: ${providerName} in isApiKeySet check.`);
log("warn", `Unknown provider name: ${providerName} in isApiKeySet check.`);
return false;
}
@@ -492,9 +505,9 @@ function isApiKeySet(providerName, session = null, projectRoot = null) {
// Check if the key exists, is not empty, and is not a placeholder
return (
apiKeyValue &&
apiKeyValue.trim() !== '' &&
apiKeyValue.trim() !== "" &&
!/YOUR_.*_API_KEY_HERE/.test(apiKeyValue) && // General placeholder check
!apiKeyValue.includes('KEY_HERE')
!apiKeyValue.includes("KEY_HERE")
); // Another common placeholder pattern
}
@@ -509,11 +522,11 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
const rootDir = projectRoot || findProjectRoot(); // Use existing root finding
if (!rootDir) {
console.warn(
chalk.yellow('Warning: Could not find project root to check mcp.json.')
chalk.yellow("Warning: Could not find project root to check mcp.json.")
);
return false; // Cannot check without root
}
const mcpConfigPath = path.join(rootDir, '.cursor', 'mcp.json');
const mcpConfigPath = path.join(rootDir, ".cursor", "mcp.json");
if (!fs.existsSync(mcpConfigPath)) {
// console.warn(chalk.yellow('Warning: .cursor/mcp.json not found.'));
@@ -521,10 +534,10 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
}
try {
const mcpConfigRaw = fs.readFileSync(mcpConfigPath, 'utf-8');
const mcpConfigRaw = fs.readFileSync(mcpConfigPath, "utf-8");
const mcpConfig = JSON.parse(mcpConfigRaw);
const mcpEnv = mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
const mcpEnv = mcpConfig?.mcpServers?.["taskmaster-ai"]?.env;
if (!mcpEnv) {
// console.warn(chalk.yellow('Warning: Could not find taskmaster-ai env in mcp.json.'));
return false; // Structure missing
@@ -534,43 +547,43 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
let placeholderValue = null;
switch (providerName) {
case 'anthropic':
case "anthropic":
apiKeyToCheck = mcpEnv.ANTHROPIC_API_KEY;
placeholderValue = 'YOUR_ANTHROPIC_API_KEY_HERE';
placeholderValue = "YOUR_ANTHROPIC_API_KEY_HERE";
break;
case 'openai':
case "openai":
apiKeyToCheck = mcpEnv.OPENAI_API_KEY;
placeholderValue = 'YOUR_OPENAI_API_KEY_HERE'; // Assuming placeholder matches OPENAI
placeholderValue = "YOUR_OPENAI_API_KEY_HERE"; // Assuming placeholder matches OPENAI
break;
case 'openrouter':
case "openrouter":
apiKeyToCheck = mcpEnv.OPENROUTER_API_KEY;
placeholderValue = 'YOUR_OPENROUTER_API_KEY_HERE';
placeholderValue = "YOUR_OPENROUTER_API_KEY_HERE";
break;
case 'google':
case "google":
apiKeyToCheck = mcpEnv.GOOGLE_API_KEY;
placeholderValue = 'YOUR_GOOGLE_API_KEY_HERE';
placeholderValue = "YOUR_GOOGLE_API_KEY_HERE";
break;
case 'perplexity':
case "perplexity":
apiKeyToCheck = mcpEnv.PERPLEXITY_API_KEY;
placeholderValue = 'YOUR_PERPLEXITY_API_KEY_HERE';
placeholderValue = "YOUR_PERPLEXITY_API_KEY_HERE";
break;
case 'xai':
case "xai":
apiKeyToCheck = mcpEnv.XAI_API_KEY;
placeholderValue = 'YOUR_XAI_API_KEY_HERE';
placeholderValue = "YOUR_XAI_API_KEY_HERE";
break;
case 'ollama':
case "ollama":
return true; // No key needed
case 'mistral':
case "mistral":
apiKeyToCheck = mcpEnv.MISTRAL_API_KEY;
placeholderValue = 'YOUR_MISTRAL_API_KEY_HERE';
placeholderValue = "YOUR_MISTRAL_API_KEY_HERE";
break;
case 'azure':
case "azure":
apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY;
placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE';
placeholderValue = "YOUR_AZURE_OPENAI_API_KEY_HERE";
break;
case 'vertex':
case "vertex":
apiKeyToCheck = mcpEnv.GOOGLE_API_KEY; // Vertex uses Google API key
placeholderValue = 'YOUR_GOOGLE_API_KEY_HERE';
placeholderValue = "YOUR_GOOGLE_API_KEY_HERE";
break;
default:
return false; // Unknown provider
@@ -598,20 +611,20 @@ function getAvailableModels() {
const modelId = modelObj.id;
const sweScore = modelObj.swe_score;
const cost = modelObj.cost_per_1m_tokens;
const allowedRoles = modelObj.allowed_roles || ['main', 'fallback'];
const allowedRoles = modelObj.allowed_roles || ["main", "fallback"];
const nameParts = modelId
.split('-')
.split("-")
.map((p) => p.charAt(0).toUpperCase() + p.slice(1));
// Handle specific known names better if needed
let name = nameParts.join(' ');
if (modelId === 'claude-3.5-sonnet-20240620')
name = 'Claude 3.5 Sonnet';
if (modelId === 'claude-3-7-sonnet-20250219')
name = 'Claude 3.7 Sonnet';
if (modelId === 'gpt-4o') name = 'GPT-4o';
if (modelId === 'gpt-4-turbo') name = 'GPT-4 Turbo';
if (modelId === 'sonar-pro') name = 'Perplexity Sonar Pro';
if (modelId === 'sonar-mini') name = 'Perplexity Sonar Mini';
let name = nameParts.join(" ");
if (modelId === "claude-3.5-sonnet-20240620")
name = "Claude 3.5 Sonnet";
if (modelId === "claude-3-7-sonnet-20250219")
name = "Claude 3.7 Sonnet";
if (modelId === "gpt-4o") name = "GPT-4o";
if (modelId === "gpt-4-turbo") name = "GPT-4 Turbo";
if (modelId === "sonar-pro") name = "Perplexity Sonar Pro";
if (modelId === "sonar-mini") name = "Perplexity Sonar Mini";
available.push({
id: modelId,
@@ -619,7 +632,7 @@ function getAvailableModels() {
provider: provider,
swe_score: sweScore,
cost_per_1m_tokens: cost,
allowed_roles: allowedRoles
allowed_roles: allowedRoles,
});
});
} else {
@@ -627,7 +640,7 @@ function getAvailableModels() {
available.push({
id: `[${provider}-any]`,
name: `Any (${provider})`,
provider: provider
provider: provider,
});
}
}
@@ -649,7 +662,7 @@ function writeConfig(config, explicitRoot = null) {
if (!foundRoot) {
console.error(
chalk.red(
'Error: Could not determine project root. Configuration not saved.'
"Error: Could not determine project root. Configuration not saved."
)
);
return false;
@@ -701,30 +714,70 @@ function isConfigFilePresent(explicitRoot = null) {
/**
* Gets the user ID from the configuration.
* Returns a placeholder that triggers auth/init if no real userId exists.
* @param {string|null} explicitRoot - Optional explicit path to the project root.
* @returns {string|null} The user ID or null if not found.
 * @returns {string} The user ID, or the "1234567890" placeholder if registration has not yet completed.
*/
function getUserId(explicitRoot = null) {
const config = getConfig(explicitRoot);
if (!config.global) {
config.global = {}; // Ensure global object exists
// Ensure account section exists
if (!config.account) {
config.account = { ...defaultConfig.account };
}
if (!config.global.userId) {
config.global.userId = '1234567890';
// Attempt to write the updated config.
// It's important that writeConfig correctly resolves the path
// using explicitRoot, similar to how getConfig does.
// Check if the userId exists in the actual file (not merged config)
let needsToSaveUserId = false;
// Load the raw config to check if userId is actually in the file
try {
let rootPath = explicitRoot;
if (explicitRoot === null || explicitRoot === undefined) {
const foundRoot = findProjectRoot();
if (!foundRoot) {
// If no project root, can't check file, assume userId needs to be saved
needsToSaveUserId = true;
} else {
rootPath = foundRoot;
}
}
if (rootPath && !needsToSaveUserId) {
const configPath = path.join(rootPath, CONFIG_FILE_NAME);
if (fs.existsSync(configPath)) {
const rawConfig = JSON.parse(fs.readFileSync(configPath, "utf8"));
// Check if userId is missing from the actual file
needsToSaveUserId = !rawConfig.account?.userId;
} else {
// Config file doesn't exist, need to save
needsToSaveUserId = true;
}
}
} catch (error) {
// If there's any error reading the file, assume we need to save
needsToSaveUserId = true;
}
// If userId exists and is not the placeholder, return it
if (config.account.userId && config.account.userId !== "1234567890") {
return config.account.userId;
}
// If userId is missing from the actual file, set the placeholder and save it
if (needsToSaveUserId) {
config.account.userId = "1234567890";
const success = writeConfig(config, explicitRoot);
if (!success) {
// Log an error or handle the failure to write,
// though for now, we'll proceed with the in-memory default.
log(
'warning',
'Failed to write updated configuration with new userId. Please let the developers know.'
);
console.warn("Warning: Failed to save default userId to config file");
}
// Force reload the cached config to reflect the change
loadedConfig = null;
loadedConfigRoot = null;
}
return config.global.userId;
// Return the placeholder
// This signals to other code that auth/init needs to be attempted
return "1234567890";
}
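A small sketch of the contract this gives callers, as read from the function above (paths illustrative):

// First call with no saved userId: the placeholder is persisted and returned.
const id = getUserId("/path/to/project"); // "1234567890"
// After a successful auth/init writes a real id, later calls return that id.
// Callers (e.g. _unifiedServiceRunner) compare against "1234567890" to decide
// whether a silent auth/init attempt is still needed.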
/**
@@ -737,11 +790,84 @@ function getAllProviders() {
function getBaseUrlForRole(role, explicitRoot = null) {
const roleConfig = getModelConfigForRole(role, explicitRoot);
return roleConfig && typeof roleConfig.baseURL === 'string'
return roleConfig && typeof roleConfig.baseURL === "string"
? roleConfig.baseURL
: undefined;
}
// Get telemetryEnabled from account section
function getTelemetryEnabled(explicitRoot = null) {
const config = getConfig(explicitRoot);
return config.account?.telemetryEnabled ?? false;
}
// Get user email from the account section
function getUserEmail(explicitRoot = null) {
const config = getConfig(explicitRoot);
return config.account?.email || "";
}
// Get mode (byok or hosted) from the account section
function getMode(explicitRoot = null) {
const config = getConfig(explicitRoot);
return config.account?.mode || "byok";
}
/**
* Ensures that the .taskmasterconfig file exists, creating it with defaults if it doesn't.
* This is called early in initialization to prevent chicken-and-egg problems.
* @param {string|null} explicitRoot - Optional explicit path to the project root
* @returns {boolean} True if file exists or was created successfully, false otherwise
*/
function ensureConfigFileExists(explicitRoot = null) {
// ---> Determine root path reliably (following existing pattern) <---
let rootPath = explicitRoot;
if (explicitRoot === null || explicitRoot === undefined) {
// Logic matching _loadAndValidateConfig and other functions
const foundRoot = findProjectRoot(); // *** Explicitly call findProjectRoot ***
if (!foundRoot) {
console.warn(
chalk.yellow(
"Warning: Could not determine project root for config file creation."
)
);
return false;
}
rootPath = foundRoot;
}
// ---> End determine root path logic <---
const configPath = path.join(rootPath, CONFIG_FILE_NAME);
// If file already exists, we're good
if (fs.existsSync(configPath)) {
return true;
}
try {
// Create the default config file (following writeConfig pattern)
fs.writeFileSync(configPath, JSON.stringify(defaultConfig, null, 2));
// Only log if not in silent mode
if (!isSilentMode()) {
console.log(chalk.blue(` Created default .taskmasterconfig file`));
}
// Clear any cached config to ensure fresh load
loadedConfig = null;
loadedConfigRoot = null;
return true;
} catch (error) {
console.error(
chalk.red(
`Error creating default .taskmasterconfig file: ${error.message}`
)
);
return false;
}
}
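A usage sketch of how an init flow would be expected to call this helper before any mode or gateway checks (the calling code is outside this hunk):

// Sketch: create the config early so later reads (mode, account) have a file to merge from.
if (!ensureConfigFileExists(projectRoot)) {
  console.warn("Could not create .taskmasterconfig; proceeding with in-memory defaults.");
}
const mode = getMode(projectRoot); // "byok" by default, "hosted" if configured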
export {
// Core config access
getConfig,
@@ -785,5 +911,11 @@ export {
// ADD: Function to get all provider names
getAllProviders,
getVertexProjectId,
getVertexLocation
getVertexLocation,
// New getters
getTelemetryEnabled,
getUserEmail,
getMode,
// New function
ensureConfigFileExists,
};


@@ -1,40 +1,40 @@
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
import { z } from 'zod';
import Fuse from 'fuse.js'; // Import Fuse.js for advanced fuzzy search
import path from "path";
import chalk from "chalk";
import boxen from "boxen";
import Table from "cli-table3";
import { z } from "zod";
import Fuse from "fuse.js"; // Import Fuse.js for advanced fuzzy search
import {
displayBanner,
getStatusWithColor,
startLoadingIndicator,
stopLoadingIndicator,
displayAiUsageSummary
} from '../ui.js';
import { readJSON, writeJSON, log as consoleLog, truncate } from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
displayAiUsageSummary,
} from "../ui.js";
import { readJSON, writeJSON, log as consoleLog, truncate } from "../utils.js";
import { generateObjectService } from "../ai-services-unified.js";
import { getDefaultPriority } from "../config-manager.js";
import generateTaskFiles from "./generate-task-files.js";
// Define Zod schema for the expected AI output object
const AiTaskDataSchema = z.object({
title: z.string().describe('Clear, concise title for the task'),
title: z.string().describe("Clear, concise title for the task"),
description: z
.string()
.describe('A one or two sentence description of the task'),
.describe("A one or two sentence description of the task"),
details: z
.string()
.describe('In-depth implementation details, considerations, and guidance'),
.describe("In-depth implementation details, considerations, and guidance"),
testStrategy: z
.string()
.describe('Detailed approach for verifying task completion'),
.describe("Detailed approach for verifying task completion"),
dependencies: z
.array(z.number())
.optional()
.describe(
'Array of task IDs that this task depends on (must be completed before this task can start)'
)
"Array of task IDs that this task depends on (must be completed before this task can start)"
),
});
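For clarity, an object that satisfies AiTaskDataSchema looks like this (contents invented for illustration):

const exampleTaskData = {
  title: "Add login page",
  description: "Create a login page with email/password authentication.",
  details: "Build the form component, wire it to the auth API, and handle validation errors.",
  testStrategy: "Unit-test form validation and add an end-to-end test for the login flow.",
  dependencies: [3, 7], // optional: IDs of tasks that must be completed first
};
// AiTaskDataSchema.parse(exampleTaskData) succeeds for this shape.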
/**
@@ -62,7 +62,7 @@ async function addTask(
dependencies = [],
priority = null,
context = {},
outputFormat = 'text', // Default to text for CLI
outputFormat = "text", // Default to text for CLI
manualTaskData = null,
useResearch = false
) {
@@ -74,27 +74,27 @@ async function addTask(
? mcpLog // Use MCP logger if provided
: {
// Create a wrapper around consoleLog for CLI
info: (...args) => consoleLog('info', ...args),
warn: (...args) => consoleLog('warn', ...args),
error: (...args) => consoleLog('error', ...args),
debug: (...args) => consoleLog('debug', ...args),
success: (...args) => consoleLog('success', ...args)
info: (...args) => consoleLog("info", ...args),
warn: (...args) => consoleLog("warn", ...args),
error: (...args) => consoleLog("error", ...args),
debug: (...args) => consoleLog("debug", ...args),
success: (...args) => consoleLog("success", ...args),
};
const effectivePriority = priority || getDefaultPriority(projectRoot);
logFn.info(
`Adding new task with prompt: "${prompt}", Priority: ${effectivePriority}, Dependencies: ${dependencies.join(', ') || 'None'}, Research: ${useResearch}, ProjectRoot: ${projectRoot}`
`Adding new task with prompt: "${prompt}", Priority: ${effectivePriority}, Dependencies: ${dependencies.join(", ") || "None"}, Research: ${useResearch}, ProjectRoot: ${projectRoot}`
);
let loadingIndicator = null;
let aiServiceResponse = null; // To store the full response from AI service
// Create custom reporter that checks for MCP log
const report = (message, level = 'info') => {
const report = (message, level = "info") => {
if (mcpLog) {
mcpLog[level](message);
} else if (outputFormat === 'text') {
} else if (outputFormat === "text") {
consoleLog(level, message);
}
};
@@ -156,7 +156,7 @@ async function addTask(
title: task.title,
description: task.description,
status: task.status,
dependencies: dependencyData
dependencies: dependencyData,
};
}
@@ -166,14 +166,14 @@ async function addTask(
// If tasks.json doesn't exist or is invalid, create a new one
if (!data || !data.tasks) {
report('tasks.json not found or invalid. Creating a new one.', 'info');
report("tasks.json not found or invalid. Creating a new one.", "info");
// Create default tasks data structure
data = {
tasks: []
tasks: [],
};
// Ensure the directory exists and write the new file
writeJSON(tasksPath, data);
report('Created new tasks.json file with empty tasks array.', 'info');
report("Created new tasks.json file with empty tasks array.", "info");
}
// Find the highest task ID to determine the next ID
@@ -182,13 +182,13 @@ async function addTask(
const newTaskId = highestId + 1;
// Only show UI box for CLI mode
if (outputFormat === 'text') {
if (outputFormat === "text") {
console.log(
boxen(chalk.white.bold(`Creating New Task #${newTaskId}`), {
padding: 1,
borderColor: 'blue',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
borderColor: "blue",
borderStyle: "round",
margin: { top: 1, bottom: 1 },
})
);
}
@@ -202,10 +202,10 @@ async function addTask(
if (invalidDeps.length > 0) {
report(
`The following dependencies do not exist or are invalid: ${invalidDeps.join(', ')}`,
'warn'
`The following dependencies do not exist or are invalid: ${invalidDeps.join(", ")}`,
"warn"
);
report('Removing invalid dependencies...', 'info');
report("Removing invalid dependencies...", "info");
dependencies = dependencies.filter(
(depId) => !invalidDeps.includes(depId)
);
@@ -240,28 +240,28 @@ async function addTask(
// Check if manual task data is provided
if (manualTaskData) {
report('Using manually provided task data', 'info');
report("Using manually provided task data", "info");
taskData = manualTaskData;
report('DEBUG: Taking MANUAL task data path.', 'debug');
report("DEBUG: Taking MANUAL task data path.", "debug");
// Basic validation for manual data
if (
!taskData.title ||
typeof taskData.title !== 'string' ||
typeof taskData.title !== "string" ||
!taskData.description ||
typeof taskData.description !== 'string'
typeof taskData.description !== "string"
) {
throw new Error(
'Manual task data must include at least a title and description.'
"Manual task data must include at least a title and description."
);
}
} else {
report('DEBUG: Taking AI task generation path.', 'debug');
report("DEBUG: Taking AI task generation path.", "debug");
// --- Refactored AI Interaction ---
report(`Generating task data with AI with prompt:\n${prompt}`, 'info');
report(`Generating task data with AI with prompt:\n${prompt}`, "info");
// Create context string for task creation prompt
let contextTasks = '';
let contextTasks = "";
// Create a dependency map for better understanding of the task relationships
const taskMap = {};
@@ -272,18 +272,18 @@ async function addTask(
title: t.title,
description: t.description,
dependencies: t.dependencies || [],
status: t.status
status: t.status,
};
});
// CLI-only feedback for the dependency analysis
if (outputFormat === 'text') {
if (outputFormat === "text") {
console.log(
boxen(chalk.cyan.bold('Task Context Analysis') + '\n', {
boxen(chalk.cyan.bold("Task Context Analysis") + "\n", {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
margin: { top: 0, bottom: 0 },
borderColor: 'cyan',
borderStyle: 'round'
borderColor: "cyan",
borderStyle: "round",
})
);
}
@@ -314,7 +314,7 @@ async function addTask(
const directDeps = data.tasks.filter((t) =>
numericDependencies.includes(t.id)
);
contextTasks += `\n${directDeps.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`).join('\n')}`;
contextTasks += `\n${directDeps.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`).join("\n")}`;
// Add an overview of indirect dependencies if present
const indirectDeps = dependentTasks.filter(
@@ -325,7 +325,7 @@ async function addTask(
contextTasks += `\n${indirectDeps
.slice(0, 5)
.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
.join('\n')}`;
.join("\n")}`;
if (indirectDeps.length > 5) {
contextTasks += `\n- ... and ${indirectDeps.length - 5} more indirect dependencies`;
}
@@ -336,15 +336,15 @@ async function addTask(
for (const depTask of uniqueDetailedTasks) {
const depthInfo = depthMap.get(depTask.id)
? ` (depth: ${depthMap.get(depTask.id)})`
: '';
: "";
const isDirect = numericDependencies.includes(depTask.id)
? ' [DIRECT DEPENDENCY]'
: '';
? " [DIRECT DEPENDENCY]"
: "";
contextTasks += `\n\n------ Task ${depTask.id}${isDirect}${depthInfo}: ${depTask.title} ------\n`;
contextTasks += `Description: ${depTask.description}\n`;
contextTasks += `Status: ${depTask.status || 'pending'}\n`;
contextTasks += `Priority: ${depTask.priority || 'medium'}\n`;
contextTasks += `Status: ${depTask.status || "pending"}\n`;
contextTasks += `Priority: ${depTask.priority || "medium"}\n`;
// List its dependencies
if (depTask.dependencies && depTask.dependencies.length > 0) {
@@ -354,7 +354,7 @@ async function addTask(
? `Task ${dId}: ${depDepTask.title}`
: `Task ${dId}`;
});
contextTasks += `Dependencies: ${depDeps.join(', ')}\n`;
contextTasks += `Dependencies: ${depDeps.join(", ")}\n`;
} else {
contextTasks += `Dependencies: None\n`;
}
@@ -363,7 +363,7 @@ async function addTask(
if (depTask.details) {
const truncatedDetails =
depTask.details.length > 400
? depTask.details.substring(0, 400) + '... (truncated)'
? depTask.details.substring(0, 400) + "... (truncated)"
: depTask.details;
contextTasks += `Implementation Details: ${truncatedDetails}\n`;
}
@@ -371,19 +371,19 @@ async function addTask(
// Add dependency chain visualization
if (dependencyGraphs.length > 0) {
contextTasks += '\n\nDependency Chain Visualization:';
contextTasks += "\n\nDependency Chain Visualization:";
// Helper function to format dependency chain as text
function formatDependencyChain(
node,
prefix = '',
prefix = "",
isLast = true,
depth = 0
) {
if (depth > 3) return ''; // Limit depth to avoid excessive nesting
if (depth > 3) return ""; // Limit depth to avoid excessive nesting
const connector = isLast ? '└── ' : '├── ';
const childPrefix = isLast ? ' ' : '';
const connector = isLast ? "└── " : "├── ";
const childPrefix = isLast ? " " : "";
let result = `\n${prefix}${connector}Task ${node.id}: ${node.title}`;
@@ -409,7 +409,7 @@ async function addTask(
}
// Show dependency analysis in CLI mode
if (outputFormat === 'text') {
if (outputFormat === "text") {
if (directDeps.length > 0) {
console.log(chalk.gray(` Explicitly specified dependencies:`));
directDeps.forEach((t) => {
@@ -449,14 +449,14 @@ async function addTask(
// Convert dependency graph to ASCII art for terminal
function visualizeDependencyGraph(
node,
prefix = '',
prefix = "",
isLast = true,
depth = 0
) {
if (depth > 2) return; // Limit depth for display
const connector = isLast ? '└── ' : '├── ';
const childPrefix = isLast ? ' ' : '';
const connector = isLast ? "└── " : "├── ";
const childPrefix = isLast ? " " : "";
console.log(
chalk.blue(
@@ -492,18 +492,18 @@ async function addTask(
includeScore: true, // Return match scores
threshold: 0.4, // Lower threshold = stricter matching (range 0-1)
keys: [
{ name: 'title', weight: 2 }, // Title is most important
{ name: 'description', weight: 1.5 }, // Description is next
{ name: 'details', weight: 0.8 }, // Details is less important
{ name: "title", weight: 2 }, // Title is most important
{ name: "description", weight: 1.5 }, // Description is next
{ name: "details", weight: 0.8 }, // Details is less important
// Search dependencies to find tasks that depend on similar things
{ name: 'dependencyTitles', weight: 0.5 }
{ name: "dependencyTitles", weight: 0.5 },
],
// Sort matches by score (lower is better)
shouldSort: true,
// Allow searching in nested properties
useExtendedSearch: true,
// Return up to 15 matches
limit: 15
limit: 15,
};
// Prepare task data with dependencies expanded as titles for better semantic search
@@ -514,15 +514,15 @@ async function addTask(
? task.dependencies
.map((depId) => {
const depTask = data.tasks.find((t) => t.id === depId);
return depTask ? depTask.title : '';
return depTask ? depTask.title : "";
})
.filter((title) => title)
.join(' ')
: '';
.join(" ")
: "";
return {
...task,
dependencyTitles
dependencyTitles,
};
});
@@ -532,7 +532,7 @@ async function addTask(
// Extract significant words and phrases from the prompt
const promptWords = prompt
.toLowerCase()
.replace(/[^\w\s-]/g, ' ') // Replace non-alphanumeric chars with spaces
.replace(/[^\w\s-]/g, " ") // Replace non-alphanumeric chars with spaces
.split(/\s+/)
.filter((word) => word.length > 3); // Words at least 4 chars
@@ -598,13 +598,13 @@ async function addTask(
// Also look for tasks with similar purposes or categories
const purposeCategories = [
{ pattern: /(command|cli|flag)/i, label: 'CLI commands' },
{ pattern: /(task|subtask|add)/i, label: 'Task management' },
{ pattern: /(dependency|depend)/i, label: 'Dependency handling' },
{ pattern: /(AI|model|prompt)/i, label: 'AI integration' },
{ pattern: /(UI|display|show)/i, label: 'User interface' },
{ pattern: /(schedule|time|cron)/i, label: 'Scheduling' }, // Added scheduling category
{ pattern: /(config|setting|option)/i, label: 'Configuration' } // Added configuration category
{ pattern: /(command|cli|flag)/i, label: "CLI commands" },
{ pattern: /(task|subtask|add)/i, label: "Task management" },
{ pattern: /(dependency|depend)/i, label: "Dependency handling" },
{ pattern: /(AI|model|prompt)/i, label: "AI integration" },
{ pattern: /(UI|display|show)/i, label: "User interface" },
{ pattern: /(schedule|time|cron)/i, label: "Scheduling" }, // Added scheduling category
{ pattern: /(config|setting|option)/i, label: "Configuration" }, // Added configuration category
];
promptCategory = purposeCategories.find((cat) =>
@@ -626,33 +626,33 @@ async function addTask(
if (relatedTasks.length > 0) {
contextTasks = `\nRelevant tasks identified by semantic similarity:\n${relatedTasks
.map((t, i) => {
const relevanceMarker = i < highRelevance.length ? '' : '';
const relevanceMarker = i < highRelevance.length ? "" : "";
return `- ${relevanceMarker}Task ${t.id}: ${t.title} - ${t.description}`;
})
.join('\n')}`;
.join("\n")}`;
}
if (categoryTasks.length > 0) {
contextTasks += `\n\nTasks related to ${promptCategory.label}:\n${categoryTasks
.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
.join('\n')}`;
.join("\n")}`;
}
if (
recentTasks.length > 0 &&
!contextTasks.includes('Recently created tasks')
!contextTasks.includes("Recently created tasks")
) {
contextTasks += `\n\nRecently created tasks:\n${recentTasks
.filter((t) => !relatedTasks.some((rt) => rt.id === t.id))
.slice(0, 3)
.map((t) => `- Task ${t.id}: ${t.title} - ${t.description}`)
.join('\n')}`;
.join("\n")}`;
}
// Add detailed information about the most relevant tasks
const allDetailedTasks = [
...relatedTasks.slice(0, 5),
...categoryTasks.slice(0, 2)
...categoryTasks.slice(0, 2),
];
uniqueDetailedTasks = Array.from(
new Map(allDetailedTasks.map((t) => [t.id, t])).values()
@@ -663,8 +663,8 @@ async function addTask(
for (const task of uniqueDetailedTasks) {
contextTasks += `\n\n------ Task ${task.id}: ${task.title} ------\n`;
contextTasks += `Description: ${task.description}\n`;
contextTasks += `Status: ${task.status || 'pending'}\n`;
contextTasks += `Priority: ${task.priority || 'medium'}\n`;
contextTasks += `Status: ${task.status || "pending"}\n`;
contextTasks += `Priority: ${task.priority || "medium"}\n`;
if (task.dependencies && task.dependencies.length > 0) {
// Format dependency list with titles
const depList = task.dependencies.map((depId) => {
@@ -673,13 +673,13 @@ async function addTask(
? `Task ${depId} (${depTask.title})`
: `Task ${depId}`;
});
contextTasks += `Dependencies: ${depList.join(', ')}\n`;
contextTasks += `Dependencies: ${depList.join(", ")}\n`;
}
// Add implementation details but truncate if too long
if (task.details) {
const truncatedDetails =
task.details.length > 400
? task.details.substring(0, 400) + '... (truncated)'
? task.details.substring(0, 400) + "... (truncated)"
: task.details;
contextTasks += `Implementation Details: ${truncatedDetails}\n`;
}
@@ -687,7 +687,7 @@ async function addTask(
}
// Add a concise view of the task dependency structure
contextTasks += '\n\nSummary of task dependencies in the project:';
contextTasks += "\n\nSummary of task dependencies in the project:";
// Get pending/in-progress tasks that might be most relevant based on fuzzy search
// Prioritize tasks from our similarity search
@@ -695,7 +695,7 @@ async function addTask(
const relevantPendingTasks = data.tasks
.filter(
(t) =>
(t.status === 'pending' || t.status === 'in-progress') &&
(t.status === "pending" || t.status === "in-progress") &&
// Either in our relevant set OR has relevant words in title/description
(relevantTaskIds.has(t.id) ||
promptWords.some(
@@ -709,8 +709,8 @@ async function addTask(
for (const task of relevantPendingTasks) {
const depsStr =
task.dependencies && task.dependencies.length > 0
? task.dependencies.join(', ')
: 'None';
? task.dependencies.join(", ")
: "None";
contextTasks += `\n- Task ${task.id}: depends on [${depsStr}]`;
}
@@ -726,7 +726,7 @@ async function addTask(
let commonDeps = []; // Initialize commonDeps
if (similarPurposeTasks.length > 0) {
contextTasks += `\n\nCommon patterns for ${promptCategory ? promptCategory.label : 'similar'} tasks:`;
contextTasks += `\n\nCommon patterns for ${promptCategory ? promptCategory.label : "similar"} tasks:`;
// Collect dependencies from similar purpose tasks
const similarDeps = similarPurposeTasks
@@ -746,7 +746,7 @@ async function addTask(
.slice(0, 5);
if (commonDeps.length > 0) {
contextTasks += '\nMost common dependencies for similar tasks:';
contextTasks += "\nMost common dependencies for similar tasks:";
commonDeps.forEach(([depId, count]) => {
const depTask = data.tasks.find((t) => t.id === parseInt(depId));
if (depTask) {
@@ -757,7 +757,7 @@ async function addTask(
}
// Show fuzzy search analysis in CLI mode
if (outputFormat === 'text') {
if (outputFormat === "text") {
console.log(
chalk.gray(
` Fuzzy search across ${data.tasks.length} tasks using full prompt and ${promptWords.length} keywords`
@@ -825,7 +825,7 @@ async function addTask(
const isHighRelevance = highRelevance.some(
(ht) => ht.id === t.id
);
const relevanceIndicator = isHighRelevance ? '' : '';
const relevanceIndicator = isHighRelevance ? "" : "";
console.log(
chalk.cyan(
`${relevanceIndicator}Task ${t.id}: ${truncate(t.title, 40)}`
@@ -853,26 +853,26 @@ async function addTask(
}
// Add a visual transition to show we're moving to AI generation - only for CLI
if (outputFormat === 'text') {
if (outputFormat === "text") {
console.log(
boxen(
chalk.white.bold('AI Task Generation') +
`\n\n${chalk.gray('Analyzing context and generating task details using AI...')}` +
`\n${chalk.cyan('Context size: ')}${chalk.yellow(contextTasks.length.toLocaleString())} characters` +
`\n${chalk.cyan('Dependency detection: ')}${chalk.yellow(numericDependencies.length > 0 ? 'Explicit dependencies' : 'Auto-discovery mode')}` +
`\n${chalk.cyan('Detailed tasks: ')}${chalk.yellow(
chalk.white.bold("AI Task Generation") +
`\n\n${chalk.gray("Analyzing context and generating task details using AI...")}` +
`\n${chalk.cyan("Context size: ")}${chalk.yellow(contextTasks.length.toLocaleString())} characters` +
`\n${chalk.cyan("Dependency detection: ")}${chalk.yellow(numericDependencies.length > 0 ? "Explicit dependencies" : "Auto-discovery mode")}` +
`\n${chalk.cyan("Detailed tasks: ")}${chalk.yellow(
numericDependencies.length > 0
? dependentTasks.length // Use length of tasks from explicit dependency path
: uniqueDetailedTasks.length // Use length of tasks from fuzzy search path
)}` +
(promptCategory
? `\n${chalk.cyan('Category detected: ')}${chalk.yellow(promptCategory.label)}`
: ''),
? `\n${chalk.cyan("Category detected: ")}${chalk.yellow(promptCategory.label)}`
: ""),
{
padding: { top: 0, bottom: 1, left: 1, right: 1 },
margin: { top: 1, bottom: 0 },
borderColor: 'white',
borderStyle: 'round'
borderColor: "white",
borderStyle: "round",
}
)
);
@@ -882,15 +882,15 @@ async function addTask(
// System Prompt - Enhanced for dependency awareness
const systemPrompt =
"You are a helpful assistant that creates well-structured tasks for a software development project. Generate a single new task based on the user's description, adhering strictly to the provided JSON schema. Pay special attention to dependencies between tasks, ensuring the new task correctly references any tasks it depends on.\n\n" +
'When determining dependencies for a new task, follow these principles:\n' +
'1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n' +
'2. Prioritize task dependencies that are semantically related to the functionality being built.\n' +
'3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n' +
'4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n' +
'5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n' +
"When determining dependencies for a new task, follow these principles:\n" +
"1. Select dependencies based on logical requirements - what must be completed before this task can begin.\n" +
"2. Prioritize task dependencies that are semantically related to the functionality being built.\n" +
"3. Consider both direct dependencies (immediately prerequisite) and indirect dependencies.\n" +
"4. Avoid adding unnecessary dependencies - only include tasks that are genuinely prerequisite.\n" +
"5. Consider the current status of tasks - prefer completed tasks as dependencies when possible.\n" +
"6. Pay special attention to foundation tasks (1-5) but don't automatically include them without reason.\n" +
'7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\n' +
'The dependencies array should contain task IDs (numbers) of prerequisite tasks.\n';
"7. Recent tasks (higher ID numbers) may be more relevant for newer functionality.\n\n" +
"The dependencies array should contain task IDs (numbers) of prerequisite tasks.\n";
// Task Structure Description (for user prompt)
const taskStructureDesc = `
@@ -904,7 +904,7 @@ async function addTask(
`;
// Add any manually provided details to the prompt for context
let contextFromArgs = '';
let contextFromArgs = "";
if (manualTaskData?.title)
contextFromArgs += `\n- Suggested Title: "${manualTaskData.title}"`;
if (manualTaskData?.description)
@@ -918,7 +918,7 @@ async function addTask(
const userPrompt = `You are generating the details for Task #${newTaskId}. Based on the user's request: "${prompt}", create a comprehensive new task for a software development project.
${contextTasks}
${contextFromArgs ? `\nConsider these additional details provided by the user:${contextFromArgs}` : ''}
${contextFromArgs ? `\nConsider these additional details provided by the user:${contextFromArgs}` : ""}
Based on the information about existing tasks provided above, include appropriate dependencies in the "dependencies" array. Only include task IDs that this new task directly depends on.
@@ -929,15 +929,15 @@ async function addTask(
`;
// Start the loading indicator - only for text mode
if (outputFormat === 'text') {
if (outputFormat === "text") {
loadingIndicator = startLoadingIndicator(
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI...\n`
`Generating new task with ${useResearch ? "Research" : "Main"} AI...\n`
);
}
try {
const serviceRole = useResearch ? 'research' : 'main';
report('DEBUG: Calling generateObjectService...', 'debug');
const serviceRole = useResearch ? "research" : "main";
report("DEBUG: Calling generateObjectService...", "debug");
aiServiceResponse = await generateObjectService({
// Capture the full response
@@ -945,17 +945,17 @@ async function addTask(
session: session,
projectRoot: projectRoot,
schema: AiTaskDataSchema,
objectName: 'newTaskData',
objectName: "newTaskData",
systemPrompt: systemPrompt,
prompt: userPrompt,
commandName: commandName || 'add-task', // Use passed commandName or default
outputType: outputType || (isMCP ? 'mcp' : 'cli') // Use passed outputType or derive
commandName: commandName || "add-task", // Use passed commandName or default
outputType: outputType || (isMCP ? "mcp" : "cli"), // Use passed outputType or derive
});
report('DEBUG: generateObjectService returned successfully.', 'debug');
report("DEBUG: generateObjectService returned successfully.", "debug");
if (!aiServiceResponse || !aiServiceResponse.mainResult) {
throw new Error(
'AI service did not return the expected object structure.'
"AI service did not return the expected object structure."
);
}
@@ -972,20 +972,20 @@ async function addTask(
) {
taskData = aiServiceResponse.mainResult.object;
} else {
throw new Error('AI service did not return a valid task object.');
throw new Error("AI service did not return a valid task object.");
}
report('Successfully generated task data from AI.', 'success');
report("Successfully generated task data from AI.", "success");
} catch (error) {
report(
`DEBUG: generateObjectService caught error: ${error.message}`,
'debug'
"debug"
);
report(`Error generating task with AI: ${error.message}`, 'error');
// Don't log user-facing error here - main catch block handles it
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
throw error; // Re-throw error after logging
} finally {
report('DEBUG: generateObjectService finally block reached.', 'debug');
report("DEBUG: generateObjectService finally block reached.", "debug");
if (loadingIndicator) stopLoadingIndicator(loadingIndicator); // Ensure indicator stops
}
// --- End Refactored AI Interaction ---
@@ -996,14 +996,14 @@ async function addTask(
id: newTaskId,
title: taskData.title,
description: taskData.description,
details: taskData.details || '',
testStrategy: taskData.testStrategy || '',
status: 'pending',
details: taskData.details || "",
testStrategy: taskData.testStrategy || "",
status: "pending",
dependencies: taskData.dependencies?.length
? taskData.dependencies
: numericDependencies, // Use AI-suggested dependencies if available, fallback to manually specified
priority: effectivePriority,
subtasks: [] // Initialize with empty subtasks array
subtasks: [], // Initialize with empty subtasks array
};
// Additional check: validate all dependencies in the AI response
@@ -1015,8 +1015,8 @@ async function addTask(
if (!allValidDeps) {
report(
'AI suggested invalid dependencies. Filtering them out...',
'warn'
"AI suggested invalid dependencies. Filtering them out...",
"warn"
);
newTask.dependencies = taskData.dependencies.filter((depId) => {
const numDepId = parseInt(depId, 10);
@@ -1028,48 +1028,48 @@ async function addTask(
// Add the task to the tasks array
data.tasks.push(newTask);
report('DEBUG: Writing tasks.json...', 'debug');
report("DEBUG: Writing tasks.json...", "debug");
// Write the updated tasks to the file
writeJSON(tasksPath, data);
report('DEBUG: tasks.json written.', 'debug');
report("DEBUG: tasks.json written.", "debug");
// Generate markdown task files
report('Generating task files...', 'info');
report('DEBUG: Calling generateTaskFiles...', 'debug');
report("Generating task files...", "info");
report("DEBUG: Calling generateTaskFiles...", "debug");
// Pass mcpLog if available to generateTaskFiles
await generateTaskFiles(tasksPath, path.dirname(tasksPath), { mcpLog });
report('DEBUG: generateTaskFiles finished.', 'debug');
report("DEBUG: generateTaskFiles finished.", "debug");
// Show success message - only for text output (CLI)
if (outputFormat === 'text') {
if (outputFormat === "text") {
const table = new Table({
head: [
chalk.cyan.bold('ID'),
chalk.cyan.bold('Title'),
chalk.cyan.bold('Description')
chalk.cyan.bold("ID"),
chalk.cyan.bold("Title"),
chalk.cyan.bold("Description"),
],
colWidths: [5, 30, 50] // Adjust widths as needed
colWidths: [5, 30, 50], // Adjust widths as needed
});
table.push([
newTask.id,
truncate(newTask.title, 27),
truncate(newTask.description, 47)
truncate(newTask.description, 47),
]);
console.log(chalk.green('✅ New task created successfully:'));
console.log(chalk.green("✅ New task created successfully:"));
console.log(table.toString());
// Helper to get priority color
const getPriorityColor = (p) => {
switch (p?.toLowerCase()) {
case 'high':
return 'red';
case 'low':
return 'gray';
case 'medium':
case "high":
return "red";
case "low":
return "gray";
case "medium":
default:
return 'yellow';
return "yellow";
}
};
@@ -1093,49 +1093,49 @@ async function addTask(
});
// Prepare dependency display string
let dependencyDisplay = '';
let dependencyDisplay = "";
if (newTask.dependencies.length > 0) {
dependencyDisplay = chalk.white('Dependencies:') + '\n';
dependencyDisplay = chalk.white("Dependencies:") + "\n";
newTask.dependencies.forEach((dep) => {
const isAiAdded = aiAddedDeps.includes(dep);
const depType = isAiAdded ? chalk.yellow(' (AI suggested)') : '';
const depType = isAiAdded ? chalk.yellow(" (AI suggested)") : "";
dependencyDisplay +=
chalk.white(
` - ${dep}: ${depTitles[dep] || 'Unknown task'}${depType}`
) + '\n';
` - ${dep}: ${depTitles[dep] || "Unknown task"}${depType}`
) + "\n";
});
} else {
dependencyDisplay = chalk.white('Dependencies: None') + '\n';
dependencyDisplay = chalk.white("Dependencies: None") + "\n";
}
// Add info about removed dependencies if any
if (aiRemovedDeps.length > 0) {
dependencyDisplay +=
chalk.gray('\nUser-specified dependencies that were not used:') +
'\n';
chalk.gray("\nUser-specified dependencies that were not used:") +
"\n";
aiRemovedDeps.forEach((dep) => {
const depTask = data.tasks.find((t) => t.id === dep);
const title = depTask ? truncate(depTask.title, 30) : 'Unknown task';
dependencyDisplay += chalk.gray(` - ${dep}: ${title}`) + '\n';
const title = depTask ? truncate(depTask.title, 30) : "Unknown task";
dependencyDisplay += chalk.gray(` - ${dep}: ${title}`) + "\n";
});
}
// Add dependency analysis summary
let dependencyAnalysis = '';
let dependencyAnalysis = "";
if (aiAddedDeps.length > 0 || aiRemovedDeps.length > 0) {
dependencyAnalysis =
'\n' + chalk.white.bold('Dependency Analysis:') + '\n';
"\n" + chalk.white.bold("Dependency Analysis:") + "\n";
if (aiAddedDeps.length > 0) {
dependencyAnalysis +=
chalk.green(
`AI identified ${aiAddedDeps.length} additional dependencies`
) + '\n';
) + "\n";
}
if (aiRemovedDeps.length > 0) {
dependencyAnalysis +=
chalk.yellow(
`AI excluded ${aiRemovedDeps.length} user-provided dependencies`
) + '\n';
) + "\n";
}
}
@@ -1143,32 +1143,32 @@ async function addTask(
console.log(
boxen(
chalk.white.bold(`Task ${newTaskId} Created Successfully`) +
'\n\n' +
"\n\n" +
chalk.white(`Title: ${newTask.title}`) +
'\n' +
"\n" +
chalk.white(`Status: ${getStatusWithColor(newTask.status)}`) +
'\n' +
"\n" +
chalk.white(
`Priority: ${chalk[getPriorityColor(newTask.priority)](newTask.priority)}`
) +
'\n\n' +
"\n\n" +
dependencyDisplay +
dependencyAnalysis +
'\n' +
chalk.white.bold('Next Steps:') +
'\n' +
"\n" +
chalk.white.bold("Next Steps:") +
"\n" +
chalk.cyan(
`1. Run ${chalk.yellow(`task-master show ${newTaskId}`)} to see complete task details`
) +
'\n' +
"\n" +
chalk.cyan(
`2. Run ${chalk.yellow(`task-master set-status --id=${newTaskId} --status=in-progress`)} to start working on it`
) +
'\n' +
"\n" +
chalk.cyan(
`3. Run ${chalk.yellow(`task-master expand --id=${newTaskId}`)} to break it down into subtasks`
),
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
{ padding: 1, borderColor: "green", borderStyle: "round" }
)
);
@@ -1176,19 +1176,19 @@ async function addTask(
if (
aiServiceResponse &&
aiServiceResponse.telemetryData &&
(outputType === 'cli' || outputType === 'text')
(outputType === "cli" || outputType === "text")
) {
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
displayAiUsageSummary(aiServiceResponse.telemetryData, "cli");
}
}
report(
`DEBUG: Returning new task ID: ${newTaskId} and telemetry.`,
'debug'
"debug"
);
return {
newTaskId: newTaskId,
telemetryData: aiServiceResponse ? aiServiceResponse.telemetryData : null
telemetryData: aiServiceResponse ? aiServiceResponse.telemetryData : null,
};
} catch (error) {
// Stop any loading indicator on error
@@ -1196,8 +1196,8 @@ async function addTask(
stopLoadingIndicator(loadingIndicator);
}
report(`Error adding task: ${error.message}`, 'error');
if (outputFormat === 'text') {
report(`Error adding task: ${error.message}`, "error");
if (outputFormat === "text") {
console.error(chalk.red(`Error: ${error.message}`));
}
// In MCP mode, we let the direct function handler catch and format
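The fuzzy-search block above wires Fuse.js into addTask to surface semantically related tasks before AI generation. Below is a minimal standalone sketch of that configuration, assuming fuse.js is installed; the sample tasks and the query string are hypothetical, and only the keys, weights, and options mirror the diff above.

import Fuse from "fuse.js";

// Hypothetical task list; dependencyTitles is flattened the same way as in addTask above.
const tasks = [
  { id: 1, title: "Set up CLI argument parsing", description: "Add commander flags", details: "", dependencies: [] },
  { id: 2, title: "Add task command", description: "Create tasks from a prompt", details: "", dependencies: [1] },
];

const searchableTasks = tasks.map((task) => ({
  ...task,
  dependencyTitles: (task.dependencies || [])
    .map((depId) => tasks.find((t) => t.id === depId)?.title || "")
    .filter((title) => title)
    .join(" "),
}));

// Options mirrored from the diff above (lower threshold = stricter matching).
const searchOptions = {
  includeScore: true,
  threshold: 0.4,
  keys: [
    { name: "title", weight: 2 },
    { name: "description", weight: 1.5 },
    { name: "details", weight: 0.8 },
    { name: "dependencyTitles", weight: 0.5 },
  ],
  shouldSort: true,
  useExtendedSearch: true,
  limit: 15,
};

const fuse = new Fuse(searchableTasks, searchOptions);
const matches = fuse.search("add a new cli flag"); // lower score = better match
console.log(matches.map((m) => ({ id: m.item.id, score: m.score })));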

View File

@@ -1,7 +1,7 @@
import path from 'path';
import { log, readJSON, writeJSON } from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import generateTaskFiles from './generate-task-files.js';
import path from "path";
import { log, readJSON, writeJSON } from "../utils.js";
import { isTaskDependentOn } from "../task-manager.js";
import generateTaskFiles from "./generate-task-files.js";
/**
* Move one or more tasks/subtasks to new positions
@@ -18,8 +18,8 @@ async function moveTask(
generateFiles = true
) {
// Check if we have comma-separated IDs (multiple moves)
const sourceIds = sourceId.split(',').map((id) => id.trim());
const destinationIds = destinationId.split(',').map((id) => id.trim());
const sourceIds = sourceId.split(",").map((id) => id.trim());
const destinationIds = destinationId.split(",").map((id) => id.trim());
// If multiple IDs, validate they match in count
if (sourceIds.length > 1 || destinationIds.length > 1) {
@@ -63,8 +63,8 @@ async function moveMultipleTasks(
) {
try {
log(
'info',
`Moving multiple tasks/subtasks: ${sourceIds.join(', ')} to ${destinationIds.join(', ')}...`
"info",
`Moving multiple tasks/subtasks: ${sourceIds.join(", ")} to ${destinationIds.join(", ")}...`
);
const results = [];
@@ -82,16 +82,16 @@ async function moveMultipleTasks(
// Generate task files once at the end if requested
if (generateFiles) {
log('info', 'Regenerating task files...');
log("info", "Regenerating task files...");
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}
return {
message: `Successfully moved ${sourceIds.length} tasks/subtasks`,
moves: results
moves: results,
};
} catch (error) {
log('error', `Error moving multiple tasks/subtasks: ${error.message}`);
log("error", `Error moving multiple tasks/subtasks: ${error.message}`);
throw error;
}
}
@@ -111,7 +111,7 @@ async function moveSingleTask(
generateFiles = true
) {
try {
log('info', `Moving task/subtask ${sourceId} to ${destinationId}...`);
log("info", `Moving task/subtask ${sourceId} to ${destinationId}...`);
// Read the existing tasks
const data = readJSON(tasksPath);
@@ -120,7 +120,7 @@ async function moveSingleTask(
}
// Parse source ID to determine if it's a task or subtask
const isSourceSubtask = sourceId.includes('.');
const isSourceSubtask = sourceId.includes(".");
let sourceTask,
sourceParentTask,
sourceSubtask,
@@ -128,13 +128,13 @@ async function moveSingleTask(
sourceSubtaskIndex;
// Parse destination ID to determine the target
const isDestinationSubtask = destinationId.includes('.');
const isDestinationSubtask = destinationId.includes(".");
let destTask, destParentTask, destSubtask, destTaskIndex, destSubtaskIndex;
// Validate source exists
if (isSourceSubtask) {
// Source is a subtask
const [parentIdStr, subtaskIdStr] = sourceId.split('.');
const [parentIdStr, subtaskIdStr] = sourceId.split(".");
const parentIdNum = parseInt(parentIdStr, 10);
const subtaskIdNum = parseInt(subtaskIdStr, 10);
@@ -172,7 +172,7 @@ async function moveSingleTask(
// Validate destination exists
if (isDestinationSubtask) {
// Destination is a subtask (target will be the parent of this subtask)
const [parentIdStr, subtaskIdStr] = destinationId.split('.');
const [parentIdStr, subtaskIdStr] = destinationId.split(".");
const parentIdNum = parseInt(parentIdStr, 10);
const subtaskIdNum = parseInt(subtaskIdStr, 10);
@@ -210,15 +210,15 @@ async function moveSingleTask(
if (destTaskIndex === -1) {
// Create placeholder for destination if it doesn't exist
log('info', `Creating placeholder for destination task ${destIdNum}`);
log("info", `Creating placeholder for destination task ${destIdNum}`);
const newTask = {
id: destIdNum,
title: `Task ${destIdNum}`,
description: '',
status: 'pending',
priority: 'medium',
details: '',
testStrategy: ''
description: "",
status: "pending",
priority: "medium",
details: "",
testStrategy: "",
};
// Find correct position to insert the new task
@@ -241,14 +241,14 @@ async function moveSingleTask(
// Validate that we aren't trying to move a task to itself
if (sourceId === destinationId) {
throw new Error('Cannot move a task/subtask to itself');
throw new Error("Cannot move a task/subtask to itself");
}
// Prevent moving a parent to its own subtask
if (!isSourceSubtask && isDestinationSubtask) {
const destParentId = parseInt(destinationId.split('.')[0], 10);
const destParentId = parseInt(destinationId.split(".")[0], 10);
if (parseInt(sourceId, 10) === destParentId) {
throw new Error('Cannot move a parent task to one of its own subtasks');
throw new Error("Cannot move a parent task to one of its own subtasks");
}
}
@@ -300,8 +300,8 @@ async function moveSingleTask(
} else if (isSourceSubtask && isDestinationSubtask) {
// Case 4: Move subtask to another parent or position
// First check if it's the same parent
const sourceParentId = parseInt(sourceId.split('.')[0], 10);
const destParentId = parseInt(destinationId.split('.')[0], 10);
const sourceParentId = parseInt(sourceId.split(".")[0], 10);
const destParentId = parseInt(destinationId.split(".")[0], 10);
if (sourceParentId === destParentId) {
// Case 4a: Move subtask within the same parent (reordering)
@@ -327,13 +327,13 @@ async function moveSingleTask(
// Generate task files if requested
if (generateFiles) {
log('info', 'Regenerating task files...');
log("info", "Regenerating task files...");
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
}
return movedTask;
} catch (error) {
log('error', `Error moving task/subtask: ${error.message}`);
log("error", `Error moving task/subtask: ${error.message}`);
throw error;
}
}
@@ -363,7 +363,7 @@ function moveTaskToTask(data, sourceTask, sourceTaskIndex, destTask) {
const newSubtask = {
...sourceTask,
id: newSubtaskId,
parentTaskId: destTask.id
parentTaskId: destTask.id,
};
// Add to destination's subtasks
@@ -373,7 +373,7 @@ function moveTaskToTask(data, sourceTask, sourceTaskIndex, destTask) {
data.tasks.splice(sourceTaskIndex, 1);
log(
'info',
"info",
`Moved task ${sourceTask.id} to become subtask ${destTask.id}.${newSubtaskId}`
);
@@ -412,7 +412,7 @@ function moveTaskToSubtaskPosition(
const newSubtask = {
...sourceTask,
id: newSubtaskId,
parentTaskId: destParentTask.id
parentTaskId: destParentTask.id,
};
// Insert at specific position
@@ -425,7 +425,7 @@ function moveTaskToSubtaskPosition(
data.tasks.splice(sourceTaskIndex, 1);
log(
'info',
"info",
`Moved task ${sourceTask.id} to become subtask ${destParentTask.id}.${newSubtaskId}`
);
@@ -438,7 +438,7 @@ function moveTaskToSubtaskPosition(
* @param {Object} sourceSubtask - Source subtask to move
* @param {Object} sourceParentTask - Parent task of the source subtask
* @param {number} sourceSubtaskIndex - Index of source subtask in parent's subtasks
* @param {Object} destTask - Destination task (for position reference)
* @param {Object} destTask - Destination task (will be replaced)
* @returns {Object} Moved task object
*/
function moveSubtaskToTask(
@@ -448,15 +448,14 @@ function moveSubtaskToTask(
sourceSubtaskIndex,
destTask
) {
// Find the highest task ID to determine the next ID
const highestId = Math.max(...data.tasks.map((t) => t.id));
const newTaskId = highestId + 1;
// Use the destination task's ID instead of generating a new one
const newTaskId = destTask.id;
// Create the new task from the subtask
// Create the new task from the subtask, using the destination task's ID
const newTask = {
...sourceSubtask,
id: newTaskId,
priority: sourceParentTask.priority || 'medium' // Inherit priority from parent
priority: sourceParentTask.priority || "medium", // Inherit priority from parent
};
delete newTask.parentTaskId;
@@ -468,11 +467,11 @@ function moveSubtaskToTask(
newTask.dependencies.push(sourceParentTask.id);
}
// Find the destination index to insert the new task
// Find the destination index to replace the destination task
const destTaskIndex = data.tasks.findIndex((t) => t.id === destTask.id);
// Insert the new task after the destination task
data.tasks.splice(destTaskIndex + 1, 0, newTask);
// Replace the destination task with the new task
data.tasks[destTaskIndex] = newTask;
// Remove the subtask from the parent
sourceParentTask.subtasks.splice(sourceSubtaskIndex, 1);
@@ -483,7 +482,7 @@ function moveSubtaskToTask(
}
log(
'info',
"info",
`Moved subtask ${sourceParentTask.id}.${sourceSubtask.id} to become task ${newTaskId}`
);
@@ -510,7 +509,7 @@ function reorderSubtask(parentTask, sourceIndex, destIndex) {
parentTask.subtasks.splice(adjustedDestIndex, 0, subtask);
log(
'info',
"info",
`Reordered subtask ${parentTask.id}.${subtask.id} within parent task ${parentTask.id}`
);
@@ -544,7 +543,7 @@ function moveSubtaskToAnotherParent(
const newSubtask = {
...sourceSubtask,
id: newSubtaskId,
parentTaskId: destParentTask.id
parentTaskId: destParentTask.id,
};
// If the subtask depends on its original parent, keep that dependency
@@ -570,7 +569,7 @@ function moveSubtaskToAnotherParent(
}
log(
'info',
"info",
`Moved subtask ${sourceParentTask.id}.${sourceSubtask.id} to become subtask ${destParentTask.id}.${newSubtaskId}`
);
@@ -596,7 +595,7 @@ function moveTaskToNewId(
// Create a copy of the source task with the new ID
const movedTask = {
...sourceTask,
id: destTask.id
id: destTask.id,
};
// Get numeric IDs for comparison
@@ -608,7 +607,7 @@ function moveTaskToNewId(
// Update subtasks to reference the new parent ID if needed
movedTask.subtasks = sourceTask.subtasks.map((subtask) => ({
...subtask,
parentTaskId: destIdNum
parentTaskId: destIdNum,
}));
}
@@ -650,7 +649,7 @@ function moveTaskToNewId(
data.tasks.splice(sourceTaskIndex, 0, movedTask);
}
log('info', `Moved task ${sourceIdNum} to replace task ${destIdNum}`);
log("info", `Moved task ${sourceIdNum} to replace task ${destIdNum}`);
return movedTask;
}
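For reference, a tiny standalone sketch of the ID conventions moveTask relies on above: comma-separated lists pair up into multiple moves (and must match in count), and a dot marks a subtask ID in the form "parentId.subtaskId". The sample IDs are hypothetical.

// Hypothetical inputs; the splitting/trimming mirrors moveTask above.
const sourceId = "5, 7.2";
const destinationId = "9, 3.1";

const sourceIds = sourceId.split(",").map((id) => id.trim());
const destinationIds = destinationId.split(",").map((id) => id.trim());

if (sourceIds.length !== destinationIds.length) {
  throw new Error("Source and destination ID counts must match for multiple moves");
}

sourceIds.forEach((src, i) => {
  const dest = destinationIds[i];
  if (src.includes(".")) {
    const [parentIdStr, subtaskIdStr] = src.split(".");
    console.log(`Move subtask ${parentIdStr}.${subtaskIdStr} -> ${dest}`);
  } else {
    console.log(`Move task ${src} -> ${dest}`);
  }
});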

View File

@@ -0,0 +1,384 @@
import fs from "fs";
import path from "path";
import { submitTelemetryData } from "./telemetry-submission.js";
import { getDebugFlag } from "./config-manager.js";
import { log } from "./utils.js";
class TelemetryQueue {
constructor() {
this.queue = [];
this.processing = false;
this.backgroundInterval = null;
this.stats = {
pending: 0,
processed: 0,
failed: 0,
lastProcessedAt: null,
};
this.logFile = null;
}
/**
* Initialize the queue with comprehensive logging file path
* @param {string} projectRoot - Project root directory for log file
*/
initialize(projectRoot) {
if (projectRoot) {
this.logFile = path.join(projectRoot, ".taskmaster-activity.log");
this.loadPersistedQueue();
}
}
/**
* Add telemetry data to queue without blocking
* @param {Object} telemetryData - Command telemetry data
*/
addToQueue(telemetryData) {
const queueItem = {
...telemetryData,
queuedAt: new Date().toISOString(),
attempts: 0,
};
this.queue.push(queueItem);
this.stats.pending = this.queue.length;
// Log the activity immediately to .log file
this.logActivity("QUEUED", {
commandName: telemetryData.commandName,
queuedAt: queueItem.queuedAt,
userId: telemetryData.userId,
success: telemetryData.success,
executionTimeMs: telemetryData.executionTimeMs,
});
if (getDebugFlag()) {
log("debug", `Added ${telemetryData.commandName} to telemetry queue`);
}
// Persist queue state if file is configured
this.persistQueue();
}
/**
* Log activity to comprehensive .log file
* @param {string} action - The action being logged (QUEUED, SUBMITTED, FAILED, etc.)
* @param {Object} data - The data to log
*/
logActivity(action, data) {
if (!this.logFile) return;
try {
const timestamp = new Date().toISOString();
const logEntry = `${timestamp} [${action}] ${JSON.stringify(data)}\n`;
fs.appendFileSync(this.logFile, logEntry);
} catch (error) {
if (getDebugFlag()) {
log("error", `Failed to write to activity log: ${error.message}`);
}
}
}
/**
* Process all queued telemetry items
* @returns {Object} Processing result with stats
*/
async processQueue() {
if (this.processing || this.queue.length === 0) {
return { processed: 0, failed: 0, errors: [] };
}
this.processing = true;
const errors = [];
let processed = 0;
let failed = 0;
this.logActivity("PROCESSING_START", { queueSize: this.queue.length });
// Process items in batches to avoid overwhelming the gateway
const batchSize = 5;
const itemsToProcess = [...this.queue];
for (let i = 0; i < itemsToProcess.length; i += batchSize) {
const batch = itemsToProcess.slice(i, i + batchSize);
for (const item of batch) {
try {
item.attempts++;
const result = await submitTelemetryData(item);
if (result.success) {
// Remove from queue on success
const index = this.queue.findIndex(
(q) => q.queuedAt === item.queuedAt
);
if (index > -1) {
this.queue.splice(index, 1);
}
processed++;
// Log successful submission
this.logActivity("SUBMITTED", {
commandName: item.commandName,
queuedAt: item.queuedAt,
attempts: item.attempts,
});
} else {
// Retry failed items up to 3 times
if (item.attempts >= 3) {
const index = this.queue.findIndex(
(q) => q.queuedAt === item.queuedAt
);
if (index > -1) {
this.queue.splice(index, 1);
}
failed++;
const errorMsg = `Failed to submit ${item.commandName} after 3 attempts: ${result.error}`;
errors.push(errorMsg);
// Log final failure
this.logActivity("FAILED", {
commandName: item.commandName,
queuedAt: item.queuedAt,
attempts: item.attempts,
error: result.error,
});
} else {
// Log retry attempt
this.logActivity("RETRY", {
commandName: item.commandName,
queuedAt: item.queuedAt,
attempts: item.attempts,
error: result.error,
});
}
}
} catch (error) {
// Network or unexpected errors
if (item.attempts >= 3) {
const index = this.queue.findIndex(
(q) => q.queuedAt === item.queuedAt
);
if (index > -1) {
this.queue.splice(index, 1);
}
failed++;
const errorMsg = `Exception submitting ${item.commandName}: ${error.message}`;
errors.push(errorMsg);
// Log exception failure
this.logActivity("EXCEPTION", {
commandName: item.commandName,
queuedAt: item.queuedAt,
attempts: item.attempts,
error: error.message,
});
} else {
// Log retry for exception
this.logActivity("RETRY_EXCEPTION", {
commandName: item.commandName,
queuedAt: item.queuedAt,
attempts: item.attempts,
error: error.message,
});
}
}
}
// Small delay between batches
if (i + batchSize < itemsToProcess.length) {
await new Promise((resolve) => setTimeout(resolve, 100));
}
}
this.stats.pending = this.queue.length;
this.stats.processed += processed;
this.stats.failed += failed;
this.stats.lastProcessedAt = new Date().toISOString();
this.processing = false;
this.persistQueue();
// Log processing completion
this.logActivity("PROCESSING_COMPLETE", {
processed,
failed,
remainingInQueue: this.queue.length,
});
if (getDebugFlag() && (processed > 0 || failed > 0)) {
log(
"debug",
`Telemetry queue processed: ${processed} success, ${failed} failed`
);
}
return { processed, failed, errors };
}
/**
* Start background processing at specified interval
* @param {number} intervalMs - Processing interval in milliseconds (default: 30000)
*/
startBackgroundProcessor(intervalMs = 30000) {
if (this.backgroundInterval) {
clearInterval(this.backgroundInterval);
}
this.backgroundInterval = setInterval(async () => {
try {
await this.processQueue();
} catch (error) {
if (getDebugFlag()) {
log(
"error",
`Background telemetry processing error: ${error.message}`
);
}
}
}, intervalMs);
if (getDebugFlag()) {
log(
"debug",
`Started telemetry background processor (${intervalMs}ms interval)`
);
}
}
/**
* Stop background processing
*/
stopBackgroundProcessor() {
if (this.backgroundInterval) {
clearInterval(this.backgroundInterval);
this.backgroundInterval = null;
if (getDebugFlag()) {
log("debug", "Stopped telemetry background processor");
}
}
}
/**
* Get queue statistics
* @returns {Object} Queue stats
*/
getQueueStats() {
return {
...this.stats,
pending: this.queue.length,
};
}
/**
* Load persisted queue from file (now reads from .log file)
*/
loadPersistedQueue() {
// For the .log file, we'll look for a companion .json file for queue state
if (!this.logFile) return;
const stateFile = this.logFile.replace(".log", "-queue-state.json");
if (!fs.existsSync(stateFile)) {
return;
}
try {
const data = fs.readFileSync(stateFile, "utf8");
const persistedData = JSON.parse(data);
this.queue = persistedData.queue || [];
this.stats = { ...this.stats, ...persistedData.stats };
if (getDebugFlag()) {
log(
"debug",
`Loaded ${this.queue.length} items from telemetry queue state`
);
}
} catch (error) {
if (getDebugFlag()) {
log(
"error",
`Failed to load persisted telemetry queue: ${error.message}`
);
}
}
}
/**
* Persist queue state to companion file
*/
persistQueue() {
if (!this.logFile) return;
const stateFile = this.logFile.replace(".log", "-queue-state.json");
try {
const data = {
queue: this.queue,
stats: this.stats,
lastUpdated: new Date().toISOString(),
};
fs.writeFileSync(stateFile, JSON.stringify(data, null, 2));
} catch (error) {
if (getDebugFlag()) {
log("error", `Failed to persist telemetry queue: ${error.message}`);
}
}
}
}
// Global instance
const telemetryQueue = new TelemetryQueue();
/**
* Add command telemetry to queue (non-blocking)
* @param {Object} commandData - Command execution data
*/
export function queueCommandTelemetry(commandData) {
telemetryQueue.addToQueue(commandData);
}
/**
* Initialize telemetry queue with project root
* @param {string} projectRoot - Project root directory
*/
export function initializeTelemetryQueue(projectRoot) {
telemetryQueue.initialize(projectRoot);
}
/**
* Start background telemetry processing
* @param {number} intervalMs - Processing interval in milliseconds
*/
export function startTelemetryBackgroundProcessor(intervalMs = 30000) {
telemetryQueue.startBackgroundProcessor(intervalMs);
}
/**
* Stop background telemetry processing
*/
export function stopTelemetryBackgroundProcessor() {
telemetryQueue.stopBackgroundProcessor();
}
/**
* Get telemetry queue statistics
* @returns {Object} Queue statistics
*/
export function getTelemetryQueueStats() {
return telemetryQueue.getQueueStats();
}
/**
* Manually process telemetry queue
* @returns {Object} Processing result
*/
export function processTelemetryQueue() {
return telemetryQueue.processQueue();
}
export { telemetryQueue };
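A brief usage sketch of the queue API exported above. The project root and command data values are hypothetical placeholders; the field names follow what addToQueue logs (commandName, userId, success, executionTimeMs).

import {
  initializeTelemetryQueue,
  queueCommandTelemetry,
  startTelemetryBackgroundProcessor,
  stopTelemetryBackgroundProcessor,
  getTelemetryQueueStats,
} from "./telemetry-queue.js";

initializeTelemetryQueue("/path/to/project"); // hypothetical root; writes .taskmaster-activity.log there
startTelemetryBackgroundProcessor(30000); // flush the queue every 30s in the background

queueCommandTelemetry({
  commandName: "add-task",
  userId: "user-123", // hypothetical
  success: true,
  executionTimeMs: 1250,
});

console.log(getTelemetryQueueStats()); // { pending, processed, failed, lastProcessedAt }

// On shutdown, stop the interval so the process can exit cleanly.
stopTelemetryBackgroundProcessor();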

View File

@@ -0,0 +1,238 @@
/**
* Telemetry Submission Service
* Handles sending telemetry data to remote gateway endpoint
*/
import { z } from "zod";
import { getConfig } from "./config-manager.js";
import { getTelemetryEnabled } from "./config-manager.js";
import { resolveEnvVariable } from "./utils.js";
// Telemetry data validation schema
const TelemetryDataSchema = z.object({
timestamp: z.string().datetime(),
userId: z.string().min(1),
commandName: z.string().min(1),
modelUsed: z.string().optional(),
providerName: z.string().optional(),
inputTokens: z.number().optional(),
outputTokens: z.number().optional(),
totalTokens: z.number().optional(),
totalCost: z.number().optional(),
currency: z.string().optional(),
commandArgs: z.any().optional(),
fullOutput: z.any().optional(),
});
// Hardcoded configuration for TaskMaster telemetry gateway
const TASKMASTER_BASE_URL = "http://localhost:4444";
const TASKMASTER_TELEMETRY_ENDPOINT = `${TASKMASTER_BASE_URL}/api/v1/telemetry`;
const TASKMASTER_USER_REGISTRATION_ENDPOINT = `${TASKMASTER_BASE_URL}/auth/init`;
const MAX_RETRIES = 3;
const RETRY_DELAY = 1000; // 1 second
/**
* Get telemetry configuration from hardcoded service ID, user token, and config
* @returns {Object} Configuration object with serviceId, apiKey, userId, and email
*/
function getTelemetryConfig() {
// Get the config which contains userId and email
const config = getConfig();
// Hardcoded service ID for TaskMaster telemetry service
const hardcodedServiceId = "98fb3198-2dfc-42d1-af53-07b99e4f3bde";
// Get user's API token from .env (managed by user-management.js)
const userApiKey = resolveEnvVariable("TASKMASTER_API_KEY");
return {
serviceId: hardcodedServiceId, // Hardcoded service identifier
apiKey: userApiKey || null, // User's Bearer token from .env
userId: config?.account?.userId || null, // From config
email: config?.account?.email || null, // From config
};
}
/**
* Register or lookup user with the TaskMaster telemetry gateway using /auth/init
* @param {string} email - User's email address
* @param {string} userId - User's ID
* @returns {Promise<{success: boolean, apiKey?: string, userId?: string, email?: string, isNewUser?: boolean, error?: string}>}
*/
export async function registerUserWithGateway(email = null, userId = null) {
try {
const requestBody = {};
if (email) requestBody.email = email;
if (userId) requestBody.userId = userId;
const response = await fetch(TASKMASTER_USER_REGISTRATION_ENDPOINT, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(requestBody),
});
if (!response.ok) {
return {
success: false,
error: `Gateway registration failed: ${response.status} ${response.statusText}`,
};
}
const result = await response.json();
// Handle the /auth/init response format
if (result.success && result.data) {
return {
success: true,
apiKey: result.data.token,
userId: result.data.userId,
email: email,
isNewUser: result.data.isNewUser,
};
} else {
return {
success: false,
error: result.error || result.message || "Unknown registration error",
};
}
} catch (error) {
return {
success: false,
error: `Gateway registration error: ${error.message}`,
};
}
}
/**
* Submits telemetry data to the remote gateway endpoint
* @param {Object} telemetryData - The telemetry data to submit
* @returns {Promise<Object>} - Result object with success status and details
*/
export async function submitTelemetryData(telemetryData) {
try {
// Check user opt-out preferences first, but hosted mode always sends telemetry
const config = getConfig();
const isHostedMode = config?.account?.mode === "hosted";
if (!isHostedMode && !getTelemetryEnabled()) {
return {
success: true,
skipped: true,
reason: "Telemetry disabled by user preference",
};
}
// Get telemetry configuration
const telemetryConfig = getTelemetryConfig();
if (
!telemetryConfig.apiKey ||
!telemetryConfig.userId ||
!telemetryConfig.email
) {
return {
success: false,
error:
"Telemetry configuration incomplete. Please ensure you have completed 'task-master init' to set up your user account.",
};
}
// Validate telemetry data
try {
TelemetryDataSchema.parse(telemetryData);
} catch (validationError) {
return {
success: false,
error: `Telemetry data validation failed: ${validationError.message}`,
};
}
// Send FULL telemetry data to gateway (including commandArgs and fullOutput)
// Note: Sensitive data filtering is handled separately for user-facing responses
const completeTelemetryData = {
...telemetryData,
userId: telemetryConfig.userId, // Ensure correct userId
};
// Attempt submission with retry logic
let lastError;
for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
try {
const response = await fetch(TASKMASTER_TELEMETRY_ENDPOINT, {
method: "POST",
headers: {
"Content-Type": "application/json",
"x-taskmaster-service-id": telemetryConfig.serviceId, // Hardcoded service ID
Authorization: `Bearer ${telemetryConfig.apiKey}`, // User's Bearer token
"X-User-Email": telemetryConfig.email, // User's email from config
},
body: JSON.stringify(completeTelemetryData),
});
if (response.ok) {
const result = await response.json();
return {
success: true,
id: result.id,
attempt,
};
} else {
// Handle HTTP error responses
const errorData = await response.json().catch(() => ({}));
const errorMessage = `HTTP ${response.status} ${response.statusText}`;
// Don't retry on certain status codes (rate limiting, auth errors, etc.)
if (
response.status === 429 ||
response.status === 401 ||
response.status === 403
) {
return {
success: false,
error: errorMessage,
statusCode: response.status,
};
}
// For other HTTP errors, continue retrying
lastError = new Error(errorMessage);
}
} catch (networkError) {
lastError = networkError;
}
// Wait before retry (exponential backoff)
if (attempt < MAX_RETRIES) {
await new Promise((resolve) =>
setTimeout(resolve, RETRY_DELAY * Math.pow(2, attempt - 1))
);
}
}
// All retries failed
return {
success: false,
error: lastError.message,
attempts: MAX_RETRIES,
};
} catch (error) {
// Graceful error handling - never throw
return {
success: false,
error: `Telemetry submission failed: ${error.message}`,
};
}
}
/**
* Submits telemetry data asynchronously without blocking execution
* @param {Object} telemetryData - The telemetry data to submit
*/
export function submitTelemetryDataAsync(telemetryData) {
// Fire and forget - don't block execution
submitTelemetryData(telemetryData).catch((error) => {
// Silently log errors without blocking
console.debug("Telemetry submission failed:", error);
});
}
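A minimal sketch of calling the submission service above. The required fields (timestamp, userId, commandName) follow TelemetryDataSchema; every concrete value here, including the model name, is hypothetical. Note that the retry loop above waits RETRY_DELAY * 2^(attempt - 1) between attempts, i.e. roughly 1s after the first failure and 2s after the second, across MAX_RETRIES = 3 attempts.

import { submitTelemetryData, submitTelemetryDataAsync } from "./telemetry-submission.js";

const telemetryData = {
  timestamp: new Date().toISOString(),
  userId: "user-123", // hypothetical; normally sourced from .taskmasterconfig
  commandName: "add-task",
  modelUsed: "example-model", // optional schema fields, hypothetical values
  inputTokens: 1200,
  outputTokens: 400,
  totalCost: 0.012,
  currency: "USD",
};

// Await the result when the caller needs to know whether submission succeeded...
const result = await submitTelemetryData(telemetryData);
if (!result.success && !result.skipped) {
  console.warn(`Telemetry not sent: ${result.error}`);
}

// ...or fire-and-forget so the CLI command never blocks on telemetry.
submitTelemetryDataAsync(telemetryData);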

File diff suppressed because it is too large

View File

@@ -0,0 +1,516 @@
import fs from "fs";
import path from "path";
import { log, findProjectRoot } from "./utils.js";
import { getConfig, writeConfig, getUserId } from "./config-manager.js";
/**
* Registers or finds a user via the gateway's /auth/init endpoint
* @param {string|null} email - Optional user's email address (only needed for billing)
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {Promise<{success: boolean, userId: string, token: string, isNewUser: boolean, error?: string}>}
*/
async function registerUserWithGateway(email = null, explicitRoot = null) {
try {
const gatewayUrl =
process.env.TASKMASTER_GATEWAY_URL || "http://localhost:4444";
// Check for existing userId and email to pass to gateway
const existingUserId = getUserId(explicitRoot);
const existingEmail = email || getUserEmail(explicitRoot);
// Build request body with existing values (gateway can handle userId for existing users)
const requestBody = {};
if (existingUserId && existingUserId !== "1234567890") {
requestBody.userId = existingUserId;
}
if (existingEmail) {
requestBody.email = existingEmail;
}
const response = await fetch(`${gatewayUrl}/auth/init`, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(requestBody),
});
if (!response.ok) {
const errorText = await response.text();
return {
success: false,
userId: "",
token: "",
isNewUser: false,
error: `Gateway registration failed: ${response.status} ${errorText}`,
};
}
const result = await response.json();
if (result.success && result.data) {
return {
success: true,
userId: result.data.userId,
token: result.data.token,
isNewUser: result.data.isNewUser,
};
} else {
return {
success: false,
userId: "",
token: "",
isNewUser: false,
error: "Invalid response format from gateway",
};
}
} catch (error) {
return {
success: false,
userId: "",
token: "",
isNewUser: false,
error: `Network error: ${error.message}`,
};
}
}
/**
* Updates the user configuration with gateway registration results
* @param {string} userId - User ID from gateway
* @param {string} token - User authentication token from gateway (stored in .env)
* @param {string} mode - User mode ('byok' or 'hosted')
* @param {string|null} email - Optional user email to save
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {boolean} Success status
*/
function updateUserConfig(
userId,
token,
mode,
email = null,
explicitRoot = null
) {
try {
const config = getConfig(explicitRoot);
// Ensure account section exists
if (!config.account) {
config.account = {};
}
// Ensure global section exists for email
if (!config.global) {
config.global = {};
}
// Update user configuration in account section
config.account.userId = userId;
config.account.mode = mode; // 'byok' or 'hosted'
// Save email if provided
if (email) {
config.account.email = email;
}
// Write user authentication token to .env file (not config)
if (token) {
writeApiKeyToEnv(token, explicitRoot);
}
// Save updated config
const success = writeConfig(config, explicitRoot);
if (success) {
const emailInfo = email ? `, email=${email}` : "";
log(
"info",
`User configuration updated: userId=${userId}, mode=${mode}${emailInfo}`
);
} else {
log("error", "Failed to write updated user configuration");
}
return success;
} catch (error) {
log("error", `Error updating user config: ${error.message}`);
return false;
}
}
/**
* Writes the user authentication token to the .env file
* This token is used as Bearer auth for gateway API calls
* @param {string} token - Authentication token to write
* @param {string|null} explicitRoot - Optional explicit project root path
*/
function writeApiKeyToEnv(token, explicitRoot = null) {
try {
// Determine project root
let rootPath = explicitRoot;
if (!rootPath) {
rootPath = findProjectRoot();
if (!rootPath) {
log("warn", "Could not determine project root for .env file");
return;
}
}
const envPath = path.join(rootPath, ".env");
let envContent = "";
// Read existing .env content if file exists
if (fs.existsSync(envPath)) {
envContent = fs.readFileSync(envPath, "utf8");
}
// Check if TASKMASTER_API_KEY already exists
const lines = envContent.split("\n");
let keyExists = false;
for (let i = 0; i < lines.length; i++) {
if (lines[i].startsWith("TASKMASTER_API_KEY=")) {
lines[i] = `TASKMASTER_API_KEY=${token}`;
keyExists = true;
break;
}
}
// Add key if it doesn't exist
if (!keyExists) {
if (envContent && !envContent.endsWith("\n")) {
envContent += "\n";
}
envContent += `TASKMASTER_API_KEY=${token}\n`;
} else {
envContent = lines.join("\n");
}
// Write updated content
fs.writeFileSync(envPath, envContent);
} catch (error) {
log("error", `Failed to write user token to .env: ${error.message}`);
}
}
/**
* Gets the current user mode from configuration
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {string} User mode ('byok', 'hosted', or 'unknown')
*/
function getUserMode(explicitRoot = null) {
try {
const config = getConfig(explicitRoot);
return config?.account?.mode || "unknown";
} catch (error) {
log("error", `Error getting user mode: ${error.message}`);
return "unknown";
}
}
/**
* Checks if user is in hosted mode
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {boolean} True if user is in hosted mode
*/
function isHostedMode(explicitRoot = null) {
return getUserMode(explicitRoot) === "hosted";
}
/**
* Checks if user is in BYOK mode
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {boolean} True if user is in BYOK mode
*/
function isByokMode(explicitRoot = null) {
return getUserMode(explicitRoot) === "byok";
}
/**
* Complete user setup: register with gateway and configure TaskMaster
* @param {string|null} email - Optional user's email (only needed for billing)
* @param {string} mode - User's mode: 'byok' or 'hosted'
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {Promise<{success: boolean, userId: string, mode: string, error?: string}>}
*/
async function setupUser(email = null, mode = "hosted", explicitRoot = null) {
try {
// Step 1: Register with gateway (email optional)
const registrationResult = await registerUserWithGateway(
email,
explicitRoot
);
if (!registrationResult.success) {
return {
success: false,
userId: "",
mode: "",
error: registrationResult.error,
};
}
// Step 2: Update config with userId, mode, and email
const configResult = updateUserConfig(
registrationResult.userId,
registrationResult.token,
mode,
email,
explicitRoot
);
if (!configResult) {
return {
success: false,
userId: registrationResult.userId,
mode: "",
error: "Failed to update user configuration",
};
}
return {
success: true,
userId: registrationResult.userId,
mode: mode,
message: email
? `User setup complete with email ${email}`
: "User setup complete (email will be collected during billing setup)",
};
} catch (error) {
return {
success: false,
userId: "",
mode: "",
error: `Setup failed: ${error.message}`,
};
}
}
/**
* Initialize TaskMaster user (typically called during init)
* Gets userId from gateway without requiring email upfront
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {Promise<{success: boolean, userId: string, error?: string}>}
*/
async function initializeUser(explicitRoot = null) {
const config = getConfig(explicitRoot);
const mode = config.account?.mode || "byok";
if (mode === "byok") {
return await initializeBYOKUser(explicitRoot);
} else {
return await initializeHostedUser(explicitRoot);
}
}
async function initializeBYOKUser(projectRoot) {
try {
const gatewayUrl =
process.env.TASKMASTER_GATEWAY_URL || "http://localhost:4444";
// Check if we already have an anonymous user ID stored
let config = getConfig(projectRoot);
const existingAnonymousUserId = config?.account?.userId;
// Prepare headers for the request
const headers = {
"Content-Type": "application/json",
"X-TaskMaster-Service-ID": "98fb3198-2dfc-42d1-af53-07b99e4f3bde",
};
// If we have an existing anonymous user ID, try to reuse it
if (existingAnonymousUserId && existingAnonymousUserId !== "1234567890") {
headers["X-Anonymous-User-ID"] = existingAnonymousUserId;
}
// Call gateway /auth/anonymous to create or reuse a user account
// BYOK users still get an account for potential future hosted mode switch
const response = await fetch(`${gatewayUrl}/auth/anonymous`, {
method: "POST",
headers,
body: JSON.stringify({}),
});
if (response.ok) {
const result = await response.json();
// Store the user token (same as hosted users)
// BYOK users won't use this for AI calls, but will have it for potential mode switch
if (result.session && result.session.access_token) {
writeApiKeyToEnv(result.session.access_token, projectRoot);
}
// Update config with BYOK user info, ensuring we store the anonymous user ID
if (!config.account) {
config.account = {};
}
config.account.userId = result.anonymousUserId || result.user.id;
config.account.mode = "byok";
config.account.email =
result.user.email ||
`anon-${result.anonymousUserId || result.user.id}@taskmaster.temp`;
config.account.telemetryEnabled = true;
writeConfig(config, projectRoot);
return {
success: true,
userId: result.anonymousUserId || result.user.id,
token: result.session?.access_token || null,
mode: "byok",
isAnonymous: true,
isReused: result.isReused || false,
};
} else {
const errorText = await response.text();
return {
success: false,
error: `Gateway not available: ${response.status} ${errorText}`,
};
}
} catch (error) {
return {
success: false,
error: `Network error: ${error.message}`,
};
}
}
async function initializeHostedUser(projectRoot) {
try {
// For hosted users, we need proper authentication
// This would typically involve OAuth flow or registration
const gatewayUrl =
process.env.TASKMASTER_GATEWAY_URL || "http://localhost:4444";
// Check if we already have stored credentials
const existingToken = getUserToken(projectRoot);
const existingUserId = getUserId(projectRoot);
if (existingToken && existingUserId && existingUserId !== "1234567890") {
// Try to validate existing credentials
try {
const response = await fetch(`${gatewayUrl}/auth/validate`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${existingToken}`,
"X-TaskMaster-Service-ID": "98fb3198-2dfc-42d1-af53-07b99e4f3bde",
},
});
if (response.ok) {
return {
success: true,
userId: existingUserId,
token: existingToken,
mode: "hosted",
isExisting: true,
};
}
} catch (error) {
// Fall through to re-authentication
}
}
// If no valid credentials, use the existing registration flow
const registrationResult = await registerUserWithGateway(null, projectRoot);
if (registrationResult.success) {
// Update config for hosted mode
updateUserConfig(
registrationResult.userId,
registrationResult.token,
"hosted",
null,
projectRoot
);
return {
success: true,
userId: registrationResult.userId,
token: registrationResult.token,
mode: "hosted",
isNewUser: registrationResult.isNewUser,
};
} else {
return {
success: false,
error: `Hosted mode setup failed: ${registrationResult.error}`,
};
}
} catch (error) {
return {
success: false,
error: `Hosted user initialization failed: ${error.message}`,
};
}
}
/**
* Gets the current user authentication token from .env file
* This is the Bearer token used for gateway API calls
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {string|null} User authentication token or null if not found
*/
function getUserToken(explicitRoot = null) {
try {
// Determine project root
let rootPath = explicitRoot;
if (!rootPath) {
rootPath = findProjectRoot();
if (!rootPath) {
log("error", "Could not determine project root for .env file");
return null;
}
}
const envPath = path.join(rootPath, ".env");
if (!fs.existsSync(envPath)) {
return null;
}
const envContent = fs.readFileSync(envPath, "utf8");
const lines = envContent.split("\n");
for (const line of lines) {
if (line.startsWith("TASKMASTER_API_KEY=")) {
return line.substring("TASKMASTER_API_KEY=".length).trim();
}
}
return null;
} catch (error) {
log("error", `Error getting user token from .env: ${error.message}`);
return null;
}
}
/**
* Gets the current user email from configuration
* @param {string|null} explicitRoot - Optional explicit project root path
* @returns {string|null} User email or null if not found
*/
function getUserEmail(explicitRoot = null) {
try {
const config = getConfig(explicitRoot);
return config?.account?.email || null;
} catch (error) {
log("error", `Error getting user email: ${error.message}`);
return null;
}
}
export {
registerUserWithGateway,
updateUserConfig,
writeApiKeyToEnv,
getUserMode,
isHostedMode,
isByokMode,
setupUser,
initializeUser,
initializeBYOKUser,
initializeHostedUser,
getUserToken,
getUserEmail,
};
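A short sketch of the exported user-management flow above. The email and project root are hypothetical placeholders; the gateway URL falls back to the TASKMASTER_GATEWAY_URL default shown in the file.

import {
  initializeUser,
  setupUser,
  getUserMode,
  isHostedMode,
  getUserToken,
} from "./user-management.js";

const projectRoot = "/path/to/project"; // hypothetical

// Typical init-time flow: BYOK or hosted is picked from .taskmasterconfig.
const initResult = await initializeUser(projectRoot);
if (!initResult.success) {
  console.error(`User initialization failed: ${initResult.error}`);
}

// Explicit setup with an email (e.g. when the user opts into hosted billing).
const setupResult = await setupUser("dev@example.com", "hosted", projectRoot);
if (setupResult.success) {
  console.log(setupResult.message);
}

console.log(getUserMode(projectRoot)); // 'byok', 'hosted', or 'unknown'
console.log(isHostedMode(projectRoot)); // true when mode === 'hosted'
console.log(getUserToken(projectRoot)); // Bearer token read from .env, or null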

View File

@@ -0,0 +1,186 @@
/**
* Enhanced error handler for gateway responses
* @param {Error} error - The error from the gateway call
* @param {string} commandName - The command being executed
*/
function handleGatewayError(error, commandName) {
try {
// Extract status code and response from error message
const match = error.message.match(/Gateway AI call failed: (\d+) (.+)/);
if (!match) {
throw new Error(`Unexpected error format: ${error.message}`);
}
const [, statusCode, responseText] = match;
const status = parseInt(statusCode, 10);
let response;
try {
response = JSON.parse(responseText);
} catch {
// Handle non-JSON error responses
console.error(`[ERROR] Gateway error (${status}): ${responseText}`);
return;
}
switch (status) {
case 400:
handleValidationError(response, commandName);
break;
case 401:
handleAuthError(response, commandName);
break;
case 402:
handleCreditError(response, commandName);
break;
case 403:
handleAccessDeniedError(response, commandName);
break;
case 429:
handleRateLimitError(response, commandName);
break;
case 500:
handleServerError(response, commandName);
break;
default:
console.error(
`[ERROR] Unexpected gateway error (${status}):`,
response
);
}
} catch (parseError) {
console.error(`[ERROR] Failed to parse gateway error: ${error.message}`);
}
}
function handleValidationError(response, commandName) {
if (response.error?.includes("Unsupported model")) {
console.error("🚫 The selected AI model is not supported by the gateway.");
console.error(
"💡 Try running `task-master models` to see available models."
);
return;
}
if (response.error?.includes("schema is required")) {
console.error("🚫 This command requires a schema for structured output.");
console.error("💡 This is likely a bug - please report it.");
return;
}
console.error(`🚫 Invalid request: ${response.error}`);
if (response.details?.length > 0) {
response.details.forEach((detail) => {
console.error(`${detail.message || detail}`);
});
}
}
function handleAuthError(response, commandName) {
console.error("🔐 Authentication failed with TaskMaster gateway.");
if (response.message?.includes("Invalid token")) {
console.error("💡 Your auth token may have expired. Try running:");
console.error(" task-master init");
} else if (response.message?.includes("Missing X-TaskMaster-Service-ID")) {
console.error(
"💡 Service authentication issue. This is likely a bug - please report it."
);
} else {
console.error("💡 Please check your authentication settings.");
}
}
function handleCreditError(response, commandName) {
console.error("💳 Insufficient credits for this operation.");
console.error(`💡 ${response.message || "Your account needs more credits."}`);
console.error(" • Visit your dashboard to add credits");
console.error(" • Or upgrade to a plan with more credits");
console.error(
" • You can also switch to BYOK mode to use your own API keys"
);
}
function handleAccessDeniedError(response, commandName) {
const { details, hint } = response;
if (
details?.planType === "byok" &&
details?.subscriptionStatus === "inactive"
) {
console.error(
"🔒 BYOK users need active subscriptions for hosted AI services."
);
console.error("💡 You have two options:");
console.error(" 1. Upgrade to a paid plan for hosted AI services");
console.error(" 2. Switch to BYOK mode and use your own API keys");
console.error("");
console.error(" To use your own API keys:");
console.error(
" • Set your API keys in .env file (e.g., ANTHROPIC_API_KEY=...)"
);
console.error(" • The system will automatically use direct API calls");
return;
}
if (details?.subscriptionStatus === "past_due") {
console.error("💳 Your subscription payment is overdue.");
console.error(
"💡 Please update your payment method to continue using AI services."
);
console.error(
" Visit your account dashboard to update billing information."
);
return;
}
if (details?.planType === "free" && commandName === "research") {
console.error("🔬 Research features require a paid subscription.");
console.error("💡 Upgrade your plan to access research-powered commands.");
return;
}
console.error(`🔒 Access denied: ${response.message}`);
if (hint) {
console.error(`💡 ${hint}`);
}
}
function handleRateLimitError(response, commandName) {
const retryAfter = response.retryAfter || 60;
console.error("⏱️ Rate limit exceeded - too many requests.");
console.error(`💡 Please wait ${retryAfter} seconds before trying again.`);
console.error(" Consider upgrading your plan for higher rate limits.");
}
function handleServerError(response, commandName) {
const retryAfter = response.retryAfter || 10;
if (response.error?.includes("Service temporarily unavailable")) {
console.error("🚧 TaskMaster gateway is temporarily unavailable.");
console.error(
`💡 The service should recover automatically. Try again in ${retryAfter} seconds.`
);
console.error(
" You can also switch to BYOK mode to use direct API calls."
);
return;
}
if (response.message?.includes("No user message found")) {
console.error("🚫 Invalid request format - missing user message.");
console.error("💡 This is likely a bug - please report it.");
return;
}
console.error("⚠️ Gateway server error occurred.");
console.error(
`💡 Try again in ${retryAfter} seconds. If the problem persists:`
);
console.error(" • Check TaskMaster status page");
console.error(" • Switch to BYOK mode as a workaround");
console.error(" • Contact support if the issue continues");
}
// Export the main handler function
export { handleGatewayError };
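For reference, a minimal usage sketch (editorial illustration, not part of this diff): the handler expects errors whose message follows the "Gateway AI call failed: <status> <body>" format parsed above, so a hypothetical caller (callGateway, requestPayload and commandName are placeholders) might look like:

try {
  return await callGateway(requestPayload);
} catch (error) {
  // Print a friendly, status-specific message instead of a raw error.
  handleGatewayError(error, commandName);
  // Re-throw so the caller can decide whether to fall back to BYOK.
  throw error;
}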

252
tasks/task_090.txt Normal file
View File

@@ -0,0 +1,252 @@
# Task ID: 90
# Title: Implement Comprehensive Telemetry Improvements for Task Master
# Status: in-progress
# Dependencies: 2, 3, 17
# Priority: high
# Description: Enhance Task Master with robust telemetry capabilities, including secure capture of command arguments and outputs, remote telemetry submission, DAU and active user tracking, extension to non-AI commands, and opt-out preferences during initialization.
# Details:
1. Instrument all CLI commands (including non-AI commands) to capture execution metadata, command arguments, and outputs, ensuring that sensitive data is never exposed in user-facing responses or logs. Use in-memory redaction and encryption techniques to protect sensitive information before transmission (see the capture-and-redact sketch after this list).
2. Implement a telemetry client that securely sends anonymized and aggregated telemetry data to the remote endpoint (gateway.task-master.dev/telemetry) using HTTPS/TLS. Ensure data is encrypted in transit and at rest, following best practices for privacy and compliance.
3. Track daily active users (DAU) and active user sessions by generating anonymized user/session identifiers, and aggregate usage metrics to analyze user patterns and feature adoption.
4. Extend telemetry instrumentation to all command types, not just AI-powered commands, ensuring consistent and comprehensive observability across the application.
5. During Task Master initialization, prompt users with clear opt-out options for telemetry collection, store their preferences securely, and respect these settings throughout the application lifecycle.
6. Design telemetry payloads to support future analysis of user patterns, operational costs, and to provide data for potential custom AI model training, while maintaining strict privacy standards.
7. Document the internal instrumentation policy, including guidelines for data collection, aggregation, and export, and automate as much of the instrumentation as possible to ensure consistency and minimize manual errors.
8. Ensure minimal performance impact by implementing efficient sampling, aggregation, and rate limiting strategies within the telemetry pipeline.
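A minimal capture-and-redact sketch for item 1 above (illustrative only: redactSensitiveFields and buildTelemetryEvent are hypothetical helpers; the commandArgs/fullOutput field names come from subtask 90.1):

// Hypothetical helpers: scrub sensitive values before a telemetry event leaves memory.
const SENSITIVE_KEYS = ["apiKey", "token", "authorization", "password"];

function redactSensitiveFields(obj) {
  if (!obj || typeof obj !== "object") return obj;
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) => [
      key,
      SENSITIVE_KEYS.includes(key) ? "[REDACTED]" : value,
    ])
  );
}

function buildTelemetryEvent(commandName, commandArgs, fullOutput) {
  return {
    timestamp: new Date().toISOString(),
    commandName,
    // Captured for internal analytics only; never echoed back to the user.
    commandArgs: redactSensitiveFields(commandArgs),
    fullOutput,
  };
}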
# Test Strategy:
- Verify that all command executions (including non-AI commands) generate appropriate telemetry events without exposing sensitive data in logs or responses.
- Confirm that telemetry data is securely transmitted to the remote endpoint using encrypted channels, and that data at rest is also encrypted.
- Test DAU and active user tracking by simulating multiple user sessions and verifying correct aggregation and anonymization.
- Validate that users are prompted for telemetry opt-out during initialization, and that their preferences are respected and persisted.
- Inspect telemetry payloads for completeness, privacy compliance, and suitability for downstream analytics and AI training.
- Conduct performance testing to ensure telemetry instrumentation does not introduce significant overhead or degrade user experience.
- Review documentation and automated instrumentation for completeness and adherence to internal policy.
# Subtasks:
## 1. Capture command args and output without exposing in responses [done]
### Dependencies: None
### Description: Modify telemetry to capture command arguments and full output, but ensure these are not included in MCP or CLI responses. Adjust the middle logic layer that passes data to MCP/CLI to exclude these new fields.
### Details:
Update ai-services-unified.js to capture the initial args passed to the AI service and the full output. Modify the telemetryData object structure to include 'commandArgs' and 'fullOutput' fields. Ensure handleApiResult in MCP and displayAiUsageSummary in CLI do not expose these fields to end users.
<info added on 2025-05-28T15:21:20.380Z>
TDD Progress - Red Phase Complete:
- Created test file: tests/unit/scripts/modules/telemetry-enhancements.test.js
- Written 4 failing tests for core functionality:
1. Capture command arguments in telemetry data
2. Capture full AI output in telemetry data
3. Ensure commandArgs/fullOutput not exposed in MCP responses
4. Ensure commandArgs/fullOutput not exposed in CLI responses
- All tests failing as expected (TDD red phase)
- Ready to implement minimum code to make tests pass
Next: Implement commandArgs and fullOutput capture in ai-services-unified.js
</info added on 2025-05-28T15:21:20.380Z>
<info added on 2025-05-28T18:04:52.595Z>
TDD Progress - Green Phase Complete:
- Fixed test mocking using jest.unstable_mockModule for ES modules
- All 4 tests now passing:
1. ✓ should capture command arguments in telemetry data
2. ✓ should capture full AI output in telemetry data
3. ✓ should not expose commandArgs/fullOutput in MCP responses
4. ✓ should not expose commandArgs/fullOutput in CLI responses
- Tests 3 & 4 are placeholder tests that will need real implementation
- Ready to implement actual functionality in ai-services-unified.js
Next: Implement commandArgs and fullOutput capture in ai-services-unified.js to make tests meaningful
</info added on 2025-05-28T18:04:52.595Z>
<info added on 2025-05-28T18:08:25.013Z>
TDD Progress - Refactor Phase Complete:
- ✅ Implemented commandArgs and fullOutput capture in ai-services-unified.js
- ✅ Modified logAiUsage function to accept and store commandArgs and fullOutput
- ✅ Updated _unifiedServiceRunner to pass callParams as commandArgs and providerResponse as fullOutput
- ✅ All 4 tests passing (including placeholder tests for filtering)
- ✅ Core functionality implemented: telemetry now captures sensitive data internally
Implementation Details:
- commandArgs captures the complete callParams object (includes apiKey, modelId, messages, etc.)
- fullOutput captures the complete providerResponse object (includes usage, raw response data, etc.)
- Both fields are conditionally added to telemetryData only when provided
- Maintains backward compatibility with existing telemetry structure
Ready for subtask 90.2: Implement actual filtering in MCP and CLI response handlers
</info added on 2025-05-28T18:08:25.013Z>
<info added on 2025-05-28T18:10:11.676Z>
CRITICAL SECURITY ISSUE IDENTIFIED - Sensitive Data Exposure Risk:
Current implementation captures commandArgs and fullOutput in telemetry but fails to filter them before user exposure. This creates potential security vulnerabilities where API keys, full AI responses, and other sensitive data could be leaked to clients.
Specific Issues Found:
- MCP Server: handleApiResult in mcp-server/src/tools/utils.js passes entire result.data including unfiltered telemetryData to client responses
- CLI: While displayAiUsageSummary only shows safe fields, the underlying telemetryData object retains sensitive data accessible programmatically
- Tests: Current filtering tests (3 & 4) are placeholders and don't verify actual filtering behavior
Required Security Implementation:
1. Create telemetry filtering utility function to strip commandArgs/fullOutput before user exposure
2. Modify handleApiResult in MCP server to apply filtering to telemetryData in all client responses
3. Ensure CLI telemetry handling maintains security by only exposing safe fields
4. Update placeholder tests to verify actual filtering functionality works correctly
Priority: HIGH - Must implement filtering before any telemetry data reaches production endpoints to prevent sensitive data leakage.
</info added on 2025-05-28T18:10:11.676Z>
<info added on 2025-05-28T18:25:47.900Z>
TDD COMPLETE - Subtask 90.1 Implementation Finished:
✅ **SECURITY ISSUE RESOLVED**: Successfully implemented filtering to prevent sensitive data exposure
**Implementation Details:**
1. **Capture Enhancement**: Modified ai-services-unified.js to capture commandArgs and fullOutput in telemetry
2. **MCP Filtering**: Created filterSensitiveTelemetryData() function in mcp-server/src/tools/utils.js
3. **Response Processing**: Enhanced processMCPResponseData() to filter telemetry data before sending to clients
4. **CLI Safety**: Verified displayAiUsageSummary() only displays safe fields (already secure)
**Security Verification:**
- ✅ commandArgs (containing API keys, secrets) are captured but filtered out before user exposure
- ✅ fullOutput (containing internal debug data) is captured but filtered out before user exposure
- ✅ MCP responses automatically filter sensitive telemetry fields
- ✅ CLI responses only display safe telemetry fields (modelUsed, tokens, cost, etc.)
**Test Coverage:**
- ✅ 4/4 tests passing with real implementation (not mocks)
- ✅ Verified actual filtering functionality works correctly
- ✅ Confirmed sensitive data is captured internally but never exposed to users
**Ready for subtask 90.2**: Send telemetry data to remote database endpoint
</info added on 2025-05-28T18:25:47.900Z>
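A sketch of the filtering step described above (the real filterSensitiveTelemetryData() in mcp-server/src/tools/utils.js may differ; this only shows the idea of stripping commandArgs/fullOutput before the data reaches a client):

// Illustrative: drop internal-only fields before telemetry reaches MCP/CLI clients.
function filterSensitiveTelemetryData(telemetryData) {
  if (!telemetryData || typeof telemetryData !== "object") return telemetryData;
  const { commandArgs, fullOutput, ...safeFields } = telemetryData;
  return safeFields;
}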
<info added on 2025-05-30T22:16:38.344Z>
Configuration Structure Refactoring Complete:
- Moved telemetryEnabled from separate telemetry object to account section for better organization
- Consolidated userId, mode, and userEmail into account section (previously scattered across config)
- Removed subscription object to simplify configuration structure
- Updated config-manager.js to handle new configuration structure properly
- Verified new structure works correctly with test commands
- Configuration now has cleaner, more logical organization with account-related settings grouped together
</info added on 2025-05-30T22:16:38.344Z>
<info added on 2025-05-30T22:30:56.872Z>
Configuration Structure Migration Complete - All Code and Tests Updated:
**Code Updates:**
- Fixed user-management.js to use config.account.userId/mode instead of deprecated config.global paths
- Updated telemetry-submission.js to read userId from config.account.userId for proper telemetry data association
- Enhanced telemetry opt-out validation to use getTelemetryEnabled() function for consistent config access
- Improved registerUserWithGateway() function to accept both email and userId parameters for comprehensive user validation
**Test Suite Updates:**
- Updated tests/integration/init-config.test.js to validate new config.account structure
- Migrated all test assertions from config.global.userId to config.account.userId
- Updated config.mode references to config.account.mode throughout test files
- Changed telemetry validation from config.telemetryEnabled to config.account.telemetryEnabled
- Removed obsolete config.subscription object references from all test cases
- Fixed tests/unit/scripts/modules/telemetry-submission.test.js to match new configuration schema
**Gateway Integration Enhancements:**
- registerUserWithGateway() now sends both email and userId to /auth/init endpoint for proper user identification
- Gateway can validate existing users and provide appropriate authentication responses
- API key updates are automatically persisted to .env file upon successful registration
- Complete user validation and authentication flow implemented and tested
All configuration structure changes are now consistent across codebase. Ready for end-to-end testing with gateway integration.
</info added on 2025-05-30T22:30:56.872Z>
## 2. Send telemetry data to remote database endpoint [done]
### Dependencies: None
### Description: Implement POST requests to gateway.task-master.dev/telemetry endpoint to send all telemetry data including new fields (args, output) for analysis and future AI model training
### Details:
Create a telemetry submission service that POSTs to gateway.task-master.dev/telemetry. Include all existing telemetry fields plus commandArgs and fullOutput. Implement retry logic and handle failures gracefully without blocking command execution. Respect user opt-out preferences.
<info added on 2025-05-28T18:27:30.207Z>
TDD Progress - Red Phase Complete:
- Created test file: tests/unit/scripts/modules/telemetry-submission.test.js
- Written 6 failing tests for telemetry submission functionality:
1. Successfully submit telemetry data to gateway endpoint
2. Implement retry logic for failed requests
3. Handle failures gracefully without blocking execution
4. Respect user opt-out preferences
5. Validate telemetry data before submission
6. Handle HTTP error responses appropriately
- All tests failing as expected (module doesn't exist yet)
- Ready to implement minimum code to make tests pass
Next: Create scripts/modules/telemetry-submission.js with submitTelemetryData function
</info added on 2025-05-28T18:27:30.207Z>
<info added on 2025-05-28T18:43:47.334Z>
TDD Green Phase Complete:
- Implemented scripts/modules/telemetry-submission.js with submitTelemetryData function
- All 6 tests now passing with full functionality implemented
- Security measures in place: commandArgs and fullOutput filtered out before remote submission
- Reliability features: exponential backoff retry logic (3 attempts max), graceful error handling
- Gateway integration: configured for https://gateway.task-master.dev/telemetry endpoint
- Zod schema validation ensures data integrity before submission
- User privacy protected through telemetryEnabled config option
- Smart retry logic avoids retries for 429/401/403 status codes
- Service never throws errors and always returns result object to prevent blocking command execution
Implementation ready for integration into ai-services-unified.js in subtask 90.3
</info added on 2025-05-28T18:43:47.334Z>
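A condensed sketch of the retry behavior described above (3 attempts, exponential backoff, no retry on 401/403/429, never throws); the real submitTelemetryData may differ in structure and validation:

// Illustrative retry loop; the endpoint matches the one named in this subtask.
async function submitWithRetry(payload, maxAttempts = 3) {
  const NO_RETRY_STATUSES = new Set([401, 403, 429]);
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await fetch("https://gateway.task-master.dev/telemetry", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (response.ok) return { success: true, attempt };
      if (NO_RETRY_STATUSES.has(response.status)) {
        return { success: false, error: `HTTP ${response.status}`, attempt };
      }
    } catch {
      // Network error: fall through to backoff and retry.
    }
    // Exponential backoff: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
  }
  return { success: false, error: "All retry attempts failed" };
}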
<info added on 2025-05-28T18:59:16.039Z>
Integration Testing Complete - Live Gateway Verification:
Successfully tested telemetry submission against live gateway at localhost:4444/api/v1/telemetry. Confirmed proper authentication using Bearer token and X-User-Email headers (not X-API-Key as initially assumed). Security filtering verified working correctly - sensitive data like commandArgs, fullOutput, apiKey, and internalDebugData properly removed before submission. Gateway responded with success confirmation and assigned telemetry ID. Service handles missing GATEWAY_USER_EMAIL environment variable gracefully. All functionality validated end-to-end including retry logic, error handling, and data validation. Module ready for integration into ai-services-unified.js.
</info added on 2025-05-28T18:59:16.039Z>
<info added on 2025-05-29T01:04:27.886Z>
Implementation Complete - Gateway Integration Finalized:
Hardcoded gateway endpoint to http://localhost:4444/api/v1/telemetry with config-based credential handling replacing environment variables. Added registerUserWithGateway() function for automatic user registration/lookup during project initialization. Enhanced init.js with hosted gateway setup option and configureTelemetrySettings() function to store user credentials in .taskmasterconfig under telemetry section. Updated all 10 tests to reflect new architecture - all passing. Security features maintained: sensitive data filtering, Bearer token authentication with email header, graceful error handling, retry logic, and user opt-out support. Module fully integrated and ready for ai-services-unified.js integration in subtask 90.3.
</info added on 2025-05-29T01:04:27.886Z>
<info added on 2025-05-30T23:36:58.010Z>
Subtask 90.2 COMPLETED successfully! ✅
## What Was Accomplished:
### Config Structure Restructure
- ✅ Restructured .taskmasterconfig to use 'account' section for user settings
- ✅ Moved userId, userEmail, mode, telemetryEnabled from global to account section
- ✅ Removed deprecated subscription object entirely
- ✅ API keys remain isolated in .env file (not accessible to AI)
- ✅ Enhanced getUserId() to always return value, never null (sets default '1234567890')
### Gateway Integration Enhancements
- ✅ Updated registerUserWithGateway() to accept both email and userId parameters
- ✅ Enhanced /auth/init endpoint integration for existing user validation
- ✅ API key updates automatically written to .env during registration
### Code Updates
- ✅ Updated config-manager.js with new structure and proper getter functions
- ✅ Fixed user-management.js to use config.account structure
- ✅ Updated telemetry-submission.js to read from account section
- ✅ Enhanced init.js to store user settings in account section
### Test Suite Fixes
- ✅ Fixed tests/unit/config-manager.test.js for new structure
- ✅ Updated tests/integration/init-config.test.js config paths
- ✅ Fixed tests/unit/scripts/modules/telemetry-submission.test.js
- ✅ Updated tests/unit/ai-services-unified.test.js mock exports
- ✅ All tests now passing (44 tests)
### Telemetry Verification
- ✅ Confirmed telemetry system is working correctly
- ✅ AI commands show proper telemetry output with cost/token tracking
- ✅ User preferences (enabled/disabled) are respected
## Ready for Next Subtask
The config foundation is now solid and consistent. Ready to move to subtask 90.3 for the next phase of telemetry improvements.
</info added on 2025-05-30T23:36:58.010Z>
## 3. Implement DAU and active user tracking [done]
### Dependencies: None
### Description: Enhance telemetry to track Daily Active Users (DAU) and identify active users through unique user IDs and usage patterns
### Details:
Ensure userId generation is consistent and persistent. Track command execution timestamps to calculate DAU. Include session tracking to understand user engagement patterns. Add fields for tracking unique daily users, command frequency, and session duration.
<info added on 2025-05-30T00:27:53.666Z>
COMPLETED: TDD implementation successfully integrated telemetry submission into AI services. Modified logAiUsage function in ai-services-unified.js to automatically submit telemetry data to gateway after each AI usage event. Implementation includes graceful error handling with try/catch wrapper to prevent telemetry failures from blocking core functionality. Added debug logging for submission states. All 7 tests passing with no regressions introduced. Integration maintains security by filtering sensitive data from user responses while sending complete telemetry to gateway for analytics. Every AI call now automatically triggers telemetry submission as designed.
</info added on 2025-05-30T00:27:53.666Z>
## 4. Extend telemetry to non-AI commands [pending]
### Dependencies: None
### Description: Implement telemetry collection for all Task Master commands, not just AI-powered ones, to get complete usage analytics
### Details:
Create a unified telemetry collection mechanism for all commands in commands.js. Track command name, execution time, success/failure status, and basic metrics. Ensure non-AI commands generate appropriate telemetry without AI-specific fields like tokens or costs.
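One possible shape for that mechanism (illustrative; withCommandTelemetry and recordTelemetry are hypothetical names, with recordTelemetry standing in for the existing submission service):

// Hypothetical wrapper: time any command action and record success/failure,
// without AI-specific fields such as tokens or cost.
async function withCommandTelemetry(commandName, action) {
  const startedAt = Date.now();
  try {
    const result = await action();
    recordTelemetry({ commandName, durationMs: Date.now() - startedAt, success: true });
    return result;
  } catch (error) {
    recordTelemetry({ commandName, durationMs: Date.now() - startedAt, success: false });
    throw error;
  }
}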
## 5. Add opt-out data collection prompt to init command [pending]
### Dependencies: None
### Description: Modify init.js to prompt users about telemetry opt-out, defaulting to 'yes' for data collection, and store the preference in .taskmasterconfig
### Details:
Add a prompt during task-master init that asks users if they want to opt out of telemetry (default: no, i.e. continue with telemetry). Store the preference as 'telemetryOptOut: boolean' in .taskmasterconfig. Ensure all telemetry collection respects this setting. Include a clear explanation of what data is collected and why.
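A sketch of such a prompt using Node's built-in readline (the actual prompt mechanism in init.js may differ):

import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

// Ask once during init; an empty answer keeps telemetry enabled (default).
async function promptTelemetryOptOut() {
  const rl = readline.createInterface({ input, output });
  const answer = await rl.question(
    "Task Master collects anonymized usage data to improve the tool. Opt out? (y/N) "
  );
  rl.close();
  return { telemetryOptOut: answer.trim().toLowerCase().startsWith("y") };
}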

57
tasks/task_091.txt Normal file
View File

@@ -0,0 +1,57 @@
# Task ID: 91
# Title: Integrate Gateway AI Service Mode into ai-services-unified.js
# Status: done
# Dependencies: 2, 3, 17
# Priority: high
# Description: Implement support for a hosted AI gateway service in Task Master, allowing users to select between BYOK and hosted gateway modes during initialization. Ensure gateway integration intercepts and routes AI calls appropriately, handles gateway-specific telemetry, and maintains compatibility with existing command structures.
# Details:
1. Update the initialization logic to allow users to select between BYOK (Bring Your Own Key) and hosted gateway service modes, storing the selection in the configuration system.
2. In ai-services-unified.js, detect when the hosted gateway mode is active.
3. Refactor the AI call flow to intercept requests before _resolveApiKey and _attemptProviderCallWithRetries. When in gateway mode, route calls to the gateway endpoint instead of directly to the provider.
4. Construct gateway requests with the full messages array, modelId, roleParams, and commandName, ensuring all required data is passed.
5. Parse gateway responses, extracting the AI result and handling telemetry fields for credits used/remaining instead of tokens/costs. Update internal telemetry handling to support both formats.
6. Ensure the command structure and response handling remain compatible with existing provider integrations, so downstream consumers are unaffected.
7. Add comprehensive logging for gateway interactions, including request/response payloads and credit telemetry, leveraging the existing logging system.
8. Maintain robust error handling and fallback logic for gateway failures.
9. Update documentation to describe the new gateway mode and configuration options.
# Test Strategy:
- Unit test initialization logic to verify correct mode selection and configuration persistence.
- Mock gateway endpoints to test interception and routing of AI calls in gateway mode, ensuring correct request formatting and response parsing.
- Validate that credits telemetry is correctly extracted and logged, and that legacy token/cost telemetry remains supported in BYOK mode.
- Perform integration tests to confirm that command execution and AI responses are consistent across both BYOK and gateway modes.
- Simulate gateway errors and verify error handling and fallback mechanisms.
- Review logs to ensure gateway interactions are properly recorded.
- Confirm documentation updates accurately reflect new functionality and usage.
# Subtasks:
## 1. Update initialization logic for gateway mode selection [done]
### Dependencies: None
### Description: Modify the initialization logic to allow users to choose between BYOK and hosted gateway service modes, storing this selection in the configuration system.
### Details:
Implement a configuration option that allows users to select between BYOK (Bring Your Own Key) and hosted gateway modes during system initialization. Create appropriate configuration parameters and storage mechanisms to persist this selection. Ensure the configuration is accessible throughout the application, particularly in ai-services-unified.js.
## 2. Implement gateway mode detection in ai-services-unified.js [done]
### Dependencies: 91.1
### Description: Add logic to detect when the hosted gateway mode is active and prepare the system for gateway-specific processing.
### Details:
Modify ai-services-unified.js to check the configuration and determine if the system is operating in gateway mode. Create helper functions to facilitate gateway-specific operations. Ensure this detection happens early in the processing flow to properly route subsequent operations.
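A minimal sketch of the detection helpers (the config.account.mode path matches the structure used elsewhere in this change; the exact implementation in config-manager.js/user-management.js may differ):

// Illustrative helpers reading the mode stored during init.
function getUserMode(projectRoot = null) {
  const config = getConfig(projectRoot);
  // Default to BYOK for older configs that predate the mode field.
  return config?.account?.mode || "byok";
}

function isHostedMode(projectRoot = null) {
  return getUserMode(projectRoot) === "hosted";
}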
## 3. Refactor AI call flow for gateway integration [done]
### Dependencies: 91.2
### Description: Modify the AI call flow to intercept requests and route them to the gateway endpoint when in gateway mode.
### Details:
Refactor the existing AI call flow to intercept requests before _resolveApiKey and _attemptProviderCallWithRetries methods are called. When gateway mode is active, construct appropriate gateway requests containing the full messages array, modelId, roleParams, and commandName. Implement the routing logic to direct these requests to the gateway endpoint instead of directly to the provider.
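A sketch of where that interception could sit (callGatewayAI and attemptProviderCallWithRetries are placeholders for the gateway client and the existing BYOK path; the branch placement before key resolution mirrors the description above):

// Illustrative branch inside the unified service runner.
async function runAiCall({ projectRoot, messages, modelId, roleParams, commandName }) {
  if (isHostedMode(projectRoot)) {
    // Hosted mode: route to the gateway before any provider key resolution.
    return callGatewayAI({ messages, modelId, roleParams, commandName });
  }
  // BYOK mode: existing direct-provider flow (key resolution, retries, etc.).
  return attemptProviderCallWithRetries({ messages, modelId, roleParams });
}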
## 4. Implement gateway response handling and telemetry [done]
### Dependencies: 91.3
### Description: Develop logic to parse gateway responses, extract AI results, and handle gateway-specific telemetry data.
### Details:
Create functions to parse responses from the gateway, extracting the AI result and handling telemetry fields for credits used/remaining instead of tokens/costs. Update the internal telemetry handling system to support both gateway and traditional formats. Ensure all relevant metrics are captured and properly stored.
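A sketch of that normalization (field names such as creditsUsed and creditsRemaining are assumptions about the gateway payload; the real response shape may differ):

// Illustrative: map a gateway response onto the existing telemetry shape.
function parseGatewayResponse(gatewayResponse) {
  const { result, telemetry = {} } = gatewayResponse;
  return {
    aiResult: result,
    telemetryData: {
      commandName: telemetry.commandName,
      modelUsed: telemetry.modelUsed,
      // Gateway mode reports credits rather than tokens/costs.
      creditsUsed: telemetry.creditsUsed,
      creditsRemaining: telemetry.creditsRemaining,
    },
  };
}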
## 5. Implement error handling, logging, and documentation [done]
### Dependencies: 91.4
### Description: Add comprehensive logging, error handling, and update documentation for the gateway integration.
### Details:
Implement robust error handling and fallback logic for gateway failures. Add detailed logging for gateway interactions, including request/response payloads and credit telemetry, using the existing logging system. Update documentation to describe the new gateway mode, configuration options, and how the system behaves differently when in gateway mode versus BYOK mode. Ensure the command structure and response handling remain compatible with existing provider integrations.

121
tasks/task_092.txt Normal file
View File

@@ -0,0 +1,121 @@
# Task ID: 92
# Title: Implement TaskMaster Mode Selection and Configuration System
# Status: pending
# Dependencies: 16, 56, 87
# Priority: high
# Description: Create a comprehensive mode selection system for TaskMaster that allows users to choose between BYOK (Bring Your Own Key) and hosted gateway modes during initialization, with proper configuration management and authentication.
# Details:
This task implements a complete mode selection system for TaskMaster with the following components:
1. **Configuration Management (.taskmasterconfig)**:
- Add mode field to .taskmasterconfig schema with values: "byok" | "hosted"
- Include gateway authentication fields (apiKey, userId) for hosted mode
- Maintain backward compatibility with existing config structure
- Add validation for mode-specific required fields
2. **Initialization Flow (init.js)**:
- Modify setup wizard to prompt for mode selection after basic configuration
- Present clear descriptions of each mode (BYOK vs hosted benefits)
- Collect gateway API key and user credentials for hosted mode
- Skip AI provider setup prompts when hosted mode is selected
- Validate gateway connectivity during hosted mode setup
3. **AI Services Integration (ai-services-unified.js)**:
- Add mode detection logic that reads from .taskmasterconfig
- Implement gateway routing for hosted mode to https://api.taskmaster.ai/v1/ai
- Create gateway request wrapper with authentication headers
- Maintain existing BYOK provider routing as fallback
- Add error handling for gateway unavailability with graceful degradation
4. **Authentication System**:
- Implement secure API key storage and retrieval
- Add request signing/authentication for gateway calls
- Include user identification in gateway requests
- Handle authentication errors with clear user messaging
5. **Backward Compatibility**:
- Default to BYOK mode for existing installations without mode config
- Preserve all existing AI provider functionality
- Ensure seamless migration path for current users
- Maintain existing command interfaces and outputs
6. **Error Handling and Fallbacks**:
- Graceful degradation when gateway is unavailable
- Clear error messages for authentication failures
- Fallback to BYOK providers when gateway fails
- Network connectivity validation and retry logic
# Test Strategy:
**Testing Strategy**:
1. **Configuration Testing**:
- Verify .taskmasterconfig accepts both mode values
- Test configuration validation for required fields per mode
- Confirm backward compatibility with existing config files
2. **Initialization Testing**:
- Test fresh installation with both mode selections
- Verify hosted mode setup collects proper credentials
- Test BYOK mode maintains existing setup flow
- Validate gateway connectivity testing during setup
3. **Mode Detection Testing**:
- Test ai-services-unified.js correctly reads mode from config
- Verify routing logic directs calls to appropriate endpoints
- Test fallback behavior when mode is undefined (backward compatibility)
4. **Gateway Integration Testing**:
- Test successful API calls to https://api.taskmaster.ai/v1/ai
- Verify authentication headers are properly included
- Test error handling for invalid API keys
- Validate request/response format compatibility
5. **End-to-End Testing**:
- Test complete task generation flow in hosted mode
- Verify BYOK mode continues to work unchanged
- Test mode switching by modifying configuration
- Validate all existing commands work in both modes
6. **Error Scenario Testing**:
- Test behavior when gateway is unreachable
- Verify fallback to BYOK providers when configured
- Test authentication failure handling
- Validate network timeout scenarios
# Subtasks:
## 1. Add Mode Configuration to .taskmasterconfig Schema [pending]
### Dependencies: None
### Description: Extend the .taskmasterconfig file structure to include mode selection (byok vs hosted) and gateway authentication fields while maintaining backward compatibility.
### Details:
Add mode field to configuration schema with values 'byok' or 'hosted'. Include gateway authentication fields (apiKey, userId) for hosted mode. Ensure backward compatibility by defaulting to 'byok' mode for existing installations. Add validation for mode-specific required fields.
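An illustrative .taskmasterconfig fragment using the account structure adopted elsewhere in this change (values are placeholders; note that elsewhere in this change the gateway API key is written to .env rather than stored here):

{
  "account": {
    "mode": "hosted",
    "userId": "user-id-from-gateway",
    "userEmail": "user@example.com",
    "telemetryEnabled": true
  }
}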
## 2. Modify init.js for Mode Selection During Setup [pending]
### Dependencies: 92.1
### Description: Update the initialization wizard to prompt users for mode selection and collect appropriate credentials for hosted mode.
### Details:
Add mode selection prompt after basic configuration. Present clear descriptions of BYOK vs hosted benefits. Collect gateway API key and user credentials for hosted mode. Skip AI provider setup prompts when hosted mode is selected. Validate gateway connectivity during hosted mode setup.
## 3. Update ai-services-unified.js for Gateway Routing [pending]
### Dependencies: 92.1
### Description: Modify the unified AI service runner to detect mode and route calls to the hard-coded gateway URL when in hosted mode.
### Details:
Add mode detection logic that reads from .taskmasterconfig. Implement gateway routing for hosted mode to https://api.taskmaster.ai/v1/ai (hard-coded URL). Create gateway request wrapper with authentication headers. Maintain existing BYOK provider routing as fallback. Ensure identical response format for backward compatibility.
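A sketch of such a wrapper (the Authorization and X-TaskMaster-Service-ID headers mirror those used in user-management.js above; the /ai path and body shape are assumptions, and the thrown message deliberately matches the format handleGatewayError parses):

// Illustrative request wrapper for hosted mode.
async function gatewayRequest(body, { token, serviceId }) {
  const response = await fetch("https://api.taskmaster.ai/v1/ai", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
      "X-TaskMaster-Service-ID": serviceId,
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(`Gateway AI call failed: ${response.status} ${errorText}`);
  }
  return response.json();
}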
## 4. Implement Gateway Authentication System [pending]
### Dependencies: 92.3
### Description: Create secure authentication system for gateway requests including API key management and request signing.
### Details:
Implement secure API key storage and retrieval. Add request signing/authentication for gateway calls. Include user identification in gateway requests. Handle authentication errors with clear user messaging. Add token refresh logic if needed.
## 5. Add Error Handling and Fallback Logic [pending]
### Dependencies: 92.4
### Description: Implement comprehensive error handling for gateway unavailability with graceful degradation to BYOK mode when possible.
### Details:
Add error handling for gateway unavailability with graceful degradation. Implement clear error messages for authentication failures. Add fallback to BYOK providers when gateway fails (if keys are available). Include network connectivity validation and retry logic. Handle rate limiting and quota exceeded scenarios.
## 6. Ensure Backward Compatibility and Migration [pending]
### Dependencies: 92.1, 92.2, 92.3, 92.4, 92.5
### Description: Ensure seamless backward compatibility for existing TaskMaster installations and provide smooth migration path to hosted mode.
### Details:
Default to BYOK mode for existing installations without mode config. Preserve all existing AI provider functionality. Ensure seamless migration path for current users. Maintain existing command interfaces and outputs. Add migration utility for users wanting to switch modes. Test with existing .taskmasterconfig files.

64
tasks/task_093.txt Normal file
View File

@@ -0,0 +1,64 @@
# Task ID: 93
# Title: Implement Telemetry Testing Framework with Humorous Response Capability
# Status: pending
# Dependencies: 90, 77
# Priority: medium
# Description: Create a comprehensive testing framework for validating telemetry functionality across all TaskMaster components, including the ability to respond with jokes during test scenarios to verify response handling mechanisms.
# Details:
This task implements a robust testing framework for telemetry validation with the following components:
1. **Telemetry Test Suite Creation**:
- Create `tests/telemetry/` directory structure with comprehensive test files
- Implement unit tests for telemetry data capture, sanitization, and transmission
- Add integration tests for end-to-end telemetry flow validation
- Create mock telemetry endpoints to simulate external analytics services
2. **Joke Response Testing Module**:
- Implement a test utility that can inject humorous responses during telemetry testing
- Create a collection of programming-related jokes for test scenarios
- Add response validation to ensure joke responses are properly handled by telemetry systems
- Implement timing tests to verify joke responses don't interfere with telemetry performance
3. **Telemetry Data Validation**:
- Create validators for telemetry payload structure and content
- Implement tests for sensitive data redaction and encryption
- Add verification for proper anonymization of user data
- Test telemetry opt-out functionality and preference handling
4. **Performance and Reliability Testing**:
- Implement load testing for telemetry submission under various conditions
- Add network failure simulation and retry mechanism testing
- Create tests for telemetry buffer management and data persistence
- Validate telemetry doesn't impact core TaskMaster functionality
5. **Cross-Mode Testing**:
- Test telemetry functionality in both BYOK and hosted gateway modes
- Validate mode-specific telemetry data collection and routing
- Ensure consistent telemetry behavior across different AI providers
6. **Test Utilities and Helpers**:
- Create mock telemetry services for isolated testing (see the sketch after this list)
- Implement test data generators for various telemetry scenarios
- Add debugging utilities for telemetry troubleshooting
- Create automated test reporting for telemetry coverage
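A Jest-flavored sketch of the mock-endpoint utility from item 6 (illustrative; createMockTelemetryEndpoint is a hypothetical helper and the joke string is just sample payload data):

import { jest } from "@jest/globals";

// Hypothetical helper: a fake gateway endpoint that records submitted payloads.
function createMockTelemetryEndpoint() {
  const received = [];
  const fetchMock = jest.fn(async (url, options) => {
    received.push(JSON.parse(options.body));
    return { ok: true, status: 200, json: async () => ({ success: true, id: "tm-1" }) };
  });
  return { fetchMock, received };
}

test("captures payloads sent to the mock telemetry endpoint", async () => {
  const { fetchMock, received } = createMockTelemetryEndpoint();
  global.fetch = fetchMock;

  await fetch("https://gateway.task-master.dev/telemetry", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      commandName: "add-task",
      joke: "Why do programmers prefer dark mode? Because light attracts bugs.",
    }),
  });

  expect(received).toHaveLength(1);
  expect(received[0].commandName).toBe("add-task");
});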
# Test Strategy:
1. **Unit Test Validation**: Run all telemetry unit tests to verify individual component functionality, ensuring 100% pass rate for data capture, sanitization, and transmission modules.
2. **Integration Test Execution**: Execute end-to-end telemetry tests across all TaskMaster commands, validating that telemetry data is properly collected and transmitted without affecting command performance.
3. **Joke Response Verification**: Test the joke response mechanism by triggering test scenarios and verifying that humorous responses are delivered correctly while maintaining telemetry data integrity.
4. **Data Privacy Validation**: Verify that all sensitive data is properly redacted or encrypted in telemetry payloads, with no personally identifiable information exposed in test outputs.
5. **Performance Impact Assessment**: Run performance benchmarks comparing TaskMaster execution with and without telemetry enabled, ensuring minimal performance degradation (< 5% overhead).
6. **Network Failure Simulation**: Test telemetry behavior under various network conditions including timeouts, connection failures, and intermittent connectivity to validate retry mechanisms and data persistence.
7. **Cross-Mode Compatibility**: Execute telemetry tests in both BYOK and hosted gateway modes, verifying consistent behavior and appropriate mode-specific data collection.
8. **Opt-out Functionality Testing**: Validate that telemetry opt-out preferences are properly respected and no data is collected or transmitted when users have opted out.
9. **Mock Service Integration**: Verify that mock telemetry endpoints properly simulate real analytics services and capture expected data formats and frequencies.
10. **Automated Test Coverage**: Ensure test suite achieves minimum 90% code coverage for all telemetry-related modules and generates comprehensive test reports.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

227
test-move-fix.js Normal file
View File

@@ -0,0 +1,227 @@
/**
* Test script for move-task functionality
*
* This script tests various scenarios for the move-task command to ensure
* it works correctly without creating duplicate tasks or leaving orphaned data.
*
* Test scenarios covered:
* 1. Moving a subtask to become a standalone task (with specific target ID)
* 2. Moving a task to replace another task
*
* Usage:
* node test-move-fix.js # Run all tests
*
* Or import specific test functions:
* import { testMoveSubtaskToTask } from './test-move-fix.js';
*
* This was created to verify the fix for the bug where moving subtasks
* to standalone tasks was creating duplicate entries.
*/
import fs from "fs";
import path from "path";
import moveTask from "./scripts/modules/task-manager/move-task.js";
// Create a test tasks.json file
const testData = {
tasks: [
{
id: 1,
title: "Parent Task",
description: "A parent task with subtasks",
status: "pending",
priority: "medium",
details: "Parent task details",
testStrategy: "Parent test strategy",
subtasks: [
{
id: 1,
title: "Subtask 1",
description: "First subtask",
status: "pending",
details: "Subtask 1 details",
testStrategy: "Subtask 1 test strategy",
},
{
id: 2,
title: "Subtask 2",
description: "Second subtask",
status: "pending",
details: "Subtask 2 details",
testStrategy: "Subtask 2 test strategy",
},
],
},
{
id: 2,
title: "Another Task",
description: "Another standalone task",
status: "pending",
priority: "low",
details: "Another task details",
testStrategy: "Another test strategy",
},
{
id: 3,
title: "Third Task",
description: "A third standalone task",
status: "done",
priority: "high",
details: "Third task details",
testStrategy: "Third test strategy",
},
],
};
const testFile = "./test-tasks.json";
function logSeparator(title) {
console.log(`\n${"=".repeat(60)}`);
console.log(` ${title}`);
console.log(`${"=".repeat(60)}`);
}
function logTaskState(data, label) {
console.log(`\n${label}:`);
console.log(
"Tasks:",
data.tasks.map((t) => ({ id: t.id, title: t.title, status: t.status }))
);
data.tasks.forEach((task) => {
if (task.subtasks && task.subtasks.length > 0) {
console.log(
`Task ${task.id} subtasks:`,
task.subtasks.map((st) => ({ id: st.id, title: st.title }))
);
}
});
}
async function testMoveSubtaskToTask() {
try {
logSeparator("TEST: Move Subtask to Standalone Task");
// Write test data
fs.writeFileSync(testFile, JSON.stringify(testData, null, 2));
const beforeData = JSON.parse(fs.readFileSync(testFile, "utf8"));
logTaskState(beforeData, "Before move");
// Move subtask 1.2 to become task 26
console.log("\n🔄 Moving subtask 1.2 to task 26...");
const result = await moveTask(testFile, "1.2", "26", false);
const afterData = JSON.parse(fs.readFileSync(testFile, "utf8"));
logTaskState(afterData, "After move");
// Verify the result
const task26 = afterData.tasks.find((t) => t.id === 26);
if (task26) {
console.log("\n✅ SUCCESS: Task 26 created with correct content:");
console.log(" Title:", task26.title);
console.log(" Description:", task26.description);
console.log(" Details:", task26.details);
console.log(" Dependencies:", task26.dependencies);
console.log(" Priority:", task26.priority);
} else {
console.log("\n❌ FAILED: Task 26 not found");
}
// Check for duplicates
const taskIds = afterData.tasks.map((t) => t.id);
const duplicates = taskIds.filter(
(id, index) => taskIds.indexOf(id) !== index
);
if (duplicates.length > 0) {
console.log("\n❌ FAILED: Duplicate task IDs found:", duplicates);
} else {
console.log("\n✅ SUCCESS: No duplicate task IDs");
}
// Check that original subtask was removed
const task1 = afterData.tasks.find((t) => t.id === 1);
const hasSubtask2 = task1.subtasks?.some((st) => st.id === 2);
if (hasSubtask2) {
console.log("\n❌ FAILED: Original subtask 1.2 still exists");
} else {
console.log("\n✅ SUCCESS: Original subtask 1.2 was removed");
}
return true;
} catch (error) {
console.error("\n❌ Test failed:", error.message);
return false;
}
}
async function testMoveTaskToTask() {
try {
logSeparator("TEST: Move Task to Replace Another Task");
// Reset test data
fs.writeFileSync(testFile, JSON.stringify(testData, null, 2));
const beforeData = JSON.parse(fs.readFileSync(testFile, "utf8"));
logTaskState(beforeData, "Before move");
// Move task 2 to replace task 3
console.log("\n🔄 Moving task 2 to replace task 3...");
const result = await moveTask(testFile, "2", "3", false);
const afterData = JSON.parse(fs.readFileSync(testFile, "utf8"));
logTaskState(afterData, "After move");
// Verify the result
const task3 = afterData.tasks.find((t) => t.id === 3);
const task2Gone = !afterData.tasks.find((t) => t.id === 2);
if (task3 && task3.title === "Another Task" && task2Gone) {
console.log("\n✅ SUCCESS: Task 2 replaced task 3 correctly");
console.log(" New Task 3 title:", task3.title);
console.log(" New Task 3 description:", task3.description);
} else {
console.log("\n❌ FAILED: Task replacement didn't work correctly");
}
return true;
} catch (error) {
console.error("\n❌ Test failed:", error.message);
return false;
}
}
async function runAllTests() {
console.log("🧪 Running Move Task Tests");
const results = [];
results.push(await testMoveSubtaskToTask());
results.push(await testMoveTaskToTask());
const passed = results.filter((r) => r).length;
const total = results.length;
logSeparator("TEST SUMMARY");
console.log(`\n📊 Results: ${passed}/${total} tests passed`);
if (passed === total) {
console.log("🎉 All tests passed!");
} else {
console.log("⚠️ Some tests failed. Check the output above.");
}
// Clean up
if (fs.existsSync(testFile)) {
fs.unlinkSync(testFile);
console.log("\n🧹 Cleaned up test files");
}
}
// Run tests if this file is executed directly
if (import.meta.url === `file://${process.argv[1]}`) {
runAllTests();
}
// Export for use in other test files
export { testMoveSubtaskToTask, testMoveTaskToTask, runAllTests };

View File

@@ -0,0 +1,95 @@
#!/usr/bin/env node
/**
* Integration test for telemetry submission with real gateway
*/
import { submitTelemetryData } from "./scripts/modules/telemetry-submission.js";
// Test data from the gateway registration
const TEST_API_KEY = "554d9e2a-9c07-4f69-a449-a2bda0ff06e7";
const TEST_USER_ID = "c81e686a-a37c-4dc4-ac23-0849f70a9a52";
async function testTelemetrySubmission() {
console.log("🧪 Testing telemetry submission with real gateway...\n");
// Create test telemetry data
const telemetryData = {
timestamp: new Date().toISOString(),
userId: TEST_USER_ID,
commandName: "add-task",
modelUsed: "claude-3-sonnet",
providerName: "anthropic",
inputTokens: 150,
outputTokens: 75,
totalTokens: 225,
totalCost: 0.0045,
currency: "USD",
// These should be filtered out before submission
commandArgs: {
id: "15",
prompt: "Test task creation",
apiKey: "sk-secret-key-should-be-filtered",
},
fullOutput: {
title: "Generated Task",
description: "AI generated task description",
internalDebugData: "This should not be sent to gateway",
},
};
console.log("📤 Submitting telemetry data...");
console.log("Data to submit:", JSON.stringify(telemetryData, null, 2));
console.log(
"\n⚠ Note: commandArgs and fullOutput should be filtered out before submission\n"
);
try {
const result = await submitTelemetryData(telemetryData);
console.log("✅ Telemetry submission result:");
console.log(JSON.stringify(result, null, 2));
if (result.success) {
console.log("\n🎉 SUCCESS: Telemetry data submitted successfully!");
if (result.id) {
console.log(`📝 Gateway assigned ID: ${result.id}`);
}
console.log(`🔄 Completed in ${result.attempt || 1} attempt(s)`);
} else {
console.log("\n❌ FAILED: Telemetry submission failed");
console.log(`Error: ${result.error}`);
}
} catch (error) {
console.error(
"\n💥 EXCEPTION: Unexpected error during telemetry submission"
);
console.error(error);
}
}
// Test with manual curl to verify endpoint works
async function testWithCurl() {
console.log("\n🔧 Testing with direct curl for comparison...\n");
const testData = {
timestamp: new Date().toISOString(),
userId: TEST_USER_ID,
commandName: "curl-test",
modelUsed: "claude-3-sonnet",
totalCost: 0.001,
currency: "USD",
};
console.log("Curl command that should work:");
console.log(`curl -X POST http://localhost:4444/api/v1/telemetry \\`);
console.log(` -H "Content-Type: application/json" \\`);
console.log(` -H "X-API-Key: ${TEST_API_KEY}" \\`);
console.log(` -d '${JSON.stringify(testData)}'`);
}
// Run the tests
console.log("🚀 Starting telemetry integration tests...\n");
await testTelemetrySubmission();
await testWithCurl();
console.log("\n✨ Integration test complete!");

View File

@@ -1,16 +0,0 @@
{
"models": {
"main": {
"provider": "openai",
"modelId": "gpt-4o"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro"
},
"fallback": {
"provider": "anthropic",
"modelId": "claude-3-haiku-20240307"
}
}
}

View File

@@ -0,0 +1,253 @@
import fs from "fs";
import path from "path";
import { execSync } from "child_process";
import { jest } from "@jest/globals";
import { fileURLToPath } from "url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
describe("TaskMaster Init Configuration Tests", () => {
const testProjectDir = path.join(__dirname, "../../test-init-project");
const configPath = path.join(testProjectDir, ".taskmasterconfig");
const envPath = path.join(testProjectDir, ".env");
beforeEach(() => {
// Clear all mocks and reset modules to prevent interference from other tests
jest.clearAllMocks();
jest.resetAllMocks();
jest.resetModules();
// Clean up test directory
if (fs.existsSync(testProjectDir)) {
execSync(`rm -rf "${testProjectDir}"`);
}
fs.mkdirSync(testProjectDir, { recursive: true });
process.chdir(testProjectDir);
});
afterEach(() => {
// Clean up after tests
process.chdir(__dirname);
if (fs.existsSync(testProjectDir)) {
execSync(`rm -rf "${testProjectDir}"`);
}
// Clear mocks again
jest.clearAllMocks();
jest.resetAllMocks();
});
describe("getUserId functionality", () => {
it("should read userId from config.account.userId", async () => {
// Create config with userId in account section
const config = {
account: {
mode: "byok",
userId: "test-user-123",
},
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
// Import and test getUserId
const { getUserId } = await import(
"../../scripts/modules/config-manager.js"
);
const userId = getUserId(testProjectDir);
expect(userId).toBe("test-user-123");
});
it("should set default userId if none exists", async () => {
// Create config without userId
const config = {
account: {
mode: "byok",
},
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
const { getUserId } = await import(
"../../scripts/modules/config-manager.js"
);
const userId = getUserId(testProjectDir);
// Should set default userId
expect(userId).toBe("1234567890");
// Verify it was written to config
const savedConfig = JSON.parse(fs.readFileSync(configPath, "utf8"));
expect(savedConfig.account.userId).toBe("1234567890");
});
it("should return existing userId even if it's the default value", async () => {
// Create config with default userId already set
const config = {
account: {
mode: "byok",
userId: "1234567890",
},
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
const { getUserId } = await import(
"../../scripts/modules/config-manager.js"
);
const userId = getUserId(testProjectDir);
// Should return the existing userId (even if it's the default)
expect(userId).toBe("1234567890");
});
});
describe("Init process integration", () => {
it("should store mode (byok/hosted) in config", () => {
// Test that mode gets stored correctly
const config = {
account: {
mode: "hosted",
userId: "test-user-789",
},
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
// Read config back
const savedConfig = JSON.parse(fs.readFileSync(configPath, "utf8"));
expect(savedConfig.account.mode).toBe("hosted");
expect(savedConfig.account.userId).toBe("test-user-789");
});
it("should store API key in .env file (NOT config)", () => {
// Create .env with API key
const envContent =
"TASKMASTER_SERVICE_ID=test-api-key-123\nOTHER_VAR=value\n";
fs.writeFileSync(envPath, envContent);
// Test that API key is in .env
const envFileContent = fs.readFileSync(envPath, "utf8");
expect(envFileContent).toContain(
"TASKMASTER_SERVICE_ID=test-api-key-123"
);
// Test that API key is NOT in config
const config = {
account: {
mode: "byok",
userId: "test-user-abc",
},
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
const configContent = fs.readFileSync(configPath, "utf8");
expect(configContent).not.toContain("test-api-key-123");
expect(configContent).not.toContain("apiKey");
});
});
describe("Telemetry configuration", () => {
it("should get API key from .env file", async () => {
// Create .env with API key
const envContent = "TASKMASTER_SERVICE_ID=env-api-key-456\n";
fs.writeFileSync(envPath, envContent);
// Test reading API key from .env
const { resolveEnvVariable } = await import(
"../../scripts/modules/utils.js"
);
const apiKey = resolveEnvVariable(
"TASKMASTER_SERVICE_ID",
null,
testProjectDir
);
expect(apiKey).toBe("env-api-key-456");
});
it("should prioritize environment variables", async () => {
// Clean up any existing env var first
delete process.env.TASKMASTER_SERVICE_ID;
// Set environment variable
process.env.TASKMASTER_SERVICE_ID = "process-env-key";
// Also create .env file
const envContent = "TASKMASTER_SERVICE_ID=file-env-key\n";
fs.writeFileSync(envPath, envContent);
const { resolveEnvVariable } = await import(
"../../scripts/modules/utils.js"
);
// Call without an explicit projectRoot; process.env should take precedence over the .env file
const apiKey = resolveEnvVariable("TASKMASTER_SERVICE_ID");
// Should prioritize process.env over .env file
expect(apiKey).toBe("process-env-key");
// Clean up
delete process.env.TASKMASTER_SERVICE_ID;
});
});
describe("Config structure consistency", () => {
it("should maintain consistent structure for both BYOK and hosted modes", () => {
// Test BYOK mode structure
const byokConfig = {
account: {
mode: "byok",
userId: "byok-user-123",
telemetryEnabled: false,
},
};
fs.writeFileSync(configPath, JSON.stringify(byokConfig, null, 2));
let config = JSON.parse(fs.readFileSync(configPath, "utf8"));
expect(config.account.mode).toBe("byok");
expect(config.account.userId).toBe("byok-user-123");
expect(config.account.telemetryEnabled).toBe(false);
// Test hosted mode structure
const hostedConfig = {
account: {
mode: "hosted",
userId: "hosted-user-456",
telemetryEnabled: true,
},
};
fs.writeFileSync(configPath, JSON.stringify(hostedConfig, null, 2));
config = JSON.parse(fs.readFileSync(configPath, "utf8"));
expect(config.account.mode).toBe("hosted");
expect(config.account.userId).toBe("hosted-user-456");
expect(config.account.telemetryEnabled).toBe(true);
});
it("should use consistent userId location (config.account.userId)", async () => {
const config = {
account: {
mode: "byok",
userId: "consistent-user-789",
},
global: {
logLevel: "info",
},
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
// Clear any cached modules to ensure fresh import
jest.resetModules();
const { getUserId } = await import(
"../../scripts/modules/config-manager.js"
);
const userId = getUserId(testProjectDir);
expect(userId).toBe("consistent-user-789");
// Verify it's in account section, not root
const savedConfig = JSON.parse(fs.readFileSync(configPath, "utf8"));
expect(savedConfig.account.userId).toBe("consistent-user-789");
expect(savedConfig.userId).toBeUndefined(); // Should NOT be in root
});
});
});

View File

@@ -1,43 +1,46 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { execSync } from 'child_process';
import { jest } from "@jest/globals";
import fs from "fs";
import path from "path";
import os from "os";
import { execSync } from "child_process";
describe('Roo Files Inclusion in Package', () => {
describe("Roo Files Inclusion in Package", () => {
// This test verifies that the required Roo files are included in the final package
test('package.json includes assets/** in the "files" array for Roo source files', () => {
// Read the package.json file
const packageJsonPath = path.join(process.cwd(), 'package.json');
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
const packageJsonPath = path.join(process.cwd(), "package.json");
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, "utf8"));
// Check if assets/** is included in the files array (which contains Roo files)
expect(packageJson.files).toContain('assets/**');
expect(packageJson.files).toContain("assets/**");
});
test('init.js creates Roo directories and copies files', () => {
test("init.js creates Roo directories and copies files", () => {
// Read the init.js file
const initJsPath = path.join(process.cwd(), 'scripts', 'init.js');
const initJsContent = fs.readFileSync(initJsPath, 'utf8');
const initJsPath = path.join(process.cwd(), "scripts", "init.js");
const initJsContent = fs.readFileSync(initJsPath, "utf8");
// Check for Roo directory creation (using more flexible pattern matching)
const hasRooDir = initJsContent.includes(
"ensureDirectoryExists(path.join(targetDir, '.roo"
// Check for Roo directory creation (flexible quote matching)
const hasRooDir =
/ensureDirectoryExists\(path\.join\(targetDir,\s*['""]\.roo/.test(
initJsContent
);
expect(hasRooDir).toBe(true);
// Check for .roomodes file copying
const hasRoomodes = initJsContent.includes("copyTemplateFile('.roomodes'");
// Check for .roomodes file copying (flexible quote matching)
const hasRoomodes = /copyTemplateFile\(\s*['""]\.roomodes['""]/.test(
initJsContent
);
expect(hasRoomodes).toBe(true);
// Check for mode-specific patterns (using more flexible pattern matching)
const hasArchitect = initJsContent.includes('architect');
const hasAsk = initJsContent.includes('ask');
const hasBoomerang = initJsContent.includes('boomerang');
const hasCode = initJsContent.includes('code');
const hasDebug = initJsContent.includes('debug');
const hasTest = initJsContent.includes('test');
const hasArchitect = initJsContent.includes("architect");
const hasAsk = initJsContent.includes("ask");
const hasBoomerang = initJsContent.includes("boomerang");
const hasCode = initJsContent.includes("code");
const hasDebug = initJsContent.includes("debug");
const hasTest = initJsContent.includes("test");
expect(hasArchitect).toBe(true);
expect(hasAsk).toBe(true);
@@ -47,13 +50,13 @@ describe('Roo Files Inclusion in Package', () => {
expect(hasTest).toBe(true);
});
test('source Roo files exist in assets directory', () => {
test("source Roo files exist in assets directory", () => {
// Verify that the source files for Roo integration exist
expect(
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roo'))
fs.existsSync(path.join(process.cwd(), "assets", "roocode", ".roo"))
).toBe(true);
expect(
fs.existsSync(path.join(process.cwd(), 'assets', 'roocode', '.roomodes'))
fs.existsSync(path.join(process.cwd(), "assets", "roocode", ".roomodes"))
).toBe(true);
});
});
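
The assertions above were reworked to tolerate either quote style, since a literal includes() check breaks as soon as the source is reformatted from single to double quotes. A small illustration with hypothetical before/after strings:

const singleQuoted = "copyTemplateFile('.roomodes', targetDir)";
const doubleQuoted = 'copyTemplateFile(".roomodes", targetDir)';
const pattern = /copyTemplateFile\(\s*['"]\.roomodes['"]/;

console.log(pattern.test(singleQuoted)); // true
console.log(pattern.test(doubleQuoted)); // true
console.log(singleQuoted.includes("copyTemplateFile('.roomodes'")); // true
console.log(doubleQuoted.includes("copyTemplateFile('.roomodes'")); // false -- the literal check misses the reformatted code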


@@ -1,69 +1,70 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import { jest } from "@jest/globals";
import fs from "fs";
import path from "path";
describe('Roo Initialization Functionality', () => {
describe("Roo Initialization Functionality", () => {
let initJsContent;
beforeAll(() => {
// Read the init.js file content once for all tests
const initJsPath = path.join(process.cwd(), 'scripts', 'init.js');
initJsContent = fs.readFileSync(initJsPath, 'utf8');
const initJsPath = path.join(process.cwd(), "scripts", "init.js");
initJsContent = fs.readFileSync(initJsPath, "utf8");
});
test('init.js creates Roo directories in createProjectStructure function', () => {
test("init.js creates Roo directories in createProjectStructure function", () => {
// Check if createProjectStructure function exists
expect(initJsContent).toContain('function createProjectStructure');
expect(initJsContent).toContain("function createProjectStructure");
// Check for the line that creates the .roo directory
const hasRooDir = initJsContent.includes(
"ensureDirectoryExists(path.join(targetDir, '.roo'))"
// Check for the line that creates the .roo directory (flexible quote matching)
const hasRooDir =
/ensureDirectoryExists\(path\.join\(targetDir,\s*['""]\.roo['""]/.test(
initJsContent
);
expect(hasRooDir).toBe(true);
// Check for the line that creates .roo/rules directory
const hasRooRulesDir = initJsContent.includes(
"ensureDirectoryExists(path.join(targetDir, '.roo', 'rules'))"
// Check for the line that creates .roo/rules directory (flexible quote matching)
const hasRooRulesDir =
/ensureDirectoryExists\(path\.join\(targetDir,\s*['""]\.roo['""],\s*['""]rules['""]/.test(
initJsContent
);
expect(hasRooRulesDir).toBe(true);
// Check for the for loop that creates mode-specific directories
// Check for the for loop that creates mode-specific directories (flexible matching)
const hasRooModeLoop =
initJsContent.includes(
"for (const mode of ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'])"
) ||
(initJsContent.includes('for (const mode of [') &&
initJsContent.includes('architect') &&
initJsContent.includes('ask') &&
initJsContent.includes('boomerang') &&
initJsContent.includes('code') &&
initJsContent.includes('debug') &&
initJsContent.includes('test'));
(initJsContent.includes("for (const mode of [") ||
initJsContent.includes("for (const mode of[")) &&
initJsContent.includes("architect") &&
initJsContent.includes("ask") &&
initJsContent.includes("boomerang") &&
initJsContent.includes("code") &&
initJsContent.includes("debug") &&
initJsContent.includes("test");
expect(hasRooModeLoop).toBe(true);
});
test('init.js copies Roo files from assets/roocode directory', () => {
// Check for the .roomodes case in the copyTemplateFile function
const casesRoomodes = initJsContent.includes("case '.roomodes':");
test("init.js copies Roo files from assets/roocode directory", () => {
// Check for the .roomodes case in the copyTemplateFile function (flexible quote matching)
const casesRoomodes = /case\s*['""]\.roomodes['""]/.test(initJsContent);
expect(casesRoomodes).toBe(true);
// Check that assets/roocode appears somewhere in the file
const hasRoocodePath = initJsContent.includes("'assets', 'roocode'");
// Check that assets/roocode appears somewhere in the file (flexible quote matching)
const hasRoocodePath = /['""]assets['""],\s*['""]roocode['""]/.test(
initJsContent
);
expect(hasRoocodePath).toBe(true);
// Check that roomodes file is copied
const copiesRoomodes = initJsContent.includes(
"copyTemplateFile('.roomodes'"
// Check that roomodes file is copied (flexible quote matching)
const copiesRoomodes = /copyTemplateFile\(\s*['""]\.roomodes['""]/.test(
initJsContent
);
expect(copiesRoomodes).toBe(true);
});
test('init.js has code to copy rule files for each mode', () => {
// Look for template copying for rule files
test("init.js has code to copy rule files for each mode", () => {
// Look for template copying for rule files (more flexible matching)
const hasModeRulesCopying =
initJsContent.includes('copyTemplateFile(') &&
initJsContent.includes('rules-') &&
initJsContent.includes('-rules');
initJsContent.includes("copyTemplateFile(") &&
(initJsContent.includes("rules-") || initJsContent.includes("-rules"));
expect(hasModeRulesCopying).toBe(true);
});
});


@@ -1,4 +1,4 @@
import { jest } from '@jest/globals';
import { jest } from "@jest/globals";
// Mock config-manager
const mockGetMainProvider = jest.fn();
@@ -17,26 +17,26 @@ const mockIsApiKeySet = jest.fn();
const mockModelMap = {
anthropic: [
{
id: 'test-main-model',
cost_per_1m_tokens: { input: 3, output: 15, currency: 'USD' }
id: "test-main-model",
cost_per_1m_tokens: { input: 3, output: 15, currency: "USD" },
},
{
id: 'test-fallback-model',
cost_per_1m_tokens: { input: 3, output: 15, currency: 'USD' }
}
id: "test-fallback-model",
cost_per_1m_tokens: { input: 3, output: 15, currency: "USD" },
},
],
perplexity: [
{
id: 'test-research-model',
cost_per_1m_tokens: { input: 1, output: 1, currency: 'USD' }
}
id: "test-research-model",
cost_per_1m_tokens: { input: 1, output: 1, currency: "USD" },
},
],
openai: [
{
id: 'test-openai-model',
cost_per_1m_tokens: { input: 2, output: 6, currency: 'USD' }
}
]
id: "test-openai-model",
cost_per_1m_tokens: { input: 2, output: 6, currency: "USD" },
},
],
// Add other providers/models if needed for specific tests
};
const mockGetBaseUrlForRole = jest.fn();
@@ -64,7 +64,7 @@ const mockGetDefaultSubtasks = jest.fn();
const mockGetDefaultPriority = jest.fn();
const mockGetProjectName = jest.fn();
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
jest.unstable_mockModule("../../scripts/modules/config-manager.js", () => ({
// Core config access
getConfig: mockGetConfig,
writeConfig: mockWriteConfig,
@@ -72,14 +72,14 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
ConfigurationError: class ConfigurationError extends Error {
constructor(message) {
super(message);
this.name = 'ConfigurationError';
this.name = "ConfigurationError";
}
},
// Validation
validateProvider: mockValidateProvider,
validateProviderModelCombination: mockValidateProviderModelCombination,
VALID_PROVIDERS: ['anthropic', 'perplexity', 'openai', 'google'],
VALID_PROVIDERS: ["anthropic", "perplexity", "openai", "google"],
MODEL_MAP: mockModelMap,
getAvailableModels: mockGetAvailableModels,
@@ -115,70 +115,71 @@ jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
getAzureBaseURL: mockGetAzureBaseURL,
getVertexProjectId: mockGetVertexProjectId,
getVertexLocation: mockGetVertexLocation,
getMcpApiKeyStatus: mockGetMcpApiKeyStatus
getMcpApiKeyStatus: mockGetMcpApiKeyStatus,
getTelemetryEnabled: jest.fn(() => false),
}));
// Mock AI Provider Classes with proper methods
const mockAnthropicProvider = {
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
};
const mockPerplexityProvider = {
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
};
const mockOpenAIProvider = {
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
};
const mockOllamaProvider = {
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
};
// Mock the provider classes to return our mock instances
jest.unstable_mockModule('../../src/ai-providers/index.js', () => ({
jest.unstable_mockModule("../../src/ai-providers/index.js", () => ({
AnthropicAIProvider: jest.fn(() => mockAnthropicProvider),
PerplexityAIProvider: jest.fn(() => mockPerplexityProvider),
GoogleAIProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
})),
OpenAIProvider: jest.fn(() => mockOpenAIProvider),
XAIProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
})),
OpenRouterAIProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
})),
OllamaAIProvider: jest.fn(() => mockOllamaProvider),
BedrockAIProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
})),
AzureProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
generateObject: jest.fn(),
})),
VertexAIProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn()
}))
generateObject: jest.fn(),
})),
}));
// Mock utils logger, API key resolver, AND findProjectRoot
@@ -205,7 +206,7 @@ const mockReadComplexityReport = jest.fn();
const mockFindTaskInComplexityReport = jest.fn();
const mockAggregateTelemetry = jest.fn();
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
jest.unstable_mockModule("../../scripts/modules/utils.js", () => ({
LOG_LEVELS: { error: 0, warn: 1, info: 2, debug: 3 },
log: mockLog,
resolveEnvVariable: mockResolveEnvVariable,
@@ -228,261 +229,261 @@ jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
sanitizePrompt: mockSanitizePrompt,
readComplexityReport: mockReadComplexityReport,
findTaskInComplexityReport: mockFindTaskInComplexityReport,
aggregateTelemetry: mockAggregateTelemetry
aggregateTelemetry: mockAggregateTelemetry,
}));
// Import the module to test (AFTER mocks)
const { generateTextService } = await import(
'../../scripts/modules/ai-services-unified.js'
"../../scripts/modules/ai-services-unified.js"
);
describe('Unified AI Services', () => {
const fakeProjectRoot = '/fake/project/root'; // Define for reuse
describe("Unified AI Services", () => {
const fakeProjectRoot = "/fake/project/root"; // Define for reuse
beforeEach(() => {
// Clear mocks before each test
jest.clearAllMocks(); // Clears all mocks
// Set default mock behaviors
mockGetMainProvider.mockReturnValue('anthropic');
mockGetMainModelId.mockReturnValue('test-main-model');
mockGetResearchProvider.mockReturnValue('perplexity');
mockGetResearchModelId.mockReturnValue('test-research-model');
mockGetFallbackProvider.mockReturnValue('anthropic');
mockGetFallbackModelId.mockReturnValue('test-fallback-model');
mockGetMainProvider.mockReturnValue("anthropic");
mockGetMainModelId.mockReturnValue("test-main-model");
mockGetResearchProvider.mockReturnValue("perplexity");
mockGetResearchModelId.mockReturnValue("test-research-model");
mockGetFallbackProvider.mockReturnValue("anthropic");
mockGetFallbackModelId.mockReturnValue("test-fallback-model");
mockGetParametersForRole.mockImplementation((role) => {
if (role === 'main') return { maxTokens: 100, temperature: 0.5 };
if (role === 'research') return { maxTokens: 200, temperature: 0.3 };
if (role === 'fallback') return { maxTokens: 150, temperature: 0.6 };
if (role === "main") return { maxTokens: 100, temperature: 0.5 };
if (role === "research") return { maxTokens: 200, temperature: 0.3 };
if (role === "fallback") return { maxTokens: 150, temperature: 0.6 };
return { maxTokens: 100, temperature: 0.5 }; // Default
});
mockResolveEnvVariable.mockImplementation((key) => {
if (key === 'ANTHROPIC_API_KEY') return 'mock-anthropic-key';
if (key === 'PERPLEXITY_API_KEY') return 'mock-perplexity-key';
if (key === 'OPENAI_API_KEY') return 'mock-openai-key';
if (key === 'OLLAMA_API_KEY') return 'mock-ollama-key';
if (key === "ANTHROPIC_API_KEY") return "mock-anthropic-key";
if (key === "PERPLEXITY_API_KEY") return "mock-perplexity-key";
if (key === "OPENAI_API_KEY") return "mock-openai-key";
if (key === "OLLAMA_API_KEY") return "mock-ollama-key";
return null;
});
// Set a default behavior for the new mock
mockFindProjectRoot.mockReturnValue(fakeProjectRoot);
mockGetDebugFlag.mockReturnValue(false);
mockGetUserId.mockReturnValue('test-user-id'); // Add default mock for getUserId
mockGetUserId.mockReturnValue("test-user-id"); // Add default mock for getUserId
mockIsApiKeySet.mockReturnValue(true); // Default to true for most tests
mockGetBaseUrlForRole.mockReturnValue(null); // Default to no base URL
});
describe('generateTextService', () => {
test('should use main provider/model and succeed', async () => {
describe("generateTextService", () => {
test("should use main provider/model and succeed", async () => {
mockAnthropicProvider.generateText.mockResolvedValue({
text: 'Main provider response',
usage: { inputTokens: 10, outputTokens: 20, totalTokens: 30 }
text: "Main provider response",
usage: { inputTokens: 10, outputTokens: 20, totalTokens: 30 },
});
const params = {
role: 'main',
role: "main",
session: { env: {} },
systemPrompt: 'System',
prompt: 'Test'
systemPrompt: "System",
prompt: "Test",
};
const result = await generateTextService(params);
expect(result.mainResult).toBe('Main provider response');
expect(result).toHaveProperty('telemetryData');
expect(result.mainResult).toBe("Main provider response");
expect(result).toHaveProperty("telemetryData");
expect(mockGetMainProvider).toHaveBeenCalledWith(fakeProjectRoot);
expect(mockGetMainModelId).toHaveBeenCalledWith(fakeProjectRoot);
expect(mockGetParametersForRole).toHaveBeenCalledWith(
'main',
"main",
fakeProjectRoot
);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(1);
expect(mockPerplexityProvider.generateText).not.toHaveBeenCalled();
});
test('should fall back to fallback provider if main fails', async () => {
const mainError = new Error('Main provider failed');
test("should fall back to fallback provider if main fails", async () => {
const mainError = new Error("Main provider failed");
mockAnthropicProvider.generateText
.mockRejectedValueOnce(mainError)
.mockResolvedValueOnce({
text: 'Fallback provider response',
usage: { inputTokens: 15, outputTokens: 25, totalTokens: 40 }
text: "Fallback provider response",
usage: { inputTokens: 15, outputTokens: 25, totalTokens: 40 },
});
const explicitRoot = '/explicit/test/root';
const explicitRoot = "/explicit/test/root";
const params = {
role: 'main',
prompt: 'Fallback test',
projectRoot: explicitRoot
role: "main",
prompt: "Fallback test",
projectRoot: explicitRoot,
};
const result = await generateTextService(params);
expect(result.mainResult).toBe('Fallback provider response');
expect(result).toHaveProperty('telemetryData');
expect(result.mainResult).toBe("Fallback provider response");
expect(result).toHaveProperty("telemetryData");
expect(mockGetMainProvider).toHaveBeenCalledWith(explicitRoot);
expect(mockGetFallbackProvider).toHaveBeenCalledWith(explicitRoot);
expect(mockGetParametersForRole).toHaveBeenCalledWith(
'main',
"main",
explicitRoot
);
expect(mockGetParametersForRole).toHaveBeenCalledWith(
'fallback',
"fallback",
explicitRoot
);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(2);
expect(mockPerplexityProvider.generateText).not.toHaveBeenCalled();
expect(mockLog).toHaveBeenCalledWith(
'error',
expect.stringContaining('Service call failed for role main')
"error",
expect.stringContaining("Service call failed for role main")
);
expect(mockLog).toHaveBeenCalledWith(
'info',
expect.stringContaining('New AI service call with role: fallback')
"info",
expect.stringContaining("New AI service call with role: fallback")
);
});
test('should fall back to research provider if main and fallback fail', async () => {
const mainError = new Error('Main failed');
const fallbackError = new Error('Fallback failed');
test("should fall back to research provider if main and fallback fail", async () => {
const mainError = new Error("Main failed");
const fallbackError = new Error("Fallback failed");
mockAnthropicProvider.generateText
.mockRejectedValueOnce(mainError)
.mockRejectedValueOnce(fallbackError);
mockPerplexityProvider.generateText.mockResolvedValue({
text: 'Research provider response',
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 }
text: "Research provider response",
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 },
});
const params = { role: 'main', prompt: 'Research fallback test' };
const params = { role: "main", prompt: "Research fallback test" };
const result = await generateTextService(params);
expect(result.mainResult).toBe('Research provider response');
expect(result).toHaveProperty('telemetryData');
expect(result.mainResult).toBe("Research provider response");
expect(result).toHaveProperty("telemetryData");
expect(mockGetMainProvider).toHaveBeenCalledWith(fakeProjectRoot);
expect(mockGetFallbackProvider).toHaveBeenCalledWith(fakeProjectRoot);
expect(mockGetResearchProvider).toHaveBeenCalledWith(fakeProjectRoot);
expect(mockGetParametersForRole).toHaveBeenCalledWith(
'main',
"main",
fakeProjectRoot
);
expect(mockGetParametersForRole).toHaveBeenCalledWith(
'fallback',
"fallback",
fakeProjectRoot
);
expect(mockGetParametersForRole).toHaveBeenCalledWith(
'research',
"research",
fakeProjectRoot
);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(2);
expect(mockPerplexityProvider.generateText).toHaveBeenCalledTimes(1);
expect(mockLog).toHaveBeenCalledWith(
'error',
expect.stringContaining('Service call failed for role fallback')
"error",
expect.stringContaining("Service call failed for role fallback")
);
expect(mockLog).toHaveBeenCalledWith(
'info',
expect.stringContaining('New AI service call with role: research')
"info",
expect.stringContaining("New AI service call with role: research")
);
});
test('should throw error if all providers in sequence fail', async () => {
test("should throw error if all providers in sequence fail", async () => {
mockAnthropicProvider.generateText.mockRejectedValue(
new Error('Anthropic failed')
new Error("Anthropic failed")
);
mockPerplexityProvider.generateText.mockRejectedValue(
new Error('Perplexity failed')
new Error("Perplexity failed")
);
const params = { role: 'main', prompt: 'All fail test' };
const params = { role: "main", prompt: "All fail test" };
await expect(generateTextService(params)).rejects.toThrow(
'Perplexity failed' // Error from the last attempt (research)
"Perplexity failed" // Error from the last attempt (research)
);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(2); // main, fallback
expect(mockPerplexityProvider.generateText).toHaveBeenCalledTimes(1); // research
});
test('should handle retryable errors correctly', async () => {
const retryableError = new Error('Rate limit');
test("should handle retryable errors correctly", async () => {
const retryableError = new Error("Rate limit");
mockAnthropicProvider.generateText
.mockRejectedValueOnce(retryableError) // Fails once
.mockResolvedValueOnce({
// Succeeds on retry
text: 'Success after retry',
usage: { inputTokens: 5, outputTokens: 10, totalTokens: 15 }
text: "Success after retry",
usage: { inputTokens: 5, outputTokens: 10, totalTokens: 15 },
});
const params = { role: 'main', prompt: 'Retry success test' };
const params = { role: "main", prompt: "Retry success test" };
const result = await generateTextService(params);
expect(result.mainResult).toBe('Success after retry');
expect(result).toHaveProperty('telemetryData');
expect(result.mainResult).toBe("Success after retry");
expect(result).toHaveProperty("telemetryData");
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(2); // Initial + 1 retry
expect(mockLog).toHaveBeenCalledWith(
'info',
"info",
expect.stringContaining(
'Something went wrong on the provider side. Retrying'
"Something went wrong on the provider side. Retrying"
)
);
});
test('should use default project root or handle null if findProjectRoot returns null', async () => {
test("should use default project root or handle null if findProjectRoot returns null", async () => {
mockFindProjectRoot.mockReturnValue(null); // Simulate not finding root
mockAnthropicProvider.generateText.mockResolvedValue({
text: 'Response with no root',
usage: { inputTokens: 1, outputTokens: 1, totalTokens: 2 }
text: "Response with no root",
usage: { inputTokens: 1, outputTokens: 1, totalTokens: 2 },
});
const params = { role: 'main', prompt: 'No root test' }; // No explicit root passed
const params = { role: "main", prompt: "No root test" }; // No explicit root passed
await generateTextService(params);
expect(mockGetMainProvider).toHaveBeenCalledWith(null);
expect(mockGetParametersForRole).toHaveBeenCalledWith('main', null);
expect(mockGetParametersForRole).toHaveBeenCalledWith("main", null);
expect(mockAnthropicProvider.generateText).toHaveBeenCalledTimes(1);
});
test('should skip provider with missing API key and try next in fallback sequence', async () => {
test("should skip provider with missing API key and try next in fallback sequence", async () => {
// Setup isApiKeySet to return false for anthropic but true for perplexity
mockIsApiKeySet.mockImplementation((provider, session, root) => {
if (provider === 'anthropic') return false; // Main provider has no key
if (provider === "anthropic") return false; // Main provider has no key
return true; // Other providers have keys
});
// Mock perplexity text response (since we'll skip anthropic)
mockPerplexityProvider.generateText.mockResolvedValue({
text: 'Perplexity response (skipped to research)',
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 }
text: "Perplexity response (skipped to research)",
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 },
});
const params = {
role: 'main',
prompt: 'Skip main provider test',
session: { env: {} }
role: "main",
prompt: "Skip main provider test",
session: { env: {} },
};
const result = await generateTextService(params);
// Should have gotten the perplexity response
expect(result.mainResult).toBe(
'Perplexity response (skipped to research)'
"Perplexity response (skipped to research)"
);
// Should check API keys
expect(mockIsApiKeySet).toHaveBeenCalledWith(
'anthropic',
"anthropic",
params.session,
fakeProjectRoot
);
expect(mockIsApiKeySet).toHaveBeenCalledWith(
'perplexity',
"perplexity",
params.session,
fakeProjectRoot
);
// Should log a warning
expect(mockLog).toHaveBeenCalledWith(
'warn',
"warn",
expect.stringContaining(
`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
)
@@ -495,70 +496,70 @@ describe('Unified AI Services', () => {
expect(mockPerplexityProvider.generateText).toHaveBeenCalledTimes(1);
});
test('should skip multiple providers with missing API keys and use first available', async () => {
test("should skip multiple providers with missing API keys and use first available", async () => {
// Setup: Main and fallback providers have no keys, only research has a key
mockIsApiKeySet.mockImplementation((provider, session, root) => {
if (provider === 'anthropic') return false; // Main and fallback are both anthropic
if (provider === 'perplexity') return true; // Research has a key
if (provider === "anthropic") return false; // Main and fallback are both anthropic
if (provider === "perplexity") return true; // Research has a key
return false;
});
// Define different providers for testing multiple skips
mockGetFallbackProvider.mockReturnValue('openai'); // Different from main
mockGetFallbackModelId.mockReturnValue('test-openai-model');
mockGetFallbackProvider.mockReturnValue("openai"); // Different from main
mockGetFallbackModelId.mockReturnValue("test-openai-model");
// Mock isApiKeySet to return false for both main and fallback
mockIsApiKeySet.mockImplementation((provider, session, root) => {
if (provider === 'anthropic') return false; // Main provider has no key
if (provider === 'openai') return false; // Fallback provider has no key
if (provider === "anthropic") return false; // Main provider has no key
if (provider === "openai") return false; // Fallback provider has no key
return true; // Research provider has a key
});
// Mock perplexity text response (since we'll skip to research)
mockPerplexityProvider.generateText.mockResolvedValue({
text: 'Research response after skipping main and fallback',
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 }
text: "Research response after skipping main and fallback",
usage: { inputTokens: 20, outputTokens: 30, totalTokens: 50 },
});
const params = {
role: 'main',
prompt: 'Skip multiple providers test',
session: { env: {} }
role: "main",
prompt: "Skip multiple providers test",
session: { env: {} },
};
const result = await generateTextService(params);
// Should have gotten the perplexity (research) response
expect(result.mainResult).toBe(
'Research response after skipping main and fallback'
"Research response after skipping main and fallback"
);
// Should check API keys for all three roles
expect(mockIsApiKeySet).toHaveBeenCalledWith(
'anthropic',
"anthropic",
params.session,
fakeProjectRoot
);
expect(mockIsApiKeySet).toHaveBeenCalledWith(
'openai',
"openai",
params.session,
fakeProjectRoot
);
expect(mockIsApiKeySet).toHaveBeenCalledWith(
'perplexity',
"perplexity",
params.session,
fakeProjectRoot
);
// Should log warnings for both skipped providers
expect(mockLog).toHaveBeenCalledWith(
'warn',
"warn",
expect.stringContaining(
`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
)
);
expect(mockLog).toHaveBeenCalledWith(
'warn',
"warn",
expect.stringContaining(
`Skipping role 'fallback' (Provider: openai): API key not set or invalid.`
)
@@ -572,36 +573,36 @@ describe('Unified AI Services', () => {
expect(mockPerplexityProvider.generateText).toHaveBeenCalledTimes(1);
});
test('should throw error if all providers in sequence have missing API keys', async () => {
test("should throw error if all providers in sequence have missing API keys", async () => {
// Mock all providers to have missing API keys
mockIsApiKeySet.mockReturnValue(false);
const params = {
role: 'main',
prompt: 'All API keys missing test',
session: { env: {} }
role: "main",
prompt: "All API keys missing test",
session: { env: {} },
};
// Should throw error since all providers would be skipped
await expect(generateTextService(params)).rejects.toThrow(
'AI service call failed for all configured roles'
"AI service call failed for all configured roles"
);
// Should log warnings for all skipped providers
expect(mockLog).toHaveBeenCalledWith(
'warn',
"warn",
expect.stringContaining(
`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
)
);
expect(mockLog).toHaveBeenCalledWith(
'warn',
"warn",
expect.stringContaining(
`Skipping role 'fallback' (Provider: anthropic): API key not set or invalid.`
)
);
expect(mockLog).toHaveBeenCalledWith(
'warn',
"warn",
expect.stringContaining(
`Skipping role 'research' (Provider: perplexity): API key not set or invalid.`
)
@@ -609,9 +610,9 @@ describe('Unified AI Services', () => {
// Should log final error
expect(mockLog).toHaveBeenCalledWith(
'error',
"error",
expect.stringContaining(
'All roles in the sequence [main, fallback, research] failed.'
"All roles in the sequence [main, fallback, research] failed."
)
);
@@ -620,27 +621,27 @@ describe('Unified AI Services', () => {
expect(mockPerplexityProvider.generateText).not.toHaveBeenCalled();
});
test('should not check API key for Ollama provider and try to use it', async () => {
test("should not check API key for Ollama provider and try to use it", async () => {
// Setup: Set main provider to ollama
mockGetMainProvider.mockReturnValue('ollama');
mockGetMainModelId.mockReturnValue('llama3');
mockGetMainProvider.mockReturnValue("ollama");
mockGetMainModelId.mockReturnValue("llama3");
// Mock Ollama text generation to succeed
mockOllamaProvider.generateText.mockResolvedValue({
text: 'Ollama response (no API key required)',
usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 }
text: "Ollama response (no API key required)",
usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 },
});
const params = {
role: 'main',
prompt: 'Ollama special case test',
session: { env: {} }
role: "main",
prompt: "Ollama special case test",
session: { env: {} },
};
const result = await generateTextService(params);
// Should have gotten the Ollama response
expect(result.mainResult).toBe('Ollama response (no API key required)');
expect(result.mainResult).toBe("Ollama response (no API key required)");
// isApiKeySet shouldn't be called for Ollama
// Note: This is indirect - the code just doesn't check isApiKeySet for ollama
@@ -651,9 +652,9 @@ describe('Unified AI Services', () => {
expect(mockOllamaProvider.generateText).toHaveBeenCalledTimes(1);
});
test('should correctly use the provided session for API key check', async () => {
test("should correctly use the provided session for API key check", async () => {
// Mock custom session object with env vars
const customSession = { env: { ANTHROPIC_API_KEY: 'session-api-key' } };
const customSession = { env: { ANTHROPIC_API_KEY: "session-api-key" } };
// Setup API key check to verify the session is passed correctly
mockIsApiKeySet.mockImplementation((provider, session, root) => {
@@ -663,27 +664,27 @@ describe('Unified AI Services', () => {
// Mock the anthropic response
mockAnthropicProvider.generateText.mockResolvedValue({
text: 'Anthropic response with session key',
usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 }
text: "Anthropic response with session key",
usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 },
});
const params = {
role: 'main',
prompt: 'Session API key test',
session: customSession
role: "main",
prompt: "Session API key test",
session: customSession,
};
const result = await generateTextService(params);
// Should check API key with the custom session
expect(mockIsApiKeySet).toHaveBeenCalledWith(
'anthropic',
"anthropic",
customSession,
fakeProjectRoot
);
// Should have gotten the anthropic response
expect(result.mainResult).toBe('Anthropic response with session key');
expect(result.mainResult).toBe("Anthropic response with session key");
});
});
});
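
Taken together, these tests describe the role sequencing the unified service is expected to follow: try main, then fallback, then research; skip any role whose provider has no API key configured (Ollama excepted); and surface the last error, or an "all roles failed" error when every role was skipped. A minimal sketch of that behaviour, with illustrative names rather than the actual implementation in scripts/modules/ai-services-unified.js:

async function generateTextWithFallback({ roles, getProviderForRole, isApiKeySet, prompt }) {
  let lastError;
  for (const role of roles) { // e.g. ["main", "fallback", "research"]
    const { name, provider } = getProviderForRole(role);
    // Ollama is local and needs no key, so it is never skipped on this check.
    if (name !== "ollama" && !isApiKeySet(name)) {
      console.warn(`Skipping role '${role}' (Provider: ${name}): API key not set or invalid.`);
      continue;
    }
    try {
      return await provider.generateText({ prompt });
    } catch (err) {
      lastError = err;
      console.error(`Service call failed for role ${role}: ${err.message}`);
    }
  }
  // Either every attempt threw (rethrow the last error) or every role was skipped.
  throw lastError ?? new Error("AI service call failed for all configured roles");
}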


@@ -1,29 +1,29 @@
import fs from 'fs';
import path from 'path';
import { jest } from '@jest/globals';
import { fileURLToPath } from 'url';
import fs from "fs";
import path from "path";
import { jest } from "@jest/globals";
import { fileURLToPath } from "url";
// --- Read REAL supported-models.json data BEFORE mocks ---
const __filename = fileURLToPath(import.meta.url); // Get current file path
const __dirname = path.dirname(__filename); // Get current directory
const realSupportedModelsPath = path.resolve(
__dirname,
'../../scripts/modules/supported-models.json'
"../../scripts/modules/supported-models.json"
);
let REAL_SUPPORTED_MODELS_CONTENT;
let REAL_SUPPORTED_MODELS_DATA;
try {
REAL_SUPPORTED_MODELS_CONTENT = fs.readFileSync(
realSupportedModelsPath,
'utf-8'
"utf-8"
);
REAL_SUPPORTED_MODELS_DATA = JSON.parse(REAL_SUPPORTED_MODELS_CONTENT);
} catch (err) {
console.error(
'FATAL TEST SETUP ERROR: Could not read or parse real supported-models.json',
"FATAL TEST SETUP ERROR: Could not read or parse real supported-models.json",
err
);
REAL_SUPPORTED_MODELS_CONTENT = '{}'; // Default to empty object on error
REAL_SUPPORTED_MODELS_CONTENT = "{}"; // Default to empty object on error
REAL_SUPPORTED_MODELS_DATA = {};
process.exit(1); // Exit if essential test data can't be loaded
}
@@ -31,126 +31,137 @@ try {
// --- Define Mock Function Instances ---
const mockFindProjectRoot = jest.fn();
const mockLog = jest.fn();
const mockIsSilentMode = jest.fn();
// --- Mock Dependencies BEFORE importing the module under test ---
// Mock the entire 'fs' module
jest.mock('fs');
jest.mock("fs");
// Mock the 'utils.js' module using a factory function
jest.mock('../../scripts/modules/utils.js', () => ({
jest.mock("../../scripts/modules/utils.js", () => ({
__esModule: true, // Indicate it's an ES module mock
findProjectRoot: mockFindProjectRoot, // Use the mock function instance
log: mockLog, // Use the mock function instance
isSilentMode: mockIsSilentMode, // Use the mock function instance
// Include other necessary exports from utils if config-manager uses them directly
resolveEnvVariable: jest.fn() // Example if needed
resolveEnvVariable: jest.fn(), // Example if needed
}));
// DO NOT MOCK 'chalk'
// --- Import the module under test AFTER mocks are defined ---
import * as configManager from '../../scripts/modules/config-manager.js';
import * as configManager from "../../scripts/modules/config-manager.js";
// Import the mocked 'fs' module to allow spying on its functions
import fsMocked from 'fs';
import fsMocked from "fs";
// --- Test Data (Keep as is, ensure DEFAULT_CONFIG is accurate) ---
const MOCK_PROJECT_ROOT = '/mock/project';
const MOCK_CONFIG_PATH = path.join(MOCK_PROJECT_ROOT, '.taskmasterconfig');
const MOCK_PROJECT_ROOT = "/mock/project";
const MOCK_CONFIG_PATH = path.join(MOCK_PROJECT_ROOT, ".taskmasterconfig");
// Updated DEFAULT_CONFIG reflecting the implementation
const DEFAULT_CONFIG = {
models: {
main: {
provider: 'anthropic',
modelId: 'claude-3-7-sonnet-20250219',
maxTokens: 64000,
temperature: 0.2
},
research: {
provider: 'perplexity',
modelId: 'sonar-pro',
maxTokens: 8700,
temperature: 0.1
},
fallback: {
provider: 'anthropic',
modelId: 'claude-3-5-sonnet',
maxTokens: 64000,
temperature: 0.2
}
},
global: {
logLevel: 'info',
logLevel: "info",
debug: false,
defaultSubtasks: 5,
defaultPriority: 'medium',
projectName: 'Task Master',
ollamaBaseURL: 'http://localhost:11434/api'
}
defaultPriority: "medium",
projectName: "Taskmaster",
ollamaBaseURL: "http://localhost:11434/api",
azureBaseURL: "https://your-endpoint.azure.com/",
},
models: {
main: {
provider: "anthropic",
modelId: "claude-3-7-sonnet-20250219",
maxTokens: 64000,
temperature: 0.2,
},
research: {
provider: "perplexity",
modelId: "sonar-pro",
maxTokens: 8700,
temperature: 0.1,
},
fallback: {
provider: "anthropic",
modelId: "claude-3-5-sonnet",
maxTokens: 64000,
temperature: 0.2,
},
},
account: {
userId: "1234567890",
email: "",
mode: "byok",
telemetryEnabled: true,
},
};
// Other test data (VALID_CUSTOM_CONFIG, PARTIAL_CONFIG, INVALID_PROVIDER_CONFIG)
const VALID_CUSTOM_CONFIG = {
models: {
main: {
provider: 'openai',
modelId: 'gpt-4o',
provider: "openai",
modelId: "gpt-4o",
maxTokens: 4096,
temperature: 0.5
temperature: 0.5,
},
research: {
provider: 'google',
modelId: 'gemini-1.5-pro-latest',
provider: "google",
modelId: "gemini-1.5-pro-latest",
maxTokens: 8192,
temperature: 0.3
temperature: 0.3,
},
fallback: {
provider: 'anthropic',
modelId: 'claude-3-opus-20240229',
provider: "anthropic",
modelId: "claude-3-opus-20240229",
maxTokens: 100000,
temperature: 0.4
}
temperature: 0.4,
},
},
global: {
logLevel: 'debug',
defaultPriority: 'high',
projectName: 'My Custom Project'
}
logLevel: "debug",
defaultPriority: "high",
projectName: "My Custom Project",
},
};
const PARTIAL_CONFIG = {
models: {
main: { provider: 'openai', modelId: 'gpt-4-turbo' }
main: { provider: "openai", modelId: "gpt-4-turbo" },
},
global: {
projectName: 'Partial Project'
}
projectName: "Partial Project",
},
};
const INVALID_PROVIDER_CONFIG = {
models: {
main: { provider: 'invalid-provider', modelId: 'some-model' },
main: { provider: "invalid-provider", modelId: "some-model" },
research: {
provider: 'perplexity',
modelId: 'llama-3-sonar-large-32k-online'
}
provider: "perplexity",
modelId: "llama-3-sonar-large-32k-online",
},
},
global: {
logLevel: 'warn'
}
logLevel: "warn",
},
};
// Define spies globally to be restored in afterAll
let consoleErrorSpy;
let consoleWarnSpy;
let consoleLogSpy;
let fsReadFileSyncSpy;
let fsWriteFileSyncSpy;
let fsExistsSyncSpy;
beforeAll(() => {
// Set up console spies
consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
consoleWarnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
consoleErrorSpy = jest.spyOn(console, "error").mockImplementation(() => {});
consoleWarnSpy = jest.spyOn(console, "warn").mockImplementation(() => {});
consoleLogSpy = jest.spyOn(console, "log").mockImplementation(() => {});
});
afterAll(() => {
@@ -165,20 +176,22 @@ beforeEach(() => {
// Reset the external mock instances for utils
mockFindProjectRoot.mockReset();
mockLog.mockReset();
mockIsSilentMode.mockReset();
// --- Set up spies ON the imported 'fs' mock ---
fsExistsSyncSpy = jest.spyOn(fsMocked, 'existsSync');
fsReadFileSyncSpy = jest.spyOn(fsMocked, 'readFileSync');
fsWriteFileSyncSpy = jest.spyOn(fsMocked, 'writeFileSync');
fsExistsSyncSpy = jest.spyOn(fsMocked, "existsSync");
fsReadFileSyncSpy = jest.spyOn(fsMocked, "readFileSync");
fsWriteFileSyncSpy = jest.spyOn(fsMocked, "writeFileSync");
// --- Default Mock Implementations ---
mockFindProjectRoot.mockReturnValue(MOCK_PROJECT_ROOT); // Default for utils.findProjectRoot
mockIsSilentMode.mockReturnValue(false); // Default for utils.isSilentMode
fsExistsSyncSpy.mockReturnValue(true); // Assume files exist by default
// Default readFileSync: Return REAL models content, mocked config, or throw error
fsReadFileSyncSpy.mockImplementation((filePath) => {
const baseName = path.basename(filePath);
if (baseName === 'supported-models.json') {
if (baseName === "supported-models.json") {
// Return the REAL file content stringified
return REAL_SUPPORTED_MODELS_CONTENT;
} else if (filePath === MOCK_CONFIG_PATH) {
@@ -194,76 +207,76 @@ beforeEach(() => {
});
// --- Validation Functions ---
describe('Validation Functions', () => {
describe("Validation Functions", () => {
// Tests for validateProvider and validateProviderModelCombination
test('validateProvider should return true for valid providers', () => {
expect(configManager.validateProvider('openai')).toBe(true);
expect(configManager.validateProvider('anthropic')).toBe(true);
expect(configManager.validateProvider('google')).toBe(true);
expect(configManager.validateProvider('perplexity')).toBe(true);
expect(configManager.validateProvider('ollama')).toBe(true);
expect(configManager.validateProvider('openrouter')).toBe(true);
test("validateProvider should return true for valid providers", () => {
expect(configManager.validateProvider("openai")).toBe(true);
expect(configManager.validateProvider("anthropic")).toBe(true);
expect(configManager.validateProvider("google")).toBe(true);
expect(configManager.validateProvider("perplexity")).toBe(true);
expect(configManager.validateProvider("ollama")).toBe(true);
expect(configManager.validateProvider("openrouter")).toBe(true);
});
test('validateProvider should return false for invalid providers', () => {
expect(configManager.validateProvider('invalid-provider')).toBe(false);
expect(configManager.validateProvider('grok')).toBe(false); // Not in mock map
expect(configManager.validateProvider('')).toBe(false);
test("validateProvider should return false for invalid providers", () => {
expect(configManager.validateProvider("invalid-provider")).toBe(false);
expect(configManager.validateProvider("grok")).toBe(false); // Not in mock map
expect(configManager.validateProvider("")).toBe(false);
expect(configManager.validateProvider(null)).toBe(false);
});
test('validateProviderModelCombination should validate known good combinations', () => {
test("validateProviderModelCombination should validate known good combinations", () => {
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
configManager.getConfig(MOCK_PROJECT_ROOT, true);
expect(
configManager.validateProviderModelCombination('openai', 'gpt-4o')
configManager.validateProviderModelCombination("openai", "gpt-4o")
).toBe(true);
expect(
configManager.validateProviderModelCombination(
'anthropic',
'claude-3-5-sonnet-20241022'
"anthropic",
"claude-3-5-sonnet-20241022"
)
).toBe(true);
});
test('validateProviderModelCombination should return false for known bad combinations', () => {
test("validateProviderModelCombination should return false for known bad combinations", () => {
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
configManager.getConfig(MOCK_PROJECT_ROOT, true);
expect(
configManager.validateProviderModelCombination(
'openai',
'claude-3-opus-20240229'
"openai",
"claude-3-opus-20240229"
)
).toBe(false);
});
test('validateProviderModelCombination should return true for ollama/openrouter (empty lists in map)', () => {
test("validateProviderModelCombination should return true for ollama/openrouter (empty lists in map)", () => {
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
configManager.getConfig(MOCK_PROJECT_ROOT, true);
expect(
configManager.validateProviderModelCombination('ollama', 'any-model')
configManager.validateProviderModelCombination("ollama", "any-model")
).toBe(false);
expect(
configManager.validateProviderModelCombination('openrouter', 'any/model')
configManager.validateProviderModelCombination("openrouter", "any/model")
).toBe(false);
});
test('validateProviderModelCombination should return true for providers not in map', () => {
test("validateProviderModelCombination should return true for providers not in map", () => {
// Re-load config to ensure MODEL_MAP is populated from mock (now real data)
configManager.getConfig(MOCK_PROJECT_ROOT, true);
// The implementation returns true if the provider isn't in the map
expect(
configManager.validateProviderModelCombination(
'unknown-provider',
'some-model'
"unknown-provider",
"some-model"
)
).toBe(true);
});
});
// --- getConfig Tests ---
describe('getConfig Tests', () => {
test('should return default config if .taskmasterconfig does not exist', () => {
describe("getConfig Tests", () => {
test("should return default config if .taskmasterconfig does not exist", () => {
// Arrange
fsExistsSyncSpy.mockReturnValue(false);
// findProjectRoot mock is set in beforeEach
@@ -277,11 +290,11 @@ describe('getConfig Tests', () => {
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
expect(fsReadFileSyncSpy).not.toHaveBeenCalled(); // No read if file doesn't exist
expect(consoleWarnSpy).toHaveBeenCalledWith(
expect.stringContaining('not found at provided project root')
expect.stringContaining("not found at provided project root")
);
});
test.skip('should use findProjectRoot and return defaults if file not found', () => {
test.skip("should use findProjectRoot and return defaults if file not found", () => {
// TODO: Fix mock interaction, findProjectRoot isn't being registered as called
// Arrange
fsExistsSyncSpy.mockReturnValue(false);
@@ -296,111 +309,76 @@ describe('getConfig Tests', () => {
expect(config).toEqual(DEFAULT_CONFIG);
expect(fsReadFileSyncSpy).not.toHaveBeenCalled();
expect(consoleWarnSpy).toHaveBeenCalledWith(
expect.stringContaining('not found at derived root')
expect.stringContaining("not found at derived root")
); // Adjusted expected warning
});
test('should read and merge valid config file with defaults', () => {
// Arrange: Override readFileSync for this test
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(VALID_CUSTOM_CONFIG);
if (path.basename(filePath) === 'supported-models.json') {
// Provide necessary models for validation within getConfig
return JSON.stringify({
openai: [{ id: 'gpt-4o' }],
google: [{ id: 'gemini-1.5-pro-latest' }],
perplexity: [{ id: 'sonar-pro' }],
anthropic: [
{ id: 'claude-3-opus-20240229' },
{ id: 'claude-3-5-sonnet' },
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
],
ollama: [],
openrouter: []
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// findProjectRoot mock set in beforeEach
// Act
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); // Force reload
// Assert: Construct expected merged config
const expectedMergedConfig = {
models: {
main: {
...DEFAULT_CONFIG.models.main,
...VALID_CUSTOM_CONFIG.models.main
},
research: {
...DEFAULT_CONFIG.models.research,
...VALID_CUSTOM_CONFIG.models.research
},
fallback: {
...DEFAULT_CONFIG.models.fallback,
...VALID_CUSTOM_CONFIG.models.fallback
}
},
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global }
};
expect(config).toEqual(expectedMergedConfig);
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
});
test('should merge defaults for partial config file', () => {
test("should read and merge valid config file with defaults", () => {
// Arrange
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) return JSON.stringify(PARTIAL_CONFIG);
if (path.basename(filePath) === 'supported-models.json') {
return JSON.stringify({
openai: [{ id: 'gpt-4-turbo' }],
perplexity: [{ id: 'sonar-pro' }],
anthropic: [
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
],
ollama: [],
openrouter: []
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
});
fsExistsSyncSpy.mockReturnValue(true);
// findProjectRoot mock set in beforeEach
fsReadFileSyncSpy.mockReturnValue(JSON.stringify(VALID_CUSTOM_CONFIG));
// Act
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Assert: Construct expected merged config
// Assert
const expectedMergedConfig = {
models: {
main: {
...DEFAULT_CONFIG.models.main,
...VALID_CUSTOM_CONFIG.models.main,
},
research: {
...DEFAULT_CONFIG.models.research,
...VALID_CUSTOM_CONFIG.models.research,
},
fallback: {
...DEFAULT_CONFIG.models.fallback,
...VALID_CUSTOM_CONFIG.models.fallback,
},
},
global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global },
account: { ...DEFAULT_CONFIG.account },
};
expect(config).toEqual(expectedMergedConfig);
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, "utf-8");
});
test("should merge defaults for partial config file", () => {
// Arrange
fsExistsSyncSpy.mockReturnValue(true);
fsReadFileSyncSpy.mockReturnValue(JSON.stringify(PARTIAL_CONFIG));
// Act
const config = configManager.getConfig(MOCK_PROJECT_ROOT, true);
// Assert
const expectedMergedConfig = {
models: {
main: { ...DEFAULT_CONFIG.models.main, ...PARTIAL_CONFIG.models.main },
research: { ...DEFAULT_CONFIG.models.research },
fallback: { ...DEFAULT_CONFIG.models.fallback }
fallback: { ...DEFAULT_CONFIG.models.fallback },
},
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global }
global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global },
account: { ...DEFAULT_CONFIG.account },
};
expect(config).toEqual(expectedMergedConfig);
expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8');
expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, "utf-8");
});
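
The expected objects in the two tests above describe a section-wise merge: each of models.main/research/fallback, global, and account is the corresponding default spread under whatever the user's .taskmasterconfig provides. A minimal sketch of that shape (assumed, not the actual getConfig implementation):

function mergeWithDefaults(defaults, userConfig = {}) {
  return {
    models: {
      main: { ...defaults.models.main, ...userConfig.models?.main },
      research: { ...defaults.models.research, ...userConfig.models?.research },
      fallback: { ...defaults.models.fallback, ...userConfig.models?.fallback },
    },
    global: { ...defaults.global, ...userConfig.global },
    account: { ...defaults.account, ...userConfig.account },
  };
}
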
test('should handle JSON parsing error and return defaults', () => {
test("should handle JSON parsing error and return defaults", () => {
// Arrange
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) return 'invalid json';
if (filePath === MOCK_CONFIG_PATH) return "invalid json";
// Mock models read needed for initial load before parse error
if (path.basename(filePath) === 'supported-models.json') {
if (path.basename(filePath) === "supported-models.json") {
return JSON.stringify({
anthropic: [{ id: 'claude-3-7-sonnet-20250219' }],
perplexity: [{ id: 'sonar-pro' }],
fallback: [{ id: 'claude-3-5-sonnet' }],
anthropic: [{ id: "claude-3-7-sonnet-20250219" }],
perplexity: [{ id: "sonar-pro" }],
fallback: [{ id: "claude-3-5-sonnet" }],
ollama: [],
openrouter: []
openrouter: [],
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
@@ -414,23 +392,23 @@ describe('getConfig Tests', () => {
// Assert
expect(config).toEqual(DEFAULT_CONFIG);
expect(consoleErrorSpy).toHaveBeenCalledWith(
expect.stringContaining('Error reading or parsing')
expect.stringContaining("Error reading or parsing")
);
});
test('should handle file read error and return defaults', () => {
test("should handle file read error and return defaults", () => {
// Arrange
const readError = new Error('Permission denied');
const readError = new Error("Permission denied");
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH) throw readError;
// Mock models read needed for initial load before read error
if (path.basename(filePath) === 'supported-models.json') {
if (path.basename(filePath) === "supported-models.json") {
return JSON.stringify({
anthropic: [{ id: 'claude-3-7-sonnet-20250219' }],
perplexity: [{ id: 'sonar-pro' }],
fallback: [{ id: 'claude-3-5-sonnet' }],
anthropic: [{ id: "claude-3-7-sonnet-20250219" }],
perplexity: [{ id: "sonar-pro" }],
fallback: [{ id: "claude-3-5-sonnet" }],
ollama: [],
openrouter: []
openrouter: [],
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
@@ -448,20 +426,20 @@ describe('getConfig Tests', () => {
);
});
test('should validate provider and fallback to default if invalid', () => {
test("should validate provider and fallback to default if invalid", () => {
// Arrange
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(INVALID_PROVIDER_CONFIG);
if (path.basename(filePath) === 'supported-models.json') {
if (path.basename(filePath) === "supported-models.json") {
return JSON.stringify({
perplexity: [{ id: 'llama-3-sonar-large-32k-online' }],
perplexity: [{ id: "llama-3-sonar-large-32k-online" }],
anthropic: [
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
{ id: "claude-3-7-sonnet-20250219" },
{ id: "claude-3-5-sonnet" },
],
ollama: [],
openrouter: []
openrouter: [],
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
@@ -483,19 +461,20 @@ describe('getConfig Tests', () => {
main: { ...DEFAULT_CONFIG.models.main },
research: {
...DEFAULT_CONFIG.models.research,
...INVALID_PROVIDER_CONFIG.models.research
...INVALID_PROVIDER_CONFIG.models.research,
},
fallback: { ...DEFAULT_CONFIG.models.fallback }
fallback: { ...DEFAULT_CONFIG.models.fallback },
},
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global }
global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global },
account: { ...DEFAULT_CONFIG.account },
};
expect(config).toEqual(expectedMergedConfig);
});
});
// --- writeConfig Tests ---
describe('writeConfig', () => {
test('should write valid config to file', () => {
describe("writeConfig", () => {
test("should write valid config to file", () => {
// Arrange (Default mocks are sufficient)
// findProjectRoot mock set in beforeEach
fsWriteFileSyncSpy.mockImplementation(() => {}); // Ensure it doesn't throw
@@ -515,9 +494,9 @@ describe('writeConfig', () => {
expect(consoleErrorSpy).not.toHaveBeenCalled();
});
test('should return false and log error if write fails', () => {
test("should return false and log error if write fails", () => {
// Arrange
const mockWriteError = new Error('Disk full');
const mockWriteError = new Error("Disk full");
fsWriteFileSyncSpy.mockImplementation(() => {
throw mockWriteError;
});
@@ -537,7 +516,7 @@ describe('writeConfig', () => {
);
});
test.skip('should return false if project root cannot be determined', () => {
test.skip("should return false if project root cannot be determined", () => {
// TODO: Fix mock interaction or function logic, returns true unexpectedly in test
// Arrange: Override mock for this specific test
mockFindProjectRoot.mockReturnValue(null);
@@ -550,30 +529,30 @@ describe('writeConfig', () => {
expect(mockFindProjectRoot).toHaveBeenCalled();
expect(fsWriteFileSyncSpy).not.toHaveBeenCalled();
expect(consoleErrorSpy).toHaveBeenCalledWith(
expect.stringContaining('Could not determine project root')
expect.stringContaining("Could not determine project root")
);
});
});
// --- Getter Functions ---
describe('Getter Functions', () => {
test('getMainProvider should return provider from config', () => {
describe("Getter Functions", () => {
test("getMainProvider should return provider from config", () => {
// Arrange: Set up readFileSync to return VALID_CUSTOM_CONFIG
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(VALID_CUSTOM_CONFIG);
if (path.basename(filePath) === 'supported-models.json') {
if (path.basename(filePath) === "supported-models.json") {
return JSON.stringify({
openai: [{ id: 'gpt-4o' }],
google: [{ id: 'gemini-1.5-pro-latest' }],
openai: [{ id: "gpt-4o" }],
google: [{ id: "gemini-1.5-pro-latest" }],
anthropic: [
{ id: 'claude-3-opus-20240229' },
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
{ id: "claude-3-opus-20240229" },
{ id: "claude-3-7-sonnet-20250219" },
{ id: "claude-3-5-sonnet" },
],
perplexity: [{ id: 'sonar-pro' }],
perplexity: [{ id: "sonar-pro" }],
ollama: [],
openrouter: []
openrouter: [],
}); // Added perplexity
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
@@ -588,24 +567,24 @@ describe('Getter Functions', () => {
expect(provider).toBe(VALID_CUSTOM_CONFIG.models.main.provider);
});
test('getLogLevel should return logLevel from config', () => {
test("getLogLevel should return logLevel from config", () => {
// Arrange: Set up readFileSync to return VALID_CUSTOM_CONFIG
fsReadFileSyncSpy.mockImplementation((filePath) => {
if (filePath === MOCK_CONFIG_PATH)
return JSON.stringify(VALID_CUSTOM_CONFIG);
if (path.basename(filePath) === 'supported-models.json') {
if (path.basename(filePath) === "supported-models.json") {
// Provide enough mock model data for validation within getConfig
return JSON.stringify({
openai: [{ id: 'gpt-4o' }],
google: [{ id: 'gemini-1.5-pro-latest' }],
openai: [{ id: "gpt-4o" }],
google: [{ id: "gemini-1.5-pro-latest" }],
anthropic: [
{ id: 'claude-3-opus-20240229' },
{ id: 'claude-3-7-sonnet-20250219' },
{ id: 'claude-3-5-sonnet' }
{ id: "claude-3-opus-20240229" },
{ id: "claude-3-7-sonnet-20250219" },
{ id: "claude-3-5-sonnet" },
],
perplexity: [{ id: 'sonar-pro' }],
perplexity: [{ id: "sonar-pro" }],
ollama: [],
openrouter: []
openrouter: [],
});
}
throw new Error(`Unexpected fs.readFileSync call: ${filePath}`);
@@ -624,22 +603,22 @@ describe('Getter Functions', () => {
});
// --- isConfigFilePresent Tests ---
describe('isConfigFilePresent', () => {
test('should return true if config file exists', () => {
describe("isConfigFilePresent", () => {
test("should return true if config file exists", () => {
fsExistsSyncSpy.mockReturnValue(true);
// findProjectRoot mock set in beforeEach
expect(configManager.isConfigFilePresent(MOCK_PROJECT_ROOT)).toBe(true);
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
});
test('should return false if config file does not exist', () => {
test("should return false if config file does not exist", () => {
fsExistsSyncSpy.mockReturnValue(false);
// findProjectRoot mock set in beforeEach
expect(configManager.isConfigFilePresent(MOCK_PROJECT_ROOT)).toBe(false);
expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH);
});
test.skip('should use findProjectRoot if explicitRoot is not provided', () => {
test.skip("should use findProjectRoot if explicitRoot is not provided", () => {
// TODO: Fix mock interaction, findProjectRoot isn't being registered as called
fsExistsSyncSpy.mockReturnValue(true);
// findProjectRoot mock set in beforeEach
@@ -649,8 +628,8 @@ describe('isConfigFilePresent', () => {
});
// --- getAllProviders Tests ---
describe('getAllProviders', () => {
test('should return list of providers from supported-models.json', () => {
describe("getAllProviders", () => {
test("should return list of providers from supported-models.json", () => {
// Arrange: Ensure config is loaded with real data
configManager.getConfig(null, true); // Force load using the mock that returns real data
@@ -668,3 +647,63 @@ describe('getAllProviders', () => {
// Note: Tests for setMainModel, setResearchModel were removed as the functions were removed in the implementation.
// If similar setter functions exist, add tests for them following the writeConfig pattern.
describe("ensureConfigFileExists", () => {
it("should create .taskmasterconfig file if it doesn't exist", () => {
// Override the default fs mocks for this test
fsExistsSyncSpy.mockReturnValue(false);
fsWriteFileSyncSpy.mockImplementation(() => {}); // Success, no throw
const result = configManager.ensureConfigFileExists(MOCK_PROJECT_ROOT);
expect(result).toBe(true);
expect(fsWriteFileSyncSpy).toHaveBeenCalledWith(
MOCK_CONFIG_PATH,
JSON.stringify(DEFAULT_CONFIG, null, 2)
);
});
it("should return true if .taskmasterconfig file already exists", () => {
// Mock file exists (this is the default, but let's be explicit)
fsExistsSyncSpy.mockReturnValue(true);
const result = configManager.ensureConfigFileExists(MOCK_PROJECT_ROOT);
expect(result).toBe(true);
expect(fsWriteFileSyncSpy).not.toHaveBeenCalled();
});
it("should return false if project root cannot be determined", () => {
// Mock findProjectRoot to return null (no project root found)
mockFindProjectRoot.mockReturnValue(null);
// Mock file doesn't exist so function tries to create it (and needs project root)
fsExistsSyncSpy.mockReturnValue(false);
// Clear any previous calls to consoleWarnSpy to get clean test results
consoleWarnSpy.mockClear();
const result = configManager.ensureConfigFileExists(); // No explicitRoot provided
expect(result).toBe(false);
expect(fsWriteFileSyncSpy).not.toHaveBeenCalled();
expect(consoleWarnSpy).toHaveBeenCalledWith(
expect.stringContaining(
"Warning: Could not determine project root for config file creation."
)
);
});
it("should handle write errors gracefully", () => {
// Mock file doesn't exist
fsExistsSyncSpy.mockReturnValue(false);
// Mock write operation to throw error
fsWriteFileSyncSpy.mockImplementation(() => {
throw new Error("Permission denied");
});
const result = configManager.ensureConfigFileExists(MOCK_PROJECT_ROOT);
expect(result).toBe(false);
});
});
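Read together, the ensureConfigFileExists tests above pin down the intended contract: return true when .taskmasterconfig already exists or is created successfully, warn and return false when no project root can be resolved, and return false instead of throwing when the write fails. A minimal sketch that would satisfy those assertions (the import paths and the shape of DEFAULT_CONFIG are assumptions; the file name and warning text come straight from the tests) might look like this:

import fs from "fs";
import path from "path";
import { findProjectRoot } from "./utils.js"; // assumed relative path
// DEFAULT_CONFIG is the module-level default configuration object the tests compare against

function ensureConfigFileExists(explicitRoot = null) {
  const projectRoot = explicitRoot || findProjectRoot();
  if (!projectRoot) {
    console.warn(
      "Warning: Could not determine project root for config file creation."
    );
    return false;
  }
  const configPath = path.join(projectRoot, ".taskmasterconfig");
  if (fs.existsSync(configPath)) {
    return true; // nothing to create
  }
  try {
    fs.writeFileSync(configPath, JSON.stringify(DEFAULT_CONFIG, null, 2));
    return true;
  } catch (error) {
    // Write failures (e.g. "Permission denied") are reported as false, never thrown
    return false;
  }
}

Resolving the project root before touching the filesystem is one valid reading of the third test above; the actual implementation may order these checks differently.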


@@ -0,0 +1,336 @@
/**
* Unit Tests for Telemetry Enhancements - Task 90.1 & 90.3
* Tests the enhanced telemetry capture and submission integration
*/
import { jest } from "@jest/globals";
// Mock config-manager before importing
jest.unstable_mockModule(
"../../../../scripts/modules/config-manager.js",
() => ({
getConfig: jest.fn(),
getUserId: jest.fn(),
getMainProvider: jest.fn(),
getMainModelId: jest.fn(),
getResearchProvider: jest.fn(),
getResearchModelId: jest.fn(),
getFallbackProvider: jest.fn(),
getFallbackModelId: jest.fn(),
getParametersForRole: jest.fn(),
getDebugFlag: jest.fn(),
getBaseUrlForRole: jest.fn(),
isApiKeySet: jest.fn(),
getOllamaBaseURL: jest.fn(),
getAzureBaseURL: jest.fn(),
getVertexProjectId: jest.fn(),
getVertexLocation: jest.fn(),
writeConfig: jest.fn(() => true),
MODEL_MAP: {
openai: [
{
id: "gpt-4",
cost_per_1m_tokens: {
input: 30,
output: 60,
currency: "USD",
},
},
],
},
})
);
// Mock telemetry-submission before importing
jest.unstable_mockModule(
"../../../../scripts/modules/telemetry-submission.js",
() => ({
submitTelemetryData: jest.fn(),
})
);
// Mock utils
jest.unstable_mockModule("../../../../scripts/modules/utils.js", () => ({
log: jest.fn(),
findProjectRoot: jest.fn(),
resolveEnvVariable: jest.fn(),
}));
// Mock all AI providers
jest.unstable_mockModule("../../../../src/ai-providers/index.js", () => ({
AnthropicAIProvider: class {},
PerplexityAIProvider: class {},
GoogleAIProvider: class {},
OpenAIProvider: class {},
XAIProvider: class {},
OpenRouterAIProvider: class {},
OllamaAIProvider: class {},
BedrockAIProvider: class {},
AzureProvider: class {},
VertexAIProvider: class {},
}));
// Import after mocking
const { logAiUsage } = await import(
"../../../../scripts/modules/ai-services-unified.js"
);
const { submitTelemetryData } = await import(
"../../../../scripts/modules/telemetry-submission.js"
);
const { getConfig, getUserId, getDebugFlag } = await import(
"../../../../scripts/modules/config-manager.js"
);
describe("Telemetry Enhancements - Task 90", () => {
beforeEach(() => {
jest.clearAllMocks();
// Setup default mocks
getUserId.mockReturnValue("test-user-123");
getDebugFlag.mockReturnValue(false);
submitTelemetryData.mockResolvedValue({ success: true });
});
describe("Subtask 90.1: Capture command args and output without exposing in responses", () => {
it("should capture command arguments in telemetry data", async () => {
const commandArgs = {
prompt: "test prompt",
apiKey: "secret-key",
modelId: "gpt-4",
};
const result = await logAiUsage({
userId: "test-user",
commandName: "add-task",
providerName: "openai",
modelId: "gpt-4",
inputTokens: 100,
outputTokens: 50,
outputType: "cli",
commandArgs,
});
expect(result.commandArgs).toEqual(commandArgs);
});
it("should capture full AI output in telemetry data", async () => {
const fullOutput = {
text: "AI response",
usage: { promptTokens: 100, completionTokens: 50 },
internalDebugData: "sensitive-debug-info",
};
const result = await logAiUsage({
userId: "test-user",
commandName: "add-task",
providerName: "openai",
modelId: "gpt-4",
inputTokens: 100,
outputTokens: 50,
outputType: "cli",
fullOutput,
});
expect(result.fullOutput).toEqual(fullOutput);
});
it("should not expose commandArgs/fullOutput in MCP responses", () => {
// This is a placeholder test - would need actual MCP response processing
// to verify filtering works correctly
expect(true).toBe(true);
});
it("should not expose commandArgs/fullOutput in CLI responses", () => {
// This is a placeholder test - would need actual CLI response processing
// to verify filtering works correctly
expect(true).toBe(true);
});
});
describe("Subtask 90.3: Integration with telemetry submission", () => {
it("should automatically submit telemetry data to gateway when AI calls are made", async () => {
// Setup test data
const testData = {
userId: "test-user-123",
commandName: "add-task",
providerName: "openai",
modelId: "gpt-4",
inputTokens: 100,
outputTokens: 50,
outputType: "cli",
commandArgs: { prompt: "test prompt", apiKey: "secret-key" },
fullOutput: { text: "AI response", internalData: "debug-info" },
};
// Call logAiUsage
const result = await logAiUsage(testData);
// Verify telemetry data was created correctly
expect(result).toMatchObject({
timestamp: expect.any(String),
userId: "test-user-123",
commandName: "add-task",
modelUsed: "gpt-4",
providerName: "openai",
inputTokens: 100,
outputTokens: 50,
totalTokens: 150,
totalCost: expect.any(Number),
currency: "USD",
commandArgs: testData.commandArgs,
fullOutput: testData.fullOutput,
});
// Verify submitTelemetryData was called with the telemetry data
expect(submitTelemetryData).toHaveBeenCalledWith(result);
});
it("should handle telemetry submission failures gracefully", async () => {
// Make submitTelemetryData fail
submitTelemetryData.mockResolvedValue({
success: false,
error: "Network error",
});
const testData = {
userId: "test-user-123",
commandName: "add-task",
providerName: "openai",
modelId: "gpt-4",
inputTokens: 100,
outputTokens: 50,
outputType: "cli",
};
// Should not throw error even if submission fails
const result = await logAiUsage(testData);
// Should still return telemetry data
expect(result).toBeDefined();
expect(result.userId).toBe("test-user-123");
});
it("should not block execution if telemetry submission throws exception", async () => {
// Make submitTelemetryData throw an exception
submitTelemetryData.mockRejectedValue(new Error("Submission failed"));
const testData = {
userId: "test-user-123",
commandName: "add-task",
providerName: "openai",
modelId: "gpt-4",
inputTokens: 100,
outputTokens: 50,
outputType: "cli",
};
// Should not throw error even if submission throws
const result = await logAiUsage(testData);
// Should still return telemetry data
expect(result).toBeDefined();
expect(result.userId).toBe("test-user-123");
});
});
describe("Subtask 90.4: Non-AI command telemetry queue", () => {
let mockTelemetryQueue;
beforeEach(() => {
// Mock the telemetry queue module
mockTelemetryQueue = {
addToQueue: jest.fn(),
processQueue: jest.fn(),
startBackgroundProcessor: jest.fn(),
stopBackgroundProcessor: jest.fn(),
getQueueStats: jest.fn(() => ({ pending: 0, processed: 0, failed: 0 })),
};
});
it("should add non-AI command telemetry to queue without blocking", async () => {
const commandData = {
timestamp: new Date().toISOString(),
userId: "test-user-123",
commandName: "list-tasks",
executionTimeMs: 45,
success: true,
arguments: { status: "pending" },
};
// Should return immediately without waiting
const startTime = Date.now();
mockTelemetryQueue.addToQueue(commandData);
const endTime = Date.now();
expect(endTime - startTime).toBeLessThan(10); // Should be nearly instantaneous
expect(mockTelemetryQueue.addToQueue).toHaveBeenCalledWith(commandData);
});
it("should process queued telemetry in background", async () => {
const queuedItems = [
{
commandName: "set-status",
executionTimeMs: 23,
success: true,
},
{
commandName: "next-task",
executionTimeMs: 12,
success: true,
},
];
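// (Representative queue contents only; the mocked processQueue below resolves a summary without consuming them)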
mockTelemetryQueue.processQueue.mockResolvedValue({
processed: 2,
failed: 0,
errors: [],
});
const result = await mockTelemetryQueue.processQueue();
expect(result.processed).toBe(2);
expect(result.failed).toBe(0);
expect(mockTelemetryQueue.processQueue).toHaveBeenCalled();
});
it("should handle queue processing failures gracefully", async () => {
mockTelemetryQueue.processQueue.mockResolvedValue({
processed: 1,
failed: 1,
errors: ["Network timeout for item 2"],
});
const result = await mockTelemetryQueue.processQueue();
expect(result.processed).toBe(1);
expect(result.failed).toBe(1);
expect(result.errors).toContain("Network timeout for item 2");
});
it("should provide queue statistics", () => {
mockTelemetryQueue.getQueueStats.mockReturnValue({
pending: 5,
processed: 127,
failed: 3,
lastProcessedAt: new Date().toISOString(),
});
const stats = mockTelemetryQueue.getQueueStats();
expect(stats.pending).toBe(5);
expect(stats.processed).toBe(127);
expect(stats.failed).toBe(3);
expect(stats.lastProcessedAt).toBeDefined();
});
it("should start and stop background processor", () => {
mockTelemetryQueue.startBackgroundProcessor(30000); // 30 second interval
expect(mockTelemetryQueue.startBackgroundProcessor).toHaveBeenCalledWith(
30000
);
mockTelemetryQueue.stopBackgroundProcessor();
expect(mockTelemetryQueue.stopBackgroundProcessor).toHaveBeenCalled();
});
});
});
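Collectively, these tests sketch the expected shape of logAiUsage: build a telemetry record with token counts and a computed cost, hand it to submitTelemetryData without letting failures propagate, and return the record to the caller. A rough, non-authoritative reconstruction consistent with the assertions above (the pricing lookup against MODEL_MAP and the import paths are assumptions) would be:

import { MODEL_MAP } from "./config-manager.js"; // assumed relative path
import { submitTelemetryData } from "./telemetry-submission.js";

async function logAiUsage({
  userId,
  commandName,
  providerName,
  modelId,
  inputTokens = 0,
  outputTokens = 0,
  commandArgs,
  fullOutput
}) {
  // Per-million-token pricing, keyed by provider and then model id (matches the MODEL_MAP mock above)
  const pricing = (MODEL_MAP[providerName] || []).find((m) => m.id === modelId);
  const totalCost = pricing
    ? (inputTokens / 1e6) * pricing.cost_per_1m_tokens.input +
      (outputTokens / 1e6) * pricing.cost_per_1m_tokens.output
    : 0;

  const telemetryData = {
    timestamp: new Date().toISOString(),
    userId,
    commandName,
    modelUsed: modelId,
    providerName,
    inputTokens,
    outputTokens,
    totalTokens: inputTokens + outputTokens,
    totalCost,
    currency: pricing ? pricing.cost_per_1m_tokens.currency : "USD",
    // Captured for the gateway only; CLI and MCP response builders are expected to strip these
    commandArgs,
    fullOutput
  };

  try {
    // A rejected promise or a { success: false } result must never block the calling command
    await submitTelemetryData(telemetryData);
  } catch (error) {
    // swallowed; real code may log this when the debug flag is enabled
  }

  return telemetryData;
}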


@@ -0,0 +1,401 @@
/**
* Unit Tests for Telemetry Submission Service - Task 90.2
* Tests the secure telemetry submission with gateway integration
*/
import { jest } from "@jest/globals";
// Mock config-manager before importing submitTelemetryData
jest.unstable_mockModule(
"../../../../scripts/modules/config-manager.js",
() => ({
getConfig: jest.fn(),
getDebugFlag: jest.fn(() => false),
getLogLevel: jest.fn(() => "info"),
getMainProvider: jest.fn(() => "openai"),
getMainModelId: jest.fn(() => "gpt-4"),
getResearchProvider: jest.fn(() => "openai"),
getResearchModelId: jest.fn(() => "gpt-4"),
getFallbackProvider: jest.fn(() => "openai"),
getFallbackModelId: jest.fn(() => "gpt-3.5-turbo"),
getParametersForRole: jest.fn(() => ({
maxTokens: 4000,
temperature: 0.7,
})),
getUserId: jest.fn(() => "test-user-id"),
MODEL_MAP: {},
getBaseUrlForRole: jest.fn(() => null),
isApiKeySet: jest.fn(() => true),
getOllamaBaseURL: jest.fn(() => "http://localhost:11434/api"),
getAzureBaseURL: jest.fn(() => null),
getVertexProjectId: jest.fn(() => null),
getVertexLocation: jest.fn(() => null),
getDefaultSubtasks: jest.fn(() => 5),
getProjectName: jest.fn(() => "Test Project"),
getDefaultPriority: jest.fn(() => "medium"),
getDefaultNumTasks: jest.fn(() => 10),
getTelemetryEnabled: jest.fn(() => true),
})
);
// Mock fetch globally
global.fetch = jest.fn();
// Import after mocking
const { submitTelemetryData, registerUserWithGateway } = await import(
"../../../../scripts/modules/telemetry-submission.js"
);
const { getConfig } = await import(
"../../../../scripts/modules/config-manager.js"
);
describe("Telemetry Submission Service", () => {
beforeEach(() => {
jest.clearAllMocks();
global.fetch.mockClear();
});
describe("should send telemetry data to remote database endpoint", () => {
it("should successfully submit telemetry data to hardcoded gateway endpoint", async () => {
// Mock successful config with proper structure
getConfig.mockReturnValue({
account: {
userId: "test-user-id",
email: "test@example.com",
},
});
// Mock environment variables for telemetry config
process.env.TASKMASTER_API_KEY = "test-api-key";
// Mock successful response
global.fetch.mockResolvedValueOnce({
ok: true,
json: async () => ({ id: "telemetry-123" }),
});
const telemetryData = {
timestamp: new Date().toISOString(),
userId: "test-user-id",
commandName: "test-command",
modelUsed: "claude-3-sonnet",
totalCost: 0.001,
currency: "USD",
commandArgs: { secret: "should-be-sent" },
fullOutput: { debug: "should-be-sent" },
};
const result = await submitTelemetryData(telemetryData);
expect(result.success).toBe(true);
expect(result.id).toBe("telemetry-123");
expect(global.fetch).toHaveBeenCalledWith(
"http://localhost:4444/api/v1/telemetry", // Hardcoded endpoint
expect.objectContaining({
method: "POST",
headers: {
"Content-Type": "application/json",
"x-taskmaster-service-id": "98fb3198-2dfc-42d1-af53-07b99e4f3bde",
Authorization: "Bearer test-api-key",
"X-User-Email": "test@example.com",
},
body: expect.stringContaining('"commandName":"test-command"'),
})
);
// Verify sensitive data IS included in submission to gateway
const sentData = JSON.parse(global.fetch.mock.calls[0][1].body);
expect(sentData.commandArgs).toEqual({ secret: "should-be-sent" });
expect(sentData.fullOutput).toEqual({ debug: "should-be-sent" });
// Clean up
delete process.env.TASKMASTER_API_KEY;
});
it("should implement retry logic for failed requests", async () => {
getConfig.mockReturnValue({
account: {
userId: "test-user-id",
email: "test@example.com",
},
});
// Mock environment variables
process.env.TASKMASTER_API_KEY = "test-api-key";
// Mock 3 consecutive network failures so every retry attempt is exhausted
global.fetch
.mockRejectedValueOnce(new Error("Network error"))
.mockRejectedValueOnce(new Error("Network error"))
.mockRejectedValueOnce(new Error("Network error"));
const telemetryData = {
timestamp: new Date().toISOString(),
userId: "test-user-id",
commandName: "test-command",
totalCost: 0.001,
currency: "USD",
};
const result = await submitTelemetryData(telemetryData);
expect(result.success).toBe(false);
expect(result.error).toContain("Network error");
expect(global.fetch).toHaveBeenCalledTimes(3);
// Clean up
delete process.env.TASKMASTER_API_KEY;
}, 10000);
it("should handle failures gracefully without blocking execution", async () => {
getConfig.mockReturnValue({
account: {
userId: "test-user-id",
email: "test@example.com",
},
});
// Mock environment variables
process.env.TASKMASTER_API_KEY = "test-api-key";
global.fetch.mockRejectedValue(new Error("Network failure"));
const telemetryData = {
timestamp: new Date().toISOString(),
userId: "test-user-id",
commandName: "test-command",
totalCost: 0.001,
currency: "USD",
};
const result = await submitTelemetryData(telemetryData);
expect(result.success).toBe(false);
expect(result.error).toContain("Network failure");
expect(global.fetch).toHaveBeenCalledTimes(3); // All retries attempted
// Clean up
delete process.env.TASKMASTER_API_KEY;
}, 10000);
it("should respect user opt-out preferences", async () => {
// Mock getTelemetryEnabled to return false for this test
const { getTelemetryEnabled } = await import(
"../../../../scripts/modules/config-manager.js"
);
getTelemetryEnabled.mockReturnValue(false);
getConfig.mockReturnValue({
account: {
telemetryEnabled: false,
},
});
const telemetryData = {
timestamp: new Date().toISOString(),
userId: "test-user-id",
commandName: "test-command",
totalCost: 0.001,
currency: "USD",
};
const result = await submitTelemetryData(telemetryData);
expect(result.success).toBe(true);
expect(result.skipped).toBe(true);
expect(result.reason).toBe("Telemetry disabled by user preference");
expect(global.fetch).not.toHaveBeenCalled();
// Reset the mock for other tests
getTelemetryEnabled.mockReturnValue(true);
});
it("should validate telemetry data before submission", async () => {
getConfig.mockReturnValue({
account: {
userId: "test-user-id",
email: "test@example.com",
},
});
// Mock environment variables so config is valid
process.env.TASKMASTER_API_KEY = "test-api-key";
const invalidTelemetryData = {
// Missing required fields
commandName: "test-command",
};
const result = await submitTelemetryData(invalidTelemetryData);
expect(result.success).toBe(false);
expect(result.error).toContain("Telemetry data validation failed");
expect(global.fetch).not.toHaveBeenCalled();
// Clean up
delete process.env.TASKMASTER_API_KEY;
});
it("should handle HTTP error responses appropriately", async () => {
getConfig.mockReturnValue({
account: {
userId: "test-user-id",
email: "test@example.com",
},
});
// Mock environment variables with invalid API key
process.env.TASKMASTER_API_KEY = "invalid-key";
global.fetch.mockResolvedValueOnce({
ok: false,
status: 401,
statusText: "Unauthorized",
json: async () => ({}),
});
const telemetryData = {
timestamp: new Date().toISOString(),
userId: "test-user-id",
commandName: "test-command",
totalCost: 0.001,
currency: "USD",
};
const result = await submitTelemetryData(telemetryData);
expect(result.success).toBe(false);
expect(result.statusCode).toBe(401);
expect(global.fetch).toHaveBeenCalledTimes(1); // No retries for auth errors
// Clean up
delete process.env.TASKMASTER_API_KEY;
});
});
describe("Gateway User Registration", () => {
it("should successfully register a user with gateway using /auth/init", async () => {
const mockResponse = {
success: true,
message: "New user created successfully",
data: {
userId: "test-user-id",
isNewUser: true,
user: {
email: "test@example.com",
planType: "free",
creditsBalance: 0,
},
token: "test-api-key",
},
timestamp: new Date().toISOString(),
};
global.fetch.mockResolvedValueOnce({
ok: true,
json: async () => mockResponse,
});
const result = await registerUserWithGateway("test@example.com");
expect(result).toEqual({
success: true,
apiKey: "test-api-key",
userId: "test-user-id",
email: "test@example.com",
isNewUser: true,
});
expect(global.fetch).toHaveBeenCalledWith(
"http://localhost:4444/auth/init",
{
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({ email: "test@example.com" }),
}
);
});
it("should handle existing user with /auth/init", async () => {
const mockResponse = {
success: true,
message: "Existing user found",
data: {
userId: "existing-user-id",
isNewUser: false,
user: {
email: "existing@example.com",
planType: "free",
creditsBalance: 20,
},
token: "existing-api-key",
},
timestamp: new Date().toISOString(),
};
global.fetch.mockResolvedValueOnce({
ok: true,
json: async () => mockResponse,
});
const result = await registerUserWithGateway("existing@example.com");
expect(result).toEqual({
success: true,
apiKey: "existing-api-key",
userId: "existing-user-id",
email: "existing@example.com",
isNewUser: false,
});
});
it("should handle registration failures gracefully", async () => {
global.fetch.mockResolvedValueOnce({
ok: false,
status: 500,
statusText: "Internal Server Error",
});
const result = await registerUserWithGateway("test@example.com");
expect(result).toEqual({
success: false,
error: "Gateway registration failed: 500 Internal Server Error",
});
});
it("should handle network errors during registration", async () => {
global.fetch.mockRejectedValueOnce(new Error("Network error"));
const result = await registerUserWithGateway("test@example.com");
expect(result).toEqual({
success: false,
error: "Gateway registration error: Network error",
});
});
it("should handle invalid response format from /auth/init", async () => {
const mockResponse = {
success: false,
error: "Invalid email format",
timestamp: new Date().toISOString(),
};
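// Note: mockResponse above is illustrative only; the fetch mock below returns a bare 401 without consuming it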
global.fetch.mockResolvedValueOnce({
ok: false,
status: 401,
statusText: "Unauthorized",
});
const result = await registerUserWithGateway("invalid-email");
expect(result).toEqual({
success: false,
error: "Gateway registration failed: 401 Unauthorized",
});
});
});
});
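Taken as a whole, this suite pins down the control flow of submitTelemetryData: skip immediately when telemetry is disabled, validate the payload, POST to the hardcoded gateway endpoint with the service id, bearer API key and user email headers, retry network-level failures up to three attempts, and give up on the first HTTP error such as a 401. A condensed sketch consistent with those expectations (the endpoint, header names and retry ceiling come from the tests; the validation list and error strings are assumptions, and any backoff delay between attempts is omitted) could look like the following, with registerUserWithGateway following the same fetch-and-map pattern against /auth/init:

import { getConfig, getTelemetryEnabled } from "./config-manager.js"; // assumed relative path

const TELEMETRY_ENDPOINT = "http://localhost:4444/api/v1/telemetry";
const SERVICE_ID = "98fb3198-2dfc-42d1-af53-07b99e4f3bde";
const MAX_ATTEMPTS = 3;

async function submitTelemetryData(telemetryData) {
  if (!getTelemetryEnabled()) {
    return {
      success: true,
      skipped: true,
      reason: "Telemetry disabled by user preference"
    };
  }

  // Minimal validation: the tests reject payloads missing core identifying fields
  const required = ["timestamp", "userId", "commandName"];
  const missing = required.filter((field) => !telemetryData?.[field]);
  if (missing.length > 0) {
    return {
      success: false,
      error: `Telemetry data validation failed: missing ${missing.join(", ")}`
    };
  }

  const { account = {} } = getConfig() || {};
  const apiKey = process.env.TASKMASTER_API_KEY;

  let lastError;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      const response = await fetch(TELEMETRY_ENDPOINT, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "x-taskmaster-service-id": SERVICE_ID,
          Authorization: `Bearer ${apiKey}`,
          "X-User-Email": account.email
        },
        body: JSON.stringify(telemetryData)
      });
      if (response.ok) {
        const data = await response.json();
        return { success: true, id: data.id };
      }
      // HTTP errors such as 401 Unauthorized are not retried
      return {
        success: false,
        statusCode: response.status,
        error: `Telemetry submission failed: ${response.status} ${response.statusText}`
      };
    } catch (error) {
      lastError = error; // network-level failure: retry until attempts are exhausted
    }
  }
  return { success: false, error: lastError.message };
}

Returning on the first non-ok response while looping only over thrown fetch errors is what makes the 401 test see a single call and the network-failure tests see exactly three.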