# Task ID: 61
# Title: Implement Flexible AI Model Management
# Status: done
# Dependencies: None
# Priority: high
# Description: Currently, Task Master only supports Claude for main operations and Perplexity for research, which limits users' flexibility in managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.
# Details:
### Proposed Solution
Implement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:
- `task-master models`: Lists currently configured models for main operations and research.
- `task-master models --set-main="<model_name>" --set-research="<model_name>"`: Sets the desired models for main operations and research tasks respectively.
Supported AI Models:
- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter
- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok
If a user specifies an invalid model, the CLI lists available models clearly.
### Example CLI Usage
List current models:
```shell
task-master models
```
Output example:
```
Current AI Model Configuration:
- Main Operations: Claude
- Research Operations: Perplexity
```
Set new models:
```shell
task-master models --set-main="gemini" --set-research="grok"
```
Attempt invalid model:
```shell
task-master models --set-main="invalidModel"
```
Output example:
```
Error: "invalidModel" is not a valid model.
Available models for Main Operations:
- claude
- openai
- ollama
- gemini
- openrouter
```
### High-Level Workflow
1. Update CLI parsing logic to handle new `models` command and associated flags.
2. Consolidate all AI calls into `ai-services.js` for centralized management.
3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:
- Claude (existing)
- Perplexity (existing)
- OpenAI
- Ollama
- Gemini
- OpenRouter
- Grok
4. Update environment variables and provide clear documentation in `.env_example`:
```env
# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter
MAIN_MODEL=claude
# RESEARCH_MODEL options: perplexity, openai, ollama, grok
RESEARCH_MODEL=perplexity
```
5. Ensure dynamic model switching via environment variables or configuration management.
6. Provide clear CLI feedback and validation of model names.
### Vercel AI SDK Integration
- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.
- Implement a configuration layer to map model names to their respective Vercel SDK integrations.
- Example pattern for integration (a sketch using the per-provider `@ai-sdk/*` packages; the Ollama and OpenRouter entries assume their OpenAI-compatible endpoints):
```javascript
import { createAnthropic } from '@ai-sdk/anthropic';
import { createOpenAI } from '@ai-sdk/openai';
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { createPerplexity } from '@ai-sdk/perplexity';
import { createXai } from '@ai-sdk/xai';

const clients = {
  claude: createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
  openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  // Ollama serves an OpenAI-compatible API locally and needs no real key
  ollama: createOpenAI({ baseURL: process.env.OLLAMA_BASE_URL || 'http://localhost:11434/v1', apiKey: 'ollama' }),
  gemini: createGoogleGenerativeAI({ apiKey: process.env.GEMINI_API_KEY }),
  // OpenRouter exposes an OpenAI-compatible API as well
  openrouter: createOpenAI({ baseURL: 'https://openrouter.ai/api/v1', apiKey: process.env.OPENROUTER_API_KEY }),
  perplexity: createPerplexity({ apiKey: process.env.PERPLEXITY_API_KEY }),
  grok: createXai({ apiKey: process.env.XAI_API_KEY })
};

export function getClient(model) {
  if (!clients[model]) {
    throw new Error(`Invalid model: ${model}`);
  }
  return clients[model];
}
```
- Leverage the SDK's `generateText` and `streamText` functions for text generation and streaming (see the sketch after this list).
- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.
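For example, a minimal sketch of both call styles, assuming the `getClient` mapping shown above lives in `ai-services.js` (module path and model IDs are illustrative):
```javascript
import { generateText, streamText } from 'ai';
import { getClient } from './ai-services.js'; // assumed location of the getClient mapping above

// Non-streaming: wait for the complete response.
const { text } = await generateText({
  model: getClient('claude')('claude-3-5-sonnet-20241022'), // model ID is illustrative
  prompt: 'Summarize the next sprint goals in one paragraph.'
});

// Streaming: consume the response incrementally.
const { textStream } = await streamText({
  model: getClient('openai')('gpt-4o'), // model ID is illustrative
  prompt: 'List three risks for this task.'
});
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```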
### Key Elements
- Enhanced model visibility and intuitive management commands.
- Centralized and robust handling of AI API integrations via Vercel AI SDK.
- Clear CLI responses with detailed validation feedback.
- Flexible, easy-to-understand environment configuration.
### Implementation Considerations
- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).
- Ensure comprehensive error handling for invalid model selections.
- Clearly document environment variable options and their purposes.
- Validate model names rigorously to prevent runtime errors.
### Out of Scope (Future Considerations)
- Automatic benchmarking or model performance comparison.
- Dynamic runtime switching of models based on task type or complexity.
# Test Strategy:
### Test Strategy
1. **Unit Tests**:
- Test CLI commands for listing, setting, and validating models.
- Mock Vercel AI SDK calls to ensure proper integration and error handling.
2. **Integration Tests**:
- Validate end-to-end functionality of model management commands.
- Test dynamic switching of models via environment variables.
3. **Error Handling Tests**:
- Simulate invalid model names and verify error messages.
- Test API failures for each model provider and ensure graceful degradation.
4. **Documentation Validation**:
- Verify that `.env_example` and CLI usage examples are accurate and comprehensive.
5. **Performance Tests**:
- Measure response times for API calls through Vercel AI SDK.
- Ensure no significant latency is introduced by model switching.
6. **SDK-Specific Tests**:
- Validate the behavior of `generateText` and `streamText` functions for supported models.
- Test compatibility with serverless and edge deployments.
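For instance, a small Jest sketch for the invalid-model path (module path and names are illustrative assumptions, not the existing test suite):
```javascript
// tests/models-validation.test.js (illustrative)
import { getClient } from '../src/ai-services.js'; // hypothetical path to the client mapping

describe('model validation', () => {
  test('rejects unknown model names with a clear error', () => {
    expect(() => getClient('invalidModel')).toThrow(/Invalid model/);
  });

  test('returns a client for a supported model', () => {
    expect(getClient('claude')).toBeDefined();
  });
});
```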
# Subtasks:
## 1. Create Configuration Management Module [done]
### Dependencies: None
### Description: Develop a centralized configuration module to manage AI model settings and preferences, leveraging the Strategy pattern for model selection.
### Details:
1. Create a new `config-manager.js` module to handle model configuration
2. Implement functions to read/write model preferences to a local config file
3. Define model validation logic with clear error messages
4. Create mapping of valid models for main and research operations
5. Implement getters and setters for model configuration
6. Add utility functions to validate model names against available options
7. Include default fallback models
8. Testing approach: Write unit tests to verify config reading/writing and model validation logic
<info added on 2025-04-14T21:54:28.887Z>
Here's the additional information to add:
```
The configuration management module should:
1. Use a `.taskmasterconfig` JSON file in the project root directory to store model settings
2. Structure the config file with two main keys: `main` and `research` for respective model selections
3. Implement functions to locate the project root directory (using package.json as reference)
4. Define constants for valid models:
```javascript
const VALID_MAIN_MODELS = ['gpt-4', 'gpt-3.5-turbo', 'gpt-4-turbo'];
const VALID_RESEARCH_MODELS = ['gpt-4', 'gpt-4-turbo', 'claude-2'];
const DEFAULT_MAIN_MODEL = 'gpt-3.5-turbo';
const DEFAULT_RESEARCH_MODEL = 'gpt-4';
```
5. Implement model getters with priority order:
- First check `.taskmasterconfig` file
- Fall back to environment variables if config file missing/invalid
- Use defaults as last resort
6. Implement model setters that validate input against valid model lists before updating config
7. Keep API key management in `ai-services.js` using environment variables (don't store keys in config file)
8. Add helper functions for config file operations:
```javascript
function getConfigPath() { /* locate .taskmasterconfig */ }
function readConfig() { /* read and parse config file */ }
function writeConfig(config) { /* stringify and write config */ }
```
9. Include error handling for file operations and invalid configurations
```
</info added on 2025-04-14T21:54:28.887Z>
<info added on 2025-04-14T22:52:29.551Z>
```
The configuration management module should be updated to:
1. Separate model configuration into provider and modelId components:
```javascript
// Example config structure
{
"models": {
"main": {
"provider": "openai",
"modelId": "gpt-3.5-turbo"
},
"research": {
"provider": "openai",
"modelId": "gpt-4"
}
}
}
```
2. Define provider constants:
```javascript
const VALID_MAIN_PROVIDERS = ['openai', 'anthropic', 'local'];
const VALID_RESEARCH_PROVIDERS = ['openai', 'anthropic', 'cohere'];
const DEFAULT_MAIN_PROVIDER = 'openai';
const DEFAULT_RESEARCH_PROVIDER = 'openai';
```
3. Implement optional MODEL_MAP for validation:
```javascript
const MODEL_MAP = {
'openai': ['gpt-3.5-turbo', 'gpt-4', 'gpt-4-turbo'],
'anthropic': ['claude-2', 'claude-instant'],
'cohere': ['command', 'command-light'],
'local': ['llama2', 'mistral']
};
```
4. Update getter functions to handle provider/modelId separation:
```javascript
function getMainProvider() { /* return provider with fallbacks */ }
function getMainModelId() { /* return modelId with fallbacks */ }
function getResearchProvider() { /* return provider with fallbacks */ }
function getResearchModelId() { /* return modelId with fallbacks */ }
```
5. Update setter functions to validate both provider and modelId:
```javascript
function setMainModel(provider, modelId) {
// Validate provider is in VALID_MAIN_PROVIDERS
// Optionally validate modelId is valid for provider using MODEL_MAP
// Update config file with new values
}
```
6. Add utility functions for provider-specific validation:
```javascript
function isValidProviderModelCombination(provider, modelId) {
return MODEL_MAP[provider]?.includes(modelId) || false;
}
```
7. Extend unit tests to cover provider/modelId separation, including:
- Testing provider validation
- Testing provider-modelId combination validation
- Verifying getters return correct provider and modelId values
- Confirming setters properly validate and store both components
```
</info added on 2025-04-14T22:52:29.551Z>
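A minimal sketch of the config-file helpers outlined above (synchronous `fs` access and the package.json-based root lookup are assumptions taken from these notes; error handling is abbreviated):
```javascript
// config-manager.js (sketch of the file helpers)
import fs from 'fs';
import path from 'path';

const CONFIG_FILENAME = '.taskmasterconfig';

// Walk up from cwd until a package.json is found; treat that directory as the project root.
function findProjectRoot(startDir = process.cwd()) {
  let dir = startDir;
  while (!fs.existsSync(path.join(dir, 'package.json'))) {
    const parent = path.dirname(dir);
    if (parent === dir) return startDir; // reached the filesystem root; fall back to cwd
    dir = parent;
  }
  return dir;
}

function getConfigPath() {
  return path.join(findProjectRoot(), CONFIG_FILENAME);
}

export function readConfig() {
  try {
    return JSON.parse(fs.readFileSync(getConfigPath(), 'utf8'));
  } catch {
    return null; // missing or invalid config falls through to env vars / defaults
  }
}

export function writeConfig(config) {
  fs.writeFileSync(getConfigPath(), JSON.stringify(config, null, 2));
}
```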
## 2. Implement CLI Command Parser for Model Management [done]
### Dependencies: 61.1
### Description: Extend the CLI command parser to handle the new 'models' command and associated flags for model management.
### Details:
1. Update the CLI command parser to recognize the 'models' command
2. Add support for '--set-main' and '--set-research' flags
3. Implement validation for command arguments
4. Create help text and usage examples for the models command
5. Add error handling for invalid command usage
6. Connect CLI parser to the configuration manager
7. Implement command output formatting for model listings
8. Testing approach: Create integration tests that verify CLI commands correctly interact with the configuration manager
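A minimal sketch of the command registration described above (assuming Commander.js; the config-manager function names follow the notes in subtask 61.1 and are illustrative):
```javascript
// commands/models.js (sketch, assuming Commander.js)
import { Command } from 'commander';
import { getMainProvider, getResearchProvider, setMainModel, setResearchModel } from '../config-manager.js';

export function registerModelsCommand(program = new Command()) {
  program
    .command('models')
    .description('List or set the AI models used for main and research operations')
    .option('--set-main <model>', 'Set the model for main operations')
    .option('--set-research <model>', 'Set the model for research operations')
    .action((options) => {
      if (options.setMain) setMainModel(options.setMain);         // throws on invalid names
      if (options.setResearch) setResearchModel(options.setResearch);
      console.log('Current AI Model Configuration:');
      console.log(`- Main Operations: ${getMainProvider()}`);
      console.log(`- Research Operations: ${getResearchProvider()}`);
    });
  return program;
}
```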
## 3. Integrate Vercel AI SDK and Create Client Factory [done]
### Dependencies: 61.1
### Description: Set up Vercel AI SDK integration and implement a client factory pattern to create and manage AI model clients.
### Details:
1. Install the Vercel AI SDK core package and the provider packages needed, e.g. `npm install ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google @ai-sdk/perplexity`
2. Create an `ai-client-factory.js` module that implements the Factory pattern
3. Define client creation functions for each supported model (Claude, OpenAI, Ollama, Gemini, OpenRouter, Perplexity, Grok)
4. Implement error handling for missing API keys or configuration issues
5. Add caching mechanism to reuse existing clients
6. Create a unified interface for all clients regardless of the underlying model
7. Implement client validation to ensure proper initialization
8. Testing approach: Mock API responses to test client creation and error handling
<info added on 2025-04-14T23:02:30.519Z>
For the client factory implementation:
1. Structure the factory with a modular approach:
```javascript
// ai-client-factory.js
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { createPerplexity } from '@ai-sdk/perplexity';
const clientCache = new Map();
export function createClientInstance(providerName, options = {}) {
// Implementation details below
}
```
2. For OpenAI-compatible providers (Ollama), implement specific configuration:
```javascript
case 'ollama':
const ollamaBaseUrl = process.env.OLLAMA_BASE_URL || 'http://localhost:11434';
return createOpenAI({
baseURL: ollamaBaseUrl,
apiKey: 'ollama', // Ollama doesn't require a real API key
...options
});
```
3. Add provider-specific model mapping:
```javascript
// Model mapping helper
const getModelForProvider = (provider, requestedModel) => {
const modelMappings = {
openai: {
default: 'gpt-3.5-turbo',
// Add other mappings
},
anthropic: {
default: 'claude-3-opus-20240229',
// Add other mappings
},
// Add mappings for other providers
};
return (modelMappings[provider] && modelMappings[provider][requestedModel])
|| modelMappings[provider]?.default
|| requestedModel;
};
```
4. Implement caching with provider+model as key:
```javascript
export function getClient(providerName, model) {
const cacheKey = `${providerName}:${model || 'default'}`;
if (clientCache.has(cacheKey)) {
return clientCache.get(cacheKey);
}
const modelName = getModelForProvider(providerName, model);
const client = createClientInstance(providerName, { model: modelName });
clientCache.set(cacheKey, client);
return client;
}
```
5. Add detailed environment variable validation:
```javascript
function validateEnvironment(provider) {
const requirements = {
openai: ['OPENAI_API_KEY'],
anthropic: ['ANTHROPIC_API_KEY'],
google: ['GOOGLE_API_KEY'],
perplexity: ['PERPLEXITY_API_KEY'],
openrouter: ['OPENROUTER_API_KEY'],
ollama: ['OLLAMA_BASE_URL'],
xai: ['XAI_API_KEY']
};
const missing = requirements[provider]?.filter(env => !process.env[env]) || [];
if (missing.length > 0) {
throw new Error(`Missing environment variables for ${provider}: ${missing.join(', ')}`);
}
}
```
6. Add Jest test examples:
```javascript
// ai-client-factory.test.js
describe('AI Client Factory', () => {
beforeEach(() => {
// Mock environment variables
process.env.OPENAI_API_KEY = 'test-openai-key';
process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';
// Add other mocks
});
test('creates OpenAI client with correct configuration', () => {
const client = getClient('openai');
expect(client).toBeDefined();
// Add assertions for client configuration
});
test('throws error when environment variables are missing', () => {
delete process.env.OPENAI_API_KEY;
expect(() => getClient('openai')).toThrow(/Missing environment variables/);
});
// Add tests for other providers
});
```
</info added on 2025-04-14T23:02:30.519Z>
## 4. Develop Centralized AI Services Module [done]
### Dependencies: 61.3
### Description: Create a centralized AI services module that abstracts all AI interactions through a unified interface, using the Decorator pattern for adding functionality like logging and retries.
### Details:
1. Create `ai-services.js` module to consolidate all AI model interactions
2. Implement wrapper functions for text generation and streaming
3. Add retry mechanisms for handling API rate limits and transient errors
4. Implement logging for all AI interactions for observability
5. Create model-specific adapters to normalize responses across different providers
6. Add caching layer for frequently used responses to optimize performance
7. Implement graceful fallback mechanisms when primary models fail
8. Testing approach: Create unit tests with mocked responses to verify service behavior
<info added on 2025-04-19T23:51:22.219Z>
Based on the exploration findings, here's additional information for the AI services module refactoring:
The existing `ai-services.js` should be refactored to:
1. Leverage the `ai-client-factory.js` for model instantiation while providing a higher-level service abstraction
2. Implement a layered architecture:
- Base service layer handling common functionality (retries, logging, caching)
- Model-specific service implementations extending the base
- Facade pattern to provide a unified API for all consumers
3. Integration points:
- Replace direct OpenAI client usage with factory-provided clients
- Maintain backward compatibility with existing service consumers
- Add service registration mechanism for new AI providers
4. Performance considerations:
- Implement request batching for high-volume operations
- Add request priority queuing for critical vs non-critical operations
- Implement circuit breaker pattern to prevent cascading failures
5. Monitoring enhancements:
- Add detailed telemetry for response times, token usage, and costs
- Implement standardized error classification for better diagnostics
6. Implementation sequence:
- Start with abstract base service class
- Refactor existing OpenAI implementations
- Add adapter layer for new providers
- Implement the unified facade
</info added on 2025-04-19T23:51:22.219Z>
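A minimal sketch of the retry-with-backoff wrapper described above (the function names, delay values, and retryable-status check are illustrative assumptions, not the existing module's API):
```javascript
// ai-services.js (sketch of the retry decorator)
async function withRetries(operation, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Retry only rate limits (429) and transient server errors (5xx).
      const retryable = error.status === 429 || (error.status >= 500 && error.status < 600);
      if (!retryable || attempt === retries) break;
      // Exponential backoff between attempts.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Example: wrap any provider call so rate limits and transient failures are retried.
export function generateWithRetries(generateFn, args) {
  return withRetries(() => generateFn(args));
}
```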
## 5. Implement Environment Variable Management [done]
### Dependencies: 61.1, 61.3
### Description: Update environment variable handling to support multiple AI models and create documentation for configuration options.
### Details:
1. Update `.env.example` with all required API keys for supported models
2. Implement environment variable validation on startup
3. Create clear error messages for missing or invalid environment variables
4. Add support for model-specific configuration options
5. Document all environment variables and their purposes
6. Implement a check to ensure required API keys are present for selected models
7. Add support for optional configuration parameters for each model
8. Testing approach: Create tests that verify environment variable validation logic
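A minimal sketch of the startup check for the selected models (the provider-to-variable mapping mirrors the `validateEnvironment` notes in subtask 61.3; function and file names are illustrative):
```javascript
// env-check.js (sketch)
const REQUIRED_KEYS = {
  claude: ['ANTHROPIC_API_KEY'],
  openai: ['OPENAI_API_KEY'],
  gemini: ['GEMINI_API_KEY'],
  openrouter: ['OPENROUTER_API_KEY'],
  perplexity: ['PERPLEXITY_API_KEY'],
  grok: ['XAI_API_KEY'],
  ollama: [] // local endpoint; OLLAMA_BASE_URL is optional
};

export function assertEnvForModels(...selectedModels) {
  const missing = selectedModels
    .flatMap((model) => REQUIRED_KEYS[model] ?? [])
    .filter((key) => !process.env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables for the selected models: ${missing.join(', ')}`);
  }
}

// Example: fail fast at startup for the configured main and research models.
// assertEnvForModels(getMainProvider(), getResearchProvider());
```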
## 6. Implement Model Listing Command [done]
### Dependencies: 61.1, 61.2, 61.4
### Description: Implement the 'task-master models' command to display currently configured models and available options.
### Details:
1. Create handler for the models command without flags
2. Implement formatted output showing current model configuration
3. Add color-coding for better readability using a library like chalk
4. Include version information for each configured model
5. Show API status indicators (connected/disconnected)
6. Display usage examples for changing models
7. Add support for verbose output with additional details
8. Testing approach: Create integration tests that verify correct output formatting and content
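A minimal sketch of the listing handler's output formatting (assuming chalk for colour, as suggested above; the config getters follow subtask 61.1 and the API-status flags are illustrative placeholders):
```javascript
// commands/list-models.js (sketch)
import chalk from 'chalk';
import { getMainProvider, getMainModelId, getResearchProvider, getResearchModelId } from '../config-manager.js';

export function printModelConfiguration({ mainConnected = true, researchConnected = true } = {}) {
  const status = (ok) => (ok ? chalk.green('connected') : chalk.red('disconnected'));
  console.log(chalk.bold('Current AI Model Configuration:'));
  console.log(`- Main Operations: ${chalk.cyan(getMainProvider())} (${getMainModelId()}) [${status(mainConnected)}]`);
  console.log(`- Research Operations: ${chalk.cyan(getResearchProvider())} (${getResearchModelId()}) [${status(researchConnected)}]`);
  console.log(chalk.dim('Change models with: task-master models --set-main="<model>" --set-research="<model>"'));
}
```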
## 7. Implement Model Setting Commands [done]
### Dependencies: 61.1, 61.2, 61.4, 61.6
### Description: Implement the commands to set main and research models with proper validation and feedback.
### Details:
1. Create handlers for '--set-main' and '--set-research' flags
2. Implement validation logic for model names
3. Add clear error messages for invalid model selections
4. Implement confirmation messages for successful model changes
5. Add support for setting both models in a single command
6. Implement dry-run option to validate without making changes
7. Add verbose output option for debugging
8. Testing approach: Create integration tests that verify model setting functionality with various inputs
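A minimal sketch of the setter flow with validation, confirmation, and a dry-run option (function and flag names are illustrative; the valid-model lists come from the config module in subtask 61.1):
```javascript
// commands/set-models.js (sketch)
import { setMainModel, setResearchModel, VALID_MAIN_PROVIDERS, VALID_RESEARCH_PROVIDERS } from '../config-manager.js';

export function handleSetModels({ setMain, setResearch, dryRun = false }) {
  if (setMain && !VALID_MAIN_PROVIDERS.includes(setMain)) {
    throw new Error(`"${setMain}" is not a valid model.\nAvailable models for Main Operations:\n- ${VALID_MAIN_PROVIDERS.join('\n- ')}`);
  }
  if (setResearch && !VALID_RESEARCH_PROVIDERS.includes(setResearch)) {
    throw new Error(`"${setResearch}" is not a valid model.\nAvailable models for Research Operations:\n- ${VALID_RESEARCH_PROVIDERS.join('\n- ')}`);
  }
  if (dryRun) {
    console.log('Dry run: model names are valid; configuration not changed.');
    return;
  }
  if (setMain) {
    setMainModel(setMain);
    console.log(`Main operations model set to "${setMain}".`);
  }
  if (setResearch) {
    setResearchModel(setResearch);
    console.log(`Research operations model set to "${setResearch}".`);
  }
}
```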
## 8. Update Main Task Processing Logic [done]
### Dependencies: 61.4, 61.5, 61.18
### Description: Refactor the main task processing logic to use the new AI services module and support dynamic model selection.
### Details:
1. Update task processing functions to use the centralized AI services
2. Implement dynamic model selection based on configuration
3. Add error handling for model-specific failures
4. Implement graceful degradation when preferred models are unavailable
5. Update prompts to be model-agnostic where possible
6. Add telemetry for model performance monitoring
7. Implement response validation to ensure quality across different models
8. Testing approach: Create integration tests that verify task processing with different model configurations
<info added on 2025-04-20T03:55:56.310Z>
When updating the main task processing logic, implement the following changes to align with the new configuration system:
1. Replace direct environment variable access with calls to the configuration manager:
```javascript
// Before
const apiKey = process.env.OPENAI_API_KEY;
const modelId = process.env.MAIN_MODEL || "gpt-4";
// After
import { getMainProvider, getMainModelId, getMainMaxTokens, getMainTemperature } from './config-manager.js';
const provider = getMainProvider();
const modelId = getMainModelId();
const maxTokens = getMainMaxTokens();
const temperature = getMainTemperature();
```
2. Implement model fallback logic using the configuration hierarchy:
```javascript
async function processTaskWithFallback(task) {
try {
return await processWithModel(task, getMainModelId());
} catch (error) {
logger.warn(`Primary model failed: ${error.message}`);
const fallbackModel = getMainFallbackModelId();
if (fallbackModel) {
return await processWithModel(task, fallbackModel);
}
throw error;
}
}
```
3. Add configuration-aware telemetry points to track model usage and performance:
```javascript
function trackModelPerformance(modelId, startTime, success) {
const duration = Date.now() - startTime;
telemetry.trackEvent('model_usage', {
modelId,
provider: getMainProvider(),
duration,
success,
configVersion: getConfigVersion()
});
}
```
4. Ensure all prompt templates are loaded through the configuration system rather than hardcoded:
```javascript
const promptTemplate = getPromptTemplate('task_processing');
const prompt = formatPrompt(promptTemplate, { task: taskData });
```
</info added on 2025-04-20T03:55:56.310Z>
## 9. Update Research Processing Logic [done]
### Dependencies: 61.4, 61.5, 61.8, 61.18
### Description: Refactor the research processing logic to use the new AI services module and support dynamic model selection for research operations.
### Details:
1. Update research functions to use the centralized AI services
2. Implement dynamic model selection for research operations
3. Add specialized error handling for research-specific issues
4. Optimize prompts for research-focused models
5. Implement result caching for research operations
6. Add support for model-specific research parameters
7. Create fallback mechanisms for research operations
8. Testing approach: Create integration tests that verify research functionality with different model configurations
<info added on 2025-04-20T03:55:39.633Z>
When implementing the refactored research processing logic, ensure the following:
1. Replace direct environment variable access with the new configuration system:
```javascript
// Old approach
const apiKey = process.env.OPENAI_API_KEY;
const model = "gpt-4";
// New approach
import { getResearchProvider, getResearchModelId, getResearchMaxTokens,
getResearchTemperature } from './config-manager.js';
const provider = getResearchProvider();
const modelId = getResearchModelId();
const maxTokens = getResearchMaxTokens();
const temperature = getResearchTemperature();
```
2. Implement model fallback chains using the configuration system:
```javascript
async function performResearch(query) {
try {
return await callAIService({
provider: getResearchProvider(),
modelId: getResearchModelId(),
maxTokens: getResearchMaxTokens(),
temperature: getResearchTemperature()
});
} catch (error) {
logger.warn(`Primary research model failed: ${error.message}`);
return await callAIService({
provider: getResearchProvider('fallback'),
modelId: getResearchModelId('fallback'),
maxTokens: getResearchMaxTokens('fallback'),
temperature: getResearchTemperature('fallback')
});
}
}
```
3. Add support for dynamic parameter adjustment based on research type:
```javascript
function getResearchParameters(researchType) {
// Get base parameters
const baseParams = {
provider: getResearchProvider(),
modelId: getResearchModelId(),
maxTokens: getResearchMaxTokens(),
temperature: getResearchTemperature()
};
// Adjust based on research type
switch(researchType) {
case 'deep':
return {...baseParams, maxTokens: baseParams.maxTokens * 1.5};
case 'creative':
return {...baseParams, temperature: Math.min(baseParams.temperature + 0.2, 1.0)};
case 'factual':
return {...baseParams, temperature: Math.max(baseParams.temperature - 0.2, 0)};
default:
return baseParams;
}
}
```
4. Ensure the caching mechanism uses configuration-based TTL settings:
```javascript
const researchCache = new Cache({
ttl: getResearchCacheTTL(),
maxSize: getResearchCacheMaxSize()
});
```
</info added on 2025-04-20T03:55:39.633Z>
## 10. Create Comprehensive Documentation and Examples [done]
### Dependencies: 61.6, 61.7, 61.8, 61.9
### Description: Develop comprehensive documentation for the new model management features, including examples, troubleshooting guides, and best practices.
### Details:
1. Update README.md with new model management commands
2. Create usage examples for all supported models
3. Document environment variable requirements for each model
4. Create troubleshooting guide for common issues
5. Add performance considerations and best practices
6. Document API key acquisition process for each supported service
7. Create comparison chart of model capabilities and limitations
8. Testing approach: Conduct user testing with the documentation to ensure clarity and completeness
<info added on 2025-04-20T03:55:20.433Z>
## Documentation Update for Configuration System Refactoring
### Configuration System Architecture
- Document the separation between environment variables and configuration file:
- API keys: Sourced exclusively from environment variables (process.env or session.env)
- All other settings: Centralized in `.taskmasterconfig` JSON file
### `.taskmasterconfig` Structure
```json
{
"models": {
"completion": "gpt-3.5-turbo",
"chat": "gpt-4",
"embedding": "text-embedding-ada-002"
},
"parameters": {
"temperature": 0.7,
"maxTokens": 2000,
"topP": 1
},
"logging": {
"enabled": true,
"level": "info"
},
"defaults": {
"outputFormat": "markdown"
}
}
```
### Configuration Access Patterns
- Document the getter functions in `config-manager.js`:
- `getModelForRole(role)`: Returns configured model for a specific role
- `getParameter(name)`: Retrieves model parameters
- `getLoggingConfig()`: Access logging settings
- Example usage: `const completionModel = getModelForRole('completion')`
### Environment Variable Resolution
- Explain the `resolveEnvVariable(key)` function:
- Checks both process.env and session.env
- Prioritizes session variables over process variables
- Returns null if variable not found
### Configuration Precedence
- Document the order of precedence:
1. Command-line arguments (highest priority)
2. Session environment variables
3. Process environment variables
4. `.taskmasterconfig` settings
5. Hardcoded defaults (lowest priority)
### Migration Guide
- Steps for users to migrate from previous configuration approach
- How to verify configuration is correctly loaded
</info added on 2025-04-20T03:55:20.433Z>
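A minimal sketch of the `resolveEnvVariable` helper as described above (the `session` object shape is an assumption based on these notes):
```javascript
// environment-utils.js (sketch)
export function resolveEnvVariable(key, session = null) {
  // Session-provided variables take precedence over process-level variables.
  if (session?.env && session.env[key] !== undefined) {
    return session.env[key];
  }
  if (process.env[key] !== undefined) {
    return process.env[key];
  }
  return null; // not found in either source
}
```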
## 11. Refactor PRD Parsing to use generateObjectService [done]
### Dependencies: 61.23
### Description: Update PRD processing logic (callClaude, processClaudeResponse, handleStreamingRequest in ai-services.js) to use the new `generateObjectService` from `ai-services-unified.js` with an appropriate Zod schema.
### Details:
<info added on 2025-04-20T03:55:01.707Z>
The PRD parsing refactoring should align with the new configuration system architecture. When implementing this change:
1. Replace direct environment variable access with `resolveEnvVariable` calls for API keys.
2. Remove any hardcoded model names or parameters in the PRD processing functions. Instead, use the config-manager.js getters:
- `getModelForRole('prd')` to determine the appropriate model
- `getModelParameters('prd')` to retrieve temperature, maxTokens, etc.
3. When constructing the generateObjectService call, ensure parameters are sourced from config:
```javascript
const modelConfig = getModelParameters('prd');
const model = getModelForRole('prd');
const result = await generateObjectService({
model,
temperature: modelConfig.temperature,
maxTokens: modelConfig.maxTokens,
// other parameters as needed
schema: prdSchema,
// existing prompt/context parameters
});
```
4. Update any logging to respect the logging configuration from config-manager (e.g., `isLoggingEnabled('ai')`)
5. Ensure any default values previously hardcoded are now retrieved from the configuration system.
</info added on 2025-04-20T03:55:01.707Z>
## 12. Refactor Basic Subtask Generation to use generateObjectService [done]
### Dependencies: 61.23
### Description: Update the `generateSubtasks` function in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the subtask array.
### Details:
<info added on 2025-04-20T03:54:45.542Z>
The refactoring should leverage the new configuration system:
1. Replace direct model references with calls to config-manager.js getters:
```javascript
const { getModelForRole, getModelParams } = require('./config-manager');
// Instead of hardcoded models/parameters:
const model = getModelForRole('subtask-generator');
const modelParams = getModelParams('subtask-generator');
```
2. Update API key handling to use the resolveEnvVariable pattern:
```javascript
const { resolveEnvVariable } = require('./utils');
const apiKey = resolveEnvVariable('OPENAI_API_KEY');
```
3. When calling generateObjectService, pass the configuration parameters:
```javascript
const result = await generateObjectService({
schema: subtasksArraySchema,
prompt: subtaskPrompt,
model: model,
temperature: modelParams.temperature,
maxTokens: modelParams.maxTokens,
// Other parameters from config
});
```
4. Add error handling that respects logging configuration:
```javascript
const { isLoggingEnabled } = require('./config-manager');
try {
// Generation code
} catch (error) {
if (isLoggingEnabled('errors')) {
console.error('Subtask generation error:', error);
}
throw error;
}
```
</info added on 2025-04-20T03:54:45.542Z>
## 13. Refactor Research Subtask Generation to use generateObjectService [done]
### Dependencies: 61.23
### Description: Update the `generateSubtasksWithPerplexity` function in `ai-services.js` to first perform research (potentially keeping the Perplexity call separate or adapting it) and then use `generateObjectService` from `ai-services-unified.js` with research results included in the prompt.
### Details:
<info added on 2025-04-20T03:54:26.882Z>
The refactoring should align with the new configuration system by:
1. Replace direct environment variable access with `resolveEnvVariable` for API keys
2. Use the config-manager.js getters to retrieve model parameters:
- Replace hardcoded model names with `getModelForRole('research')`
- Use `getParametersForRole('research')` to get temperature, maxTokens, etc.
3. Implement proper error handling that respects the `getLoggingConfig()` settings
4. Example implementation pattern:
```javascript
const { getModelForRole, getParametersForRole, getLoggingConfig } = require('./config-manager');
const { resolveEnvVariable } = require('./environment-utils');
// In the refactored function:
const researchModel = getModelForRole('research');
const { temperature, maxTokens } = getParametersForRole('research');
const apiKey = resolveEnvVariable('PERPLEXITY_API_KEY');
const { verbose } = getLoggingConfig();
// Then use these variables in the API call configuration
```
5. Ensure the transition to generateObjectService maintains all existing functionality while leveraging the new configuration system
</info added on 2025-04-20T03:54:26.882Z>
## 14. Refactor Research Task Description Generation to use generateObjectService [done]
### Dependencies: 61.23
### Description: Update the `generateTaskDescriptionWithPerplexity` function in `ai-services.js` to first perform research and then use `generateObjectService` from `ai-services-unified.js` to generate the structured task description.
### Details:
<info added on 2025-04-20T03:54:04.420Z>
The refactoring should incorporate the new configuration management system:
1. Update imports to include the config-manager:
```javascript
const { getModelForRole, getParametersForRole } = require('./config-manager');
```
2. Replace any hardcoded model selections or parameters with config-manager calls:
```javascript
// Replace direct model references like:
// const model = "perplexity-model-7b-online"
// With:
const model = getModelForRole('research');
const parameters = getParametersForRole('research');
```
3. For API key handling, use the resolveEnvVariable pattern:
```javascript
const apiKey = resolveEnvVariable('PERPLEXITY_API_KEY');
```
4. When calling generateObjectService, pass the configuration-derived parameters:
```javascript
return generateObjectService({
prompt: researchResults,
schema: taskDescriptionSchema,
role: 'taskDescription',
// Config-driven parameters will be applied within generateObjectService
});
```
5. Remove any hardcoded configuration values, ensuring all settings are retrieved from the centralized configuration system.
</info added on 2025-04-20T03:54:04.420Z>
## 15. Refactor Complexity Analysis AI Call to use generateObjectService [done]
### Dependencies: 61.23
### Description: Update the logic that calls the AI after using `generateComplexityAnalysisPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the complexity report.
### Details:
<info added on 2025-04-20T03:53:46.120Z>
The complexity analysis AI call should be updated to align with the new configuration system architecture. When refactoring to use `generateObjectService`, implement the following changes:
1. Replace direct model references with calls to the appropriate config getter:
```javascript
const modelName = getComplexityAnalysisModel(); // Use the specific getter from config-manager.js
```
2. Retrieve AI parameters from the config system:
```javascript
const temperature = getAITemperature('complexityAnalysis');
const maxTokens = getAIMaxTokens('complexityAnalysis');
```
3. When constructing the call to `generateObjectService`, pass these configuration values:
```javascript
const result = await generateObjectService({
prompt,
schema: complexityReportSchema,
modelName,
temperature,
maxTokens,
sessionEnv: session?.env
});
```
4. Ensure API key resolution uses the `resolveEnvVariable` helper:
```javascript
// Don't hardcode API keys or directly access process.env
// The generateObjectService should handle this internally with resolveEnvVariable
```
5. Add logging configuration based on settings:
```javascript
const enableLogging = getAILoggingEnabled('complexityAnalysis');
if (enableLogging) {
// Use the logging mechanism defined in the configuration
}
```
</info added on 2025-04-20T03:53:46.120Z>
## 16. Refactor Task Addition AI Call to use generateObjectService [done]
### Dependencies: 61.23
### Description: Update the logic that calls the AI after using `_buildAddTaskPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the single task object.
### Details:
<info added on 2025-04-20T03:53:27.455Z>
To implement this refactoring, you'll need to:
1. Replace direct AI calls with the new `generateObjectService` approach:
```javascript
// OLD approach
const aiResponse = await callLLM(prompt, modelName, temperature, maxTokens);
const task = parseAIResponseToTask(aiResponse);
// NEW approach using generateObjectService with config-manager
import { generateObjectService } from '../services/ai-services-unified.js';
import { getAIModelForRole, getAITemperature, getAIMaxTokens } from '../config/config-manager.js';
import { taskSchema } from '../schemas/task-schema.js'; // Create this Zod schema for a single task
const modelName = getAIModelForRole('taskCreation');
const temperature = getAITemperature('taskCreation');
const maxTokens = getAIMaxTokens('taskCreation');
const task = await generateObjectService({
prompt: _buildAddTaskPrompt(...),
schema: taskSchema,
modelName,
temperature,
maxTokens
});
```
2. Create a Zod schema for the task object in a new file `schemas/task-schema.js` that defines the expected structure.
3. Ensure API key resolution uses the new pattern:
```javascript
// This happens inside generateObjectService, but verify it uses:
import { resolveEnvVariable } from '../config/config-manager.js';
// Instead of direct process.env access
```
4. Update any error handling to match the new service's error patterns.
</info added on 2025-04-20T03:53:27.455Z>
## 17. Refactor General Chat/Update AI Calls [done]
### Dependencies: 61.23
### Description: Refactor functions like `sendChatWithContext` (and potentially related task update functions in `task-manager.js` if they make direct AI calls) to use `streamTextService` or `generateTextService` from `ai-services-unified.js`.
### Details:
<info added on 2025-04-20T03:53:03.709Z>
When refactoring `sendChatWithContext` and related functions, ensure they align with the new configuration system:
1. Replace direct model references with config getter calls:
```javascript
// Before
const model = "gpt-4";
// After
import { getModelForRole } from './config-manager.js';
const model = getModelForRole('chat'); // or appropriate role
```
2. Extract AI parameters from config rather than hardcoding:
```javascript
import { getAIParameters } from './config-manager.js';
const { temperature, maxTokens } = getAIParameters('chat');
```
3. When calling `streamTextService` or `generateTextService`, pass parameters from config:
```javascript
await streamTextService({
messages,
model: getModelForRole('chat'),
temperature: getAIParameters('chat').temperature,
// other parameters as needed
});
```
4. For logging control, check config settings:
```javascript
import { isLoggingEnabled } from './config-manager.js';
if (isLoggingEnabled('aiCalls')) {
console.log('AI request:', messages);
}
```
5. Ensure any default behaviors respect configuration defaults rather than hardcoded values.
</info added on 2025-04-20T03:53:03.709Z>
## 18. Refactor Callers of AI Parsing Utilities [done]
### Dependencies: None
### Description: Update the code that calls `parseSubtasksFromText`, `parseTaskJsonResponse`, and `parseTasksFromCompletion` to instead directly handle the structured JSON output provided by `generateObjectService` (as the refactored AI calls will now use it).
### Details:
<info added on 2025-04-20T03:52:45.518Z>
The refactoring of callers to AI parsing utilities should align with the new configuration system. When updating these callers:
1. Replace direct API key references with calls to the configuration system using `resolveEnvVariable` for sensitive credentials.
2. Update model selection logic to use the centralized configuration from `.taskmasterconfig` via the getter functions in `config-manager.js`. For example:
```javascript
// Old approach
const model = "gpt-4";
// New approach
import { getModelForRole } from './config-manager';
const model = getModelForRole('parsing'); // or appropriate role
```
3. Similarly, replace hardcoded parameters with configuration-based values:
```javascript
// Old approach
const maxTokens = 2000;
const temperature = 0.2;
// New approach
import { getAIParameterValue } from './config-manager';
const maxTokens = getAIParameterValue('maxTokens', 'parsing');
const temperature = getAIParameterValue('temperature', 'parsing');
```
4. Ensure logging behavior respects the centralized logging configuration settings.
5. When calling `generateObjectService`, pass the appropriate configuration context to ensure it uses the correct settings from the centralized configuration system.
</info added on 2025-04-20T03:52:45.518Z>
## 19. Refactor `updateSubtaskById` AI Call [done]
### Dependencies: 61.23
### Description: Refactor the AI call within `updateSubtaskById` in `task-manager.js` (which generates additional information based on a prompt) to use the appropriate unified service function (e.g., `generateTextService`) from `ai-services-unified.js`.
### Details:
<info added on 2025-04-20T03:52:28.196Z>
The `updateSubtaskById` function currently makes direct AI calls with hardcoded parameters. When refactoring to use the unified service:
1. Replace direct OpenAI calls with `generateTextService` from `ai-services-unified.js`
2. Use configuration parameters from `config-manager.js`:
- Replace hardcoded model with `getMainModel()`
- Use `getMainMaxTokens()` for token limits
- Apply `getMainTemperature()` for response randomness
3. Ensure prompt construction remains consistent but passes these dynamic parameters
4. Handle API key resolution through the unified service (which uses `resolveEnvVariable`)
5. Update error handling to work with the unified service response format
6. If the function uses any logging, ensure it respects `getLoggingEnabled()` setting
Example refactoring pattern:
```javascript
// Before
const completion = await openai.chat.completions.create({
model: "gpt-4",
temperature: 0.7,
max_tokens: 1000,
messages: [/* prompt messages */]
});
// After
const completion = await generateTextService({
model: getMainModel(),
temperature: getMainTemperature(),
max_tokens: getMainMaxTokens(),
messages: [/* prompt messages */]
});
```
</info added on 2025-04-20T03:52:28.196Z>
<info added on 2025-04-22T06:05:42.437Z>
- When testing the non-streaming `generateTextService` call within `updateSubtaskById`, ensure that the function awaits the full response before proceeding with subtask updates. This allows you to validate that the unified service returns the expected structure (e.g., `completion.choices[0].message.content`) and that error handling logic correctly interprets any error objects or status codes returned by the service.
- Mock or stub the `generateTextService` in unit tests to simulate both successful and failed completions. For example, verify that when the service returns a valid completion, the subtask is updated with the generated content, and when an error is returned, the error handling path is triggered and logged appropriately.
- Confirm that the non-streaming mode does not emit partial results or require event-based handling; the function should only process the final, complete response.
- Example test assertion:
```javascript
// Mocked response from generateTextService
const mockCompletion = {
choices: [{ message: { content: "Generated subtask details." } }]
};
generateTextService.mockResolvedValue(mockCompletion);
// Call updateSubtaskById and assert the subtask is updated
await updateSubtaskById(...);
expect(subtask.details).toBe("Generated subtask details.");
```
- If the unified service supports both streaming and non-streaming modes, explicitly set or verify the `stream` parameter is `false` (or omitted) to ensure non-streaming behavior during these tests.
</info added on 2025-04-22T06:05:42.437Z>
<info added on 2025-04-22T06:20:19.747Z>
When testing the non-streaming `generateTextService` call in `updateSubtaskById`, implement these verification steps:
1. Add unit tests that verify proper parameter transformation between the old and new implementation:
```javascript
test('should correctly transform parameters when calling generateTextService', async () => {
// Setup mocks for config values
jest.spyOn(configManager, 'getMainModel').mockReturnValue('gpt-4');
jest.spyOn(configManager, 'getMainTemperature').mockReturnValue(0.7);
jest.spyOn(configManager, 'getMainMaxTokens').mockReturnValue(1000);
const generateTextServiceSpy = jest.spyOn(aiServices, 'generateTextService')
.mockResolvedValue({ choices: [{ message: { content: 'test content' } }] });
await updateSubtaskById(/* params */);
// Verify the service was called with correct transformed parameters
expect(generateTextServiceSpy).toHaveBeenCalledWith({
model: 'gpt-4',
temperature: 0.7,
max_tokens: 1000,
messages: expect.any(Array)
});
});
```
2. Implement response validation to ensure the subtask content is properly extracted:
```javascript
// In updateSubtaskById function
try {
const completion = await generateTextService({
// parameters
});
// Validate response structure before using
if (!completion?.choices?.[0]?.message?.content) {
throw new Error('Invalid response structure from AI service');
}
// Continue with updating subtask
} catch (error) {
// Enhanced error handling
}
```
3. Add integration tests that verify the end-to-end flow with actual configuration values.
</info added on 2025-04-22T06:20:19.747Z>
<info added on 2025-04-22T06:23:23.247Z>
<info added on 2025-04-22T06:35:14.892Z>
When testing the non-streaming `generateTextService` call in `updateSubtaskById`, implement these specific verification steps:
1. Create a dedicated test fixture that isolates the AI service interaction:
```javascript
describe('updateSubtaskById AI integration', () => {
beforeEach(() => {
// Reset all mocks and spies
jest.clearAllMocks();
// Setup environment with controlled config values
process.env.OPENAI_API_KEY = 'test-key';
});
// Test cases follow...
});
```
2. Test error propagation from the unified service:
```javascript
test('should properly handle AI service errors', async () => {
const mockError = new Error('Service unavailable');
mockError.status = 503;
jest.spyOn(aiServices, 'generateTextService').mockRejectedValue(mockError);
// Capture console errors if needed
const consoleSpy = jest.spyOn(console, 'error').mockImplementation();
// Execute with error expectation
await expect(updateSubtaskById(1, { prompt: 'test' })).rejects.toThrow();
// Verify error was logged with appropriate context
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining('AI service error'),
expect.objectContaining({ status: 503 })
);
});
```
3. Verify that the function correctly preserves existing subtask content when appending new AI-generated information:
```javascript
test('should preserve existing content when appending AI-generated details', async () => {
// Setup mock subtask with existing content
const mockSubtask = {
id: 1,
details: 'Existing details.\n\n'
};
// Mock database retrieval
getSubtaskById.mockResolvedValue(mockSubtask);
// Mock AI response
generateTextService.mockResolvedValue({
choices: [{ message: { content: 'New AI content.' } }]
});
await updateSubtaskById(1, { prompt: 'Enhance this subtask' });
// Verify the update preserves existing content
expect(updateSubtaskInDb).toHaveBeenCalledWith(
1,
expect.objectContaining({
details: expect.stringContaining('Existing details.\n\n<info added on')
})
);
// Verify the new content was added
expect(updateSubtaskInDb).toHaveBeenCalledWith(
1,
expect.objectContaining({
details: expect.stringContaining('New AI content.')
})
);
});
```
4. Test that the function correctly formats the timestamp and wraps the AI-generated content:
```javascript
test('should format timestamp and wrap content correctly', async () => {
// Mock date for consistent testing
const mockDate = new Date('2025-04-22T10:00:00Z');
jest.spyOn(global, 'Date').mockImplementation(() => mockDate);
// Setup and execute test
// ...
// Verify correct formatting
expect(updateSubtaskInDb).toHaveBeenCalledWith(
expect.any(Number),
expect.objectContaining({
details: expect.stringMatching(
/<info added on 2025-04-22T10:00:00\.000Z>\n.*\n<\/info added on 2025-04-22T10:00:00\.000Z>/s
)
})
);
});
```
5. Verify that the function correctly handles the case when no existing details are present:
```javascript
test('should handle subtasks with no existing details', async () => {
// Setup mock subtask with no details
const mockSubtask = { id: 1 };
getSubtaskById.mockResolvedValue(mockSubtask);
// Execute test
// ...
// Verify details were initialized properly
expect(updateSubtaskInDb).toHaveBeenCalledWith(
1,
expect.objectContaining({
details: expect.stringMatching(/^<info added on/)
})
);
});
```
</info added on 2025-04-22T06:35:14.892Z>
</info added on 2025-04-22T06:23:23.247Z>
## 20. Implement `anthropic.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `anthropic.js` module within `src/ai-providers/`. This module should contain functions to interact with the Anthropic API (streaming and non-streaming) using the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
### Details:
<info added on 2025-04-24T02:54:40.326Z>
- Use the `@ai-sdk/anthropic` package to implement the provider module. You can import the default provider instance with `import { anthropic } from '@ai-sdk/anthropic'`, or create a custom instance using `createAnthropic` if you need to specify custom headers, API key, or base URL (such as for beta features or proxying)[1][4].
- To address persistent 'Not Found' errors, ensure the model name matches the latest Anthropic model IDs (e.g., `claude-3-haiku-20240307`, `claude-3-5-sonnet-20241022`). Model naming is case-sensitive and must match Anthropic's published versions[4][5].
- If you require custom headers (such as for beta features), use the `createAnthropic` function and pass a `headers` object. For example:
```js
import { createAnthropic } from '@ai-sdk/anthropic';
const anthropic = createAnthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
headers: { 'anthropic-beta': 'tools-2024-04-04' }
});
```
- For streaming and non-streaming support, the Vercel AI SDK provides both `generateText` (non-streaming) and `streamText` (streaming) functions. Use these with the Anthropic provider instance as the `model` parameter[5].
- Example usage for non-streaming:
```js
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
const result = await generateText({
model: anthropic('claude-3-haiku-20240307'),
messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }]
});
```
- Example usage for streaming:
```js
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
const stream = await streamText({
model: anthropic('claude-3-haiku-20240307'),
messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }]
});
```
- Ensure that your implementation adheres to the standardized input/output format defined for `ai-services-unified.js`, mapping the SDK's response structure to your unified format.
- If you continue to encounter 'Not Found' errors, verify:
- The API key is valid and has access to the requested models.
- The model name is correct and available to your Anthropic account.
- Any required beta headers are included if using beta features or models[1].
- Prefer direct provider instantiation with explicit headers and API key configuration for maximum compatibility and to avoid SDK-level abstraction issues[1].
</info added on 2025-04-24T02:54:40.326Z>
## 21. Implement `perplexity.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `perplexity.js` module within `src/ai-providers/`. This module should contain functions to interact with the Perplexity API (likely using their OpenAI-compatible endpoint) via the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
### Details:
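A minimal sketch of what this module could look like (assuming the `@ai-sdk/perplexity` provider package and the same parameter shape as the other provider modules; model IDs and function names are illustrative):
```javascript
// src/ai-providers/perplexity.js (sketch)
import { createPerplexity } from '@ai-sdk/perplexity';
import { generateText, streamText } from 'ai';

function getModel(apiKey, modelId) {
  if (!apiKey) throw new Error('Perplexity API key is required');
  if (!modelId) throw new Error('Model ID is required');
  const perplexity = createPerplexity({ apiKey });
  return perplexity(modelId);
}

// Non-streaming text generation via the unified Vercel AI SDK call.
export async function generatePerplexityText({ apiKey, modelId, messages, maxTokens, temperature = 0.7 }) {
  const { text } = await generateText({
    model: getModel(apiKey, modelId),
    messages,
    maxTokens,
    temperature
  });
  return text;
}

// Streaming variant: returns the text stream for the caller to consume.
export async function streamPerplexityText({ apiKey, modelId, messages, maxTokens, temperature = 0.7 }) {
  const result = await streamText({
    model: getModel(apiKey, modelId),
    messages,
    maxTokens,
    temperature
  });
  return result.textStream;
}
```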
## 22. Implement `openai.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `openai.js` module within `src/ai-providers/`. This module should contain functions to interact with the OpenAI API (streaming and non-streaming) using the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`. (Optional, implement if OpenAI models are needed).
### Details:
<info added on 2025-04-27T05:33:49.977Z>
```javascript
// Implementation details for openai.js provider module
import { createOpenAI } from 'ai';
/**
* Generates text using OpenAI models via Vercel AI SDK
*
* @param {Object} params - Configuration parameters
* @param {string} params.apiKey - OpenAI API key
* @param {string} params.modelId - Model ID (e.g., 'gpt-4', 'gpt-3.5-turbo')
* @param {Array} params.messages - Array of message objects with role and content
* @param {number} [params.maxTokens] - Maximum tokens to generate
* @param {number} [params.temperature=0.7] - Sampling temperature (0-1)
* @returns {Promise<string>} The generated text response
*/
export async function generateOpenAIText(params) {
try {
const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;
if (!apiKey) throw new Error('OpenAI API key is required');
if (!modelId) throw new Error('Model ID is required');
if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');
const openai = createOpenAI({ apiKey });
const response = await openai.chat.completions.create({
model: modelId,
messages,
max_tokens: maxTokens,
temperature,
});
return response.choices[0].message.content;
} catch (error) {
console.error('OpenAI text generation error:', error);
throw new Error(`OpenAI API error: ${error.message}`);
}
}
/**
* Streams text using OpenAI models via Vercel AI SDK
*
* @param {Object} params - Configuration parameters (same as generateOpenAIText)
* @returns {ReadableStream} A stream of text chunks
*/
export async function streamOpenAIText(params) {
try {
const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;
if (!apiKey) throw new Error('OpenAI API key is required');
if (!modelId) throw new Error('Model ID is required');
if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');
const openai = createOpenAI({ apiKey });
const stream = await openai.chat.completions.create({
model: modelId,
messages,
max_tokens: maxTokens,
temperature,
stream: true,
});
return stream;
} catch (error) {
console.error('OpenAI streaming error:', error);
throw new Error(`OpenAI streaming error: ${error.message}`);
}
}
/**
* Generates a structured object using OpenAI models via Vercel AI SDK
*
* @param {Object} params - Configuration parameters
* @param {string} params.apiKey - OpenAI API key
* @param {string} params.modelId - Model ID (e.g., 'gpt-4', 'gpt-3.5-turbo')
* @param {Array} params.messages - Array of message objects
* @param {Object} params.schema - JSON schema for the response object
* @param {string} params.objectName - Name of the object to generate
* @returns {Promise<Object>} The generated structured object
*/
export async function generateOpenAIObject(params) {
try {
const { apiKey, modelId, messages, schema, objectName } = params;
if (!apiKey) throw new Error('OpenAI API key is required');
if (!modelId) throw new Error('Model ID is required');
if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');
if (!schema) throw new Error('Schema is required');
if (!objectName) throw new Error('Object name is required');
const openai = createOpenAI({ apiKey });
// Using the Vercel AI SDK's function calling capabilities
const response = await openai.chat.completions.create({
model: modelId,
messages,
functions: [
{
name: objectName,
description: `Generate a ${objectName} object`,
parameters: schema,
},
],
function_call: { name: objectName },
});
const functionCall = response.choices[0].message.function_call;
return JSON.parse(functionCall.arguments);
} catch (error) {
console.error('OpenAI object generation error:', error);
throw new Error(`OpenAI object generation error: ${error.message}`);
}
}
```
</info added on 2025-04-27T05:33:49.977Z>
<info added on 2025-04-27T05:35:03.679Z>
<info added on 2025-04-28T10:15:22.123Z>
```javascript
// Additional implementation notes for openai.js
/**
* Export a provider info object for OpenAI
*/
export const providerInfo = {
id: 'openai',
name: 'OpenAI',
description: 'OpenAI API integration using Vercel AI SDK',
models: {
'gpt-4': {
id: 'gpt-4',
name: 'GPT-4',
contextWindow: 8192,
supportsFunctions: true,
},
'gpt-4-turbo': {
id: 'gpt-4-turbo',
name: 'GPT-4 Turbo',
contextWindow: 128000,
supportsFunctions: true,
},
'gpt-3.5-turbo': {
id: 'gpt-3.5-turbo',
name: 'GPT-3.5 Turbo',
contextWindow: 16385,
supportsFunctions: true,
}
}
};
/**
* Helper function to format error responses consistently
*
* @param {Error} error - The caught error
* @param {string} operation - The operation being performed
* @returns {Error} A formatted error
*/
function formatError(error, operation) {
// Extract OpenAI specific error details if available
const statusCode = error.status || error.statusCode;
const errorType = error.type || error.code || 'unknown_error';
// Create a more detailed error message
const message = `OpenAI ${operation} error (${errorType}): ${error.message}`;
// Create a new error with the formatted message
const formattedError = new Error(message);
// Add additional properties for debugging
formattedError.originalError = error;
formattedError.provider = 'openai';
formattedError.statusCode = statusCode;
formattedError.errorType = errorType;
return formattedError;
}
/**
* Example usage with the unified AI services interface:
*
* // In ai-services-unified.js
* import * as openaiProvider from './ai-providers/openai.js';
*
* export async function generateText(params) {
* switch(params.provider) {
* case 'openai':
* return openaiProvider.generateOpenAIText(params);
* // other providers...
* }
* }
*/
// Note: For proper error handling with the Vercel AI SDK, you may need to:
// 1. Check for rate limiting errors (429)
// 2. Handle token context window exceeded errors
// 3. Implement exponential backoff for retries on 5xx errors
// 4. Parse streaming errors properly from the ReadableStream
```
</info added on 2025-04-28T10:15:22.123Z>
</info added on 2025-04-27T05:35:03.679Z>
<info added on 2025-04-27T05:39:31.942Z>
```javascript
// Correction for openai.js provider module
// IMPORTANT: Use the correct import from Vercel AI SDK
import { createOpenAI, openai } from '@ai-sdk/openai';
// Note: Before using this module, install the required dependency:
// npm install @ai-sdk/openai
// The rest of the implementation remains the same, but uses the correct imports.
// When implementing this module, ensure your package.json includes this dependency.
// For streaming implementations with the Vercel AI SDK, you can also use the
// streamText helper from the core `ai` package (plus the experimental streamUI helper):
import { streamText } from 'ai';
/**
 * Example of using streamText for simpler streaming implementation
 */
export async function streamOpenAITextSimplified(params) {
  try {
    const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;
    if (!apiKey) throw new Error('OpenAI API key is required');
    const openaiClient = createOpenAI({ apiKey });
    // streamText takes a model instance created by the provider, not a bare model ID string
    return streamText({
      model: openaiClient(modelId),
      messages,
      temperature,
      maxTokens,
    });
  } catch (error) {
    console.error('OpenAI streaming error:', error);
    throw new Error(`OpenAI streaming error: ${error.message}`);
  }
}
```
</info added on 2025-04-27T05:39:31.942Z>
## 23. Implement Conditional Provider Logic in `ai-services-unified.js` [done]
### Dependencies: None
### Description: Implement logic within the functions of `ai-services-unified.js` (e.g., `generateTextService`, `generateObjectService`, `streamChatService`) to dynamically select and call the appropriate provider module (`anthropic.js`, `perplexity.js`, etc.) based on configuration (e.g., environment variables like `AI_PROVIDER` and `AI_MODEL` from `process.env` or `session.env`).
### Details:
<info added on 2025-04-20T03:52:13.065Z>
The unified service should now use the configuration manager for provider selection rather than directly accessing environment variables. Here's the implementation approach:
1. Import the config-manager functions:
```javascript
const {
getMainProvider,
getResearchProvider,
getFallbackProvider,
getModelForRole,
getProviderParameters
} = require('./config-manager');
```
2. Implement provider selection based on context/role:
```javascript
function selectProvider(role = 'default', context = {}) {
// Try to get provider based on role or context
let provider;
if (role === 'research') {
provider = getResearchProvider();
} else if (context.fallback) {
provider = getFallbackProvider();
} else {
provider = getMainProvider();
}
// Dynamically import the provider module
return require(`./${provider}.js`);
}
```
3. Update service functions to use this selection logic:
```javascript
async function generateTextService(prompt, options = {}) {
const { role = 'default', ...otherOptions } = options;
const provider = selectProvider(role, options);
const model = getModelForRole(role);
const parameters = getProviderParameters(provider.name);
return provider.generateText(prompt, {
model,
...parameters,
...otherOptions
});
}
```
4. Implement fallback logic for service resilience:
```javascript
async function executeWithFallback(serviceFunction, ...args) {
try {
return await serviceFunction(...args);
} catch (error) {
console.error(`Primary provider failed: ${error.message}`);
const fallbackProvider = require(`./${getFallbackProvider()}.js`);
return fallbackProvider[serviceFunction.name](...args);
}
}
```
5. Add provider capability checking to prevent calling unsupported features:
```javascript
function checkProviderCapability(provider, capability) {
const capabilities = {
'anthropic': ['text', 'chat', 'stream'],
'perplexity': ['text', 'chat', 'stream', 'research'],
'openai': ['text', 'chat', 'stream', 'embedding', 'vision']
// Add other providers as needed
};
return capabilities[provider]?.includes(capability) || false;
}
```
</info added on 2025-04-20T03:52:13.065Z>
## 24. Implement `google.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `google.js` module within `src/ai-providers/`. This module should contain functions to interact with Google AI models (e.g., Gemini) using the **Vercel AI SDK (`@ai-sdk/google`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
### Details:
<info added on 2025-04-27T00:00:46.675Z>
```javascript
// Implementation details for google.js provider module
// 1. Required imports
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { streamText, generateText, generateObject } from "ai";
// 2. Model configuration
const DEFAULT_MODEL = "gemini-1.5-pro"; // Default model, can be overridden
const TEMPERATURE_DEFAULT = 0.7;
// 3. Function implementations
export async function generateGoogleText({
prompt,
model = DEFAULT_MODEL,
temperature = TEMPERATURE_DEFAULT,
apiKey
}) {
if (!apiKey) throw new Error("Google API key is required");
  const google = createGoogleGenerativeAI({ apiKey });
  const googleModel = google(model);
const result = await generateText({
model: googleModel,
prompt,
temperature
});
return result;
}
export async function streamGoogleText({
prompt,
model = DEFAULT_MODEL,
temperature = TEMPERATURE_DEFAULT,
apiKey
}) {
if (!apiKey) throw new Error("Google API key is required");
  const google = createGoogleGenerativeAI({ apiKey });
  const googleModel = google(model);
const stream = await streamText({
model: googleModel,
prompt,
temperature
});
return stream;
}
export async function generateGoogleObject({
prompt,
schema,
model = DEFAULT_MODEL,
temperature = TEMPERATURE_DEFAULT,
apiKey
}) {
if (!apiKey) throw new Error("Google API key is required");
  const google = createGoogleGenerativeAI({ apiKey });
  const googleModel = google(model);
const result = await generateObject({
model: googleModel,
prompt,
schema,
temperature
});
return result;
}
// 4. Environment variable setup in .env.local
// GOOGLE_API_KEY=your_google_api_key_here
// 5. Error handling considerations
// - Implement proper error handling for API rate limits
// - Add retries for transient failures
// - Consider adding logging for debugging purposes
```
</info added on 2025-04-27T00:00:46.675Z>
## 25. Implement `ollama.js` Provider Module [done]
### Dependencies: None
### Description: Create and implement the `ollama.js` module within `src/ai-providers/`. This module should contain functions to interact with local Ollama models using the **`ollama-ai-provider` library**, adhering to the standardized input/output format defined for `ai-services-unified.js`. Note the specific library used.
### Details:
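No implementation details were recorded for this subtask. As a starting point, here is a minimal sketch; it assumes the `ollama-ai-provider` package together with the core `ai` package, and the function/parameter names (`generateOllamaText`, `baseUrl`) are placeholders rather than the final implementation.
```javascript
// Hypothetical sketch for src/ai-providers/ollama.js (names and defaults are assumptions)
import { createOllama } from 'ollama-ai-provider';
import { generateText } from 'ai';

export async function generateOllamaText(params) {
  const { modelId, messages, maxTokens, temperature = 0.7, baseUrl } = params;
  if (!modelId) throw new Error('Model ID is required');
  if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');
  // Ollama runs locally, so no API key is required; baseUrl is optional and
  // falls back to the library's default local endpoint when undefined.
  const ollama = createOllama({ baseURL: baseUrl });
  const { text } = await generateText({
    model: ollama(modelId),
    messages,
    maxTokens,
    temperature
  });
  return text;
}
```
Streaming and structured output would follow the same pattern, swapping in `streamText` / `generateObject` from the core package.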
## 26. Implement `mistral.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `mistral.js` module within `src/ai-providers/`. This module should contain functions to interact with Mistral AI models using the **Vercel AI SDK (`@ai-sdk/mistral`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
### Details:
## 27. Implement `azure.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `azure.js` module within `src/ai-providers/`. This module should contain functions to interact with Azure OpenAI models using the **Vercel AI SDK (`@ai-sdk/azure`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
### Details:
## 28. Implement `openrouter.js` Provider Module [done]
### Dependencies: None
### Description: Create and implement the `openrouter.js` module within `src/ai-providers/`. This module should contain functions to interact with various models via OpenRouter using the **`@openrouter/ai-sdk-provider` library**, adhering to the standardized input/output format defined for `ai-services-unified.js`. Note the specific library used.
### Details:
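No details were captured here either; below is a minimal sketch under the assumption that `@openrouter/ai-sdk-provider` exposes a `createOpenRouter` factory whose provider instance is callable with a model ID (function and parameter names are illustrative).
```javascript
// Hypothetical sketch for src/ai-providers/openrouter.js
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

export async function generateOpenRouterText(params) {
  const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;
  if (!apiKey) throw new Error('OpenRouter API key is required');
  if (!modelId) throw new Error('Model ID is required');
  // OpenRouter model IDs are namespaced, e.g. 'anthropic/claude-3.5-sonnet'
  const openrouter = createOpenRouter({ apiKey });
  const { text } = await generateText({
    model: openrouter(modelId),
    messages,
    maxTokens,
    temperature
  });
  return text;
}
```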
## 29. Implement `xai.js` Provider Module using Vercel AI SDK [done]
### Dependencies: None
### Description: Create and implement the `xai.js` module within `src/ai-providers/`. This module should contain functions to interact with xAI models (e.g., Grok) using the **Vercel AI SDK (`@ai-sdk/xai`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
### Details:
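Subtasks 26, 27, and 29 all follow the same Vercel AI SDK pattern as `google.js`, differing only in the provider factory. A condensed sketch is shown below; factory option names such as `resourceName` are assumptions to verify against each package's docs.
```javascript
// Hypothetical shared pattern for mistral.js, azure.js and xai.js
import { createMistral } from '@ai-sdk/mistral';
import { createAzure } from '@ai-sdk/azure';
import { createXai } from '@ai-sdk/xai';
import { generateText } from 'ai';

export async function generateXaiText(params) {
  const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;
  if (!apiKey) throw new Error('xAI API key is required');
  const xai = createXai({ apiKey });
  const { text } = await generateText({ model: xai(modelId), messages, maxTokens, temperature });
  return text;
}

// mistral.js and azure.js would differ only in the factory call:
//   const mistral = createMistral({ apiKey });
//   const azure = createAzure({ resourceName, apiKey }); // Azure is addressed by resource + deployment name
```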
## 30. Update Configuration Management for AI Providers [done]
### Dependencies: None
### Description: Update `config-manager.js` and related configuration logic/documentation to support the new provider/model selection mechanism for `ai-services-unified.js` (e.g., using `AI_PROVIDER`, `AI_MODEL` env vars from `process.env` or `session.env`), ensuring compatibility with existing role-based selection if needed.
### Details:
<info added on 2025-04-20T00:42:35.876Z>
```javascript
// Implementation details for config-manager.js updates
/**
* Unified configuration resolution function that checks multiple sources in priority order:
* 1. process.env
* 2. session.env (if available)
* 3. Default values from .taskmasterconfig
*
* @param {string} key - Configuration key to resolve
* @param {object} session - Optional session object that may contain env values
* @param {*} defaultValue - Default value if not found in any source
* @returns {*} Resolved configuration value
*/
function resolveConfig(key, session = null, defaultValue = null) {
return process.env[key] ?? session?.env?.[key] ?? defaultValue;
}
// AI provider/model resolution with fallback to role-based selection
function resolveAIConfig(session = null, role = 'default') {
const provider = resolveConfig('AI_PROVIDER', session);
const model = resolveConfig('AI_MODEL', session);
// If explicit provider/model specified, use those
if (provider && model) {
return { provider, model };
}
// Otherwise fall back to role-based configuration
const roleConfig = getRoleBasedAIConfig(role);
return {
provider: provider || roleConfig.provider,
model: model || roleConfig.model
};
}
// Example usage in ai-services-unified.js:
// const { provider, model } = resolveAIConfig(session, role);
// const client = getProviderClient(provider, resolveConfig(`${provider.toUpperCase()}_API_KEY`, session));
/**
* Configuration Resolution Documentation:
*
* 1. Environment Variables:
* - AI_PROVIDER: Explicitly sets the AI provider (e.g., 'openai', 'anthropic')
* - AI_MODEL: Explicitly sets the model to use (e.g., 'gpt-4', 'claude-2')
* - OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.: Provider-specific API keys
*
* 2. Resolution Strategy:
* - Values are first checked in process.env
* - If not found, session.env is checked (when available)
* - If still not found, defaults from .taskmasterconfig are used
* - For AI provider/model, explicit settings override role-based configuration
*
* 3. Backward Compatibility:
* - Role-based selection continues to work when AI_PROVIDER/AI_MODEL are not set
* - Existing code using getRoleBasedAIConfig() will continue to function
*/
```
</info added on 2025-04-20T00:42:35.876Z>
<info added on 2025-04-20T03:51:51.967Z>
<info added on 2025-04-20T14:30:12.456Z>
```javascript
/**
* Refactored configuration management implementation
*/
// Core configuration getters - replace direct CONFIG access
const getMainProvider = () => resolveConfig('AI_PROVIDER', null, CONFIG.ai?.mainProvider || 'openai');
const getMainModel = () => resolveConfig('AI_MODEL', null, CONFIG.ai?.mainModel || 'gpt-4');
const getLogLevel = () => resolveConfig('LOG_LEVEL', null, CONFIG.logging?.level || 'info');
const getMaxTokens = (role = 'default') => {
const explicitMaxTokens = parseInt(resolveConfig('MAX_TOKENS', null, 0), 10);
if (explicitMaxTokens > 0) return explicitMaxTokens;
// Fall back to role-based configuration
return CONFIG.ai?.roles?.[role]?.maxTokens || CONFIG.ai?.defaultMaxTokens || 4096;
};
// API key resolution - separate from general configuration
function resolveEnvVariable(key, session = null) {
return process.env[key] ?? session?.env?.[key] ?? null;
}
function isApiKeySet(provider, session = null) {
const keyName = `${provider.toUpperCase()}_API_KEY`;
return Boolean(resolveEnvVariable(keyName, session));
}
/**
* Migration guide for application components:
*
* 1. Replace direct CONFIG access:
* - Before: `const provider = CONFIG.ai.mainProvider;`
* - After: `const provider = getMainProvider();`
*
* 2. Replace direct process.env access for API keys:
* - Before: `const apiKey = process.env.OPENAI_API_KEY;`
* - After: `const apiKey = resolveEnvVariable('OPENAI_API_KEY', session);`
*
* 3. Check API key availability:
* - Before: `if (process.env.OPENAI_API_KEY) {...}`
* - After: `if (isApiKeySet('openai', session)) {...}`
*
* 4. Update provider/model selection in ai-services:
* - Before:
* ```
* const provider = role ? CONFIG.ai.roles[role]?.provider : CONFIG.ai.mainProvider;
* const model = role ? CONFIG.ai.roles[role]?.model : CONFIG.ai.mainModel;
* ```
* - After:
* ```
* const { provider, model } = resolveAIConfig(session, role);
* ```
*/
// Update .taskmasterconfig schema documentation
const configSchema = {
"ai": {
"mainProvider": "Default AI provider (overridden by AI_PROVIDER env var)",
"mainModel": "Default AI model (overridden by AI_MODEL env var)",
"defaultMaxTokens": "Default max tokens (overridden by MAX_TOKENS env var)",
"roles": {
"role_name": {
"provider": "Provider for this role (fallback if AI_PROVIDER not set)",
"model": "Model for this role (fallback if AI_MODEL not set)",
"maxTokens": "Max tokens for this role (fallback if MAX_TOKENS not set)"
}
}
},
"logging": {
"level": "Logging level (overridden by LOG_LEVEL env var)"
}
};
```
Implementation notes:
1. All configuration getters should provide environment variable override capability first, then fall back to .taskmasterconfig values
2. API key resolution should be kept separate from general configuration to maintain security boundaries
3. Update all application components to use these new getters rather than accessing CONFIG or process.env directly
4. Document the priority order (env vars > session.env > .taskmasterconfig) in JSDoc comments
5. Ensure backward compatibility by maintaining support for role-based configuration when explicit env vars aren't set
</info added on 2025-04-20T14:30:12.456Z>
</info added on 2025-04-20T03:51:51.967Z>
<info added on 2025-04-22T02:41:51.174Z>
**Implementation Update (Deviation from Original Plan):**
- The configuration management system has been refactored to **eliminate environment variable overrides** (such as `AI_PROVIDER`, `AI_MODEL`, `MAX_TOKENS`, etc.) for all settings except API keys and select endpoints. All configuration values for providers, models, parameters, and logging are now sourced *exclusively* from the loaded `.taskmasterconfig` file (merged with defaults), ensuring a single source of truth.
- The `resolveConfig` and `resolveAIConfig` helpers, which previously checked `process.env` and `session.env`, have been **removed**. All configuration getters now directly access the loaded configuration object.
- A new `MissingConfigError` is thrown if the `.taskmasterconfig` file is not found at startup. This error is caught in the application entrypoint (`ai-services-unified.js`), which then instructs the user to initialize the configuration file before proceeding. (A loading sketch follows this note.)
- API key and endpoint resolution remains an exception: environment variable overrides are still supported for secrets like `OPENAI_API_KEY` or provider-specific endpoints, maintaining security best practices.
- Documentation (`README.md`, inline JSDoc, and `.taskmasterconfig` schema) has been updated to clarify that **environment variables are no longer used for general configuration** (other than secrets), and that all settings must be defined in `.taskmasterconfig`.
- All application components have been updated to use the new configuration getters, and any direct access to `CONFIG`, `process.env`, or the previous helpers has been removed.
- This stricter approach enforces configuration-as-code principles, ensures reproducibility, and prevents configuration drift, aligning with modern best practices for immutable infrastructure and automated configuration management[2][4].
</info added on 2025-04-22T02:41:51.174Z>
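For reference, a minimal sketch of the stricter loading behaviour described above (the `MissingConfigError` name comes from this note; the loader shape, default values, and merge strategy are assumptions):
```javascript
// Sketch of config loading in config-manager.js under the single-source-of-truth approach
import fs from 'fs';
import path from 'path';

export class MissingConfigError extends Error {
  constructor(configPath) {
    super(`.taskmasterconfig not found at ${configPath}; run initialization before using AI services.`);
    this.name = 'MissingConfigError';
  }
}

// Illustrative defaults only; real defaults live alongside the .taskmasterconfig schema.
const DEFAULTS = {
  ai: { mainProvider: 'anthropic', mainModel: 'claude-3-5-sonnet', defaultMaxTokens: 4096 },
  logging: { level: 'info' }
};

export function loadConfig(projectRoot = process.cwd()) {
  const configPath = path.join(projectRoot, '.taskmasterconfig');
  if (!fs.existsSync(configPath)) {
    throw new MissingConfigError(configPath);
  }
  const userConfig = JSON.parse(fs.readFileSync(configPath, 'utf8'));
  // Environment variables are only consulted for API keys/endpoints elsewhere;
  // everything else comes from this merged object.
  return {
    ...DEFAULTS,
    ...userConfig,
    ai: { ...DEFAULTS.ai, ...userConfig.ai },
    logging: { ...DEFAULTS.logging, ...userConfig.logging }
  };
}
```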
## 31. Implement Integration Tests for Unified AI Service [done]
### Dependencies: 61.18
### Description: Implement integration tests for `ai-services-unified.js`. These tests should verify the correct routing to different provider modules based on configuration and ensure the unified service functions (`generateTextService`, `generateObjectService`, etc.) work correctly when called from modules like `task-manager.js`. [Updated: 5/2/2025]
### Details:
<info added on 2025-04-20T03:51:23.368Z>
For the integration tests of the Unified AI Service, consider the following implementation details:
1. Setup test fixtures:
- Create a mock `.taskmasterconfig` file with different provider configurations
- Define test cases with various model selections and parameter settings
- Use environment variable mocks only for API keys (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
2. Test configuration resolution:
- Verify that `ai-services-unified.js` correctly retrieves settings from `config-manager.js`
- Test that model selection follows the hierarchy defined in `.taskmasterconfig`
- Ensure fallback mechanisms work when primary providers are unavailable
3. Mock the provider modules:
```javascript
jest.mock('../services/openai-service.js');
jest.mock('../services/anthropic-service.js');
```
4. Test specific scenarios:
- Provider selection based on configured preferences
- Parameter inheritance from config (temperature, maxTokens)
- Error handling when API keys are missing
- Proper routing when specific models are requested
5. Verify integration with task-manager:
```javascript
test('task-manager correctly uses unified AI service with config-based settings', async () => {
// Setup mock config with specific settings
mockConfigManager.getAIProviderPreference.mockReturnValue(['openai', 'anthropic']);
mockConfigManager.getModelForRole.mockReturnValue('gpt-4');
mockConfigManager.getParametersForModel.mockReturnValue({ temperature: 0.7, maxTokens: 2000 });
// Verify task-manager uses these settings when calling the unified service
// ...
});
```
6. Include tests for configuration changes at runtime and their effect on service behavior.
</info added on 2025-04-20T03:51:23.368Z>
<info added on 2025-05-02T18:41:13.374Z>
[2024-01-15 10:30:45] A custom e2e script was created to test all the CLI commands, but we'll also need one to test the MCP server; task 76 is dedicated to that.
</info added on 2025-05-02T18:41:13.374Z>
[2023-11-24 20:06:45] Additional low-level details for integration tests:
- Ensure that each test case logs detailed output for each step, including configuration retrieval, provider selection, and API call results.
- Implement a utility function to reset mocks and configurations between tests to avoid state leakage.
- Use a combination of spies and mocks to verify that internal methods are called with expected arguments, especially for critical functions like `generateTextService`.
- Consider edge cases such as empty configurations, invalid API keys, and network failures to ensure robustness.
- Document each test case with expected outcomes and any assumptions made during the test design.
- Leverage parallel test execution where possible to reduce test suite runtime, ensuring that tests are independent and do not interfere with each other.
<info added on 2025-05-02T20:42:14.388Z>
<info added on 2023-11-24T20:10:00.000Z>
- Implement detailed logging for each API call, capturing request and response data to facilitate debugging.
- Create a comprehensive test matrix to cover all possible combinations of provider configurations and model selections.
- Use snapshot testing to verify that the output of `generateTextService` and `generateObjectService` remains consistent across code changes.
- Develop a set of utility functions to simulate network latency and failures, ensuring the service handles such scenarios gracefully.
- Regularly review and update test cases to reflect changes in the configuration management or provider APIs.
- Ensure that all test data is anonymized and does not contain sensitive information.
</info added on 2023-11-24T20:10:00.000Z>
</info added on 2025-05-02T20:42:14.388Z>
## 32. Update Documentation for New AI Architecture [done]
### Dependencies: 61.31
### Description: Update relevant documentation files (e.g., `architecture.mdc`, `taskmaster.mdc`, environment variable guides, README) to accurately reflect the new AI service architecture using `ai-services-unified.js`, provider modules, the Vercel AI SDK, and the updated configuration approach.
### Details:
<info added on 2025-04-20T03:51:04.461Z>
The new AI architecture introduces a clear separation between sensitive credentials and configuration settings:
## Environment Variables vs Configuration File
- **Environment Variables (.env)**:
- Store only sensitive API keys and credentials
- Accessed via `resolveEnvVariable()` which checks both process.env and session.env
- Example: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`
- No model names, parameters, or non-sensitive settings should be here
- **.taskmasterconfig File**:
- Central location for all non-sensitive configuration
- Structured JSON with clear sections for different aspects of the system
- Contains:
- Model mappings by role (e.g., `systemModels`, `userModels`)
- Default parameters (temperature, maxTokens, etc.)
- Logging preferences
- Provider-specific settings
- Accessed via getter functions from `config-manager.js` like:
```javascript
import { getModelForRole, getDefaultTemperature } from './config-manager.js';
// Usage examples
const model = getModelForRole('system');
const temp = getDefaultTemperature();
```
## Implementation Notes
- Document the structure of `.taskmasterconfig` with examples
- Explain the migration path for users with existing setups
- Include a troubleshooting section for common configuration issues
- Add a configuration validation section explaining how the system verifies settings
</info added on 2025-04-20T03:51:04.461Z>
## 33. Cleanup Old AI Service Files [done]
### Dependencies: 61.31, 61.32
### Description: After all other migration subtasks (refactoring, provider implementation, testing, documentation) are complete and verified, remove the old `ai-services.js` and `ai-client-factory.js` files from the `scripts/modules/` directory. Ensure no code still references them.
### Details:
<info added on 2025-04-22T06:51:02.444Z>
I'll provide additional technical information to enhance the "Cleanup Old AI Service Files" subtask:
## Implementation Details
**Pre-Cleanup Verification Steps:**
- Run a comprehensive codebase search for any remaining imports or references to `ai-services.js` and `ai-client-factory.js` using grep or your IDE's search functionality[1][4] (a scan sketch follows this note)
- Check for any dynamic imports that might not be caught by static analysis tools
- Verify that all dependent modules have been properly migrated to the new AI service architecture
**Cleanup Process:**
- Create a backup of the files before deletion in case rollback is needed
- Document the file removal in the migration changelog with timestamps and specific file paths[5]
- Update any build configuration files that might reference these files (webpack configs, etc.)
- Run a full test suite after removal to ensure no runtime errors occur[2]
**Post-Cleanup Validation:**
- Implement automated tests to verify the application functions correctly without the removed files
- Monitor application logs and error reporting systems for 48-72 hours after deployment to catch any missed dependencies[3]
- Perform a final code review to ensure clean architecture principles are maintained in the new implementation
**Technical Considerations:**
- Check for any circular dependencies that might have been created during the migration process
- Ensure proper garbage collection by removing any cached instances of the old services
- Verify that performance metrics remain stable after the removal of legacy code
</info added on 2025-04-22T06:51:02.444Z>
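To support the pre-cleanup verification step, a throwaway scan script along these lines could be used (directory layout and script name are assumptions; treat it as a sketch, not tooling that exists in the repo):
```javascript
// scripts/dev/find-legacy-ai-refs.js (hypothetical helper)
import fs from 'fs';
import path from 'path';

const LEGACY_FILES = ['ai-services.js', 'ai-client-factory.js'];

function scan(dir, hits = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory() && entry.name !== 'node_modules') {
      scan(fullPath, hits);
    } else if (entry.isFile() && /\.(js|cjs|mjs)$/.test(entry.name)) {
      const source = fs.readFileSync(fullPath, 'utf8');
      // Plain string matching also catches dynamic imports/requires that static analysis can miss.
      if (LEGACY_FILES.some((name) => source.includes(name)) && !LEGACY_FILES.includes(entry.name)) {
        hits.push(fullPath);
      }
    }
  }
  return hits;
}

const hits = scan(path.resolve('scripts'));
console.log(hits.length ? `Legacy AI service files still referenced by:\n${hits.join('\n')}` : 'No remaining references found.');
```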
## 34. Audit and Standardize Env Variable Access [done]
### Dependencies: None
### Description: Audit the entire codebase (core modules, provider modules, utilities) to ensure all accesses to environment variables (API keys, configuration flags) consistently use a standardized resolution function (like `resolveEnvVariable` or a new utility) that checks `process.env` first and then `session.env` if available. Refactor any direct `process.env` access where `session.env` should also be considered.
### Details:
<info added on 2025-04-20T03:50:25.632Z>
This audit should distinguish between two types of configuration:
1. **Sensitive credentials (API keys)**: These should exclusively use the `resolveEnvVariable` pattern to check both `process.env` and `session.env`. Verify that no API keys are hardcoded or accessed through direct `process.env` references.
2. **Application configuration**: All non-credential settings should be migrated to use the centralized `.taskmasterconfig` system via the `config-manager.js` getters. This includes:
- Model selections and role assignments
- Parameter settings (temperature, maxTokens, etc.)
- Logging configuration
- Default behaviors and fallbacks
Implementation notes:
- Create a comprehensive inventory of all environment variable accesses
- Categorize each as either credential or application configuration
- For credentials: standardize on `resolveEnvVariable` pattern
- For app config: migrate to appropriate `config-manager.js` getter methods
- Document any exceptions that require special handling
- Add validation to prevent regression (e.g., ESLint rules against direct `process.env` access; a sample rule configuration follows this note)
This separation ensures security best practices for credentials while centralizing application configuration for better maintainability.
</info added on 2025-04-20T03:50:25.632Z>
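The regression guard mentioned above could look roughly like this in an ESLint config (the override file paths are placeholders for wherever the resolution utilities actually live):
```javascript
// .eslintrc.cjs excerpt (sketch): flag direct process.env access outside the resolution utilities
module.exports = {
  rules: {
    'no-restricted-properties': [
      'error',
      {
        object: 'process',
        property: 'env',
        message: 'Use resolveEnvVariable() for credentials or config-manager getters for app settings.'
      }
    ]
  },
  overrides: [
    {
      files: ['scripts/modules/utils.js', 'scripts/modules/config-manager.js'],
      rules: { 'no-restricted-properties': 'off' }
    }
  ]
};
```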
<info added on 2025-04-20T06:58:36.731Z>
**Plan & Analysis (Added on 2023-05-15T14:32:18.421Z)**:
**Goal:**
1. **Standardize API Key Access**: Ensure all accesses to sensitive API keys (Anthropic, Perplexity, etc.) consistently use a standard function (like `resolveEnvVariable(key, session)`) that checks both `process.env` and `session.env`. Replace direct `process.env.API_KEY` access.
2. **Centralize App Configuration**: Ensure all non-sensitive configuration values (model names, temperature, logging levels, max tokens, etc.) are accessed *only* through `scripts/modules/config-manager.js` getters. Eliminate direct `process.env` access for these.
**Strategy: Inventory -> Analyze -> Target -> Refine**
1. **Inventory (`process.env` Usage):** Performed grep search (`rg "process\.env"`). Results indicate widespread usage across multiple files.
2. **Analysis (Categorization of Usage):**
* **API Keys (Credentials):** ANTHROPIC_API_KEY, PERPLEXITY_API_KEY, OPENAI_API_KEY, etc. found in `task-manager.js`, `ai-services.js`, `commands.js`, `dependency-manager.js`, `ai-client-utils.js`, test files. Needs replacement with `resolveEnvVariable(key, session)`.
* **App Configuration:** PERPLEXITY_MODEL, TEMPERATURE, MAX_TOKENS, MODEL, DEBUG, LOG_LEVEL, DEFAULT_*, PROJECT_*, TASK_MASTER_PROJECT_ROOT found in `task-manager.js`, `ai-services.js`, `scripts/init.js`, `mcp-server/src/logger.js`, `mcp-server/src/tools/utils.js`, test files. Needs replacement with `config-manager.js` getters.
* **System/Environment Info:** HOME, USERPROFILE, SHELL in `scripts/init.js`. Needs review (e.g., `os.homedir()` preference).
* **Test Code/Setup:** Extensive usage in test files. Acceptable for mocking, but code under test must use standard methods. May require test adjustments.
* **Helper Functions/Comments:** Definitions/comments about `resolveEnvVariable`. No action needed.
3. **Target (High-Impact Areas & Initial Focus):**
* High Impact: `task-manager.js` (~5800 lines), `ai-services.js` (~1500 lines).
* Medium Impact: `commands.js`, Test Files.
* Foundational: `ai-client-utils.js`, `config-manager.js`, `utils.js`.
* **Initial Target Command:** `task-master analyze-complexity` for a focused, end-to-end refactoring exercise.
4. **Refine (Plan for `analyze-complexity`):**
a. **Trace Code Path:** Identify functions involved in `analyze-complexity`.
b. **Refactor API Key Access:** Replace direct `process.env.PERPLEXITY_API_KEY` with `resolveEnvVariable(key, session)`.
c. **Refactor App Config Access:** Replace direct `process.env` for model name, temp, tokens with `config-manager.js` getters.
d. **Verify `resolveEnvVariable`:** Ensure robustness, especially handling potentially undefined `session`.
e. **Test:** Verify command works locally and via MCP context (if possible). Update tests.
This piecemeal approach aims to establish the refactoring pattern before tackling the entire codebase.
</info added on 2025-04-20T06:58:36.731Z>
## 35. Refactor add-task.js for Unified AI Service & Config [done]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep `getDefaultPriority` usage.
### Details:
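No details were recorded here; the intended shape is the same service-call pattern used in the other refactors, roughly as sketched below (the schema, helper name, and relative import paths are illustrative).
```javascript
// Sketch of the refactored AI call inside add-task.js (names are assumptions)
import { z } from 'zod';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';

const taskSchema = z.object({
  title: z.string(),
  description: z.string(),
  priority: z.string().optional()
});

async function draftTaskFromPrompt(prompt, { useResearch = false, session } = {}) {
  const role = useResearch ? 'researcher' : 'default';
  // All model/parameter selection happens inside the unified service.
  const newTask = await generateObjectService({
    schema: taskSchema,
    prompt,
    role,
    session
  });
  // Non-AI configuration is still read directly from config-manager.
  return { ...newTask, priority: newTask.priority ?? getDefaultPriority() };
}
```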
## 36. Refactor analyze-task-complexity.js for Unified AI Service & Config [done]
### Dependencies: None
### Description: Replace direct AI calls with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep config getters needed for report metadata (`getProjectName`, `getDefaultSubtasks`).
### Details:
<info added on 2025-04-24T17:45:51.956Z>
## Additional Implementation Notes for Refactoring
**General Guidance**
- Ensure all AI-related logic in `analyze-task-complexity.js` is abstracted behind the `generateObjectService` interface. The function should only specify *what* to generate (schema, prompt, and parameters), not *how* the AI call is made or which model/config is used.
- Remove any code that directly fetches AI model parameters or credentials from configuration files. All such details must be handled by the unified service layer.
**1. Core Logic Function (analyze-task-complexity.js)**
- Refactor the function signature to accept a `session` object and a `role` parameter, in addition to the existing arguments.
- When preparing the service call, construct a payload object containing:
- The Zod schema for expected output.
- The prompt or input for the AI.
- The `role` (e.g., "researcher" or "default") based on the `useResearch` flag.
- The `session` context for downstream configuration and authentication.
- Example service call:
```js
const result = await generateObjectService({
schema: complexitySchema,
prompt: buildPrompt(task, options),
role,
session,
});
```
- Remove all references to direct AI client instantiation or configuration fetching.
**2. CLI Command Action Handler (commands.js)**
- Ensure the CLI handler for `analyze-complexity`:
- Accepts and parses the `--use-research` flag (or equivalent).
- Passes the `useResearch` flag and the current session context to the core function.
- Handles errors from the unified service gracefully, providing user-friendly feedback.
**3. MCP Tool Definition (mcp-server/src/tools/analyze.js)**
- Align the Zod schema for CLI options with the parameters expected by the core function, including `useResearch` and any new required fields.
- Use `getMCPProjectRoot` to resolve the project path before invoking the core function.
- Add status logging before and after the analysis, e.g., "Analyzing task complexity..." and "Analysis complete."
- Ensure the tool calls the core function with all required parameters, including session and resolved paths.
**4. MCP Direct Function Wrapper (mcp-server/src/core/direct-functions/analyze-complexity-direct.js)**
- Remove any direct AI client or config usage.
- Implement a logger wrapper that standardizes log output for this function (e.g., `logger.info`, `logger.error`).
- Pass the session context through to the core function to ensure all environment/config access is centralized.
- Return a standardized response object, e.g.:
```js
return {
success: true,
data: analysisResult,
message: "Task complexity analysis completed.",
};
```
**Testing and Validation**
- After refactoring, add or update tests to ensure:
- The function does not break if AI service configuration changes.
- The correct role and session are always passed to the unified service.
- Errors from the unified service are handled and surfaced appropriately.
**Best Practices**
- Keep the core logic function pure and focused on orchestration, not implementation details.
- Use dependency injection for session/context to facilitate testing and future extensibility.
- Document the expected structure of the session and role parameters for maintainability.
These enhancements will ensure the refactored code is modular, maintainable, and fully decoupled from AI implementation details, aligning with modern refactoring best practices[1][3][5].
</info added on 2025-04-24T17:45:51.956Z>
## 37. Refactor expand-task.js for Unified AI Service & Config [done]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers like `generateSubtasksWithPerplexity`) with `generateObjectService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead. Keep `getDefaultSubtasks` usage.
### Details:
<info added on 2025-04-24T17:46:51.286Z>
- In expand-task.js, ensure that all AI parameter configuration (such as model, temperature, max tokens) is passed via the unified generateObjectService interface, not fetched directly from config files or environment variables. This centralizes AI config management and supports future service changes without further refactoring.
- When preparing the service call, construct the payload to include both the prompt and any schema or validation requirements expected by generateObjectService. For example, if subtasks must conform to a Zod schema, pass the schema definition or reference as part of the call.
- For the CLI handler, ensure that the --research flag is mapped to the useResearch boolean and that this is explicitly passed to the core expand-task logic. Also, propagate any session or user context from CLI options to the core function for downstream auditing or personalization.
- In the MCP tool definition, validate that all CLI-exposed parameters are reflected in the Zod schema, including optional ones like prompt overrides or force regeneration. This ensures strict input validation and prevents runtime errors.
- In the direct function wrapper, implement a try/catch block around the core expandTask invocation. On error, log the error with context (task id, session id) and return a standardized error response object with error code and message fields.
- Add unit tests or integration tests to verify that expand-task.js no longer imports or uses any direct AI client or config getter, and that all AI calls are routed through ai-services-unified.js.
- Document the expected shape of the session object and any required fields for downstream service calls, so future maintainers know what context must be provided.
</info added on 2025-04-24T17:46:51.286Z>
## 38. Refactor expand-all-tasks.js for Unified AI Helpers & Config [done]
### Dependencies: None
### Description: Ensure this file correctly calls the refactored `getSubtasksFromAI` helper. Update config usage to only use `getDefaultSubtasks` from `config-manager.js` directly. AI interaction itself is handled by the helper.
### Details:
<info added on 2025-04-24T17:48:09.354Z>
## Additional Implementation Notes for Refactoring expand-all-tasks.js
- Replace any direct imports of AI clients (e.g., OpenAI, Anthropic) and configuration getters with a single import of `expandTask` from `expand-task.js`, which now encapsulates all AI and config logic.
- Ensure that the orchestration logic in `expand-all-tasks.js`:
- Iterates over all pending tasks, checking for existing subtasks before invoking expansion.
- For each task, calls `expandTask` and passes both the `useResearch` flag and the current `session` object as received from upstream callers.
- Does not contain any logic for AI prompt construction, API calls, or config file reading—these are now delegated to the unified helpers.
- Maintain progress reporting by emitting status updates (e.g., via events or logging) before and after each task expansion, and ensure that errors from `expandTask` are caught and reported with sufficient context (task ID, error message).
- Example code snippet for calling the refactored helper:
```js
// Pseudocode for orchestration loop
for (const task of pendingTasks) {
try {
reportProgress(`Expanding task ${task.id}...`);
await expandTask({
task,
useResearch,
session,
});
reportProgress(`Task ${task.id} expanded.`);
} catch (err) {
reportError(`Failed to expand task ${task.id}: ${err.message}`);
}
}
```
- Remove any fallback or legacy code paths that previously handled AI or config logic directly within this file.
- Ensure that all configuration defaults are accessed exclusively via `getDefaultSubtasks` from `config-manager.js` and only within the unified helper, not in `expand-all-tasks.js`.
- Add or update JSDoc comments to clarify that this module is now a pure orchestrator and does not perform AI or config operations directly.
</info added on 2025-04-24T17:48:09.354Z>
## 39. Refactor get-subtasks-from-ai.js for Unified AI Service & Config [done]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters; use unified service instead.
### Details:
<info added on 2025-04-24T17:48:35.005Z>
**Additional Implementation Notes for Refactoring get-subtasks-from-ai.js**
- **Zod Schema Definition**:
Define a Zod schema that precisely matches the expected subtask object structure. For example, if a subtask should have an id (string), title (string), and status (string), use:
```js
import { z } from 'zod';
const SubtaskSchema = z.object({
id: z.string(),
title: z.string(),
status: z.string(),
// Add other fields as needed
});
const SubtasksArraySchema = z.array(SubtaskSchema);
```
This ensures robust runtime validation and clear error reporting if the AI response does not match expectations[5][1][3].
- **Unified Service Invocation**:
Replace all direct AI client and config usage with:
```js
import { generateObjectService } from './ai-services-unified';
// Example usage:
const subtasks = await generateObjectService({
schema: SubtasksArraySchema,
prompt,
role,
session,
});
```
This centralizes AI invocation and parameter management, ensuring consistency and easier maintenance.
- **Role Determination**:
Use the `useResearch` flag to select the AI role:
```js
const role = useResearch ? 'researcher' : 'default';
```
- **Error Handling**:
Implement structured error handling:
```js
try {
// AI service call
} catch (err) {
if (err.name === 'ServiceUnavailableError') {
// Handle AI service unavailability
} else if (err.name === 'ZodError') {
// Handle schema validation errors
// err.errors contains detailed validation issues
} else if (err.name === 'PromptConstructionError') {
// Handle prompt construction issues
} else {
// Handle unexpected errors
}
throw err; // or wrap and rethrow as needed
}
```
This pattern ensures that consumers can distinguish between different failure modes and respond appropriately.
- **Consumer Contract**:
Update the function signature to require both `useResearch` and `session` parameters, and document this in JSDoc/type annotations for clarity.
- **Prompt Construction**:
Move all prompt construction logic outside the core function if possible, or encapsulate it so that errors can be caught and reported as `PromptConstructionError`.
- **No AI Implementation Details**:
The refactored function should not expose or depend on any AI implementation specifics—only the unified service interface and schema validation.
- **Testing**:
Add or update tests to cover:
- Successful subtask generation
- Schema validation failures (invalid AI output)
- Service unavailability scenarios
- Prompt construction errors
These enhancements ensure the refactored file is robust, maintainable, and aligned with the unified AI service architecture, leveraging Zod for strict runtime validation and clear error boundaries[5][1][3].
</info added on 2025-04-24T17:48:35.005Z>
## 40. Refactor update-task-by-id.js for Unified AI Service & Config [done]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.
### Details:
<info added on 2025-04-24T17:48:58.133Z>
- When defining the Zod schema for task update validation, consider using Zod's function schemas to validate both the input parameters and the expected output of the update function. This approach helps separate validation logic from business logic and ensures type safety throughout the update process[1][2].
- For the core logic, use Zod's `.implement()` method to wrap the update function, so that all inputs (such as task ID, prompt, and options) are validated before execution, and outputs are type-checked. This reduces runtime errors and enforces contract compliance between layers[1][2].
- In the MCP tool definition, ensure that the Zod schema explicitly validates all required parameters (e.g., `id` as a string, `prompt` as a string, `research` as a boolean or optional flag). This guarantees that only well-formed requests reach the core logic, improving reliability and error reporting[3][5].
- When preparing the unified AI service call, pass the validated and sanitized data from the Zod schema directly to `generateObjectService`, ensuring that no unvalidated data is sent to the AI layer.
- For output formatting, leverage Zod's ability to define and enforce the shape of the returned object, ensuring that the response structure (including success/failure status and updated task data) is always consistent and predictable[1][2][3].
- If you need to validate or transform nested objects (such as task metadata or options), use Zod's object and nested schema capabilities to define these structures precisely, catching errors early and simplifying downstream logic[3][5].
</info added on 2025-04-24T17:48:58.133Z>
## 41. Refactor update-tasks.js for Unified AI Service & Config [done]
### Dependencies: None
### Description: Replace direct AI calls (old `ai-services.js` helpers) with `generateObjectService` or `generateTextService` from `ai-services-unified.js`. Pass `role` and `session`. Remove direct config getter usage (from `config-manager.js`) for AI parameters and fallback logic; use unified service instead. Keep `getDebugFlag`.
### Details:
<info added on 2025-04-24T17:49:25.126Z>
## Additional Implementation Notes for Refactoring update-tasks.js
- **Zod Schema for Batch Updates**:
Define a Zod schema to validate the structure of the batch update payload. For example, if updating tasks requires an array of task objects with specific fields, use:
```typescript
import { z } from "zod";
const TaskUpdateSchema = z.object({
id: z.number(),
status: z.string(),
// add other fields as needed
});
const BatchUpdateSchema = z.object({
tasks: z.array(TaskUpdateSchema),
from: z.number(),
prompt: z.string().optional(),
useResearch: z.boolean().optional(),
});
```
This ensures all incoming data for batch updates is validated at runtime, catching malformed input early and providing clear error messages[4][5].
- **Function Schema Validation**:
If exposing the update logic as a callable function (e.g., for CLI or API), consider using Zod's function schema to validate both input and output:
```typescript
const updateTasksFunction = z
.function()
.args(BatchUpdateSchema, z.object({ session: z.any() }))
.returns(z.promise(z.object({ success: z.boolean(), updated: z.number() })))
.implement(async (input, { session }) => {
// implementation here
});
```
This pattern enforces correct usage and output shape, improving reliability[1].
- **Error Handling and Reporting**:
Use Zod's `.safeParse()` or `.parse()` methods to validate input. On validation failure, return or throw a formatted error to the caller (CLI, API, etc.), ensuring actionable feedback for users[5].
- **Consistent JSON Output**:
When invoking the core update function from wrappers (CLI, MCP), ensure the output is always serialized as JSON. This is critical for downstream consumers and for automated tooling.
- **Logger Wrapper Example**:
Implement a logger utility that can be toggled for silent mode:
```typescript
function createLogger(silent: boolean) {
return {
log: (...args: any[]) => { if (!silent) console.log(...args); },
error: (...args: any[]) => { if (!silent) console.error(...args); }
};
}
```
Pass this logger to the core logic for consistent, suppressible output.
- **Session Context Usage**:
Ensure all AI service calls and config access are routed through the provided session context, not global config getters. This supports multi-user and multi-session environments.
- **Task Filtering Logic**:
Before invoking the AI service, filter the tasks array to only include those with `id >= from` and `status === "pending"`. This preserves the intended batch update semantics.
- **Preserve File Regeneration**:
After updating tasks, ensure any logic that regenerates or writes task files is retained and invoked as before.
- **CLI and API Parameter Validation**:
Use the same Zod schemas to validate CLI arguments and API payloads, ensuring consistency across all entry points[5].
- **Example: Validating CLI Arguments**
```typescript
const cliArgsSchema = z.object({
from: z.string().regex(/^\d+$/).transform(Number),
research: z.boolean().optional(),
session: z.any(),
});
const parsedArgs = cliArgsSchema.parse(cliArgs);
```
These enhancements ensure robust validation, unified service usage, and maintainable, predictable batch update behavior.
</info added on 2025-04-24T17:49:25.126Z>
## 42. Remove all unused imports [done]
### Dependencies: None
### Description:
### Details:
## 43. Remove all unnecessary console logs [done]
### Dependencies: None
### Description:
### Details:
<info added on 2025-05-02T20:47:07.566Z>
1. Identify all files within the project directory that contain console log statements.
2. Use a code editor or IDE with search functionality to locate all instances of console.log().
3. Review each console log statement to determine if it is necessary for debugging or logging purposes.
4. For each unnecessary console log, remove the statement from the code.
5. Ensure that the removal of console logs does not affect the functionality of the application.
6. Test the application thoroughly to confirm that no errors are introduced by the removal of these logs.
7. Commit the changes to the version control system with a message indicating the cleanup of console logs.
</info added on 2025-05-02T20:47:07.566Z>
<info added on 2025-05-02T20:47:56.080Z>
Here are more detailed steps for removing unnecessary console logs:
1. Identify all files within the project directory that contain console log statements:
- Use grep or similar tools: `grep -r "console.log" --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" ./src`
- Alternatively, use your IDE's project-wide search functionality with regex pattern `console\.(log|debug|info|warn|error)`
2. Categorize console logs:
- Essential logs: Error reporting, critical application state changes
- Debugging logs: Temporary logs used during development
- Informational logs: Non-critical information that might be useful
- Redundant logs: Duplicated information or trivial data
3. Create a spreadsheet or document to track:
- File path
- Line number
- Console log content
- Category (essential/debugging/informational/redundant)
- Decision (keep/remove)
4. Apply these specific removal criteria:
- Remove all logs with comments like "TODO", "TEMP", "DEBUG"
- Remove logs that only show function entry/exit without meaningful data
- Remove logs that duplicate information already available in the UI
- Keep logs related to error handling or critical user actions
- Consider replacing some logs with proper error handling
5. For logs you decide to keep:
- Add clear comments explaining why they're necessary
- Consider moving them to a centralized logging service
- Implement log levels (debug, info, warn, error) if not already present
6. Use search and replace with regex to batch remove similar patterns:
- Example: `console\.log\(\s*['"]Processing.*?['"]\s*\);`
7. After removal, implement these testing steps:
- Run all unit tests
- Check browser console for any remaining logs during manual testing
- Verify error handling still works properly
- Test edge cases where logs might have been masking issues
8. Consider implementing a linting rule to prevent unnecessary console logs in future code:
- Add the ESLint "no-console" rule with appropriate exceptions (see the example config after this list)
- Configure CI/CD pipeline to fail if new console logs are added
9. Document any logging standards for the team to follow going forward.
10. After committing changes, monitor the application in staging environment to ensure no critical information is lost.
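As one possible configuration (illustrative only; adjust the allowed methods to your team's standards), a `.eslintrc.json` entry for the rule might look like:
```json
{
  "rules": {
    "no-console": ["error", { "allow": ["warn", "error"] }]
  }
}
```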
</info added on 2025-05-02T20:47:56.080Z>
## 44. Add setters for temperature and max tokens on a per-role basis. [done]
### Dependencies: None
### Description: Not on a per-model/provider basis, though we could probably define those values in the .taskmasterconfig file; they would then be hard-coded. If we let users define them on a per-role basis, they will define incorrect values. A good middle ground is to do both: enforce maximums at the .taskmasterconfig level using the known input/output token limits, and also provide setters to adjust temperature, input tokens, and output tokens for each of the three roles.
### Details:
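A rough sketch of what per-role settings in `.taskmasterconfig` could look like (field names and values are illustrative, not the final schema; maximums would still be clamped to the known limits of the selected model):
```json
{
  "models": {
    "main": { "provider": "anthropic", "modelId": "claude-3-7-sonnet-20250219", "maxTokens": 64000, "temperature": 0.2 },
    "research": { "provider": "perplexity", "modelId": "sonar-pro", "maxTokens": 8192, "temperature": 0.1 },
    "fallback": { "provider": "anthropic", "modelId": "claude-3-5-sonnet-20241022", "maxTokens": 8192, "temperature": 0.2 }
  }
}
```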
## 45. Add support for Bedrock provider with ai sdk and unified service [done]
### Dependencies: None
### Description:
### Details:
<info added on 2025-04-25T19:03:42.584Z>
- Install the Bedrock provider for the AI SDK using your package manager (e.g., npm i @ai-sdk/amazon-bedrock) and ensure the core AI SDK is present[3][4].
- To integrate with your existing config manager, externalize all Bedrock-specific configuration (such as region, model name, and credential provider) into your config management system. For example, store values like region ("us-east-1") and model identifier ("meta.llama3-8b-instruct-v1:0") in your config files or environment variables, and load them at runtime.
- For credentials, leverage the AWS SDK credential provider chain to avoid hardcoding secrets. Use the @aws-sdk/credential-providers package and pass a credentialProvider (e.g., fromNodeProviderChain()) to the Bedrock provider. This allows your config manager to control credential sourcing via environment, profiles, or IAM roles, consistent with other AWS integrations[1].
- Example integration with config manager:
```js
import { generateText } from 'ai';
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { fromNodeProviderChain } from '@aws-sdk/credential-providers';

// Assume configManager.get returns your config values
const region = configManager.get('bedrock.region');
const model = configManager.get('bedrock.model');

const bedrock = createAmazonBedrock({
  region,
  credentialProvider: fromNodeProviderChain(),
});

// Use with AI SDK methods
const { text } = await generateText({
  model: bedrock(model),
  prompt: 'Your prompt here',
});
```
- If your config manager supports dynamic provider selection, you can abstract the provider initialization so switching between Bedrock and other providers (like OpenAI or Anthropic) is seamless (a sketch of this pattern follows this list).
- Be aware that Bedrock exposes multiple models from different vendors, each with potentially different API behaviors. Your config should allow specifying the exact model string, and your integration should handle any model-specific options or response formats[5].
- For unified service integration, ensure your service layer can route requests to Bedrock using the configured provider instance, and normalize responses if you support multiple AI backends.
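As a sketch of that abstraction (the `getProvider` helper and the provider set shown are illustrative, not existing exports; `configManager` is the same assumed config layer as in the example above):
```js
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { fromNodeProviderChain } from '@aws-sdk/credential-providers';

// Map a provider name from config to an AI SDK provider instance.
function getProvider(name) {
  switch (name) {
    case 'bedrock':
      return createAmazonBedrock({
        region: configManager.get('bedrock.region'),
        credentialProvider: fromNodeProviderChain(),
      });
    case 'openai':
      return createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
    case 'anthropic':
      return createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
    default:
      throw new Error(`Unknown provider: ${name}`);
  }
}

// Usage: const model = getProvider(cfg.provider)(cfg.modelId);
```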
</info added on 2025-04-25T19:03:42.584Z>