feat(config): Add Fallback Model and Expanded Provider Support

Introduces a configurable fallback model and adds support for additional AI provider API keys in the environment setup.

- **Add Fallback Model Configuration (.taskmasterconfig):**
  - Implemented a new fallback model section in `.taskmasterconfig` (see the sketch below).
  - Configured a default fallback model, enhancing resilience if the primary model fails.

- **Update Default Model Configuration (.taskmasterconfig):**
  - Changed the default `main` model.
  - Changed the default `research` model.

- **Add API Key Examples (assets/env.example):**
  - Added example environment variables for:
    - OpenAI/OpenRouter
    - Google Gemini
    - XAI Grok
  - Included format comments for clarity.
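For reference, a minimal sketch of how the new fallback entry could sit alongside the existing `main` and `research` entries in `.taskmasterconfig`. The `provider`/`modelId` shape mirrors the config structure described in the task details further down this diff; the `fallback` key name and the specific model IDs are illustrative assumptions, not values taken from this commit:

```javascript
// Hypothetical .taskmasterconfig contents, expressed as a JS object.
// Key names and model IDs are illustrative only.
const exampleConfig = {
  models: {
    main: { provider: 'openai', modelId: 'gpt-4' },
    research: { provider: 'openai', modelId: 'gpt-4-turbo' },
    // New in this commit: consulted when the primary model's call fails.
    fallback: { provider: 'openai', modelId: 'gpt-3.5-turbo' }
  }
};

console.log(JSON.stringify(exampleConfig, null, 2));
```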
Eyal Toledano
2025-04-16 00:35:30 -04:00
parent d84c2486e4
commit 1ab836f191
12 changed files with 1638 additions and 68 deletions


@@ -1,6 +1,6 @@
# Task ID: 61
# Title: Implement Flexible AI Model Management
-# Status: pending
+# Status: in-progress
# Dependencies: None
# Priority: high
# Description: Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.
@@ -142,7 +142,7 @@ export function getClient(model) {
- Test compatibility with serverless and edge deployments.
# Subtasks:
-## 1. Create Configuration Management Module [in-progress]
+## 1. Create Configuration Management Module [done]
### Dependencies: None
### Description: Develop a centralized configuration module to manage AI model settings and preferences, leveraging the Strategy pattern for model selection.
### Details:
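Subtask 1 above calls for a Strategy pattern for model selection. A minimal sketch of that idea under assumed names (`readConfig` and `getModelConfig` are hypothetical, not the module's confirmed API; the default model IDs come from the constants shown later in this diff):

```javascript
// Strategy pattern sketch: one selection strategy per model role, with
// config-file lookup first and hard-coded defaults as the fallback.
const fs = require('fs');

const DEFAULTS = {
  main: { provider: 'openai', modelId: 'gpt-3.5-turbo' },
  research: { provider: 'openai', modelId: 'gpt-4' }
};

function readConfig() {
  try {
    return JSON.parse(fs.readFileSync('.taskmasterconfig', 'utf8'));
  } catch {
    return null; // missing or invalid file: fall through to defaults
  }
}

// Each role maps to a strategy; supporting a new role means adding one entry.
const strategies = {
  main: (cfg) => cfg?.models?.main ?? DEFAULTS.main,
  research: (cfg) => cfg?.models?.research ?? DEFAULTS.research
};

function getModelConfig(role) {
  const strategy = strategies[role];
  if (!strategy) throw new Error(`Unknown model role: ${role}`);
  return strategy(readConfig());
}
```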
@@ -428,7 +428,7 @@ describe('AI Client Factory', () => {
7. Add support for optional configuration parameters for each model
8. Testing approach: Create tests that verify environment variable validation logic
-## 6. Implement Model Listing Command [pending]
+## 6. Implement Model Listing Command [done]
### Dependencies: 61.1, 61.2, 61.4
### Description: Implement the 'task-master models' command to display currently configured models and available options.
### Details:
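A hedged sketch of what the listing handler could look like; the output format follows the `task-master models` example in the task details elsewhere in this diff, while the function name and config shape are assumptions:

```javascript
// Prints the configured model for each role, roughly matching the
// "Current AI Model Configuration" output shown in the task details.
function listModels(config) {
  console.log('Current AI Model Configuration:');
  for (const [role, { provider, modelId }] of Object.entries(config.models)) {
    console.log(`- ${role}: ${provider} / ${modelId}`);
  }
}

// Example invocation with an illustrative config object.
listModels({
  models: {
    main: { provider: 'anthropic', modelId: 'claude-2' },
    research: { provider: 'openai', modelId: 'gpt-4' }
  }
});
```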
@@ -441,7 +441,7 @@ describe('AI Client Factory', () => {
7. Add support for verbose output with additional details
8. Testing approach: Create integration tests that verify correct output formatting and content
-## 7. Implement Model Setting Commands [pending]
+## 7. Implement Model Setting Commands [done]
### Dependencies: 61.1, 61.2, 61.4, 61.6
### Description: Implement the commands to set main and research models with proper validation and feedback.
### Details:
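A sketch of the validation such setter commands need; the `MODEL_MAP` entries are taken from the config structure shown later in this diff, but `setModel` itself is an assumed helper, not the commit's API:

```javascript
// Validates a provider/model pair before writing it to the config,
// producing the kind of error message described in the task details.
const MODEL_MAP = {
  openai: ['gpt-3.5-turbo', 'gpt-4', 'gpt-4-turbo'],
  anthropic: ['claude-2', 'claude-instant']
};

function setModel(config, role, provider, modelId) {
  if (!MODEL_MAP[provider]) {
    const available = Object.keys(MODEL_MAP).join(', ');
    throw new Error(`"${provider}" is not a valid provider. Available: ${available}`);
  }
  if (!MODEL_MAP[provider].includes(modelId)) {
    throw new Error(`"${modelId}" is not a valid model for ${provider}.`);
  }
  config.models[role] = { provider, modelId };
  return config;
}

// Example: setModel({ models: {} }, 'main', 'openai', 'gpt-4')
// → { models: { main: { provider: 'openai', modelId: 'gpt-4' } } }
```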


@@ -2743,7 +2743,7 @@
"description": "Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.",
"details": "### Proposed Solution\nImplement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:\n\n- `task-master models`: Lists currently configured models for main operations and research.\n- `task-master models --set-main=\"<model_name>\" --set-research=\"<model_name>\"`: Sets the desired models for main operations and research tasks respectively.\n\nSupported AI Models:\n- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter\n- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok\n\nIf a user specifies an invalid model, the CLI lists available models clearly.\n\n### Example CLI Usage\n\nList current models:\n```shell\ntask-master models\n```\nOutput example:\n```\nCurrent AI Model Configuration:\n- Main Operations: Claude\n- Research Operations: Perplexity\n```\n\nSet new models:\n```shell\ntask-master models --set-main=\"gemini\" --set-research=\"grok\"\n```\n\nAttempt invalid model:\n```shell\ntask-master models --set-main=\"invalidModel\"\n```\nOutput example:\n```\nError: \"invalidModel\" is not a valid model.\n\nAvailable models for Main Operations:\n- claude\n- openai\n- ollama\n- gemini\n- openrouter\n```\n\n### High-Level Workflow\n1. Update CLI parsing logic to handle new `models` command and associated flags.\n2. Consolidate all AI calls into `ai-services.js` for centralized management.\n3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:\n - Claude (existing)\n - Perplexity (existing)\n - OpenAI\n - Ollama\n - Gemini\n - OpenRouter\n - Grok\n4. Update environment variables and provide clear documentation in `.env_example`:\n```env\n# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter\nMAIN_MODEL=claude\n\n# RESEARCH_MODEL options: perplexity, openai, ollama, grok\nRESEARCH_MODEL=perplexity\n```\n5. Ensure dynamic model switching via environment variables or configuration management.\n6. 
Provide clear CLI feedback and validation of model names.\n\n### Vercel AI SDK Integration\n- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.\n- Implement a configuration layer to map model names to their respective Vercel SDK integrations.\n- Example pattern for integration:\n```javascript\nimport { createClient } from '@vercel/ai';\n\nconst clients = {\n claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),\n openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),\n ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),\n gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),\n openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),\n perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),\n grok: createClient({ provider: 'grok', apiKey: process.env.GROK_API_KEY })\n};\n\nexport function getClient(model) {\n if (!clients[model]) {\n throw new Error(`Invalid model: ${model}`);\n }\n return clients[model];\n}\n```\n- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.\n- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.\n\n### Key Elements\n- Enhanced model visibility and intuitive management commands.\n- Centralized and robust handling of AI API integrations via Vercel AI SDK.\n- Clear CLI responses with detailed validation feedback.\n- Flexible, easy-to-understand environment configuration.\n\n### Implementation Considerations\n- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).\n- Ensure comprehensive error handling for invalid model selections.\n- Clearly document environment variable options and their purposes.\n- Validate model names rigorously to prevent runtime errors.\n\n### Out of Scope (Future Considerations)\n- Automatic benchmarking or model performance comparison.\n- Dynamic runtime switching of models based on task type or complexity.",
"testStrategy": "### Test Strategy\n1. **Unit Tests**:\n - Test CLI commands for listing, setting, and validating models.\n - Mock Vercel AI SDK calls to ensure proper integration and error handling.\n\n2. **Integration Tests**:\n - Validate end-to-end functionality of model management commands.\n - Test dynamic switching of models via environment variables.\n\n3. **Error Handling Tests**:\n - Simulate invalid model names and verify error messages.\n - Test API failures for each model provider and ensure graceful degradation.\n\n4. **Documentation Validation**:\n - Verify that `.env_example` and CLI usage examples are accurate and comprehensive.\n\n5. **Performance Tests**:\n - Measure response times for API calls through Vercel AI SDK.\n - Ensure no significant latency is introduced by model switching.\n\n6. **SDK-Specific Tests**:\n - Validate the behavior of `generateText` and `streamText` functions for supported models.\n - Test compatibility with serverless and edge deployments.",
"status": "pending",
"status": "in-progress",
"dependencies": [],
"priority": "high",
"subtasks": [
@@ -2753,7 +2753,7 @@
"description": "Develop a centralized configuration module to manage AI model settings and preferences, leveraging the Strategy pattern for model selection.",
"dependencies": [],
"details": "1. Create a new `config-manager.js` module to handle model configuration\n2. Implement functions to read/write model preferences to a local config file\n3. Define model validation logic with clear error messages\n4. Create mapping of valid models for main and research operations\n5. Implement getters and setters for model configuration\n6. Add utility functions to validate model names against available options\n7. Include default fallback models\n8. Testing approach: Write unit tests to verify config reading/writing and model validation logic\n\n<info added on 2025-04-14T21:54:28.887Z>\nHere's the additional information to add:\n\n```\nThe configuration management module should:\n\n1. Use a `.taskmasterconfig` JSON file in the project root directory to store model settings\n2. Structure the config file with two main keys: `main` and `research` for respective model selections\n3. Implement functions to locate the project root directory (using package.json as reference)\n4. Define constants for valid models:\n ```javascript\n const VALID_MAIN_MODELS = ['gpt-4', 'gpt-3.5-turbo', 'gpt-4-turbo'];\n const VALID_RESEARCH_MODELS = ['gpt-4', 'gpt-4-turbo', 'claude-2'];\n const DEFAULT_MAIN_MODEL = 'gpt-3.5-turbo';\n const DEFAULT_RESEARCH_MODEL = 'gpt-4';\n ```\n5. Implement model getters with priority order:\n - First check `.taskmasterconfig` file\n - Fall back to environment variables if config file missing/invalid\n - Use defaults as last resort\n6. Implement model setters that validate input against valid model lists before updating config\n7. Keep API key management in `ai-services.js` using environment variables (don't store keys in config file)\n8. Add helper functions for config file operations:\n ```javascript\n function getConfigPath() { /* locate .taskmasterconfig */ }\n function readConfig() { /* read and parse config file */ }\n function writeConfig(config) { /* stringify and write config */ }\n ```\n9. Include error handling for file operations and invalid configurations\n```\n</info added on 2025-04-14T21:54:28.887Z>\n\n<info added on 2025-04-14T22:52:29.551Z>\n```\nThe configuration management module should be updated to:\n\n1. Separate model configuration into provider and modelId components:\n ```javascript\n // Example config structure\n {\n \"models\": {\n \"main\": {\n \"provider\": \"openai\",\n \"modelId\": \"gpt-3.5-turbo\"\n },\n \"research\": {\n \"provider\": \"openai\",\n \"modelId\": \"gpt-4\"\n }\n }\n }\n ```\n\n2. Define provider constants:\n ```javascript\n const VALID_MAIN_PROVIDERS = ['openai', 'anthropic', 'local'];\n const VALID_RESEARCH_PROVIDERS = ['openai', 'anthropic', 'cohere'];\n const DEFAULT_MAIN_PROVIDER = 'openai';\n const DEFAULT_RESEARCH_PROVIDER = 'openai';\n ```\n\n3. Implement optional MODEL_MAP for validation:\n ```javascript\n const MODEL_MAP = {\n 'openai': ['gpt-3.5-turbo', 'gpt-4', 'gpt-4-turbo'],\n 'anthropic': ['claude-2', 'claude-instant'],\n 'cohere': ['command', 'command-light'],\n 'local': ['llama2', 'mistral']\n };\n ```\n\n4. Update getter functions to handle provider/modelId separation:\n ```javascript\n function getMainProvider() { /* return provider with fallbacks */ }\n function getMainModelId() { /* return modelId with fallbacks */ }\n function getResearchProvider() { /* return provider with fallbacks */ }\n function getResearchModelId() { /* return modelId with fallbacks */ }\n ```\n\n5. 
Update setter functions to validate both provider and modelId:\n ```javascript\n function setMainModel(provider, modelId) {\n // Validate provider is in VALID_MAIN_PROVIDERS\n // Optionally validate modelId is valid for provider using MODEL_MAP\n // Update config file with new values\n }\n ```\n\n6. Add utility functions for provider-specific validation:\n ```javascript\n function isValidProviderModelCombination(provider, modelId) {\n return MODEL_MAP[provider]?.includes(modelId) || false;\n }\n ```\n\n7. Extend unit tests to cover provider/modelId separation, including:\n - Testing provider validation\n - Testing provider-modelId combination validation\n - Verifying getters return correct provider and modelId values\n - Confirming setters properly validate and store both components\n```\n</info added on 2025-04-14T22:52:29.551Z>",
"status": "in-progress",
"status": "done",
"parentTaskId": 61
},
{
@@ -2811,7 +2811,7 @@
4
],
"details": "1. Create handler for the models command without flags\n2. Implement formatted output showing current model configuration\n3. Add color-coding for better readability using a library like chalk\n4. Include version information for each configured model\n5. Show API status indicators (connected/disconnected)\n6. Display usage examples for changing models\n7. Add support for verbose output with additional details\n8. Testing approach: Create integration tests that verify correct output formatting and content",
"status": "pending",
"status": "done",
"parentTaskId": 61
},
{
@@ -2825,7 +2825,7 @@
6
],
"details": "1. Create handlers for '--set-main' and '--set-research' flags\n2. Implement validation logic for model names\n3. Add clear error messages for invalid model selections\n4. Implement confirmation messages for successful model changes\n5. Add support for setting both models in a single command\n6. Implement dry-run option to validate without making changes\n7. Add verbose output option for debugging\n8. Testing approach: Create integration tests that verify model setting functionality with various inputs",
"status": "pending",
"status": "done",
"parentTaskId": 61
},
{