feat(ai): Enhance Perplexity research calls & fix docs examples

Improves the quality and relevance of research-backed AI operations:
- Tweaks Perplexity AI calls to use the maximum input tokens (8,700), a temperature of 0.1, a high search context size, and day-fresh search recency.
- Adds a system prompt to guide Perplexity research output.

Docs:
- Updates CLI examples in taskmaster.mdc to use ANSI-C quoting ($'...') for multi-line prompts so that embedded \n escapes expand to real newlines in bash/zsh.
Author: Eyal Toledano
Date: 2025-04-14 17:09:06 -04:00
Parent: 4c57537157
Commit: 9a482789f7
8 changed files with 211 additions and 37 deletions


@@ -0,0 +1,9 @@
---
'task-master-ai': patch
---
- Tweaks Perplexity AI calls for research mode to max out input tokens and get day-fresh information
- Forces temperature to 0.1 for highly deterministic output with minimal variation
- Adds a system prompt to further improve the output
- Correctly uses the maximum input tokens for Perplexity (8,700 of the 8,719 limit)
- Specifies a high search context size for a broader degree of research across the web
- Specifies search recency as fresh as today; this supports capturing brand-new announcements (such as new GPT models) and querying for them in research. 🔥
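For reference, the updated research call shape looks roughly like the following. This is a condensed sketch: the OpenAI-compatible client construction and the `sonar-pro` fallback model name are assumptions, while the option values mirror the diff later in this commit.

```javascript
import OpenAI from 'openai';

// Sketch only: Perplexity is assumed to be reached through an OpenAI-compatible client;
// the option values below mirror the research-call changes in this commit.
const perplexityClient = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: 'https://api.perplexity.ai'
});

async function runResearch(researchQuery) {
  const response = await perplexityClient.chat.completions.create({
    model: process.env.PERPLEXITY_MODEL || 'sonar-pro', // fallback model name is an assumption
    messages: [
      {
        role: 'system',
        content:
          'You are a helpful assistant that provides research on current best practices ' +
          'and implementation approaches for software development.'
      },
      { role: 'user', content: researchQuery }
    ],
    temperature: 0.1, // highly deterministic output
    max_tokens: 8700, // just under Perplexity's 8,719 maximum
    web_search_options: { search_context_size: 'high' },
    search_recency_filter: 'day' // results as fresh as today
  });
  return response.choices[0].message.content;
}
```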


@@ -131,7 +131,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks (e.g., "We are now using React Query instead of Redux Toolkit for data fetching").` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed updates based on external knowledge (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
-* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
+* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt=$'Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 9. Update Task (`update_task`)
@@ -144,7 +144,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed updates (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
-* **Usage:** Refine a specific task based on new understanding or feedback. Example CLI: `task-master update-task --id='15' --prompt='Clarification: Use PostgreSQL instead of MySQL.\nUpdate schema details...'`
+* **Usage:** Refine a specific task based on new understanding or feedback. Example CLI: `task-master update-task --id='15' --prompt=$'Clarification: Use PostgreSQL instead of MySQL.\nUpdate schema details...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 10. Update Subtask (`update_subtask`)
@@ -157,7 +157,7 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `prompt`: `Required. Provide the information or notes Taskmaster should append to the subtask's details. Ensure this adds *new* information not already present.` (CLI: `-p, --prompt <text>`)
* `research`: `Enable Taskmaster to use Perplexity AI for more informed updates (requires PERPLEXITY_API_KEY).` (CLI: `-r, --research`)
* `file`: `Path to your Taskmaster 'tasks.json' file (default relies on auto-detection).` (CLI: `-f, --file <file>`)
-* **Usage:** Add implementation notes, code snippets, or clarifications to a subtask during development. Before calling, review the subtask's current details to append only fresh insights, helping to build a detailed log of the implementation journey and avoid redundancy. Example CLI: `task-master update-subtask --id='15.2' --prompt='Discovered that the API requires header X.\nImplementation needs adjustment...'`
+* **Usage:** Add implementation notes, code snippets, or clarifications to a subtask during development. Before calling, review the subtask's current details to append only fresh insights, helping to build a detailed log of the implementation journey and avoid redundancy. Example CLI: `task-master update-subtask --id='15.2' --prompt=$'Discovered that the API requires header X.\nImplementation needs adjustment...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.
### 11. Set Task Status (`set_task_status`)
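As a quick way to see why the `$'...'` change above matters, here is a tiny hypothetical Node script that prints the prompt argument exactly as the shell delivered it:

```javascript
// print-arg.js (hypothetical helper): shows what the CLI actually receives.
console.log(JSON.stringify(process.argv[2]));

// $ node print-arg.js 'Line one.\nLine two.'
// "Line one.\\nLine two."   <- plain quotes pass a literal backslash-n
// $ node print-arg.js $'Line one.\nLine two.'
// "Line one.\nLine two."    <- ANSI-C quoting passes a real newline
```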

package-lock.json (generated): 4 lines changed

@@ -1,12 +1,12 @@
{
  "name": "task-master-ai",
-  "version": "0.10.1",
+  "version": "0.11.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "task-master-ai",
-      "version": "0.10.1",
+      "version": "0.11.0",
      "license": "MIT WITH Commons-Clause",
      "dependencies": {
        "@anthropic-ai/sdk": "^0.39.0",


@@ -709,12 +709,24 @@ Include concrete code examples and technical considerations where relevant.`;
const researchResponse = await perplexityClient.chat.completions.create({
  model: PERPLEXITY_MODEL,
  messages: [
+    {
+      role: 'system',
+      content: `You are a helpful assistant that provides research on current best practices and implementation approaches for software development.
+You are given a task and a description of the task.
+You need to provide a list of best practices, libraries, design patterns, and implementation approaches that are relevant to the task.
+You should provide concrete code examples and technical considerations where relevant.`
+    },
    {
      role: 'user',
      content: researchQuery
    }
  ],
-  temperature: 0.1 // Lower temperature for more factual responses
+  temperature: 0.1, // Lower temperature for more factual responses
+  max_tokens: 8700, // Respect maximum input tokens for Perplexity (8719 max)
+  web_search_options: {
+    search_context_size: 'high'
+  },
+  search_recency_filter: 'day' // Filter for results that are as recent as today to capture new releases
});
const researchResult = researchResponse.choices[0].message.content;
@@ -814,7 +826,7 @@ Note on dependencies: Subtasks can depend on other subtasks with lower IDs. Use
anthropic,
{
  model: session?.env?.ANTHROPIC_MODEL || CONFIG.model,
-  max_tokens: session?.env?.MAX_TOKENS || CONFIG.maxTokens,
+  max_tokens: 8700,
  temperature: session?.env?.TEMPERATURE || CONFIG.temperature,
  system: systemPrompt,
  messages: [{ role: 'user', content: userPrompt }]
@@ -1328,7 +1340,12 @@ Include concrete code examples and technical considerations where relevant.`;
      content: researchQuery
    }
  ],
-  temperature: 0.1 // Lower temperature for more factual responses
+  temperature: 0.1, // Lower temperature for more factual responses
+  max_tokens: 8700, // Respect maximum input tokens for Perplexity (8719 max)
+  web_search_options: {
+    search_context_size: 'high'
+  },
+  search_recency_filter: 'day' // Filter for results that are as recent as today to capture new releases
});
const researchResult = researchResponse.choices[0].message.content;


@@ -427,11 +427,7 @@ Return only the updated tasks as a valid JSON array.`
  session?.env?.TEMPERATURE ||
  CONFIG.temperature
),
-max_tokens: parseInt(
-  process.env.MAX_TOKENS ||
-    session?.env?.MAX_TOKENS ||
-    CONFIG.maxTokens
-)
+max_tokens: 8700
});
const responseText = result.choices[0].message.content;
@@ -972,11 +968,7 @@ Return only the updated task as a valid JSON object.`
  session?.env?.TEMPERATURE ||
  CONFIG.temperature
),
-max_tokens: parseInt(
-  process.env.MAX_TOKENS ||
-    session?.env?.MAX_TOKENS ||
-    CONFIG.maxTokens
-)
+max_tokens: 8700
});
const responseText = result.choices[0].message.content;
@@ -3738,7 +3730,11 @@ DO NOT include any text before or after the JSON array. No explanations, no mark
    }
  ],
  temperature: session?.env?.TEMPERATURE || CONFIG.temperature,
-  max_tokens: session?.env?.MAX_TOKENS || CONFIG.maxTokens
+  max_tokens: 8700,
+  web_search_options: {
+    search_context_size: 'high'
+  },
+  search_recency_filter: 'day'
});
// Extract the response text


@@ -1,6 +1,6 @@
# Task ID: 23
# Title: Complete MCP Server Implementation for Task Master using FastMCP
-# Status: in-progress
+# Status: done
# Dependencies: 22
# Priority: medium
# Description: Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices.
@@ -221,7 +221,7 @@ Testing approach:
- Test error handling with invalid inputs
- Benchmark endpoint performance
-## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [cancelled]
+## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [done]
### Dependencies: 23.1, 23.2, 23.3
### Description: Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling.
### Details:
@@ -329,7 +329,7 @@ function listTasks(tasksPath, statusFilter, withSubtasks = false, outputFormat =
7. Add cache statistics for monitoring performance
8. Create unit tests for context management and caching functionality
-## 10. Enhance Tool Registration and Resource Management [deferred]
+## 10. Enhance Tool Registration and Resource Management [done]
### Dependencies: 23.1, 23.8
### Description: Refactor tool registration to follow FastMCP best practices, using decorators and improving the overall structure. Implement proper resource management for task templates and other shared resources.
### Details:
@@ -412,7 +412,7 @@ Best practices for integrating resources with Task Master functionality:
By properly implementing these resources and resource templates, we can provide rich, contextual data to LLM clients, enhancing the Task Master's capabilities and user experience.
</info added on 2025-03-31T18:35:21.513Z>
-## 11. Implement Comprehensive Error Handling [deferred]
+## 11. Implement Comprehensive Error Handling [done]
### Dependencies: 23.1, 23.3
### Description: Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.
### Details:
@@ -424,7 +424,7 @@ By properly implementing these resources and resource templates, we can provide
### Details:
1. Design structured log format for consistent parsing\n2. Implement different log levels (debug, info, warn, error)\n3. Add request/response logging middleware\n4. Implement correlation IDs for request tracking\n5. Add performance metrics logging\n6. Configure log output destinations (console, file)\n7. Document logging patterns and usage
-## 13. Create Testing Framework and Test Suite [deferred]
+## 13. Create Testing Framework and Test Suite [done]
### Dependencies: 23.1, 23.3
### Description: Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.
### Details:
@@ -436,7 +436,7 @@ By properly implementing these resources and resource templates, we can provide
### Details:
1. Create functionality to detect if .cursor/mcp.json exists in the project\n2. Implement logic to create a new mcp.json file with proper structure if it doesn't exist\n3. Add functionality to read and parse existing mcp.json if it exists\n4. Create method to add a new taskmaster-ai server entry to the mcpServers object\n5. Implement intelligent JSON merging that avoids trailing commas and syntax errors\n6. Ensure proper formatting and indentation in the generated/updated JSON\n7. Add validation to verify the updated configuration is valid JSON\n8. Include this functionality in the init workflow\n9. Add error handling for file system operations and JSON parsing\n10. Document the mcp.json structure and integration process
-## 15. Implement SSE Support for Real-time Updates [deferred]
+## 15. Implement SSE Support for Real-time Updates [done]
### Dependencies: 23.1, 23.3, 23.11
### Description: Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients
### Details:
@@ -923,7 +923,7 @@ Following MCP implementation standards:
8. Update tests to reflect the new naming conventions
9. Create a linting rule to enforce naming conventions in future development
-## 34. Review functionality of all MCP direct functions [in-progress]
+## 34. Review functionality of all MCP direct functions [done]
### Dependencies: None
### Description: Verify that all implemented MCP direct functions work correctly with edge cases
### Details:
@@ -1130,13 +1130,13 @@ By implementing these advanced techniques, task-master can achieve robust path h
### Details:
-## 44. Implement init MCP command [deferred]
+## 44. Implement init MCP command [done]
### Dependencies: None
### Description: Create MCP tool implementation for the init command
### Details:
-## 45. Support setting env variables through mcp server [pending]
+## 45. Support setting env variables through mcp server [done]
### Dependencies: None
### Description: currently we need to access the env variables through the env file present in the project (that we either create or find and append to). we could abstract this by allowing users to define the env vars in the mcp.json directly as folks currently do. mcp.json should then be in gitignore if thats the case. but for this i think in fastmcp all we need is to access ENV in a specific way. we need to find that way and then implement it
### Details:

tasks/task_061.txt (new file): 142 lines added

@@ -0,0 +1,142 @@
# Task ID: 61
# Title: Implement Flexible AI Model Management
# Status: pending
# Dependencies: None
# Priority: high
# Description: Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.
# Details:
### Proposed Solution
Implement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:
- `task-master models`: Lists currently configured models for main operations and research.
- `task-master models --set-main="<model_name>" --set-research="<model_name>"`: Sets the desired models for main operations and research tasks respectively.
Supported AI Models:
- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter
- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok
If a user specifies an invalid model, the CLI lists available models clearly.
### Example CLI Usage
List current models:
```shell
task-master models
```
Output example:
```
Current AI Model Configuration:
- Main Operations: Claude
- Research Operations: Perplexity
```
Set new models:
```shell
task-master models --set-main="gemini" --set-research="grok"
```
Attempt invalid model:
```shell
task-master models --set-main="invalidModel"
```
Output example:
```
Error: "invalidModel" is not a valid model.
Available models for Main Operations:
- claude
- openai
- ollama
- gemini
- openrouter
```
### High-Level Workflow
1. Update CLI parsing logic to handle new `models` command and associated flags.
2. Consolidate all AI calls into `ai-services.js` for centralized management.
3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:
- Claude (existing)
- Perplexity (existing)
- OpenAI
- Ollama
- Gemini
- OpenRouter
- Grok
4. Update environment variables and provide clear documentation in `.env_example`:
```env
# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter
MAIN_MODEL=claude
# RESEARCH_MODEL options: perplexity, openai, ollama, grok
RESEARCH_MODEL=perplexity
```
5. Ensure dynamic model switching via environment variables or configuration management.
6. Provide clear CLI feedback and validation of model names.
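Expanding on steps 4 and 5 above, a minimal sketch of how the configured models could be resolved and validated from environment variables. The function and constant names here are illustrative, not part of the current codebase:

```javascript
// Illustrative sketch of env-driven model selection for the workflow above.
const MAIN_MODELS = ['claude', 'openai', 'ollama', 'gemini', 'openrouter'];
const RESEARCH_MODELS = ['perplexity', 'openai', 'ollama', 'grok'];

export function resolveModels(env = process.env) {
  const main = (env.MAIN_MODEL || 'claude').toLowerCase();
  const research = (env.RESEARCH_MODEL || 'perplexity').toLowerCase();

  if (!MAIN_MODELS.includes(main)) {
    throw new Error(`"${main}" is not a valid model. Available for Main Operations: ${MAIN_MODELS.join(', ')}`);
  }
  if (!RESEARCH_MODELS.includes(research)) {
    throw new Error(`"${research}" is not a valid model. Available for Research Operations: ${RESEARCH_MODELS.join(', ')}`);
  }
  return { main, research };
}
```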
### Vercel AI SDK Integration
- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.
- Implement a configuration layer to map model names to their respective Vercel SDK integrations.
- Example pattern for integration:
```javascript
import { createClient } from '@vercel/ai';
const clients = {
claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),
openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),
ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),
gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),
openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),
perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),
grok: createClient({ provider: 'grok', apiKey: process.env.GROK_API_KEY })
};
export function getClient(model) {
if (!clients[model]) {
throw new Error(`Invalid model: ${model}`);
}
return clients[model];
}
```
- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.
- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.
### Key Elements
- Enhanced model visibility and intuitive management commands.
- Centralized and robust handling of AI API integrations via Vercel AI SDK.
- Clear CLI responses with detailed validation feedback.
- Flexible, easy-to-understand environment configuration.
### Implementation Considerations
- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).
- Ensure comprehensive error handling for invalid model selections.
- Clearly document environment variable options and their purposes.
- Validate model names rigorously to prevent runtime errors.
### Out of Scope (Future Considerations)
- Automatic benchmarking or model performance comparison.
- Dynamic runtime switching of models based on task type or complexity.
# Test Strategy:
### Test Strategy
1. **Unit Tests**:
- Test CLI commands for listing, setting, and validating models.
- Mock Vercel AI SDK calls to ensure proper integration and error handling.
2. **Integration Tests**:
- Validate end-to-end functionality of model management commands.
- Test dynamic switching of models via environment variables.
3. **Error Handling Tests**:
- Simulate invalid model names and verify error messages.
- Test API failures for each model provider and ensure graceful degradation.
4. **Documentation Validation**:
- Verify that `.env_example` and CLI usage examples are accurate and comprehensive.
5. **Performance Tests**:
- Measure response times for API calls through Vercel AI SDK.
- Ensure no significant latency is introduced by model switching.
6. **SDK-Specific Tests**:
- Validate the behavior of `generateText` and `streamText` functions for supported models.
- Test compatibility with serverless and edge deployments.
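As a concrete starting point for the unit-test item above, a minimal Jest sketch; the module path and the `getClient` export are assumptions taken from the plan in this task:

```javascript
// Minimal Jest sketch for model validation (module path and export are assumptions).
import { getClient } from '../scripts/modules/ai-services.js';

describe('AI model management', () => {
  it('rejects an unknown model with a clear error', () => {
    expect(() => getClient('invalidModel')).toThrow('Invalid model: invalidModel');
  });
});
```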


@@ -1339,7 +1339,7 @@
"id": 23, "id": 23,
"title": "Complete MCP Server Implementation for Task Master using FastMCP", "title": "Complete MCP Server Implementation for Task Master using FastMCP",
"description": "Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices.", "description": "Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices.",
"status": "in-progress", "status": "done",
"dependencies": [ "dependencies": [
22 22
], ],
@@ -1389,7 +1389,7 @@
  3
],
"details": "Implementation steps:\n1. Replace manual tool registration with ModelContextProtocol SDK methods.\n2. Use SDK utilities to simplify resource and template management.\n3. Ensure compatibility with FastMCP's transport mechanisms.\n4. Update server initialization to include SDK-based configurations.\n\nTesting approach:\n- Verify SDK integration with all MCP endpoints.\n- Test resource and template registration using SDK methods.\n- Validate compatibility with existing MCP clients.\n- Benchmark performance improvements from SDK integration.\n\n<info added on 2025-03-31T18:49:14.439Z>\nThe subtask is being cancelled because FastMCP already serves as a higher-level abstraction over the Model Context Protocol SDK. Direct integration with the MCP SDK would be redundant and potentially counterproductive since:\n\n1. FastMCP already encapsulates the necessary SDK functionality for tool registration and resource handling\n2. The existing FastMCP abstractions provide a more streamlined developer experience\n3. Adding another layer of SDK integration would increase complexity without clear benefits\n4. The transport mechanisms in FastMCP are already optimized for the current architecture\n\nInstead, we should focus on extending and enhancing the existing FastMCP abstractions where needed, rather than attempting to bypass them with direct SDK integration.\n</info added on 2025-03-31T18:49:14.439Z>",
-"status": "cancelled",
+"status": "done",
"parentTaskId": 23
},
{
@@ -1423,7 +1423,7 @@
"23.8" "23.8"
], ],
"details": "1. Update registerTaskMasterTools function to use FastMCP's decorator pattern\n2. Implement @mcp.tool() decorators for all existing tools\n3. Add proper type annotations and documentation for all tools\n4. Create resource handlers for task templates using @mcp.resource()\n5. Implement resource templates for common task patterns\n6. Update the server initialization to properly register all tools and resources\n7. Add validation for tool inputs using FastMCP's built-in validation\n8. Create comprehensive tests for tool registration and resource access\n\n<info added on 2025-03-31T18:35:21.513Z>\nHere is additional information to enhance the subtask regarding resources and resource templates in FastMCP:\n\nResources in FastMCP are used to expose static or dynamic data to LLM clients. For the Task Master MCP server, we should implement resources to provide:\n\n1. Task templates: Predefined task structures that can be used as starting points\n2. Workflow definitions: Reusable workflow patterns for common task sequences\n3. User preferences: Stored user settings for task management\n4. Project metadata: Information about active projects and their attributes\n\nResource implementation should follow this structure:\n\n```python\n@mcp.resource(\"tasks://templates/{template_id}\")\ndef get_task_template(template_id: str) -> dict:\n # Fetch and return the specified task template\n ...\n\n@mcp.resource(\"workflows://definitions/{workflow_id}\")\ndef get_workflow_definition(workflow_id: str) -> dict:\n # Fetch and return the specified workflow definition\n ...\n\n@mcp.resource(\"users://{user_id}/preferences\")\ndef get_user_preferences(user_id: str) -> dict:\n # Fetch and return user preferences\n ...\n\n@mcp.resource(\"projects://metadata\")\ndef get_project_metadata() -> List[dict]:\n # Fetch and return metadata for all active projects\n ...\n```\n\nResource templates in FastMCP allow for dynamic generation of resources based on patterns. For Task Master, we can implement:\n\n1. Dynamic task creation templates\n2. Customizable workflow templates\n3. User-specific resource views\n\nExample implementation:\n\n```python\n@mcp.resource(\"tasks://create/{task_type}\")\ndef get_task_creation_template(task_type: str) -> dict:\n # Generate and return a task creation template based on task_type\n ...\n\n@mcp.resource(\"workflows://custom/{user_id}/{workflow_name}\")\ndef get_custom_workflow_template(user_id: str, workflow_name: str) -> dict:\n # Generate and return a custom workflow template for the user\n ...\n\n@mcp.resource(\"users://{user_id}/dashboard\")\ndef get_user_dashboard(user_id: str) -> dict:\n # Generate and return a personalized dashboard view for the user\n ...\n```\n\nBest practices for integrating resources with Task Master functionality:\n\n1. Use resources to provide context and data for tools\n2. Implement caching for frequently accessed resources\n3. Ensure proper error handling and not-found cases for all resources\n4. Use resource templates to generate dynamic, personalized views of data\n5. Implement access control to ensure users only access authorized resources\n\nBy properly implementing these resources and resource templates, we can provide rich, contextual data to LLM clients, enhancing the Task Master's capabilities and user experience.\n</info added on 2025-03-31T18:35:21.513Z>", "details": "1. Update registerTaskMasterTools function to use FastMCP's decorator pattern\n2. Implement @mcp.tool() decorators for all existing tools\n3. 
Add proper type annotations and documentation for all tools\n4. Create resource handlers for task templates using @mcp.resource()\n5. Implement resource templates for common task patterns\n6. Update the server initialization to properly register all tools and resources\n7. Add validation for tool inputs using FastMCP's built-in validation\n8. Create comprehensive tests for tool registration and resource access\n\n<info added on 2025-03-31T18:35:21.513Z>\nHere is additional information to enhance the subtask regarding resources and resource templates in FastMCP:\n\nResources in FastMCP are used to expose static or dynamic data to LLM clients. For the Task Master MCP server, we should implement resources to provide:\n\n1. Task templates: Predefined task structures that can be used as starting points\n2. Workflow definitions: Reusable workflow patterns for common task sequences\n3. User preferences: Stored user settings for task management\n4. Project metadata: Information about active projects and their attributes\n\nResource implementation should follow this structure:\n\n```python\n@mcp.resource(\"tasks://templates/{template_id}\")\ndef get_task_template(template_id: str) -> dict:\n # Fetch and return the specified task template\n ...\n\n@mcp.resource(\"workflows://definitions/{workflow_id}\")\ndef get_workflow_definition(workflow_id: str) -> dict:\n # Fetch and return the specified workflow definition\n ...\n\n@mcp.resource(\"users://{user_id}/preferences\")\ndef get_user_preferences(user_id: str) -> dict:\n # Fetch and return user preferences\n ...\n\n@mcp.resource(\"projects://metadata\")\ndef get_project_metadata() -> List[dict]:\n # Fetch and return metadata for all active projects\n ...\n```\n\nResource templates in FastMCP allow for dynamic generation of resources based on patterns. For Task Master, we can implement:\n\n1. Dynamic task creation templates\n2. Customizable workflow templates\n3. User-specific resource views\n\nExample implementation:\n\n```python\n@mcp.resource(\"tasks://create/{task_type}\")\ndef get_task_creation_template(task_type: str) -> dict:\n # Generate and return a task creation template based on task_type\n ...\n\n@mcp.resource(\"workflows://custom/{user_id}/{workflow_name}\")\ndef get_custom_workflow_template(user_id: str, workflow_name: str) -> dict:\n # Generate and return a custom workflow template for the user\n ...\n\n@mcp.resource(\"users://{user_id}/dashboard\")\ndef get_user_dashboard(user_id: str) -> dict:\n # Generate and return a personalized dashboard view for the user\n ...\n```\n\nBest practices for integrating resources with Task Master functionality:\n\n1. Use resources to provide context and data for tools\n2. Implement caching for frequently accessed resources\n3. Ensure proper error handling and not-found cases for all resources\n4. Use resource templates to generate dynamic, personalized views of data\n5. Implement access control to ensure users only access authorized resources\n\nBy properly implementing these resources and resource templates, we can provide rich, contextual data to LLM clients, enhancing the Task Master's capabilities and user experience.\n</info added on 2025-03-31T18:35:21.513Z>",
"status": "deferred", "status": "done",
"parentTaskId": 23 "parentTaskId": 23
}, },
{ {
@@ -1431,7 +1431,7 @@
"title": "Implement Comprehensive Error Handling", "title": "Implement Comprehensive Error Handling",
"description": "Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.", "description": "Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.",
"details": "1. Create custom error types extending MCPError for different categories (validation, auth, etc.)\\n2. Implement standardized error responses following MCP protocol\\n3. Add error handling middleware for all MCP endpoints\\n4. Ensure proper error propagation from tools to client\\n5. Add debug mode with detailed error information\\n6. Document error types and handling patterns", "details": "1. Create custom error types extending MCPError for different categories (validation, auth, etc.)\\n2. Implement standardized error responses following MCP protocol\\n3. Add error handling middleware for all MCP endpoints\\n4. Ensure proper error propagation from tools to client\\n5. Add debug mode with detailed error information\\n6. Document error types and handling patterns",
"status": "deferred", "status": "done",
"dependencies": [ "dependencies": [
"23.1", "23.1",
"23.3" "23.3"
@@ -1455,7 +1455,7 @@
"title": "Create Testing Framework and Test Suite", "title": "Create Testing Framework and Test Suite",
"description": "Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.", "description": "Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.",
"details": "1. Set up Jest testing framework with proper configuration\\n2. Create MCPTestClient for testing FastMCP server interaction\\n3. Implement unit tests for individual tool functions\\n4. Create integration tests for end-to-end request/response cycles\\n5. Set up test fixtures and mock data\\n6. Implement test coverage reporting\\n7. Document testing guidelines and examples", "details": "1. Set up Jest testing framework with proper configuration\\n2. Create MCPTestClient for testing FastMCP server interaction\\n3. Implement unit tests for individual tool functions\\n4. Create integration tests for end-to-end request/response cycles\\n5. Set up test fixtures and mock data\\n6. Implement test coverage reporting\\n7. Document testing guidelines and examples",
"status": "deferred", "status": "done",
"dependencies": [ "dependencies": [
"23.1", "23.1",
"23.3" "23.3"
@@ -1479,7 +1479,7 @@
"title": "Implement SSE Support for Real-time Updates", "title": "Implement SSE Support for Real-time Updates",
"description": "Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients", "description": "Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients",
"details": "1. Research and implement SSE protocol for the MCP server\\n2. Create dedicated SSE endpoints for event streaming\\n3. Implement event emitter pattern for internal event management\\n4. Add support for different event types (task status, logs, errors)\\n5. Implement client connection management with proper keep-alive handling\\n6. Add filtering capabilities to allow subscribing to specific event types\\n7. Create in-memory event buffer for clients reconnecting\\n8. Document SSE endpoint usage and client implementation examples\\n9. Add robust error handling for dropped connections\\n10. Implement rate limiting and backpressure mechanisms\\n11. Add authentication for SSE connections", "details": "1. Research and implement SSE protocol for the MCP server\\n2. Create dedicated SSE endpoints for event streaming\\n3. Implement event emitter pattern for internal event management\\n4. Add support for different event types (task status, logs, errors)\\n5. Implement client connection management with proper keep-alive handling\\n6. Add filtering capabilities to allow subscribing to specific event types\\n7. Create in-memory event buffer for clients reconnecting\\n8. Document SSE endpoint usage and client implementation examples\\n9. Add robust error handling for dropped connections\\n10. Implement rate limiting and backpressure mechanisms\\n11. Add authentication for SSE connections",
"status": "deferred", "status": "done",
"dependencies": [ "dependencies": [
"23.1", "23.1",
"23.3", "23.3",
@@ -1656,7 +1656,7 @@
"title": "Review functionality of all MCP direct functions", "title": "Review functionality of all MCP direct functions",
"description": "Verify that all implemented MCP direct functions work correctly with edge cases", "description": "Verify that all implemented MCP direct functions work correctly with edge cases",
"details": "Perform comprehensive testing of all MCP direct function implementations to ensure they handle various input scenarios correctly and return appropriate responses. Check edge cases, error handling, and parameter validation.", "details": "Perform comprehensive testing of all MCP direct function implementations to ensure they handle various input scenarios correctly and return appropriate responses. Check edge cases, error handling, and parameter validation.",
"status": "in-progress", "status": "done",
"dependencies": [], "dependencies": [],
"parentTaskId": 23 "parentTaskId": 23
}, },
@@ -1759,7 +1759,7 @@
"title": "Implement init MCP command", "title": "Implement init MCP command",
"description": "Create MCP tool implementation for the init command", "description": "Create MCP tool implementation for the init command",
"details": "", "details": "",
"status": "deferred", "status": "done",
"dependencies": [], "dependencies": [],
"parentTaskId": 23 "parentTaskId": 23
}, },
@@ -1768,7 +1768,7 @@
"title": "Support setting env variables through mcp server", "title": "Support setting env variables through mcp server",
"description": "currently we need to access the env variables through the env file present in the project (that we either create or find and append to). we could abstract this by allowing users to define the env vars in the mcp.json directly as folks currently do. mcp.json should then be in gitignore if thats the case. but for this i think in fastmcp all we need is to access ENV in a specific way. we need to find that way and then implement it", "description": "currently we need to access the env variables through the env file present in the project (that we either create or find and append to). we could abstract this by allowing users to define the env vars in the mcp.json directly as folks currently do. mcp.json should then be in gitignore if thats the case. but for this i think in fastmcp all we need is to access ENV in a specific way. we need to find that way and then implement it",
"details": "\n\n<info added on 2025-04-01T01:57:24.160Z>\nTo access environment variables defined in the mcp.json config file when using FastMCP, you can utilize the `Config` class from the `fastmcp` module. Here's how to implement this:\n\n1. Import the necessary module:\n```python\nfrom fastmcp import Config\n```\n\n2. Access environment variables:\n```python\nconfig = Config()\nenv_var = config.env.get(\"VARIABLE_NAME\")\n```\n\nThis approach allows you to retrieve environment variables defined in the mcp.json file directly in your code. The `Config` class automatically loads the configuration, including environment variables, from the mcp.json file.\n\nFor security, ensure that sensitive information in mcp.json is not committed to version control. You can add mcp.json to your .gitignore file to prevent accidental commits.\n\nIf you need to access multiple environment variables, you can do so like this:\n```python\ndb_url = config.env.get(\"DATABASE_URL\")\napi_key = config.env.get(\"API_KEY\")\ndebug_mode = config.env.get(\"DEBUG_MODE\", False) # With a default value\n```\n\nThis method provides a clean and consistent way to access environment variables defined in the mcp.json configuration file within your FastMCP project.\n</info added on 2025-04-01T01:57:24.160Z>\n\n<info added on 2025-04-01T01:57:49.848Z>\nTo access environment variables defined in the mcp.json config file when using FastMCP in a JavaScript environment, you can use the `fastmcp` npm package. Here's how to implement this:\n\n1. Install the `fastmcp` package:\n```bash\nnpm install fastmcp\n```\n\n2. Import the necessary module:\n```javascript\nconst { Config } = require('fastmcp');\n```\n\n3. Access environment variables:\n```javascript\nconst config = new Config();\nconst envVar = config.env.get('VARIABLE_NAME');\n```\n\nThis approach allows you to retrieve environment variables defined in the mcp.json file directly in your JavaScript code. The `Config` class automatically loads the configuration, including environment variables, from the mcp.json file.\n\nYou can access multiple environment variables like this:\n```javascript\nconst dbUrl = config.env.get('DATABASE_URL');\nconst apiKey = config.env.get('API_KEY');\nconst debugMode = config.env.get('DEBUG_MODE', false); // With a default value\n```\n\nThis method provides a consistent way to access environment variables defined in the mcp.json configuration file within your FastMCP project in a JavaScript environment.\n</info added on 2025-04-01T01:57:49.848Z>", "details": "\n\n<info added on 2025-04-01T01:57:24.160Z>\nTo access environment variables defined in the mcp.json config file when using FastMCP, you can utilize the `Config` class from the `fastmcp` module. Here's how to implement this:\n\n1. Import the necessary module:\n```python\nfrom fastmcp import Config\n```\n\n2. Access environment variables:\n```python\nconfig = Config()\nenv_var = config.env.get(\"VARIABLE_NAME\")\n```\n\nThis approach allows you to retrieve environment variables defined in the mcp.json file directly in your code. The `Config` class automatically loads the configuration, including environment variables, from the mcp.json file.\n\nFor security, ensure that sensitive information in mcp.json is not committed to version control. 
You can add mcp.json to your .gitignore file to prevent accidental commits.\n\nIf you need to access multiple environment variables, you can do so like this:\n```python\ndb_url = config.env.get(\"DATABASE_URL\")\napi_key = config.env.get(\"API_KEY\")\ndebug_mode = config.env.get(\"DEBUG_MODE\", False) # With a default value\n```\n\nThis method provides a clean and consistent way to access environment variables defined in the mcp.json configuration file within your FastMCP project.\n</info added on 2025-04-01T01:57:24.160Z>\n\n<info added on 2025-04-01T01:57:49.848Z>\nTo access environment variables defined in the mcp.json config file when using FastMCP in a JavaScript environment, you can use the `fastmcp` npm package. Here's how to implement this:\n\n1. Install the `fastmcp` package:\n```bash\nnpm install fastmcp\n```\n\n2. Import the necessary module:\n```javascript\nconst { Config } = require('fastmcp');\n```\n\n3. Access environment variables:\n```javascript\nconst config = new Config();\nconst envVar = config.env.get('VARIABLE_NAME');\n```\n\nThis approach allows you to retrieve environment variables defined in the mcp.json file directly in your JavaScript code. The `Config` class automatically loads the configuration, including environment variables, from the mcp.json file.\n\nYou can access multiple environment variables like this:\n```javascript\nconst dbUrl = config.env.get('DATABASE_URL');\nconst apiKey = config.env.get('API_KEY');\nconst debugMode = config.env.get('DEBUG_MODE', false); // With a default value\n```\n\nThis method provides a consistent way to access environment variables defined in the mcp.json configuration file within your FastMCP project in a JavaScript environment.\n</info added on 2025-04-01T01:57:49.848Z>",
"status": "pending", "status": "done",
"dependencies": [], "dependencies": [],
"parentTaskId": 23 "parentTaskId": 23
}, },
@@ -2736,6 +2736,16 @@
"status": "pending", "status": "pending",
"dependencies": [], "dependencies": [],
"priority": "medium" "priority": "medium"
},
{
"id": 61,
"title": "Implement Flexible AI Model Management",
"description": "Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.",
"details": "### Proposed Solution\nImplement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:\n\n- `task-master models`: Lists currently configured models for main operations and research.\n- `task-master models --set-main=\"<model_name>\" --set-research=\"<model_name>\"`: Sets the desired models for main operations and research tasks respectively.\n\nSupported AI Models:\n- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter\n- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok\n\nIf a user specifies an invalid model, the CLI lists available models clearly.\n\n### Example CLI Usage\n\nList current models:\n```shell\ntask-master models\n```\nOutput example:\n```\nCurrent AI Model Configuration:\n- Main Operations: Claude\n- Research Operations: Perplexity\n```\n\nSet new models:\n```shell\ntask-master models --set-main=\"gemini\" --set-research=\"grok\"\n```\n\nAttempt invalid model:\n```shell\ntask-master models --set-main=\"invalidModel\"\n```\nOutput example:\n```\nError: \"invalidModel\" is not a valid model.\n\nAvailable models for Main Operations:\n- claude\n- openai\n- ollama\n- gemini\n- openrouter\n```\n\n### High-Level Workflow\n1. Update CLI parsing logic to handle new `models` command and associated flags.\n2. Consolidate all AI calls into `ai-services.js` for centralized management.\n3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:\n - Claude (existing)\n - Perplexity (existing)\n - OpenAI\n - Ollama\n - Gemini\n - OpenRouter\n - Grok\n4. Update environment variables and provide clear documentation in `.env_example`:\n```env\n# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter\nMAIN_MODEL=claude\n\n# RESEARCH_MODEL options: perplexity, openai, ollama, grok\nRESEARCH_MODEL=perplexity\n```\n5. Ensure dynamic model switching via environment variables or configuration management.\n6. 
Provide clear CLI feedback and validation of model names.\n\n### Vercel AI SDK Integration\n- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.\n- Implement a configuration layer to map model names to their respective Vercel SDK integrations.\n- Example pattern for integration:\n```javascript\nimport { createClient } from '@vercel/ai';\n\nconst clients = {\n claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),\n openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),\n ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),\n gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),\n openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),\n perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),\n grok: createClient({ provider: 'grok', apiKey: process.env.GROK_API_KEY })\n};\n\nexport function getClient(model) {\n if (!clients[model]) {\n throw new Error(`Invalid model: ${model}`);\n }\n return clients[model];\n}\n```\n- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.\n- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.\n\n### Key Elements\n- Enhanced model visibility and intuitive management commands.\n- Centralized and robust handling of AI API integrations via Vercel AI SDK.\n- Clear CLI responses with detailed validation feedback.\n- Flexible, easy-to-understand environment configuration.\n\n### Implementation Considerations\n- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).\n- Ensure comprehensive error handling for invalid model selections.\n- Clearly document environment variable options and their purposes.\n- Validate model names rigorously to prevent runtime errors.\n\n### Out of Scope (Future Considerations)\n- Automatic benchmarking or model performance comparison.\n- Dynamic runtime switching of models based on task type or complexity.",
"testStrategy": "### Test Strategy\n1. **Unit Tests**:\n - Test CLI commands for listing, setting, and validating models.\n - Mock Vercel AI SDK calls to ensure proper integration and error handling.\n\n2. **Integration Tests**:\n - Validate end-to-end functionality of model management commands.\n - Test dynamic switching of models via environment variables.\n\n3. **Error Handling Tests**:\n - Simulate invalid model names and verify error messages.\n - Test API failures for each model provider and ensure graceful degradation.\n\n4. **Documentation Validation**:\n - Verify that `.env_example` and CLI usage examples are accurate and comprehensive.\n\n5. **Performance Tests**:\n - Measure response times for API calls through Vercel AI SDK.\n - Ensure no significant latency is introduced by model switching.\n\n6. **SDK-Specific Tests**:\n - Validate the behavior of `generateText` and `streamText` functions for supported models.\n - Test compatibility with serverless and edge deployments.",
"status": "pending",
"dependencies": [],
"priority": "high"
}
]
}