From ed79d4f4735dfab4124fa189214c0bd5e23a6860 Mon Sep 17 00:00:00 2001 From: Eyal Toledano Date: Sun, 27 Apr 2025 14:47:50 -0400 Subject: [PATCH] feat(ai): Add xAI provider and Grok models Integrates the xAI provider into the unified AI service layer, allowing the use of Grok models (e.g., grok-3, grok-3-mini). Changes include: - Added dependency. - Created with implementations for generateText, streamText, and generateObject (stubbed). - Updated to include the xAI provider in the function map. - Updated to recognize the 'xai' provider and the environment variable. - Updated to include known Grok models and their capabilities (object generation marked as likely unsupported). --- .changeset/blue-spies-kick.md | 5 + .cursor/rules/ai_providers.mdc | 89 +++++++++++++- .taskmasterconfig | 8 +- package-lock.json | 16 +-- package.json | 2 +- scripts/modules/ai-services-unified.js | 7 ++ scripts/modules/config-manager.js | 3 +- scripts/modules/supported-models.json | 19 ++- src/ai-providers/xai.js | 160 +++++++++++++++++++++++++ tasks/task_061.txt | 4 +- tasks/task_071.txt | 2 +- tasks/task_072.txt | 11 ++ tasks/tasks.json | 17 ++- 13 files changed, 315 insertions(+), 28 deletions(-) create mode 100644 .changeset/blue-spies-kick.md create mode 100644 src/ai-providers/xai.js create mode 100644 tasks/task_072.txt diff --git a/.changeset/blue-spies-kick.md b/.changeset/blue-spies-kick.md new file mode 100644 index 00000000..f7fea4e7 --- /dev/null +++ b/.changeset/blue-spies-kick.md @@ -0,0 +1,5 @@ +--- +'task-master-ai': patch +--- + +Add xAI provider and Grok models support diff --git a/.cursor/rules/ai_providers.mdc b/.cursor/rules/ai_providers.mdc index 35800174..42acee6d 100644 --- a/.cursor/rules/ai_providers.mdc +++ b/.cursor/rules/ai_providers.mdc @@ -3,7 +3,6 @@ description: Guidelines for managing Task Master AI providers and models. globs: alwaysApply: false --- - # Task Master AI Provider Management This rule guides AI assistants on how to view, configure, and interact with the different AI providers and models supported by Task Master. For internal implementation details of the service layer, see [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc). @@ -55,4 +54,90 @@ This rule guides AI assistants on how to view, configure, and interact with the 1. **Verify API Key:** Ensure the correct API key for the *selected provider* (check `models` output) exists in the appropriate location (`.cursor/mcp.json` env or `.env`). 2. **Check Model ID:** Ensure the model ID set for the role is valid (use `models` listAvailableModels/`task-master models`). 3. **Provider Status:** Check the status of the external AI provider's service. - 4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server. \ No newline at end of file + 4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server. + +## Adding a New AI Provider (Vercel AI SDK Method) + +Follow these steps to integrate a new AI provider that has an official Vercel AI SDK adapter (`@ai-sdk/`): + +1. **Install Dependency:** + - Install the provider-specific package: + ```bash + npm install @ai-sdk/ + ``` + +2. **Create Provider Module:** + - Create a new file in `src/ai-providers/` named `.js`. + - Use existing modules (`openai.js`, `anthropic.js`, etc.) as a template. + - **Import:** + - Import the provider's `create` function from `@ai-sdk/`. + - Import `generateText`, `streamText`, `generateObject` from the core `ai` package. + - Import the `log` utility from `../../scripts/modules/utils.js`. + - **Implement Core Functions:** + - `generateText(params)`: + - Accepts `params` (apiKey, modelId, messages, etc.). + - Instantiate the client: `const client = create({ apiKey });` + - Call `generateText({ model: client(modelId), ... })`. + - Return `result.text`. + - Include basic validation and try/catch error handling. + - `streamText(params)`: + - Similar structure to `generateText`. + - Call `streamText({ model: client(modelId), ... })`. + - Return the full stream result object. + - Include basic validation and try/catch. + - `generateObject(params)`: + - Similar structure. + - Call `generateObject({ model: client(modelId), schema, messages, ... })`. + - Return `result.object`. + - Include basic validation and try/catch. + - **Export Functions:** Export the three implemented functions (`generateText`, `streamText`, `generateObject`). + +3. **Integrate with Unified Service:** + - Open `scripts/modules/ai-services-unified.js`. + - **Import:** Add `import * as from '../../src/ai-providers/.js';` + - **Map:** Add an entry to the `PROVIDER_FUNCTIONS` map: + ```javascript + '': { + generateText: .generateText, + streamText: .streamText, + generateObject: .generateObject + }, + ``` + +4. **Update Configuration Management:** + - Open `scripts/modules/config-manager.js`. + - **`MODEL_MAP`:** Add the new `` key to the `MODEL_MAP` loaded from `supported-models.json` (or ensure the loading handles new providers dynamically if `supported-models.json` is updated first). + - **`VALID_PROVIDERS`:** Ensure the new `` is included in the `VALID_PROVIDERS` array (this should happen automatically if derived from `MODEL_MAP` keys). + - **API Key Handling:** + - Update the `keyMap` in `_resolveApiKey` and `isApiKeySet` with the correct environment variable name (e.g., `PROVIDER_API_KEY`). + - Update the `switch` statement in `getMcpApiKeyStatus` to check the corresponding key in `mcp.json` and its placeholder value. + - Add a case to the `switch` statement in `getMcpApiKeyStatus` for the new provider, including its placeholder string if applicable. + - **Ollama Exception:** If adding Ollama or another provider *not* requiring an API key, add a specific check at the beginning of `isApiKeySet` and `getMcpApiKeyStatus` to return `true` immediately for that provider. + +5. **Update Supported Models List:** + - Edit `scripts/modules/supported-models.json`. + - Add a new key for the ``. + - Add an array of model objects under the provider key, each including: + - `id`: The specific model identifier (e.g., `claude-3-opus-20240229`). + - `name`: A user-friendly name (optional). + - `swe_score`, `cost_per_1m_tokens`: (Optional) Add performance/cost data if available. + - `allowed_roles`: An array of roles (`"main"`, `"research"`, `"fallback"`) the model is suitable for. + - `max_tokens`: (Optional but recommended) The maximum token limit for the model. + +6. **Update Environment Examples:** + - Add the new `PROVIDER_API_KEY` to `.env.example`. + - Add the new `PROVIDER_API_KEY` with its placeholder (`YOUR_PROVIDER_API_KEY_HERE`) to the `env` section for `taskmaster-ai` in `.cursor/mcp.json.example` (if it exists) or update instructions. + +7. **Add Unit Tests:** + - Create `tests/unit/ai-providers/.test.js`. + - Mock the `@ai-sdk/` module and the core `ai` module functions (`generateText`, `streamText`, `generateObject`). + - Write tests for each exported function (`generateText`, etc.) to verify: + - Correct client instantiation. + - Correct parameters passed to the mocked Vercel AI SDK functions. + - Correct handling of results. + - Error handling (missing API key, SDK errors). + +8. **Documentation:** + - Update any relevant documentation (like `README.md` or other rules) mentioning supported providers or configuration. + +*(Note: For providers **without** an official Vercel AI SDK adapter, the process would involve directly using the provider's own SDK or API within the `src/ai-providers/.js` module and manually constructing responses compatible with the unified service layer, which is significantly more complex.)* \ No newline at end of file diff --git a/.taskmasterconfig b/.taskmasterconfig index ffda308e..07aa817f 100644 --- a/.taskmasterconfig +++ b/.taskmasterconfig @@ -1,14 +1,14 @@ { "models": { "main": { - "provider": "openai", - "modelId": "o3-mini", + "provider": "xai", + "modelId": "grok-3", "maxTokens": 100000, "temperature": 0.2 }, "research": { - "provider": "perplexity", - "modelId": "sonar-pro", + "provider": "xai", + "modelId": "grok-3", "maxTokens": 8700, "temperature": 0.1 }, diff --git a/package-lock.json b/package-lock.json index 4d3c982e..77f9a6fa 100644 --- a/package-lock.json +++ b/package-lock.json @@ -15,7 +15,7 @@ "@ai-sdk/mistral": "^1.2.7", "@ai-sdk/openai": "^1.3.20", "@ai-sdk/perplexity": "^1.1.7", - "@ai-sdk/xai": "^1.2.13", + "@ai-sdk/xai": "^1.2.15", "@anthropic-ai/sdk": "^0.39.0", "@openrouter/ai-sdk-provider": "^0.4.5", "ai": "^4.3.10", @@ -155,9 +155,9 @@ } }, "node_modules/@ai-sdk/openai-compatible": { - "version": "0.2.11", - "resolved": "https://registry.npmjs.org/@ai-sdk/openai-compatible/-/openai-compatible-0.2.11.tgz", - "integrity": "sha512-56U0uNCcFTygA4h6R/uREv8r5sKA3/pGkpIAnMOpRzs5wiARlTYakWW3LZgxg6D4Gpeswo4gwNJczB7nM0K1Qg==", + "version": "0.2.13", + "resolved": "https://registry.npmjs.org/@ai-sdk/openai-compatible/-/openai-compatible-0.2.13.tgz", + "integrity": "sha512-tB+lL8Z3j0qDod/mvxwjrPhbLUHp/aQW+NvMoJaqeTtP+Vmv5qR800pncGczxn5WN0pllQm+7aIRDnm69XeSbg==", "license": "Apache-2.0", "dependencies": { "@ai-sdk/provider": "1.1.3", @@ -257,12 +257,12 @@ } }, "node_modules/@ai-sdk/xai": { - "version": "1.2.13", - "resolved": "https://registry.npmjs.org/@ai-sdk/xai/-/xai-1.2.13.tgz", - "integrity": "sha512-vJnzpnRVIVuGgDHrHgfIc3ImjVp6YN+salVX99r+HWd2itiGQy+vAmQKen0Ml8BK/avnLyQneeYRfdlgDBkhgQ==", + "version": "1.2.15", + "resolved": "https://registry.npmjs.org/@ai-sdk/xai/-/xai-1.2.15.tgz", + "integrity": "sha512-18qEYyVHIqTiOMePE00bfx4kJrTHM4dV3D3Rpe+eBISlY80X1FnzZRnRTJo3Q6MOSmW5+ZKVaX9jtryhoFpn0A==", "license": "Apache-2.0", "dependencies": { - "@ai-sdk/openai-compatible": "0.2.11", + "@ai-sdk/openai-compatible": "0.2.13", "@ai-sdk/provider": "1.1.3", "@ai-sdk/provider-utils": "2.2.7" }, diff --git a/package.json b/package.json index ec905c5e..29e09f49 100644 --- a/package.json +++ b/package.json @@ -44,7 +44,7 @@ "@ai-sdk/mistral": "^1.2.7", "@ai-sdk/openai": "^1.3.20", "@ai-sdk/perplexity": "^1.1.7", - "@ai-sdk/xai": "^1.2.13", + "@ai-sdk/xai": "^1.2.15", "@anthropic-ai/sdk": "^0.39.0", "@openrouter/ai-sdk-provider": "^0.4.5", "ai": "^4.3.10", diff --git a/scripts/modules/ai-services-unified.js b/scripts/modules/ai-services-unified.js index e94d2b25..6995dd43 100644 --- a/scripts/modules/ai-services-unified.js +++ b/scripts/modules/ai-services-unified.js @@ -26,6 +26,7 @@ import * as anthropic from '../../src/ai-providers/anthropic.js'; import * as perplexity from '../../src/ai-providers/perplexity.js'; import * as google from '../../src/ai-providers/google.js'; // Import Google provider import * as openai from '../../src/ai-providers/openai.js'; // ADD: Import OpenAI provider +import * as xai from '../../src/ai-providers/xai.js'; // ADD: Import xAI provider // TODO: Import other provider modules when implemented (ollama, etc.) // --- Provider Function Map --- @@ -54,6 +55,12 @@ const PROVIDER_FUNCTIONS = { generateText: openai.generateOpenAIText, streamText: openai.streamOpenAIText, generateObject: openai.generateOpenAIObject + }, + xai: { + // ADD: xAI entry + generateText: xai.generateXaiText, + streamText: xai.streamXaiText, + generateObject: xai.generateXaiObject // Note: Object generation might be unsupported } // TODO: Add entries for ollama, etc. when implemented }; diff --git a/scripts/modules/config-manager.js b/scripts/modules/config-manager.js index e583419c..8027cc33 100644 --- a/scripts/modules/config-manager.js +++ b/scripts/modules/config-manager.js @@ -30,7 +30,7 @@ try { const CONFIG_FILE_NAME = '.taskmasterconfig'; // Define valid providers dynamically from the loaded MODEL_MAP -const VALID_PROVIDERS = Object.keys(MODEL_MAP); +const VALID_PROVIDERS = Object.keys(MODEL_MAP || {}); // Default configuration values (used if .taskmasterconfig is missing or incomplete) const DEFAULTS = { @@ -534,6 +534,7 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) { case 'azure': apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY; placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE'; + break; default: return false; // Unknown provider } diff --git a/scripts/modules/supported-models.json b/scripts/modules/supported-models.json index 63278d26..e6be76e4 100644 --- a/scripts/modules/supported-models.json +++ b/scripts/modules/supported-models.json @@ -263,28 +263,35 @@ ], "xai": [ { - "id": "grok3", - "swe_score": 0, + "id": "grok-3", + "name": "Grok 3", + "swe_score": null, "cost_per_1m_tokens": { "input": 3, "output": 15 }, - "allowed_roles": ["main", "fallback", "research"] + "allowed_roles": ["main", "fallback", "research"], + "max_tokens": 131072 }, { "id": "grok-3-mini", + "name": "Grok 3 Mini", "swe_score": 0, "cost_per_1m_tokens": { "input": 0.3, "output": 0.5 }, - "allowed_roles": ["main", "fallback", "research"] + "allowed_roles": ["main", "fallback", "research"], + "max_tokens": 131072 }, { "id": "grok3-fast", + "name": "Grok 3 Fast", "swe_score": 0, "cost_per_1m_tokens": { "input": 5, "output": 25 }, - "allowed_roles": ["main", "fallback", "research"] + "allowed_roles": ["main", "fallback", "research"], + "max_tokens": 131072 }, { "id": "grok-3-mini-fast", "swe_score": 0, "cost_per_1m_tokens": { "input": 0.6, "output": 4 }, - "allowed_roles": ["main", "fallback", "research"] + "allowed_roles": ["main", "fallback", "research"], + "max_tokens": 131072 } ] } diff --git a/src/ai-providers/xai.js b/src/ai-providers/xai.js new file mode 100644 index 00000000..e7386ba5 --- /dev/null +++ b/src/ai-providers/xai.js @@ -0,0 +1,160 @@ +/** + * src/ai-providers/xai.js + * + * Implementation for interacting with xAI models (e.g., Grok) + * using the Vercel AI SDK. + */ +import { createXai } from '@ai-sdk/xai'; +import { generateText, streamText, generateObject } from 'ai'; // Only import what's used +import { log } from '../../scripts/modules/utils.js'; // Assuming utils is accessible + +// --- Client Instantiation --- +function getClient(apiKey) { + if (!apiKey) { + throw new Error('xAI API key is required.'); + } + // Create and return a new instance directly + return createXai({ + apiKey: apiKey + // Add baseURL or other options if needed later + }); +} + +// --- Standardized Service Function Implementations --- + +/** + * Generates text using an xAI model. + * + * @param {object} params - Parameters for the text generation. + * @param {string} params.apiKey - The xAI API key. + * @param {string} params.modelId - The specific xAI model ID (e.g., 'grok-3'). + * @param {Array} params.messages - The messages array (e.g., [{ role: 'user', content: '...' }]). + * @param {number} [params.maxTokens] - Maximum tokens for the response. + * @param {number} [params.temperature] - Temperature for generation. + * @returns {Promise} The generated text content. + * @throws {Error} If the API call fails. + */ +export async function generateXaiText({ + apiKey, + modelId, + messages, + maxTokens, + temperature +}) { + log('debug', `Generating xAI text with model: ${modelId}`); + try { + const client = getClient(apiKey); + const result = await generateText({ + model: client(modelId), // Correct model invocation + messages: messages, + maxTokens: maxTokens, + temperature: temperature, + // Add reasoningEffort or other xAI specific options via providerOptions if needed + providerOptions: { xai: { reasoningEffort: 'high' } } + }); + log( + 'debug', + `xAI generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}` + ); + return result.text; + } catch (error) { + log('error', `xAI generateText failed: ${error.message}`); + throw error; + } +} + +/** + * Streams text using an xAI model. + * + * @param {object} params - Parameters for the text streaming. + * @param {string} params.apiKey - The xAI API key. + * @param {string} params.modelId - The specific xAI model ID. + * @param {Array} params.messages - The messages array. + * @param {number} [params.maxTokens] - Maximum tokens for the response. + * @param {number} [params.temperature] - Temperature for generation. + * @returns {Promise} The full stream result object from the Vercel AI SDK. + * @throws {Error} If the API call fails to initiate the stream. + */ +export async function streamXaiText({ + apiKey, + modelId, + messages, + maxTokens, + temperature +}) { + log('debug', `Streaming xAI text with model: ${modelId}`); + try { + const client = getClient(apiKey); + const stream = await streamText({ + model: client(modelId), // Correct model invocation + messages: messages, + maxTokens: maxTokens, + temperature: temperature + }); + return stream; // Return the full stream object + } catch (error) { + log('error', `xAI streamText failed: ${error.message}`, error.stack); + throw error; + } +} + +/** + * Generates a structured object using an xAI model. + * Note: Based on search results, xAI models do not currently support Object Generation. + * This function is included for structural consistency but will likely fail if called. + * + * @param {object} params - Parameters for object generation. + * @param {string} params.apiKey - The xAI API key. + * @param {string} params.modelId - The specific xAI model ID. + * @param {Array} params.messages - The messages array. + * @param {import('zod').ZodSchema} params.schema - The Zod schema for the object. + * @param {string} params.objectName - A name for the object/tool. + * @param {number} [params.maxTokens] - Maximum tokens for the response. + * @param {number} [params.temperature] - Temperature for generation. + * @param {number} [params.maxRetries] - Max retries for validation/generation. + * @returns {Promise} The generated object matching the schema. + * @throws {Error} If generation or validation fails. + */ +export async function generateXaiObject({ + apiKey, + modelId, + messages, + schema, + objectName = 'generated_xai_object', + maxTokens, + temperature, + maxRetries = 3 +}) { + log( + 'warn', // Log warning as this is likely unsupported + `Attempting to generate xAI object ('${objectName}') with model: ${modelId}. This may not be supported by the provider.` + ); + try { + const client = getClient(apiKey); + const result = await generateObject({ + model: client(modelId), // Correct model invocation + // Note: mode might need adjustment if xAI ever supports object generation differently + mode: 'tool', + schema: schema, + messages: messages, + tool: { + name: objectName, + description: `Generate a ${objectName} based on the prompt.` + }, + maxTokens: maxTokens, + temperature: temperature, + maxRetries: maxRetries + }); + log( + 'debug', + `xAI generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}` + ); + return result.object; + } catch (error) { + log( + 'error', + `xAI generateObject ('${objectName}') failed: ${error.message}. (Likely unsupported by provider)` + ); + throw error; // Re-throw the error + } +} diff --git a/tasks/task_061.txt b/tasks/task_061.txt index d487d897..506a3b01 100644 --- a/tasks/task_061.txt +++ b/tasks/task_061.txt @@ -1336,7 +1336,7 @@ When testing the non-streaming `generateTextService` call in `updateSubtaskById` ### Details: -## 22. Implement `openai.js` Provider Module using Vercel AI SDK [in-progress] +## 22. Implement `openai.js` Provider Module using Vercel AI SDK [done] ### Dependencies: None ### Description: Create and implement the `openai.js` module within `src/ai-providers/`. This module should contain functions to interact with the OpenAI API (streaming and non-streaming) using the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`. (Optional, implement if OpenAI models are needed). ### Details: @@ -1785,7 +1785,7 @@ export async function generateGoogleObject({ ### Details: -## 29. Implement `xai.js` Provider Module using Vercel AI SDK [pending] +## 29. Implement `xai.js` Provider Module using Vercel AI SDK [in-progress] ### Dependencies: None ### Description: Create and implement the `xai.js` module within `src/ai-providers/`. This module should contain functions to interact with xAI models (e.g., Grok) using the **Vercel AI SDK (`@ai-sdk/xai`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`. ### Details: diff --git a/tasks/task_071.txt b/tasks/task_071.txt index 557ee5df..ae70285e 100644 --- a/tasks/task_071.txt +++ b/tasks/task_071.txt @@ -1,6 +1,6 @@ # Task ID: 71 # Title: Add Model-Specific maxTokens Override Configuration -# Status: pending +# Status: done # Dependencies: None # Priority: high # Description: Implement functionality to allow specifying a maximum token limit for individual AI models within .taskmasterconfig, overriding the role-based maxTokens if the model-specific limit is lower. diff --git a/tasks/task_072.txt b/tasks/task_072.txt new file mode 100644 index 00000000..b0ca546b --- /dev/null +++ b/tasks/task_072.txt @@ -0,0 +1,11 @@ +# Task ID: 72 +# Title: Implement PDF Generation for Project Progress and Dependency Overview +# Status: pending +# Dependencies: None +# Priority: medium +# Description: Develop a feature to generate a PDF report summarizing the current project progress and visualizing the dependency chain of tasks. +# Details: +This task involves creating a new CLI command named 'progress-pdf' within the existing project framework to generate a PDF document. The PDF should include: 1) A summary of project progress, detailing completed, in-progress, and pending tasks with their respective statuses and completion percentages if applicable. 2) A visual representation of the task dependency chain, leveraging the output format from the 'diagram' command (Task 70) to include Mermaid diagrams or similar visualizations converted to image format for PDF embedding. Use a suitable PDF generation library (e.g., jsPDF for JavaScript environments or ReportLab for Python) compatible with the project’s tech stack. Ensure the command accepts optional parameters to filter tasks by status or ID for customized reports. Handle large dependency chains by implementing pagination or zoomable image sections in the PDF. Provide error handling for cases where diagram generation or PDF creation fails, logging detailed error messages for debugging. Consider accessibility by ensuring text in the PDF is selectable and images have alt text descriptions. Integrate this feature with the existing CLI structure, ensuring it aligns with the project’s configuration settings (e.g., output directory for generated files). Document the command usage and parameters in the project’s help or README file. + +# Test Strategy: +Verify the completion of this task through a multi-step testing approach: 1) Unit Tests: Create tests for the PDF generation logic to ensure data (task statuses and dependencies) is correctly fetched and formatted. Mock the PDF library to test edge cases like empty task lists or broken dependency links. 2) Integration Tests: Run the 'progress-pdf' command via CLI to confirm it generates a PDF file without errors under normal conditions, with filtered task IDs, and with various status filters. Validate that the output file exists in the specified directory and can be opened. 3) Content Validation: Manually or via automated script, check the generated PDF content to ensure it accurately reflects the current project state (compare task counts and statuses against a known project state) and includes dependency diagrams as images. 4) Error Handling Tests: Simulate failures in diagram generation or PDF creation (e.g., invalid output path, library errors) and verify that appropriate error messages are logged and the command exits gracefully. 5) Accessibility Checks: Use a PDF accessibility tool or manual inspection to confirm that text is selectable and images have alt text. Run these tests across different project sizes (small with few tasks, large with complex dependencies) to ensure scalability. Document test results and include a sample PDF output in the project repository for reference. diff --git a/tasks/tasks.json b/tasks/tasks.json index 0f3b1a57..8fb6f744 100644 --- a/tasks/tasks.json +++ b/tasks/tasks.json @@ -3232,7 +3232,7 @@ "title": "Implement `openai.js` Provider Module using Vercel AI SDK", "description": "Create and implement the `openai.js` module within `src/ai-providers/`. This module should contain functions to interact with the OpenAI API (streaming and non-streaming) using the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`. (Optional, implement if OpenAI models are needed).", "details": "\n\n\n```javascript\n// Implementation details for openai.js provider module\n\nimport { createOpenAI } from 'ai';\n\n/**\n * Generates text using OpenAI models via Vercel AI SDK\n * \n * @param {Object} params - Configuration parameters\n * @param {string} params.apiKey - OpenAI API key\n * @param {string} params.modelId - Model ID (e.g., 'gpt-4', 'gpt-3.5-turbo')\n * @param {Array} params.messages - Array of message objects with role and content\n * @param {number} [params.maxTokens] - Maximum tokens to generate\n * @param {number} [params.temperature=0.7] - Sampling temperature (0-1)\n * @returns {Promise} The generated text response\n */\nexport async function generateOpenAIText(params) {\n try {\n const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;\n \n if (!apiKey) throw new Error('OpenAI API key is required');\n if (!modelId) throw new Error('Model ID is required');\n if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');\n \n const openai = createOpenAI({ apiKey });\n \n const response = await openai.chat.completions.create({\n model: modelId,\n messages,\n max_tokens: maxTokens,\n temperature,\n });\n \n return response.choices[0].message.content;\n } catch (error) {\n console.error('OpenAI text generation error:', error);\n throw new Error(`OpenAI API error: ${error.message}`);\n }\n}\n\n/**\n * Streams text using OpenAI models via Vercel AI SDK\n * \n * @param {Object} params - Configuration parameters (same as generateOpenAIText)\n * @returns {ReadableStream} A stream of text chunks\n */\nexport async function streamOpenAIText(params) {\n try {\n const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;\n \n if (!apiKey) throw new Error('OpenAI API key is required');\n if (!modelId) throw new Error('Model ID is required');\n if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');\n \n const openai = createOpenAI({ apiKey });\n \n const stream = await openai.chat.completions.create({\n model: modelId,\n messages,\n max_tokens: maxTokens,\n temperature,\n stream: true,\n });\n \n return stream;\n } catch (error) {\n console.error('OpenAI streaming error:', error);\n throw new Error(`OpenAI streaming error: ${error.message}`);\n }\n}\n\n/**\n * Generates a structured object using OpenAI models via Vercel AI SDK\n * \n * @param {Object} params - Configuration parameters\n * @param {string} params.apiKey - OpenAI API key\n * @param {string} params.modelId - Model ID (e.g., 'gpt-4', 'gpt-3.5-turbo')\n * @param {Array} params.messages - Array of message objects\n * @param {Object} params.schema - JSON schema for the response object\n * @param {string} params.objectName - Name of the object to generate\n * @returns {Promise} The generated structured object\n */\nexport async function generateOpenAIObject(params) {\n try {\n const { apiKey, modelId, messages, schema, objectName } = params;\n \n if (!apiKey) throw new Error('OpenAI API key is required');\n if (!modelId) throw new Error('Model ID is required');\n if (!messages || !Array.isArray(messages)) throw new Error('Messages array is required');\n if (!schema) throw new Error('Schema is required');\n if (!objectName) throw new Error('Object name is required');\n \n const openai = createOpenAI({ apiKey });\n \n // Using the Vercel AI SDK's function calling capabilities\n const response = await openai.chat.completions.create({\n model: modelId,\n messages,\n functions: [\n {\n name: objectName,\n description: `Generate a ${objectName} object`,\n parameters: schema,\n },\n ],\n function_call: { name: objectName },\n });\n \n const functionCall = response.choices[0].message.function_call;\n return JSON.parse(functionCall.arguments);\n } catch (error) {\n console.error('OpenAI object generation error:', error);\n throw new Error(`OpenAI object generation error: ${error.message}`);\n }\n}\n```\n\n\n\n\n```javascript\n// Additional implementation notes for openai.js\n\n/**\n * Export a provider info object for OpenAI\n */\nexport const providerInfo = {\n id: 'openai',\n name: 'OpenAI',\n description: 'OpenAI API integration using Vercel AI SDK',\n models: {\n 'gpt-4': {\n id: 'gpt-4',\n name: 'GPT-4',\n contextWindow: 8192,\n supportsFunctions: true,\n },\n 'gpt-4-turbo': {\n id: 'gpt-4-turbo',\n name: 'GPT-4 Turbo',\n contextWindow: 128000,\n supportsFunctions: true,\n },\n 'gpt-3.5-turbo': {\n id: 'gpt-3.5-turbo',\n name: 'GPT-3.5 Turbo',\n contextWindow: 16385,\n supportsFunctions: true,\n }\n }\n};\n\n/**\n * Helper function to format error responses consistently\n * \n * @param {Error} error - The caught error\n * @param {string} operation - The operation being performed\n * @returns {Error} A formatted error\n */\nfunction formatError(error, operation) {\n // Extract OpenAI specific error details if available\n const statusCode = error.status || error.statusCode;\n const errorType = error.type || error.code || 'unknown_error';\n \n // Create a more detailed error message\n const message = `OpenAI ${operation} error (${errorType}): ${error.message}`;\n \n // Create a new error with the formatted message\n const formattedError = new Error(message);\n \n // Add additional properties for debugging\n formattedError.originalError = error;\n formattedError.provider = 'openai';\n formattedError.statusCode = statusCode;\n formattedError.errorType = errorType;\n \n return formattedError;\n}\n\n/**\n * Example usage with the unified AI services interface:\n * \n * // In ai-services-unified.js\n * import * as openaiProvider from './ai-providers/openai.js';\n * \n * export async function generateText(params) {\n * switch(params.provider) {\n * case 'openai':\n * return openaiProvider.generateOpenAIText(params);\n * // other providers...\n * }\n * }\n */\n\n// Note: For proper error handling with the Vercel AI SDK, you may need to:\n// 1. Check for rate limiting errors (429)\n// 2. Handle token context window exceeded errors\n// 3. Implement exponential backoff for retries on 5xx errors\n// 4. Parse streaming errors properly from the ReadableStream\n```\n\n\n\n\n```javascript\n// Correction for openai.js provider module\n\n// IMPORTANT: Use the correct import from Vercel AI SDK\nimport { createOpenAI, openai } from '@ai-sdk/openai';\n\n// Note: Before using this module, install the required dependency:\n// npm install @ai-sdk/openai\n\n// The rest of the implementation remains the same, but uses the correct imports.\n// When implementing this module, ensure your package.json includes this dependency.\n\n// For streaming implementations with the Vercel AI SDK, you can also use the \n// streamText and experimental streamUI methods:\n\n/**\n * Example of using streamText for simpler streaming implementation\n */\nexport async function streamOpenAITextSimplified(params) {\n try {\n const { apiKey, modelId, messages, maxTokens, temperature = 0.7 } = params;\n \n if (!apiKey) throw new Error('OpenAI API key is required');\n \n const openaiClient = createOpenAI({ apiKey });\n \n return openaiClient.streamText({\n model: modelId,\n messages,\n temperature,\n maxTokens,\n });\n } catch (error) {\n console.error('OpenAI streaming error:', error);\n throw new Error(`OpenAI streaming error: ${error.message}`);\n }\n}\n```\n", - "status": "in-progress", + "status": "done", "dependencies": [], "parentTaskId": 61 }, @@ -3297,7 +3297,7 @@ "title": "Implement `xai.js` Provider Module using Vercel AI SDK", "description": "Create and implement the `xai.js` module within `src/ai-providers/`. This module should contain functions to interact with xAI models (e.g., Grok) using the **Vercel AI SDK (`@ai-sdk/xai`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.", "details": "", - "status": "pending", + "status": "in-progress", "dependencies": [], "parentTaskId": 61 }, @@ -3894,10 +3894,21 @@ "description": "Implement functionality to allow specifying a maximum token limit for individual AI models within .taskmasterconfig, overriding the role-based maxTokens if the model-specific limit is lower.", "details": "1. **Modify `.taskmasterconfig` Structure:** Add a new top-level section `modelOverrides` (e.g., `\"modelOverrides\": { \"o3-mini\": { \"maxTokens\": 100000 } }`).\n2. **Update `config-manager.js`:**\n - Modify config loading to read the new `modelOverrides` section.\n - Update `getParametersForRole(role)` logic: Fetch role defaults (roleMaxTokens, temperature). Get the modelId for the role. Look up `modelOverrides[modelId].maxTokens` (modelSpecificMaxTokens). Calculate `effectiveMaxTokens = Math.min(roleMaxTokens, modelSpecificMaxTokens ?? Infinity)`. Return `{ maxTokens: effectiveMaxTokens, temperature }`.\n3. **Update Documentation:** Add an example of `modelOverrides` to `.taskmasterconfig.example` or relevant documentation.", "testStrategy": "1. **Unit Tests (`config-manager.js`):**\n - Verify `getParametersForRole` returns role defaults when no override exists.\n - Verify `getParametersForRole` returns the lower model-specific limit when an override exists and is lower.\n - Verify `getParametersForRole` returns the role limit when an override exists but is higher.\n - Verify handling of missing `modelOverrides` section.\n2. **Integration Tests (`ai-services-unified.js`):**\n - Call an AI service (e.g., `generateTextService`) with a config having a model override.\n - Mock the underlying provider function.\n - Assert that the `maxTokens` value passed to the mocked provider function matches the expected (potentially overridden) minimum value.", - "status": "pending", + "status": "done", "dependencies": [], "priority": "high", "subtasks": [] + }, + { + "id": 72, + "title": "Implement PDF Generation for Project Progress and Dependency Overview", + "description": "Develop a feature to generate a PDF report summarizing the current project progress and visualizing the dependency chain of tasks.", + "details": "This task involves creating a new CLI command named 'progress-pdf' within the existing project framework to generate a PDF document. The PDF should include: 1) A summary of project progress, detailing completed, in-progress, and pending tasks with their respective statuses and completion percentages if applicable. 2) A visual representation of the task dependency chain, leveraging the output format from the 'diagram' command (Task 70) to include Mermaid diagrams or similar visualizations converted to image format for PDF embedding. Use a suitable PDF generation library (e.g., jsPDF for JavaScript environments or ReportLab for Python) compatible with the project’s tech stack. Ensure the command accepts optional parameters to filter tasks by status or ID for customized reports. Handle large dependency chains by implementing pagination or zoomable image sections in the PDF. Provide error handling for cases where diagram generation or PDF creation fails, logging detailed error messages for debugging. Consider accessibility by ensuring text in the PDF is selectable and images have alt text descriptions. Integrate this feature with the existing CLI structure, ensuring it aligns with the project’s configuration settings (e.g., output directory for generated files). Document the command usage and parameters in the project’s help or README file.", + "testStrategy": "Verify the completion of this task through a multi-step testing approach: 1) Unit Tests: Create tests for the PDF generation logic to ensure data (task statuses and dependencies) is correctly fetched and formatted. Mock the PDF library to test edge cases like empty task lists or broken dependency links. 2) Integration Tests: Run the 'progress-pdf' command via CLI to confirm it generates a PDF file without errors under normal conditions, with filtered task IDs, and with various status filters. Validate that the output file exists in the specified directory and can be opened. 3) Content Validation: Manually or via automated script, check the generated PDF content to ensure it accurately reflects the current project state (compare task counts and statuses against a known project state) and includes dependency diagrams as images. 4) Error Handling Tests: Simulate failures in diagram generation or PDF creation (e.g., invalid output path, library errors) and verify that appropriate error messages are logged and the command exits gracefully. 5) Accessibility Checks: Use a PDF accessibility tool or manual inspection to confirm that text is selectable and images have alt text. Run these tests across different project sizes (small with few tasks, large with complex dependencies) to ensure scalability. Document test results and include a sample PDF output in the project repository for reference.", + "status": "pending", + "dependencies": [], + "priority": "medium", + "subtasks": [] } ] } \ No newline at end of file