feat(ai): Add xAI provider and Grok models

Integrates the xAI provider into the unified AI service layer, allowing the use of Grok models (e.g., grok-3, grok-3-mini).

Changes include:
- Added the `@ai-sdk/xai` dependency.
- Created `src/ai-providers/xai.js` with implementations for generateText, streamText, and generateObject (stubbed); see the sketch below.
- Updated `ai-services-unified.js` to include the xAI provider in the function map.
- Updated the configuration handling to recognize the 'xai' provider and its API key environment variable.
- Updated the supported-models list to include known Grok models and their capabilities (object generation marked as likely unsupported).
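A minimal sketch of what the new provider module might look like, assuming the `createXai` factory from `@ai-sdk/xai` and the `generateText`/`streamText` helpers from the core `ai` package; the exported function names and parameter shape here are illustrative, not taken from the diff:

```js
// src/ai-providers/xai.js — hypothetical sketch using the Vercel AI SDK
import { createXai } from '@ai-sdk/xai';
import { generateText, streamText } from 'ai';

export async function generateXaiText({ apiKey, modelId, messages, maxTokens, temperature }) {
	const xai = createXai({ apiKey });
	const result = await generateText({
		model: xai(modelId), // e.g. 'grok-3' or 'grok-3-mini'
		messages,
		maxTokens,
		temperature
	});
	return result.text;
}

export async function streamXaiText({ apiKey, modelId, messages, maxTokens, temperature }) {
	const xai = createXai({ apiKey });
	// streamText returns immediately; the caller consumes result.textStream
	return streamText({
		model: xai(modelId),
		messages,
		maxTokens,
		temperature
	});
}

export async function generateXaiObject() {
	// Stubbed, matching the commit note that object generation is
	// likely unsupported for Grok models at this point.
	throw new Error('generateObject is not yet supported for the xai provider');
}
```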
Author: Eyal Toledano
Date: 2025-04-27 14:47:50 -04:00
Parent: 2517bc112c
Commit: ed79d4f473
13 changed files with 315 additions and 28 deletions


@@ -1336,7 +1336,7 @@ When testing the non-streaming `generateTextService` call in `updateSubtaskById`
 ### Details:
-## 22. Implement `openai.js` Provider Module using Vercel AI SDK [in-progress]
+## 22. Implement `openai.js` Provider Module using Vercel AI SDK [done]
 ### Dependencies: None
 ### Description: Create and implement the `openai.js` module within `src/ai-providers/`. This module should contain functions to interact with the OpenAI API (streaming and non-streaming) using the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`. (Optional, implement if OpenAI models are needed).
 ### Details:
@@ -1785,7 +1785,7 @@ export async function generateGoogleObject({
 ### Details:
-## 29. Implement `xai.js` Provider Module using Vercel AI SDK [pending]
+## 29. Implement `xai.js` Provider Module using Vercel AI SDK [in-progress]
 ### Dependencies: None
 ### Description: Create and implement the `xai.js` module within `src/ai-providers/`. This module should contain functions to interact with xAI models (e.g., Grok) using the **Vercel AI SDK (`@ai-sdk/xai`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
 ### Details:
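The "function map" named in the commit message presumably keys provider names to these module functions inside `ai-services-unified.js`. A hypothetical sketch of that dispatch table, reusing the illustrative names from the provider sketch above (the `openai.js` names are likewise assumptions):

```js
// Hypothetical provider/operation dispatch table in ai-services-unified.js
import * as openai from '../src/ai-providers/openai.js';
import * as xai from '../src/ai-providers/xai.js';

const PROVIDER_FUNCTIONS = {
	openai: {
		generateText: openai.generateOpenAIText,
		streamText: openai.streamOpenAIText
	},
	xai: {
		generateText: xai.generateXaiText,
		streamText: xai.streamXaiText
	}
};

function getProviderFn(providerName, operation) {
	const fn = PROVIDER_FUNCTIONS[providerName]?.[operation];
	if (!fn) {
		throw new Error(`Provider '${providerName}' does not support '${operation}'`);
	}
	return fn;
}
```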

tasks/task_071.txt

@@ -1,6 +1,6 @@
 # Task ID: 71
 # Title: Add Model-Specific maxTokens Override Configuration
-# Status: pending
+# Status: done
 # Dependencies: None
 # Priority: high
 # Description: Implement functionality to allow specifying a maximum token limit for individual AI models within .taskmasterconfig, overriding the role-based maxTokens if the model-specific limit is lower.
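The description above amounts to a min-style resolution: the model-specific limit applies only when it is lower than the role's limit. A tiny sketch of that rule, with hypothetical function and parameter names:

```js
// Hypothetical resolution of the effective maxTokens for a model/role pair:
// the model-specific limit from .taskmasterconfig wins only when it is lower.
function resolveMaxTokens(roleMaxTokens, modelMaxTokens) {
	if (typeof modelMaxTokens === 'number' && modelMaxTokens < roleMaxTokens) {
		return modelMaxTokens;
	}
	return roleMaxTokens;
}

// Example: the role allows 8192 tokens, but the model entry caps at 4096.
// resolveMaxTokens(8192, 4096)  -> 4096
// resolveMaxTokens(8192, 16384) -> 8192 (a higher model limit never raises the role cap)
```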

tasks/task_072.txt (new file, +11 lines)

@@ -0,0 +1,11 @@
+# Task ID: 72
+# Title: Implement PDF Generation for Project Progress and Dependency Overview
+# Status: pending
+# Dependencies: None
+# Priority: medium
+# Description: Develop a feature to generate a PDF report summarizing the current project progress and visualizing the dependency chain of tasks.
+# Details:
+This task involves creating a new CLI command named 'progress-pdf' within the existing project framework to generate a PDF document. The PDF should include: 1) A summary of project progress, detailing completed, in-progress, and pending tasks with their respective statuses and completion percentages if applicable. 2) A visual representation of the task dependency chain, leveraging the output format from the 'diagram' command (Task 70) to include Mermaid diagrams or similar visualizations converted to image format for PDF embedding. Use a suitable PDF generation library (e.g., jsPDF for JavaScript environments or ReportLab for Python) compatible with the project's tech stack. Ensure the command accepts optional parameters to filter tasks by status or ID for customized reports. Handle large dependency chains by implementing pagination or zoomable image sections in the PDF. Provide error handling for cases where diagram generation or PDF creation fails, logging detailed error messages for debugging. Consider accessibility by ensuring text in the PDF is selectable and images have alt text descriptions. Integrate this feature with the existing CLI structure, ensuring it aligns with the project's configuration settings (e.g., output directory for generated files). Document the command usage and parameters in the project's help or README file.
+# Test Strategy:
+Verify the completion of this task through a multi-step testing approach: 1) Unit Tests: Create tests for the PDF generation logic to ensure data (task statuses and dependencies) is correctly fetched and formatted. Mock the PDF library to test edge cases like empty task lists or broken dependency links. 2) Integration Tests: Run the 'progress-pdf' command via CLI to confirm it generates a PDF file without errors under normal conditions, with filtered task IDs, and with various status filters. Validate that the output file exists in the specified directory and can be opened. 3) Content Validation: Manually or via automated script, check the generated PDF content to ensure it accurately reflects the current project state (compare task counts and statuses against a known project state) and includes dependency diagrams as images. 4) Error Handling Tests: Simulate failures in diagram generation or PDF creation (e.g., invalid output path, library errors) and verify that appropriate error messages are logged and the command exits gracefully. 5) Accessibility Checks: Use a PDF accessibility tool or manual inspection to confirm that text is selectable and images have alt text. Run these tests across different project sizes (small with few tasks, large with complex dependencies) to ensure scalability. Document test results and include a sample PDF output in the project repository for reference.
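As a rough illustration of the Details above, the progress-summary page could be produced with jsPDF (named in the task as one candidate library) along these lines; the helper name, task shape, and Node-side file writing are assumptions, not part of the task spec:

```js
// Hypothetical sketch: render a task-status summary page with jsPDF in Node
import { jsPDF } from 'jspdf';
import { writeFileSync } from 'fs';

function writeProgressPdf(tasks, outPath) {
	// Tally tasks by status (assumed task shape: { id, title, status })
	const counts = {};
	for (const task of tasks) {
		counts[task.status] = (counts[task.status] ?? 0) + 1;
	}

	const doc = new jsPDF();
	doc.setFontSize(16);
	doc.text('Project Progress', 14, 20);

	doc.setFontSize(11);
	let y = 32;
	for (const [status, count] of Object.entries(counts)) {
		doc.text(`${status}: ${count} task(s)`, 14, y);
		y += 8;
	}

	// jsPDF's arraybuffer output works outside the browser as well
	writeFileSync(outPath, Buffer.from(doc.output('arraybuffer')));
}
```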

File diff suppressed because one or more lines are too long