feat(ai): Add xAI provider and Grok models
Integrates the xAI provider into the unified AI service layer, allowing the use of Grok models (e.g., grok-3, grok-3-mini).
Changes include:
- Added dependency.
- Created with implementations for generateText, streamText, and generateObject (stubbed).
- Updated to include the xAI provider in the function map.
- Updated to recognize the 'xai' provider and the environment variable.
- Updated to include known Grok models and their capabilities (object generation marked as likely unsupported).
This commit is contained in:
5
.changeset/blue-spies-kick.md
Normal file
5
.changeset/blue-spies-kick.md
Normal file
@@ -0,0 +1,5 @@
|
||||
---
|
||||
'task-master-ai': patch
|
||||
---
|
||||
|
||||
Add xAI provider and Grok models support
|
||||
@@ -3,7 +3,6 @@ description: Guidelines for managing Task Master AI providers and models.
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Task Master AI Provider Management
|
||||
|
||||
This rule guides AI assistants on how to view, configure, and interact with the different AI providers and models supported by Task Master. For internal implementation details of the service layer, see [`ai_services.mdc`](mdc:.cursor/rules/ai_services.mdc).
|
||||
@@ -55,4 +54,90 @@ This rule guides AI assistants on how to view, configure, and interact with the
|
||||
1. **Verify API Key:** Ensure the correct API key for the *selected provider* (check `models` output) exists in the appropriate location (`.cursor/mcp.json` env or `.env`).
|
||||
2. **Check Model ID:** Ensure the model ID set for the role is valid (use `models` listAvailableModels/`task-master models`).
|
||||
3. **Provider Status:** Check the status of the external AI provider's service.
|
||||
4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server.
|
||||
4. **Restart MCP:** If changes were made to configuration or provider code, restart the MCP server.
|
||||
|
||||
## Adding a New AI Provider (Vercel AI SDK Method)
|
||||
|
||||
Follow these steps to integrate a new AI provider that has an official Vercel AI SDK adapter (`@ai-sdk/<provider>`):
|
||||
|
||||
1. **Install Dependency:**
|
||||
- Install the provider-specific package:
|
||||
```bash
|
||||
npm install @ai-sdk/<provider-name>
|
||||
```
|
||||
|
||||
2. **Create Provider Module:**
|
||||
- Create a new file in `src/ai-providers/` named `<provider-name>.js`.
|
||||
- Use existing modules (`openai.js`, `anthropic.js`, etc.) as a template.
|
||||
- **Import:**
|
||||
- Import the provider's `create<ProviderName>` function from `@ai-sdk/<provider-name>`.
|
||||
- Import `generateText`, `streamText`, `generateObject` from the core `ai` package.
|
||||
- Import the `log` utility from `../../scripts/modules/utils.js`.
|
||||
- **Implement Core Functions:**
|
||||
- `generate<ProviderName>Text(params)`:
|
||||
- Accepts `params` (apiKey, modelId, messages, etc.).
|
||||
- Instantiate the client: `const client = create<ProviderName>({ apiKey });`
|
||||
- Call `generateText({ model: client(modelId), ... })`.
|
||||
- Return `result.text`.
|
||||
- Include basic validation and try/catch error handling.
|
||||
- `stream<ProviderName>Text(params)`:
|
||||
- Similar structure to `generateText`.
|
||||
- Call `streamText({ model: client(modelId), ... })`.
|
||||
- Return the full stream result object.
|
||||
- Include basic validation and try/catch.
|
||||
- `generate<ProviderName>Object(params)`:
|
||||
- Similar structure.
|
||||
- Call `generateObject({ model: client(modelId), schema, messages, ... })`.
|
||||
- Return `result.object`.
|
||||
- Include basic validation and try/catch.
|
||||
- **Export Functions:** Export the three implemented functions (`generate<ProviderName>Text`, `stream<ProviderName>Text`, `generate<ProviderName>Object`).
|
||||
|
||||
3. **Integrate with Unified Service:**
|
||||
- Open `scripts/modules/ai-services-unified.js`.
|
||||
- **Import:** Add `import * as <providerName> from '../../src/ai-providers/<provider-name>.js';`
|
||||
- **Map:** Add an entry to the `PROVIDER_FUNCTIONS` map:
|
||||
```javascript
|
||||
'<provider-name>': {
|
||||
generateText: <providerName>.generate<ProviderName>Text,
|
||||
streamText: <providerName>.stream<ProviderName>Text,
|
||||
generateObject: <providerName>.generate<ProviderName>Object
|
||||
},
|
||||
```
|
||||
|
||||
4. **Update Configuration Management:**
|
||||
- Open `scripts/modules/config-manager.js`.
|
||||
- **`MODEL_MAP`:** Add the new `<provider-name>` key to the `MODEL_MAP` loaded from `supported-models.json` (or ensure the loading handles new providers dynamically if `supported-models.json` is updated first).
|
||||
- **`VALID_PROVIDERS`:** Ensure the new `<provider-name>` is included in the `VALID_PROVIDERS` array (this should happen automatically if derived from `MODEL_MAP` keys).
|
||||
- **API Key Handling:**
|
||||
- Update the `keyMap` in `_resolveApiKey` and `isApiKeySet` with the correct environment variable name (e.g., `PROVIDER_API_KEY`).
|
||||
- Update the `switch` statement in `getMcpApiKeyStatus` to check the corresponding key in `mcp.json` and its placeholder value.
|
||||
- Add a case to the `switch` statement in `getMcpApiKeyStatus` for the new provider, including its placeholder string if applicable.
|
||||
- **Ollama Exception:** If adding Ollama or another provider *not* requiring an API key, add a specific check at the beginning of `isApiKeySet` and `getMcpApiKeyStatus` to return `true` immediately for that provider.
|
||||
|
||||
5. **Update Supported Models List:**
|
||||
- Edit `scripts/modules/supported-models.json`.
|
||||
- Add a new key for the `<provider-name>`.
|
||||
- Add an array of model objects under the provider key, each including:
|
||||
- `id`: The specific model identifier (e.g., `claude-3-opus-20240229`).
|
||||
- `name`: A user-friendly name (optional).
|
||||
- `swe_score`, `cost_per_1m_tokens`: (Optional) Add performance/cost data if available.
|
||||
- `allowed_roles`: An array of roles (`"main"`, `"research"`, `"fallback"`) the model is suitable for.
|
||||
- `max_tokens`: (Optional but recommended) The maximum token limit for the model.
|
||||
|
||||
6. **Update Environment Examples:**
|
||||
- Add the new `PROVIDER_API_KEY` to `.env.example`.
|
||||
- Add the new `PROVIDER_API_KEY` with its placeholder (`YOUR_PROVIDER_API_KEY_HERE`) to the `env` section for `taskmaster-ai` in `.cursor/mcp.json.example` (if it exists) or update instructions.
|
||||
|
||||
7. **Add Unit Tests:**
|
||||
- Create `tests/unit/ai-providers/<provider-name>.test.js`.
|
||||
- Mock the `@ai-sdk/<provider-name>` module and the core `ai` module functions (`generateText`, `streamText`, `generateObject`).
|
||||
- Write tests for each exported function (`generate<ProviderName>Text`, etc.) to verify:
|
||||
- Correct client instantiation.
|
||||
- Correct parameters passed to the mocked Vercel AI SDK functions.
|
||||
- Correct handling of results.
|
||||
- Error handling (missing API key, SDK errors).
|
||||
|
||||
8. **Documentation:**
|
||||
- Update any relevant documentation (like `README.md` or other rules) mentioning supported providers or configuration.
|
||||
|
||||
*(Note: For providers **without** an official Vercel AI SDK adapter, the process would involve directly using the provider's own SDK or API within the `src/ai-providers/<provider-name>.js` module and manually constructing responses compatible with the unified service layer, which is significantly more complex.)*
|
||||
@@ -1,14 +1,14 @@
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "openai",
|
||||
"modelId": "o3-mini",
|
||||
"provider": "xai",
|
||||
"modelId": "grok-3",
|
||||
"maxTokens": 100000,
|
||||
"temperature": 0.2
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"provider": "xai",
|
||||
"modelId": "grok-3",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1
|
||||
},
|
||||
|
||||
16
package-lock.json
generated
16
package-lock.json
generated
@@ -15,7 +15,7 @@
|
||||
"@ai-sdk/mistral": "^1.2.7",
|
||||
"@ai-sdk/openai": "^1.3.20",
|
||||
"@ai-sdk/perplexity": "^1.1.7",
|
||||
"@ai-sdk/xai": "^1.2.13",
|
||||
"@ai-sdk/xai": "^1.2.15",
|
||||
"@anthropic-ai/sdk": "^0.39.0",
|
||||
"@openrouter/ai-sdk-provider": "^0.4.5",
|
||||
"ai": "^4.3.10",
|
||||
@@ -155,9 +155,9 @@
|
||||
}
|
||||
},
|
||||
"node_modules/@ai-sdk/openai-compatible": {
|
||||
"version": "0.2.11",
|
||||
"resolved": "https://registry.npmjs.org/@ai-sdk/openai-compatible/-/openai-compatible-0.2.11.tgz",
|
||||
"integrity": "sha512-56U0uNCcFTygA4h6R/uREv8r5sKA3/pGkpIAnMOpRzs5wiARlTYakWW3LZgxg6D4Gpeswo4gwNJczB7nM0K1Qg==",
|
||||
"version": "0.2.13",
|
||||
"resolved": "https://registry.npmjs.org/@ai-sdk/openai-compatible/-/openai-compatible-0.2.13.tgz",
|
||||
"integrity": "sha512-tB+lL8Z3j0qDod/mvxwjrPhbLUHp/aQW+NvMoJaqeTtP+Vmv5qR800pncGczxn5WN0pllQm+7aIRDnm69XeSbg==",
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
@@ -257,12 +257,12 @@
|
||||
}
|
||||
},
|
||||
"node_modules/@ai-sdk/xai": {
|
||||
"version": "1.2.13",
|
||||
"resolved": "https://registry.npmjs.org/@ai-sdk/xai/-/xai-1.2.13.tgz",
|
||||
"integrity": "sha512-vJnzpnRVIVuGgDHrHgfIc3ImjVp6YN+salVX99r+HWd2itiGQy+vAmQKen0Ml8BK/avnLyQneeYRfdlgDBkhgQ==",
|
||||
"version": "1.2.15",
|
||||
"resolved": "https://registry.npmjs.org/@ai-sdk/xai/-/xai-1.2.15.tgz",
|
||||
"integrity": "sha512-18qEYyVHIqTiOMePE00bfx4kJrTHM4dV3D3Rpe+eBISlY80X1FnzZRnRTJo3Q6MOSmW5+ZKVaX9jtryhoFpn0A==",
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"@ai-sdk/openai-compatible": "0.2.11",
|
||||
"@ai-sdk/openai-compatible": "0.2.13",
|
||||
"@ai-sdk/provider": "1.1.3",
|
||||
"@ai-sdk/provider-utils": "2.2.7"
|
||||
},
|
||||
|
||||
@@ -44,7 +44,7 @@
|
||||
"@ai-sdk/mistral": "^1.2.7",
|
||||
"@ai-sdk/openai": "^1.3.20",
|
||||
"@ai-sdk/perplexity": "^1.1.7",
|
||||
"@ai-sdk/xai": "^1.2.13",
|
||||
"@ai-sdk/xai": "^1.2.15",
|
||||
"@anthropic-ai/sdk": "^0.39.0",
|
||||
"@openrouter/ai-sdk-provider": "^0.4.5",
|
||||
"ai": "^4.3.10",
|
||||
|
||||
@@ -26,6 +26,7 @@ import * as anthropic from '../../src/ai-providers/anthropic.js';
|
||||
import * as perplexity from '../../src/ai-providers/perplexity.js';
|
||||
import * as google from '../../src/ai-providers/google.js'; // Import Google provider
|
||||
import * as openai from '../../src/ai-providers/openai.js'; // ADD: Import OpenAI provider
|
||||
import * as xai from '../../src/ai-providers/xai.js'; // ADD: Import xAI provider
|
||||
// TODO: Import other provider modules when implemented (ollama, etc.)
|
||||
|
||||
// --- Provider Function Map ---
|
||||
@@ -54,6 +55,12 @@ const PROVIDER_FUNCTIONS = {
|
||||
generateText: openai.generateOpenAIText,
|
||||
streamText: openai.streamOpenAIText,
|
||||
generateObject: openai.generateOpenAIObject
|
||||
},
|
||||
xai: {
|
||||
// ADD: xAI entry
|
||||
generateText: xai.generateXaiText,
|
||||
streamText: xai.streamXaiText,
|
||||
generateObject: xai.generateXaiObject // Note: Object generation might be unsupported
|
||||
}
|
||||
// TODO: Add entries for ollama, etc. when implemented
|
||||
};
|
||||
|
||||
@@ -30,7 +30,7 @@ try {
|
||||
const CONFIG_FILE_NAME = '.taskmasterconfig';
|
||||
|
||||
// Define valid providers dynamically from the loaded MODEL_MAP
|
||||
const VALID_PROVIDERS = Object.keys(MODEL_MAP);
|
||||
const VALID_PROVIDERS = Object.keys(MODEL_MAP || {});
|
||||
|
||||
// Default configuration values (used if .taskmasterconfig is missing or incomplete)
|
||||
const DEFAULTS = {
|
||||
@@ -534,6 +534,7 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
|
||||
case 'azure':
|
||||
apiKeyToCheck = mcpEnv.AZURE_OPENAI_API_KEY;
|
||||
placeholderValue = 'YOUR_AZURE_OPENAI_API_KEY_HERE';
|
||||
break;
|
||||
default:
|
||||
return false; // Unknown provider
|
||||
}
|
||||
|
||||
@@ -263,28 +263,35 @@
|
||||
],
|
||||
"xai": [
|
||||
{
|
||||
"id": "grok3",
|
||||
"swe_score": 0,
|
||||
"id": "grok-3",
|
||||
"name": "Grok 3",
|
||||
"swe_score": null,
|
||||
"cost_per_1m_tokens": { "input": 3, "output": 15 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 131072
|
||||
},
|
||||
{
|
||||
"id": "grok-3-mini",
|
||||
"name": "Grok 3 Mini",
|
||||
"swe_score": 0,
|
||||
"cost_per_1m_tokens": { "input": 0.3, "output": 0.5 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 131072
|
||||
},
|
||||
{
|
||||
"id": "grok3-fast",
|
||||
"name": "Grok 3 Fast",
|
||||
"swe_score": 0,
|
||||
"cost_per_1m_tokens": { "input": 5, "output": 25 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 131072
|
||||
},
|
||||
{
|
||||
"id": "grok-3-mini-fast",
|
||||
"swe_score": 0,
|
||||
"cost_per_1m_tokens": { "input": 0.6, "output": 4 },
|
||||
"allowed_roles": ["main", "fallback", "research"]
|
||||
"allowed_roles": ["main", "fallback", "research"],
|
||||
"max_tokens": 131072
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
160
src/ai-providers/xai.js
Normal file
160
src/ai-providers/xai.js
Normal file
@@ -0,0 +1,160 @@
|
||||
/**
|
||||
* src/ai-providers/xai.js
|
||||
*
|
||||
* Implementation for interacting with xAI models (e.g., Grok)
|
||||
* using the Vercel AI SDK.
|
||||
*/
|
||||
import { createXai } from '@ai-sdk/xai';
|
||||
import { generateText, streamText, generateObject } from 'ai'; // Only import what's used
|
||||
import { log } from '../../scripts/modules/utils.js'; // Assuming utils is accessible
|
||||
|
||||
// --- Client Instantiation ---
|
||||
function getClient(apiKey) {
|
||||
if (!apiKey) {
|
||||
throw new Error('xAI API key is required.');
|
||||
}
|
||||
// Create and return a new instance directly
|
||||
return createXai({
|
||||
apiKey: apiKey
|
||||
// Add baseURL or other options if needed later
|
||||
});
|
||||
}
|
||||
|
||||
// --- Standardized Service Function Implementations ---
|
||||
|
||||
/**
|
||||
* Generates text using an xAI model.
|
||||
*
|
||||
* @param {object} params - Parameters for the text generation.
|
||||
* @param {string} params.apiKey - The xAI API key.
|
||||
* @param {string} params.modelId - The specific xAI model ID (e.g., 'grok-3').
|
||||
* @param {Array<object>} params.messages - The messages array (e.g., [{ role: 'user', content: '...' }]).
|
||||
* @param {number} [params.maxTokens] - Maximum tokens for the response.
|
||||
* @param {number} [params.temperature] - Temperature for generation.
|
||||
* @returns {Promise<string>} The generated text content.
|
||||
* @throws {Error} If the API call fails.
|
||||
*/
|
||||
export async function generateXaiText({
|
||||
apiKey,
|
||||
modelId,
|
||||
messages,
|
||||
maxTokens,
|
||||
temperature
|
||||
}) {
|
||||
log('debug', `Generating xAI text with model: ${modelId}`);
|
||||
try {
|
||||
const client = getClient(apiKey);
|
||||
const result = await generateText({
|
||||
model: client(modelId), // Correct model invocation
|
||||
messages: messages,
|
||||
maxTokens: maxTokens,
|
||||
temperature: temperature,
|
||||
// Add reasoningEffort or other xAI specific options via providerOptions if needed
|
||||
providerOptions: { xai: { reasoningEffort: 'high' } }
|
||||
});
|
||||
log(
|
||||
'debug',
|
||||
`xAI generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
|
||||
);
|
||||
return result.text;
|
||||
} catch (error) {
|
||||
log('error', `xAI generateText failed: ${error.message}`);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Streams text using an xAI model.
|
||||
*
|
||||
* @param {object} params - Parameters for the text streaming.
|
||||
* @param {string} params.apiKey - The xAI API key.
|
||||
* @param {string} params.modelId - The specific xAI model ID.
|
||||
* @param {Array<object>} params.messages - The messages array.
|
||||
* @param {number} [params.maxTokens] - Maximum tokens for the response.
|
||||
* @param {number} [params.temperature] - Temperature for generation.
|
||||
* @returns {Promise<object>} The full stream result object from the Vercel AI SDK.
|
||||
* @throws {Error} If the API call fails to initiate the stream.
|
||||
*/
|
||||
export async function streamXaiText({
|
||||
apiKey,
|
||||
modelId,
|
||||
messages,
|
||||
maxTokens,
|
||||
temperature
|
||||
}) {
|
||||
log('debug', `Streaming xAI text with model: ${modelId}`);
|
||||
try {
|
||||
const client = getClient(apiKey);
|
||||
const stream = await streamText({
|
||||
model: client(modelId), // Correct model invocation
|
||||
messages: messages,
|
||||
maxTokens: maxTokens,
|
||||
temperature: temperature
|
||||
});
|
||||
return stream; // Return the full stream object
|
||||
} catch (error) {
|
||||
log('error', `xAI streamText failed: ${error.message}`, error.stack);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Generates a structured object using an xAI model.
|
||||
* Note: Based on search results, xAI models do not currently support Object Generation.
|
||||
* This function is included for structural consistency but will likely fail if called.
|
||||
*
|
||||
* @param {object} params - Parameters for object generation.
|
||||
* @param {string} params.apiKey - The xAI API key.
|
||||
* @param {string} params.modelId - The specific xAI model ID.
|
||||
* @param {Array<object>} params.messages - The messages array.
|
||||
* @param {import('zod').ZodSchema} params.schema - The Zod schema for the object.
|
||||
* @param {string} params.objectName - A name for the object/tool.
|
||||
* @param {number} [params.maxTokens] - Maximum tokens for the response.
|
||||
* @param {number} [params.temperature] - Temperature for generation.
|
||||
* @param {number} [params.maxRetries] - Max retries for validation/generation.
|
||||
* @returns {Promise<object>} The generated object matching the schema.
|
||||
* @throws {Error} If generation or validation fails.
|
||||
*/
|
||||
export async function generateXaiObject({
|
||||
apiKey,
|
||||
modelId,
|
||||
messages,
|
||||
schema,
|
||||
objectName = 'generated_xai_object',
|
||||
maxTokens,
|
||||
temperature,
|
||||
maxRetries = 3
|
||||
}) {
|
||||
log(
|
||||
'warn', // Log warning as this is likely unsupported
|
||||
`Attempting to generate xAI object ('${objectName}') with model: ${modelId}. This may not be supported by the provider.`
|
||||
);
|
||||
try {
|
||||
const client = getClient(apiKey);
|
||||
const result = await generateObject({
|
||||
model: client(modelId), // Correct model invocation
|
||||
// Note: mode might need adjustment if xAI ever supports object generation differently
|
||||
mode: 'tool',
|
||||
schema: schema,
|
||||
messages: messages,
|
||||
tool: {
|
||||
name: objectName,
|
||||
description: `Generate a ${objectName} based on the prompt.`
|
||||
},
|
||||
maxTokens: maxTokens,
|
||||
temperature: temperature,
|
||||
maxRetries: maxRetries
|
||||
});
|
||||
log(
|
||||
'debug',
|
||||
`xAI generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
|
||||
);
|
||||
return result.object;
|
||||
} catch (error) {
|
||||
log(
|
||||
'error',
|
||||
`xAI generateObject ('${objectName}') failed: ${error.message}. (Likely unsupported by provider)`
|
||||
);
|
||||
throw error; // Re-throw the error
|
||||
}
|
||||
}
|
||||
@@ -1336,7 +1336,7 @@ When testing the non-streaming `generateTextService` call in `updateSubtaskById`
|
||||
### Details:
|
||||
|
||||
|
||||
## 22. Implement `openai.js` Provider Module using Vercel AI SDK [in-progress]
|
||||
## 22. Implement `openai.js` Provider Module using Vercel AI SDK [done]
|
||||
### Dependencies: None
|
||||
### Description: Create and implement the `openai.js` module within `src/ai-providers/`. This module should contain functions to interact with the OpenAI API (streaming and non-streaming) using the **Vercel AI SDK**, adhering to the standardized input/output format defined for `ai-services-unified.js`. (Optional, implement if OpenAI models are needed).
|
||||
### Details:
|
||||
@@ -1785,7 +1785,7 @@ export async function generateGoogleObject({
|
||||
### Details:
|
||||
|
||||
|
||||
## 29. Implement `xai.js` Provider Module using Vercel AI SDK [pending]
|
||||
## 29. Implement `xai.js` Provider Module using Vercel AI SDK [in-progress]
|
||||
### Dependencies: None
|
||||
### Description: Create and implement the `xai.js` module within `src/ai-providers/`. This module should contain functions to interact with xAI models (e.g., Grok) using the **Vercel AI SDK (`@ai-sdk/xai`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
|
||||
### Details:
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Task ID: 71
|
||||
# Title: Add Model-Specific maxTokens Override Configuration
|
||||
# Status: pending
|
||||
# Status: done
|
||||
# Dependencies: None
|
||||
# Priority: high
|
||||
# Description: Implement functionality to allow specifying a maximum token limit for individual AI models within .taskmasterconfig, overriding the role-based maxTokens if the model-specific limit is lower.
|
||||
|
||||
11
tasks/task_072.txt
Normal file
11
tasks/task_072.txt
Normal file
@@ -0,0 +1,11 @@
|
||||
# Task ID: 72
|
||||
# Title: Implement PDF Generation for Project Progress and Dependency Overview
|
||||
# Status: pending
|
||||
# Dependencies: None
|
||||
# Priority: medium
|
||||
# Description: Develop a feature to generate a PDF report summarizing the current project progress and visualizing the dependency chain of tasks.
|
||||
# Details:
|
||||
This task involves creating a new CLI command named 'progress-pdf' within the existing project framework to generate a PDF document. The PDF should include: 1) A summary of project progress, detailing completed, in-progress, and pending tasks with their respective statuses and completion percentages if applicable. 2) A visual representation of the task dependency chain, leveraging the output format from the 'diagram' command (Task 70) to include Mermaid diagrams or similar visualizations converted to image format for PDF embedding. Use a suitable PDF generation library (e.g., jsPDF for JavaScript environments or ReportLab for Python) compatible with the project’s tech stack. Ensure the command accepts optional parameters to filter tasks by status or ID for customized reports. Handle large dependency chains by implementing pagination or zoomable image sections in the PDF. Provide error handling for cases where diagram generation or PDF creation fails, logging detailed error messages for debugging. Consider accessibility by ensuring text in the PDF is selectable and images have alt text descriptions. Integrate this feature with the existing CLI structure, ensuring it aligns with the project’s configuration settings (e.g., output directory for generated files). Document the command usage and parameters in the project’s help or README file.
|
||||
|
||||
# Test Strategy:
|
||||
Verify the completion of this task through a multi-step testing approach: 1) Unit Tests: Create tests for the PDF generation logic to ensure data (task statuses and dependencies) is correctly fetched and formatted. Mock the PDF library to test edge cases like empty task lists or broken dependency links. 2) Integration Tests: Run the 'progress-pdf' command via CLI to confirm it generates a PDF file without errors under normal conditions, with filtered task IDs, and with various status filters. Validate that the output file exists in the specified directory and can be opened. 3) Content Validation: Manually or via automated script, check the generated PDF content to ensure it accurately reflects the current project state (compare task counts and statuses against a known project state) and includes dependency diagrams as images. 4) Error Handling Tests: Simulate failures in diagram generation or PDF creation (e.g., invalid output path, library errors) and verify that appropriate error messages are logged and the command exits gracefully. 5) Accessibility Checks: Use a PDF accessibility tool or manual inspection to confirm that text is selectable and images have alt text. Run these tests across different project sizes (small with few tasks, large with complex dependencies) to ensure scalability. Document test results and include a sample PDF output in the project repository for reference.
|
||||
File diff suppressed because one or more lines are too long
Reference in New Issue
Block a user