fix(ai-providers): change generateObject mode from 'tool' to 'auto' for better provider compatibility
Fixes the Perplexity research role failing with a 'tool-mode object generation' error. The hardcoded 'tool' mode was incompatible with providers like Perplexity that support structured JSON output but not function calling/tool use. Using 'auto' mode lets the AI SDK choose the best approach for each provider.
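
For context, a minimal sketch of what the change means at a call site (illustrative only, not the project's exact wiring; it assumes the AI SDK's generateObject together with a zod schema):

import { generateObject } from 'ai';
import { z } from 'zod';

// With mode: 'auto' the SDK picks a structured-output strategy the provider actually
// supports (e.g. JSON output for Perplexity) instead of forcing tool/function calls
// as the hardcoded mode: 'tool' did.
async function getSummary(client, modelId, prompt) {
  const { object } = await generateObject({
    model: client(modelId),
    schema: z.object({ summary: z.string() }),
    mode: 'auto', // previously 'tool'
    messages: [{ role: 'user', content: prompt }]
  });
  return object.summary;
}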
@@ -1,6 +1,6 @@
# Task ID: 92
# Title: Implement Project Root Environment Variable Support in MCP Configuration
# Status: in-progress
# Status: review
# Dependencies: 1, 3, 17
# Priority: medium
# Description: Add support for a 'TASK_MASTER_PROJECT_ROOT' environment variable in MCP configuration, allowing it to be set in both mcp.json and .env, with precedence over other methods. This will define the root directory for the MCP server and take precedence over all other project root resolution methods. The implementation should be backward compatible with existing workflows that don't use this variable.
@@ -44,49 +44,49 @@ Implementation steps:
- Test with invalid or non-existent directories to verify error handling

# Subtasks:
## 92.1. Update configuration loader to check for TASK_MASTER_PROJECT_ROOT environment variable [pending]
## 1. Update configuration loader to check for TASK_MASTER_PROJECT_ROOT environment variable [pending]
### Dependencies: None
### Description: Modify the configuration loading system to check for the TASK_MASTER_PROJECT_ROOT environment variable as the primary source for project root directory. Ensure proper error handling if the variable is set but points to a non-existent or inaccessible directory.
### Details:

## 92.2. Add support for 'projectRoot' in configuration files [pending]
## 2. Add support for 'projectRoot' in configuration files [pending]
### Dependencies: None
### Description: Implement support for a 'projectRoot' key in mcp_config.toml and mcp.json configuration files as a fallback when the environment variable is not set. Update the configuration parser to recognize and validate this field.
### Details:

## 92.3. Refactor project root resolution logic with clear precedence rules [pending]
## 3. Refactor project root resolution logic with clear precedence rules [pending]
### Dependencies: None
### Description: Create a unified project root resolution function that follows the precedence order: 1) TASK_MASTER_PROJECT_ROOT environment variable, 2) 'projectRoot' in config files, 3) existing resolution methods. Ensure this function is used consistently throughout the codebase.
### Details:

## 92.4. Update all MCP tools to use the new project root resolution [pending]
## 4. Update all MCP tools to use the new project root resolution [pending]
### Dependencies: None
### Description: Identify all MCP tools and components that need to access the project root and update them to use the new resolution logic. Ensure consistent behavior across all parts of the system.
### Details:

## 92.5. Add comprehensive tests for the new project root resolution [pending]
## 5. Add comprehensive tests for the new project root resolution [pending]
### Dependencies: None
### Description: Create unit and integration tests to verify the correct behavior of the project root resolution logic under various configurations and edge cases.
### Details:

## 92.6. Update documentation with new configuration options [pending]
## 6. Update documentation with new configuration options [pending]
### Dependencies: None
### Description: Update the project documentation to clearly explain the new TASK_MASTER_PROJECT_ROOT environment variable, the 'projectRoot' configuration option, and the precedence rules. Include examples of different configuration scenarios.
### Details:

## 92.7. Implement validation for project root directory [pending]
## 7. Implement validation for project root directory [pending]
### Dependencies: None
### Description: Add validation to ensure the specified project root directory exists and has the necessary permissions. Provide clear error messages when validation fails.
### Details:

## 92.8. Implement support for loading environment variables from .env files [pending]
## 8. Implement support for loading environment variables from .env files [pending]
### Dependencies: None
### Description: Add functionality to load the TASK_MASTER_PROJECT_ROOT variable from .env files in the workspace, following best practices for environment variable management in MCP servers.
### Details:
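
The precedence and validation rules spelled out in Task 92 above reduce to roughly the following sketch (illustrative only; resolveProjectRoot and its arguments are hypothetical names, not shipped Task Master code):

import fs from 'fs';
import path from 'path';
import dotenv from 'dotenv';

// Hypothetical helper: 1) TASK_MASTER_PROJECT_ROOT (shell or .env), 2) 'projectRoot'
// from mcp.json / mcp_config.toml, 3) existing fallback resolution (CLI args, cwd).
function resolveProjectRoot(config = {}, fallback = process.cwd()) {
  dotenv.config(); // load TASK_MASTER_PROJECT_ROOT from a .env file if one exists
  const candidates = [process.env.TASK_MASTER_PROJECT_ROOT, config.projectRoot];
  for (const candidate of candidates) {
    if (!candidate) continue;
    const resolved = path.resolve(candidate);
    if (!fs.existsSync(resolved) || !fs.statSync(resolved).isDirectory()) {
      throw new Error(`Configured project root "${resolved}" is not an accessible directory`);
    }
    return resolved;
  }
  return fallback;
}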
.taskmaster/tasks/task_096.txt (new file, 37 lines)
@@ -0,0 +1,37 @@
# Task ID: 96
# Title: Create Export Command for On-Demand Task File and PDF Generation
# Status: pending
# Dependencies: 2, 4, 95
# Priority: medium
# Description: Develop an 'export' CLI command that generates task files and comprehensive PDF exports on-demand, replacing automatic file generation and providing users with flexible export options.
# Details:
Implement a new 'export' command in the CLI that supports two primary modes: (1) generating individual task files on-demand (superseding the current automatic generation system), and (2) producing a comprehensive PDF export. The PDF should include: a first page with the output of 'tm list --with-subtasks', followed by individual pages for each task (using 'tm show <task_id>') and each subtask (using 'tm show <subtask_id>'). Integrate PDF generation using a robust library (e.g., pdfkit, Puppeteer, or jsPDF) to ensure high-quality output and proper pagination. Refactor or disable any existing automatic file generation logic to avoid performance overhead. Ensure the command supports flexible output paths and options for exporting only files, only PDF, or both. Update documentation and help output to reflect the new export capabilities. Consider concurrency and error handling for large projects. Ensure the export process is efficient and does not block the main CLI thread unnecessarily.

# Test Strategy:
1. Run the 'export' command with various options and verify that task files are generated only on-demand, not automatically. 2. Generate a PDF export and confirm that the first page contains the correct 'tm list --with-subtasks' output, and that each subsequent page accurately reflects the output of 'tm show <task_id>' and 'tm show <subtask_id>' for all tasks and subtasks. 3. Test exporting in projects with large numbers of tasks and subtasks to ensure performance and correctness. 4. Attempt exports with invalid paths or missing data to verify robust error handling. 5. Confirm that no automatic file generation occurs during normal task operations. 6. Review CLI help output and documentation for accuracy regarding the new export functionality.

# Subtasks:
## 1. Remove Automatic Task File Generation from Task Operations [pending]
### Dependencies: None
### Description: Eliminate all calls to generateTaskFiles() from task operations such as add-task, remove-task, set-status, and similar commands to prevent unnecessary performance overhead.
### Details:
Audit the codebase for any automatic invocations of generateTaskFiles() and remove or refactor them to ensure task files are not generated automatically during task operations.

## 2. Implement Export Command Infrastructure with On-Demand Task File Generation [pending]
### Dependencies: 96.1
### Description: Develop the CLI 'export' command infrastructure, enabling users to generate task files on-demand by invoking the preserved generateTaskFiles function only when requested.
### Details:
Create the export command with options for output paths and modes (files, PDF, or both). Ensure generateTaskFiles is only called within this command and not elsewhere.

## 3. Implement Comprehensive PDF Export Functionality [pending]
### Dependencies: 96.2
### Description: Add PDF export capability to the export command, generating a structured PDF with a first page listing all tasks and subtasks, followed by individual pages for each task and subtask, using a robust PDF library.
### Details:
Integrate a PDF generation library (e.g., pdfkit, Puppeteer, or jsPDF). Ensure the PDF includes the output of 'tm list --with-subtasks' on the first page, and uses 'tm show <task_id>' and 'tm show <subtask_id>' for subsequent pages. Handle pagination, concurrency, and error handling for large projects.

## 4. Update Documentation, Tests, and CLI Help for Export Workflow [pending]
### Dependencies: 96.2, 96.3
### Description: Revise all relevant documentation, automated tests, and CLI help output to reflect the new export-based workflow and available options.
### Details:
Update user guides, README files, and CLI help text. Add or modify tests to cover the new export command and its options. Ensure all documentation accurately describes the new workflow and usage.
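
The PDF layout Task 96 describes (an overview page from 'tm list --with-subtasks', then one page per 'tm show <id>') could look roughly like this sketch; it assumes pdfkit, which the task only lists as one candidate library, and omits option parsing and error handling:

import fs from 'fs';
import { execSync } from 'child_process';
import PDFDocument from 'pdfkit';

// Hypothetical export helper: page 1 is the task list overview, then one page per id.
function exportTasksToPdf(ids, outputPath = 'tasks-export.pdf') {
  const doc = new PDFDocument();
  doc.pipe(fs.createWriteStream(outputPath));
  doc.font('Courier').fontSize(9);
  doc.text(execSync('tm list --with-subtasks').toString());
  for (const id of ids) {
    doc.addPage();
    doc.text(execSync(`tm show ${id}`).toString());
  }
  doc.end();
}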
@@ -5467,6 +5467,70 @@
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 92,
|
||||
"title": "Implement Project Root Environment Variable Support in MCP Configuration",
|
||||
"description": "Add support for a 'TASK_MASTER_PROJECT_ROOT' environment variable in MCP configuration, allowing it to be set in both mcp.json and .env, with precedence over other methods. This will define the root directory for the MCP server and take precedence over all other project root resolution methods. The implementation should be backward compatible with existing workflows that don't use this variable.",
|
||||
"status": "review",
|
||||
"dependencies": [
|
||||
1,
|
||||
3,
|
||||
17
|
||||
],
|
||||
"priority": "medium",
|
||||
"details": "Update the MCP server configuration system to support the TASK_MASTER_PROJECT_ROOT environment variable as the standard way to specify the project root directory. This provides better namespacing and avoids conflicts with other tools that might use a generic PROJECT_ROOT variable. Implement a clear precedence order for project root resolution:\n\n1. TASK_MASTER_PROJECT_ROOT environment variable (from shell or .env file)\n2. 'projectRoot' key in mcp_config.toml or mcp.json configuration files\n3. Existing resolution logic (CLI args, current working directory, etc.)\n\nModify the configuration loading logic to check for these sources in the specified order, ensuring backward compatibility. All MCP tools and components should use this standardized project root resolution logic. The TASK_MASTER_PROJECT_ROOT environment variable will be required because path resolution is delegated to the MCP client implementation, ensuring consistent behavior across different environments.\n\nImplementation steps:\n1. Identify all code locations where project root is determined (initialization, utility functions)\n2. Update configuration loaders to check for TASK_MASTER_PROJECT_ROOT in environment variables\n3. Add support for 'projectRoot' in configuration files as a fallback\n4. Refactor project root resolution logic to follow the new precedence rules\n5. Ensure all MCP tools and functions use the updated resolution logic\n6. Add comprehensive error handling for cases where TASK_MASTER_PROJECT_ROOT is not set or invalid\n7. Implement validation to ensure the specified directory exists and is accessible",
|
||||
"testStrategy": "1. Write unit tests to verify that the config loader correctly reads project root from environment variables and configuration files with the expected precedence:\n - Test TASK_MASTER_PROJECT_ROOT environment variable takes precedence when set\n - Test 'projectRoot' in configuration files is used when environment variable is absent\n - Test fallback to existing resolution logic when neither is specified\n\n2. Add integration tests to ensure that the MCP server and all tools use the correct project root:\n - Test server startup with TASK_MASTER_PROJECT_ROOT set to various valid and invalid paths\n - Test configuration file loading from the specified project root\n - Test path resolution for resources relative to the project root\n\n3. Test backward compatibility:\n - Verify existing workflows function correctly without the new variables\n - Ensure no regression in projects not using the new configuration options\n\n4. Manual testing:\n - Set TASK_MASTER_PROJECT_ROOT in shell environment and verify correct behavior\n - Set TASK_MASTER_PROJECT_ROOT in .env file and verify it's properly loaded\n - Configure 'projectRoot' in configuration files and test precedence\n - Test with invalid or non-existent directories to verify error handling",
|
||||
"subtasks": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "Update configuration loader to check for TASK_MASTER_PROJECT_ROOT environment variable",
|
||||
"description": "Modify the configuration loading system to check for the TASK_MASTER_PROJECT_ROOT environment variable as the primary source for project root directory. Ensure proper error handling if the variable is set but points to a non-existent or inaccessible directory.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"title": "Add support for 'projectRoot' in configuration files",
|
||||
"description": "Implement support for a 'projectRoot' key in mcp_config.toml and mcp.json configuration files as a fallback when the environment variable is not set. Update the configuration parser to recognize and validate this field.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"title": "Refactor project root resolution logic with clear precedence rules",
|
||||
"description": "Create a unified project root resolution function that follows the precedence order: 1) TASK_MASTER_PROJECT_ROOT environment variable, 2) 'projectRoot' in config files, 3) existing resolution methods. Ensure this function is used consistently throughout the codebase.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"title": "Update all MCP tools to use the new project root resolution",
|
||||
"description": "Identify all MCP tools and components that need to access the project root and update them to use the new resolution logic. Ensure consistent behavior across all parts of the system.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 5,
|
||||
"title": "Add comprehensive tests for the new project root resolution",
|
||||
"description": "Create unit and integration tests to verify the correct behavior of the project root resolution logic under various configurations and edge cases.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 6,
|
||||
"title": "Update documentation with new configuration options",
|
||||
"description": "Update the project documentation to clearly explain the new TASK_MASTER_PROJECT_ROOT environment variable, the 'projectRoot' configuration option, and the precedence rules. Include examples of different configuration scenarios.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 7,
|
||||
"title": "Implement validation for project root directory",
|
||||
"description": "Add validation to ensure the specified project root directory exists and has the necessary permissions. Provide clear error messages when validation fails.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 8,
|
||||
"title": "Implement support for loading environment variables from .env files",
|
||||
"description": "Add functionality to load the TASK_MASTER_PROJECT_ROOT variable from .env files in the workspace, following best practices for environment variable management in MCP servers.",
|
||||
"status": "pending"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 93,
|
||||
"title": "Implement Google Vertex AI Provider Integration",
|
||||
@@ -5613,70 +5677,6 @@
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 92,
|
||||
"title": "Implement Project Root Environment Variable Support in MCP Configuration",
|
||||
"description": "Add support for a 'TASK_MASTER_PROJECT_ROOT' environment variable in MCP configuration, allowing it to be set in both mcp.json and .env, with precedence over other methods. This will define the root directory for the MCP server and take precedence over all other project root resolution methods. The implementation should be backward compatible with existing workflows that don't use this variable.",
|
||||
"status": "in-progress",
|
||||
"dependencies": [
|
||||
1,
|
||||
3,
|
||||
17
|
||||
],
|
||||
"priority": "medium",
|
||||
"details": "Update the MCP server configuration system to support the TASK_MASTER_PROJECT_ROOT environment variable as the standard way to specify the project root directory. This provides better namespacing and avoids conflicts with other tools that might use a generic PROJECT_ROOT variable. Implement a clear precedence order for project root resolution:\n\n1. TASK_MASTER_PROJECT_ROOT environment variable (from shell or .env file)\n2. 'projectRoot' key in mcp_config.toml or mcp.json configuration files\n3. Existing resolution logic (CLI args, current working directory, etc.)\n\nModify the configuration loading logic to check for these sources in the specified order, ensuring backward compatibility. All MCP tools and components should use this standardized project root resolution logic. The TASK_MASTER_PROJECT_ROOT environment variable will be required because path resolution is delegated to the MCP client implementation, ensuring consistent behavior across different environments.\n\nImplementation steps:\n1. Identify all code locations where project root is determined (initialization, utility functions)\n2. Update configuration loaders to check for TASK_MASTER_PROJECT_ROOT in environment variables\n3. Add support for 'projectRoot' in configuration files as a fallback\n4. Refactor project root resolution logic to follow the new precedence rules\n5. Ensure all MCP tools and functions use the updated resolution logic\n6. Add comprehensive error handling for cases where TASK_MASTER_PROJECT_ROOT is not set or invalid\n7. Implement validation to ensure the specified directory exists and is accessible",
|
||||
"testStrategy": "1. Write unit tests to verify that the config loader correctly reads project root from environment variables and configuration files with the expected precedence:\n - Test TASK_MASTER_PROJECT_ROOT environment variable takes precedence when set\n - Test 'projectRoot' in configuration files is used when environment variable is absent\n - Test fallback to existing resolution logic when neither is specified\n\n2. Add integration tests to ensure that the MCP server and all tools use the correct project root:\n - Test server startup with TASK_MASTER_PROJECT_ROOT set to various valid and invalid paths\n - Test configuration file loading from the specified project root\n - Test path resolution for resources relative to the project root\n\n3. Test backward compatibility:\n - Verify existing workflows function correctly without the new variables\n - Ensure no regression in projects not using the new configuration options\n\n4. Manual testing:\n - Set TASK_MASTER_PROJECT_ROOT in shell environment and verify correct behavior\n - Set TASK_MASTER_PROJECT_ROOT in .env file and verify it's properly loaded\n - Configure 'projectRoot' in configuration files and test precedence\n - Test with invalid or non-existent directories to verify error handling",
|
||||
"subtasks": [
|
||||
{
|
||||
"id": 92.1,
|
||||
"title": "Update configuration loader to check for TASK_MASTER_PROJECT_ROOT environment variable",
|
||||
"description": "Modify the configuration loading system to check for the TASK_MASTER_PROJECT_ROOT environment variable as the primary source for project root directory. Ensure proper error handling if the variable is set but points to a non-existent or inaccessible directory.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.2,
|
||||
"title": "Add support for 'projectRoot' in configuration files",
|
||||
"description": "Implement support for a 'projectRoot' key in mcp_config.toml and mcp.json configuration files as a fallback when the environment variable is not set. Update the configuration parser to recognize and validate this field.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.3,
|
||||
"title": "Refactor project root resolution logic with clear precedence rules",
|
||||
"description": "Create a unified project root resolution function that follows the precedence order: 1) TASK_MASTER_PROJECT_ROOT environment variable, 2) 'projectRoot' in config files, 3) existing resolution methods. Ensure this function is used consistently throughout the codebase.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.4,
|
||||
"title": "Update all MCP tools to use the new project root resolution",
|
||||
"description": "Identify all MCP tools and components that need to access the project root and update them to use the new resolution logic. Ensure consistent behavior across all parts of the system.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.5,
|
||||
"title": "Add comprehensive tests for the new project root resolution",
|
||||
"description": "Create unit and integration tests to verify the correct behavior of the project root resolution logic under various configurations and edge cases.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.6,
|
||||
"title": "Update documentation with new configuration options",
|
||||
"description": "Update the project documentation to clearly explain the new TASK_MASTER_PROJECT_ROOT environment variable, the 'projectRoot' configuration option, and the precedence rules. Include examples of different configuration scenarios.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.7,
|
||||
"title": "Implement validation for project root directory",
|
||||
"description": "Add validation to ensure the specified project root directory exists and has the necessary permissions. Provide clear error messages when validation fails.",
|
||||
"status": "pending"
|
||||
},
|
||||
{
|
||||
"id": 92.8,
|
||||
"title": "Implement support for loading environment variables from .env files",
|
||||
"description": "Add functionality to load the TASK_MASTER_PROJECT_ROOT variable from .env files in the workspace, following best practices for environment variable management in MCP servers.",
|
||||
"status": "pending"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 95,
|
||||
"title": "Implement .taskmaster Directory Structure",
|
||||
@@ -5808,6 +5808,69 @@
|
||||
"testStrategy": "Test complete workflows and verify only .taskmaster/ directory is created in project root. Check that all Task Master operations respect the new file organization. Verify .gitignore compatibility."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 96,
|
||||
"title": "Create Export Command for On-Demand Task File and PDF Generation",
|
||||
"description": "Develop an 'export' CLI command that generates task files and comprehensive PDF exports on-demand, replacing automatic file generation and providing users with flexible export options.",
|
||||
"details": "Implement a new 'export' command in the CLI that supports two primary modes: (1) generating individual task files on-demand (superseding the current automatic generation system), and (2) producing a comprehensive PDF export. The PDF should include: a first page with the output of 'tm list --with-subtasks', followed by individual pages for each task (using 'tm show <task_id>') and each subtask (using 'tm show <subtask_id>'). Integrate PDF generation using a robust library (e.g., pdfkit, Puppeteer, or jsPDF) to ensure high-quality output and proper pagination. Refactor or disable any existing automatic file generation logic to avoid performance overhead. Ensure the command supports flexible output paths and options for exporting only files, only PDF, or both. Update documentation and help output to reflect the new export capabilities. Consider concurrency and error handling for large projects. Ensure the export process is efficient and does not block the main CLI thread unnecessarily.",
|
||||
"testStrategy": "1. Run the 'export' command with various options and verify that task files are generated only on-demand, not automatically. 2. Generate a PDF export and confirm that the first page contains the correct 'tm list --with-subtasks' output, and that each subsequent page accurately reflects the output of 'tm show <task_id>' and 'tm show <subtask_id>' for all tasks and subtasks. 3. Test exporting in projects with large numbers of tasks and subtasks to ensure performance and correctness. 4. Attempt exports with invalid paths or missing data to verify robust error handling. 5. Confirm that no automatic file generation occurs during normal task operations. 6. Review CLI help output and documentation for accuracy regarding the new export functionality.",
|
||||
"status": "pending",
|
||||
"dependencies": [
|
||||
2,
|
||||
4,
|
||||
95
|
||||
],
|
||||
"priority": "medium",
|
||||
"subtasks": [
|
||||
{
|
||||
"id": 1,
|
||||
"title": "Remove Automatic Task File Generation from Task Operations",
|
||||
"description": "Eliminate all calls to generateTaskFiles() from task operations such as add-task, remove-task, set-status, and similar commands to prevent unnecessary performance overhead.",
|
||||
"dependencies": [],
|
||||
"details": "Audit the codebase for any automatic invocations of generateTaskFiles() and remove or refactor them to ensure task files are not generated automatically during task operations.",
|
||||
"status": "pending",
|
||||
"testStrategy": "Verify that no task file generation occurs during any task operation by running the CLI and monitoring file system changes.",
|
||||
"parentTaskId": 96
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"title": "Implement Export Command Infrastructure with On-Demand Task File Generation",
|
||||
"description": "Develop the CLI 'export' command infrastructure, enabling users to generate task files on-demand by invoking the preserved generateTaskFiles function only when requested.",
|
||||
"dependencies": [
|
||||
1
|
||||
],
|
||||
"details": "Create the export command with options for output paths and modes (files, PDF, or both). Ensure generateTaskFiles is only called within this command and not elsewhere.",
|
||||
"status": "pending",
|
||||
"testStrategy": "Test the export command to confirm task files are generated only when explicitly requested and that output paths and options function as intended.",
|
||||
"parentTaskId": 96
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"title": "Implement Comprehensive PDF Export Functionality",
|
||||
"description": "Add PDF export capability to the export command, generating a structured PDF with a first page listing all tasks and subtasks, followed by individual pages for each task and subtask, using a robust PDF library.",
|
||||
"dependencies": [
|
||||
2
|
||||
],
|
||||
"details": "Integrate a PDF generation library (e.g., pdfkit, Puppeteer, or jsPDF). Ensure the PDF includes the output of 'tm list --with-subtasks' on the first page, and uses 'tm show <task_id>' and 'tm show <subtask_id>' for subsequent pages. Handle pagination, concurrency, and error handling for large projects.",
|
||||
"status": "pending",
|
||||
"testStrategy": "Generate PDFs for projects of varying sizes and verify layout, content accuracy, and performance. Test error handling and concurrency under load.",
|
||||
"parentTaskId": 96
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"title": "Update Documentation, Tests, and CLI Help for Export Workflow",
|
||||
"description": "Revise all relevant documentation, automated tests, and CLI help output to reflect the new export-based workflow and available options.",
|
||||
"dependencies": [
|
||||
2,
|
||||
3
|
||||
],
|
||||
"details": "Update user guides, README files, and CLI help text. Add or modify tests to cover the new export command and its options. Ensure all documentation accurately describes the new workflow and usage.",
|
||||
"status": "pending",
|
||||
"testStrategy": "Review documentation for completeness and accuracy. Run all tests to ensure coverage of the new export command and verify CLI help output.",
|
||||
"parentTaskId": 96
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
package-lock.json (generated, 4 lines changed)
@@ -1,12 +1,12 @@
{
"name": "task-master-ai",
"version": "0.16.1",
"version": "0.16.2-rc.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "task-master-ai",
"version": "0.16.1",
"version": "0.16.2-rc.0",
"license": "MIT WITH Commons-Clause",
"dependencies": {
"@ai-sdk/amazon-bedrock": "^2.2.9",

@@ -24,9 +24,9 @@ import {
|
||||
getAzureBaseURL,
|
||||
getBedrockBaseURL,
|
||||
getVertexProjectId,
|
||||
getVertexLocation
|
||||
} from './config-manager.js';
|
||||
import { log, findProjectRoot, resolveEnvVariable } from './utils.js';
|
||||
getVertexLocation,
|
||||
} from "./config-manager.js";
|
||||
import { log, findProjectRoot, resolveEnvVariable } from "./utils.js";
|
||||
|
||||
// Import provider classes
|
||||
import {
|
||||
@@ -39,8 +39,8 @@ import {
|
||||
OllamaAIProvider,
|
||||
BedrockAIProvider,
|
||||
AzureProvider,
|
||||
VertexAIProvider
|
||||
} from '../../src/ai-providers/index.js';
|
||||
VertexAIProvider,
|
||||
} from "../../src/ai-providers/index.js";
|
||||
|
||||
// Create provider instances
|
||||
const PROVIDERS = {
|
||||
@@ -53,36 +53,36 @@ const PROVIDERS = {
|
||||
ollama: new OllamaAIProvider(),
|
||||
bedrock: new BedrockAIProvider(),
|
||||
azure: new AzureProvider(),
|
||||
vertex: new VertexAIProvider()
|
||||
vertex: new VertexAIProvider(),
|
||||
};
|
||||
|
||||
// Helper function to get cost for a specific model
|
||||
function _getCostForModel(providerName, modelId) {
|
||||
if (!MODEL_MAP || !MODEL_MAP[providerName]) {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Provider "${providerName}" not found in MODEL_MAP. Cannot determine cost for model ${modelId}.`
|
||||
);
|
||||
return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
|
||||
return { inputCost: 0, outputCost: 0, currency: "USD" }; // Default to zero cost
|
||||
}
|
||||
|
||||
const modelData = MODEL_MAP[providerName].find((m) => m.id === modelId);
|
||||
|
||||
if (!modelData || !modelData.cost_per_1m_tokens) {
|
||||
log(
|
||||
'debug',
|
||||
"debug",
|
||||
`Cost data not found for model "${modelId}" under provider "${providerName}". Assuming zero cost.`
|
||||
);
|
||||
return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
|
||||
return { inputCost: 0, outputCost: 0, currency: "USD" }; // Default to zero cost
|
||||
}
|
||||
|
||||
// Ensure currency is part of the returned object, defaulting if not present
|
||||
const currency = modelData.cost_per_1m_tokens.currency || 'USD';
|
||||
const currency = modelData.cost_per_1m_tokens.currency || "USD";
|
||||
|
||||
return {
|
||||
inputCost: modelData.cost_per_1m_tokens.input || 0,
|
||||
outputCost: modelData.cost_per_1m_tokens.output || 0,
|
||||
currency: currency
|
||||
currency: currency,
|
||||
};
|
||||
}
|
||||
|
||||
@@ -92,13 +92,13 @@ const INITIAL_RETRY_DELAY_MS = 1000;
|
||||
|
||||
// Helper function to check if an error is retryable
|
||||
function isRetryableError(error) {
|
||||
const errorMessage = error.message?.toLowerCase() || '';
|
||||
const errorMessage = error.message?.toLowerCase() || "";
|
||||
return (
|
||||
errorMessage.includes('rate limit') ||
|
||||
errorMessage.includes('overloaded') ||
|
||||
errorMessage.includes('service temporarily unavailable') ||
|
||||
errorMessage.includes('timeout') ||
|
||||
errorMessage.includes('network error') ||
|
||||
errorMessage.includes("rate limit") ||
|
||||
errorMessage.includes("overloaded") ||
|
||||
errorMessage.includes("service temporarily unavailable") ||
|
||||
errorMessage.includes("timeout") ||
|
||||
errorMessage.includes("network error") ||
|
||||
error.status === 429 ||
|
||||
error.status >= 500
|
||||
);
|
||||
@@ -123,7 +123,7 @@ function _extractErrorMessage(error) {
|
||||
}
|
||||
|
||||
// Attempt 3: Look for nested error message in response body if it's JSON string
|
||||
if (typeof error?.responseBody === 'string') {
|
||||
if (typeof error?.responseBody === "string") {
|
||||
try {
|
||||
const body = JSON.parse(error.responseBody);
|
||||
if (body?.error?.message) {
|
||||
@@ -135,20 +135,20 @@ function _extractErrorMessage(error) {
|
||||
}
|
||||
|
||||
// Attempt 4: Use the top-level message if it exists
|
||||
if (typeof error?.message === 'string' && error.message) {
|
||||
if (typeof error?.message === "string" && error.message) {
|
||||
return error.message;
|
||||
}
|
||||
|
||||
// Attempt 5: Handle simple string errors
|
||||
if (typeof error === 'string') {
|
||||
if (typeof error === "string") {
|
||||
return error;
|
||||
}
|
||||
|
||||
// Fallback
|
||||
return 'An unknown AI service error occurred.';
|
||||
return "An unknown AI service error occurred.";
|
||||
} catch (e) {
|
||||
// Safety net
|
||||
return 'Failed to extract error message.';
|
||||
return "Failed to extract error message.";
|
||||
}
|
||||
}
|
||||
|
||||
@@ -162,17 +162,17 @@ function _extractErrorMessage(error) {
|
||||
*/
|
||||
function _resolveApiKey(providerName, session, projectRoot = null) {
|
||||
const keyMap = {
|
||||
openai: 'OPENAI_API_KEY',
|
||||
anthropic: 'ANTHROPIC_API_KEY',
|
||||
google: 'GOOGLE_API_KEY',
|
||||
perplexity: 'PERPLEXITY_API_KEY',
|
||||
mistral: 'MISTRAL_API_KEY',
|
||||
azure: 'AZURE_OPENAI_API_KEY',
|
||||
openrouter: 'OPENROUTER_API_KEY',
|
||||
xai: 'XAI_API_KEY',
|
||||
ollama: 'OLLAMA_API_KEY',
|
||||
bedrock: 'AWS_ACCESS_KEY_ID',
|
||||
vertex: 'GOOGLE_API_KEY'
|
||||
openai: "OPENAI_API_KEY",
|
||||
anthropic: "ANTHROPIC_API_KEY",
|
||||
google: "GOOGLE_API_KEY",
|
||||
perplexity: "PERPLEXITY_API_KEY",
|
||||
mistral: "MISTRAL_API_KEY",
|
||||
azure: "AZURE_OPENAI_API_KEY",
|
||||
openrouter: "OPENROUTER_API_KEY",
|
||||
xai: "XAI_API_KEY",
|
||||
ollama: "OLLAMA_API_KEY",
|
||||
bedrock: "AWS_ACCESS_KEY_ID",
|
||||
vertex: "GOOGLE_API_KEY",
|
||||
};
|
||||
|
||||
const envVarName = keyMap[providerName];
|
||||
@@ -185,7 +185,7 @@ function _resolveApiKey(providerName, session, projectRoot = null) {
|
||||
const apiKey = resolveEnvVariable(envVarName, session, projectRoot);
|
||||
|
||||
// Special handling for providers that can use alternative auth
|
||||
if (providerName === 'ollama' || providerName === 'bedrock') {
|
||||
if (providerName === "ollama" || providerName === "bedrock") {
|
||||
return apiKey || null;
|
||||
}
|
||||
|
||||
@@ -223,7 +223,7 @@ async function _attemptProviderCallWithRetries(
|
||||
try {
|
||||
if (getDebugFlag()) {
|
||||
log(
|
||||
'info',
|
||||
"info",
|
||||
`Attempt ${retries + 1}/${MAX_RETRIES + 1} calling ${fnName} (Provider: ${providerName}, Model: ${modelId}, Role: ${attemptRole})`
|
||||
);
|
||||
}
|
||||
@@ -233,14 +233,14 @@ async function _attemptProviderCallWithRetries(
|
||||
|
||||
if (getDebugFlag()) {
|
||||
log(
|
||||
'info',
|
||||
"info",
|
||||
`${fnName} succeeded for role ${attemptRole} (Provider: ${providerName}) on attempt ${retries + 1}`
|
||||
);
|
||||
}
|
||||
return result;
|
||||
} catch (error) {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Attempt ${retries + 1} failed for role ${attemptRole} (${fnName} / ${providerName}): ${error.message}`
|
||||
);
|
||||
|
||||
@@ -248,13 +248,13 @@ async function _attemptProviderCallWithRetries(
|
||||
retries++;
|
||||
const delay = INITIAL_RETRY_DELAY_MS * Math.pow(2, retries - 1);
|
||||
log(
|
||||
'info',
|
||||
"info",
|
||||
`Something went wrong on the provider side. Retrying in ${delay / 1000}s...`
|
||||
);
|
||||
await new Promise((resolve) => setTimeout(resolve, delay));
|
||||
} else {
|
||||
log(
|
||||
'error',
|
||||
"error",
|
||||
`Something went wrong on the provider side. Max retries reached for role ${attemptRole} (${fnName} / ${providerName}).`
|
||||
);
|
||||
throw error;
|
||||
@@ -296,11 +296,11 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
...restApiParams
|
||||
} = params;
|
||||
if (getDebugFlag()) {
|
||||
log('info', `${serviceType}Service called`, {
|
||||
log("info", `${serviceType}Service called`, {
|
||||
role: initialRole,
|
||||
commandName,
|
||||
outputType,
|
||||
projectRoot
|
||||
projectRoot,
|
||||
});
|
||||
}
|
||||
|
||||
@@ -308,23 +308,23 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
const userId = getUserId(effectiveProjectRoot);
|
||||
|
||||
let sequence;
|
||||
if (initialRole === 'main') {
|
||||
sequence = ['main', 'fallback', 'research'];
|
||||
} else if (initialRole === 'research') {
|
||||
sequence = ['research', 'fallback', 'main'];
|
||||
} else if (initialRole === 'fallback') {
|
||||
sequence = ['fallback', 'main', 'research'];
|
||||
if (initialRole === "main") {
|
||||
sequence = ["main", "fallback", "research"];
|
||||
} else if (initialRole === "research") {
|
||||
sequence = ["research", "fallback", "main"];
|
||||
} else if (initialRole === "fallback") {
|
||||
sequence = ["fallback", "main", "research"];
|
||||
} else {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Unknown initial role: ${initialRole}. Defaulting to main -> fallback -> research sequence.`
|
||||
);
|
||||
sequence = ['main', 'fallback', 'research'];
|
||||
sequence = ["main", "fallback", "research"];
|
||||
}
|
||||
|
||||
let lastError = null;
|
||||
let lastCleanErrorMessage =
|
||||
'AI service call failed for all configured roles.';
|
||||
"AI service call failed for all configured roles.";
|
||||
|
||||
for (const currentRole of sequence) {
|
||||
let providerName,
|
||||
@@ -337,20 +337,20 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
telemetryData = null;
|
||||
|
||||
try {
|
||||
log('info', `New AI service call with role: ${currentRole}`);
|
||||
log("info", `New AI service call with role: ${currentRole}`);
|
||||
|
||||
if (currentRole === 'main') {
|
||||
if (currentRole === "main") {
|
||||
providerName = getMainProvider(effectiveProjectRoot);
|
||||
modelId = getMainModelId(effectiveProjectRoot);
|
||||
} else if (currentRole === 'research') {
|
||||
} else if (currentRole === "research") {
|
||||
providerName = getResearchProvider(effectiveProjectRoot);
|
||||
modelId = getResearchModelId(effectiveProjectRoot);
|
||||
} else if (currentRole === 'fallback') {
|
||||
} else if (currentRole === "fallback") {
|
||||
providerName = getFallbackProvider(effectiveProjectRoot);
|
||||
modelId = getFallbackModelId(effectiveProjectRoot);
|
||||
} else {
|
||||
log(
|
||||
'error',
|
||||
"error",
|
||||
`Unknown role encountered in _unifiedServiceRunner: ${currentRole}`
|
||||
);
|
||||
lastError =
|
||||
@@ -360,7 +360,7 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
|
||||
if (!providerName || !modelId) {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Skipping role '${currentRole}': Provider or Model ID not configured.`
|
||||
);
|
||||
lastError =
|
||||
@@ -375,7 +375,7 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
provider = PROVIDERS[providerName?.toLowerCase()];
|
||||
if (!provider) {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Skipping role '${currentRole}': Provider '${providerName}' not supported.`
|
||||
);
|
||||
lastError =
|
||||
@@ -385,10 +385,10 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
}
|
||||
|
||||
// Check API key if needed
|
||||
if (providerName?.toLowerCase() !== 'ollama') {
|
||||
if (providerName?.toLowerCase() !== "ollama") {
|
||||
if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Skipping role '${currentRole}' (Provider: ${providerName}): API key not set or invalid.`
|
||||
);
|
||||
lastError =
|
||||
@@ -404,17 +404,17 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
baseURL = getBaseUrlForRole(currentRole, effectiveProjectRoot);
|
||||
|
||||
// For Azure, use the global Azure base URL if role-specific URL is not configured
|
||||
if (providerName?.toLowerCase() === 'azure' && !baseURL) {
|
||||
if (providerName?.toLowerCase() === "azure" && !baseURL) {
|
||||
baseURL = getAzureBaseURL(effectiveProjectRoot);
|
||||
log('debug', `Using global Azure base URL: ${baseURL}`);
|
||||
} else if (providerName?.toLowerCase() === 'ollama' && !baseURL) {
|
||||
log("debug", `Using global Azure base URL: ${baseURL}`);
|
||||
} else if (providerName?.toLowerCase() === "ollama" && !baseURL) {
|
||||
// For Ollama, use the global Ollama base URL if role-specific URL is not configured
|
||||
baseURL = getOllamaBaseURL(effectiveProjectRoot);
|
||||
log('debug', `Using global Ollama base URL: ${baseURL}`);
|
||||
} else if (providerName?.toLowerCase() === 'bedrock' && !baseURL) {
|
||||
log("debug", `Using global Ollama base URL: ${baseURL}`);
|
||||
} else if (providerName?.toLowerCase() === "bedrock" && !baseURL) {
|
||||
// For Bedrock, use the global Bedrock base URL if role-specific URL is not configured
|
||||
baseURL = getBedrockBaseURL(effectiveProjectRoot);
|
||||
log('debug', `Using global Bedrock base URL: ${baseURL}`);
|
||||
log("debug", `Using global Bedrock base URL: ${baseURL}`);
|
||||
}
|
||||
|
||||
// Get AI parameters for the current role
|
||||
@@ -429,12 +429,12 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
let providerSpecificParams = {};
|
||||
|
||||
// Handle Vertex AI specific configuration
|
||||
if (providerName?.toLowerCase() === 'vertex') {
|
||||
if (providerName?.toLowerCase() === "vertex") {
|
||||
// Get Vertex project ID and location
|
||||
const projectId =
|
||||
getVertexProjectId(effectiveProjectRoot) ||
|
||||
resolveEnvVariable(
|
||||
'VERTEX_PROJECT_ID',
|
||||
"VERTEX_PROJECT_ID",
|
||||
session,
|
||||
effectiveProjectRoot
|
||||
);
|
||||
@@ -442,15 +442,15 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
const location =
|
||||
getVertexLocation(effectiveProjectRoot) ||
|
||||
resolveEnvVariable(
|
||||
'VERTEX_LOCATION',
|
||||
"VERTEX_LOCATION",
|
||||
session,
|
||||
effectiveProjectRoot
|
||||
) ||
|
||||
'us-central1';
|
||||
"us-central1";
|
||||
|
||||
// Get credentials path if available
|
||||
const credentialsPath = resolveEnvVariable(
|
||||
'GOOGLE_APPLICATION_CREDENTIALS',
|
||||
"GOOGLE_APPLICATION_CREDENTIALS",
|
||||
session,
|
||||
effectiveProjectRoot
|
||||
);
|
||||
@@ -459,18 +459,18 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
providerSpecificParams = {
|
||||
projectId,
|
||||
location,
|
||||
...(credentialsPath && { credentials: { credentialsFromEnv: true } })
|
||||
...(credentialsPath && { credentials: { credentialsFromEnv: true } }),
|
||||
};
|
||||
|
||||
log(
|
||||
'debug',
|
||||
"debug",
|
||||
`Using Vertex AI configuration: Project ID=${projectId}, Location=${location}`
|
||||
);
|
||||
}
|
||||
|
||||
const messages = [];
|
||||
if (systemPrompt) {
|
||||
messages.push({ role: 'system', content: systemPrompt });
|
||||
messages.push({ role: "system", content: systemPrompt });
|
||||
}
|
||||
|
||||
// IN THE FUTURE WHEN DOING CONTEXT IMPROVEMENTS
|
||||
@@ -492,9 +492,9 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
// }
|
||||
|
||||
if (prompt) {
|
||||
messages.push({ role: 'user', content: prompt });
|
||||
messages.push({ role: "user", content: prompt });
|
||||
} else {
|
||||
throw new Error('User prompt content is missing.');
|
||||
throw new Error("User prompt content is missing.");
|
||||
}
|
||||
|
||||
const callParams = {
|
||||
@@ -504,9 +504,9 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
temperature: roleParams.temperature,
|
||||
messages,
|
||||
...(baseURL && { baseURL }),
|
||||
...(serviceType === 'generateObject' && { schema, objectName }),
|
||||
...(serviceType === "generateObject" && { schema, objectName }),
|
||||
...providerSpecificParams,
|
||||
...restApiParams
|
||||
...restApiParams,
|
||||
};
|
||||
|
||||
providerResponse = await _attemptProviderCallWithRetries(
|
||||
@@ -527,7 +527,7 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
modelId,
|
||||
inputTokens: providerResponse.usage.inputTokens,
|
||||
outputTokens: providerResponse.usage.outputTokens,
|
||||
outputType
|
||||
outputType,
|
||||
});
|
||||
} catch (telemetryError) {
|
||||
// logAiUsage already logs its own errors and returns null on failure
|
||||
@@ -535,21 +535,21 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
}
|
||||
} else if (userId && providerResponse && !providerResponse.usage) {
|
||||
log(
|
||||
'warn',
|
||||
"warn",
|
||||
`Cannot log telemetry for ${commandName} (${providerName}/${modelId}): AI result missing 'usage' data. (May be expected for streams)`
|
||||
);
|
||||
}
|
||||
|
||||
let finalMainResult;
|
||||
if (serviceType === 'generateText') {
|
||||
if (serviceType === "generateText") {
|
||||
finalMainResult = providerResponse.text;
|
||||
} else if (serviceType === 'generateObject') {
|
||||
} else if (serviceType === "generateObject") {
|
||||
finalMainResult = providerResponse.object;
|
||||
} else if (serviceType === 'streamText') {
|
||||
} else if (serviceType === "streamText") {
|
||||
finalMainResult = providerResponse;
|
||||
} else {
|
||||
log(
|
||||
'error',
|
||||
"error",
|
||||
`Unknown serviceType in _unifiedServiceRunner: ${serviceType}`
|
||||
);
|
||||
finalMainResult = providerResponse;
|
||||
@@ -557,37 +557,38 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
|
||||
return {
|
||||
mainResult: finalMainResult,
|
||||
telemetryData: telemetryData
|
||||
telemetryData: telemetryData,
|
||||
};
|
||||
} catch (error) {
|
||||
const cleanMessage = _extractErrorMessage(error);
|
||||
log(
|
||||
'error',
|
||||
`Service call failed for role ${currentRole} (Provider: ${providerName || 'unknown'}, Model: ${modelId || 'unknown'}): ${cleanMessage}`
|
||||
"error",
|
||||
`Service call failed for role ${currentRole} (Provider: ${providerName || "unknown"}, Model: ${modelId || "unknown"}): ${cleanMessage}`
|
||||
);
|
||||
lastError = error;
|
||||
lastCleanErrorMessage = cleanMessage;
|
||||
|
||||
if (serviceType === 'generateObject') {
|
||||
if (serviceType === "generateObject") {
|
||||
const lowerCaseMessage = cleanMessage.toLowerCase();
|
||||
if (
|
||||
lowerCaseMessage.includes(
|
||||
'no endpoints found that support tool use'
|
||||
"no endpoints found that support tool use"
|
||||
) ||
|
||||
lowerCaseMessage.includes('does not support tool_use') ||
|
||||
lowerCaseMessage.includes('tool use is not supported') ||
|
||||
lowerCaseMessage.includes('tools are not supported') ||
|
||||
lowerCaseMessage.includes('function calling is not supported')
|
||||
lowerCaseMessage.includes("does not support tool_use") ||
|
||||
lowerCaseMessage.includes("tool use is not supported") ||
|
||||
lowerCaseMessage.includes("tools are not supported") ||
|
||||
lowerCaseMessage.includes("function calling is not supported") ||
|
||||
lowerCaseMessage.includes("tool use is not supported")
|
||||
) {
|
||||
const specificErrorMsg = `Model '${modelId || 'unknown'}' via provider '${providerName || 'unknown'}' does not support the 'tool use' required by generateObjectService. Please configure a model that supports tool/function calling for the '${currentRole}' role, or use generateTextService if structured output is not strictly required.`;
|
||||
log('error', `[Tool Support Error] ${specificErrorMsg}`);
|
||||
const specificErrorMsg = `Model '${modelId || "unknown"}' via provider '${providerName || "unknown"}' does not support the 'tool use' required by generateObjectService. Please configure a model that supports tool/function calling for the '${currentRole}' role, or use generateTextService if structured output is not strictly required.`;
|
||||
log("error", `[Tool Support Error] ${specificErrorMsg}`);
|
||||
throw new Error(specificErrorMsg);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
log('error', `All roles in the sequence [${sequence.join(', ')}] failed.`);
|
||||
log("error", `All roles in the sequence [${sequence.join(", ")}] failed.`);
|
||||
throw new Error(lastCleanErrorMessage);
|
||||
}
|
||||
|
||||
@@ -607,10 +608,10 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
*/
|
||||
async function generateTextService(params) {
|
||||
// Ensure default outputType if not provided
|
||||
const defaults = { outputType: 'cli' };
|
||||
const defaults = { outputType: "cli" };
|
||||
const combinedParams = { ...defaults, ...params };
|
||||
// TODO: Validate commandName exists?
|
||||
return _unifiedServiceRunner('generateText', combinedParams);
|
||||
return _unifiedServiceRunner("generateText", combinedParams);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -628,13 +629,13 @@ async function generateTextService(params) {
|
||||
* @returns {Promise<object>} Result object containing the stream and usage data.
|
||||
*/
|
||||
async function streamTextService(params) {
|
||||
const defaults = { outputType: 'cli' };
|
||||
const defaults = { outputType: "cli" };
|
||||
const combinedParams = { ...defaults, ...params };
|
||||
// TODO: Validate commandName exists?
|
||||
// NOTE: Telemetry for streaming might be tricky as usage data often comes at the end.
|
||||
// The current implementation logs *after* the stream is returned.
|
||||
// We might need to adjust how usage is captured/logged for streams.
|
||||
return _unifiedServiceRunner('streamText', combinedParams);
|
||||
return _unifiedServiceRunner("streamText", combinedParams);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -656,13 +657,13 @@ async function streamTextService(params) {
|
||||
*/
|
||||
async function generateObjectService(params) {
|
||||
const defaults = {
|
||||
objectName: 'generated_object',
|
||||
objectName: "generated_object",
|
||||
maxRetries: 3,
|
||||
outputType: 'cli'
|
||||
outputType: "cli",
|
||||
};
|
||||
const combinedParams = { ...defaults, ...params };
|
||||
// TODO: Validate commandName exists?
|
||||
return _unifiedServiceRunner('generateObject', combinedParams);
|
||||
return _unifiedServiceRunner("generateObject", combinedParams);
|
||||
}
|
||||
|
||||
// --- Telemetry Function ---
|
||||
@@ -684,10 +685,10 @@ async function logAiUsage({
|
||||
modelId,
|
||||
inputTokens,
|
||||
outputTokens,
|
||||
outputType
|
||||
outputType,
|
||||
}) {
|
||||
try {
|
||||
const isMCP = outputType === 'mcp';
|
||||
const isMCP = outputType === "mcp";
|
||||
const timestamp = new Date().toISOString();
|
||||
const totalTokens = (inputTokens || 0) + (outputTokens || 0);
|
||||
|
||||
@@ -711,19 +712,19 @@ async function logAiUsage({
|
||||
outputTokens: outputTokens || 0,
|
||||
totalTokens,
|
||||
totalCost: parseFloat(totalCost.toFixed(6)),
|
||||
currency // Add currency to the telemetry data
|
||||
currency, // Add currency to the telemetry data
|
||||
};
|
||||
|
||||
if (getDebugFlag()) {
|
||||
log('info', 'AI Usage Telemetry:', telemetryData);
|
||||
log("info", "AI Usage Telemetry:", telemetryData);
|
||||
}
|
||||
|
||||
// TODO (Subtask 77.2): Send telemetryData securely to the external endpoint.
|
||||
|
||||
return telemetryData;
|
||||
} catch (error) {
|
||||
log('error', `Failed to log AI usage telemetry: ${error.message}`, {
|
||||
error
|
||||
log("error", `Failed to log AI usage telemetry: ${error.message}`, {
|
||||
error,
|
||||
});
|
||||
// Don't re-throw; telemetry failure shouldn't block core functionality.
|
||||
return null;
|
||||
@@ -734,5 +735,5 @@ export {
|
||||
generateTextService,
|
||||
streamTextService,
|
||||
generateObjectService,
|
||||
logAiUsage
|
||||
logAiUsage,
|
||||
};
|
||||
|
||||
@@ -153,7 +153,7 @@
"id": "sonar-pro",
"swe_score": 0,
"cost_per_1m_tokens": { "input": 3, "output": 15 },
"allowed_roles": ["research"],
"allowed_roles": ["main", "research"],
"max_tokens": 8700
},
{
@@ -174,14 +174,14 @@
"id": "sonar-reasoning-pro",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 2, "output": 8 },
"allowed_roles": ["main", "fallback"],
"allowed_roles": ["main", "research", "fallback"],
"max_tokens": 8700
},
{
"id": "sonar-reasoning",
"swe_score": 0.211,
"cost_per_1m_tokens": { "input": 1, "output": 5 },
"allowed_roles": ["main", "fallback"],
"allowed_roles": ["main", "research", "fallback"],
"max_tokens": 8700
}
],

@@ -1,5 +1,5 @@
|
||||
import { generateText, streamText, generateObject } from 'ai';
|
||||
import { log } from '../../scripts/modules/index.js';
|
||||
import { generateText, streamText, generateObject } from "ai";
|
||||
import { log } from "../../scripts/modules/index.js";
|
||||
|
||||
/**
|
||||
* Base class for all AI providers
|
||||
@@ -7,7 +7,7 @@ import { log } from '../../scripts/modules/index.js';
|
||||
export class BaseAIProvider {
|
||||
constructor() {
|
||||
if (this.constructor === BaseAIProvider) {
|
||||
throw new Error('BaseAIProvider cannot be instantiated directly');
|
||||
throw new Error("BaseAIProvider cannot be instantiated directly");
|
||||
}
|
||||
|
||||
// Each provider must set their name
|
||||
@@ -51,10 +51,10 @@ export class BaseAIProvider {
|
||||
params.temperature !== undefined &&
|
||||
(params.temperature < 0 || params.temperature > 1)
|
||||
) {
|
||||
throw new Error('Temperature must be between 0 and 1');
|
||||
throw new Error("Temperature must be between 0 and 1");
|
||||
}
|
||||
if (params.maxTokens !== undefined && params.maxTokens <= 0) {
|
||||
throw new Error('maxTokens must be greater than 0');
|
||||
throw new Error("maxTokens must be greater than 0");
|
||||
}
|
||||
}
|
||||
|
||||
@@ -63,13 +63,13 @@ export class BaseAIProvider {
|
||||
*/
|
||||
validateMessages(messages) {
|
||||
if (!messages || !Array.isArray(messages) || messages.length === 0) {
|
||||
throw new Error('Invalid or empty messages array provided');
|
||||
throw new Error("Invalid or empty messages array provided");
|
||||
}
|
||||
|
||||
for (const msg of messages) {
|
||||
if (!msg.role || !msg.content) {
|
||||
throw new Error(
|
||||
'Invalid message format. Each message must have role and content'
|
||||
"Invalid message format. Each message must have role and content"
|
||||
);
|
||||
}
|
||||
}
|
||||
@@ -79,9 +79,9 @@ export class BaseAIProvider {
|
||||
* Common error handler
|
||||
*/
|
||||
handleError(operation, error) {
|
||||
const errorMessage = error.message || 'Unknown error occurred';
|
||||
log('error', `${this.name} ${operation} failed: ${errorMessage}`, {
|
||||
error
|
||||
const errorMessage = error.message || "Unknown error occurred";
|
||||
log("error", `${this.name} ${operation} failed: ${errorMessage}`, {
|
||||
error,
|
||||
});
|
||||
throw new Error(
|
||||
`${this.name} API error during ${operation}: ${errorMessage}`
|
||||
@@ -93,7 +93,7 @@ export class BaseAIProvider {
|
||||
* @abstract
|
||||
*/
|
||||
getClient(params) {
|
||||
throw new Error('getClient must be implemented by provider');
|
||||
throw new Error("getClient must be implemented by provider");
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -105,7 +105,7 @@ export class BaseAIProvider {
|
||||
this.validateMessages(params.messages);
|
||||
|
||||
log(
|
||||
'debug',
|
||||
"debug",
|
||||
`Generating ${this.name} text with model: ${params.modelId}`
|
||||
);
|
||||
|
||||
@@ -114,11 +114,11 @@ export class BaseAIProvider {
|
||||
model: client(params.modelId),
|
||||
messages: params.messages,
|
||||
maxTokens: params.maxTokens,
|
||||
temperature: params.temperature
|
||||
temperature: params.temperature,
|
||||
});
|
||||
|
||||
log(
|
||||
'debug',
|
||||
"debug",
|
||||
`${this.name} generateText completed successfully for model: ${params.modelId}`
|
||||
);
|
||||
|
||||
@@ -127,11 +127,11 @@ export class BaseAIProvider {
|
||||
usage: {
|
||||
inputTokens: result.usage?.promptTokens,
|
||||
outputTokens: result.usage?.completionTokens,
|
||||
totalTokens: result.usage?.totalTokens
|
||||
}
|
||||
totalTokens: result.usage?.totalTokens,
|
||||
},
|
||||
};
|
||||
} catch (error) {
|
||||
this.handleError('text generation', error);
|
||||
this.handleError("text generation", error);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -143,24 +143,24 @@ export class BaseAIProvider {
|
||||
this.validateParams(params);
|
||||
this.validateMessages(params.messages);
|
||||
|
||||
log('debug', `Streaming ${this.name} text with model: ${params.modelId}`);
|
||||
log("debug", `Streaming ${this.name} text with model: ${params.modelId}`);
|
||||
|
||||
const client = this.getClient(params);
|
||||
const stream = await streamText({
|
||||
model: client(params.modelId),
|
||||
messages: params.messages,
|
||||
maxTokens: params.maxTokens,
|
||||
temperature: params.temperature
|
||||
temperature: params.temperature,
|
||||
});
|
||||
|
||||
log(
|
||||
'debug',
|
||||
"debug",
|
||||
`${this.name} streamText initiated successfully for model: ${params.modelId}`
|
||||
);
|
||||
|
||||
return stream;
|
||||
} catch (error) {
|
||||
this.handleError('text streaming', error);
|
||||
this.handleError("text streaming", error);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -173,14 +173,14 @@ export class BaseAIProvider {
|
||||
this.validateMessages(params.messages);
|
||||
|
||||
if (!params.schema) {
|
||||
throw new Error('Schema is required for object generation');
|
||||
throw new Error("Schema is required for object generation");
|
||||
}
|
||||
if (!params.objectName) {
|
||||
throw new Error('Object name is required for object generation');
|
||||
throw new Error("Object name is required for object generation");
|
||||
}
|
||||
|
||||
log(
|
||||
'debug',
|
||||
"debug",
|
||||
`Generating ${this.name} object ('${params.objectName}') with model: ${params.modelId}`
|
||||
);
|
||||
|
||||
@@ -189,13 +189,13 @@
model: client(params.modelId),
messages: params.messages,
schema: params.schema,
mode: 'tool',
mode: "auto",
maxTokens: params.maxTokens,
temperature: params.temperature
temperature: params.temperature,
});

log(
|
||||
'debug',
|
||||
"debug",
|
||||
`${this.name} generateObject completed successfully for model: ${params.modelId}`
|
||||
);
|
||||
|
||||
@@ -204,11 +204,11 @@ export class BaseAIProvider {
|
||||
usage: {
|
||||
inputTokens: result.usage?.promptTokens,
|
||||
outputTokens: result.usage?.completionTokens,
|
||||
totalTokens: result.usage?.totalTokens
|
||||
}
|
||||
totalTokens: result.usage?.totalTokens,
|
||||
},
|
||||
};
|
||||
} catch (error) {
|
||||
this.handleError('object generation', error);
|
||||
this.handleError("object generation", error);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||