Compare commits

3 commits · extension- ... extension-

| Author | SHA1 | Date |
|---|---|---|
|  | cf8f0f4b1c |  |
|  | 75c514cf5b |  |
|  | 41d1e671b1 |  |
@@ -6,8 +6,12 @@
 		"extension": "0.23.0"
 	},
 	"changesets": [
+		"fuzzy-brooms-mate",
 		"fuzzy-words-count",
+		"honest-steaks-check",
 		"tender-trams-refuse",
-		"vast-sites-leave"
+		"upset-ants-return",
+		"vast-sites-leave",
+		"wide-actors-report"
 	]
 }
.changeset/vast-weeks-fetch.md (new file, 7 lines)
@@ -0,0 +1,7 @@
+---
+"task-master-ai": minor
+---
+
+Add GPT-5 support with proper parameter handling
+
+- Added GPT-5 model to supported models configuration with SWE score of 0.749
CHANGELOG.md (+73 lines)
@@ -1,5 +1,78 @@
 # task-master-ai
+
+## 0.24.0-rc.1
+
+### Minor Changes
+
+- [#1093](https://github.com/eyaltoledano/claude-task-master/pull/1093) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
+
+  - Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
+  - When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
+  - Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
+
+- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
+
+  ## New Claude Code Agents
+
+  Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
+
+  ### task-orchestrator
+
+  Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
+
+  - Analyzes task dependencies to identify parallelizable work
+  - Deploys multiple task-executor agents for concurrent execution
+  - Monitors task completion and updates the dependency graph
+  - Automatically identifies and starts newly unblocked tasks
+
+  ### task-executor
+
+  Handles the actual implementation of individual tasks:
+
+  - Executes specific tasks identified by the orchestrator
+  - Works on concrete implementation rather than planning
+  - Updates task status and logs progress
+  - Can work in parallel with other executors on independent tasks
+
+  ### task-checker
+
+  Verifies that completed tasks meet their specifications:
+
+  - Reviews tasks marked as 'review' status
+  - Validates implementation against requirements
+  - Runs tests and checks for best practices
+  - Ensures quality before marking tasks as 'done'
+
+  ## Installation
+
+  When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to the `.claude/agents/` directory.
+
+  ## Usage Example
+
+  ```bash
+  # In Claude Code, after initializing a project with tasks:
+
+  # Use task-orchestrator to analyze and coordinate work
+  # The orchestrator will:
+  # 1. Check task dependencies
+  # 2. Identify tasks that can run in parallel
+  # 3. Deploy executors for available work
+  # 4. Monitor progress and deploy new executors as tasks complete
+
+  # Use task-executor for specific task implementation
+  # When the orchestrator identifies task 2.3 needs work:
+  # The executor will implement that specific task
+  ```
+
+  ## Benefits
+
+  - **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
+  - **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
+  - **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
+  - **Progress Tracking**: Real-time updates as tasks are completed
+  - **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
+
+### Patch Changes
+
+- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
+
+  Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
+
 ## 0.23.1-rc.0
 
 ### Patch Changes
@@ -3,3 +3,7 @@
 ## Task Master AI Instructions
 **Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
 @./.taskmaster/CLAUDE.md
+
+## Changeset Guidelines
+
+- When creating changesets, remember that it's user-facing, meaning we don't have to get into the specifics of the code, but rather mention what the end-user is getting or fixing from this changeset.
@@ -1,5 +1,14 @@
 # Change Log
+
+## 0.23.1-rc.0
+
+### Patch Changes
+
+- [#1090](https://github.com/eyaltoledano/claude-task-master/pull/1090) [`a464e55`](https://github.com/eyaltoledano/claude-task-master/commit/a464e550b886ef81b09df80588fe5881bce83d93) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with some users not being able to connect to Taskmaster MCP server while using the extension
+
+- Updated dependencies [[`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5), [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1), [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7)]:
+  - task-master-ai@0.24.0-rc.1
+
 ## 0.23.0
 
 ### Minor Changes
@@ -3,7 +3,7 @@
 	"private": true,
 	"displayName": "TaskMaster",
 	"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
-	"version": "0.23.0",
+	"version": "0.23.1-rc.0",
 	"publisher": "Hamster",
 	"icon": "assets/icon.png",
 	"engines": {
@@ -239,7 +239,7 @@
 		"check-types": "tsc --noEmit"
 	},
 	"dependencies": {
-		"task-master-ai": "*"
+		"task-master-ai": "0.24.0-rc.1"
 	},
 	"devDependencies": {
 		"@dnd-kit/core": "^6.3.1",
@@ -64,23 +64,49 @@ try {
 		fs.readFileSync(publishPackagePath, 'utf8')
 	);
 
-	// Check if versions are in sync
-	if (devPackage.version !== publishPackage.version) {
+	// Handle RC versions for VS Code Marketplace
+	let finalVersion = devPackage.version;
+	if (finalVersion.includes('-rc.')) {
 		console.log(
-			` - Version sync needed: ${publishPackage.version} → ${devPackage.version}`
+			' - Detected RC version, transforming for VS Code Marketplace...'
 		);
-		publishPackage.version = devPackage.version;
 
-		// Update the source package.publish.json file
+		// Extract base version and RC number
+		const baseVersion = finalVersion.replace(/-rc\.\d+$/, '');
+		const rcMatch = finalVersion.match(/rc\.(\d+)/);
+		const rcNumber = rcMatch ? parseInt(rcMatch[1]) : 0;
+
+		// For each RC iteration, increment the patch version
+		// This ensures unique versions in VS Code Marketplace
+		if (rcNumber > 0) {
+			const [major, minor, patch] = baseVersion.split('.').map(Number);
+			finalVersion = `${major}.${minor}.${patch + rcNumber}`;
+			console.log(
+				` - RC version mapping: ${devPackage.version} → ${finalVersion}`
+			);
+		} else {
+			finalVersion = baseVersion;
+			console.log(
+				` - RC version mapping: ${devPackage.version} → ${finalVersion}`
+			);
+		}
+	}
+
+	// Check if versions need updating
+	if (publishPackage.version !== finalVersion) {
+		console.log(
+			` - Version sync needed: ${publishPackage.version} → ${finalVersion}`
+		);
+		publishPackage.version = finalVersion;
+
+		// Update the source package.publish.json file with the final version
 		fs.writeFileSync(
 			publishPackagePath,
 			JSON.stringify(publishPackage, null, '\t') + '\n'
 		);
-		console.log(
-			` - Updated package.publish.json version to ${devPackage.version}`
-		);
+		console.log(` - Updated package.publish.json version to ${finalVersion}`);
 	} else {
-		console.log(` - Versions already in sync: ${devPackage.version}`);
+		console.log(` - Versions already in sync: ${finalVersion}`);
 	}
 
 	// Copy the (now synced) package.publish.json as package.json
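The hunk above maps an RC pre-release to a unique Marketplace version by adding the RC number to the patch component. As a standalone sketch of that mapping (the `mapRcVersion` helper name is hypothetical; the actual script inlines this logic):

```javascript
// Sketch of the RC → VS Code Marketplace version mapping, assuming
// versions of the form "major.minor.patch" or "major.minor.patch-rc.N".
function mapRcVersion(version) {
	if (!version.includes('-rc.')) return version;
	const baseVersion = version.replace(/-rc\.\d+$/, '');
	const rcMatch = version.match(/rc\.(\d+)/);
	const rcNumber = rcMatch ? parseInt(rcMatch[1], 10) : 0;
	if (rcNumber === 0) return baseVersion;
	// Each RC iteration bumps the patch so the Marketplace sees a unique version.
	const [major, minor, patch] = baseVersion.split('.').map(Number);
	return `${major}.${minor}.${patch + rcNumber}`;
}

console.log(mapRcVersion('0.24.0-rc.1')); // "0.24.1"
console.log(mapRcVersion('0.24.0-rc.0')); // "0.24.0"
console.log(mapRcVersion('0.23.1'));      // "0.23.1"
```

Note the implication: `0.24.0-rc.1` and a later real `0.24.1` would collide on the Marketplace, which is presumably why the RC line and the stable line use distinct patch ranges.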
@@ -124,8 +150,7 @@ try {
 		`cd vsix-build && npx vsce package --no-dependencies`
 	);
 
-	// Use the synced version for output
-	const finalVersion = devPackage.version;
+	// Use the transformed version for output
 	console.log(
 		`\nYour extension will be packaged to: vsix-build/task-master-${finalVersion}.vsix`
 	);
@@ -2,7 +2,7 @@
 	"name": "task-master-hamster",
 	"displayName": "Taskmaster AI",
 	"description": "A visual Kanban board interface for Taskmaster projects in VS Code",
-	"version": "0.23.0",
+	"version": "0.23.1",
 	"publisher": "Hamster",
 	"icon": "assets/icon.png",
 	"engines": {
@@ -1,4 +1,4 @@
-# Available Models as of July 23, 2025
+# Available Models as of August 8, 2025
 
 ## Main Models
 
@@ -24,6 +24,7 @@
 | openai | gpt-4-1-mini | — | 0.4 | 1.6 |
 | openai | gpt-4-1-nano | — | 0.1 | 0.4 |
 | openai | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
+| openai | gpt-5 | 0.749 | 5 | 20 |
 | google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
 | google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
 | google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
@@ -134,6 +135,7 @@
 | openai | gpt-4o | 0.332 | 2.5 | 10 |
 | openai | o3 | 0.5 | 2 | 8 |
 | openai | o4-mini | 0.45 | 1.1 | 4.4 |
+| openai | gpt-5 | 0.749 | 5 | 20 |
 | google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
 | google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
 | google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
@@ -1,6 +1,6 @@
 {
 	"name": "task-master-ai",
-	"version": "0.23.1-rc.0",
+	"version": "0.24.0-rc.1",
 	"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
 	"main": "index.js",
 	"type": "module",
@@ -557,6 +557,7 @@ function getParametersForRole(role, explicitRoot = null) {
 	const providerName = roleConfig.provider;
 
 	let effectiveMaxTokens = roleMaxTokens; // Start with the role's default
+	let effectiveTemperature = roleTemperature; // Start with the role's default
 
 	try {
 		// Find the model definition in MODEL_MAP
@@ -583,6 +584,20 @@ function getParametersForRole(role, explicitRoot = null) {
 				`No valid model-specific max_tokens override found for ${modelId}. Using role default: ${roleMaxTokens}`
 			);
 		}
+
+		// Check if a model-specific temperature is defined
+		if (
+			modelDefinition &&
+			typeof modelDefinition.temperature === 'number' &&
+			modelDefinition.temperature >= 0 &&
+			modelDefinition.temperature <= 1
+		) {
+			effectiveTemperature = modelDefinition.temperature;
+			log(
+				'debug',
+				`Applying model-specific temperature (${modelDefinition.temperature}) for ${modelId}`
+			);
+		}
 	} else {
 		// Special handling for custom OpenRouter models
 		if (providerName === CUSTOM_PROVIDERS.OPENROUTER) {
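The temperature override added above follows a simple precedence rule: a model-defined temperature in [0, 1] wins over the role default. A minimal sketch of that rule (the `resolveTemperature` helper name is hypothetical; the real code assigns to `effectiveTemperature` inline):

```javascript
// Sketch: model-specific temperature, if present and in range, overrides
// the role's default temperature.
function resolveTemperature(modelDefinition, roleTemperature) {
	if (
		modelDefinition &&
		typeof modelDefinition.temperature === 'number' &&
		modelDefinition.temperature >= 0 &&
		modelDefinition.temperature <= 1
	) {
		return modelDefinition.temperature;
	}
	return roleTemperature;
}

console.log(resolveTemperature({ temperature: 1 }, 0.2)); // 1 (gpt-5 pins temperature to 1)
console.log(resolveTemperature({}, 0.2));                 // 0.2 (falls back to role default)
```

This is what lets the `"temperature": 1` entry in supported-models.json take effect for GPT-5 without changing any role configuration.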
@@ -603,15 +618,16 @@ function getParametersForRole(role, explicitRoot = null) {
 	} catch (lookupError) {
 		log(
 			'warn',
-			`Error looking up model-specific max_tokens for ${modelId}: ${lookupError.message}. Using role default: ${roleMaxTokens}`
+			`Error looking up model-specific parameters for ${modelId}: ${lookupError.message}. Using role defaults.`
 		);
-		// Fallback to role default on error
+		// Fallback to role defaults on error
 		effectiveMaxTokens = roleMaxTokens;
+		effectiveTemperature = roleTemperature;
 	}
 
 	return {
 		maxTokens: effectiveMaxTokens,
-		temperature: roleTemperature
+		temperature: effectiveTemperature
 	};
 }
@@ -239,6 +239,18 @@
 			},
 			"allowed_roles": ["research"],
 			"supported": true
+		},
+		{
+			"id": "gpt-5",
+			"swe_score": 0.749,
+			"cost_per_1m_tokens": {
+				"input": 5.0,
+				"output": 20.0
+			},
+			"allowed_roles": ["main", "fallback"],
+			"max_tokens": 100000,
+			"temperature": 1,
+			"supported": true
 		}
 	],
 	"google": [
@@ -61,8 +61,11 @@ export class BaseAIProvider {
 		) {
 			throw new Error('Temperature must be between 0 and 1');
 		}
-		if (params.maxTokens !== undefined && params.maxTokens <= 0) {
-			throw new Error('maxTokens must be greater than 0');
+		if (params.maxTokens !== undefined) {
+			const maxTokens = Number(params.maxTokens);
+			if (!Number.isFinite(maxTokens) || maxTokens <= 0) {
+				throw new Error('maxTokens must be a finite number greater than 0');
+			}
 		}
 	}
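The tightened validation above coerces `maxTokens` with `Number()` before checking it, so numeric strings like `'1000'` pass while `0`, `NaN`, `Infinity`, and `'invalid'` throw. A standalone sketch (the `validateMaxTokens` helper name is hypothetical; the real check lives inside `validateOptionalParams`):

```javascript
// Sketch of the maxTokens check: coerce first, then require a finite,
// strictly positive number. undefined means "not provided" and is allowed.
function validateMaxTokens(maxTokens) {
	if (maxTokens === undefined) return;
	const n = Number(maxTokens);
	if (!Number.isFinite(n) || n <= 0) {
		throw new Error('maxTokens must be a finite number greater than 0');
	}
}

validateMaxTokens('1000'); // ok: coerces to 1000
validateMaxTokens(4096);   // ok
try {
	validateMaxTokens(Infinity);
} catch (e) {
	console.log(e.message); // rejected: not finite
}
```

The `Number.isFinite` guard is what the old `params.maxTokens <= 0` comparison was missing: `NaN` and `Infinity` both slipped through it.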
@@ -122,6 +125,37 @@ export class BaseAIProvider {
 		throw new Error('getRequiredApiKeyName must be implemented by provider');
 	}
 
+	/**
+	 * Determines if a model requires max_completion_tokens instead of maxTokens
+	 * Can be overridden by providers to specify their model requirements
+	 * @param {string} modelId - The model ID to check
+	 * @returns {boolean} True if the model requires max_completion_tokens
+	 */
+	requiresMaxCompletionTokens(modelId) {
+		return false; // Default behavior - most models use maxTokens
+	}
+
+	/**
+	 * Prepares token limit parameter based on model requirements
+	 * @param {string} modelId - The model ID
+	 * @param {number} maxTokens - The maximum tokens value
+	 * @returns {object} Object with either maxTokens or max_completion_tokens
+	 */
+	prepareTokenParam(modelId, maxTokens) {
+		if (maxTokens === undefined) {
+			return {};
+		}
+
+		// Ensure maxTokens is an integer
+		const tokenValue = Math.floor(Number(maxTokens));
+
+		if (this.requiresMaxCompletionTokens(modelId)) {
+			return { max_completion_tokens: tokenValue };
+		} else {
+			return { maxTokens: tokenValue };
+		}
+	}
+
 	/**
 	 * Generates text using the provider's model
 	 */
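The two methods added above let each provider pick the token-limit key without touching the call sites, which simply spread the result into the request. A condensed standalone sketch of the base/override interaction (class names are hypothetical stand-ins for `BaseAIProvider` and `OpenAIProvider`):

```javascript
// Sketch: the base class emits { maxTokens }, and a provider subclass
// flips GPT-5 models to { max_completion_tokens } by overriding one hook.
class BaseProviderSketch {
	requiresMaxCompletionTokens(modelId) {
		return false; // most models take maxTokens
	}

	prepareTokenParam(modelId, maxTokens) {
		if (maxTokens === undefined) return {};
		const tokenValue = Math.floor(Number(maxTokens)); // integer coercion
		return this.requiresMaxCompletionTokens(modelId)
			? { max_completion_tokens: tokenValue }
			: { maxTokens: tokenValue };
	}
}

class OpenAISketch extends BaseProviderSketch {
	requiresMaxCompletionTokens(modelId) {
		return Boolean(modelId && modelId.startsWith('gpt-5'));
	}
}

const p = new OpenAISketch();
console.log(p.prepareTokenParam('gpt-5', '1000.7')); // { max_completion_tokens: 1000 }
console.log(p.prepareTokenParam('gpt-4o', 1000));    // { maxTokens: 1000 }
console.log(p.prepareTokenParam('gpt-5', undefined)); // {}
```

Spreading the returned object (`...this.prepareTokenParam(...)`) means an empty result simply omits the parameter, which is why `undefined` maps to `{}` rather than to an explicit key.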
@@ -139,7 +173,7 @@ export class BaseAIProvider {
 			const result = await generateText({
 				model: client(params.modelId),
 				messages: params.messages,
-				maxTokens: params.maxTokens,
+				...this.prepareTokenParam(params.modelId, params.maxTokens),
 				temperature: params.temperature
 			});
 
@@ -175,7 +209,7 @@ export class BaseAIProvider {
 			const stream = await streamText({
 				model: client(params.modelId),
 				messages: params.messages,
-				maxTokens: params.maxTokens,
+				...this.prepareTokenParam(params.modelId, params.maxTokens),
 				temperature: params.temperature
 			});
 
@@ -216,7 +250,7 @@ export class BaseAIProvider {
 				messages: params.messages,
 				schema: zodSchema(params.schema),
 				mode: params.mode || 'auto',
-				maxTokens: params.maxTokens,
+				...this.prepareTokenParam(params.modelId, params.maxTokens),
 				temperature: params.temperature
 			});
 
@@ -20,6 +20,16 @@ export class OpenAIProvider extends BaseAIProvider {
 		return 'OPENAI_API_KEY';
 	}
 
+	/**
+	 * Determines if a model requires max_completion_tokens instead of maxTokens
+	 * GPT-5 models require max_completion_tokens parameter
+	 * @param {string} modelId - The model ID to check
+	 * @returns {boolean} True if the model requires max_completion_tokens
+	 */
+	requiresMaxCompletionTokens(modelId) {
+		return modelId && modelId.startsWith('gpt-5');
+	}
+
 	/**
 	 * Creates and returns an OpenAI client instance.
 	 * @param {object} params - Parameters for client initialization
tests/unit/ai-providers/openai.test.js (new file, 238 lines)
@@ -0,0 +1,238 @@
+/**
+ * Tests for OpenAI Provider - Token parameter handling for GPT-5
+ *
+ * This test suite covers:
+ * 1. Correct identification of GPT-5 models requiring max_completion_tokens
+ * 2. Token parameter preparation for different model types
+ * 3. Validation of maxTokens parameter
+ * 4. Integer coercion of token values
+ */
+
+import { jest } from '@jest/globals';
+
+// Mock the utils module to prevent logging during tests
+jest.mock('../../../scripts/modules/utils.js', () => ({
+	log: jest.fn()
+}));
+
+// Import the provider
+import { OpenAIProvider } from '../../../src/ai-providers/openai.js';
+
+describe('OpenAIProvider', () => {
+	let provider;
+
+	beforeEach(() => {
+		provider = new OpenAIProvider();
+		jest.clearAllMocks();
+	});
+
+	describe('requiresMaxCompletionTokens', () => {
+		it('should return true for GPT-5 models', () => {
+			expect(provider.requiresMaxCompletionTokens('gpt-5')).toBe(true);
+			expect(provider.requiresMaxCompletionTokens('gpt-5-mini')).toBe(true);
+			expect(provider.requiresMaxCompletionTokens('gpt-5-nano')).toBe(true);
+			expect(provider.requiresMaxCompletionTokens('gpt-5-turbo')).toBe(true);
+		});
+
+		it('should return false for non-GPT-5 models', () => {
+			expect(provider.requiresMaxCompletionTokens('gpt-4')).toBe(false);
+			expect(provider.requiresMaxCompletionTokens('gpt-4o')).toBe(false);
+			expect(provider.requiresMaxCompletionTokens('gpt-3.5-turbo')).toBe(false);
+			expect(provider.requiresMaxCompletionTokens('o1')).toBe(false);
+			expect(provider.requiresMaxCompletionTokens('o1-mini')).toBe(false);
+		});
+
+		it('should handle null/undefined modelId', () => {
+			expect(provider.requiresMaxCompletionTokens(null)).toBeFalsy();
+			expect(provider.requiresMaxCompletionTokens(undefined)).toBeFalsy();
+			expect(provider.requiresMaxCompletionTokens('')).toBeFalsy();
+		});
+	});
+
+	describe('prepareTokenParam', () => {
+		it('should return max_completion_tokens for GPT-5 models', () => {
+			const result = provider.prepareTokenParam('gpt-5', 1000);
+			expect(result).toEqual({ max_completion_tokens: 1000 });
+		});
+
+		it('should return maxTokens for non-GPT-5 models', () => {
+			const result = provider.prepareTokenParam('gpt-4', 1000);
+			expect(result).toEqual({ maxTokens: 1000 });
+		});
+
+		it('should coerce token value to integer', () => {
+			// Float values
+			const result1 = provider.prepareTokenParam('gpt-5', 1000.7);
+			expect(result1).toEqual({ max_completion_tokens: 1000 });
+
+			const result2 = provider.prepareTokenParam('gpt-4', 1000.7);
+			expect(result2).toEqual({ maxTokens: 1000 });
+
+			// String float
+			const result3 = provider.prepareTokenParam('gpt-5', '1000.7');
+			expect(result3).toEqual({ max_completion_tokens: 1000 });
+
+			// String integers (common CLI input path)
+			expect(provider.prepareTokenParam('gpt-5', '1000')).toEqual({
+				max_completion_tokens: 1000
+			});
+			expect(provider.prepareTokenParam('gpt-4', '1000')).toEqual({
+				maxTokens: 1000
+			});
+		});
+
+		it('should return empty object for undefined maxTokens', () => {
+			const result = provider.prepareTokenParam('gpt-5', undefined);
+			expect(result).toEqual({});
+		});
+
+		it('should handle edge cases', () => {
+			// Test with 0 (should still pass through as 0)
+			const result1 = provider.prepareTokenParam('gpt-5', 0);
+			expect(result1).toEqual({ max_completion_tokens: 0 });
+
+			// Test with string number
+			const result2 = provider.prepareTokenParam('gpt-5', '100');
+			expect(result2).toEqual({ max_completion_tokens: 100 });
+
+			// Test with negative number (will be floored, validation happens elsewhere)
+			const result3 = provider.prepareTokenParam('gpt-4', -10.5);
+			expect(result3).toEqual({ maxTokens: -11 });
+		});
+	});
+
+	describe('validateOptionalParams', () => {
+		it('should accept valid maxTokens values', () => {
+			expect(() =>
+				provider.validateOptionalParams({ maxTokens: 1000 })
+			).not.toThrow();
+			expect(() =>
+				provider.validateOptionalParams({ maxTokens: 1 })
+			).not.toThrow();
+			expect(() =>
+				provider.validateOptionalParams({ maxTokens: '1000' })
+			).not.toThrow();
+		});
+
+		it('should reject invalid maxTokens values', () => {
+			expect(() => provider.validateOptionalParams({ maxTokens: 0 })).toThrow(
+				Error
+			);
+			expect(() => provider.validateOptionalParams({ maxTokens: -1 })).toThrow(
+				Error
+			);
+			expect(() => provider.validateOptionalParams({ maxTokens: NaN })).toThrow(
+				Error
+			);
+			expect(() =>
+				provider.validateOptionalParams({ maxTokens: Infinity })
+			).toThrow(Error);
+			expect(() =>
+				provider.validateOptionalParams({ maxTokens: 'invalid' })
+			).toThrow(Error);
+		});
+
+		it('should accept valid temperature values', () => {
+			expect(() =>
+				provider.validateOptionalParams({ temperature: 0 })
+			).not.toThrow();
+			expect(() =>
+				provider.validateOptionalParams({ temperature: 0.5 })
+			).not.toThrow();
+			expect(() =>
+				provider.validateOptionalParams({ temperature: 1 })
+			).not.toThrow();
+		});
+
+		it('should reject invalid temperature values', () => {
+			expect(() =>
+				provider.validateOptionalParams({ temperature: -0.1 })
+			).toThrow(Error);
+			expect(() =>
+				provider.validateOptionalParams({ temperature: 1.1 })
+			).toThrow(Error);
+		});
+	});
+
+	describe('getRequiredApiKeyName', () => {
+		it('should return OPENAI_API_KEY', () => {
+			expect(provider.getRequiredApiKeyName()).toBe('OPENAI_API_KEY');
+		});
+	});
+
+	describe('getClient', () => {
+		it('should throw error if API key is missing', () => {
+			expect(() => provider.getClient({})).toThrow(Error);
+		});
+
+		it('should create client with apiKey only', () => {
+			const params = {
+				apiKey: 'sk-test-123'
+			};
+
+			// The getClient method should return a function
+			const client = provider.getClient(params);
+			expect(typeof client).toBe('function');
+
+			// The client function should be callable and return a model object
+			const model = client('gpt-4');
+			expect(model).toBeDefined();
+			expect(model.modelId).toBe('gpt-4');
+		});
+
+		it('should create client with apiKey and baseURL', () => {
+			const params = {
+				apiKey: 'sk-test-456',
+				baseURL: 'https://api.openai.example'
+			};
+
+			// Should not throw when baseURL is provided
+			const client = provider.getClient(params);
+			expect(typeof client).toBe('function');
+
+			// The client function should be callable and return a model object
+			const model = client('gpt-5');
+			expect(model).toBeDefined();
+			expect(model.modelId).toBe('gpt-5');
+		});
+
+		it('should return the same client instance for the same parameters', () => {
+			const params = {
+				apiKey: 'sk-test-789'
+			};
+
+			// Multiple calls with same params should work
+			const client1 = provider.getClient(params);
+			const client2 = provider.getClient(params);
+
+			expect(typeof client1).toBe('function');
+			expect(typeof client2).toBe('function');
+
+			// Both clients should be able to create models
+			const model1 = client1('gpt-4');
+			const model2 = client2('gpt-4');
+			expect(model1.modelId).toBe('gpt-4');
+			expect(model2.modelId).toBe('gpt-4');
+		});
+
+		it('should handle different model IDs correctly', () => {
+			const client = provider.getClient({ apiKey: 'sk-test-models' });
+
+			// Test with different models
+			const gpt4 = client('gpt-4');
+			expect(gpt4.modelId).toBe('gpt-4');
+
+			const gpt5 = client('gpt-5');
+			expect(gpt5.modelId).toBe('gpt-5');
+
+			const gpt35 = client('gpt-3.5-turbo');
+			expect(gpt35.modelId).toBe('gpt-3.5-turbo');
+		});
+	});
+
+	describe('name property', () => {
+		it('should have OpenAI as the provider name', () => {
+			expect(provider.name).toBe('OpenAI');
+		});
+	});
+});