Mirror of https://github.com/eyaltoledano/claude-task-master.git (synced 2026-01-30 06:12:05 +00:00)

Merge pull request #1366 from eyaltoledano/next
29 .changeset/dirty-walls-ask.md Normal file
@@ -0,0 +1,29 @@
---
"task-master-ai": minor
---

Add support for custom OpenAI-compatible providers, allowing you to connect Task Master to any service that implements the OpenAI API specification

**How to use:**

Configure your custom provider with the `models` command:

```bash
task-master models --set-main <your-model-id> --openai-compatible --baseURL <your-api-endpoint>
```

Example:

```bash
task-master models --set-main llama-3-70b --openai-compatible --baseURL http://localhost:8000/v1
# Or for an interactive view
task-master models --setup
```

Set your API key (if required by your provider) in mcp.json, your .env file, or in your env exports:

```bash
OPENAI_COMPATIBLE_API_KEY="your-key-here"
```

This gives you the flexibility to use virtually any LLM service with Task Master, whether it's self-hosted, a specialized provider, or a custom inference server.
7 .changeset/fix-metadata-nullable.md Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Improve the parse-PRD schema for better LLM model compatibility

- Fixes #1353
7 .changeset/forty-squids-sell.md Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fix subtask ID display to show full compound notation

When displaying a subtask via `tm show 104.1`, the header and properties table showed only the subtask's local ID (e.g., "1") instead of the full compound ID (e.g., "104.1"). The CLI now preserves and displays the originally requested task ID throughout the display chain, ensuring subtasks are clearly identified with their parent context. Also improved TypeScript typing by using discriminated unions for Task/Subtask returns from `tasks.get()`, eliminating unsafe type coercions.
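A minimal sketch of the discriminated union described above (illustrative only; these type names are assumptions, not the actual `@tm/core` definitions):

```typescript
// Hypothetical sketch, not the actual @tm/core types.
// A literal discriminant lets callers narrow Task vs Subtask without casts.
interface TaskLike {
	id: number;
	title: string;
}

type GetTaskResult =
	| { kind: 'task'; task: TaskLike }
	| { kind: 'subtask'; task: TaskLike; fullId: string }; // fullId e.g. "104.1"

function headerId(result: GetTaskResult): string {
	// The `kind` check narrows the union, so no `as` coercion is needed.
	return result.kind === 'subtask' ? result.fullId : String(result.task.id);
}
```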
30 .changeset/mighty-pens-ring.md Normal file
@@ -0,0 +1,30 @@
---
"task-master-ai": minor
---

Add native support for Z.ai (GLM models), giving you access to high-performance Chinese models including glm-4.6 with massive 200K+ token context windows at competitive pricing

**How to use:**

1. Get your Z.ai API key from <https://z.ai/manage-apikey/apikey-list>
2. Set your API key in .env, mcp.json, or in env exports:

```bash
ZAI_API_KEY="your-key-here"
```

3. Configure Task Master to use GLM models:

```bash
task-master models --set-main glm-4.6
# Or for an interactive view
task-master models --setup
```

**Available models:**

- `glm-4.6` - Latest model with 200K+ context, excellent for complex projects
- `glm-4.5` - Previous generation, still highly capable
- Additional GLM variants for different use cases: `glm-4.5-air`, `glm-4.5v`

GLM models offer strong performance on software engineering tasks, with particularly good results on code generation and technical reasoning. The large context window makes them ideal for analyzing entire codebases or working with extensive documentation.
7 .changeset/nice-laws-help.md Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Fixed the MCP server sometimes crashing when reaching the commit step of autopilot

- Autopilot now persists state consistently throughout the whole flow
7 .changeset/plain-animals-train.md Normal file
@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---

Improve Gemini CLI integration

When initializing Task Master with the `gemini` profile, you now get properly configured context files tailored specifically for Gemini CLI, including MCP configuration and Gemini-specific features like file references, session management, and headless mode.
25 .changeset/pre.json Normal file
@@ -0,0 +1,25 @@
{
	"mode": "exit",
	"tag": "rc",
	"initialVersions": {
		"task-master-ai": "0.30.2",
		"@tm/cli": "",
		"docs": "0.0.8",
		"extension": "0.25.6",
		"@tm/mcp": "",
		"@tm/ai-sdk-provider-grok-cli": "",
		"@tm/build-config": "",
		"@tm/claude-code-plugin": "0.0.3",
		"@tm/bridge": "",
		"@tm/core": ""
	},
	"changesets": [
		"dirty-walls-ask",
		"fix-metadata-nullable",
		"forty-squids-sell",
		"mighty-pens-ring",
		"nice-laws-help",
		"plain-animals-train",
		"tricky-bats-ring"
	]
}
23 .changeset/tricky-bats-ring.md Normal file
@@ -0,0 +1,23 @@
---
"task-master-ai": minor
---

Add LM Studio integration, enabling you to run Task Master completely offline with local models at zero API cost.

**How to use:**

1. Download and install [LM Studio](https://lmstudio.ai/)
2. Launch LM Studio and download a model (e.g., Llama 3.2, Mistral, Qwen)
3. Optional: Add an API key to mcp.json or .env (LMSTUDIO_API_KEY)
4. Go to the "Local Server" tab and click "Start Server"
5. Configure Task Master:

```bash
task-master models --set-main <model-name> --lmstudio
```

Example:

```bash
task-master models --set-main llama-3.2-3b --lmstudio
```
90 CHANGELOG.md
@@ -1,5 +1,95 @@
# task-master-ai

## 0.31.0-rc.0

### Minor Changes

- [#1360](https://github.com/eyaltoledano/claude-task-master/pull/1360) [`819d5e1`](https://github.com/eyaltoledano/claude-task-master/commit/819d5e1bc5fb81be4b25f1823988a8e20abe8440) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add support for custom OpenAI-compatible providers, allowing you to connect Task Master to any service that implements the OpenAI API specification

  **How to use:**

  Configure your custom provider with the `models` command:

  ```bash
  task-master models --set-main <your-model-id> --openai-compatible --baseURL <your-api-endpoint>
  ```

  Example:

  ```bash
  task-master models --set-main llama-3-70b --openai-compatible --baseURL http://localhost:8000/v1
  # Or for an interactive view
  task-master models --setup
  ```

  Set your API key (if required by your provider) in mcp.json, your .env file, or in your env exports:

  ```bash
  OPENAI_COMPATIBLE_API_KEY="your-key-here"
  ```

  This gives you the flexibility to use virtually any LLM service with Task Master, whether it's self-hosted, a specialized provider, or a custom inference server.

- [#1360](https://github.com/eyaltoledano/claude-task-master/pull/1360) [`819d5e1`](https://github.com/eyaltoledano/claude-task-master/commit/819d5e1bc5fb81be4b25f1823988a8e20abe8440) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add native support for Z.ai (GLM models), giving you access to high-performance Chinese models including glm-4.6 with massive 200K+ token context windows at competitive pricing

  **How to use:**
  1. Get your Z.ai API key from <https://z.ai/manage-apikey/apikey-list>
  2. Set your API key in .env, mcp.json, or in env exports:

  ```bash
  ZAI_API_KEY="your-key-here"
  ```

  3. Configure Task Master to use GLM models:

  ```bash
  task-master models --set-main glm-4.6
  # Or for an interactive view
  task-master models --setup
  ```

  **Available models:**
  - `glm-4.6` - Latest model with 200K+ context, excellent for complex projects
  - `glm-4.5` - Previous generation, still highly capable
  - Additional GLM variants for different use cases: `glm-4.5-air`, `glm-4.5v`

  GLM models offer strong performance on software engineering tasks, with particularly good results on code generation and technical reasoning. The large context window makes them ideal for analyzing entire codebases or working with extensive documentation.

- [#1360](https://github.com/eyaltoledano/claude-task-master/pull/1360) [`819d5e1`](https://github.com/eyaltoledano/claude-task-master/commit/819d5e1bc5fb81be4b25f1823988a8e20abe8440) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add LM Studio integration, enabling you to run Task Master completely offline with local models at zero API cost.

  **How to use:**
  1. Download and install [LM Studio](https://lmstudio.ai/)
  2. Launch LM Studio and download a model (e.g., Llama 3.2, Mistral, Qwen)
  3. Optional: Add an API key to mcp.json or .env (LMSTUDIO_API_KEY)
  4. Go to the "Local Server" tab and click "Start Server"
  5. Configure Task Master:

  ```bash
  task-master models --set-main <model-name> --lmstudio
  ```

  Example:

  ```bash
  task-master models --set-main llama-3.2-3b --lmstudio
  ```

### Patch Changes

- [#1362](https://github.com/eyaltoledano/claude-task-master/pull/1362) [`3e70edf`](https://github.com/eyaltoledano/claude-task-master/commit/3e70edfa3a1f47bd8a6d2d2a30c20c72f5758b9b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve the parse-PRD schema for better LLM model compatibility
  - Fixes #1353

- [#1358](https://github.com/eyaltoledano/claude-task-master/pull/1358) [`0c639bd`](https://github.com/eyaltoledano/claude-task-master/commit/0c639bd1db9d2d9b4c2c22ac60b0d875ba75f80e) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix subtask ID display to show full compound notation

  When displaying a subtask via `tm show 104.1`, the header and properties table showed only the subtask's local ID (e.g., "1") instead of the full compound ID (e.g., "104.1"). The CLI now preserves and displays the originally requested task ID throughout the display chain, ensuring subtasks are clearly identified with their parent context. Also improved TypeScript typing by using discriminated unions for Task/Subtask returns from `tasks.get()`, eliminating unsafe type coercions.

- [#1339](https://github.com/eyaltoledano/claude-task-master/pull/1339) [`3b09b5d`](https://github.com/eyaltoledano/claude-task-master/commit/3b09b5da2a929f260d275f056d35bb6ded54ca6d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fixed the MCP server sometimes crashing when reaching the commit step of autopilot
  - Autopilot now persists state consistently throughout the whole flow

- [#1326](https://github.com/eyaltoledano/claude-task-master/pull/1326) [`9d5812b`](https://github.com/eyaltoledano/claude-task-master/commit/9d5812ba6725cfadebb8db8f4aa732cf3cdb3a36) Thanks [@SharifMrCreed](https://github.com/SharifMrCreed)! - Improve Gemini CLI integration

  When initializing Task Master with the `gemini` profile, you now get properly configured context files tailored specifically for Gemini CLI, including MCP configuration and Gemini-specific features like file references, session management, and headless mode.

## 0.30.2

### Patch Changes
59 CLAUDE.md
@@ -1,6 +1,7 @@
# Claude Code Instructions

## Task Master AI Instructions

**Import Task Master's development workflow commands and guidelines; treat them as if the import were in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md
@@ -14,10 +15,12 @@
- **Test extension**: Always use `.ts` for TypeScript tests, never `.js`

### Synchronous Tests

- **NEVER use async/await in test functions** unless testing actual asynchronous operations
- Use synchronous top-level imports instead of dynamic `await import()`
- Test bodies should be synchronous whenever possible
- Example:

```typescript
// ✅ CORRECT - Synchronous imports with .ts extension
import { MyClass } from '../src/my-class.js';
@@ -33,6 +36,57 @@
});
```

### When to Write Tests

**ALWAYS write tests for:**

- **Bug fixes**: Add a regression test that would have caught the bug
- **Business logic**: Complex calculations, validations, transformations
- **Edge cases**: Boundary conditions, error handling, null/undefined cases
- **Public APIs**: Methods other code depends on
- **Integration points**: Database, file system, external APIs

**SKIP tests for:**

- Simple getters/setters: `getX() { return this.x; }`
- Trivial pass-through functions with no logic
- Pure configuration objects
- Code that just delegates to another tested function

**Examples:**

```javascript
// ✅ WRITE A TEST - Bug fix with regression prevention
it('should use correct baseURL from defaultBaseURL config', () => {
	const provider = new ZAIProvider();
	expect(provider.defaultBaseURL).toBe('https://api.z.ai/api/paas/v4/');
});

// ✅ WRITE A TEST - Business logic with edge cases
it('should parse subtask IDs correctly', () => {
	expect(parseTaskId('1.2.3')).toEqual({ taskId: 1, subtaskId: 2, subSubtaskId: 3 });
	expect(parseTaskId('invalid')).toBeNull();
});

// ❌ SKIP TEST - Trivial getter
class Task {
	get id() { return this._id; } // No test needed
}

// ❌ SKIP TEST - Pure delegation
function getTasks() {
	return taskManager.getTasks(); // Already tested in taskManager
}
```

**Bug Fix Workflow:**

1. Encounter a bug
2. Write a failing test that reproduces it
3. Fix the bug
4. Verify test now passes
5. Commit both fix and test together

## Architecture Guidelines

### Business Logic Separation
@@ -70,6 +124,7 @@
- ❌ Duplicating logic across CLI and MCP → Implement once in tm-core

**Correct approach:**

- ✅ Add method to TasksDomain: `tasks.get(taskId)` (automatically handles task and subtask IDs; see the sketch after this list)
- ✅ CLI calls: `await tmCore.tasks.get(taskId)` (supports "1", "1.2", "HAM-123", "HAM-123.2")
- ✅ MCP calls: `await tmCore.tasks.get(taskId)` (same intelligent ID parsing)
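A hedged sketch of the kind of compound-ID splitting `tasks.get(taskId)` performs (a hypothetical helper, not the actual tm-core implementation):

```typescript
// Hypothetical helper illustrating compound-ID handling; not tm-core's code.
// Accepts "1", "1.2", "HAM-123", and "HAM-123.2" shapes.
function splitTaskId(id: string): { parent: string; subtask?: string } {
	const lastDot = id.lastIndexOf('.');
	// A trailing ".<digits>" marks a subtask; anything else is a plain task ID.
	if (lastDot > 0 && /^\d+$/.test(id.slice(lastDot + 1))) {
		return { parent: id.slice(0, lastDot), subtask: id.slice(lastDot + 1) };
	}
	return { parent: id };
}

// splitTaskId('1')         -> { parent: '1' }
// splitTaskId('1.2')       -> { parent: '1', subtask: '2' }
// splitTaskId('HAM-123.2') -> { parent: 'HAM-123', subtask: '2' }
```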
@@ -78,8 +133,8 @@
## Documentation Guidelines

- **Documentation location**: Write docs in `apps/docs/` (Mintlify site source), not `docs/`
-- **Documentation URL**: Reference docs at https://docs.task-master.dev, not local file paths
+- **Documentation URL**: Reference docs at <https://docs.task-master.dev>, not local file paths

## Changeset Guidelines

-- When creating changesets, remember that it's user-facing, meaning we don't have to get into the specifics of the code, but rather mention what the end-user is getting or fixing from this changeset.
+- When creating changesets, remember that it's user-facing, meaning we don't have to get into the specifics of the code, but rather mention what the end-user is getting or fixing from this changeset.
@@ -1,507 +0,0 @@
/**
 * @fileoverview AutopilotCommand using Commander's native class pattern
 * Extends Commander.Command for better integration with the framework
 * This is a thin presentation layer over @tm/core's autopilot functionality
 */

import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import ora, { type Ora } from 'ora';
import { createTmCore, type TmCore, type Task, type Subtask } from '@tm/core';
import * as ui from '../utils/ui.js';

/**
 * CLI-specific options interface for the autopilot command
 */
export interface AutopilotCommandOptions {
	format?: 'text' | 'json';
	project?: string;
	dryRun?: boolean;
}

/**
 * Preflight check result for a single check
 */
export interface PreflightCheckResult {
	success: boolean;
	message?: string;
}

/**
 * Overall preflight check results
 */
export interface PreflightResult {
	success: boolean;
	testCommand: PreflightCheckResult;
	gitWorkingTree: PreflightCheckResult;
	requiredTools: PreflightCheckResult;
	defaultBranch: PreflightCheckResult;
}

/**
 * CLI-specific result type from autopilot command
 */
export interface AutopilotCommandResult {
	success: boolean;
	taskId: string;
	task?: Task;
	error?: string;
	message?: string;
}

/**
 * AutopilotCommand extending Commander's Command class
 * This is a thin presentation layer over @tm/core's autopilot functionality
 */
export class AutopilotCommand extends Command {
	private tmCore?: TmCore;
	private lastResult?: AutopilotCommandResult;

	constructor(name?: string) {
		super(name || 'autopilot');

		// Configure the command
		this.description(
			'Execute a task autonomously using TDD workflow with git integration'
		)
			.argument('<taskId>', 'Task ID to execute autonomously')
			.option('-f, --format <format>', 'Output format (text, json)', 'text')
			.option('-p, --project <path>', 'Project root directory', process.cwd())
			.option(
				'--dry-run',
				'Show what would be executed without performing actions'
			)
			.action(async (taskId: string, options: AutopilotCommandOptions) => {
				await this.executeCommand(taskId, options);
			});
	}

	/**
	 * Execute the autopilot command
	 */
	private async executeCommand(
		taskId: string,
		options: AutopilotCommandOptions
	): Promise<void> {
		let spinner: Ora | null = null;

		try {
			// Validate options
			if (!this.validateOptions(options)) {
				process.exit(1);
			}

			// Validate task ID format
			if (!this.validateTaskId(taskId)) {
				ui.displayError(`Invalid task ID format: ${taskId}`);
				process.exit(1);
			}

			// Initialize tm-core with spinner
			spinner = ora('Initializing Task Master...').start();
			await this.initializeCore(options.project || process.cwd());
			spinner.succeed('Task Master initialized');

			// Load and validate task existence
			spinner = ora(`Loading task ${taskId}...`).start();
			const task = await this.loadTask(taskId);

			if (!task) {
				spinner.fail(`Task ${taskId} not found`);
				ui.displayError(`Task with ID ${taskId} does not exist`);
				process.exit(1);
			}

			spinner.succeed(`Task ${taskId} loaded`);

			// Display task information
			this.displayTaskInfo(task, options.dryRun || false);

			// Execute autopilot logic (placeholder for now)
			const result = await this.performAutopilot(taskId, task, options);

			// Store result for programmatic access
			this.setLastResult(result);

			// Display results
			this.displayResults(result, options);
		} catch (error: unknown) {
			if (spinner) {
				spinner.fail('Operation failed');
			}
			this.handleError(error);
			process.exit(1);
		}
	}

	/**
	 * Validate command options
	 */
	private validateOptions(options: AutopilotCommandOptions): boolean {
		// Validate format
		if (options.format && !['text', 'json'].includes(options.format)) {
			console.error(chalk.red(`Invalid format: ${options.format}`));
			console.error(chalk.gray(`Valid formats: text, json`));
			return false;
		}

		return true;
	}

	/**
	 * Validate task ID format
	 */
	private validateTaskId(taskId: string): boolean {
		// Task ID should be a number or number.number format (e.g., "1" or "1.2")
		const taskIdPattern = /^\d+(\.\d+)*$/;
		return taskIdPattern.test(taskId);
	}

	/**
	 * Initialize TmCore
	 */
	private async initializeCore(projectRoot: string): Promise<void> {
		if (!this.tmCore) {
			this.tmCore = await createTmCore({ projectPath: projectRoot });
		}
	}

	/**
	 * Load task from tm-core
	 */
	private async loadTask(taskId: string): Promise<Task | null> {
		if (!this.tmCore) {
			throw new Error('TmCore not initialized');
		}

		try {
			const { task } = await this.tmCore.tasks.get(taskId);
			return task;
		} catch (error) {
			return null;
		}
	}

	/**
	 * Display task information before execution
	 */
	private displayTaskInfo(task: Task, isDryRun: boolean): void {
		const prefix = isDryRun ? '[DRY RUN] ' : '';
		console.log();
		console.log(
			boxen(
				chalk.cyan.bold(`${prefix}Autopilot Task Execution`) +
					'\n\n' +
					chalk.white(`Task ID: ${task.id}`) +
					'\n' +
					chalk.white(`Title: ${task.title}`) +
					'\n' +
					chalk.white(`Status: ${task.status}`) +
					(task.description ? '\n\n' + chalk.gray(task.description) : ''),
				{
					padding: 1,
					borderStyle: 'round',
					borderColor: 'cyan',
					width: process.stdout.columns ? process.stdout.columns * 0.95 : 100
				}
			)
		);
		console.log();
	}

	/**
	 * Perform autopilot execution using PreflightChecker and TaskLoader
	 */
	private async performAutopilot(
		taskId: string,
		task: Task,
		options: AutopilotCommandOptions
	): Promise<AutopilotCommandResult> {
		// Run preflight checks
		const preflightResult = await this.runPreflightChecks(options);
		if (!preflightResult.success) {
			return {
				success: false,
				taskId,
				task,
				error: 'Preflight checks failed',
				message: 'Please resolve the issues above before running autopilot'
			};
		}

		// Validate task structure and get execution order
		const validationResult = await this.validateTaskStructure(taskId, task);
		if (!validationResult.success) {
			return validationResult;
		}

		// Display execution plan
		this.displayExecutionPlan(
			validationResult.task!,
			validationResult.orderedSubtasks!,
			options
		);

		return {
			success: true,
			taskId,
			task: validationResult.task,
			message: options.dryRun
				? 'Dry run completed successfully'
				: 'Autopilot execution ready (actual execution not yet implemented)'
		};
	}

	/**
	 * Run preflight checks and display results
	 */
	private async runPreflightChecks(
		options: AutopilotCommandOptions
	): Promise<PreflightResult> {
		const { PreflightChecker } = await import('@tm/core');

		console.log();
		console.log(chalk.cyan.bold('Running preflight checks...'));

		const preflightChecker = new PreflightChecker(
			options.project || process.cwd()
		);
		const result = await preflightChecker.runAllChecks();

		this.displayPreflightResults(result);

		return result;
	}

	/**
	 * Validate task structure and get execution order
	 */
	private async validateTaskStructure(
		taskId: string,
		task: Task
	): Promise<AutopilotCommandResult & { orderedSubtasks?: Subtask[] }> {
		if (!this.tmCore) {
			return {
				success: false,
				taskId,
				task,
				error: 'TmCore not initialized'
			};
		}

		console.log();
		console.log(chalk.cyan.bold('Validating task structure...'));

		const validationResult = await this.tmCore.tasks.loadAndValidate(taskId);

		if (!validationResult.success) {
			return {
				success: false,
				taskId,
				task,
				error: validationResult.errorMessage,
				message: validationResult.suggestion
			};
		}

		const orderedSubtasks = this.tmCore.tasks.getExecutionOrder(
			validationResult.task!
		);

		return {
			success: true,
			taskId,
			task: validationResult.task,
			orderedSubtasks
		};
	}

	/**
	 * Display execution plan with subtasks and TDD workflow
	 */
	private displayExecutionPlan(
		task: Task,
		orderedSubtasks: Subtask[],
		options: AutopilotCommandOptions
	): void {
		console.log();
		console.log(chalk.green.bold('✓ All checks passed!'));
		console.log();
		console.log(chalk.cyan.bold('Execution Plan:'));
		console.log(chalk.white(`Task: ${task.title}`));
		console.log(
			chalk.gray(
				`${orderedSubtasks.length} subtasks will be executed in dependency order`
			)
		);
		console.log();

		// Display subtasks
		orderedSubtasks.forEach((subtask: Subtask, index: number) => {
			console.log(
				chalk.yellow(`${index + 1}. ${task.id}.${subtask.id}: ${subtask.title}`)
			);
			if (subtask.dependencies && subtask.dependencies.length > 0) {
				console.log(
					chalk.gray(` Dependencies: ${subtask.dependencies.join(', ')}`)
				);
			}
		});

		console.log();
		console.log(
			chalk.cyan('Autopilot would execute each subtask using TDD workflow:')
		);
		console.log(chalk.gray(' 1. RED phase: Write failing test'));
		console.log(chalk.gray(' 2. GREEN phase: Implement code to pass test'));
		console.log(chalk.gray(' 3. COMMIT phase: Commit changes'));
		console.log();

		if (options.dryRun) {
			console.log(
				chalk.yellow('This was a dry run. Use without --dry-run to execute.')
			);
		}
	}

	/**
	 * Display preflight check results
	 */
	private displayPreflightResults(result: PreflightResult): void {
		const checks = [
			{ name: 'Test command', result: result.testCommand },
			{ name: 'Git working tree', result: result.gitWorkingTree },
			{ name: 'Required tools', result: result.requiredTools },
			{ name: 'Default branch', result: result.defaultBranch }
		];

		checks.forEach((check) => {
			const icon = check.result.success ? chalk.green('✓') : chalk.red('✗');
			const status = check.result.success
				? chalk.green('PASS')
				: chalk.red('FAIL');
			console.log(`${icon} ${chalk.white(check.name)}: ${status}`);
			if (check.result.message) {
				console.log(chalk.gray(` ${check.result.message}`));
			}
		});
	}

	/**
	 * Display results based on format
	 */
	private displayResults(
		result: AutopilotCommandResult,
		options: AutopilotCommandOptions
	): void {
		const format = options.format || 'text';

		switch (format) {
			case 'json':
				this.displayJson(result);
				break;

			case 'text':
			default:
				this.displayTextResult(result);
				break;
		}
	}

	/**
	 * Display in JSON format
	 */
	private displayJson(result: AutopilotCommandResult): void {
		console.log(JSON.stringify(result, null, 2));
	}

	/**
	 * Display result in text format
	 */
	private displayTextResult(result: AutopilotCommandResult): void {
		if (result.success) {
			console.log(
				boxen(
					chalk.green.bold('✓ Autopilot Command Completed') +
						'\n\n' +
						chalk.white(result.message || 'Execution complete'),
					{
						padding: 1,
						borderStyle: 'round',
						borderColor: 'green',
						margin: { top: 1 }
					}
				)
			);
		} else {
			console.log(
				boxen(
					chalk.red.bold('✗ Autopilot Command Failed') +
						'\n\n' +
						chalk.white(result.error || 'Unknown error'),
					{
						padding: 1,
						borderStyle: 'round',
						borderColor: 'red',
						margin: { top: 1 }
					}
				)
			);
		}
	}

	/**
	 * Handle general errors
	 */
	private handleError(error: unknown): void {
		const errorObj = error as {
			getSanitizedDetails?: () => { message: string };
			message?: string;
			stack?: string;
		};

		const msg = errorObj?.getSanitizedDetails?.() ?? {
			message: errorObj?.message ?? String(error)
		};
		console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));

		// Show stack trace in development mode or when DEBUG is set
		const isDevelopment = process.env.NODE_ENV !== 'production';
		if ((isDevelopment || process.env.DEBUG) && errorObj.stack) {
			console.error(chalk.gray(errorObj.stack));
		}
	}

	/**
	 * Set the last result for programmatic access
	 */
	private setLastResult(result: AutopilotCommandResult): void {
		this.lastResult = result;
	}

	/**
	 * Get the last result (for programmatic usage)
	 */
	getLastResult(): AutopilotCommandResult | undefined {
		return this.lastResult;
	}

	/**
	 * Clean up resources
	 */
	async cleanup(): Promise<void> {
		if (this.tmCore) {
			this.tmCore = undefined;
		}
	}

	/**
	 * Register this command on an existing program
	 */
	static register(program: Command, name?: string): AutopilotCommand {
		const autopilotCommand = new AutopilotCommand(name);
		program.addCommand(autopilotCommand);
		return autopilotCommand;
	}
}
@@ -3,16 +3,8 @@
  */

 import { Command } from 'commander';
-import { WorkflowOrchestrator } from '@tm/core';
-import {
-	AutopilotBaseOptions,
-	hasWorkflowState,
-	loadWorkflowState,
-	createGitAdapter,
-	createCommitMessageGenerator,
-	OutputFormatter,
-	saveWorkflowState
-} from './shared.js';
+import { WorkflowService, GitAdapter, CommitMessageGenerator } from '@tm/core';
+import { AutopilotBaseOptions, OutputFormatter } from './shared.js';

 type CommitOptions = AutopilotBaseOptions;
@@ -43,47 +35,41 @@ export class CommitCommand extends Command {
 		const formatter = new OutputFormatter(mergedOptions.json || false);

 		try {
-			// Check for workflow state
-			const hasState = await hasWorkflowState(mergedOptions.projectRoot!);
-			if (!hasState) {
+			const projectRoot = mergedOptions.projectRoot!;
+
+			// Create workflow service (manages WorkflowStateManager internally)
+			const workflowService = new WorkflowService(projectRoot);
+
+			// Check if workflow exists
+			if (!(await workflowService.hasWorkflow())) {
 				formatter.error('No active workflow', {
 					suggestion: 'Start a workflow with: autopilot start <taskId>'
 				});
 				process.exit(1);
 			}

-			// Load state
-			const state = await loadWorkflowState(mergedOptions.projectRoot!);
-			if (!state) {
-				formatter.error('Failed to load workflow state');
-				process.exit(1);
-			}
-
-			const orchestrator = new WorkflowOrchestrator(state.context);
-			orchestrator.restoreState(state);
-			orchestrator.enableAutoPersist(async (newState) => {
-				await saveWorkflowState(mergedOptions.projectRoot!, newState);
-			});
+			// Resume workflow (loads state with single WorkflowStateManager instance)
+			await workflowService.resumeWorkflow();
+			const status = workflowService.getStatus();
+			const workflowContext = workflowService.getContext();

 			// Verify in COMMIT phase
-			const tddPhase = orchestrator.getCurrentTDDPhase();
-			if (tddPhase !== 'COMMIT') {
+			if (status.tddPhase !== 'COMMIT') {
 				formatter.error('Not in COMMIT phase', {
-					currentPhase: tddPhase || orchestrator.getCurrentPhase(),
+					currentPhase: status.tddPhase || status.phase,
 					suggestion: 'Complete RED and GREEN phases first'
 				});
 				process.exit(1);
 			}

-			// Get current subtask
-			const currentSubtask = orchestrator.getCurrentSubtask();
-			if (!currentSubtask) {
+			// Verify there's an active subtask
+			if (!status.currentSubtask) {
 				formatter.error('No current subtask');
 				process.exit(1);
 			}

 			// Initialize git adapter
-			const gitAdapter = createGitAdapter(mergedOptions.projectRoot!);
+			const gitAdapter = new GitAdapter(projectRoot);
 			await gitAdapter.ensureGitRepository();

 			// Check for staged changes
@@ -95,20 +81,20 @@ export class CommitCommand extends Command {
 			}

 			// Get changed files for scope detection
-			const status = await gitAdapter.getStatus();
-			const changedFiles = [...status.staged, ...status.modified];
+			const gitStatus = await gitAdapter.getStatus();
+			const changedFiles = [...gitStatus.staged, ...gitStatus.modified];

 			// Generate commit message
-			const messageGenerator = createCommitMessageGenerator();
-			const testResults = state.context.lastTestResults;
+			const messageGenerator = new CommitMessageGenerator();
+			const testResults = workflowContext.lastTestResults;

 			const commitMessage = messageGenerator.generateMessage({
 				type: 'feat',
-				description: currentSubtask.title,
+				description: status.currentSubtask.title,
 				changedFiles,
-				taskId: state.context.taskId,
-				phase: 'TDD',
-				tag: (state.context.metadata.tag as string) || undefined,
+				taskId: status.taskId,
+				phase: status.tddPhase,
+				tag: (workflowContext.metadata.tag as string) || undefined,
 				testsPassing: testResults?.passed,
 				testsFailing: testResults?.failed,
 				coveragePercent: undefined // Could be added if available
@@ -117,8 +103,8 @@ export class CommitCommand extends Command {
 			// Create commit with metadata
 			await gitAdapter.createCommit(commitMessage, {
 				metadata: {
-					taskId: state.context.taskId,
-					subtaskId: currentSubtask.id,
+					taskId: status.taskId,
+					subtaskId: status.currentSubtask.id,
 					phase: 'COMMIT',
 					tddCycle: 'complete'
 				}
@@ -127,36 +113,24 @@ export class CommitCommand extends Command {
 			// Get commit info
 			const lastCommit = await gitAdapter.getLastCommit();

-			// Complete COMMIT phase (this marks subtask as completed)
-			orchestrator.transition({ type: 'COMMIT_COMPLETE' });
+			// Complete COMMIT phase and advance workflow
+			// This handles all transitions internally with a single WorkflowStateManager
+			const newStatus = await workflowService.commit();

-			// Check if should advance to next subtask
-			const progress = orchestrator.getProgress();
-			if (progress.current < progress.total) {
-				orchestrator.transition({ type: 'SUBTASK_COMPLETE' });
-			} else {
-				// All subtasks complete
-				orchestrator.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
-			}
+			const isComplete = newStatus.phase === 'COMPLETE';

 			// Output success
 			formatter.success('Commit created', {
 				commitHash: lastCommit.hash.substring(0, 7),
 				message: commitMessage.split('\n')[0], // First line only
 				subtask: {
-					id: currentSubtask.id,
-					title: currentSubtask.title,
-					status: currentSubtask.status
+					id: status.currentSubtask.id,
+					title: status.currentSubtask.title
 				},
-				progress: {
-					completed: progress.completed,
-					total: progress.total,
-					percentage: progress.percentage
-				},
-				nextAction:
-					progress.completed < progress.total
-						? 'Start next subtask with RED phase'
-						: 'All subtasks complete. Run: autopilot status'
+				progress: newStatus.progress,
+				nextAction: isComplete
+					? 'All subtasks complete. Run: autopilot status'
+					: 'Start next subtask with RED phase'
 			});
 		} catch (error) {
 			formatter.error((error as Error).message);
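The refactor above funnels all state access through a single `WorkflowService` facade instead of wiring up `WorkflowOrchestrator` and state persistence by hand. A minimal sketch of the resulting call sequence, using only the methods visible in the diff (error handling and some message fields omitted; the surrounding setup is assumed):

```typescript
// Sketch assembled from the methods used in the diff above; not the full command.
import { WorkflowService, GitAdapter, CommitMessageGenerator } from '@tm/core';

async function commitCurrentSubtask(projectRoot: string): Promise<void> {
	const workflowService = new WorkflowService(projectRoot);
	if (!(await workflowService.hasWorkflow())) return;

	await workflowService.resumeWorkflow();
	const status = workflowService.getStatus();
	if (status.tddPhase !== 'COMMIT' || !status.currentSubtask) return;

	const gitAdapter = new GitAdapter(projectRoot);
	await gitAdapter.ensureGitRepository();

	const gitStatus = await gitAdapter.getStatus();
	const message = new CommitMessageGenerator().generateMessage({
		type: 'feat',
		description: status.currentSubtask.title,
		changedFiles: [...gitStatus.staged, ...gitStatus.modified],
		taskId: status.taskId,
		phase: status.tddPhase
	});

	await gitAdapter.createCommit(message, {
		metadata: { taskId: status.taskId, subtaskId: status.currentSubtask.id }
	});

	// commit() completes the COMMIT phase and advances the workflow internally.
	const newStatus = await workflowService.commit();
	console.log(
		newStatus.phase === 'COMPLETE' ? 'All subtasks done' : 'Next: RED phase'
	);
}
```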
282 apps/cli/src/commands/models/custom-providers.ts Normal file
@@ -0,0 +1,282 @@
/**
 * @fileoverview Custom provider handlers for model setup
 */

import chalk from 'chalk';
import inquirer from 'inquirer';
import { CUSTOM_PROVIDERS } from '@tm/core';
import type {
	CustomProviderConfig,
	CustomProviderId,
	CUSTOM_PROVIDER_IDS,
	ModelRole
} from './types.js';
import { validateOpenRouterModel, validateOllamaModel } from './fetchers.js';

/**
 * Configuration for all custom providers
 */
export const customProviderConfigs: Record<
	keyof typeof CUSTOM_PROVIDER_IDS,
	CustomProviderConfig
> = {
	OPENROUTER: {
		id: '__CUSTOM_OPENROUTER__',
		name: '* Custom OpenRouter model',
		provider: CUSTOM_PROVIDERS.OPENROUTER,
		promptMessage: (role) =>
			`Enter the custom OpenRouter Model ID for the ${role} role:`,
		validate: async (modelId) => {
			const isValid = await validateOpenRouterModel(modelId);
			if (!isValid) {
				console.error(
					chalk.red(
						`Error: Model ID "${modelId}" not found in the live OpenRouter model list. Please check the ID.`
					)
				);
			}
			return isValid;
		}
	},
	OLLAMA: {
		id: '__CUSTOM_OLLAMA__',
		name: '* Custom Ollama model',
		provider: CUSTOM_PROVIDERS.OLLAMA,
		requiresBaseURL: true,
		defaultBaseURL: 'http://localhost:11434/api',
		promptMessage: (role) =>
			`Enter the custom Ollama Model ID for the ${role} role:`,
		validate: async (modelId, baseURL) => {
			const urlToCheck = baseURL || 'http://localhost:11434/api';
			const isValid = await validateOllamaModel(modelId, urlToCheck);
			if (!isValid) {
				console.error(
					chalk.red(
						`Error: Model ID "${modelId}" not found in the Ollama instance. Please verify the model is pulled and available.`
					)
				);
				console.log(
					chalk.yellow(
						`You can check available models with: curl ${urlToCheck}/tags`
					)
				);
			}
			return isValid;
		}
	},
	BEDROCK: {
		id: '__CUSTOM_BEDROCK__',
		name: '* Custom Bedrock model',
		provider: CUSTOM_PROVIDERS.BEDROCK,
		promptMessage: (role) =>
			`Enter the custom Bedrock Model ID for the ${role} role (e.g., anthropic.claude-3-sonnet-20240229-v1:0):`,
		checkEnvVars: () => {
			if (
				!process.env.AWS_ACCESS_KEY_ID ||
				!process.env.AWS_SECRET_ACCESS_KEY
			) {
				console.warn(
					chalk.yellow(
						'Warning: AWS_ACCESS_KEY_ID and/or AWS_SECRET_ACCESS_KEY environment variables are missing. Will fallback to system configuration (ex: aws config files or ec2 instance profiles).'
					)
				);
			}
			return true;
		}
	},
	AZURE: {
		id: '__CUSTOM_AZURE__',
		name: '* Custom Azure model',
		provider: CUSTOM_PROVIDERS.AZURE,
		promptMessage: (role) =>
			`Enter the custom Azure OpenAI Model ID for the ${role} role (e.g., gpt-4o):`,
		checkEnvVars: () => {
			if (
				!process.env.AZURE_OPENAI_API_KEY ||
				!process.env.AZURE_OPENAI_ENDPOINT
			) {
				console.error(
					chalk.red(
						'Error: AZURE_OPENAI_API_KEY and/or AZURE_OPENAI_ENDPOINT environment variables are missing. Please set them before using custom Azure models.'
					)
				);
				return false;
			}
			return true;
		}
	},
	VERTEX: {
		id: '__CUSTOM_VERTEX__',
		name: '* Custom Vertex model',
		provider: CUSTOM_PROVIDERS.VERTEX,
		promptMessage: (role) =>
			`Enter the custom Vertex AI Model ID for the ${role} role (e.g., gemini-1.5-pro-002):`,
		checkEnvVars: () => {
			if (
				!process.env.GOOGLE_API_KEY &&
				!process.env.GOOGLE_APPLICATION_CREDENTIALS
			) {
				console.error(
					chalk.red(
						'Error: Either GOOGLE_API_KEY or GOOGLE_APPLICATION_CREDENTIALS environment variable is required. Please set one before using custom Vertex models.'
					)
				);
				return false;
			}
			return true;
		}
	},
	LMSTUDIO: {
		id: '__CUSTOM_LMSTUDIO__',
		name: '* Custom LMStudio model',
		provider: CUSTOM_PROVIDERS.LMSTUDIO,
		requiresBaseURL: true,
		defaultBaseURL: 'http://localhost:1234/v1',
		promptMessage: (role) =>
			`Enter the custom LM Studio Model ID for the ${role} role:`,
		checkEnvVars: () => {
			console.log(
				chalk.blue(
					'Note: LM Studio runs locally. Make sure the LM Studio server is running.'
				)
			);
			return true;
		}
	},
	OPENAI_COMPATIBLE: {
		id: '__CUSTOM_OPENAI_COMPATIBLE__',
		name: '* Custom OpenAI-compatible model',
		provider: CUSTOM_PROVIDERS.OPENAI_COMPATIBLE,
		promptMessage: (role) =>
			`Enter the custom OpenAI-compatible Model ID for the ${role} role:`,
		requiresBaseURL: true,
		checkEnvVars: () => {
			console.log(
				chalk.blue(
					'Note: This will configure a generic OpenAI-compatible provider. Make sure your API endpoint is accessible.'
				)
			);
			return true;
		}
	}
};

/**
 * Handle custom provider selection
 */
export async function handleCustomProvider(
	providerId: CustomProviderId,
	role: ModelRole,
	currentModel: {
		modelId?: string | null;
		provider?: string | null;
		baseURL?: string | null;
	} | null = null
): Promise<{
	modelId: string | null;
	provider: string | null;
	baseURL?: string | null;
	success: boolean;
}> {
	// Find the matching config
	const configEntry = Object.entries(customProviderConfigs).find(
		([_, config]) => config.id === providerId
	);

	if (!configEntry) {
		console.error(chalk.red(`Unknown custom provider: ${providerId}`));
		return { modelId: null, provider: null, success: false };
	}

	const config = configEntry[1];

	// Check environment variables if needed
	if (config.checkEnvVars && !config.checkEnvVars()) {
		return { modelId: null, provider: null, success: false };
	}

	// Prompt for baseURL if required
	let baseURL: string | null = null;
	if (config.requiresBaseURL) {
		// Determine the appropriate default baseURL
		let defaultBaseURL: string;
		if (currentModel?.provider === config.provider && currentModel?.baseURL) {
			// Already using this provider - preserve existing baseURL
			defaultBaseURL = currentModel.baseURL;
		} else {
			// Switching providers or no existing baseURL - use fallback default
			defaultBaseURL = config.defaultBaseURL || '';
		}

		const baseURLAnswer = await inquirer.prompt([
			{
				type: 'input',
				name: 'baseURL',
				message: `Enter the base URL for the ${role} role:`,
				default: defaultBaseURL,
				validate: (input: string) => {
					if (!input || input.trim() === '') {
						return `Base URL is required for ${config.provider} providers`;
					}
					try {
						new URL(input);
						return true;
					} catch {
						return 'Please enter a valid URL';
					}
				}
			}
		]);
		baseURL = baseURLAnswer.baseURL;
	}

	// Prompt for custom ID
	const { customId } = await inquirer.prompt([
		{
			type: 'input',
			name: 'customId',
			message: config.promptMessage(role)
		}
	]);

	if (!customId) {
		console.log(chalk.yellow('No custom ID entered. Skipping role.'));
		return { modelId: null, provider: null, success: true };
	}

	// Validate if validation function exists
	if (config.validate) {
		const isValid = await config.validate(customId, baseURL || undefined);
		if (!isValid) {
			return { modelId: null, provider: null, success: false };
		}
	} else {
		console.log(
			chalk.blue(
				`Custom ${config.provider} model "${customId}" will be used. No validation performed.`
			)
		);
	}

	return {
		modelId: customId,
		provider: config.provider,
		baseURL: baseURL,
		success: true
	};
}

/**
 * Get all custom provider options for display
 */
export function getCustomProviderOptions(): Array<{
	name: string;
	value: CustomProviderId;
	short: string;
}> {
	return Object.values(customProviderConfigs).map((config) => ({
		name: config.name,
		value: config.id,
		short: config.name
	}));
}
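For context, a short usage sketch of how a setup flow might call `handleCustomProvider` (the selection step is assumed; the function and its return shape are as defined above):

```typescript
// Usage sketch based on the exports above; the chosen option is illustrative.
import {
	handleCustomProvider,
	getCustomProviderOptions
} from './custom-providers.js';

async function configureCustomMain(): Promise<void> {
	// In the real flow the user picks one of these via the interactive prompt.
	const [firstOption] = getCustomProviderOptions();
	const result = await handleCustomProvider(firstOption.value, 'main', null);

	if (result.success && result.modelId) {
		console.log(
			`Selected ${result.provider}/${result.modelId}`,
			result.baseURL ? `(baseURL: ${result.baseURL})` : ''
		);
	}
}
```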
165 apps/cli/src/commands/models/fetchers.ts Normal file
@@ -0,0 +1,165 @@
/**
 * @fileoverview Model fetching utilities for OpenRouter, Ollama, and other providers
 */

import https from 'https';
import http from 'http';
import type { FetchResult, OpenRouterModel, OllamaModel } from './types.js';

/**
 * Fetch available models from OpenRouter API
 */
export async function fetchOpenRouterModels(): Promise<
	FetchResult<OpenRouterModel[]>
> {
	return new Promise((resolve) => {
		const options = {
			hostname: 'openrouter.ai',
			path: '/api/v1/models',
			method: 'GET',
			headers: {
				Accept: 'application/json'
			}
		};

		const req = https.request(options, (res) => {
			let data = '';

			res.on('data', (chunk) => {
				data += chunk;
			});

			res.on('end', () => {
				if (res.statusCode === 200) {
					try {
						const parsedData = JSON.parse(data);
						resolve({
							success: true,
							data: parsedData.data || []
						});
					} catch (e) {
						resolve({
							success: false,
							error: 'Failed to parse OpenRouter response'
						});
					}
				} else {
					resolve({
						success: false,
						error: `OpenRouter API returned status ${res.statusCode}`
					});
				}
			});
		});

		req.on('error', (e) => {
			resolve({
				success: false,
				error: `Failed to fetch OpenRouter models: ${e.message}`
			});
		});

		req.end();
	});
}

/**
 * Fetch available models from Ollama instance
 */
export async function fetchOllamaModels(
	baseURL = 'http://localhost:11434/api'
): Promise<FetchResult<OllamaModel[]>> {
	return new Promise((resolve) => {
		try {
			// Parse the base URL to extract hostname, port, and base path
			const url = new URL(baseURL);
			const isHttps = url.protocol === 'https:';
			const port = url.port || (isHttps ? 443 : 80);
			const basePath = url.pathname.endsWith('/')
				? url.pathname.slice(0, -1)
				: url.pathname;

			const options = {
				hostname: url.hostname,
				port: parseInt(String(port), 10),
				path: `${basePath}/tags`,
				method: 'GET',
				headers: {
					Accept: 'application/json'
				}
			};

			const requestLib = isHttps ? https : http;
			const req = requestLib.request(options, (res) => {
				let data = '';

				res.on('data', (chunk) => {
					data += chunk;
				});

				res.on('end', () => {
					if (res.statusCode === 200) {
						try {
							const parsedData = JSON.parse(data);
							resolve({
								success: true,
								data: parsedData.models || []
							});
						} catch (e) {
							resolve({
								success: false,
								error: 'Failed to parse Ollama response'
							});
						}
					} else {
						resolve({
							success: false,
							error: `Ollama API returned status ${res.statusCode}`
						});
					}
				});
			});

			req.on('error', (e) => {
				resolve({
					success: false,
					error: `Failed to connect to Ollama: ${e.message}`
				});
			});

			req.end();
		} catch (e) {
			resolve({
				success: false,
				error: `Invalid Ollama base URL: ${e instanceof Error ? e.message : 'Unknown error'}`
			});
		}
	});
}

/**
 * Validate if a model ID exists in OpenRouter
 */
export async function validateOpenRouterModel(
	modelId: string
): Promise<boolean> {
	const result = await fetchOpenRouterModels();
	if (!result.success || !result.data) {
		return false;
	}
	return result.data.some((m) => m.id === modelId);
}

/**
 * Validate if a model ID exists in Ollama instance
 */
export async function validateOllamaModel(
	modelId: string,
	baseURL?: string
): Promise<boolean> {
	const result = await fetchOllamaModels(baseURL);
	if (!result.success || !result.data) {
		return false;
	}
	return result.data.some((m) => m.model === modelId);
}
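As a usage sketch, the validators above can be called standalone to check a local Ollama model before wiring it into the config (assuming a default local instance):

```typescript
// Usage sketch for the fetchers above.
import { fetchOllamaModels, validateOllamaModel } from './fetchers.js';

async function checkLocalModel(modelId: string): Promise<boolean> {
	const models = await fetchOllamaModels(); // defaults to http://localhost:11434/api
	if (!models.success) {
		console.error(models.error);
		return false;
	}
	const ok = await validateOllamaModel(modelId);
	console.log(ok ? `${modelId} is available` : `${modelId} is not pulled`);
	return ok;
}
```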
9 apps/cli/src/commands/models/index.ts Normal file
@@ -0,0 +1,9 @@
/**
 * @fileoverview Model setup module exports. The command itself is not here yet; it still lives in commands.js (old structure)
 */

export * from './types.js';
export * from './fetchers.js';
export * from './custom-providers.js';
export * from './prompts.js';
export * from './setup.js';
213 apps/cli/src/commands/models/prompts.ts Normal file
@@ -0,0 +1,213 @@
|
||||
/**
|
||||
* @fileoverview Interactive prompt logic for model selection
|
||||
*/
|
||||
|
||||
import chalk from 'chalk';
|
||||
import search, { Separator } from '@inquirer/search';
|
||||
import { getAvailableModels } from '../../lib/model-management.js';
|
||||
import type {
|
||||
ModelRole,
|
||||
ModelInfo,
|
||||
CurrentModels,
|
||||
PromptData,
|
||||
ModelChoice
|
||||
} from './types.js';
|
||||
import { getCustomProviderOptions } from './custom-providers.js';
|
||||
|
||||
/**
|
||||
* Build prompt choices for a specific role
|
||||
*/
|
||||
export function buildPromptChoices(
|
||||
role: ModelRole,
|
||||
currentModels: CurrentModels,
|
||||
allowNone = false
|
||||
): PromptData {
|
||||
const currentModel = currentModels[role];
|
||||
const allModels = getAvailableModels();
|
||||
|
||||
// Group models by provider (filter out models without provider)
|
||||
const modelsByProvider = allModels
|
||||
.filter(
|
||||
(model): model is ModelInfo & { provider: string } => !!model.provider
|
||||
)
|
||||
.reduce(
|
||||
(acc, model) => {
|
||||
if (!acc[model.provider]) {
|
||||
acc[model.provider] = [];
|
||||
}
|
||||
acc[model.provider].push(model);
|
||||
return acc;
|
||||
},
|
||||
{} as Record<string, ModelInfo[]>
|
||||
);
|
||||
|
||||
// System options (cancel and no change)
|
||||
const systemOptions: ModelChoice[] = [];
|
||||
const cancelOption: ModelChoice = {
|
||||
name: '⏹ Cancel Model Setup',
|
||||
value: '__CANCEL__',
|
||||
short: 'Cancel'
|
||||
};
|
||||
const noChangeOption: ModelChoice | null =
|
||||
currentModel?.modelId && currentModel?.provider
|
||||
? {
|
||||
        name: `✔ No change to current ${role} model (${currentModel.provider}/${currentModel.modelId})`,
        value: '__NO_CHANGE__',
        short: 'No change'
      }
    : null;

  if (noChangeOption) {
    systemOptions.push(noChangeOption);
  }
  systemOptions.push(cancelOption);

  // Build role-specific model choices
  const roleChoices: ModelChoice[] = Object.entries(modelsByProvider)
    .flatMap(([provider, models]) => {
      return models
        .filter((m) => m.allowed_roles && m.allowed_roles.includes(role))
        .map((m) => ({
          name: `${provider} / ${m.id} ${
            m.cost_per_1m_tokens
              ? chalk.gray(
                  `($${m.cost_per_1m_tokens.input.toFixed(2)} input | $${m.cost_per_1m_tokens.output.toFixed(2)} output)`
                )
              : ''
          }`,
          value: { id: m.id, provider },
          short: `${provider}/${m.id}`
        }));
    })
    .filter((choice) => choice !== null);

  // Find current model index
  let currentChoiceIndex = -1;
  if (currentModel?.modelId && currentModel?.provider) {
    currentChoiceIndex = roleChoices.findIndex(
      (choice) =>
        typeof choice.value === 'object' &&
        choice.value !== null &&
        'id' in choice.value &&
        choice.value.id === currentModel.modelId &&
        choice.value.provider === currentModel.provider
    );
  }

  // Get custom provider options
  const customProviderOptions = getCustomProviderOptions();

  // Build final choices array
  const systemLength = systemOptions.length;
  let choices: (ModelChoice | Separator)[];
  let defaultIndex: number;

  if (allowNone) {
    choices = [
      ...systemOptions,
      new Separator('\n── Standard Models ──'),
      { name: '⚪ None (disable)', value: null, short: 'None' },
      ...roleChoices,
      new Separator('\n── Custom Providers ──'),
      ...customProviderOptions
    ];
    const noneOptionIndex = systemLength + 1;
    defaultIndex =
      currentChoiceIndex !== -1
        ? currentChoiceIndex + systemLength + 2
        : noneOptionIndex;
  } else {
    choices = [
      ...systemOptions,
      new Separator('\n── Standard Models ──'),
      ...roleChoices,
      new Separator('\n── Custom Providers ──'),
      ...customProviderOptions
    ];
    defaultIndex =
      currentChoiceIndex !== -1
        ? currentChoiceIndex + systemLength + 1
        : noChangeOption
          ? 1
          : 0;
  }

  // Ensure defaultIndex is valid
  if (defaultIndex < 0 || defaultIndex >= choices.length) {
    defaultIndex = 0;
    console.warn(
      `Warning: Could not determine default model for role '${role}'. Defaulting to 'Cancel'.`
    );
  }

  return { choices, default: defaultIndex };
}

/**
 * Create search source for inquirer search prompt
 */
export function createSearchSource(
  choices: (ModelChoice | Separator)[],
  _defaultValue: number
) {
  return (searchTerm = '') => {
    const filteredChoices = choices.filter((choice) => {
      // Separators are always included
      if (choice instanceof Separator) return true;
      // Filter regular choices by search term
      const searchText = (choice as ModelChoice).name || '';
      return searchText.toLowerCase().includes(searchTerm.toLowerCase());
    });
    // Map ModelChoice to the format inquirer expects
    return Promise.resolve(
      filteredChoices.map((choice) => {
        if (choice instanceof Separator) return choice;
        const mc = choice as ModelChoice;
        return {
          name: mc.name,
          value: mc.value,
          short: mc.short
        };
      })
    );
  };
}

/**
 * Display introductory message for interactive setup
 */
export function displaySetupIntro(): void {
  console.log(chalk.cyan('\n🎯 Interactive Model Setup'));
  console.log(chalk.gray('━'.repeat(50)));
  console.log(chalk.yellow('💡 Navigation tips:'));
  console.log(chalk.gray('   • Type to search and filter options'));
  console.log(chalk.gray('   • Use ↑↓ arrow keys to navigate results'));
  console.log(
    chalk.gray(
      '   • Standard models are listed first, custom providers at bottom'
    )
  );
  console.log(chalk.gray('   • Press Enter to select\n'));
}

/**
 * Prompt user to select a model for a specific role
 */
export async function promptForModel(
  role: ModelRole,
  promptData: PromptData
): Promise<string | { id: string; provider: string } | null> {
  const roleLabels = {
    main: 'main model for generation/updates',
    research: 'research model',
    fallback: 'fallback model (optional)'
  };

  const answer = await search({
    message: `Select the ${roleLabels[role]}:`,
    source: createSearchSource(promptData.choices, promptData.default),
    pageSize: 15
  });

  return answer;
}
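// Usage sketch (illustrative, values assumed): build the choices for a role,
// then run the search prompt with them.
//   const promptData = buildPromptChoices('main', currentModels);
//   const selection = await promptForModel('main', promptData);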
304 apps/cli/src/commands/models/setup.ts Normal file
@@ -0,0 +1,304 @@
/**
 * @fileoverview Main setup orchestration for interactive model configuration
 */

import chalk from 'chalk';
import {
  getModelConfiguration,
  setModel,
  getConfig,
  writeConfig
} from '../../lib/model-management.js';
import type { ModelRole, CurrentModels, CustomProviderId } from './types.js';
import {
  buildPromptChoices,
  displaySetupIntro,
  promptForModel
} from './prompts.js';
import {
  handleCustomProvider,
  customProviderConfigs
} from './custom-providers.js';

/**
 * Check if a value is a custom provider ID
 */
function isCustomProviderId(value: unknown): value is CustomProviderId {
  if (typeof value !== 'string') return false;
  return Object.values(customProviderConfigs).some(
    (config) => config.id === value
  );
}

/**
 * Handle setting a model for a specific role
 */
async function handleSetModel(
  role: ModelRole,
  selectedValue: string | { id: string; provider: string } | null,
  currentModel: {
    modelId?: string | null;
    provider?: string | null;
    baseURL?: string | null;
  } | null,
  projectRoot: string
): Promise<{ success: boolean; modified: boolean }> {
  const currentModelId = currentModel?.modelId ?? null;
  const currentProvider = currentModel?.provider ?? null;
  const currentBaseURL = currentModel?.baseURL ?? null;
  // Handle cancellation
  if (selectedValue === '__CANCEL__') {
    console.log(
      chalk.yellow(`\nSetup canceled during ${role} model selection.`)
    );
    return { success: false, modified: false };
  }

  // Handle no change
  if (selectedValue === '__NO_CHANGE__') {
    console.log(chalk.gray(`No change selected for ${role} model.`));
    return { success: true, modified: false };
  }

  let modelIdToSet: string | null = null;
  let providerHint: string | null = null;
  let baseURL: string | null = null;

  // Handle custom providers
  if (isCustomProviderId(selectedValue)) {
    const result = await handleCustomProvider(
      selectedValue,
      role,
      currentModel
    );
    if (!result.success) {
      return { success: false, modified: false };
    }
    if (!result.modelId) {
      return { success: true, modified: false };
    }
    modelIdToSet = result.modelId;
    providerHint = result.provider;
    baseURL = result.baseURL || null;
  }
  // Handle standard model selection
  else if (
    selectedValue &&
    typeof selectedValue === 'object' &&
    'id' in selectedValue
  ) {
    modelIdToSet = selectedValue.id;
    providerHint = selectedValue.provider;
  }
  // Handle disabling fallback
  else if (selectedValue === null && role === 'fallback') {
    modelIdToSet = null;
    providerHint = null;
  }
  // Unknown selection
  else if (selectedValue) {
    console.error(
      chalk.red(
        `Internal Error: Unexpected selection value for ${role}: ${JSON.stringify(selectedValue)}`
      )
    );
    return { success: false, modified: false };
  }

  // Check if there's actually a change to make
  if (
    modelIdToSet === currentModelId &&
    (providerHint ?? null) === currentProvider &&
    (baseURL ?? null) === currentBaseURL
  ) {
    return { success: true, modified: false };
  }

  // Set the model
  if (modelIdToSet) {
    const result = await setModel(role, modelIdToSet, {
      projectRoot,
      providerHint: providerHint || undefined,
      baseURL: baseURL || undefined
    });

    if (result.success) {
      console.log(
        chalk.blue(
          `Set ${role} model: ${result.data?.provider} / ${result.data?.modelId}`
        )
      );
      if (result.data?.warning) {
        console.log(chalk.yellow(result.data?.warning));
      }
      return { success: true, modified: true };
    } else {
      console.error(
        chalk.red(
          `Error setting ${role} model: ${result.error?.message || 'Unknown'}`
        )
      );
      return { success: false, modified: false };
    }
  }
  // Disable fallback model
  else if (role === 'fallback') {
    const currentCfg = getConfig(projectRoot);
    if (currentCfg?.models?.fallback?.modelId) {
      currentCfg.models.fallback = {
        ...currentCfg.models.fallback,
        provider: undefined,
        modelId: undefined
      };
      if (writeConfig(currentCfg, projectRoot)) {
        console.log(chalk.blue('Fallback model disabled.'));
        return { success: true, modified: true };
      } else {
        console.error(
          chalk.red('Failed to disable fallback model in config file.')
        );
        return { success: false, modified: false };
      }
    } else {
      console.log(chalk.blue('Fallback model was already disabled.'));
      return { success: true, modified: false };
    }
  }

  return { success: true, modified: false };
}

/**
 * Run interactive model setup
 */
export async function runInteractiveSetup(
  projectRoot: string
): Promise<boolean> {
  if (!projectRoot) {
    console.error(
      chalk.red(
        'Error: Could not determine project root for interactive setup.'
      )
    );
    process.exit(1);
  }

  // Get current configuration
  const currentConfigResult = await getModelConfiguration({ projectRoot });
  const currentModels: CurrentModels =
    currentConfigResult.success && currentConfigResult.data
      ? {
          main: currentConfigResult.data.activeModels.main
            ? {
                modelId: currentConfigResult.data.activeModels.main.modelId,
                provider: currentConfigResult.data.activeModels.main.provider,
                baseURL: currentConfigResult.data.activeModels.main.baseURL
              }
            : null,
          research: currentConfigResult.data.activeModels.research
            ? {
                modelId: currentConfigResult.data.activeModels.research.modelId,
                provider:
                  currentConfigResult.data.activeModels.research.provider,
                baseURL: currentConfigResult.data.activeModels.research.baseURL
              }
            : null,
          fallback: currentConfigResult.data.activeModels.fallback
            ? {
                modelId: currentConfigResult.data.activeModels.fallback.modelId,
                provider:
                  currentConfigResult.data.activeModels.fallback.provider,
                baseURL: currentConfigResult.data.activeModels.fallback.baseURL
              }
            : null
        }
      : { main: null, research: null, fallback: null };

  // Handle config load failure gracefully
  if (
    !currentConfigResult.success &&
    currentConfigResult.error?.code !== 'CONFIG_MISSING'
  ) {
    console.warn(
      chalk.yellow(
        `Warning: Could not load current model configuration: ${currentConfigResult.error?.message || 'Unknown error'}. Proceeding with defaults.`
      )
    );
  }

  // Build prompt data
  const mainPromptData = buildPromptChoices('main', currentModels);
  const researchPromptData = buildPromptChoices('research', currentModels);
  const fallbackPromptData = buildPromptChoices(
    'fallback',
    currentModels,
    true
  );

  // Display intro
  displaySetupIntro();

  // Prompt for main model
  const mainModel = await promptForModel('main', mainPromptData);
  if (mainModel === '__CANCEL__') {
    return false;
  }

  // Prompt for research model
  const researchModel = await promptForModel('research', researchPromptData);
  if (researchModel === '__CANCEL__') {
    return false;
  }

  // Prompt for fallback model
  const fallbackModel = await promptForModel('fallback', fallbackPromptData);
  if (fallbackModel === '__CANCEL__') {
    return false;
  }

  // Process all model selections
  let setupSuccess = true;
  let setupConfigModified = false;

  const mainResult = await handleSetModel(
    'main',
    mainModel,
    currentModels.main,
    projectRoot
  );
  if (!mainResult.success) setupSuccess = false;
  if (mainResult.modified) setupConfigModified = true;

  const researchResult = await handleSetModel(
    'research',
    researchModel,
    currentModels.research,
    projectRoot
  );
  if (!researchResult.success) setupSuccess = false;
  if (researchResult.modified) setupConfigModified = true;

  const fallbackResult = await handleSetModel(
    'fallback',
    fallbackModel,
    currentModels.fallback,
    projectRoot
  );
  if (!fallbackResult.success) setupSuccess = false;
  if (fallbackResult.modified) setupConfigModified = true;

  // Display final result
  if (setupSuccess && setupConfigModified) {
    console.log(chalk.green.bold('\nModel setup complete!'));
  } else if (setupSuccess && !setupConfigModified) {
    console.log(chalk.yellow('\nNo changes made to model configuration.'));
  } else {
    console.error(
      chalk.red(
        '\nErrors occurred during model selection. Please review and try again.'
      )
    );
  }

  return setupSuccess;
}
147 apps/cli/src/commands/models/types.ts Normal file
@@ -0,0 +1,147 @@
/**
 * @fileoverview Type definitions for model setup functionality
 */

/**
 * Represents a model role in the system
 */
export type ModelRole = 'main' | 'research' | 'fallback';

/**
 * Custom provider option identifiers
 */
export const CUSTOM_PROVIDER_IDS = {
  OPENROUTER: '__CUSTOM_OPENROUTER__',
  OLLAMA: '__CUSTOM_OLLAMA__',
  BEDROCK: '__CUSTOM_BEDROCK__',
  AZURE: '__CUSTOM_AZURE__',
  VERTEX: '__CUSTOM_VERTEX__',
  LMSTUDIO: '__CUSTOM_LMSTUDIO__',
  OPENAI_COMPATIBLE: '__CUSTOM_OPENAI_COMPATIBLE__'
} as const;

export type CustomProviderId =
  (typeof CUSTOM_PROVIDER_IDS)[keyof typeof CUSTOM_PROVIDER_IDS];

/**
 * Special control values for model selection
 */
export const CONTROL_VALUES = {
  CANCEL: '__CANCEL__',
  NO_CHANGE: '__NO_CHANGE__'
} as const;

/**
 * Model information for display
 */
export interface ModelInfo {
  id: string;
  provider: string;
  cost_per_1m_tokens?: {
    input: number;
    output: number;
  };
  allowed_roles: ModelRole[];
}

/**
 * Currently configured model for a role
 */
export interface CurrentModel {
  modelId?: string;
  provider?: string;
  baseURL?: string;
}

/**
 * Current models configuration
 */
export interface CurrentModels {
  main: CurrentModel | null;
  research: CurrentModel | null;
  fallback: CurrentModel | null;
}

/**
 * Model selection choice for inquirer prompts
 */
export interface ModelChoice {
  name: string;
  value: { id: string; provider: string } | CustomProviderId | string | null;
  short?: string;
  type?: 'separator';
}

/**
 * Prompt data for a specific role
 */
export interface PromptData {
  choices: (ModelChoice | any)[]; // any to accommodate Separator instances
  default: number;
}

/**
 * Result from model fetcher functions
 */
export interface FetchResult<T> {
  success: boolean;
  data?: T;
  error?: string;
}

/**
 * OpenRouter model response
 */
export interface OpenRouterModel {
  id: string;
  name?: string;
  description?: string;
}

/**
 * Ollama model response
 */
export interface OllamaModel {
  model: string;
  name: string;
  modified_at?: string;
}

/**
 * Custom provider handler configuration
 */
export interface CustomProviderConfig {
  id: CustomProviderId;
  name: string;
  provider: string;
  promptMessage: (role: ModelRole) => string;
  validate?: (modelId: string, baseURL?: string) => Promise<boolean>;
  checkEnvVars?: () => boolean;
  fetchModels?: () => Promise<FetchResult<unknown[]>>;
  requiresBaseURL?: boolean;
  defaultBaseURL?: string;
}

/**
 * Model setup options
 */
export interface ModelSetupOptions {
  projectRoot: string;
  providerHint?: string;
}

/**
 * Model set result
 */
export interface ModelSetResult {
  success: boolean;
  data?: {
    message: string;
    provider: string;
    modelId: string;
    warning?: string;
  };
  error?: {
    message: string;
  };
}
@@ -7,7 +7,7 @@ import { Command } from 'commander';
import chalk from 'chalk';
import boxen from 'boxen';
import { createTmCore, type Task, type TmCore } from '@tm/core';
-import type { StorageType } from '@tm/core';
+import type { StorageType, Subtask } from '@tm/core';
import * as ui from '../utils/ui.js';
import { displayError } from '../utils/error-handler.js';
import { displayTaskDetails } from '../ui/components/task-detail.component.js';
@@ -28,16 +28,17 @@ export interface ShowCommandOptions {
 * Result type from show command
 */
export interface ShowTaskResult {
-  task: Task | null;
+  task: Task | Subtask | null;
  found: boolean;
  storageType: Exclude<StorageType, 'auto'>;
+  originalTaskId?: string; // The original task ID requested (for subtasks like "104.1")
}

/**
 * Result type for multiple tasks
 */
export interface ShowMultipleTasksResult {
-  tasks: Task[];
+  tasks: (Task | Subtask)[];
  notFound: string[];
  storageType: Exclude<StorageType, 'auto'>;
}
@@ -161,7 +162,8 @@ export class ShowCommand extends Command {
    return {
      task: result.task,
      found: result.task !== null,
-      storageType: storageType as Exclude<StorageType, 'auto'>
+      storageType: storageType as Exclude<StorageType, 'auto'>,
+      originalTaskId: result.isSubtask ? taskId : undefined
    };
  }

@@ -176,7 +178,7 @@ export class ShowCommand extends Command {
      throw new Error('TmCore not initialized');
    }

-    const tasks: Task[] = [];
+    const tasks: (Task | Subtask)[] = [];
    const notFound: string[] = [];

    // Get each task individually
@@ -262,9 +264,11 @@ export class ShowCommand extends Command {
    console.log(); // Add spacing

    // Use the global task details display function
+    // Pass the original requested ID if it's a subtask
    displayTaskDetails(result.task, {
      statusFilter: options.status,
-      showSuggestedActions: true
+      showSuggestedActions: true,
+      originalTaskId: result.originalTaskId
    });
  }

@@ -12,7 +12,6 @@ export { ContextCommand } from './commands/context.command.js';
export { StartCommand } from './commands/start.command.js';
export { SetStatusCommand } from './commands/set-status.command.js';
export { ExportCommand } from './commands/export.command.js';
export { AutopilotCommand } from './commands/autopilot.command.js';

// Command Registry
export {
@@ -36,6 +35,8 @@ export {
  compareVersions
} from './utils/auto-update.js';

+export { runInteractiveSetup } from './commands/models/index.js';
+
// Re-export commonly used types from tm-core
export type {
  Task,

162 apps/cli/src/lib/model-management.ts Normal file
@@ -0,0 +1,162 @@
/**
 * @fileoverview TypeScript bridge for model management functions
 * Wraps the JavaScript functions with proper TypeScript types
 * Will remove once we move models.js and config-manager to new structure
 */

// @ts-ignore - JavaScript module without types
import * as modelsJs from '../../../../scripts/modules/task-manager/models.js';
// @ts-ignore - JavaScript module without types
import * as configManagerJs from '../../../../scripts/modules/config-manager.js';

// ========== Types ==========

export interface ModelCost {
  input: number;
  output: number;
}

export interface ModelData {
  id: string;
  provider?: string;
  swe_score?: number | null;
  cost_per_1m_tokens?: ModelCost | null;
  allowed_roles?: string[];
  max_tokens?: number;
  supported?: boolean;
}

export interface ModelConfiguration {
  provider: string;
  modelId: string;
  baseURL?: string;
  sweScore: number | null;
  cost: ModelCost | null;
  keyStatus: {
    cli: boolean;
    mcp: boolean;
  };
}

export interface ModelConfigurationResponse {
  success: boolean;
  data?: {
    activeModels: {
      main: ModelConfiguration;
      research: ModelConfiguration;
      fallback: ModelConfiguration | null;
    };
    message: string;
  };
  error?: {
    code: string;
    message: string;
  };
}

export interface AvailableModel {
  provider: string;
  modelId: string;
  sweScore: number | null;
  cost: ModelCost | null;
  allowedRoles: string[];
}

export interface AvailableModelsResponse {
  success: boolean;
  data?: {
    models: AvailableModel[];
    message: string;
  };
  error?: {
    code: string;
    message: string;
  };
}

export interface SetModelResponse {
  success: boolean;
  data?: {
    role: string;
    provider: string;
    modelId: string;
    message: string;
    warning?: string | null;
  };
  error?: {
    code: string;
    message: string;
  };
}

export interface SetModelOptions {
  providerHint?: string;
  baseURL?: string;
  session?: Record<string, string | undefined>;
  mcpLog?: {
    info: (...args: unknown[]) => void;
    warn: (...args: unknown[]) => void;
    error: (...args: unknown[]) => void;
  };
  projectRoot: string;
}

// ========== Wrapped Functions ==========

/**
 * Get the current model configuration
 */
export async function getModelConfiguration(
  options: SetModelOptions
): Promise<ModelConfigurationResponse> {
  return modelsJs.getModelConfiguration(
    options as any
  ) as Promise<ModelConfigurationResponse>;
}

/**
 * Get all available models
 */
export async function getAvailableModelsList(
  options: SetModelOptions
): Promise<AvailableModelsResponse> {
  return modelsJs.getAvailableModelsList(
    options as any
  ) as Promise<AvailableModelsResponse>;
}

/**
 * Set a model for a specific role
 */
export async function setModel(
  role: 'main' | 'research' | 'fallback',
  modelId: string,
  options: SetModelOptions
): Promise<SetModelResponse> {
  return modelsJs.setModel(
    role,
    modelId,
    options as any
  ) as Promise<SetModelResponse>;
}
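// Usage sketch (illustrative; role/model/path values are assumptions):
//   const res = await setModel('main', 'glm-4.6', { projectRoot: '/path/to/project' });
//   if (res.success) console.log(res.data?.message);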

/**
 * Get config from config manager
 */
export function getConfig(projectRoot: string): any {
  return configManagerJs.getConfig(projectRoot);
}

/**
 * Write config using config manager
 */
export function writeConfig(config: any, projectRoot: string): boolean {
  return configManagerJs.writeConfig(config, projectRoot);
}

/**
 * Get available models from config manager
 */
export function getAvailableModels(): ModelData[] {
  return configManagerJs.getAvailableModels() as ModelData[];
}
@@ -8,7 +8,7 @@ import boxen from 'boxen';
import Table from 'cli-table3';
import { marked, MarkedExtension } from 'marked';
import { markedTerminal } from 'marked-terminal';
-import type { Task } from '@tm/core';
+import type { Subtask, Task } from '@tm/core';
import {
  getStatusWithColor,
  getPriorityWithColor,
@@ -74,7 +74,10 @@ export function displayTaskHeader(
/**
 * Display task properties in a table format
 */
-export function displayTaskProperties(task: Task): void {
+export function displayTaskProperties(
+  task: Task | Subtask,
+  originalTaskId?: string
+): void {
  const terminalWidth = process.stdout.columns * 0.95 || 100;
  // Create table for task properties - simple 2-column layout
  const table = new Table({
@@ -95,6 +98,9 @@ export function displayTaskProperties(task: Task): void {
    ? task.dependencies.map((d) => String(d)).join(', ')
    : 'None';

+  // Use originalTaskId if provided (for subtasks like "104.1")
+  const displayId = originalTaskId || String(task.id);
+
  // Build the left column (labels) and right column (values)
  const labels = [
    chalk.cyan('ID:'),
@@ -107,7 +113,7 @@ export function displayTaskProperties(task: Task): void {
  ].join('\n');

  const values = [
-    String(task.id),
+    displayId,
    task.title,
    getStatusWithColor(task.status),
    getPriorityWithColor(task.priority),
@@ -272,19 +278,21 @@ export function displaySuggestedActions(taskId: string | number): void {
 * Display complete task details - used by both show and start commands
 */
export function displayTaskDetails(
-  task: Task,
+  task: Task | Subtask,
  options?: {
    statusFilter?: string;
    showSuggestedActions?: boolean;
    customHeader?: string;
    headerColor?: string;
+    originalTaskId?: string;
  }
): void {
  const {
    statusFilter,
    showSuggestedActions = false,
    customHeader,
-    headerColor = 'blue'
+    headerColor = 'blue',
+    originalTaskId
  } = options || {};

  // Display header - either custom or default
@@ -298,11 +306,13 @@ export function displayTaskDetails(
      })
    );
  } else {
-    displayTaskHeader(task.id, task.title);
+    // Use originalTaskId if provided (for subtasks like "104.1")
+    const displayId = originalTaskId || task.id;
+    displayTaskHeader(displayId, task.title);
  }

  // Display task properties in table format
-  displayTaskProperties(task);
+  displayTaskProperties(task, originalTaskId);

  // Display implementation details if available
  if (task.details) {
@@ -335,6 +345,7 @@ export function displayTaskDetails(
  // Display suggested actions if requested
  if (showSuggestedActions) {
    console.log(); // Empty line for spacing
-    displaySuggestedActions(task.id);
+    const actionTaskId = originalTaskId || task.id;
+    displaySuggestedActions(actionTaskId);
  }
}

@@ -6,7 +6,7 @@
import chalk from 'chalk';
import boxen from 'boxen';
import Table from 'cli-table3';
-import type { Task, TaskStatus, TaskPriority } from '@tm/core';
+import type { Task, TaskStatus, TaskPriority, Subtask } from '@tm/core';

/**
 * Get colored status display with ASCII icons (matches scripts/modules/ui.js style)
@@ -294,7 +294,7 @@ export function formatDependenciesWithStatus(
 * Create a task table for display
 */
export function createTaskTable(
-  tasks: Task[],
+  tasks: (Task | Subtask)[],
  options?: {
    showSubtasks?: boolean;
    showComplexity?: boolean;

@@ -275,7 +275,7 @@
  "tailwindcss": "4.1.11",
  "typescript": "^5.9.2",
  "@tm/core": "*",
-  "task-master-ai": "*"
+  "task-master-ai": "0.31.0-rc.0"
},
"overrides": {
  "glob@<8": "^10.4.5",

@@ -11,7 +11,7 @@ import {
  withNormalizedProjectRoot
} from '../../shared/utils.js';
import type { MCPContext } from '../../shared/types.js';
-import { createTmCore, type Task } from '@tm/core';
+import { createTmCore, Subtask, type Task } from '@tm/core';
import type { FastMCP } from 'fastmcp';

const GetTaskSchema = z.object({
@@ -66,7 +66,7 @@ export function registerGetTaskTool(server: FastMCP) {
    taskIds.map((taskId) => tmCore.tasks.get(taskId, tag))
  );

-  const tasks: Task[] = [];
+  const tasks: (Task | Subtask)[] = [];
  for (const result of results) {
    if (!result.task) continue;

110 assets/GEMINI.md Normal file
@@ -0,0 +1,110 @@
# Gemini CLI-Specific Instructions

> **Note:** This file works alongside `AGENTS.md` (generic AI agent instructions). AGENTS.md contains the core Task Master commands and workflows for all AI agents. This file contains only Gemini CLI-specific features and integrations.

## MCP Configuration for Gemini CLI

Configure Task Master MCP server in `~/.gemini/settings.json`:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"]
    }
  }
}
```

**Note:** API keys are configured via `task-master models --setup`, not in MCP configuration.

## Gemini CLI-Specific Features

### Session Management

Built-in session commands (a usage example follows the list):

- `/chat` - Start new conversation while keeping context
- `/checkpoint save <name>` - Save session state
- `/checkpoint load <name>` - Resume saved session
- `/memory show` - View loaded context
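
A typical flow inside a Gemini CLI session might look like this (the checkpoint name is illustrative):

```bash
# Save the current session state before switching context
/checkpoint save task-104

# Later, resume exactly where you left off
/checkpoint load task-104

# Confirm what context is currently loaded
/memory show
```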

Both `AGENTS.md` and `GEMINI.md` are auto-loaded on every Gemini CLI session.

### Headless Mode for Automation

Non-interactive mode for scripts:

```bash
# Simple text response
gemini -p "What's the next task?"

# JSON output for parsing
gemini -p "List all pending tasks" --output-format json

# Stream events for long operations
gemini -p "Expand all tasks" --output-format stream-json
```

### Token Usage Monitoring

```bash
# In Gemini CLI session
/stats

# Shows: token usage, API costs, request counts
```

### Google Search Grounding

Leverage built-in Google Search as an alternative to Perplexity research mode (see the example after this list):

- Best practices research
- Library documentation
- Security vulnerability checks
- Implementation patterns
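
A minimal sketch of such a research request in headless mode (the prompt text is illustrative):

```bash
# Research request that can be grounded by built-in Google Search
gemini -p "Summarize current best practices for rate limiting a Node.js REST API, with sources"
```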

## Important Differences from Other Agents

### No Slash Commands

Gemini CLI does not support custom slash commands (unlike Claude Code). Use natural language instead, as in the example below.
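
For example, instead of a custom slash command you might simply ask (phrasing is illustrative):

```bash
gemini -p "Show me the next available Task Master task and mark it as in-progress"
```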

### No Tool Allowlist

Security is managed at the MCP level, not via agent configuration.

### Session Persistence

Use `/checkpoint` instead of git worktrees for managing multiple work contexts.

### Configuration Files

- Global: `~/.gemini/settings.json`
- Project: `.gemini/settings.json`
- **Not**: `.mcp.json` (that's for Claude Code)

## Recommended Model Configuration

For Gemini CLI users:

```bash
# Set Gemini as primary model
task-master models --set-main gemini-2.0-flash-exp
task-master models --set-fallback gemini-1.5-flash

# Optional: Use Perplexity for research (or rely on Google Search)
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
```

## Your Role with Gemini CLI

As a Gemini CLI assistant with Task Master:

1. **Use MCP tools naturally** - They integrate transparently in conversation
2. **Reference files with @** - Leverage Gemini's file inclusion
3. **Save checkpoints** - Offer to save state after significant progress
4. **Monitor usage** - Remind users about `/stats` for long sessions
5. **Use Google Search** - Leverage search grounding for research

**Key Principle:** Focus on natural conversation. Task Master MCP tools work seamlessly with Gemini CLI's interface.

---

*See AGENTS.md for complete Task Master commands, workflows, and best practices.*
@@ -1,4 +1,4 @@
-# Available Models as of October 18, 2025
+# Available Models as of October 31, 2025

## Main Models

@@ -77,6 +77,10 @@
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
+| zai | glm-4.6 | 0.68 | 0.6 | 2.2 |
+| zai | glm-4.5 | 0.65 | 0.6 | 2.2 |
+| zai | glm-4.5-air | 0.62 | 0.2 | 1.1 |
+| zai | glm-4.5v | 0.63 | 0.6 | 1.8 |
| ollama | gpt-oss:latest | 0.607 | 0 | 0 |
| ollama | gpt-oss:20b | 0.607 | 0 | 0 |
| ollama | gpt-oss:120b | 0.624 | 0 | 0 |
@@ -130,6 +134,8 @@
| perplexity | sonar-deep-research | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
| perplexity | sonar-reasoning | 0.211 | 1 | 5 |
+| zai | glm-4.6 | 0.68 | 0.6 | 2.2 |
+| zai | glm-4.5 | 0.65 | 0.6 | 2.2 |
| bedrock | us.anthropic.claude-3-opus-20240229-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-3-5-sonnet-20240620-v1:0 | 0.49 | 3 | 15 |
| bedrock | us.anthropic.claude-3-5-sonnet-20241022-v2:0 | 0.49 | 3 | 15 |
@@ -202,6 +208,10 @@
| openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
+| zai | glm-4.6 | 0.68 | 0.6 | 2.2 |
+| zai | glm-4.5 | 0.65 | 0.6 | 2.2 |
+| zai | glm-4.5-air | 0.62 | 0.2 | 1.1 |
+| zai | glm-4.5v | 0.63 | 0.6 | 1.8 |
| ollama | gpt-oss:latest | 0.607 | 0 | 0 |
| ollama | gpt-oss:20b | 0.607 | 0 | 0 |
| ollama | gpt-oss:120b | 0.624 | 0 | 0 |

@@ -13,7 +13,7 @@ import {
  disableSilentMode
} from '../../../../scripts/modules/utils.js';
import { createLogWrapper } from '../../tools/utils.js';
-import { CUSTOM_PROVIDERS_ARRAY } from '../../../../src/constants/providers.js';
+import { CUSTOM_PROVIDERS_ARRAY } from '@tm/core';

// Define supported roles for model setting
const MODEL_ROLES = ['main', 'research', 'fallback'];
@@ -42,7 +42,8 @@ async function handleModelSetting(args, context) {

    return await setModel(role, args[roleKey], {
      ...context,
-      providerHint
+      providerHint,
+      ...(args.baseURL && { baseURL: args.baseURL })
    });
  }
}
@@ -80,7 +81,7 @@ export async function modelsDirect(args, log, context = {}) {
      error: {
        code: 'INVALID_ARGS',
        message:
-          'Cannot use multiple custom provider flags simultaneously. Choose only one: openrouter, ollama, bedrock, azure, or vertex.'
+          'Cannot use multiple custom provider flags simultaneously. Choose only one: openrouter, ollama, bedrock, azure, vertex, or openai-compatible.'
      }
    };
  }

@@ -71,6 +71,18 @@ export function registerModelsTool(server) {
        .optional()
        .describe(
          'Indicates the set model ID is a custom Google Vertex AI model.'
        ),
+      'openai-compatible': z
+        .boolean()
+        .optional()
+        .describe(
+          'Indicates the set model ID is a custom OpenAI-compatible model. Requires baseURL parameter.'
+        ),
+      baseURL: z
+        .string()
+        .optional()
+        .describe(
+          'Custom base URL for openai-compatible provider (e.g., https://api.example.com/v1)'
+        )
    }),
    execute: withNormalizedProjectRoot(async (args, { log, session }) => {

58 package-lock.json generated
@@ -1,12 +1,12 @@
{
  "name": "task-master-ai",
-  "version": "0.30.1",
+  "version": "0.30.2",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "task-master-ai",
-      "version": "0.30.1",
+      "version": "0.30.2",
      "license": "MIT WITH Commons-Clause",
      "workspaces": [
        "apps/*",
@@ -22,6 +22,7 @@
        "@ai-sdk/groq": "^2.0.21",
        "@ai-sdk/mistral": "^2.0.16",
        "@ai-sdk/openai": "^2.0.34",
+        "@ai-sdk/openai-compatible": "^1.0.25",
        "@ai-sdk/perplexity": "^2.0.10",
        "@ai-sdk/provider": "^2.0.0",
        "@ai-sdk/provider-utils": "^3.0.10",
@@ -64,6 +65,7 @@
        "open": "^10.2.0",
        "ora": "^8.2.0",
        "simple-git": "^3.28.0",
+        "steno": "^4.0.2",
        "uuid": "^11.1.0",
        "zod": "^4.1.11"
      },
@@ -1727,11 +1729,30 @@
      }
    },
    "node_modules/@ai-sdk/openai-compatible": {
-      "version": "1.0.19",
+      "version": "1.0.25",
+      "resolved": "https://registry.npmjs.org/@ai-sdk/openai-compatible/-/openai-compatible-1.0.25.tgz",
+      "integrity": "sha512-VPylb5ytkOu9Bs1UnVmz4x0wr1VtS30Pw6ghh6GxpGH6lo4GOWqVnYuB+8M755dkof74c5LULZq5C1n/1J4Kvg==",
      "license": "Apache-2.0",
      "dependencies": {
        "@ai-sdk/provider": "2.0.0",
-        "@ai-sdk/provider-utils": "3.0.10"
+        "@ai-sdk/provider-utils": "3.0.15"
      },
      "engines": {
        "node": ">=18"
      },
      "peerDependencies": {
        "zod": "^3.25.76 || ^4.1.8"
      }
    },
+    "node_modules/@ai-sdk/openai-compatible/node_modules/@ai-sdk/provider-utils": {
+      "version": "3.0.15",
+      "resolved": "https://registry.npmjs.org/@ai-sdk/provider-utils/-/provider-utils-3.0.15.tgz",
+      "integrity": "sha512-kOc6Pxb7CsRlNt+sLZKL7/VGQUd7ccl3/tIK+Bqf5/QhHR0Qm3qRBMz1IwU1RmjJEZA73x+KB5cUckbDl2WF7Q==",
+      "license": "Apache-2.0",
+      "dependencies": {
+        "@ai-sdk/provider": "2.0.0",
+        "@standard-schema/spec": "^1.0.0",
+        "eventsource-parser": "^3.0.6"
+      },
+      "engines": {
+        "node": ">=18"
@@ -1794,6 +1815,22 @@
        "zod": "^3.25.76 || ^4.1.8"
      }
    },
+    "node_modules/@ai-sdk/xai/node_modules/@ai-sdk/openai-compatible": {
+      "version": "1.0.19",
+      "resolved": "https://registry.npmjs.org/@ai-sdk/openai-compatible/-/openai-compatible-1.0.19.tgz",
+      "integrity": "sha512-hnsqPCCSNKgpZRNDOAIXZs7OcUDM4ut5ggWxj2sjB4tNL/aBn/xrM7pJkqu+WuPowyrE60wPVSlw0LvtXAlMXQ==",
+      "license": "Apache-2.0",
+      "dependencies": {
+        "@ai-sdk/provider": "2.0.0",
+        "@ai-sdk/provider-utils": "3.0.10"
+      },
+      "engines": {
+        "node": ">=18"
+      },
+      "peerDependencies": {
+        "zod": "^3.25.76 || ^4.1.8"
+      }
+    },
    "node_modules/@alcalzone/ansi-tokenize": {
      "version": "0.2.0",
      "dev": true,
@@ -25049,6 +25086,18 @@
        "url": "https://github.com/sponsors/sindresorhus"
      }
    },
+    "node_modules/steno": {
+      "version": "4.0.2",
+      "resolved": "https://registry.npmjs.org/steno/-/steno-4.0.2.tgz",
+      "integrity": "sha512-yhPIQXjrlt1xv7dyPQg2P17URmXbuM5pdGkpiMB3RenprfiBlvK415Lctfe0eshk90oA7/tNq7WEiMK8RSP39A==",
+      "license": "MIT",
+      "engines": {
+        "node": ">=18"
+      },
+      "funding": {
+        "url": "https://github.com/sponsors/typicode"
+      }
+    },
    "node_modules/stop-iteration-iterator": {
      "version": "1.1.0",
      "dev": true,
@@ -28500,6 +28549,7 @@
      "@supabase/supabase-js": "^2.57.4",
      "fs-extra": "^11.3.2",
      "simple-git": "^3.28.0",
+      "steno": "^4.0.2",
      "zod": "^4.1.11"
    },
    "devDependencies": {

10 package.json
@@ -1,6 +1,6 @@
{
  "name": "task-master-ai",
-  "version": "0.30.2",
+  "version": "0.31.0-rc.0",
  "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
  "main": "index.js",
  "type": "module",
@@ -9,11 +9,7 @@
    "task-master-mcp": "dist/mcp-server.js",
    "task-master-ai": "dist/mcp-server.js"
  },
-  "workspaces": [
-    "apps/*",
-    "packages/*",
-    "."
-  ],
+  "workspaces": ["apps/*", "packages/*", "."],
  "scripts": {
    "build": "npm run build:build-config && cross-env NODE_ENV=production tsdown",
    "dev": "tsdown --watch",
@@ -64,6 +60,7 @@
    "@ai-sdk/groq": "^2.0.21",
    "@ai-sdk/mistral": "^2.0.16",
    "@ai-sdk/openai": "^2.0.34",
+    "@ai-sdk/openai-compatible": "^1.0.25",
    "@ai-sdk/perplexity": "^2.0.10",
    "@ai-sdk/provider": "^2.0.0",
    "@ai-sdk/provider-utils": "^3.0.10",
@@ -106,6 +103,7 @@
    "open": "^10.2.0",
    "ora": "^8.2.0",
    "simple-git": "^3.28.0",
+    "steno": "^4.0.2",
    "uuid": "^11.1.0",
    "zod": "^4.1.11"
  },

@@ -33,6 +33,7 @@
    "@supabase/supabase-js": "^2.57.4",
    "fs-extra": "^11.3.2",
    "simple-git": "^3.28.0",
+    "steno": "^4.0.2",
    "zod": "^4.1.11"
  },
  "devDependencies": {

@@ -75,3 +75,8 @@ export const STATUS_COLORS: Record<TaskStatus, string> = {
  review: 'cyan',
  completed: 'green'
} as const;
+
+/**
+ * Provider constants - AI model providers
+ */
+export * from './providers.js';

@@ -8,11 +8,14 @@ export const VALIDATED_PROVIDERS = [
  'anthropic',
  'openai',
  'google',
+  'zai',
  'perplexity',
  'xai',
  'groq',
  'mistral'
-];
+] as const;
+
+export type ValidatedProvider = (typeof VALIDATED_PROVIDERS)[number];

// Custom providers object for easy named access
export const CUSTOM_PROVIDERS = {
@@ -21,12 +24,17 @@ export const CUSTOM_PROVIDERS = {
  BEDROCK: 'bedrock',
  OPENROUTER: 'openrouter',
  OLLAMA: 'ollama',
  LMSTUDIO: 'lmstudio',
+  OPENAI_COMPATIBLE: 'openai-compatible',
  CLAUDE_CODE: 'claude-code',
  MCP: 'mcp',
  GEMINI_CLI: 'gemini-cli',
  GROK_CLI: 'grok-cli',
  CODEX_CLI: 'codex-cli'
-};
+} as const;
+
+export type CustomProvider =
+  (typeof CUSTOM_PROVIDERS)[keyof typeof CUSTOM_PROVIDERS];

// Custom providers array (for backward compatibility and iteration)
export const CUSTOM_PROVIDERS_ARRAY = Object.values(CUSTOM_PROVIDERS);
@@ -35,4 +43,6 @@ export const CUSTOM_PROVIDERS_ARRAY = Object.values(CUSTOM_PROVIDERS);
export const ALL_PROVIDERS = [
  ...VALIDATED_PROVIDERS,
  ...CUSTOM_PROVIDERS_ARRAY
-];
+] as const;
+
+export type Provider = ValidatedProvider | CustomProvider;
@@ -9,7 +9,7 @@ import { TaskExecutionService } from './services/task-execution-service.js';
import { TaskLoaderService } from './services/task-loader.service.js';
import { PreflightChecker } from './services/preflight-checker.service.js';

-import type { Task, TaskStatus } from '../../common/types/index.js';
+import type { Subtask, Task, TaskStatus } from '../../common/types/index.js';
import type {
  TaskListResult,
  GetTaskListOptions
@@ -58,12 +58,16 @@ export class TasksDomain {
   * - Simple task IDs (e.g., "1", "HAM-123")
   * - Subtask IDs with dot notation (e.g., "1.2", "HAM-123.2")
   *
-   * @returns Task and whether the ID represents a subtask
+   * @returns Discriminated union indicating task/subtask with proper typing
   */
  async get(
    taskId: string,
    tag?: string
-  ): Promise<{ task: Task | null; isSubtask: boolean }> {
+  ): Promise<
+    | { task: Task; isSubtask: false }
+    | { task: Subtask; isSubtask: true }
+    | { task: null; isSubtask: boolean }
+  > {
    // Parse ID - check for dot notation (subtask)
    const parts = taskId.split('.');
    const parentId = parts[0];
@@ -75,13 +79,17 @@ export class TasksDomain {
      return { task: null, isSubtask: false };
    }

-    // Handle subtask notation (1.2, HAM-123.2)
+    // Handle subtask notation (1.2)
    if (subtaskIdPart && task.subtasks) {
      const subtask = task.subtasks.find(
        (st) => String(st.id) === subtaskIdPart
      );
-      // Return parent task with isSubtask flag
-      return { task, isSubtask: !!subtask };
+      if (subtask) {
+        // Return the actual subtask with properly typed result
+        return { task: subtask, isSubtask: true };
+      }
+      // Subtask ID provided but not found
+      return { task: null, isSubtask: true };
    }

    // It's a regular task

@@ -9,6 +9,7 @@
import fs from 'node:fs/promises';
import path from 'node:path';
import os from 'node:os';
+import { Writer } from 'steno';
import type { WorkflowState } from '../types.js';
import { getLogger } from '../../../common/logger/index.js';

@@ -28,6 +29,8 @@ export class WorkflowStateManager {
  private readonly sessionDir: string;
  private maxBackups: number;
  private readonly logger = getLogger('WorkflowStateManager');
+  private writer: Writer | null = null;
+  private writerInitPromise: Promise<void> | null = null;

  constructor(projectRoot: string, maxBackups = 5) {
    this.projectRoot = path.resolve(projectRoot);
@@ -69,6 +72,31 @@ export class WorkflowStateManager {
    return sanitized;
  }

+  /**
+   * Ensure the steno Writer is initialized
+   * This ensures the session directory exists before creating the writer
+   */
+  private async ensureWriter(): Promise<void> {
+    if (this.writer) {
+      return;
+    }
+
+    // If another call is already initializing, wait for it
+    if (this.writerInitPromise) {
+      await this.writerInitPromise;
+      return;
+    }
+
+    this.writerInitPromise = (async () => {
+      // Ensure session directory exists before creating writer
+      await fs.mkdir(this.sessionDir, { recursive: true });
+      this.writer = new Writer(this.statePath);
+    })();
+
+    await this.writerInitPromise;
+    this.writerInitPromise = null;
+  }
+
  /**
   * Check if workflow state exists
   */
@@ -98,11 +126,12 @@ export class WorkflowStateManager {

  /**
   * Save workflow state to disk
+   * Uses steno for atomic writes and automatic queueing of concurrent saves
   */
  async save(state: WorkflowState): Promise<void> {
    try {
-      // Ensure session directory exists
-      await fs.mkdir(this.sessionDir, { recursive: true });
+      // Ensure writer is initialized (creates directory if needed)
+      await this.ensureWriter();

      // Serialize and validate JSON
      const jsonContent = JSON.stringify(state, null, 2);
@@ -115,10 +144,8 @@ export class WorkflowStateManager {
        throw new Error('Failed to generate valid JSON from workflow state');
      }

-      // Write state atomically with newline at end
-      const tempPath = `${this.statePath}.tmp`;
-      await fs.writeFile(tempPath, jsonContent + '\n', 'utf-8');
-      await fs.rename(tempPath, this.statePath);
+      // Write using steno (handles queuing and atomic writes automatically)
+      await this.writer!.write(jsonContent + '\n');

      this.logger.debug(`Saved workflow state (${jsonContent.length} bytes)`);
    } catch (error: any) {

@@ -98,7 +98,7 @@ export class WorkflowOrchestrator {
  /**
   * Transition to next state based on event
   */
-  transition(event: WorkflowEvent): void {
+  async transition(event: WorkflowEvent): Promise<void> {
    // Check if workflow is aborted
    if (this.aborted && event.type !== 'ABORT') {
      throw new Error('Workflow has been aborted');
@@ -107,26 +107,26 @@ export class WorkflowOrchestrator {
    // Handle special events that work across all phases
    if (event.type === 'ERROR') {
      this.handleError(event.error);
-      void this.triggerAutoPersist();
+      await this.triggerAutoPersist();
      return;
    }

    if (event.type === 'ABORT') {
      this.aborted = true;
-      void this.triggerAutoPersist();
+      await this.triggerAutoPersist();
      return;
    }

    if (event.type === 'RETRY') {
      this.handleRetry();
-      void this.triggerAutoPersist();
+      await this.triggerAutoPersist();
      return;
    }

    // Handle TDD phase transitions within SUBTASK_LOOP
    if (this.currentPhase === 'SUBTASK_LOOP') {
-      this.handleTDDPhaseTransition(event);
-      void this.triggerAutoPersist();
+      await this.handleTDDPhaseTransition(event);
+      await this.triggerAutoPersist();
      return;
    }

@@ -143,13 +143,13 @@ export class WorkflowOrchestrator {

    // Execute transition
    this.executeTransition(validTransition, event);
-    void this.triggerAutoPersist();
+    await this.triggerAutoPersist();
  }

  /**
   * Handle TDD phase transitions (RED -> GREEN -> COMMIT)
   */
-  private handleTDDPhaseTransition(event: WorkflowEvent): void {
+  private async handleTDDPhaseTransition(event: WorkflowEvent): Promise<void> {
    const currentTDD = this.context.currentTDDPhase || 'RED';

    switch (event.type) {
@@ -201,7 +201,7 @@ export class WorkflowOrchestrator {
        this.emit('subtask:started');
      } else {
        // All subtasks complete, transition to FINALIZE
-        this.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
+        await this.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
      }
      break;
    }
@@ -272,7 +272,7 @@ export class WorkflowOrchestrator {
        this.emit('subtask:started');
      } else {
        // All subtasks complete, transition to FINALIZE
-        this.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
+        await this.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
      }
      break;

@@ -168,7 +168,7 @@ export class WorkflowService {
    this.activityLogger.start();

    // Transition through PREFLIGHT and BRANCH_SETUP phases
-    this.orchestrator.transition({ type: 'PREFLIGHT_COMPLETE' });
+    await this.orchestrator.transition({ type: 'PREFLIGHT_COMPLETE' });

    // Create git branch with descriptive name
    const branchName = this.generateBranchName(taskId, taskTitle, tag);
@@ -181,7 +181,7 @@ export class WorkflowService {
    }

    // Transition to SUBTASK_LOOP with RED phase
-    this.orchestrator.transition({
+    await this.orchestrator.transition({
      type: 'BRANCH_CREATED',
      branchName
    });
@@ -363,13 +363,13 @@ export class WorkflowService {
    // Transition based on current phase
    switch (tddPhase) {
      case 'RED':
-        this.orchestrator.transition({
+        await this.orchestrator.transition({
          type: 'RED_PHASE_COMPLETE',
          testResults
        });
        break;
      case 'GREEN':
-        this.orchestrator.transition({
+        await this.orchestrator.transition({
          type: 'GREEN_PHASE_COMPLETE',
          testResults
        });
@@ -402,17 +402,17 @@ export class WorkflowService {
    }

    // Transition COMMIT phase complete
-    this.orchestrator.transition({
+    await this.orchestrator.transition({
      type: 'COMMIT_COMPLETE'
    });

    // Check if should advance to next subtask
    const progress = this.orchestrator.getProgress();
    if (progress.current < progress.total) {
-      this.orchestrator.transition({ type: 'SUBTASK_COMPLETE' });
+      await this.orchestrator.transition({ type: 'SUBTASK_COMPLETE' });
    } else {
      // All subtasks complete
-      this.orchestrator.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
+      await this.orchestrator.transition({ type: 'ALL_SUBTASKS_COMPLETE' });
    }

    return this.getStatus();
@@ -448,7 +448,7 @@ export class WorkflowService {
    }

    // Transition to COMPLETE
-    this.orchestrator.transition({ type: 'FINALIZE_COMPLETE' });
+    await this.orchestrator.transition({ type: 'FINALIZE_COMPLETE' });

    return this.getStatus();
  }
@@ -458,7 +458,7 @@ export class WorkflowService {
   */
  async abortWorkflow(): Promise<void> {
    if (this.orchestrator) {
-      this.orchestrator.transition({ type: 'ABORT' });
+      await this.orchestrator.transition({ type: 'ABORT' });
    }

    // Delete state file

@@ -24,9 +24,7 @@ import {
  getResponseLanguage,
  getUserId,
  getVertexLocation,
-  getVertexProjectId,
-  isApiKeySet,
-  providersWithoutApiKeys
+  getVertexProjectId
} from './config-manager.js';
import {
  findProjectRoot,
@@ -46,12 +44,15 @@ import {
  GoogleAIProvider,
  GrokCliProvider,
  GroqProvider,
+  LMStudioProvider,
  OllamaAIProvider,
+  OpenAICompatibleProvider,
  OpenAIProvider,
  OpenRouterAIProvider,
  PerplexityAIProvider,
  VertexAIProvider,
-  XAIProvider
+  XAIProvider,
+  ZAIProvider
} from '../../src/ai-providers/index.js';

// Import the provider registry
@@ -62,11 +63,19 @@ const PROVIDERS = {
  anthropic: new AnthropicAIProvider(),
  perplexity: new PerplexityAIProvider(),
  google: new GoogleAIProvider(),
+  zai: new ZAIProvider(),
+  lmstudio: new LMStudioProvider(),
  openai: new OpenAIProvider(),
  xai: new XAIProvider(),
  groq: new GroqProvider(),
  openrouter: new OpenRouterAIProvider(),
  ollama: new OllamaAIProvider(),
+  'openai-compatible': new OpenAICompatibleProvider({
+    name: 'OpenAI Compatible',
+    apiKeyEnvVar: 'OPENAI_COMPATIBLE_API_KEY',
+    requiresApiKey: true
+    // baseURL will be set per-role from config
+  }),
  bedrock: new BedrockAIProvider(),
  azure: new AzureProvider(),
  vertex: new VertexAIProvider(),
@@ -579,22 +588,6 @@ async function _unifiedServiceRunner(serviceType, params) {
      continue;
    }

-    // Check API key if needed
-    if (!providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
-      if (!isApiKeySet(providerName, session, effectiveProjectRoot)) {
-        log(
-          'warn',
-          `Skipping role '${currentRole}' (Provider: ${providerName}): API key not set or invalid.`
-        );
-        lastError =
-          lastError ||
-          new Error(
-            `API key for provider '${providerName}' (role: ${currentRole}) is not set.`
-          );
-        continue; // Skip to the next role in the sequence
-      }
-    }

    // Get base URL if configured (optional for most providers)
    baseURL = getBaseUrlForRole(currentRole, effectiveProjectRoot);

@@ -20,7 +20,8 @@ import {
|
||||
checkForUpdate,
|
||||
performAutoUpdate,
|
||||
displayUpgradeNotification,
|
||||
displayError
|
||||
displayError,
|
||||
runInteractiveSetup
|
||||
} from '@tm/cli';
|
||||
|
||||
import {
|
||||
@@ -68,16 +69,12 @@ import {
|
||||
import {
|
||||
isApiKeySet,
|
||||
getDebugFlag,
|
||||
getConfig,
|
||||
writeConfig,
|
||||
ConfigurationError,
|
||||
isConfigFilePresent,
|
||||
getAvailableModels,
|
||||
getBaseUrlForRole,
|
||||
getDefaultNumTasks
|
||||
} from './config-manager.js';
|
||||
|
||||
import { CUSTOM_PROVIDERS } from '../../src/constants/providers.js';
|
||||
import { CUSTOM_PROVIDERS } from '@tm/core';
|
||||
|
||||
import {
|
||||
COMPLEXITY_REPORT_FILE,
|
||||
@@ -90,7 +87,6 @@ import { initTaskMaster } from '../../src/task-master.js';
|
||||
import {
|
||||
displayBanner,
|
||||
displayHelp,
|
||||
displayNextTask,
|
||||
displayComplexityReport,
|
||||
getStatusWithColor,
|
||||
confirmTaskOverwrite,
|
||||
@@ -144,641 +140,6 @@ import {
	categorizeRemovalResults
} from '../../src/utils/profiles.js';

/**
 * Runs the interactive setup process for model configuration.
 * @param {string|null} projectRoot - The resolved project root directory.
 */
async function runInteractiveSetup(projectRoot) {
  if (!projectRoot) {
    console.error(
      chalk.red(
        'Error: Could not determine project root for interactive setup.'
      )
    );
    process.exit(1);
  }

  const currentConfigResult = await getModelConfiguration({ projectRoot });
  const currentModels = currentConfigResult.success
    ? currentConfigResult.data.activeModels
    : { main: null, research: null, fallback: null };
  // Handle potential config load failure gracefully for the setup flow
  if (
    !currentConfigResult.success &&
    currentConfigResult.error?.code !== 'CONFIG_MISSING'
  ) {
    console.warn(
      chalk.yellow(
        `Warning: Could not load current model configuration: ${currentConfigResult.error?.message || 'Unknown error'}. Proceeding with defaults.`
      )
    );
  }

  // Helper function to fetch OpenRouter models (duplicated for CLI context)
  function fetchOpenRouterModelsCLI() {
    return new Promise((resolve) => {
      const options = {
        hostname: 'openrouter.ai',
        path: '/api/v1/models',
        method: 'GET',
        headers: {
          Accept: 'application/json'
        }
      };

      const req = https.request(options, (res) => {
        let data = '';
        res.on('data', (chunk) => {
          data += chunk;
        });
        res.on('end', () => {
          if (res.statusCode === 200) {
            try {
              const parsedData = JSON.parse(data);
              resolve(parsedData.data || []); // Return the array of models
            } catch (e) {
              console.error('Error parsing OpenRouter response:', e);
              resolve(null); // Indicate failure
            }
          } else {
            console.error(
              `OpenRouter API request failed with status code: ${res.statusCode}`
            );
            resolve(null); // Indicate failure
          }
        });
      });

      req.on('error', (e) => {
        console.error('Error fetching OpenRouter models:', e);
        resolve(null); // Indicate failure
      });
      req.end();
    });
  }

  // Helper function to fetch Ollama models (duplicated for CLI context)
  function fetchOllamaModelsCLI(baseURL = 'http://localhost:11434/api') {
    return new Promise((resolve) => {
      try {
        // Parse the base URL to extract hostname, port, and base path
        const url = new URL(baseURL);
        const isHttps = url.protocol === 'https:';
        const port = url.port || (isHttps ? 443 : 80);
        const basePath = url.pathname.endsWith('/')
          ? url.pathname.slice(0, -1)
          : url.pathname;

        const options = {
          hostname: url.hostname,
          port: parseInt(port, 10),
          path: `${basePath}/tags`,
          method: 'GET',
          headers: {
            Accept: 'application/json'
          }
        };

        const requestLib = isHttps ? https : http;
        const req = requestLib.request(options, (res) => {
          let data = '';
          res.on('data', (chunk) => {
            data += chunk;
          });
          res.on('end', () => {
            if (res.statusCode === 200) {
              try {
                const parsedData = JSON.parse(data);
                resolve(parsedData.models || []); // Return the array of models
              } catch (e) {
                console.error('Error parsing Ollama response:', e);
                resolve(null); // Indicate failure
              }
            } else {
              console.error(
                `Ollama API request failed with status code: ${res.statusCode}`
              );
              resolve(null); // Indicate failure
            }
          });
        });

        req.on('error', (e) => {
          console.error('Error fetching Ollama models:', e);
          resolve(null); // Indicate failure
        });
        req.end();
      } catch (e) {
        console.error('Error parsing Ollama base URL:', e);
        resolve(null); // Indicate failure
      }
    });
  }

  // Helper to get choices and default index for a role
  const getPromptData = (role, allowNone = false) => {
    const currentModel = currentModels[role]; // Use the fetched data
    const allModelsRaw = getAvailableModels(); // Get all available models

    // Manually group models by provider
    const modelsByProvider = allModelsRaw.reduce((acc, model) => {
      if (!acc[model.provider]) {
        acc[model.provider] = [];
      }
      acc[model.provider].push(model);
      return acc;
    }, {});

    const cancelOption = { name: '⏹ Cancel Model Setup', value: '__CANCEL__' }; // Symbol updated
    const noChangeOption = currentModel?.modelId
      ? {
          name: `✔ No change to current ${role} model (${currentModel.modelId})`, // Symbol updated
          value: '__NO_CHANGE__'
        }
      : null;

    // Define custom provider options
    const customProviderOptions = [
      { name: '* Custom OpenRouter model', value: '__CUSTOM_OPENROUTER__' },
      { name: '* Custom Ollama model', value: '__CUSTOM_OLLAMA__' },
      { name: '* Custom Bedrock model', value: '__CUSTOM_BEDROCK__' },
      { name: '* Custom Azure model', value: '__CUSTOM_AZURE__' },
      { name: '* Custom Vertex model', value: '__CUSTOM_VERTEX__' }
    ];

    let choices = [];
    let defaultIndex = 0; // Default to 'Cancel'

    // Filter and format models allowed for this role using the manually grouped data
    const roleChoices = Object.entries(modelsByProvider)
      .map(([provider, models]) => {
        const providerModels = models
          .filter((m) => m.allowed_roles.includes(role))
          .map((m) => ({
            name: `${provider} / ${m.id} ${
              m.cost_per_1m_tokens
                ? chalk.gray(
                    `($${m.cost_per_1m_tokens.input.toFixed(2)} input | $${m.cost_per_1m_tokens.output.toFixed(2)} output)`
                  )
                : ''
            }`,
            value: { id: m.id, provider },
            short: `${provider}/${m.id}`
          }));
        if (providerModels.length > 0) {
          return [...providerModels];
        }
        return null;
      })
      .filter(Boolean)
      .flat();

    // Find the index of the currently selected model for setting the default
    let currentChoiceIndex = -1;
    if (currentModel?.modelId && currentModel?.provider) {
      currentChoiceIndex = roleChoices.findIndex(
        (choice) =>
          typeof choice.value === 'object' &&
          choice.value.id === currentModel.modelId &&
          choice.value.provider === currentModel.provider
      );
    }

    // Construct final choices list with custom options moved to bottom
    const systemOptions = [];
    if (noChangeOption) {
      systemOptions.push(noChangeOption);
    }
    systemOptions.push(cancelOption);

    const systemLength = systemOptions.length;

    if (allowNone) {
      choices = [
        ...systemOptions,
        new inquirer.Separator('\n── Standard Models ──'),
        { name: '⚪ None (disable)', value: null },
        ...roleChoices,
        new inquirer.Separator('\n── Custom Providers ──'),
        ...customProviderOptions
      ];
      // Adjust default index: System + Sep1 + None (+2)
      const noneOptionIndex = systemLength + 1;
      defaultIndex =
        currentChoiceIndex !== -1
          ? currentChoiceIndex + systemLength + 2 // Offset by system options and separators
          : noneOptionIndex; // Default to 'None' if no current model matched
    } else {
      choices = [
        ...systemOptions,
        new inquirer.Separator('\n── Standard Models ──'),
        ...roleChoices,
        new inquirer.Separator('\n── Custom Providers ──'),
        ...customProviderOptions
      ];
      // Adjust default index: System + Sep (+1)
      defaultIndex =
        currentChoiceIndex !== -1
          ? currentChoiceIndex + systemLength + 1 // Offset by system options and separator
          : noChangeOption
            ? 1
            : 0; // Default to 'No Change' if present, else 'Cancel'
    }

    // Ensure defaultIndex is valid within the final choices array length
    if (defaultIndex < 0 || defaultIndex >= choices.length) {
      // If default calculation failed or pointed outside bounds, reset intelligently
      defaultIndex = 0; // Default to 'Cancel'
      console.warn(
        `Warning: Could not determine default model for role '${role}'. Defaulting to 'Cancel'.`
      ); // Add warning
    }

    return { choices, default: defaultIndex };
  };

  // --- Generate choices using the helper ---
  const mainPromptData = getPromptData('main');
  const researchPromptData = getPromptData('research');
  const fallbackPromptData = getPromptData('fallback', true); // Allow 'None' for fallback

  // Display helpful intro message
  console.log(chalk.cyan('\n🎯 Interactive Model Setup'));
  console.log(chalk.gray('━'.repeat(50)));
  console.log(chalk.yellow('💡 Navigation tips:'));
  console.log(chalk.gray('   • Type to search and filter options'));
  console.log(chalk.gray('   • Use ↑↓ arrow keys to navigate results'));
  console.log(
    chalk.gray(
      '   • Standard models are listed first, custom providers at bottom'
    )
  );
  console.log(chalk.gray('   • Press Enter to select\n'));

  // Helper function to create search source for models
  const createSearchSource = (choices, defaultValue) => {
    return (searchTerm = '') => {
      const filteredChoices = choices.filter((choice) => {
        if (choice.type === 'separator') return true; // Always show separators
        const searchText = choice.name || '';
        return searchText.toLowerCase().includes(searchTerm.toLowerCase());
      });
      return Promise.resolve(filteredChoices);
    };
  };

  const answers = {};

  // Main model selection
  answers.mainModel = await search({
    message: 'Select the main model for generation/updates:',
    source: createSearchSource(mainPromptData.choices, mainPromptData.default),
    pageSize: 15
  });

  if (answers.mainModel !== '__CANCEL__') {
    // Research model selection
    answers.researchModel = await search({
      message: 'Select the research model:',
      source: createSearchSource(
        researchPromptData.choices,
        researchPromptData.default
      ),
      pageSize: 15
    });

    if (answers.researchModel !== '__CANCEL__') {
      // Fallback model selection
      answers.fallbackModel = await search({
        message: 'Select the fallback model (optional):',
        source: createSearchSource(
          fallbackPromptData.choices,
          fallbackPromptData.default
        ),
        pageSize: 15
      });
    }
  }

  let setupSuccess = true;
  let setupConfigModified = false;
  const coreOptionsSetup = { projectRoot }; // Pass root for setup actions

  // Helper to handle setting a model (including custom)
  async function handleSetModel(role, selectedValue, currentModelId) {
    if (selectedValue === '__CANCEL__') {
      console.log(
        chalk.yellow(`\nSetup canceled during ${role} model selection.`)
      );
      setupSuccess = false; // Also mark success as false on cancel
      return false; // Indicate cancellation
    }

    // Handle the new 'No Change' option
    if (selectedValue === '__NO_CHANGE__') {
      console.log(chalk.gray(`No change selected for ${role} model.`));
      return true; // Indicate success, continue setup
    }

    let modelIdToSet = null;
    let providerHint = null;
    let isCustomSelection = false;

    if (selectedValue === '__CUSTOM_OPENROUTER__') {
      isCustomSelection = true;
      const { customId } = await inquirer.prompt([
        {
          type: 'input',
          name: 'customId',
          message: `Enter the custom OpenRouter Model ID for the ${role} role:`
        }
      ]);
      if (!customId) {
        console.log(chalk.yellow('No custom ID entered. Skipping role.'));
        return true; // Continue setup, but don't set this role
      }
      modelIdToSet = customId;
      providerHint = CUSTOM_PROVIDERS.OPENROUTER;
      // Validate against live OpenRouter list
      const openRouterModels = await fetchOpenRouterModelsCLI();
      if (
        !openRouterModels ||
        !openRouterModels.some((m) => m.id === modelIdToSet)
      ) {
        console.error(
          chalk.red(
            `Error: Model ID "${modelIdToSet}" not found in the live OpenRouter model list. Please check the ID.`
          )
        );
        setupSuccess = false;
        return true; // Continue setup, but mark as failed
      }
    } else if (selectedValue === '__CUSTOM_OLLAMA__') {
      isCustomSelection = true;
      const { customId } = await inquirer.prompt([
        {
          type: 'input',
          name: 'customId',
          message: `Enter the custom Ollama Model ID for the ${role} role:`
        }
      ]);
      if (!customId) {
        console.log(chalk.yellow('No custom ID entered. Skipping role.'));
        return true; // Continue setup, but don't set this role
      }
      modelIdToSet = customId;
      providerHint = CUSTOM_PROVIDERS.OLLAMA;
      // Get the Ollama base URL from config for this role
      const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
      // Validate against live Ollama list
      const ollamaModels = await fetchOllamaModelsCLI(ollamaBaseURL);
      if (ollamaModels === null) {
        console.error(
          chalk.red(
            `Error: Unable to connect to Ollama server at ${ollamaBaseURL}. Please ensure Ollama is running and try again.`
          )
        );
        setupSuccess = false;
        return true; // Continue setup, but mark as failed
      } else if (!ollamaModels.some((m) => m.model === modelIdToSet)) {
        console.error(
          chalk.red(
            `Error: Model ID "${modelIdToSet}" not found in the Ollama instance. Please verify the model is pulled and available.`
          )
        );
        console.log(
          chalk.yellow(
            `You can check available models with: curl ${ollamaBaseURL}/tags`
          )
        );
        setupSuccess = false;
        return true; // Continue setup, but mark as failed
      }
    } else if (selectedValue === '__CUSTOM_BEDROCK__') {
      isCustomSelection = true;
      const { customId } = await inquirer.prompt([
        {
          type: 'input',
          name: 'customId',
          message: `Enter the custom Bedrock Model ID for the ${role} role (e.g., anthropic.claude-3-sonnet-20240229-v1:0):`
        }
      ]);
      if (!customId) {
        console.log(chalk.yellow('No custom ID entered. Skipping role.'));
        return true; // Continue setup, but don't set this role
      }
      modelIdToSet = customId;
      providerHint = CUSTOM_PROVIDERS.BEDROCK;

      // Check if AWS environment variables exist
      if (
        !process.env.AWS_ACCESS_KEY_ID ||
        !process.env.AWS_SECRET_ACCESS_KEY
      ) {
        console.warn(
          chalk.yellow(
            'Warning: AWS_ACCESS_KEY_ID and/or AWS_SECRET_ACCESS_KEY environment variables are missing. Falling back to system configuration (e.g., AWS config files or EC2 instance profiles).'
          )
        );
        setupSuccess = false;
        return true; // Continue setup, but mark as failed
      }

      console.log(
        chalk.blue(
          `Custom Bedrock model "${modelIdToSet}" will be used. No validation performed.`
        )
      );
    } else if (selectedValue === '__CUSTOM_AZURE__') {
      isCustomSelection = true;
      const { customId } = await inquirer.prompt([
        {
          type: 'input',
          name: 'customId',
          message: `Enter the custom Azure OpenAI Model ID for the ${role} role (e.g., gpt-4o):`
        }
      ]);
      if (!customId) {
        console.log(chalk.yellow('No custom ID entered. Skipping role.'));
        return true; // Continue setup, but don't set this role
      }
      modelIdToSet = customId;
      providerHint = CUSTOM_PROVIDERS.AZURE;

      // Check if Azure environment variables exist
      if (
        !process.env.AZURE_OPENAI_API_KEY ||
        !process.env.AZURE_OPENAI_ENDPOINT
      ) {
        console.error(
          chalk.red(
            'Error: AZURE_OPENAI_API_KEY and/or AZURE_OPENAI_ENDPOINT environment variables are missing. Please set them before using custom Azure models.'
          )
        );
        setupSuccess = false;
        return true; // Continue setup, but mark as failed
      }

      console.log(
        chalk.blue(
          `Custom Azure OpenAI model "${modelIdToSet}" will be used. No validation performed.`
        )
      );
    } else if (selectedValue === '__CUSTOM_VERTEX__') {
      isCustomSelection = true;
      const { customId } = await inquirer.prompt([
        {
          type: 'input',
          name: 'customId',
          message: `Enter the custom Vertex AI Model ID for the ${role} role (e.g., gemini-1.5-pro-002):`
        }
      ]);
      if (!customId) {
        console.log(chalk.yellow('No custom ID entered. Skipping role.'));
        return true; // Continue setup, but don't set this role
      }
      modelIdToSet = customId;
      providerHint = CUSTOM_PROVIDERS.VERTEX;

      // Check if Google/Vertex environment variables exist
      if (
        !process.env.GOOGLE_API_KEY &&
        !process.env.GOOGLE_APPLICATION_CREDENTIALS
      ) {
        console.error(
          chalk.red(
            'Error: Either GOOGLE_API_KEY or GOOGLE_APPLICATION_CREDENTIALS environment variable is required. Please set one before using custom Vertex models.'
          )
        );
        setupSuccess = false;
        return true; // Continue setup, but mark as failed
      }

      console.log(
        chalk.blue(
          `Custom Vertex AI model "${modelIdToSet}" will be used. No validation performed.`
        )
      );
    } else if (
      selectedValue &&
      typeof selectedValue === 'object' &&
      selectedValue.id
    ) {
      // Standard model selected from list
      modelIdToSet = selectedValue.id;
      providerHint = selectedValue.provider; // Provider is known
    } else if (selectedValue === null && role === 'fallback') {
      // Handle disabling fallback
      modelIdToSet = null;
      providerHint = null;
    } else if (selectedValue) {
      console.error(
        chalk.red(
          `Internal Error: Unexpected selection value for ${role}: ${JSON.stringify(selectedValue)}`
        )
      );
      setupSuccess = false;
      return true;
    }

    // Only proceed if there's a change to be made
    if (modelIdToSet !== currentModelId) {
      if (modelIdToSet) {
        // Set a specific model (standard or custom)
        const result = await setModel(role, modelIdToSet, {
          ...coreOptionsSetup,
          providerHint // Pass the hint
        });
        if (result.success) {
          console.log(
            chalk.blue(
              `Set ${role} model: ${result.data.provider} / ${result.data.modelId}`
            )
          );
          if (result.data.warning) {
            // Display warning if returned by setModel
            console.log(chalk.yellow(result.data.warning));
          }
          setupConfigModified = true;
        } else {
          console.error(
            chalk.red(
              `Error setting ${role} model: ${result.error?.message || 'Unknown'}`
            )
          );
          setupSuccess = false;
        }
      } else if (role === 'fallback') {
        // Disable fallback model
        const currentCfg = getConfig(projectRoot);
        if (currentCfg?.models?.fallback?.modelId) {
          // Check if it was actually set before clearing
          currentCfg.models.fallback = {
            ...currentCfg.models.fallback,
            provider: undefined,
            modelId: undefined
          };
          if (writeConfig(currentCfg, projectRoot)) {
            console.log(chalk.blue('Fallback model disabled.'));
            setupConfigModified = true;
          } else {
            console.error(
              chalk.red('Failed to disable fallback model in config file.')
            );
            setupSuccess = false;
          }
        } else {
          console.log(chalk.blue('Fallback model was already disabled.'));
        }
      }
    }
    return true; // Indicate setup should continue
  }

  // Process answers using the handler
  if (
    !(await handleSetModel(
      'main',
      answers.mainModel,
      currentModels.main?.modelId // <--- Now 'currentModels' is defined
    ))
  ) {
    return false; // Explicitly return false if cancelled
  }
  if (
    !(await handleSetModel(
      'research',
      answers.researchModel,
      currentModels.research?.modelId // <--- Now 'currentModels' is defined
    ))
  ) {
    return false; // Explicitly return false if cancelled
  }
  if (
    !(await handleSetModel(
      'fallback',
      answers.fallbackModel,
      currentModels.fallback?.modelId // <--- Now 'currentModels' is defined
    ))
  ) {
    return false; // Explicitly return false if cancelled
  }

  if (setupSuccess && setupConfigModified) {
    console.log(chalk.green.bold('\nModel setup complete!'));
  } else if (setupSuccess && !setupConfigModified) {
    console.log(chalk.yellow('\nNo changes made to model configuration.'));
  } else if (!setupSuccess) {
    console.error(
      chalk.red(
        '\nErrors occurred during model selection. Please review and try again.'
      )
    );
  }
  return true; // Indicate setup flow completed (not cancelled)
  // Let the main command flow continue to display results
}

/**
 * Configure and register CLI commands
 * @param {Object} program - Commander program instance
@@ -3512,6 +2873,18 @@ ${result.result}
			'--codex-cli',
			'Allow setting a Codex CLI model ID (use with --set-*)'
		)
+		.option(
+			'--lmstudio',
+			'Allow setting a custom LM Studio model ID (use with --set-*)'
+		)
+		.option(
+			'--openai-compatible',
+			'Allow setting a custom OpenAI-compatible model ID (use with --set-*)'
+		)
+		.option(
+			'--baseURL <url>',
+			'Custom base URL for openai-compatible, lmstudio, or ollama providers (e.g., http://localhost:8000/v1)'
+		)
		.addHelpText(
			'after',
			`
@@ -3528,6 +2901,9 @@ Examples:
  $ task-master models --set-main claude-3-5-sonnet@20241022 --vertex  # Set custom Vertex AI model for main role
  $ task-master models --set-main gemini-2.5-pro --gemini-cli  # Set Gemini CLI model for main role
  $ task-master models --set-main gpt-5-codex --codex-cli  # Set Codex CLI model for main role
+  $ task-master models --set-main qwen3-vl-4b --lmstudio  # Set LM Studio model for main role (defaults to http://localhost:1234/v1)
+  $ task-master models --set-main qwen3-vl-4b --lmstudio --baseURL http://localhost:8000/v1  # Set LM Studio model with custom base URL
+  $ task-master models --set-main my-model --openai-compatible --baseURL http://localhost:8000/v1  # Set custom OpenAI-compatible model with custom endpoint
  $ task-master models --setup  # Run interactive setup`
		)
		.action(async (options) => {
@@ -3545,12 +2921,14 @@ Examples:
				options.bedrock,
				options.claudeCode,
				options.geminiCli,
-				options.codexCli
+				options.codexCli,
+				options.lmstudio,
+				options.openaiCompatible
			].filter(Boolean).length;
			if (providerFlags > 1) {
				console.error(
					chalk.red(
-						'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --gemini-cli, --codex-cli) simultaneously.'
+						'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --gemini-cli, --codex-cli, --lmstudio, --openai-compatible) simultaneously.'
					)
				);
				process.exit(1);
@@ -3598,7 +2976,12 @@ Examples:
							? 'gemini-cli'
							: options.codexCli
								? 'codex-cli'
-								: undefined
+								: options.lmstudio
+									? 'lmstudio'
+									: options.openaiCompatible
+										? 'openai-compatible'
+										: undefined,
+					baseURL: options.baseURL
				});
				if (result.success) {
					console.log(chalk.green(`✅ ${result.data.message}`));
@@ -3626,7 +3009,12 @@ Examples:
							? 'gemini-cli'
							: options.codexCli
								? 'codex-cli'
-								: undefined
+								: options.lmstudio
+									? 'lmstudio'
+									: options.openaiCompatible
+										? 'openai-compatible'
+										: undefined,
+					baseURL: options.baseURL
				});
				if (result.success) {
					console.log(chalk.green(`✅ ${result.data.message}`));
@@ -3656,7 +3044,12 @@ Examples:
							? 'gemini-cli'
							: options.codexCli
								? 'codex-cli'
-								: undefined
+								: options.lmstudio
+									? 'lmstudio'
+									: options.openaiCompatible
+										? 'openai-compatible'
+										: undefined,
+					baseURL: options.baseURL
				});
				if (result.success) {
					console.log(chalk.green(`✅ ${result.data.message}`));

@@ -13,7 +13,7 @@ import {
	CUSTOM_PROVIDERS,
	CUSTOM_PROVIDERS_ARRAY,
	VALIDATED_PROVIDERS
-} from '../../src/constants/providers.js';
+} from '@tm/core';
import { findConfigPath } from '../../src/utils/path-utils.js';
import { findProjectRoot, isEmpty, log, resolveEnvVariable } from './utils.js';
import MODEL_MAP from './supported-models.json' with { type: 'json' };

@@ -892,6 +892,52 @@
      "reason": "Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use."
    }
  ],
  "zai": [
    {
      "id": "glm-4.6",
      "swe_score": 0.68,
      "cost_per_1m_tokens": {
        "input": 0.6,
        "output": 2.2
      },
      "allowed_roles": ["main", "fallback", "research"],
      "max_tokens": 204800,
      "supported": true
    },
    {
      "id": "glm-4.5",
      "swe_score": 0.65,
      "cost_per_1m_tokens": {
        "input": 0.6,
        "output": 2.2
      },
      "allowed_roles": ["main", "fallback", "research"],
      "max_tokens": 131072,
      "supported": true
    },
    {
      "id": "glm-4.5-air",
      "swe_score": 0.62,
      "cost_per_1m_tokens": {
        "input": 0.2,
        "output": 1.1
      },
      "allowed_roles": ["main", "fallback"],
      "max_tokens": 131072,
      "supported": true
    },
    {
      "id": "glm-4.5v",
      "swe_score": 0.63,
      "cost_per_1m_tokens": {
        "input": 0.6,
        "output": 1.8
      },
      "allowed_roles": ["main", "fallback"],
      "max_tokens": 64000,
      "supported": true
    }
  ],
  "ollama": [
    {
      "id": "gpt-oss:latest",

@@ -20,11 +20,7 @@ import {
	hasCodebaseAnalysis
} from '../config-manager.js';
import { getPromptManager } from '../prompt-manager.js';
-import {
-	COMPLEXITY_REPORT_FILE,
-	LEGACY_TASKS_FILE
-} from '../../../src/constants/paths.js';
-import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';
+import { LEGACY_TASKS_FILE } from '../../../src/constants/paths.js';
import { resolveComplexityReportOutputPath } from '../../../src/utils/path-utils.js';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';

@@ -23,7 +23,7 @@ import {
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';
-import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';
+import { CUSTOM_PROVIDERS } from '@tm/core';

// Constants
const CONFIG_MISSING_ERROR =
@@ -179,10 +179,13 @@ async function getModelConfiguration(options = {}) {
	// Get current settings - these should use the config from the found path automatically
	const mainProvider = getMainProvider(projectRoot);
	const mainModelId = getMainModelId(projectRoot);
+	const mainBaseURL = getBaseUrlForRole('main', projectRoot);
	const researchProvider = getResearchProvider(projectRoot);
	const researchModelId = getResearchModelId(projectRoot);
+	const researchBaseURL = getBaseUrlForRole('research', projectRoot);
	const fallbackProvider = getFallbackProvider(projectRoot);
	const fallbackModelId = getFallbackModelId(projectRoot);
+	const fallbackBaseURL = getBaseUrlForRole('fallback', projectRoot);

	// Check API keys
	const mainCliKeyOk = isApiKeySet(mainProvider, session, projectRoot);
@@ -220,6 +223,7 @@ async function getModelConfiguration(options = {}) {
				main: {
					provider: mainProvider,
					modelId: mainModelId,
+					baseURL: mainBaseURL,
					sweScore: mainModelData?.swe_score || null,
					cost: mainModelData?.cost_per_1m_tokens || null,
					keyStatus: {
@@ -230,6 +234,7 @@ async function getModelConfiguration(options = {}) {
				research: {
					provider: researchProvider,
					modelId: researchModelId,
+					baseURL: researchBaseURL,
					sweScore: researchModelData?.swe_score || null,
					cost: researchModelData?.cost_per_1m_tokens || null,
					keyStatus: {
@@ -241,6 +246,7 @@ async function getModelConfiguration(options = {}) {
					? {
							provider: fallbackProvider,
							modelId: fallbackModelId,
+							baseURL: fallbackBaseURL,
							sweScore: fallbackModelData?.swe_score || null,
							cost: fallbackModelData?.cost_per_1m_tokens || null,
							keyStatus: {
@@ -365,7 +371,8 @@ async function getAvailableModelsList(options = {}) {
 * @returns {Object} RESTful response with result of update operation
 */
async function setModel(role, modelId, options = {}) {
-	const { mcpLog, projectRoot, providerHint } = options;
+	const { mcpLog, projectRoot, providerHint, baseURL } = options;
+	let computedBaseURL = baseURL; // Track the computed baseURL separately

	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
@@ -468,8 +475,25 @@ async function setModel(role, modelId, options = {}) {
			// Check Ollama ONLY because hint was ollama
			report('info', `Checking Ollama for ${modelId} (as hinted)...`);

-			// Get the Ollama base URL from config
-			const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
+			// Get current provider for this role to check if we should preserve baseURL
+			let currentProvider;
+			if (role === 'main') {
+				currentProvider = getMainProvider(projectRoot);
+			} else if (role === 'research') {
+				currentProvider = getResearchProvider(projectRoot);
+			} else if (role === 'fallback') {
+				currentProvider = getFallbackProvider(projectRoot);
+			}
+
+			// Only preserve baseURL if we're already using OLLAMA
+			const existingBaseURL =
+				currentProvider === CUSTOM_PROVIDERS.OLLAMA
+					? getBaseUrlForRole(role, projectRoot)
+					: null;
+
+			// Get the Ollama base URL - use provided, existing, or default
+			const ollamaBaseURL =
+				baseURL || existingBaseURL || 'http://localhost:11434/api';
			const ollamaModels = await fetchOllamaModels(ollamaBaseURL);

			if (ollamaModels === null) {
@@ -481,6 +505,8 @@ async function setModel(role, modelId, options = {}) {
				determinedProvider = CUSTOM_PROVIDERS.OLLAMA;
				warningMessage = `Warning: Custom Ollama model '${modelId}' set. Ensure your Ollama server is running and has pulled this model. Taskmaster cannot guarantee compatibility.`;
				report('warn', warningMessage);
+				// Store the computed baseURL so it gets saved in config
+				computedBaseURL = ollamaBaseURL;
			} else {
				// Server is running but model not found
				const tagsUrl = `${ollamaBaseURL}/tags`;
@@ -555,6 +581,62 @@ async function setModel(role, modelId, options = {}) {
				warningMessage = `Warning: Codex CLI model '${modelId}' not found in supported models. Setting without validation.`;
				report('warn', warningMessage);
			}
		} else if (providerHint === CUSTOM_PROVIDERS.LMSTUDIO) {
			// LM Studio provider - set without validation since it's a local server
			determinedProvider = CUSTOM_PROVIDERS.LMSTUDIO;

			// Get current provider for this role to check if we should preserve baseURL
			let currentProvider;
			if (role === 'main') {
				currentProvider = getMainProvider(projectRoot);
			} else if (role === 'research') {
				currentProvider = getResearchProvider(projectRoot);
			} else if (role === 'fallback') {
				currentProvider = getFallbackProvider(projectRoot);
			}

			// Only preserve baseURL if we're already using LMSTUDIO
			const existingBaseURL =
				currentProvider === CUSTOM_PROVIDERS.LMSTUDIO
					? getBaseUrlForRole(role, projectRoot)
					: null;

			const lmStudioBaseURL =
				baseURL || existingBaseURL || 'http://localhost:1234/v1';
			warningMessage = `Warning: Custom LM Studio model '${modelId}' set with base URL '${lmStudioBaseURL}'. Please ensure LM Studio server is running and has loaded this model. Taskmaster cannot guarantee compatibility.`;
			report('warn', warningMessage);
			// Store the computed baseURL so it gets saved in config
			computedBaseURL = lmStudioBaseURL;
		} else if (providerHint === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE) {
			// OpenAI-compatible provider - set without validation, requires baseURL
			determinedProvider = CUSTOM_PROVIDERS.OPENAI_COMPATIBLE;

			// Get current provider for this role to check if we should preserve baseURL
			let currentProvider;
			if (role === 'main') {
				currentProvider = getMainProvider(projectRoot);
			} else if (role === 'research') {
				currentProvider = getResearchProvider(projectRoot);
			} else if (role === 'fallback') {
				currentProvider = getFallbackProvider(projectRoot);
			}

			// Only preserve baseURL if we're already using OPENAI_COMPATIBLE
			const existingBaseURL =
				currentProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE
					? getBaseUrlForRole(role, projectRoot)
					: null;

			const resolvedBaseURL = baseURL || existingBaseURL;
			if (!resolvedBaseURL) {
				throw new Error(
					`Base URL is required for OpenAI-compatible providers. Please provide a baseURL.`
				);
			}
			warningMessage = `Warning: Custom OpenAI-compatible model '${modelId}' set with base URL '${resolvedBaseURL}'. Taskmaster cannot guarantee compatibility. Ensure your API endpoint follows the OpenAI API specification.`;
			report('warn', warningMessage);
			// Store the computed baseURL so it gets saved in config
			computedBaseURL = resolvedBaseURL;
		} else {
			// Invalid provider hint - should not happen with our constants
			throw new Error(`Invalid provider hint received: ${providerHint}`);
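The role-to-provider lookup above is repeated verbatim in the Ollama, LM Studio, and OpenAI-compatible branches. A shared helper along these lines could collapse the three copies; this is a sketch only, not part of this change set:

```js
// Possible follow-up refactor (not in this diff): one lookup helper
// reused by all three baseURL-preserving branches.
function getProviderForRole(role, projectRoot) {
  switch (role) {
    case 'main':
      return getMainProvider(projectRoot);
    case 'research':
      return getResearchProvider(projectRoot);
    case 'fallback':
      return getFallbackProvider(projectRoot);
    default:
      return undefined;
  }
}
```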
@@ -575,7 +657,7 @@ async function setModel(role, modelId, options = {}) {
				success: false,
				error: {
					code: 'MODEL_NOT_FOUND_NO_HINT',
-					message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --gemini-cli, or --codex-cli.`
+					message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
				}
			};
		}
@@ -602,6 +684,19 @@ async function setModel(role, modelId, options = {}) {
			modelId: modelId
		};

+		// Handle baseURL for providers that support it
+		if (
+			computedBaseURL &&
+			(determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
+				determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
+				determinedProvider === CUSTOM_PROVIDERS.OLLAMA)
+		) {
+			currentConfig.models[role].baseURL = computedBaseURL;
+		} else {
+			// Remove baseURL when switching to a provider that doesn't need it
+			delete currentConfig.models[role].baseURL;
+		}
+
		// If model data is available, update maxTokens from supported-models.json
		if (modelData && modelData.max_tokens) {
			currentConfig.models[role].maxTokens = modelData.max_tokens;

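With `computedBaseURL` persisted above, a role entry in the config file ends up carrying the endpoint alongside the model. The exact file layout below is an assumption for illustration:

```json
{
  "models": {
    "main": {
      "provider": "openai-compatible",
      "modelId": "my-model",
      "baseURL": "http://localhost:8000/v1"
    }
  }
}
```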
@@ -27,12 +27,14 @@ export const prdSingleTaskSchema = z.object({
// Define the Zod schema for the ENTIRE expected AI response object
export const prdResponseSchema = z.object({
	tasks: z.array(prdSingleTaskSchema),
-	metadata: z.object({
-		projectName: z.string(),
-		totalTasks: z.number(),
-		sourceFile: z.string(),
-		generatedAt: z.string()
-	})
+	metadata: z
+		.object({
+			projectName: z.string(),
+			totalTasks: z.number(),
+			sourceFile: z.string(),
+			generatedAt: z.string()
+		})
+		.nullable()
});

// ============================================================================

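The effect of `.nullable()`: responses that emit `"metadata": null` now validate, where previously the object was mandatory. Note that `.nullable()` accepts an explicit null but not a missing key, so the property itself must still be present. A quick check with illustrative values:

```js
// Accepted after this change: explicit null metadata.
prdResponseSchema.parse({ tasks: [], metadata: null });

// Still accepted: the fully populated object.
prdResponseSchema.parse({
  tasks: [],
  metadata: {
    projectName: 'demo',
    totalTasks: 0,
    sourceFile: 'prd.txt',
    generatedAt: '2025-01-01T00:00:00Z'
  }
});

// Still rejected: omitting the key entirely (nullable, not optional).
prdResponseSchema.parse({ tasks: [] }); // throws ZodError
```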
@@ -18,3 +18,6 @@ export { ClaudeCodeProvider } from './claude-code.js';
export { GeminiCliProvider } from './gemini-cli.js';
export { GrokCliProvider } from './grok-cli.js';
export { CodexCliProvider } from './codex-cli.js';
+export { OpenAICompatibleProvider } from './openai-compatible.js';
+export { ZAIProvider } from './zai.js';
+export { LMStudioProvider } from './lmstudio.js';

39
src/ai-providers/lmstudio.js
Normal file
@@ -0,0 +1,39 @@
/**
 * lmstudio.js
 * AI provider implementation for LM Studio local models.
 *
 * LM Studio is a desktop application for running local LLMs.
 * It provides an OpenAI-compatible API server that runs locally.
 * Default server: http://localhost:1234/v1
 *
 * Usage:
 * 1. Start the LM Studio application
 * 2. Load a model (e.g., llama-3.2-1b, mistral-7b)
 * 3. Go to the "Local Server" tab and click "Start Server"
 * 4. Use the model ID from LM Studio in your config
 *
 * Note: LM Studio supports `json_schema` mode for structured outputs,
 * not `json_object` mode. We enable native structured outputs so the
 * AI SDK uses `json_schema` mode, which works reliably across LM Studio
 * models.
 */

import { OpenAICompatibleProvider } from './openai-compatible.js';

/**
 * LM Studio provider for local model inference.
 * Does not require an API key as it runs locally.
 */
export class LMStudioProvider extends OpenAICompatibleProvider {
  constructor() {
    super({
      name: 'LM Studio',
      apiKeyEnvVar: 'LMSTUDIO_API_KEY',
      requiresApiKey: false, // Local server, no API key needed
      defaultBaseURL: 'http://localhost:1234/v1',
      supportsStructuredOutputs: true
      // LM Studio supports json_schema mode, not json_object mode,
      // so native structured outputs stay enabled
    });
  }
}
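Usage is the same as for any other provider, minus the key. A minimal sketch assuming the default local server:

```js
import { LMStudioProvider } from './lmstudio.js';

const lmstudio = new LMStudioProvider();
// No API key needed; baseURL falls back to http://localhost:1234/v1.
const localClient = lmstudio.getClient({});
// Point at another machine on the LAN instead:
const remoteClient = lmstudio.getClient({ baseURL: 'http://192.168.1.100:1234/v1' });
```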
132
src/ai-providers/openai-compatible.js
Normal file
@@ -0,0 +1,132 @@
/**
 * openai-compatible.js
 * Generic base class for OpenAI-compatible API providers.
 * This allows any provider with an OpenAI-compatible API to be easily integrated.
 */

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { BaseAIProvider } from './base-provider.js';

/**
 * Base class for OpenAI-compatible providers (LM Studio, Z.ai, etc.)
 * Provides a flexible foundation for any service with OpenAI-compatible endpoints.
 */
export class OpenAICompatibleProvider extends BaseAIProvider {
  /**
   * @param {object} config - Provider configuration
   * @param {string} config.name - Provider display name
   * @param {string} config.apiKeyEnvVar - Environment variable name for API key
   * @param {boolean} [config.requiresApiKey=true] - Whether API key is required
   * @param {string} [config.defaultBaseURL] - Default base URL for the API
   * @param {Function} [config.getBaseURL] - Function to determine base URL from params
   * @param {boolean} [config.supportsStructuredOutputs] - Whether provider supports structured outputs
   */
  constructor(config) {
    super();

    if (!config.name) {
      throw new Error('Provider name is required');
    }
    if (!config.apiKeyEnvVar) {
      throw new Error('API key environment variable name is required');
    }

    this.name = config.name;
    this.apiKeyEnvVar = config.apiKeyEnvVar;
    this.requiresApiKey = config.requiresApiKey !== false; // Default to true
    this.defaultBaseURL = config.defaultBaseURL;
    this.getBaseURLFromParams = config.getBaseURL;
    this.supportsStructuredOutputs = config.supportsStructuredOutputs;
  }

  /**
   * Returns the environment variable name required for this provider's API key.
   * @returns {string} The environment variable name for the API key
   */
  getRequiredApiKeyName() {
    return this.apiKeyEnvVar;
  }

  /**
   * Returns whether this provider requires an API key.
   * @returns {boolean} True if API key is required
   */
  isRequiredApiKey() {
    return this.requiresApiKey;
  }

  /**
   * Override auth validation based on requiresApiKey setting
   * @param {object} params - Parameters to validate
   */
  validateAuth(params) {
    if (this.requiresApiKey && !params.apiKey) {
      throw new Error(`${this.name} API key is required`);
    }
  }

  /**
   * Determines the base URL to use for the API.
   * @param {object} params - Client parameters
   * @returns {string|undefined} The base URL to use
   */
  getBaseURL(params) {
    // If custom baseURL provided in params, use it
    if (params.baseURL) {
      return params.baseURL;
    }

    // If provider has a custom getBaseURL function, use it
    if (this.getBaseURLFromParams) {
      return this.getBaseURLFromParams(params);
    }

    // Otherwise use default baseURL if available
    return this.defaultBaseURL;
  }

  /**
   * Creates and returns an OpenAI-compatible client instance.
   * @param {object} params - Parameters for client initialization
   * @param {string} [params.apiKey] - API key (required if requiresApiKey is true)
   * @param {string} [params.baseURL] - Optional custom API endpoint
   * @returns {Function} OpenAI-compatible client function
   * @throws {Error} If required parameters are missing or initialization fails
   */
  getClient(params) {
    try {
      const { apiKey } = params;

      // Validate API key if required
      if (this.requiresApiKey && !apiKey) {
        throw new Error(`${this.name} API key is required.`);
      }

      const baseURL = this.getBaseURL(params);

      const clientConfig = {
        // Provider name for SDK (required, used for logging/debugging)
        name: this.name.toLowerCase().replace(/[^a-z0-9]/g, '-')
      };

      // Only include apiKey if provider requires it
      if (this.requiresApiKey && apiKey) {
        clientConfig.apiKey = apiKey;
      }

      // Include baseURL if available
      if (baseURL) {
        clientConfig.baseURL = baseURL;
      }

      // Configure structured outputs support if specified
      if (this.supportsStructuredOutputs !== undefined) {
        clientConfig.supportsStructuredOutputs = this.supportsStructuredOutputs;
      }

      return createOpenAICompatible(clientConfig);
    } catch (error) {
      this.handleError('client initialization', error);
    }
  }
}
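Any OpenAI-compatible service can be wrapped by subclassing with its own defaults. The provider below is hypothetical and shown only to illustrate the extension pattern:

```js
import { OpenAICompatibleProvider } from './openai-compatible.js';

// 'Acme', ACME_API_KEY, and the endpoint are made-up placeholder values.
export class AcmeProvider extends OpenAICompatibleProvider {
  constructor() {
    super({
      name: 'Acme',
      apiKeyEnvVar: 'ACME_API_KEY',
      requiresApiKey: true,
      defaultBaseURL: 'https://api.acme.example/v1'
    });
  }
}
```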
21
src/ai-providers/zai.js
Normal file
@@ -0,0 +1,21 @@
/**
 * zai.js
 * AI provider implementation for Z.ai (GLM) models.
 * Uses the OpenAI-compatible API endpoint.
 */

import { OpenAICompatibleProvider } from './openai-compatible.js';

/**
 * Z.ai provider supporting GLM models through OpenAI-compatible API.
 */
export class ZAIProvider extends OpenAICompatibleProvider {
  constructor() {
    super({
      name: 'Z.ai',
      apiKeyEnvVar: 'ZAI_API_KEY',
      requiresApiKey: true,
      defaultBaseURL: 'https://api.z.ai/api/paas/v4/'
    });
  }
}
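A minimal usage sketch: the provider itself does not read the environment, so the caller supplies the key (here pulled from `ZAI_API_KEY`):

```js
import { ZAIProvider } from './zai.js';

const zai = new ZAIProvider();
const client = zai.getClient({ apiKey: process.env.ZAI_API_KEY });
// With @ai-sdk/openai-compatible, calling client('glm-4.6') should yield a
// model handle for the AI SDK (our assumption about downstream usage).
```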
@@ -12,6 +12,7 @@ export const geminiProfile = createProfile({
	mcpConfigName: 'settings.json', // Override default 'mcp.json'
	includeDefaultRules: false,
	fileMap: {
-		'AGENTS.md': 'GEMINI.md'
+		'AGENT.md': 'AGENTS.md', // Generic base for all AI agents
+		'GEMINI.md': 'GEMINI.md' // Gemini-specific features only
	}
});

@@ -57,7 +57,7 @@
	"prompts": {
		"default": {
			"system": "You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.{{#if research}}\nBefore breaking down the PRD into tasks, you will:\n1. Research and analyze the latest technologies, libraries, frameworks, and best practices that would be appropriate for this project\n2. Identify any potential technical challenges, security concerns, or scalability issues not explicitly mentioned in the PRD without discarding any explicit requirements or going overboard with complexity -- always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches\n3. Consider current industry standards and evolving trends relevant to this project (this step aims to solve LLM hallucinations and out of date information due to training data cutoff dates)\n4. Evaluate alternative implementation approaches and recommend the most efficient path\n5. Include specific library versions, helpful APIs, and concrete implementation guidance based on your research\n6. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches\n\nYour task breakdown should incorporate this research, resulting in more detailed implementation guidance, more accurate dependency mapping, and more precise technology recommendations than would be possible from the PRD text alone, while maintaining all explicit requirements and best practices and all details and nuances of the PRD.{{/if}}\n\nAnalyze the provided PRD content and generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD\nEach task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.\nAssign sequential IDs starting from {{nextId}}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.\nSet status to 'pending', dependencies to an empty array [], and priority to '{{defaultTaskPriority}}' initially for all tasks.\nGenerate a response containing a single key \"tasks\", where the value is an array of task objects adhering to the provided schema.\n\nEach task should follow this JSON structure:\n{\n\t\"id\": number,\n\t\"title\": string,\n\t\"description\": string,\n\t\"status\": \"pending\",\n\t\"dependencies\": number[] (IDs of tasks this depends on),\n\t\"priority\": \"high\" | \"medium\" | \"low\",\n\t\"details\": string (implementation details),\n\t\"testStrategy\": string (validation approach)\n}\n\nGuidelines:\n1. {{#if (gt numTasks 0)}}Unless complexity warrants otherwise{{else}}Depending on the complexity{{/if}}, create {{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, numbered sequentially starting from {{nextId}}\n2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards\n3. Order tasks logically - consider dependencies and implementation sequence\n4. Early tasks should focus on setup, core functionality first, then advanced features\n5. Include clear validation/testing approach for each task\n6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than {{nextId}} if applicable)\n7. Assign priority (high/medium/low) based on criticality and dependency order\n8. Include detailed implementation guidance in the \"details\" field{{#if research}}, with specific libraries and version recommendations based on your research{{/if}}\n9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance\n10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements\n11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches{{#if research}}\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research{{/if}}",
-			"user": "{{#if hasCodebaseAnalysis}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating tasks:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine key files like package.json, README.md, and main entry points\n4. Analyze the current state of implementation to understand what already exists\n\nBased on your analysis:\n- Identify what components/features are already implemented\n- Understand the technology stack, frameworks, and patterns in use\n- Generate tasks that build upon the existing codebase rather than duplicating work\n- Ensure tasks align with the project's current architecture and conventions\n\nProject Root: {{projectRoot}}\n\n{{/if}}Here's the Product Requirements Document (PRD) to break down into {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, starting IDs from {{nextId}}:{{#if research}}\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.{{/if}}\n\n{{prdContent}}\n\nIMPORTANT: Your response must be a JSON object with a single property named \"tasks\" containing an array of task objects. Do NOT include metadata or any other properties."
+			"user": "{{#if hasCodebaseAnalysis}}## IMPORTANT: Codebase Analysis Required\n\nYou have access to powerful codebase analysis tools. Before generating tasks:\n\n1. Use the Glob tool to explore the project structure (e.g., \"**/*.js\", \"**/*.json\", \"**/README.md\")\n2. Use the Grep tool to search for existing implementations, patterns, and technologies\n3. Use the Read tool to examine key files like package.json, README.md, and main entry points\n4. Analyze the current state of implementation to understand what already exists\n\nBased on your analysis:\n- Identify what components/features are already implemented\n- Understand the technology stack, frameworks, and patterns in use\n- Generate tasks that build upon the existing codebase rather than duplicating work\n- Ensure tasks align with the project's current architecture and conventions\n\nProject Root: {{projectRoot}}\n\n{{/if}}Here's the Product Requirements Document (PRD) to break down into {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} tasks, starting IDs from {{nextId}}:{{#if research}}\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.{{/if}}\n\n{{prdContent}}\n\nIMPORTANT: Your response must be a JSON object with a \"tasks\" property containing an array of task objects. You may optionally include a \"metadata\" object. Do not include any other properties."
		}
	}
}

@@ -27,7 +27,8 @@ describe('Gemini Profile Initialization Functionality', () => {
		expect(geminiProfileContent).toContain("rulesDir: '.'"); // non-default
		expect(geminiProfileContent).toContain("mcpConfigName: 'settings.json'"); // non-default
		expect(geminiProfileContent).toContain('includeDefaultRules: false'); // non-default
-		expect(geminiProfileContent).toContain("'AGENTS.md': 'GEMINI.md'");
+		expect(geminiProfileContent).toContain("'AGENT.md': 'AGENTS.md'");
+		expect(geminiProfileContent).toContain("'GEMINI.md': 'GEMINI.md'");

		// Check the final computed properties on the profile object
		expect(geminiProfile.profileName).toBe('gemini');
@@ -38,7 +39,8 @@ describe('Gemini Profile Initialization Functionality', () => {
		expect(geminiProfile.mcpConfigName).toBe('settings.json');
		expect(geminiProfile.mcpConfigPath).toBe('.gemini/settings.json'); // computed
		expect(geminiProfile.includeDefaultRules).toBe(false);
-		expect(geminiProfile.fileMap['AGENTS.md']).toBe('GEMINI.md');
+		expect(geminiProfile.fileMap['AGENT.md']).toBe('AGENTS.md');
+		expect(geminiProfile.fileMap['GEMINI.md']).toBe('GEMINI.md');
	});

	test('gemini.js has no lifecycle functions', () => {

102
tests/unit/ai-providers/lmstudio.test.js
Normal file
@@ -0,0 +1,102 @@
|
||||
/**
|
||||
* Tests for LMStudioProvider
|
||||
*/
|
||||
|
||||
import { LMStudioProvider } from '../../../src/ai-providers/lmstudio.js';
|
||||
|
||||
describe('LMStudioProvider', () => {
|
||||
let provider;
|
||||
|
||||
beforeEach(() => {
|
||||
provider = new LMStudioProvider();
|
||||
});
|
||||
|
||||
describe('constructor', () => {
|
||||
it('should initialize with correct name', () => {
|
||||
expect(provider.name).toBe('LM Studio');
|
||||
});
|
||||
|
||||
it('should not require API key', () => {
|
||||
expect(provider.requiresApiKey).toBe(false);
|
||||
});
|
||||
|
||||
it('should have default localhost baseURL', () => {
|
||||
expect(provider.defaultBaseURL).toBe('http://localhost:1234/v1');
|
||||
});
|
||||
|
||||
it('should disable structured outputs (LM Studio only supports json_schema mode)', () => {
|
||||
expect(provider.supportsStructuredOutputs).toBe(true);
|
||||
});
|
||||
|
||||
		it('should inherit from OpenAICompatibleProvider', () => {
			expect(provider).toHaveProperty('generateText');
			expect(provider).toHaveProperty('streamText');
			expect(provider).toHaveProperty('generateObject');
		});
	});

	describe('getRequiredApiKeyName', () => {
		it('should return environment variable name', () => {
			expect(provider.getRequiredApiKeyName()).toBe('LMSTUDIO_API_KEY');
		});
	});

	describe('isRequiredApiKey', () => {
		it('should return false as local server does not require API key', () => {
			expect(provider.isRequiredApiKey()).toBe(false);
		});
	});

	describe('getClient', () => {
		it('should create client without API key', () => {
			const client = provider.getClient({});
			expect(client).toBeDefined();
		});

		it('should create client with custom baseURL', () => {
			const params = {
				baseURL: 'http://custom-host:8080/v1'
			};
			const client = provider.getClient(params);
			expect(client).toBeDefined();
		});

		it('should not throw error when API key is missing', () => {
			expect(() => {
				provider.getClient({});
			}).not.toThrow();
		});
	});

	describe('validateAuth', () => {
		it('should not require API key validation', () => {
			expect(() => {
				provider.validateAuth({});
			}).not.toThrow();
		});

		it('should pass with or without API key', () => {
			expect(() => {
				provider.validateAuth({ apiKey: 'test-key' });
			}).not.toThrow();

			expect(() => {
				provider.validateAuth({});
			}).not.toThrow();
		});
	});

	describe('getBaseURL', () => {
		it('should return default localhost URL', () => {
			const baseURL = provider.getBaseURL({});
			expect(baseURL).toBe('http://localhost:1234/v1');
		});

		it('should return custom baseURL when provided', () => {
			const baseURL = provider.getBaseURL({
				baseURL: 'http://192.168.1.100:1234/v1'
			});
			expect(baseURL).toBe('http://192.168.1.100:1234/v1');
		});
	});
});
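For orientation, the class these tests pin down might look roughly like the sketch below. This is an illustrative reconstruction inferred from the assertions above, not the actual source of src/ai-providers/lmstudio.js; the constructor option names mirror the OpenAICompatibleProvider tests later in this diff.

```js
// Illustrative sketch only - inferred from the test assertions, not copied
// from the real src/ai-providers/lmstudio.js.
import { OpenAICompatibleProvider } from './openai-compatible.js';

export class LMStudioProvider extends OpenAICompatibleProvider {
	constructor() {
		super({
			name: 'LM Studio',
			apiKeyEnvVar: 'LMSTUDIO_API_KEY',
			requiresApiKey: false, // local server: getClient({}) must not throw
			defaultBaseURL: 'http://localhost:1234/v1'
		});
		// LM Studio handles structured outputs via json_schema mode
		this.supportsStructuredOutputs = true;
	}
}
```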
190	tests/unit/ai-providers/openai-compatible.test.js	Normal file
@@ -0,0 +1,190 @@
/**
 * Tests for OpenAICompatibleProvider base class
 */

import { OpenAICompatibleProvider } from '../../../src/ai-providers/openai-compatible.js';

describe('OpenAICompatibleProvider', () => {
	describe('constructor', () => {
		it('should initialize with required config', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY'
			});

			expect(provider.name).toBe('Test Provider');
			expect(provider.apiKeyEnvVar).toBe('TEST_API_KEY');
			expect(provider.requiresApiKey).toBe(true);
		});

		it('should initialize with requiresApiKey set to false', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: false
			});

			expect(provider.requiresApiKey).toBe(false);
		});

		it('should throw error if name is missing', () => {
			expect(() => {
				new OpenAICompatibleProvider({
					apiKeyEnvVar: 'TEST_API_KEY'
				});
			}).toThrow('Provider name is required');
		});

		it('should throw error if apiKeyEnvVar is missing', () => {
			expect(() => {
				new OpenAICompatibleProvider({
					name: 'Test Provider'
				});
			}).toThrow('API key environment variable name is required');
		});
	});

	describe('getRequiredApiKeyName', () => {
		it('should return correct environment variable name', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY'
			});

			expect(provider.getRequiredApiKeyName()).toBe('TEST_API_KEY');
		});
	});

	describe('isRequiredApiKey', () => {
		it('should return true by default', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY'
			});

			expect(provider.isRequiredApiKey()).toBe(true);
		});

		it('should return false when explicitly set', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: false
			});

			expect(provider.isRequiredApiKey()).toBe(false);
		});
	});

	describe('validateAuth', () => {
		it('should validate API key is present when required', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: true
			});

			expect(() => {
				provider.validateAuth({});
			}).toThrow('Test Provider API key is required');
		});

		it('should not validate API key when not required', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: false
			});

			expect(() => {
				provider.validateAuth({});
			}).not.toThrow();
		});

		it('should pass with valid API key', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY'
			});

			expect(() => {
				provider.validateAuth({ apiKey: 'test-key' });
			}).not.toThrow();
		});
	});

	describe('getBaseURL', () => {
		it('should return custom baseURL from params', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				defaultBaseURL: 'https://default.api.com'
			});

			const baseURL = provider.getBaseURL({
				baseURL: 'https://custom.api.com'
			});
			expect(baseURL).toBe('https://custom.api.com');
		});

		it('should return default baseURL if no custom provided', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				defaultBaseURL: 'https://default.api.com'
			});

			const baseURL = provider.getBaseURL({});
			expect(baseURL).toBe('https://default.api.com');
		});

		it('should use custom getBaseURL function', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				getBaseURL: (params) => `https://api.example.com/${params.route}`
			});

			const baseURL = provider.getBaseURL({ route: 'v2' });
			expect(baseURL).toBe('https://api.example.com/v2');
		});
	});

	describe('getClient', () => {
		it('should create client with API key when required', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: true,
				defaultBaseURL: 'https://api.example.com'
			});

			const client = provider.getClient({ apiKey: 'test-key' });
			expect(client).toBeDefined();
		});

		it('should create client without API key when not required', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: false,
				defaultBaseURL: 'https://api.example.com'
			});

			const client = provider.getClient({});
			expect(client).toBeDefined();
		});

		it('should throw error when API key is required but missing', () => {
			const provider = new OpenAICompatibleProvider({
				name: 'Test Provider',
				apiKeyEnvVar: 'TEST_API_KEY',
				requiresApiKey: true
			});

			expect(() => {
				provider.getClient({});
			}).toThrow('Test Provider API key is required.');
		});
	});
});
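Taken together, these tests describe a fairly complete contract for the base class. A minimal sketch that would satisfy them is shown below; it assumes the client factory is the Vercel AI SDK's `createOpenAI` (an assumption — the real module may wrap a different factory, and its `generateText`/`streamText`/`generateObject` implementations are stubbed out here).

```js
// Minimal sketch of the contract the tests above pin down. Not the actual
// src/ai-providers/openai-compatible.js; the createOpenAI factory is an
// assumed implementation detail.
import { createOpenAI } from '@ai-sdk/openai';

export class OpenAICompatibleProvider {
	constructor({ name, apiKeyEnvVar, requiresApiKey = true, defaultBaseURL, getBaseURL } = {}) {
		if (!name) throw new Error('Provider name is required');
		if (!apiKeyEnvVar) throw new Error('API key environment variable name is required');
		this.name = name;
		this.apiKeyEnvVar = apiKeyEnvVar;
		this.requiresApiKey = requiresApiKey;
		this.defaultBaseURL = defaultBaseURL;
		if (getBaseURL) this._getBaseURL = getBaseURL; // per-provider override hook
	}

	getRequiredApiKeyName() {
		return this.apiKeyEnvVar;
	}

	isRequiredApiKey() {
		return this.requiresApiKey;
	}

	validateAuth({ apiKey } = {}) {
		// Only hosted providers (requiresApiKey: true) reject a missing key.
		if (this.requiresApiKey && !apiKey) {
			throw new Error(`${this.name} API key is required.`);
		}
	}

	getBaseURL(params = {}) {
		if (this._getBaseURL) return this._getBaseURL(params);
		return params.baseURL || this.defaultBaseURL;
	}

	getClient(params = {}) {
		this.validateAuth(params);
		return createOpenAI({ apiKey: params.apiKey, baseURL: this.getBaseURL(params) });
	}

	// Thin wrappers around the SDK live here in the real module; stubbed in this sketch.
	async generateText(params) {}
	async streamText(params) {}
	async generateObject(params) {}
}
```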
78	tests/unit/ai-providers/zai.test.js	Normal file
@@ -0,0 +1,78 @@
/**
 * Tests for ZAIProvider
 */

import { ZAIProvider } from '../../../src/ai-providers/zai.js';

describe('ZAIProvider', () => {
	let provider;

	beforeEach(() => {
		provider = new ZAIProvider();
	});

	describe('constructor', () => {
		it('should initialize with correct name', () => {
			expect(provider.name).toBe('Z.ai');
		});

		it('should initialize with correct default baseURL', () => {
			expect(provider.defaultBaseURL).toBe('https://api.z.ai/api/paas/v4/');
		});

		it('should inherit from OpenAICompatibleProvider', () => {
			expect(provider).toHaveProperty('generateText');
			expect(provider).toHaveProperty('streamText');
			expect(provider).toHaveProperty('generateObject');
		});
	});

	describe('getRequiredApiKeyName', () => {
		it('should return correct environment variable name', () => {
			expect(provider.getRequiredApiKeyName()).toBe('ZAI_API_KEY');
		});
	});

	describe('isRequiredApiKey', () => {
		it('should return true as API key is required', () => {
			expect(provider.isRequiredApiKey()).toBe(true);
		});
	});

	describe('getClient', () => {
		it('should create client with API key', () => {
			const params = { apiKey: 'test-key' };
			const client = provider.getClient(params);
			expect(client).toBeDefined();
		});

		it('should create client with custom baseURL', () => {
			const params = {
				apiKey: 'test-key',
				baseURL: 'https://custom.api.com/v1'
			};
			const client = provider.getClient(params);
			expect(client).toBeDefined();
		});

		it('should throw error when API key is missing', () => {
			expect(() => {
				provider.getClient({});
			}).toThrow('Z.ai API key is required.');
		});
	});

	describe('validateAuth', () => {
		it('should validate API key is present', () => {
			expect(() => {
				provider.validateAuth({});
			}).toThrow('Z.ai API key is required');
		});

		it('should pass with valid API key', () => {
			expect(() => {
				provider.validateAuth({ apiKey: 'test-key' });
			}).not.toThrow();
		});
	});
});
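For contrast with the LM Studio case above, a hypothetical sketch of a hosted provider where the key is mandatory, mirroring only what these assertions pin down (not the actual src/ai-providers/zai.js):

```js
// Hypothetical sketch inferred from the assertions above.
import { OpenAICompatibleProvider } from './openai-compatible.js';

export class ZAIProvider extends OpenAICompatibleProvider {
	constructor() {
		super({
			name: 'Z.ai',
			apiKeyEnvVar: 'ZAI_API_KEY',
			requiresApiKey: true, // hosted service: getClient({}) must throw
			defaultBaseURL: 'https://api.z.ai/api/paas/v4/'
		});
	}
}
```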
@@ -246,6 +246,27 @@ jest.unstable_mockModule('../../src/ai-providers/index.js', () => ({
		generateObject: jest.fn(),
		getRequiredApiKeyName: jest.fn(() => 'XAI_API_KEY'),
		isRequiredApiKey: jest.fn(() => false)
	})),
+	OpenAICompatibleProvider: jest.fn(() => ({
+		generateText: jest.fn(),
+		streamText: jest.fn(),
+		generateObject: jest.fn(),
+		getRequiredApiKeyName: jest.fn(() => 'OPENAI_COMPATIBLE_API_KEY'),
+		isRequiredApiKey: jest.fn(() => true)
+	})),
+	ZAIProvider: jest.fn(() => ({
+		generateText: jest.fn(),
+		streamText: jest.fn(),
+		generateObject: jest.fn(),
+		getRequiredApiKeyName: jest.fn(() => 'ZAI_API_KEY'),
+		isRequiredApiKey: jest.fn(() => true)
+	})),
+	LMStudioProvider: jest.fn(() => ({
+		generateText: jest.fn(),
+		streamText: jest.fn(),
+		generateObject: jest.fn(),
+		getRequiredApiKeyName: jest.fn(() => 'LMSTUDIO_API_KEY'),
+		isRequiredApiKey: jest.fn(() => false)
+	}))
}));

@@ -580,11 +601,12 @@ describe('Unified AI Services', () => {
	// - generateObjectService (mock schema, check object result)
	// - streamTextService (more complex to test, might need stream helpers)
	test('should skip provider with missing API key and try next in fallback sequence', async () => {
		// Setup isApiKeySet to return false for anthropic but true for perplexity
		mockIsApiKeySet.mockImplementation((provider, session, root) => {
			if (provider === 'anthropic') return false; // Main provider has no key
			return true; // Other providers have keys
		});
+		// Mock anthropic to throw API key error
+		mockAnthropicProvider.generateText.mockRejectedValue(
+			new Error(
+				"Required API key ANTHROPIC_API_KEY for provider 'anthropic' is not set in environment, session, or .env file."
+			)
+		);

		// Mock perplexity text response (since we'll skip anthropic)
		mockPerplexityProvider.generateText.mockResolvedValue({
@@ -605,51 +627,35 @@ describe('Unified AI Services', () => {
			'Perplexity response (skipped to research)'
		);

		// Should check API keys
		expect(mockIsApiKeySet).toHaveBeenCalledWith(
			'anthropic',
			params.session,
			fakeProjectRoot
		);
		expect(mockIsApiKeySet).toHaveBeenCalledWith(
			'perplexity',
			params.session,
			fakeProjectRoot
		);

-		// Should log a warning
+		// Should log an error for the failed provider
		expect(mockLog).toHaveBeenCalledWith(
-			'warn',
-			expect.stringContaining(
-				`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
-			)
+			'error',
+			expect.stringContaining(`Service call failed for role main`)
		);

-		// Should NOT call anthropic provider
-		expect(mockAnthropicProvider.generateText).not.toHaveBeenCalled();
+		// Should attempt to call anthropic provider first
+		expect(mockAnthropicProvider.generateText).toHaveBeenCalled();

-		// Should call perplexity provider
+		// Should call perplexity provider after anthropic fails
		expect(mockPerplexityProvider.generateText).toHaveBeenCalledTimes(1);
	});

	test('should skip multiple providers with missing API keys and use first available', async () => {
-		// Setup: Main and fallback providers have no keys, only research has a key
-		mockIsApiKeySet.mockImplementation((provider, session, root) => {
-			if (provider === 'anthropic') return false; // Main and fallback are both anthropic
-			if (provider === 'perplexity') return true; // Research has a key
-			return false;
-		});

+		// Define different providers for testing multiple skips
+		mockGetFallbackProvider.mockReturnValue('openai'); // Different from main
+		mockGetFallbackModelId.mockReturnValue('test-openai-model');

+		// Mock isApiKeySet to return false for both main and fallback
+		mockIsApiKeySet.mockImplementation((provider, session, root) => {
+			if (provider === 'anthropic') return false; // Main provider has no key
+			if (provider === 'openai') return false; // Fallback provider has no key
+			return true; // Research provider has a key
+		});
+		// Mock providers to throw API key errors (simulating _resolveApiKey behavior)
+		mockAnthropicProvider.generateText.mockRejectedValue(
+			new Error(
+				"Required API key ANTHROPIC_API_KEY for provider 'anthropic' is not set in environment, session, or .env file."
+			)
+		);
+		mockOpenAIProvider.generateText.mockRejectedValue(
+			new Error(
+				"Required API key OPENAI_API_KEY for provider 'openai' is not set in environment, session, or .env file."
+			)
+		);

		// Mock perplexity text response (since we'll skip to research)
		mockPerplexityProvider.generateText.mockResolvedValue({
@@ -670,48 +676,36 @@ describe('Unified AI Services', () => {
			'Research response after skipping main and fallback'
		);

		// Should check API keys for all three roles
		expect(mockIsApiKeySet).toHaveBeenCalledWith(
			'anthropic',
			params.session,
			fakeProjectRoot
		);
		expect(mockIsApiKeySet).toHaveBeenCalledWith(
			'openai',
			params.session,
			fakeProjectRoot
		);
		expect(mockIsApiKeySet).toHaveBeenCalledWith(
			'perplexity',
			params.session,
			fakeProjectRoot
		);

-		// Should log warnings for both skipped providers
+		// Should log errors for both skipped providers
		expect(mockLog).toHaveBeenCalledWith(
-			'warn',
-			expect.stringContaining(
-				`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
-			)
+			'error',
+			expect.stringContaining(`Service call failed for role main`)
		);
		expect(mockLog).toHaveBeenCalledWith(
-			'warn',
-			expect.stringContaining(
-				`Skipping role 'fallback' (Provider: openai): API key not set or invalid.`
-			)
+			'error',
+			expect.stringContaining(`Service call failed for role fallback`)
		);

-		// Should NOT call skipped providers
-		expect(mockAnthropicProvider.generateText).not.toHaveBeenCalled();
-		expect(mockOpenAIProvider.generateText).not.toHaveBeenCalled();
+		// Should call all providers in sequence until one succeeds
+		expect(mockAnthropicProvider.generateText).toHaveBeenCalled();
+		expect(mockOpenAIProvider.generateText).toHaveBeenCalled();

-		// Should call perplexity provider
+		// Should call perplexity provider which succeeds
		expect(mockPerplexityProvider.generateText).toHaveBeenCalledTimes(1);
	});

	test('should throw error if all providers in sequence have missing API keys', async () => {
		// Mock all providers to have missing API keys
		mockIsApiKeySet.mockReturnValue(false);
+		// Mock all providers to throw API key errors
+		mockAnthropicProvider.generateText.mockRejectedValue(
+			new Error(
+				"Required API key ANTHROPIC_API_KEY for provider 'anthropic' is not set in environment, session, or .env file."
+			)
+		);
+		mockPerplexityProvider.generateText.mockRejectedValue(
+			new Error(
+				"Required API key PERPLEXITY_API_KEY for provider 'perplexity' is not set in environment, session, or .env file."
+			)
+		);

		const params = {
			role: 'main',
@@ -719,29 +713,23 @@ describe('Unified AI Services', () => {
			session: { env: {} }
		};

-		// Should throw error since all providers would be skipped
+		// Should throw error since all providers would fail
		await expect(generateTextService(params)).rejects.toThrow(
-			'AI service call failed for all configured roles'
+			"Required API key PERPLEXITY_API_KEY for provider 'perplexity' is not set"
		);

-		// Should log warnings for all skipped providers
+		// Should log errors for all failed providers
		expect(mockLog).toHaveBeenCalledWith(
-			'warn',
-			expect.stringContaining(
-				`Skipping role 'main' (Provider: anthropic): API key not set or invalid.`
-			)
+			'error',
+			expect.stringContaining(`Service call failed for role main`)
		);
		expect(mockLog).toHaveBeenCalledWith(
-			'warn',
-			expect.stringContaining(
-				`Skipping role 'fallback' (Provider: anthropic): API key not set or invalid.`
-			)
+			'error',
+			expect.stringContaining(`Service call failed for role fallback`)
		);
		expect(mockLog).toHaveBeenCalledWith(
-			'warn',
-			expect.stringContaining(
-				`Skipping role 'research' (Provider: perplexity): API key not set or invalid.`
-			)
+			'error',
+			expect.stringContaining(`Service call failed for role research`)
		);

		// Should log final error
@@ -752,9 +740,9 @@ describe('Unified AI Services', () => {
			)
		);

-		// Should NOT call any providers
-		expect(mockAnthropicProvider.generateText).not.toHaveBeenCalled();
-		expect(mockPerplexityProvider.generateText).not.toHaveBeenCalled();
+		// Should attempt to call all providers in sequence
+		expect(mockAnthropicProvider.generateText).toHaveBeenCalled();
+		expect(mockPerplexityProvider.generateText).toHaveBeenCalled();
	});

	test('should not check API key for Ollama provider and try to use it', async () => {
@@ -788,17 +776,11 @@ describe('Unified AI Services', () => {
		expect(mockOllamaProvider.generateText).toHaveBeenCalledTimes(1);
	});

-	test('should correctly use the provided session for API key check', async () => {
+	test('should correctly use the provided session for API key resolution', async () => {
		// Mock custom session object with env vars
		const customSession = { env: { ANTHROPIC_API_KEY: 'session-api-key' } };

-		// Setup API key check to verify the session is passed correctly
-		mockIsApiKeySet.mockImplementation((provider, session, root) => {
-			// Only return true if the correct session was provided
-			return session === customSession;
-		});

-		// Mock the anthropic response
+		// Mock the anthropic response - if API key resolution works, this will be called
		mockAnthropicProvider.generateText.mockResolvedValue({
			text: 'Anthropic response with session key',
			usage: { inputTokens: 10, outputTokens: 10, totalTokens: 20 }
@@ -812,12 +794,8 @@ describe('Unified AI Services', () => {

		const result = await generateTextService(params);

-		// Should check API key with the custom session
-		expect(mockIsApiKeySet).toHaveBeenCalledWith(
-			'anthropic',
-			customSession,
-			fakeProjectRoot
-		);
+		// Should have successfully resolved API key from session and called provider
+		expect(mockAnthropicProvider.generateText).toHaveBeenCalled();

		// Should have gotten the anthropic response
		expect(result.mainResult).toBe('Anthropic response with session key');

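The thrust of these test changes: the service no longer pre-checks `isApiKeySet` and silently skips roles; it now attempts each role's provider, logs an error when the call fails (including missing-key errors thrown by the provider itself), and falls through to the next role. Schematically, the control flow might look like the sketch below (illustrative pseudologic with hypothetical names, not the actual ai-services-unified.js internals):

```js
// Illustrative sketch of the fallback behavior the updated tests encode.
// `roles` is e.g. ['main', 'fallback', 'research']; `attempt` calls the
// provider configured for that role; names here are hypothetical.
async function callWithFallback(roles, attempt, log) {
	let lastError;
	for (const role of roles) {
		try {
			// The provider itself throws if its API key cannot be resolved.
			return await attempt(role);
		} catch (err) {
			log('error', `Service call failed for role ${role}: ${err.message}`);
			lastError = err;
		}
	}
	// With every role exhausted, the last provider error surfaces to the
	// caller, e.g. the PERPLEXITY_API_KEY message asserted in the tests above.
	throw lastError;
}
```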
@@ -16,7 +16,8 @@ describe('Rule Transformer - Gemini Profile', () => {
	expect(geminiProfile.mcpConfigPath).toBe('.gemini/settings.json');
	expect(geminiProfile.includeDefaultRules).toBe(false);
	expect(geminiProfile.fileMap).toEqual({
-		'AGENTS.md': 'GEMINI.md'
+		'AGENT.md': 'AGENTS.md',
+		'GEMINI.md': 'GEMINI.md'
	});
});

@@ -41,15 +42,17 @@ describe('Rule Transformer - Gemini Profile', () => {
	test('should have correct file mapping', () => {
		const geminiProfile = getRulesProfile('gemini');
		expect(geminiProfile.fileMap).toEqual({
-			'AGENTS.md': 'GEMINI.md'
+			'AGENT.md': 'AGENTS.md',
+			'GEMINI.md': 'GEMINI.md'
		});
	});

-	test('should place GEMINI.md in root directory', () => {
+	test('should place AGENTS.md and GEMINI.md in root directory', () => {
		const geminiProfile = getRulesProfile('gemini');
		// rulesDir determines where fileMap files go
		expect(geminiProfile.rulesDir).toBe('.');
-		// This means AGENTS.md -> GEMINI.md will be placed in the root
+		// This means both AGENTS.md and GEMINI.md will be placed in the root
+		// Both files are auto-loaded by Gemini CLI
	});

	test('should place settings.json in .gemini directory', () => {
415	tests/unit/scripts/modules/task-manager/models-baseurl.test.js	Normal file
@@ -0,0 +1,415 @@
/**
 * Tests for models.js baseURL handling
 * Verifies that baseURL is only preserved when switching models within the same provider
 */
import { jest } from '@jest/globals';

// Mock the config manager
const mockConfigManager = {
	getMainModelId: jest.fn(() => 'claude-3-sonnet-20240229'),
	getResearchModelId: jest.fn(
		() => 'perplexity-llama-3.1-sonar-large-128k-online'
	),
	getFallbackModelId: jest.fn(() => 'gpt-4o-mini'),
	getMainProvider: jest.fn(),
	getResearchProvider: jest.fn(),
	getFallbackProvider: jest.fn(),
	getBaseUrlForRole: jest.fn(),
	getAvailableModels: jest.fn(),
	getConfig: jest.fn(),
	writeConfig: jest.fn(),
	isConfigFilePresent: jest.fn(() => true),
	getAllProviders: jest.fn(() => [
		'anthropic',
		'openai',
		'google',
		'openrouter'
	]),
	isApiKeySet: jest.fn(() => true),
	getMcpApiKeyStatus: jest.fn(() => true)
};

jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => mockConfigManager
);

// Mock path utils
jest.unstable_mockModule('../../../../../src/utils/path-utils.js', () => ({
	findConfigPath: jest.fn(() => '/test/path/.taskmaster/config.json')
}));

// Mock utils
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	log: jest.fn()
}));

// Mock core constants
jest.unstable_mockModule('@tm/core', () => ({
	CUSTOM_PROVIDERS: {
		OLLAMA: 'ollama',
		LMSTUDIO: 'lmstudio',
		OPENROUTER: 'openrouter',
		BEDROCK: 'bedrock',
		CLAUDE_CODE: 'claude-code',
		AZURE: 'azure',
		VERTEX: 'vertex',
		GEMINI_CLI: 'gemini-cli',
		CODEX_CLI: 'codex-cli',
		OPENAI_COMPATIBLE: 'openai-compatible'
	}
}));

// Import the module under test after mocks are set up
const { setModel } = await import(
	'../../../../../scripts/modules/task-manager/models.js'
);

describe('models.js - baseURL handling for LMSTUDIO', () => {
	const mockProjectRoot = '/test/project';
	const mockConfig = {
		models: {
			main: { provider: 'lmstudio', modelId: 'existing-model' },
			research: { provider: 'ollama', modelId: 'llama2' },
			fallback: { provider: 'anthropic', modelId: 'claude-3-haiku-20240307' }
		}
	};

	beforeEach(() => {
		jest.clearAllMocks();
		mockConfigManager.getConfig.mockReturnValue(
			JSON.parse(JSON.stringify(mockConfig))
		);
		mockConfigManager.writeConfig.mockReturnValue(true);
		mockConfigManager.getAvailableModels.mockReturnValue([]);
	});

	test('should use provided baseURL when explicitly given', async () => {
		const customBaseURL = 'http://192.168.1.100:1234/v1';
		mockConfigManager.getMainProvider.mockReturnValue('lmstudio');

		const result = await setModel('main', 'custom-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'lmstudio',
			baseURL: customBaseURL
		});

		// Check if setModel succeeded
		expect(result).toHaveProperty('success');
		if (!result.success) {
			throw new Error(`setModel failed: ${JSON.stringify(result.error)}`);
		}

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(customBaseURL);
	});

	test('should preserve existing baseURL when already using LMSTUDIO', async () => {
		const existingBaseURL = 'http://custom-lmstudio:8080/v1';
		mockConfigManager.getMainProvider.mockReturnValue('lmstudio');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(existingBaseURL);

		await setModel('main', 'new-lmstudio-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'lmstudio'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(existingBaseURL);
	});

	test('should use default baseURL when switching from OLLAMA to LMSTUDIO', async () => {
		const ollamaBaseURL = 'http://ollama-server:11434/api';
		mockConfigManager.getMainProvider.mockReturnValue('ollama');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(ollamaBaseURL);

		await setModel('main', 'lmstudio-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'lmstudio'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		// Should use default LMSTUDIO baseURL, not OLLAMA's
		expect(writtenConfig.models.main.baseURL).toBe('http://localhost:1234/v1');
		expect(writtenConfig.models.main.baseURL).not.toBe(ollamaBaseURL);
	});

	test('should use default baseURL when switching from any other provider to LMSTUDIO', async () => {
		mockConfigManager.getMainProvider.mockReturnValue('anthropic');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(null);

		await setModel('main', 'lmstudio-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'lmstudio'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe('http://localhost:1234/v1');
	});
});

// NOTE: OLLAMA tests omitted since they require HTTP mocking for fetchOllamaModels.
// The baseURL preservation logic is identical to LMSTUDIO, so the LMSTUDIO tests prove it works.

describe.skip('models.js - baseURL handling for OLLAMA', () => {
	const mockProjectRoot = '/test/project';
	const mockConfig = {
		models: {
			main: { provider: 'ollama', modelId: 'existing-model' },
			research: { provider: 'lmstudio', modelId: 'some-model' },
			fallback: { provider: 'anthropic', modelId: 'claude-3-haiku-20240307' }
		}
	};

	beforeEach(() => {
		jest.clearAllMocks();
		mockConfigManager.getConfig.mockReturnValue(
			JSON.parse(JSON.stringify(mockConfig))
		);
		mockConfigManager.writeConfig.mockReturnValue(true);
		mockConfigManager.getAvailableModels.mockReturnValue([]);
	});

	test('should use provided baseURL when explicitly given', async () => {
		const customBaseURL = 'http://192.168.1.200:11434/api';
		mockConfigManager.getMainProvider.mockReturnValue('ollama');

		// Mock fetch for Ollama models check
		global.fetch = jest.fn(() =>
			Promise.resolve({
				ok: true,
				json: () => Promise.resolve({ models: [{ model: 'custom-model' }] })
			})
		);

		await setModel('main', 'custom-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'ollama',
			baseURL: customBaseURL
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(customBaseURL);
	});

	test('should preserve existing baseURL when already using OLLAMA', async () => {
		const existingBaseURL = 'http://custom-ollama:9999/api';
		mockConfigManager.getMainProvider.mockReturnValue('ollama');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(existingBaseURL);

		// Mock fetch for Ollama models check
		global.fetch = jest.fn(() =>
			Promise.resolve({
				ok: true,
				json: () => Promise.resolve({ models: [{ model: 'new-ollama-model' }] })
			})
		);

		await setModel('main', 'new-ollama-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'ollama'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(existingBaseURL);
	});

	test('should use default baseURL when switching from LMSTUDIO to OLLAMA', async () => {
		const lmstudioBaseURL = 'http://lmstudio-server:1234/v1';
		mockConfigManager.getMainProvider.mockReturnValue('lmstudio');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(lmstudioBaseURL);

		// Mock fetch for Ollama models check
		global.fetch = jest.fn(() =>
			Promise.resolve({
				ok: true,
				json: () => Promise.resolve({ models: [{ model: 'ollama-model' }] })
			})
		);

		await setModel('main', 'ollama-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'ollama'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		// Should use default OLLAMA baseURL, not LMSTUDIO's
		expect(writtenConfig.models.main.baseURL).toBe(
			'http://localhost:11434/api'
		);
		expect(writtenConfig.models.main.baseURL).not.toBe(lmstudioBaseURL);
	});

	test('should use default baseURL when switching from any other provider to OLLAMA', async () => {
		mockConfigManager.getMainProvider.mockReturnValue('anthropic');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(null);

		// Mock fetch for Ollama models check
		global.fetch = jest.fn(() =>
			Promise.resolve({
				ok: true,
				json: () => Promise.resolve({ models: [{ model: 'ollama-model' }] })
			})
		);

		await setModel('main', 'ollama-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'ollama'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(
			'http://localhost:11434/api'
		);
	});
});

describe.skip('models.js - cross-provider baseURL isolation', () => {
	const mockProjectRoot = '/test/project';
	const mockConfig = {
		models: {
			main: {
				provider: 'ollama',
				modelId: 'existing-model',
				baseURL: 'http://ollama:11434/api'
			},
			research: {
				provider: 'lmstudio',
				modelId: 'some-model',
				baseURL: 'http://lmstudio:1234/v1'
			},
			fallback: { provider: 'anthropic', modelId: 'claude-3-haiku-20240307' }
		}
	};

	beforeEach(() => {
		jest.clearAllMocks();
		mockConfigManager.getConfig.mockReturnValue(
			JSON.parse(JSON.stringify(mockConfig))
		);
		mockConfigManager.writeConfig.mockReturnValue(true);
		mockConfigManager.getAvailableModels.mockReturnValue([]);
	});

	test('OLLAMA baseURL should not leak to LMSTUDIO', async () => {
		const ollamaBaseURL = 'http://custom-ollama:11434/api';
		mockConfigManager.getMainProvider.mockReturnValue('ollama');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(ollamaBaseURL);

		await setModel('main', 'lmstudio-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'lmstudio'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.provider).toBe('lmstudio');
		expect(writtenConfig.models.main.baseURL).toBe('http://localhost:1234/v1');
		expect(writtenConfig.models.main.baseURL).not.toContain('ollama');
	});

	test('LMSTUDIO baseURL should not leak to OLLAMA', async () => {
		const lmstudioBaseURL = 'http://custom-lmstudio:1234/v1';
		mockConfigManager.getMainProvider.mockReturnValue('lmstudio');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(lmstudioBaseURL);

		// Mock fetch for Ollama models check
		global.fetch = jest.fn(() =>
			Promise.resolve({
				ok: true,
				json: () => Promise.resolve({ models: [{ model: 'ollama-model' }] })
			})
		);

		await setModel('main', 'ollama-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'ollama'
		});

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.provider).toBe('ollama');
		expect(writtenConfig.models.main.baseURL).toBe(
			'http://localhost:11434/api'
		);
		expect(writtenConfig.models.main.baseURL).not.toContain('lmstudio');
		expect(writtenConfig.models.main.baseURL).not.toContain('1234');
	});
});

describe('models.js - baseURL handling for OPENAI_COMPATIBLE', () => {
	const mockProjectRoot = '/test/project';
	const mockConfig = {
		models: {
			main: {
				provider: 'openai-compatible',
				modelId: 'existing-model',
				baseURL: 'https://api.custom.com/v1'
			},
			research: { provider: 'anthropic', modelId: 'claude-3-haiku-20240307' },
			fallback: { provider: 'openai', modelId: 'gpt-4o-mini' }
		}
	};

	beforeEach(() => {
		jest.clearAllMocks();
		mockConfigManager.getConfig.mockReturnValue(
			JSON.parse(JSON.stringify(mockConfig))
		);
		mockConfigManager.writeConfig.mockReturnValue(true);
		mockConfigManager.getAvailableModels.mockReturnValue([]);
	});

	test('should preserve existing baseURL when already using OPENAI_COMPATIBLE', async () => {
		const existingBaseURL = 'https://api.custom.com/v1';
		mockConfigManager.getMainProvider.mockReturnValue('openai-compatible');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(existingBaseURL);

		const result = await setModel('main', 'new-compatible-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'openai-compatible'
		});

		expect(result).toHaveProperty('success');
		if (!result.success) {
			throw new Error(`setModel failed: ${JSON.stringify(result.error)}`);
		}

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(existingBaseURL);
	});

	test('should require baseURL when switching from another provider to OPENAI_COMPATIBLE', async () => {
		mockConfigManager.getMainProvider.mockReturnValue('anthropic');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(null);

		const result = await setModel('main', 'compatible-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'openai-compatible'
			// No baseURL provided
		});

		expect(result.success).toBe(false);
		expect(result.error?.message).toContain(
			'Base URL is required for OpenAI-compatible providers'
		);
	});

	test('should use provided baseURL when switching to OPENAI_COMPATIBLE', async () => {
		const newBaseURL = 'https://api.newprovider.com/v1';
		mockConfigManager.getMainProvider.mockReturnValue('anthropic');
		mockConfigManager.getBaseUrlForRole.mockReturnValue(null);

		const result = await setModel('main', 'compatible-model', {
			projectRoot: mockProjectRoot,
			providerHint: 'openai-compatible',
			baseURL: newBaseURL
		});

		expect(result).toHaveProperty('success');
		if (!result.success) {
			throw new Error(`setModel failed: ${JSON.stringify(result.error)}`);
		}

		const writtenConfig = mockConfigManager.writeConfig.mock.calls[0][0];
		expect(writtenConfig.models.main.baseURL).toBe(newBaseURL);
	});
});
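The rule these tests pin down can be summarized as a small decision function. The sketch below uses assumed names; the real logic lives in scripts/modules/task-manager/models.js and is only approximated here.

```js
// Sketch of the baseURL selection rule the tests above verify.
// Names (resolveBaseURL, DEFAULT_BASE_URLS) are hypothetical.
const DEFAULT_BASE_URLS = {
	lmstudio: 'http://localhost:1234/v1',
	ollama: 'http://localhost:11434/api'
};

function resolveBaseURL({ provider, previousProvider, previousBaseURL, explicitBaseURL }) {
	if (explicitBaseURL) return explicitBaseURL; // an explicit --baseURL always wins
	if (provider === previousProvider && previousBaseURL) {
		return previousBaseURL; // same provider: keep the configured URL
	}
	if (provider === 'openai-compatible') {
		// Generic providers have no sensible default endpoint.
		throw new Error('Base URL is required for OpenAI-compatible providers');
	}
	// Cross-provider switch: never carry the old URL over; use the new default.
	return DEFAULT_BASE_URLS[provider];
}
```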