Compare commits


1 Commit

Author: github-actions[bot]
SHA1: 464c4ee47d
Date: 2025-10-18 09:10:12 +00:00

docs: auto-update documentation based on changes in next branch

This PR was automatically generated to update documentation based on recent changes.

Original commit: fix: downgrade log level to silent (#1321)

Co-authored-by: Claude <claude-assistant@anthropic.com>
13 changed files with 120 additions and 108 deletions


@@ -1,5 +0,0 @@
----
-"task-master-ai": minor
----
-
-Add 4.5 haiku and sonnet to supported models for claude-code and anthropic ai providers


@@ -38,7 +38,7 @@ Taskmaster uses two primary methods for configuration:
     }
   },
   "global": {
-    "logLevel": "info",
+    "logLevel": "silent",
     "debug": false,
     "defaultSubtasks": 5,
     "defaultPriority": "medium",
@@ -85,9 +85,73 @@ Taskmaster uses two primary methods for configuration:
   - `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
 - **Optional Auto-Update Control:**
   - `TASKMASTER_SKIP_AUTO_UPDATE`: Set to '1' to disable automatic updates. Also automatically disabled in CI environments (when `CI` environment variable is set).
+- **Optional Logging Control:**
+  - `TASK_MASTER_LOG_LEVEL` or `TM_LOG_LEVEL`: Override the log level (values: `SILENT`, `ERROR`, `WARN`, `INFO`, `DEBUG`)
+  - `TASK_MASTER_SILENT` or `TM_SILENT`: Set to 'true' to completely silence all output
+  - `TASK_MASTER_NO_COLOR`: Set to 'true' to disable colored output
+  - `NO_COLOR`: Standard environment variable to disable colored output
+  - `MCP_MODE` or `TASK_MASTER_MCP`: Set to 'true' to enable MCP mode (automatically silences all output)
 
 **Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
 
+## Logging Configuration
+
+Task Master uses a configurable logging system that defaults to **silent mode** for clean CLI output. You can control logging behavior through both configuration files and environment variables.
+
+### Log Levels
+
+- **`SILENT` (0)**: No output (default behavior)
+- **`ERROR` (1)**: Only error messages
+- **`WARN` (2)**: Warnings and errors
+- **`INFO` (3)**: Informational messages, warnings, and errors
+- **`DEBUG` (4)**: All messages including debug information
+
+### Configuration Methods
+
+**1. Configuration File (`.taskmaster/config.json`):**
+
+```json
+{
+  "global": {
+    "logLevel": "info"
+  }
+}
+```
+
+**2. Environment Variables (override config file):**
+
+```bash
+# Set specific log level
+TASK_MASTER_LOG_LEVEL=INFO
+# or
+TM_LOG_LEVEL=INFO
+
+# Completely silence all output
+TASK_MASTER_SILENT=true
+# or
+TM_SILENT=true
+
+# Disable colors
+TASK_MASTER_NO_COLOR=true
+# or
+NO_COLOR=true
+```
+
+### MCP Mode
+
+When running as an MCP server (Model Context Protocol), Task Master automatically enables silent mode to prevent interference with MCP communication:
+
+```bash
+# These environment variables automatically enable silent mode
+MCP_MODE=true
+TASK_MASTER_MCP=true
+```
+
+### Default Behavior
+
+By default, Task Master operates in **silent mode** to provide clean CLI output. This means:
+
+- No debug, info, warning, or error messages are displayed during normal operation
+- Only essential command output (like task lists, task details) is shown
+- You can enable logging by setting `logLevel` in your config or using environment variables
+
 ## Tagged Task Lists Configuration (v0.17+)
 Taskmaster includes a tagged task lists system for multi-context task management.
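The logging rules documented in the hunk above (numeric levels, environment variables overriding the config file, and MCP mode forcing silence) amount to a simple resolution order. The following is a minimal illustrative sketch only; `LOG_LEVELS` and `resolveLogLevel` are hypothetical names, not Task Master's actual implementation.

```js
// Hypothetical sketch of log-level resolution based on the documented rules above.
// Names (LOG_LEVELS, resolveLogLevel) are illustrative, not the project's real API.
const LOG_LEVELS = { SILENT: 0, ERROR: 1, WARN: 2, INFO: 3, DEBUG: 4 };

function resolveLogLevel(env = process.env, config = {}) {
	// MCP mode and explicit silencing win over everything else.
	if (env.MCP_MODE === 'true' || env.TASK_MASTER_MCP === 'true') return LOG_LEVELS.SILENT;
	if (env.TASK_MASTER_SILENT === 'true' || env.TM_SILENT === 'true') return LOG_LEVELS.SILENT;

	// Environment variables override the config file.
	const fromEnv = env.TASK_MASTER_LOG_LEVEL || env.TM_LOG_LEVEL;
	if (fromEnv && fromEnv.toUpperCase() in LOG_LEVELS) return LOG_LEVELS[fromEnv.toUpperCase()];

	// Fall back to .taskmaster/config.json, then to the silent default.
	const fromConfig = (config.global && config.global.logLevel) || 'silent';
	return LOG_LEVELS[fromConfig.toUpperCase()] ?? LOG_LEVELS.SILENT;
}

// A message is emitted only when its level is at or below the resolved threshold.
const shouldLog = (msgLevel, threshold) => LOG_LEVELS[msgLevel] <= threshold;
```

Under this reading, setting `TASK_MASTER_LOG_LEVEL=INFO` would enable info, warn, and error output even while `.taskmaster/config.json` still says `"logLevel": "silent"`.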


@@ -1,4 +1,4 @@
-# Available Models as of October 18, 2025
+# Available Models as of October 5, 2025
 
 ## Main Models
@@ -8,11 +8,8 @@
 | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
 | anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
 | anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
-| anthropic | claude-sonnet-4-5-20250929 | 0.73 | 3 | 15 |
-| anthropic | claude-haiku-4-5-20251001 | 0.45 | 1 | 5 |
 | claude-code | opus | 0.725 | 0 | 0 |
 | claude-code | sonnet | 0.727 | 0 | 0 |
-| claude-code | haiku | 0.45 | 0 | 0 |
 | codex-cli | gpt-5 | 0.749 | 0 | 0 |
 | codex-cli | gpt-5-codex | 0.749 | 0 | 0 |
 | mcp | mcp-sampling | — | 0 | 0 |
@@ -105,7 +102,6 @@
 | ----------- | -------------------------------------------- | --------- | ---------- | ----------- |
 | claude-code | opus | 0.725 | 0 | 0 |
 | claude-code | sonnet | 0.727 | 0 | 0 |
-| claude-code | haiku | 0.45 | 0 | 0 |
 | codex-cli | gpt-5 | 0.749 | 0 | 0 |
 | codex-cli | gpt-5-codex | 0.749 | 0 | 0 |
 | mcp | mcp-sampling | — | 0 | 0 |
@@ -146,11 +142,8 @@
 | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
 | anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
 | anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
-| anthropic | claude-sonnet-4-5-20250929 | 0.73 | 3 | 15 |
-| anthropic | claude-haiku-4-5-20251001 | 0.45 | 1 | 5 |
 | claude-code | opus | 0.725 | 0 | 0 |
 | claude-code | sonnet | 0.727 | 0 | 0 |
-| claude-code | haiku | 0.45 | 0 | 0 |
 | codex-cli | gpt-5 | 0.749 | 0 | 0 |
 | codex-cli | gpt-5-codex | 0.749 | 0 | 0 |
 | mcp | mcp-sampling | — | 0 | 0 |


@@ -78,7 +78,7 @@ function log(level, ...args) {
 		// is responsible for directing logs correctly (e.g., to stderr)
 		// during tool execution without upsetting the client connection.
 		// Logs outside of tool execution (like startup) will go to stdout.
-		console.error(prefix, ...coloredArgs);
+		console.log(prefix, ...coloredArgs);
 	}
 }
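This change keeps logs emitted outside of tool execution on stdout via `console.log`, while relying on the MCP logger object to send logs to stderr during tool execution so the client connection stays clean. A minimal sketch of that split, assuming a hypothetical `mcpLog` handle supplied by the MCP framework (not the module's actual variable), might look like this:

```js
// Illustrative only: routing log output depending on execution context.
// `mcpLog` stands in for whatever logger an MCP tool call provides; it is a
// hypothetical name, not the project's real API.
function routeLog(prefix, args, mcpLog = null) {
	if (mcpLog) {
		// During MCP tool execution the provided logger decides the destination
		// (typically stderr), so stdout stays free for protocol messages.
		mcpLog.info(`${prefix} ${args.join(' ')}`);
	} else {
		// Outside tool execution (e.g., CLI startup) plain stdout is fine.
		console.log(prefix, ...args);
	}
}
```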

output.txt (new file, 41 additions): file diff suppressed because one or more lines are too long.


@@ -307,20 +307,6 @@ function validateProviderModelCombination(providerName, modelId) {
 	);
 }
 
-/**
- * Gets the list of supported model IDs for a given provider from supported-models.json
- * @param {string} providerName - The name of the provider (e.g., 'claude-code', 'anthropic')
- * @returns {string[]} Array of supported model IDs, or empty array if provider not found
- */
-export function getSupportedModelsForProvider(providerName) {
-	if (!MODEL_MAP[providerName]) {
-		return [];
-	}
-	return MODEL_MAP[providerName]
-		.filter((model) => model.supported !== false)
-		.map((model) => model.id);
-}
-
 /**
  * Validates Claude Code AI provider custom settings
  * @param {object} settings The settings to validate


@@ -43,28 +43,6 @@
 				"allowed_roles": ["main", "fallback"],
 				"max_tokens": 8192,
 				"supported": true
-			},
-			{
-				"id": "claude-sonnet-4-5-20250929",
-				"swe_score": 0.73,
-				"cost_per_1m_tokens": {
-					"input": 3.0,
-					"output": 15.0
-				},
-				"allowed_roles": ["main", "fallback"],
-				"max_tokens": 64000,
-				"supported": true
-			},
-			{
-				"id": "claude-haiku-4-5-20251001",
-				"swe_score": 0.45,
-				"cost_per_1m_tokens": {
-					"input": 1.0,
-					"output": 5.0
-				},
-				"allowed_roles": ["main", "fallback"],
-				"max_tokens": 200000,
-				"supported": true
 			}
 		],
 	"claude-code": [
@@ -89,17 +67,6 @@
 				"allowed_roles": ["main", "fallback", "research"],
 				"max_tokens": 64000,
 				"supported": true
-			},
-			{
-				"id": "haiku",
-				"swe_score": 0.45,
-				"cost_per_1m_tokens": {
-					"input": 0,
-					"output": 0
-				},
-				"allowed_roles": ["main", "fallback", "research"],
-				"max_tokens": 200000,
-				"supported": true
 			}
 		],
 	"codex-cli": [


@@ -12,10 +12,7 @@
 import { createClaudeCode } from 'ai-sdk-provider-claude-code';
 import { BaseAIProvider } from './base-provider.js';
-import {
-	getClaudeCodeSettingsForCommand,
-	getSupportedModelsForProvider
-} from '../../scripts/modules/config-manager.js';
+import { getClaudeCodeSettingsForCommand } from '../../scripts/modules/config-manager.js';
 import { execSync } from 'child_process';
 import { log } from '../../scripts/modules/utils.js';
@@ -27,24 +24,14 @@ let _claudeCliAvailable = null;
  *
  * Features:
  * - No API key required (uses local Claude Code CLI)
- * - Supported models loaded from supported-models.json
+ * - Supports 'sonnet' and 'opus' models
  * - Command-specific configuration support
  */
 export class ClaudeCodeProvider extends BaseAIProvider {
 	constructor() {
 		super();
 		this.name = 'Claude Code';
-		// Load supported models from supported-models.json
-		this.supportedModels = getSupportedModelsForProvider('claude-code');
-		// Validate that models were loaded successfully
-		if (this.supportedModels.length === 0) {
-			log(
-				'warn',
-				'No supported models found for claude-code provider. Check supported-models.json configuration.'
-			);
-		}
+		this.supportedModels = ['sonnet', 'opus'];
 		// Claude Code requires explicit JSON schema mode
 		this.needsExplicitJsonSchema = true;
 		// Claude Code does not support temperature parameter

@@ -10,10 +10,7 @@ import { createCodexCli } from 'ai-sdk-provider-codex-cli';
 import { BaseAIProvider } from './base-provider.js';
 import { execSync } from 'child_process';
 import { log } from '../../scripts/modules/utils.js';
-import {
-	getCodexCliSettingsForCommand,
-	getSupportedModelsForProvider
-} from '../../scripts/modules/config-manager.js';
+import { getCodexCliSettingsForCommand } from '../../scripts/modules/config-manager.js';
 
 export class CodexCliProvider extends BaseAIProvider {
 	constructor() {
@@ -23,17 +20,8 @@ export class CodexCliProvider extends BaseAIProvider {
 		this.needsExplicitJsonSchema = false;
 		// Codex CLI does not support temperature parameter
 		this.supportsTemperature = false;
-		// Load supported models from supported-models.json
-		this.supportedModels = getSupportedModelsForProvider('codex-cli');
-		// Validate that models were loaded successfully
-		if (this.supportedModels.length === 0) {
-			log(
-				'warn',
-				'No supported models found for codex-cli provider. Check supported-models.json configuration.'
-			);
-		}
+		// Restrict to supported models for OAuth subscription usage
+		this.supportedModels = ['gpt-5', 'gpt-5-codex'];
 		// CLI availability check cache
 		this._codexCliChecked = false;
 		this._codexCliAvailable = null;


@@ -43,9 +43,9 @@ describe('Claude Code Error Handling', () => {
 			// These should work even if CLI is not available
 			expect(provider.name).toBe('Claude Code');
-			expect(provider.getSupportedModels()).toEqual(['opus', 'sonnet', 'haiku']);
+			expect(provider.getSupportedModels()).toEqual(['sonnet', 'opus']);
 			expect(provider.isModelSupported('sonnet')).toBe(true);
-			expect(provider.isModelSupported('haiku')).toBe(true);
+			expect(provider.isModelSupported('haiku')).toBe(false);
 			expect(provider.isRequiredApiKey()).toBe(false);
 			expect(() => provider.validateAuth()).not.toThrow();
 		});


@@ -40,14 +40,14 @@ describe('Claude Code Integration (Optional)', () => {
 		it('should create a working provider instance', () => {
 			const provider = new ClaudeCodeProvider();
 			expect(provider.name).toBe('Claude Code');
-			expect(provider.getSupportedModels()).toEqual(['opus', 'sonnet', 'haiku']);
+			expect(provider.getSupportedModels()).toEqual(['sonnet', 'opus']);
 		});
 
 		it('should support model validation', () => {
 			const provider = new ClaudeCodeProvider();
 			expect(provider.isModelSupported('sonnet')).toBe(true);
 			expect(provider.isModelSupported('opus')).toBe(true);
-			expect(provider.isModelSupported('haiku')).toBe(true);
+			expect(provider.isModelSupported('haiku')).toBe(false);
 			expect(provider.isModelSupported('unknown')).toBe(false);
 		});


@@ -28,14 +28,6 @@ jest.unstable_mockModule('../../../src/ai-providers/base-provider.js', () => ({
 	}
 }));
 
-// Mock config getters
-jest.unstable_mockModule('../../../scripts/modules/config-manager.js', () => ({
-	getClaudeCodeSettingsForCommand: jest.fn(() => ({})),
-	getSupportedModelsForProvider: jest.fn(() => ['opus', 'sonnet', 'haiku']),
-	getDebugFlag: jest.fn(() => false),
-	getLogLevel: jest.fn(() => 'info')
-}));
-
 // Import after mocking
 const { ClaudeCodeProvider } = await import(
 	'../../../src/ai-providers/claude-code.js'
@@ -104,13 +96,13 @@ describe('ClaudeCodeProvider', () => {
 	describe('model support', () => {
 		it('should return supported models', () => {
 			const models = provider.getSupportedModels();
-			expect(models).toEqual(['opus', 'sonnet', 'haiku']);
+			expect(models).toEqual(['sonnet', 'opus']);
 		});
 
 		it('should check if model is supported', () => {
 			expect(provider.isModelSupported('sonnet')).toBe(true);
 			expect(provider.isModelSupported('opus')).toBe(true);
-			expect(provider.isModelSupported('haiku')).toBe(true);
+			expect(provider.isModelSupported('haiku')).toBe(false);
 			expect(provider.isModelSupported('unknown')).toBe(false);
 		});
 	});


@@ -20,7 +20,6 @@ jest.unstable_mockModule('ai-sdk-provider-codex-cli', () => ({
 // Mock config getters
 jest.unstable_mockModule('../../../scripts/modules/config-manager.js', () => ({
 	getCodexCliSettingsForCommand: jest.fn(() => ({ allowNpx: true })),
-	getSupportedModelsForProvider: jest.fn(() => ['gpt-5', 'gpt-5-codex']),
 	// Provide commonly imported getters to satisfy other module imports if any
 	getDebugFlag: jest.fn(() => false),
 	getLogLevel: jest.fn(() => 'info')