Compare commits


6 Commits

Author SHA1 Message Date
Ralph Khreish
66f21b22db feat: improve PR and add changeset 2025-07-23 17:13:41 +03:00
Ralph Khreish
784fc65b21 chore: run format 2025-07-23 09:40:55 +03:00
Ralph Khreish
1ad53372a0 chore: run format 2025-07-22 21:52:59 +03:00
Ralph Khreish
e362670aed chore: improve unit tests on kiro rules 2025-07-22 21:52:35 +03:00
Ralph Khreish
8c9889be1a chore: run format 2025-07-22 21:41:46 +03:00
Ralph Khreish
e5440dd884 feat: Add Kiro hooks and configuration for Taskmaster integration
- Introduced multiple Kiro hooks to automate task management workflows, including:
  - Code Change Task Tracker
  - Complexity Analyzer
  - Daily Standup Assistant
  - Git Commit Task Linker
  - Import Cleanup on Delete
  - New File Boilerplate
  - PR Readiness Checker
  - Task Dependency Auto-Progression
  - Test Success Task Completer
- Added .mcp.json configuration for Taskmaster AI integration.
- Updated development workflow documentation to reflect new hook-driven processes and best practices.

This commit enhances the automation capabilities of Taskmaster, streamlining task management and improving developer efficiency.
2025-07-22 21:37:12 +03:00
24 changed files with 240 additions and 805 deletions

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix for tasks not found when using string IDs

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fix tag-specific complexity report detection in expand command
The expand command now correctly finds and uses tag-specific complexity reports (e.g., `task-complexity-report_feature-xyz.json`) when operating in a tag context. Previously, it would always look for the generic `task-complexity-report.json` file due to a default value in the CLI option definition.
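As a rough sketch of the tag-aware lookup described here (the helper name and report directory are assumptions for illustration, not the actual Taskmaster code):

```js
import fs from 'fs';
import path from 'path';

// Hypothetical helper: prefer the tag-specific report, fall back to the generic one.
function resolveComplexityReportPath(reportsDir, tag) {
	if (tag && tag !== 'master') {
		const tagged = path.join(reportsDir, `task-complexity-report_${tag}.json`);
		// e.g. .../task-complexity-report_feature-xyz.json when --tag feature-xyz is active
		if (fs.existsSync(tagged)) return tagged;
	}
	return path.join(reportsDir, 'task-complexity-report.json');
}
```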

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix 'expand --all' and 'show' commands to correctly handle tag contexts for complexity reports and task display.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Clean up remaining automatic task file generation calls

View File

@@ -0,0 +1,24 @@
---
"task-master-ai": minor
---
Add comprehensive Kiro IDE integration with autonomous task management hooks
- **Kiro Profile**: Added full support for Kiro IDE with automatic installation of 7 Taskmaster agent hooks
- **Hook-Driven Workflow**: Introduced natural language automation hooks that eliminate manual task status updates
- **Automatic Hook Installation**: Hooks are now automatically copied to `.kiro/hooks/` when running `task-master rules add kiro`
- **Language-Agnostic Support**: All hooks support multiple programming languages (JS, Python, Go, Rust, Java, etc.)
- **Frontmatter Transformation**: Kiro rules use simplified `inclusion: always` format instead of Cursor's complex frontmatter (see the sketch after this list)
- **Special Rule**: Added `taskmaster_hooks_workflow.md` that guides AI assistants to prefer hook-driven completion
Key hooks included:
- Task Dependency Auto-Progression: Automatically starts tasks when dependencies complete
- Code Change Task Tracker: Updates task progress as you save files
- Test Success Task Completer: Marks tasks done when tests pass
- Daily Standup Assistant: Provides personalized task status summaries
- PR Readiness Checker: Validates task completion before creating pull requests
- Complexity Analyzer: Auto-expands complex tasks into manageable subtasks
- Git Commit Task Linker: Links commits to tasks for better traceability
This creates a truly autonomous development workflow where task management happens naturally as you code!
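A minimal sketch of the frontmatter transformation mentioned in the list above (hypothetical helper and regex, not the project's actual implementation):

```js
// Replace Cursor's description/globs/alwaysApply front-matter with Kiro's
// simpler `inclusion: always` block, keeping the rule body untouched.
function toKiroSteeringRule(cursorRuleText) {
	const body = cursorRuleText.replace(/^---\n[\s\S]*?\n---\n/, '');
	return `---\ninclusion: always\n---\n\n${body}`;
}
```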

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": patch
---
Fix VSCode profile generation to use correct rule file names (using `.instructions.md` extension instead of `.md`) and front-matter properties (removing the unsupported `alwaysApply` property from instructions files' front-matter).
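A hedged sketch of what the two fixes amount to (illustrative function and data shapes, not the actual profile-generation code):

```js
// Rename the rule file to the `.instructions.md` extension and drop the
// unsupported `alwaysApply` property from its front-matter.
function toVsCodeInstructions(ruleFileName, frontmatter) {
	const fileName = ruleFileName.replace(/\.md$/, '.instructions.md');
	const { alwaysApply, ...supportedFrontmatter } = frontmatter;
	return { fileName, frontmatter: supportedFrontmatter };
}
```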

View File

@@ -1,45 +0,0 @@
# What type of PR is this?
<!-- Check one -->
- [ ] 🐛 Bug fix
- [ ] ✨ Feature
- [ ] 🔌 Integration
- [ ] 📝 Docs
- [ ] 🧹 Refactor
- [ ] Other:
## Description
<!-- What does this PR do? -->
## Related Issues
<!-- Link issues: Fixes #123 -->
## How to Test This
<!-- Quick steps to verify the changes work -->
```bash
# Example commands or steps
```
**Expected result:**
<!-- What should happen? -->
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check` (or `npm run format` to fix)
- [ ] Addressed CodeRabbit comments (if any)
- [ ] Linked related issues (if any)
- [ ] Manually tested the changes
## Changelog Entry
<!-- One line describing the change for users -->
<!-- Example: "Added Kiro IDE integration with automatic task status updates" -->
---
### For Maintainers
- [ ] PR title follows conventional commits
- [ ] Target branch correct
- [ ] Labels added
- [ ] Milestone assigned (if applicable)

View File

@@ -1,39 +0,0 @@
## 🐛 Bug Fix
### 🔍 Bug Description
<!-- Describe the bug -->
### 🔗 Related Issues
<!-- Fixes #123 -->
### ✨ Solution
<!-- How does this PR fix the bug? -->
## How to Test
### Steps that caused the bug:
1.
2.
**Before fix:**
**After fix:**
### Quick verification:
```bash
# Commands to verify the fix
```
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added unit tests (if applicable)
- [ ] Manually verified the fix works
---
### For Maintainers
- [ ] Root cause identified
- [ ] Fix doesn't introduce new issues
- [ ] CI passes

View File

@@ -1,11 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: 🐛 Bug Fix
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=bugfix.md
about: Fix a bug in Task Master
- name: ✨ New Feature
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=feature.md
about: Add a new feature to Task Master
- name: 🔌 New Integration
url: https://github.com/eyaltoledano/claude-task-master/compare/next...HEAD?template=integration.md
about: Add support for a new tool, IDE, or platform

View File

@@ -1,49 +0,0 @@
## ✨ New Feature
### 📋 Feature Description
<!-- Brief description -->
### 🎯 Problem Statement
<!-- What problem does this feature solve? Why is it needed? -->
### 💡 Solution
<!-- How does this feature solve the problem? What's the approach? -->
### 🔗 Related Issues
<!-- Link related issues: Fixes #123, Part of #456 -->
## How to Use It
### Quick Start
```bash
# Basic usage example
```
### Example
<!-- Show a real use case -->
```bash
# Practical example
```
**What you should see:**
<!-- Expected behavior -->
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Added tests for new functionality
- [ ] Manually tested in CLI mode
- [ ] Manually tested in MCP mode (if applicable)
## Changelog Entry
<!-- One-liner for release notes -->
---
### For Maintainers
- [ ] Feature aligns with project vision
- [ ] CIs pass
- [ ] Changeset file exists

View File

@@ -1,53 +0,0 @@
# 🔌 New Integration
## What tool/IDE is being integrated?
<!-- Name and brief description -->
## What can users do with it?
<!-- Key benefits -->
## How to Enable
### Setup
```bash
task-master rules add [name]
# Any other setup steps
```
### Example Usage
<!-- Show it in action -->
```bash
# Real example
```
### Natural Language Hooks (if applicable)
```
"When tests pass, mark task as done"
# Other examples
```
## Contributor Checklist
- [ ] Created changeset: `npm run changeset`
- [ ] Tests pass: `npm test`
- [ ] Format check passes: `npm run format-check`
- [ ] Addressed CodeRabbit comments
- [ ] Integration fully tested with target tool/IDE
- [ ] Error scenarios tested
- [ ] Added integration tests
- [ ] Documentation includes setup guide
- [ ] Examples are working and clear
---
## For Maintainers
- [ ] Integration stability verified
- [ ] Documentation comprehensive
- [ ] Examples working

View File

@@ -16,7 +16,7 @@ jobs:
       - uses: actions/setup-node@v4
         with:
           node-version: 20
-          cache: "npm"
+          cache: 'npm'
       - name: Cache node_modules
         uses: actions/cache@v4
@@ -32,13 +32,10 @@ jobs:
         run: npm ci
         timeout-minutes: 2
-      - name: Enter RC mode (if not already in RC mode)
+      - name: Enter RC mode
         run: |
-          # ensure were in the right pre-mode (tag "rc")
-          if [ ! -f .changeset/pre.json ] \
-            || [ "$(jq -r '.tag' .changeset/pre.json 2>/dev/null || echo '')" != "rc" ]; then
-            npx changeset pre enter rc
-          fi
+          npx changeset pre exit || true
+          npx changeset pre enter rc
       - name: Version RC packages
         run: npx changeset version
@@ -54,9 +51,12 @@
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
+      - name: Exit RC mode
+        run: npx changeset pre exit
       - name: Commit & Push changes
         uses: actions-js/push@master
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           branch: ${{ github.ref }}
-          message: "chore: rc version bump"
+          message: 'chore: rc version bump'

.kiro/steering/test.md (new file, 3 lines)
View File

@@ -0,0 +1,3 @@
Testing rules that you can help me improve to see how it works<!------------------------------------------------------------------------------------
Add Rules to this file or a short description and have Kiro refine them for you:
------------------------------------------------------------------------------------->

View File

@@ -1,91 +1,5 @@
# task-master-ai # task-master-ai
## 0.22.0
### Minor Changes
- [#1043](https://github.com/eyaltoledano/claude-task-master/pull/1043) [`dc44ed9`](https://github.com/eyaltoledano/claude-task-master/commit/dc44ed9de8a57aca5d39d3a87565568bd0a82068) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Prompt to generate a complexity report when it is missing
- [#1032](https://github.com/eyaltoledano/claude-task-master/pull/1032) [`4423119`](https://github.com/eyaltoledano/claude-task-master/commit/4423119a5ec53958c9dffa8bf564da8be7a2827d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add comprehensive Kiro IDE integration with autonomous task management hooks
- **Kiro Profile**: Added full support for Kiro IDE with automatic installation of 7 Taskmaster agent hooks
- **Hook-Driven Workflow**: Introduced natural language automation hooks that eliminate manual task status updates
- **Automatic Hook Installation**: Hooks are now automatically copied to `.kiro/hooks/` when running `task-master rules add kiro`
- **Language-Agnostic Support**: All hooks support multiple programming languages (JS, Python, Go, Rust, Java, etc.)
- **Frontmatter Transformation**: Kiro rules use simplified `inclusion: always` format instead of Cursor's complex frontmatter
- **Special Rule**: Added `taskmaster_hooks_workflow.md` that guides AI assistants to prefer hook-driven completion
Key hooks included:
- Task Dependency Auto-Progression: Automatically starts tasks when dependencies complete
- Code Change Task Tracker: Updates task progress as you save files
- Test Success Task Completer: Marks tasks done when tests pass
- Daily Standup Assistant: Provides personalized task status summaries
- PR Readiness Checker: Validates task completion before creating pull requests
- Complexity Analyzer: Auto-expands complex tasks into manageable subtasks
- Git Commit Task Linker: Links commits to tasks for better traceability
This creates a truly autonomous development workflow where task management happens naturally as you code!
### Patch Changes
- [#1033](https://github.com/eyaltoledano/claude-task-master/pull/1033) [`7b90568`](https://github.com/eyaltoledano/claude-task-master/commit/7b9056832653464f934c91c22997077065d738c4) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix compatibility with @google/gemini-cli-core v0.1.12+ by updating ai-sdk-provider-gemini-cli to v0.1.1.
- [#1038](https://github.com/eyaltoledano/claude-task-master/pull/1038) [`77cc5e4`](https://github.com/eyaltoledano/claude-task-master/commit/77cc5e4537397642f2664f61940a101433ee6fb4) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix 'expand --all' and 'show' commands to correctly handle tag contexts for complexity reports and task display.
- [#1025](https://github.com/eyaltoledano/claude-task-master/pull/1025) [`8781794`](https://github.com/eyaltoledano/claude-task-master/commit/8781794c56d454697fc92c88a3925982d6b81205) Thanks [@joedanz](https://github.com/joedanz)! - Clean up remaining automatic task file generation calls
- [#1035](https://github.com/eyaltoledano/claude-task-master/pull/1035) [`fb7d588`](https://github.com/eyaltoledano/claude-task-master/commit/fb7d588137e8c53b0d0f54bd1dd8d387648583ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens limits for OpenRouter and Groq models
- Add special handling in config-manager.js for custom OpenRouter models to use a conservative default of 32,768 max_tokens
- Update qwen/qwen-turbo model max_tokens from 1,000,000 to 32,768 to match OpenRouter's actual limits
- Fix moonshotai/kimi-k2-instruct max_tokens to 16,384 to match Groq's actual limit (fixes #1028)
- This prevents "maximum context length exceeded" errors when using OpenRouter models not in our supported models list
- [#1027](https://github.com/eyaltoledano/claude-task-master/pull/1027) [`6ae66b2`](https://github.com/eyaltoledano/claude-task-master/commit/6ae66b2afbfe911340fa25e0236c3db83deaa7eb) Thanks [@andreswebs](https://github.com/andreswebs)! - Fix VSCode profile generation to use correct rule file names (using `.instructions.md` extension instead of `.md`) and front-matter properties (removing the unsupported `alwaysApply` property from instructions files' front-matter).
## 0.22.0-rc.1
### Minor Changes
- [#1043](https://github.com/eyaltoledano/claude-task-master/pull/1043) [`dc44ed9`](https://github.com/eyaltoledano/claude-task-master/commit/dc44ed9de8a57aca5d39d3a87565568bd0a82068) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Prompt to generate a complexity report when it is missing
## 0.22.0-rc.0
### Minor Changes
- [#1032](https://github.com/eyaltoledano/claude-task-master/pull/1032) [`4423119`](https://github.com/eyaltoledano/claude-task-master/commit/4423119a5ec53958c9dffa8bf564da8be7a2827d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add comprehensive Kiro IDE integration with autonomous task management hooks
- **Kiro Profile**: Added full support for Kiro IDE with automatic installation of 7 Taskmaster agent hooks
- **Hook-Driven Workflow**: Introduced natural language automation hooks that eliminate manual task status updates
- **Automatic Hook Installation**: Hooks are now automatically copied to `.kiro/hooks/` when running `task-master rules add kiro`
- **Language-Agnostic Support**: All hooks support multiple programming languages (JS, Python, Go, Rust, Java, etc.)
- **Frontmatter Transformation**: Kiro rules use simplified `inclusion: always` format instead of Cursor's complex frontmatter
- **Special Rule**: Added `taskmaster_hooks_workflow.md` that guides AI assistants to prefer hook-driven completion
Key hooks included:
- Task Dependency Auto-Progression: Automatically starts tasks when dependencies complete
- Code Change Task Tracker: Updates task progress as you save files
- Test Success Task Completer: Marks tasks done when tests pass
- Daily Standup Assistant: Provides personalized task status summaries
- PR Readiness Checker: Validates task completion before creating pull requests
- Complexity Analyzer: Auto-expands complex tasks into manageable subtasks
- Git Commit Task Linker: Links commits to tasks for better traceability
This creates a truly autonomous development workflow where task management happens naturally as you code!
### Patch Changes
- [#1033](https://github.com/eyaltoledano/claude-task-master/pull/1033) [`7b90568`](https://github.com/eyaltoledano/claude-task-master/commit/7b9056832653464f934c91c22997077065d738c4) Thanks [@ben-vargas](https://github.com/ben-vargas)! - Fix compatibility with @google/gemini-cli-core v0.1.12+ by updating ai-sdk-provider-gemini-cli to v0.1.1.
- [#1038](https://github.com/eyaltoledano/claude-task-master/pull/1038) [`77cc5e4`](https://github.com/eyaltoledano/claude-task-master/commit/77cc5e4537397642f2664f61940a101433ee6fb4) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix 'expand --all' and 'show' commands to correctly handle tag contexts for complexity reports and task display.
- [#1025](https://github.com/eyaltoledano/claude-task-master/pull/1025) [`8781794`](https://github.com/eyaltoledano/claude-task-master/commit/8781794c56d454697fc92c88a3925982d6b81205) Thanks [@joedanz](https://github.com/joedanz)! - Clean up remaining automatic task file generation calls
- [#1035](https://github.com/eyaltoledano/claude-task-master/pull/1035) [`fb7d588`](https://github.com/eyaltoledano/claude-task-master/commit/fb7d588137e8c53b0d0f54bd1dd8d387648583ee) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix max_tokens limits for OpenRouter and Groq models
- Add special handling in config-manager.js for custom OpenRouter models to use a conservative default of 32,768 max_tokens
- Update qwen/qwen-turbo model max_tokens from 1,000,000 to 32,768 to match OpenRouter's actual limits
- Fix moonshotai/kimi-k2-instruct max_tokens to 16,384 to match Groq's actual limit (fixes #1028)
- This prevents "maximum context length exceeded" errors when using OpenRouter models not in our supported models list
- [#1027](https://github.com/eyaltoledano/claude-task-master/pull/1027) [`6ae66b2`](https://github.com/eyaltoledano/claude-task-master/commit/6ae66b2afbfe911340fa25e0236c3db83deaa7eb) Thanks [@andreswebs](https://github.com/andreswebs)! - Fix VSCode profile generation to use correct rule file names (using `.instructions.md` extension instead of `.md`) and front-matter properties (removing the unsupported `alwaysApply` property from instructions files' front-matter).
## 0.21.0 ## 0.21.0
### Minor Changes ### Minor Changes

View File

@@ -1,4 +1,4 @@
# Available Models as of July 23, 2025 # Available Models as of July 19, 2025
## Main Models ## Main Models
@@ -48,6 +48,7 @@
| openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 | | openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
| openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 | | openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
| openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 | | openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
| openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
| openrouter | deepseek/deepseek-chat-v3-0324 | — | 0.27 | 1.1 | | openrouter | deepseek/deepseek-chat-v3-0324 | — | 0.27 | 1.1 |
| openrouter | openai/gpt-4.1 | — | 2 | 8 | | openrouter | openai/gpt-4.1 | — | 2 | 8 |
| openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 | | openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 |
@@ -64,9 +65,11 @@
| openrouter | qwen/qwen-max | — | 1.6 | 6.4 | | openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
| openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 | | openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
| openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 | | openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 | | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/devstral-small | — | 0.1 | 0.3 | | openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 | | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
| ollama | devstral:latest | — | 0 | 0 | | ollama | devstral:latest | — | 0 | 0 |
| ollama | qwen3:latest | — | 0 | 0 | | ollama | qwen3:latest | — | 0 | 0 |
| ollama | qwen3:14b | — | 0 | 0 | | ollama | qwen3:14b | — | 0 | 0 |
@@ -155,6 +158,7 @@
| openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 | | openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
| openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 | | openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
| openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 | | openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
| openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
| openrouter | openai/gpt-4.1 | — | 2 | 8 | | openrouter | openai/gpt-4.1 | — | 2 | 8 |
| openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 | | openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 |
| openrouter | openai/gpt-4.1-nano | — | 0.1 | 0.4 | | openrouter | openai/gpt-4.1-nano | — | 0.1 | 0.4 |
@@ -170,8 +174,10 @@
| openrouter | qwen/qwen-max | — | 1.6 | 6.4 | | openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
| openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 | | openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
| openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 | | openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
| openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 | | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
| openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 | | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
| openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
| ollama | devstral:latest | — | 0 | 0 | | ollama | devstral:latest | — | 0 | 0 |
| ollama | qwen3:latest | — | 0 | 0 | | ollama | qwen3:latest | — | 0 | 0 |
| ollama | qwen3:14b | — | 0 | 0 | | ollama | qwen3:14b | — | 0 | 0 |
@@ -190,11 +196,3 @@
| bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 | | bedrock | us.anthropic.claude-3-5-haiku-20241022-v1:0 | 0.4 | 0.8 | 4 |
| bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 | | bedrock | us.anthropic.claude-opus-4-20250514-v1:0 | 0.725 | 15 | 75 |
| bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 | | bedrock | us.anthropic.claude-sonnet-4-20250514-v1:0 | 0.727 | 3 | 15 |
## Unsupported Models
| Provider | Model Name | Reason |
| ---------- | --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| openrouter | deepseek/deepseek-chat-v3-0324:free | Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use. |
| openrouter | mistralai/mistral-small-3.1-24b-instruct:free | Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use. |
| openrouter | thudm/glm-4-32b:free | Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use. |

View File

@@ -47,20 +47,6 @@ function generateMarkdownTable(title, models) {
 	return table;
 }
 
-function generateUnsupportedTable(models) {
-	if (!models || models.length === 0) {
-		return '## Unsupported Models\n\nNo unsupported models found.\n\n';
-	}
-	let table = '## Unsupported Models\n\n';
-	table += '| Provider | Model Name | Reason |\n';
-	table += '|---|---|---|\n';
-	models.forEach((model) => {
-		table += `| ${model.provider} | ${model.modelName} | ${model.reason || '—'} |\n`;
-	});
-	table += '\n';
-	return table;
-}
-
 function main() {
 	try {
 		const correctSupportedModelsPath = path.join(
@@ -82,46 +68,31 @@ function main() {
 		const mainModels = [];
 		const researchModels = [];
 		const fallbackModels = [];
-		const unsupportedModels = [];
 
 		for (const provider in supportedModels) {
 			if (Object.hasOwnProperty.call(supportedModels, provider)) {
 				const models = supportedModels[provider];
 				models.forEach((model) => {
-					const isSupported = model.supported !== false; // default to true if missing
-					if (isSupported) {
-						const modelEntry = {
-							provider: provider,
-							modelName: model.id,
-							sweScore: model.swe_score,
-							inputCost: model.cost_per_1m_tokens
-								? model.cost_per_1m_tokens.input
-								: null,
-							outputCost: model.cost_per_1m_tokens
-								? model.cost_per_1m_tokens.output
-								: null
-						};
-						if (model.allowed_roles && model.allowed_roles.includes('main')) {
-							mainModels.push(modelEntry);
-						}
-						if (
-							model.allowed_roles &&
-							model.allowed_roles.includes('research')
-						) {
-							researchModels.push(modelEntry);
-						}
-						if (
-							model.allowed_roles &&
-							model.allowed_roles.includes('fallback')
-						) {
-							fallbackModels.push(modelEntry);
-						}
-					} else {
-						unsupportedModels.push({
-							provider: provider,
-							modelName: model.id,
-							reason: model.reason || 'Not specified'
-						});
+					const modelEntry = {
+						provider: provider,
+						modelName: model.id,
+						sweScore: model.swe_score,
+						inputCost: model.cost_per_1m_tokens
+							? model.cost_per_1m_tokens.input
+							: null,
+						outputCost: model.cost_per_1m_tokens
+							? model.cost_per_1m_tokens.output
+							: null
+					};
+
+					if (model.allowed_roles.includes('main')) {
+						mainModels.push(modelEntry);
+					}
+					if (model.allowed_roles.includes('research')) {
+						researchModels.push(modelEntry);
+					}
+					if (model.allowed_roles.includes('fallback')) {
+						fallbackModels.push(modelEntry);
 					}
 				});
 			}
@@ -148,7 +119,6 @@ function main() {
 		markdownContent += generateMarkdownTable('Main Models', mainModels);
 		markdownContent += generateMarkdownTable('Research Models', researchModels);
 		markdownContent += generateMarkdownTable('Fallback Models', fallbackModels);
-		markdownContent += generateUnsupportedTable(unsupportedModels);
 
 		fs.writeFileSync(correctOutputMarkdownPath, markdownContent, 'utf8');
 		console.log(`Successfully updated ${correctOutputMarkdownPath}`);

package-lock.json (generated, 100 lines changed)
View File

@@ -1,12 +1,12 @@
{ {
"name": "task-master-ai", "name": "task-master-ai",
"version": "0.22.0", "version": "0.21.0",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "task-master-ai", "name": "task-master-ai",
"version": "0.22.0", "version": "0.21.0",
"license": "MIT WITH Commons-Clause", "license": "MIT WITH Commons-Clause",
"workspaces": [ "workspaces": [
"apps/*", "apps/*",
@@ -81,7 +81,7 @@
"optionalDependencies": { "optionalDependencies": {
"@anthropic-ai/claude-code": "^1.0.25", "@anthropic-ai/claude-code": "^1.0.25",
"@biomejs/cli-linux-x64": "^1.9.4", "@biomejs/cli-linux-x64": "^1.9.4",
"ai-sdk-provider-gemini-cli": "^0.1.1" "ai-sdk-provider-gemini-cli": "^0.0.4"
} }
}, },
"apps/extension": { "apps/extension": {
@@ -2065,12 +2065,12 @@
} }
}, },
"node_modules/@google/gemini-cli-core": { "node_modules/@google/gemini-cli-core": {
"version": "0.1.13", "version": "0.1.9",
"resolved": "https://registry.npmjs.org/@google/gemini-cli-core/-/gemini-cli-core-0.1.13.tgz", "resolved": "https://registry.npmjs.org/@google/gemini-cli-core/-/gemini-cli-core-0.1.9.tgz",
"integrity": "sha512-Vx3CbRpLJiGs/sj4SXlGH2ALKyON5skV/p+SCAoRuS6yRsANS1+diEeXbp6jlWT2TTiGoa8+GolqeNIU7wbN8w==", "integrity": "sha512-NFmu0qivppBZ3JT6to0A2+tEtcvWcWuhbfyTz42Wm2AoAtl941lTbcd/TiBryK0yWz3WCkqukuDxl+L7axLpvA==",
"optional": true, "optional": true,
"dependencies": { "dependencies": {
"@google/genai": "1.9.0", "@google/genai": "^1.4.0",
"@modelcontextprotocol/sdk": "^1.11.0", "@modelcontextprotocol/sdk": "^1.11.0",
"@opentelemetry/api": "^1.9.0", "@opentelemetry/api": "^1.9.0",
"@opentelemetry/exporter-logs-otlp-grpc": "^0.52.0", "@opentelemetry/exporter-logs-otlp-grpc": "^0.52.0",
@@ -2080,46 +2080,23 @@
"@opentelemetry/sdk-node": "^0.52.0", "@opentelemetry/sdk-node": "^0.52.0",
"@types/glob": "^8.1.0", "@types/glob": "^8.1.0",
"@types/html-to-text": "^9.0.4", "@types/html-to-text": "^9.0.4",
"ajv": "^8.17.1",
"diff": "^7.0.0", "diff": "^7.0.0",
"dotenv": "^17.1.0", "dotenv": "^16.6.1",
"gaxios": "^6.1.1",
"glob": "^10.4.5", "glob": "^10.4.5",
"google-auth-library": "^9.11.0", "google-auth-library": "^9.11.0",
"html-to-text": "^9.0.5", "html-to-text": "^9.0.5",
"https-proxy-agent": "^7.0.6",
"ignore": "^7.0.0", "ignore": "^7.0.0",
"micromatch": "^4.0.8", "micromatch": "^4.0.8",
"open": "^10.1.2", "open": "^10.1.2",
"shell-quote": "^1.8.3", "shell-quote": "^1.8.2",
"simple-git": "^3.28.0", "simple-git": "^3.28.0",
"strip-ansi": "^7.1.0", "strip-ansi": "^7.1.0",
"undici": "^7.10.0", "undici": "^7.10.0",
"ws": "^8.18.0" "ws": "^8.18.0"
}, },
"engines": { "engines": {
"node": ">=20" "node": ">=18"
}
},
"node_modules/@google/gemini-cli-core/node_modules/@google/genai": {
"version": "1.9.0",
"resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.9.0.tgz",
"integrity": "sha512-w9P93OXKPMs9H1mfAx9+p3zJqQGrWBGdvK/SVc7cLZEXNHr/3+vW2eif7ZShA6wU24rNLn9z9MK2vQFUvNRI2Q==",
"license": "Apache-2.0",
"optional": true,
"dependencies": {
"google-auth-library": "^9.14.2",
"ws": "^8.18.0"
},
"engines": {
"node": ">=20.0.0"
},
"peerDependencies": {
"@modelcontextprotocol/sdk": "^1.11.0"
},
"peerDependenciesMeta": {
"@modelcontextprotocol/sdk": {
"optional": true
}
} }
}, },
"node_modules/@google/gemini-cli-core/node_modules/ansi-regex": { "node_modules/@google/gemini-cli-core/node_modules/ansi-regex": {
@@ -2145,19 +2122,6 @@
"balanced-match": "^1.0.0" "balanced-match": "^1.0.0"
} }
}, },
"node_modules/@google/gemini-cli-core/node_modules/dotenv": {
"version": "17.2.0",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.2.0.tgz",
"integrity": "sha512-Q4sgBT60gzd0BB0lSyYD3xM4YxrXA9y4uBDof1JNYGzOXrQdQ6yX+7XIAqoFOGQFOTK1D3Hts5OllpxMDZFONQ==",
"license": "BSD-2-Clause",
"optional": true,
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://dotenvx.com"
}
},
"node_modules/@google/gemini-cli-core/node_modules/glob": { "node_modules/@google/gemini-cli-core/node_modules/glob": {
"version": "10.4.5", "version": "10.4.5",
"resolved": "https://registry.npmjs.org/glob/-/glob-10.4.5.tgz", "resolved": "https://registry.npmjs.org/glob/-/glob-10.4.5.tgz",
@@ -2222,14 +2186,16 @@
} }
}, },
"node_modules/@google/genai": { "node_modules/@google/genai": {
"version": "1.10.0", "version": "1.8.0",
"resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.10.0.tgz", "resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.8.0.tgz",
"integrity": "sha512-PR4tLuiIFMrpAiiCko2Z16ydikFsPF1c5TBfI64hlZcv3xBEApSCceLuDYu1pNMq2SkNh4r66J4AG+ZexBnMLw==", "integrity": "sha512-n3KiMFesQCy2R9iSdBIuJ0JWYQ1HZBJJkmt4PPZMGZKvlgHhBAGw1kUMyX+vsAIzprN3lK45DI755lm70wPOOg==",
"license": "Apache-2.0", "license": "Apache-2.0",
"optional": true, "optional": true,
"dependencies": { "dependencies": {
"google-auth-library": "^9.14.2", "google-auth-library": "^9.14.2",
"ws": "^8.18.0" "ws": "^8.18.0",
"zod": "^3.22.4",
"zod-to-json-schema": "^3.22.4"
}, },
"engines": { "engines": {
"node": ">=20.0.0" "node": ">=20.0.0"
@@ -5457,15 +5423,15 @@
} }
}, },
"node_modules/ai-sdk-provider-gemini-cli": { "node_modules/ai-sdk-provider-gemini-cli": {
"version": "0.1.1", "version": "0.0.4",
"resolved": "https://registry.npmjs.org/ai-sdk-provider-gemini-cli/-/ai-sdk-provider-gemini-cli-0.1.1.tgz", "resolved": "https://registry.npmjs.org/ai-sdk-provider-gemini-cli/-/ai-sdk-provider-gemini-cli-0.0.4.tgz",
"integrity": "sha512-fvX3n9jTt8JaTyc+qDv5Og0H4NQMpS6B1VdaTT71AN2F+3u2Bz9/OSd7ATokrV2Rmv+ZlEnUCmJnke58zHXUSQ==", "integrity": "sha512-rXxNM/+wVHL8Syf/SjyoVmFJgTMwLnVSPPhqkLzbP6JKBvp81qZfkBFQiI9l6VMF1ctb6L+iSdVNd0/G1pTVZg==",
"license": "MIT", "license": "MIT",
"optional": true, "optional": true,
"dependencies": { "dependencies": {
"@ai-sdk/provider": "^1.1.3", "@ai-sdk/provider": "^1.1.3",
"@ai-sdk/provider-utils": "^2.2.8", "@ai-sdk/provider-utils": "^2.2.8",
"@google/gemini-cli-core": "^0.1.13", "@google/gemini-cli-core": "^0.1.4",
"@google/genai": "^1.7.0", "@google/genai": "^1.7.0",
"google-auth-library": "^9.0.0", "google-auth-library": "^9.0.0",
"zod": "^3.23.8", "zod": "^3.23.8",
@@ -11270,16 +11236,16 @@
} }
}, },
"node_modules/open": { "node_modules/open": {
"version": "10.2.0", "version": "10.1.2",
"resolved": "https://registry.npmjs.org/open/-/open-10.2.0.tgz", "resolved": "https://registry.npmjs.org/open/-/open-10.1.2.tgz",
"integrity": "sha512-YgBpdJHPyQ2UE5x+hlSXcnejzAvD0b22U2OuAP+8OnlJT+PjWPxtgmGqKKc+RgTM63U9gN0YzrYc71R2WT/hTA==", "integrity": "sha512-cxN6aIDPz6rm8hbebcP7vrQNhvRcveZoJU72Y7vskh4oIm+BZwBECnx5nTmrlres1Qapvx27Qo1Auukpf8PKXw==",
"license": "MIT", "license": "MIT",
"optional": true, "optional": true,
"dependencies": { "dependencies": {
"default-browser": "^5.2.1", "default-browser": "^5.2.1",
"define-lazy-prop": "^3.0.0", "define-lazy-prop": "^3.0.0",
"is-inside-container": "^1.0.0", "is-inside-container": "^1.0.0",
"wsl-utils": "^0.1.0" "is-wsl": "^3.1.0"
}, },
"engines": { "engines": {
"node": ">=18" "node": ">=18"
@@ -13656,22 +13622,6 @@
} }
} }
}, },
"node_modules/wsl-utils": {
"version": "0.1.0",
"resolved": "https://registry.npmjs.org/wsl-utils/-/wsl-utils-0.1.0.tgz",
"integrity": "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw==",
"license": "MIT",
"optional": true,
"dependencies": {
"is-wsl": "^3.1.0"
},
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/xsschema": { "node_modules/xsschema": {
"version": "0.3.0-beta.8", "version": "0.3.0-beta.8",
"resolved": "https://registry.npmjs.org/xsschema/-/xsschema-0.3.0-beta.8.tgz", "resolved": "https://registry.npmjs.org/xsschema/-/xsschema-0.3.0-beta.8.tgz",

View File

@@ -1,6 +1,6 @@
{ {
"name": "task-master-ai", "name": "task-master-ai",
"version": "0.22.0", "version": "0.21.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.", "description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js", "main": "index.js",
"type": "module", "type": "module",
@@ -84,8 +84,8 @@
}, },
"optionalDependencies": { "optionalDependencies": {
"@anthropic-ai/claude-code": "^1.0.25", "@anthropic-ai/claude-code": "^1.0.25",
"@biomejs/cli-linux-x64": "^1.9.4", "ai-sdk-provider-gemini-cli": "^0.0.4",
"ai-sdk-provider-gemini-cli": "^0.1.1" "@biomejs/cli-linux-x64": "^1.9.4"
}, },
"engines": { "engines": {
"node": ">=18.0.0" "node": ">=18.0.0"

View File

@@ -1564,8 +1564,8 @@ function registerCommands(programInstance) {
 			) // Allow file override
 			.option(
 				'-cr, --complexity-report <file>',
-				'Path to the complexity report file (use this to specify the complexity report, not --file)'
-				// Removed default value to allow tag-specific auto-detection
+				'Path to the report file',
+				COMPLEXITY_REPORT_FILE
 			)
 			.option('--tag <tag>', 'Specify tag context for task operations')
 			.action(async (options) => {

View File

@@ -584,21 +584,10 @@ function getParametersForRole(role, explicitRoot = null) {
 				);
 			}
 		} else {
-			// Special handling for custom OpenRouter models
-			if (providerName === CUSTOM_PROVIDERS.OPENROUTER) {
-				// Use a conservative default for OpenRouter models not in our list
-				const openrouterDefault = 32768;
-				effectiveMaxTokens = Math.min(roleMaxTokens, openrouterDefault);
-				log(
-					'debug',
-					`Custom OpenRouter model ${modelId} detected. Using conservative max_tokens: ${effectiveMaxTokens}`
-				);
-			} else {
-				log(
-					'debug',
-					`No model definitions found for provider ${providerName} in MODEL_MAP. Using role default maxTokens: ${roleMaxTokens}`
-				);
-			}
+			log(
+				'debug',
+				`No model definitions found for provider ${providerName} in MODEL_MAP. Using role default maxTokens: ${roleMaxTokens}`
+			);
 		}
 	} catch (lookupError) {
 		log(
@@ -783,38 +772,36 @@ function getAvailableModels() {
 	const available = [];
 	for (const [provider, models] of Object.entries(MODEL_MAP)) {
 		if (models.length > 0) {
-			models
-				.filter((modelObj) => Boolean(modelObj.supported))
-				.forEach((modelObj) => {
-					// Basic name generation - can be improved
-					const modelId = modelObj.id;
-					const sweScore = modelObj.swe_score;
-					const cost = modelObj.cost_per_1m_tokens;
-					const allowedRoles = modelObj.allowed_roles || ['main', 'fallback'];
-					const nameParts = modelId
-						.split('-')
-						.map((p) => p.charAt(0).toUpperCase() + p.slice(1));
-					// Handle specific known names better if needed
-					let name = nameParts.join(' ');
-					if (modelId === 'claude-3.5-sonnet-20240620')
-						name = 'Claude 3.5 Sonnet';
-					if (modelId === 'claude-3-7-sonnet-20250219')
-						name = 'Claude 3.7 Sonnet';
-					if (modelId === 'gpt-4o') name = 'GPT-4o';
-					if (modelId === 'gpt-4-turbo') name = 'GPT-4 Turbo';
-					if (modelId === 'sonar-pro') name = 'Perplexity Sonar Pro';
-					if (modelId === 'sonar-mini') name = 'Perplexity Sonar Mini';
-					available.push({
-						id: modelId,
-						name: name,
-						provider: provider,
-						swe_score: sweScore,
-						cost_per_1m_tokens: cost,
-						allowed_roles: allowedRoles,
-						max_tokens: modelObj.max_tokens
-					});
-				});
+			models.forEach((modelObj) => {
+				// Basic name generation - can be improved
+				const modelId = modelObj.id;
+				const sweScore = modelObj.swe_score;
+				const cost = modelObj.cost_per_1m_tokens;
+				const allowedRoles = modelObj.allowed_roles || ['main', 'fallback'];
+				const nameParts = modelId
+					.split('-')
+					.map((p) => p.charAt(0).toUpperCase() + p.slice(1));
+				// Handle specific known names better if needed
+				let name = nameParts.join(' ');
+				if (modelId === 'claude-3.5-sonnet-20240620')
+					name = 'Claude 3.5 Sonnet';
+				if (modelId === 'claude-3-7-sonnet-20250219')
+					name = 'Claude 3.7 Sonnet';
+				if (modelId === 'gpt-4o') name = 'GPT-4o';
+				if (modelId === 'gpt-4-turbo') name = 'GPT-4 Turbo';
+				if (modelId === 'sonar-pro') name = 'Perplexity Sonar Pro';
+				if (modelId === 'sonar-mini') name = 'Perplexity Sonar Mini';
+				available.push({
+					id: modelId,
+					name: name,
+					provider: provider,
+					swe_score: sweScore,
+					cost_per_1m_tokens: cost,
+					allowed_roles: allowedRoles,
+					max_tokens: modelObj.max_tokens
+				});
+			});
 		} else {
 			// For providers with empty lists (like ollama), maybe add a placeholder or skip
 			available.push({

View File

@@ -8,8 +8,7 @@
"output": 15.0 "output": 15.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 64000, "max_tokens": 64000
"supported": true
}, },
{ {
"id": "claude-opus-4-20250514", "id": "claude-opus-4-20250514",
@@ -19,8 +18,7 @@
"output": 75.0 "output": 75.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 32000, "max_tokens": 32000
"supported": true
}, },
{ {
"id": "claude-3-7-sonnet-20250219", "id": "claude-3-7-sonnet-20250219",
@@ -30,8 +28,7 @@
"output": 15.0 "output": 15.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 120000, "max_tokens": 120000
"supported": true
}, },
{ {
"id": "claude-3-5-sonnet-20241022", "id": "claude-3-5-sonnet-20241022",
@@ -41,8 +38,7 @@
"output": 15.0 "output": 15.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 8192, "max_tokens": 8192
"supported": true
} }
], ],
"claude-code": [ "claude-code": [
@@ -54,8 +50,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32000, "max_tokens": 32000
"supported": true
}, },
{ {
"id": "sonnet", "id": "sonnet",
@@ -65,8 +60,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 64000, "max_tokens": 64000
"supported": true
} }
], ],
"mcp": [ "mcp": [
@@ -78,8 +72,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
} }
], ],
"gemini-cli": [ "gemini-cli": [
@@ -91,8 +84,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536, "max_tokens": 65536
"supported": true
}, },
{ {
"id": "gemini-2.5-flash", "id": "gemini-2.5-flash",
@@ -102,8 +94,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536, "max_tokens": 65536
"supported": true
} }
], ],
"openai": [ "openai": [
@@ -115,8 +106,7 @@
"output": 10.0 "output": 10.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 16384, "max_tokens": 16384
"supported": true
}, },
{ {
"id": "o1", "id": "o1",
@@ -125,8 +115,7 @@
"input": 15.0, "input": 15.0,
"output": 60.0 "output": 60.0
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "o3", "id": "o3",
@@ -136,8 +125,7 @@
"output": 8.0 "output": 8.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "o3-mini", "id": "o3-mini",
@@ -147,8 +135,7 @@
"output": 4.4 "output": 4.4
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "o4-mini", "id": "o4-mini",
@@ -157,8 +144,7 @@
"input": 1.1, "input": 1.1,
"output": 4.4 "output": 4.4
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "o1-mini", "id": "o1-mini",
@@ -167,8 +153,7 @@
"input": 1.1, "input": 1.1,
"output": 4.4 "output": 4.4
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "o1-pro", "id": "o1-pro",
@@ -177,8 +162,7 @@
"input": 150.0, "input": 150.0,
"output": 600.0 "output": 600.0
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "gpt-4-5-preview", "id": "gpt-4-5-preview",
@@ -187,8 +171,7 @@
"input": 75.0, "input": 75.0,
"output": 150.0 "output": 150.0
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "gpt-4-1-mini", "id": "gpt-4-1-mini",
@@ -197,8 +180,7 @@
"input": 0.4, "input": 0.4,
"output": 1.6 "output": 1.6
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "gpt-4-1-nano", "id": "gpt-4-1-nano",
@@ -207,8 +189,7 @@
"input": 0.1, "input": 0.1,
"output": 0.4 "output": 0.4
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "gpt-4o-mini", "id": "gpt-4o-mini",
@@ -217,8 +198,7 @@
"input": 0.15, "input": 0.15,
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"]
"supported": true
}, },
{ {
"id": "gpt-4o-search-preview", "id": "gpt-4o-search-preview",
@@ -227,8 +207,7 @@
"input": 2.5, "input": 2.5,
"output": 10.0 "output": 10.0
}, },
"allowed_roles": ["research"], "allowed_roles": ["research"]
"supported": true
}, },
{ {
"id": "gpt-4o-mini-search-preview", "id": "gpt-4o-mini-search-preview",
@@ -237,8 +216,7 @@
"input": 0.15, "input": 0.15,
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["research"], "allowed_roles": ["research"]
"supported": true
} }
], ],
"google": [ "google": [
@@ -247,24 +225,21 @@
"swe_score": 0.638, "swe_score": 0.638,
"cost_per_1m_tokens": null, "cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048000, "max_tokens": 1048000
"supported": true
}, },
{ {
"id": "gemini-2.5-pro-preview-03-25", "id": "gemini-2.5-pro-preview-03-25",
"swe_score": 0.638, "swe_score": 0.638,
"cost_per_1m_tokens": null, "cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048000, "max_tokens": 1048000
"supported": true
}, },
{ {
"id": "gemini-2.5-flash-preview-04-17", "id": "gemini-2.5-flash-preview-04-17",
"swe_score": 0.604, "swe_score": 0.604,
"cost_per_1m_tokens": null, "cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048000, "max_tokens": 1048000
"supported": true
}, },
{ {
"id": "gemini-2.0-flash", "id": "gemini-2.0-flash",
@@ -274,16 +249,14 @@
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048000, "max_tokens": 1048000
"supported": true
}, },
{ {
"id": "gemini-2.0-flash-lite", "id": "gemini-2.0-flash-lite",
"swe_score": 0, "swe_score": 0,
"cost_per_1m_tokens": null, "cost_per_1m_tokens": null,
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048000, "max_tokens": 1048000
"supported": true
} }
], ],
"xai": [ "xai": [
@@ -296,8 +269,7 @@
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072, "max_tokens": 131072
"supported": true
}, },
{ {
"id": "grok-3-fast", "id": "grok-3-fast",
@@ -308,8 +280,7 @@
"output": 25 "output": 25
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072, "max_tokens": 131072
"supported": true
}, },
{ {
"id": "grok-4", "id": "grok-4",
@@ -320,8 +291,7 @@
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 131072, "max_tokens": 131072
"supported": true
} }
], ],
"groq": [ "groq": [
@@ -333,8 +303,7 @@
"output": 3.0 "output": 3.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 16384, "max_tokens": 131072
"supported": true
}, },
{ {
"id": "llama-3.3-70b-versatile", "id": "llama-3.3-70b-versatile",
@@ -344,8 +313,7 @@
"output": 0.79 "output": 0.79
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768, "max_tokens": 32768
"supported": true
}, },
{ {
"id": "llama-3.1-8b-instant", "id": "llama-3.1-8b-instant",
@@ -355,8 +323,7 @@
"output": 0.08 "output": 0.08
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 131072, "max_tokens": 131072
"supported": true
}, },
{ {
"id": "llama-4-scout", "id": "llama-4-scout",
@@ -366,8 +333,7 @@
"output": 0.34 "output": 0.34
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768, "max_tokens": 32768
"supported": true
}, },
{ {
"id": "llama-4-maverick", "id": "llama-4-maverick",
@@ -377,8 +343,7 @@
"output": 0.77 "output": 0.77
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768, "max_tokens": 32768
"supported": true
}, },
{ {
"id": "mixtral-8x7b-32768", "id": "mixtral-8x7b-32768",
@@ -388,8 +353,7 @@
"output": 0.24 "output": 0.24
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 32768, "max_tokens": 32768
"supported": true
}, },
{ {
"id": "qwen-qwq-32b-preview", "id": "qwen-qwq-32b-preview",
@@ -399,8 +363,7 @@
"output": 0.18 "output": 0.18
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 32768, "max_tokens": 32768
"supported": true
}, },
{ {
"id": "deepseek-r1-distill-llama-70b", "id": "deepseek-r1-distill-llama-70b",
@@ -410,8 +373,7 @@
"output": 0.99 "output": 0.99
}, },
"allowed_roles": ["main", "research"], "allowed_roles": ["main", "research"],
"max_tokens": 8192, "max_tokens": 8192
"supported": true
}, },
{ {
"id": "gemma2-9b-it", "id": "gemma2-9b-it",
@@ -421,8 +383,7 @@
"output": 0.2 "output": 0.2
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 8192, "max_tokens": 8192
"supported": true
}, },
{ {
"id": "whisper-large-v3", "id": "whisper-large-v3",
@@ -432,8 +393,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"],
"max_tokens": 0, "max_tokens": 0
"supported": true
} }
], ],
"perplexity": [ "perplexity": [
@@ -445,8 +405,7 @@
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "research"], "allowed_roles": ["main", "research"],
"max_tokens": 8700, "max_tokens": 8700
"supported": true
}, },
{ {
"id": "sonar", "id": "sonar",
@@ -456,8 +415,7 @@
"output": 1 "output": 1
}, },
"allowed_roles": ["research"], "allowed_roles": ["research"],
"max_tokens": 8700, "max_tokens": 8700
"supported": true
}, },
{ {
"id": "deep-research", "id": "deep-research",
@@ -467,8 +425,7 @@
"output": 8 "output": 8
}, },
"allowed_roles": ["research"], "allowed_roles": ["research"],
"max_tokens": 8700, "max_tokens": 8700
"supported": true
}, },
{ {
"id": "sonar-reasoning-pro", "id": "sonar-reasoning-pro",
@@ -478,8 +435,7 @@
"output": 8 "output": 8
}, },
"allowed_roles": ["main", "research", "fallback"], "allowed_roles": ["main", "research", "fallback"],
"max_tokens": 8700, "max_tokens": 8700
"supported": true
}, },
{ {
"id": "sonar-reasoning", "id": "sonar-reasoning",
@@ -489,8 +445,7 @@
"output": 5 "output": 5
}, },
"allowed_roles": ["main", "research", "fallback"], "allowed_roles": ["main", "research", "fallback"],
"max_tokens": 8700, "max_tokens": 8700
"supported": true
} }
], ],
"openrouter": [ "openrouter": [
@@ -502,8 +457,7 @@
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048576, "max_tokens": 1048576
"supported": true
}, },
{ {
"id": "google/gemini-2.5-flash-preview-05-20:thinking", "id": "google/gemini-2.5-flash-preview-05-20:thinking",
@@ -513,8 +467,7 @@
"output": 3.5 "output": 3.5
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048576, "max_tokens": 1048576
"supported": true
}, },
{ {
"id": "google/gemini-2.5-pro-exp-03-25", "id": "google/gemini-2.5-pro-exp-03-25",
@@ -524,8 +477,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1000000, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "deepseek/deepseek-chat-v3-0324:free", "id": "deepseek/deepseek-chat-v3-0324:free",
@@ -535,9 +487,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 163840, "max_tokens": 163840
"supported": false,
"reason": "Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use."
}, },
{ {
"id": "deepseek/deepseek-chat-v3-0324", "id": "deepseek/deepseek-chat-v3-0324",
@@ -547,8 +497,7 @@
"output": 1.1 "output": 1.1
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"],
"max_tokens": 64000, "max_tokens": 64000
"supported": true
}, },
{ {
"id": "openai/gpt-4.1", "id": "openai/gpt-4.1",
@@ -558,8 +507,7 @@
"output": 8 "output": 8
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1000000, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "openai/gpt-4.1-mini", "id": "openai/gpt-4.1-mini",
@@ -569,8 +517,7 @@
"output": 1.6 "output": 1.6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1000000, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "openai/gpt-4.1-nano", "id": "openai/gpt-4.1-nano",
@@ -580,8 +527,7 @@
"output": 0.4 "output": 0.4
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1000000, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "openai/o3", "id": "openai/o3",
@@ -591,8 +537,7 @@
"output": 40 "output": 40
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 200000, "max_tokens": 200000
"supported": true
}, },
{ {
"id": "openai/codex-mini", "id": "openai/codex-mini",
@@ -602,8 +547,7 @@
"output": 6 "output": 6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "openai/gpt-4o-mini", "id": "openai/gpt-4o-mini",
@@ -613,8 +557,7 @@
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "openai/o4-mini", "id": "openai/o4-mini",
@@ -624,8 +567,7 @@
"output": 4.4 "output": 4.4
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "openai/o4-mini-high", "id": "openai/o4-mini-high",
@@ -635,8 +577,7 @@
"output": 4.4 "output": 4.4
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "openai/o1-pro", "id": "openai/o1-pro",
@@ -646,8 +587,7 @@
"output": 600 "output": 600
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "meta-llama/llama-3.3-70b-instruct", "id": "meta-llama/llama-3.3-70b-instruct",
@@ -657,8 +597,7 @@
"output": 600 "output": 600
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1048576, "max_tokens": 1048576
"supported": true
}, },
{ {
"id": "meta-llama/llama-4-maverick", "id": "meta-llama/llama-4-maverick",
@@ -668,8 +607,7 @@
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1000000, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "meta-llama/llama-4-scout", "id": "meta-llama/llama-4-scout",
@@ -679,8 +617,7 @@
"output": 0.3 "output": 0.3
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 1000000, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "qwen/qwen-max", "id": "qwen/qwen-max",
@@ -690,8 +627,7 @@
"output": 6.4 "output": 6.4
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 32768, "max_tokens": 32768
"supported": true
}, },
{ {
"id": "qwen/qwen-turbo", "id": "qwen/qwen-turbo",
@@ -701,8 +637,7 @@
"output": 0.2 "output": 0.2
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 32768, "max_tokens": 1000000
"supported": true
}, },
{ {
"id": "qwen/qwen3-235b-a22b", "id": "qwen/qwen3-235b-a22b",
@@ -712,8 +647,7 @@
"output": 2 "output": 2
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 24000, "max_tokens": 24000
"supported": true
}, },
{ {
"id": "mistralai/mistral-small-3.1-24b-instruct:free", "id": "mistralai/mistral-small-3.1-24b-instruct:free",
@@ -723,9 +657,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 96000, "max_tokens": 96000
"supported": false,
"reason": "Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use."
}, },
{ {
"id": "mistralai/mistral-small-3.1-24b-instruct", "id": "mistralai/mistral-small-3.1-24b-instruct",
@@ -735,8 +667,7 @@
"output": 0.3 "output": 0.3
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 128000, "max_tokens": 128000
"supported": true
}, },
{ {
"id": "mistralai/devstral-small", "id": "mistralai/devstral-small",
@@ -746,8 +677,7 @@
"output": 0.3 "output": 0.3
}, },
"allowed_roles": ["main"], "allowed_roles": ["main"],
"max_tokens": 110000, "max_tokens": 110000
"supported": true
}, },
{ {
"id": "mistralai/mistral-nemo", "id": "mistralai/mistral-nemo",
@@ -757,8 +687,7 @@
"output": 0.07 "output": 0.07
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 100000, "max_tokens": 100000
"supported": true
}, },
{ {
"id": "thudm/glm-4-32b:free", "id": "thudm/glm-4-32b:free",
@@ -768,9 +697,7 @@
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 32768, "max_tokens": 32768
"supported": false,
"reason": "Free OpenRouter models are not supported due to severe rate limits, lack of tool use support, and other reliability issues that make them impractical for production use."
} }
], ],
"ollama": [ "ollama": [
@@ -781,8 +708,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "qwen3:latest", "id": "qwen3:latest",
@@ -791,8 +717,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "qwen3:14b", "id": "qwen3:14b",
@@ -801,8 +726,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "qwen3:32b", "id": "qwen3:32b",
@@ -811,8 +735,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "mistral-small3.1:latest", "id": "mistral-small3.1:latest",
@@ -821,8 +744,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "llama3.3:latest", "id": "llama3.3:latest",
@@ -831,8 +753,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "phi4:latest", "id": "phi4:latest",
@@ -841,8 +762,7 @@
"input": 0, "input": 0,
"output": 0 "output": 0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
} }
], ],
"azure": [ "azure": [
@@ -851,11 +771,10 @@
"swe_score": 0.332, "swe_score": 0.332,
"cost_per_1m_tokens": { "cost_per_1m_tokens": {
"input": 2.5, "input": 2.5,
"output": 10 "output": 10.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 16384, "max_tokens": 16384
"supported": true
}, },
{ {
"id": "gpt-4o-mini", "id": "gpt-4o-mini",
@@ -865,8 +784,7 @@
"output": 0.6 "output": 0.6
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 16384, "max_tokens": 16384
"supported": true
}, },
{ {
"id": "gpt-4-1", "id": "gpt-4-1",
@@ -876,8 +794,7 @@
"output": 10.0 "output": 10.0
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"],
"max_tokens": 16384, "max_tokens": 16384
"supported": true
} }
], ],
"bedrock": [ "bedrock": [
@@ -888,8 +805,7 @@
"input": 0.25, "input": 0.25,
"output": 1.25 "output": 1.25
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "us.anthropic.claude-3-opus-20240229-v1:0", "id": "us.anthropic.claude-3-opus-20240229-v1:0",
@@ -898,8 +814,7 @@
"input": 15, "input": 15,
"output": 75 "output": 75
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"]
"supported": true
}, },
{ {
"id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0", "id": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
@@ -908,8 +823,7 @@
"input": 3, "input": 3,
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"]
"supported": true
}, },
{ {
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0", "id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
@@ -918,8 +832,7 @@
"input": 3, "input": 3,
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"]
"supported": true
}, },
{ {
"id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0", "id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
@@ -929,8 +842,7 @@
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"],
"max_tokens": 65536, "max_tokens": 65536
"supported": true
}, },
{ {
"id": "us.anthropic.claude-3-5-haiku-20241022-v1:0", "id": "us.anthropic.claude-3-5-haiku-20241022-v1:0",
@@ -939,8 +851,7 @@
"input": 0.8, "input": 0.8,
"output": 4 "output": 4
}, },
"allowed_roles": ["main", "fallback"], "allowed_roles": ["main", "fallback"]
"supported": true
}, },
{ {
"id": "us.anthropic.claude-opus-4-20250514-v1:0", "id": "us.anthropic.claude-opus-4-20250514-v1:0",
@@ -949,8 +860,7 @@
"input": 15, "input": 15,
"output": 75 "output": 75
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"]
"supported": true
}, },
{ {
"id": "us.anthropic.claude-sonnet-4-20250514-v1:0", "id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
@@ -959,8 +869,7 @@
"input": 3, "input": 3,
"output": 15 "output": 15
}, },
"allowed_roles": ["main", "fallback", "research"], "allowed_roles": ["main", "fallback", "research"]
"supported": true
}, },
{ {
"id": "us.deepseek.r1-v1:0", "id": "us.deepseek.r1-v1:0",
@@ -970,8 +879,7 @@
"output": 5.4 "output": 5.4
}, },
"allowed_roles": ["research"], "allowed_roles": ["research"],
"max_tokens": 65536, "max_tokens": 65536
"supported": true
} }
] ]
} }
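Taken together, the hunks above strip the per-model `supported` flag and the free-tier `reason` strings from the model catalog, leaving each entry described only by its id, swe_score, cost_per_1m_tokens, allowed_roles, and an optional max_tokens. A minimal consumer-side sketch of the slimmed-down shape — the variable names, the swe_score, and the input cost below are illustrative placeholders, not values taken from the diff:

// Hypothetical sketch of one catalog entry after the cleanup; only the 0.6
// output cost and the allowed_roles array mirror the diff above.
const entry = {
	id: 'openai/gpt-4o-mini',
	swe_score: 0.3, // placeholder
	cost_per_1m_tokens: { input: 0.15, output: 0.6 }, // input is assumed
	allowed_roles: ['main', 'fallback'],
	max_tokens: 100000 // omitted entirely for the ollama and most bedrock entries
};
// With `supported` gone, eligibility is read from allowed_roles alone.
const usableAsMain = entry.allowed_roles.includes('main');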

View File

@@ -9,7 +9,6 @@ import boxen from 'boxen';
import ora from 'ora'; import ora from 'ora';
import Table from 'cli-table3'; import Table from 'cli-table3';
import gradient from 'gradient-string'; import gradient from 'gradient-string';
import readline from 'readline';
import { import {
log, log,
findTaskById, findTaskById,
@@ -1683,15 +1682,18 @@ async function displayComplexityReport(reportPath) {
) )
); );
const rl = readline.createInterface({ const readline = require('readline').createInterface({
input: process.stdin, input: process.stdin,
output: process.stdout output: process.stdout
}); });
const answer = await new Promise((resolve) => { const answer = await new Promise((resolve) => {
rl.question(chalk.cyan('Generate complexity report? (y/n): '), resolve); readline.question(
chalk.cyan('Generate complexity report? (y/n): '),
resolve
);
}); });
rl.close(); readline.close();
if (answer.toLowerCase() === 'y' || answer.toLowerCase() === 'yes') { if (answer.toLowerCase() === 'y' || answer.toLowerCase() === 'yes') {
// Call the analyze-complexity command // Call the analyze-complexity command
@@ -1972,6 +1974,8 @@ async function confirmTaskOverwrite(tasksPath) {
) )
); );
// Use dynamic import to get the readline module
const readline = await import('readline');
const rl = readline.createInterface({ const rl = readline.createInterface({
input: process.stdin, input: process.stdin,
output: process.stdout output: process.stdout
@@ -2459,6 +2463,8 @@ async function displayMultipleTasksSummary(
) )
); );
// Use dynamic import for readline
const readline = await import('readline');
const rl = readline.createInterface({ const rl = readline.createInterface({
input: process.stdin, input: process.stdin,
output: process.stdout output: process.stdout
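These ui.js hunks drop the top-level `import readline from 'readline'` and instead load readline only at the prompts that need it: via `require('readline')` in the complexity-report prompt and via `await import('readline')` in the two confirmation prompts. A minimal sketch of that lazy-prompt pattern — the `confirm` helper and its question text are hypothetical, not part of the diff:

// Sketch only: loads readline on demand, asks a yes/no question, and cleans up.
async function confirm(question) {
	const readline = await import('readline'); // deferred until a prompt is actually shown
	const rl = readline.createInterface({
		input: process.stdin,
		output: process.stdout
	});
	const answer = await new Promise((resolve) => rl.question(question, resolve));
	rl.close();
	return answer.trim().toLowerCase().startsWith('y');
}

If ui.js is an ES module (its other imports suggest it is), the `require('readline')` call in the first hunk would need `createRequire` or the same dynamic `import` used in the other two spots to run cleanly; standardizing on one form would keep the file consistent.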

View File

@@ -262,43 +262,6 @@ function hasTaggedStructure(data) {
return false; return false;
} }
/**
* Normalizes task IDs to ensure they are numbers instead of strings
* @param {Array} tasks - Array of tasks to normalize
*/
function normalizeTaskIds(tasks) {
if (!Array.isArray(tasks)) return;
tasks.forEach((task) => {
// Convert task ID to number with validation
if (task.id !== undefined) {
const parsedId = parseInt(task.id, 10);
if (!isNaN(parsedId) && parsedId > 0) {
task.id = parsedId;
}
}
// Convert subtask IDs to numbers with validation
if (Array.isArray(task.subtasks)) {
task.subtasks.forEach((subtask) => {
if (subtask.id !== undefined) {
// Check for dot notation (which shouldn't exist in storage)
if (typeof subtask.id === 'string' && subtask.id.includes('.')) {
// Extract the subtask part after the dot
const parts = subtask.id.split('.');
subtask.id = parseInt(parts[parts.length - 1], 10);
} else {
const parsedSubtaskId = parseInt(subtask.id, 10);
if (!isNaN(parsedSubtaskId) && parsedSubtaskId > 0) {
subtask.id = parsedSubtaskId;
}
}
}
});
}
});
}
/** /**
* Reads and parses a JSON file * Reads and parses a JSON file
* @param {string} filepath - Path to the JSON file * @param {string} filepath - Path to the JSON file
@@ -359,8 +322,6 @@ function readJSON(filepath, projectRoot = null, tag = null) {
console.log(`File is in legacy format, performing migration...`); console.log(`File is in legacy format, performing migration...`);
} }
normalizeTaskIds(data.tasks);
// This is legacy format - migrate it to tagged format // This is legacy format - migrate it to tagged format
const migratedData = { const migratedData = {
master: { master: {
@@ -440,16 +401,6 @@ function readJSON(filepath, projectRoot = null, tag = null) {
// Store reference to the raw tagged data for functions that need it // Store reference to the raw tagged data for functions that need it
const originalTaggedData = JSON.parse(JSON.stringify(data)); const originalTaggedData = JSON.parse(JSON.stringify(data));
// Normalize IDs in all tags before storing as originalTaggedData
for (const tagName in originalTaggedData) {
if (
originalTaggedData[tagName] &&
Array.isArray(originalTaggedData[tagName].tasks)
) {
normalizeTaskIds(originalTaggedData[tagName].tasks);
}
}
// Check and auto-switch git tags if enabled (for existing tagged format) // Check and auto-switch git tags if enabled (for existing tagged format)
// This needs to run synchronously BEFORE tag resolution // This needs to run synchronously BEFORE tag resolution
if (projectRoot) { if (projectRoot) {
@@ -497,8 +448,6 @@ function readJSON(filepath, projectRoot = null, tag = null) {
// Get the data for the resolved tag // Get the data for the resolved tag
const tagData = data[resolvedTag]; const tagData = data[resolvedTag];
if (tagData && tagData.tasks) { if (tagData && tagData.tasks) {
normalizeTaskIds(tagData.tasks);
// Add the _rawTaggedData property and the resolved tag to the returned data // Add the _rawTaggedData property and the resolved tag to the returned data
const result = { const result = {
...tagData, ...tagData,
@@ -515,8 +464,6 @@ function readJSON(filepath, projectRoot = null, tag = null) {
// If the resolved tag doesn't exist, fall back to master // If the resolved tag doesn't exist, fall back to master
const masterData = data.master; const masterData = data.master;
if (masterData && masterData.tasks) { if (masterData && masterData.tasks) {
normalizeTaskIds(masterData.tasks);
if (isDebug) { if (isDebug) {
console.log( console.log(
`Tag '${resolvedTag}' not found, falling back to master with ${masterData.tasks.length} tasks` `Tag '${resolvedTag}' not found, falling back to master with ${masterData.tasks.length} tasks`
@@ -546,7 +493,6 @@ function readJSON(filepath, projectRoot = null, tag = null) {
// If anything goes wrong, try to return master or empty // If anything goes wrong, try to return master or empty
const masterData = data.master; const masterData = data.master;
if (masterData && masterData.tasks) { if (masterData && masterData.tasks) {
normalizeTaskIds(masterData.tasks);
return { return {
...masterData, ...masterData,
_rawTaggedData: originalTaggedData _rawTaggedData: originalTaggedData
@@ -1466,6 +1412,5 @@ export {
createStateJson, createStateJson,
markMigrationForNotice, markMigrationForNotice,
flattenTasksWithSubtasks, flattenTasksWithSubtasks,
ensureTagMetadata, ensureTagMetadata
normalizeTaskIds
}; };
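This utils.js hunk removes the `normalizeTaskIds` helper and every call to it inside `readJSON`, so IDs are now kept exactly as they appear in tasks.json instead of being coerced to numbers on read. That shifts the burden onto lookups such as `findTaskById`; a sketch of a type-tolerant comparison, assuming lookups compare IDs by value rather than by type (the `sameId` helper below is hypothetical, not part of this change):

// Hypothetical: treats numeric and string IDs as equal if they print the same,
// which is what dropping normalization on read implies for lookups.
function sameId(a, b) {
	return String(a) === String(b);
}

sameId(5, '5'); // true
sameId('2', 2); // true
sameId('2.1', 21); // false – dotted subtask notation still needs its own parsing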

View File

@@ -23,82 +23,6 @@ describe('Task Finder', () => {
expect(result.originalSubtaskCount).toBeNull(); expect(result.originalSubtaskCount).toBeNull();
}); });
test('should find tasks when JSON contains string IDs (normalized to numbers)', () => {
// Simulate tasks loaded from JSON with string IDs and mixed subtask notations
const tasksWithStringIds = [
{ id: '1', title: 'First Task' },
{
id: '2',
title: 'Second Task',
subtasks: [
{ id: '1', title: 'Subtask One' },
{ id: '2.2', title: 'Subtask Two (with dotted notation)' } // Testing dotted notation
]
},
{
id: '5',
title: 'Fifth Task',
subtasks: [
{ id: '5.1', title: 'Subtask with dotted ID' }, // Should normalize to 1
{ id: '3', title: 'Subtask with simple ID' } // Should stay as 3
]
}
];
// The readJSON function should normalize these IDs to numbers
// For this test, we'll manually normalize them to simulate what happens
tasksWithStringIds.forEach((task) => {
task.id = parseInt(task.id, 10);
if (task.subtasks) {
task.subtasks.forEach((subtask) => {
// Handle dotted notation like "5.1" -> extract the subtask part
if (typeof subtask.id === 'string' && subtask.id.includes('.')) {
const parts = subtask.id.split('.');
subtask.id = parseInt(parts[parts.length - 1], 10);
} else {
subtask.id = parseInt(subtask.id, 10);
}
});
}
});
// Test finding tasks by numeric ID
const result1 = findTaskById(tasksWithStringIds, 5);
expect(result1.task).toBeDefined();
expect(result1.task.id).toBe(5);
expect(result1.task.title).toBe('Fifth Task');
// Test finding tasks by string ID
const result2 = findTaskById(tasksWithStringIds, '5');
expect(result2.task).toBeDefined();
expect(result2.task.id).toBe(5);
// Test finding subtasks with normalized IDs
const result3 = findTaskById(tasksWithStringIds, '2.1');
expect(result3.task).toBeDefined();
expect(result3.task.id).toBe(1);
expect(result3.task.title).toBe('Subtask One');
expect(result3.task.isSubtask).toBe(true);
// Test subtask that was originally "2.2" (should be normalized to 2)
const result4 = findTaskById(tasksWithStringIds, '2.2');
expect(result4.task).toBeDefined();
expect(result4.task.id).toBe(2);
expect(result4.task.title).toBe('Subtask Two (with dotted notation)');
// Test subtask that was originally "5.1" (should be normalized to 1)
const result5 = findTaskById(tasksWithStringIds, '5.1');
expect(result5.task).toBeDefined();
expect(result5.task.id).toBe(1);
expect(result5.task.title).toBe('Subtask with dotted ID');
// Test subtask that was originally "3" (should stay as 3)
const result6 = findTaskById(tasksWithStringIds, '5.3');
expect(result6.task).toBeDefined();
expect(result6.task.id).toBe(3);
expect(result6.task.title).toBe('Subtask with simple ID');
});
test('should find a subtask using dot notation', () => { test('should find a subtask using dot notation', () => {
const result = findTaskById(sampleTasks.tasks, '3.1'); const result = findTaskById(sampleTasks.tasks, '3.1');
expect(result.task).toBeDefined(); expect(result.task).toBeDefined();