feat: improve config-manager max tokens for openrouter and kimi-k2 model (#1035)
.changeset/quiet-rabbits-bathe.md | 10 ++++++++++ (new file)
@@ -0,0 +1,10 @@
---
"task-master-ai": patch
---

Fix max_tokens limits for OpenRouter and Groq models

- Add special handling in config-manager.js so custom OpenRouter models fall back to a conservative default of 32,768 max_tokens (see the sketch after this list)
- Update the qwen/qwen-turbo model's max_tokens from 1,000,000 to 32,768 to match OpenRouter's actual limit
- Fix the moonshotai/kimi-k2-instruct max_tokens to 16,384 to match Groq's actual limit (fixes #1028)
- This prevents "maximum context length exceeded" errors when using OpenRouter models that are not in our supported models list
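
For context, here is a minimal sketch of what the config-manager.js fallback described above might look like. The names used here (MODEL_MAP, getMaxTokensForModel, OPENROUTER_CUSTOM_MODEL_MAX_TOKENS) are illustrative assumptions, not Task Master's actual internals:

```js
// Sketch only: names below are hypothetical, not Task Master's real API.

// Conservative cap for OpenRouter models we have no metadata for.
const OPENROUTER_CUSTOM_MODEL_MAX_TOKENS = 32768;

// Hypothetical stand-in for the supported-models list, with the corrected limits.
const MODEL_MAP = {
	openrouter: [{ id: 'qwen/qwen-turbo', max_tokens: 32768 }],
	groq: [{ id: 'moonshotai/kimi-k2-instruct', max_tokens: 16384 }]
};

function getMaxTokensForModel(providerName, modelId) {
	const models = MODEL_MAP[providerName] ?? [];
	const entry = models.find((m) => m.id === modelId);
	if (entry) {
		return entry.max_tokens;
	}
	// Custom OpenRouter model not in the supported list: use a conservative
	// default rather than an optimistic large value, so requests don't exceed
	// the model's real context window.
	if (providerName === 'openrouter') {
		return OPENROUTER_CUSTOM_MODEL_MAX_TOKENS;
	}
	return undefined;
}
```

With a fallback like this, a request against an unlisted OpenRouter model is capped at 32,768 tokens instead of triggering a "maximum context length exceeded" error from the provider.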