Compare commits

...

6 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ralph Khreish | 20e1b72a17 | Merge pull request #549 from eyaltoledano/changeset-release/main (Version Packages) | 2025-05-20 00:34:13 +02:00 |
| github-actions[bot] | db631f43a5 | Version Packages | 2025-05-19 22:31:08 +00:00 |
| Ralph Khreish | 3b9402f1f8 | Merge Release 0.14.0 #529 (Release 0.14.0) | 2025-05-20 00:30:46 +02:00 |
| Ralph Khreish | c8c0fc2a57 | fix: improve ollama object to telemetry structure (#546) | 2025-05-19 23:05:45 +02:00 |
| HR | 60b8e97a1c | fix: roomodes typo (#544) | 2025-05-19 17:00:06 +02:00 |
| github-actions[bot] | 3a6d6dd671 | chore: rc version bump | 2025-05-18 08:08:54 +00:00 |

22 changed files with 95 additions and 135 deletions

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Resolve all issues related to MCP

View File

@@ -1,9 +0,0 @@
---
'task-master-ai': patch
---
Fix CLI --force flag for parse-prd command
Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.
- Fixes #477

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': minor
---
.taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.
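For illustration, a per-role override in `.taskmasterconfig` might look like the sketch below; only the `baseUrl` field comes from this change, while the surrounding field names (`models`, `provider`, `modelId`) are assumptions about the file's shape.

```json
{
  "models": {
    "research": {
      "provider": "openai",
      "modelId": "gpt-4o",
      "baseUrl": "https://llm-proxy.example.com/v1"
    }
  }
}
```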

View File

@@ -1,11 +0,0 @@
---
'task-master-ai': minor
---
Add Ollama as a supported AI provider.
- You can now add it by running `task-master models --setup` and selecting it.
- Ollama is a local model provider, so no API key is required.
- Ollama models are available at `http://localhost:11434/api` by default.
- You can change the default URL by setting the `OLLAMA_BASE_URL` environment variable or by adding a `baseUrl` property to the `ollama` model role in `.taskmasterconfig`.
- If you want to use a custom API key, you can set it in the `OLLAMA_API_KEY` environment variable.
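As a rough sketch of the override described above (everything except the `ollama` provider name, the `baseUrl` field, and the default endpoint is an assumed placeholder), pointing the main role at a non-default Ollama endpoint could look like:

```json
{
  "models": {
    "main": {
      "provider": "ollama",
      "modelId": "llama3",
      "baseUrl": "http://192.168.1.50:11434/api"
    }
  }
}
```

Setting the `OLLAMA_BASE_URL` environment variable achieves the same override without editing the file.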

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Task Master no longer tells you to update when you're already up to date

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Adds costs information to AI commands using input/output tokens and model costs.
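As a back-of-the-envelope sketch of that calculation (the function and field names here are illustrative, not the project's actual API):

```js
// Illustrative only: derive a call's cost from token usage and per-million-token prices (USD).
function computeCost(usage, pricing) {
	const inputCost = (usage.inputTokens / 1_000_000) * pricing.inputCostPerMTok;
	const outputCost = (usage.outputTokens / 1_000_000) * pricing.outputCostPerMTok;
	return inputCost + outputCost;
}

// e.g. 1,200 input + 800 output tokens at $3 / $15 per million tokens ≈ $0.0156
computeCost({ inputTokens: 1200, outputTokens: 800 }, { inputCostPerMTok: 3, outputCostPerMTok: 15 });
```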

View File

@@ -1,23 +0,0 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.13.2"
  },
  "changesets": [
    "beige-doodles-type",
    "floppy-plants-marry",
    "forty-plums-stay",
    "many-wasps-sell",
    "red-oranges-attend",
    "red-suns-wash",
    "sharp-dingos-melt",
    "six-cloths-happen",
    "slow-singers-swim",
    "social-masks-fold",
    "soft-zoos-flow",
    "ten-ways-mate",
    "tricky-wombats-spend",
    "wide-eyes-relax"
  ]
}

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Add src directory to exports

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Fix the error handling of task status settings

View File

@@ -1,7 +0,0 @@
---
'task-master-ai': patch
---
Remove caching layer from MCP direct functions for task listing, next task, and complexity report
- Fixes issues users were having where they were getting stale data

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Fix for issue #409 LOG_LEVEL Pydantic validation error

View File

@@ -1,7 +0,0 @@
---
'task-master-ai': patch
---
Small fixes
- `next` command no longer incorrectly suggests that subtasks be broken down into subtasks in the CLI
- fixes the `append` flag so it properly works in the CLI

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': minor
---
Display task complexity scores in task lists, next task, and task details views.

View File

@@ -1,7 +0,0 @@
---
'task-master-ai': patch
---
Fix initial .env.example to work out of the box
- Closes #419

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Fix default fallback model and maxTokens in Taskmaster initialization

View File

@@ -1,5 +0,0 @@
---
'task-master-ai': patch
---
Fix bug when updating tasks on the MCP server (#412)

View File

@@ -1,11 +0,0 @@
---
'task-master-ai': patch
---
Fix duplicate output on CLI help screen
- Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
- Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
- Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
- Ensures a clean, branded help experience with no repeated content.
- Fixes #339
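A minimal sketch of the `.helpInformation` override pattern referred to above (the help text here is a placeholder, not the CLI's actual branded output):

```js
import { Command } from 'commander';

const program = new Command('task-master');

// Route help rendering through a single override so -h/--help prints once.
program.helpInformation = function () {
	return ['Task Master CLI', '', 'Usage: task-master <command> [options]', ''].join('\n');
};

program.parse(process.argv);
```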

View File

@@ -1,5 +1,83 @@
# task-master-ai
## 0.14.0
### Minor Changes
- [#521](https://github.com/eyaltoledano/claude-task-master/pull/521) [`ed17cb0`](https://github.com/eyaltoledano/claude-task-master/commit/ed17cb0e0a04dedde6c616f68f24f3660f68dd04) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - .taskmasterconfig now supports a baseUrl field per model role (main, research, fallback), allowing endpoint overrides for any provider.
- [#536](https://github.com/eyaltoledano/claude-task-master/pull/536) [`f4a83ec`](https://github.com/eyaltoledano/claude-task-master/commit/f4a83ec047b057196833e3a9b861d4bceaec805d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Ollama as a supported AI provider.
- You can now add it by running `task-master models --setup` and selecting it.
- Ollama is a local model provider, so no API key is required.
- Ollama models are available at `http://localhost:11434/api` by default.
- You can change the default URL by setting the `OLLAMA_BASE_URL` environment variable or by adding a `baseUrl` property to the `ollama` model role in `.taskmasterconfig`.
- If you want to use a custom API key, you can set it in the `OLLAMA_API_KEY` environment variable.
- [#528](https://github.com/eyaltoledano/claude-task-master/pull/528) [`58b417a`](https://github.com/eyaltoledano/claude-task-master/commit/58b417a8ce697e655f749ca4d759b1c20014c523) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Display task complexity scores in task lists, next task, and task details views.
### Patch Changes
- [#402](https://github.com/eyaltoledano/claude-task-master/pull/402) [`01963af`](https://github.com/eyaltoledano/claude-task-master/commit/01963af2cb6f77f43b2ad8a6e4a838ec205412bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Resolve all issues related to MCP
- [#478](https://github.com/eyaltoledano/claude-task-master/pull/478) [`4117f71`](https://github.com/eyaltoledano/claude-task-master/commit/4117f71c18ee4d321a9c91308d00d5d69bfac61e) Thanks [@joedanz](https://github.com/joedanz)! - Fix CLI --force flag for parse-prd command
Previously, the --force flag was not respected when running `parse-prd`, causing the command to prompt for confirmation or fail even when --force was provided. This patch ensures that the flag is correctly passed and handled, allowing users to overwrite existing tasks.json files as intended.
- Fixes #477
- [#511](https://github.com/eyaltoledano/claude-task-master/pull/511) [`17294ff`](https://github.com/eyaltoledano/claude-task-master/commit/17294ff25918d64278674e558698a1a9ad785098) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Task Master no longer tells you to update when you're already up to date
- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`2b3ae8b`](https://github.com/eyaltoledano/claude-task-master/commit/2b3ae8bf89dc471c4ce92f3a12ded57f61faa449) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds costs information to AI commands using input/output tokens and model costs.
- [#402](https://github.com/eyaltoledano/claude-task-master/pull/402) [`01963af`](https://github.com/eyaltoledano/claude-task-master/commit/01963af2cb6f77f43b2ad8a6e4a838ec205412bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix ERR_MODULE_NOT_FOUND when trying to run MCP Server
- [#402](https://github.com/eyaltoledano/claude-task-master/pull/402) [`01963af`](https://github.com/eyaltoledano/claude-task-master/commit/01963af2cb6f77f43b2ad8a6e4a838ec205412bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add src directory to exports
- [#523](https://github.com/eyaltoledano/claude-task-master/pull/523) [`da317f2`](https://github.com/eyaltoledano/claude-task-master/commit/da317f2607ca34db1be78c19954996f634c40923) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix the error handling of task status settings
- [#527](https://github.com/eyaltoledano/claude-task-master/pull/527) [`a8dabf4`](https://github.com/eyaltoledano/claude-task-master/commit/a8dabf44856713f488960224ee838761716bba26) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Remove caching layer from MCP direct functions for task listing, next task, and complexity report
- Fixes issues users were having where they were getting stale data
- [#417](https://github.com/eyaltoledano/claude-task-master/pull/417) [`a1f8d52`](https://github.com/eyaltoledano/claude-task-master/commit/a1f8d52474fdbdf48e17a63e3f567a6d63010d9f) Thanks [@ksylvan](https://github.com/ksylvan)! - Fix for issue #409 LOG_LEVEL Pydantic validation error
- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`0288311`](https://github.com/eyaltoledano/claude-task-master/commit/0288311965ae2a343ebee4a0c710dde94d2ae7e7) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Small fixes
- `next` command no longer incorrectly suggests that subtasks be broken down into subtasks in the CLI
- fixes the `append` flag so it properly works in the CLI
- [#501](https://github.com/eyaltoledano/claude-task-master/pull/501) [`0a61184`](https://github.com/eyaltoledano/claude-task-master/commit/0a611843b56a856ef0a479dc34078326e05ac3a8) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix initial .env.example to work out of the box
- Closes #419
- [#435](https://github.com/eyaltoledano/claude-task-master/pull/435) [`a96215a`](https://github.com/eyaltoledano/claude-task-master/commit/a96215a359b25061fd3b3f3c7b10e8ac0390c062) Thanks [@lebsral](https://github.com/lebsral)! - Fix default fallback model and maxTokens in Taskmaster initialization
- [#517](https://github.com/eyaltoledano/claude-task-master/pull/517) [`e96734a`](https://github.com/eyaltoledano/claude-task-master/commit/e96734a6cc6fec7731de72eb46b182a6e3743d02) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix bug when updating tasks on the MCP server (#412)
- [#496](https://github.com/eyaltoledano/claude-task-master/pull/496) [`efce374`](https://github.com/eyaltoledano/claude-task-master/commit/efce37469bc58eceef46763ba32df1ed45242211) Thanks [@joedanz](https://github.com/joedanz)! - Fix duplicate output on CLI help screen
- Prevent the Task Master CLI from printing the help screen more than once when using `-h` or `--help`.
- Removed redundant manual event handlers and guards for help output; now only the Commander `.helpInformation` override is used for custom help.
- Simplified logic so that help is only shown once for both "no arguments" and help flag flows.
- Ensures a clean, branded help experience with no repeated content.
- Fixes #339
## 0.14.0-rc.1
### Minor Changes
- [#536](https://github.com/eyaltoledano/claude-task-master/pull/536) [`f4a83ec`](https://github.com/eyaltoledano/claude-task-master/commit/f4a83ec047b057196833e3a9b861d4bceaec805d) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Ollama as a supported AI provider.
- You can now add it by running `task-master models --setup` and selecting it.
- Ollama is a local model provider, so no API key is required.
- Ollama models are available at `http://localhost:11434/api` by default.
- You can change the default URL by setting the `OLLAMA_BASE_URL` environment variable or by adding a `baseUrl` property to the `ollama` model role in `.taskmasterconfig`.
- If you want to use a custom API key, you can set it in the `OLLAMA_API_KEY` environment variable.
### Patch Changes
- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`2b3ae8b`](https://github.com/eyaltoledano/claude-task-master/commit/2b3ae8bf89dc471c4ce92f3a12ded57f61faa449) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Adds costs information to AI commands using input/output tokens and model costs.
- [#442](https://github.com/eyaltoledano/claude-task-master/pull/442) [`0288311`](https://github.com/eyaltoledano/claude-task-master/commit/0288311965ae2a343ebee4a0c710dde94d2ae7e7) Thanks [@eyaltoledano](https://github.com/eyaltoledano)! - Small fixes
- `next` command no longer incorrectly suggests that subtasks be broken down into subtasks in the CLI
- fixes the `append` flag so it properly works in the CLI
## 0.14.0-rc.0
### Minor Changes

View File

@@ -39,7 +39,7 @@
{
"slug": "debug",
"name": "Debug",
"roleDefinition": "You are Roo, an expert software debugger specializing in systematic problem diagnosis and resolution. When activated by another mdode, your task is to meticulously analyze the provided debugging request (potentially referencing Taskmaster tasks, logs, or metrics), use diagnostic tools as instructed to investigate the issue, identify the root cause, and report your findings and recommended next steps back via `attempt_completion`. You focus solely on diagnostics within the scope defined by the delegated task.",
"roleDefinition": "You are Roo, an expert software debugger specializing in systematic problem diagnosis and resolution. When activated by another mode, your task is to meticulously analyze the provided debugging request (potentially referencing Taskmaster tasks, logs, or metrics), use diagnostic tools as instructed to investigate the issue, identify the root cause, and report your findings and recommended next steps back via `attempt_completion`. You focus solely on diagnostics within the scope defined by the delegated task.",
"customInstructions": "Reflect on 5-7 different possible sources of the problem, distill those down to 1-2 most likely sources, and then add logs to validate your assumptions. Explicitly ask the user to confirm the diagnosis before fixing the problem.",
"groups": [
"read",

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.14.0-rc.0",
"version": "0.14.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",

View File

@@ -3,7 +3,7 @@
* AI provider implementation for Ollama models using the ollama-ai-provider package.
*/
import { createOllama, ollama } from 'ollama-ai-provider';
import { createOllama } from 'ollama-ai-provider';
import { log } from '../../scripts/modules/utils.js'; // Import logging utility
import { generateObject, generateText, streamText } from 'ai';
@@ -48,7 +48,13 @@ async function generateOllamaText({
			temperature
		});
		log('debug', `Ollama generated text: ${result.text}`);
		return result.text;
		return {
			text: result.text,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
@@ -138,7 +144,13 @@ async function generateOllamaObject({
			temperature: temperature,
			maxRetries: maxRetries
		});
		return result.object;
		return {
			object: result.object,
			usage: {
				inputTokens: result.usage.promptTokens,
				outputTokens: result.usage.completionTokens
			}
		};
	} catch (error) {
		log(
			'error',
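The net effect of these two hunks is that the Ollama text and object helpers now return token usage alongside their result. A hypothetical caller wiring that into telemetry might look like this sketch; only the `{ text, usage }` shape comes from the diff above, while the import path, call options, and `recordTelemetry` helper are assumptions:

```js
// Hypothetical consumer of the new return shape; path and options are placeholders.
import { generateOllamaText } from './src/ai-providers/ollama.js';

function recordTelemetry({ provider, inputTokens, outputTokens }) {
	console.log(`[telemetry] ${provider}: ${inputTokens} in / ${outputTokens} out`);
}

const { text, usage } = await generateOllamaText({
	modelId: 'llama3', // placeholder model
	messages: [{ role: 'user', content: 'Summarize the changelog.' }]
});

recordTelemetry({ provider: 'ollama', inputTokens: usage.inputTokens, outputTokens: usage.outputTokens });
console.log(text);
```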