* Fix: Correct version resolution for banner and update check
Resolves issues where the tool's version was displayed as 'unknown'.
- Modified 'displayBanner' in 'ui.js' and 'checkForUpdate' in 'commands.js' to read package.json relative to their own script locations using import.meta.url (see the sketch below).
- This ensures the correct local version is identified for both the main banner display and the update notification mechanism.
- Restored a missing closing brace in 'ui.js' to fix a SyntaxError.
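A minimal sketch of the resolution approach described above, assuming a standard ESM setup; the relative path and helper name are illustrative, and the actual code in 'ui.js' and 'commands.js' may differ.

```js
import { readFileSync } from 'fs';
import { fileURLToPath } from 'url';
import path from 'path';

// Resolve package.json relative to this module's own location rather than
// process.cwd(), so the version is found regardless of where the CLI is run from.
function getLocalVersion() {
  try {
    const __dirname = path.dirname(fileURLToPath(import.meta.url));
    // Illustrative path; the real relative location depends on where the script lives.
    const pkgPath = path.join(__dirname, '..', 'package.json');
    const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
    return pkg.version ?? 'unknown';
  } catch {
    // Fall back to the previous behaviour if the file cannot be read.
    return 'unknown';
  }
}
```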
* fix: refactor and cleanup
* fix: chores and cleanup and testing
* chore: cleanup
* fix: add changeset
---------
Co-authored-by: Christer Soederlund <christer.soderlund@gmail.com>
This commit introduces several improvements to AI interactions and
task management functionality:
- AI Provider Enhancements (for Telemetry & Robustness):
- Added a check in one provider module to ensure the returned value is a string, throwing an error if not. This prevents downstream errors.
- Standardized the return structures of several providers' text- and object-generation functions to consistently include input and output token usage fields. This aligns them with other providers (like Anthropic, Google, Perplexity) for consistent telemetry data collection, as part of implementing subtask 77.14 and similar work.
- Task Expansion:
  - Updated the prompt to be more explicit about using an empty array when a field has no entries, to better guide AI output.
  - Implemented a pre-emptive cleanup step that normalizes malformed fragments in the AI response before JSON parsing (a purely illustrative sketch follows this list). This improves resilience to AI output quirks, observed particularly with Perplexity.
- Fixes an issue in commands.js where successfulRemovals would be undefined; it is now correctly derived from the result variable.
- Updates supported models for Gemini
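A purely illustrative sketch of the kind of pre-emptive cleanup referenced above; the concrete substitution performed in the real code is not reproduced in this message, so the object-extraction and trailing-comma handling below are assumptions about typical AI output quirks.

```js
// Illustrative only: normalize common AI output quirks before JSON parsing.
// The actual replacement applied in the real cleanup step may differ.
function cleanupAiJson(rawText) {
  // Keep only the outermost JSON object, dropping any surrounding prose or code fences.
  const start = rawText.indexOf('{');
  const end = rawText.lastIndexOf('}');
  let text = start !== -1 && end > start ? rawText.slice(start, end + 1) : rawText.trim();
  // Remove trailing commas before closing brackets/braces, a frequent malformation.
  text = text.replace(/,\s*([}\]])/g, '$1');
  return JSON.parse(text);
}
```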
These changes address issues observed during E2E tests, enhance the
reliability of AI-driven task analysis and expansion, and promote
consistent telemetry data across multiple AI providers.
This commit updates response handling to more robustly consume results from the AI service layer.
Previously, the module strictly expected the AI-generated object to be nested under a specific property of the result. It now first checks whether the result itself contains the expected task data object, and then falls back to checking the nested property.
This enhancement increases compatibility with varying AI provider response structures, similar to improvements recently made in a related module (see the sketch below).
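A sketch of that fallback lookup, assuming hypothetical property names (mainResult, object) and a simplistic shape check; the real modules may use different names and validation.

```js
// Sketch of the fallback lookup described above. The property names and the
// 'title' shape check are assumptions for illustration only.
function extractAiTaskObject(aiServiceResponse) {
  const mainResult = aiServiceResponse?.mainResult;
  // First, check whether the result itself already is the task data object...
  if (mainResult && typeof mainResult === 'object' && 'title' in mainResult) {
    return mainResult;
  }
  // ...then fall back to the previously expected nested location.
  if (mainResult?.object && typeof mainResult.object === 'object') {
    return mainResult.object;
  }
  throw new Error('AI response did not contain a recognizable task object');
}
```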
This commit introduces two key improvements:
1. **Google Provider Telemetry:**
   - Updated the Google provider to include token usage data (input and output token counts) in its generation responses.
   - This aligns the Google provider with others for consistent AI usage telemetry (a sketch follows below).
2. **Robust AI Object Response Handling:**
   - Modified the add-task module to more flexibly handle responses from the AI service.
   - The add-task module now checks for the AI-generated object both at the top level of the result and under its nested property, improving compatibility with different AI provider response structures (e.g., Gemini).
These changes enhance the reliability of AI interactions, particularly with the Google provider, and ensure accurate telemetry collection.
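A sketch of the standardized usage-reporting shape described here and in the earlier provider changes. The wrapper name, the SDK call, and the field names (inputTokens/outputTokens, promptTokens/completionTokens) are illustrative assumptions rather than the project's actual identifiers.

```js
// Hypothetical provider wrapper: return token usage alongside the generated text
// so telemetry can be collected consistently across providers.
async function generateTextWithUsage(client, params) {
  const result = await client.generateText(params); // illustrative SDK call
  return {
    text: result.text,
    // Field names are assumptions; the point is that both counts are always present.
    usage: {
      inputTokens: result.usage?.promptTokens ?? 0,
      outputTokens: result.usage?.completionTokens ?? 0
    }
  };
}
```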
This commit applies the standard telemetry pattern to the analyze-task-complexity command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/analyze-task-complexity.js):
- The call to generateTextService now includes commandName: 'analyze-complexity' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for parsing the complexity report JSON.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { report: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/analyze-task-complexity.js):
- The call to the core analyzeTaskComplexity function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
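A condensed sketch of the core-logic side of this pattern. The import paths, the outputType values, and the parameter wiring are simplified assumptions; parsing and report-building details are omitted.

```js
// Illustrative module layout; real paths and signatures may differ.
import { generateTextService } from '../ai-services-unified.js';
import { displayAiUsageSummary } from '../ui.js';

async function analyzeTaskComplexity(options, context = {}, outputFormat = 'text') {
  // Pass command identity so telemetry records which command triggered the AI call.
  const { mainResult, telemetryData } = await generateTextService({
    ...options,
    commandName: 'analyze-complexity',
    outputType: outputFormat === 'text' ? 'cli' : 'mcp' // values are illustrative
  });

  // mainResult is the AI-generated text; parse it into the complexity report.
  const report = JSON.parse(mainResult);

  // In CLI mode, surface the usage summary directly to the user.
  if (outputFormat === 'text') {
    displayAiUsageSummary(telemetryData);
  }

  return { report, telemetryData };
}
```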
This commit applies the standard telemetry pattern to the update-subtask command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/update-subtask-by-id.js):
- The call to generateTextService now includes commandName: 'update-subtask' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for the appended content.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { updatedSubtask: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/update-subtask-by-id.js):
- The call to the core updateSubtaskById function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
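And a sketch of the MCP direct-function side of the same pattern, using the update-subtask case as the example; the direct-function name, argument list, and import path are assumptions for illustration.

```js
// Illustrative import path; the real layout is described in the commit above.
import { updateSubtaskById } from '../../../../scripts/modules/task-manager/update-subtask-by-id.js';

// Hypothetical direct-function wrapper: pass telemetry context through to the core
// function, then surface coreResult.telemetryData to the MCP client.
export async function updateSubtaskByIdDirect(args, log, context = {}) {
  const coreResult = await updateSubtaskById(args.tasksJsonPath, args.id, args.prompt, {
    ...context,
    commandName: 'update-subtask',
    outputType: 'mcp' // non-CLI caller; value is illustrative
  });

  return {
    success: true,
    data: {
      updatedSubtask: coreResult.updatedSubtask,
      telemetryData: coreResult.telemetryData
    }
  };
}
```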
This commit applies the standard telemetry pattern to the update-tasks command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/update-tasks.js):
- The call to generateTextService now includes commandName: 'update-tasks' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for parsing the updated task JSON.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { success: true, updatedTasks: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/update-tasks.js):
- The call to the core updateTasks function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.