feat(telemetry): Implement AI usage telemetry pattern and apply to add-task
This commit introduces a standardized pattern for capturing and propagating AI usage telemetry (cost, tokens, model used) across the Task Master stack and applies it to the 'add-task' functionality. Key changes include:

- **Telemetry Pattern Definition:**
  - Added `.cursor/rules/telemetry.mdc` defining the integration pattern for core logic, direct functions, MCP tools, and CLI commands.
  - Updated related rules (including the architecture, MCP, new-feature integration, and rule-glossary rules) to reference the new telemetry rule.
- **Core Telemetry Implementation (`ai-services-unified.js`):**
  - Refactored the unified AI service to generate and return a `telemetryData` object alongside the main AI result (`mainResult`).
  - Fixed an MCP server startup crash by removing redundant local model-map loading and instead using the `MODEL_MAP` imported from `config-manager.js` for cost calculations.
  - Added `currency` to the `telemetryData` object.
- **add-task Integration:**
  - Modified the core `addTask` function to receive `telemetryData` from the AI service, return it, and call the new UI display function for CLI output.
  - Updated `addTaskDirect` to receive `telemetryData` from the core function and include it in the `data` payload of its response.
  - Ensured the corresponding add-task MCP tool correctly passes the `telemetryData` through via `handleApiResult`.
  - Updated `commands.js` to correctly pass context (`commandName`, `outputType`) to the core function and rely on it for CLI telemetry display.
- **UI Enhancement:**
  - Added the `displayAiUsageSummary` function to `ui.js` to show telemetry details in the CLI.
- **Project Management:**
  - Added subtasks 77.6 through 77.12 to track the rollout of this telemetry pattern to the other AI-powered commands.

This establishes the foundation for tracking AI usage across the application.
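For reference, the cost figure recorded in `telemetryData.totalCost` is derived from per-million-token rates looked up in `MODEL_MAP`. A minimal sketch of the arithmetic used by `logAiUsage` (the rates below are hypothetical placeholders, not real model pricing):

```javascript
// Hypothetical per-1M-token rates; real values come from MODEL_MAP in config-manager.js.
const inputCostPer1M = 3.0;   // USD per 1M input tokens (placeholder)
const outputCostPer1M = 15.0; // USD per 1M output tokens (placeholder)

const inputTokens = 1200;
const outputTokens = 800;

// Same formula as logAiUsage in ai-services-unified.js:
const totalCost =
  (inputTokens / 1_000_000) * inputCostPer1M +
  (outputTokens / 1_000_000) * outputCostPer1M;

console.log(parseFloat(totalCost.toFixed(6))); // 0.0156
```

The rounded value, together with the model's `currency`, ends up in the `telemetryData` object that is propagated up the stack.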
This commit is contained in:
@@ -25,6 +25,7 @@ This document outlines the architecture and usage patterns for interacting with
* Implements **retry logic** for specific API errors (`_attemptProviderCallWithRetries`).
* Resolves API keys automatically via `_resolveApiKey` (using `resolveEnvVariable`).
* Maps requests to the correct provider implementation (in `src/ai-providers/`) via `PROVIDER_FUNCTIONS`.
* Returns a structured object containing the primary AI result (`mainResult`) and telemetry data (`telemetryData`). See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for details on how this telemetry data is propagated and handled.

* **Provider Implementations (`src/ai-providers/*.js`):**
* Contain provider-specific wrappers around Vercel AI SDK functions (`generateText`, `generateObject`).

@@ -42,6 +42,7 @@ alwaysApply: false
- Resolves API keys (from `.env` or `session.env`).
- Implements fallback and retry logic.
- Orchestrates calls to provider-specific implementations (`src/ai-providers/`).
- Telemetry data generated by the AI service layer is propagated upwards through core logic, direct functions, and MCP tools. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the detailed integration pattern.

- **[`src/ai-providers/*.js`](mdc:src/ai-providers/): Provider-Specific Implementations**
- **Purpose**: Provider-specific wrappers for Vercel AI SDK functions.

@@ -3,7 +3,6 @@ description: Glossary of other Cursor rules
globs: **/*
alwaysApply: true
---

# Glossary of Task Master Cursor Rules

This file provides a quick reference to the purpose of each rule file located in the `.cursor/rules` directory.
@@ -23,4 +22,5 @@ This file provides a quick reference to the purpose of each rule file located in
- **[`tests.mdc`](mdc:.cursor/rules/tests.mdc)**: Guidelines for implementing and maintaining tests for Task Master CLI.
- **[`ui.mdc`](mdc:.cursor/rules/ui.mdc)**: Guidelines for implementing and maintaining user interface components.
- **[`utilities.mdc`](mdc:.cursor/rules/utilities.mdc)**: Guidelines for implementing utility functions.
- **[`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc)**: Guidelines for integrating AI usage telemetry across Task Master.

@@ -522,3 +522,8 @@ Follow these steps to add MCP support for an existing Task Master command (see [
// Add more functions as implemented
};
```

## Telemetry Integration

- Direct functions calling core logic that involves AI should receive and pass through `telemetryData` within their successful `data` payload. See [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc) for the standard pattern.
- MCP tools use `handleApiResult`, which ensures the `data` object (potentially including `telemetryData`) from the direct function is correctly included in the final response.

@@ -3,7 +3,6 @@ description: Guidelines for integrating new features into the Task Master CLI
globs: scripts/modules/*.js
alwaysApply: false
---

# Task Master Feature Integration Guidelines

## Feature Placement Decision Process
@@ -196,6 +195,8 @@ The standard pattern for adding a feature follows this workflow:
- ✅ **DO**: If an MCP tool fails with vague errors (e.g., JSON parsing issues like `Unexpected token ... is not valid JSON`), **try running the equivalent CLI command directly in the terminal** (e.g., `task-master expand --all`). CLI output often provides much more specific error messages (like missing function definitions or stack traces from the core logic) that pinpoint the root cause.
- ❌ **DON'T**: Rely solely on MCP logs if the error is unclear; use the CLI as a complementary debugging tool for core logic issues.

- **Telemetry Integration**: Ensure AI calls correctly handle and propagate `telemetryData` as described in [`telemetry.mdc`](mdc:.cursor/rules/telemetry.mdc).

```javascript
// 1. CORE LOGIC: Add function to appropriate module (example in task-manager.js)
/**

228 .cursor/rules/telemetry.mdc Normal file
@@ -0,0 +1,228 @@
|
||||
---
|
||||
description: Guidelines for integrating AI usage telemetry across Task Master.
|
||||
globs: scripts/modules/**/*.js,mcp-server/src/**/*.js
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
# AI Usage Telemetry Integration
|
||||
|
||||
This document outlines the standard pattern for capturing, propagating, and handling AI usage telemetry data (cost, tokens, model, etc.) across the Task Master stack. This ensures consistent telemetry for both CLI and MCP interactions.
|
||||
|
||||
## Overview
|
||||
|
||||
Telemetry data is generated within the unified AI service layer ([`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js)) and then passed upwards through the calling functions.
|
||||
|
||||
- **Data Source**: [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js) (specifically its `generateTextService`, `generateObjectService`, etc.) returns an object like `{ mainResult: AI_CALL_OUTPUT, telemetryData: TELEMETRY_OBJECT }`.
|
||||
- **`telemetryData` Object Structure**:
|
||||
```json
|
||||
{
|
||||
"timestamp": "ISO_STRING_DATE",
|
||||
"userId": "USER_ID_FROM_CONFIG",
|
||||
"commandName": "invoking_command_or_tool_name",
|
||||
"modelUsed": "ai_model_id",
|
||||
"providerName": "ai_provider_name",
|
||||
"inputTokens": NUMBER,
|
||||
"outputTokens": NUMBER,
|
||||
"totalTokens": NUMBER,
|
||||
"totalCost": NUMBER, // e.g., 0.012414
|
||||
"currency": "USD" // e.g., "USD"
|
||||
}
|
||||
```
|
||||
|
||||
## Integration Pattern by Layer
|
||||
|
||||
The key principle is that each layer receives telemetry data from the layer below it (if applicable) and passes it to the layer above it, or handles it for display in the case of the CLI.
|
||||
|
||||
### 1. Core Logic Functions (e.g., in `scripts/modules/task-manager/`)
|
||||
|
||||
Functions in this layer that invoke AI services are responsible for handling the `telemetryData` they receive from [`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js).
|
||||
|
||||
- **Actions**:
|
||||
1. Call the appropriate AI service function (e.g., `generateObjectService`).
|
||||
- Pass `commandName` (e.g., `add-task`, `expand-task`) and `outputType` (e.g., `cli` or `mcp`) in the `params` object to the AI service. The `outputType` can be derived from context (e.g., presence of `mcpLog`).
|
||||
2. The AI service returns an object, e.g., `aiServiceResponse = { mainResult: {/*AI output*/}, telemetryData: {/*telemetry data*/} }`.
|
||||
3. Extract `aiServiceResponse.mainResult` for the core processing.
|
||||
4. **Must return an object that includes `aiServiceResponse.telemetryData`**.
|
||||
Example: `return { operationSpecificData: /*...*/, telemetryData: aiServiceResponse.telemetryData };`
|
||||
|
||||
- **CLI Output Handling (If Applicable)**:
|
||||
- If the core function also handles CLI output (e.g., it has an `outputFormat` parameter that can be `'text'` or `'cli'`):
|
||||
1. Check if `outputFormat === 'text'` (or `'cli'`).
|
||||
2. If so, and if `aiServiceResponse.telemetryData` is available, call `displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli')` from [`scripts/modules/ui.js`](mdc:scripts/modules/ui.js).
|
||||
- This ensures telemetry is displayed directly to CLI users after the main command output.
|
||||
|
||||
- **Example Snippet (Core Logic in `scripts/modules/task-manager/someAiAction.js`)**:
|
||||
```javascript
|
||||
import { generateObjectService } from '../ai-services-unified.js';
|
||||
import { displayAiUsageSummary } from '../ui.js';
|
||||
|
||||
async function performAiRelatedAction(params, context, outputFormat = 'text') {
|
||||
const { commandNameFromContext, /* other context vars */ } = context;
|
||||
let aiServiceResponse = null;
|
||||
|
||||
try {
|
||||
aiServiceResponse = await generateObjectService({
|
||||
// ... other parameters for AI service ...
|
||||
commandName: commandNameFromContext || 'default-action-name',
|
||||
outputType: context.mcpLog ? 'mcp' : 'cli' // Derive outputType
|
||||
});
|
||||
|
||||
const usefulAiOutput = aiServiceResponse.mainResult.object;
|
||||
// ... do work with usefulAiOutput ...
|
||||
|
||||
if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
|
||||
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
|
||||
}
|
||||
|
||||
return {
|
||||
actionData: /* results of processing */,
|
||||
telemetryData: aiServiceResponse.telemetryData
|
||||
};
|
||||
} catch (error) {
|
||||
// ... handle error ...
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Direct Function Wrappers (in `mcp-server/src/core/direct-functions/`)
|
||||
|
||||
These functions adapt core logic for the MCP server, ensuring structured responses.
|
||||
|
||||
- **Actions**:
|
||||
1. Call the corresponding core logic function.
|
||||
- Pass necessary context (e.g., `session`, `mcpLog`, `projectRoot`).
|
||||
- Provide the `commandName` (typically derived from the MCP tool name) and `outputType: 'mcp'` in the context object passed to the core function.
|
||||
- If the core function supports an `outputFormat` parameter, pass `'json'` to suppress CLI-specific UI.
|
||||
2. The core logic function returns an object (e.g., `coreResult = { actionData: ..., telemetryData: ... }`).
|
||||
3. Include `coreResult.telemetryData` as a field within the `data` object of the successful response returned by the direct function.
|
||||
|
||||
- **Example Snippet (Direct Function `someAiActionDirect.js`)**:
|
||||
```javascript
|
||||
import { performAiRelatedAction } from '../../../../scripts/modules/task-manager/someAiAction.js'; // Core function
|
||||
import { createLogWrapper } from '../../tools/utils.js'; // MCP Log wrapper
|
||||
|
||||
export async function someAiActionDirect(args, log, context = {}) {
|
||||
const { session } = context;
|
||||
// ... prepare arguments for core function from args, including args.projectRoot ...
|
||||
|
||||
try {
|
||||
const coreResult = await performAiRelatedAction(
|
||||
{ /* parameters for core function */ },
|
||||
{ // Context for core function
|
||||
session,
|
||||
mcpLog: createLogWrapper(log),
|
||||
projectRoot: args.projectRoot,
|
||||
commandNameFromContext: 'mcp_tool_some_ai_action', // Example command name
|
||||
outputType: 'mcp'
|
||||
},
|
||||
'json' // Request 'json' output format from core function
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: {
|
||||
operationSpecificData: coreResult.actionData,
|
||||
telemetryData: coreResult.telemetryData // Pass telemetry through
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
// ... error handling, return { success: false, error: ... } ...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 3. MCP Tools (in `mcp-server/src/tools/`)
|
||||
|
||||
These are the exposed endpoints for MCP clients.
|
||||
|
||||
- **Actions**:
|
||||
1. Call the corresponding direct function wrapper.
|
||||
2. The direct function returns an object structured like `{ success: true, data: { operationSpecificData: ..., telemetryData: ... } }` (or an error object).
|
||||
3. Pass this entire result object to `handleApiResult(result, log)` from [`mcp-server/src/tools/utils.js`](mdc:mcp-server/src/tools/utils.js).
|
||||
4. `handleApiResult` ensures that the `data` field from the direct function's response (which correctly includes `telemetryData`) is part of the final MCP response.
|
||||
|
||||
- **Example Snippet (MCP Tool `some_ai_action.js`)**:
|
||||
```javascript
|
||||
import { someAiActionDirect } from '../core/task-master-core.js';
|
||||
import { handleApiResult, withNormalizedProjectRoot } from './utils.js';
|
||||
// ... zod for parameters ...
|
||||
|
||||
export function registerSomeAiActionTool(server) {
|
||||
server.addTool({
|
||||
name: "some_ai_action",
|
||||
// ... description, parameters ...
|
||||
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
|
||||
try {
|
||||
const resultFromDirectFunction = await someAiActionDirect(
|
||||
{ /* args including projectRoot */ },
|
||||
log,
|
||||
{ session }
|
||||
);
|
||||
return handleApiResult(resultFromDirectFunction, log); // This passes the nested telemetryData through
|
||||
} catch (error) {
|
||||
// ... error handling ...
|
||||
}
|
||||
})
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
### 4. CLI Commands (`scripts/modules/commands.js`)
|
||||
|
||||
These define the command-line interface.
|
||||
|
||||
- **Actions**:
|
||||
1. Call the appropriate core logic function.
|
||||
2. Pass `outputFormat: 'text'` (or ensure the core function defaults to text-based output for CLI).
|
||||
3. The core logic function (as per Section 1) is responsible for calling `displayAiUsageSummary` if telemetry data is available and it's in CLI mode.
|
||||
4. The command action itself **should not** call `displayAiUsageSummary` if the core logic function already handles this. This avoids duplicate display.
|
||||
|
||||
- **Example Snippet (CLI Command in `commands.js`)**:
|
||||
```javascript
|
||||
// In scripts/modules/commands.js
|
||||
import { performAiRelatedAction } from './task-manager/someAiAction.js'; // Core function
|
||||
|
||||
programInstance
|
||||
.command('some-cli-ai-action')
|
||||
// ... .option() ...
|
||||
.action(async (options) => {
|
||||
try {
|
||||
const projectRoot = findProjectRoot() || '.'; // Example root finding
|
||||
// ... prepare parameters for core function from command options ...
|
||||
await performAiRelatedAction(
|
||||
{ /* parameters for core function */ },
|
||||
{ // Context for core function
|
||||
projectRoot,
|
||||
commandNameFromContext: 'some-cli-ai-action',
|
||||
outputType: 'cli'
|
||||
},
|
||||
'text' // Explicitly request text output format for CLI
|
||||
);
|
||||
// Core function handles displayAiUsageSummary internally for 'text' outputFormat
|
||||
} catch (error) {
|
||||
// ... error handling ...
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
## Summary Flow
|
||||
|
||||
The telemetry data flows as follows:
|
||||
|
||||
1. **[`ai-services-unified.js`](mdc:scripts/modules/ai-services-unified.js)**: Generates `telemetryData` and returns `{ mainResult, telemetryData }`.
|
||||
2. **Core Logic Function**:
|
||||
* Receives `{ mainResult, telemetryData }`.
|
||||
* Uses `mainResult`.
|
||||
* If CLI (`outputFormat: 'text'`), calls `displayAiUsageSummary(telemetryData)`.
|
||||
* Returns `{ operationSpecificData, telemetryData }`.
|
||||
3. **Direct Function Wrapper**:
|
||||
* Receives `{ operationSpecificData, telemetryData }` from core logic.
|
||||
* Returns `{ success: true, data: { operationSpecificData, telemetryData } }`.
|
||||
4. **MCP Tool**:
|
||||
* Receives direct function response.
|
||||
* `handleApiResult` ensures the final MCP response to the client is `{ success: true, data: { operationSpecificData, telemetryData } }`.
|
||||
5. **CLI Command**:
|
||||
* Calls core logic with `outputFormat: 'text'`. Display is handled by core logic.
|
||||
|
||||
This pattern ensures telemetry is captured and appropriately handled/exposed across all interaction modes.
|
||||
@@ -1,31 +1,32 @@
|
||||
{
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 100000,
|
||||
"temperature": 0.2
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
"logLevel": "info",
|
||||
"debug": false,
|
||||
"defaultSubtasks": 5,
|
||||
"defaultPriority": "medium",
|
||||
"projectName": "Taskmaster",
|
||||
"ollamaBaseUrl": "http://localhost:11434/api",
|
||||
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
|
||||
}
|
||||
}
|
||||
"models": {
|
||||
"main": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 100000,
|
||||
"temperature": 0.2
|
||||
},
|
||||
"research": {
|
||||
"provider": "perplexity",
|
||||
"modelId": "sonar-pro",
|
||||
"maxTokens": 8700,
|
||||
"temperature": 0.1
|
||||
},
|
||||
"fallback": {
|
||||
"provider": "anthropic",
|
||||
"modelId": "claude-3-7-sonnet-20250219",
|
||||
"maxTokens": 120000,
|
||||
"temperature": 0.2
|
||||
}
|
||||
},
|
||||
"global": {
|
||||
"userId": "1234567890",
|
||||
"logLevel": "info",
|
||||
"debug": false,
|
||||
"defaultSubtasks": 5,
|
||||
"defaultPriority": "medium",
|
||||
"projectName": "Taskmaster",
|
||||
"ollamaBaseUrl": "http://localhost:11434/api",
|
||||
"azureOpenaiBaseUrl": "https://your-endpoint.openai.azure.com/"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -94,6 +94,7 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
|
||||
let manualTaskData = null;
|
||||
let newTaskId;
|
||||
let telemetryData;
|
||||
|
||||
if (isManualCreation) {
|
||||
// Create manual task data object
|
||||
@@ -109,7 +110,7 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
);
|
||||
|
||||
// Call the addTask function with manual task data
|
||||
newTaskId = await addTask(
|
||||
const result = await addTask(
|
||||
tasksPath,
|
||||
null, // prompt is null for manual creation
|
||||
taskDependencies,
|
||||
@@ -117,13 +118,17 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
{
|
||||
session,
|
||||
mcpLog,
|
||||
projectRoot
|
||||
projectRoot,
|
||||
commandName: 'add-task',
|
||||
outputType: 'mcp'
|
||||
},
|
||||
'json', // outputFormat
|
||||
manualTaskData, // Pass the manual task data
|
||||
false, // research flag is false for manual creation
|
||||
projectRoot // Pass projectRoot
|
||||
);
|
||||
newTaskId = result.newTaskId;
|
||||
telemetryData = result.telemetryData;
|
||||
} else {
|
||||
// AI-driven task creation
|
||||
log.info(
|
||||
@@ -131,7 +136,7 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
);
|
||||
|
||||
// Call the addTask function, passing the research flag
|
||||
newTaskId = await addTask(
|
||||
const result = await addTask(
|
||||
tasksPath,
|
||||
prompt, // Use the prompt for AI creation
|
||||
taskDependencies,
|
||||
@@ -139,12 +144,16 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
{
|
||||
session,
|
||||
mcpLog,
|
||||
projectRoot
|
||||
projectRoot,
|
||||
commandName: 'add-task',
|
||||
outputType: 'mcp'
|
||||
},
|
||||
'json', // outputFormat
|
||||
null, // manualTaskData is null for AI creation
|
||||
research // Pass the research flag
|
||||
);
|
||||
newTaskId = result.newTaskId;
|
||||
telemetryData = result.telemetryData;
|
||||
}
|
||||
|
||||
// Restore normal logging
|
||||
@@ -154,7 +163,8 @@ export async function addTaskDirect(args, log, context = {}) {
|
||||
success: true,
|
||||
data: {
|
||||
taskId: newTaskId,
|
||||
message: `Successfully added new task #${newTaskId}`
|
||||
message: `Successfully added new task #${newTaskId}`,
|
||||
telemetryData: telemetryData
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
|
||||
@@ -14,9 +14,11 @@ import {
|
||||
getResearchModelId,
|
||||
getFallbackProvider,
|
||||
getFallbackModelId,
|
||||
getParametersForRole
|
||||
getParametersForRole,
|
||||
getUserId,
|
||||
MODEL_MAP
|
||||
} from './config-manager.js';
|
||||
import { log, resolveEnvVariable, findProjectRoot } from './utils.js';
|
||||
import { log, resolveEnvVariable, isSilentMode } from './utils.js';
|
||||
|
||||
import * as anthropic from '../../src/ai-providers/anthropic.js';
|
||||
import * as perplexity from '../../src/ai-providers/perplexity.js';
|
||||
@@ -26,6 +28,36 @@ import * as xai from '../../src/ai-providers/xai.js';
|
||||
import * as openrouter from '../../src/ai-providers/openrouter.js';
|
||||
// TODO: Import other provider modules when implemented (ollama, etc.)
|
||||
|
||||
// Helper function to get cost for a specific model
|
||||
function _getCostForModel(providerName, modelId) {
|
||||
if (!MODEL_MAP || !MODEL_MAP[providerName]) {
|
||||
log(
|
||||
'warn',
|
||||
`Provider "${providerName}" not found in MODEL_MAP. Cannot determine cost for model ${modelId}.`
|
||||
);
|
||||
return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
|
||||
}
|
||||
|
||||
const modelData = MODEL_MAP[providerName].find((m) => m.id === modelId);
|
||||
|
||||
if (!modelData || !modelData.cost_per_1m_tokens) {
|
||||
log(
|
||||
'debug',
|
||||
`Cost data not found for model "${modelId}" under provider "${providerName}". Assuming zero cost.`
|
||||
);
|
||||
return { inputCost: 0, outputCost: 0, currency: 'USD' }; // Default to zero cost
|
||||
}
|
||||
|
||||
// Ensure currency is part of the returned object, defaulting if not present
|
||||
const currency = modelData.cost_per_1m_tokens.currency || 'USD';
|
||||
|
||||
return {
|
||||
inputCost: modelData.cost_per_1m_tokens.input || 0,
|
||||
outputCost: modelData.cost_per_1m_tokens.output || 0,
|
||||
currency: currency
|
||||
};
|
||||
}
|
||||
|
||||
// --- Provider Function Map ---
|
||||
// Maps provider names (lowercase) to their respective service functions
|
||||
const PROVIDER_FUNCTIONS = {
|
||||
@@ -242,7 +274,15 @@ async function _attemptProviderCallWithRetries(
|
||||
* Base logic for unified service functions.
|
||||
* @param {string} serviceType - Type of service ('generateText', 'streamText', 'generateObject').
|
||||
* @param {object} params - Original parameters passed to the service function.
|
||||
* @param {string} params.role - The initial client role.
|
||||
* @param {object} [params.session=null] - Optional MCP session object.
|
||||
* @param {string} [params.projectRoot] - Optional project root path.
|
||||
* @param {string} params.commandName - Name of the command invoking the service.
|
||||
* @param {string} params.outputType - 'cli' or 'mcp'.
|
||||
* @param {string} [params.systemPrompt] - Optional system prompt.
|
||||
* @param {string} [params.prompt] - The prompt for the AI.
|
||||
* @param {string} [params.schema] - The Zod schema for the expected object.
|
||||
* @param {string} [params.objectName] - Name for object/tool.
|
||||
* @returns {Promise<any>} Result from the underlying provider call.
|
||||
*/
|
||||
async function _unifiedServiceRunner(serviceType, params) {
|
||||
@@ -254,15 +294,23 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
prompt,
|
||||
schema,
|
||||
objectName,
|
||||
commandName,
|
||||
outputType,
|
||||
...restApiParams
|
||||
} = params;
|
||||
log('info', `${serviceType}Service called`, {
|
||||
role: initialRole,
|
||||
commandName,
|
||||
outputType,
|
||||
projectRoot
|
||||
});
|
||||
|
||||
// Determine the effective project root (passed in or detected)
|
||||
const effectiveProjectRoot = projectRoot || findProjectRoot();
|
||||
// Determine the effective project root (passed in or detected if needed by config getters)
|
||||
const { findProjectRoot: detectProjectRoot } = await import('./utils.js'); // Dynamically import if needed
|
||||
const effectiveProjectRoot = projectRoot || detectProjectRoot();
|
||||
|
||||
// Get userId from config - ensure effectiveProjectRoot is passed
|
||||
const userId = getUserId(effectiveProjectRoot);
|
||||
|
||||
let sequence;
|
||||
if (initialRole === 'main') {
|
||||
@@ -285,6 +333,8 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
|
||||
for (const currentRole of sequence) {
|
||||
let providerName, modelId, apiKey, roleParams, providerFnSet, providerApiFn;
|
||||
let aiCallResult;
|
||||
let telemetryData = null;
|
||||
|
||||
try {
|
||||
log('info', `New AI service call with role: ${currentRole}`);
|
||||
@@ -406,7 +456,7 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
};
|
||||
|
||||
// 6. Attempt the call with retries
|
||||
const result = await _attemptProviderCallWithRetries(
|
||||
aiCallResult = await _attemptProviderCallWithRetries(
|
||||
providerApiFn,
|
||||
callParams,
|
||||
providerName,
|
||||
@@ -416,7 +466,36 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
|
||||
log('info', `${serviceType}Service succeeded using role: ${currentRole}`);
|
||||
|
||||
return result;
|
||||
// --- Log Telemetry & Capture Data ---
|
||||
// TODO: Add telemetry logic gate in case user doesn't accept telemetry
|
||||
if (userId && aiCallResult && aiCallResult.usage) {
|
||||
try {
|
||||
telemetryData = await logAiUsage({
|
||||
userId,
|
||||
commandName,
|
||||
providerName,
|
||||
modelId,
|
||||
inputTokens: aiCallResult.usage.inputTokens,
|
||||
outputTokens: aiCallResult.usage.outputTokens,
|
||||
outputType
|
||||
});
|
||||
} catch (telemetryError) {
|
||||
// logAiUsage already logs its own errors and returns null on failure
|
||||
// No need to log again here, telemetryData will remain null
|
||||
}
|
||||
} else if (userId && aiCallResult && !aiCallResult.usage) {
|
||||
log(
|
||||
'warn',
|
||||
`Cannot log telemetry for ${commandName} (${providerName}/${modelId}): AI result missing 'usage' data.`
|
||||
);
|
||||
}
|
||||
// --- End Log Telemetry ---
|
||||
|
||||
// Return a composite object including the main AI result and telemetry data
|
||||
return {
|
||||
mainResult: aiCallResult,
|
||||
telemetryData: telemetryData
|
||||
};
|
||||
} catch (error) {
|
||||
const cleanMessage = _extractErrorMessage(error);
|
||||
log(
|
||||
@@ -461,11 +540,16 @@ async function _unifiedServiceRunner(serviceType, params) {
|
||||
* @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
|
||||
* @param {string} params.prompt - The prompt for the AI.
|
||||
* @param {string} [params.systemPrompt] - Optional system prompt.
|
||||
* // Other specific generateText params can be included here.
|
||||
* @returns {Promise<string>} The generated text content.
|
||||
* @param {string} params.commandName - Name of the command invoking the service.
|
||||
* @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
|
||||
* @returns {Promise<object>} Result object containing generated text and usage data.
|
||||
*/
|
||||
async function generateTextService(params) {
|
||||
return _unifiedServiceRunner('generateText', params);
|
||||
// Ensure default outputType if not provided
|
||||
const defaults = { outputType: 'cli' };
|
||||
const combinedParams = { ...defaults, ...params };
|
||||
// TODO: Validate commandName exists?
|
||||
return _unifiedServiceRunner('generateText', combinedParams);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -478,11 +562,18 @@ async function generateTextService(params) {
|
||||
* @param {string} [params.projectRoot=null] - Optional project root path for .env fallback.
|
||||
* @param {string} params.prompt - The prompt for the AI.
|
||||
* @param {string} [params.systemPrompt] - Optional system prompt.
|
||||
* // Other specific streamText params can be included here.
|
||||
* @returns {Promise<ReadableStream<string>>} A readable stream of text deltas.
|
||||
* @param {string} params.commandName - Name of the command invoking the service.
|
||||
* @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
|
||||
* @returns {Promise<object>} Result object containing the stream and usage data.
|
||||
*/
|
||||
async function streamTextService(params) {
|
||||
return _unifiedServiceRunner('streamText', params);
|
||||
const defaults = { outputType: 'cli' };
|
||||
const combinedParams = { ...defaults, ...params };
|
||||
// TODO: Validate commandName exists?
|
||||
// NOTE: Telemetry for streaming might be tricky as usage data often comes at the end.
|
||||
// The current implementation logs *after* the stream is returned.
|
||||
// We might need to adjust how usage is captured/logged for streams.
|
||||
return _unifiedServiceRunner('streamText', combinedParams);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -498,15 +589,87 @@ async function streamTextService(params) {
|
||||
* @param {string} [params.systemPrompt] - Optional system prompt.
|
||||
* @param {string} [params.objectName='generated_object'] - Name for object/tool.
|
||||
* @param {number} [params.maxRetries=3] - Max retries for object generation.
|
||||
* @returns {Promise<object>} The generated object matching the schema.
|
||||
* @param {string} params.commandName - Name of the command invoking the service.
|
||||
* @param {string} [params.outputType='cli'] - 'cli' or 'mcp'.
|
||||
* @returns {Promise<object>} Result object containing the generated object and usage data.
|
||||
*/
|
||||
async function generateObjectService(params) {
|
||||
const defaults = {
|
||||
objectName: 'generated_object',
|
||||
maxRetries: 3
|
||||
maxRetries: 3,
|
||||
outputType: 'cli'
|
||||
};
|
||||
const combinedParams = { ...defaults, ...params };
|
||||
// TODO: Validate commandName exists?
|
||||
return _unifiedServiceRunner('generateObject', combinedParams);
|
||||
}
|
||||
|
||||
export { generateTextService, streamTextService, generateObjectService };
|
||||
// --- Telemetry Function ---
|
||||
/**
|
||||
* Logs AI usage telemetry data.
|
||||
* For now, it just logs to the console. Sending will be implemented later.
|
||||
* @param {object} params - Telemetry parameters.
|
||||
* @param {string} params.userId - Unique user identifier.
|
||||
* @param {string} params.commandName - The command that triggered the AI call.
|
||||
* @param {string} params.providerName - The AI provider used (e.g., 'openai').
|
||||
* @param {string} params.modelId - The specific AI model ID used.
|
||||
* @param {number} params.inputTokens - Number of input tokens.
|
||||
* @param {number} params.outputTokens - Number of output tokens.
|
||||
*/
|
||||
async function logAiUsage({
|
||||
userId,
|
||||
commandName,
|
||||
providerName,
|
||||
modelId,
|
||||
inputTokens,
|
||||
outputTokens,
|
||||
outputType
|
||||
}) {
|
||||
try {
|
||||
const isMCP = outputType === 'mcp';
|
||||
const timestamp = new Date().toISOString();
|
||||
const totalTokens = (inputTokens || 0) + (outputTokens || 0);
|
||||
|
||||
// Destructure currency along with costs
|
||||
const { inputCost, outputCost, currency } = _getCostForModel(
|
||||
providerName,
|
||||
modelId
|
||||
);
|
||||
|
||||
const totalCost =
|
||||
((inputTokens || 0) / 1_000_000) * inputCost +
|
||||
((outputTokens || 0) / 1_000_000) * outputCost;
|
||||
|
||||
const telemetryData = {
|
||||
timestamp,
|
||||
userId,
|
||||
commandName,
|
||||
modelUsed: modelId, // Consistent field name from requirements
|
||||
providerName, // Keep provider name for context
|
||||
inputTokens: inputTokens || 0,
|
||||
outputTokens: outputTokens || 0,
|
||||
totalTokens,
|
||||
totalCost: parseFloat(totalCost.toFixed(6)),
|
||||
currency // Add currency to the telemetry data
|
||||
};
|
||||
|
||||
log('info', 'AI Usage Telemetry:', telemetryData);
|
||||
|
||||
// TODO (Subtask 77.2): Send telemetryData securely to the external endpoint.
|
||||
|
||||
return telemetryData;
|
||||
} catch (error) {
|
||||
log('error', `Failed to log AI usage telemetry: ${error.message}`, {
|
||||
error
|
||||
});
|
||||
// Don't re-throw; telemetry failure shouldn't block core functionality.
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
export {
|
||||
generateTextService,
|
||||
streamTextService,
|
||||
generateObjectService,
|
||||
logAiUsage
|
||||
};
|
||||
|
||||
@@ -62,7 +62,8 @@ import {
|
||||
stopLoadingIndicator,
|
||||
displayModelConfiguration,
|
||||
displayAvailableModels,
|
||||
displayApiKeyStatus
|
||||
displayApiKeyStatus,
|
||||
displayAiUsageSummary
|
||||
} from './ui.js';
|
||||
|
||||
import { initializeProject } from '../init.js';
|
||||
@@ -1263,7 +1264,7 @@ function registerCommands(programInstance) {
|
||||
// add-task command
|
||||
programInstance
|
||||
.command('add-task')
|
||||
.description('Add a new task using AI or manual input')
|
||||
.description('Add a new task using AI, optionally providing manual details')
|
||||
.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
|
||||
.option(
|
||||
'-p, --prompt <prompt>',
|
||||
@@ -1308,74 +1309,70 @@ function registerCommands(programInstance) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const tasksPath =
|
||||
options.file ||
|
||||
path.join(findProjectRoot() || '.', 'tasks', 'tasks.json') || // Ensure tasksPath is also relative to a found root or current dir
|
||||
'tasks/tasks.json';
|
||||
|
||||
// Correctly determine projectRoot
|
||||
const projectRoot = findProjectRoot();
|
||||
|
||||
let manualTaskData = null;
|
||||
if (isManualCreation) {
|
||||
manualTaskData = {
|
||||
title: options.title,
|
||||
description: options.description,
|
||||
details: options.details || '',
|
||||
testStrategy: options.testStrategy || ''
|
||||
};
|
||||
// Restore specific logging for manual creation
|
||||
console.log(
|
||||
chalk.blue(`Creating task manually with title: "${options.title}"`)
|
||||
);
|
||||
} else {
|
||||
// Restore specific logging for AI creation
|
||||
console.log(
|
||||
chalk.blue(`Creating task with AI using prompt: "${options.prompt}"`)
|
||||
);
|
||||
}
|
||||
|
||||
// Log dependencies and priority if provided (restored)
|
||||
const dependenciesArray = options.dependencies
|
||||
? options.dependencies.split(',').map((id) => id.trim())
|
||||
: [];
|
||||
if (dependenciesArray.length > 0) {
|
||||
console.log(
|
||||
chalk.blue(`Dependencies: [${dependenciesArray.join(', ')}]`)
|
||||
);
|
||||
}
|
||||
if (options.priority) {
|
||||
console.log(chalk.blue(`Priority: ${options.priority}`));
|
||||
}
|
||||
|
||||
const context = {
|
||||
projectRoot,
|
||||
commandName: 'add-task',
|
||||
outputType: 'cli'
|
||||
};
|
||||
|
||||
try {
|
||||
// Prepare dependencies if provided
|
||||
let dependencies = [];
|
||||
if (options.dependencies) {
|
||||
dependencies = options.dependencies
|
||||
.split(',')
|
||||
.map((id) => parseInt(id.trim(), 10));
|
||||
}
|
||||
|
||||
// Create manual task data if title and description are provided
|
||||
let manualTaskData = null;
|
||||
if (isManualCreation) {
|
||||
manualTaskData = {
|
||||
title: options.title,
|
||||
description: options.description,
|
||||
details: options.details || '',
|
||||
testStrategy: options.testStrategy || ''
|
||||
};
|
||||
|
||||
console.log(
|
||||
chalk.blue(`Creating task manually with title: "${options.title}"`)
|
||||
);
|
||||
if (dependencies.length > 0) {
|
||||
console.log(
|
||||
chalk.blue(`Dependencies: [${dependencies.join(', ')}]`)
|
||||
);
|
||||
}
|
||||
if (options.priority) {
|
||||
console.log(chalk.blue(`Priority: ${options.priority}`));
|
||||
}
|
||||
} else {
|
||||
console.log(
|
||||
chalk.blue(
|
||||
`Creating task with AI using prompt: "${options.prompt}"`
|
||||
)
|
||||
);
|
||||
if (dependencies.length > 0) {
|
||||
console.log(
|
||||
chalk.blue(`Dependencies: [${dependencies.join(', ')}]`)
|
||||
);
|
||||
}
|
||||
if (options.priority) {
|
||||
console.log(chalk.blue(`Priority: ${options.priority}`));
|
||||
}
|
||||
}
|
||||
|
||||
// Pass mcpLog and session for MCP mode
|
||||
const newTaskId = await addTask(
|
||||
options.file,
|
||||
options.prompt, // Pass prompt (will be null/undefined if not provided)
|
||||
dependencies,
|
||||
const { newTaskId, telemetryData } = await addTask(
|
||||
tasksPath,
|
||||
options.prompt,
|
||||
dependenciesArray,
|
||||
options.priority,
|
||||
{
|
||||
// For CLI, session context isn't directly available like MCP
|
||||
// We don't need to pass session here for CLI API key resolution
|
||||
// as dotenv loads .env, and utils.resolveEnvVariable checks process.env
|
||||
},
|
||||
'text', // outputFormat
|
||||
manualTaskData, // Pass the potentially created manualTaskData object
|
||||
options.research || false // Pass the research flag value
|
||||
context,
|
||||
'text',
|
||||
manualTaskData,
|
||||
options.research
|
||||
);
|
||||
|
||||
console.log(chalk.green(`✓ Added new task #${newTaskId}`));
|
||||
console.log(chalk.gray('Next: Complete this task or add more tasks'));
|
||||
// addTask handles detailed CLI success logging AND telemetry display when outputFormat is 'text'
|
||||
// No need to call displayAiUsageSummary here anymore.
|
||||
} catch (error) {
|
||||
console.error(chalk.red(`Error adding task: ${error.message}`));
|
||||
if (error.stack && getDebugFlag()) {
|
||||
console.error(error.stack);
|
||||
if (error.details) {
|
||||
console.error(chalk.red(error.details));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
@@ -669,6 +669,16 @@ function isConfigFilePresent(explicitRoot = null) {
|
||||
return fs.existsSync(configPath);
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the user ID from the configuration.
|
||||
* @param {string|null} explicitRoot - Optional explicit path to the project root.
|
||||
* @returns {string|null} The user ID or null if not found.
|
||||
*/
|
||||
function getUserId(explicitRoot = null) {
|
||||
const config = getConfig(explicitRoot);
|
||||
return config?.global?.userId || null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets a list of all provider names defined in the MODEL_MAP.
|
||||
* @returns {string[]} An array of provider names.
|
||||
@@ -714,7 +724,7 @@ export {
|
||||
getProjectName,
|
||||
getOllamaBaseUrl,
|
||||
getParametersForRole,
|
||||
|
||||
getUserId,
|
||||
// API Key Checkers (still relevant)
|
||||
isApiKeySet,
|
||||
getMcpApiKeyStatus,
|
||||
|
||||
@@ -8,7 +8,8 @@ import {
|
||||
displayBanner,
|
||||
getStatusWithColor,
|
||||
startLoadingIndicator,
|
||||
stopLoadingIndicator
|
||||
stopLoadingIndicator,
|
||||
displayAiUsageSummary
|
||||
} from '../ui.js';
|
||||
import { readJSON, writeJSON, log as consoleLog, truncate } from '../utils.js';
|
||||
import { generateObjectService } from '../ai-services-unified.js';
|
||||
@@ -44,7 +45,9 @@ const AiTaskDataSchema = z.object({
|
||||
* @param {boolean} useResearch - Whether to use the research model (passed to unified service)
|
||||
* @param {Object} context - Context object containing session and potentially projectRoot
|
||||
* @param {string} [context.projectRoot] - Project root path (for MCP/env fallback)
|
||||
* @returns {number} The new task ID
|
||||
* @param {string} [context.commandName] - The name of the command being executed (for telemetry)
|
||||
* @param {string} [context.outputType] - The output type ('cli' or 'mcp', for telemetry)
|
||||
* @returns {Promise<object>} An object containing newTaskId and telemetryData
|
||||
*/
|
||||
async function addTask(
|
||||
tasksPath,
|
||||
@@ -56,7 +59,7 @@ async function addTask(
|
||||
manualTaskData = null,
|
||||
useResearch = false
|
||||
) {
|
||||
const { session, mcpLog, projectRoot } = context;
|
||||
const { session, mcpLog, projectRoot, commandName, outputType } = context;
|
||||
const isMCP = !!mcpLog;
|
||||
|
||||
// Create a consistent logFn object regardless of context
|
||||
@@ -78,6 +81,7 @@ async function addTask(
|
||||
);
|
||||
|
||||
let loadingIndicator = null;
|
||||
let aiServiceResponse = null; // To store the full response from AI service
|
||||
|
||||
// Create custom reporter that checks for MCP log
|
||||
const report = (message, level = 'info') => {
|
||||
@@ -229,29 +233,40 @@ async function addTask(
|
||||
// Start the loading indicator - only for text mode
|
||||
if (outputFormat === 'text') {
|
||||
loadingIndicator = startLoadingIndicator(
|
||||
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI...`
|
||||
`Generating new task with ${useResearch ? 'Research' : 'Main'} AI..\n`
|
||||
);
|
||||
}
|
||||
|
||||
try {
|
||||
// Determine the service role based on the useResearch flag
|
||||
const serviceRole = useResearch ? 'research' : 'main';
|
||||
|
||||
report('DEBUG: Calling generateObjectService...', 'debug');
|
||||
// Call the unified AI service
|
||||
const aiGeneratedTaskData = await generateObjectService({
|
||||
role: serviceRole, // <-- Use the determined role
|
||||
session: session, // Pass session for API key resolution
|
||||
projectRoot: projectRoot, // <<< Pass projectRoot here
|
||||
schema: AiTaskDataSchema, // Pass the Zod schema
|
||||
objectName: 'newTaskData', // Name for the object
|
||||
|
||||
aiServiceResponse = await generateObjectService({
|
||||
// Capture the full response
|
||||
role: serviceRole,
|
||||
session: session,
|
||||
projectRoot: projectRoot,
|
||||
schema: AiTaskDataSchema,
|
||||
objectName: 'newTaskData',
|
||||
systemPrompt: systemPrompt,
|
||||
prompt: userPrompt
|
||||
prompt: userPrompt,
|
||||
commandName: commandName || 'add-task', // Use passed commandName or default
|
||||
outputType: outputType || (isMCP ? 'mcp' : 'cli') // Use passed outputType or derive
|
||||
});
|
||||
report('DEBUG: generateObjectService returned successfully.', 'debug');
|
||||
|
||||
if (
|
||||
!aiServiceResponse ||
|
||||
!aiServiceResponse.mainResult ||
|
||||
!aiServiceResponse.mainResult.object
|
||||
) {
|
||||
throw new Error(
|
||||
'AI service did not return the expected object structure.'
|
||||
);
|
||||
}
|
||||
taskData = aiServiceResponse.mainResult.object; // Extract the AI-generated task data
|
||||
|
||||
report('Successfully generated task data from AI.', 'success');
|
||||
taskData = aiGeneratedTaskData; // Assign the validated object
|
||||
} catch (error) {
|
||||
report(
|
||||
`DEBUG: generateObjectService caught error: ${error.message}`,
|
||||
@@ -362,11 +377,25 @@ async function addTask(
|
||||
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
|
||||
)
|
||||
);
|
||||
|
||||
// Display AI Usage Summary if telemetryData is available
|
||||
if (
|
||||
aiServiceResponse &&
|
||||
aiServiceResponse.telemetryData &&
|
||||
(outputType === 'cli' || outputType === 'text')
|
||||
) {
|
||||
displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
|
||||
}
|
||||
}
|
||||
|
||||
// Return the new task ID
|
||||
report(`DEBUG: Returning new task ID: ${newTaskId}`, 'debug');
|
||||
return newTaskId;
|
||||
report(
|
||||
`DEBUG: Returning new task ID: ${newTaskId} and telemetry.`,
|
||||
'debug'
|
||||
);
|
||||
return {
|
||||
newTaskId: newTaskId,
|
||||
telemetryData: aiServiceResponse ? aiServiceResponse.telemetryData : null
|
||||
};
|
||||
} catch (error) {
|
||||
// Stop any loading indicator on error
|
||||
if (loadingIndicator) {
|
||||
|
||||
@@ -3,7 +3,6 @@ import path from 'path';
|
||||
import chalk from 'chalk';
|
||||
import boxen from 'boxen';
|
||||
import Table from 'cli-table3';
|
||||
import { z } from 'zod';
|
||||
|
||||
import {
|
||||
getStatusWithColor,
|
||||
@@ -17,10 +16,7 @@ import {
|
||||
truncate,
|
||||
isSilentMode
|
||||
} from '../utils.js';
|
||||
import {
|
||||
generateObjectService,
|
||||
generateTextService
|
||||
} from '../ai-services-unified.js';
|
||||
import { generateTextService } from '../ai-services-unified.js';
|
||||
import { getDebugFlag } from '../config-manager.js';
|
||||
import generateTaskFiles from './generate-task-files.js';
|
||||
|
||||
@@ -64,7 +60,6 @@ async function updateSubtaskById(
|
||||
try {
|
||||
report('info', `Updating subtask ${subtaskId} with prompt: "${prompt}"`);
|
||||
|
||||
// Validate subtask ID format
|
||||
if (
|
||||
!subtaskId ||
|
||||
typeof subtaskId !== 'string' ||
|
||||
@@ -75,19 +70,16 @@ async function updateSubtaskById(
|
||||
);
|
||||
}
|
||||
|
||||
// Validate prompt
|
||||
if (!prompt || typeof prompt !== 'string' || prompt.trim() === '') {
|
||||
throw new Error(
|
||||
'Prompt cannot be empty. Please provide context for the subtask update.'
|
||||
);
|
||||
}
|
||||
|
||||
// Validate tasks file exists
|
||||
if (!fs.existsSync(tasksPath)) {
|
||||
throw new Error(`Tasks file not found at path: ${tasksPath}`);
|
||||
}
|
||||
|
||||
// Read the tasks file
|
||||
const data = readJSON(tasksPath);
|
||||
if (!data || !data.tasks) {
|
||||
throw new Error(
|
||||
@@ -95,7 +87,6 @@ async function updateSubtaskById(
|
||||
);
|
||||
}
|
||||
|
||||
// Parse parent and subtask IDs
|
||||
const [parentIdStr, subtaskIdStr] = subtaskId.split('.');
|
||||
const parentId = parseInt(parentIdStr, 10);
|
||||
const subtaskIdNum = parseInt(subtaskIdStr, 10);
|
||||
@@ -111,7 +102,6 @@ async function updateSubtaskById(
|
||||
);
|
||||
}
|
||||
|
||||
// Find the parent task
|
||||
const parentTask = data.tasks.find((task) => task.id === parentId);
|
||||
if (!parentTask) {
|
||||
throw new Error(
|
||||
@@ -119,7 +109,6 @@ async function updateSubtaskById(
|
||||
);
|
||||
}
|
||||
|
||||
// Find the subtask
|
||||
if (!parentTask.subtasks || !Array.isArray(parentTask.subtasks)) {
|
||||
throw new Error(`Parent task ${parentId} has no subtasks.`);
|
||||
}
|
||||
@@ -135,20 +124,7 @@ async function updateSubtaskById(
|
||||
|
||||
const subtask = parentTask.subtasks[subtaskIndex];
|
||||
|
||||
const subtaskSchema = z.object({
|
||||
id: z.number().int().positive(),
|
||||
title: z.string(),
|
||||
description: z.string().optional(),
|
||||
status: z.string(),
|
||||
dependencies: z.array(z.union([z.string(), z.number()])).optional(),
|
||||
priority: z.string().optional(),
|
||||
details: z.string().optional(),
|
||||
testStrategy: z.string().optional()
|
||||
});
|
||||
|
||||
// Only show UI elements for text output (CLI)
|
||||
if (outputFormat === 'text') {
|
||||
// Show the subtask that will be updated
|
||||
const table = new Table({
|
||||
head: [
|
||||
chalk.cyan.bold('ID'),
|
||||
@@ -157,13 +133,11 @@ async function updateSubtaskById(
|
||||
],
|
||||
colWidths: [10, 55, 10]
|
||||
});
|
||||
|
||||
table.push([
|
||||
subtaskId,
|
||||
truncate(subtask.title, 52),
|
||||
getStatusWithColor(subtask.status)
|
||||
]);
|
||||
|
||||
console.log(
|
||||
boxen(chalk.white.bold(`Updating Subtask #${subtaskId}`), {
|
||||
padding: 1,
|
||||
@@ -172,10 +146,7 @@ async function updateSubtaskById(
|
||||
margin: { top: 1, bottom: 0 }
|
||||
})
|
||||
);
|
||||
|
||||
console.log(table.toString());
|
||||
|
||||
// Start the loading indicator - only for text output
|
||||
loadingIndicator = startLoadingIndicator(
|
||||
useResearch
|
||||
? 'Updating subtask with research...'
|
||||
@@ -183,15 +154,13 @@ async function updateSubtaskById(
|
||||
);
|
||||
}
|
||||
|
||||
let parsedAIResponse;
|
||||
let generatedContentString = ''; // Initialize to empty string
|
||||
let newlyAddedSnippet = ''; // <--- ADD THIS LINE: Variable to store the snippet for CLI display
|
||||
try {
|
||||
// --- GET PARENT & SIBLING CONTEXT ---
|
||||
const parentContext = {
|
||||
id: parentTask.id,
|
||||
title: parentTask.title
|
||||
// Avoid sending full parent description/details unless necessary
|
||||
};
|
||||
|
||||
const prevSubtask =
|
||||
subtaskIndex > 0
|
||||
? {
|
||||
@@ -200,7 +169,6 @@ async function updateSubtaskById(
|
||||
status: parentTask.subtasks[subtaskIndex - 1].status
|
||||
}
|
||||
: null;
|
||||
|
||||
const nextSubtask =
|
||||
subtaskIndex < parentTask.subtasks.length - 1
|
||||
? {
|
||||
@@ -214,45 +182,61 @@ async function updateSubtaskById(
|
||||
Parent Task: ${JSON.stringify(parentContext)}
|
||||
${prevSubtask ? `Previous Subtask: ${JSON.stringify(prevSubtask)}` : ''}
|
||||
${nextSubtask ? `Next Subtask: ${JSON.stringify(nextSubtask)}` : ''}
|
||||
Current Subtask Details (for context only):\n${subtask.details || '(No existing details)'}
|
||||
`;
|
||||
|
||||
const systemPrompt = `You are an AI assistant updating a parent task's subtask. This subtask will be part of a larger parent task and will be used to direct AI agents to complete the subtask. Your goal is to GENERATE new, relevant information based on the user's request (which may be high-level, mid-level or low-level) and APPEND it to the existing subtask 'details' field, wrapped in specific XML-like tags with an ISO 8601 timestamp. Intelligently determine the level of detail to include based on the user's request. Some requests are meant simply to update the subtask with some mid-implementation details, while others are meant to update the subtask with a detailed plan or strategy.
|
||||
const systemPrompt = `You are an AI assistant helping to update a subtask. You will be provided with the subtask's existing details, context about its parent and sibling tasks, and a user request string.
|
||||
|
||||
Context Provided:
|
||||
- The current subtask object.
|
||||
- Basic info about the parent task (ID, title).
|
||||
- Basic info about the immediately preceding subtask (ID, title, status), if it exists.
|
||||
- Basic info about the immediately succeeding subtask (ID, title, status), if it exists.
|
||||
- A user request string.
|
||||
Your Goal: Based *only* on the user's request and all the provided context (including existing details if relevant to the request), GENERATE the new text content that should be added to the subtask's details.
|
||||
Focus *only* on generating the substance of the update.
|
||||
|
||||
Guidelines:
|
||||
1. Analyze the user request considering the provided subtask details AND the context of the parent and sibling tasks.
|
||||
2. GENERATE new, relevant text content that should be added to the 'details' field. Focus *only* on the substance of the update based on the user request and context. Do NOT add timestamps or any special formatting yourself. Avoid over-engineering the details, provide .
|
||||
3. Update the 'details' field in the subtask object with the GENERATED text content. It's okay if this overwrites previous details in the object you return, as the calling code will handle the final appending.
|
||||
4. Return the *entire* updated subtask object (with your generated content in the 'details' field) as a valid JSON object conforming to the provided schema. Do NOT return explanations or markdown formatting.`;
|
||||
Output Requirements:
|
||||
1. Return *only* the newly generated text content as a plain string. Do NOT return a JSON object or any other structured data.
|
||||
2. Your string response should NOT include any of the subtask's original details, unless the user's request explicitly asks to rephrase, summarize, or directly modify existing text.
|
||||
3. Do NOT include any timestamps, XML-like tags, markdown, or any other special formatting in your string response.
|
||||
4. Ensure the generated text is concise yet complete for the update based on the user request. Avoid conversational fillers or explanations about what you are doing (e.g., do not start with "Okay, here's the update...").`;
|
||||
|
||||
const subtaskDataString = JSON.stringify(subtask, null, 2);
|
||||
// Updated user prompt including context
|
||||
const userPrompt = `Task Context:\n${contextString}\nCurrent Subtask:\n${subtaskDataString}\n\nUser Request: "${prompt}"\n\nPlease GENERATE new, relevant text content for the 'details' field based on the user request and the provided context. Return the entire updated subtask object as a valid JSON object matching the schema, with the newly generated text placed in the 'details' field.`;
|
||||
// --- END UPDATED PROMPTS ---
|
||||
// Pass the existing subtask.details in the user prompt for the AI's context.
|
||||
const userPrompt = `Task Context:\n${contextString}\n\nUser Request: "${prompt}"\n\nBased on the User Request and all the Task Context (including current subtask details provided above), what is the new information or text that should be appended to this subtask's details? Return ONLY this new text as a plain string.`;
|
||||
|
||||
// Call Unified AI Service using generateObjectService
|
||||
const role = useResearch ? 'research' : 'main';
|
||||
report('info', `Using AI object service with role: ${role}`);
|
||||
report('info', `Using AI text service with role: ${role}`);
|
||||
|
||||
parsedAIResponse = await generateObjectService({
|
||||
// Store the entire response object from the AI service
|
||||
const aiServiceResponse = await generateTextService({
|
||||
prompt: userPrompt,
|
||||
systemPrompt: systemPrompt,
|
||||
schema: subtaskSchema,
|
||||
objectName: 'updatedSubtask',
|
||||
role,
|
||||
session,
|
||||
projectRoot,
|
||||
maxRetries: 2
|
||||
});
|
||||
|
||||
report(
|
||||
'info',
|
||||
`>>> DEBUG: AI Service Response Object: ${JSON.stringify(aiServiceResponse, null, 2)}`
|
||||
);
|
||||
report(
|
||||
'info',
|
||||
`>>> DEBUG: Extracted generatedContentString: "${generatedContentString}"`
|
||||
);
|
||||
|
||||
// Extract the actual text content from the mainResult property
|
||||
// and ensure it's a string, defaulting to empty if not.
|
||||
if (
|
||||
aiServiceResponse &&
|
||||
aiServiceResponse.mainResult &&
|
||||
typeof aiServiceResponse.mainResult.text === 'string'
|
||||
) {
|
||||
generatedContentString = aiServiceResponse.mainResult.text;
|
||||
} else {
|
||||
generatedContentString = ''; // Default to empty if mainResult.text is not a string or the path is invalid
|
||||
}
|
||||
// The telemetryData would be in aiServiceResponse.telemetryData if needed elsewhere
|
||||
|
||||
report(
|
||||
'success',
|
||||
'Successfully received object response from AI service'
|
||||
'Successfully received response object from AI service' // Log message updated for clarity
|
||||
);
|
||||
|
||||
if (outputFormat === 'text' && loadingIndicator) {
|
||||
@@ -260,14 +244,21 @@ Guidelines:
|
||||
loadingIndicator = null;
|
||||
}
|
||||
|
||||
if (!parsedAIResponse || typeof parsedAIResponse !== 'object') {
|
||||
throw new Error('AI did not return a valid object.');
|
||||
// This check now correctly validates the extracted string
|
||||
if (typeof generatedContentString !== 'string') {
|
||||
report(
|
||||
'warn',
|
||||
'AI mainResult was not a valid text string. Treating as empty.'
|
||||
);
|
||||
generatedContentString = ''; // Ensure it's a string for trim() later
|
||||
} else if (generatedContentString.trim() !== '') {
|
||||
report(
|
||||
'success',
|
||||
`Successfully extracted text from AI response using role: ${role}.`
|
||||
);
|
||||
}
|
||||
|
||||
report(
|
||||
'success',
|
||||
`Successfully generated object using AI role: ${role}.`
|
||||
);
|
||||
// No need for an else here, as an empty string from mainResult is a valid scenario
|
||||
// that will be handled by the `if (generatedContentString && generatedContentString.trim())` later.
|
||||
} catch (aiError) {
|
||||
report('error', `AI service call failed: ${aiError.message}`);
|
||||
if (outputFormat === 'text' && loadingIndicator) {
|
||||
@@ -278,19 +269,14 @@ Guidelines:
|
||||
}
|
||||
|
||||
// --- TIMESTAMP & FORMATTING LOGIC (Handled Locally) ---
|
||||
// Extract only the generated content from the AI's response details field.
|
||||
const generatedContent = parsedAIResponse.details || ''; // Default to empty string
|
||||
if (generatedContentString && generatedContentString.trim()) {
|
||||
// Check if the string is not empty
|
||||
const timestamp = new Date().toISOString();
|
||||
const formattedBlock = `<info added on ${timestamp}>\n${generatedContentString.trim()}\n</info added on ${timestamp}>`;
|
||||
newlyAddedSnippet = formattedBlock; // <--- ADD THIS LINE: Store for display
|
||||
|
||||
if (generatedContent.trim()) {
|
||||
// Generate timestamp locally
|
||||
const timestamp = new Date().toISOString(); // <<< Local Timestamp
|
||||
|
||||
// Format the content with XML-like tags and timestamp LOCALLY
|
||||
const formattedBlock = `<info added on ${timestamp}>\n${generatedContent.trim()}\n</info added on ${timestamp}>`; // <<< Local Formatting
|
||||
|
||||
// Append the formatted block to the *original* subtask details
|
||||
subtask.details =
|
||||
(subtask.details ? subtask.details + '\n' : '') + formattedBlock; // <<< Local Appending
|
||||
(subtask.details ? subtask.details + '\n' : '') + formattedBlock;
|
||||
report(
|
||||
'info',
|
||||
'Appended timestamped, formatted block with AI-generated content to subtask.details.'
|
||||
@@ -298,70 +284,56 @@ Guidelines:
|
||||
} else {
|
||||
report(
|
||||
'warn',
|
||||
'AI response object did not contain generated content in the "details" field. Original details remain unchanged.'
|
||||
'AI response was empty or whitespace after trimming. Original details remain unchanged.'
|
||||
);
|
||||
newlyAddedSnippet = 'No new details were added by the AI.'; // <--- ADD THIS LINE: Set message for CLI
|
||||
}
|
||||
// --- END TIMESTAMP & FORMATTING LOGIC ---
|
||||
|
||||
// Get a reference to the subtask *after* its details have been updated
|
||||
const updatedSubtask = parentTask.subtasks[subtaskIndex]; // subtask === updatedSubtask now
|
||||
|
||||
const updatedSubtask = parentTask.subtasks[subtaskIndex];
|
||||
report('info', 'Updated subtask details locally after AI generation.');
|
||||
// --- END UPDATE SUBTASK ---
|
||||
|
||||
// Only show debug info for text output (CLI)
|
||||
if (outputFormat === 'text' && getDebugFlag(session)) {
|
||||
console.log(
|
||||
'>>> DEBUG: Subtask details AFTER AI update:',
|
||||
updatedSubtask.details // Use updatedSubtask
|
||||
updatedSubtask.details
|
||||
);
|
||||
}
|
||||
|
||||
// Description update logic (keeping as is for now)
|
||||
if (updatedSubtask.description) {
|
||||
// Use updatedSubtask
|
||||
if (prompt.length < 100) {
|
||||
if (outputFormat === 'text' && getDebugFlag(session)) {
|
||||
console.log(
|
||||
'>>> DEBUG: Subtask description BEFORE append:',
|
||||
updatedSubtask.description // Use updatedSubtask
|
||||
updatedSubtask.description
|
||||
);
|
||||
}
|
||||
updatedSubtask.description += ` [Updated: ${new Date().toLocaleDateString()}]`; // Use updatedSubtask
|
||||
updatedSubtask.description += ` [Updated: ${new Date().toLocaleDateString()}]`;
|
||||
if (outputFormat === 'text' && getDebugFlag(session)) {
|
||||
console.log(
|
||||
'>>> DEBUG: Subtask description AFTER append:',
|
||||
updatedSubtask.description // Use updatedSubtask
|
||||
updatedSubtask.description
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Only show debug info for text output (CLI)
|
||||
if (outputFormat === 'text' && getDebugFlag(session)) {
|
||||
console.log('>>> DEBUG: About to call writeJSON with updated data...');
|
||||
}
|
||||
|
||||
// Write the updated tasks to the file (parentTask already contains the updated subtask)
|
||||
writeJSON(tasksPath, data);
|
||||
|
||||
// Only show debug info for text output (CLI)
|
||||
if (outputFormat === 'text' && getDebugFlag(session)) {
|
||||
console.log('>>> DEBUG: writeJSON call completed.');
|
||||
}
|
||||
|
||||
report('success', `Successfully updated subtask ${subtaskId}`);
|
||||
|
||||
// Generate individual task files
|
||||
await generateTaskFiles(tasksPath, path.dirname(tasksPath));
|
||||
|
||||
// Stop indicator before final console output - only for text output (CLI)
|
||||
if (outputFormat === 'text') {
|
||||
if (loadingIndicator) {
|
||||
stopLoadingIndicator(loadingIndicator);
|
||||
loadingIndicator = null;
|
||||
}
|
||||
|
||||
console.log(
|
||||
boxen(
|
||||
chalk.green(`Successfully updated subtask #${subtaskId}`) +
|
||||
@@ -370,31 +342,22 @@ Guidelines:
|
||||
' ' +
|
||||
updatedSubtask.title +
|
||||
'\n\n' +
|
||||
// Update the display to show the new details field
|
||||
chalk.white.bold('Updated Details:') +
|
||||
chalk.white.bold('Newly Added Snippet:') +
|
||||
'\n' +
|
||||
chalk.white(truncate(updatedSubtask.details || '', 500, true)), // Use updatedSubtask
|
||||
chalk.white(newlyAddedSnippet),
|
||||
{ padding: 1, borderColor: 'green', borderStyle: 'round' }
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
return updatedSubtask; // Return the modified subtask object
|
||||
return updatedSubtask;
|
||||
} catch (error) {
|
||||
// Outer catch block handles final errors after loop/attempts
|
||||
// Stop indicator on error - only for text output (CLI)
|
||||
if (outputFormat === 'text' && loadingIndicator) {
|
||||
stopLoadingIndicator(loadingIndicator);
|
||||
loadingIndicator = null;
|
||||
}
|
||||
|
||||
report('error', `Error updating subtask: ${error.message}`);
|
||||
|
||||
// Only show error UI for text output (CLI)
|
||||
if (outputFormat === 'text') {
|
||||
console.error(chalk.red(`Error: ${error.message}`));
|
||||
|
||||
// Provide helpful error messages based on error type
|
||||
if (error.message?.includes('ANTHROPIC_API_KEY')) {
|
||||
console.log(
|
||||
chalk.yellow('\nTo fix this issue, set your Anthropic API key:')
|
||||
@@ -409,7 +372,6 @@ Guidelines:
|
||||
' 2. Or run without the research flag: task-master update-subtask --id=<id> --prompt="..."'
|
||||
);
|
||||
} else if (error.message?.includes('overloaded')) {
|
||||
// Catch final overload error
|
||||
console.log(
|
||||
chalk.yellow(
|
||||
'\nAI model overloaded, and fallback failed or was unavailable:'
|
||||
@@ -417,7 +379,6 @@ Guidelines:
|
||||
);
|
||||
console.log(' 1. Try again in a few minutes.');
|
||||
console.log(' 2. Ensure PERPLEXITY_API_KEY is set for fallback.');
|
||||
console.log(' 3. Consider breaking your prompt into smaller updates.');
|
||||
} else if (error.message?.includes('not found')) {
|
||||
console.log(chalk.yellow('\nTo fix this issue:'));
|
||||
console.log(
|
||||
@@ -426,22 +387,22 @@ Guidelines:
|
||||
console.log(
|
||||
' 2. Use a valid subtask ID with the --id parameter in format "parentId.subtaskId"'
|
||||
);
|
||||
} else if (error.message?.includes('empty stream response')) {
|
||||
} else if (
|
||||
error.message?.includes('empty stream response') ||
|
||||
error.message?.includes('AI did not return a valid text string')
|
||||
) {
|
||||
console.log(
|
||||
chalk.yellow(
|
||||
'\nThe AI model returned an empty response. This might be due to the prompt or API issues. Try rephrasing or trying again later.'
|
||||
'\nThe AI model returned an empty or invalid response. This might be due to the prompt or API issues. Try rephrasing or trying again later.'
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
if (getDebugFlag(session)) {
|
||||
// Use getter
|
||||
console.error(error);
|
||||
}
|
||||
} else {
|
||||
throw error; // Re-throw for JSON output
|
||||
throw error;
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1974,6 +1974,51 @@ function displayAvailableModels(availableModels) {
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Displays AI usage telemetry summary in the CLI.
|
||||
* @param {object} telemetryData - The telemetry data object.
|
||||
* @param {string} outputType - 'cli' or 'mcp' (though typically only called for 'cli').
|
||||
*/
|
||||
function displayAiUsageSummary(telemetryData, outputType = 'cli') {
|
||||
if (
|
||||
(outputType !== 'cli' && outputType !== 'text') ||
|
||||
!telemetryData ||
|
||||
isSilentMode()
|
||||
) {
|
||||
return; // Only display for CLI and if data exists and not in silent mode
|
||||
}
|
||||
|
||||
const {
|
||||
modelUsed,
|
||||
providerName,
|
||||
inputTokens,
|
||||
outputTokens,
|
||||
totalTokens,
|
||||
totalCost,
|
||||
commandName
|
||||
} = telemetryData;
|
||||
|
||||
let summary = chalk.bold.blue('AI Usage Summary:') + '\n';
|
||||
summary += chalk.gray(` Command: ${commandName}\n`);
|
||||
summary += chalk.gray(` Provider: ${providerName}\n`);
|
||||
summary += chalk.gray(` Model: ${modelUsed}\n`);
|
||||
summary += chalk.gray(
|
||||
` Tokens: ${totalTokens} (Input: ${inputTokens}, Output: ${outputTokens})\n`
|
||||
);
|
||||
summary += chalk.gray(` Est. Cost: $${totalCost.toFixed(6)}`);
|
||||
|
||||
console.log(
|
||||
boxen(summary, {
|
||||
padding: 1,
|
||||
margin: { top: 1 },
|
||||
borderColor: 'blue',
|
||||
borderStyle: 'round',
|
||||
title: '💡 Telemetry',
|
||||
titleAlignment: 'center'
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
// Export UI functions
|
||||
export {
|
||||
displayBanner,
|
||||
@@ -1991,5 +2036,6 @@ export {
|
||||
confirmTaskOverwrite,
|
||||
displayApiKeyStatus,
|
||||
displayModelConfiguration,
|
||||
displayAvailableModels
|
||||
displayAvailableModels,
|
||||
displayAiUsageSummary
|
||||
};
|
||||
|
||||
@@ -51,7 +51,7 @@ function getClient(apiKey) {
|
||||
* @param {Array<object>} params.messages - The messages array (e.g., [{ role: 'user', content: '...' }]).
|
||||
* @param {number} [params.maxTokens] - Maximum tokens for the response.
|
||||
* @param {number} [params.temperature] - Temperature for generation.
|
||||
* @returns {Promise<string>} The generated text content.
|
||||
* @returns {Promise<object>} The generated text content and usage.
|
||||
* @throws {Error} If the API call fails.
|
||||
*/
|
||||
export async function generateAnthropicText({
|
||||
@@ -76,7 +76,14 @@ export async function generateAnthropicText({
|
||||
'debug',
|
||||
`Anthropic generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
|
||||
);
|
||||
return result.text;
|
||||
// Return both text and usage
|
||||
return {
|
||||
text: result.text,
|
||||
usage: {
|
||||
inputTokens: result.usage.promptTokens,
|
||||
outputTokens: result.usage.completionTokens
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
log('error', `Anthropic generateText failed: ${error.message}`);
|
||||
// Consider more specific error handling or re-throwing a standardized error
|
||||
@@ -160,7 +167,7 @@ export async function streamAnthropicText({
|
||||
* @param {number} [params.maxTokens] - Maximum tokens for the response.
|
||||
* @param {number} [params.temperature] - Temperature for generation.
|
||||
* @param {number} [params.maxRetries] - Max retries for validation/generation.
|
||||
* @returns {Promise<object>} The generated object matching the schema.
|
||||
* @returns {Promise<object>} The generated object matching the schema and usage.
|
||||
* @throws {Error} If generation or validation fails.
|
||||
*/
|
||||
export async function generateAnthropicObject({
|
||||
@@ -204,7 +211,14 @@ export async function generateAnthropicObject({
|
||||
'debug',
|
||||
`Anthropic generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
|
||||
);
|
||||
return result.object;
|
||||
// Return both object and usage
|
||||
return {
|
||||
object: result.object,
|
||||
usage: {
|
||||
inputTokens: result.usage.promptTokens,
|
||||
outputTokens: result.usage.completionTokens
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
// Simple error logging
|
||||
log(
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Task ID: 32
|
||||
# Title: Implement "learn" Command for Automatic Cursor Rule Generation
|
||||
# Status: pending
|
||||
# Status: deferred
|
||||
# Dependencies: None
|
||||
# Priority: high
|
||||
# Description: Create a new "learn" command that analyzes Cursor's chat history and code changes to automatically generate or update rule files in the .cursor/rules directory, following the cursor_rules.mdc template format. This command will help Cursor autonomously improve its ability to follow development standards by learning from successful implementations.
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Task ID: 43
|
||||
# Title: Add Research Flag to Add-Task Command
|
||||
# Status: pending
|
||||
# Status: done
|
||||
# Dependencies: None
|
||||
# Priority: medium
|
||||
# Description: Implement a '--research' flag for the add-task command that enables users to automatically generate research-related subtasks when creating a new task.
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Task ID: 61
|
||||
# Title: Implement Flexible AI Model Management
|
||||
# Status: in-progress
|
||||
# Status: done
|
||||
# Dependencies: None
|
||||
# Priority: high
|
||||
# Description: Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.
|
||||
@@ -486,7 +486,7 @@ The existing `ai-services.js` should be refactored to:
|
||||
7. Add verbose output option for debugging
|
||||
8. Testing approach: Create integration tests that verify model setting functionality with various inputs
|
||||
|
||||
## 8. Update Main Task Processing Logic [deferred]
|
||||
## 8. Update Main Task Processing Logic [done]
|
||||
### Dependencies: 61.4, 61.5, 61.18
|
||||
### Description: Refactor the main task processing logic to use the new AI services module and support dynamic model selection.
|
||||
### Details:
|
||||
@@ -554,7 +554,7 @@ When updating the main task processing logic, implement the following changes to
|
||||
```
|
||||
</info added on 2025-04-20T03:55:56.310Z>
|
||||
|
||||
## 9. Update Research Processing Logic [deferred]
|
||||
## 9. Update Research Processing Logic [done]
|
||||
### Dependencies: 61.4, 61.5, 61.8, 61.18
|
||||
### Description: Refactor the research processing logic to use the new AI services module and support dynamic model selection for research operations.
|
||||
### Details:
|
||||
@@ -747,7 +747,7 @@ const result = await generateObjectService({
|
||||
5. Ensure any default values previously hardcoded are now retrieved from the configuration system.
|
||||
</info added on 2025-04-20T03:55:01.707Z>
|
||||
|
||||
## 12. Refactor Basic Subtask Generation to use generateObjectService [cancelled]
|
||||
## 12. Refactor Basic Subtask Generation to use generateObjectService [done]
|
||||
### Dependencies: 61.23
|
||||
### Description: Update the `generateSubtasks` function in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the subtask array.
|
||||
### Details:
|
||||
@@ -798,7 +798,7 @@ The refactoring should leverage the new configuration system:
|
||||
```
|
||||
</info added on 2025-04-20T03:54:45.542Z>
|
||||
|
||||
## 13. Refactor Research Subtask Generation to use generateObjectService [cancelled]
|
||||
## 13. Refactor Research Subtask Generation to use generateObjectService [done]
|
||||
### Dependencies: 61.23
|
||||
### Description: Update the `generateSubtasksWithPerplexity` function in `ai-services.js` to first perform research (potentially keeping the Perplexity call separate or adapting it) and then use `generateObjectService` from `ai-services-unified.js` with research results included in the prompt.
|
||||
### Details:
|
||||
@@ -828,7 +828,7 @@ const { verbose } = getLoggingConfig();
|
||||
5. Ensure the transition to generateObjectService maintains all existing functionality while leveraging the new configuration system
|
||||
</info added on 2025-04-20T03:54:26.882Z>
|
||||
|
||||
## 14. Refactor Research Task Description Generation to use generateObjectService [cancelled]
|
||||
## 14. Refactor Research Task Description Generation to use generateObjectService [done]
|
||||
### Dependencies: 61.23
|
||||
### Description: Update the `generateTaskDescriptionWithPerplexity` function in `ai-services.js` to first perform research and then use `generateObjectService` from `ai-services-unified.js` to generate the structured task description.
|
||||
### Details:
|
||||
@@ -869,7 +869,7 @@ return generateObjectService({
|
||||
5. Remove any hardcoded configuration values, ensuring all settings are retrieved from the centralized configuration system.
|
||||
</info added on 2025-04-20T03:54:04.420Z>
|
||||
|
||||
## 15. Refactor Complexity Analysis AI Call to use generateObjectService [cancelled]
|
||||
## 15. Refactor Complexity Analysis AI Call to use generateObjectService [done]
|
||||
### Dependencies: 61.23
|
||||
### Description: Update the logic that calls the AI after using `generateComplexityAnalysisPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the complexity report.
|
||||
### Details:
|
||||
@@ -916,7 +916,7 @@ The complexity analysis AI call should be updated to align with the new configur
|
||||
```
|
||||
</info added on 2025-04-20T03:53:46.120Z>
|
||||
|
||||
## 16. Refactor Task Addition AI Call to use generateObjectService [cancelled]
|
||||
## 16. Refactor Task Addition AI Call to use generateObjectService [done]
|
||||
### Dependencies: 61.23
|
||||
### Description: Update the logic that calls the AI after using `_buildAddTaskPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the single task object.
|
||||
### Details:
|
||||
@@ -961,7 +961,7 @@ To implement this refactoring, you'll need to:
|
||||
4. Update any error handling to match the new service's error patterns.
|
||||
</info added on 2025-04-20T03:53:27.455Z>
|
||||
|
||||
## 17. Refactor General Chat/Update AI Calls [deferred]
|
||||
## 17. Refactor General Chat/Update AI Calls [done]
|
||||
### Dependencies: 61.23
|
||||
### Description: Refactor functions like `sendChatWithContext` (and potentially related task update functions in `task-manager.js` if they make direct AI calls) to use `streamTextService` or `generateTextService` from `ai-services-unified.js`.
|
||||
### Details:
|
||||
@@ -1008,7 +1008,7 @@ When refactoring `sendChatWithContext` and related functions, ensure they align
|
||||
5. Ensure any default behaviors respect configuration defaults rather than hardcoded values.
|
||||
</info added on 2025-04-20T03:53:03.709Z>
|
||||
|
||||
## 18. Refactor Callers of AI Parsing Utilities [deferred]
|
||||
## 18. Refactor Callers of AI Parsing Utilities [done]
|
||||
### Dependencies: None
|
||||
### Description: Update the code that calls `parseSubtasksFromText`, `parseTaskJsonResponse`, and `parseTasksFromCompletion` to instead directly handle the structured JSON output provided by `generateObjectService` (as the refactored AI calls will now use it).
|
||||
### Details:
|
||||
@@ -1761,19 +1761,19 @@ export async function generateGoogleObject({
|
||||
```
|
||||
</info added on 2025-04-27T00:00:46.675Z>
|
||||
|
||||
## 25. Implement `ollama.js` Provider Module [pending]
|
||||
## 25. Implement `ollama.js` Provider Module [done]
|
||||
### Dependencies: None
|
||||
### Description: Create and implement the `ollama.js` module within `src/ai-providers/`. This module should contain functions to interact with local Ollama models using the **`ollama-ai-provider` library**, adhering to the standardized input/output format defined for `ai-services-unified.js`. Note the specific library used.
|
||||
### Details:
|
||||
|
||||
|
||||
## 26. Implement `mistral.js` Provider Module using Vercel AI SDK [pending]
|
||||
## 26. Implement `mistral.js` Provider Module using Vercel AI SDK [done]
|
||||
### Dependencies: None
|
||||
### Description: Create and implement the `mistral.js` module within `src/ai-providers/`. This module should contain functions to interact with Mistral AI models using the **Vercel AI SDK (`@ai-sdk/mistral`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
|
||||
### Details:
|
||||
|
||||
|
||||
## 27. Implement `azure.js` Provider Module using Vercel AI SDK [pending]
|
||||
## 27. Implement `azure.js` Provider Module using Vercel AI SDK [done]
|
||||
### Dependencies: None
|
||||
### Description: Create and implement the `azure.js` module within `src/ai-providers/`. This module should contain functions to interact with Azure OpenAI models using the **Vercel AI SDK (`@ai-sdk/azure`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.
|
||||
### Details:
|
||||
@@ -2649,13 +2649,13 @@ Here are more detailed steps for removing unnecessary console logs:
|
||||
10. After committing changes, monitor the application in staging environment to ensure no critical information is lost.
|
||||
</info added on 2025-05-02T20:47:56.080Z>
|
||||
|
||||
## 44. Add setters for temperature, max tokens on per role basis. [pending]
|
||||
## 44. Add setters for temperature, max tokens on per role basis. [done]
|
||||
### Dependencies: None
|
||||
### Description: NOT on a per-model/provider basis, though we could probably define those in the .taskmasterconfig file; they would then be hard-coded. If we let users define them on a per-role basis, they will define incorrect values. A good middle ground may be to do both: enforce maximums using the known max input and output token limits at the .taskmasterconfig level, while also giving setters to adjust temperature, input tokens, and output tokens for each of the 3 roles.
|
||||
### Details:
|
||||
|
||||
|
||||
## 45. Add support for Bedrock provider with ai sdk and unified service [pending]
|
||||
## 45. Add support for Bedrock provider with ai sdk and unified service [done]
|
||||
### Dependencies: None
|
||||
### Description:
|
||||
### Details:
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Task ID: 66
|
||||
# Title: Support Status Filtering in Show Command for Subtasks
|
||||
# Status: pending
|
||||
# Status: done
|
||||
# Dependencies: None
|
||||
# Priority: medium
|
||||
# Description: Enhance the 'show' command to accept a status parameter that filters subtasks by their current status, allowing users to view only subtasks matching a specific status.
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Task ID: 73
|
||||
# Title: Implement Custom Model ID Support for Ollama/OpenRouter
|
||||
# Status: in-progress
|
||||
# Status: done
|
||||
# Dependencies: None
|
||||
# Priority: medium
|
||||
# Description: Allow users to specify custom model IDs for Ollama and OpenRouter providers via CLI flag and interactive setup, with appropriate validation and warnings.
|
||||
|
||||
279
tasks/task_077.txt
Normal file
@@ -0,0 +1,279 @@
|
||||
# Task ID: 77
|
||||
# Title: Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)
|
||||
# Status: in-progress
|
||||
# Dependencies: None
|
||||
# Priority: medium
|
||||
# Description: Capture detailed AI usage data (tokens, costs, models, commands) within Taskmaster and send this telemetry to an external, closed-source analytics backend for usage analysis, profitability measurement, and pricing optimization.
|
||||
# Details:
|
||||
* Add a telemetry utility (`logAiUsage`) within `ai-services.js` to track AI usage.
|
||||
* Collected telemetry data fields must include:
|
||||
* `timestamp`: Current date/time in ISO 8601.
|
||||
* `userId`: Unique user identifier generated at setup (stored in `.taskmasterconfig`).
|
||||
* `commandName`: Taskmaster command invoked (`expand`, `parse-prd`, `research`, etc.).
|
||||
* `modelUsed`: Name/ID of the AI model invoked.
|
||||
* `inputTokens`: Count of input tokens used.
|
||||
* `outputTokens`: Count of output tokens generated.
|
||||
* `totalTokens`: Sum of input and output tokens.
|
||||
* `totalCost`: Monetary cost calculated using pricing from `supported_models.json`.
|
||||
* Send telemetry payload securely via HTTPS POST request from user's Taskmaster installation directly to the closed-source analytics API (Express/Supabase backend).
|
||||
* Introduce a privacy notice and explicit user consent prompt upon initial installation/setup to enable telemetry.
|
||||
* Provide a graceful fallback if telemetry request fails (e.g., no internet connectivity).
|
||||
* Optionally display a usage summary directly in Taskmaster CLI output for user transparency.
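
A minimal sketch of the payload shape described by the field list above (values are illustrative only; `providerName` is included because the CLI summary display added in this commit reads it):

```javascript
// Illustrative telemetry payload; field names follow the list above, values are examples.
const telemetryData = {
  timestamp: '2025-05-06T17:57:13.980Z', // ISO 8601
  userId: '7d9f2c6e-3b1a-4f8e-9c2d-1a2b3c4d5e6f', // UUID v4 stored in .taskmasterconfig
  commandName: 'add-task',
  modelUsed: 'claude-3-7-sonnet', // example model ID
  providerName: 'anthropic',
  inputTokens: 1532,
  outputTokens: 487,
  totalTokens: 2019,
  totalCost: 0.011902 // USD, derived from supported_models.json pricing
};
```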
|
||||
|
||||
# Test Strategy:
|
||||
|
||||
|
||||
# Subtasks:
|
||||
## 1. Implement telemetry utility and data collection [in-progress]
|
||||
### Dependencies: None
|
||||
### Description: Create the logAiUsage utility in ai-services.js that captures all required telemetry data fields
|
||||
### Details:
|
||||
Develop the logAiUsage function that collects timestamp, userId, commandName, modelUsed, inputTokens, outputTokens, totalTokens, and totalCost. Implement token counting logic and cost calculation using pricing from supported_models.json. Ensure proper error handling and data validation.
|
||||
<info added on 2025-05-05T21:08:51.413Z>
|
||||
Develop the logAiUsage function that collects timestamp, userId, commandName, modelUsed, inputTokens, outputTokens, totalTokens, and totalCost. Implement token counting logic and cost calculation using pricing from supported_models.json. Ensure proper error handling and data validation.
|
||||
|
||||
Implementation Plan:
|
||||
1. Define `logAiUsage` function in `ai-services-unified.js` that accepts parameters: userId, commandName, providerName, modelId, inputTokens, and outputTokens.
|
||||
|
||||
2. Implement data collection and calculation logic:
|
||||
- Generate timestamp using `new Date().toISOString()`
|
||||
- Calculate totalTokens by adding inputTokens and outputTokens
|
||||
- Create a helper function `_getCostForModel(providerName, modelId)` that:
|
||||
- Loads pricing data from supported-models.json
|
||||
- Finds the appropriate provider/model entry
|
||||
- Returns inputCost and outputCost rates or defaults if not found
|
||||
- Calculate totalCost using the formula: ((inputTokens/1,000,000) * inputCost) + ((outputTokens/1,000,000) * outputCost)
|
||||
- Assemble complete telemetryData object with all required fields
|
||||
|
||||
3. Add initial logging functionality:
|
||||
- Use existing log utility to record telemetry data at 'info' level
|
||||
- Implement proper error handling with try/catch blocks
|
||||
|
||||
4. Integrate with `_unifiedServiceRunner`:
|
||||
- Modify to accept commandName and userId parameters
|
||||
- After successful API calls, extract usage data from results
|
||||
- Call logAiUsage with the appropriate parameters
|
||||
|
||||
5. Update provider functions in src/ai-providers/*.js:
|
||||
- Ensure all provider functions return both the primary result and usage statistics
|
||||
- Standardize the return format to include a usage object with inputTokens and outputTokens
|
||||
</info added on 2025-05-05T21:08:51.413Z>
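
A rough sketch of the plan above, assuming `MODEL_MAP` is the parsed `supported-models.json` data and that it exposes per-million-token input/output costs (the exact field names are assumptions, not the final implementation):

```javascript
// Sketch of _getCostForModel and logAiUsage as described in the plan above.
function _getCostForModel(providerName, modelId) {
  const models = MODEL_MAP?.[providerName] || []; // MODEL_MAP: parsed supported-models.json (assumed shape)
  const entry = models.find((m) => m.id === modelId);
  if (!entry || !entry.cost_per_1m_tokens) {
    return { inputCost: 0, outputCost: 0 }; // default when pricing is unknown
  }
  return {
    inputCost: entry.cost_per_1m_tokens.input,
    outputCost: entry.cost_per_1m_tokens.output
  };
}

async function logAiUsage({ userId, commandName, providerName, modelId, inputTokens, outputTokens }) {
  try {
    const timestamp = new Date().toISOString();
    const totalTokens = (inputTokens || 0) + (outputTokens || 0);
    const { inputCost, outputCost } = _getCostForModel(providerName, modelId);
    // Formula from the plan: ((inputTokens/1,000,000) * inputCost) + ((outputTokens/1,000,000) * outputCost)
    const totalCost =
      (inputTokens / 1_000_000) * inputCost + (outputTokens / 1_000_000) * outputCost;

    const telemetryData = {
      timestamp,
      userId,
      commandName,
      modelUsed: modelId,
      providerName,
      inputTokens,
      outputTokens,
      totalTokens,
      totalCost: parseFloat(totalCost.toFixed(6))
    };

    log('info', `AI Usage Telemetry: ${JSON.stringify(telemetryData)}`); // existing log utility
    return telemetryData;
  } catch (error) {
    log('error', `Failed to log AI usage telemetry: ${error.message}`);
    return null; // telemetry failures must never break the command
  }
}
```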
|
||||
<info added on 2025-05-07T17:28:57.361Z>
|
||||
To implement the AI usage telemetry effectively, we need to update each command across our different stacks. Let's create a structured approach for this implementation:
|
||||
|
||||
Command Integration Plan:
|
||||
1. Core Function Commands:
|
||||
- Identify all AI-utilizing commands in the core function library
|
||||
- For each command, modify to pass commandName and userId to _unifiedServiceRunner
|
||||
- Update return handling to process and forward usage statistics
|
||||
|
||||
2. Direct Function Commands:
|
||||
- Map all direct function commands that leverage AI capabilities
|
||||
- Implement telemetry collection at the appropriate execution points
|
||||
- Ensure consistent error handling and telemetry reporting
|
||||
|
||||
3. MCP Tool Stack Commands:
|
||||
- Inventory all MCP commands with AI dependencies
|
||||
- Standardize the telemetry collection approach across the tool stack
|
||||
- Add telemetry hooks that maintain backward compatibility
|
||||
|
||||
For each command category, we'll need to:
|
||||
- Document current implementation details
|
||||
- Define specific code changes required
|
||||
- Create tests to verify telemetry is being properly collected
|
||||
- Establish validation procedures to ensure data accuracy
|
||||
</info added on 2025-05-07T17:28:57.361Z>
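
As a sketch of how `_unifiedServiceRunner` could tie this together after a successful provider call (parameter and variable names are assumptions based on the plan above, not the committed code):

```javascript
// Sketch: inside _unifiedServiceRunner, after the provider call succeeds.
const providerResult = await providerFn(callParams); // e.g. { text, usage: { inputTokens, outputTokens } }

let telemetryData = null;
if (providerResult?.usage) {
  telemetryData = await logAiUsage({
    userId,
    commandName,
    providerName,
    modelId,
    inputTokens: providerResult.usage.inputTokens,
    outputTokens: providerResult.usage.outputTokens
  });
}

// Callers receive the primary result plus telemetry in one structured object.
return {
  mainResult: providerResult.text ?? providerResult.object,
  telemetryData
};
```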
|
||||
|
||||
## 2. Implement secure telemetry transmission [pending]
|
||||
### Dependencies: 77.1
|
||||
### Description: Create a secure mechanism to transmit telemetry data to the external analytics endpoint
|
||||
### Details:
|
||||
Implement HTTPS POST request functionality to securely send the telemetry payload to the closed-source analytics API. Include proper encryption in transit using TLS. Implement retry logic and graceful fallback mechanisms for handling transmission failures due to connectivity issues.
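
One possible shape for the transmission helper, assuming a placeholder endpoint URL and simple bounded retries (a sketch, not the final design):

```javascript
// Sketch: HTTPS POST with bounded retries and a graceful no-op on failure.
const TELEMETRY_ENDPOINT = 'https://example.invalid/api/telemetry'; // placeholder URL

async function sendTelemetry(telemetryData, { maxRetries = 2 } = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const res = await fetch(TELEMETRY_ENDPOINT, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(telemetryData)
      });
      if (res.ok) return true;
    } catch {
      // Network error (e.g. offline) — fall through and retry.
    }
  }
  // Graceful fallback: never block or fail the user's command over telemetry.
  return false;
}
```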
|
||||
|
||||
## 3. Develop user consent and privacy notice system [pending]
|
||||
### Dependencies: None
|
||||
### Description: Create a privacy notice and explicit consent mechanism during Taskmaster setup
|
||||
### Details:
|
||||
Design and implement a clear privacy notice explaining what data is collected and how it's used. Create a user consent prompt during initial installation/setup that requires explicit opt-in. Store the consent status in the .taskmasterconfig file and respect this setting throughout the application.
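
One way the stored consent flag might be checked before any telemetry is sent (the `globals.telemetryEnabled` key name is an assumption):

```javascript
// Sketch: respect the consent flag persisted in .taskmasterconfig.
import fs from 'fs';
import path from 'path';

function isTelemetryEnabled(projectRoot) {
  try {
    const configPath = path.join(projectRoot, '.taskmasterconfig');
    const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
    return config?.globals?.telemetryEnabled === true; // assumed key name
  } catch {
    return false; // missing or unreadable config — treat as no consent
  }
}
```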
|
||||
|
||||
## 4. Integrate telemetry into Taskmaster commands [in-progress]
|
||||
### Dependencies: 77.1, 77.3
|
||||
### Description: Integrate the telemetry utility across all relevant Taskmaster commands
|
||||
### Details:
|
||||
Modify each Taskmaster command (expand, parse-prd, research, etc.) to call the logAiUsage utility after AI interactions. Ensure telemetry is only sent if user has provided consent. Implement the integration in a way that doesn't impact command performance or user experience.
|
||||
<info added on 2025-05-06T17:57:13.980Z>
|
||||
Modify each Taskmaster command (expand, parse-prd, research, etc.) to call the logAiUsage utility after AI interactions. Ensure telemetry is only sent if user has provided consent. Implement the integration in a way that doesn't impact command performance or user experience.
|
||||
|
||||
Successfully integrated telemetry calls into `addTask` (core) and `addTaskDirect` (MCP) functions by passing `commandName` and `outputType` parameters to the telemetry system. The `ai-services-unified.js` module now logs basic telemetry data, including calculated cost information, whenever the `add-task` command or tool is invoked. This integration respects user consent settings and maintains performance standards.
|
||||
</info added on 2025-05-06T17:57:13.980Z>
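
A simplified sketch of the direct-function side of that integration (the exact `addTask` signature and result fields are approximations; the `{ success, data: { ..., telemetryData } }` shape follows the pattern described in this task):

```javascript
// Sketch: addTaskDirect forwarding telemetryData in the MCP payload.
export async function addTaskDirect(args, log, context = {}) {
  const { session } = context;

  // Core addTask receives command context and returns telemetryData (signature approximated).
  const result = await addTask(
    args.tasksJsonPath,
    args.prompt,
    args.dependencies,
    args.priority,
    { session, mcpLog: log, commandName: 'add-task', outputType: 'mcp' },
    'json' // outputFormat: suppress CLI-only UI
  );

  return {
    success: true,
    data: {
      taskId: result.newTaskId, // illustrative field name
      telemetryData: result.telemetryData // propagated from the core function
    }
  };
}
```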
|
||||
|
||||
## 5. Implement usage summary display [pending]
|
||||
### Dependencies: 77.1, 77.4
|
||||
### Description: Create an optional feature to display AI usage summary in the CLI output
|
||||
### Details:
|
||||
Develop functionality to display a concise summary of AI usage (tokens used, estimated cost) directly in the CLI output after command execution. Make this feature configurable through Taskmaster settings. Ensure the display is formatted clearly and doesn't clutter the main command output.
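
On the CLI side this can reuse the `displayAiUsageSummary` helper added in this commit; a hypothetical call site might look like:

```javascript
// Sketch: show the summary after a successful CLI command, if telemetry exists.
if (outputFormat === 'text' && result?.telemetryData) {
  displayAiUsageSummary(result.telemetryData, 'cli');
}
```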
|
||||
|
||||
## 6. Telemetry Integration for parse-prd [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the parse-prd functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/parse-prd.js`):**
   * Modify AI service call to include `commandName: 'parse-prd'` and `outputType`.
   * Receive `{ mainResult, telemetryData }`.
   * Return object including `telemetryData`.
   * Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/parse-prd.js`):**
   * Pass `commandName`, `outputType: 'mcp'` to core.
   * Pass `outputFormat: 'json'` if applicable.
   * Receive `{ ..., telemetryData }` from core.
   * Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/parse-prd.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through.
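
A sketch of step 1 of this pattern as it might look in the core function (it applies analogously to subtasks 7 through 12 below; the use of `generateObjectService` and the `prdResponseSchema` name are assumptions, not the committed code):

```javascript
// Sketch: core parse-prd passing command context and returning telemetryData.
const aiServiceResponse = await generateObjectService({
  role,
  session,
  projectRoot,
  schema: prdResponseSchema, // assumed Zod schema name
  systemPrompt,
  prompt: userPrompt,
  commandName: 'parse-prd',
  outputType: isMCP ? 'mcp' : 'cli'
});

const generatedTasks = aiServiceResponse.mainResult;
const telemetryData = aiServiceResponse.telemetryData;

if (outputFormat === 'text') {
  displayAiUsageSummary(telemetryData, 'cli');
}

return { tasks: generatedTasks, telemetryData };
```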
|
||||
|
||||
|
||||
## 7. Telemetry Integration for expand-task [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the expand-task functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/expand-task.js`):**
   * Modify AI service call to include `commandName: 'expand-task'` and `outputType`.
   * Receive `{ mainResult, telemetryData }`.
   * Return object including `telemetryData`.
   * Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/expand-task.js`):**
   * Pass `commandName`, `outputType: 'mcp'` to core.
   * Pass `outputFormat: 'json'` if applicable.
   * Receive `{ ..., telemetryData }` from core.
   * Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/expand-task.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through.
|
||||
|
||||
|
||||
## 8. Telemetry Integration for expand-all-tasks [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the expand-all-tasks functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/expand-all-tasks.js`):**
   * Modify AI service call (likely within a loop or called by a helper) to include `commandName: 'expand-all-tasks'` and `outputType`.
   * Receive `{ mainResult, telemetryData }`.
   * Aggregate or handle `telemetryData` appropriately if multiple AI calls are made.
   * Return object including aggregated/relevant `telemetryData`.
   * Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/expand-all-tasks.js`):**
   * Pass `commandName`, `outputType: 'mcp'` to core.
   * Pass `outputFormat: 'json'` if applicable.
   * Receive `{ ..., telemetryData }` from core.
   * Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/expand-all.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through.
|
||||
|
||||
|
||||
## 9. Telemetry Integration for update-tasks [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the update-tasks (bulk update) functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/update-tasks.js`):**
   * Modify AI service call (likely within a loop) to include `commandName: 'update-tasks'` and `outputType`.
   * Receive `{ mainResult, telemetryData }` for each AI call.
   * Aggregate or handle `telemetryData` appropriately for multiple calls.
   * Return object including aggregated/relevant `telemetryData`.
   * Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/update-tasks.js`):**
   * Pass `commandName`, `outputType: 'mcp'` to core.
   * Pass `outputFormat: 'json'` if applicable.
   * Receive `{ ..., telemetryData }` from core.
   * Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/update.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through.
|
||||
|
||||
|
||||
## 10. Telemetry Integration for update-task-by-id [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the update-task-by-id functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/update-task-by-id.js`):**
   * Modify AI service call to include `commandName: 'update-task'` and `outputType`.
   * Receive `{ mainResult, telemetryData }`.
   * Return object including `telemetryData`.
   * Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/update-task-by-id.js`):**
   * Pass `commandName`, `outputType: 'mcp'` to core.
   * Pass `outputFormat: 'json'` if applicable.
   * Receive `{ ..., telemetryData }` from core.
   * Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/update-task.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through.
|
||||
|
||||
|
||||
## 11. Telemetry Integration for update-subtask-by-id [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the update-subtask-by-id functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/update-subtask-by-id.js`):**
   * Verify if this function *actually* calls an AI service. If it only appends text, telemetry integration might not apply directly here, but ensure its callers handle telemetry if they use AI.
   * *If it calls AI:* Modify AI service call to include `commandName: 'update-subtask'` and `outputType`.
   * *If it calls AI:* Receive `{ mainResult, telemetryData }`.
   * *If it calls AI:* Return object including `telemetryData`.
   * *If it calls AI:* Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/update-subtask-by-id.js`):**
   * *If core calls AI:* Pass `commandName`, `outputType: 'mcp'` to core.
   * *If core calls AI:* Pass `outputFormat: 'json'` if applicable.
   * *If core calls AI:* Receive `{ ..., telemetryData }` from core.
   * *If core calls AI:* Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/update-subtask.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through (if present).
|
||||
|
||||
|
||||
## 12. Telemetry Integration for analyze-task-complexity [pending]
|
||||
### Dependencies: None
|
||||
### Description: Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality.
|
||||
### Details:
|
||||
Apply telemetry pattern from telemetry.mdc:

1. **Core (`scripts/modules/task-manager/analyze-task-complexity.js`):**
   * Modify AI service call to include `commandName: 'analyze-complexity'` and `outputType`.
   * Receive `{ mainResult, telemetryData }`.
   * Return object including `telemetryData` (perhaps alongside the complexity report data).
   * Handle CLI display via `displayAiUsageSummary` if applicable.

2. **Direct (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**
   * Pass `commandName`, `outputType: 'mcp'` to core.
   * Pass `outputFormat: 'json'` if applicable.
   * Receive `{ ..., telemetryData }` from core.
   * Return `{ success: true, data: { ..., telemetryData } }`.

3. **Tool (`mcp-server/src/tools/analyze.js`):**
   * Verify `handleApiResult` correctly passes `data.telemetryData` through.
|
||||
|
||||
|
||||
60
tasks/task_080.txt
Normal file
@@ -0,0 +1,60 @@
|
||||
# Task ID: 80
|
||||
# Title: Implement Unique User ID Generation and Storage During Installation
|
||||
# Status: pending
|
||||
# Dependencies: None
|
||||
# Priority: medium
|
||||
# Description: Generate a unique user identifier during npm installation and store it in the .taskmasterconfig globals to enable anonymous usage tracking and telemetry without requiring user registration.
|
||||
# Details:
|
||||
This task involves implementing a mechanism to generate and store a unique user identifier during the npm installation process of Taskmaster. The implementation should:
|
||||
|
||||
1. Create a post-install script that runs automatically after npm install completes
|
||||
2. Generate a cryptographically secure random UUID v4 as the unique user identifier
|
||||
3. Check if a user ID already exists in the .taskmasterconfig file before generating a new one
|
||||
4. Add the generated user ID to the globals section of the .taskmasterconfig file
|
||||
5. Ensure the user ID persists across updates but is regenerated on fresh installations
|
||||
6. Handle edge cases such as failed installations, manual deletions of the config file, or permission issues
|
||||
7. Add appropriate logging to notify users that an anonymous ID is being generated (with clear privacy messaging)
|
||||
8. Document the purpose of this ID in the codebase and user documentation
|
||||
9. Ensure the ID generation is compatible with all supported operating systems
|
||||
10. Make the ID accessible to the telemetry system implemented in Task #77
|
||||
|
||||
The implementation should respect user privacy by:
|
||||
- Not collecting any personally identifiable information
|
||||
- Making it clear in documentation how users can opt out of telemetry
|
||||
- Ensuring the ID cannot be traced back to specific users or installations
|
||||
|
||||
This user ID will serve as the foundation for anonymous usage tracking, helping to understand how Taskmaster is used without compromising user privacy.
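
A post-install sketch matching the steps above, using Node's built-in `crypto.randomUUID` (the config path and `globals.userId` key are assumptions):

```javascript
// postinstall sketch: generate a UUID v4 once and persist it in .taskmasterconfig globals.
import fs from 'fs';
import path from 'path';
import { randomUUID } from 'crypto';

function ensureUserId(projectRoot = process.cwd()) {
  const configPath = path.join(projectRoot, '.taskmasterconfig');
  let config = {};
  try {
    config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
  } catch {
    // Missing or corrupt config — start fresh rather than failing the install.
  }
  config.globals = config.globals || {};
  if (!config.globals.userId) {
    config.globals.userId = randomUUID(); // cryptographically secure UUID v4
    console.log('Task Master: generated an anonymous usage ID (see docs for opt-out).');
  }
  try {
    fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
  } catch (error) {
    console.warn(`Task Master: could not persist user ID: ${error.message}`);
  }
  return config.globals.userId;
}

ensureUserId();
```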
|
||||
|
||||
# Test Strategy:
|
||||
Testing for this feature should include:
|
||||
|
||||
1. **Unit Tests**:
|
||||
- Verify the UUID generation produces valid UUIDs
|
||||
- Test the config file reading and writing functionality
|
||||
- Ensure proper error handling for file system operations
|
||||
- Verify the ID remains consistent across multiple reads
|
||||
|
||||
2. **Integration Tests**:
|
||||
- Run a complete npm installation in a clean environment and verify a new ID is generated
|
||||
- Simulate an update installation and verify the existing ID is preserved
|
||||
- Test the interaction between the ID generation and the telemetry system
|
||||
- Verify the ID is correctly stored in the expected location in .taskmasterconfig
|
||||
|
||||
3. **Manual Testing**:
|
||||
- Perform fresh installations on different operating systems (Windows, macOS, Linux)
|
||||
- Verify the installation process completes without errors
|
||||
- Check that the .taskmasterconfig file contains the generated ID
|
||||
- Test scenarios where the config file is manually deleted or corrupted
|
||||
|
||||
4. **Edge Case Testing**:
|
||||
- Test behavior when the installation is run without sufficient permissions
|
||||
- Verify handling of network disconnections during installation
|
||||
- Test with various npm versions to ensure compatibility
|
||||
- Verify behavior when .taskmasterconfig already exists but doesn't contain a user ID section
|
||||
|
||||
5. **Validation**:
|
||||
- Create a simple script to extract and analyze generated IDs to ensure uniqueness
|
||||
- Verify the ID format meets UUID v4 specifications
|
||||
- Confirm the ID is accessible to the telemetry system from Task #77
|
||||
|
||||
The test plan should include documentation of all test cases, expected results, and actual outcomes. A successful implementation will generate unique IDs for each installation while maintaining that ID across updates.
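
For the unit-test portion, a minimal Jest-style check of the ID format and persistence might look like this (module path and `ensureUserId` helper are hypothetical, from the sketch above):

```javascript
// Sketch: UUID v4 format and persistence checks against a temporary project root.
import fs from 'fs';
import os from 'os';
import path from 'path';
import { ensureUserId } from './postinstall.js'; // assumed module path

const UUID_V4 =
  /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
const tmpProjectRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'taskmaster-test-'));

test('generates a valid UUID v4', () => {
  expect(ensureUserId(tmpProjectRoot)).toMatch(UUID_V4);
});

test('reuses the existing ID on subsequent runs', () => {
  expect(ensureUserId(tmpProjectRoot)).toBe(ensureUserId(tmpProjectRoot));
});
```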
|
||||
177
tasks/tasks.json
@@ -2094,7 +2094,7 @@
|
||||
"id": 32,
|
||||
"title": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
|
||||
"description": "Create a new \"learn\" command that analyzes Cursor's chat history and code changes to automatically generate or update rule files in the .cursor/rules directory, following the cursor_rules.mdc template format. This command will help Cursor autonomously improve its ability to follow development standards by learning from successful implementations.",
|
||||
"status": "pending",
|
||||
"status": "deferred",
|
||||
"dependencies": [],
|
||||
"priority": "high",
|
||||
"details": "Implement a new command in the task-master CLI that enables Cursor to learn from successful coding patterns and chat interactions:\n\nKey Components:\n1. Cursor Data Analysis\n - Access and parse Cursor's chat history from ~/Library/Application Support/Cursor/User/History\n - Extract relevant patterns, corrections, and successful implementations\n - Track file changes and their associated chat context\n\n2. Rule Management\n - Use cursor_rules.mdc as the template for all rule file formatting\n - Manage rule files in .cursor/rules directory\n - Support both creation and updates of rule files\n - Categorize rules based on context (testing, components, API, etc.)\n\n3. AI Integration\n - Utilize ai-services.js to interact with Claude\n - Provide comprehensive context including:\n * Relevant chat history showing the evolution of solutions\n * Code changes and their outcomes\n * Existing rules and template structure\n - Generate or update rules while maintaining template consistency\n\n4. Implementation Requirements:\n - Automatic triggering after task completion (configurable)\n - Manual triggering via CLI command\n - Proper error handling for missing or corrupt files\n - Validation against cursor_rules.mdc template\n - Performance optimization for large histories\n - Clear logging and progress indication\n\n5. Key Files:\n - commands/learn.js: Main command implementation\n - rules/cursor-rules-manager.js: Rule file management\n - utils/chat-history-analyzer.js: Cursor chat analysis\n - index.js: Command registration\n\n6. Security Considerations:\n - Safe file system operations\n - Proper error handling for inaccessible files\n - Validation of generated rules\n - Backup of existing rules before updates",
|
||||
@@ -2607,7 +2607,7 @@
|
||||
"id": 43,
|
||||
"title": "Add Research Flag to Add-Task Command",
|
||||
"description": "Implement a '--research' flag for the add-task command that enables users to automatically generate research-related subtasks when creating a new task.",
|
||||
"status": "pending",
|
||||
"status": "done",
|
||||
"dependencies": [],
|
||||
"priority": "medium",
|
||||
"details": "Modify the add-task command to accept a new optional flag '--research'. When this flag is provided, the system should automatically generate and attach a set of research-oriented subtasks to the newly created task. These subtasks should follow a standard research methodology structure:\n\n1. Background Investigation: Research existing solutions and approaches\n2. Requirements Analysis: Define specific requirements and constraints\n3. Technology/Tool Evaluation: Compare potential technologies or tools for implementation\n4. Proof of Concept: Create a minimal implementation to validate approach\n5. Documentation: Document findings and recommendations\n\nThe implementation should:\n- Update the command-line argument parser to recognize the new flag\n- Create a dedicated function to generate the research subtasks with appropriate descriptions\n- Ensure subtasks are properly linked to the parent task\n- Update help documentation to explain the new flag\n- Maintain backward compatibility with existing add-task functionality\n\nThe research subtasks should be customized based on the main task's title and description when possible, rather than using generic templates.",
|
||||
@@ -3026,7 +3026,7 @@
|
||||
"description": "Currently, Task Master only supports Claude for main operations and Perplexity for research. Users are limited in flexibility when managing AI models. Adding comprehensive support for multiple popular AI models (OpenAI, Ollama, Gemini, OpenRouter, Grok) and providing intuitive CLI commands for model management will significantly enhance usability, transparency, and adaptability to user preferences and project-specific needs. This task will now leverage Vercel's AI SDK to streamline integration and management of these models.",
|
||||
"details": "### Proposed Solution\nImplement an intuitive CLI command for AI model management, leveraging Vercel's AI SDK for seamless integration:\n\n- `task-master models`: Lists currently configured models for main operations and research.\n- `task-master models --set-main=\"<model_name>\" --set-research=\"<model_name>\"`: Sets the desired models for main operations and research tasks respectively.\n\nSupported AI Models:\n- **Main Operations:** Claude (current default), OpenAI, Ollama, Gemini, OpenRouter\n- **Research Operations:** Perplexity (current default), OpenAI, Ollama, Grok\n\nIf a user specifies an invalid model, the CLI lists available models clearly.\n\n### Example CLI Usage\n\nList current models:\n```shell\ntask-master models\n```\nOutput example:\n```\nCurrent AI Model Configuration:\n- Main Operations: Claude\n- Research Operations: Perplexity\n```\n\nSet new models:\n```shell\ntask-master models --set-main=\"gemini\" --set-research=\"grok\"\n```\n\nAttempt invalid model:\n```shell\ntask-master models --set-main=\"invalidModel\"\n```\nOutput example:\n```\nError: \"invalidModel\" is not a valid model.\n\nAvailable models for Main Operations:\n- claude\n- openai\n- ollama\n- gemini\n- openrouter\n```\n\n### High-Level Workflow\n1. Update CLI parsing logic to handle new `models` command and associated flags.\n2. Consolidate all AI calls into `ai-services.js` for centralized management.\n3. Utilize Vercel's AI SDK to implement robust wrapper functions for each AI API:\n - Claude (existing)\n - Perplexity (existing)\n - OpenAI\n - Ollama\n - Gemini\n - OpenRouter\n - Grok\n4. Update environment variables and provide clear documentation in `.env_example`:\n```env\n# MAIN_MODEL options: claude, openai, ollama, gemini, openrouter\nMAIN_MODEL=claude\n\n# RESEARCH_MODEL options: perplexity, openai, ollama, grok\nRESEARCH_MODEL=perplexity\n```\n5. Ensure dynamic model switching via environment variables or configuration management.\n6. 
Provide clear CLI feedback and validation of model names.\n\n### Vercel AI SDK Integration\n- Use Vercel's AI SDK to abstract API calls for supported models, ensuring consistent error handling and response formatting.\n- Implement a configuration layer to map model names to their respective Vercel SDK integrations.\n- Example pattern for integration:\n```javascript\nimport { createClient } from '@vercel/ai';\n\nconst clients = {\n claude: createClient({ provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }),\n openai: createClient({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY }),\n ollama: createClient({ provider: 'ollama', apiKey: process.env.OLLAMA_API_KEY }),\n gemini: createClient({ provider: 'gemini', apiKey: process.env.GEMINI_API_KEY }),\n openrouter: createClient({ provider: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY }),\n perplexity: createClient({ provider: 'perplexity', apiKey: process.env.PERPLEXITY_API_KEY }),\n grok: createClient({ provider: 'xai', apiKey: process.env.XAI_API_KEY })\n};\n\nexport function getClient(model) {\n if (!clients[model]) {\n throw new Error(`Invalid model: ${model}`);\n }\n return clients[model];\n}\n```\n- Leverage `generateText` and `streamText` functions from the SDK for text generation and streaming capabilities.\n- Ensure compatibility with serverless and edge deployments using Vercel's infrastructure.\n\n### Key Elements\n- Enhanced model visibility and intuitive management commands.\n- Centralized and robust handling of AI API integrations via Vercel AI SDK.\n- Clear CLI responses with detailed validation feedback.\n- Flexible, easy-to-understand environment configuration.\n\n### Implementation Considerations\n- Centralize all AI interactions through a single, maintainable module (`ai-services.js`).\n- Ensure comprehensive error handling for invalid model selections.\n- Clearly document environment variable options and their purposes.\n- Validate model names rigorously to prevent runtime errors.\n\n### Out of Scope (Future Considerations)\n- Automatic benchmarking or model performance comparison.\n- Dynamic runtime switching of models based on task type or complexity.",
|
||||
"testStrategy": "### Test Strategy\n1. **Unit Tests**:\n - Test CLI commands for listing, setting, and validating models.\n - Mock Vercel AI SDK calls to ensure proper integration and error handling.\n\n2. **Integration Tests**:\n - Validate end-to-end functionality of model management commands.\n - Test dynamic switching of models via environment variables.\n\n3. **Error Handling Tests**:\n - Simulate invalid model names and verify error messages.\n - Test API failures for each model provider and ensure graceful degradation.\n\n4. **Documentation Validation**:\n - Verify that `.env_example` and CLI usage examples are accurate and comprehensive.\n\n5. **Performance Tests**:\n - Measure response times for API calls through Vercel AI SDK.\n - Ensure no significant latency is introduced by model switching.\n\n6. **SDK-Specific Tests**:\n - Validate the behavior of `generateText` and `streamText` functions for supported models.\n - Test compatibility with serverless and edge deployments.",
|
||||
"status": "in-progress",
|
||||
"status": "done",
|
||||
"dependencies": [],
|
||||
"priority": "high",
|
||||
"subtasks": [
|
||||
@@ -3121,7 +3121,7 @@
|
||||
"61.18"
|
||||
],
|
||||
"details": "1. Update task processing functions to use the centralized AI services\n2. Implement dynamic model selection based on configuration\n3. Add error handling for model-specific failures\n4. Implement graceful degradation when preferred models are unavailable\n5. Update prompts to be model-agnostic where possible\n6. Add telemetry for model performance monitoring\n7. Implement response validation to ensure quality across different models\n8. Testing approach: Create integration tests that verify task processing with different model configurations\n\n<info added on 2025-04-20T03:55:56.310Z>\nWhen updating the main task processing logic, implement the following changes to align with the new configuration system:\n\n1. Replace direct environment variable access with calls to the configuration manager:\n ```javascript\n // Before\n const apiKey = process.env.OPENAI_API_KEY;\n const modelId = process.env.MAIN_MODEL || \"gpt-4\";\n \n // After\n import { getMainProvider, getMainModelId, getMainMaxTokens, getMainTemperature } from './config-manager.js';\n \n const provider = getMainProvider();\n const modelId = getMainModelId();\n const maxTokens = getMainMaxTokens();\n const temperature = getMainTemperature();\n ```\n\n2. Implement model fallback logic using the configuration hierarchy:\n ```javascript\n async function processTaskWithFallback(task) {\n try {\n return await processWithModel(task, getMainModelId());\n } catch (error) {\n logger.warn(`Primary model failed: ${error.message}`);\n const fallbackModel = getMainFallbackModelId();\n if (fallbackModel) {\n return await processWithModel(task, fallbackModel);\n }\n throw error;\n }\n }\n ```\n\n3. Add configuration-aware telemetry points to track model usage and performance:\n ```javascript\n function trackModelPerformance(modelId, startTime, success) {\n const duration = Date.now() - startTime;\n telemetry.trackEvent('model_usage', {\n modelId,\n provider: getMainProvider(),\n duration,\n success,\n configVersion: getConfigVersion()\n });\n }\n ```\n\n4. Ensure all prompt templates are loaded through the configuration system rather than hardcoded:\n ```javascript\n const promptTemplate = getPromptTemplate('task_processing');\n const prompt = formatPrompt(promptTemplate, { task: taskData });\n ```\n</info added on 2025-04-20T03:55:56.310Z>",
"status": "deferred",
"status": "done",
"parentTaskId": 61
},
{
@@ -3135,7 +3135,7 @@
"61.18"
],
"details": "1. Update research functions to use the centralized AI services\n2. Implement dynamic model selection for research operations\n3. Add specialized error handling for research-specific issues\n4. Optimize prompts for research-focused models\n5. Implement result caching for research operations\n6. Add support for model-specific research parameters\n7. Create fallback mechanisms for research operations\n8. Testing approach: Create integration tests that verify research functionality with different model configurations\n\n<info added on 2025-04-20T03:55:39.633Z>\nWhen implementing the refactored research processing logic, ensure the following:\n\n1. Replace direct environment variable access with the new configuration system:\n ```javascript\n // Old approach\n const apiKey = process.env.OPENAI_API_KEY;\n const model = \"gpt-4\";\n \n // New approach\n import { getResearchProvider, getResearchModelId, getResearchMaxTokens, \n getResearchTemperature } from './config-manager.js';\n \n const provider = getResearchProvider();\n const modelId = getResearchModelId();\n const maxTokens = getResearchMaxTokens();\n const temperature = getResearchTemperature();\n ```\n\n2. Implement model fallback chains using the configuration system:\n ```javascript\n async function performResearch(query) {\n try {\n return await callAIService({\n provider: getResearchProvider(),\n modelId: getResearchModelId(),\n maxTokens: getResearchMaxTokens(),\n temperature: getResearchTemperature()\n });\n } catch (error) {\n logger.warn(`Primary research model failed: ${error.message}`);\n return await callAIService({\n provider: getResearchProvider('fallback'),\n modelId: getResearchModelId('fallback'),\n maxTokens: getResearchMaxTokens('fallback'),\n temperature: getResearchTemperature('fallback')\n });\n }\n }\n ```\n\n3. Add support for dynamic parameter adjustment based on research type:\n ```javascript\n function getResearchParameters(researchType) {\n // Get base parameters\n const baseParams = {\n provider: getResearchProvider(),\n modelId: getResearchModelId(),\n maxTokens: getResearchMaxTokens(),\n temperature: getResearchTemperature()\n };\n \n // Adjust based on research type\n switch(researchType) {\n case 'deep':\n return {...baseParams, maxTokens: baseParams.maxTokens * 1.5};\n case 'creative':\n return {...baseParams, temperature: Math.min(baseParams.temperature + 0.2, 1.0)};\n case 'factual':\n return {...baseParams, temperature: Math.max(baseParams.temperature - 0.2, 0)};\n default:\n return baseParams;\n }\n }\n ```\n\n4. Ensure the caching mechanism uses configuration-based TTL settings:\n ```javascript\n const researchCache = new Cache({\n ttl: getResearchCacheTTL(),\n maxSize: getResearchCacheMaxSize()\n });\n ```\n</info added on 2025-04-20T03:55:39.633Z>",
"status": "deferred",
"status": "done",
"parentTaskId": 61
},
{
@@ -3168,7 +3168,7 @@
"title": "Refactor Basic Subtask Generation to use generateObjectService",
"description": "Update the `generateSubtasks` function in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the subtask array.",
"details": "\n\n<info added on 2025-04-20T03:54:45.542Z>\nThe refactoring should leverage the new configuration system:\n\n1. Replace direct model references with calls to config-manager.js getters:\n ```javascript\n const { getModelForRole, getModelParams } = require('./config-manager');\n \n // Instead of hardcoded models/parameters:\n const model = getModelForRole('subtask-generator');\n const modelParams = getModelParams('subtask-generator');\n ```\n\n2. Update API key handling to use the resolveEnvVariable pattern:\n ```javascript\n const { resolveEnvVariable } = require('./utils');\n const apiKey = resolveEnvVariable('OPENAI_API_KEY');\n ```\n\n3. When calling generateObjectService, pass the configuration parameters:\n ```javascript\n const result = await generateObjectService({\n schema: subtasksArraySchema,\n prompt: subtaskPrompt,\n model: model,\n temperature: modelParams.temperature,\n maxTokens: modelParams.maxTokens,\n // Other parameters from config\n });\n ```\n\n4. Add error handling that respects logging configuration:\n ```javascript\n const { isLoggingEnabled } = require('./config-manager');\n \n try {\n // Generation code\n } catch (error) {\n if (isLoggingEnabled('errors')) {\n console.error('Subtask generation error:', error);\n }\n throw error;\n }\n ```\n</info added on 2025-04-20T03:54:45.542Z>",
"status": "cancelled",
"status": "done",
"dependencies": [
"61.23"
],
@@ -3179,7 +3179,7 @@
"title": "Refactor Research Subtask Generation to use generateObjectService",
"description": "Update the `generateSubtasksWithPerplexity` function in `ai-services.js` to first perform research (potentially keeping the Perplexity call separate or adapting it) and then use `generateObjectService` from `ai-services-unified.js` with research results included in the prompt.",
"details": "\n\n<info added on 2025-04-20T03:54:26.882Z>\nThe refactoring should align with the new configuration system by:\n\n1. Replace direct environment variable access with `resolveEnvVariable` for API keys\n2. Use the config-manager.js getters to retrieve model parameters:\n - Replace hardcoded model names with `getModelForRole('research')`\n - Use `getParametersForRole('research')` to get temperature, maxTokens, etc.\n3. Implement proper error handling that respects the `getLoggingConfig()` settings\n4. Example implementation pattern:\n```javascript\nconst { getModelForRole, getParametersForRole, getLoggingConfig } = require('./config-manager');\nconst { resolveEnvVariable } = require('./environment-utils');\n\n// In the refactored function:\nconst researchModel = getModelForRole('research');\nconst { temperature, maxTokens } = getParametersForRole('research');\nconst apiKey = resolveEnvVariable('PERPLEXITY_API_KEY');\nconst { verbose } = getLoggingConfig();\n\n// Then use these variables in the API call configuration\n```\n5. Ensure the transition to generateObjectService maintains all existing functionality while leveraging the new configuration system\n</info added on 2025-04-20T03:54:26.882Z>",
"status": "cancelled",
"status": "done",
"dependencies": [
"61.23"
],
@@ -3190,7 +3190,7 @@
"title": "Refactor Research Task Description Generation to use generateObjectService",
"description": "Update the `generateTaskDescriptionWithPerplexity` function in `ai-services.js` to first perform research and then use `generateObjectService` from `ai-services-unified.js` to generate the structured task description.",
"details": "\n\n<info added on 2025-04-20T03:54:04.420Z>\nThe refactoring should incorporate the new configuration management system:\n\n1. Update imports to include the config-manager:\n```javascript\nconst { getModelForRole, getParametersForRole } = require('./config-manager');\n```\n\n2. Replace any hardcoded model selections or parameters with config-manager calls:\n```javascript\n// Replace direct model references like:\n// const model = \"perplexity-model-7b-online\" \n// With:\nconst model = getModelForRole('research');\nconst parameters = getParametersForRole('research');\n```\n\n3. For API key handling, use the resolveEnvVariable pattern:\n```javascript\nconst apiKey = resolveEnvVariable('PERPLEXITY_API_KEY');\n```\n\n4. When calling generateObjectService, pass the configuration-derived parameters:\n```javascript\nreturn generateObjectService({\n prompt: researchResults,\n schema: taskDescriptionSchema,\n role: 'taskDescription',\n // Config-driven parameters will be applied within generateObjectService\n});\n```\n\n5. Remove any hardcoded configuration values, ensuring all settings are retrieved from the centralized configuration system.\n</info added on 2025-04-20T03:54:04.420Z>",
"status": "cancelled",
"status": "done",
"dependencies": [
"61.23"
],
@@ -3201,7 +3201,7 @@
"title": "Refactor Complexity Analysis AI Call to use generateObjectService",
"description": "Update the logic that calls the AI after using `generateComplexityAnalysisPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the complexity report.",
"details": "\n\n<info added on 2025-04-20T03:53:46.120Z>\nThe complexity analysis AI call should be updated to align with the new configuration system architecture. When refactoring to use `generateObjectService`, implement the following changes:\n\n1. Replace direct model references with calls to the appropriate config getter:\n ```javascript\n const modelName = getComplexityAnalysisModel(); // Use the specific getter from config-manager.js\n ```\n\n2. Retrieve AI parameters from the config system:\n ```javascript\n const temperature = getAITemperature('complexityAnalysis');\n const maxTokens = getAIMaxTokens('complexityAnalysis');\n ```\n\n3. When constructing the call to `generateObjectService`, pass these configuration values:\n ```javascript\n const result = await generateObjectService({\n prompt,\n schema: complexityReportSchema,\n modelName,\n temperature,\n maxTokens,\n sessionEnv: session?.env\n });\n ```\n\n4. Ensure API key resolution uses the `resolveEnvVariable` helper:\n ```javascript\n // Don't hardcode API keys or directly access process.env\n // The generateObjectService should handle this internally with resolveEnvVariable\n ```\n\n5. Add logging configuration based on settings:\n ```javascript\n const enableLogging = getAILoggingEnabled('complexityAnalysis');\n if (enableLogging) {\n // Use the logging mechanism defined in the configuration\n }\n ```\n</info added on 2025-04-20T03:53:46.120Z>",
"status": "cancelled",
"status": "done",
"dependencies": [
"61.23"
],
@@ -3212,7 +3212,7 @@
"title": "Refactor Task Addition AI Call to use generateObjectService",
"description": "Update the logic that calls the AI after using `_buildAddTaskPrompt` in `ai-services.js` to use the new `generateObjectService` from `ai-services-unified.js` with a Zod schema for the single task object.",
"details": "\n\n<info added on 2025-04-20T03:53:27.455Z>\nTo implement this refactoring, you'll need to:\n\n1. Replace direct AI calls with the new `generateObjectService` approach:\n ```javascript\n // OLD approach\n const aiResponse = await callLLM(prompt, modelName, temperature, maxTokens);\n const task = parseAIResponseToTask(aiResponse);\n \n // NEW approach using generateObjectService with config-manager\n import { generateObjectService } from '../services/ai-services-unified.js';\n import { getAIModelForRole, getAITemperature, getAIMaxTokens } from '../config/config-manager.js';\n import { taskSchema } from '../schemas/task-schema.js'; // Create this Zod schema for a single task\n \n const modelName = getAIModelForRole('taskCreation');\n const temperature = getAITemperature('taskCreation');\n const maxTokens = getAIMaxTokens('taskCreation');\n \n const task = await generateObjectService({\n prompt: _buildAddTaskPrompt(...),\n schema: taskSchema,\n modelName,\n temperature,\n maxTokens\n });\n ```\n\n2. Create a Zod schema for the task object in a new file `schemas/task-schema.js` that defines the expected structure.\n\n3. Ensure API key resolution uses the new pattern:\n ```javascript\n // This happens inside generateObjectService, but verify it uses:\n import { resolveEnvVariable } from '../config/config-manager.js';\n // Instead of direct process.env access\n ```\n\n4. Update any error handling to match the new service's error patterns.\n</info added on 2025-04-20T03:53:27.455Z>",
"status": "cancelled",
"status": "done",
"dependencies": [
"61.23"
],
@@ -3223,7 +3223,7 @@
"title": "Refactor General Chat/Update AI Calls",
"description": "Refactor functions like `sendChatWithContext` (and potentially related task update functions in `task-manager.js` if they make direct AI calls) to use `streamTextService` or `generateTextService` from `ai-services-unified.js`.",
"details": "\n\n<info added on 2025-04-20T03:53:03.709Z>\nWhen refactoring `sendChatWithContext` and related functions, ensure they align with the new configuration system:\n\n1. Replace direct model references with config getter calls:\n ```javascript\n // Before\n const model = \"gpt-4\";\n \n // After\n import { getModelForRole } from './config-manager.js';\n const model = getModelForRole('chat'); // or appropriate role\n ```\n\n2. Extract AI parameters from config rather than hardcoding:\n ```javascript\n import { getAIParameters } from './config-manager.js';\n const { temperature, maxTokens } = getAIParameters('chat');\n ```\n\n3. When calling `streamTextService` or `generateTextService`, pass parameters from config:\n ```javascript\n await streamTextService({\n messages,\n model: getModelForRole('chat'),\n temperature: getAIParameters('chat').temperature,\n // other parameters as needed\n });\n ```\n\n4. For logging control, check config settings:\n ```javascript\n import { isLoggingEnabled } from './config-manager.js';\n \n if (isLoggingEnabled('aiCalls')) {\n console.log('AI request:', messages);\n }\n ```\n\n5. Ensure any default behaviors respect configuration defaults rather than hardcoded values.\n</info added on 2025-04-20T03:53:03.709Z>",
"status": "deferred",
"status": "done",
"dependencies": [
"61.23"
],
@@ -3234,7 +3234,7 @@
"title": "Refactor Callers of AI Parsing Utilities",
"description": "Update the code that calls `parseSubtasksFromText`, `parseTaskJsonResponse`, and `parseTasksFromCompletion` to instead directly handle the structured JSON output provided by `generateObjectService` (as the refactored AI calls will now use it).",
"details": "\n\n<info added on 2025-04-20T03:52:45.518Z>\nThe refactoring of callers to AI parsing utilities should align with the new configuration system. When updating these callers:\n\n1. Replace direct API key references with calls to the configuration system using `resolveEnvVariable` for sensitive credentials.\n\n2. Update model selection logic to use the centralized configuration from `.taskmasterconfig` via the getter functions in `config-manager.js`. For example:\n ```javascript\n // Old approach\n const model = \"gpt-4\";\n \n // New approach\n import { getModelForRole } from './config-manager';\n const model = getModelForRole('parsing'); // or appropriate role\n ```\n\n3. Similarly, replace hardcoded parameters with configuration-based values:\n ```javascript\n // Old approach\n const maxTokens = 2000;\n const temperature = 0.2;\n \n // New approach\n import { getAIParameterValue } from './config-manager';\n const maxTokens = getAIParameterValue('maxTokens', 'parsing');\n const temperature = getAIParameterValue('temperature', 'parsing');\n ```\n\n4. Ensure logging behavior respects the centralized logging configuration settings.\n\n5. When calling `generateObjectService`, pass the appropriate configuration context to ensure it uses the correct settings from the centralized configuration system.\n</info added on 2025-04-20T03:52:45.518Z>",
"status": "deferred",
"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3299,7 +3299,7 @@
"title": "Implement `ollama.js` Provider Module",
"description": "Create and implement the `ollama.js` module within `src/ai-providers/`. This module should contain functions to interact with local Ollama models using the **`ollama-ai-provider` library**, adhering to the standardized input/output format defined for `ai-services-unified.js`. Note the specific library used.",
"details": "",
"status": "pending",
"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3308,7 +3308,7 @@
"title": "Implement `mistral.js` Provider Module using Vercel AI SDK",
"description": "Create and implement the `mistral.js` module within `src/ai-providers/`. This module should contain functions to interact with Mistral AI models using the **Vercel AI SDK (`@ai-sdk/mistral`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.",
"details": "",
"status": "pending",
"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3317,7 +3317,7 @@
"title": "Implement `azure.js` Provider Module using Vercel AI SDK",
"description": "Create and implement the `azure.js` module within `src/ai-providers/`. This module should contain functions to interact with Azure OpenAI models using the **Vercel AI SDK (`@ai-sdk/azure`)**, adhering to the standardized input/output format defined for `ai-services-unified.js`.",
"details": "",
"status": "pending",
"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3476,7 +3476,7 @@
"title": "Add setters for temperature, max tokens on per role basis.",
"description": "NOT per model/provider basis though we could probably just define those in the .taskmasterconfig file but then they would be hard-coded. if we let users define them on a per role basis, they will define incorrect values. maybe a good middle ground is to do both - we enforce maximum using known max tokens for input and output at the .taskmasterconfig level but then we also give setters to adjust temp/input tokens/output tokens for each of the 3 roles.",
"details": "",
"status": "pending",
"status": "done",
"dependencies": [],
"parentTaskId": 61
},
@@ -3485,7 +3485,7 @@
"title": "Add support for Bedrock provider with ai sdk and unified service",
"description": "",
"details": "\n\n<info added on 2025-04-25T19:03:42.584Z>\n- Install the Bedrock provider for the AI SDK using your package manager (e.g., npm i @ai-sdk/amazon-bedrock) and ensure the core AI SDK is present[3][4].\n\n- To integrate with your existing config manager, externalize all Bedrock-specific configuration (such as region, model name, and credential provider) into your config management system. For example, store values like region (\"us-east-1\") and model identifier (\"meta.llama3-8b-instruct-v1:0\") in your config files or environment variables, and load them at runtime.\n\n- For credentials, leverage the AWS SDK credential provider chain to avoid hardcoding secrets. Use the @aws-sdk/credential-providers package and pass a credentialProvider (e.g., fromNodeProviderChain()) to the Bedrock provider. This allows your config manager to control credential sourcing via environment, profiles, or IAM roles, consistent with other AWS integrations[1].\n\n- Example integration with config manager:\n ```js\n import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';\n import { fromNodeProviderChain } from '@aws-sdk/credential-providers';\n\n // Assume configManager.get returns your config values\n const region = configManager.get('bedrock.region');\n const model = configManager.get('bedrock.model');\n\n const bedrock = createAmazonBedrock({\n region,\n credentialProvider: fromNodeProviderChain(),\n });\n\n // Use with AI SDK methods\n const { text } = await generateText({\n model: bedrock(model),\n prompt: 'Your prompt here',\n });\n ```\n\n- If your config manager supports dynamic provider selection, you can abstract the provider initialization so switching between Bedrock and other providers (like OpenAI or Anthropic) is seamless.\n\n- Be aware that Bedrock exposes multiple models from different vendors, each with potentially different API behaviors. Your config should allow specifying the exact model string, and your integration should handle any model-specific options or response formats[5].\n\n- For unified service integration, ensure your service layer can route requests to Bedrock using the configured provider instance, and normalize responses if you support multiple AI backends.\n</info added on 2025-04-25T19:03:42.584Z>",
"status": "pending",
"status": "done",
"dependencies": [],
"parentTaskId": 61
}
@@ -3824,7 +3824,7 @@
"description": "Enhance the 'show' command to accept a status parameter that filters subtasks by their current status, allowing users to view only subtasks matching a specific status.",
"details": "This task involves modifying the existing 'show' command functionality to support status-based filtering of subtasks. Implementation details include:\n\n1. Update the command parser to accept a new '--status' or '-s' flag followed by a status value (e.g., 'task-master show --status=in-progress' or 'task-master show -s completed').\n\n2. Modify the show command handler in the appropriate module (likely in scripts/modules/) to:\n - Parse and validate the status parameter\n - Filter the subtasks collection based on the provided status before displaying results\n - Handle invalid status values gracefully with appropriate error messages\n - Support standard status values (e.g., 'not-started', 'in-progress', 'completed', 'blocked')\n - Consider supporting multiple status values (comma-separated or multiple flags)\n\n3. Update the help documentation to include information about the new status filtering option.\n\n4. Ensure backward compatibility - the show command should function as before when no status parameter is provided.\n\n5. Consider adding a '--status-list' option to display all available status values for reference.\n\n6. Update any relevant unit tests to cover the new functionality.\n\n7. If the application uses a database or persistent storage, ensure the filtering happens at the query level for performance when possible.\n\n8. Maintain consistent formatting and styling of output regardless of filtering.",
"testStrategy": "Testing for this feature should include:\n\n1. Unit tests:\n - Test parsing of the status parameter in various formats (--status=value, -s value)\n - Test filtering logic with different status values\n - Test error handling for invalid status values\n - Test backward compatibility (no status parameter)\n - Test edge cases (empty status, case sensitivity, etc.)\n\n2. Integration tests:\n - Verify that the command correctly filters subtasks when a valid status is provided\n - Verify that all subtasks are shown when no status filter is applied\n - Test with a project containing subtasks of various statuses\n\n3. Manual testing:\n - Create a test project with multiple subtasks having different statuses\n - Run the show command with different status filters and verify results\n - Test with both long-form (--status) and short-form (-s) parameters\n - Verify help documentation correctly explains the new parameter\n\n4. Edge case testing:\n - Test with non-existent status values\n - Test with empty project (no subtasks)\n - Test with a project where all subtasks have the same status\n\n5. Documentation verification:\n - Ensure the README or help documentation is updated to include the new parameter\n - Verify examples in documentation work as expected\n\nAll tests should pass before considering this task complete.",
"status": "pending",
"status": "done",
"dependencies": [],
"priority": "medium",
"subtasks": []
@@ -3953,7 +3953,7 @@
"description": "Allow users to specify custom model IDs for Ollama and OpenRouter providers via CLI flag and interactive setup, with appropriate validation and warnings.",
"details": "**CLI (`task-master models --set-<role> <id> --custom`):**\n- Modify `scripts/modules/task-manager/models.js`: `setModel` function.\n- Check internal `available_models.json` first.\n- If not found and `--custom` is provided:\n - Fetch `https://openrouter.ai/api/v1/models`. (Need to add `https` import).\n - If ID found in OpenRouter list: Set `provider: 'openrouter'`, `modelId: <id>`. Warn user about lack of official validation.\n - If ID not found in OpenRouter: Assume Ollama. Set `provider: 'ollama'`, `modelId: <id>`. Warn user strongly (model must be pulled, compatibility not guaranteed).\n- If not found and `--custom` is *not* provided: Fail with error message guiding user to use `--custom`.\n\n**Interactive Setup (`task-master models --setup`):**\n- Modify `scripts/modules/commands.js`: `runInteractiveSetup` function.\n- Add options to `inquirer` choices for each role: `OpenRouter (Enter Custom ID)` and `Ollama (Enter Custom ID)`.\n- If `__CUSTOM_OPENROUTER__` selected:\n - Prompt for custom ID.\n - Fetch OpenRouter list and validate ID exists. Fail setup for that role if not found.\n - Update config and show warning if found.\n- If `__CUSTOM_OLLAMA__` selected:\n - Prompt for custom ID.\n - Update config directly (no live validation).\n - Show strong Ollama warning.",
"testStrategy": "**Unit Tests:**\n- Test `setModel` logic for internal models, custom OpenRouter (valid/invalid), custom Ollama, missing `--custom` flag.\n- Test `runInteractiveSetup` for new custom options flow, including OpenRouter validation success/failure.\n\n**Integration Tests:**\n- Test the `task-master models` command with `--custom` flag variations.\n- Test the `task-master models --setup` interactive flow for custom options.\n\n**Manual Testing:**\n- Run `task-master models --setup` and select custom options.\n- Run `task-master models --set-main <valid_openrouter_id> --custom`. Verify config and warning.\n- Run `task-master models --set-main <invalid_openrouter_id> --custom`. Verify error.\n- Run `task-master models --set-main <ollama_model_id> --custom`. Verify config and warning.\n- Run `task-master models --set-main <custom_id>` (without `--custom`). Verify error.\n- Check `getModelConfiguration` output reflects custom models correctly.",
"status": "in-progress",
"status": "done",
"dependencies": [],
"priority": "medium",
"subtasks": []
@@ -4000,6 +4000,145 @@
"details": "Research existing E2E testing approaches for MCP servers, referencing examples such as the MCP Server E2E Testing Example. Architect a test harness (preferably in Python or Node.js) that can launch the FastMCP server as a subprocess, establish stdio communication, and send well-formed JSON tool request messages. \n\nImplementation details:\n1. Use `subprocess.Popen` (Python) or `child_process.spawn` (Node.js) to launch the FastMCP server with appropriate stdin/stdout pipes\n2. Implement a message protocol handler that formats JSON requests with proper line endings and message boundaries\n3. Create a buffered reader for stdout that correctly handles chunked responses and reconstructs complete JSON objects\n4. Develop a request/response correlation mechanism using unique IDs for each request\n5. Implement timeout handling for requests that don't receive responses\n\nImplement robust parsing of JSON responses, including error handling for malformed or unexpected output. The framework should support defining test cases as scripts or data files, allowing for easy addition of new scenarios. \n\nTest case structure should include:\n- Setup phase for environment preparation\n- Sequence of tool requests with expected responses\n- Validation functions for response verification\n- Teardown phase for cleanup\n\nEnsure the framework can assert on both the structure and content of responses, and provide clear logging for debugging. Document setup, usage, and extension instructions. Consider cross-platform compatibility and CI integration.\n\n**Clarification:** The E2E test framework should focus on testing the FastMCP server's ability to correctly process tool requests and return appropriate responses. This includes verifying that the server properly handles different types of tool calls (e.g., file operations, web requests, task management), validates input parameters, and returns well-structured responses. The framework should be designed to be extensible, allowing new test cases to be added as the server's capabilities evolve. Tests should cover both happy paths and error conditions to ensure robust server behavior under various scenarios.",
"testStrategy": "Verify the framework by implementing a suite of representative E2E tests that cover typical tool requests and edge cases. Specific test cases should include:\n\n1. Basic tool request/response validation\n - Send a simple file_read request and verify response structure\n - Test with valid and invalid file paths\n - Verify error handling for non-existent files\n\n2. Concurrent request handling\n - Send multiple requests in rapid succession\n - Verify all responses are received and correlated correctly\n\n3. Large payload testing\n - Test with large file contents (>1MB)\n - Verify correct handling of chunked responses\n\n4. Error condition testing\n - Malformed JSON requests\n - Invalid tool names\n - Missing required parameters\n - Server crash recovery\n\nConfirm that tests can start and stop the FastMCP server, send requests, and accurately parse and validate responses. Implement specific assertions for response timing, structure validation using JSON schema, and content verification. Intentionally introduce malformed requests and simulate server errors to ensure robust error handling. \n\nImplement detailed logging with different verbosity levels:\n- ERROR: Failed tests and critical issues\n- WARNING: Unexpected but non-fatal conditions\n- INFO: Test progress and results\n- DEBUG: Raw request/response data\n\nRun the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.",
"subtasks": []
},
{
"id": 77,
"title": "Implement AI Usage Telemetry for Taskmaster (with external analytics endpoint)",
"description": "Capture detailed AI usage data (tokens, costs, models, commands) within Taskmaster and send this telemetry to an external, closed-source analytics backend for usage analysis, profitability measurement, and pricing optimization.",
"details": "* Add a telemetry utility (`logAiUsage`) within `ai-services.js` to track AI usage.\n* Collected telemetry data fields must include:\n * `timestamp`: Current date/time in ISO 8601.\n * `userId`: Unique user identifier generated at setup (stored in `.taskmasterconfig`).\n * `commandName`: Taskmaster command invoked (`expand`, `parse-prd`, `research`, etc.).\n * `modelUsed`: Name/ID of the AI model invoked.\n * `inputTokens`: Count of input tokens used.\n * `outputTokens`: Count of output tokens generated.\n * `totalTokens`: Sum of input and output tokens.\n * `totalCost`: Monetary cost calculated using pricing from `supported_models.json`.\n* Send telemetry payload securely via HTTPS POST request from user's Taskmaster installation directly to the closed-source analytics API (Express/Supabase backend).\n* Introduce a privacy notice and explicit user consent prompt upon initial installation/setup to enable telemetry.\n* Provide a graceful fallback if telemetry request fails (e.g., no internet connectivity).\n* Optionally display a usage summary directly in Taskmaster CLI output for user transparency.",
"testStrategy": "",
"status": "in-progress",
"dependencies": [],
"priority": "medium",
"subtasks": [
{
"id": 1,
"title": "Implement telemetry utility and data collection",
"description": "Create the logAiUsage utility in ai-services.js that captures all required telemetry data fields",
"dependencies": [],
"details": "Develop the logAiUsage function that collects timestamp, userId, commandName, modelUsed, inputTokens, outputTokens, totalTokens, and totalCost. Implement token counting logic and cost calculation using pricing from supported_models.json. Ensure proper error handling and data validation.\n<info added on 2025-05-05T21:08:51.413Z>\nDevelop the logAiUsage function that collects timestamp, userId, commandName, modelUsed, inputTokens, outputTokens, totalTokens, and totalCost. Implement token counting logic and cost calculation using pricing from supported_models.json. Ensure proper error handling and data validation.\n\nImplementation Plan:\n1. Define `logAiUsage` function in `ai-services-unified.js` that accepts parameters: userId, commandName, providerName, modelId, inputTokens, and outputTokens.\n\n2. Implement data collection and calculation logic:\n - Generate timestamp using `new Date().toISOString()`\n - Calculate totalTokens by adding inputTokens and outputTokens\n - Create a helper function `_getCostForModel(providerName, modelId)` that:\n - Loads pricing data from supported-models.json\n - Finds the appropriate provider/model entry\n - Returns inputCost and outputCost rates or defaults if not found\n - Calculate totalCost using the formula: ((inputTokens/1,000,000) * inputCost) + ((outputTokens/1,000,000) * outputCost)\n - Assemble complete telemetryData object with all required fields\n\n3. Add initial logging functionality:\n - Use existing log utility to record telemetry data at 'info' level\n - Implement proper error handling with try/catch blocks\n\n4. Integrate with `_unifiedServiceRunner`:\n - Modify to accept commandName and userId parameters\n - After successful API calls, extract usage data from results\n - Call logAiUsage with the appropriate parameters\n\n5. Update provider functions in src/ai-providers/*.js:\n - Ensure all provider functions return both the primary result and usage statistics\n - Standardize the return format to include a usage object with inputTokens and outputTokens\n</info added on 2025-05-05T21:08:51.413Z>\n<info added on 2025-05-07T17:28:57.361Z>\nTo implement the AI usage telemetry effectively, we need to update each command across our different stacks. Let's create a structured approach for this implementation:\n\nCommand Integration Plan:\n1. Core Function Commands:\n - Identify all AI-utilizing commands in the core function library\n - For each command, modify to pass commandName and userId to _unifiedServiceRunner\n - Update return handling to process and forward usage statistics\n\n2. Direct Function Commands:\n - Map all direct function commands that leverage AI capabilities\n - Implement telemetry collection at the appropriate execution points\n - Ensure consistent error handling and telemetry reporting\n\n3. MCP Tool Stack Commands:\n - Inventory all MCP commands with AI dependencies\n - Standardize the telemetry collection approach across the tool stack\n - Add telemetry hooks that maintain backward compatibility\n\nFor each command category, we'll need to:\n- Document current implementation details\n- Define specific code changes required\n- Create tests to verify telemetry is being properly collected\n- Establish validation procedures to ensure data accuracy\n</info added on 2025-05-07T17:28:57.361Z>",
"status": "in-progress",
"testStrategy": "Unit test the utility with mock AI usage data to verify all fields are correctly captured and calculated"
},
{
"id": 2,
"title": "Implement secure telemetry transmission",
"description": "Create a secure mechanism to transmit telemetry data to the external analytics endpoint",
"dependencies": [
1
],
"details": "Implement HTTPS POST request functionality to securely send the telemetry payload to the closed-source analytics API. Include proper encryption in transit using TLS. Implement retry logic and graceful fallback mechanisms for handling transmission failures due to connectivity issues.",
"status": "pending",
"testStrategy": "Test with mock endpoints to verify secure transmission and proper handling of various response scenarios"
},
{
"id": 3,
"title": "Develop user consent and privacy notice system",
"description": "Create a privacy notice and explicit consent mechanism during Taskmaster setup",
"dependencies": [],
"details": "Design and implement a clear privacy notice explaining what data is collected and how it's used. Create a user consent prompt during initial installation/setup that requires explicit opt-in. Store the consent status in the .taskmasterconfig file and respect this setting throughout the application.",
"status": "pending",
"testStrategy": "Test the consent flow to ensure users can opt in/out and that their preference is properly stored and respected"
},
{
"id": 4,
"title": "Integrate telemetry into Taskmaster commands",
"description": "Integrate the telemetry utility across all relevant Taskmaster commands",
"dependencies": [
1,
3
],
"details": "Modify each Taskmaster command (expand, parse-prd, research, etc.) to call the logAiUsage utility after AI interactions. Ensure telemetry is only sent if user has provided consent. Implement the integration in a way that doesn't impact command performance or user experience.\n<info added on 2025-05-06T17:57:13.980Z>\nModify each Taskmaster command (expand, parse-prd, research, etc.) to call the logAiUsage utility after AI interactions. Ensure telemetry is only sent if user has provided consent. Implement the integration in a way that doesn't impact command performance or user experience.\n\nSuccessfully integrated telemetry calls into `addTask` (core) and `addTaskDirect` (MCP) functions by passing `commandName` and `outputType` parameters to the telemetry system. The `ai-services-unified.js` module now logs basic telemetry data, including calculated cost information, whenever the `add-task` command or tool is invoked. This integration respects user consent settings and maintains performance standards.\n</info added on 2025-05-06T17:57:13.980Z>",
"status": "in-progress",
"testStrategy": "Integration tests to verify telemetry is correctly triggered across different commands with proper data"
},
{
"id": 5,
"title": "Implement usage summary display",
"description": "Create an optional feature to display AI usage summary in the CLI output",
"dependencies": [
1,
4
],
"details": "Develop functionality to display a concise summary of AI usage (tokens used, estimated cost) directly in the CLI output after command execution. Make this feature configurable through Taskmaster settings. Ensure the display is formatted clearly and doesn't clutter the main command output.",
"status": "pending",
"testStrategy": "User acceptance testing to verify the summary display is clear, accurate, and properly configurable"
},
{
"id": 6,
"title": "Telemetry Integration for parse-prd",
"description": "Integrate AI usage telemetry capture and propagation for the parse-prd functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/parse-prd.js`):**\n * Modify AI service call to include `commandName: \\'parse-prd\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/parse-prd.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/parse-prd.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 7,
"title": "Telemetry Integration for expand-task",
"description": "Integrate AI usage telemetry capture and propagation for the expand-task functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/expand-task.js`):**\n * Modify AI service call to include `commandName: \\'expand-task\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/expand-task.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/expand-task.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 8,
"title": "Telemetry Integration for expand-all-tasks",
"description": "Integrate AI usage telemetry capture and propagation for the expand-all-tasks functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/expand-all-tasks.js`):**\n * Modify AI service call (likely within a loop or called by a helper) to include `commandName: \\'expand-all-tasks\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Aggregate or handle `telemetryData` appropriately if multiple AI calls are made.\n * Return object including aggregated/relevant `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/expand-all-tasks.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/expand-all.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 9,
"title": "Telemetry Integration for update-tasks",
"description": "Integrate AI usage telemetry capture and propagation for the update-tasks (bulk update) functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-tasks.js`):**\n * Modify AI service call (likely within a loop) to include `commandName: \\'update-tasks\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }` for each AI call.\n * Aggregate or handle `telemetryData` appropriately for multiple calls.\n * Return object including aggregated/relevant `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-tasks.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 10,
"title": "Telemetry Integration for update-task-by-id",
"description": "Integrate AI usage telemetry capture and propagation for the update-task-by-id functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-task-by-id.js`):**\n * Modify AI service call to include `commandName: \\'update-task\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData`.\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-task-by-id.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update-task.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 11,
"title": "Telemetry Integration for update-subtask-by-id",
"description": "Integrate AI usage telemetry capture and propagation for the update-subtask-by-id functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-subtask-by-id.js`):**\n * Verify if this function *actually* calls an AI service. If it only appends text, telemetry integration might not apply directly here, but ensure its callers handle telemetry if they use AI.\n * *If it calls AI:* Modify AI service call to include `commandName: \\'update-subtask\\'` and `outputType`.\n * *If it calls AI:* Receive `{ mainResult, telemetryData }`.\n * *If it calls AI:* Return object including `telemetryData`.\n * *If it calls AI:* Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-subtask-by-id.js`):**\n * *If core calls AI:* Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * *If core calls AI:* Pass `outputFormat: \\'json\\'` if applicable.\n * *If core calls AI:* Receive `{ ..., telemetryData }` from core.\n * *If core calls AI:* Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update-subtask.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through (if present).\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
},
{
"id": 12,
"title": "Telemetry Integration for analyze-task-complexity",
"description": "Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/analyze-task-complexity.js`):**\n * Modify AI service call to include `commandName: \\'analyze-complexity\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData` (perhaps alongside the complexity report data).\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/analyze.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"status": "pending",
"dependencies": [],
"parentTaskId": 77
}
]
},
{
"id": 80,
"title": "Implement Unique User ID Generation and Storage During Installation",
"description": "Generate a unique user identifier during npm installation and store it in the .taskmasterconfig globals to enable anonymous usage tracking and telemetry without requiring user registration.",
"details": "This task involves implementing a mechanism to generate and store a unique user identifier during the npm installation process of Taskmaster. The implementation should:\n\n1. Create a post-install script that runs automatically after npm install completes\n2. Generate a cryptographically secure random UUID v4 as the unique user identifier\n3. Check if a user ID already exists in the .taskmasterconfig file before generating a new one\n4. Add the generated user ID to the globals section of the .taskmasterconfig file\n5. Ensure the user ID persists across updates but is regenerated on fresh installations\n6. Handle edge cases such as failed installations, manual deletions of the config file, or permission issues\n7. Add appropriate logging to notify users that an anonymous ID is being generated (with clear privacy messaging)\n8. Document the purpose of this ID in the codebase and user documentation\n9. Ensure the ID generation is compatible with all supported operating systems\n10. Make the ID accessible to the telemetry system implemented in Task #77\n\nThe implementation should respect user privacy by:\n- Not collecting any personally identifiable information\n- Making it clear in documentation how users can opt out of telemetry\n- Ensuring the ID cannot be traced back to specific users or installations\n\nThis user ID will serve as the foundation for anonymous usage tracking, helping to understand how Taskmaster is used without compromising user privacy.",
"testStrategy": "Testing for this feature should include:\n\n1. **Unit Tests**:\n - Verify the UUID generation produces valid UUIDs\n - Test the config file reading and writing functionality\n - Ensure proper error handling for file system operations\n - Verify the ID remains consistent across multiple reads\n\n2. **Integration Tests**:\n - Run a complete npm installation in a clean environment and verify a new ID is generated\n - Simulate an update installation and verify the existing ID is preserved\n - Test the interaction between the ID generation and the telemetry system\n - Verify the ID is correctly stored in the expected location in .taskmasterconfig\n\n3. **Manual Testing**:\n - Perform fresh installations on different operating systems (Windows, macOS, Linux)\n - Verify the installation process completes without errors\n - Check that the .taskmasterconfig file contains the generated ID\n - Test scenarios where the config file is manually deleted or corrupted\n\n4. **Edge Case Testing**:\n - Test behavior when the installation is run without sufficient permissions\n - Verify handling of network disconnections during installation\n - Test with various npm versions to ensure compatibility\n - Verify behavior when .taskmasterconfig already exists but doesn't contain a user ID section\n\n5. **Validation**:\n - Create a simple script to extract and analyze generated IDs to ensure uniqueness\n - Verify the ID format meets UUID v4 specifications\n - Confirm the ID is accessible to the telemetry system from Task #77\n\nThe test plan should include documentation of all test cases, expected results, and actual outcomes. A successful implementation will generate unique IDs for each installation while maintaining that ID across updates.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"subtasks": []
}
]
}