feat(telemetry): Integrate AI usage telemetry into update-subtask
This commit applies the standard telemetry pattern to the update-subtask command and its corresponding MCP tool.
Key Changes (see the illustrative sketch after this list):
1. Core Logic (scripts/modules/task-manager/update-subtask-by-id.js):
- The call to generateTextService now includes commandName: 'update-subtask' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for the appended content.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { updatedSubtask: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/update-subtask-by-id.js):
- The call to the core updateSubtaskById function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
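Schematically, the combined change looks like the sketch below. This is a minimal illustration, not the actual implementation: the import paths, the `appendToSubtaskDetails` helper, and the argument names (`tasksPath`, `args.tasksJsonPath`, etc.) are hypothetical placeholders. Only `generateTextService`, `displayAiUsageSummary`, the `commandName`/`outputType` context, the `{ mainResult, telemetryData }` response shape, and the `{ updatedSubtask, telemetryData }` / `data.telemetryData` return shapes come from the change itself.

```js
// Sketch only -- module paths and helper names are illustrative placeholders.
import { generateTextService } from '../ai-services-unified.js'; // assumed path
import { displayAiUsageSummary } from '../ui.js'; // assumed path

// Hypothetical stand-in for the real persistence logic, which reads and
// rewrites the subtask inside tasks.json.
function appendToSubtaskDetails(tasksPath, subtaskId, text) {
  return { id: subtaskId, details: text };
}

// Core logic (scripts/modules/task-manager/update-subtask-by-id.js)
export async function updateSubtaskById(tasksPath, subtaskId, prompt, context = {}, outputFormat = 'text') {
  const { session, commandName = 'update-subtask', outputType = 'cli' } = context;

  // Call the AI service with the telemetry context and capture the full response.
  const { mainResult, telemetryData } = await generateTextService({
    prompt, // illustrative: the real prompt construction carries more context
    session,
    commandName,
    outputType
  });

  // mainResult (the AI-generated text) is what gets appended to the subtask.
  const updatedSubtask = appendToSubtaskDetails(tasksPath, subtaskId, mainResult);

  // In CLI mode, surface the AI usage summary to the user.
  if (outputFormat === 'text' && telemetryData) {
    displayAiUsageSummary(telemetryData);
  }

  // Propagate telemetry to callers such as the MCP direct function.
  return { updatedSubtask, telemetryData };
}

// Direct function (mcp-server/src/core/direct-functions/update-subtask-by-id.js)
export async function updateSubtaskByIdDirect(args, { session } = {}) {
  const coreResult = await updateSubtaskById(
    args.tasksJsonPath, // argument names here are assumptions
    args.id,
    args.prompt,
    { session, commandName: 'update-subtask', outputType: 'mcp' },
    'json' // non-text output format keeps CLI display out of the MCP path
  );

  // Extract telemetryData so handleApiResult can forward it to the MCP client.
  return {
    success: true,
    data: {
      updatedSubtask: coreResult.updatedSubtask,
      telemetryData: coreResult.telemetryData
    }
  };
}
```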
@@ -230,7 +230,7 @@ Apply telemetry pattern from telemetry.mdc:
* Verify `handleApiResult` correctly passes `data.telemetryData` through.

-## 11. Telemetry Integration for update-subtask-by-id [pending]
+## 11. Telemetry Integration for update-subtask-by-id [in-progress]
### Dependencies: None
### Description: Integrate AI usage telemetry capture and propagation for the update-subtask-by-id functionality.
### Details:

@@ -99,9 +99,18 @@ The testing strategy for the expanded telemetry system should be comprehensive a
# Subtasks:
## 1. Implement Additional Telemetry Data Collection Points [pending]
### Dependencies: None
-### Description: Extend the telemetry system to capture new metrics including command execution frequency, feature usage patterns, performance metrics, error rates, session data, and system environment information.
+### Description: Extend the telemetry system to capture new metrics including command execution frequency, feature usage patterns, performance metrics, error rates, session data, and system environment information. [Updated: 5/8/2025] [Updated: 5/8/2025] [Updated: 5/8/2025]
### Details:
Create new telemetry event types and collection points throughout the codebase. Implement hooks in the command execution pipeline to track timing and frequency. Add performance monitoring for key operations using high-resolution timers. Capture system environment data at startup. Implement error tracking that records error types and frequencies. Add session tracking with start/end events and periodic heartbeats.
+<info added on 2025-05-08T22:57:23.259Z>
+This is a test note added via the MCP tool. The telemetry collection system should be thoroughly tested before implementation.
+</info added on 2025-05-08T22:57:23.259Z>
+<info added on 2025-05-08T22:59:29.818Z>
+For future server integration, Prometheus time-series database with its companion storage solutions (like Cortex or Thanos) would be an excellent choice for handling our telemetry data. The local telemetry collection system should be designed with compatible data structures and metrics formatting that will allow seamless export to Prometheus once server-side infrastructure is in place. This approach would provide powerful querying capabilities, visualization options through Grafana, and scalable long-term storage. Consider implementing the OpenMetrics format locally to ensure compatibility with the Prometheus ecosystem.
+</info added on 2025-05-08T22:59:29.818Z>
+<info added on 2025-05-08T23:02:59.692Z>
+Prometheus would be an excellent choice for server-side telemetry storage and analysis. When designing the local telemetry collection system, we should structure our metrics and events to be compatible with Prometheus' data model (time series with key-value pairs). This would allow for straightforward export to Prometheus once server infrastructure is established. For long-term storage, companion solutions like Cortex or Thanos could extend Prometheus' capabilities, enabling historical analysis and scalable retention. Additionally, adopting the OpenMetrics format locally would ensure seamless integration with the broader Prometheus ecosystem, including visualization through Grafana dashboards.
+</info added on 2025-05-08T23:02:59.692Z>

## 2. Build Robust Local Telemetry Storage System [pending]
### Dependencies: None
@@ -5015,7 +5015,7 @@
"title": "Telemetry Integration for update-subtask-by-id",
"description": "Integrate AI usage telemetry capture and propagation for the update-subtask-by-id functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/update-subtask-by-id.js`):**\n * Verify if this function *actually* calls an AI service. If it only appends text, telemetry integration might not apply directly here, but ensure its callers handle telemetry if they use AI.\n * *If it calls AI:* Modify AI service call to include `commandName: \\'update-subtask\\'` and `outputType`.\n * *If it calls AI:* Receive `{ mainResult, telemetryData }`.\n * *If it calls AI:* Return object including `telemetryData`.\n * *If it calls AI:* Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/update-subtask-by-id.js`):**\n * *If core calls AI:* Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * *If core calls AI:* Pass `outputFormat: \\'json\\'` if applicable.\n * *If core calls AI:* Receive `{ ..., telemetryData }` from core.\n * *If core calls AI:* Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/update-subtask.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through (if present).\n",
-"status": "pending",
+"status": "in-progress",
"dependencies": [],
"parentTaskId": 77
},
@@ -5154,9 +5154,9 @@
{
"id": 1,
"title": "Implement Additional Telemetry Data Collection Points",
-"description": "Extend the telemetry system to capture new metrics including command execution frequency, feature usage patterns, performance metrics, error rates, session data, and system environment information.",
+"description": "Extend the telemetry system to capture new metrics including command execution frequency, feature usage patterns, performance metrics, error rates, session data, and system environment information. [Updated: 5/8/2025] [Updated: 5/8/2025] [Updated: 5/8/2025]",
"dependencies": [],
-"details": "Create new telemetry event types and collection points throughout the codebase. Implement hooks in the command execution pipeline to track timing and frequency. Add performance monitoring for key operations using high-resolution timers. Capture system environment data at startup. Implement error tracking that records error types and frequencies. Add session tracking with start/end events and periodic heartbeats.",
+"details": "Create new telemetry event types and collection points throughout the codebase. Implement hooks in the command execution pipeline to track timing and frequency. Add performance monitoring for key operations using high-resolution timers. Capture system environment data at startup. Implement error tracking that records error types and frequencies. Add session tracking with start/end events and periodic heartbeats.\n<info added on 2025-05-08T22:57:23.259Z>\nThis is a test note added via the MCP tool. The telemetry collection system should be thoroughly tested before implementation.\n</info added on 2025-05-08T22:57:23.259Z>\n<info added on 2025-05-08T22:59:29.818Z>\nFor future server integration, Prometheus time-series database with its companion storage solutions (like Cortex or Thanos) would be an excellent choice for handling our telemetry data. The local telemetry collection system should be designed with compatible data structures and metrics formatting that will allow seamless export to Prometheus once server-side infrastructure is in place. This approach would provide powerful querying capabilities, visualization options through Grafana, and scalable long-term storage. Consider implementing the OpenMetrics format locally to ensure compatibility with the Prometheus ecosystem.\n</info added on 2025-05-08T22:59:29.818Z>\n<info added on 2025-05-08T23:02:59.692Z>\nPrometheus would be an excellent choice for server-side telemetry storage and analysis. When designing the local telemetry collection system, we should structure our metrics and events to be compatible with Prometheus' data model (time series with key-value pairs). This would allow for straightforward export to Prometheus once server infrastructure is established. For long-term storage, companion solutions like Cortex or Thanos could extend Prometheus' capabilities, enabling historical analysis and scalable retention. Additionally, adopting the OpenMetrics format locally would ensure seamless integration with the broader Prometheus ecosystem, including visualization through Grafana dashboards.\n</info added on 2025-05-08T23:02:59.692Z>",
"status": "pending",
"testStrategy": "Create unit tests for each new telemetry point. Implement integration tests that verify telemetry is captured during normal application usage. Add mock services to verify data format correctness."
},