chore: task management and formatting.

Eyal Toledano
2025-05-09 14:12:21 -04:00
parent 04b6a3cb21
commit 59230c4d91
3 changed files with 389 additions and 285 deletions

View File

@@ -256,7 +256,7 @@ Apply telemetry pattern from telemetry.mdc:
## 12. Telemetry Integration for analyze-task-complexity [done]
### Dependencies: None
### Description: Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality.
### Description: Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality. [Updated: 5/9/2025]
### Details:
\
Apply telemetry pattern from telemetry.mdc:
@@ -276,6 +276,110 @@ Apply telemetry pattern from telemetry.mdc:
3. **Tool (`mcp-server/src/tools/analyze.js`):**
* Verify `handleApiResult` correctly passes `data.telemetryData` through.
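As a rough illustration of this handoff (the function names and the `report` key below are placeholders, not the project's actual exports), the direct-function layer receives `telemetryData` from core and wraps it in the standard `{ success: true, data: { ... } }` envelope so the tool layer only has to forward `data.telemetryData` unchanged:
```javascript
// Hypothetical sketch only: function names and the `report` key are
// placeholders, not the project's actual exports.
async function analyzeComplexityDirect(args, log) {
  try {
    // Core is expected to return { mainResult, telemetryData }
    const { mainResult, telemetryData } = await analyzeComplexityCore({
      ...args,
      commandName: 'analyze-complexity',
      outputType: 'mcp',
      outputFormat: 'json'
    });
    // Standard envelope: the tool layer forwards data.telemetryData as-is
    return { success: true, data: { report: mainResult, telemetryData } };
  } catch (error) {
    log.error(`analyze-complexity failed: ${error.message}`);
    return { success: false, error: { message: error.message } };
  }
}
```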
<info added on 2025-05-09T04:02:44.847Z>
## Implementation Details for Telemetry Integration
### Best Practices for Implementation
1. **Use Structured Telemetry Objects:**
- Create a standardized `TelemetryEvent` object with fields:
```javascript
{
commandName: string, // e.g., 'analyze-complexity'
timestamp: ISO8601 string,
duration: number, // in milliseconds
inputTokens: number,
outputTokens: number,
model: string, // e.g., 'gpt-4'
success: boolean,
errorType?: string, // if applicable
metadata: object // command-specific context
}
```
2. **Asynchronous Telemetry Processing:**
- Use non-blocking telemetry submission to avoid impacting performance
- Implement queue-based processing for reliability during network issues (a minimal queue sketch follows the code examples below)
3. **Error Handling:**
- Implement robust try/catch blocks around telemetry operations
- Ensure telemetry failures don't affect core functionality
- Log telemetry failures locally for debugging
4. **Privacy Considerations:**
- Never include PII or sensitive data in telemetry
- Implement data minimization principles
- Add sanitization functions for metadata fields (an illustrative `sanitizeErrorMessage` helper is sketched in the examples below)
5. **Testing Strategy:**
- Create mock telemetry endpoints for testing
- Add unit tests verifying correct telemetry data structure
- Implement integration tests for end-to-end telemetry flow
### Code Implementation Examples
```javascript
// Example telemetry submission function
async function submitTelemetry(telemetryData, endpoint) {
try {
// Non-blocking submission
fetch(endpoint, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(telemetryData)
}).catch(err => console.error('Telemetry submission failed:', err));
} catch (error) {
// Log locally but don't disrupt main flow
console.error('Telemetry error:', error);
}
}
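// Illustrative sanitizer for metadata fields (see "Privacy Considerations"):
// an assumed helper, not an existing project utility. It truncates long
// messages and redacts common secret-looking tokens before they are placed
// in telemetry metadata (used by the error path below).
function sanitizeErrorMessage(message, maxLength = 200) {
  if (typeof message !== 'string') return 'unknown error';
  return message
    .replace(/(api[_-]?key|token|secret)\s*[:=]\s*\S+/gi, '$1=[REDACTED]')
    .slice(0, maxLength);
}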
// Example integration in AI service call
async function callAiService(params) {
const startTime = Date.now();
try {
const result = await aiService.call({
...params,
commandName: 'analyze-complexity',
outputType: 'mcp'
});
// Construct telemetry object
const telemetryData = {
commandName: 'analyze-complexity',
timestamp: new Date().toISOString(),
duration: Date.now() - startTime,
inputTokens: result.usage?.prompt_tokens || 0,
outputTokens: result.usage?.completion_tokens || 0,
model: result.model || 'unknown',
success: true,
metadata: {
taskId: params.taskId,
// Add other relevant non-sensitive metadata
}
};
return { mainResult: result.data, telemetryData };
} catch (error) {
// Error telemetry
const telemetryData = {
commandName: 'analyze-complexity',
timestamp: new Date().toISOString(),
duration: Date.now() - startTime,
success: false,
errorType: error.name,
metadata: {
taskId: params.taskId,
errorMessage: sanitizeErrorMessage(error.message)
}
};
// Attach the telemetry to the error so the caller can still record it,
// then re-throw the original error
error.telemetryData = telemetryData;
throw error;
}
}
```
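To complement the examples above, here is a minimal sketch of the queue-based processing mentioned under best practice 2. It is an illustration under stated assumptions, not existing project code: the class name, flush interval, and batching behavior are invented for the example. The point is that events are buffered in memory and flushed in the background, so a slow or failing endpoint never blocks the command.
```javascript
// Illustrative telemetry queue (assumed design, not an existing project API).
// Events are buffered and flushed periodically; a failed flush re-queues the
// batch so transient network problems are retried on the next interval.
class TelemetryQueue {
  constructor(endpoint, flushIntervalMs = 5000) {
    this.endpoint = endpoint;
    this.queue = [];
    this.timer = setInterval(() => this.flush(), flushIntervalMs);
    // Don't keep the process alive just for telemetry (Node.js timers)
    if (typeof this.timer.unref === 'function') this.timer.unref();
  }

  enqueue(event) {
    this.queue.push(event);
  }

  async flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0, this.queue.length);
    try {
      await fetch(this.endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch)
      });
    } catch (error) {
      // Re-queue and log locally; never surface telemetry errors to callers
      this.queue.unshift(...batch);
      console.error('Telemetry flush failed:', error.message);
    }
  }
}
```
For the testing strategy in best practice 5, a unit test can mock the network call and assert that submission never throws even when the endpoint is down. The sketch below assumes a Jest-style runner and the `submitTelemetry` function from the examples above:
```javascript
// Illustrative unit test (assumes Jest globals and a mocked global fetch).
test('submitTelemetry never throws when the endpoint is unreachable', async () => {
  global.fetch = jest.fn().mockRejectedValue(new Error('network down'));

  const event = {
    commandName: 'analyze-complexity',
    timestamp: new Date().toISOString(),
    duration: 42,
    inputTokens: 100,
    outputTokens: 50,
    model: 'gpt-4',
    success: true,
    metadata: {}
  };

  await expect(submitTelemetry(event, 'https://example.test/telemetry')).resolves.toBeUndefined();
  expect(global.fetch).toHaveBeenCalledTimes(1);
});
```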
</info added on 2025-05-09T04:02:44.847Z>
## 13. Update google.js for Telemetry Compatibility [pending]
### Dependencies: None

View File

@@ -5028,8 +5028,8 @@
{
"id": 12,
"title": "Telemetry Integration for analyze-task-complexity",
"description": "Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality.",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/analyze-task-complexity.js`):**\n * Modify AI service call to include `commandName: \\'analyze-complexity\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData` (perhaps alongside the complexity report data).\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/analyze.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n",
"description": "Integrate AI usage telemetry capture and propagation for the analyze-task-complexity functionality. [Updated: 5/9/2025]",
"details": "\\\nApply telemetry pattern from telemetry.mdc:\n\n1. **Core (`scripts/modules/task-manager/analyze-task-complexity.js`):**\n * Modify AI service call to include `commandName: \\'analyze-complexity\\'` and `outputType`.\n * Receive `{ mainResult, telemetryData }`.\n * Return object including `telemetryData` (perhaps alongside the complexity report data).\n * Handle CLI display via `displayAiUsageSummary` if applicable.\n\n2. **Direct (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**\n * Pass `commandName`, `outputType: \\'mcp\\'` to core.\n * Pass `outputFormat: \\'json\\'` if applicable.\n * Receive `{ ..., telemetryData }` from core.\n * Return `{ success: true, data: { ..., telemetryData } }`.\n\n3. **Tool (`mcp-server/src/tools/analyze.js`):**\n * Verify `handleApiResult` correctly passes `data.telemetryData` through.\n\n<info added on 2025-05-09T04:02:44.847Z>\n## Implementation Details for Telemetry Integration\n\n### Best Practices for Implementation\n\n1. **Use Structured Telemetry Objects:**\n - Create a standardized `TelemetryEvent` object with fields:\n ```javascript\n {\n commandName: string, // e.g., 'analyze-complexity'\n timestamp: ISO8601 string,\n duration: number, // in milliseconds\n inputTokens: number,\n outputTokens: number,\n model: string, // e.g., 'gpt-4'\n success: boolean,\n errorType?: string, // if applicable\n metadata: object // command-specific context\n }\n ```\n\n2. **Asynchronous Telemetry Processing:**\n - Use non-blocking telemetry submission to avoid impacting performance\n - Implement queue-based processing for reliability during network issues\n\n3. **Error Handling:**\n - Implement robust try/catch blocks around telemetry operations\n - Ensure telemetry failures don't affect core functionality\n - Log telemetry failures locally for debugging\n\n4. **Privacy Considerations:**\n - Never include PII or sensitive data in telemetry\n - Implement data minimization principles\n - Add sanitization functions for metadata fields\n\n5. 
**Testing Strategy:**\n - Create mock telemetry endpoints for testing\n - Add unit tests verifying correct telemetry data structure\n - Implement integration tests for end-to-end telemetry flow\n\n### Code Implementation Examples\n\n```javascript\n// Example telemetry submission function\nasync function submitTelemetry(telemetryData, endpoint) {\n try {\n // Non-blocking submission\n fetch(endpoint, {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify(telemetryData)\n }).catch(err => console.error('Telemetry submission failed:', err));\n } catch (error) {\n // Log locally but don't disrupt main flow\n console.error('Telemetry error:', error);\n }\n}\n\n// Example integration in AI service call\nasync function callAiService(params) {\n const startTime = Date.now();\n try {\n const result = await aiService.call({\n ...params,\n commandName: 'analyze-complexity',\n outputType: 'mcp'\n });\n \n // Construct telemetry object\n const telemetryData = {\n commandName: 'analyze-complexity',\n timestamp: new Date().toISOString(),\n duration: Date.now() - startTime,\n inputTokens: result.usage?.prompt_tokens || 0,\n outputTokens: result.usage?.completion_tokens || 0,\n model: result.model || 'unknown',\n success: true,\n metadata: {\n taskId: params.taskId,\n // Add other relevant non-sensitive metadata\n }\n };\n \n return { mainResult: result.data, telemetryData };\n } catch (error) {\n // Error telemetry\n const telemetryData = {\n commandName: 'analyze-complexity',\n timestamp: new Date().toISOString(),\n duration: Date.now() - startTime,\n success: false,\n errorType: error.name,\n metadata: {\n taskId: params.taskId,\n errorMessage: sanitizeErrorMessage(error.message)\n }\n };\n \n // Re-throw the original error after capturing telemetry\n throw error;\n }\n}\n```\n</info added on 2025-05-09T04:02:44.847Z>",
"status": "done",
"dependencies": [],
"parentTaskId": 77