feat(telemetry): Integrate AI usage telemetry into analyze-complexity
This commit applies the standard telemetry pattern to the analyze-task-complexity command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/analyze-task-complexity.js):
- The call to generateTextService now includes commandName: 'analyze-complexity' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for parsing the complexity report JSON.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { report: ..., telemetryData: ... } (sketched below).
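A minimal sketch of the updated core flow (the exact shape of the generateTextService options object and the parseComplexityReport helper are illustrative assumptions, not verbatim code):

    // Inside analyzeTaskComplexity(), after building the prompt:
    const { mainResult, telemetryData } = await generateTextService({
        role: 'main',
        session,
        systemPrompt,
        prompt,
        commandName: 'analyze-complexity', // tags the AI call for telemetry
        outputType: outputFormat === 'text' ? 'cli' : 'mcp' // origin of the call
    });

    // mainResult is the raw AI-generated text; parse the report JSON from it.
    const report = parseComplexityReport(mainResult); // hypothetical helper

    // In CLI mode, show the user a usage/cost summary for the call.
    if (outputFormat === 'text' && telemetryData) {
        displayAiUsageSummary(telemetryData, 'cli');
    }

    return { report, telemetryData };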
2. Direct Function (mcp-server/src/core/direct-functions/analyze-task-complexity.js):
- The call to the core analyzeTaskComplexity function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client (sketched below).
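A matching sketch of the direct function (field and option names other than commandName, outputType, and telemetryData are assumptions):

    // Inside the analyze-task-complexity direct function:
    const coreResult = await analyzeTaskComplexity(args, {
        session,
        mcpLog: logWrapper, // assumed logger adapter for the MCP context
        commandName: 'analyze-complexity',
        outputType: 'mcp'
    });

    // Forward telemetry alongside the report in the MCP response payload.
    return {
        success: true,
        data: {
            report: coreResult.report,
            telemetryData: coreResult.telemetryData
        }
    };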
3. Tests: The retry assertion in the 'Unified AI Services' suite now expects the updated provider-side error message, per the diff below:
@@ -305,7 +305,9 @@ describe('Unified AI Services', () => {
 			expect(mockGenerateAnthropicText).toHaveBeenCalledTimes(2); // Initial + 1 retry
 			expect(mockLog).toHaveBeenCalledWith(
 				'info',
-				expect.stringContaining('Retryable error detected. Retrying')
+				expect.stringContaining(
+					'Something went wrong on the provider side. Retrying'
+				)
 			);
 		});
 