fix(ai): Align Perplexity provider with standard telemetry response structure

This commit updates the Perplexity AI provider (`src/ai-providers/perplexity.js`) to ensure its functions return data in a structure consistent with the other providers and with the expectations of the unified AI service layer.

Specifically:
- `generatePerplexityText` now returns `{ text, usage }` instead of only the text string.
- `generatePerplexityObject` now returns `{ object, usage }` instead of only the result object.

These changes ensure that the unified AI service layer can correctly extract both the primary AI-generated content and the token usage data for telemetry purposes when Perplexity models are used. This resolves failures encountered during E2E testing, where complexity analysis (which can use Perplexity for its research role) broke on the unexpected response formats.

The `streamPerplexityText` function was already compliant.
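For illustration, a consumer can now pull both fields from a single result. A minimal sketch follows; the provider's parameter names, the model id, and the `logTelemetry` helper are assumptions for illustration, not this repo's exact API:

```js
// Illustrative caller shape only; parameter names and the telemetry helper
// below are assumptions, not this repo's exact API.
import { generatePerplexityText } from './src/ai-providers/perplexity.js';

// Hypothetical telemetry sink, standing in for the unified layer's logging.
const logTelemetry = (usage) => console.log('telemetry usage:', usage);

const { text, usage } = await generatePerplexityText({
	modelId: 'sonar-pro', // assumed model id for illustration
	messages: [{ role: 'user', content: 'Summarize recent changes.' }]
});

console.log(text);
// usage carries promptTokens/completionTokens straight from the Vercel AI SDK.
logTelemetry(usage);
```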
Author: Eyal Toledano
Date: 2025-05-14 11:46:35 -04:00
parent 9f4bac8d6a
commit 79a41543d5
4 changed files with 17 additions and 5 deletions

README.md

@@ -11,8 +11,20 @@ A task management system for AI-driven development with Claude, designed to work
 ## Requirements
+Taskmaster utilizes AI across several commands, and those require a separate API key. You can use a variety of models from different AI providers provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
+You can define 3 types of models to be used: the main model, the research model, and the fallback model (in case either the main or research fail). Whatever model you use, its provider API key must be present in either mcp.json or .env.
+At least one (1) of the following is required:
 - Anthropic API key (Claude API)
-- OpenAI SDK (for Perplexity API integration, optional)
+- OpenAI API key
+- Google Gemini API key
+- Perplexity API key (for research model)
+- xAI API Key (for research or main model)
+- OpenRouter API Key (for research or main model)
+Using the research model is optional but highly recommended. You will need at least ONE API key. Adding all API keys enables you to seamlessly switch between model providers at will.
 ## Quick Start

src/ai-providers/perplexity.js

@@ -54,7 +54,7 @@ export async function generatePerplexityText({
 			'debug',
 			`Perplexity generateText result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
 		);
-		return result.text;
+		return { text: result.text, usage: result.usage };
 	} catch (error) {
 		log('error', `Perplexity generateText failed: ${error.message}`);
 		throw error;
@@ -148,7 +148,7 @@ export async function generatePerplexityObject({
 			'debug',
 			`Perplexity generateObject result received. Tokens: ${result.usage.completionTokens}/${result.usage.promptTokens}`
 		);
-		return result.object;
+		return { object: result.object, usage: result.usage };
 	} catch (error) {
 		log(
 			'error',


@@ -399,7 +399,7 @@ Update the provider functions in `src/ai-providers/openai.js` to ensure they ret
 ### Details:
 Update the provider functions in `src/ai-providers/openrouter.js` to ensure they return telemetry-compatible results:\n\n1. **`generateOpenRouterText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\n2. **`generateOpenRouterObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\n3. **`streamOpenRouterText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\n\nReference `anthropic.js` for the pattern.
-## 16. Update perplexity.js for Telemetry Compatibility [pending]
+## 16. Update perplexity.js for Telemetry Compatibility [done]
 ### Dependencies: None
 ### Description: Modify src/ai-providers/perplexity.js functions to return usage data.
 ### Details:
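Point 3 in those details (return the full stream result, not just `textStream`) matters because in the Vercel AI SDK usage lives on the stream result object and resolves only after the stream finishes. A hedged sketch of the distinction, with `model` standing in for any configured provider model instance:

```js
import { streamText } from 'ai';

async function streamExample(model, messages) {
	const result = await streamText({ model, messages });

	// Returning only result.textStream would discard everything else on the
	// result object, including the usage data the telemetry layer needs.
	for await (const chunk of result.textStream) {
		process.stdout.write(chunk);
	}

	// result.usage is a promise that resolves once the stream completes.
	const usage = await result.usage;
	return { usage };
}
```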


@@ -5066,7 +5066,7 @@
 					"title": "Update perplexity.js for Telemetry Compatibility",
 					"description": "Modify src/ai-providers/perplexity.js functions to return usage data.",
 					"details": "Update the provider functions in `src/ai-providers/perplexity.js` to ensure they return telemetry-compatible results:\\n\\n1. **`generatePerplexityText`**: Return `{ text: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts from the Vercel AI SDK result.\\n2. **`generatePerplexityObject`**: Return `{ object: ..., usage: { inputTokens: ..., outputTokens: ... } }`. Extract token counts.\\n3. **`streamPerplexityText`**: Return the *full stream result object* returned by the Vercel AI SDK's `streamText`, not just the `textStream` property. The full object contains usage information.\\n\\nReference `anthropic.js` for the pattern.",
-					"status": "pending",
+					"status": "done",
 					"dependencies": [],
 					"parentTaskId": 77
 				},
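One naming detail worth flagging: the task spec describes `usage: { inputTokens, outputTokens }`, while the diff above passes through the SDK's `usage` object, whose fields are `promptTokens`/`completionTokens`. If the unified layer expects the spec's field names, a small adapter would bridge the two; the helper name here is hypothetical, shown only for illustration:

```js
// Hypothetical adapter: maps Vercel AI SDK usage field names onto the
// telemetry shape described in the task details.
function toTelemetryUsage(usage = {}) {
	return {
		inputTokens: usage.promptTokens ?? 0,
		outputTokens: usage.completionTokens ?? 0
	};
}

// e.g. const { text, usage } = await generatePerplexityText(params);
//      recordTelemetry(toTelemetryUsage(usage)); // recordTelemetry is illustrative
```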