feat(ai): Enhance Perplexity research calls & fix docs examples

Improves the quality and relevance of research-backed AI operations:
- Tweaks Perplexity AI calls to set max_tokens to 8700 (just under Perplexity's 8719-token cap), temperature 0.1 for more factual responses, a high web-search context size, and a day-level search recency filter to capture new releases (a sketch follows this list).
- Adds a system prompt to guide Perplexity research output.
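
For reference, a minimal sketch of the updated call shape (parameter values are taken from the diff below; `systemPrompt` stands in here for the inline research-guidance prompt shown in the diff):

	const researchResponse = await perplexityClient.chat.completions.create({
		model: PERPLEXITY_MODEL,
		messages: [
			{ role: 'system', content: systemPrompt }, // new research-guidance prompt
			{ role: 'user', content: researchQuery }
		],
		temperature: 0.1, // lower temperature for more factual responses
		max_tokens: 8700, // stays under Perplexity's 8719-token cap
		web_search_options: { search_context_size: 'high' },
		search_recency_filter: 'day' // surface results as recent as today
	});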

Docs:
- Updates CLI examples in taskmaster.mdc to use ANSI-C quoting ($'...') for multi-line prompts, ensuring they work correctly in bash/zsh.
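
For example, an illustrative invocation (the specific task ID and prompt text here are hypothetical):

	# $'...' (ANSI-C quoting) expands \n into real newlines, so a multi-line
	# prompt is passed to the CLI as a single argument in bash/zsh:
	task-master update --from=5 --prompt=$'Switch research calls to Perplexity.\nKeep temperature at 0.1.\nCap max_tokens at 8700.'

With plain single or double quotes, the literal characters \n would be passed through instead of newlines.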
commit 2c2e60ad55 (parent a0663914e6)
Author: Eyal Toledano
Date: 2025-04-14 17:09:06 -04:00
8 changed files with 211 additions and 37 deletions


@@ -709,12 +709,24 @@ Include concrete code examples and technical considerations where relevant.`;
 		const researchResponse = await perplexityClient.chat.completions.create({
 			model: PERPLEXITY_MODEL,
 			messages: [
+				{
+					role: 'system',
+					content: `You are a helpful assistant that provides research on current best practices and implementation approaches for software development.
+You are given a task and a description of the task.
+You need to provide a list of best practices, libraries, design patterns, and implementation approaches that are relevant to the task.
+You should provide concrete code examples and technical considerations where relevant.`
+				},
 				{
 					role: 'user',
 					content: researchQuery
 				}
 			],
-			temperature: 0.1 // Lower temperature for more factual responses
+			temperature: 0.1, // Lower temperature for more factual responses
+			max_tokens: 8700, // Respect maximum input tokens for Perplexity (8719 max)
+			web_search_options: {
+				search_context_size: 'high'
+			},
+			search_recency_filter: 'day' // Filter for results that are as recent as today to capture new releases
 		});
 		const researchResult = researchResponse.choices[0].message.content;
@@ -814,7 +826,7 @@ Note on dependencies: Subtasks can depend on other subtasks with lower IDs. Use
 				anthropic,
 				{
 					model: session?.env?.ANTHROPIC_MODEL || CONFIG.model,
-					max_tokens: session?.env?.MAX_TOKENS || CONFIG.maxTokens,
+					max_tokens: 8700,
 					temperature: session?.env?.TEMPERATURE || CONFIG.temperature,
 					system: systemPrompt,
 					messages: [{ role: 'user', content: userPrompt }]
@@ -1328,7 +1340,12 @@ Include concrete code examples and technical considerations where relevant.`;
 					content: researchQuery
 				}
 			],
-			temperature: 0.1 // Lower temperature for more factual responses
+			temperature: 0.1, // Lower temperature for more factual responses
+			max_tokens: 8700, // Respect maximum input tokens for Perplexity (8719 max)
+			web_search_options: {
+				search_context_size: 'high'
+			},
+			search_recency_filter: 'day' // Filter for results that are as recent as today to capture new releases
 		});
 		const researchResult = researchResponse.choices[0].message.content;


@@ -427,11 +427,7 @@ Return only the updated tasks as a valid JSON array.`
 					session?.env?.TEMPERATURE ||
 					CONFIG.temperature
 				),
-				max_tokens: parseInt(
-					process.env.MAX_TOKENS ||
-						session?.env?.MAX_TOKENS ||
-						CONFIG.maxTokens
-				)
+				max_tokens: 8700
 			});
 			const responseText = result.choices[0].message.content;
@@ -972,11 +968,7 @@ Return only the updated task as a valid JSON object.`
 					session?.env?.TEMPERATURE ||
 					CONFIG.temperature
 				),
-				max_tokens: parseInt(
-					process.env.MAX_TOKENS ||
-						session?.env?.MAX_TOKENS ||
-						CONFIG.maxTokens
-				)
+				max_tokens: 8700
 			});
 			const responseText = result.choices[0].message.content;
@@ -3738,7 +3730,11 @@ DO NOT include any text before or after the JSON array. No explanations, no mark
 				}
 			],
 			temperature: session?.env?.TEMPERATURE || CONFIG.temperature,
-			max_tokens: session?.env?.MAX_TOKENS || CONFIG.maxTokens
+			max_tokens: 8700,
+			web_search_options: {
+				search_context_size: 'high'
+			},
+			search_recency_filter: 'day'
 		});
 		// Extract the response text