feat(research): Add subtasks to fuzzy search and follow-up questions

- Enhanced fuzzy search to include subtasks in discovery
- Added interactive follow-up question functionality using inquirer
- Improved context discovery by including both tasks and subtasks
- Added a follow-up option for research, defaulting to 'n' to keep the default workflow quick
Eyal Toledano
2025-05-25 18:48:39 -04:00
parent 15ad34928d
commit cc26c36366
4 changed files with 189 additions and 5 deletions


@@ -2,4 +2,43 @@
'task-master-ai': minor
---
Introduces the 'research' command to give your agent or yourself the ability to quickly get an answer using context from tasks, files, your project tree, or all.
Adds a comprehensive AI-powered research command with intelligent context gathering and interactive follow-ups.
The new `research` command provides AI-powered research capabilities that automatically gather relevant project context to answer your questions. The command intelligently selects context from multiple sources and supports interactive follow-up questions in CLI mode.
**Key Features:**
- **Intelligent Task Discovery**: Automatically finds relevant tasks and subtasks using fuzzy search based on your query keywords, supplementing any explicitly provided task IDs (see the sketch after this list)
- **Multi-Source Context**: Gathers context from tasks, files, project structure, and custom text to provide comprehensive answers
- **Interactive Follow-ups**: CLI users can ask follow-up questions that build on the conversation history while allowing fresh context discovery for each question
- **Flexible Detail Levels**: Choose from low (concise), medium (balanced), or high (comprehensive) response detail levels
- **Token Transparency**: Displays detailed token breakdown showing context size, sources, and estimated costs
- **Enhanced Display**: Syntax-highlighted code blocks and structured output with clear visual separation
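A rough sketch of the task-discovery step described above, using the `fuse.js` library purely for illustration; the commit's actual `FuzzyTaskSearch` is not shown here, so the keys, threshold, and result limit below are assumptions:
```js
import Fuse from 'fuse.js';

// Illustration only: fuzzy-match a research query against flattened
// tasks/subtasks. Keys, threshold, and limit are assumptions, not the
// values used by the real FuzzyTaskSearch.
function discoverRelevantTaskIds(flattenedTasks, query, limit = 8) {
	const fuse = new Fuse(flattenedTasks, {
		keys: ['title', 'description'],
		threshold: 0.4, // lower = stricter matching
		ignoreLocation: true
	});

	return fuse
		.search(query)
		.slice(0, limit)
		.map((result) => result.item.id);
}
```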
**Usage Examples:**
```bash
# Basic research with auto-discovered context
task-master research "How should I implement user authentication?"

# Research with specific task context
task-master research "What's the best approach for this?" --id=15,23.2

# Research with file context and project tree
task-master research "How does the current auth system work?" --files=src/auth.js,config/auth.json --tree

# Research with custom context and low detail
task-master research "Quick implementation steps?" --context="Using JWT tokens" --detail=low
```
**Context Sources:**
- **Tasks**: Automatically discovers relevant tasks/subtasks via fuzzy search, plus any explicitly specified via `--id`
- **Files**: Include specific files via `--files` for code-aware responses
- **Project Tree**: Add `--tree` to include a project structure overview (see the sketch after this list)
- **Custom Context**: Provide additional context via `--context` for domain-specific information
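A minimal sketch of what the `--tree` source could produce, for illustration only; the real `ContextGatherer` is not part of this excerpt, so the depth limit and ignore rules below are assumptions:
```js
import fs from 'fs';
import path from 'path';

// Illustration only: render a shallow project tree for inclusion as
// research context. Depth limit and ignore rules are assumptions, not
// the actual ContextGatherer behavior.
function renderProjectTree(dir, depth = 0, maxDepth = 2) {
	if (depth > maxDepth) return '';
	return fs
		.readdirSync(dir, { withFileTypes: true })
		.filter((entry) => entry.name !== 'node_modules' && !entry.name.startsWith('.'))
		.map((entry) => {
			const line = `${'  '.repeat(depth)}${entry.name}${entry.isDirectory() ? '/' : ''}`;
			const subtree = entry.isDirectory()
				? renderProjectTree(path.join(dir, entry.name), depth + 1, maxDepth)
				: '';
			return subtree ? `${line}\n${subtree}` : line;
		})
		.join('\n');
}
```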
**Interactive Features (CLI only):**
- Follow-up questions that maintain conversation history
- Fresh fuzzy search for each follow-up to discover newly relevant tasks
- Cumulative context building across the conversation
- Clean visual separation between exchanges
The research command integrates with the existing AI service layer and supports all configured AI providers. MCP integration provides the same functionality for programmatic access without interactive features.
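For programmatic (MCP-style) access, a call shaped after the `performResearch` signature and option keys shown in the diffs below might look roughly like this; the import path is an assumption and some option keys (e.g. `tasksPath`, `saveTarget`) are omitted for brevity:
```js
// Sketch of a programmatic invocation: JSON output, follow-ups disabled.
// The import path is assumed; option keys mirror those the CLI passes.
import { performResearch } from './scripts/modules/task-manager/research.js';

const research = await performResearch(
	'How should I implement user authentication?',
	{
		taskIds: ['15', '23.2'],
		filePaths: ['src/auth.js'],
		customContext: 'Using JWT tokens',
		includeProjectTree: true,
		detailLevel: 'medium',
		projectRoot: process.cwd()
	},
	{ commandName: 'research', outputType: 'mcp' },
	'json',
	false // no interactive follow-up prompts
);

console.log(research.result); // the follow-up handler below reads .result the same way
```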


@@ -348,7 +348,7 @@ For AI-powered commands that benefit from project context, follow the research c
.option('-i, --id <ids>', 'Comma-separated task/subtask IDs to include as context')
.option('-f, --files <paths>', 'Comma-separated file paths to include as context')
.option('-c, --context <text>', 'Additional custom context')
.option('--project-tree', 'Include project file tree structure')
.option('--tree', 'Include project file tree structure')
.option('-d, --detail <level>', 'Output detail level: low, medium, high', 'medium')
.action(async (prompt, options) => {
// 1. Parameter validation and parsing
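The `// 1. Parameter validation and parsing` step is elided above. A minimal sketch, assuming a hypothetical `parseCommaSeparated` helper, of how the `--id` and `--files` values could be turned into the `taskIds` and `filePaths` arrays passed to the core function:
```js
// Hypothetical helper: split a comma-separated CLI value into a clean array.
function parseCommaSeparated(value) {
	if (!value) return [];
	return value
		.split(',')
		.map((part) => part.trim())
		.filter((part) => part.length > 0);
}

// e.g. --id=15,23.2 -> ['15', '23.2']
const taskIds = parseCommaSeparated(options.id);
// e.g. --files=src/auth.js,config/auth.json -> ['src/auth.js', 'config/auth.json']
const filePaths = parseCommaSeparated(options.files);
```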


@@ -1393,7 +1393,7 @@ function registerCommands(programInstance) {
'Additional custom context to include in the research prompt'
)
.option(
'--project-tree',
'-t, --tree',
'Include project file tree structure in the research context'
)
.option(
@@ -1527,7 +1527,7 @@ function registerCommands(programInstance) {
taskIds: taskIds,
filePaths: filePaths,
customContext: options.context ? options.context.trim() : null,
includeProjectTree: !!options.projectTree,
includeProjectTree: !!options.tree,
saveTarget: options.save ? options.save.trim() : null,
detailLevel: options.detail ? options.detail.toLowerCase() : 'medium',
tasksPath: tasksPath,


@@ -6,6 +6,7 @@
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import inquirer from 'inquirer';
import { highlight } from 'cli-highlight';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
@@ -33,13 +34,15 @@ import {
 * @param {string} [context.commandName] - Command name for telemetry
 * @param {string} [context.outputType] - Output type ('cli' or 'mcp')
 * @param {string} [outputFormat] - Output format ('text' or 'json')
 * @param {boolean} [allowFollowUp] - Whether to allow follow-up questions (default: true)
 * @returns {Promise<Object>} Research results with telemetry data
 */
async function performResearch(
	query,
	options = {},
	context = {},
	outputFormat = 'text'
	outputFormat = 'text',
	allowFollowUp = true
) {
	const {
		taskIds = [],
@@ -250,6 +253,19 @@ async function performResearch(
		if (telemetryData) {
			displayAiUsageSummary(telemetryData, 'cli');
		}

		// Offer follow-up question option (only for initial CLI queries, not MCP)
		if (allowFollowUp && !isMCP) {
			await handleFollowUpQuestions(
				options,
				context,
				outputFormat,
				projectRoot,
				logFn,
				query,
				researchResult
			);
		}
	}

	logFn.success('Research query completed successfully');
@@ -599,4 +615,133 @@ function flattenTasksWithSubtasks(tasks) {
	return flattened;
}
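// Illustrative sketch (not part of this diff): the body of flattenTasksWithSubtasks
// is collapsed above. A flattener that exposes subtasks to fuzzy search might look
// roughly like this, assuming tasks carry a `subtasks` array and subtask IDs use
// the dotted `parentId.subtaskId` form accepted by --id. Field names are assumptions.
function flattenTasksWithSubtasksSketch(tasks) {
	const flattened = [];

	for (const task of tasks) {
		// Top-level task entry
		flattened.push({
			id: String(task.id),
			title: task.title,
			description: task.description || ''
		});

		// Subtasks get dotted IDs (e.g. '15.2') so they can be referenced
		// the same way the CLI accepts them via --id
		for (const subtask of task.subtasks || []) {
			flattened.push({
				id: `${task.id}.${subtask.id}`,
				title: subtask.title,
				description: subtask.description || ''
			});
		}
	}

	return flattened;
}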
/**
 * Handle follow-up questions in interactive mode
 * @param {Object} originalOptions - Original research options
 * @param {Object} context - Execution context
 * @param {string} outputFormat - Output format
 * @param {string} projectRoot - Project root directory
 * @param {Object} logFn - Logger function
 * @param {string} initialQuery - Initial query for context
 * @param {string} initialResult - Initial AI result for context
 */
async function handleFollowUpQuestions(
	originalOptions,
	context,
	outputFormat,
	projectRoot,
	logFn,
	initialQuery,
	initialResult
) {
	try {
		// Initialize conversation history with the initial Q&A
		const conversationHistory = [
			{
				question: initialQuery,
				answer: initialResult,
				type: 'initial'
			}
		];

		while (true) {
			// Ask if user wants to ask a follow-up question
			const { wantFollowUp } = await inquirer.prompt([
				{
					type: 'confirm',
					name: 'wantFollowUp',
					message: 'Would you like to ask a follow-up question?',
					default: false // Default to 'n' as requested
				}
			]);

			if (!wantFollowUp) {
				break;
			}

			// Get the follow-up question
			const { followUpQuery } = await inquirer.prompt([
				{
					type: 'input',
					name: 'followUpQuery',
					message: 'Enter your follow-up question:',
					validate: (input) => {
						if (!input || input.trim().length === 0) {
							return 'Please enter a valid question.';
						}
						return true;
					}
				}
			]);

			if (!followUpQuery || followUpQuery.trim().length === 0) {
				continue;
			}

			console.log('\n' + chalk.gray('─'.repeat(60)) + '\n');

			// Build cumulative conversation context from all previous exchanges
			const conversationContext = buildConversationContext(conversationHistory);

			// Create enhanced options for follow-up with full conversation context
			// Remove explicit task IDs to allow fresh fuzzy search based on new question
			const followUpOptions = {
				...originalOptions,
				taskIds: [], // Clear task IDs to allow fresh fuzzy search
				customContext:
					conversationContext +
					(originalOptions.customContext
						? `\n\n--- Original Context ---\n${originalOptions.customContext}`
						: '')
			};

			// Perform follow-up research with fresh fuzzy search and conversation context
			// Disable follow-up prompts for nested calls to prevent infinite recursion
			const followUpResult = await performResearch(
				followUpQuery.trim(),
				followUpOptions,
				context,
				outputFormat,
				false // allowFollowUp = false for nested calls
			);

			// Add this exchange to the conversation history
			conversationHistory.push({
				question: followUpQuery.trim(),
				answer: followUpResult.result,
				type: 'followup'
			});
		}
	} catch (error) {
		// If there's an error with inquirer (e.g., non-interactive terminal),
		// silently continue without follow-up functionality
		logFn.debug(`Follow-up questions not available: ${error.message}`);
	}
}
/**
 * Build conversation context string from conversation history
 * @param {Array} conversationHistory - Array of conversation exchanges
 * @returns {string} Formatted conversation context
 */
function buildConversationContext(conversationHistory) {
	if (conversationHistory.length === 0) {
		return '';
	}

	const contextParts = ['--- Conversation History ---'];

	conversationHistory.forEach((exchange, index) => {
		const questionLabel =
			exchange.type === 'initial' ? 'Initial Question' : `Follow-up ${index}`;
		const answerLabel =
			exchange.type === 'initial' ? 'Initial Answer' : `Answer ${index}`;
		contextParts.push(`\n${questionLabel}: ${exchange.question}`);
		contextParts.push(`${answerLabel}: ${exchange.answer}`);
	});

	return contextParts.join('\n');
}
export { performResearch };
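For reference, the module-internal `buildConversationContext` helper above renders the history as a labelled transcript; a quick illustration with made-up questions and answers:
```js
const sample = buildConversationContext([
	{
		question: 'How should I implement user authentication?',
		answer: 'Use JWT tokens with short expiry…',
		type: 'initial'
	},
	{
		question: 'What about refresh tokens?',
		answer: 'Rotate them on every use…',
		type: 'followup'
	}
]);

console.log(sample);
// --- Conversation History ---
//
// Initial Question: How should I implement user authentication?
// Initial Answer: Use JWT tokens with short expiry…
//
// Follow-up 1: What about refresh tokens?
// Answer 1: Rotate them on every use…
```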