feat(telemetry): Integrate telemetry for expand-all, aggregate results

This commit implements AI usage telemetry for the `expand-all-tasks` command/tool and refactors its CLI output for clarity and consistency.

Key Changes:

1.  **Telemetry Integration for `expand-all-tasks` (Subtask 77.8):**
    -   The `expandAllTasks` core logic (`scripts/modules/task-manager/expand-all-tasks.js`) now calls the `expandTask` function for each eligible task and collects the individual `telemetryData` returned.
    -   A new helper function `_aggregateTelemetry` (in `utils.js`) sums the token counts and costs from all individual expansions into a single `telemetryData` object for the entire `expand-all` operation.
    -   The `expandAllTasksDirect` wrapper (`mcp-server/src/core/direct-functions/expand-all-tasks.js`) now receives this aggregated `telemetryData` and passes it through in the MCP response.
    -   For CLI usage, `displayAiUsageSummary` is called once with the aggregated telemetry.

2.  **Improved CLI Output for `expand-all`:**
    -   The `expandAllTasks` core function now displays a final "Expansion Summary" box (showing Attempted, Expanded, Skipped, and Failed counts) directly after the aggregated telemetry summary.
    -   This consolidates all summary output within the core function for better flow and removes redundant logging from the command action in `scripts/modules/commands.js`.
    -   The summary box border is green on success and red if any expansions failed.

3.  **Code Refinements:**
    -   Ensured `chalk` and `boxen` are imported in `expand-all-tasks.js` for the new summary box.
    -   Minor adjustments to logging messages for clarity.
Author: Eyal Toledano
Date: 2025-05-08 18:22:00 -04:00
Parent: ab84afd036
Commit: 21c3cb8cda
24 changed files with 1693 additions and 56 deletions


@@ -63,12 +63,18 @@ export async function expandAllTasksDirect(args, log, context = {}) {
       { session, mcpLog, projectRoot }
     );
-    // Core function now returns a summary object
+    // Core function now returns a summary object including the *aggregated* telemetryData
     return {
       success: true,
       data: {
         message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
-        details: result // Include the full result details
+        details: {
+          expandedCount: result.expandedCount,
+          failedCount: result.failedCount,
+          skippedCount: result.skippedCount,
+          tasksToExpand: result.tasksToExpand
+        },
+        telemetryData: result.telemetryData // Pass the aggregated object
       }
     };
   } catch (error) {


@@ -1129,12 +1129,6 @@ function registerCommands(programInstance) {
         {} // Pass empty context for CLI calls
         // outputFormat defaults to 'text' in expandAllTasks for CLI
       );
-      // Optional: Display summary from result
-      console.log(chalk.green(`Expansion Summary:`));
-      console.log(chalk.green(`  - Attempted: ${result.tasksToExpand}`));
-      console.log(chalk.green(`  - Expanded: ${result.expandedCount}`));
-      console.log(chalk.yellow(`  - Skipped: ${result.skippedCount}`));
-      console.log(chalk.red(`  - Failed: ${result.failedCount}`));
     } catch (error) {
       console.error(
         chalk.red(`Error expanding all tasks: ${error.message}`)


@@ -1,7 +1,14 @@
 import { log, readJSON, isSilentMode } from '../utils.js';
-import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
+import {
+  startLoadingIndicator,
+  stopLoadingIndicator,
+  displayAiUsageSummary
+} from '../ui.js';
 import expandTask from './expand-task.js';
 import { getDebugFlag } from '../config-manager.js';
+import { _aggregateTelemetry } from '../utils.js';
+import chalk from 'chalk';
+import boxen from 'boxen';
 
 /**
  * Expand all eligible pending or in-progress tasks using the expandTask function.
@@ -14,7 +21,7 @@ import { getDebugFlag } from '../config-manager.js';
  * @param {Object} [context.session] - Session object from MCP.
  * @param {Object} [context.mcpLog] - MCP logger object.
  * @param {string} [outputFormat='text'] - Output format ('text' or 'json'). MCP calls should use 'json'.
- * @returns {Promise<{success: boolean, expandedCount: number, failedCount: number, skippedCount: number, tasksToExpand: number, message?: string}>} - Result summary.
+ * @returns {Promise<{success: boolean, expandedCount: number, failedCount: number, skippedCount: number, tasksToExpand: number, telemetryData: Array<Object>}>} - Result summary.
  */
 async function expandAllTasks(
   tasksPath,
@@ -51,8 +58,8 @@
   let loadingIndicator = null;
   let expandedCount = 0;
   let failedCount = 0;
-  // No skipped count needed now as the filter handles it upfront
-  let tasksToExpandCount = 0; // Renamed for clarity
+  let tasksToExpandCount = 0;
+  const allTelemetryData = []; // Still collect individual data first
 
   if (!isMCPCall && outputFormat === 'text') {
     loadingIndicator = startLoadingIndicator(
@@ -90,6 +97,7 @@
         failedCount: 0,
         skippedCount: 0,
         tasksToExpand: 0,
+        telemetryData: allTelemetryData,
         message: 'No tasks eligible for expansion.'
       };
       // --- End Fix ---
@@ -97,19 +105,6 @@
     // Iterate over the already filtered tasks
     for (const task of tasksToExpand) {
-      // --- Remove Redundant Check ---
-      // The check below is no longer needed as the initial filter handles it
-      /*
-      if (task.subtasks && task.subtasks.length > 0 && !force) {
-        logger.info(
-          `Skipping task ${task.id}: Already has subtasks. Use --force to overwrite.`
-        );
-        skippedCount++;
-        continue;
-      }
-      */
-      // --- End Removed Redundant Check ---
       // Start indicator for individual task expansion in CLI mode
       let taskIndicator = null;
       if (!isMCPCall && outputFormat === 'text') {
@@ -117,17 +112,23 @@
       }
 
       try {
-        // Call the refactored expandTask function
-        await expandTask(
+        // Call the refactored expandTask function AND capture result
+        const result = await expandTask(
           tasksPath,
           task.id,
-          numSubtasks, // Pass numSubtasks, expandTask handles defaults/complexity
+          numSubtasks,
           useResearch,
           additionalContext,
           context, // Pass the whole context object { session, mcpLog }
-          force // Pass the force flag down
+          force
         );
         expandedCount++;
+
+        // Collect individual telemetry data
+        if (result && result.telemetryData) {
+          allTelemetryData.push(result.telemetryData);
+        }
+
         if (taskIndicator) {
           stopLoadingIndicator(taskIndicator, `Task ${task.id} expanded.`);
         }
@@ -146,18 +147,48 @@
       }
     }
 
-    // Log final summary (removed skipped count from message)
+    // --- AGGREGATION AND DISPLAY ---
     logger.info(
       `Expansion complete: ${expandedCount} expanded, ${failedCount} failed.`
     );
-    // Return summary (skippedCount is now 0) - Add success: true here as well for consistency
+
+    // Aggregate the collected telemetry data
+    const aggregatedTelemetryData = _aggregateTelemetry(
+      allTelemetryData,
+      'expand-all-tasks'
+    );
+
+    if (outputFormat === 'text') {
+      const summaryContent =
+        `${chalk.white.bold('Expansion Summary:')}\n\n` +
+        `${chalk.cyan('-')} Attempted: ${chalk.bold(tasksToExpandCount)}\n` +
+        `${chalk.green('-')} Expanded: ${chalk.bold(expandedCount)}\n` +
+        // Skipped count is always 0 now due to pre-filtering
+        `${chalk.gray('-')} Skipped: ${chalk.bold(0)}\n` +
+        `${chalk.red('-')} Failed: ${chalk.bold(failedCount)}`;
+      console.log(
+        boxen(summaryContent, {
+          padding: 1,
+          margin: { top: 1 },
+          borderColor: failedCount > 0 ? 'red' : 'green', // Red if failures, green otherwise
+          borderStyle: 'round'
+        })
+      );
+    }
+
+    if (outputFormat === 'text' && aggregatedTelemetryData) {
+      displayAiUsageSummary(aggregatedTelemetryData, 'cli');
+    }
+
+    // Return summary including the AGGREGATED telemetry data
     return {
-      success: true, // Indicate overall success
+      success: true,
       expandedCount,
       failedCount,
       skippedCount: 0,
-      tasksToExpand: tasksToExpandCount
+      tasksToExpand: tasksToExpandCount,
+      telemetryData: aggregatedTelemetryData
     };
   } catch (error) {
     if (loadingIndicator)


@@ -508,6 +508,61 @@ function detectCamelCaseFlags(args) {
   return camelCaseFlags;
 }
 
+/**
+ * Aggregates an array of telemetry objects into a single summary object.
+ * @param {Array<Object>} telemetryArray - Array of telemetryData objects.
+ * @param {string} overallCommandName - The name for the aggregated command.
+ * @returns {Object|null} Aggregated telemetry object or null if input is empty.
+ */
+function _aggregateTelemetry(telemetryArray, overallCommandName) {
+  if (!telemetryArray || telemetryArray.length === 0) {
+    return null;
+  }
+
+  const aggregated = {
+    timestamp: new Date().toISOString(), // Use current time for aggregation time
+    userId: telemetryArray[0].userId, // Assume userId is consistent
+    commandName: overallCommandName,
+    modelUsed: 'Multiple', // Default if models vary
+    providerName: 'Multiple', // Default if providers vary
+    inputTokens: 0,
+    outputTokens: 0,
+    totalTokens: 0,
+    totalCost: 0,
+    currency: telemetryArray[0].currency || 'USD' // Assume consistent currency or default
+  };
+
+  const uniqueModels = new Set();
+  const uniqueProviders = new Set();
+  const uniqueCurrencies = new Set();
+
+  telemetryArray.forEach((item) => {
+    aggregated.inputTokens += item.inputTokens || 0;
+    aggregated.outputTokens += item.outputTokens || 0;
+    aggregated.totalCost += item.totalCost || 0;
+    uniqueModels.add(item.modelUsed);
+    uniqueProviders.add(item.providerName);
+    uniqueCurrencies.add(item.currency || 'USD');
+  });
+
+  aggregated.totalTokens = aggregated.inputTokens + aggregated.outputTokens;
+  aggregated.totalCost = parseFloat(aggregated.totalCost.toFixed(6)); // Fix precision
+
+  if (uniqueModels.size === 1) {
+    aggregated.modelUsed = [...uniqueModels][0];
+  }
+  if (uniqueProviders.size === 1) {
+    aggregated.providerName = [...uniqueProviders][0];
+  }
+  if (uniqueCurrencies.size > 1) {
+    aggregated.currency = 'Multiple'; // Mark if currencies actually differ
+  } else if (uniqueCurrencies.size === 1) {
+    aggregated.currency = [...uniqueCurrencies][0];
+  }
+
+  return aggregated;
+}
+
 // Export all utility functions and configuration
 export {
   LOG_LEVELS,
@@ -529,5 +584,6 @@
   isSilentMode,
   resolveEnvVariable,
   getTaskManager,
-  findProjectRoot
+  findProjectRoot,
+  _aggregateTelemetry
 };
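For reference, the helper can be exercised like this (a minimal usage sketch; the telemetry values and the import path are illustrative):

```js
import { _aggregateTelemetry } from './scripts/modules/utils.js';

// Two per-task telemetry records, as returned by expandTask (values made up).
const perTaskTelemetry = [
  {
    userId: 'user-123',
    commandName: 'expand-task',
    modelUsed: 'claude-3-7-sonnet',
    providerName: 'anthropic',
    inputTokens: 1200,
    outputTokens: 800,
    totalTokens: 2000,
    totalCost: 0.0156,
    currency: 'USD'
  },
  {
    userId: 'user-123',
    commandName: 'expand-task',
    modelUsed: 'claude-3-7-sonnet',
    providerName: 'anthropic',
    inputTokens: 900,
    outputTokens: 600,
    totalTokens: 1500,
    totalCost: 0.0112,
    currency: 'USD'
  }
];

const aggregated = _aggregateTelemetry(perTaskTelemetry, 'expand-all-tasks');
// → inputTokens: 2100, outputTokens: 1400, totalTokens: 3500, totalCost: 0.0268,
//   commandName: 'expand-all-tasks'; modelUsed and providerName collapse to the
//   single values since all entries match.
```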


@@ -48,3 +48,47 @@ Testing should verify both the functionality and security of the webhook system:
5. Manual verification:
- Set up integrations with common services (GitHub, Slack, etc.) to verify real-world functionality
- Verify that the CLI interface for managing webhooks works as expected
# Subtasks:
## 1. Design webhook registration API endpoints [pending]
### Dependencies: None
### Description: Create API endpoints for registering, updating, and deleting webhook subscriptions
### Details:
Implement RESTful API endpoints that allow clients to register webhook URLs, specify event types they want to subscribe to, and manage their subscriptions. Include validation for URL format, required parameters, and authentication requirements.
## 2. Implement webhook authentication and security measures [pending]
### Dependencies: 44.1
### Description: Develop security mechanisms for webhook verification and payload signing
### Details:
Implement signature verification using HMAC, rate limiting to prevent abuse, IP whitelisting options, and webhook secret management. Create a secure token system for webhook verification and implement TLS for all webhook communications.
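As a sketch of the verification piece (the helper name and hex-encoded header format are assumptions; the HMAC-SHA256 plus timing-safe comparison pattern is the standard approach):

```js
import crypto from 'node:crypto';

// Verify an incoming webhook payload against its signature header.
// `secret` is the per-subscription webhook secret.
function verifyWebhookSignature(rawBody, signatureHex, secret) {
  const expectedHex = crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex');
  const received = Buffer.from(signatureHex, 'hex');
  const expected = Buffer.from(expectedHex, 'hex');
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return received.length === expected.length && crypto.timingSafeEqual(received, expected);
}
```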
## 3. Create event trigger definition interface [pending]
### Dependencies: None
### Description: Design and implement the interface for defining event triggers and conditions
### Details:
Develop a user interface or API that allows defining what events should trigger webhooks. Include support for conditional triggers based on event properties, filtering options, and the ability to specify payload formats.
## 4. Build event processing and queuing system [pending]
### Dependencies: 44.1, 44.3
### Description: Implement a robust system for processing and queuing events before webhook delivery
### Details:
Create an event queue using a message broker (like RabbitMQ or Kafka) to handle high volumes of events. Implement event deduplication, prioritization, and persistence to ensure reliable delivery even during system failures.
## 5. Develop webhook delivery and retry mechanism [pending]
### Dependencies: 44.2, 44.4
### Description: Create a reliable system for webhook delivery with retry logic and failure handling
### Details:
Implement exponential backoff retry logic, configurable retry attempts, and dead letter queues for failed deliveries. Add monitoring for webhook delivery success rates and performance metrics. Include timeout handling for unresponsive webhook endpoints.
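The backoff calculation itself is small; a minimal sketch (base delay, cap, and full jitter are illustrative choices):

```js
// Delay before retry attempt `attempt` (1-based), with exponential growth
// and an upper cap.
function retryDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  const exponential = Math.min(maxMs, baseMs * 2 ** (attempt - 1));
  // Full jitter spreads concurrent retries out to avoid thundering-herd spikes.
  return Math.floor(Math.random() * exponential);
}

// attempt 1 → up to 1s, attempt 2 → up to 2s, attempt 3 → up to 4s, ... capped at 60s.
```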
## 6. Implement comprehensive error handling and logging [pending]
### Dependencies: 44.5
### Description: Create robust error handling, logging, and monitoring for the webhook system
### Details:
Develop detailed error logging for webhook failures, including response codes, error messages, and timing information. Implement alerting for critical failures and create a dashboard for monitoring system health. Add debugging tools for webhook delivery issues.
## 7. Create webhook testing and simulation tools [pending]
### Dependencies: 44.3, 44.5, 44.6
### Description: Develop tools for testing webhook integrations and simulating event triggers
### Details:
Build a webhook testing console that allows manual triggering of events, viewing delivery history, and replaying failed webhooks. Create a webhook simulator for developers to test their endpoint implementations without generating real system events.


@@ -53,3 +53,35 @@ Testing should cover the following scenarios:
- Test the interaction with other flags and commands
Create mock GitHub API responses for testing to avoid hitting rate limits during development and testing. Use environment variables to configure test credentials if needed.
# Subtasks:
## 1. Design GitHub API integration architecture [pending]
### Dependencies: None
### Description: Create a technical design document outlining the architecture for GitHub API integration, including authentication flow, rate limiting considerations, and error handling strategies.
### Details:
Document should include: API endpoints to be used, authentication method (OAuth vs Personal Access Token), data flow diagrams, and security considerations. Research GitHub API rate limits and implement appropriate throttling mechanisms.
## 2. Implement GitHub URL parsing and validation [pending]
### Dependencies: 45.1
### Description: Create a module to parse and validate GitHub issue URLs, extracting repository owner, repository name, and issue number.
### Details:
Handle various GitHub URL formats (e.g., github.com/owner/repo/issues/123, github.com/owner/repo/pull/123). Implement validation to ensure the URL points to a valid issue or pull request. Return structured data with owner, repo, and issue number for valid URLs.
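A minimal parsing sketch for the URL formats named above (the helper name is hypothetical):

```js
// Parse a GitHub issue/PR URL into { owner, repo, type, number }, or return
// null for anything that doesn't point at an issue or pull request.
function parseGitHubIssueUrl(url) {
  const match = url.match(
    /^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/([^/]+)\/(issues|pull)\/(\d+)(?:[/?#]|$)/
  );
  if (!match) return null;
  const [, owner, repo, type, number] = match;
  return { owner, repo, type, number: Number(number) };
}

// parseGitHubIssueUrl('https://github.com/owner/repo/issues/123')
// → { owner: 'owner', repo: 'repo', type: 'issues', number: 123 }
```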
## 3. Develop GitHub API client for issue fetching [pending]
### Dependencies: 45.1, 45.2
### Description: Create a service to authenticate with GitHub and fetch issue details using the GitHub REST API.
### Details:
Implement authentication using GitHub Personal Access Tokens or OAuth. Handle API responses, including error cases (rate limiting, authentication failures, not found). Extract relevant issue data: title, description, labels, assignees, and comments.
## 4. Create task formatter for GitHub issues [pending]
### Dependencies: 45.3
### Description: Develop a formatter to convert GitHub issue data into the application's task format.
### Details:
Map GitHub issue fields to task fields (title, description, etc.). Convert GitHub markdown to the application's supported format. Handle special GitHub features like issue references and user mentions. Generate appropriate tags based on GitHub labels.
## 5. Implement end-to-end import flow with UI [pending]
### Dependencies: 45.4
### Description: Create the user interface and workflow for importing GitHub issues, including progress indicators and error handling.
### Details:
Design and implement UI for URL input and import confirmation. Show loading states during API calls. Display meaningful error messages for various failure scenarios. Allow users to review and modify imported task details before saving. Add automated tests for the entire import flow.


@@ -53,3 +53,35 @@ The command should follow the same design patterns as `analyze-complexity` for c
- The ranking should prioritize high-impact, high-confidence, easy-to-implement tasks
- Performance should be acceptable even with a large number of tasks
- The command should handle edge cases gracefully (empty projects, missing data)
# Subtasks:
## 1. Design ICE scoring algorithm [pending]
### Dependencies: None
### Description: Create the algorithm for calculating Impact, Confidence, and Ease scores for tasks
### Details:
Define the mathematical formula for ICE scoring (Impact × Confidence × Ease). Determine the scale for each component (e.g., 1-10). Create rules for how AI will evaluate each component based on task attributes like complexity, dependencies, and descriptions. Document the scoring methodology for future reference.
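Assuming the 1-10 scale suggested above, the scoring core reduces to a small function (a sketch; in the finished feature the component values would come from the AI evaluation in subtask 46.2):

```js
// ICE score = Impact × Confidence × Ease, each on a 1–10 scale.
function iceScore({ impact, confidence, ease }) {
  for (const value of [impact, confidence, ease]) {
    if (!Number.isFinite(value) || value < 1 || value > 10) {
      throw new RangeError('ICE components must be numbers in the range [1, 10]');
    }
  }
  return impact * confidence * ease; // overall range: 1–1000
}

// iceScore({ impact: 8, confidence: 6, ease: 7 }) → 336
```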
## 2. Implement AI integration for ICE scoring [pending]
### Dependencies: 46.1
### Description: Develop the AI component that will analyze tasks and generate ICE scores
### Details:
Create prompts for the AI to evaluate Impact, Confidence, and Ease. Implement error handling for AI responses. Add caching to prevent redundant AI calls. Ensure the AI provides justification for each score component. Test with various task types to ensure consistent scoring.
## 3. Create report file generator [pending]
### Dependencies: 46.2
### Description: Build functionality to generate a structured report file with ICE analysis results
### Details:
Design the report file format (JSON, CSV, or Markdown). Implement sorting of tasks by ICE score. Include task details, individual I/C/E scores, and final ICE score in the report. Add timestamp and project metadata. Create a function to save the report to the specified location.
## 4. Implement CLI rendering for ICE analysis [pending]
### Dependencies: 46.3
### Description: Develop the command-line interface for displaying ICE analysis results
### Details:
Design a tabular format for displaying ICE scores in the terminal. Use color coding to highlight high/medium/low priority tasks. Implement filtering options (by score range, task type, etc.). Add sorting capabilities. Create a summary view that shows top N tasks by ICE score.
## 5. Integrate with existing complexity reports [pending]
### Dependencies: 46.3, 46.4
### Description: Connect the ICE analysis functionality with the existing complexity reporting system
### Details:
Modify the existing complexity report to include ICE scores. Ensure consistent formatting between complexity and ICE reports. Add cross-referencing between reports. Update the command-line help documentation. Test the integrated system with various project sizes and configurations.


@@ -64,3 +64,41 @@ Testing should verify the complete workflow functions correctly:
5. Regression Testing:
- Verify that existing functionality continues to work
- Ensure compatibility with keyboard shortcuts and accessibility features
# Subtasks:
## 1. Design Task Expansion UI Components [pending]
### Dependencies: None
### Description: Create UI components for the expanded task suggestion actions card that allow for task breakdown and additional context input.
### Details:
Design mockups for expanded card view, including subtask creation interface, context input fields, and task management controls. Ensure the design is consistent with existing UI patterns and responsive across different screen sizes. Include animations for card expansion/collapse.
## 2. Implement State Management for Task Expansion [pending]
### Dependencies: 47.1
### Description: Develop the state management logic to handle expanded task states, subtask creation, and context additions.
### Details:
Create state handlers for expanded/collapsed states, subtask array management, and context data. Implement proper validation for user inputs and error handling. Ensure state persistence across user sessions and synchronization with backend services.
## 3. Build Context Addition Functionality [pending]
### Dependencies: 47.2
### Description: Create the functionality that allows users to add additional context to tasks and subtasks.
### Details:
Implement context input fields with support for rich text, attachments, links, and references to other tasks. Add auto-save functionality for context changes and version history if applicable. Include context suggestion features based on task content.
## 4. Develop Task Management Controls [pending]
### Dependencies: 47.2
### Description: Implement controls for managing tasks within the expanded card view, including prioritization, scheduling, and assignment.
### Details:
Create UI controls for task prioritization (drag-and-drop ranking), deadline setting with calendar integration, assignee selection with user search, and status updates. Implement notification triggers for task changes and deadline reminders.
## 5. Integrate with Existing Task Systems [pending]
### Dependencies: 47.3, 47.4
### Description: Ensure the enhanced actions card workflow integrates seamlessly with existing task management functionality.
### Details:
Connect the new UI components to existing backend APIs. Update data models if necessary to support new features. Ensure compatibility with existing task filters, search, and reporting features. Implement data migration plan for existing tasks if needed.
## 6. Test and Optimize User Experience [pending]
### Dependencies: 47.5
### Description: Conduct thorough testing of the enhanced workflow and optimize based on user feedback and performance metrics.
### Details:
Perform usability testing with representative users. Collect metrics on task completion time, error rates, and user satisfaction. Optimize performance for large task lists and complex subtask hierarchies. Implement A/B testing for alternative UI approaches if needed.


@@ -42,3 +42,23 @@ Testing should verify that the refactoring maintains identical functionality whi
4. Documentation:
- Verify documentation is updated to reflect the new prompt organization
- Confirm the index.js export pattern works as expected for importing prompts
# Subtasks:
## 1. Create prompts directory structure [pending]
### Dependencies: None
### Description: Create a centralized 'prompts' directory with appropriate subdirectories for different prompt categories
### Details:
Create a 'prompts' directory at the project root. Within this directory, create subdirectories based on functional categories (e.g., 'core', 'agents', 'utils'). Add an index.js file in each subdirectory to facilitate imports. Create a root index.js file that re-exports all prompts for easy access.
## 2. Extract prompts into individual files [pending]
### Dependencies: 48.1
### Description: Identify all hardcoded prompts in the codebase and extract them into individual files in the prompts directory
### Details:
Search through the codebase for all hardcoded prompt strings. For each prompt, create a new file in the appropriate subdirectory with a descriptive name (e.g., 'taskBreakdownPrompt.js'). Format each file to export the prompt string as a constant. Add JSDoc comments to document the purpose and expected usage of each prompt.
## 3. Update functions to import prompts [pending]
### Dependencies: 48.1, 48.2
### Description: Modify all functions that use hardcoded prompts to import them from the centralized structure
### Details:
For each function that previously used a hardcoded prompt, add an import statement to pull in the prompt from the centralized structure. Test each function after modification to ensure it still works correctly. Update any tests that might be affected by the refactoring. Create a pull request with the changes and document the new prompt structure in the project documentation.


@@ -64,3 +64,41 @@ Testing should verify all aspects of the code analysis command:
- Generated recommendations are specific and actionable
- Created tasks follow the project's task format standards
- Analysis results are consistent across multiple runs on the same codebase
# Subtasks:
## 1. Design pattern recognition algorithm [pending]
### Dependencies: None
### Description: Create an algorithm to identify common code patterns and anti-patterns in the codebase
### Details:
Develop a system that can scan code files and identify common design patterns (Factory, Singleton, etc.) and anti-patterns (God objects, excessive coupling, etc.). Include detection for language-specific patterns and create a classification system for identified patterns.
## 2. Implement best practice verification [pending]
### Dependencies: 49.1
### Description: Build verification checks against established coding standards and best practices
### Details:
Create a framework to compare code against established best practices for the specific language/framework. Include checks for naming conventions, function length, complexity metrics, comment coverage, and other industry-standard quality indicators.
## 3. Develop AI integration for code analysis [pending]
### Dependencies: 49.1, 49.2
### Description: Integrate AI capabilities to enhance code analysis and provide intelligent recommendations
### Details:
Connect to AI services (like OpenAI) to analyze code beyond rule-based checks. Configure the AI to understand context, project-specific patterns, and provide nuanced analysis that rule-based systems might miss.
## 4. Create recommendation generation system [pending]
### Dependencies: 49.2, 49.3
### Description: Build a system to generate actionable improvement recommendations based on analysis results
### Details:
Develop algorithms to transform analysis results into specific, actionable recommendations. Include priority levels, effort estimates, and potential impact assessments for each recommendation.
## 5. Implement task creation functionality [pending]
### Dependencies: 49.4
### Description: Add capability to automatically create tasks from code quality recommendations
### Details:
Build functionality to convert recommendations into tasks in the project management system. Include appropriate metadata, assignee suggestions based on code ownership, and integration with existing workflow systems.
## 6. Create comprehensive reporting interface [pending]
### Dependencies: 49.4, 49.5
### Description: Develop a user interface to display analysis results and recommendations
### Details:
Build a dashboard showing code quality metrics, identified patterns, recommendations, and created tasks. Include filtering options, trend analysis over time, and the ability to drill down into specific issues with code snippets and explanations.


@@ -49,3 +49,35 @@ Testing should verify both the functionality and user experience of the suggest-
- Test with extremely large numbers of existing tasks
Manually verify the command produces contextually appropriate suggestions that align with the project's current state and needs.
# Subtasks:
## 1. Design data collection mechanism for existing tasks [pending]
### Dependencies: None
### Description: Create a module to collect and format existing task data from the system for AI processing
### Details:
Implement a function that retrieves all existing tasks from storage, formats them appropriately for AI context, and handles edge cases like empty task lists or corrupted data. Include metadata like task status, dependencies, and creation dates to provide rich context for suggestions.
## 2. Implement AI integration for task suggestions [pending]
### Dependencies: 52.1
### Description: Develop the core functionality to generate task suggestions using AI based on existing tasks
### Details:
Create an AI prompt template that effectively communicates the existing task context and request for suggestions. Implement error handling for API failures, rate limiting, and malformed responses. Include parameters for controlling suggestion quantity and specificity.
## 3. Build interactive CLI interface for suggestions [pending]
### Dependencies: 52.2
### Description: Create the command-line interface for requesting and displaying task suggestions
### Details:
Design a user-friendly CLI command structure with appropriate flags for customization. Implement progress indicators during AI processing and format the output of suggestions in a clear, readable format. Include help text and examples in the command documentation.
## 4. Implement suggestion selection and task creation [pending]
### Dependencies: 52.3
### Description: Allow users to interactively select suggestions to convert into actual tasks
### Details:
Create an interactive selection interface where users can review suggestions, select which ones to create as tasks, and optionally modify them before creation. Implement batch creation capabilities and validation to ensure new tasks meet system requirements.
## 5. Add configuration options and flag handling [pending]
### Dependencies: 52.3, 52.4
### Description: Implement various configuration options and command flags for customizing suggestion behavior
### Details:
Create a comprehensive set of command flags for controlling suggestion quantity, specificity, format, and other parameters. Implement persistent configuration options that users can set as defaults. Document all available options and provide examples of common usage patterns.


@@ -48,3 +48,35 @@ Testing should verify both the new positional argument functionality and continu
- Verify examples in documentation show both styles where appropriate
All tests should pass with 100% of commands supporting both argument styles without any regression in existing functionality.
# Subtasks:
## 1. Analyze current CLI argument parsing structure [pending]
### Dependencies: None
### Description: Review the existing CLI argument parsing code to understand how arguments are currently processed and identify integration points for positional arguments.
### Details:
Document the current argument parsing flow, identify key classes and methods responsible for argument handling, and determine how named arguments are currently processed. Create a technical design document outlining the current architecture and proposed changes.
## 2. Design positional argument specification format [pending]
### Dependencies: 55.1
### Description: Create a specification for how positional arguments will be defined in command definitions, including their order, required/optional status, and type validation.
### Details:
Define a clear syntax for specifying positional arguments in command definitions. Consider how to handle mixed positional and named arguments, default values, and type constraints. Document the specification with examples for different command types.
## 3. Implement core positional argument parsing logic [pending]
### Dependencies: 55.1, 55.2
### Description: Modify the argument parser to recognize and process positional arguments according to the specification, while maintaining compatibility with existing named arguments.
### Details:
Update the parser to identify arguments without flags as positional, map them to the correct parameter based on order, and apply appropriate validation. Ensure the implementation handles missing required positional arguments and provides helpful error messages.
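One possible shape for this mapping step (a sketch; the positional-spec format is whatever subtask 55.2 settles on):

```js
// Bind bare (non-flag) argv tokens to a command's declared positional
// parameters, leaving `--flag` tokens to the existing named-argument parser.
function bindPositionals(argv, positionalSpecs) {
  const bare = argv.filter((token) => !token.startsWith('-'));
  const bound = {};
  positionalSpecs.forEach((spec, index) => {
    if (index < bare.length) {
      bound[spec.name] = bare[index];
    } else if (spec.required) {
      throw new Error(`Missing required argument: <${spec.name}>`);
    }
  });
  if (bare.length > positionalSpecs.length) {
    const extras = bare.slice(positionalSpecs.length).join(' ');
    throw new Error(`Unexpected extra arguments: ${extras}`);
  }
  return bound;
}
```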
## 4. Handle edge cases and error conditions [pending]
### Dependencies: 55.3
### Description: Implement robust handling for edge cases such as too many/few arguments, type mismatches, and ambiguous situations between positional and named arguments.
### Details:
Create comprehensive error handling for scenarios like: providing both positional and named version of the same argument, incorrect argument types, missing required positional arguments, and excess positional arguments. Ensure error messages are clear and actionable for users.
## 5. Update documentation and create usage examples [pending]
### Dependencies: 55.2, 55.3, 55.4
### Description: Update CLI documentation to explain positional argument support and provide clear examples showing how to use positional arguments with different commands.
### Details:
Revise user documentation to include positional argument syntax, update command reference with positional argument information, and create example command snippets showing both positional and named argument usage. Include a migration guide for users transitioning from named-only to positional arguments.


@@ -65,3 +65,41 @@ Acceptance Criteria:
- Help text is comprehensive and includes examples
- Interface is visually consistent across all commands
- Tool remains fully functional in non-interactive environments
# Subtasks:
## 1. Implement Configurable Log Levels [pending]
### Dependencies: None
### Description: Create a logging system with different verbosity levels that users can configure
### Details:
Design and implement a logging system with at least 4 levels (ERROR, WARNING, INFO, DEBUG). Add command-line options to set the verbosity level. Ensure logs are color-coded by severity and can be redirected to files. Include timestamp formatting options.
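A minimal sketch of the leveled-logging core (level names from the description; color-coding, timestamp formats, and file redirection would layer on top):

```js
// A numeric threshold gates which messages print.
const LOG_LEVELS = { ERROR: 0, WARNING: 1, INFO: 2, DEBUG: 3 };
let threshold = LOG_LEVELS.INFO;

function setLogLevel(name) {
  if (!(name in LOG_LEVELS)) throw new Error(`Unknown log level: ${name}`);
  threshold = LOG_LEVELS[name];
}

function logAt(level, message) {
  if (LOG_LEVELS[level] <= threshold) {
    console.error(`[${new Date().toISOString()}] ${level}: ${message}`);
  }
}

// logAt('DEBUG', '...') prints only after setLogLevel('DEBUG').
```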
## 2. Design Terminal Color Scheme and Visual Elements [pending]
### Dependencies: None
### Description: Create a consistent and accessible color scheme for the CLI interface
### Details:
Define a color palette that works across different terminal environments. Implement color-coding for different task states, priorities, and command categories. Add support for terminals without color capabilities. Design visual separators, headers, and footers for different output sections.
## 3. Implement Progress Indicators and Loading Animations [pending]
### Dependencies: 57.2
### Description: Add visual feedback for long-running operations
### Details:
Create spinner animations for operations that take time to complete. Implement progress bars for operations with known completion percentages. Ensure animations degrade gracefully in terminals with limited capabilities. Add estimated time remaining calculations where possible.
## 4. Develop Interactive Selection Menus [pending]
### Dependencies: 57.2
### Description: Create interactive menus for task selection and configuration
### Details:
Implement arrow-key navigation for selecting tasks from a list. Add checkbox and radio button interfaces for multi-select and single-select options. Include search/filter functionality for large task lists. Ensure keyboard shortcuts are consistent and documented.
## 5. Design Tabular and Structured Output Formats [pending]
### Dependencies: 57.2
### Description: Improve the formatting of task lists and detailed information
### Details:
Create table layouts with proper column alignment for task lists. Implement tree views for displaying task hierarchies and dependencies. Add support for different output formats (plain text, JSON, CSV). Ensure outputs are properly paginated for large datasets.
## 6. Create Help System and Interactive Documentation [pending]
### Dependencies: 57.2, 57.4, 57.5
### Description: Develop an in-CLI help system with examples and contextual assistance
### Details:
Implement a comprehensive help command with examples for each feature. Add contextual help that suggests relevant commands based on user actions. Create interactive tutorials for new users. Include command auto-completion suggestions and syntax highlighting for command examples.


@@ -71,3 +71,47 @@ Ensure all commands have proper help text and error handling for cases like no m
- Verify the personality simulation is consistent and believable
- Test the round-table output file readability and usefulness
- Verify that using round-table output to update tasks produces meaningful improvements
# Subtasks:
## 1. Design Mentor System Architecture [pending]
### Dependencies: None
### Description: Create a comprehensive architecture for the mentor system, defining data models, relationships, and interaction patterns.
### Details:
Define mentor profiles structure, expertise categorization, availability tracking, and relationship to user accounts. Design the database schema for storing mentor information and interactions. Create flowcharts for mentor-mentee matching algorithms and interaction workflows.
## 2. Implement Mentor Profile Management [pending]
### Dependencies: 60.1
### Description: Develop the functionality for creating, editing, and managing mentor profiles in the system.
### Details:
Build UI components for mentor profile creation and editing. Implement backend APIs for profile CRUD operations. Create expertise tagging system and availability calendar. Add profile verification and approval workflows for quality control.
## 3. Develop Round-Table Discussion Framework [pending]
### Dependencies: 60.1
### Description: Create the core framework for hosting and managing round-table discussions between mentors and users.
### Details:
Design the discussion room data model and state management. Implement discussion scheduling and participant management. Create discussion topic and agenda setting functionality. Develop discussion moderation tools and rules enforcement mechanisms.
## 4. Implement LLM Integration for AI Mentors [pending]
### Dependencies: 60.3
### Description: Integrate LLM capabilities to simulate AI mentors that can participate in round-table discussions.
### Details:
Select appropriate LLM models for mentor simulation. Develop prompt engineering templates for different mentor personas and expertise areas. Implement context management to maintain conversation coherence. Create fallback mechanisms for handling edge cases in discussions.
## 5. Build Discussion Output Formatter [pending]
### Dependencies: 60.3, 60.4
### Description: Create a system to format and present round-table discussion outputs in a structured, readable format.
### Details:
Design templates for discussion summaries and transcripts. Implement real-time formatting of ongoing discussions. Create exportable formats for discussion outcomes (PDF, markdown, etc.). Develop highlighting and annotation features for key insights.
## 6. Integrate Mentor System with Task Management [pending]
### Dependencies: 60.2, 60.3
### Description: Connect the mentor system with the existing task management functionality to enable task-specific mentoring.
### Details:
Create APIs to link tasks with relevant mentors based on expertise. Implement functionality to initiate discussions around specific tasks. Develop mechanisms for mentors to provide feedback and guidance on tasks. Build notification system for task-related mentor interactions.
## 7. Test and Optimize Round-Table Discussions [pending]
### Dependencies: 60.4, 60.5, 60.6
### Description: Conduct comprehensive testing of the round-table discussion feature and optimize for performance and user experience.
### Details:
Perform load testing with multiple concurrent discussions. Test AI mentor responses for quality and relevance. Optimize LLM usage for cost efficiency. Conduct user testing sessions and gather feedback. Implement performance monitoring and analytics for ongoing optimization.


@@ -9,3 +9,41 @@ Update the Taskmaster installation scripts and documentation to support Bun as a
# Test Strategy:
1. Install Taskmaster using Bun on macOS, Linux, and Windows (including WSL and PowerShell), following the updated documentation. 2. Run the full installation and initialization process, verifying that the directory structure, templates, and MCP config are set up identically to npm, pnpm, and Yarn. 3. Execute all CLI commands (including 'init') and confirm functional parity. 4. If a website or account setup is required, test these flows for consistency; if not, confirm and document this. 5. Check for Bun-specific issues (e.g., install hangs) and verify that troubleshooting steps are effective. 6. Ensure the documentation is clear, accurate, and up to date for all supported platforms.
# Subtasks:
## 1. Research Bun compatibility requirements [pending]
### Dependencies: None
### Description: Investigate Bun's JavaScript runtime environment and identify key differences from Node.js that may affect Taskmaster's installation and operation.
### Details:
Research Bun's package management, module resolution, and API compatibility with Node.js. Document any potential issues or limitations that might affect Taskmaster. Identify required changes to make Taskmaster compatible with Bun's execution model.
## 2. Update installation scripts for Bun compatibility [pending]
### Dependencies: 65.1
### Description: Modify the existing installation scripts to detect and support Bun as a runtime environment.
### Details:
Add Bun detection logic to installation scripts. Update package management commands to use Bun equivalents where needed. Ensure all dependencies are compatible with Bun. Modify any Node.js-specific code to work with Bun's runtime.
## 3. Create Bun-specific installation path [pending]
### Dependencies: 65.2
### Description: Implement a dedicated installation flow for Bun users that optimizes for Bun's capabilities.
### Details:
Create a Bun-specific installation script that leverages Bun's performance advantages. Update any environment detection logic to properly identify Bun environments. Ensure proper path resolution and environment variable handling for Bun.
## 4. Test Taskmaster installation with Bun [pending]
### Dependencies: 65.3
### Description: Perform comprehensive testing of the installation process using Bun across different operating systems.
### Details:
Test installation on Windows, macOS, and Linux using Bun. Verify that all Taskmaster features work correctly when installed via Bun. Document any issues encountered and implement fixes as needed.
## 5. Test Taskmaster operation with Bun [pending]
### Dependencies: 65.4
### Description: Ensure all Taskmaster functionality works correctly when running under Bun.
### Details:
Test all Taskmaster commands and features when running with Bun. Compare performance metrics between Node.js and Bun. Identify and fix any runtime issues specific to Bun. Ensure all plugins and extensions are compatible.
## 6. Update documentation for Bun support [pending]
### Dependencies: 65.4, 65.5
### Description: Update all relevant documentation to include information about installing and running Taskmaster with Bun.
### Details:
Add Bun installation instructions to README and documentation. Document any Bun-specific considerations or limitations. Update troubleshooting guides to include Bun-specific issues. Create examples showing Bun usage with Taskmaster.


@@ -9,3 +9,17 @@
# Test Strategy:
# Subtasks:
## 1. Design task creation form without PRD [pending]
### Dependencies: None
### Description: Create a user interface form that allows users to manually input task details without requiring a PRD document
### Details:
Design a form with fields for task title, description, priority, assignee, due date, and other relevant task attributes. Include validation to ensure required fields are completed. The form should be intuitive and provide clear guidance on how to create a task manually.
## 2. Implement task saving functionality [pending]
### Dependencies: 68.1
### Description: Develop the backend functionality to save manually created tasks to the database
### Details:
Create API endpoints to handle task creation requests from the frontend. Implement data validation, error handling, and confirmation messages. Ensure the saved tasks appear in the task list view and can be edited or deleted like PRD-parsed tasks.


@@ -57,3 +57,29 @@ Implementation Plan:
* Call `analyze_project_complexity` tool without `ids`. Verify full analysis and merging.
3. Verify report `meta` section is updated correctly on each run.
# Subtasks:
## 1. Modify core complexity analysis logic [pending]
### Dependencies: None
### Description: Update the core complexity analysis function to accept specific task IDs as input parameters
### Details:
Refactor the existing complexity analysis module to allow filtering by task IDs. This involves modifying the data processing pipeline to filter tasks before analysis, ensuring the complexity metrics are calculated only for the specified tasks while maintaining context awareness.
## 2. Update CLI interface for task-specific complexity analysis [pending]
### Dependencies: 69.1
### Description: Extend the CLI to accept task IDs as parameters for the complexity analysis command
### Details:
Add a new flag or parameter to the CLI that allows users to specify task IDs for targeted complexity analysis. Update the command parser, help documentation, and ensure proper validation of the provided task IDs.
## 3. Integrate task-specific analysis with MCP tool [pending]
### Dependencies: 69.1
### Description: Update the MCP tool interface to support analyzing complexity for specific tasks
### Details:
Modify the MCP tool's API endpoints and UI components to allow users to select specific tasks for complexity analysis. Ensure the UI provides clear feedback about which tasks are being analyzed and update the visualization components to properly display partial analysis results.
## 4. Create comprehensive tests for task-specific complexity analysis [pending]
### Dependencies: 69.1, 69.2, 69.3
### Description: Develop test cases to verify the correct functioning of task-specific complexity analysis
### Details:
Create unit and integration tests that verify the task-specific complexity analysis works correctly across both CLI and MCP interfaces. Include tests for edge cases such as invalid task IDs, tasks with dependencies outside the selected set, and performance tests for large task sets.


@@ -9,3 +9,29 @@ The task involves implementing a new command that accepts an optional '--id' par
# Test Strategy:
Verify the command functionality by testing with both specific task IDs and general invocation: 1) Run the command with a valid '--id' and ensure the resulting diagram accurately depicts the specified task's dependencies with correct color codings for statuses. 2) Execute the command without '--id' to ensure a complete workflow diagram is generated for all tasks. 3) Check that arrows correctly represent dependency relationships. 4) Validate the Markdown (.md) file export option by confirming the file format and content after saving. 5) Test error responses for non-existent task IDs and malformed inputs.
# Subtasks:
## 1. Design the 'diagram' command interface [pending]
### Dependencies: None
### Description: Define the command structure, arguments, and options for the Mermaid diagram generation feature
### Details:
Create a command specification that includes: input parameters for diagram source (file, stdin, or string), output options (file, stdout, clipboard), format options (SVG, PNG, PDF), styling parameters, and help documentation. Consider compatibility with existing command patterns in the application.
## 2. Implement Mermaid diagram generation core functionality [pending]
### Dependencies: 70.1
### Description: Create the core logic to parse Mermaid syntax and generate diagram output
### Details:
Integrate with the Mermaid library to parse diagram syntax. Implement error handling for invalid syntax. Create the rendering pipeline to generate the diagram in memory before output. Support all standard Mermaid diagram types (flowchart, sequence, class, etc.). Include proper logging for the generation process.
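For the task-dependency use case described in the parent task, the generation core might reduce to something like this sketch (status names and colors are illustrative):

```js
// Emit Mermaid flowchart text for a set of tasks and their dependencies,
// color-coded by status.
function tasksToMermaid(tasks) {
  const lines = ['flowchart TD'];
  for (const task of tasks) {
    // Mermaid class names must be alphanumeric, so strip other characters.
    const statusClass = String(task.status).replace(/[^a-zA-Z0-9]/g, '');
    lines.push(`  t${task.id}["${task.id}: ${task.title}"]:::${statusClass}`);
    for (const dep of task.dependencies || []) {
      lines.push(`  t${dep} --> t${task.id}`);
    }
  }
  lines.push('  classDef done fill:#9f9');
  lines.push('  classDef inprogress fill:#9cf');
  lines.push('  classDef pending fill:#ff9');
  return lines.join('\n');
}
```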
## 3. Develop output handling mechanisms [pending]
### Dependencies: 70.2
### Description: Implement different output options for the generated diagrams
### Details:
Create handlers for different output formats (SVG, PNG, PDF). Implement file output with appropriate naming conventions and directory handling. Add clipboard support for direct pasting. Implement stdout output for piping to other commands. Include progress indicators for longer rendering operations.
## 4. Create documentation and examples [pending]
### Dependencies: 70.3
### Description: Provide comprehensive documentation and examples for the 'diagram' command
### Details:
Write detailed command documentation with all options explained. Create example diagrams covering different diagram types. Include troubleshooting section for common errors. Add documentation on extending the command with custom themes or templates. Create integration examples showing how to use the command in workflows with other tools.


@@ -9,3 +9,41 @@ This task involves creating a new CLI command named 'progress-pdf' within the ex
# Test Strategy:
Verify the completion of this task through a multi-step testing approach: 1) Unit Tests: Create tests for the PDF generation logic to ensure data (task statuses and dependencies) is correctly fetched and formatted. Mock the PDF library to test edge cases like empty task lists or broken dependency links. 2) Integration Tests: Run the 'progress-pdf' command via CLI to confirm it generates a PDF file without errors under normal conditions, with filtered task IDs, and with various status filters. Validate that the output file exists in the specified directory and can be opened. 3) Content Validation: Manually or via automated script, check the generated PDF content to ensure it accurately reflects the current project state (compare task counts and statuses against a known project state) and includes dependency diagrams as images. 4) Error Handling Tests: Simulate failures in diagram generation or PDF creation (e.g., invalid output path, library errors) and verify that appropriate error messages are logged and the command exits gracefully. 5) Accessibility Checks: Use a PDF accessibility tool or manual inspection to confirm that text is selectable and images have alt text. Run these tests across different project sizes (small with few tasks, large with complex dependencies) to ensure scalability. Document test results and include a sample PDF output in the project repository for reference.
# Subtasks:
## 1. Research and select PDF generation library [pending]
### Dependencies: None
### Description: Evaluate available PDF generation libraries for Node.js that can handle diagrams and formatted text
### Details:
Compare libraries like PDFKit, jsPDF, and Puppeteer based on features, performance, and ease of integration. Consider compatibility with diagram visualization tools. Document findings and make a recommendation with justification.
## 2. Design PDF template and layout [pending]
### Dependencies: 72.1
### Description: Create a template design for the project progress PDF including sections for summary, metrics, and dependency visualization
### Details:
Design should include header/footer, progress summary section, key metrics visualization, dependency diagram placement, and styling guidelines. Create a mockup of the final PDF output for approval.
## 3. Implement project progress data collection module [pending]
### Dependencies: 72.1
### Description: Develop functionality to gather and process project data for the PDF report
### Details:
Create functions to extract task completion percentages, milestone status, timeline adherence, and other relevant metrics from the project database. Include data transformation logic to prepare for PDF rendering.
## 4. Integrate with dependency visualization system [pending]
### Dependencies: 72.1, 72.3
### Description: Connect to the existing diagram command to generate visual representation of task dependencies
### Details:
Implement adapter for the diagram command output to be compatible with the PDF generation library. Handle different scales of dependency chains and ensure proper rendering of complex relationships.
## 5. Build PDF generation core functionality [pending]
### Dependencies: 72.2, 72.3, 72.4
### Description: Develop the main module that combines data and visualizations into a formatted PDF document
### Details:
Implement the core PDF generation logic using the selected library. Include functions for adding text sections, embedding visualizations, formatting tables, and applying the template design. Add pagination and document metadata.
## 6. Create export options and command interface [pending]
### Dependencies: 72.5
### Description: Implement user-facing commands and options for generating and saving PDF reports
### Details:
Develop CLI commands for PDF generation with parameters for customization (time period, detail level, etc.). Include options for automatic saving to specified locations, email distribution, and integration with existing project workflows.


@@ -9,3 +9,29 @@
# Test Strategy:
1. Configure a Google model (e.g., gemini-1.5-flash-latest) as the 'research' model in `.taskmasterconfig`.
2. Run a command with the `--research` flag (e.g., `task-master add-task --prompt='Latest news on AI SDK 4.2' --research`).
3. Verify logs show 'Enabling Google Search Grounding'.
4. Check if the task output incorporates recent information.
5. Configure the same Google model as the 'main' model.
6. Run a command *without* the `--research` flag.
7. Verify logs *do not* show grounding being enabled.
8. Add unit tests to `ai-services-unified.test.js` to verify the conditional logic for adding `providerOptions`. Ensure mocks correctly simulate different roles and providers.
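The conditional logic under test might look roughly like this (a sketch assuming the Vercel AI SDK's Google provider option `useSearchGrounding`; the helper name and exact wiring are illustrative):

```js
import { log } from '../utils.js';

// Inside the unified AI service layer, just before invoking the provider:
// enable Google Search Grounding only when the active role is 'research'
// and the configured provider is Google.
function buildProviderOptions(role, providerName) {
  const providerOptions = {};
  if (role === 'research' && providerName === 'google') {
    log('info', 'Enabling Google Search Grounding');
    providerOptions.google = { useSearchGrounding: true };
  }
  return providerOptions;
}
```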
# Subtasks:
## 1. Modify AI service layer to support Google Search Grounding [pending]
### Dependencies: None
### Description: Update the AI service layer to include the capability to integrate with Google Search Grounding API for research-related queries.
### Details:
Extend the existing AI service layer by adding new methods and interfaces to handle Google Search Grounding API calls. This includes creating authentication mechanisms, request formatters, and response parsers specific to the Google Search API. Ensure proper error handling and retry logic for API failures.
## 2. Implement conditional logic for research role detection [pending]
### Dependencies: 75.1
### Description: Create logic to detect when a conversation is in 'research mode' and should trigger the Google Search Grounding functionality.
### Details:
Develop heuristics or machine learning-based detection to identify when a user's query requires research capabilities. Implement a decision tree that determines when to activate Google Search Grounding based on conversation context, explicit user requests for research, or specific keywords. Include configuration options to adjust sensitivity of the detection mechanism.
## 3. Update supported models configuration [pending]
### Dependencies: 75.1
### Description: Modify the model configuration to specify which AI models can utilize the Google Search Grounding capability.
### Details:
Update the model configuration files to include flags for Google Search Grounding compatibility. Create a registry of supported models with their specific parameters for optimal integration with the search API. Implement version checking to ensure compatibility between model versions and the Google Search Grounding API version.
## 4. Create end-to-end testing suite for research functionality [pending]
### Dependencies: 75.1, 75.2, 75.3
### Description: Develop comprehensive tests to verify the correct operation of the Google Search Grounding integration in research contexts.
### Details:
Build automated test cases that cover various research scenarios, including edge cases. Create mock responses for the Google Search API to enable testing without actual API calls. Implement integration tests that verify the entire flow from user query to research-enhanced response. Include performance benchmarks to ensure the integration doesn't significantly impact response times.


@@ -57,3 +57,47 @@ Implement detailed logging with different verbosity levels:
- DEBUG: Raw request/response data
Run the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.
# Subtasks:
## 1. Design E2E Test Framework Architecture [pending]
### Dependencies: None
### Description: Create a high-level design document for the E2E test framework that outlines components, interactions, and test flow
### Details:
Define the overall architecture of the test framework, including test runner, FastMCP server launcher, message protocol handler, and assertion components. Document how these components will interact and the data flow between them. Include error handling strategies and logging requirements.
## 2. Implement FastMCP Server Launcher [pending]
### Dependencies: 76.1
### Description: Create a component that can programmatically launch and manage the FastMCP server process over stdio
### Details:
Develop a module that can spawn the FastMCP server as a child process, establish stdio communication channels, handle process lifecycle events, and implement proper cleanup procedures. Include error handling for process failures and timeout mechanisms.
## 3. Develop Message Protocol Handler [pending]
### Dependencies: 76.1
### Description: Implement a handler that can serialize/deserialize messages according to the FastMCP protocol specification
### Details:
Create a protocol handler that formats outgoing messages and parses incoming messages according to the FastMCP protocol. Implement validation for message format compliance and error handling for malformed messages. Support all required message types defined in the protocol.
## 4. Create Request/Response Correlation Mechanism [pending]
### Dependencies: 76.3
### Description: Implement a system to track and correlate requests with their corresponding responses
### Details:
Develop a correlation mechanism using unique identifiers to match requests with their responses. Implement timeout handling for unresponded requests and proper error propagation. Design the API to support both synchronous and asynchronous request patterns.
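A minimal sketch of such a correlator (assuming JSON-RPC-style messages carrying `id`, `result`, and `error` fields):

```js
// Match responses to requests by id, with per-request timeouts.
class RequestCorrelator {
  constructor(defaultTimeoutMs = 5000) {
    this.pending = new Map(); // id → { resolve, reject, timer }
    this.nextId = 1;
    this.defaultTimeoutMs = defaultTimeoutMs;
  }

  // Returns { id, promise }; the caller sends the request carrying this id.
  track(timeoutMs = this.defaultTimeoutMs) {
    const id = this.nextId++;
    const promise = new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(id);
        reject(new Error(`Request ${id} timed out after ${timeoutMs}ms`));
      }, timeoutMs);
      this.pending.set(id, { resolve, reject, timer });
    });
    return { id, promise };
  }

  // Feed every parsed incoming message through here.
  handleResponse(message) {
    const entry = this.pending.get(message.id);
    if (!entry) return false; // unmatched, or already timed out
    clearTimeout(entry.timer);
    this.pending.delete(message.id);
    if (message.error) {
      entry.reject(new Error(message.error.message));
    } else {
      entry.resolve(message.result);
    }
    return true;
  }
}
```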
## 5. Build Test Assertion Framework [pending]
### Dependencies: 76.3, 76.4
### Description: Create a set of assertion utilities specific to FastMCP server testing
### Details:
Develop assertion utilities that can validate server responses against expected values, verify timing constraints, and check for proper error handling. Include support for complex response validation patterns and detailed failure reporting.
## 6. Implement Test Cases [pending]
### Dependencies: 76.2, 76.4, 76.5
### Description: Develop a comprehensive set of test cases covering all FastMCP server functionality
### Details:
Create test cases for basic server operations, error conditions, edge cases, and performance scenarios. Organize tests into logical groups and ensure proper isolation between test cases. Include documentation for each test explaining its purpose and expected outcomes.
## 7. Create CI Integration and Documentation [pending]
### Dependencies: 76.6
### Description: Set up continuous integration for the test framework and create comprehensive documentation
### Details:
Configure the test framework to run in CI environments, generate reports, and fail builds appropriately. Create documentation covering framework architecture, usage instructions, test case development guidelines, and troubleshooting procedures. Include examples of extending the framework for new test scenarios.


@@ -136,7 +136,7 @@ Apply telemetry pattern from telemetry.mdc:
 * Verify `handleApiResult` correctly passes `data.telemetryData` through.
 
-## 7. Telemetry Integration for expand-task [in-progress]
+## 7. Telemetry Integration for expand-task [done]
 ### Dependencies: None
 ### Description: Integrate AI usage telemetry capture and propagation for the expand-task functionality.
 ### Details:
@@ -159,7 +159,7 @@ Apply telemetry pattern from telemetry.mdc:
 * Verify `handleApiResult` correctly passes `data.telemetryData` through.
 
-## 8. Telemetry Integration for expand-all-tasks [pending]
+## 8. Telemetry Integration for expand-all-tasks [done]
 ### Dependencies: None
 ### Description: Integrate AI usage telemetry capture and propagation for the expand-all-tasks functionality.
 ### Details:


@@ -58,3 +58,35 @@ Testing for this feature should include:
- Confirm the ID is accessible to the telemetry system from Task #77
The test plan should include documentation of all test cases, expected results, and actual outcomes. A successful implementation will generate unique IDs for each installation while maintaining that ID across updates.
# Subtasks:
## 1. Create post-install script structure [pending]
### Dependencies: None
### Description: Set up the post-install script that will run automatically after npm installation to handle user ID generation.
### Details:
Create a new file called 'postinstall.js' in the project root. Configure package.json to run this script after installation by adding it to the 'scripts' section with the key 'postinstall'. The script should import necessary dependencies (fs, path, crypto) and set up the basic structure to access and modify the .taskmasterconfig file. Include proper error handling and logging to capture any issues during execution.
## 2. Implement UUID generation functionality [pending]
### Dependencies: 80.1
### Description: Create a function to generate cryptographically secure UUIDs v4 for unique user identification.
### Details:
Implement a function called 'generateUniqueUserId()' that uses the crypto module to create a UUID v4. The function should follow RFC 4122 for UUID generation to ensure uniqueness and security. Include validation to verify the generated ID matches the expected UUID v4 format. Document the function with JSDoc comments explaining its purpose for anonymous telemetry.
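A sketch of the generator using Node's built-in `crypto.randomUUID()` (available since Node 14.17), with the format check the description calls for:

```js
import crypto from 'node:crypto';

// RFC 4122 v4 shape: version nibble is 4, variant nibble is 8–b.
const UUID_V4_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

/**
 * Generate a cryptographically secure UUID v4 for anonymous telemetry.
 * @returns {string} A validated UUID v4 string.
 */
function generateUniqueUserId() {
  const id = crypto.randomUUID(); // CSPRNG-backed, RFC 4122 version 4
  if (!UUID_V4_RE.test(id)) {
    throw new Error('Generated ID does not match the UUID v4 format');
  }
  return id;
}
```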
## 3. Develop config file handling logic [pending]
### Dependencies: 80.1
### Description: Create functions to read, parse, modify, and write to the .taskmasterconfig file for storing the user ID.
### Details:
Implement functions to: 1) Check if .taskmasterconfig exists and create it if not, 2) Read and parse the existing config file, 3) Check if a user ID already exists in the globals section, 4) Add or update the user ID in the globals section, and 5) Write the updated config back to disk. Handle edge cases like malformed config files, permission issues, and concurrent access. Use atomic write operations to prevent config corruption.
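The read-modify-write flow with an atomic write might look roughly like this (a sketch; the `globals` section name and error-handling choices follow the description above):

```js
import fs from 'node:fs';

// Ensure .taskmasterconfig has a globals.userId, creating or repairing the
// file as needed, and write it back atomically (temp file + rename).
function ensureUserIdInConfig(configPath, generateId) {
  let config = {};
  if (fs.existsSync(configPath)) {
    try {
      config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
    } catch {
      // Malformed config: start from an empty object instead of failing install.
      config = {};
    }
  }
  config.globals = config.globals || {};
  if (!config.globals.userId) {
    config.globals.userId = generateId();
    const tmpPath = `${configPath}.${process.pid}.tmp`;
    fs.writeFileSync(tmpPath, JSON.stringify(config, null, 2));
    fs.renameSync(tmpPath, configPath); // atomic on the same filesystem
  }
  return config.globals.userId;
}
```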
## 4. Integrate user ID generation with config storage [pending]
### Dependencies: 80.2, 80.3
### Description: Connect the UUID generation with the config file handling to create and store user IDs during installation.
### Details:
Combine the UUID generation and config handling functions to: 1) Check if a user ID already exists in config, 2) Generate a new ID only if needed, 3) Store the ID in the config file, and 4) Handle installation scenarios (fresh install vs. update). Add appropriate logging to inform users about the anonymous ID generation with privacy-focused messaging. Ensure the process is idempotent so running it multiple times won't create multiple IDs.
## 5. Add documentation and telemetry system access [pending]
### Dependencies: 80.4
### Description: Document the user ID system and create an API for the telemetry system to access the user ID.
### Details:
Create comprehensive documentation explaining: 1) The purpose of the anonymous ID, 2) How user privacy is protected, 3) How to opt out of telemetry, and 4) Technical details of the implementation. Implement a simple API function 'getUserId()' that reads the ID from config for use by the telemetry system. Update the README and user documentation to include information about anonymous usage tracking. Ensure cross-platform compatibility by testing on all supported operating systems.

(File diff suppressed because it is too large.)