# Task Structure

Tasks in Task Master follow a specific format designed to provide comprehensive information for both humans and AI assistants.

## Task Fields in tasks.json

Tasks in tasks.json have the following structure:

- `id`: Unique identifier for the task (Example: `1`)
- `title`: Brief, descriptive title of the task (Example: `"Initialize Repo"`)
- `description`: Concise description of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- `status`: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- `dependencies`: IDs of tasks that must be completed before this task (Example: `[1, 2]`)
  - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
  - This helps quickly identify which prerequisite tasks are blocking work
- `priority`: Importance level of the task (Example: `"high"`, `"medium"`, `"low"`)
- `details`: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- `testStrategy`: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- `subtasks`: List of smaller, more specific tasks that make up the main task (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)

## Task File Format

Individual task files follow this format:

```
# Task ID: <id>
# Title: <title>
# Status: <status>
# Dependencies: <comma-separated list of dependency IDs>
# Priority: <priority>
# Description: <brief description>
# Details:
<detailed implementation notes>

# Test Strategy:
<verification approach>
```

## Features in Detail

### Analyzing Task Complexity

The `analyze-complexity` command:

- Analyzes each task using AI to assess its complexity on a scale of 1-10
- Recommends the optimal number of subtasks based on the configured `DEFAULT_SUBTASKS`
- Generates tailored prompts for expanding each task
- Creates a comprehensive JSON report with ready-to-use commands
- Saves the report to
`scripts/task-complexity-report.json` by default

The generated report contains:

- Complexity analysis for each task (scored 1-10)
- Recommended number of subtasks based on complexity
- AI-generated expansion prompts customized for each task
- Ready-to-run expansion commands directly within each task analysis

### Viewing Complexity Report

The `complexity-report` command:

- Displays a formatted, easy-to-read version of the complexity analysis report
- Shows tasks organized by complexity score (highest to lowest)
- Provides complexity distribution statistics (low, medium, high)
- Highlights tasks recommended for expansion based on the threshold score
- Includes ready-to-use expansion commands for each complex task
- If no report exists, offers to generate one on the spot

### Smart Task Expansion

The `expand` command automatically checks for and uses the complexity report.

When a complexity report exists:

- Tasks are automatically expanded using the recommended subtask count and prompts
- When expanding all tasks, they're processed in order of complexity (highest first)
- Research-backed generation is preserved from the complexity analysis
- You can still override recommendations with explicit command-line options

Example workflow:

```bash
# Generate the complexity analysis report with research capabilities
task-master analyze-complexity --research

# Review the report in a readable format
task-master complexity-report

# Expand tasks using the optimized recommendations
task-master expand --id=8
# or expand all tasks
task-master expand --all
```

### Finding the Next Task

The `next` command:

- Identifies tasks that are pending/in-progress and have all dependencies satisfied
- Prioritizes tasks by priority level, dependency count, and task ID
- Displays comprehensive information about the selected task:
  - Basic task details (ID, title, priority, dependencies)
  - Implementation details
  - Subtasks (if they exist)
- Provides contextual suggested actions:
  - Command to mark the task
as in-progress
  - Command to mark the task as done
  - Commands for working with subtasks

### Viewing Specific Task Details

The `show` command:

- Displays comprehensive details about a specific task or subtask
- Shows task status, priority, dependencies, and detailed implementation notes
- For parent tasks, displays all subtasks and their status
- For subtasks, shows the parent task relationship
- Provides contextual action suggestions based on the task's state
- Works with both regular tasks and subtasks (using the format `taskId.subtaskId`)

## Best Practices for AI-Driven Development

1. **Start with a detailed PRD**: The more detailed your PRD, the better the generated tasks will be.
2. **Review generated tasks**: After parsing the PRD, review the tasks to ensure they make sense and have appropriate dependencies.
3. **Analyze task complexity**: Use the complexity analysis feature to identify which tasks should be broken down further.
4. **Follow the dependency chain**: Always respect task dependencies - the Cursor agent will help with this.
5. **Update as you go**: If your implementation diverges from the plan, use the `update` command to keep future tasks aligned with your current approach.
6. **Break down complex tasks**: Use the `expand` command to break down complex tasks into manageable subtasks.
7. **Regenerate task files**: After any updates to tasks.json, regenerate the task files to keep them in sync.
8. **Communicate context to the agent**: When asking the Cursor agent to help with a task, provide context about what you're trying to achieve.
9. **Validate dependencies**: Periodically run the `validate-dependencies` command to check for invalid or circular dependencies.
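For reference, the fields described under "Task Fields in tasks.json" combine into entries like the following. This is an illustrative sketch only: the ID, title, and subtask values are invented (the `details` string reuses the example from the field list above), and are not taken from a real project:

```json
{
  "id": 2,
  "title": "Implement GitHub OAuth Login",
  "description": "Add OAuth-based login so users can authenticate with GitHub.",
  "status": "pending",
  "dependencies": [1],
  "priority": "high",
  "details": "Use GitHub client ID/secret, handle callback, set session token.",
  "testStrategy": "Log in via GitHub on a staging deploy and confirm a session token is set.",
  "subtasks": [
    { "id": 1, "title": "Configure OAuth", "status": "pending" }
  ]
}
```

Each top-level entry in tasks.json follows this shape; subtasks reuse the same `id`/`title`/`status` fields at a smaller scope, and dependency IDs refer to other tasks' `id` values.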