Files
spec-kit/templates/commands/tasks.md
Matt Van Horn 333a76535b feat(commands): wire before/after hook events into specify and plan templates (#1886)
* feat(commands): wire before/after hook events into specify and plan templates

Replicates the hook evaluation pattern from tasks.md and implement.md
(introduced in PR #1702) into the specify and plan command templates.
This completes the hook lifecycle across all SDD phases.

Changes:
- specify.md: Add before_specify/after_specify hook blocks
- plan.md: Add before_plan/after_plan hook blocks
- EXTENSION-API-REFERENCE.md: Document new hook events
- EXTENSION-USER-GUIDE.md: List all available hook events

Fixes #1788

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Mark before_commit/after_commit as planned in extension docs

These hook events are defined in the API reference but not yet wired
into any core command template. Marking them as planned rather than
removing them, since the infrastructure supports them.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix hook enablement to default true when field is absent

Matches HookExecutor.get_hooks_for_event() semantics where
hooks without an explicit enabled field are treated as enabled.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(docs): mark commit hooks as planned in user guide config example

The yaml config comment listed before_commit/after_commit as
"Available events" but they are not yet wired into core templates.
Moved them to a separate "Planned" line, consistent with the
API reference.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(commands): align enabled-filtering semantics across all hook templates

tasks.md and implement.md previously said "Filter to only hooks where
enabled: true", which would skip hooks that omit the enabled field.
Updated to match specify.md/plan.md and HookExecutor's h.get('enabled', True)
behavior: filter out only hooks where enabled is explicitly false.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Matt Van Horn <455140+mvanhorn@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 06:37:03 -05:00


description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
  - label: Analyze For Consistency
    agent: speckit.analyze
    prompt: Run a project analysis for consistency
    send: true
  - label: Implement Project
    agent: speckit.implement
    prompt: Start the implementation in phases
    send: true
scripts:
  sh: scripts/bash/check-prerequisites.sh --json
  ps: scripts/powershell/check-prerequisites.ps1 -Json

User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).

Pre-Execution Checks

Check for extension hooks (before tasks generation):

  • Check if .specify/extensions.yml exists in the project root.
  • If it exists, read it and look for entries under the hooks.before_tasks key
  • If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
  • Filter out hooks where enabled is explicitly false. Treat hooks without an enabled field as enabled by default.
  • For each remaining hook, inspect its condition field without attempting to interpret or evaluate the expression:
    • If the hook has no condition field, or it is null/empty, treat the hook as executable
    • If the hook defines a non-empty condition, skip the hook and leave condition evaluation to the HookExecutor implementation
  • For each executable hook, output the following based on its optional flag:
    • Optional hook (optional: true):
      ## Extension Hooks
      
      **Optional Pre-Hook**: {extension}
      Command: `/{command}`
      Description: {description}
      
      Prompt: {prompt}
      To execute: `/{command}`
      
    • Mandatory hook (optional: false):
      ## Extension Hooks
      
      **Automatic Pre-Hook**: {extension}
      Executing: `/{command}`
      EXECUTE_COMMAND: {command}
      
      Wait for the result of the hook command before proceeding to the Outline.
      
  • If no hooks are registered or .specify/extensions.yml does not exist, skip silently
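
The enabled/condition filtering rules above can be sketched in Python (a minimal sketch assuming an already-parsed `.specify/extensions.yml` dict; the authoritative logic lives in spec-kit's `HookExecutor`, and the hook names below are made up for illustration):

```python
def executable_hooks(config, event):
    """Filter hooks per the rules above: drop hooks with an explicit
    enabled: false (an absent field means enabled), and defer any hook
    that carries a non-empty condition to the HookExecutor."""
    if not isinstance(config, dict):
        return []  # missing or unparseable config: skip silently
    hooks = (config.get("hooks") or {}).get(event) or []
    runnable = []
    for hook in hooks:
        if hook.get("enabled", True) is False:
            continue  # only an explicit enabled: false is filtered out
        if hook.get("condition"):
            continue  # non-empty condition: leave evaluation to HookExecutor
        runnable.append(hook)
    return runnable

# Illustrative parsed config (extension/command names are invented):
config = {
    "hooks": {
        "before_tasks": [
            {"extension": "lint", "command": "speckit.lint"},  # no enabled field
            {"extension": "audit", "command": "speckit.audit", "enabled": False},
            {"extension": "gated", "command": "speckit.gate",
             "condition": "branch == 'main'"},
        ]
    }
}
# Only the 'lint' hook survives: enabled defaults to true, the disabled
# hook is filtered, and the conditioned hook is deferred.
```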

Outline

  1. Setup: Run {SCRIPT} from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'''m Groot' (or double-quote if possible: "I'm Groot").

  2. Load design documents: Read from FEATURE_DIR:

    • Required: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
    • Optional: data-model.md (entities), contracts/ (interface contracts), research.md (decisions), quickstart.md (test scenarios)
    • Note: Not all projects have all documents. Generate tasks based on what's available.
  3. Execute task generation workflow:

    • Load plan.md and extract tech stack, libraries, project structure
    • Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
    • If data-model.md exists: Extract entities and map to user stories
    • If contracts/ exists: Map interface contracts to user stories
    • If research.md exists: Extract decisions for setup tasks
    • Generate tasks organized by user story (see Task Generation Rules below)
    • Generate dependency graph showing user story completion order
    • Create parallel execution examples per user story
    • Validate task completeness (each user story has all needed tasks, independently testable)
  4. Generate tasks.md: Use templates/tasks-template.md as structure, fill with:

    • Correct feature name from plan.md
    • Phase 1: Setup tasks (project initialization)
    • Phase 2: Foundational tasks (blocking prerequisites for all user stories)
    • Phase 3+: One phase per user story (in priority order from spec.md)
    • Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
    • Final Phase: Polish & cross-cutting concerns
    • All tasks must follow the strict checklist format (see Task Generation Rules below)
    • Clear file paths for each task
    • Dependencies section showing story completion order
    • Parallel execution examples per story
    • Implementation strategy section (MVP first, incremental delivery)
  5. Report: Output path to generated tasks.md and summary:

    • Total task count
    • Task count per user story
    • Parallel opportunities identified
    • Independent test criteria for each story
    • Suggested MVP scope (typically just User Story 1)
    • Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
  6. Check for extension hooks: After tasks.md is generated, check if .specify/extensions.yml exists in the project root.

    • If it exists, read it and look for entries under the hooks.after_tasks key
    • If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
    • Filter out hooks where enabled is explicitly false. Treat hooks without an enabled field as enabled by default.
    • For each remaining hook, inspect its condition field without attempting to interpret or evaluate the expression:
      • If the hook has no condition field, or it is null/empty, treat the hook as executable
      • If the hook defines a non-empty condition, skip the hook and leave condition evaluation to the HookExecutor implementation
    • For each executable hook, output the following based on its optional flag:
      • Optional hook (optional: true):
        ## Extension Hooks
        
        **Optional Hook**: {extension}
        Command: `/{command}`
        Description: {description}
        
        Prompt: {prompt}
        To execute: `/{command}`
        
      • Mandatory hook (optional: false):
        ## Extension Hooks
        
        **Automatic Hook**: {extension}
        Executing: `/{command}`
        EXECUTE_COMMAND: {command}
        
    • If no hooks are registered or .specify/extensions.yml does not exist, skip silently

Context for task generation: {ARGS}

The tasks.md should be immediately executable: each task must be specific enough that an LLM can complete it without additional context.
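
The setup and document-loading steps (outline steps 1 and 2) can be sketched as follows. This is a hedged sketch: the FEATURE_DIR and AVAILABLE_DOCS field names come from the outline above, but the prerequisites script's exact JSON schema is an assumption.

```python
import json

# Sample shape of the --json output (assumed; the real schema may differ):
raw = '{"FEATURE_DIR": "/repo/specs/001-demo", "AVAILABLE_DOCS": ["plan.md", "spec.md", "data-model.md"]}'
info = json.loads(raw)

feature_dir = info["FEATURE_DIR"]   # the outline requires absolute paths
docs = set(info["AVAILABLE_DOCS"])

# Step 2: plan.md and spec.md are required; the rest are optional.
missing_required = {"plan.md", "spec.md"} - docs
optional_present = docs & {"data-model.md", "contracts/", "research.md", "quickstart.md"}
```

With this sample input, `missing_required` is empty and only data-model.md is present among the optional documents, so task generation would proceed without contract or research tasks.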

Task Generation Rules

CRITICAL: Tasks MUST be organized by user story to enable independent implementation and testing.

Tests are OPTIONAL: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.

Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

- [ ] [TaskID] [P?] [Story?] Description with file path

Format Components:

  1. Checkbox: ALWAYS start with - [ ] (markdown checkbox)
  2. Task ID: Sequential number (T001, T002, T003...) in execution order
  3. [P] marker: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
  4. [Story] label: REQUIRED for user story phase tasks only
    • Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
    • Setup phase: NO story label
    • Foundational phase: NO story label
    • User Story phases: MUST have story label
    • Polish phase: NO story label
  5. Description: Clear action with exact file path

Examples:

  • CORRECT: - [ ] T001 Create project structure per implementation plan
  • CORRECT: - [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py
  • CORRECT: - [ ] T012 [P] [US1] Create User model in src/models/user.py
  • CORRECT: - [ ] T014 [US1] Implement UserService in src/services/user_service.py
  • WRONG: - [ ] Create User model (missing ID and Story label)
  • WRONG: T001 [US1] Create model (missing checkbox)
  • WRONG: - [ ] [US1] Create User model (missing Task ID)
  • WRONG: - [ ] T001 [US1] Create model (missing file path)
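
The format validation in the Report step can be sketched as a regex check (illustrative, not part of spec-kit; it verifies the checkbox, task ID, and label structure, but cannot tell whether the description actually contains a file path):

```python
import re

TASK_RE = re.compile(
    r"^- \[ \] "          # markdown checkbox
    r"T\d{3}"             # sequential task ID (T001, T002, ...)
    r"(?: \[P\])?"        # optional parallelizable marker
    r"(?: \[US\d+\])?"    # optional user-story label
    r" \S.*$"             # non-empty description
)

assert TASK_RE.match("- [ ] T001 Create project structure per implementation plan")
assert TASK_RE.match("- [ ] T012 [P] [US1] Create User model in src/models/user.py")
assert not TASK_RE.match("T001 [US1] Create model")        # missing checkbox
assert not TASK_RE.match("- [ ] [US1] Create User model")  # missing task ID
```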

Task Organization

  1. From User Stories (spec.md) - PRIMARY ORGANIZATION:

    • Each user story (P1, P2, P3...) gets its own phase
    • Map all related components to their story:
      • Models needed for that story
      • Services needed for that story
      • Interfaces/UI needed for that story
      • If tests requested: Tests specific to that story
    • Mark story dependencies (most stories should be independent)
  2. From Contracts:

    • Map each interface contract → to the user story it serves
    • If tests requested: Each interface contract → contract test task [P] before implementation in that story's phase
  3. From Data Model:

    • Map each entity to the user story(ies) that need it
    • If entity serves multiple stories: Put in earliest story or Setup phase
    • Relationships → service layer tasks in appropriate story phase
  4. From Setup/Infrastructure:

    • Shared infrastructure → Setup phase (Phase 1)
    • Foundational/blocking tasks → Foundational phase (Phase 2)
    • Story-specific setup → within that story's phase
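
Rule 3's "earliest story" placement for shared entities can be sketched like this (entity names and the helper are illustrative, not from spec-kit):

```python
def owning_story(stories):
    """Pick the earliest story for a shared entity; story labels
    US1, US2, ... encode priority order per the rules above."""
    return min(stories, key=lambda label: int(label[2:]))

entity_usage = {
    "User": ["US2", "US1", "US3"],  # shared entity: lands in US1 (or Setup)
    "Invoice": ["US2"],             # single-story entity stays in its story
}
assert owning_story(entity_usage["User"]) == "US1"
assert owning_story(entity_usage["Invoice"]) == "US2"
```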

Phase Structure

  • Phase 1: Setup (project initialization)
  • Phase 2: Foundational (blocking prerequisites - MUST complete before user stories)
  • Phase 3+: User Stories in priority order (P1, P2, P3...)
    • Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
    • Each phase should be a complete, independently testable increment
  • Final Phase: Polish & Cross-Cutting Concerns
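
The within-story ordering above (Tests → Models → Services → Endpoints → Integration) can be expressed as a sort key; a sketch with illustrative task data:

```python
# Layer order inside a user-story phase, from the Phase Structure above.
LAYER_ORDER = ["tests", "models", "services", "endpoints", "integration"]

def story_order(task):
    return LAYER_ORDER.index(task["layer"])

story_tasks = [
    {"id": "T022", "layer": "endpoints"},
    {"id": "T020", "layer": "models"},
    {"id": "T021", "layer": "services"},
]
ordered_ids = [t["id"] for t in sorted(story_tasks, key=story_order)]
# models before services before endpoints: T020, T021, T022
```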