spec-kit/tests/hooks/TESTING.md
Manfred Riem d0a112c60f fix: wire after_tasks and after_implement hook events into command templates (#1702)
* fix: wire after_tasks and after_implement hook events into command templates (#1701)

The HookExecutor backend in extensions.py was fully implemented but
check_hooks_for_event() was never called by anything — the core command
templates had no instructions to check .specify/extensions.yml.

Add a final step to templates/commands/tasks.md (step 6, after_tasks) and
templates/commands/implement.md (step 10, after_implement) that instructs
the AI agent to:

- Read .specify/extensions.yml if it exists
- Filter hooks.{event} to enabled: true entries
- Evaluate any condition fields and skip non-matching hooks
- Output the RFC-specified hook message format, including
  EXECUTE_COMMAND: markers for mandatory (optional: false) hooks

Bumps version to 0.1.7.

* fix: clarify hook condition handling and add YAML error guidance in templates

- Replace ambiguous "evaluate any condition value" instruction with explicit
  guidance to skip hooks with non-empty conditions, deferring evaluation to
  HookExecutor
- Add instruction to skip hook checking silently if extensions.yml cannot
  be parsed or is invalid

* Fix/extension hooks not triggered (#1)

* feat(templates): implement before-hooks check as pre-execution phase

* test(hooks): create scenario for LLMs/Agents on hooks

---------

Co-authored-by: Dhilip <s.dhilipkumar@gmail.com>
2026-03-04 08:16:31 -06:00


Testing Extension Hooks

This directory contains a mock project to verify that LLM agents correctly identify and execute hook commands defined in .specify/extensions.yml.
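For orientation, a hypothetical `.specify/extensions.yml` for such a mock project might look like the sketch below. The field names (`enabled`, `optional`, `condition`) follow the commit description above, but the exact schema is defined by the RFC, so treat this as an illustration rather than the canonical file:

```yaml
# Hypothetical sketch of .specify/extensions.yml — not the canonical schema.
hooks:
  before_tasks:
    - name: pre_tasks_test
      enabled: true
      optional: false          # mandatory: the agent emits an EXECUTE_COMMAND: marker
      command: echo "pre-tasks hook ran"
  after_tasks:
    - name: post_tasks_test
      enabled: true
      optional: true           # optional: the agent asks before running it
      command: echo "post-tasks hook ran"
```

The `before_implement` / `after_implement` events (with `pre_implement_test` and `post_implement_test`) would follow the same shape.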

Test 1: Testing before_tasks and after_tasks

  1. Open a chat with an LLM (like GitHub Copilot) in this project.
  2. Ask it to generate tasks for the current directory:

    "Please follow /speckit.tasks for the ./tests/hooks directory."

  3. Expected Behavior:
    • Before doing any generation, the LLM should notice the AUTOMATIC Pre-Hook in .specify/extensions.yml under before_tasks.
    • It should state it is executing EXECUTE_COMMAND: pre_tasks_test.
    • It should then proceed to read the .md docs and produce a tasks.md.
    • After generation, it should output the optional after_tasks hook (post_tasks_test) block, asking if you want to run it.

Test 2: Testing before_implement and after_implement

(Requires tasks.md from Test 1 to exist)

  1. In the same (or new) chat, ask the LLM to implement the tasks:

    "Please follow /speckit.implement for the ./tests/hooks directory."

  2. Expected Behavior:
    • The LLM should first check for before_implement hooks.
    • It should state it is executing EXECUTE_COMMAND: pre_implement_test BEFORE doing any actual task execution.
    • It should evaluate the checklists and execute the code writing tasks.
    • Upon completion, it should output the optional after_implement hook (post_implement_test) block.

How it works

The templates for these commands in templates/commands/tasks.md and templates/commands/implement.md contain strict ordered step lists. The new before_* hooks are declared in a Pre-Execution Checks section placed before that outline, so they are evaluated first without renumbering the existing template steps.
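The filtering the templates ask the agent to perform — keep only `enabled: true` hooks, skip hooks with a non-empty `condition` (their evaluation is deferred to HookExecutor), and fail silently on a missing or invalid config — can be sketched in Python. This is a hypothetical helper for illustration, not code from the repo, and the exact hook message format is specified by the RFC:

```python
def hooks_to_announce(config, event):
    """Filter the hooks an agent should surface for one event.

    Mirrors the template instructions: keep only enabled hooks and skip
    any hook with a non-empty condition field, deferring its evaluation
    to HookExecutor. An invalid or missing config yields no hooks, so
    hook checking is skipped silently.
    """
    if not isinstance(config, dict):
        return []  # unparseable/invalid extensions.yml: skip silently
    hooks = (config.get("hooks") or {}).get(event) or []
    return [
        h for h in hooks
        if h.get("enabled", False) and not h.get("condition")
    ]


def announce(hook):
    """Render a (hypothetical) agent-facing line for one selected hook.

    A mandatory hook (optional: false) gets the EXECUTE_COMMAND: marker;
    an optional hook is merely offered to the user.
    """
    name = hook.get("name", "<unnamed>")
    if hook.get("optional", True):
        return f"Optional hook available: {name}"
    return f"EXECUTE_COMMAND: {name}"
```

With a config containing the mandatory `pre_tasks_test` hook under `before_tasks`, `hooks_to_announce(config, "before_tasks")` selects it and `announce(...)` yields `EXECUTE_COMMAND: pre_tasks_test`, matching the expected behavior in Test 1.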