# TDD Dev Story - Workflow Instructions
```xml
The workflow execution engine is governed by: /home/bj/python/BMAD-METHOD/bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status
Execute ALL steps in exact order; do NOT skip steps
If {{run_until_complete}} == true, run non-interactively: do not pause between steps unless a HALT condition is reached or explicit user approval is required for unapproved dependencies.
Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) or a HALT condition is triggered.
Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 7 decides completion.
TEST-FIRST MANDATE: NEVER write implementation code before tests exist and fail. This is RED-GREEN-REFACTOR, not GREEN-RED.
If {{story_path}} was explicitly provided and is valid → use it. Otherwise, attempt auto-discovery.
Auto-discovery: Read {{story_dir}} from config (dev_story_location). If invalid/missing or contains no .md files, ASK user to provide either: (a) a story file path, or (b) a directory to scan.
If a directory is provided, list story markdown files recursively under that directory matching pattern: "story-*.md".
Sort candidates by last modified time (newest first) and take the top {{story_selection_limit}} items.
Present the list with index, filename, and modified time. Ask: "Select a story (1-{{story_selection_limit}}) or enter a path:"
Resolve the selected item into {{story_path}}
Read the COMPLETE story file from {{story_path}}
Parse sections: Story, Acceptance Criteria, Tasks/Subtasks (including subtasks), Dev Notes, Dev Agent Record, File List, Change Log, Status
Identify the first incomplete task (unchecked [ ]) in Tasks/Subtasks; if subtasks exist, treat all subtasks as part of the selected task scope
If no incomplete tasks found → "All tasks completed - proceed to completion sequence" and Continue
If story file inaccessible → HALT: "Cannot develop story without access to story file"
If task requirements ambiguous → ASK user to clarify; if unresolved, HALT: "Task requirements must be clear before implementation"
Review Story Context JSON and extract acceptance criteria for the selected task
Review test strategy from Story Context (if present): required test types, coverage thresholds, test patterns
Plan test suite: determine which acceptance criteria need which test types (unit, integration, e2e)
Write brief test plan in Dev Agent Record → Debug Log
Invoke ATDD task to generate comprehensive failing tests, passing:
- {{story_path}}
- {{current_task_id}}
- {{current_task_acceptance_criteria}}
This will:
- Generate failing tests for all acceptance criteria
- Automatically register tests with RVTM (linking to requirements)
- Create test files following project conventions
- Provide implementation checklist
Review generated tests for completeness and clarity
Ensure tests follow patterns from TEA knowledge base: one test = one concern, explicit assertions, clear failure messages
Run the generated tests to verify RED state
If {{verify_red_state}} == true and tests PASS without implementation → HALT: "Tests must fail initially to prove they test the right thing. Review test assertions."
If tests cannot be created due to missing acceptance criteria → ASK user for clarification
If ATDD task fails → Review error, attempt fix, or ask user for guidance
Confirm RED state: Display "✅ RED Phase Complete: N tests created, all failing as expected"
Log to Dev Agent Record: "RED: Generated N tests for task {{current_task_id}}, all tests failing (validated)"
Review the failing tests and their error messages
Review the implementation checklist provided by ATDD task
Plan implementation approach in Dev Agent Record → Debug Log
Implement ONLY enough code to make the failing tests pass
Follow the principle: simplest implementation that satisfies tests
Handle error conditions and edge cases as specified in tests
Follow architecture patterns and coding standards from Story Context
Run tests iteratively during implementation
Continue implementing until all tests for this task PASS
If unapproved dependencies are needed → ASK user for approval before adding
If 3 consecutive implementation failures occur → HALT and request guidance
If required configuration is missing → HALT: "Cannot proceed without necessary configuration files"
If tests still fail after reasonable attempts → Review test expectations vs implementation, ask user if tests need adjustment
Confirm GREEN state: Display "✅ GREEN Phase Complete: All N tests now passing"
Log to Dev Agent Record: "GREEN: Implemented task {{current_task_id}}, all tests passing"
If {{refactor_required}} == false → Skip this step
Review implementation against code quality standards:
- DRY (Don't Repeat Yourself): Identify and eliminate duplication
- SOLID principles: Check single responsibility, proper abstractions
- Naming clarity: Ensure variables, functions, classes are well-named
- Function/method size: Break down large functions
- Complexity reduction: Simplify complex logic
- Pattern consistency: Match existing codebase patterns
Identify specific refactoring opportunities and list them
For each refactoring opportunity:
1. Make ONE small refactoring change
2. Run all tests immediately
3. If tests FAIL:
- Revert the change immediately
- Try a different refactoring approach
- Document what didn't work
4. If tests PASS:
- Keep the change
- Continue to next refactoring
After all refactoring, validate final state:
- All tests still GREEN ✅
- Code quality metrics improved (less duplication, lower complexity)
- No new warnings or linting errors introduced
- Architecture patterns maintained
If tests fail during refactoring and revert doesn't fix → HALT: "Refactoring broke tests, cannot recover. Manual review required."
If code quality decreases (more duplication, higher complexity) → Revert refactoring and try different approach
Confirm REFACTOR complete: Display "✅ REFACTOR Phase Complete: Code quality improved, all tests still GREEN"
Log to Dev Agent Record: "REFACTOR: Improved code quality for task {{current_task_id}}, N refactorings applied, all tests green"
Determine how to run tests for this repo (infer or use {{run_tests_command}} if provided)
Run the complete test suite (not just new tests):
- Unit tests
- Integration tests (if applicable)
- End-to-end tests (if applicable)
Run linting and code quality checks if configured
Validate implementation meets ALL acceptance criteria for this task
If ACs include quantitative thresholds (e.g., test pass rate, coverage %), ensure they are met
Capture test results summary (for RVTM update):
- Total tests run
- Tests passed
- Tests failed (should be 0)
- Coverage % (if measured)
- Execution time
If regression tests fail → STOP and fix before continuing. Log failure in Dev Agent Record.
If new tests fail → STOP and fix before continuing. Return to GREEN phase if needed.
If linting fails → Fix issues before continuing
If acceptance criteria not met → Document gaps and ask user how to proceed
ONLY mark the task (and its subtasks) checkboxes with [x] if ALL tests pass and validation succeeds
Update File List section with any new, modified, or deleted files (paths relative to repo root)
Add completion notes to Dev Agent Record summarizing:
- RED: Tests created
- GREEN: Implementation approach
- REFACTOR: Quality improvements made
- Any follow-ups or technical debt noted
Append entry to Change Log describing the change with test count and coverage
Save the story file
Update RVTM with test execution results, passing:
- {{story_path}}
- {{test_results_summary}}
- .rvtm/matrix.yaml
This will:
- Update test status to "passed" for all passing tests
- Update test execution timestamps
- Recalculate coverage metrics
- Non-blocking: warns if RVTM unavailable but continues
Display summary:
- Task completed: {{current_task_id}}
- Tests: N created, N passing
- RVTM: Updated with test results
- Files modified: list paths
Determine whether any incomplete tasks remain
If more tasks remain → repeat the cycle with the next incomplete task
If no tasks remain → proceed to the completion sequence
Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)
Run the full regression suite one final time (do not skip)
Confirm File List includes every changed file
Execute the story's definition-of-done checklist, if the story includes one
Update the story Status to: Ready for Review
Update RVTM story status to completed, passing:
- {{story_path}}
- completed
- .rvtm/matrix.yaml
This will:
- Mark story as completed with timestamp
- Update linked requirements to "implemented" status
- Recalculate all coverage metrics
- Generate final traceability report
Generate final TDD summary report:
- Story: {{story_title}}
- Tasks completed: N
- Total tests created: N
- All tests passing: ✅
- RED-GREEN-REFACTOR cycles: N
- RVTM Traceability:
* Requirements linked: N
* Tests registered: N
* Coverage: X%
* All requirements verified: ✅
- Status: Ready for Review
If any task is incomplete → Return to step 1 to complete remaining work (Do NOT finish with partial progress)
If regression failures exist → STOP and resolve before completing
If File List is incomplete → Update it before completing
If RVTM shows coverage gaps → Warn user but proceed (traceability is best-effort)
Optionally run the workflow validation task against the story using /home/bj/python/BMAD-METHOD/bmad/core/tasks/validate-workflow.md
Run validation against TDD checklist: {installed_path}/checklist.md
Prepare a concise summary in Dev Agent Record → Completion Notes highlighting TDD discipline maintained
Communicate that the story is Ready for Review with full test coverage and RVTM traceability
```
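
## Illustrative sketches

The auto-discovery step above (scan {{story_dir}} recursively for "story-*.md", sort by last modified time, present the top {{story_selection_limit}} candidates) could look roughly like the following Python sketch. The directory path, limit, and function names are placeholders, not values defined by this workflow.

```python
# Hedged sketch of story auto-discovery: recursive glob, newest-first sort,
# top-N selection. The paths and the limit are illustrative assumptions.
from datetime import datetime
from pathlib import Path

def discover_stories(story_dir: str, selection_limit: int = 10) -> list[Path]:
    candidates = sorted(
        Path(story_dir).rglob("story-*.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    return candidates[:selection_limit]

for idx, story in enumerate(discover_stories("docs/stories"), start=1):
    modified = datetime.fromtimestamp(story.stat().st_mtime)
    print(f"{idx}. {story.name} (modified {modified:%Y-%m-%d %H:%M})")
print("Select a story (1-10) or enter a path:")
```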
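The RED-phase gate (halt if the newly generated tests pass before any implementation exists) is essentially an inverted exit-code check. The sketch below assumes pytest as the runner; substitute the repo's actual test command.

```python
# Hedged sketch of the RED-state check: a passing suite at this point is an
# error, because failing tests are what prove the tests exercise real behavior.
import subprocess
import sys

def verify_red_state(test_command: list[str]) -> None:
    result = subprocess.run(test_command)
    if result.returncode == 0:
        sys.exit(
            "HALT: Tests must fail initially to prove they test the right thing. "
            "Review test assertions."
        )
    print("✅ RED Phase Complete: tests failing as expected")

verify_red_state(["pytest", "-q"])  # assumed test runner
```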
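The REFACTOR loop (one small change, run all tests, revert immediately on failure) could be driven by a helper like the one below. The `(description, apply)` pairs and the use of `git checkout -- .` to revert are assumptions about the working environment, not part of the workflow itself.

```python
# Hedged sketch of the refactor-and-verify loop: keep a change only if the
# full test suite stays green, otherwise revert it right away.
import subprocess

def tests_pass(test_command: list[str]) -> bool:
    return subprocess.run(test_command).returncode == 0

def refactor_safely(refactorings, test_command: list[str]) -> int:
    applied = 0
    for description, apply in refactorings:    # each item: (description, callable)
        apply()                                 # make ONE small refactoring change
        if tests_pass(test_command):
            applied += 1                        # keep the change
        else:
            subprocess.run(["git", "checkout", "--", "."])  # revert immediately
            print(f"Reverted refactoring that broke tests: {description}")
    return applied
```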
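The test results summary captured for RVTM (totals, pass/fail counts, execution time) can be read from a machine-readable test report. The sketch below assumes pytest's JUnit XML output; other runners expose equivalent reports.

```python
# Hedged sketch of capturing the test results summary from a JUnit XML report.
import subprocess
import xml.etree.ElementTree as ET

def run_suite_and_summarize(report_path: str = "test-report.xml") -> dict:
    subprocess.run(["pytest", f"--junitxml={report_path}"])  # assumed runner/flag
    root = ET.parse(report_path).getroot()
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    total = int(suite.get("tests", "0"))
    failed = int(suite.get("failures", "0")) + int(suite.get("errors", "0"))
    skipped = int(suite.get("skipped", "0"))
    return {
        "total": total,
        "passed": total - failed - skipped,
        "failed": failed,  # should be 0 before the task checkbox is marked [x]
        "execution_time_s": float(suite.get("time", "0")),
    }
```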
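The RVTM updates are explicitly non-blocking: if the matrix or the RVTM tooling is unavailable, the workflow warns and continues. A minimal sketch of that pattern, assuming PyYAML is available and using a hypothetical `test_runs` key (the real matrix.yaml schema is not defined here):

```python
# Hedged sketch of a non-blocking RVTM update; the matrix structure touched
# here is hypothetical and only illustrates the warn-and-continue behavior.
from pathlib import Path
import yaml  # PyYAML, an assumed dependency

def update_rvtm(results: dict, matrix_path: str = ".rvtm/matrix.yaml") -> None:
    try:
        path = Path(matrix_path)
        matrix = yaml.safe_load(path.read_text()) or {}
        matrix.setdefault("test_runs", []).append(results)  # hypothetical key
        path.write_text(yaml.safe_dump(matrix, sort_keys=False))
    except Exception as exc:  # non-blocking: warn but keep going
        print(f"Warning: RVTM unavailable, continuing without traceability update: {exc}")
```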