Add TDD Developer Agent with RED-GREEN-REFACTOR workflow
- Created dev-tdd.md agent with strict test-first development methodology
- Implemented complete TDD workflow with RED-GREEN-REFACTOR cycles
- Added comprehensive validation checklist (250+ items)
- Integrated RVTM traceability for requirement-test-implementation tracking
- Includes ATDD test generation and Story Context integration
- Agent name: Ted (TDD Developer Agent) with ✅ icon

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
bmad/bmm/workflows/4-implementation/dev-story-tdd/checklist.md (new file, 279 lines)
@@ -0,0 +1,279 @@
# TDD Workflow Validation Checklist

## Story Initialization

- [ ] Story file loaded and Status == 'Approved'
- [ ] Story Context JSON loaded and parsed
- [ ] Story Context contains acceptance criteria
- [ ] Story Context contains test strategy (if applicable)
- [ ] All tasks and subtasks identified
- [ ] RVTM matrix available (or gracefully degraded if not)

## Phase 1: RED - Failing Tests

### Test Generation

- [ ] ATDD task invoked successfully for current task
- [ ] Test files generated for all acceptance criteria
- [ ] Tests follow TEA knowledge patterns (one test = one concern)
- [ ] Tests have explicit assertions
- [ ] Test failure messages are clear and actionable
- [ ] Tests match required types from test strategy (unit/integration/e2e)

### RVTM Integration (RED Phase)

- [ ] Tests automatically registered in RVTM
- [ ] Tests linked to story requirements
- [ ] Test status set to 'pending' initially
- [ ] Test file paths recorded correctly
- [ ] Requirement inheritance from story working

### RED State Verification

- [ ] Tests executed after generation
- [ ] All tests FAILED (RED state verified)
- [ ] Failure messages indicate what needs to be implemented
- [ ] No tests passing before implementation (proves tests are valid)
- [ ] RED phase logged in Dev Agent Record

## Phase 2: GREEN - Implementation

### Implementation Quality

- [ ] Code implemented to pass failing tests
- [ ] Implementation follows Story Context architecture patterns
- [ ] Implementation uses existing interfaces from Story Context
- [ ] Coding standards from repository maintained
- [ ] Error handling implemented as specified in tests
- [ ] Edge cases covered per test requirements

### Test Execution (GREEN Phase)

- [ ] Tests run iteratively during implementation
- [ ] All new tests PASS (GREEN state achieved)
- [ ] No tests skipped or disabled
- [ ] Test execution time reasonable
- [ ] GREEN phase logged in Dev Agent Record
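The GREEN step above can be sketched as the simplest implementation that turns a failing acceptance test green. `slugify` and its acceptance criterion are hypothetical illustrations carried over from the RED phase, not part of this workflow.

```python
import re

def slugify(title):
    # Simplest code satisfying the test: lowercase, collapse runs of
    # non-alphanumerics into single hyphens, strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_ac1_slugify():
    assert slugify("My First Story") == "my-first-story"

test_ac1_slugify()  # the previously failing test now passes: GREEN
```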

### Acceptance Criteria Validation

- [ ] Implementation satisfies all task acceptance criteria
- [ ] Quantitative thresholds met (if specified in ACs)
- [ ] No acceptance criteria left unaddressed
- [ ] Acceptance criteria validation documented

## Phase 3: REFACTOR - Code Quality Improvement

### Refactoring Discipline

- [ ] Code quality issues identified before refactoring
- [ ] Refactoring applied incrementally (one change at a time)
- [ ] Tests run after EACH refactoring change
- [ ] All tests remained GREEN throughout refactoring
- [ ] Failed refactoring attempts reverted immediately
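The incremental discipline above can be sketched as a loop: apply ONE refactoring, re-run the tests, keep the change only if everything stays green, otherwise revert immediately. The `(apply, revert)` callables are stand-ins for real edit/undo steps; the toy state is purely illustrative.

```python
def refactor_incrementally(refactorings, tests_green):
    """Apply (name, apply_change, revert_change) triples one at a time."""
    kept, reverted = [], []
    for name, apply_change, revert_change in refactorings:
        apply_change()
        if tests_green():            # run the suite after EACH change
            kept.append(name)        # keep the change, move on
        else:
            revert_change()          # failed refactoring reverted immediately
            reverted.append(name)
    return kept, reverted

# Toy usage: one safe refactoring, one that breaks the tests.
state = {"duplicated": True, "broken": False}
steps = [
    ("extract-helper", lambda: state.update(duplicated=False),
     lambda: state.update(duplicated=True)),
    ("risky-rename", lambda: state.update(broken=True),
     lambda: state.update(broken=False)),
]
kept, reverted = refactor_incrementally(steps, lambda: not state["broken"])
```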

### Code Quality Metrics

- [ ] DRY principle applied (duplication reduced)
- [ ] SOLID principles followed
- [ ] Naming clarity improved
- [ ] Function/method size appropriate
- [ ] Complexity reduced where possible
- [ ] Architecture patterns consistent with codebase

### Refactoring Outcome

- [ ] Code quality improved measurably
- [ ] No new duplication introduced
- [ ] No increased complexity
- [ ] All tests still GREEN after all refactoring
- [ ] REFACTOR phase logged in Dev Agent Record

## Phase 4: Comprehensive Validation

### Test Suite Execution

- [ ] Full test suite executed (not just new tests)
- [ ] Unit tests: all passing
- [ ] Integration tests: all passing (if applicable)
- [ ] E2E tests: all passing (if applicable)
- [ ] No regression failures introduced
- [ ] Test coverage meets threshold (if specified)

### Code Quality Checks

- [ ] Linting passes with no errors
- [ ] Code quality tools pass (if configured)
- [ ] No new warnings introduced
- [ ] Security checks pass (if configured)
- [ ] Performance acceptable (if thresholds specified)

### Validation Results

- [ ] Test results captured for RVTM update
- [ ] Total tests count recorded
- [ ] Pass/fail counts correct
- [ ] Coverage percentage calculated (if applicable)
- [ ] Execution time recorded
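The results record captured above might look like the following sketch. The field names mirror the checklist items, but the record shape and the `meets_gate` helper are assumptions for illustration, not the actual RVTM schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestResultsSummary:
    total: int
    passed: int
    failed: int
    coverage_pct: Optional[float]  # None when coverage is not measured
    execution_secs: float

    def meets_gate(self, min_coverage: Optional[float] = None) -> bool:
        # Any failing test blocks the gate; coverage is checked only when
        # both a threshold and a measurement exist.
        if self.failed > 0:
            return False
        if min_coverage is None or self.coverage_pct is None:
            return True
        return self.coverage_pct >= min_coverage

summary = TestResultsSummary(total=42, passed=42, failed=0,
                             coverage_pct=87.5, execution_secs=12.4)
```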

## Phase 5: Task Completion & RVTM Update

### Story File Updates

- [ ] Task checkbox marked [x] (only if all tests pass)
- [ ] Subtask checkboxes marked [x] (if applicable)
- [ ] File List updated with all changed files
- [ ] File paths relative to repo root
- [ ] Change Log entry added
- [ ] Change Log describes what was implemented and test coverage

### Dev Agent Record Updates

- [ ] Debug Log contains RED-GREEN-REFACTOR summary
- [ ] Completion Notes summarize implementation approach
- [ ] Test count and coverage documented
- [ ] Follow-up items noted (if any)
- [ ] Technical debt documented (if any)

### RVTM Traceability Updates

- [ ] update-story-status.md task invoked
- [ ] Test status updated to 'passed' in RVTM
- [ ] Test execution timestamps recorded
- [ ] Coverage metrics recalculated
- [ ] RVTM update completed (or warning logged if unavailable)
- [ ] Traceability maintained: requirement → story → test → implementation
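The bidirectional check behind the last item above can be sketched as follows: every requirement must trace forward to at least one test, and every registered test must trace back to a requirement (no orphans). The in-memory shape is an assumption for illustration, not the real `.rvtm/matrix.yaml` schema.

```python
def trace_gaps(requirement_tests, test_requirements):
    """Report requirements with no tests and tests linked to no requirement."""
    uncovered = [r for r, tests in requirement_tests.items() if not tests]
    orphaned = [t for t, reqs in test_requirements.items() if not reqs]
    return {"uncovered_requirements": uncovered, "orphaned_tests": orphaned}

# Hypothetical matrix content: REQ-2 has no test, test_stray has no requirement.
gaps = trace_gaps(
    requirement_tests={"REQ-1": ["test_ac1"], "REQ-2": []},
    test_requirements={"test_ac1": ["REQ-1"], "test_stray": []},
)
```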

## Phase 6: Story Completion

### All Tasks Verification

- [ ] All tasks marked [x] (complete scan performed)
- [ ] All subtasks marked [x]
- [ ] No incomplete tasks remain
- [ ] Final regression suite executed
- [ ] All regression tests passing

### Story Metadata Complete

- [ ] File List includes ALL changed files
- [ ] Change Log complete for entire story
- [ ] Dev Agent Record has completion summary
- [ ] Story Status updated to 'Ready for Review'
- [ ] Story file saved

### RVTM Story Completion

- [ ] Story marked as 'completed' in RVTM
- [ ] Linked requirements updated to 'implemented' status
- [ ] All coverage metrics final and accurate
- [ ] Traceability report available
- [ ] Audit trail complete in RVTM history

### Final TDD Summary

- [ ] Total tasks completed: count correct
- [ ] Total tests created: count correct
- [ ] All tests passing: verified
- [ ] RED-GREEN-REFACTOR cycles: counted
- [ ] RVTM traceability complete:
  - [ ] Requirements linked count
  - [ ] Tests registered count
  - [ ] Coverage percentage
  - [ ] All requirements verified

## TDD Discipline Validation

### Test-First Adherence

- [ ] NO code written before tests existed
- [ ] NO implementation started in RED phase
- [ ] All tests failed initially (RED validated)
- [ ] Tests drove implementation (GREEN)
- [ ] Refactoring kept tests green (REFACTOR)
- [ ] Test-first discipline maintained throughout

### ATDD Integration

- [ ] ATDD task used for all test generation
- [ ] Acceptance criteria drove test creation
- [ ] Tests map one-to-one with acceptance criteria
- [ ] TEA knowledge patterns applied
- [ ] Test quality high (clear, explicit, isolated)

### Traceability Discipline

- [ ] RVTM updated automatically at each phase
- [ ] No manual traceability steps required
- [ ] Requirements → Tests → Implementation linkage complete
- [ ] Bidirectional traceability verified
- [ ] Stakeholder visibility maintained throughout

## Definition of Done

### Story Level

- [ ] All acceptance criteria satisfied
- [ ] All tasks complete
- [ ] All tests passing
- [ ] Code quality high
- [ ] Documentation complete
- [ ] RVTM traceability complete
- [ ] Ready for review

### Test Coverage

- [ ] All requirements have tests
- [ ] All tests passing
- [ ] Coverage threshold met (if specified)
- [ ] No orphaned tests (all linked to requirements)
- [ ] No coverage gaps (all requirements covered)

### Quality Gates

- [ ] No regressions introduced
- [ ] Linting clean
- [ ] Code quality metrics acceptable
- [ ] Security checks pass
- [ ] Performance acceptable
- [ ] Architecture patterns maintained

## TDD Benefits Realized

### Verification

- [ ] Tests prove code works before deployment
- [ ] Acceptance criteria verified by passing tests
- [ ] Refactoring safety net in place
- [ ] Regression protection established

### Documentation

- [ ] Tests document expected behavior
- [ ] Test names describe functionality
- [ ] Failure messages guide debugging
- [ ] Requirements traced through tests

### Quality

- [ ] Test-first led to better design
- [ ] Code is testable and modular
- [ ] Edge cases identified and handled
- [ ] Error handling comprehensive

### Traceability

- [ ] Stakeholders can see requirement status
- [ ] Test verification visible in real-time
- [ ] Implementation completeness measurable
- [ ] Audit trail complete for compliance

---

**Validation Result:** [Pass/Fail]

**Validator:** ________________

**Date:** ________________

**Notes:**
@@ -0,0 +1,233 @@
# TDD Dev Story - Workflow Instructions

```xml
<critical>The workflow execution engine is governed by: /home/bj/python/BMAD-METHOD/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status</critical>
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
<critical>If {{run_until_complete}} == true, run non-interactively: do not pause between steps unless a HALT condition is reached or explicit user approval is required for unapproved dependencies.</critical>
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) or a HALT condition is triggered.</critical>
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 7 decides completion.</critical>
<critical>TEST-FIRST MANDATE: NEVER write implementation code before tests exist and fail. This is RED-GREEN-REFACTOR, not GREEN-RED.</critical>

<workflow>

<step n="1" goal="Load story and select next task">
<action>If {{story_path}} was explicitly provided and is valid → use it. Otherwise, attempt auto-discovery.</action>
<action>Auto-discovery: Read {{story_dir}} from config (dev_story_location). If invalid/missing or contains no .md files, ASK user to provide either: (a) a story file path, or (b) a directory to scan.</action>
<action>If a directory is provided, list story markdown files recursively under that directory matching pattern: "story-*.md".</action>
<action>Sort candidates by last modified time (newest first) and take the top {{story_selection_limit}} items.</action>
<ask>Present the list with index, filename, and modified time. Ask: "Select a story (1-{{story_selection_limit}}) or enter a path:"</ask>
<action>Resolve the selected item into {{story_path}}</action>
<action>Read the COMPLETE story file from {{story_path}}</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks (including subtasks), Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Identify the first incomplete task (unchecked [ ]) in Tasks/Subtasks; if subtasks exist, treat all subtasks as part of the selected task scope</action>
<check>If no incomplete tasks found → "All tasks completed - proceed to completion sequence" and <goto step="7">Continue</goto></check>
<check>If story file inaccessible → HALT: "Cannot develop story without access to story file"</check>
<check>If task requirements ambiguous → ASK user to clarify; if unresolved, HALT: "Task requirements must be clear before implementation"</check>
</step>

<step n="2" goal="Generate failing acceptance tests (RED phase)">
<action>Review Story Context JSON and extract acceptance criteria for the selected task</action>
<action>Review test strategy from Story Context (if present): required test types, coverage thresholds, test patterns</action>
<action>Plan test suite: determine which acceptance criteria need which test types (unit, integration, e2e)</action>
<action>Write brief test plan in Dev Agent Record → Debug Log</action>

<action>Invoke ATDD task to generate comprehensive failing tests:

<invoke-task path="/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/testarch/atdd/instructions.md">
<param name="story_path">{{story_path}}</param>
<param name="task_id">{{current_task_id}}</param>
<param name="acceptance_criteria">{{current_task_acceptance_criteria}}</param>
</invoke-task>

This will:

- Generate failing tests for all acceptance criteria
- Automatically register tests with RVTM (linking to requirements)
- Create test files following project conventions
- Provide implementation checklist
</action>

<action>Review generated tests for completeness and clarity</action>
<action>Ensure tests follow patterns from TEA knowledge base: one test = one concern, explicit assertions, clear failure messages</action>

<action>Run the generated tests to verify RED state</action>
<check>If {{verify_red_state}} == true and tests PASS without implementation → HALT: "Tests must fail initially to prove they test the right thing. Review test assertions."</check>
<check>If tests cannot be created due to missing acceptance criteria → ASK user for clarification</check>
<check>If ATDD task fails → Review error, attempt fix, or ask user for guidance</check>

<action>Confirm RED state: Display "✅ RED Phase Complete: N tests created, all failing as expected"</action>
<action>Log to Dev Agent Record: "RED: Generated N tests for task {{current_task_id}}, all tests failing (validated)"</action>
</step>

<step n="3" goal="Implement to pass tests (GREEN phase)">
<action>Review the failing tests and their error messages</action>
<action>Review the implementation checklist provided by ATDD task</action>
<action>Plan implementation approach in Dev Agent Record → Debug Log</action>

<action>Implement ONLY enough code to make the failing tests pass</action>
<action>Follow the principle: simplest implementation that satisfies tests</action>
<action>Handle error conditions and edge cases as specified in tests</action>
<action>Follow architecture patterns and coding standards from Story Context</action>

<action>Run tests iteratively during implementation</action>
<action>Continue implementing until all tests for this task PASS</action>

<check>If unapproved dependencies are needed → ASK user for approval before adding</check>
<check>If 3 consecutive implementation failures occur → HALT and request guidance</check>
<check>If required configuration is missing → HALT: "Cannot proceed without necessary configuration files"</check>
<check>If tests still fail after reasonable attempts → Review test expectations vs implementation, ask user if tests need adjustment</check>

<action>Confirm GREEN state: Display "✅ GREEN Phase Complete: All N tests now passing"</action>
<action>Log to Dev Agent Record: "GREEN: Implemented task {{current_task_id}}, all tests passing"</action>
</step>

<step n="4" goal="Refactor while maintaining green (REFACTOR phase)">
<check>If {{refactor_required}} == false → Skip this step</check>

<action>Review implementation against code quality standards:</action>
<action> - DRY (Don't Repeat Yourself): Identify and eliminate duplication</action>
<action> - SOLID principles: Check single responsibility, proper abstractions</action>
<action> - Naming clarity: Ensure variables, functions, classes are well-named</action>
<action> - Function/method size: Break down large functions</action>
<action> - Complexity reduction: Simplify complex logic</action>
<action> - Pattern consistency: Match existing codebase patterns</action>

<action>Identify specific refactoring opportunities and list them</action>

<action>For each refactoring opportunity:</action>
<action> 1. Make ONE small refactoring change</action>
<action> 2. Run all tests immediately</action>
<action> 3. If tests FAIL:</action>
<action> - Revert the change immediately</action>
<action> - Try a different refactoring approach</action>
<action> - Document what didn't work</action>
<action> 4. If tests PASS:</action>
<action> - Keep the change</action>
<action> - Continue to next refactoring</action>

<action>After all refactoring, validate final state:</action>
<action> - All tests still GREEN ✅</action>
<action> - Code quality metrics improved (less duplication, lower complexity)</action>
<action> - No new warnings or linting errors introduced</action>
<action> - Architecture patterns maintained</action>

<check>If tests fail during refactoring and revert doesn't fix → HALT: "Refactoring broke tests, cannot recover. Manual review required."</check>
<check>If code quality decreases (more duplication, higher complexity) → Revert refactoring and try different approach</check>

<action>Confirm REFACTOR complete: Display "✅ REFACTOR Phase Complete: Code quality improved, all tests still GREEN"</action>
<action>Log to Dev Agent Record: "REFACTOR: Improved code quality for task {{current_task_id}}, N refactorings applied, all tests green"</action>
</step>

<step n="5" goal="Run comprehensive validation">
<action>Determine how to run tests for this repo (infer or use {{run_tests_command}} if provided)</action>
<action>Run the complete test suite (not just new tests):</action>
<action> - Unit tests</action>
<action> - Integration tests (if applicable)</action>
<action> - End-to-end tests (if applicable)</action>

<action>Run linting and code quality checks if configured</action>
<action>Validate implementation meets ALL acceptance criteria for this task</action>
<action>If ACs include quantitative thresholds (e.g., test pass rate, coverage %), ensure they are met</action>

<action>Capture test results summary (for RVTM update):</action>
<action> - Total tests run</action>
<action> - Tests passed</action>
<action> - Tests failed (should be 0)</action>
<action> - Coverage % (if measured)</action>
<action> - Execution time</action>

<check>If regression tests fail → STOP and fix before continuing. Log failure in Dev Agent Record.</check>
<check>If new tests fail → STOP and fix before continuing. Return to GREEN phase if needed.</check>
<check>If linting fails → Fix issues before continuing</check>
<check>If acceptance criteria not met → Document gaps and ask user how to proceed</check>
</step>

<step n="6" goal="Mark task complete and update story + RVTM">
<action>ONLY mark the task (and subtasks) checkbox with [x] if ALL tests pass and validation succeeds</action>
<action>Update File List section with any new, modified, or deleted files (paths relative to repo root)</action>
<action>Add completion notes to Dev Agent Record summarizing:</action>
<action> - RED: Tests created</action>
<action> - GREEN: Implementation approach</action>
<action> - REFACTOR: Quality improvements made</action>
<action> - Any follow-ups or technical debt noted</action>

<action>Append entry to Change Log describing the change with test count and coverage</action>
<action>Save the story file</action>

<action>Update RVTM with test execution results:

<invoke-task path="/home/bj/python/BMAD-METHOD/bmad/core/tasks/rvtm/update-story-status.md">
<param name="story_file">{{story_path}}</param>
<param name="test_results">{{test_results_summary}}</param>
<param name="matrix_file">.rvtm/matrix.yaml</param>
</invoke-task>

This will:

- Update test status to "passed" for all passing tests
- Update test execution timestamps
- Recalculate coverage metrics
- Non-blocking: warns if RVTM unavailable but continues
</action>

<action>Display summary:</action>
<action> - Task completed: {{current_task_id}}</action>
<action> - Tests: N created, N passing</action>
<action> - RVTM: Updated with test results</action>
<action> - Files modified: list paths</action>

<check>Determine if more incomplete tasks remain</check>
<check>If more tasks remain → <goto step="1">Next task</goto></check>
<check>If no tasks remain → <goto step="7">Completion</goto></check>
</step>

<step n="7" goal="Story completion sequence">
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
<action>Run the full regression suite one final time (do not skip)</action>
<action>Confirm File List includes every changed file</action>
<action>Execute story definition-of-done checklist, if the story includes one</action>
<action>Update the story Status to: Ready for Review</action>

<action>Update RVTM story status to completed:

<invoke-task path="/home/bj/python/BMAD-METHOD/bmad/core/tasks/rvtm/update-story-status.md">
<param name="story_file">{{story_path}}</param>
<param name="status">completed</param>
<param name="matrix_file">.rvtm/matrix.yaml</param>
</invoke-task>

This will:

- Mark story as completed with timestamp
- Update linked requirements to "implemented" status
- Recalculate all coverage metrics
- Generate final traceability report
</action>

<action>Generate final TDD summary report:</action>
<action> - Story: {{story_title}}</action>
<action> - Tasks completed: N</action>
<action> - Total tests created: N</action>
<action> - All tests passing: ✅</action>
<action> - RED-GREEN-REFACTOR cycles: N</action>
<action> - RVTM Traceability:</action>
<action>   * Requirements linked: N</action>
<action>   * Tests registered: N</action>
<action>   * Coverage: X%</action>
<action>   * All requirements verified: ✅</action>
<action> - Status: Ready for Review</action>

<check>If any task is incomplete → Return to step 1 to complete remaining work (Do NOT finish with partial progress)</check>
<check>If regression failures exist → STOP and resolve before completing</check>
<check>If File List is incomplete → Update it before completing</check>
<check>If RVTM shows coverage gaps → Warn user but proceed (traceability is best-effort)</check>
</step>

<step n="8" goal="Validation and handoff" optional="true">
<action>Optionally run the workflow validation task against the story using /home/bj/python/BMAD-METHOD/bmad/core/tasks/validate-workflow.md</action>
<action>Run validation against TDD checklist: {installed_path}/checklist.md</action>
<action>Prepare a concise summary in Dev Agent Record → Completion Notes highlighting TDD discipline maintained</action>
<action>Communicate that the story is Ready for Review with full test coverage and RVTM traceability</action>
</step>

</workflow>
```
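Step 1's auto-discovery can be sketched as follows: recursively match "story-*.md" under the configured story directory, newest-modified first, capped at the selection limit. This is a sketch only; the real executor uses its own file tools (list_files, file_info) rather than direct filesystem access.

```python
from pathlib import Path

def discover_stories(story_dir, limit=10):
    """Return up to `limit` story-*.md paths under story_dir, newest first."""
    candidates = sorted(
        Path(story_dir).rglob("story-*.md"),   # recursive glob
        key=lambda p: p.stat().st_mtime,       # sort by last modified time
        reverse=True,                          # newest first
    )
    return candidates[:limit]
```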
@@ -0,0 +1,63 @@
name: dev-story-tdd
description: "Execute a story using Test-Driven Development with RED-GREEN-REFACTOR methodology, automatically maintaining full RVTM traceability between requirements, tests, and implementation"
author: "BMad"

# Critical variables from config
config_source: "/home/bj/python/BMAD-METHOD/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

# Workflow components
installed_path: "/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/4-implementation/dev-story-tdd"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"

# This is an action workflow (no output template document)
template: false

# Variables (can be provided by caller)
variables:
  story_path: ""
  run_tests_command: "auto" # 'auto' = infer from repo, or override with explicit command
  strict: true # if true, halt on validation failures
  story_dir: "{config_source}:dev_story_location" # Directory containing story markdown files
  story_selection_limit: 10
  run_until_complete: true # Continue through all tasks without pausing except on HALT conditions
  force_yolo: true # Hint executor to activate #yolo: skip optional prompts and elicitation
  verify_red_state: true # Verify tests fail before implementation
  refactor_required: true # Require explicit refactor step

# Recommended inputs
recommended_inputs:
  - story_markdown: "Path to the story markdown file (Tasks/Subtasks, Acceptance Criteria present)"
  - story_context_json: "Story Context JSON with acceptance criteria and test strategy"

# Required tools (conceptual; executor should provide equivalents)
required_tools:
  - read_file
  - write_file
  - search_repo
  - run_tests
  - list_files
  - file_info

tags:
  - development
  - tdd
  - test-driven-development
  - red-green-refactor
  - atdd
  - story-execution
  - tests
  - validation
  - rvtm
  - traceability
  - bmad-v6

execution_hints:
  interactive: false # Minimize prompts; intended to run to completion
  autonomous: true # Proceed without user input unless blocked
  iterative: true
  test_first: true # CRITICAL: Tests before implementation