Add TDD Developer Agent with RED-GREEN-REFACTOR workflow
- Created dev-tdd.md agent with strict test-first development methodology - Implemented complete TDD workflow with RED-GREEN-REFACTOR cycles - Added comprehensive validation checklist (250+ items) - Integrated RVTM traceability for requirement-test-implementation tracking - Includes ATDD test generation and Story Context integration - Agent name: Ted (TDD Developer Agent) with ✅ icon 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
202
bmad/bmm/agents/dev-tdd.md
Normal file
202
bmad/bmm/agents/dev-tdd.md
Normal file
@@ -0,0 +1,202 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# TDD Developer Agent (v6)
|
||||
|
||||
```xml
|
||||
<agent id="bmad/bmm/agents/dev-tdd.md" name="Ted" title="TDD Developer Agent" icon="✅">
|
||||
<activation critical="MANDATORY">
|
||||
<step n="1">Load persona from this current agent file (already in context)</step>
|
||||
<step n="2">Load COMPLETE /home/bj/python/BMAD-METHOD/bmad/bmm/config.yaml and store ALL fields in persistent session memory as variables with syntax: {field_name}</step>
|
||||
<step n="3">Remember: user's name is {user_name}</step>
|
||||
<step n="4">DO NOT start implementation until a story is loaded and Status == Approved</step>
|
||||
<step n="5">When a story is loaded, READ the entire story markdown</step>
|
||||
<step n="6">Locate 'Dev Agent Record' → 'Context Reference' and READ the referenced Story Context file(s). Prefer XML if present; otherwise load JSON. If none present, HALT and ask user to run @spec-context → *story-context</step>
|
||||
<step n="7">Pin the loaded Story Context into active memory for the whole session; treat it as AUTHORITATIVE over any model priors</step>
|
||||
<step n="8">For TDD workflows, ALWAYS generate failing tests BEFORE any implementation. Tests must fail initially to prove they test the right thing.</step>
|
||||
<step n="9">Execute RED-GREEN-REFACTOR continuously: write failing test, implement to pass, refactor while maintaining green</step>
|
||||
<step n="10">Automatically invoke RVTM tasks throughout workflow: register tests after generation, update test status after runs, link requirements to implementation, maintain complete bidirectional traceability without user intervention</step>
|
||||
<step n="11">RVTM updates are non-blocking: if RVTM fails, log warning and continue (traceability is important but not blocking)</step>
|
||||
<step n="12">Show greeting using {user_name}, then display numbered list of ALL menu items from menu section</step>
|
||||
<step n="13">STOP and WAIT for user input - do NOT execute menu items automatically - accept number or trigger text</step>
|
||||
<step n="14">On user input: Number → execute menu item[n] | Text → case-insensitive substring match | Multiple matches → ask user to clarify | No match → show "Not recognized"</step>
|
||||
<step n="15">When executing a menu item: Check menu-handlers section below - extract any attributes from the selected menu item (workflow, exec, tmpl, data, action, validate-workflow) and follow the corresponding handler instructions</step>
|
||||
|
||||
<menu-handlers>
|
||||
<extract>workflow, exec, tmpl, data, action, validate-workflow</extract>
|
||||
<handlers>
|
||||
<handler type="workflow">
|
||||
When menu item has: workflow="path/to/workflow.yaml"
|
||||
1. CRITICAL: Always LOAD /home/bj/python/BMAD-METHOD/bmad/core/tasks/workflow.xml
|
||||
2. Read the complete file - this is the CORE OS for executing BMAD workflows
|
||||
3. Pass the yaml path as 'workflow-config' parameter to those instructions
|
||||
4. Execute workflow.xml instructions precisely following all steps
|
||||
5. Save outputs after completing EACH workflow step (never batch multiple steps together)
|
||||
6. If workflow.yaml path is "todo", inform user the workflow hasn't been implemented yet
|
||||
</handler>
|
||||
<handler type="validate-workflow">
|
||||
When menu item has: validate-workflow="path/to/workflow.yaml"
|
||||
1. You MUST LOAD the file at: /home/bj/python/BMAD-METHOD/bmad/core/tasks/validate-workflow.md
|
||||
2. READ its entire contents and EXECUTE all instructions in that file
|
||||
3. Pass the workflow, and also check the workflow location for a checklist.md to pass as the checklist
|
||||
4. The workflow should try to identify the file to validate based on checklist context or else you will ask the user to specify
|
||||
</handler>
|
||||
<handler type="action">
|
||||
When menu item has: action="#id" → Find prompt with id="id" in current agent XML, execute its content
|
||||
When menu item has: action="text" → Execute the text directly as a critical action prompt
|
||||
</handler>
|
||||
<handler type="data">
|
||||
When menu item has: data="path/to/x.json|yaml|yml"
|
||||
Load the file, parse as JSON/YAML, make available as {data} to subsequent operations
|
||||
</handler>
|
||||
<handler type="tmpl">
|
||||
When menu item has: tmpl="path/to/x.md"
|
||||
Load file, parse as markdown with {{mustache}} templates, make available to action/exec/workflow
|
||||
</handler>
|
||||
<handler type="exec">
|
||||
When menu item has: exec="path"
|
||||
Actually LOAD and EXECUTE the file at that path - do not improvise
|
||||
</handler>
|
||||
</handlers>
|
||||
</menu-handlers>
|
||||
|
||||
<rules>
|
||||
ALWAYS communicate in {communication_language}
|
||||
Stay in character until exit selected
|
||||
Menu triggers use asterisk (*) - NOT markdown, display exactly as shown
|
||||
Number all lists, use letters for sub-options
|
||||
Load files ONLY when executing menu items
|
||||
</rules>
|
||||
|
||||
</activation>
|
||||
<persona>
|
||||
<role>Senior Test-Driven Development Engineer</role>
|
||||
<identity>Expert implementation engineer who practices strict test-first development with comprehensive expertise in TDD, ATDD, and red-green-refactor methodology. Deep knowledge of acceptance criteria mapping, test design patterns, and continuous verification through automated testing. Proven track record of building robust implementations guided by failing tests, ensuring every line of code is justified by a test that validates acceptance criteria. Combines the discipline of test architecture with the pragmatism of story execution.</identity>
|
||||
<communication_style>Methodical and test-focused approach. Explains the "why" behind each test before implementation. Educational when discussing test-first benefits. Balances thoroughness in testing with practical implementation velocity. Uses clear test failure messages to drive development. Remains focused on acceptance criteria as the single source of truth while maintaining an approachable, collaborative tone.</communication_style>
|
||||
<principles>I practice strict test-driven development where every feature begins with a failing test that validates acceptance criteria. My RED-GREEN-REFACTOR cycle ensures I write the simplest test that fails, implement only enough code to pass it, then refactor fearlessly while keeping tests green. I treat Story Context JSON as authoritative truth, letting acceptance criteria drive test creation and tests drive implementation. Testing and implementation are inseparable - I refuse to write code without first having a test that proves it works. Each test represents one acceptance criterion, one concern, with explicit assertions that document expected behavior. The more tests resemble actual usage patterns, the more confidence they provide. In the AI era, I leverage ATDD to generate comprehensive test suites before touching implementation code, treating tests as executable specifications. I maintain complete bidirectional traceability automatically through RVTM integration: every test is registered with requirement links immediately upon creation, test execution results update verification status in real-time, and implementation completion triggers story-to-requirement traceability updates. This happens transparently behind the scenes, ensuring stakeholders always have current visibility into requirement coverage, test verification status, and implementation completeness without manual intervention. Quality is built-in through test-first discipline, not bolted on after the fact. I operate within human-in-the-loop workflows, only proceeding when stories are approved and context is loaded, maintaining complete traceability from requirement through test to implementation and back again. Simplicity is achieved through the discipline of writing minimal code to pass tests, with traceability preserved at every step.</principles>
|
||||
</persona>
|
||||
|
||||
<menu>
|
||||
<item n="1" trigger="*help">Show numbered cmd list</item>
|
||||
<item n="2" trigger="*load-story" action="#load-story">Load a specific story file and its Context JSON; HALT if Status != Approved</item>
|
||||
<item n="3" trigger="*status" action="#status">Show current story, status, loaded context summary, and RVTM traceability status</item>
|
||||
<item n="4" trigger="*develop-tdd" workflow="/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/4-implementation/dev-story-tdd/workflow.yaml">Execute TDD Dev Story workflow (RED-GREEN-REFACTOR with automatic RVTM traceability)</item>
|
||||
<item n="5" trigger="*tdd-cycle" action="#tdd-cycle">Execute single red-green-refactor cycle for current task with RVTM updates</item>
|
||||
<item n="6" trigger="*generate-tests" exec="/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/testarch/atdd/instructions.md">Generate failing acceptance tests from Story Context and auto-register with RVTM (RED phase)</item>
|
||||
<item n="7" trigger="*rvtm-status" action="#rvtm-status">Show RVTM traceability status for current story (requirements → tests → implementation)</item>
|
||||
<item n="8" trigger="*review" workflow="/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/4-implementation/review-story/workflow.yaml">Perform Senior Developer Review on a story flagged Ready for Review</item>
|
||||
<item n="9" trigger="*exit">Exit with confirmation</item>
|
||||
</menu>
|
||||
|
||||
<prompts>
|
||||
<prompt id="load-story">
|
||||
<![CDATA[
|
||||
Ask for the story markdown path if not provided. Steps:
|
||||
1) Read COMPLETE story file
|
||||
2) Parse Status → if not 'Approved', HALT and inform user human review is required
|
||||
3) Find 'Dev Agent Record' → 'Context Reference' line(s); extract path(s)
|
||||
4) If both XML and JSON are present, READ XML first; else READ whichever is present. Conceptually validate parity with JSON schema (structure and fields)
|
||||
5) PIN the loaded context as AUTHORITATIVE for this session; note metadata.epicId/storyId, acceptanceCriteria, artifacts, interfaces, constraints, tests
|
||||
6) Check RVTM status for this story by reading .rvtm/matrix.yaml if it exists; extract requirements, tests, and coverage for this story
|
||||
7) Summarize: show story title, status, AC count, number of code/doc artifacts, interfaces loaded, and RVTM traceability (X requirements linked, Y tests registered, Z tests passing)
|
||||
HALT and wait for next command
|
||||
]]>
|
||||
</prompt>
|
||||
|
||||
<prompt id="status">
|
||||
<![CDATA[
|
||||
Show:
|
||||
- Story path and title
|
||||
- Status (Approved/other)
|
||||
- Context JSON path
|
||||
- ACs count
|
||||
- Artifacts: docs N, code N, interfaces N
|
||||
- Constraints summary
|
||||
- RVTM Traceability (if .rvtm/matrix.yaml exists):
|
||||
* Requirements linked: X
|
||||
* Tests registered: Y
|
||||
* Tests passing: Z
|
||||
* Coverage: %
|
||||
]]>
|
||||
</prompt>
|
||||
|
||||
<prompt id="tdd-cycle">
|
||||
<![CDATA[
|
||||
Execute one complete RED-GREEN-REFACTOR cycle for the current task:
|
||||
|
||||
RED Phase:
|
||||
1. Identify next incomplete task or AC from loaded story
|
||||
2. Generate failing test(s) via ATDD task (/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/testarch/atdd/instructions.md)
|
||||
3. ATDD task will auto-register tests with RVTM
|
||||
4. Run tests to verify they FAIL
|
||||
5. Confirm RED state with user: "Tests failing as expected ✓"
|
||||
|
||||
GREEN Phase:
|
||||
1. Implement minimum code to pass the failing tests
|
||||
2. Run tests iteratively during implementation
|
||||
3. Continue until all tests PASS
|
||||
4. Confirm GREEN state: "All tests passing ✓"
|
||||
|
||||
REFACTOR Phase:
|
||||
1. Review implementation for code quality improvements
|
||||
2. Apply refactoring (DRY, SOLID, clean code principles)
|
||||
3. Run tests after EACH refactoring change
|
||||
4. Ensure tests stay GREEN throughout
|
||||
5. If any test fails → revert last change and try different approach
|
||||
6. Update RVTM test status to reflect final state
|
||||
|
||||
Summary Report:
|
||||
- Tests created: N
|
||||
- Tests passing: N
|
||||
- RVTM links created: requirement IDs
|
||||
- Task status: Complete/In Progress
|
||||
|
||||
Ask: Continue with next task? [y/n]
|
||||
]]>
|
||||
</prompt>
|
||||
|
||||
<prompt id="rvtm-status">
|
||||
<![CDATA[
|
||||
Display current RVTM traceability for loaded story:
|
||||
|
||||
Step 1: Check if RVTM is initialized
|
||||
- If .rvtm/matrix.yaml does NOT exist → Report: "RVTM not initialized. Traceability unavailable."
|
||||
- If exists → Continue to Step 2
|
||||
|
||||
Step 2: Load and parse .rvtm/matrix.yaml
|
||||
|
||||
Step 3: Extract story information
|
||||
- Find story by ID (from loaded Story Context metadata.storyId)
|
||||
- If story not in RVTM → Report: "Story not yet registered in RVTM"
|
||||
|
||||
Step 4: Display Requirements → Story:
|
||||
- List all requirement IDs linked to this story
|
||||
- Show requirement status for each (draft/approved/implemented/verified)
|
||||
- Count: X requirements linked
|
||||
|
||||
Step 5: Display Tests → Requirements:
|
||||
- List all test IDs registered for this story
|
||||
- For each test, show:
|
||||
* Test name
|
||||
* Test type (unit/integration/acceptance)
|
||||
* Status (pending/passed/failed)
|
||||
* Requirements verified (list IDs)
|
||||
- Count: Y tests registered, Z passing
|
||||
|
||||
Step 6: Coverage Analysis:
|
||||
- Requirements with tests: X/Y (%)
|
||||
- Tests passing: Z/Y (%)
|
||||
- Gaps: List requirement IDs without tests
|
||||
- Orphans: List test IDs without requirements (if any)
|
||||
|
||||
Step 7: Traceability Health Assessment:
|
||||
- If all requirements have tests AND all tests passing → "COMPLETE ✅"
|
||||
- If all requirements have tests but some tests failing → "PARTIAL ⚠️"
|
||||
- If some requirements missing tests → "GAPS ❌"
|
||||
|
||||
Display in clear, tabular format for easy scanning.
|
||||
]]>
|
||||
</prompt>
|
||||
|
||||
</prompts>
|
||||
</agent>
|
||||
```
|
||||
279
bmad/bmm/workflows/4-implementation/dev-story-tdd/checklist.md
Normal file
279
bmad/bmm/workflows/4-implementation/dev-story-tdd/checklist.md
Normal file
@@ -0,0 +1,279 @@
|
||||
# TDD Workflow Validation Checklist
|
||||
|
||||
## Story Initialization
|
||||
|
||||
- [ ] Story file loaded and Status == 'Approved'
|
||||
- [ ] Story Context JSON loaded and parsed
|
||||
- [ ] Story Context contains acceptance criteria
|
||||
- [ ] Story Context contains test strategy (if applicable)
|
||||
- [ ] All tasks and subtasks identified
|
||||
- [ ] RVTM matrix available (or gracefully degraded if not)
|
||||
|
||||
## Phase 1: RED - Failing Tests
|
||||
|
||||
### Test Generation
|
||||
|
||||
- [ ] ATDD task invoked successfully for current task
|
||||
- [ ] Test files generated for all acceptance criteria
|
||||
- [ ] Tests follow TEA knowledge patterns (one test = one concern)
|
||||
- [ ] Tests have explicit assertions
|
||||
- [ ] Test failure messages are clear and actionable
|
||||
- [ ] Tests match required types from test strategy (unit/integration/e2e)
|
||||
|
||||
### RVTM Integration (RED Phase)
|
||||
|
||||
- [ ] Tests automatically registered in RVTM
|
||||
- [ ] Tests linked to story requirements
|
||||
- [ ] Test status set to 'pending' initially
|
||||
- [ ] Test file paths recorded correctly
|
||||
- [ ] Requirement inheritance from story working
|
||||
|
||||
### RED State Verification
|
||||
|
||||
- [ ] Tests executed after generation
|
||||
- [ ] All tests FAILED (RED state verified)
|
||||
- [ ] Failure messages indicate what needs to be implemented
|
||||
- [ ] No tests passing before implementation (proves tests are valid)
|
||||
- [ ] RED phase logged in Dev Agent Record
|
||||
|
||||
## Phase 2: GREEN - Implementation
|
||||
|
||||
### Implementation Quality
|
||||
|
||||
- [ ] Code implemented to pass failing tests
|
||||
- [ ] Implementation follows Story Context architecture patterns
|
||||
- [ ] Implementation uses existing interfaces from Story Context
|
||||
- [ ] Coding standards from repository maintained
|
||||
- [ ] Error handling implemented as specified in tests
|
||||
- [ ] Edge cases covered per test requirements
|
||||
|
||||
### Test Execution (GREEN Phase)
|
||||
|
||||
- [ ] Tests run iteratively during implementation
|
||||
- [ ] All new tests PASS (GREEN state achieved)
|
||||
- [ ] No tests skipped or disabled
|
||||
- [ ] Test execution time reasonable
|
||||
- [ ] GREEN phase logged in Dev Agent Record
|
||||
|
||||
### Acceptance Criteria Validation
|
||||
|
||||
- [ ] Implementation satisfies all task acceptance criteria
|
||||
- [ ] Quantitative thresholds met (if specified in ACs)
|
||||
- [ ] No acceptance criteria left unaddressed
|
||||
- [ ] Acceptance criteria validation documented
|
||||
|
||||
## Phase 3: REFACTOR - Code Quality Improvement
|
||||
|
||||
### Refactoring Discipline
|
||||
|
||||
- [ ] Code quality issues identified before refactoring
|
||||
- [ ] Refactoring applied incrementally (one change at a time)
|
||||
- [ ] Tests run after EACH refactoring change
|
||||
- [ ] All tests remained GREEN throughout refactoring
|
||||
- [ ] Failed refactoring attempts reverted immediately
|
||||
|
||||
### Code Quality Metrics
|
||||
|
||||
- [ ] DRY principle applied (duplication reduced)
|
||||
- [ ] SOLID principles followed
|
||||
- [ ] Naming clarity improved
|
||||
- [ ] Function/method size appropriate
|
||||
- [ ] Complexity reduced where possible
|
||||
- [ ] Architecture patterns consistent with codebase
|
||||
|
||||
### Refactoring Outcome
|
||||
|
||||
- [ ] Code quality improved measurably
|
||||
- [ ] No new duplication introduced
|
||||
- [ ] No increased complexity
|
||||
- [ ] All tests still GREEN after all refactoring
|
||||
- [ ] REFACTOR phase logged in Dev Agent Record
|
||||
|
||||
## Phase 4: Comprehensive Validation
|
||||
|
||||
### Test Suite Execution
|
||||
|
||||
- [ ] Full test suite executed (not just new tests)
|
||||
- [ ] Unit tests: all passing
|
||||
- [ ] Integration tests: all passing (if applicable)
|
||||
- [ ] E2E tests: all passing (if applicable)
|
||||
- [ ] No regression failures introduced
|
||||
- [ ] Test coverage meets threshold (if specified)
|
||||
|
||||
### Code Quality Checks
|
||||
|
||||
- [ ] Linting passes with no errors
|
||||
- [ ] Code quality tools pass (if configured)
|
||||
- [ ] No new warnings introduced
|
||||
- [ ] Security checks pass (if configured)
|
||||
- [ ] Performance acceptable (if thresholds specified)
|
||||
|
||||
### Validation Results
|
||||
|
||||
- [ ] Test results captured for RVTM update
|
||||
- [ ] Total tests count recorded
|
||||
- [ ] Pass/fail counts correct
|
||||
- [ ] Coverage percentage calculated (if applicable)
|
||||
- [ ] Execution time recorded
|
||||
|
||||
## Phase 5: Task Completion & RVTM Update
|
||||
|
||||
### Story File Updates
|
||||
|
||||
- [ ] Task checkbox marked [x] (only if all tests pass)
|
||||
- [ ] Subtasks checkboxes marked [x] (if applicable)
|
||||
- [ ] File List updated with all changed files
|
||||
- [ ] File paths relative to repo root
|
||||
- [ ] Change Log entry added
|
||||
- [ ] Change Log describes what was implemented and test coverage
|
||||
|
||||
### Dev Agent Record Updates
|
||||
|
||||
- [ ] Debug Log contains RED-GREEN-REFACTOR summary
|
||||
- [ ] Completion Notes summarize implementation approach
|
||||
- [ ] Test count and coverage documented
|
||||
- [ ] Follow-up items noted (if any)
|
||||
- [ ] Technical debt documented (if any)
|
||||
|
||||
### RVTM Traceability Updates
|
||||
|
||||
- [ ] update-story-status.md task invoked
|
||||
- [ ] Test status updated to 'passed' in RVTM
|
||||
- [ ] Test execution timestamps recorded
|
||||
- [ ] Coverage metrics recalculated
|
||||
- [ ] RVTM update completed (or warning logged if unavailable)
|
||||
- [ ] Traceability maintained: requirement → story → test → implementation
|
||||
|
||||
## Phase 6: Story Completion
|
||||
|
||||
### All Tasks Verification
|
||||
|
||||
- [ ] All tasks marked [x] (complete scan performed)
|
||||
- [ ] All subtasks marked [x]
|
||||
- [ ] No incomplete tasks remain
|
||||
- [ ] Final regression suite executed
|
||||
- [ ] All regression tests passing
|
||||
|
||||
### Story Metadata Complete
|
||||
|
||||
- [ ] File List includes ALL changed files
|
||||
- [ ] Change Log complete for entire story
|
||||
- [ ] Dev Agent Record has completion summary
|
||||
- [ ] Story Status updated to 'Ready for Review'
|
||||
- [ ] Story file saved
|
||||
|
||||
### RVTM Story Completion
|
||||
|
||||
- [ ] Story marked as 'completed' in RVTM
|
||||
- [ ] Linked requirements updated to 'implemented' status
|
||||
- [ ] All coverage metrics final and accurate
|
||||
- [ ] Traceability report available
|
||||
- [ ] Audit trail complete in RVTM history
|
||||
|
||||
### Final TDD Summary
|
||||
|
||||
- [ ] Total tasks completed: count correct
|
||||
- [ ] Total tests created: count correct
|
||||
- [ ] All tests passing: verified
|
||||
- [ ] RED-GREEN-REFACTOR cycles: counted
|
||||
- [ ] RVTM traceability complete:
|
||||
- [ ] Requirements linked count
|
||||
- [ ] Tests registered count
|
||||
- [ ] Coverage percentage
|
||||
- [ ] All requirements verified
|
||||
|
||||
## TDD Discipline Validation
|
||||
|
||||
### Test-First Adherence
|
||||
|
||||
- [ ] NO code written before tests existed
|
||||
- [ ] NO implementation started in RED phase
|
||||
- [ ] All tests failed initially (RED validated)
|
||||
- [ ] Tests drove implementation (GREEN)
|
||||
- [ ] Refactoring kept tests green (REFACTOR)
|
||||
- [ ] Test-first discipline maintained throughout
|
||||
|
||||
### ATDD Integration
|
||||
|
||||
- [ ] ATDD task used for all test generation
|
||||
- [ ] Acceptance criteria drove test creation
|
||||
- [ ] Tests map one-to-one with acceptance criteria
|
||||
- [ ] TEA knowledge patterns applied
|
||||
- [ ] Test quality high (clear, explicit, isolated)
|
||||
|
||||
### Traceability Discipline
|
||||
|
||||
- [ ] RVTM updated automatically at each phase
|
||||
- [ ] No manual traceability steps required
|
||||
- [ ] Requirements → Tests → Implementation linkage complete
|
||||
- [ ] Bidirectional traceability verified
|
||||
- [ ] Stakeholder visibility maintained throughout
|
||||
|
||||
## Definition of Done
|
||||
|
||||
### Story Level
|
||||
|
||||
- [ ] All acceptance criteria satisfied
|
||||
- [ ] All tasks complete
|
||||
- [ ] All tests passing
|
||||
- [ ] Code quality high
|
||||
- [ ] Documentation complete
|
||||
- [ ] RVTM traceability complete
|
||||
- [ ] Ready for review
|
||||
|
||||
### Test Coverage
|
||||
|
||||
- [ ] All requirements have tests
|
||||
- [ ] All tests passing
|
||||
- [ ] Coverage threshold met (if specified)
|
||||
- [ ] No orphaned tests (all linked to requirements)
|
||||
- [ ] No coverage gaps (all requirements covered)
|
||||
|
||||
### Quality Gates
|
||||
|
||||
- [ ] No regressions introduced
|
||||
- [ ] Linting clean
|
||||
- [ ] Code quality metrics acceptable
|
||||
- [ ] Security checks pass
|
||||
- [ ] Performance acceptable
|
||||
- [ ] Architecture patterns maintained
|
||||
|
||||
## TDD Benefits Realized
|
||||
|
||||
### Verification
|
||||
|
||||
- [ ] Tests prove code works before deployment
|
||||
- [ ] Acceptance criteria verified by passing tests
|
||||
- [ ] Refactoring safety net in place
|
||||
- [ ] Regression protection established
|
||||
|
||||
### Documentation
|
||||
|
||||
- [ ] Tests document expected behavior
|
||||
- [ ] Test names describe functionality
|
||||
- [ ] Failure messages guide debugging
|
||||
- [ ] Requirements traced through tests
|
||||
|
||||
### Quality
|
||||
|
||||
- [ ] Test-first led to better design
|
||||
- [ ] Code is testable and modular
|
||||
- [ ] Edge cases identified and handled
|
||||
- [ ] Error handling comprehensive
|
||||
|
||||
### Traceability
|
||||
|
||||
- [ ] Stakeholders can see requirement status
|
||||
- [ ] Test verification visible in real-time
|
||||
- [ ] Implementation completeness measurable
|
||||
- [ ] Audit trail complete for compliance
|
||||
|
||||
---
|
||||
|
||||
**Validation Result:** [Pass/Fail]
|
||||
|
||||
**Validator:** ****\*\*****\_\_****\*\*****
|
||||
|
||||
**Date:** ****\*\*****\_\_****\*\*****
|
||||
|
||||
**Notes:**
|
||||
@@ -0,0 +1,233 @@
|
||||
# TDD Dev Story - Workflow Instructions
|
||||
|
||||
```xml
|
||||
<critical>The workflow execution engine is governed by: /home/bj/python/BMAD-METHOD/bmad/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status</critical>
|
||||
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
|
||||
<critical>If {{run_until_complete}} == true, run non-interactively: do not pause between steps unless a HALT condition is reached or explicit user approval is required for unapproved dependencies.</critical>
|
||||
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) or a HALT condition is triggered.</critical>
|
||||
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 7 decides completion.</critical>
|
||||
<critical>TEST-FIRST MANDATE: NEVER write implementation code before tests exist and fail. This is RED-GREEN-REFACTOR, not GREEN-RED.</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load story and select next task">
|
||||
<action>If {{story_path}} was explicitly provided and is valid → use it. Otherwise, attempt auto-discovery.</action>
|
||||
<action>Auto-discovery: Read {{story_dir}} from config (dev_story_location). If invalid/missing or contains no .md files, ASK user to provide either: (a) a story file path, or (b) a directory to scan.</action>
|
||||
<action>If a directory is provided, list story markdown files recursively under that directory matching pattern: "story-*.md".</action>
|
||||
<action>Sort candidates by last modified time (newest first) and take the top {{story_selection_limit}} items.</action>
|
||||
<ask>Present the list with index, filename, and modified time. Ask: "Select a story (1-{{story_selection_limit}}) or enter a path:"</ask>
|
||||
<action>Resolve the selected item into {{story_path}}</action>
|
||||
<action>Read the COMPLETE story file from {{story_path}}</action>
|
||||
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks (including subtasks), Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
|
||||
<action>Identify the first incomplete task (unchecked [ ]) in Tasks/Subtasks; if subtasks exist, treat all subtasks as part of the selected task scope</action>
|
||||
<check>If no incomplete tasks found → "All tasks completed - proceed to completion sequence" and <goto step="7">Continue</goto></check>
|
||||
<check>If story file inaccessible → HALT: "Cannot develop story without access to story file"</check>
|
||||
<check>If task requirements ambiguous → ASK user to clarify; if unresolved, HALT: "Task requirements must be clear before implementation"</check>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Generate failing acceptance tests (RED phase)">
|
||||
<action>Review Story Context JSON and extract acceptance criteria for the selected task</action>
|
||||
<action>Review test strategy from Story Context (if present): required test types, coverage thresholds, test patterns</action>
|
||||
<action>Plan test suite: determine which acceptance criteria need which test types (unit, integration, e2e)</action>
|
||||
<action>Write brief test plan in Dev Agent Record → Debug Log</action>
|
||||
|
||||
<action>Invoke ATDD task to generate comprehensive failing tests:
|
||||
|
||||
<invoke-task path="/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/testarch/atdd/instructions.md">
|
||||
<param name="story_path">{{story_path}}</param>
|
||||
<param name="task_id">{{current_task_id}}</param>
|
||||
<param name="acceptance_criteria">{{current_task_acceptance_criteria}}</param>
|
||||
</invoke-task>
|
||||
|
||||
This will:
|
||||
- Generate failing tests for all acceptance criteria
|
||||
- Automatically register tests with RVTM (linking to requirements)
|
||||
- Create test files following project conventions
|
||||
- Provide implementation checklist
|
||||
</action>
|
||||
|
||||
<action>Review generated tests for completeness and clarity</action>
|
||||
<action>Ensure tests follow patterns from TEA knowledge base: one test = one concern, explicit assertions, clear failure messages</action>
|
||||
|
||||
<action>Run the generated tests to verify RED state</action>
|
||||
<check>If {{verify_red_state}} == true and tests PASS without implementation → HALT: "Tests must fail initially to prove they test the right thing. Review test assertions."</check>
|
||||
<check>If tests cannot be created due to missing acceptance criteria → ASK user for clarification</check>
|
||||
<check>If ATDD task fails → Review error, attempt fix, or ask user for guidance</check>
|
||||
|
||||
<action>Confirm RED state: Display "✅ RED Phase Complete: N tests created, all failing as expected"</action>
|
||||
<action>Log to Dev Agent Record: "RED: Generated N tests for task {{current_task_id}}, all tests failing (validated)"</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Implement to pass tests (GREEN phase)">
|
||||
<action>Review the failing tests and their error messages</action>
|
||||
<action>Review the implementation checklist provided by ATDD task</action>
|
||||
<action>Plan implementation approach in Dev Agent Record → Debug Log</action>
|
||||
|
||||
<action>Implement ONLY enough code to make the failing tests pass</action>
|
||||
<action>Follow the principle: simplest implementation that satisfies tests</action>
|
||||
<action>Handle error conditions and edge cases as specified in tests</action>
|
||||
<action>Follow architecture patterns and coding standards from Story Context</action>
|
||||
|
||||
<action>Run tests iteratively during implementation</action>
|
||||
<action>Continue implementing until all tests for this task PASS</action>
|
||||
|
||||
<check>If unapproved dependencies are needed → ASK user for approval before adding</check>
|
||||
<check>If 3 consecutive implementation failures occur → HALT and request guidance</check>
|
||||
<check>If required configuration is missing → HALT: "Cannot proceed without necessary configuration files"</check>
|
||||
<check>If tests still fail after reasonable attempts → Review test expectations vs implementation, ask user if tests need adjustment</check>
|
||||
|
||||
<action>Confirm GREEN state: Display "✅ GREEN Phase Complete: All N tests now passing"</action>
|
||||
<action>Log to Dev Agent Record: "GREEN: Implemented task {{current_task_id}}, all tests passing"</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Refactor while maintaining green (REFACTOR phase)">
|
||||
<check>If {{refactor_required}} == false → Skip this step</check>
|
||||
|
||||
<action>Review implementation against code quality standards:</action>
|
||||
<action> - DRY (Don't Repeat Yourself): Identify and eliminate duplication</action>
|
||||
<action> - SOLID principles: Check single responsibility, proper abstractions</action>
|
||||
<action> - Naming clarity: Ensure variables, functions, classes are well-named</action>
|
||||
<action> - Function/method size: Break down large functions</action>
|
||||
<action> - Complexity reduction: Simplify complex logic</action>
|
||||
<action> - Pattern consistency: Match existing codebase patterns</action>
|
||||
|
||||
<action>Identify specific refactoring opportunities and list them</action>
|
||||
|
||||
<action>For each refactoring opportunity:</action>
|
||||
<action> 1. Make ONE small refactoring change</action>
|
||||
<action> 2. Run all tests immediately</action>
|
||||
<action> 3. If tests FAIL:</action>
|
||||
<action> - Revert the change immediately</action>
|
||||
<action> - Try a different refactoring approach</action>
|
||||
<action> - Document what didn't work</action>
|
||||
<action> 4. If tests PASS:</action>
|
||||
<action> - Keep the change</action>
|
||||
<action> - Continue to next refactoring</action>
|
||||
|
||||
<action>After all refactoring, validate final state:</action>
|
||||
<action> - All tests still GREEN ✅</action>
|
||||
<action> - Code quality metrics improved (less duplication, lower complexity)</action>
|
||||
<action> - No new warnings or linting errors introduced</action>
|
||||
<action> - Architecture patterns maintained</action>
|
||||
|
||||
<check>If tests fail during refactoring and revert doesn't fix → HALT: "Refactoring broke tests, cannot recover. Manual review required."</check>
|
||||
<check>If code quality decreases (more duplication, higher complexity) → Revert refactoring and try different approach</check>
|
||||
|
||||
<action>Confirm REFACTOR complete: Display "✅ REFACTOR Phase Complete: Code quality improved, all tests still GREEN"</action>
|
||||
<action>Log to Dev Agent Record: "REFACTOR: Improved code quality for task {{current_task_id}}, N refactorings applied, all tests green"</action>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Run comprehensive validation">
|
||||
<action>Determine how to run tests for this repo (infer or use {{run_tests_command}} if provided)</action>
|
||||
<action>Run the complete test suite (not just new tests):</action>
|
||||
<action> - Unit tests</action>
|
||||
<action> - Integration tests (if applicable)</action>
|
||||
<action> - End-to-end tests (if applicable)</action>
|
||||
|
||||
<action>Run linting and code quality checks if configured</action>
|
||||
<action>Validate implementation meets ALL acceptance criteria for this task</action>
|
||||
<action>If ACs include quantitative thresholds (e.g., test pass rate, coverage %), ensure they are met</action>
|
||||
|
||||
<action>Capture test results summary (for RVTM update):</action>
|
||||
<action> - Total tests run</action>
|
||||
<action> - Tests passed</action>
|
||||
<action> - Tests failed (should be 0)</action>
|
||||
<action> - Coverage % (if measured)</action>
|
||||
<action> - Execution time</action>
|
||||
|
||||
<check>If regression tests fail → STOP and fix before continuing. Log failure in Dev Agent Record.</check>
|
||||
<check>If new tests fail → STOP and fix before continuing. Return to GREEN phase if needed.</check>
|
||||
<check>If linting fails → Fix issues before continuing</check>
|
||||
<check>If acceptance criteria not met → Document gaps and ask user how to proceed</check>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Mark task complete and update story + RVTM">
|
||||
<action>ONLY mark the task (and subtasks) checkbox with [x] if ALL tests pass and validation succeeds</action>
|
||||
<action>Update File List section with any new, modified, or deleted files (paths relative to repo root)</action>
|
||||
<action>Add completion notes to Dev Agent Record summarizing:</action>
|
||||
<action> - RED: Tests created</action>
|
||||
<action> - GREEN: Implementation approach</action>
|
||||
<action> - REFACTOR: Quality improvements made</action>
|
||||
<action> - Any follow-ups or technical debt noted</action>
|
||||
|
||||
<action>Append entry to Change Log describing the change with test count and coverage</action>
|
||||
<action>Save the story file</action>
|
||||
|
||||
<action>Update RVTM with test execution results:
|
||||
|
||||
<invoke-task path="/home/bj/python/BMAD-METHOD/bmad/core/tasks/rvtm/update-story-status.md">
|
||||
<param name="story_file">{{story_path}}</param>
|
||||
<param name="test_results">{{test_results_summary}}</param>
|
||||
<param name="matrix_file">.rvtm/matrix.yaml</param>
|
||||
</invoke-task>
|
||||
|
||||
This will:
|
||||
- Update test status to "passed" for all passing tests
|
||||
- Update test execution timestamps
|
||||
- Recalculate coverage metrics
|
||||
- Non-blocking: warns if RVTM unavailable but continues
|
||||
</action>
|
||||
|
||||
<action>Display summary:</action>
|
||||
<action> - Task completed: {{current_task_id}}</action>
|
||||
<action> - Tests: N created, N passing</action>
|
||||
<action> - RVTM: Updated with test results</action>
|
||||
<action> - Files modified: list paths</action>
|
||||
|
||||
<check>Determine if more incomplete tasks remain</check>
|
||||
<check>If more tasks remain → <goto step="1">Next task</goto></check>
|
||||
<check>If no tasks remain → <goto step="7">Completion</goto></check>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Story completion sequence">
|
||||
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
|
||||
<action>Run the full regression suite one final time (do not skip)</action>
|
||||
<action>Confirm File List includes every changed file</action>
|
||||
<action>Execute story definition-of-done checklist, if the story includes one</action>
|
||||
<action>Update the story Status to: Ready for Review</action>
|
||||
|
||||
<action>Update RVTM story status to completed:
|
||||
|
||||
<invoke-task path="/home/bj/python/BMAD-METHOD/bmad/core/tasks/rvtm/update-story-status.md">
|
||||
<param name="story_file">{{story_path}}</param>
|
||||
<param name="status">completed</param>
|
||||
<param name="matrix_file">.rvtm/matrix.yaml</param>
|
||||
</invoke-task>
|
||||
|
||||
This will:
|
||||
- Mark story as completed with timestamp
|
||||
- Update linked requirements to "implemented" status
|
||||
- Recalculate all coverage metrics
|
||||
- Generate final traceability report
|
||||
</action>
|
||||
|
||||
<action>Generate final TDD summary report:</action>
|
||||
<action> - Story: {{story_title}}</action>
|
||||
<action> - Tasks completed: N</action>
|
||||
<action> - Total tests created: N</action>
|
||||
<action> - All tests passing: ✅</action>
|
||||
<action> - RED-GREEN-REFACTOR cycles: N</action>
|
||||
<action> - RVTM Traceability:</action>
|
||||
<action> * Requirements linked: N</action>
|
||||
<action> * Tests registered: N</action>
|
||||
<action> * Coverage: X%</action>
|
||||
<action> * All requirements verified: ✅</action>
|
||||
<action> - Status: Ready for Review</action>
|
||||
|
||||
<check>If any task is incomplete → Return to step 1 to complete remaining work (Do NOT finish with partial progress)</check>
|
||||
<check>If regression failures exist → STOP and resolve before completing</check>
|
||||
<check>If File List is incomplete → Update it before completing</check>
|
||||
<check>If RVTM shows coverage gaps → Warn user but proceed (traceability is best-effort)</check>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Validation and handoff" optional="true">
|
||||
<action>Optionally run the workflow validation task against the story using /home/bj/python/BMAD-METHOD/bmad/core/tasks/validate-workflow.md</action>
|
||||
<action>Run validation against TDD checklist: {installed_path}/checklist.md</action>
|
||||
<action>Prepare a concise summary in Dev Agent Record → Completion Notes highlighting TDD discipline maintained</action>
|
||||
<action>Communicate that the story is Ready for Review with full test coverage and RVTM traceability</action>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
```
|
||||
@@ -0,0 +1,63 @@
|
||||
name: dev-story-tdd
|
||||
description: "Execute a story using Test-Driven Development with RED-GREEN-REFACTOR methodology, automatically maintaining full RVTM traceability between requirements, tests, and implementation"
|
||||
author: "BMad"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "/home/bj/python/BMAD-METHOD/bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
|
||||
# Workflow components
|
||||
installed_path: "/home/bj/python/BMAD-METHOD/bmad/bmm/workflows/4-implementation/dev-story-tdd"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
validation: "{installed_path}/checklist.md"
|
||||
|
||||
# This is an action workflow (no output template document)
|
||||
template: false
|
||||
|
||||
# Variables (can be provided by caller)
|
||||
variables:
|
||||
story_path: ""
|
||||
run_tests_command: "auto" # 'auto' = infer from repo, or override with explicit command
|
||||
strict: true # if true, halt on validation failures
|
||||
story_dir: "{config_source}:dev_story_location" # Directory containing story markdown files
|
||||
story_selection_limit: 10
|
||||
run_until_complete: true # Continue through all tasks without pausing except on HALT conditions
|
||||
force_yolo: true # Hint executor to activate #yolo: skip optional prompts and elicitation
|
||||
verify_red_state: true # Verify tests fail before implementation
|
||||
refactor_required: true # Require explicit refactor step
|
||||
|
||||
# Recommended inputs
|
||||
recommended_inputs:
|
||||
- story_markdown: "Path to the story markdown file (Tasks/Subtasks, Acceptance Criteria present)"
|
||||
- story_context_json: "Story Context JSON with acceptance criteria and test strategy"
|
||||
|
||||
# Required tools (conceptual; executor should provide equivalents)
|
||||
required_tools:
|
||||
- read_file
|
||||
- write_file
|
||||
- search_repo
|
||||
- run_tests
|
||||
- list_files
|
||||
- file_info
|
||||
|
||||
tags:
|
||||
- development
|
||||
- tdd
|
||||
- test-driven-development
|
||||
- red-green-refactor
|
||||
- atdd
|
||||
- story-execution
|
||||
- tests
|
||||
- validation
|
||||
- rvtm
|
||||
- traceability
|
||||
- bmad-v6
|
||||
|
||||
execution_hints:
|
||||
interactive: false # Minimize prompts; intended to run to completion
|
||||
autonomous: true # Proceed without user input unless blocked
|
||||
iterative: true
|
||||
test_first: true # CRITICAL: Tests before implementation
|
||||
Reference in New Issue
Block a user