````xml
The workflow execution engine is governed by: {project_root}/bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
This workflow assembles a Story Context XML for a single user story by extracting ACs, tasks, relevant docs/code, interfaces, constraints, and testing guidance to support implementation.
Default execution mode: #yolo (non-interactive). Only ask if {{non_interactive}} == false. If auto-discovery fails, HALT and request 'story_path' or 'story_dir'.
Search {output_folder}/ for files matching pattern: project-workflow-status*.md
Find the most recent file (by date in filename: project-workflow-status-YYYY-MM-DD.md)
Load the status file
Extract key information:
- current_step: What workflow was last run
- next_step: What workflow should run next
- planned_workflow: The complete workflow journey table
- progress_percentage: Current progress
- IN PROGRESS story: The story being worked on (from Implementation Progress section)
Set status_file_found = true
Store status_file_path for later updates
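As a non-normative sketch, the discovery step above could look like this in Python (the folder layout and the project-workflow-status-YYYY-MM-DD.md filename pattern are taken from this workflow; the function name is hypothetical):

```python
import re
from pathlib import Path
from typing import Optional

def find_latest_status_file(output_folder: str) -> Optional[Path]:
    """Return the most recent status file by the date embedded in its name."""
    date_re = re.compile(r"project-workflow-status-(\d{4}-\d{2}-\d{2})\.md$")
    dated = []
    for path in Path(output_folder).glob("project-workflow-status*.md"):
        m = date_re.search(path.name)
        if m:
            dated.append((m.group(1), path))  # ISO date string as sort key
    if not dated:
        return None
    # ISO dates sort lexicographically, so max() picks the most recent file
    return max(dated)[1]
```

Returning None (rather than raising) mirrors the HALT-and-ask branch: the caller decides how to prompt the user.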
**⚠️ Workflow Sequence Note**
Status file shows:
- Current step: {{current_step}}
- Expected next: {{next_step}}
This workflow (story-context) is typically run after story-ready.
Options:
1. Continue anyway (story-context is optional)
2. Exit and run the expected workflow: {{next_step}}
3. Check status with workflow-status
What would you like to do?
If user chooses exit → HALT with message: "Run workflow-status to see current state"
**No workflow status file found.**
The status file tracks progress across all workflows and provides context about which story to work on.
Options:
1. Run workflow-status first to create the status file (recommended)
2. Continue in standalone mode (no progress tracking)
3. Exit
What would you like to do?
If user chooses option 1 → HALT with message: "Please run workflow-status first, then return to story-context"
If user chooses option 2 → Set standalone_mode = true and continue
If user chooses option 3 → HALT
If {{story_path}} provided and valid → use it; else auto-discover from {{story_dir}}.
Auto-discovery: read {{story_dir}} (dev_story_location). If invalid/missing or contains no .md files, ASK for a story file path or directory to scan.
If a directory is provided, list markdown files named "story-*.md" recursively; sort by last modified time; display top {{story_selection_limit}} with index, filename, path, modified time.
"Select a story (1-{{story_selection_limit}}) or enter a path:"
If {{non_interactive}} == true: automatically select the most recently modified story; if none is found, HALT with a clear message requesting 'story_path' or 'story_dir'. Otherwise, resolve the selection into {{story_path}} and READ the COMPLETE file.
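The auto-discovery listing described above could be sketched as follows (non-normative; the story-*.md naming and most-recent-first ordering come from this workflow, and the function name is hypothetical):

```python
from pathlib import Path

def discover_stories(story_dir: str, limit: int = 10) -> list:
    """List story-*.md files recursively, most recently modified first."""
    root = Path(story_dir)
    if not root.is_dir():
        return []  # caller falls back to asking for a path
    stories = sorted(
        root.rglob("story-*.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # most recently modified first
    )
    return stories[:limit]
```

In non-interactive mode the first element (if any) is the automatic choice; in interactive mode the slice is displayed for selection.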
Extract {{epic_id}}, {{story_id}}, {{story_title}}, {{story_status}} from filename/content; parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes.
Extract user story fields (asA, iWant, soThat).
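A minimal sketch of extracting the (asA, iWant, soThat) fields, assuming the conventional "As a ..., I want ..., so that ..." phrasing (the function name and regex details are illustrative):

```python
import re

def parse_user_story(text: str) -> dict:
    """Pull asA / iWant / soThat out of a user-story sentence, if present."""
    m = re.search(
        r"As an?\s+(?P<asA>.+?),?\s+I want\s+(?P<iWant>.+?),?\s+so that\s+(?P<soThat>.+)",
        text,
        re.IGNORECASE | re.DOTALL,
    )
    if not m:
        return {}  # story section missing or phrased differently
    return {k: v.strip().rstrip(".") for k, v in m.groupdict().items()}
```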
Initialize output by writing template to {default_output_file}.
- as_a
- i_want
- so_that
Scan docs and src module docs for items relevant to this story's domain: search for keywords from the story title, ACs, and tasks.
Prefer authoritative sources: PRD, Architecture, Front-end Spec, Testing standards, module-specific docs.
Add artifacts.docs entries with {path, title, section, snippet} (NO invention)
Search source tree for modules, files, and symbols matching story intent and AC keywords (controllers, services, components, tests).
Identify existing interfaces/APIs the story should reuse rather than recreate.
Extract development constraints from Dev Notes and architecture (patterns, layers, testing requirements).
Add artifacts.code entries with {path, kind, symbol, lines, reason}; include a brief reason explaining relevance to the story
Populate interfaces with any API/interface signatures that the developer must call (name, kind, signature, path)
Populate constraints with development rules extracted from Dev Notes and architecture (e.g., patterns, layers, testing requirements)
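The entry shapes named above could be sketched like this (the field sets come from this workflow; every path, symbol, and value is a hypothetical placeholder, not real project content):

```python
# Hypothetical artifacts.docs entry: {path, title, section, snippet}
doc_entry = {
    "path": "docs/prd.md",
    "title": "PRD",
    "section": "Epic 1",
    "snippet": "relevant excerpt copied verbatim, never invented",
}

# Hypothetical artifacts.code entry: {path, kind, symbol, lines, reason}
code_entry = {
    "path": "src/services/auth.py",
    "kind": "service",
    "symbol": "AuthService.login",
    "lines": "42-88",
    "reason": "story reuses the existing login flow instead of recreating it",
}

# Hypothetical interface entry: (name, kind, signature, path)
interface_entry = {
    "name": "AuthService.login",
    "kind": "method",
    "signature": "login(username: str, password: str) -> Session",
    "path": "src/services/auth.py",
}
```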
Detect dependency manifests and frameworks in the repo:
- Node: package.json (dependencies/devDependencies)
- Python: pyproject.toml/requirements.txt
- Go: go.mod
- Unity: Packages/manifest.json, Assets/, ProjectSettings/
- Other: list notable frameworks/configs found
Populate artifacts.dependencies with keys for detected ecosystems and their packages with version ranges where present
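A non-normative sketch of the detection step; the manifest-to-ecosystem mapping mirrors the list above (Unity is detected here via Packages/manifest.json only, and the function name is hypothetical):

```python
from pathlib import Path

# Manifest file (relative to repo root) -> ecosystem key
MANIFESTS = {
    "package.json": "node",
    "pyproject.toml": "python",
    "requirements.txt": "python",
    "go.mod": "go",
    "Packages/manifest.json": "unity",
}

def detect_ecosystems(repo_root: str) -> set:
    """Return the set of ecosystem keys whose manifest exists in the repo."""
    root = Path(repo_root)
    return {eco for rel, eco in MANIFESTS.items() if (root / rel).is_file()}
```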
From Dev Notes, architecture docs, testing docs, and existing tests, extract testing standards (frameworks, patterns, locations).
Populate tests.standards with a concise paragraph
Populate tests.locations with directories or glob patterns where tests live
Populate tests.ideas with initial test ideas mapped to acceptance criteria IDs
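As an illustration of the three tests fields above, a hypothetical payload might look like this (framework names, paths, and AC IDs are placeholders, not extracted from any real project):

```python
tests_section = {
    # tests.standards: one concise paragraph
    "standards": (
        "Unit tests use pytest with Arrange-Act-Assert; integration tests "
        "live beside the modules they cover."
    ),
    # tests.locations: directories or glob patterns where tests live
    "locations": ["tests/unit/**/*.py", "tests/integration/**/*.py"],
    # tests.ideas: initial ideas keyed to acceptance criteria IDs
    "ideas": [
        {"ac": "AC1", "idea": "verify the happy path described in AC1"},
        {"ac": "AC2", "idea": "verify validation failures are reported"},
    ],
}
```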
Validate output XML structure and content.
Validate against checklist at {installed_path}/checklist.md using bmad/core/tasks/validate-workflow.xml
Open {{story_path}}; if Status == 'Draft' then set to 'ContextReadyDraft'; otherwise leave unchanged.
Under 'Dev Agent Record' → 'Context Reference' (create if missing), add or update a list item for {default_output_file}.
Save the story file.
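The status flip and Context Reference update could be sketched as below (non-normative; this assumes a "Status: Draft" line format and Markdown headings for the sections, which is an assumption about the story file layout):

```python
import re
from pathlib import Path

def update_story_file(story_path: str, context_file: str) -> None:
    """Flip Draft -> ContextReadyDraft and record the context file reference."""
    text = Path(story_path).read_text()
    # Only a literal 'Draft' status is changed; any other status stays as-is
    text = re.sub(r"^Status:\s*Draft\s*$", "Status: ContextReadyDraft",
                  text, count=1, flags=re.MULTILINE)
    ref = f"- {context_file}"
    if ref not in text:  # add or update without duplicating the list item
        if "### Context Reference" in text:
            text = text.replace("### Context Reference",
                                f"### Context Reference\n\n{ref}", 1)
        else:  # create the section under Dev Agent Record if missing
            text += f"\n## Dev Agent Record\n\n### Context Reference\n\n{ref}\n"
    Path(story_path).write_text(text)
```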
Search {output_folder}/ for files matching pattern: project-workflow-status*.md
Find the most recent file (by date in filename)
Load the status file
- current_step: set to "story-context (Story {{story_id}})"
- current_workflow: set to "story-context (Story {{story_id}}) - Complete"
- progress_percentage: calculate per-story weight as remaining_40_percent / total_stories / 5, then increment by {{per_story_weight}} * 1 (story-context's weight is ~1% per story)
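A worked example of the formula above, assuming a hypothetical project with 10 total stories (the 40% budget and the divisor of 5 per-story workflows come from this workflow):

```python
remaining_40_percent = 40.0   # implementation phase holds the last 40%
total_stories = 10            # hypothetical project size
per_story_weight = remaining_40_percent / total_stories / 5   # = 0.8
increment = per_story_weight * 1   # story-context's share, roughly 1% per story
```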
- decisions_log: add entry:
```
- **{{date}}**: Completed story-context for Story {{story_id}} ({{story_title}}). Context file: {{default_output_file}}. Next: DEV agent should run dev-story to implement.
```
````