Template cleanup and reorganization

This commit is contained in:
den (work)
2025-10-03 17:08:14 -07:00
parent 23e0c5c83c
commit 5042c76558
7 changed files with 251 additions and 370 deletions


@@ -3,6 +3,9 @@ description: Execute the implementation planning workflow using the plan template
scripts:
  sh: scripts/bash/setup-plan.sh --json
  ps: scripts/powershell/setup-plan.ps1 -Json
agent_scripts:
  sh: scripts/bash/update-agent-context.sh __AGENT__
  ps: scripts/powershell/update-agent-context.ps1 -AgentType __AGENT__
---
The user input can be provided directly by the agent or as a command argument; you **MUST** consider it (if not empty) before proceeding with the prompt.
@@ -11,36 +14,72 @@ User input:
$ARGUMENTS
Given the implementation details provided as an argument, do this:
## Execution Steps
1. Run `{SCRIPT}` from the repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. All future file paths must be absolute.
- BEFORE proceeding, inspect FEATURE_SPEC for a `## Clarifications` section with at least one `Session` subheading. If missing or clearly ambiguous areas remain (vague adjectives, unresolved critical choices), PAUSE and instruct the user to run `/clarify` first to reduce rework. Only continue if: (a) Clarifications exist OR (b) an explicit user override is provided (e.g., "proceed without clarification"). Do not attempt to fabricate clarifications yourself.
2. Read and analyze the feature specification to understand:
- The feature requirements and user stories
- Functional and non-functional requirements
- Success criteria and acceptance criteria
- Any technical constraints or dependencies mentioned
1. **Setup**: Run `{SCRIPT}` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH.
- Before proceeding: Check that FEATURE_SPEC has a `## Clarifications` section. If it is missing, or ambiguous areas remain, instruct the user to run `/clarify` first (a sketch of this step follows the numbered list).
3. Read the constitution at `/memory/constitution.md` to understand constitutional requirements.
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
4. Execute the implementation plan template:
- Load `/templates/plan-template.md` (already copied to IMPL_PLAN path)
- Set Input path to FEATURE_SPEC
- Run the Execution Flow (main) function steps 1-9
- The template is self-contained and executable
- Follow error handling and gate checks as specified
- Let the template guide artifact generation in $SPECS_DIR:
  * Phase 0 generates research.md
  * Phase 1 generates data-model.md, contracts/, quickstart.md
  * Phase 2 generates tasks.md
- Incorporate user-provided details from arguments into Technical Context: {ARGS}
- Update Progress Tracking as you complete each phase
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
- Fill Constitution Check section from constitution
- Evaluate gates (ERROR if violations unjustified)
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
- Phase 1: Generate data-model.md, contracts/, quickstart.md
- Phase 1: Update agent context by running the agent script
- Re-evaluate Constitution Check post-design
5. Verify execution completed:
- Check Progress Tracking shows all phases complete
- Ensure all required artifacts were generated
- Confirm no ERROR states in execution
4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
6. Report results with branch name, file paths, and generated artifacts.
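
To make the setup step concrete, here is a minimal sketch of step 1, assuming `jq` is available and that the script emits the JSON keys named above:

```bash
# Run the setup script once from the repo root and capture its JSON output.
JSON=$(scripts/bash/setup-plan.sh --json)

# Extract the absolute paths the rest of the workflow relies on.
FEATURE_SPEC=$(echo "$JSON" | jq -r '.FEATURE_SPEC')
IMPL_PLAN=$(echo "$JSON" | jq -r '.IMPL_PLAN')
SPECS_DIR=$(echo "$JSON" | jq -r '.SPECS_DIR')
BRANCH=$(echo "$JSON" | jq -r '.BRANCH')

# Clarification gate: pause unless the spec has a `## Clarifications` section
# (or the user explicitly overrides), per step 1 above.
if ! grep -q '^## Clarifications' "$FEATURE_SPEC"; then
  echo "Missing ## Clarifications; run /clarify first or override explicitly." >&2
fi
```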
## Phases
Use absolute paths anchored at the repository root for all file operations to avoid path issues.
### Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
2. **Generate and dispatch research agents**:
```
For each unknown in Technical Context:
  Task: "Research {unknown} for {feature context}"
For each technology choice:
  Task: "Find best practices for {tech} in {domain}"
```
3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
**Output**: research.md with all NEEDS CLARIFICATION resolved
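
One way to enforce that exit criterion is a literal marker scan, sketched here under the assumption that resolved items no longer carry the marker text:

```bash
# Phase 0 gate: fail if any unresolved clarification markers remain.
if grep -n 'NEEDS CLARIFICATION' "$SPECS_DIR/research.md"; then
  echo "ERROR: research.md still contains unresolved NEEDS CLARIFICATION items" >&2
  exit 1
fi
```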
### Phase 1: Design & Contracts
**Prerequisites:** `research.md` complete
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
2. **Generate API contracts** from functional requirements:
- For each user action → endpoint
- Use standard REST/GraphQL patterns
- Output OpenAPI/GraphQL schema to `/contracts/`
3. **Agent context update**:
- Run `{AGENT_SCRIPT}`
- These scripts detect which AI agent is in use
- Update the appropriate agent-specific context file
- Add only new technology from current plan
- Preserve manual additions between markers (see the sketch below)
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
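
The marker-preservation behavior in step 3 might look roughly like this sketch; the marker strings and the `CLAUDE.md` filename are illustrative assumptions, not the actual script's contract:

```bash
# Sketch: keep a manually maintained block intact while regenerating the
# agent context file. Marker names and filename are assumed for illustration.
CONTEXT_FILE="CLAUDE.md"
MANUAL=$(sed -n '/<!-- MANUAL ADDITIONS START -->/,/<!-- MANUAL ADDITIONS END -->/p' "$CONTEXT_FILE")

# ...regenerate CONTEXT_FILE from the current plan's tech stack here...

# Re-append the preserved block so user edits survive the regeneration.
printf '%s\n' "$MANUAL" >> "$CONTEXT_FILE"
```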
## Key rules
- Use absolute paths
- ERROR on gate failures or unresolved clarifications
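
Expressed as a sketch (the variable names are illustrative, not from the actual scripts):

```bash
# Abort on an unjustified gate failure rather than continuing silently.
if [ "$GATE_STATUS" = "FAIL" ] && [ -z "$JUSTIFICATION" ]; then
  echo "ERROR: constitution gate failed without documented justification" >&2
  exit 1
fi
```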


@@ -18,7 +18,56 @@ Given that feature description, do this:
1. Run the script `{SCRIPT}` from repo root and parse its JSON output for BRANCH_NAME and SPEC_FILE. All file paths must be absolute.
**IMPORTANT:** Run this script only once. The JSON it prints to the terminal is the source of truth; always refer back to that output for the values you need (a parsing sketch follows this list).
2. Load `templates/spec-template.md` to understand required sections.
3. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
4. Report completion with branch name, spec file path, and readiness for the next phase.
Note: The script creates and checks out the new branch and initializes the spec file before writing.
3. Follow this execution flow:
    1. Parse user description from Input
       If empty: ERROR "No feature description provided"
    2. Extract key concepts from description
       Identify: actors, actions, data, constraints
    3. For each unclear aspect:
       Mark with [NEEDS CLARIFICATION: specific question]
    4. Fill User Scenarios & Testing section
       If no clear user flow: ERROR "Cannot determine user scenarios"
    5. Generate Functional Requirements
       Each requirement must be testable
       Mark ambiguous requirements
    6. Identify Key Entities (if data involved)
    7. Run Review Checklist
       If any [NEEDS CLARIFICATION]: WARN "Spec has uncertainties"
       If implementation details found: ERROR "Remove tech details"
    8. Return: SUCCESS (spec ready for planning)
4. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
5. Report completion with branch name, spec file path, and readiness for the next phase.
**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
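
A parsing sketch for step 1, assuming `jq` and the key names above (`{SCRIPT}` stands in for the concrete script resolved from the command's frontmatter):

```bash
# Run the script exactly once and reuse its JSON output afterwards.
JSON=$({SCRIPT})   # placeholder; the real command comes from the frontmatter

BRANCH_NAME=$(echo "$JSON" | jq -r '.BRANCH_NAME')
SPEC_FILE=$(echo "$JSON" | jq -r '.SPEC_FILE')
echo "Branch $BRANCH_NAME checked out; spec initialized at $SPEC_FILE"
```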
## General Guidelines
## Quick Guidelines
- Focus on WHAT users need and WHY
- Avoid HOW to implement (no tech stack, APIs, code structure)
- Written for business stakeholders, not developers
### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation
When creating this spec from a user prompt:
1. **Mark all ambiguities**: Use [NEEDS CLARIFICATION: specific question] for any assumption you'd need to make
2. **Don't guess**: If the prompt doesn't specify something (e.g., "login system" without auth method), mark it
3. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
4. **Common underspecified areas**:
- User types and permissions
- Data retention/deletion policies
- Performance targets and scale
- Error handling behaviors
- Integration requirements
- Security/compliance needs


@@ -11,55 +11,65 @@ User input:
$ARGUMENTS
1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
2. Load and analyze available design documents:
- Always read plan.md for tech stack and libraries
- IF EXISTS: Read data-model.md for entities
- IF EXISTS: Read contracts/ for API endpoints
- IF EXISTS: Read research.md for technical decisions
- IF EXISTS: Read quickstart.md for test scenarios
## Execution Steps
Note: Not all projects have all documents. For example:
- CLI tools might not have contracts/
- Simple libraries might not need data-model.md
- Generate tasks based on what's available
1. **Setup**: Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
3. Generate tasks following the template:
- Use `/templates/tasks-template.md` as the base
- Replace example tasks with actual tasks based on:
  * **Setup tasks**: Project init, dependencies, linting
  * **Test tasks [P]**: One per contract, one per integration scenario
  * **Core tasks**: One per entity, service, CLI command, endpoint
  * **Integration tasks**: DB connections, middleware, logging
  * **Polish tasks [P]**: Unit tests, performance, docs
2. **Load design documents**: Read from FEATURE_DIR:
- **Required**: plan.md (tech stack, libraries, structure)
- **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
- Note: Not all projects have all documents. Generate tasks based on what's available (see the availability sketch after these steps).
4. Task generation rules:
- Each contract file → contract test task marked [P]
- Each entity in data-model → model creation task marked [P]
- Each endpoint → implementation task (not parallel if shared files)
- Each user story → integration test marked [P]
- Different files = can be parallel [P]
- Same file = sequential (no [P])
3. **Execute task generation workflow** (follow the template structure):
- Load plan.md and extract tech stack, libraries, project structure
- If data-model.md exists: Extract entities → generate model tasks
- If contracts/ exists: Each file → generate endpoint/API tasks
- If research.md exists: Extract decisions → generate setup tasks
- Generate tasks by category: Setup, Core Implementation, Integration, Polish
- **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature spec or user asks for TDD approach
- Apply task rules:
  * Different files = mark [P] for parallel
  * Same file = sequential (no [P])
  * If tests requested: Tests before implementation (TDD order)
- Number tasks sequentially (T001, T002...)
- Generate dependency graph
- Create parallel execution examples
- Validate task completeness (all entities have implementations, all endpoints covered)
5. Order tasks by dependencies:
- Setup before everything
- Tests before implementation (TDD)
- Models before services
- Services before endpoints
- Core before integration
- Everything before polish
6. Include parallel execution examples:
- Group [P] tasks that can run together
- Show actual Task agent commands
7. Create FEATURE_DIR/tasks.md with:
- Correct feature name from implementation plan
- Numbered tasks (T001, T002, etc.)
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
- Correct feature name from plan.md
- Numbered tasks (T001, T002...) in dependency order
- Clear file paths for each task
- [P] markers for parallelizable tasks
- Phase groupings based on what's needed (Setup, Core Implementation, Integration, Polish)
- If tests requested: Include separate "Tests First (TDD)" phase before Core Implementation
- Dependency notes
5. **Report**: Output path to generated tasks.md and summary of task counts by phase.
- Parallel execution guidance
Context for task generation: {ARGS}
The tasks.md should be immediately executable: each task must be specific enough that an LLM can complete it without additional context (for example, "T004 [P] Create User model in src/models/user.py" rather than "add models").
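
As a sketch of the document-availability check from step 2 (paths per the script output above):

```bash
# Discover which optional design documents exist before generating tasks.
for doc in data-model.md research.md quickstart.md; do
  [ -f "$FEATURE_DIR/$doc" ] && echo "available: $doc"
done
[ -d "$FEATURE_DIR/contracts" ] && echo "available: contracts/"
```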
## Task Generation Rules
**IMPORTANT**: Tests are optional. Only generate test tasks if the user explicitly requested testing or TDD approach in the feature specification.
1. **From Contracts**:
- Each contract/endpoint → implementation task
- If tests requested: Each contract → contract test task [P] before implementation
2. **From Data Model**:
- Each entity → model creation task [P]
- Relationships → service layer tasks
3. **From User Stories**:
- Each story → implementation tasks
- If tests requested: Each story → integration test [P]
- If quickstart.md exists: Validation tasks
4. **Ordering**:
- Without tests: Setup → Models → Services → Endpoints → Integration → Polish
- With tests (TDD): Setup → Tests → Models → Services → Endpoints → Integration → Polish
- Dependencies block parallel execution
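
Finally, a rough sanity check for the numbering convention above; this sketch assumes each task ID appears exactly once and in order in tasks.md:

```bash
# Verify sequential task numbering (T001, T002, ...) in the generated file.
expected=1
grep -o 'T[0-9]\{3\}' "$FEATURE_DIR/tasks.md" | while read -r id; do
  printf -v want 'T%03d' "$expected"
  [ "$id" = "$want" ] || echo "numbering gap: expected $want, found $id"
  expected=$((expected + 1))
done
```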