Template cleanup and reorganization

den (work)
2025-10-03 17:08:14 -07:00
parent 23e0c5c83c
commit 5042c76558
7 changed files with 251 additions and 370 deletions


@@ -11,55 +11,65 @@ User input:
 $ARGUMENTS
-1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
-2. Load and analyze available design documents:
-   - Always read plan.md for tech stack and libraries
-   - IF EXISTS: Read data-model.md for entities
-   - IF EXISTS: Read contracts/ for API endpoints
-   - IF EXISTS: Read research.md for technical decisions
-   - IF EXISTS: Read quickstart.md for test scenarios
+## Execution Steps
-   Note: Not all projects have all documents. For example:
-   - CLI tools might not have contracts/
-   - Simple libraries might not need data-model.md
-   - Generate tasks based on what's available
+1. **Setup**: Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
-3. Generate tasks following the template:
-   - Use `/templates/tasks-template.md` as the base
-   - Replace example tasks with actual tasks based on:
-     * **Setup tasks**: Project init, dependencies, linting
-     * **Test tasks [P]**: One per contract, one per integration scenario
-     * **Core tasks**: One per entity, service, CLI command, endpoint
-     * **Integration tasks**: DB connections, middleware, logging
-     * **Polish tasks [P]**: Unit tests, performance, docs
+2. **Load design documents**: Read from FEATURE_DIR:
+   - **Required**: plan.md (tech stack, libraries, structure)
+   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
+   - Note: Not all projects have all documents. Generate tasks based on what's available.
-4. Task generation rules:
-   - Each contract file → contract test task marked [P]
-   - Each entity in data-model → model creation task marked [P]
-   - Each endpoint → implementation task (not parallel if shared files)
-   - Each user story → integration test marked [P]
-   - Different files = can be parallel [P]
-   - Same file = sequential (no [P])
+3. **Execute task generation workflow** (follow the template structure):
+   - Load plan.md and extract tech stack, libraries, project structure
+   - If data-model.md exists: Extract entities → generate model tasks
+   - If contracts/ exists: Each file → generate endpoint/API tasks
+   - If research.md exists: Extract decisions → generate setup tasks
+   - Generate tasks by category: Setup, Core Implementation, Integration, Polish
+   - **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature spec or user asks for TDD approach
+   - Apply task rules:
+     * Different files = mark [P] for parallel
+     * Same file = sequential (no [P])
+     * If tests requested: Tests before implementation (TDD order)
+   - Number tasks sequentially (T001, T002...)
+   - Generate dependency graph
+   - Create parallel execution examples
+   - Validate task completeness (all entities have implementations, all endpoints covered)
-5. Order tasks by dependencies:
-   - Setup before everything
-   - Tests before implementation (TDD)
-   - Models before services
-   - Services before endpoints
-   - Core before integration
-   - Everything before polish
-6. Include parallel execution examples:
-   - Group [P] tasks that can run together
-   - Show actual Task agent commands
-7. Create FEATURE_DIR/tasks.md with:
-   - Correct feature name from implementation plan
-   - Numbered tasks (T001, T002, etc.)
+4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
+   - Correct feature name from plan.md
+   - Numbered tasks (T001, T002...) in dependency order
    - Clear file paths for each task
+   - [P] markers for parallelizable tasks
+   - Phase groupings based on what's needed (Setup, Core Implementation, Integration, Polish)
+   - If tests requested: Include separate "Tests First (TDD)" phase before Core Implementation
    - Dependency notes
+5. **Report**: Output path to generated tasks.md and summary of task counts by phase.
-   - Parallel execution guidance
 Context for task generation: {ARGS}
 The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
+## Task Generation Rules
+**IMPORTANT**: Tests are optional. Only generate test tasks if the user explicitly requested testing or TDD approach in the feature specification.
+1. **From Contracts**:
+   - Each contract/endpoint → implementation task
+   - If tests requested: Each contract → contract test task [P] before implementation
+2. **From Data Model**:
+   - Each entity → model creation task [P]
+   - Relationships → service layer tasks
+3. **From User Stories**:
+   - Each story → implementation tasks
+   - If tests requested: Each story → integration test [P]
+   - If quickstart.md exists: Validation tasks
+4. **Ordering**:
+   - Without tests: Setup → Models → Services → Endpoints → Integration → Polish
+   - With tests (TDD): Setup → Tests → Models → Services → Endpoints → Integration → Polish
+   - Dependencies block parallel execution
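For readers skimming the new rules, here is a minimal sketch (Python, not part of the template or the repository) of how the numbering, [P]-marking, and phase-ordering conventions could be applied mechanically. The `Task` structure, the phase names, and the file-overlap heuristic are illustrative assumptions; in the template itself these decisions are made by the agent following the prose rules, not by code.

```python
from dataclasses import dataclass

# Phase order taken from the template's "Ordering" rule (without tests).
PHASE_ORDER = ["Setup", "Models", "Services", "Endpoints", "Integration", "Polish"]

@dataclass
class Task:
    description: str
    phase: str              # one of PHASE_ORDER
    file_path: str          # primary file the task touches
    number: str = ""        # assigned later, e.g. "T001"
    parallel: bool = False  # True -> render with a [P] marker

def assign_numbers_and_markers(tasks: list[Task]) -> list[Task]:
    """Order tasks by phase, number them T001..., and mark [P] where safe.

    A task gets [P] only if no other task in the same phase touches the
    same file; same-file tasks stay sequential, per the template's rule.
    """
    ordered = sorted(tasks, key=lambda t: PHASE_ORDER.index(t.phase))
    for i, task in enumerate(ordered, start=1):
        task.number = f"T{i:03d}"
        same_phase = [t for t in ordered if t.phase == task.phase and t is not task]
        task.parallel = all(t.file_path != task.file_path for t in same_phase)
    return ordered

def render(tasks: list[Task]) -> str:
    """Render tasks grouped by phase in tasks.md-like markdown."""
    lines = []
    for phase in PHASE_ORDER:
        phase_tasks = [t for t in tasks if t.phase == phase]
        if not phase_tasks:
            continue
        lines.append(f"## {phase}")
        for t in phase_tasks:
            marker = " [P]" if t.parallel else ""
            lines.append(f"- {t.number}{marker} {t.description} ({t.file_path})")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical tasks for a small web feature.
    demo = [
        Task("Initialize project and linting", "Setup", "pyproject.toml"),
        Task("Create User model", "Models", "src/models/user.py"),
        Task("Create Order model", "Models", "src/models/order.py"),
        Task("Implement POST /users endpoint", "Endpoints", "src/api/users.py"),
        Task("Implement GET /users endpoint", "Endpoints", "src/api/users.py"),
    ]
    print(render(assign_numbers_and_markers(demo)))
```

Running the demo marks the two model tasks [P] (different files) while keeping both endpoint tasks on src/api/users.py sequential, which is exactly the file-overlap rule the reorganized template states.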