Remove Codex-specific logic in the initialization script

Den Delimarsky 🌺
2025-09-20 09:09:24 -07:00
parent 8784f39755
commit 6a3ff650f1
2 changed files with 1 addition and 343 deletions


@@ -28,7 +28,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- Windsurf IDE support as additional AI assistant option
- Windsurf IDE support as additional AI assistant option (thank you [@raedkit](https://github.com/raedkit) for the work in [#151](https://github.com/github/spec-kit/pull/151))
- GitHub token support for API requests to handle corporate environments and rate limiting (contributed by [@zryfish](https://github.com/@zryfish) in [#243](https://github.com/github/spec-kit/pull/243))
### Changed


@@ -78,156 +78,6 @@ SCRIPT_TYPE_CHOICES = {"sh": "POSIX Shell (bash/zsh)", "ps": "PowerShell"}
# Claude CLI local installation path after migrate-installer
CLAUDE_LOCAL_PATH = Path.home() / ".claude" / "local" / "claude"
# Embedded fallback command templates (used if packaged templates are unavailable)
COMMAND_TEMPLATE_SPECIFY = """---
description: Create or update the feature specification from a natural language feature description.
scripts:
  sh: scripts/bash/create-new-feature.sh --json "{ARGS}"
  ps: scripts/powershell/create-new-feature.ps1 -Json "{ARGS}"
---
Given the feature description provided as an argument, do this:
1. Run the script `{SCRIPT}` from repo root and parse its JSON output for BRANCH_NAME and SPEC_FILE. All file paths must be absolute.
2. Load `templates/spec-template.md` to understand required sections.
3. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
4. Report completion with branch name, spec file path, and readiness for the next phase.
Note: The script creates and checks out the new branch and initializes the spec file before writing.
"""
COMMAND_TEMPLATE_PLAN = """---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
scripts:
  sh: scripts/bash/setup-plan.sh --json
  ps: scripts/powershell/setup-plan.ps1 -Json
---
Given the implementation details provided as an argument, do this:
1. Run `{SCRIPT}` from the repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. All future file paths must be absolute.
2. Read and analyze the feature specification to understand:
   - The feature requirements and user stories
   - Functional and non-functional requirements
   - Success criteria and acceptance criteria
   - Any technical constraints or dependencies mentioned
3. Read the constitution at `memory/constitution.md` to understand constitutional requirements.
4. Execute the implementation plan template:
   - Load `templates/plan-template.md` (already copied to IMPL_PLAN path)
   - Set Input path to FEATURE_SPEC
   - Run the Execution Flow (main) function steps 1-9
   - The template is self-contained and executable
   - Follow error handling and gate checks as specified
   - Let the template guide artifact generation in $SPECS_DIR:
     * Phase 0 generates research.md
     * Phase 1 generates data-model.md, contracts/, quickstart.md
     * Phase 2 generates tasks.md
   - Incorporate user-provided details from arguments into Technical Context: {ARGS}
   - Update Progress Tracking as you complete each phase
5. Verify execution completed:
   - Check Progress Tracking shows all phases complete
   - Ensure all required artifacts were generated
   - Confirm no ERROR states in execution
6. Report results with branch name, file paths, and generated artifacts.
Use absolute paths with the repository root for all file operations to avoid path issues.
"""
COMMAND_TEMPLATE_TASKS = """---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
scripts:
  sh: scripts/bash/check-task-prerequisites.sh --json
  ps: scripts/powershell/check-task-prerequisites.ps1 -Json
---
Given the context provided as an argument, do this:
1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
2. Load and analyze available design documents:
   - Always read plan.md for tech stack and libraries
   - IF EXISTS: Read data-model.md for entities
   - IF EXISTS: Read contracts/ for API endpoints
   - IF EXISTS: Read research.md for technical decisions
   - IF EXISTS: Read quickstart.md for test scenarios
   Note: Not all projects have all documents. For example:
   - CLI tools might not have contracts/
   - Simple libraries might not need data-model.md
   - Generate tasks based on what's available
3. Generate tasks following the template:
   - Use `templates/tasks-template.md` as the base
   - Replace example tasks with actual tasks based on:
     * **Setup tasks**: Project init, dependencies, linting
     * **Test tasks [P]**: One per contract, one per integration scenario
     * **Core tasks**: One per entity, service, CLI command, endpoint
     * **Integration tasks**: DB connections, middleware, logging
     * **Polish tasks [P]**: Unit tests, performance, docs
4. Task generation rules:
   - Each contract file → contract test task marked [P]
   - Each entity in data-model → model creation task marked [P]
   - Each endpoint → implementation task (not parallel if shared files)
   - Each user story → integration test marked [P]
   - Different files = can be parallel [P]
   - Same file = sequential (no [P])
5. Order tasks by dependencies:
   - Setup before everything
   - Tests before implementation (TDD)
   - Models before services
   - Services before endpoints
   - Core before integration
   - Everything before polish
6. Include parallel execution examples:
   - Group [P] tasks that can run together
   - Show actual Task agent commands
7. Create FEATURE_DIR/tasks.md with:
   - Correct feature name from implementation plan
   - Numbered tasks (T001, T002, etc.)
   - Clear file paths for each task
   - Dependency notes
   - Parallel execution guidance
Context for task generation: {ARGS}
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
"""
# Utility to ensure command templates use the modern schema (with scripts mapping)
def ensure_command_templates_current(commands_dir: Path) -> None:
    expected = {
        "specify.md": COMMAND_TEMPLATE_SPECIFY,
        "plan.md": COMMAND_TEMPLATE_PLAN,
        "tasks.md": COMMAND_TEMPLATE_TASKS,
    }

    def needs_upgrade(content: str) -> bool:
        # Old templates lacked the scripts: mapping and {SCRIPT} placeholder
        return "scripts:" not in content or "{SCRIPT}" not in content

    for filename, template_text in expected.items():
        target_file = commands_dir / filename
        if target_file.exists():
            try:
                current = target_file.read_text(encoding="utf-8")
            except Exception:
                target_file.write_text(template_text, encoding="utf-8")
            else:
                if needs_upgrade(current):
                    target_file.write_text(template_text, encoding="utf-8")
        else:
            target_file.parent.mkdir(parents=True, exist_ok=True)
            target_file.write_text(template_text, encoding="utf-8")
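
# Hypothetical usage sketch (paths below are made up, not part of the module): a
# commands/ directory written by an older release is upgraded in place, and any
# missing command file is recreated from the embedded fallback template.
from pathlib import Path  # already imported above; repeated so the sketch stands alone

demo_commands = Path("demo-workspace") / "commands"
demo_commands.mkdir(parents=True, exist_ok=True)
(demo_commands / "specify.md").write_text("---\ndescription: old schema\n---\n", encoding="utf-8")

ensure_command_templates_current(demo_commands)
# specify.md is rewritten from COMMAND_TEMPLATE_SPECIFY (it had no scripts: mapping
# and no {SCRIPT} placeholder); plan.md and tasks.md are created from their templates.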
# ASCII Art Banner
BANNER = """
███████╗██████╗ ███████╗ ██████╗██╗███████╗██╗ ██╗
@@ -910,187 +760,6 @@ def ensure_executable_scripts(project_path: Path, tracker: StepTracker | None =
        for f in failures:
            console.print(f" - {f}")
def ensure_workspace_commands(project_path: Path, tracker: StepTracker | None = None) -> None:
    """Ensure a workspace-level commands/ directory exists and has up-to-date templates."""
    if tracker:
        tracker.start("commands")
    commands_dir = project_path / "commands"
    seeded_from: str | None = None
    try:
        existed = commands_dir.exists()
        if not existed:
            commands_dir.mkdir(parents=True, exist_ok=True)
        try:
            is_empty = not any(commands_dir.iterdir())
        except FileNotFoundError:
            is_empty = True
        should_seed = not existed or is_empty
        if should_seed:
            candidates: list[tuple[str, Path]] = []
            template_commands = project_path / ".specify" / "templates" / "commands"
            if template_commands.exists() and template_commands.is_dir():
                candidates.append(("release bundle", template_commands))
            packaged_commands = None
            for ancestor in Path(__file__).resolve().parents:
                candidate = ancestor / "templates" / "commands"
                if candidate.exists() and candidate.is_dir():
                    packaged_commands = candidate
                    break
            if packaged_commands is not None:
                candidates.append(("packaged defaults", packaged_commands))
            for label, source in candidates:
                try:
                    shutil.copytree(source, commands_dir, dirs_exist_ok=True)
                    seeded_from = label
                    break
                except Exception:
                    continue
            if seeded_from is None:
                seeded_from = "embedded defaults"
        ensure_command_templates_current(commands_dir)
        detail = "verified" if seeded_from is None else seeded_from
        if tracker:
            tracker.complete("commands", detail)
        else:
            if seeded_from:
                console.print(f"[cyan]Seeded workspace commands from {seeded_from}[/cyan]")
    except Exception as exc:
        if tracker:
            tracker.error("commands", str(exc))
        else:
            console.print(f"[yellow]Warning: could not ensure commands directory ({exc})[/yellow]")
def _resolve_codex_home() -> Path:
    env_value = os.environ.get("CODEX_HOME")
    if env_value:
        return Path(env_value).expanduser()
    return Path.home() / ".codex"


def _ensure_gitignore_entries(project_path: Path, entries: list[str]) -> None:
    if not entries:
        return
    gitignore_path = project_path / ".gitignore"
    existing_text = ""
    existing: set[str] = set()
    if gitignore_path.exists():
        try:
            existing_text = gitignore_path.read_text(encoding="utf-8")
            existing = {line.strip() for line in existing_text.splitlines()}
        except Exception:
            return
    new_entries = [entry for entry in entries if entry not in existing]
    if not new_entries:
        return
    try:
        with gitignore_path.open("a", encoding="utf-8") as fh:
            if existing_text and not existing_text.endswith("\n"):
                fh.write("\n")
            for entry in new_entries:
                fh.write(f"{entry}\n")
    except Exception:
        return
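
# Hypothetical behaviour sketch (scratch paths are illustrative): entries already in
# .gitignore are skipped, missing ones are appended, and a newline is written first
# if the existing file does not end with one.
demo_project = Path("demo-project")
demo_project.mkdir(exist_ok=True)
(demo_project / ".gitignore").write_text(".codex/log\n", encoding="utf-8")

_ensure_gitignore_entries(demo_project, [".codex/log", ".codex/sessions"])
# .gitignore now reads ".codex/log\n.codex/sessions\n" -- only the missing entry was added.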
def sync_codex_prompts(project_path: Path, tracker: StepTracker | None = None) -> None:
    if tracker:
        tracker.start("codex-prompts")
    commands_dir = project_path / "commands"
    if not commands_dir.is_dir():
        if tracker:
            tracker.skip("codex-prompts", "no commands directory")
        return
    try:
        codex_home = _resolve_codex_home()
        prompts_dir = (codex_home / "prompts").expanduser()
        prompts_dir.mkdir(parents=True, exist_ok=True)
        if not os.access(prompts_dir, os.W_OK):
            raise PermissionError(f"Codex prompts directory not writable: {prompts_dir}")
        expected: set[str] = set()
        copied = 0
        skipped = 0
        for source in sorted(commands_dir.glob("*.md")):
            if not source.is_file():
                continue
            dest_name = source.name
            dest_path = prompts_dir / dest_name
            expected.add(dest_name)
            data = source.read_bytes()
            if dest_path.exists():
                try:
                    if dest_path.read_bytes() == data:
                        skipped += 1
                        continue
                except Exception:
                    pass
            dest_path.write_bytes(data)
            copied += 1
        # Clean up any legacy spec-kit-prefixed prompts from earlier installer versions
        for legacy in prompts_dir.glob("spec-kit-*.md"):
            try:
                legacy.unlink()
            except Exception:
                continue
        detail_bits = []
        if copied:
            detail_bits.append(f"{copied} updated")
        if skipped:
            detail_bits.append(f"{skipped} unchanged")
        detail = ", ".join(detail_bits) if detail_bits else "ok"
        if tracker:
            tracker.complete("codex-prompts", detail)
        # If CODEX_HOME lives inside this project, make sure generated files stay untracked
        try:
            codex_home_relative = codex_home.resolve().relative_to(project_path.resolve())
        except Exception:
            return
        codex_prefix = codex_home_relative.as_posix()
        if codex_prefix == ".":
            return
        ignore_entries = [
            f"{codex_prefix}/*.json",
            f"{codex_prefix}/*.jsonl",
            f"{codex_prefix}/*.toml",
            f"{codex_prefix}/log",
            f"{codex_prefix}/sessions",
        ]
        _ensure_gitignore_entries(project_path, ignore_entries)
    except Exception as exc:
        if tracker:
            tracker.error("codex-prompts", str(exc))
        else:
            console.print(f"[yellow]Warning: could not sync Codex prompts ({exc})[/yellow]")
@app.command()
def init(
    project_name: str = typer.Argument(None, help="Name for your new project directory (optional if using --here)"),
@@ -1275,10 +944,6 @@ def init(
    ]:
        tracker.add(key, label)

    if selected_ai == "codex":
        tracker.add("commands", "Ensure workspace commands")
        tracker.add("codex-prompts", "Sync Codex prompts")

    # Use transient so live tree is replaced by the final static render (avoids duplicate output)
    with Live(tracker.render(), console=console, refresh_per_second=8, transient=True) as live:
        tracker.attach_refresh(lambda: live.update(tracker.render()))
@@ -1290,11 +955,6 @@ def init(
        download_and_extract_template(project_path, selected_ai, selected_script, here, verbose=False, tracker=tracker, client=local_client, debug=debug, github_token=github_token)

        # Ensure /commands directory for Codex CLI workspaces only
        if selected_ai == "codex":
            ensure_workspace_commands(project_path, tracker=tracker)
            sync_codex_prompts(project_path, tracker=tracker)

        # Ensure scripts are executable (POSIX)
        ensure_executable_scripts(project_path, tracker=tracker)
@@ -1352,8 +1012,6 @@ def init(
    steps_lines.append(" 2.3 [cyan]/plan[/] - Create implementation plans")
    steps_lines.append(" 2.4 [cyan]/tasks[/] - Generate actionable tasks")
    steps_lines.append(" 2.5 [cyan]/implement[/] - Execute implementation")

    if selected_ai == "codex":
        steps_lines.append(" 2.6 [cyan]Codex CLI[/] - Restart Codex if slash commands are missing; commands mirror into AGENTS.md")

    steps_panel = Panel("\n".join(steps_lines), title="Next steps", border_style="cyan", padding=(1,2))
    console.print()