mirror of
https://github.com/github/spec-kit.git
synced 2026-03-21 12:53:08 +00:00
* feat(cli): embed core pack in wheel + offline-first init (#1711, #1752)

  Bundle templates, commands, and scripts inside the specify-cli wheel so that `specify init` works without any network access by default. Changes:
  - pyproject.toml: add hatchling force-include for core_pack assets; bump version to 0.2.1
  - __init__.py: add _locate_core_pack(), _generate_agent_commands() (Python port of the generate_commands() shell function), and scaffold_from_core_pack(); modify init() to scaffold from bundled assets by default; add --from-github flag to opt back in to the GitHub download path
  - release.yml: build wheel during CI release job
  - create-github-release.sh: attach .whl as a release asset
  - docs/installation.md: add Enterprise/Air-Gapped Installation section
  - README.md: add Option 3 enterprise install with accurate offline story

  Closes #1711. Addresses #1752.

* fix(tests): update kiro alias test for offline-first scaffold path

* feat(cli): invoke bundled release script at runtime for offline scaffold
  - Embed release scripts (bash + PowerShell) in wheel via pyproject.toml
  - Replace Python _generate_agent_commands() with subprocess invocation of the canonical create-release-packages.sh, guaranteeing byte-for-byte parity between 'specify init --offline' and GitHub release ZIPs
  - Fix macOS bash 3.2 compat in release script: replace cp --parents, local -n (nameref), and mapfile with POSIX-safe alternatives
  - Fix _TOML_AGENTS: remove qwen (uses markdown per release script)
  - Rename --from-github to --offline (opt-in to bundled assets)
  - Add _locate_release_script() for cross-platform script discovery
  - Update tests: remove bash 4+/GNU coreutils requirements, handle Kimi directory-per-skill layout, 576 tests passing
  - Update CHANGELOG and docs/installation.md

* Potential fix for pull request finding

  Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

* fix(offline): error out if --offline fails instead of falling back to network
  - _locate_core_pack() docstring now accurately describes that it only finds wheel-bundled core_pack/; source-checkout fallback lives in callers
  - init() --offline + no bundled assets now exits with a clear error (previously printed a warning and silently fell back to GitHub download)
  - init() scaffold failure under --offline now exits with an error instead of retrying via download_and_extract_template

  Addresses reviewer comment: https://github.com/github/spec-kit/pull/1803

* fix(offline): address PR review comments
  - fix(shell): harden validate_subset against glob injection in case patterns
  - fix(shell): make GENRELEASES_DIR overridable via env var for test isolation
  - fix(cli): probe pwsh then powershell on Windows instead of hardcoding pwsh
  - fix(cli): remove unreachable fallback branch when --offline fails
  - fix(cli): improve --offline error message with common failure causes
  - fix(release): move wheel build step after create-release-packages.sh
  - fix(docs): add --offline to installation.md air-gapped example
  - fix(tests): remove unused genreleases_dir param from _run_release_script
  - fix(tests): rewrite parity test to run one agent at a time with isolated temp dirs, preventing cross-agent interference from rm -rf

* fix(offline): address second round of review comments
  - fix(shell): replace case-pattern membership with explicit loop + == check for unambiguous glob-safety in validate_subset()
  - fix(cli): require pwsh (PowerShell 7) only; drop powershell (PS5) fallback since the bundled script uses #requires -Version 7.0
  - fix(cli): add bash and zip preflight checks in scaffold_from_core_pack() with clear error messages if either is missing
  - fix(build): list individual template files in pyproject.toml force-include to avoid duplicating templates/commands/ in the wheel

* fix(offline): address third round of review comments
  - Add 120s timeout to subprocess.run in scaffold_from_core_pack to prevent indefinite hangs during offline scaffolding
  - Add test_pyproject_force_include_covers_all_templates to catch missing template files in wheel bundling
  - Tighten kiro alias test to assert specific scaffold path (download vs offline)

* fix(offline): address Copilot review round 4
  - fix(offline): use handle_vscode_settings() merge for --here --offline to prevent data loss on existing .vscode/settings.json
  - fix(release): glob wheel filename in create-github-release.sh instead of hardcoding version, preventing upload failures on version mismatch
  - docs(release): add comment noting pyproject.toml version is synced by release-trigger.yml before the tag is pushed

* fix(offline): address review round 5 + offline bundle ZIP
  - fix(offline): pwsh-only, no powershell.exe fallback; clarify error message
  - fix(offline): tighten _has_bundled to check scripts dir for source checkouts
  - feat(release): build specify-bundle-v*.zip with all deps at release time
  - feat(release): attach offline bundle ZIP to GitHub release assets
  - docs: simplify air-gapped install to single ZIP download from releases
  - docs: add Windows PowerShell 7+ (pwsh) requirement note

* fix(tests): session-scoped scaffold cache + timeout + dead code removal
  - Add timeout=300 and returncode check to _run_release_script() to fail fast with clear output on script hangs or failures
  - Remove unused import specify_cli, _SOURCE_TEMPLATES, bundled_project fixture
  - Add session-scoped scaffolded_sh/scaffolded_ps fixtures that scaffold once per agent and reuse the output directory across all invariant tests
  - Reduces test_core_pack_scaffold runtime from ~175s to ~51s (3.4x faster)
  - Parity tests still scaffold independently for isolation

* fix(offline): remove wheel from release, update air-gapped docs to use pip download

* fix(tests): handle codex skills layout and iflow agent in scaffold tests

  Codex now uses create_skills() with a hyphenated separator (speckit-plan/SKILL.md) instead of generate_commands(). Update _SKILL_AGENTS, _expected_ext, and _list_command_files to handle both codex ('-') and kimi ('.') skill agents. Also picks up iflow as a new testable agent automatically via AGENT_CONFIG.

* fix(offline): require wheel core_pack for --offline, remove source-checkout fallback

  --offline now strictly requires _locate_core_pack() to find the wheel's bundled core_pack/ directory. Source-checkout fallbacks are no longer accepted at the init() level — if core_pack/ is missing, the CLI errors out with a clear message pointing to the installation docs. scaffold_from_core_pack() retains its internal source-checkout fallbacks so parity tests can call it directly from a source checkout.

* fix(offline): remove stale [Unreleased] CHANGELOG section, scope httpx.Client to download path
  - Remove entire [Unreleased] section — CHANGELOG is auto-generated at release
  - Move httpx.Client into the use_github branch with a context manager so the --offline path doesn't allocate an unused network client

* fix(offline): remove dead --from-github flag, fix typer.Exit handling, add page templates validation
  - Remove unused --from-github CLI option and docstring example
  - Add (typer.Exit, SystemExit) re-raise before broad except Exception to prevent duplicate error panel on offline scaffold failure
  - Validate page templates directory exists in scaffold_from_core_pack() to fail fast on incomplete wheel installs
  - Fix ruff lint: remove unused shutil import, remove f-prefix on strings without placeholders in test_core_pack_scaffold.py

* docs(offline): add v0.6.0 deprecation notice with rationale
  - Help text: note bundled assets become default in v0.6.0
  - Docstring: explain why GitHub download is being retired (no network dependency, no proxy/firewall issues, guaranteed version match)
  - Runtime nudge: when bundled assets are available but the user takes the GitHub download path, suggest --offline with rationale
  - docs/installation.md: add deprecation notice with full rationale

* fix(offline): allow --offline in source checkouts, fix CHANGELOG truncation
  - Simplify use_github logic: use_github = not offline (let scaffold_from_core_pack handle fallback to source-checkout paths)
  - Remove hard-fail when core_pack/ is absent — scaffold_from_core_pack already falls back to repo-root templates/scripts/commands
  - Fix truncated 'skill…' → 'skills' in CHANGELOG.md

* fix(offline): sandbox GENRELEASES_DIR and clean up on failure
  - Pin GENRELEASES_DIR to a temp dir in scaffold_from_core_pack() so a user-exported value cannot redirect output or cause rm -rf outside the sandbox
  - Clean up partial project directory on --offline scaffold failure (same behavior as the GitHub-download failure path)

* fix(tests): use shutil.which for bash discovery, add ps parity tests
  - _find_bash() now tries shutil.which('bash') first so non-standard install locations (Nix, custom CI images) are found
  - Parametrize parity test over both 'sh' and 'ps' script types to ensure the PowerShell variant stays byte-for-byte identical to release script output (353 scaffold tests, 810 total)

* fix(tests): parse pyproject.toml with tomllib, remove unused fixture
  - Use tomllib to parse force-include keys from the actual TOML table instead of raw substring search (avoids false positives)
  - Remove unused source_template_stems fixture from test_scaffold_command_dir_location

* fix: guard GENRELEASES_DIR against unsafe values, update docstring
  - Add safety check in create-release-packages.sh: reject empty, '/', '.', '..' values for GENRELEASES_DIR before rm -rf
  - Strip trailing slash to avoid path surprises
  - Update scaffold_from_core_pack() docstring to accurately describe all failure modes (not just 'assets not found')

* fix: harden GENRELEASES_DIR guard, cache parity tests, safe iterdir
  - Reject '..' path segments in GENRELEASES_DIR to prevent traversal
  - Session-cache both scaffold and release-script results in parity tests — runtime drops from ~74s to ~45s (40% faster)
  - Guard cmd_dir.iterdir() in assertion message against missing dirs

* fix(tests): exclude YAML frontmatter source metadata from path rewrite check

  The codex and kimi SKILL.md files have 'source: templates/commands/...' in their YAML frontmatter — this is provenance metadata, not a runtime path that needs rewriting. Strip frontmatter before checking for bare scripts/ and templates/ paths.

* fix(offline): surface scaffold failure detail in error output

  When --offline scaffold fails, look up the tracker's 'scaffold' step detail and print it alongside the generic error message so users see the specific root cause (e.g. missing zip/pwsh, script stderr).

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
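The GENRELEASES_DIR hardening described above rejects empty, '/', '.', and '..' values (including '..' path segments) and strips a trailing slash before any `rm -rf`. A rough Python analogue of that validation — illustrative only; the real guard lives in create-release-packages.sh as shell code, and `is_safe_build_dir` is a hypothetical name:

```python
def is_safe_build_dir(value: str) -> bool:
    """Reject values that would make `rm -rf "$GENRELEASES_DIR"` dangerous.

    Hypothetical Python sketch of the shell guard described above: empty
    strings, '/', '.', '..', and any path containing a '..' segment are
    refused; a single trailing slash is stripped first.
    """
    value = value.rstrip("/")        # '/tmp/out/' -> '/tmp/out'; '/' -> ''
    if value in ("", ".", ".."):
        return False
    if ".." in value.split("/"):     # reject traversal segments like 'a/../b'
        return False
    return True
```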
614 lines
23 KiB
Python
"""
|
||
Validation tests for offline/air-gapped scaffolding (PR #1803).
|
||
|
||
For every supported AI agent (except "generic") the scaffold output is verified
|
||
against invariants and compared byte-for-byte with the canonical output produced
|
||
by create-release-packages.sh.
|
||
|
||
Since scaffold_from_core_pack() now invokes the release script at runtime, the
|
||
parity test (section 9) runs the script independently and compares the results
|
||
to ensure the integration is correct.
|
||
|
||
Per-agent invariants verified
|
||
──────────────────────────────
|
||
• Command files are written to the directory declared in AGENT_CONFIG
|
||
• File count matches the number of source templates
|
||
• Extension is correct: .toml (TOML agents), .agent.md (copilot), .md (rest)
|
||
• No unresolved placeholders remain ({SCRIPT}, {ARGS}, __AGENT__)
|
||
• Argument token is correct: {{args}} for TOML agents, $ARGUMENTS for others
|
||
• Path rewrites applied: scripts/ → .specify/scripts/ etc.
|
||
• TOML files have "description" and "prompt" fields
|
||
• Markdown files have parseable YAML frontmatter
|
||
• Copilot: companion speckit.*.prompt.md files are generated in prompts/
|
||
• .specify/scripts/ contains at least one script file
|
||
• .specify/templates/ contains at least one template file
|
||
|
||
Parity invariant
|
||
────────────────
|
||
Every file produced by scaffold_from_core_pack() must be byte-for-byte
|
||
identical to the same file in the ZIP produced by the release script.
|
||
"""

import os
import re
import shutil
import subprocess
import tomllib
import zipfile
from pathlib import Path

import pytest
import yaml

from specify_cli import (
    AGENT_CONFIG,
    _TOML_AGENTS,
    _locate_core_pack,
    scaffold_from_core_pack,
)

_REPO_ROOT = Path(__file__).parent.parent
_RELEASE_SCRIPT = _REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh"


def _find_bash() -> str | None:
    """Return the path to a usable bash on this machine, or None."""
    # Prefer PATH lookup so non-standard install locations (Nix, CI) are found.
    on_path = shutil.which("bash")
    if on_path:
        return on_path
    candidates = [
        "/opt/homebrew/bin/bash",
        "/usr/local/bin/bash",
        "/bin/bash",
        "/usr/bin/bash",
    ]
    for candidate in candidates:
        try:
            result = subprocess.run(
                [candidate, "--version"],
                capture_output=True, text=True, timeout=5,
            )
            if result.returncode == 0:
                return candidate
        except (FileNotFoundError, subprocess.TimeoutExpired):
            continue
    return None


def _run_release_script(agent: str, script_type: str, bash: str, output_dir: Path) -> Path:
    """Run create-release-packages.sh for *agent*/*script_type* and return the
    path to the generated ZIP. *output_dir* receives the build artifacts so
    the repo working tree stays clean."""
    env = os.environ.copy()
    env["AGENTS"] = agent
    env["SCRIPTS"] = script_type
    env["GENRELEASES_DIR"] = str(output_dir)

    result = subprocess.run(
        [bash, str(_RELEASE_SCRIPT), "v0.0.0"],
        capture_output=True, text=True,
        cwd=str(_REPO_ROOT),
        env=env,
        timeout=300,
    )

    if result.returncode != 0:
        pytest.fail(
            f"Release script failed with exit code {result.returncode}\n"
            f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"
        )

    zip_pattern = f"spec-kit-template-{agent}-{script_type}-v0.0.0.zip"
    zip_path = output_dir / zip_pattern
    if not zip_path.exists():
        pytest.fail(
            f"Release script did not produce expected ZIP: {zip_path}\n"
            f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"
        )
    return zip_path

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _commands_dir() -> Path:
    """Return the command templates directory (source-checkout or core_pack)."""
    core = _locate_core_pack()
    if core and (core / "commands").is_dir():
        return core / "commands"
    # Source-checkout fallback
    repo_root = Path(__file__).parent.parent
    return repo_root / "templates" / "commands"


def _get_source_template_stems() -> list[str]:
    """Return the stems of source command template files (e.g. ['specify', 'plan', ...])."""
    return sorted(p.stem for p in _commands_dir().glob("*.md"))


def _expected_cmd_dir(project_path: Path, agent: str) -> Path:
    """Return the expected command-files directory for a given agent."""
    cfg = AGENT_CONFIG[agent]
    folder = (cfg.get("folder") or "").rstrip("/")
    subdir = cfg.get("commands_subdir", "commands")
    if folder:
        return project_path / folder / subdir
    return project_path / ".speckit" / subdir


# Agents whose commands are laid out as <skills_dir>/<name>/SKILL.md.
# Maps agent -> separator used in skill directory names.
_SKILL_AGENTS: dict[str, str] = {"codex": "-", "kimi": "."}


def _expected_ext(agent: str) -> str:
    if agent in _TOML_AGENTS:
        return "toml"
    if agent == "copilot":
        return "agent.md"
    if agent in _SKILL_AGENTS:
        return "SKILL.md"
    return "md"


def _list_command_files(cmd_dir: Path, agent: str) -> list[Path]:
    """List generated command files, handling skills-based directory layouts."""
    if agent in _SKILL_AGENTS:
        sep = _SKILL_AGENTS[agent]
        return sorted(cmd_dir.glob(f"speckit{sep}*/SKILL.md"))
    ext = _expected_ext(agent)
    return sorted(cmd_dir.glob(f"speckit.*.{ext}"))


def _collect_relative_files(root: Path) -> dict[str, bytes]:
    """Walk *root* and return {relative_posix_path: file_bytes}."""
    result: dict[str, bytes] = {}
    for p in root.rglob("*"):
        if p.is_file():
            result[p.relative_to(root).as_posix()] = p.read_bytes()
    return result


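The byte-for-byte parity checks later in this file reduce to comparing two such path→bytes mappings. A minimal self-contained sketch of that comparison, using stdlib temp directories and a made-up file name (`memory/constitution.md` here is illustrative, not a claim about the real scaffold contents):

```python
import tempfile
from pathlib import Path


def collect(root: Path) -> dict[str, bytes]:
    # Same shape as _collect_relative_files: relative POSIX path -> bytes.
    return {
        p.relative_to(root).as_posix(): p.read_bytes()
        for p in root.rglob("*")
        if p.is_file()
    }


with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    for root in (Path(a), Path(b)):
        (root / "memory").mkdir()
        (root / "memory" / "constitution.md").write_bytes(b"# rules\n")
    tree_a, tree_b = collect(Path(a)), collect(Path(b))
    # Parity = identical key sets AND identical bytes per key.
    assert tree_a.keys() == tree_b.keys()
    assert all(tree_a[k] == tree_b[k] for k in tree_a)
```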
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

@pytest.fixture(scope="session")
def source_template_stems() -> list[str]:
    return _get_source_template_stems()


@pytest.fixture(scope="session")
def scaffolded_sh(tmp_path_factory):
    """Session-scoped cache: scaffold once per agent with script_type='sh'."""
    cache = {}

    def _get(agent: str) -> Path:
        if agent not in cache:
            project = tmp_path_factory.mktemp(f"scaffold_sh_{agent}")
            ok = scaffold_from_core_pack(project, agent, "sh")
            assert ok, f"scaffold_from_core_pack returned False for agent '{agent}'"
            cache[agent] = project
        return cache[agent]

    return _get


@pytest.fixture(scope="session")
def scaffolded_ps(tmp_path_factory):
    """Session-scoped cache: scaffold once per agent with script_type='ps'."""
    cache = {}

    def _get(agent: str) -> Path:
        if agent not in cache:
            project = tmp_path_factory.mktemp(f"scaffold_ps_{agent}")
            ok = scaffold_from_core_pack(project, agent, "ps")
            assert ok, f"scaffold_from_core_pack returned False for agent '{agent}'"
            cache[agent] = project
        return cache[agent]

    return _get


# ---------------------------------------------------------------------------
# Parametrize over all agents except "generic"
# ---------------------------------------------------------------------------

_TESTABLE_AGENTS = [a for a in AGENT_CONFIG if a != "generic"]


# ---------------------------------------------------------------------------
# 1. Bundled scaffold — directory structure
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_creates_specify_scripts(agent, scaffolded_sh):
    """scaffold_from_core_pack copies at least one script into .specify/scripts/."""
    project = scaffolded_sh(agent)

    scripts_dir = project / ".specify" / "scripts" / "bash"
    assert scripts_dir.is_dir(), f".specify/scripts/bash/ missing for agent '{agent}'"
    assert any(scripts_dir.iterdir()), f".specify/scripts/bash/ is empty for agent '{agent}'"


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_creates_specify_templates(agent, scaffolded_sh):
    """scaffold_from_core_pack copies at least one page template into .specify/templates/."""
    project = scaffolded_sh(agent)

    tpl_dir = project / ".specify" / "templates"
    assert tpl_dir.is_dir(), f".specify/templates/ missing for agent '{agent}'"
    assert any(tpl_dir.iterdir()), ".specify/templates/ is empty"


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_command_dir_location(agent, scaffolded_sh):
    """Command files land in the directory declared by AGENT_CONFIG."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    assert cmd_dir.is_dir(), (
        f"Command dir '{cmd_dir.relative_to(project)}' not created for agent '{agent}'"
    )


# ---------------------------------------------------------------------------
# 2. Bundled scaffold — file count
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_command_file_count(agent, scaffolded_sh, source_template_stems):
    """One command file is generated per source template for every agent."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    generated = _list_command_files(cmd_dir, agent)

    if cmd_dir.is_dir():
        dir_listing = list(cmd_dir.iterdir())
    else:
        dir_listing = f"<command dir missing: {cmd_dir}>"

    assert len(generated) == len(source_template_stems), (
        f"Agent '{agent}': expected {len(source_template_stems)} command files "
        f"({_expected_ext(agent)}), found {len(generated)}. Dir: {dir_listing}"
    )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_command_file_names(agent, scaffolded_sh, source_template_stems):
    """Each source template stem maps to a corresponding speckit.<stem>.<ext> file."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for stem in source_template_stems:
        if agent in _SKILL_AGENTS:
            sep = _SKILL_AGENTS[agent]
            expected = cmd_dir / f"speckit{sep}{stem}" / "SKILL.md"
        else:
            ext = _expected_ext(agent)
            expected = cmd_dir / f"speckit.{stem}.{ext}"
        assert expected.is_file(), (
            f"Agent '{agent}': expected file '{expected.name}' not found in '{cmd_dir}'"
        )


# ---------------------------------------------------------------------------
# 3. Bundled scaffold — content invariants
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_no_unresolved_script_placeholder(agent, scaffolded_sh):
    """{SCRIPT} must not appear in any generated command file."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if f.is_file():
            content = f.read_text(encoding="utf-8")
            assert "{SCRIPT}" not in content, (
                f"Unresolved {{SCRIPT}} in '{f.relative_to(project)}' for agent '{agent}'"
            )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_no_unresolved_agent_placeholder(agent, scaffolded_sh):
    """__AGENT__ must not appear in any generated command file."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if f.is_file():
            content = f.read_text(encoding="utf-8")
            assert "__AGENT__" not in content, (
                f"Unresolved __AGENT__ in '{f.relative_to(project)}' for agent '{agent}'"
            )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_no_unresolved_args_placeholder(agent, scaffolded_sh):
    """{ARGS} must not appear in any generated command file (replaced with agent-specific token)."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if f.is_file():
            content = f.read_text(encoding="utf-8")
            assert "{ARGS}" not in content, (
                f"Unresolved {{ARGS}} in '{f.relative_to(project)}' for agent '{agent}'"
            )


# Build a set of template stems that actually contain {ARGS} in their source.
_TEMPLATES_WITH_ARGS: frozenset[str] = frozenset(
    p.stem
    for p in _commands_dir().glob("*.md")
    if "{ARGS}" in p.read_text(encoding="utf-8")
)


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_argument_token_format(agent, scaffolded_sh):
    """For templates that carry an {ARGS} token:
      - TOML agents must emit {{args}}
      - Markdown agents must emit $ARGUMENTS
    Templates without {ARGS} (e.g. implement, plan) are skipped.
    """
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)

    for f in _list_command_files(cmd_dir, agent):
        # Recover the stem from the file path
        if agent in _SKILL_AGENTS:
            sep = _SKILL_AGENTS[agent]
            stem = f.parent.name.removeprefix(f"speckit{sep}")
        else:
            ext = _expected_ext(agent)
            stem = f.name.removeprefix("speckit.").removesuffix(f".{ext}")
        if stem not in _TEMPLATES_WITH_ARGS:
            continue  # this template has no argument token

        content = f.read_text(encoding="utf-8")
        if agent in _TOML_AGENTS:
            assert "{{args}}" in content, (
                f"TOML agent '{agent}': expected '{{{{args}}}}' in '{f.name}'"
            )
        else:
            assert "$ARGUMENTS" in content, (
                f"Markdown agent '{agent}': expected '$ARGUMENTS' in '{f.name}'"
            )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_path_rewrites_applied(agent, scaffolded_sh):
    """Bare scripts/ and templates/ paths must be rewritten to .specify/ variants.

    YAML frontmatter 'source:' metadata fields are excluded — they reference
    the original template path for provenance, not a runtime path.
    """
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if not f.is_file():
            continue
        content = f.read_text(encoding="utf-8")

        # Strip YAML frontmatter before checking — source: metadata is not a runtime path
        body = content
        if content.startswith("---"):
            parts = content.split("---", 2)
            if len(parts) >= 3:
                body = parts[2]

        # Should not contain bare (non-.specify/) script paths
        assert not re.search(r'(?<!\.specify/)scripts/', body), (
            f"Bare scripts/ path found in '{f.relative_to(project)}' for agent '{agent}'"
        )
        assert not re.search(r'(?<!\.specify/)templates/', body), (
            f"Bare templates/ path found in '{f.relative_to(project)}' for agent '{agent}'"
        )


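The negative lookbehind in the assertions above only suppresses a `scripts/` match that is directly preceded by `.specify/`; every other occurrence still matches and trips the invariant. A quick standalone check of that behavior:

```python
import re

pattern = re.compile(r'(?<!\.specify/)scripts/')

# Rewritten paths are preceded by '.specify/' and are suppressed.
assert pattern.search("run .specify/scripts/bash/check.sh") is None
# Bare paths still match and would fail the invariant.
assert pattern.search("run scripts/bash/check.sh") is not None
# The lookbehind is per-occurrence: a later bare path on the same line is caught.
assert pattern.search(".specify/scripts/ and scripts/") is not None
```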
# ---------------------------------------------------------------------------
# 4. TOML format checks
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", sorted(_TOML_AGENTS))
def test_toml_format_valid(agent, scaffolded_sh):
    """TOML agents: every command file must have description and prompt fields."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.glob("speckit.*.toml"):
        content = f.read_text(encoding="utf-8")
        assert 'description = "' in content, (
            f"Missing 'description' in '{f.name}' for agent '{agent}'"
        )
        assert 'prompt = """' in content, (
            f"Missing 'prompt' block in '{f.name}' for agent '{agent}'"
        )


# ---------------------------------------------------------------------------
# 5. Markdown frontmatter checks
# ---------------------------------------------------------------------------

_MARKDOWN_AGENTS = [a for a in _TESTABLE_AGENTS if a not in _TOML_AGENTS]


@pytest.mark.parametrize("agent", _MARKDOWN_AGENTS)
def test_markdown_has_frontmatter(agent, scaffolded_sh):
    """Markdown agents: every command file must start with valid YAML frontmatter."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in _list_command_files(cmd_dir, agent):
        content = f.read_text(encoding="utf-8")
        assert content.startswith("---"), (
            f"No YAML frontmatter in '{f.name}' for agent '{agent}'"
        )
        parts = content.split("---", 2)
        assert len(parts) >= 3, f"Incomplete frontmatter in '{f.name}'"
        fm = yaml.safe_load(parts[1])
        assert fm is not None, f"Empty frontmatter in '{f.name}'"
        assert "description" in fm, (
            f"'description' key missing from frontmatter in '{f.name}' for agent '{agent}'"
        )


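The frontmatter convention these tests rely on is a leading `---` fence, a YAML block, and a closing `---`. The `split("---", 2)` idiom used above can be seen in isolation on a made-up document (stdlib only, no YAML parsing):

```python
# Hypothetical command file content, for illustration only.
doc = """---
description: Create a feature specification.
---

Run the script and report results.
"""

assert doc.startswith("---")
parts = doc.split("---", 2)
# parts[0] is the empty prefix, parts[1] the frontmatter, parts[2] the body.
assert len(parts) == 3
frontmatter, body = parts[1], parts[2]
assert "description:" in frontmatter
# The body is what the path-rewrite checks scan, frontmatter excluded.
assert "scripts/" not in body
```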
# ---------------------------------------------------------------------------
# 6. Copilot-specific: companion .prompt.md files
# ---------------------------------------------------------------------------

def test_copilot_companion_prompt_files(scaffolded_sh, source_template_stems):
    """Copilot: a speckit.<stem>.prompt.md companion is created for every .agent.md file."""
    project = scaffolded_sh("copilot")

    prompts_dir = project / ".github" / "prompts"
    assert prompts_dir.is_dir(), ".github/prompts/ not created for copilot"

    for stem in source_template_stems:
        prompt_file = prompts_dir / f"speckit.{stem}.prompt.md"
        assert prompt_file.is_file(), (
            f"Companion prompt file '{prompt_file.name}' missing for copilot"
        )


def test_copilot_prompt_file_content(scaffolded_sh, source_template_stems):
    """Copilot companion .prompt.md files must reference their parent .agent.md."""
    project = scaffolded_sh("copilot")

    prompts_dir = project / ".github" / "prompts"
    for stem in source_template_stems:
        f = prompts_dir / f"speckit.{stem}.prompt.md"
        content = f.read_text(encoding="utf-8")
        assert f"agent: speckit.{stem}" in content, (
            f"Companion '{f.name}' does not reference 'speckit.{stem}'"
        )


# ---------------------------------------------------------------------------
# 7. PowerShell script variant
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_powershell_variant(agent, scaffolded_ps, source_template_stems):
    """scaffold_from_core_pack with script_type='ps' creates correct files."""
    project = scaffolded_ps(agent)

    scripts_dir = project / ".specify" / "scripts" / "powershell"
    assert scripts_dir.is_dir(), f".specify/scripts/powershell/ missing for '{agent}'"
    assert any(scripts_dir.iterdir()), ".specify/scripts/powershell/ is empty"

    cmd_dir = _expected_cmd_dir(project, agent)
    generated = _list_command_files(cmd_dir, agent)
    assert len(generated) == len(source_template_stems)


# ---------------------------------------------------------------------------
# 8. Parity: bundled vs. real create-release-packages.sh ZIP
# ---------------------------------------------------------------------------

@pytest.fixture(scope="session")
def release_script_trees(tmp_path_factory):
    """Session-scoped cache: run release script once per (agent, script_type)."""
    cache: dict[tuple[str, str], dict[str, bytes]] = {}
    bash = _find_bash()

    def _get(agent: str, script_type: str) -> dict[str, bytes] | None:
        if bash is None:
            return None
        key = (agent, script_type)
        if key not in cache:
            tmp = tmp_path_factory.mktemp(f"release_{agent}_{script_type}")
            gen_dir = tmp / "genreleases"
            gen_dir.mkdir()
            zip_path = _run_release_script(agent, script_type, bash, gen_dir)
            extracted = tmp / "extracted"
            extracted.mkdir()
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(extracted)
            cache[key] = _collect_relative_files(extracted)
        return cache[key]

    return _get


@pytest.mark.parametrize("script_type", ["sh", "ps"])
@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_parity_bundled_vs_release_script(agent, script_type, scaffolded_sh, scaffolded_ps, release_script_trees):
    """scaffold_from_core_pack() file tree is identical to the ZIP produced by
    create-release-packages.sh for every agent and script type.

    This is the true end-to-end parity check: the Python offline path must
    produce exactly the same artifacts as the canonical shell release script.

    Both sides are session-cached: each agent/script_type combination is
    scaffolded and release-scripted only once across all tests.
    """
    script_tree = release_script_trees(agent, script_type)
    if script_tree is None:
        pytest.skip("bash required to run create-release-packages.sh")

    # Reuse session-cached scaffold output
    if script_type == "sh":
        bundled_dir = scaffolded_sh(agent)
    else:
        bundled_dir = scaffolded_ps(agent)

    bundled_tree = _collect_relative_files(bundled_dir)

    only_bundled = set(bundled_tree) - set(script_tree)
    only_script = set(script_tree) - set(bundled_tree)

    assert not only_bundled, (
        f"Agent '{agent}' ({script_type}): files only in bundled output (not in release ZIP):\n "
        + "\n ".join(sorted(only_bundled))
    )
    assert not only_script, (
        f"Agent '{agent}' ({script_type}): files only in release ZIP (not in bundled output):\n "
        + "\n ".join(sorted(only_script))
    )

    for name in bundled_tree:
        assert bundled_tree[name] == script_tree[name], (
            f"Agent '{agent}' ({script_type}): file '{name}' content differs between "
            f"bundled output and release script ZIP"
        )


# ---------------------------------------------------------------------------
# 9. pyproject.toml force-include covers all template files
# ---------------------------------------------------------------------------

def test_pyproject_force_include_covers_all_templates():
    """Every file in templates/ (excluding commands/) must be listed in
    pyproject.toml's [tool.hatch.build.targets.wheel.force-include] section.

    This prevents new template files from being silently omitted from the
    wheel, which would break ``specify init --offline``.
    """
    templates_dir = _REPO_ROOT / "templates"
    # Collect all files directly in templates/ (not in subdirectories like commands/)
    repo_template_files = sorted(
        f.name for f in templates_dir.iterdir()
        if f.is_file()
    )
    assert repo_template_files, "Expected at least one template file in templates/"

    pyproject_path = _REPO_ROOT / "pyproject.toml"
    with open(pyproject_path, "rb") as f:
        pyproject = tomllib.load(f)
    force_include = (
        pyproject.get("tool", {})
        .get("hatch", {})
        .get("build", {})
        .get("targets", {})
        .get("wheel", {})
        .get("force-include", {})
    )

    missing = [
        name for name in repo_template_files
        if f"templates/{name}" not in force_include
    ]
    assert not missing, (
        "Template files not listed in pyproject.toml force-include "
        "(offline scaffolding will miss them):\n "
        + "\n ".join(missing)
    )