mirror of
https://github.com/github/spec-kit.git
synced 2026-03-23 05:43:08 +00:00
feat(cli): embed core pack in wheel for offline/air-gapped deployment (#1803)
* feat(cli): embed core pack in wheel + offline-first init (#1711, #1752)

  Bundle templates, commands, and scripts inside the specify-cli wheel so that `specify init` works without any network access by default. Changes:
  - pyproject.toml: add hatchling force-include for core_pack assets; bump version to 0.2.1
  - __init__.py: add _locate_core_pack(), _generate_agent_commands() (Python port of generate_commands() shell function), and scaffold_from_core_pack(); modify init() to scaffold from bundled assets by default; add --from-github flag to opt back in to the GitHub download path
  - release.yml: build wheel during CI release job
  - create-github-release.sh: attach .whl as a release asset
  - docs/installation.md: add Enterprise/Air-Gapped Installation section
  - README.md: add Option 3 enterprise install with accurate offline story

  Closes #1711
  Addresses #1752

* fix(tests): update kiro alias test for offline-first scaffold path

* feat(cli): invoke bundled release script at runtime for offline scaffold
  - Embed release scripts (bash + PowerShell) in wheel via pyproject.toml
  - Replace Python _generate_agent_commands() with subprocess invocation of the canonical create-release-packages.sh, guaranteeing byte-for-byte parity between 'specify init --offline' and GitHub release ZIPs
  - Fix macOS bash 3.2 compat in release script: replace cp --parents, local -n (nameref), and mapfile with POSIX-safe alternatives
  - Fix _TOML_AGENTS: remove qwen (uses markdown per release script)
  - Rename --from-github to --offline (opt-in to bundled assets)
  - Add _locate_release_script() for cross-platform script discovery
  - Update tests: remove bash 4+/GNU coreutils requirements, handle Kimi directory-per-skill layout, 576 tests passing
  - Update CHANGELOG and docs/installation.md

* Potential fix for pull request finding

  Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

* fix(offline): error out if --offline fails instead of falling back to network
  - _locate_core_pack() docstring now accurately describes that it only finds wheel-bundled core_pack/; source-checkout fallback lives in callers
  - init() --offline + no bundled assets now exits with a clear error (previously printed a warning and silently fell back to GitHub download)
  - init() scaffold failure under --offline now exits with an error instead of retrying via download_and_extract_template

  Addresses reviewer comment: https://github.com/github/spec-kit/pull/1803

* fix(offline): address PR review comments
  - fix(shell): harden validate_subset against glob injection in case patterns
  - fix(shell): make GENRELEASES_DIR overridable via env var for test isolation
  - fix(cli): probe pwsh then powershell on Windows instead of hardcoding pwsh
  - fix(cli): remove unreachable fallback branch when --offline fails
  - fix(cli): improve --offline error message with common failure causes
  - fix(release): move wheel build step after create-release-packages.sh
  - fix(docs): add --offline to installation.md air-gapped example
  - fix(tests): remove unused genreleases_dir param from _run_release_script
  - fix(tests): rewrite parity test to run one agent at a time with isolated temp dirs, preventing cross-agent interference from rm -rf

* fix(offline): address second round of review comments
  - fix(shell): replace case-pattern membership with explicit loop + == check for unambiguous glob-safety in validate_subset()
  - fix(cli): require pwsh (PowerShell 7) only; drop powershell (PS5) fallback since the bundled script uses #requires -Version 7.0
  - fix(cli): add bash and zip preflight checks in scaffold_from_core_pack() with clear error messages if either is missing
  - fix(build): list individual template files in pyproject.toml force-include to avoid duplicating templates/commands/ in the wheel

* fix(offline): address third round of review comments
  - Add 120s timeout to subprocess.run in scaffold_from_core_pack to prevent indefinite hangs during offline scaffolding
  - Add test_pyproject_force_include_covers_all_templates to catch missing template files in wheel bundling
  - Tighten kiro alias test to assert specific scaffold path (download vs offline)

* fix(offline): address Copilot review round 4
  - fix(offline): use handle_vscode_settings() merge for --here --offline to prevent data loss on existing .vscode/settings.json
  - fix(release): glob wheel filename in create-github-release.sh instead of hardcoding version, preventing upload failures on version mismatch
  - docs(release): add comment noting pyproject.toml version is synced by release-trigger.yml before the tag is pushed

* fix(offline): address review round 5 + offline bundle ZIP
  - fix(offline): pwsh-only, no powershell.exe fallback; clarify error message
  - fix(offline): tighten _has_bundled to check scripts dir for source checkouts
  - feat(release): build specify-bundle-v*.zip with all deps at release time
  - feat(release): attach offline bundle ZIP to GitHub release assets
  - docs: simplify air-gapped install to single ZIP download from releases
  - docs: add Windows PowerShell 7+ (pwsh) requirement note

* fix(tests): session-scoped scaffold cache + timeout + dead code removal
  - Add timeout=300 and returncode check to _run_release_script() to fail fast with clear output on script hangs or failures
  - Remove unused import specify_cli, _SOURCE_TEMPLATES, bundled_project fixture
  - Add session-scoped scaffolded_sh/scaffolded_ps fixtures that scaffold once per agent and reuse the output directory across all invariant tests
  - Reduces test_core_pack_scaffold runtime from ~175s to ~51s (3.4x faster)
  - Parity tests still scaffold independently for isolation

* fix(offline): remove wheel from release, update air-gapped docs to use pip download

* fix(tests): handle codex skills layout and iflow agent in scaffold tests

  Codex now uses create_skills() with hyphenated separator (speckit-plan/SKILL.md) instead of generate_commands(). Update _SKILL_AGENTS, _expected_ext, and _list_command_files to handle both codex ('-') and kimi ('.') skill agents. Also picks up iflow as a new testable agent automatically via AGENT_CONFIG.

* fix(offline): require wheel core_pack for --offline, remove source-checkout fallback

  --offline now strictly requires _locate_core_pack() to find the wheel's bundled core_pack/ directory. Source-checkout fallbacks are no longer accepted at the init() level — if core_pack/ is missing, the CLI errors out with a clear message pointing to the installation docs. scaffold_from_core_pack() retains its internal source-checkout fallbacks so parity tests can call it directly from a source checkout.

* fix(offline): remove stale [Unreleased] CHANGELOG section, scope httpx.Client to download path
  - Remove entire [Unreleased] section — CHANGELOG is auto-generated at release
  - Move httpx.Client into use_github branch with context manager so --offline path doesn't allocate an unused network client

* fix(offline): remove dead --from-github flag, fix typer.Exit handling, add page templates validation
  - Remove unused --from-github CLI option and docstring example
  - Add (typer.Exit, SystemExit) re-raise before broad except Exception to prevent duplicate error panel on offline scaffold failure
  - Validate page templates directory exists in scaffold_from_core_pack() to fail fast on incomplete wheel installs
  - Fix ruff lint: remove unused shutil import, remove f-prefix on strings without placeholders in test_core_pack_scaffold.py

* docs(offline): add v0.6.0 deprecation notice with rationale
  - Help text: note bundled assets become default in v0.6.0
  - Docstring: explain why GitHub download is being retired (no network dependency, no proxy/firewall issues, guaranteed version match)
  - Runtime nudge: when bundled assets are available but user takes the GitHub download path, suggest --offline with rationale
  - docs/installation.md: add deprecation notice with full rationale

* fix(offline): allow --offline in source checkouts, fix CHANGELOG truncation
  - Simplify use_github logic: use_github = not offline (let scaffold_from_core_pack handle fallback to source-checkout paths)
  - Remove hard-fail when core_pack/ is absent — scaffold_from_core_pack already falls back to repo-root templates/scripts/commands
  - Fix truncated 'skill…' → 'skills' in CHANGELOG.md

* fix(offline): sandbox GENRELEASES_DIR and clean up on failure
  - Pin GENRELEASES_DIR to temp dir in scaffold_from_core_pack() so a user-exported value cannot redirect output or cause rm -rf outside the sandbox
  - Clean up partial project directory on --offline scaffold failure (same behavior as the GitHub-download failure path)

* fix(tests): use shutil.which for bash discovery, add ps parity tests
  - _find_bash() now tries shutil.which('bash') first so non-standard install locations (Nix, custom CI images) are found
  - Parametrize parity test over both 'sh' and 'ps' script types to ensure PowerShell variant stays byte-for-byte identical to release script output (353 scaffold tests, 810 total)

* fix(tests): parse pyproject.toml with tomllib, remove unused fixture
  - Use tomllib to parse force-include keys from the actual TOML table instead of raw substring search (avoids false positives)
  - Remove unused source_template_stems fixture from test_scaffold_command_dir_location

* fix: guard GENRELEASES_DIR against unsafe values, update docstring
  - Add safety check in create-release-packages.sh: reject empty, '/', '.', '..' values for GENRELEASES_DIR before rm -rf
  - Strip trailing slash to avoid path surprises
  - Update scaffold_from_core_pack() docstring to accurately describe all failure modes (not just 'assets not found')

* fix: harden GENRELEASES_DIR guard, cache parity tests, safe iterdir
  - Reject '..' path segments in GENRELEASES_DIR to prevent traversal
  - Session-cache both scaffold and release-script results in parity tests — runtime drops from ~74s to ~45s (40% faster)
  - Guard cmd_dir.iterdir() in assertion message against missing dirs

* fix(tests): exclude YAML frontmatter source metadata from path rewrite check

  The codex and kimi SKILL.md files have 'source: templates/commands/...' in their YAML frontmatter — this is provenance metadata, not a runtime path that needs rewriting. Strip frontmatter before checking for bare scripts/ and templates/ paths.

* fix(offline): surface scaffold failure detail in error output

  When --offline scaffold fails, look up the tracker's 'scaffold' step detail and print it alongside the generic error message so users see the specific root cause (e.g. missing zip/pwsh, script stderr).

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
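Several of the commits above add preflight tool checks before invoking the bundled release script, so a missing dependency fails fast with a clear message rather than a cryptic subprocess error. A minimal shell sketch of that pattern (the real checks live in `scaffold_from_core_pack()` in Python and cover `bash` and `zip`; the `preflight` function name here is illustrative only):

```shell
#!/usr/bin/env bash
# Sketch of the preflight pattern described in the commit message: fail fast
# with a clear error when a required tool is missing. The actual CLI also
# bounds the scaffold subprocess with a 120s timeout for the same reason.
preflight() {
  local tool
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || {
      echo "error: '$tool' is required for offline scaffolding but was not found" >&2
      return 1
    }
  done
}

preflight bash && echo "preflight ok"
preflight no-such-tool-xyz 2>/dev/null || echo "missing tool detected"
```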
This commit is contained in:
create-release-packages.sh

@@ -26,9 +26,27 @@ fi
 echo "Building release packages for $NEW_VERSION"

 # Create and use .genreleases directory for all build artifacts
-GENRELEASES_DIR=".genreleases"
+# Override via GENRELEASES_DIR env var (e.g. for tests writing to a temp dir)
+GENRELEASES_DIR="${GENRELEASES_DIR:-.genreleases}"
+
+# Guard against unsafe GENRELEASES_DIR values before cleaning
+if [[ -z "$GENRELEASES_DIR" ]]; then
+  echo "GENRELEASES_DIR must not be empty" >&2
+  exit 1
+fi
+case "$GENRELEASES_DIR" in
+  '/'|'.'|'..')
+    echo "Refusing to use unsafe GENRELEASES_DIR value: $GENRELEASES_DIR" >&2
+    exit 1
+    ;;
+esac
+if [[ "$GENRELEASES_DIR" == *".."* ]]; then
+  echo "Refusing to use GENRELEASES_DIR containing '..' path segments: $GENRELEASES_DIR" >&2
+  exit 1
+fi
 mkdir -p "$GENRELEASES_DIR"
-rm -rf "$GENRELEASES_DIR"/* || true
+rm -rf "${GENRELEASES_DIR%/}/"* || true

 rewrite_paths() {
   sed -E \
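The guard above can be exercised in isolation. A minimal sketch, assuming the checks are wrapped in a function for testing (the `check_dir` name is illustrative; the real script inlines these checks):

```shell
#!/usr/bin/env bash
# Standalone sketch of the GENRELEASES_DIR guard added in the hunk above.
check_dir() {
  local dir="$1"
  # Reject empty values
  [[ -z "$dir" ]] && { echo "empty" >&2; return 1; }
  # Reject the filesystem root and bare dot paths
  case "$dir" in
    '/'|'.'|'..') echo "unsafe: $dir" >&2; return 1 ;;
  esac
  # Reject any '..' segment to prevent traversal outside the sandbox
  [[ "$dir" == *".."* ]] && { echo "traversal: $dir" >&2; return 1; }
  return 0
}

check_dir ".genreleases" && echo "ok: .genreleases"
check_dir "/" 2>/dev/null || echo "rejected: /"
check_dir "a/../b" 2>/dev/null || echo "rejected: a/../b"
check_dir "" 2>/dev/null || echo "rejected empty"
```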
@@ -228,7 +246,7 @@ build_variant() {
     esac
   fi

-  [[ -d templates ]] && { mkdir -p "$SPEC_DIR/templates"; find templates -type f -not -path "templates/commands/*" -not -name "vscode-settings.json" -exec cp --parents {} "$SPEC_DIR"/ \; ; echo "Copied templates -> .specify/templates"; }
+  [[ -d templates ]] && { mkdir -p "$SPEC_DIR/templates"; find templates -type f -not -path "templates/commands/*" -not -name "vscode-settings.json" | while IFS= read -r f; do d="$SPEC_DIR/$(dirname "$f")"; mkdir -p "$d"; cp "$f" "$d/"; done; echo "Copied templates -> .specify/templates"; }

   case $agent in
     claude)
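The replacement above drops GNU-only `cp --parents` (absent on macOS bash 3.2 environments) in favor of recreating each file's parent directory before copying. A standalone sketch of the same technique with illustrative temp paths:

```shell
#!/usr/bin/env bash
# Sketch of the POSIX-safe replacement for `cp --parents` used above:
# for each found file, recreate its directory under $DEST, then copy.
set -euo pipefail

SRC_DEMO=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC_DEMO/templates/sub"
echo "hello" > "$SRC_DEMO/templates/sub/a.md"

cd "$SRC_DEMO"
find templates -type f | while IFS= read -r f; do
  d="$DEST/$(dirname "$f")"   # mirror the relative directory structure
  mkdir -p "$d"
  cp "$f" "$d/"
done

cat "$DEST/templates/sub/a.md"   # -> hello
```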
@@ -325,34 +343,35 @@ build_variant() {
 ALL_AGENTS=(claude gemini copilot cursor-agent qwen opencode windsurf junie codex kilocode auggie roo codebuddy amp shai tabnine kiro-cli agy bob vibe qodercli kimi trae pi iflow generic)
 ALL_SCRIPTS=(sh ps)

-norm_list() {
-  tr ',\n' ' ' | awk '{for(i=1;i<=NF;i++){if(!seen[$i]++){printf((out?"\n":"") $i);out=1}}}END{printf("\n")}'
-}
-
 validate_subset() {
-  local type=$1; shift; local -n allowed=$1; shift; local items=("$@")
+  local type=$1; shift
+  local allowed_str="$1"; shift
   local invalid=0
-  for it in "${items[@]}"; do
+  for it in "$@"; do
     local found=0
-    for a in "${allowed[@]}"; do [[ $it == "$a" ]] && { found=1; break; }; done
+    for a in $allowed_str; do
+      if [[ "$it" == "$a" ]]; then found=1; break; fi
+    done
     if [[ $found -eq 0 ]]; then
-      echo "Error: unknown $type '$it' (allowed: ${allowed[*]})" >&2
+      echo "Error: unknown $type '$it' (allowed: $allowed_str)" >&2
       invalid=1
     fi
   done
   return $invalid
 }

+read_list() { tr ',\n' ' ' | awk '{for(i=1;i<=NF;i++){if(!seen[$i]++){printf((out?" ":"") $i);out=1}}}END{printf("\n")}'; }
+
 if [[ -n ${AGENTS:-} ]]; then
-  mapfile -t AGENT_LIST < <(printf '%s' "$AGENTS" | norm_list)
-  validate_subset agent ALL_AGENTS "${AGENT_LIST[@]}" || exit 1
+  read -ra AGENT_LIST <<< "$(printf '%s' "$AGENTS" | read_list)"
+  validate_subset agent "${ALL_AGENTS[*]}" "${AGENT_LIST[@]}" || exit 1
 else
   AGENT_LIST=("${ALL_AGENTS[@]}")
 fi

 if [[ -n ${SCRIPTS:-} ]]; then
-  mapfile -t SCRIPT_LIST < <(printf '%s' "$SCRIPTS" | norm_list)
-  validate_subset script ALL_SCRIPTS "${SCRIPT_LIST[@]}" || exit 1
+  read -ra SCRIPT_LIST <<< "$(printf '%s' "$SCRIPTS" | read_list)"
+  validate_subset script "${ALL_SCRIPTS[*]}" "${SCRIPT_LIST[@]}" || exit 1
 else
   SCRIPT_LIST=("${ALL_SCRIPTS[@]}")
 fi
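The new helpers avoid bash 4+ features (`local -n` namerefs, `mapfile`) so the script runs on macOS's bash 3.2. They can be exercised standalone; this sketch reuses the definitions from the hunk above verbatim, with illustrative inputs:

```shell
#!/usr/bin/env bash
# Exercise the bash-3.2-safe helpers introduced above.

# Deduplicate a comma/newline-separated list into a space-separated one
read_list() { tr ',\n' ' ' | awk '{for(i=1;i<=NF;i++){if(!seen[$i]++){printf((out?" ":"") $i);out=1}}}END{printf("\n")}'; }

# Membership check via explicit loop + == (no namerefs, no glob patterns)
validate_subset() {
  local type=$1; shift
  local allowed_str="$1"; shift
  local invalid=0
  for it in "$@"; do
    local found=0
    for a in $allowed_str; do
      if [[ "$it" == "$a" ]]; then found=1; break; fi
    done
    if [[ $found -eq 0 ]]; then
      echo "Error: unknown $type '$it' (allowed: $allowed_str)" >&2
      invalid=1
    fi
  done
  return $invalid
}

ALL_SCRIPTS=(sh ps)
read -ra SCRIPT_LIST <<< "$(printf 'sh,ps,sh' | read_list)"
echo "${SCRIPT_LIST[@]}"                                            # -> sh ps
validate_subset script "${ALL_SCRIPTS[*]}" "${SCRIPT_LIST[@]}" && echo "valid"
validate_subset script "${ALL_SCRIPTS[*]}" bogus 2>/dev/null || echo "rejected bogus"
```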
CHANGELOG.md (65 lines changed)
@@ -4,6 +4,7 @@

 ### Changes

+- chore: bump version to 0.3.2
 - Add conduct extension to community catalog (#1908)
 - feat(extensions): add verify-tasks extension to community catalog (#1871)
 - feat(presets): add enable/disable toggle and update semantics (#1891)
@@ -20,11 +21,11 @@
 - Feature/spec kit add pi coding agent pullrequest (#1853)
 - feat: register spec-kit-learn extension (#1883)

 ## [0.3.1] - 2026-03-17

 ### Changed

+- chore: bump version to 0.3.1
 - docs: add greenfield Spring Boot pirate-speak preset demo to README (#1878)
 - fix(ai-skills): exclude non-speckit copilot agent markdown from skills (#1867)
 - feat: add Trae IDE support as a new agent (#1817)
@@ -40,52 +41,21 @@
 - feat(extensions): add Archive and Reconcile extensions to community catalog (#1844)
 - feat: Add DocGuard CDD enforcement extension to community catalog (#1838)

 ## [0.3.0] - 2026-03-13

 ### Changed

-- No changes have been documented for this release yet.
-<!-- Entries for 0.2.x and earlier releases are documented in their respective sections below. -->
-- make c ignores consistent with c++ (#1747)
-- chore: bump version to 0.1.13 (#1746)
-- feat: add kiro-cli and AGENT_CONFIG consistency coverage (#1690)
-- feat: add verify extension to community catalog (#1726)
-- Add Retrospective Extension to community catalog README table (#1741)
-- fix(scripts): add empty description validation and branch checkout error handling (#1559)
-- fix: correct Copilot extension command registration (#1724)
-- fix(implement): remove Makefile from C ignore patterns (#1558)
-- Add sync extension to community catalog (#1728)
-- fix(checklist): clarify file handling behavior for append vs create (#1556)
-- fix(clarify): correct conflicting question limit from 10 to 5 (#1557)
-- chore: bump version to 0.1.12 (#1737)
-- fix: use RELEASE_PAT so tag push triggers release workflow (#1736)
-- fix: release-trigger uses release branch + PR instead of direct push to main (#1733)
-- fix: Split release process to sync pyproject.toml version with git tags (#1732)
-
-## [Unreleased]
-
-### Added
-
-- feat(cli): polite deep merge for VSCode settings.json with JSONC support via `json5` and zero-data-loss fallbacks
-- feat(presets): Pluggable preset system with preset catalog and template resolver
-- Preset manifest (`preset.yml`) with validation for artifact, command, and script types
-- `PresetManifest`, `PresetRegistry`, `PresetManager`, `PresetCatalog`, `PresetResolver` classes in `src/specify_cli/presets.py`
-- CLI commands: `specify preset search`, `specify preset add`, `specify preset list`, `specify preset remove`, `specify preset resolve`, `specify preset info`
-- CLI commands: `specify preset catalog list`, `specify preset catalog add`, `specify preset catalog remove` for multi-catalog management
-- `PresetCatalogEntry` dataclass and multi-catalog support mirroring the extension catalog system
-- `--preset` option for `specify init` to install presets during initialization
-- Priority-based preset resolution: presets with lower priority number win (`--priority` flag)
-- `resolve_template()` / `Resolve-Template` helpers in bash and PowerShell common scripts
-- Template resolution priority stack: overrides → presets → extensions → core
-- Preset catalog files (`presets/catalog.json`, `presets/catalog.community.json`)
-- Preset scaffold directory (`presets/scaffold/`)
-- Scripts updated to use template resolution instead of hardcoded paths
-- feat(presets): Preset command overrides now propagate to agent skills when `--ai-skills` was used during init
-- feat: `specify init` persists CLI options to `.specify/init-options.json` for downstream operations
-- feat(extensions): support `.extensionignore` to exclude files/folders during `specify extension add` (#1781)
+- chore: bump version to 0.3.0
+- feat(presets): Pluggable preset system with catalog, resolver, and skills propagation (#1787)
+- fix: match 'Last updated' timestamp with or without bold markers (#1836)
+- Add specify doctor command for project health diagnostics (#1828)
+- fix: harden bash scripts against shell injection and improve robustness (#1809)
+- fix: clean up command templates (specify, analyze) (#1810)
+- fix: migrate Qwen Code CLI from TOML to Markdown format (#1589) (#1730)
+- fix(cli): deprecate explicit command support for agy (#1798) (#1808)
+- Add /selftest.extension core extension to test other extensions (#1758)
+- feat(extensions): Quality of life improvements for RFC-aligned catalog integration (#1776)
+- Add Java brownfield walkthrough to community walkthroughs (#1820)

 ## [0.2.1] - 2026-03-11
@@ -312,12 +282,3 @@

 - Add pytest and Python linting (ruff) to CI (#1637)
 - feat: add pull request template for better contribution guidelines (#1634)
-
-## [0.0.99] - 2026-02-19
-
-- Feat/ai skills (#1632)
-
-## [0.0.98] - 2026-02-19
-
-- chore(deps): bump actions/stale from 9 to 10 (#1623)
-- feat: add dependabot configuration for pip and GitHub Actions updates (#1622)
README.md (20 lines changed)
@@ -49,9 +49,13 @@ Choose your preferred installation method:

 #### Option 1: Persistent Installation (Recommended)

-Install once and use everywhere:
+Install once and use everywhere. Pin a specific release tag for stability (check [Releases](https://github.com/github/spec-kit/releases) for the latest):

 ```bash
+# Install a specific stable release (recommended — replace vX.Y.Z with the latest tag)
+uv tool install specify-cli --from git+https://github.com/github/spec-kit.git@vX.Y.Z
+
+# Or install latest from main (may include unreleased changes)
 uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
 ```
@@ -73,7 +77,7 @@ specify check
 To upgrade Specify, see the [Upgrade Guide](./docs/upgrade.md) for detailed instructions. Quick upgrade:

 ```bash
-uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git
+uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git@vX.Y.Z
 ```

 #### Option 2: One-time Usage
@@ -81,13 +85,13 @@ uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git

 Run directly without installing:

 ```bash
-# Create new project
-uvx --from git+https://github.com/github/spec-kit.git specify init <PROJECT_NAME>
+# Create new project (pinned to a stable release — replace vX.Y.Z with the latest tag)
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <PROJECT_NAME>

 # Or initialize in existing project
-uvx --from git+https://github.com/github/spec-kit.git specify init . --ai claude
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init . --ai claude
 # or
-uvx --from git+https://github.com/github/spec-kit.git specify init --here --ai claude
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init --here --ai claude
 ```

 **Benefits of persistent installation:**
@@ -97,6 +101,10 @@ uvx --from git+https://github.com/github/spec-kit.git specify init --here --ai claude
 - Better tool management with `uv tool list`, `uv tool upgrade`, `uv tool uninstall`
 - Cleaner shell configuration

+#### Option 3: Enterprise / Air-Gapped Installation
+
+If your environment blocks access to PyPI or GitHub, see the [Enterprise / Air-Gapped Installation](./docs/installation.md#enterprise--air-gapped-installation) guide for step-by-step instructions on using `pip download` to create portable, OS-specific wheel bundles on a connected machine.
+
 ### 2. Establish project principles

 Launch your AI assistant in the project directory. Most agents expose spec-kit as `/speckit.*` slash commands; Codex CLI in skills mode uses `$speckit-*` instead.
docs/installation.md

@@ -12,18 +12,22 @@

 ### Initialize a New Project

-The easiest way to get started is to initialize a new project:
+The easiest way to get started is to initialize a new project. Pin a specific release tag for stability (check [Releases](https://github.com/github/spec-kit/releases) for the latest):

 ```bash
+# Install from a specific stable release (recommended — replace vX.Y.Z with the latest tag)
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <PROJECT_NAME>
+
+# Or install latest from main (may include unreleased changes)
 uvx --from git+https://github.com/github/spec-kit.git specify init <PROJECT_NAME>
 ```

 Or initialize in the current directory:

 ```bash
-uvx --from git+https://github.com/github/spec-kit.git specify init .
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init .
 # or use the --here flag
-uvx --from git+https://github.com/github/spec-kit.git specify init --here
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init --here
 ```

 ### Specify AI Agent
@@ -31,11 +35,11 @@ uvx --from git+https://github.com/github/spec-kit.git specify init --here
 You can proactively specify your AI agent during initialization:

 ```bash
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --ai claude
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --ai gemini
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --ai copilot
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --ai codebuddy
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --ai pi
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --ai claude
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --ai gemini
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --ai copilot
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --ai codebuddy
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --ai pi
 ```

 ### Specify Script Type (Shell vs PowerShell)
@@ -51,8 +55,8 @@ Auto behavior:
 Force a specific script type:

 ```bash
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --script sh
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --script ps
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --script sh
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --script ps
 ```

 ### Ignore Agent Tools Check
@@ -60,7 +64,7 @@ uvx --from git+https://github.com/github/spec-kit.git specify init <project_name>
 If you prefer to get the templates without checking for the right tools:

 ```bash
-uvx --from git+https://github.com/github/spec-kit.git specify init <project_name> --ai claude --ignore-agent-tools
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <project_name> --ai claude --ignore-agent-tools
 ```

 ## Verification
@@ -75,6 +79,52 @@ The `.specify/scripts` directory will contain both `.sh` and `.ps1` scripts.
 
 ## Troubleshooting
 
+### Enterprise / Air-Gapped Installation
+
+If your environment blocks access to PyPI (you see 403 errors when running `uv tool install` or `pip install`), you can create a portable wheel bundle on a connected machine and transfer it to the air-gapped target.
+
+**Step 1: Build the wheel on a connected machine (same OS and Python version as the target)**
+
+```bash
+# Clone the repository
+git clone https://github.com/github/spec-kit.git
+cd spec-kit
+
+# Build the wheel
+pip install build
+python -m build --wheel --outdir dist/
+
+# Download the wheel and all its runtime dependencies
+pip download -d dist/ dist/specify_cli-*.whl
+```
+
+> **Important:** `pip download` resolves platform-specific wheels (e.g., PyYAML includes native extensions). You must run this step on a machine with the **same OS and Python version** as the air-gapped target. If you need to support multiple platforms, repeat this step on each target OS (Linux, macOS, Windows) and Python version.
+
+**Step 2: Transfer the `dist/` directory to the air-gapped machine**
+
+Copy the entire `dist/` directory (which contains the `specify-cli` wheel and all dependency wheels) to the target machine via USB, network share, or other approved transfer method.
+
+**Step 3: Install on the air-gapped machine**
+
+```bash
+pip install --no-index --find-links=./dist specify-cli
+```
+
+**Step 4: Initialize a project (no network required)**
+
+```bash
+# Initialize a project — no GitHub access needed
+specify init my-project --ai claude --offline
+```
+
+The `--offline` flag tells the CLI to use the templates, commands, and scripts bundled inside the wheel instead of downloading from GitHub.
+
+> **Deprecation notice:** Starting with v0.6.0, `specify init` will use bundled assets by default and the `--offline` flag will be removed. The GitHub download path will be retired because bundled assets eliminate the need for network access, avoid proxy/firewall issues, and guarantee that templates always match the installed CLI version. No action will be needed — `specify init` will simply work without network access out of the box.
+
+> **Note:** Python 3.11+ is required.
+
+> **Windows note:** Offline scaffolding requires PowerShell 7+ (`pwsh`), not Windows PowerShell 5.x (`powershell.exe`). Install from https://aka.ms/powershell.
+
 ### Git Credential Manager on Linux
 
 If you're having issues with Git authentication on Linux, you can install Git Credential Manager:
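Before transferring the bundle in Step 2, it can be worth confirming that the wheel actually embeds the bundled assets. A wheel is a plain ZIP archive, so member names can be inspected directly; this is a minimal sketch (the `wheel_has_core_pack` helper name is illustrative, not part of spec-kit):

```python
import zipfile
from pathlib import Path


def wheel_has_core_pack(wheel_path: Path) -> bool:
    """Return True if the wheel embeds bundled core_pack assets.

    Wheels are ZIP archives, so we only need to scan member names for the
    specify_cli/core_pack/ prefix that the force-include build step creates.
    """
    with zipfile.ZipFile(wheel_path) as wf:
        return any(name.startswith("specify_cli/core_pack/") for name in wf.namelist())
```

Running this against the freshly built wheel in `dist/` and getting `False` would indicate the force-include build step did not fire.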
@@ -8,7 +8,7 @@
 
 | What to Upgrade | Command | When to Use |
 |----------------|---------|-------------|
-| **CLI Tool Only** | `uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git` | Get latest CLI features without touching project files |
+| **CLI Tool Only** | `uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git@vX.Y.Z` | Get latest CLI features without touching project files |
 | **Project Files** | `specify init --here --force --ai <your-agent>` | Update slash commands, templates, and scripts in your project |
 | **Both** | Run CLI upgrade, then project update | Recommended for major version updates |
 
@@ -20,16 +20,18 @@ The CLI tool (`specify`) is separate from your project files. Upgrade it to get
 
 ### If you installed with `uv tool install`
 
+Upgrade to a specific release (check [Releases](https://github.com/github/spec-kit/releases) for the latest tag):
+
 ```bash
-uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git
+uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git@vX.Y.Z
 ```
 
 ### If you use one-shot `uvx` commands
 
-No upgrade needed—`uvx` always fetches the latest version. Just run your commands as normal:
+Specify the desired release tag:
 
 ```bash
-uvx --from git+https://github.com/github/spec-kit.git specify init --here --ai copilot
+uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init --here --ai copilot
 ```
 
 ### Verify the upgrade
@@ -27,6 +27,23 @@ build-backend = "hatchling.build"
 [tool.hatch.build.targets.wheel]
 packages = ["src/specify_cli"]
 
+[tool.hatch.build.targets.wheel.force-include]
+# Bundle core assets so `specify init` works without network access (air-gapped / enterprise)
+# Page templates (exclude commands/ — bundled separately below to avoid duplication)
+"templates/agent-file-template.md" = "specify_cli/core_pack/templates/agent-file-template.md"
+"templates/checklist-template.md" = "specify_cli/core_pack/templates/checklist-template.md"
+"templates/constitution-template.md" = "specify_cli/core_pack/templates/constitution-template.md"
+"templates/plan-template.md" = "specify_cli/core_pack/templates/plan-template.md"
+"templates/spec-template.md" = "specify_cli/core_pack/templates/spec-template.md"
+"templates/tasks-template.md" = "specify_cli/core_pack/templates/tasks-template.md"
+"templates/vscode-settings.json" = "specify_cli/core_pack/templates/vscode-settings.json"
+# Command templates
+"templates/commands" = "specify_cli/core_pack/commands"
+"scripts/bash" = "specify_cli/core_pack/scripts/bash"
+"scripts/powershell" = "specify_cli/core_pack/scripts/powershell"
+".github/workflows/scripts/create-release-packages.sh" = "specify_cli/core_pack/release_scripts/create-release-packages.sh"
+".github/workflows/scripts/create-release-packages.ps1" = "specify_cli/core_pack/release_scripts/create-release-packages.ps1"
+
 [project.optional-dependencies]
 test = [
     "pytest>=7.0",
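The force-include table maps repo-relative source paths to package-relative destinations, and every destination must land under `specify_cli/` so hatchling ships it in the wheel. A small sketch of that invariant, using a hypothetical mirror of a few entries from the table (illustrative data only, not parsed from the real pyproject.toml):

```python
# Hypothetical mirror of part of the force-include table, for illustration only.
FORCE_INCLUDE = {
    "templates/commands": "specify_cli/core_pack/commands",
    "scripts/bash": "specify_cli/core_pack/scripts/bash",
    "scripts/powershell": "specify_cli/core_pack/scripts/powershell",
    ".github/workflows/scripts/create-release-packages.sh":
        "specify_cli/core_pack/release_scripts/create-release-packages.sh",
}


def destinations_inside_core_pack(mapping: dict[str, str]) -> bool:
    """Every destination must live under specify_cli/core_pack/ so the asset
    ships inside the wheel and _locate_core_pack() can find it at runtime."""
    return all(dest.startswith("specify_cli/core_pack/") for dest in mapping.values())
```

An entry whose destination falls outside the package would silently be absent from the installed tree, which is exactly the failure mode the offline path guards against.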
@@ -315,6 +315,9 @@ AI_ASSISTANT_ALIASES = {
     "kiro": "kiro-cli",
 }
 
+# Agents that use TOML command format (others use Markdown)
+_TOML_AGENTS = frozenset({"gemini", "tabnine"})
+
 def _build_ai_assistant_help() -> str:
     """Build the --ai help text from AGENT_CONFIG so it stays in sync with runtime config."""
 
@@ -1095,6 +1098,241 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
     return project_path
 
 
+def _locate_core_pack() -> Path | None:
+    """Return the filesystem path to the bundled core_pack directory, or None.
+
+    Only present in wheel installs: hatchling's force-include copies
+    templates/, scripts/ etc. into specify_cli/core_pack/ at build time.
+
+    Source-checkout and editable installs do NOT have this directory.
+    Callers that need to work in both environments must check the repo-root
+    trees (templates/, scripts/) as a fallback when this returns None.
+    """
+    # Wheel install: core_pack is a sibling directory of this file
+    candidate = Path(__file__).parent / "core_pack"
+    if candidate.is_dir():
+        return candidate
+    return None
+
+
+def _locate_release_script() -> tuple[Path, str]:
+    """Return (script_path, shell_cmd) for the platform-appropriate release script.
+
+    Checks the bundled core_pack first, then falls back to the source checkout.
+    Returns the bash script on Unix and the PowerShell script on Windows.
+    Raises FileNotFoundError if neither can be found.
+    """
+    if os.name == "nt":
+        name = "create-release-packages.ps1"
+        shell = shutil.which("pwsh")
+        if not shell:
+            raise FileNotFoundError(
+                "'pwsh' (PowerShell 7+) not found on PATH. "
+                "The bundled release script requires PowerShell 7+ (pwsh), "
+                "not Windows PowerShell 5.x (powershell.exe). "
+                "Install from https://aka.ms/powershell to use offline scaffolding."
+            )
+    else:
+        name = "create-release-packages.sh"
+        shell = "bash"
+
+    # Wheel install: core_pack/release_scripts/
+    candidate = Path(__file__).parent / "core_pack" / "release_scripts" / name
+    if candidate.is_file():
+        return candidate, shell
+
+    # Source-checkout fallback
+    repo_root = Path(__file__).parent.parent.parent
+    candidate = repo_root / ".github" / "workflows" / "scripts" / name
+    if candidate.is_file():
+        return candidate, shell
+
+    raise FileNotFoundError(f"Release script '{name}' not found in core_pack or source checkout")
+
+
+def scaffold_from_core_pack(
+    project_path: Path,
+    ai_assistant: str,
+    script_type: str,
+    is_current_dir: bool = False,
+    *,
+    tracker: StepTracker | None = None,
+) -> bool:
+    """Scaffold a project from bundled core_pack assets — no network access required.
+
+    Invokes the bundled create-release-packages script (bash on Unix, PowerShell
+    on Windows) to generate the full project scaffold for a single agent. This
+    guarantees byte-for-byte parity between ``specify init`` and the GitHub
+    release ZIPs because both use the exact same script.
+
+    Returns True on success. Returns False if offline scaffolding failed for
+    any reason, including missing or unreadable assets, missing required tools
+    (bash, pwsh, zip), release-script failure or timeout, or unexpected runtime
+    exceptions. When ``--offline`` is active the caller should treat False as
+    a hard error rather than falling back to a network download.
+    """
+    # --- Locate asset sources ---
+    core = _locate_core_pack()
+
+    # Command templates
+    if core and (core / "commands").is_dir():
+        commands_dir = core / "commands"
+    else:
+        repo_root = Path(__file__).parent.parent.parent
+        commands_dir = repo_root / "templates" / "commands"
+        if not commands_dir.is_dir():
+            if tracker:
+                tracker.error("scaffold", "command templates not found")
+            return False
+
+    # Scripts directory (parent of bash/ and powershell/)
+    if core and (core / "scripts").is_dir():
+        scripts_dir = core / "scripts"
+    else:
+        repo_root = Path(__file__).parent.parent.parent
+        scripts_dir = repo_root / "scripts"
+        if not scripts_dir.is_dir():
+            if tracker:
+                tracker.error("scaffold", "scripts directory not found")
+            return False
+
+    # Page templates (spec-template.md, plan-template.md, vscode-settings.json, etc.)
+    if core and (core / "templates").is_dir():
+        templates_dir = core / "templates"
+    else:
+        repo_root = Path(__file__).parent.parent.parent
+        templates_dir = repo_root / "templates"
+        if not templates_dir.is_dir():
+            if tracker:
+                tracker.error("scaffold", "page templates not found")
+            return False
+
+    # Release script
+    try:
+        release_script, shell_cmd = _locate_release_script()
+    except FileNotFoundError as exc:
+        if tracker:
+            tracker.error("scaffold", str(exc))
+        return False
+
+    # Preflight: verify required external tools are available
+    if os.name != "nt":
+        if not shutil.which("bash"):
+            msg = "'bash' not found on PATH. Required for offline scaffolding."
+            if tracker:
+                tracker.error("scaffold", msg)
+            return False
+        if not shutil.which("zip"):
+            msg = "'zip' not found on PATH. Required for offline scaffolding. Install with: apt install zip / brew install zip"
+            if tracker:
+                tracker.error("scaffold", msg)
+            return False
+
+    if tracker:
+        tracker.start("scaffold", "applying bundled assets")
+
+    try:
+        if not is_current_dir:
+            project_path.mkdir(parents=True, exist_ok=True)
+
+        with tempfile.TemporaryDirectory() as tmpdir:
+            tmp = Path(tmpdir)
+
+            # Set up a repo-like directory layout in the temp dir so the
+            # release script finds templates/commands/, scripts/, etc.
+            tmpl_cmds = tmp / "templates" / "commands"
+            tmpl_cmds.mkdir(parents=True)
+            for f in commands_dir.iterdir():
+                if f.is_file():
+                    shutil.copy2(f, tmpl_cmds / f.name)
+
+            # Page templates (needed for vscode-settings.json etc.)
+            if templates_dir.is_dir():
+                tmpl_root = tmp / "templates"
+                for f in templates_dir.iterdir():
+                    if f.is_file():
+                        shutil.copy2(f, tmpl_root / f.name)
+
+            # Scripts (bash/ and powershell/)
+            for subdir in ("bash", "powershell"):
+                src = scripts_dir / subdir
+                if src.is_dir():
+                    dst = tmp / "scripts" / subdir
+                    dst.mkdir(parents=True, exist_ok=True)
+                    for f in src.iterdir():
+                        if f.is_file():
+                            shutil.copy2(f, dst / f.name)
+
+            # Run the release script for this single agent + script type
+            env = os.environ.copy()
+            # Pin GENRELEASES_DIR inside the temp dir so a user-exported
+            # value cannot redirect output or cause rm -rf outside the sandbox.
+            env["GENRELEASES_DIR"] = str(tmp / ".genreleases")
+            if os.name == "nt":
+                cmd = [
+                    shell_cmd, "-File", str(release_script),
+                    "-Version", "v0.0.0",
+                    "-Agents", ai_assistant,
+                    "-Scripts", script_type,
+                ]
+            else:
+                cmd = [shell_cmd, str(release_script), "v0.0.0"]
+                env["AGENTS"] = ai_assistant
+                env["SCRIPTS"] = script_type
+
+            try:
+                result = subprocess.run(
+                    cmd, cwd=str(tmp), env=env,
+                    capture_output=True, text=True,
+                    timeout=120,
+                )
+            except subprocess.TimeoutExpired:
+                msg = "release script timed out after 120 seconds"
+                if tracker:
+                    tracker.error("scaffold", msg)
+                else:
+                    console.print(f"[red]Error:[/red] {msg}")
+                return False
+
+            if result.returncode != 0:
+                msg = result.stderr.strip() or result.stdout.strip() or "unknown error"
+                if tracker:
+                    tracker.error("scaffold", f"release script failed: {msg}")
+                else:
+                    console.print(f"[red]Release script failed:[/red] {msg}")
+                return False
+
+            # Copy the generated files to the project directory
+            build_dir = tmp / ".genreleases" / f"sdd-{ai_assistant}-package-{script_type}"
+            if not build_dir.is_dir():
+                if tracker:
+                    tracker.error("scaffold", "release script produced no output")
+                return False
+
+            for item in build_dir.rglob("*"):
+                if item.is_file():
+                    rel = item.relative_to(build_dir)
+                    dest = project_path / rel
+                    dest.parent.mkdir(parents=True, exist_ok=True)
+                    # When scaffolding into an existing directory (--here),
+                    # use the same merge semantics as the GitHub-download path.
+                    if is_current_dir and dest.name == "settings.json" and dest.parent.name == ".vscode":
+                        handle_vscode_settings(item, dest, rel, verbose=False, tracker=tracker)
+                    else:
+                        shutil.copy2(item, dest)
+
+        if tracker:
+            tracker.complete("scaffold", "bundled assets applied")
+        return True
+
+    except Exception as e:
+        if tracker:
+            tracker.error("scaffold", str(e))
+        else:
+            console.print(f"[red]Error scaffolding from bundled assets:[/red] {e}")
+        return False
+
+
 def ensure_executable_scripts(project_path: Path, tracker: StepTracker | None = None) -> None:
     """Ensure POSIX .sh scripts under .specify/scripts (recursively) have execute bits (no-op on Windows)."""
     if os.name == "nt":
@@ -1487,20 +1725,31 @@ def init(
     debug: bool = typer.Option(False, "--debug", help="Show verbose diagnostic output for network and extraction failures"),
     github_token: str = typer.Option(None, "--github-token", help="GitHub token to use for API requests (or set GH_TOKEN or GITHUB_TOKEN environment variable)"),
     ai_skills: bool = typer.Option(False, "--ai-skills", help="Install Prompt.MD templates as agent skills (requires --ai)"),
+    offline: bool = typer.Option(False, "--offline", help="Use assets bundled in the specify-cli package instead of downloading from GitHub (no network access required). Bundled assets will become the default in v0.6.0 and this flag will be removed."),
     preset: str = typer.Option(None, "--preset", help="Install a preset during initialization (by preset ID)"),
     branch_numbering: str = typer.Option(None, "--branch-numbering", help="Branch numbering strategy: 'sequential' (001, 002, ...) or 'timestamp' (YYYYMMDD-HHMMSS)"),
 ):
     """
-    Initialize a new Specify project from the latest template.
+    Initialize a new Specify project.
+
+    By default, project files are downloaded from the latest GitHub release.
+    Use --offline to scaffold from assets bundled inside the specify-cli
+    package instead (no internet access required, ideal for air-gapped or
+    enterprise environments).
+
+    NOTE: Starting with v0.6.0, bundled assets will be used by default and
+    the --offline flag will be removed. The GitHub download path will be
+    retired because bundled assets eliminate the need for network access,
+    avoid proxy/firewall issues, and guarantee that templates always match
+    the installed CLI version.
 
     This command will:
     1. Check that required tools are installed (git is optional)
     2. Let you choose your AI assistant
-    3. Download the appropriate template from GitHub
-    4. Extract the template to a new project directory or current directory
-    5. Initialize a fresh git repository (if not --no-git and no existing repo)
-    6. Optionally set up AI assistant commands
+    3. Download template from GitHub (or use bundled assets with --offline)
+    4. Initialize a fresh git repository (if not --no-git and no existing repo)
+    5. Optionally set up AI assistant commands
 
     Examples:
         specify init my-project
         specify init my-project --ai claude
@@ -1517,6 +1766,7 @@ def init(
         specify init my-project --ai claude --ai-skills  # Install agent skills
         specify init --here --ai gemini --ai-skills
         specify init my-project --ai generic --ai-commands-dir .myagent/commands/  # Unsupported agent
+        specify init my-project --offline  # Use bundled assets (no network access)
         specify init my-project --ai claude --preset healthcare-compliance  # With preset
     """
 
@@ -1689,12 +1939,37 @@ def init(
     tracker.complete("ai-select", f"{selected_ai}")
     tracker.add("script-select", "Select script type")
     tracker.complete("script-select", selected_script)
 
+    # Determine whether to use bundled assets or download from GitHub (default).
+    # --offline opts in to bundled assets; without it, always use GitHub.
+    # When --offline is set, scaffold_from_core_pack() will try the wheel's
+    # core_pack/ first, then fall back to source-checkout paths. If neither
+    # location has the required assets it returns False and we error out.
+    _core = _locate_core_pack()
+
+    use_github = not offline
+
+    if use_github and _core is not None:
+        console.print(
+            "[yellow]Note:[/yellow] Bundled assets are available in this install. "
+            "Use [bold]--offline[/bold] to skip the GitHub download — faster, "
+            "no network required, and guaranteed version match.\n"
+            "This will become the default in v0.6.0."
+        )
+
+    if use_github:
+        for key, label in [
+            ("fetch", "Fetch latest release"),
+            ("download", "Download template"),
+            ("extract", "Extract template"),
+            ("zip-list", "Archive contents"),
+            ("extracted-summary", "Extraction summary"),
+        ]:
+            tracker.add(key, label)
+    else:
+        tracker.add("scaffold", "Apply bundled assets")
+
     for key, label in [
-        ("fetch", "Fetch latest release"),
-        ("download", "Download template"),
-        ("extract", "Extract template"),
-        ("zip-list", "Archive contents"),
-        ("extracted-summary", "Extraction summary"),
         ("chmod", "Ensure scripts executable"),
         ("constitution", "Constitution setup"),
     ]:
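One detail from the scaffold path worth isolating is the environment pinning: the release script honors `GENRELEASES_DIR`, so the CLI copies the parent environment and overwrites that variable with a path inside the temp sandbox before the subprocess runs. A minimal sketch of the pattern (the `pinned_env` helper name is illustrative):

```python
import os
from pathlib import Path


def pinned_env(sandbox: Path) -> dict[str, str]:
    """Copy the parent environment but pin GENRELEASES_DIR inside *sandbox*,
    so a value exported by the user cannot redirect the release script's
    output (or its cleanup step) outside the temp directory."""
    env = os.environ.copy()
    env["GENRELEASES_DIR"] = str(sandbox / ".genreleases")
    return env
```

Because the copy is made with `os.environ.copy()`, the caller's process environment is left untouched; only the child subprocess sees the pinned value.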
@@ -1716,9 +1991,28 @@ def init(
     try:
         verify = not skip_tls
         local_ssl_context = ssl_context if verify else False
-        local_client = httpx.Client(verify=local_ssl_context)
-        download_and_extract_template(project_path, selected_ai, selected_script, here, verbose=False, tracker=tracker, client=local_client, debug=debug, github_token=github_token)
+        if use_github:
+            with httpx.Client(verify=local_ssl_context) as local_client:
+                download_and_extract_template(project_path, selected_ai, selected_script, here, verbose=False, tracker=tracker, client=local_client, debug=debug, github_token=github_token)
+        else:
+            scaffold_ok = scaffold_from_core_pack(project_path, selected_ai, selected_script, here, tracker=tracker)
+            if not scaffold_ok:
+                # --offline explicitly requested: never attempt a network download
+                console.print(
+                    "\n[red]Error:[/red] --offline was specified but scaffolding from bundled assets failed.\n"
+                    "Common causes: missing bash/pwsh, script permission errors, or incomplete wheel.\n"
+                    "Remove --offline to attempt a GitHub download instead."
+                )
+                # Surface the specific failure reason from the tracker
+                for step in tracker.steps:
+                    if step["key"] == "scaffold" and step["detail"]:
+                        console.print(f"[red]Detail:[/red] {step['detail']}")
+                        break
+                # Clean up partial project directory (same as the GitHub-download failure path)
+                if not here and project_path.exists():
+                    shutil.rmtree(project_path)
+                raise typer.Exit(1)
 
     # For generic agent, rename placeholder directory to user-specified path
     if selected_ai == "generic" and ai_commands_dir:
@@ -1799,6 +2093,7 @@ def init(
         "branch_numbering": branch_numbering or "sequential",
         "here": here,
         "preset": preset,
+        "offline": offline,
         "script": selected_script,
         "speckit_version": get_speckit_version(),
     })
@@ -1834,7 +2129,13 @@ def init(
             except Exception as preset_err:
                 console.print(f"[yellow]Warning:[/yellow] Failed to install preset: {preset_err}")
 
+        # Scaffold path has no zip archive to clean up
+        if not use_github:
+            tracker.skip("cleanup", "not needed (no download)")
+
         tracker.complete("final", "project ready")
+    except (typer.Exit, SystemExit):
+        raise
     except Exception as e:
         tracker.error("final", str(e))
         console.print(Panel(f"Initialization failed: {e}", title="Failure", border_style="red"))
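The added `except (typer.Exit, SystemExit): raise` clause matters because the broad `except Exception` below it would otherwise swallow the deliberate exit raised by the offline failure path and report it as an initialization error. The pattern can be sketched generically; this is an illustrative helper (the real code catches `(typer.Exit, SystemExit)` since `typer.Exit` is not a `SystemExit` subclass):

```python
def guarded(step):
    """Run *step*, letting deliberate control-flow exits propagate while
    converting genuine failures into an error string for reporting."""
    try:
        return step()
    except SystemExit:
        raise  # deliberate exit (e.g. a CLI's exit exception), not a failure
    except Exception as exc:
        return f"error: {exc}"
```

Without the re-raise, a user-facing `exit(1)` would be printed as a generic "Initialization failed" panel instead of terminating cleanly with the intended exit code.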
@@ -1050,10 +1050,12 @@ class TestCliValidation:
         target = tmp_path / "kiro-alias-proj"
 
         with patch("specify_cli.download_and_extract_template") as mock_download, \
+             patch("specify_cli.scaffold_from_core_pack", create=True) as mock_scaffold, \
             patch("specify_cli.ensure_executable_scripts"), \
             patch("specify_cli.ensure_constitution_from_template"), \
             patch("specify_cli.is_git_repo", return_value=False), \
             patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
+            mock_scaffold.return_value = True
             result = runner.invoke(
                 app,
                 [
@@ -1069,9 +1071,14 @@ class TestCliValidation:
         )
 
         assert result.exit_code == 0
-        assert mock_download.called
-        # download_and_extract_template(project_path, ai_assistant, script_type, ...)
+        # Without --offline, the download path should be taken.
+        assert mock_download.called, (
+            "Expected download_and_extract_template to be called (default non-offline path)"
+        )
         assert mock_download.call_args.args[1] == "kiro-cli"
+        assert not mock_scaffold.called, (
+            "scaffold_from_core_pack should not be called without --offline"
+        )
 
     def test_q_removed_from_agent_config(self):
         """Amazon Q legacy key should not remain in AGENT_CONFIG."""
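The parity invariant described in the new test file — every scaffolded file must be byte-for-byte identical to its counterpart in the release ZIP — reduces to comparing ZIP member bytes against files on disk. A self-contained sketch of that comparison (the `parity_mismatches` helper name is illustrative; the real checks live in `tests/test_core_pack_scaffold.py`):

```python
import zipfile
from pathlib import Path


def parity_mismatches(scaffold_dir: Path, zip_path: Path) -> list[str]:
    """Return release-ZIP members whose bytes differ from, or are missing in,
    the scaffolded directory. An empty list means byte-for-byte parity."""
    mismatched = []
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            target = scaffold_dir / info.filename
            if not target.is_file() or target.read_bytes() != zf.read(info):
                mismatched.append(info.filename)
    return mismatched
```

Comparing raw bytes (rather than parsed content) is what makes this a true parity check: even a line-ending or whitespace divergence between the two code paths would surface.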
613
tests/test_core_pack_scaffold.py
Normal file
613
tests/test_core_pack_scaffold.py
Normal file
@@ -0,0 +1,613 @@
|
|||||||
|
"""
|
||||||
|
Validation tests for offline/air-gapped scaffolding (PR #1803).
|
||||||
|
|
||||||
|
For every supported AI agent (except "generic") the scaffold output is verified
|
||||||
|
against invariants and compared byte-for-byte with the canonical output produced
|
||||||
|
by create-release-packages.sh.
|
||||||
|
|
||||||
|
Since scaffold_from_core_pack() now invokes the release script at runtime, the
|
||||||
|
parity test (section 9) runs the script independently and compares the results
|
||||||
|
to ensure the integration is correct.
|
||||||
|
|
||||||
|
Per-agent invariants verified
|
||||||
|
──────────────────────────────
|
||||||
|
• Command files are written to the directory declared in AGENT_CONFIG
|
||||||
|
• File count matches the number of source templates
|
||||||
|
• Extension is correct: .toml (TOML agents), .agent.md (copilot), .md (rest)
|
||||||
|
• No unresolved placeholders remain ({SCRIPT}, {ARGS}, __AGENT__)
|
||||||
|
• Argument token is correct: {{args}} for TOML agents, $ARGUMENTS for others
|
||||||
|
• Path rewrites applied: scripts/ → .specify/scripts/ etc.
|
||||||
|
• TOML files have "description" and "prompt" fields
|
||||||
|
• Markdown files have parseable YAML frontmatter
|
||||||
|
• Copilot: companion speckit.*.prompt.md files are generated in prompts/
|
||||||
|
• .specify/scripts/ contains at least one script file
|
||||||
|
• .specify/templates/ contains at least one template file
|
||||||
|
|
||||||
|
Parity invariant
|
||||||
|
────────────────
|
||||||
|
Every file produced by scaffold_from_core_pack() must be byte-for-byte
|
||||||
|
identical to the same file in the ZIP produced by the release script.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import re
|
||||||
|
import shutil
|
||||||
|
import subprocess
|
||||||
|
import tomllib
|
||||||
|
import zipfile
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
import yaml
|
||||||
|
|
||||||
|
from specify_cli import (
|
||||||
|
AGENT_CONFIG,
|
||||||
|
_TOML_AGENTS,
|
||||||
|
_locate_core_pack,
|
||||||
|
scaffold_from_core_pack,
|
||||||
|
)
|
||||||
|
|
||||||
|
_REPO_ROOT = Path(__file__).parent.parent
|
||||||
|
_RELEASE_SCRIPT = _REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh"
|
||||||
|
|
||||||
|
|
||||||
|
def _find_bash() -> str | None:
    """Return the path to a usable bash on this machine, or None."""
    # Prefer a PATH lookup so non-standard install locations (Nix, CI) are found.
    on_path = shutil.which("bash")
    if on_path:
        return on_path
    candidates = [
        "/opt/homebrew/bin/bash",
        "/usr/local/bin/bash",
        "/bin/bash",
        "/usr/bin/bash",
    ]
    for candidate in candidates:
        try:
            result = subprocess.run(
                [candidate, "--version"],
                capture_output=True, text=True, timeout=5,
            )
            if result.returncode == 0:
                return candidate
        except (FileNotFoundError, subprocess.TimeoutExpired):
            continue
    return None


def _run_release_script(agent: str, script_type: str, bash: str, output_dir: Path) -> Path:
    """Run create-release-packages.sh for *agent*/*script_type* and return the
    path to the generated ZIP. *output_dir* receives the build artifacts so
    the repo working tree stays clean."""
    env = os.environ.copy()
    env["AGENTS"] = agent
    env["SCRIPTS"] = script_type
    env["GENRELEASES_DIR"] = str(output_dir)

    result = subprocess.run(
        [bash, str(_RELEASE_SCRIPT), "v0.0.0"],
        capture_output=True, text=True,
        cwd=str(_REPO_ROOT),
        env=env,
        timeout=300,
    )

    if result.returncode != 0:
        pytest.fail(
            f"Release script failed with exit code {result.returncode}\n"
            f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"
        )

    zip_name = f"spec-kit-template-{agent}-{script_type}-v0.0.0.zip"
    zip_path = output_dir / zip_name
    if not zip_path.exists():
        pytest.fail(
            f"Release script did not produce expected ZIP: {zip_path}\n"
            f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"
        )
    return zip_path

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def _commands_dir() -> Path:
    """Return the command templates directory (core_pack, else the source checkout)."""
    core = _locate_core_pack()
    if core and (core / "commands").is_dir():
        return core / "commands"
    # Source-checkout fallback
    repo_root = Path(__file__).parent.parent
    return repo_root / "templates" / "commands"


def _get_source_template_stems() -> list[str]:
    """Return the stems of the source command template files (e.g. ['specify', 'plan', ...])."""
    return sorted(p.stem for p in _commands_dir().glob("*.md"))


def _expected_cmd_dir(project_path: Path, agent: str) -> Path:
    """Return the expected command-files directory for a given agent."""
    cfg = AGENT_CONFIG[agent]
    folder = (cfg.get("folder") or "").rstrip("/")
    subdir = cfg.get("commands_subdir", "commands")
    if folder:
        return project_path / folder / subdir
    return project_path / ".speckit" / subdir


# Agents whose commands are laid out as <skills_dir>/<name>/SKILL.md.
# Maps agent -> separator used in skill directory names.
_SKILL_AGENTS: dict[str, str] = {"codex": "-", "kimi": "."}


def _expected_ext(agent: str) -> str:
    if agent in _TOML_AGENTS:
        return "toml"
    if agent == "copilot":
        return "agent.md"
    if agent in _SKILL_AGENTS:
        return "SKILL.md"
    return "md"


def _list_command_files(cmd_dir: Path, agent: str) -> list[Path]:
    """List generated command files, handling skills-based directory layouts."""
    if agent in _SKILL_AGENTS:
        sep = _SKILL_AGENTS[agent]
        return sorted(cmd_dir.glob(f"speckit{sep}*/SKILL.md"))
    ext = _expected_ext(agent)
    return sorted(cmd_dir.glob(f"speckit.*.{ext}"))


def _collect_relative_files(root: Path) -> dict[str, bytes]:
    """Walk *root* and return {relative_posix_path: file_bytes}."""
    result: dict[str, bytes] = {}
    for p in root.rglob("*"):
        if p.is_file():
            result[p.relative_to(root).as_posix()] = p.read_bytes()
    return result


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

@pytest.fixture(scope="session")
def source_template_stems() -> list[str]:
    return _get_source_template_stems()


@pytest.fixture(scope="session")
def scaffolded_sh(tmp_path_factory):
    """Session-scoped cache: scaffold once per agent with script_type='sh'."""
    cache = {}

    def _get(agent: str) -> Path:
        if agent not in cache:
            project = tmp_path_factory.mktemp(f"scaffold_sh_{agent}")
            ok = scaffold_from_core_pack(project, agent, "sh")
            assert ok, f"scaffold_from_core_pack returned False for agent '{agent}'"
            cache[agent] = project
        return cache[agent]

    return _get


@pytest.fixture(scope="session")
def scaffolded_ps(tmp_path_factory):
    """Session-scoped cache: scaffold once per agent with script_type='ps'."""
    cache = {}

    def _get(agent: str) -> Path:
        if agent not in cache:
            project = tmp_path_factory.mktemp(f"scaffold_ps_{agent}")
            ok = scaffold_from_core_pack(project, agent, "ps")
            assert ok, f"scaffold_from_core_pack returned False for agent '{agent}'"
            cache[agent] = project
        return cache[agent]

    return _get


# ---------------------------------------------------------------------------
# Parametrize over all agents except "generic"
# ---------------------------------------------------------------------------

_TESTABLE_AGENTS = [a for a in AGENT_CONFIG if a != "generic"]


# ---------------------------------------------------------------------------
# 1. Bundled scaffold — directory structure
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_creates_specify_scripts(agent, scaffolded_sh):
    """scaffold_from_core_pack copies at least one script into .specify/scripts/."""
    project = scaffolded_sh(agent)

    scripts_dir = project / ".specify" / "scripts" / "bash"
    assert scripts_dir.is_dir(), f".specify/scripts/bash/ missing for agent '{agent}'"
    assert any(scripts_dir.iterdir()), f".specify/scripts/bash/ is empty for agent '{agent}'"


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_creates_specify_templates(agent, scaffolded_sh):
    """scaffold_from_core_pack copies at least one template into .specify/templates/."""
    project = scaffolded_sh(agent)

    tpl_dir = project / ".specify" / "templates"
    assert tpl_dir.is_dir(), f".specify/templates/ missing for agent '{agent}'"
    assert any(tpl_dir.iterdir()), ".specify/templates/ is empty"


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_command_dir_location(agent, scaffolded_sh):
    """Command files land in the directory declared by AGENT_CONFIG."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    assert cmd_dir.is_dir(), (
        f"Command dir '{cmd_dir.relative_to(project)}' not created for agent '{agent}'"
    )


# ---------------------------------------------------------------------------
# 2. Bundled scaffold — file count
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_command_file_count(agent, scaffolded_sh, source_template_stems):
    """One command file is generated per source template for every agent."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    generated = _list_command_files(cmd_dir, agent)

    if cmd_dir.is_dir():
        dir_listing = list(cmd_dir.iterdir())
    else:
        dir_listing = f"<command dir missing: {cmd_dir}>"

    assert len(generated) == len(source_template_stems), (
        f"Agent '{agent}': expected {len(source_template_stems)} command files "
        f"({_expected_ext(agent)}), found {len(generated)}. Dir: {dir_listing}"
    )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_command_file_names(agent, scaffolded_sh, source_template_stems):
    """Each source template stem maps to a corresponding speckit.<stem>.<ext> file."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for stem in source_template_stems:
        if agent in _SKILL_AGENTS:
            sep = _SKILL_AGENTS[agent]
            expected = cmd_dir / f"speckit{sep}{stem}" / "SKILL.md"
        else:
            ext = _expected_ext(agent)
            expected = cmd_dir / f"speckit.{stem}.{ext}"
        assert expected.is_file(), (
            f"Agent '{agent}': expected file '{expected.name}' not found in '{cmd_dir}'"
        )


# ---------------------------------------------------------------------------
# 3. Bundled scaffold — content invariants
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_no_unresolved_script_placeholder(agent, scaffolded_sh):
    """{SCRIPT} must not appear in any generated command file."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if f.is_file():
            content = f.read_text(encoding="utf-8")
            assert "{SCRIPT}" not in content, (
                f"Unresolved {{SCRIPT}} in '{f.relative_to(project)}' for agent '{agent}'"
            )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_no_unresolved_agent_placeholder(agent, scaffolded_sh):
    """__AGENT__ must not appear in any generated command file."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if f.is_file():
            content = f.read_text(encoding="utf-8")
            assert "__AGENT__" not in content, (
                f"Unresolved __AGENT__ in '{f.relative_to(project)}' for agent '{agent}'"
            )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_no_unresolved_args_placeholder(agent, scaffolded_sh):
    """{ARGS} must not appear in any generated command file (it is replaced with an agent-specific token)."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if f.is_file():
            content = f.read_text(encoding="utf-8")
            assert "{ARGS}" not in content, (
                f"Unresolved {{ARGS}} in '{f.relative_to(project)}' for agent '{agent}'"
            )


# Build the set of template stems that actually contain {ARGS} in their source.
_TEMPLATES_WITH_ARGS: frozenset[str] = frozenset(
    p.stem
    for p in _commands_dir().glob("*.md")
    if "{ARGS}" in p.read_text(encoding="utf-8")
)


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_argument_token_format(agent, scaffolded_sh):
    """For templates that carry an {ARGS} token:

    - TOML agents must emit {{args}}
    - Markdown agents must emit $ARGUMENTS

    Templates without {ARGS} (e.g. implement, plan) are skipped.
    """
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)

    for f in _list_command_files(cmd_dir, agent):
        # Recover the stem from the file path
        if agent in _SKILL_AGENTS:
            sep = _SKILL_AGENTS[agent]
            stem = f.parent.name.removeprefix(f"speckit{sep}")
        else:
            ext = _expected_ext(agent)
            stem = f.name.removeprefix("speckit.").removesuffix(f".{ext}")
        if stem not in _TEMPLATES_WITH_ARGS:
            continue  # this template has no argument token

        content = f.read_text(encoding="utf-8")
        if agent in _TOML_AGENTS:
            assert "{{args}}" in content, (
                f"TOML agent '{agent}': expected '{{{{args}}}}' in '{f.name}'"
            )
        else:
            assert "$ARGUMENTS" in content, (
                f"Markdown agent '{agent}': expected '$ARGUMENTS' in '{f.name}'"
            )


@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_path_rewrites_applied(agent, scaffolded_sh):
    """Bare scripts/ and templates/ paths must be rewritten to .specify/ variants.

    YAML frontmatter 'source:' metadata fields are excluded — they reference
    the original template path for provenance, not a runtime path.
    """
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.rglob("*"):
        if not f.is_file():
            continue
        content = f.read_text(encoding="utf-8")

        # Strip YAML frontmatter before checking — 'source:' metadata is not a runtime path
        body = content
        if content.startswith("---"):
            parts = content.split("---", 2)
            if len(parts) >= 3:
                body = parts[2]

        # Should not contain bare (non-.specify/) script paths
        assert not re.search(r'(?<!\.specify/)scripts/', body), (
            f"Bare scripts/ path found in '{f.relative_to(project)}' for agent '{agent}'"
        )
        assert not re.search(r'(?<!\.specify/)templates/', body), (
            f"Bare templates/ path found in '{f.relative_to(project)}' for agent '{agent}'"
        )
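

# Illustrative sketch (not part of the original suite): the negative lookbehind
# used above flags a bare "scripts/" path but skips one already rewritten to
# ".specify/scripts/". The string literals below are hypothetical command-file
# bodies, chosen only to demonstrate the regex.
def test_path_rewrite_regex_sketch():
    assert re.search(r'(?<!\.specify/)scripts/', "Run scripts/check.sh")
    assert re.search(r'(?<!\.specify/)scripts/', ".specify/scripts/a.sh") is None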


# ---------------------------------------------------------------------------
# 4. TOML format checks
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", sorted(_TOML_AGENTS))
def test_toml_format_valid(agent, scaffolded_sh):
    """TOML agents: every command file must have description and prompt fields."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in cmd_dir.glob("speckit.*.toml"):
        content = f.read_text(encoding="utf-8")
        assert 'description = "' in content, (
            f"Missing 'description' in '{f.name}' for agent '{agent}'"
        )
        assert 'prompt = """' in content, (
            f"Missing 'prompt' block in '{f.name}' for agent '{agent}'"
        )


# ---------------------------------------------------------------------------
# 5. Markdown frontmatter checks
# ---------------------------------------------------------------------------

_MARKDOWN_AGENTS = [a for a in _TESTABLE_AGENTS if a not in _TOML_AGENTS]


@pytest.mark.parametrize("agent", _MARKDOWN_AGENTS)
def test_markdown_has_frontmatter(agent, scaffolded_sh):
    """Markdown agents: every command file must start with valid YAML frontmatter."""
    project = scaffolded_sh(agent)

    cmd_dir = _expected_cmd_dir(project, agent)
    for f in _list_command_files(cmd_dir, agent):
        content = f.read_text(encoding="utf-8")
        assert content.startswith("---"), (
            f"No YAML frontmatter in '{f.name}' for agent '{agent}'"
        )
        parts = content.split("---", 2)
        assert len(parts) >= 3, f"Incomplete frontmatter in '{f.name}'"
        fm = yaml.safe_load(parts[1])
        assert fm is not None, f"Empty frontmatter in '{f.name}'"
        assert "description" in fm, (
            f"'description' key missing from frontmatter in '{f.name}' for agent '{agent}'"
        )
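

# Illustrative sketch (not part of the original suite): splitting on "---" at
# most twice yields ["", "<yaml>", "<body>"] for well-formed frontmatter.
# The document literal below is hypothetical.
def test_frontmatter_split_sketch():
    doc = "---\ndescription: demo\n---\nBody text.\n"
    parts = doc.split("---", 2)
    assert len(parts) == 3
    assert "description: demo" in parts[1]
    assert parts[2] == "\nBody text.\n"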


# ---------------------------------------------------------------------------
# 6. Copilot-specific: companion .prompt.md files
# ---------------------------------------------------------------------------

def test_copilot_companion_prompt_files(scaffolded_sh, source_template_stems):
    """Copilot: a speckit.<stem>.prompt.md companion is created for every .agent.md file."""
    project = scaffolded_sh("copilot")

    prompts_dir = project / ".github" / "prompts"
    assert prompts_dir.is_dir(), ".github/prompts/ not created for copilot"

    for stem in source_template_stems:
        prompt_file = prompts_dir / f"speckit.{stem}.prompt.md"
        assert prompt_file.is_file(), (
            f"Companion prompt file '{prompt_file.name}' missing for copilot"
        )


def test_copilot_prompt_file_content(scaffolded_sh, source_template_stems):
    """Copilot companion .prompt.md files must reference their parent .agent.md."""
    project = scaffolded_sh("copilot")

    prompts_dir = project / ".github" / "prompts"
    for stem in source_template_stems:
        f = prompts_dir / f"speckit.{stem}.prompt.md"
        content = f.read_text(encoding="utf-8")
        assert f"agent: speckit.{stem}" in content, (
            f"Companion '{f.name}' does not reference 'speckit.{stem}'"
        )


# ---------------------------------------------------------------------------
# 7. PowerShell script variant
# ---------------------------------------------------------------------------

@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_scaffold_powershell_variant(agent, scaffolded_ps, source_template_stems):
    """scaffold_from_core_pack with script_type='ps' creates the correct files."""
    project = scaffolded_ps(agent)

    scripts_dir = project / ".specify" / "scripts" / "powershell"
    assert scripts_dir.is_dir(), f".specify/scripts/powershell/ missing for '{agent}'"
    assert any(scripts_dir.iterdir()), ".specify/scripts/powershell/ is empty"

    cmd_dir = _expected_cmd_dir(project, agent)
    generated = _list_command_files(cmd_dir, agent)
    assert len(generated) == len(source_template_stems)


# ---------------------------------------------------------------------------
# 8. Parity: bundled scaffold vs. real create-release-packages.sh ZIP
# ---------------------------------------------------------------------------

@pytest.fixture(scope="session")
def release_script_trees(tmp_path_factory):
    """Session-scoped cache: run the release script once per (agent, script_type)."""
    cache: dict[tuple[str, str], dict[str, bytes]] = {}
    bash = _find_bash()

    def _get(agent: str, script_type: str) -> dict[str, bytes] | None:
        if bash is None:
            return None
        key = (agent, script_type)
        if key not in cache:
            tmp = tmp_path_factory.mktemp(f"release_{agent}_{script_type}")
            gen_dir = tmp / "genreleases"
            gen_dir.mkdir()
            zip_path = _run_release_script(agent, script_type, bash, gen_dir)
            extracted = tmp / "extracted"
            extracted.mkdir()
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(extracted)
            cache[key] = _collect_relative_files(extracted)
        return cache[key]

    return _get


@pytest.mark.parametrize("script_type", ["sh", "ps"])
@pytest.mark.parametrize("agent", _TESTABLE_AGENTS)
def test_parity_bundled_vs_release_script(agent, script_type, scaffolded_sh, scaffolded_ps, release_script_trees):
    """scaffold_from_core_pack()'s file tree is identical to the ZIP produced by
    create-release-packages.sh for every agent and script type.

    This is the true end-to-end parity check: the Python offline path must
    produce exactly the same artifacts as the canonical shell release script.

    Both sides are session-cached: each agent/script_type combination is
    scaffolded and release-scripted only once across all tests.
    """
    script_tree = release_script_trees(agent, script_type)
    if script_tree is None:
        pytest.skip("bash is required to run create-release-packages.sh")

    # Reuse the session-cached scaffold output
    if script_type == "sh":
        bundled_dir = scaffolded_sh(agent)
    else:
        bundled_dir = scaffolded_ps(agent)

    bundled_tree = _collect_relative_files(bundled_dir)

    only_bundled = set(bundled_tree) - set(script_tree)
    only_script = set(script_tree) - set(bundled_tree)

    assert not only_bundled, (
        f"Agent '{agent}' ({script_type}): files only in bundled output (not in release ZIP):\n "
        + "\n ".join(sorted(only_bundled))
    )
    assert not only_script, (
        f"Agent '{agent}' ({script_type}): files only in release ZIP (not in bundled output):\n "
        + "\n ".join(sorted(only_script))
    )

    for name in bundled_tree:
        assert bundled_tree[name] == script_tree[name], (
            f"Agent '{agent}' ({script_type}): file '{name}' content differs between "
            f"bundled output and release script ZIP"
        )


# ---------------------------------------------------------------------------
# 10. pyproject.toml force-include covers all template files
# ---------------------------------------------------------------------------

def test_pyproject_force_include_covers_all_templates():
    """Every file in templates/ (excluding commands/) must be listed in
    pyproject.toml's [tool.hatch.build.targets.wheel.force-include] section.

    This prevents new template files from being silently omitted from the
    wheel, which would break ``specify init --offline``.
    """
    templates_dir = _REPO_ROOT / "templates"
    # Collect the files directly in templates/ (not in subdirectories like commands/)
    repo_template_files = sorted(
        f.name for f in templates_dir.iterdir()
        if f.is_file()
    )
    assert repo_template_files, "Expected at least one template file in templates/"

    pyproject_path = _REPO_ROOT / "pyproject.toml"
    with open(pyproject_path, "rb") as f:
        pyproject = tomllib.load(f)
    force_include = (
        pyproject.get("tool", {})
        .get("hatch", {})
        .get("build", {})
        .get("targets", {})
        .get("wheel", {})
        .get("force-include", {})
    )

    missing = [
        name for name in repo_template_files
        if f"templates/{name}" not in force_include
    ]
    assert not missing, (
        "Template files not listed in pyproject.toml force-include "
        "(offline scaffolding will miss them):\n "
        + "\n ".join(missing)
    )