Mirror of https://github.com/github/spec-kit.git
Synced 2026-03-25 06:43:09 +00:00

Compare commits: copilot/ag... → v0.4.1 (8 commits)

765b125a0f
a01180955d
b1ba972978
24247c24c9
dc7f09a711
b72a5850fe
a351c826ee
6223d10d84

CHANGELOG.md
@@ -1,11 +1,33 @@
# Changelog

## [0.3.2] - 2026-03-19
## [0.4.0] - 2026-03-23

### Changes

- chore: bump version to 0.3.2
- Add conduct extension to community catalog (#1908)
- fix(cli): add allow_unicode=True and encoding="utf-8" to YAML I/O (#1936)
- fix(codex): native skills fallback refresh + legacy prompt suppression (#1930)

## [0.4.1] - 2026-03-24

### Changes

- Add checkpoint extension (#1947)
- fix(scripts): prioritize .specify over git for repo root detection (#1933)
- docs: add AIDE extension demo to community projects (#1943)
- fix(templates): add missing Assumptions section to spec template (#1939)
- chore: bump version to 0.4.0 (#1937)

- feat(cli): embed core pack in wheel for offline/air-gapped deployment (#1803)
- ci: increase stale workflow operations-per-run to 250 (#1922)
- docs: update publishing guide with Category and Effect columns (#1913)
- fix: Align native skills frontmatter with install_ai_skills (#1920)
- feat: add timestamp-based branch naming option for `specify init` (#1911)
- docs: add Extension Comparison Guide for community extensions (#1897)
- docs: update SUPPORT.md, fix issue templates, add preset submission template (#1910)
- Add support for Junie (#1831)
- feat: migrate Codex/agy init to native skills workflow (#1906)
- chore: bump version to 0.3.2 (#1909)

- feat(extensions): add verify-tasks extension to community catalog (#1871)
- feat(presets): add enable/disable toggle and update semantics (#1891)
- feat: add iFlow CLI support (#1875)

@@ -21,6 +43,13 @@
- Feature/spec kit add pi coding agent pullrequest (#1853)
- feat: register spec-kit-learn extension (#1883)

## [0.3.2] - 2026-03-19

### Changes

- chore: bump version to 0.3.2
- Add conduct extension to community catalog (#1908)

## [0.3.1] - 2026-03-17

### Changed
@@ -171,6 +171,8 @@ See Spec-Driven Development in action across different scenarios with these comm

- **[Greenfield Spring Boot MVC with a custom preset](https://github.com/mnriem/spec-kit-pirate-speak-preset-demo)** — Builds a Spring Boot MVC application from scratch using a custom pirate-speak preset, demonstrating how presets can reshape the entire spec-kit experience: specifications become "Voyage Manifests," plans become "Battle Plans," and tasks become "Crew Assignments" — all generated in full pirate vernacular without changing any tooling.

- **[Greenfield Spring Boot + React with a custom extension](https://github.com/mnriem/spec-kit-aide-extension-demo)** — Walks through the **AIDE extension**, a community extension that adds an alternative spec-driven workflow to spec-kit with high-level specs (vision) and low-level specs (work items) organized in a 7-step iterative lifecycle: vision → roadmap → progress tracking → work queue → work items → execution → feedback loops. Uses a family trading platform (Spring Boot 4, React 19, PostgreSQL, Docker Compose) as the scenario to illustrate how the extension mechanism lets you plug in a different style of spec-driven development without changing any core tooling — truly utilizing the "Kit" in Spec Kit.

## 🤖 Supported AI Agents

| Agent | Support | Notes |

@@ -78,6 +78,7 @@ The following community-contributed extensions are available in [`catalog.commun
|-----------|---------|----------|--------|-----|
| Archive Extension | Archive merged features into main project memory. | `docs` | Read+Write | [spec-kit-archive](https://github.com/stn1slv/spec-kit-archive) |
| Azure DevOps Integration | Sync user stories and tasks to Azure DevOps work items using OAuth authentication | `integration` | Read+Write | [spec-kit-azure-devops](https://github.com/pragya247/spec-kit-azure-devops) |
| Checkpoint Extension | Commit the changes made during the middle of the implementation, so you don't end up with just one very large commit at the end | `code` | Read+Write | [spec-kit-checkpoint](https://github.com/aaronrsun/spec-kit-checkpoint) |
| Cleanup Extension | Post-implementation quality gate that reviews changes, fixes small issues (scout rule), creates tasks for medium issues, and generates analysis for large issues | `code` | Read+Write | [spec-kit-cleanup](https://github.com/dsrednicki/spec-kit-cleanup) |
| Cognitive Squad | Multi-agent cognitive system with Triadic Model: understanding, internalization, application — with quality gates, backpropagation verification, and self-healing | `docs` | Read+Write | [cognitive-squad](https://github.com/Testimonial/cognitive-squad) |
| Conduct Extension | Orchestrates spec-kit phases via sub-agent delegation to reduce context pollution. | `process` | Read+Write | [spec-kit-conduct-ext](https://github.com/twbrandon7/spec-kit-conduct-ext) |
@@ -73,6 +73,35 @@
    "created_at": "2026-03-03T00:00:00Z",
    "updated_at": "2026-03-03T00:00:00Z"
  },
  "checkpoint": {
    "name": "Checkpoint Extension",
    "id": "checkpoint",
    "description": "An extension to commit the changes made during the middle of the implementation, so you don't end up with just one very large commit at the end.",
    "author": "aaronrsun",
    "version": "1.0.0",
    "download_url": "https://github.com/aaronrsun/spec-kit-checkpoint/archive/refs/tags/v1.0.0.zip",
    "repository": "https://github.com/aaronrsun/spec-kit-checkpoint",
    "homepage": "https://github.com/aaronrsun/spec-kit-checkpoint",
    "documentation": "https://github.com/aaronrsun/spec-kit-checkpoint/blob/main/README.md",
    "changelog": "https://github.com/aaronrsun/spec-kit-checkpoint/blob/main/CHANGELOG.md",
    "license": "MIT",
    "requires": {
      "speckit_version": ">=0.1.0"
    },
    "provides": {
      "commands": 1,
      "hooks": 0
    },
    "tags": [
      "checkpoint",
      "commit"
    ],
    "verified": false,
    "downloads": 0,
    "stars": 0,
    "created_at": "2026-03-22T00:00:00Z",
    "updated_at": "2026-03-22T00:00:00Z"
  },
  "cleanup": {
    "name": "Cleanup Extension",
    "id": "cleanup",
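An entry like `checkpoint` above can be sanity-checked for the keys a catalog consumer needs. Note that the `REQUIRED_KEYS` set and the `missing_keys` helper below are inferred from this one entry for illustration; they are not an official schema or part of the CLI.

```python
import json

# Required keys inferred from the "checkpoint" entry above; not an official schema.
REQUIRED_KEYS = {"name", "id", "description", "version", "download_url", "repository", "license"}

def missing_keys(entry: dict) -> set:
    """Return required catalog-entry keys absent from `entry`."""
    return REQUIRED_KEYS - entry.keys()

# A trimmed-down copy of the entry shown in the diff above.
entry = json.loads('''{
  "name": "Checkpoint Extension",
  "id": "checkpoint",
  "description": "Commit mid-implementation changes.",
  "version": "1.0.0",
  "download_url": "https://github.com/aaronrsun/spec-kit-checkpoint/archive/refs/tags/v1.0.0.zip",
  "repository": "https://github.com/aaronrsun/spec-kit-checkpoint",
  "license": "MIT"
}''')
```

A well-formed entry yields an empty set; anything else names the gaps.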
@@ -1,6 +1,6 @@
[project]
name = "specify-cli"
version = "0.3.2"
version = "0.4.1"
description = "Specify CLI, part of GitHub Spec Kit. A tool to bootstrap your projects for Spec-Driven Development (SDD)."
requires-python = ">=3.11"
dependencies = [
@@ -1,15 +1,48 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts

# Get repository root, with fallback for non-git repositories
# Find repository root by searching upward for .specify directory
# This is the primary marker for spec-kit projects
find_specify_root() {
    local dir="${1:-$(pwd)}"
    # Normalize to absolute path to prevent infinite loop with relative paths
    # Use -- to handle paths starting with - (e.g., -P, -L)
    dir="$(cd -- "$dir" 2>/dev/null && pwd)" || return 1
    local prev_dir=""
    while true; do
        if [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        # Stop if we've reached filesystem root or dirname stops changing
        if [ "$dir" = "/" ] || [ "$dir" = "$prev_dir" ]; then
            break
        fi
        prev_dir="$dir"
        dir="$(dirname "$dir")"
    done
    return 1
}

# Get repository root, prioritizing .specify directory over git
# This prevents using a parent git repo when spec-kit is initialized in a subdirectory
get_repo_root() {
    # First, look for .specify directory (spec-kit's own marker)
    local specify_root
    if specify_root=$(find_specify_root); then
        echo "$specify_root"
        return
    fi

    # Fallback to git if no .specify found
    if git rev-parse --show-toplevel >/dev/null 2>&1; then
        git rev-parse --show-toplevel
    else
        # Fall back to script location for non-git repos
        local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
        (cd "$script_dir/../../.." && pwd)
        return
    fi

    # Final fallback to script location for non-git repos
    local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    (cd "$script_dir/../../.." && pwd)
}

# Get current branch, with fallback for non-git repositories
@@ -20,14 +53,14 @@ get_current_branch() {
        return
    fi

    # Then check git if available
    if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
        git rev-parse --abbrev-ref HEAD
    # Then check git if available at the spec-kit root (not parent)
    local repo_root=$(get_repo_root)
    if has_git; then
        git -C "$repo_root" rev-parse --abbrev-ref HEAD
        return
    fi

    # For non-git repos, try to find the latest feature directory
    local repo_root=$(get_repo_root)
    local specs_dir="$repo_root/specs"

    if [[ -d "$specs_dir" ]]; then
@@ -68,9 +101,17 @@ get_current_branch() {
    echo "main" # Final fallback
}

# Check if we have git available
# Check if we have git available at the spec-kit root level
# Returns true only if git is installed and the repo root is inside a git work tree
# Handles both regular repos (.git directory) and worktrees/submodules (.git file)
has_git() {
    git rev-parse --show-toplevel >/dev/null 2>&1
    # First check if git command is available (before calling get_repo_root which may use git)
    command -v git >/dev/null 2>&1 || return 1
    local repo_root=$(get_repo_root)
    # Check if .git exists (directory or file for worktrees/submodules)
    [ -e "$repo_root/.git" ] || return 1
    # Verify it's actually a valid git work tree
    git -C "$repo_root" rev-parse --is-inside-work-tree >/dev/null 2>&1
}

check_feature_branch() {
@@ -80,19 +80,6 @@ if [ -z "$FEATURE_DESCRIPTION" ]; then
    exit 1
fi

# Function to find the repository root by searching for existing project markers
find_repo_root() {
    local dir="$1"
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"
    done
    return 1
}

# Function to get highest number from specs directory
get_highest_from_specs() {
    local specs_dir="$1"

@@ -171,21 +158,16 @@ clean_branch_name() {
    echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
# Resolve repository root using common.sh functions which prioritize .specify over git
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

if git rev-parse --show-toplevel >/dev/null 2>&1; then
    REPO_ROOT=$(git rev-parse --show-toplevel)
REPO_ROOT=$(get_repo_root)

# Check if git is available at this repo root (not a parent)
if has_git; then
    HAS_GIT=true
else
    REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
    if [ -z "$REPO_ROOT" ]; then
        echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
        exit 1
    fi
    HAS_GIT=false
fi
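For reference, the `tr`/`sed` pipeline in `clean_branch_name` (lowercase, map non-alphanumerics to `-`, squeeze repeated dashes, trim the ends) behaves like the Python sketch below. The function name mirrors the shell helper and is illustrative only.

```python
import re

def clean_branch_name(name: str) -> str:
    """Python mirror of the shell pipeline: lowercase, dash-ify, squeeze, trim."""
    name = name.lower()                      # tr '[:upper:]' '[:lower:]'
    name = re.sub(r"[^a-z0-9]", "-", name)   # sed 's/[^a-z0-9]/-/g'
    name = re.sub(r"-+", "-", name)          # sed 's/-\+/-/g'
    return name.strip("-")                   # sed 's/^-//' and 's/-$//'
```

After squeezing, at most one dash can remain at either end, so `strip("-")` matches the two single-substitution `sed` trims.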
@@ -1,7 +1,38 @@
#!/usr/bin/env pwsh
# Common PowerShell functions analogous to common.sh

# Find repository root by searching upward for .specify directory
# This is the primary marker for spec-kit projects
function Find-SpecifyRoot {
    param([string]$StartDir = (Get-Location).Path)

    # Normalize to absolute path to prevent issues with relative paths
    # Use -LiteralPath to handle paths with wildcard characters ([, ], *, ?)
    $current = (Resolve-Path -LiteralPath $StartDir -ErrorAction SilentlyContinue)?.Path
    if (-not $current) { return $null }

    while ($true) {
        if (Test-Path -LiteralPath (Join-Path $current ".specify") -PathType Container) {
            return $current
        }
        $parent = Split-Path $current -Parent
        if ([string]::IsNullOrEmpty($parent) -or $parent -eq $current) {
            return $null
        }
        $current = $parent
    }
}

# Get repository root, prioritizing .specify directory over git
# This prevents using a parent git repo when spec-kit is initialized in a subdirectory
function Get-RepoRoot {
    # First, look for .specify directory (spec-kit's own marker)
    $specifyRoot = Find-SpecifyRoot
    if ($specifyRoot) {
        return $specifyRoot
    }

    # Fallback to git if no .specify found
    try {
        $result = git rev-parse --show-toplevel 2>$null
        if ($LASTEXITCODE -eq 0) {
@@ -10,9 +41,10 @@ function Get-RepoRoot {
    } catch {
        # Git command failed
    }

    # Fall back to script location for non-git repos
    return (Resolve-Path (Join-Path $PSScriptRoot "../../..")).Path

    # Final fallback to script location for non-git repos
    # Use -LiteralPath to handle paths with wildcard characters
    return (Resolve-Path -LiteralPath (Join-Path $PSScriptRoot "../../..")).Path
}

function Get-CurrentBranch {
@@ -20,19 +52,21 @@ function Get-CurrentBranch {
    if ($env:SPECIFY_FEATURE) {
        return $env:SPECIFY_FEATURE
    }

    # Then check git if available
    try {
        $result = git rev-parse --abbrev-ref HEAD 2>$null
        if ($LASTEXITCODE -eq 0) {
            return $result
        }
    } catch {
        # Git command failed
    }

    # For non-git repos, try to find the latest feature directory

    # Then check git if available at the spec-kit root (not parent)
    $repoRoot = Get-RepoRoot
    if (Test-HasGit) {
        try {
            $result = git -C $repoRoot rev-parse --abbrev-ref HEAD 2>$null
            if ($LASTEXITCODE -eq 0) {
                return $result
            }
        } catch {
            # Git command failed
        }
    }

    # For non-git repos, try to find the latest feature directory
    $specsDir = Join-Path $repoRoot "specs"

    if (Test-Path $specsDir) {
@@ -69,9 +103,23 @@ function Get-CurrentBranch {
    return "main"
}

# Check if we have git available at the spec-kit root level
# Returns true only if git is installed and the repo root is inside a git work tree
# Handles both regular repos (.git directory) and worktrees/submodules (.git file)
function Test-HasGit {
    # First check if git command is available (before calling Get-RepoRoot which may use git)
    if (-not (Get-Command git -ErrorAction SilentlyContinue)) {
        return $false
    }
    $repoRoot = Get-RepoRoot
    # Check if .git exists (directory or file for worktrees/submodules)
    # Use -LiteralPath to handle paths with wildcard characters
    if (-not (Test-Path -LiteralPath (Join-Path $repoRoot ".git"))) {
        return $false
    }
    # Verify it's actually a valid git work tree
    try {
        git rev-parse --show-toplevel 2>$null | Out-Null
        $null = git -C $repoRoot rev-parse --is-inside-work-tree 2>$null
        return ($LASTEXITCODE -eq 0)
    } catch {
        return $false
@@ -45,30 +45,6 @@ if ([string]::IsNullOrWhiteSpace($featureDesc)) {
    exit 1
}

# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialized with --no-git.
function Find-RepositoryRoot {
    param(
        [string]$StartDir,
        [string[]]$Markers = @('.git', '.specify')
    )
    $current = Resolve-Path $StartDir
    while ($true) {
        foreach ($marker in $Markers) {
            if (Test-Path (Join-Path $current $marker)) {
                return $current
            }
        }
        $parent = Split-Path $current -Parent
        if ($parent -eq $current) {
            # Reached filesystem root without finding markers
            return $null
        }
        $current = $parent
    }
}

function Get-HighestNumberFromSpecs {
    param([string]$SpecsDir)

@@ -139,26 +115,14 @@ function ConvertTo-CleanBranchName {

    return $Name.ToLower() -replace '[^a-z0-9]', '-' -replace '-{2,}', '-' -replace '^-', '' -replace '-$', ''
}
$fallbackRoot = (Find-RepositoryRoot -StartDir $PSScriptRoot)
if (-not $fallbackRoot) {
    Write-Error "Error: Could not determine repository root. Please run this script from within the repository."
    exit 1
}

# Load common functions (includes Resolve-Template)
# Load common functions (includes Get-RepoRoot, Test-HasGit, Resolve-Template)
. "$PSScriptRoot/common.ps1"

try {
    $repoRoot = git rev-parse --show-toplevel 2>$null
    if ($LASTEXITCODE -eq 0) {
        $hasGit = $true
    } else {
        throw "Git not available"
    }
} catch {
    $repoRoot = $fallbackRoot
    $hasGit = $false
}
# Use common.ps1 functions which prioritize .specify over git
$repoRoot = Get-RepoRoot

# Check if git is available at this repo root (not a parent)
$hasGit = Test-HasGit

Set-Location $repoRoot
@@ -948,9 +948,26 @@ def download_template_from_github(ai_assistant: str, download_dir: Path, *, scri
    }
    return zip_path, metadata

def download_and_extract_template(project_path: Path, ai_assistant: str, script_type: str, is_current_dir: bool = False, *, verbose: bool = True, tracker: StepTracker | None = None, client: httpx.Client = None, debug: bool = False, github_token: str = None) -> Path:
def download_and_extract_template(
    project_path: Path,
    ai_assistant: str,
    script_type: str,
    is_current_dir: bool = False,
    *,
    skip_legacy_codex_prompts: bool = False,
    verbose: bool = True,
    tracker: StepTracker | None = None,
    client: httpx.Client = None,
    debug: bool = False,
    github_token: str = None,
) -> Path:
    """Download the latest release and extract it to create a new project.
    Returns project_path. Uses tracker if provided (with keys: fetch, download, extract, cleanup)

    Note:
        ``skip_legacy_codex_prompts`` suppresses the legacy top-level
        ``.codex`` directory from older template archives in Codex skills mode.
        The name is kept for backward compatibility with existing callers.
    """
    current_dir = Path.cwd()
@@ -990,6 +1007,19 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
            project_path.mkdir(parents=True)

        with zipfile.ZipFile(zip_path, 'r') as zip_ref:
            def _validate_zip_members_within(root: Path) -> None:
                """Validate all ZIP members stay within ``root`` (Zip Slip guard)."""
                root_resolved = root.resolve()
                for member in zip_ref.namelist():
                    member_path = (root / member).resolve()
                    try:
                        member_path.relative_to(root_resolved)
                    except ValueError:
                        raise RuntimeError(
                            f"Unsafe path in ZIP archive: {member} "
                            "(potential path traversal)"
                        )

            zip_contents = zip_ref.namelist()
            if tracker:
                tracker.start("zip-list")
@@ -1000,6 +1030,7 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
            if is_current_dir:
                with tempfile.TemporaryDirectory() as temp_dir:
                    temp_path = Path(temp_dir)
                    _validate_zip_members_within(temp_path)
                    zip_ref.extractall(temp_path)

                    extracted_items = list(temp_path.iterdir())
@@ -1019,6 +1050,11 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
                        console.print("[cyan]Found nested directory structure[/cyan]")

                    for item in source_dir.iterdir():
                        # In Codex skills mode, do not materialize the legacy
                        # top-level .codex directory from older prompt-based
                        # template archives.
                        if skip_legacy_codex_prompts and ai_assistant == "codex" and item.name == ".codex":
                            continue
                        dest_path = project_path / item.name
                        if item.is_dir():
                            if dest_path.exists():
@@ -1043,6 +1079,7 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
                if verbose and not tracker:
                    console.print("[cyan]Template files merged into current directory[/cyan]")
            else:
                _validate_zip_members_within(project_path)
                zip_ref.extractall(project_path)

                extracted_items = list(project_path.iterdir())
@@ -1069,6 +1106,13 @@ def download_and_extract_template(project_path: Path, ai_assistant: str, script_
                elif verbose:
                    console.print("[cyan]Flattened nested directory structure[/cyan]")

                # For fresh-directory Codex skills init, suppress legacy
                # top-level .codex layout extracted from older archives.
                if skip_legacy_codex_prompts and ai_assistant == "codex":
                    legacy_codex_dir = project_path / ".codex"
                    if legacy_codex_dir.is_dir():
                        shutil.rmtree(legacy_codex_dir, ignore_errors=True)

    except Exception as e:
        if tracker:
            tracker.error("extract", str(e))
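`_validate_zip_members_within` is a Zip Slip guard: every archive member must still resolve inside the extraction root once `..` segments and symlinks are collapsed. A standalone sketch of the same check, operating on a plain member list rather than a `ZipFile` (so `unsafe_members` is an illustrative helper, not CLI code):

```python
from pathlib import Path

def unsafe_members(members: list[str], root: Path) -> list[str]:
    """Return archive member names that would escape `root` after resolution."""
    root_resolved = root.resolve()
    bad = []
    for member in members:
        # Resolve collapses ".." segments, exposing traversal attempts.
        member_path = (root_resolved / member).resolve()
        try:
            member_path.relative_to(root_resolved)
        except ValueError:
            bad.append(member)
    return bad
```

Members like `../evil.txt` resolve to a sibling of the root and fail `relative_to`, which is the condition the CLI turns into a `RuntimeError` before calling `extractall`.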
@@ -1499,18 +1543,27 @@ def _get_skills_dir(project_path: Path, selected_ai: str) -> Path:
    return project_path / DEFAULT_SKILLS_DIR


def install_ai_skills(project_path: Path, selected_ai: str, tracker: StepTracker | None = None) -> bool:
def install_ai_skills(
    project_path: Path,
    selected_ai: str,
    tracker: StepTracker | None = None,
    *,
    overwrite_existing: bool = False,
) -> bool:
    """Install Prompt.MD files from templates/commands/ as agent skills.

    Skills are written to the agent-specific skills directory following the
    `agentskills.io <https://agentskills.io/specification>`_ specification.
    Installation is additive — existing files are never removed and prompt
    command files in the agent's commands directory are left untouched.
    Installation is additive by default — existing files are never removed and
    prompt command files in the agent's commands directory are left untouched.

    Args:
        project_path: Target project directory.
        selected_ai: AI assistant key from ``AGENT_CONFIG``.
        tracker: Optional progress tracker.
        overwrite_existing: When True, overwrite any existing ``SKILL.md`` file
            in the target skills directory (including user-authored content).
            Defaults to False.

    Returns:
        ``True`` if at least one skill was installed or all skills were

@@ -1640,9 +1693,10 @@ def install_ai_skills(project_path: Path, selected_ai: str, tracker: StepTracker

        skill_file = skill_dir / "SKILL.md"
        if skill_file.exists():
            # Do not overwrite user-customized skills on re-runs
            skipped_count += 1
            continue
            if not overwrite_existing:
                # Default behavior: do not overwrite user-customized skills on re-runs
                skipped_count += 1
                continue
        skill_file.write_text(skill_content, encoding="utf-8")
        installed_count += 1
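The additive semantics are the key change here: existing `SKILL.md` files are skipped unless the new keyword-only `overwrite_existing` flag is set. Reduced to its core, the write path behaves like the sketch below, where `install_skill` is an illustrative stand-in for the relevant branch, not the actual CLI function.

```python
from pathlib import Path

def install_skill(skill_file: Path, content: str, *, overwrite_existing: bool = False) -> bool:
    """Write SKILL.md unless it already exists (additive by default)."""
    if skill_file.exists() and not overwrite_existing:
        return False  # skipped: preserve user-customized skills on re-runs
    skill_file.write_text(content, encoding="utf-8")
    return True
```

Re-running installation therefore never clobbers user edits; only the explicit fallback path (which passes `overwrite_existing=True`) replaces files.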
@@ -1994,7 +2048,18 @@ def init(

    if use_github:
        with httpx.Client(verify=local_ssl_context) as local_client:
            download_and_extract_template(project_path, selected_ai, selected_script, here, verbose=False, tracker=tracker, client=local_client, debug=debug, github_token=github_token)
            download_and_extract_template(
                project_path,
                selected_ai,
                selected_script,
                here,
                skip_legacy_codex_prompts=(selected_ai == "codex" and ai_skills),
                verbose=False,
                tracker=tracker,
                client=local_client,
                debug=debug,
                github_token=github_token,
            )
    else:
        scaffold_ok = scaffold_from_core_pack(project_path, selected_ai, selected_script, here, tracker=tracker)
        if not scaffold_ok:

@@ -2013,7 +2078,6 @@ def init(
            if not here and project_path.exists():
                shutil.rmtree(project_path)
            raise typer.Exit(1)

    # For generic agent, rename placeholder directory to user-specified path
    if selected_ai == "generic" and ai_commands_dir:
        placeholder_dir = project_path / ".speckit" / "commands"

@@ -2033,16 +2097,30 @@ def init(
    if ai_skills:
        if selected_ai in NATIVE_SKILLS_AGENTS:
            skills_dir = _get_skills_dir(project_path, selected_ai)
            if not _has_bundled_skills(project_path, selected_ai):
                raise RuntimeError(
                    f"Expected bundled agent skills in {skills_dir.relative_to(project_path)}, "
                    "but none were found. Re-run with an up-to-date template."
                )
            if tracker:
                tracker.start("ai-skills")
                tracker.complete("ai-skills", f"bundled skills → {skills_dir.relative_to(project_path)}")
            bundled_found = _has_bundled_skills(project_path, selected_ai)
            if bundled_found:
                if tracker:
                    tracker.start("ai-skills")
                    tracker.complete("ai-skills", f"bundled skills → {skills_dir.relative_to(project_path)}")
                else:
                    console.print(f"[green]✓[/green] Using bundled agent skills in {skills_dir.relative_to(project_path)}/")
            else:
                console.print(f"[green]✓[/green] Using bundled agent skills in {skills_dir.relative_to(project_path)}/")
                # Compatibility fallback: convert command templates to skills
                # when an older template archive does not include native skills.
                # This keeps `specify init --here --ai codex --ai-skills` usable
                # in repos that already contain unrelated skills under .agents/skills.
                fallback_ok = install_ai_skills(
                    project_path,
                    selected_ai,
                    tracker=tracker,
                    overwrite_existing=True,
                )
                if not fallback_ok:
                    raise RuntimeError(
                        f"Expected bundled agent skills in {skills_dir.relative_to(project_path)}, "
                        "but none were found and fallback conversion failed. "
                        "Re-run with an up-to-date template."
                    )
        else:
            skills_ok = install_ai_skills(project_path, selected_ai, tracker=tracker)
@@ -2964,7 +3042,7 @@ def preset_catalog_add(
|
||||
# Load existing config
|
||||
if config_path.exists():
|
||||
try:
|
||||
config = yaml.safe_load(config_path.read_text()) or {}
|
||||
config = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
|
||||
except Exception as e:
|
||||
console.print(f"[red]Error:[/red] Failed to read {config_path}: {e}")
|
||||
raise typer.Exit(1)
|
||||
@@ -2992,7 +3070,7 @@ def preset_catalog_add(
|
||||
})
|
||||
|
||||
config["catalogs"] = catalogs
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False, allow_unicode=True), encoding="utf-8")
|
||||
|
||||
install_label = "install allowed" if install_allowed else "discovery only"
|
||||
console.print(f"\n[green]✓[/green] Added catalog '[bold]{name}[/bold]' ({install_label})")
|
||||
@@ -3020,7 +3098,7 @@ def preset_catalog_remove(
|
||||
raise typer.Exit(1)
|
||||
|
||||
try:
|
||||
config = yaml.safe_load(config_path.read_text()) or {}
|
||||
config = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
|
||||
except Exception:
|
||||
console.print("[red]Error:[/red] Failed to read preset catalog config.")
|
||||
raise typer.Exit(1)
|
||||
@@ -3037,7 +3115,7 @@ def preset_catalog_remove(
|
||||
raise typer.Exit(1)
|
||||
|
||||
config["catalogs"] = catalogs
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False, allow_unicode=True), encoding="utf-8")
|
||||
|
||||
console.print(f"[green]✓[/green] Removed catalog '{name}'")
|
||||
if not catalogs:
|
||||
@@ -3306,7 +3384,7 @@ def catalog_add(
|
||||
# Load existing config
|
||||
if config_path.exists():
|
||||
try:
|
||||
config = yaml.safe_load(config_path.read_text()) or {}
|
||||
config = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
|
||||
except Exception as e:
|
||||
console.print(f"[red]Error:[/red] Failed to read {config_path}: {e}")
|
||||
raise typer.Exit(1)
|
||||
@@ -3334,7 +3412,7 @@ def catalog_add(
|
||||
})
|
||||
|
||||
config["catalogs"] = catalogs
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False, allow_unicode=True), encoding="utf-8")
|
||||
|
||||
install_label = "install allowed" if install_allowed else "discovery only"
|
||||
console.print(f"\n[green]✓[/green] Added catalog '[bold]{name}[/bold]' ({install_label})")
|
||||
@@ -3362,7 +3440,7 @@ def catalog_remove(
|
||||
raise typer.Exit(1)
|
||||
|
||||
try:
|
||||
config = yaml.safe_load(config_path.read_text()) or {}
|
||||
config = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
|
||||
except Exception:
|
||||
console.print("[red]Error:[/red] Failed to read catalog config.")
|
||||
raise typer.Exit(1)
|
||||
@@ -3379,7 +3457,7 @@ def catalog_remove(
|
||||
raise typer.Exit(1)
|
||||
|
||||
config["catalogs"] = catalogs
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
|
||||
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False, allow_unicode=True), encoding="utf-8")
|
||||
|
||||
console.print(f"[green]✓[/green] Removed catalog '{name}'")
|
||||
if not catalogs:
|
||||
|
||||
@@ -207,7 +207,7 @@ class CommandRegistrar:
|
||||
if not fm:
|
||||
return ""
|
||||
|
||||
yaml_str = yaml.dump(fm, default_flow_style=False, sort_keys=False)
|
||||
yaml_str = yaml.dump(fm, default_flow_style=False, sort_keys=False, allow_unicode=True)
|
||||
return f"---\n{yaml_str}---\n"
|
||||
|
||||
def _adjust_script_paths(self, frontmatter: dict) -> dict:
|
||||
|
||||
@@ -975,8 +975,8 @@ class ExtensionCatalog:
         if not config_path.exists():
             return None
         try:
-            data = yaml.safe_load(config_path.read_text()) or {}
-        except (yaml.YAMLError, OSError) as e:
+            data = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
+        except (yaml.YAMLError, OSError, UnicodeError) as e:
             raise ValidationError(
                 f"Failed to read catalog config {config_path}: {e}"
             )
@@ -1467,8 +1467,8 @@ class ConfigManager:
             return {}
 
         try:
-            return yaml.safe_load(file_path.read_text()) or {}
-        except (yaml.YAMLError, OSError):
+            return yaml.safe_load(file_path.read_text(encoding="utf-8")) or {}
+        except (yaml.YAMLError, OSError, UnicodeError):
             return {}
 
     def _get_extension_defaults(self) -> Dict[str, Any]:
@@ -1659,8 +1659,8 @@ class HookExecutor:
         }
 
         try:
-            return yaml.safe_load(self.config_file.read_text()) or {}
-        except (yaml.YAMLError, OSError):
+            return yaml.safe_load(self.config_file.read_text(encoding="utf-8")) or {}
+        except (yaml.YAMLError, OSError, UnicodeError):
             return {
                 "installed": [],
                 "settings": {"auto_execute_hooks": True},
@@ -1675,7 +1675,8 @@ class HookExecutor:
         """
         self.config_file.parent.mkdir(parents=True, exist_ok=True)
         self.config_file.write_text(
-            yaml.dump(config, default_flow_style=False, sort_keys=False)
+            yaml.dump(config, default_flow_style=False, sort_keys=False, allow_unicode=True),
+            encoding="utf-8",
         )
 
     def register_hooks(self, manifest: ExtensionManifest):
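The read-side hunks pair `read_text(encoding="utf-8")` with a new `UnicodeError` catch. That combination exists because `Path.read_text()` / `write_text()` without an `encoding` argument fall back to the platform's locale encoding (e.g. cp1252 on many Windows setups), so a file written on one machine can fail to decode on another. A stdlib-only sketch of both the deterministic round trip and the failure mode being caught:

```python
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp()) / "config.yaml"

# Explicit encoding on both sides makes the round trip platform-independent.
tmp.write_text("description: Prüfe Konformität\n", encoding="utf-8")
assert tmp.read_text(encoding="utf-8") == "description: Prüfe Konformität\n"

# Decoding the same UTF-8 bytes with a mismatched codec raises
# UnicodeDecodeError (a subclass of UnicodeError) - the case the
# widened except clauses above now handle.
try:
    tmp.read_text(encoding="ascii")
except UnicodeDecodeError:
    pass
```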
@@ -1062,8 +1062,8 @@ class PresetCatalog:
         if not config_path.exists():
             return None
         try:
-            data = yaml.safe_load(config_path.read_text()) or {}
-        except (yaml.YAMLError, OSError) as e:
+            data = yaml.safe_load(config_path.read_text(encoding="utf-8")) or {}
+        except (yaml.YAMLError, OSError, UnicodeError) as e:
             raise PresetValidationError(
                 f"Failed to read catalog config {config_path}: {e}"
             )
@@ -113,3 +113,16 @@
 - **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
 - **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
 - **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
+
+## Assumptions
+
+<!--
+  ACTION REQUIRED: The content in this section represents placeholders.
+  Fill them out with the right assumptions based on reasonable defaults
+  chosen when the feature description did not specify certain details.
+-->
+
+- [Assumption about target users, e.g., "Users have stable internet connectivity"]
+- [Assumption about scope boundaries, e.g., "Mobile support is out of scope for v1"]
+- [Assumption about data/environment, e.g., "Existing authentication system will be reused"]
+- [Dependency on existing system/service, e.g., "Requires access to the existing user profile API"]
@@ -11,10 +11,12 @@ Tests cover:
 """
 
 import re
 import zipfile
+import pytest
 import tempfile
 import shutil
 import yaml
+import typer
 from pathlib import Path
 from unittest.mock import patch
@@ -720,8 +722,8 @@ class TestNewProjectCommandSkip:
         mock_skills.assert_not_called()
         assert (target / ".agents" / "skills" / "speckit-specify" / "SKILL.md").exists()
 
-    def test_codex_native_skills_missing_fails_clearly(self, tmp_path):
-        """Codex native skills init should fail if bundled skills are missing."""
+    def test_codex_native_skills_missing_falls_back_then_fails_cleanly(self, tmp_path):
+        """Codex should attempt fallback conversion when bundled skills are missing."""
         from typer.testing import CliRunner
 
         runner = CliRunner()
@@ -730,7 +732,7 @@ class TestNewProjectCommandSkip:
         with patch("specify_cli.download_and_extract_template", lambda *args, **kwargs: None), \
             patch("specify_cli.ensure_executable_scripts"), \
             patch("specify_cli.ensure_constitution_from_template"), \
-            patch("specify_cli.install_ai_skills") as mock_skills, \
+            patch("specify_cli.install_ai_skills", return_value=False) as mock_skills, \
             patch("specify_cli.is_git_repo", return_value=False), \
             patch("specify_cli.shutil.which", return_value="/usr/bin/codex"):
             result = runner.invoke(
@@ -739,11 +741,13 @@ class TestNewProjectCommandSkip:
         )
 
         assert result.exit_code == 1
-        mock_skills.assert_not_called()
+        mock_skills.assert_called_once()
+        assert mock_skills.call_args.kwargs.get("overwrite_existing") is True
         assert "Expected bundled agent skills" in result.output
+        assert "fallback conversion failed" in result.output
 
     def test_codex_native_skills_ignores_non_speckit_skill_dirs(self, tmp_path):
-        """Non-spec-kit SKILL.md files should not satisfy Codex bundled-skills validation."""
+        """Non-spec-kit SKILL.md files should trigger fallback conversion, not hard-fail."""
         from typer.testing import CliRunner
 
         runner = CliRunner()
@@ -757,7 +761,7 @@ class TestNewProjectCommandSkip:
         with patch("specify_cli.download_and_extract_template", side_effect=fake_download), \
             patch("specify_cli.ensure_executable_scripts"), \
             patch("specify_cli.ensure_constitution_from_template"), \
-            patch("specify_cli.install_ai_skills") as mock_skills, \
+            patch("specify_cli.install_ai_skills", return_value=True) as mock_skills, \
             patch("specify_cli.is_git_repo", return_value=False), \
             patch("specify_cli.shutil.which", return_value="/usr/bin/codex"):
             result = runner.invoke(
@@ -765,9 +769,100 @@ class TestNewProjectCommandSkip:
             ["init", str(target), "--ai", "codex", "--ai-skills", "--script", "sh", "--no-git"],
         )
 
-        assert result.exit_code == 1
-        mock_skills.assert_not_called()
-        assert "Expected bundled agent skills" in result.output
+        assert result.exit_code == 0
+        mock_skills.assert_called_once()
+        assert mock_skills.call_args.kwargs.get("overwrite_existing") is True
+
+    def test_codex_ai_skills_here_mode_preserves_existing_codex_dir(self, tmp_path, monkeypatch):
+        """Codex --here skills init should not delete a pre-existing .codex directory."""
+        from typer.testing import CliRunner
+
+        runner = CliRunner()
+        target = tmp_path / "codex-preserve-here"
+        target.mkdir()
+        existing_prompts = target / ".codex" / "prompts"
+        existing_prompts.mkdir(parents=True)
+        (existing_prompts / "custom.md").write_text("custom")
+        monkeypatch.chdir(target)
+
+        with patch("specify_cli.download_and_extract_template", return_value=target), \
+            patch("specify_cli.ensure_executable_scripts"), \
+            patch("specify_cli.ensure_constitution_from_template"), \
+            patch("specify_cli.install_ai_skills", return_value=True), \
+            patch("specify_cli.is_git_repo", return_value=True), \
+            patch("specify_cli.shutil.which", return_value="/usr/bin/codex"):
+            result = runner.invoke(
+                app,
+                ["init", "--here", "--ai", "codex", "--ai-skills", "--script", "sh", "--no-git"],
+                input="y\n",
+            )
+
+        assert result.exit_code == 0
+        assert (target / ".codex").exists()
+        assert (existing_prompts / "custom.md").exists()
+
+    def test_codex_ai_skills_fresh_dir_does_not_create_codex_dir(self, tmp_path):
+        """Fresh-directory Codex skills init should not leave legacy .codex from archive."""
+        target = tmp_path / "fresh-codex-proj"
+        archive = tmp_path / "codex-template.zip"
+
+        with zipfile.ZipFile(archive, "w") as zf:
+            zf.writestr("template-root/.codex/prompts/speckit.specify.md", "legacy")
+            zf.writestr("template-root/.specify/templates/constitution-template.md", "constitution")
+
+        fake_meta = {
+            "filename": archive.name,
+            "size": archive.stat().st_size,
+            "release": "vtest",
+            "asset_url": "https://example.invalid/template.zip",
+        }
+
+        with patch("specify_cli.download_template_from_github", return_value=(archive, fake_meta)):
+            specify_cli.download_and_extract_template(
+                target,
+                "codex",
+                "sh",
+                is_current_dir=False,
+                skip_legacy_codex_prompts=True,
+                verbose=False,
+            )
+
+        assert target.exists()
+        assert (target / ".specify").exists()
+        assert not (target / ".codex").exists()
+
+    @pytest.mark.parametrize("is_current_dir", [False, True])
+    def test_download_and_extract_template_blocks_zip_path_traversal(self, tmp_path, monkeypatch, is_current_dir):
+        """Extraction should reject ZIP members escaping the target directory."""
+        target = (tmp_path / "here-proj") if is_current_dir else (tmp_path / "new-proj")
+        if is_current_dir:
+            target.mkdir()
+            monkeypatch.chdir(target)
+
+        archive = tmp_path / "malicious-template.zip"
+        with zipfile.ZipFile(archive, "w") as zf:
+            zf.writestr("../evil.txt", "pwned")
+            zf.writestr("template-root/.specify/templates/constitution-template.md", "constitution")
+
+        fake_meta = {
+            "filename": archive.name,
+            "size": archive.stat().st_size,
+            "release": "vtest",
+            "asset_url": "https://example.invalid/template.zip",
+        }
+
+        with patch("specify_cli.download_template_from_github", return_value=(archive, fake_meta)):
+            with pytest.raises(typer.Exit):
+                specify_cli.download_and_extract_template(
+                    target,
+                    "codex",
+                    "sh",
+                    is_current_dir=is_current_dir,
+                    skip_legacy_codex_prompts=True,
+                    verbose=False,
+                )
+
+        assert not (tmp_path / "evil.txt").exists()
 
     def test_commands_preserved_when_skills_fail(self, tmp_path):
         """If skills fail, commands should NOT be removed (safety net)."""
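The traversal test above builds an archive containing a `../evil.txt` member and expects extraction to abort with `typer.Exit` before anything lands outside the target. The CLI's actual extraction code is not shown in this diff; a minimal stdlib-only sketch of the "zip-slip" guard such a test exercises (hypothetical `safe_extract` helper, raising `ValueError` instead of `typer.Exit`):

```python
import zipfile
from pathlib import Path

def safe_extract(archive: Path, target: Path) -> None:
    """Extract a ZIP, rejecting any member that would escape `target`."""
    target = target.resolve()
    with zipfile.ZipFile(archive) as zf:
        for member in zf.namelist():
            # A member like "../evil.txt" resolves outside the target root.
            dest = (target / member).resolve()
            if not dest.is_relative_to(target):
                raise ValueError(f"blocked path traversal: {member}")
        zf.extractall(target)
```

Validating every member name before calling `extractall` means a single malicious entry aborts the whole extraction, matching the test's assertion that `evil.txt` never appears on disk.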
@@ -859,6 +954,21 @@ class TestSkipIfExists:
         # All 4 templates should produce skills (specify, plan, tasks, empty_fm)
         assert len(skill_dirs) == 4
 
+    def test_existing_skill_overwritten_when_enabled(self, project_dir, templates_dir):
+        """When overwrite_existing=True, pre-existing SKILL.md should be replaced."""
+        skill_dir = project_dir / ".claude" / "skills" / "speckit-specify"
+        skill_dir.mkdir(parents=True)
+        custom_content = "# My Custom Specify Skill\nUser-modified content\n"
+        skill_file = skill_dir / "SKILL.md"
+        skill_file.write_text(custom_content)
+
+        result = install_ai_skills(project_dir, "claude", overwrite_existing=True)
+
+        assert result is True
+        updated_content = skill_file.read_text()
+        assert updated_content != custom_content
+        assert "name: speckit-specify" in updated_content
+
 
 # ===== SKILL_DESCRIPTIONS Coverage Tests =====
@@ -747,6 +747,18 @@ $ARGUMENTS
         assert output.endswith("---\n")
         assert "description: Test command" in output
 
+    def test_render_frontmatter_unicode(self):
+        """Test rendering frontmatter preserves non-ASCII characters."""
+        frontmatter = {
+            "description": "Prüfe Konformität der Implementierung"
+        }
+
+        registrar = CommandRegistrar()
+        output = registrar.render_frontmatter(frontmatter)
+
+        assert "Prüfe Konformität" in output
+        assert "\\u" not in output
+
     def test_register_commands_for_claude(self, extension_dir, project_dir):
         """Test registering commands for Claude agent."""
         # Create .claude directory