Mirror of https://github.com/github/spec-kit.git
Synced 2026-03-25 14:53:08 +00:00

# Compare commits

2 commits: main ... chore/rele

| Author | SHA1 | Date |
|---|---|---|
| | 07fd90a4a1 | |
| | 765b125a0f | |

## README.md (65 changed lines)
@@ -22,10 +22,7 @@
 - [🤔 What is Spec-Driven Development?](#-what-is-spec-driven-development)
 - [⚡ Get Started](#-get-started)
 - [📽️ Video Overview](#️-video-overview)
-- [🧩 Community Extensions](#-community-extensions)
-- [🎨 Community Presets](#-community-presets)
 - [🚶 Community Walkthroughs](#-community-walkthroughs)
-- [🛠️ Community Friends](#️-community-friends)
 - [🤖 Supported AI Agents](#-supported-ai-agents)
 - [🔧 Specify CLI Reference](#-specify-cli-reference)
 - [🧩 Making Spec Kit Your Own: Extensions & Presets](#-making-spec-kit-your-own-extensions--presets)
@@ -158,56 +155,6 @@ Want to see Spec Kit in action? Watch our [video overview](https://www.youtube.c
 
 [](https://www.youtube.com/watch?v=a9eR1xsfvHg&pp=0gcJCckJAYcqIYzv)
 
-## 🧩 Community Extensions
-
-The following community-contributed extensions are available in [`catalog.community.json`](extensions/catalog.community.json):
-
-**Categories:** `docs` — reads, validates, or generates spec artifacts · `code` — reviews, validates, or modifies source code · `process` — orchestrates workflow across phases · `integration` — syncs with external platforms · `visibility` — reports on project health or progress
-
-**Effect:** `Read-only` — produces reports without modifying files · `Read+Write` — modifies files, creates artifacts, or updates specs
-
-| Extension | Purpose | Category | Effect | URL |
-|-----------|---------|----------|--------|-----|
-| AI-Driven Engineering (AIDE) | A structured 7-step workflow for building new projects from scratch with AI assistants — from vision through implementation | `process` | Read+Write | [aide](https://github.com/mnriem/spec-kit-extensions/tree/main/aide) |
-| Archive Extension | Archive merged features into main project memory. | `docs` | Read+Write | [spec-kit-archive](https://github.com/stn1slv/spec-kit-archive) |
-| Azure DevOps Integration | Sync user stories and tasks to Azure DevOps work items using OAuth authentication | `integration` | Read+Write | [spec-kit-azure-devops](https://github.com/pragya247/spec-kit-azure-devops) |
-| Checkpoint Extension | Commit the changes made during the middle of the implementation, so you don't end up with just one very large commit at the end | `code` | Read+Write | [spec-kit-checkpoint](https://github.com/aaronrsun/spec-kit-checkpoint) |
-| Cleanup Extension | Post-implementation quality gate that reviews changes, fixes small issues (scout rule), creates tasks for medium issues, and generates analysis for large issues | `code` | Read+Write | [spec-kit-cleanup](https://github.com/dsrednicki/spec-kit-cleanup) |
-| Cognitive Squad | Multi-agent cognitive system with Triadic Model: understanding, internalization, application — with quality gates, backpropagation verification, and self-healing | `docs` | Read+Write | [cognitive-squad](https://github.com/Testimonial/cognitive-squad) |
-| Conduct Extension | Orchestrates spec-kit phases via sub-agent delegation to reduce context pollution. | `process` | Read+Write | [spec-kit-conduct-ext](https://github.com/twbrandon7/spec-kit-conduct-ext) |
-| DocGuard — CDD Enforcement | Canonical-Driven Development enforcement. Validates, scores, and traces project documentation with automated checks, AI-driven workflows, and spec-kit hooks. Zero NPM runtime dependencies. | `docs` | Read+Write | [spec-kit-docguard](https://github.com/raccioly/docguard) |
-| Extensify | Create and validate extensions and extension catalogs | `process` | Read+Write | [extensify](https://github.com/mnriem/spec-kit-extensions/tree/main/extensify) |
-| Fleet Orchestrator | Orchestrate a full feature lifecycle with human-in-the-loop gates across all SpecKit phases | `process` | Read+Write | [spec-kit-fleet](https://github.com/sharathsatish/spec-kit-fleet) |
-| Iterate | Iterate on spec documents with a two-phase define-and-apply workflow — refine specs mid-implementation and go straight back to building | `docs` | Read+Write | [spec-kit-iterate](https://github.com/imviancagrace/spec-kit-iterate) |
-| Jira Integration | Create Jira Epics, Stories, and Issues from spec-kit specifications and task breakdowns with configurable hierarchy and custom field support | `integration` | Read+Write | [spec-kit-jira](https://github.com/mbachorik/spec-kit-jira) |
-| Learning Extension | Generate educational guides from implementations and enhance clarifications with mentoring context | `docs` | Read+Write | [spec-kit-learn](https://github.com/imviancagrace/spec-kit-learn) |
-| Presetify | Create and validate presets and preset catalogs | `process` | Read+Write | [presetify](https://github.com/mnriem/spec-kit-extensions/tree/main/presetify) |
-| Project Health Check | Diagnose a Spec Kit project and report health issues across structure, agents, features, scripts, extensions, and git | `visibility` | Read-only | [spec-kit-doctor](https://github.com/KhawarHabibKhan/spec-kit-doctor) |
-| Project Status | Show current SDD workflow progress — active feature, artifact status, task completion, workflow phase, and extensions summary | `visibility` | Read-only | [spec-kit-status](https://github.com/KhawarHabibKhan/spec-kit-status) |
-| Ralph Loop | Autonomous implementation loop using AI agent CLI | `code` | Read+Write | [spec-kit-ralph](https://github.com/Rubiss/spec-kit-ralph) |
-| Reconcile Extension | Reconcile implementation drift by surgically updating feature artifacts. | `docs` | Read+Write | [spec-kit-reconcile](https://github.com/stn1slv/spec-kit-reconcile) |
-| Retrospective Extension | Post-implementation retrospective with spec adherence scoring, drift analysis, and human-gated spec updates | `docs` | Read+Write | [spec-kit-retrospective](https://github.com/emi-dm/spec-kit-retrospective) |
-| Review Extension | Post-implementation comprehensive code review with specialized agents for code quality, comments, tests, error handling, type design, and simplification | `code` | Read-only | [spec-kit-review](https://github.com/ismaelJimenez/spec-kit-review) |
-| SDD Utilities | Resume interrupted workflows, validate project health, and verify spec-to-task traceability | `process` | Read+Write | [speckit-utils](https://github.com/mvanhorn/speckit-utils) |
-| Spec Sync | Detect and resolve drift between specs and implementation. AI-assisted resolution with human approval | `docs` | Read+Write | [spec-kit-sync](https://github.com/bgervin/spec-kit-sync) |
-| Understanding | Automated requirements quality analysis — 31 deterministic metrics against IEEE/ISO standards with experimental energy-based ambiguity detection | `docs` | Read-only | [understanding](https://github.com/Testimonial/understanding) |
-| V-Model Extension Pack | Enforces V-Model paired generation of development specs and test specs with full traceability | `docs` | Read+Write | [spec-kit-v-model](https://github.com/leocamello/spec-kit-v-model) |
-| Verify Extension | Post-implementation quality gate that validates implemented code against specification artifacts | `code` | Read-only | [spec-kit-verify](https://github.com/ismaelJimenez/spec-kit-verify) |
-| Verify Tasks Extension | Detect phantom completions: tasks marked [X] in tasks.md with no real implementation | `code` | Read-only | [spec-kit-verify-tasks](https://github.com/datastone-inc/spec-kit-verify-tasks) |
-
-To submit your own extension, see the [Extension Publishing Guide](extensions/EXTENSION-PUBLISHING-GUIDE.md).
-
-## 🎨 Community Presets
-
-The following community-contributed presets customize how Spec Kit behaves — overriding templates, commands, and terminology without changing any tooling. Presets are available in [`catalog.community.json`](presets/catalog.community.json):
-
-| Preset | Purpose | Provides | Requires | URL |
-|--------|---------|----------|----------|-----|
-| AIDE In-Place Migration | Adapts the AIDE extension workflow for in-place technology migrations (X → Y pattern) — adds migration objectives, verification gates, knowledge documents, and behavioral equivalence criteria | 2 templates, 8 commands | AIDE extension | [spec-kit-presets](https://github.com/mnriem/spec-kit-presets) |
-| Pirate Speak (Full) | Transforms all Spec Kit output into pirate speak — specs become "Voyage Manifests", plans become "Battle Plans", tasks become "Crew Assignments" | 6 templates, 9 commands | — | [spec-kit-presets](https://github.com/mnriem/spec-kit-presets) |
-
-To build and publish your own preset, see the [Presets Publishing Guide](presets/PUBLISHING.md).
-
 ## 🚶 Community Walkthroughs
 
 See Spec-Driven Development in action across different scenarios with these community-contributed walkthroughs:
@@ -226,14 +173,6 @@ See Spec-Driven Development in action across different scenarios with these comm
 
 - **[Greenfield Spring Boot + React with a custom extension](https://github.com/mnriem/spec-kit-aide-extension-demo)** — Walks through the **AIDE extension**, a community extension that adds an alternative spec-driven workflow to spec-kit with high-level specs (vision) and low-level specs (work items) organized in a 7-step iterative lifecycle: vision → roadmap → progress tracking → work queue → work items → execution → feedback loops. Uses a family trading platform (Spring Boot 4, React 19, PostgreSQL, Docker Compose) as the scenario to illustrate how the extension mechanism lets you plug in a different style of spec-driven development without changing any core tooling — truly utilizing the "Kit" in Spec Kit.
-
-## 🛠️ Community Friends
-
-Community projects that extend, visualize, or build on Spec Kit:
-
-- **[cc-sdd](https://github.com/rhuss/cc-sdd)** - A Claude Code plugin that adds composable traits on top of Spec Kit with [Superpowers](https://github.com/obra/superpowers)-based quality gates, spec/code review, git worktree isolation, and parallel implementation via agent teams.
-
-- **[Spec Kit Assistant](https://marketplace.visualstudio.com/items?itemName=rfsales.speckit-assistant)** — A VS Code extension that provides a visual orchestrator for the full SDD workflow (constitution → specification → planning → tasks → implementation) with phase status visualization, an interactive task checklist, DAG visualization, and support for Claude, Gemini, GitHub Copilot, and OpenAI backends. Requires the `specify` CLI in your PATH.
 
 ## 🤖 Supported AI Agents
 
 | Agent | Support | Notes |

@@ -292,7 +231,7 @@ The `specify` command supports the following options:
 | `--skip-tls` | Flag | Skip SSL/TLS verification (not recommended) |
 | `--debug` | Flag | Enable detailed debug output for troubleshooting |
 | `--github-token` | Option | GitHub token for API requests (or set GH_TOKEN/GITHUB_TOKEN env variable) |
-| `--ai-skills` | Flag | Install Prompt.MD templates as agent skills in agent-specific `skills/` directory (requires `--ai`). Extension commands are also auto-registered as skills when extensions are added later. |
+| `--ai-skills` | Flag | Install Prompt.MD templates as agent skills in agent-specific `skills/` directory (requires `--ai`) |
 | `--branch-numbering` | Option | Branch numbering strategy: `sequential` (default — `001`, `002`, `003`) or `timestamp` (`YYYYMMDD-HHMMSS`). Timestamp mode is useful for distributed teams to avoid numbering conflicts |
 
 ### Examples

@@ -443,7 +382,7 @@ specify extension add <extension-name>
 
 For example, extensions could add Jira integration, post-implementation code review, V-Model test traceability, or project health diagnostics.
 
-See the [Extensions README](./extensions/README.md) for the full guide and how to build and publish your own. Browse the [community extensions](#-community-extensions) above for what's available.
+See the [Extensions README](./extensions/README.md) for the full guide, the complete community catalog, and how to build and publish your own.
 
 ### Presets — Customize Existing Workflows
## TESTING.md (79 changed lines)

@@ -1,79 +0,0 @@
-# Manual Testing Guide
-
-Any change that affects a slash command's behavior requires manually testing that command through an AI agent and submitting results with the PR.
-
-## Process
-
-1. **Identify affected commands** — use the [prompt below](#determining-which-tests-to-run) to have your agent analyze your changed files and determine which commands need testing.
-2. **Set up a test project** — scaffold from your local branch (see [Setup](#setup)).
-3. **Run each affected command** — invoke it in your agent, verify it completes successfully, and confirm it produces the expected output (files created, scripts executed, artifacts populated).
-4. **Run prerequisites first** — commands that depend on earlier commands (e.g., `/speckit.tasks` requires `/speckit.plan` which requires `/speckit.specify`) must be run in order.
-5. **Report results** — paste the [reporting template](#reporting-results) into your PR with pass/fail for each command tested.
-
-## Setup
-
-```bash
-# Install the CLI from your local branch
-cd <spec-kit-repo>
-uv venv .venv
-source .venv/bin/activate # On Windows: .venv\Scripts\activate
-uv pip install -e .
-
-# Initialize a test project using your local changes
-specify init /tmp/speckit-test --ai <agent> --offline
-cd /tmp/speckit-test
-
-# Open in your agent
-```
-
-## Reporting results
-
-Paste this into your PR:
-
-~~~markdown
-## Manual test results
-
-**Agent**: [e.g., GitHub Copilot in VS Code] | **OS/Shell**: [e.g., macOS/zsh]
-
-| Command tested | Notes |
-|----------------|-------|
-| `/speckit.command` | |
-~~~
-
-## Determining which tests to run
-
-Copy this prompt into your agent. Include the agent's response (selected tests plus a brief explanation of the mapping) in your PR.
-
-~~~text
-Read TESTING.md, then run `git diff --name-only main` to get my changed files.
-For each changed file, determine which slash commands it affects by reading
-the command templates in templates/commands/ to understand what each command
-invokes. Use these mapping rules:
-
-- templates/commands/X.md → the command it defines
-- scripts/bash/Y.sh or scripts/powershell/Y.ps1 → every command that invokes that script (grep templates/commands/ for the script name). Also check transitive dependencies: if the changed script is sourced by other scripts (e.g., common.sh is sourced by create-new-feature.sh, check-prerequisites.sh, setup-plan.sh, update-agent-context.sh), then every command invoking those downstream scripts is also affected
-- templates/Z-template.md → every command that consumes that template during execution
-- src/specify_cli/*.py → CLI commands (`specify init`, `specify check`, `specify extension *`, `specify preset *`); test the affected CLI command and, for init/scaffolding changes, at minimum test /speckit.specify
-- extensions/X/commands/* → the extension command it defines
-- extensions/X/scripts/* → every extension command that invokes that script
-- extensions/X/extension.yml or config-template.yml → every command in that extension. Also check if the manifest defines hooks (look for `hooks:` entries like `before_specify`, `after_implement`, etc.) — if so, the core commands those hooks attach to are also affected
-- presets/*/* → test preset scaffolding via `specify init` with the preset
-- pyproject.toml → packaging/bundling; test `specify init` and verify bundled assets
-
-Include prerequisite tests (e.g., T5 requires T3 requires T1).
-
-Output in this format:
-
-### Test selection reasoning
-
-| Changed file | Affects | Test | Why |
-|---|---|---|---|
-| (path) | (command) | T# | (reason) |
-
-### Required tests
-
-Number each test sequentially (T1, T2, ...). List prerequisite tests first.
-
-- T1: /speckit.command — (reason)
-- T2: /speckit.command — (reason)
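The file-to-command mapping rules in the deleted guide above can be sketched in code. This is an illustrative reduction covering only three of the rules; the path patterns come from the guide, but the `affected_commands` function and its exact return values are assumptions:

```python
def affected_commands(changed_file: str) -> list[str]:
    """Map a changed file to the slash/CLI commands to retest (partial sketch)."""
    if changed_file.startswith("templates/commands/") and changed_file.endswith(".md"):
        # templates/commands/X.md defines the /speckit.X command
        name = changed_file.rsplit("/", 1)[1][:-len(".md")]
        return [f"/speckit.{name}"]
    if changed_file.startswith("src/specify_cli/") and changed_file.endswith(".py"):
        # CLI changes: test the CLI and, for scaffolding, at minimum /speckit.specify
        return ["specify init", "/speckit.specify"]
    if changed_file == "pyproject.toml":
        # Packaging/bundling changes affect project initialization
        return ["specify init"]
    return []

print(affected_commands("templates/commands/plan.md"))  # → ['/speckit.plan']
```

A real selection would also need the script and template rules, which require grepping the command templates rather than pattern-matching on paths alone.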
@@ -523,7 +523,7 @@ Submit to the community catalog for public discovery:
 
 1. **Fork** spec-kit repository
 2. **Add entry** to `extensions/catalog.community.json`
-3. **Update** the Community Extensions table in `README.md` with your extension
+3. **Update** `extensions/README.md` with your extension
 4. **Create PR** following the [Extension Publishing Guide](EXTENSION-PUBLISHING-GUIDE.md)
 5. **After merge**, your extension becomes available:
    - Users can browse `catalog.community.json` to discover your extension
@@ -204,9 +204,9 @@ Edit `extensions/catalog.community.json` and add your extension:
 - Use current timestamp for `created_at` and `updated_at`
 - Update the top-level `updated_at` to current time
 
-### 3. Update Community Extensions Table
+### 3. Update Extensions README
 
-Add your extension to the Community Extensions table in the project root `README.md`:
+Add your extension to the Available Extensions table in `extensions/README.md`:
 
 ```markdown
 | Your Extension Name | Brief description of what it does | `<category>` | <effect> | [repo-name](https://github.com/your-org/spec-kit-your-extension) |

@@ -234,7 +234,7 @@ Insert your extension in alphabetical order in the table.
 git checkout -b add-your-extension
 
 # Commit your changes
-git add extensions/catalog.community.json README.md
+git add extensions/catalog.community.json extensions/README.md
 git commit -m "Add your-extension to community catalog
 
 - Extension ID: your-extension

@@ -273,7 +273,7 @@ Brief description of what your extension does.
 - [x] All commands working
 - [x] No security vulnerabilities
 - [x] Added to extensions/catalog.community.json
-- [x] Added to Community Extensions table in README.md
+- [x] Added to extensions/README.md Available Extensions table
 
 ### Testing
 Tested on:
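The publishing guide above requires inserting new table rows in alphabetical order. A small pre-PR check like the following can catch ordering mistakes; the row text here is invented for illustration:

```python
# Hypothetical slice of the Available Extensions table (cells abbreviated)
table_rows = [
    "| Archive Extension | Archive merged features | `docs` | Read+Write | link |",
    "| Azure DevOps Integration | Sync work items | `integration` | Read+Write | link |",
    "| Checkpoint Extension | Commit mid-implementation | `code` | Read+Write | link |",
]

# The first cell of each row is the extension name
names = [row.split("|")[1].strip() for row in table_rows]
assert names == sorted(names, key=str.lower), "table rows are not alphabetical"
print("rows are in alphabetical order")
```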
@@ -187,21 +187,6 @@ Provided commands:
 Check: .specify/extensions/jira/
 ```
 
-### Automatic Agent Skill Registration
-
-If your project was initialized with `--ai-skills`, extension commands are **automatically registered as agent skills** during installation. This ensures that extensions are discoverable by agents that use the [agentskills.io](https://agentskills.io) skill specification.
-
-```text
-✓ Extension installed successfully!
-
-Jira Integration (v1.0.0)
-...
-
-✓ 3 agent skill(s) auto-registered
-```
-
-When an extension is removed, its corresponding skills are also cleaned up automatically. Pre-existing skills that were manually customized are never overwritten.
 
 ---
 
 ## Using Extensions
@@ -68,9 +68,37 @@ specify extension add --from https://github.com/org/spec-kit-ext/archive/refs/ta
 
 ## Available Community Extensions
 
-See the [Community Extensions](../README.md#-community-extensions) section in the main README for the full list of available community-contributed extensions.
-
-For the raw catalog data, see [`catalog.community.json`](catalog.community.json).
+The following community-contributed extensions are available in [`catalog.community.json`](catalog.community.json):
+
+**Categories:** `docs` — reads, validates, or generates spec artifacts · `code` — reviews, validates, or modifies source code · `process` — orchestrates workflow across phases · `integration` — syncs with external platforms · `visibility` — reports on project health or progress
+
+**Effect:** `Read-only` — produces reports without modifying files · `Read+Write` — modifies files, creates artifacts, or updates specs
+
+| Extension | Purpose | Category | Effect | URL |
+|-----------|---------|----------|--------|-----|
+| Archive Extension | Archive merged features into main project memory. | `docs` | Read+Write | [spec-kit-archive](https://github.com/stn1slv/spec-kit-archive) |
+| Azure DevOps Integration | Sync user stories and tasks to Azure DevOps work items using OAuth authentication | `integration` | Read+Write | [spec-kit-azure-devops](https://github.com/pragya247/spec-kit-azure-devops) |
+| Checkpoint Extension | Commit the changes made during the middle of the implementation, so you don't end up with just one very large commit at the end | `code` | Read+Write | [spec-kit-checkpoint](https://github.com/aaronrsun/spec-kit-checkpoint) |
+| Cleanup Extension | Post-implementation quality gate that reviews changes, fixes small issues (scout rule), creates tasks for medium issues, and generates analysis for large issues | `code` | Read+Write | [spec-kit-cleanup](https://github.com/dsrednicki/spec-kit-cleanup) |
+| Cognitive Squad | Multi-agent cognitive system with Triadic Model: understanding, internalization, application — with quality gates, backpropagation verification, and self-healing | `docs` | Read+Write | [cognitive-squad](https://github.com/Testimonial/cognitive-squad) |
+| Conduct Extension | Orchestrates spec-kit phases via sub-agent delegation to reduce context pollution. | `process` | Read+Write | [spec-kit-conduct-ext](https://github.com/twbrandon7/spec-kit-conduct-ext) |
+| DocGuard — CDD Enforcement | Canonical-Driven Development enforcement. Validates, scores, and traces project documentation with automated checks, AI-driven workflows, and spec-kit hooks. Zero NPM runtime dependencies. | `docs` | Read+Write | [spec-kit-docguard](https://github.com/raccioly/docguard) |
+| Fleet Orchestrator | Orchestrate a full feature lifecycle with human-in-the-loop gates across all SpecKit phases | `process` | Read+Write | [spec-kit-fleet](https://github.com/sharathsatish/spec-kit-fleet) |
+| Iterate | Iterate on spec documents with a two-phase define-and-apply workflow — refine specs mid-implementation and go straight back to building | `docs` | Read+Write | [spec-kit-iterate](https://github.com/imviancagrace/spec-kit-iterate) |
+| Jira Integration | Create Jira Epics, Stories, and Issues from spec-kit specifications and task breakdowns with configurable hierarchy and custom field support | `integration` | Read+Write | [spec-kit-jira](https://github.com/mbachorik/spec-kit-jira) |
+| Learning Extension | Generate educational guides from implementations and enhance clarifications with mentoring context | `docs` | Read+Write | [spec-kit-learn](https://github.com/imviancagrace/spec-kit-learn) |
+| Project Health Check | Diagnose a Spec Kit project and report health issues across structure, agents, features, scripts, extensions, and git | `visibility` | Read-only | [spec-kit-doctor](https://github.com/KhawarHabibKhan/spec-kit-doctor) |
+| Project Status | Show current SDD workflow progress — active feature, artifact status, task completion, workflow phase, and extensions summary | `visibility` | Read-only | [spec-kit-status](https://github.com/KhawarHabibKhan/spec-kit-status) |
+| Ralph Loop | Autonomous implementation loop using AI agent CLI | `code` | Read+Write | [spec-kit-ralph](https://github.com/Rubiss/spec-kit-ralph) |
+| Reconcile Extension | Reconcile implementation drift by surgically updating feature artifacts. | `docs` | Read+Write | [spec-kit-reconcile](https://github.com/stn1slv/spec-kit-reconcile) |
+| Retrospective Extension | Post-implementation retrospective with spec adherence scoring, drift analysis, and human-gated spec updates | `docs` | Read+Write | [spec-kit-retrospective](https://github.com/emi-dm/spec-kit-retrospective) |
+| Review Extension | Post-implementation comprehensive code review with specialized agents for code quality, comments, tests, error handling, type design, and simplification | `code` | Read-only | [spec-kit-review](https://github.com/ismaelJimenez/spec-kit-review) |
+| SDD Utilities | Resume interrupted workflows, validate project health, and verify spec-to-task traceability | `process` | Read+Write | [speckit-utils](https://github.com/mvanhorn/speckit-utils) |
+| Spec Sync | Detect and resolve drift between specs and implementation. AI-assisted resolution with human approval | `docs` | Read+Write | [spec-kit-sync](https://github.com/bgervin/spec-kit-sync) |
+| Understanding | Automated requirements quality analysis — 31 deterministic metrics against IEEE/ISO standards with experimental energy-based ambiguity detection | `docs` | Read-only | [understanding](https://github.com/Testimonial/understanding) |
+| V-Model Extension Pack | Enforces V-Model paired generation of development specs and test specs with full traceability | `docs` | Read+Write | [spec-kit-v-model](https://github.com/leocamello/spec-kit-v-model) |
+| Verify Extension | Post-implementation quality gate that validates implemented code against specification artifacts | `code` | Read-only | [spec-kit-verify](https://github.com/ismaelJimenez/spec-kit-verify) |
+| Verify Tasks Extension | Detect phantom completions: tasks marked [X] in tasks.md with no real implementation | `code` | Read-only | [spec-kit-verify-tasks](https://github.com/datastone-inc/spec-kit-verify-tasks) |
 
 ## Adding Your Extension
@@ -3,39 +3,6 @@
   "updated_at": "2026-03-19T12:08:20Z",
   "catalog_url": "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json",
   "extensions": {
-    "aide": {
-      "name": "AI-Driven Engineering (AIDE)",
-      "id": "aide",
-      "description": "A structured 7-step workflow for building new projects from scratch with AI assistants — from vision through implementation.",
-      "author": "mnriem",
-      "version": "1.0.0",
-      "download_url": "https://github.com/mnriem/spec-kit-extensions/releases/download/aide-v1.0.0/aide.zip",
-      "repository": "https://github.com/mnriem/spec-kit-extensions",
-      "homepage": "https://github.com/mnriem/spec-kit-extensions",
-      "documentation": "https://github.com/mnriem/spec-kit-extensions/blob/main/aide/README.md",
-      "changelog": "https://github.com/mnriem/spec-kit-extensions/blob/main/aide/CHANGELOG.md",
-      "license": "MIT",
-      "requires": {
-        "speckit_version": ">=0.2.0"
-      },
-      "provides": {
-        "commands": 7,
-        "hooks": 0
-      },
-      "tags": [
-        "workflow",
-        "project-management",
-        "ai-driven",
-        "new-project",
-        "planning",
-        "experimental"
-      ],
-      "verified": false,
-      "downloads": 0,
-      "stars": 0,
-      "created_at": "2026-03-18T00:00:00Z",
-      "updated_at": "2026-03-18T00:00:00Z"
-    },
     "archive": {
       "name": "Archive Extension",
       "id": "archive",
@@ -314,37 +281,6 @@
       "created_at": "2026-03-13T00:00:00Z",
       "updated_at": "2026-03-13T00:00:00Z"
     },
-    "extensify": {
-      "name": "Extensify",
-      "id": "extensify",
-      "description": "Create and validate extensions and extension catalogs.",
-      "author": "mnriem",
-      "version": "1.0.0",
-      "download_url": "https://github.com/mnriem/spec-kit-extensions/releases/download/extensify-v1.0.0/extensify.zip",
-      "repository": "https://github.com/mnriem/spec-kit-extensions",
-      "homepage": "https://github.com/mnriem/spec-kit-extensions",
-      "documentation": "https://github.com/mnriem/spec-kit-extensions/blob/main/extensify/README.md",
-      "changelog": "https://github.com/mnriem/spec-kit-extensions/blob/main/extensify/CHANGELOG.md",
-      "license": "MIT",
-      "requires": {
-        "speckit_version": ">=0.2.0"
-      },
-      "provides": {
-        "commands": 4,
-        "hooks": 0
-      },
-      "tags": [
-        "extensions",
-        "workflow",
-        "validation",
-        "experimental"
-      ],
-      "verified": false,
-      "downloads": 0,
-      "stars": 0,
-      "created_at": "2026-03-18T00:00:00Z",
-      "updated_at": "2026-03-18T00:00:00Z"
-    },
     "fleet": {
       "name": "Fleet Orchestrator",
       "id": "fleet",
@@ -437,37 +373,6 @@
       "created_at": "2026-03-05T00:00:00Z",
       "updated_at": "2026-03-05T00:00:00Z"
     },
-    "presetify": {
-      "name": "Presetify",
-      "id": "presetify",
-      "description": "Create and validate presets and preset catalogs.",
-      "author": "mnriem",
-      "version": "1.0.0",
-      "download_url": "https://github.com/mnriem/spec-kit-extensions/releases/download/presetify-v1.0.0/presetify.zip",
-      "repository": "https://github.com/mnriem/spec-kit-extensions",
-      "homepage": "https://github.com/mnriem/spec-kit-extensions",
-      "documentation": "https://github.com/mnriem/spec-kit-extensions/blob/main/presetify/README.md",
-      "changelog": "https://github.com/mnriem/spec-kit-extensions/blob/main/presetify/CHANGELOG.md",
-      "license": "MIT",
-      "requires": {
-        "speckit_version": ">=0.2.0"
-      },
-      "provides": {
-        "commands": 4,
-        "hooks": 0
-      },
-      "tags": [
-        "presets",
-        "workflow",
-        "templates",
-        "experimental"
-      ],
-      "verified": false,
-      "downloads": 0,
-      "stars": 0,
-      "created_at": "2026-03-18T00:00:00Z",
-      "updated_at": "2026-03-18T00:00:00Z"
-    },
     "ralph": {
       "name": "Ralph Loop",
       "id": "ralph",
@@ -1,58 +1,6 @@
 {
   "schema_version": "1.0",
-  "updated_at": "2026-03-24T00:00:00Z",
+  "updated_at": "2026-03-09T00:00:00Z",
   "catalog_url": "https://raw.githubusercontent.com/github/spec-kit/main/presets/catalog.community.json",
-  "presets": {
-    "aide-in-place": {
-      "name": "AIDE In-Place Migration",
-      "id": "aide-in-place",
-      "version": "1.0.0",
-      "description": "Adapts the AIDE workflow for in-place technology migrations (X → Y pattern). Overrides vision, roadmap, progress, and work item commands with migration-specific guidance.",
-      "author": "mnriem",
-      "repository": "https://github.com/mnriem/spec-kit-presets",
-      "download_url": "https://github.com/mnriem/spec-kit-presets/releases/download/aide-in-place-v1.0.0/aide-in-place.zip",
-      "homepage": "https://github.com/mnriem/spec-kit-presets",
-      "documentation": "https://github.com/mnriem/spec-kit-presets/blob/main/aide-in-place/README.md",
-      "license": "MIT",
-      "requires": {
-        "speckit_version": ">=0.2.0",
-        "extensions": ["aide"]
-      },
-      "provides": {
-        "templates": 2,
-        "commands": 8
-      },
-      "tags": [
-        "migration",
-        "in-place",
-        "brownfield",
-        "aide"
-      ]
-    },
-    "pirate": {
-      "name": "Pirate Speak (Full)",
-      "id": "pirate",
-      "version": "1.0.0",
-      "description": "Arrr! Transforms all Spec Kit output into pirate speak. Specs, plans, and tasks be written fer scallywags.",
-      "author": "mnriem",
-      "repository": "https://github.com/mnriem/spec-kit-presets",
-      "download_url": "https://github.com/mnriem/spec-kit-presets/releases/download/pirate-v1.0.0/pirate.zip",
-      "homepage": "https://github.com/mnriem/spec-kit-presets",
-      "documentation": "https://github.com/mnriem/spec-kit-presets/blob/main/pirate/README.md",
-      "license": "MIT",
-      "requires": {
-        "speckit_version": ">=0.1.0"
-      },
-      "provides": {
-        "templates": 6,
-        "commands": 9
-      },
-      "tags": [
-        "pirate",
-        "theme",
-        "fun",
-        "experimental"
-      ]
-    }
-  },
+  "presets": {}
 }
@@ -3594,15 +3594,6 @@ def extension_add(
     for cmd in manifest.commands:
         console.print(f" • {cmd['name']} - {cmd.get('description', '')}")
 
-    # Report agent skills registration
-    reg_meta = manager.registry.get(manifest.id)
-    reg_skills = reg_meta.get("registered_skills", []) if reg_meta else []
-    # Normalize to guard against corrupted registry entries
-    if not isinstance(reg_skills, list):
-        reg_skills = []
-    if reg_skills:
-        console.print(f"\n[green]✓[/green] {len(reg_skills)} agent skill(s) auto-registered")
-
     console.print("\n[yellow]⚠[/yellow] Configuration may be required")
     console.print(f" Check: .specify/extensions/{manifest.id}/")
 
@@ -3641,19 +3632,14 @@ def extension_remove(
     installed = manager.list_installed()
     extension_id, display_name = _resolve_installed_extension(extension, installed, "remove")
 
-    # Get extension info for command and skill counts
+    # Get extension info for command count
     ext_manifest = manager.get_extension(extension_id)
    cmd_count = len(ext_manifest.commands) if ext_manifest else 0
-    reg_meta = manager.registry.get(extension_id)
-    raw_skills = reg_meta.get("registered_skills") if reg_meta else None
-    skill_count = len(raw_skills) if isinstance(raw_skills, list) else 0
 
     # Confirm removal
     if not force:
         console.print("\n[yellow]⚠ This will remove:[/yellow]")
         console.print(f" • {cmd_count} commands from AI agent")
-        if skill_count:
-            console.print(f" • {skill_count} agent skill(s)")
         console.print(f" • Extension directory: .specify/extensions/{extension_id}/")
         if not keep_config:
             console.print(" • Config files (will be backed up)")
@@ -510,288 +510,6 @@ class ExtensionManager:
 
         return _ignore
 
-    def _get_skills_dir(self) -> Optional[Path]:
-        """Return the skills directory if ``--ai-skills`` was used during init.
-
-        Reads ``.specify/init-options.json`` to determine whether skills
-        are enabled and which agent was selected, then delegates to
-        the module-level ``_get_skills_dir()`` helper for the concrete path.
-
-        Returns:
-            The skills directory ``Path``, or ``None`` if skills were not
-            enabled or the init-options file is missing.
-        """
-        from . import load_init_options, _get_skills_dir as resolve_skills_dir
-
-        opts = load_init_options(self.project_root)
-        if not opts.get("ai_skills"):
-            return None
-
-        agent = opts.get("ai")
-        if not agent:
-            return None
-
-        skills_dir = resolve_skills_dir(self.project_root, agent)
-        if not skills_dir.is_dir():
-            return None
-
-        return skills_dir
-
-    def _register_extension_skills(
-        self,
-        manifest: ExtensionManifest,
-        extension_dir: Path,
-    ) -> List[str]:
-        """Generate SKILL.md files for extension commands as agent skills.
-
-        For every command in the extension manifest, creates a SKILL.md
-        file in the agent's skills directory following the agentskills.io
-        specification. This is only done when ``--ai-skills`` was used
-        during project initialisation.
-
-        Args:
-            manifest: Extension manifest.
-            extension_dir: Installed extension directory.
-
-        Returns:
-            List of skill names that were created (for registry storage).
-        """
-        skills_dir = self._get_skills_dir()
-        if not skills_dir:
-            return []
-
-        from . import load_init_options
-        import yaml
-
-        opts = load_init_options(self.project_root)
-        selected_ai = opts.get("ai", "")
-
-        written: List[str] = []
-
-        for cmd_info in manifest.commands:
-            cmd_name = cmd_info["name"]
-            cmd_file_rel = cmd_info["file"]
-
-            # Guard against path traversal: reject absolute paths and ensure
-            # the resolved file stays within the extension directory.
-            cmd_path = Path(cmd_file_rel)
-            if cmd_path.is_absolute():
-                continue
-            try:
-                ext_root = extension_dir.resolve()
-                source_file = (ext_root / cmd_path).resolve()
-                source_file.relative_to(ext_root)  # raises ValueError if outside
-            except (OSError, ValueError):
-                continue
-
-            if not source_file.is_file():
-                continue
-
-            # Derive skill name from command name, matching the convention used by
-            # presets.py: strip the leading "speckit." prefix, then form:
-            #   Kimi  → "speckit.{short_name}" (dot preserved for Kimi agent)
-            #   other → "speckit-{short_name}" (hyphen separator)
-            short_name_raw = cmd_name
-            if short_name_raw.startswith("speckit."):
-                short_name_raw = short_name_raw[len("speckit."):]
-            if selected_ai == "kimi":
-                skill_name = f"speckit.{short_name_raw}"
-            else:
-                skill_name = f"speckit-{short_name_raw}"
-
-            # Check if skill already exists before creating the directory
-            skill_subdir = skills_dir / skill_name
-            skill_file = skill_subdir / "SKILL.md"
-            if skill_file.exists():
-                # Do not overwrite user-customized skills
-                continue
-
-            # Create skill directory; track whether we created it so we can clean
-            # up safely if reading the source file subsequently fails.
-            created_now = not skill_subdir.exists()
-            skill_subdir.mkdir(parents=True, exist_ok=True)
-
-            # Parse the command file — guard against IsADirectoryError / decode errors
-            try:
-                content = source_file.read_text(encoding="utf-8")
-            except (OSError, UnicodeDecodeError):
-                if created_now:
-                    try:
-                        skill_subdir.rmdir()  # undo the mkdir; dir is empty at this point
-                    except OSError:
-                        pass  # best-effort cleanup
-                continue
-            if content.startswith("---"):
-                parts = content.split("---", 2)
-                if len(parts) >= 3:
-                    try:
-                        frontmatter = yaml.safe_load(parts[1])
-                    except yaml.YAMLError:
-                        frontmatter = {}
-                    if not isinstance(frontmatter, dict):
-                        frontmatter = {}
-                    body = parts[2].strip()
-                else:
-                    frontmatter = {}
-                    body = content
-            else:
-                frontmatter = {}
-                body = content
-
-            original_desc = frontmatter.get("description", "")
-            description = original_desc or f"Extension command: {cmd_name}"
-
-            frontmatter_data = {
-                "name": skill_name,
-                "description": description,
-                "compatibility": "Requires spec-kit project structure with .specify/ directory",
-                "metadata": {
-                    "author": "github-spec-kit",
-                    "source": f"extension:{manifest.id}",
-                },
-            }
-            frontmatter_text = yaml.safe_dump(frontmatter_data, sort_keys=False).strip()
-
-            # Derive a human-friendly title from the command name
-            short_name = cmd_name
-            if short_name.startswith("speckit."):
-                short_name = short_name[len("speckit."):]
-            title_name = short_name.replace(".", " ").replace("-", " ").title()
-
-            skill_content = (
-                f"---\n"
-                f"{frontmatter_text}\n"
-                f"---\n\n"
-                f"# {title_name} Skill\n\n"
-                f"{body}\n"
-            )
-
-            skill_file.write_text(skill_content, encoding="utf-8")
-            written.append(skill_name)
-
-        return written
-
-    def _unregister_extension_skills(self, skill_names: List[str], extension_id: str) -> None:
-        """Remove SKILL.md directories for extension skills.
-
-        Called during extension removal to clean up skill files that
-        were created by ``_register_extension_skills()``.
-
-        If ``_get_skills_dir()`` returns ``None`` (e.g. the user removed
-        init-options.json or toggled ai_skills after installation), we
-        fall back to scanning all known agent skills directories so that
-        orphaned skill directories are still cleaned up. In that case
-        each candidate directory is verified against the SKILL.md
-        ``metadata.source`` field before removal to avoid accidentally
-        deleting user-created skills with the same name.
-
-        Args:
-            skill_names: List of skill names to remove.
-            extension_id: Extension ID used to verify ownership during
-                fallback candidate scanning.
-        """
-        if not skill_names:
-            return
-
-        skills_dir = self._get_skills_dir()
-
-        if skills_dir:
-            # Fast path: we know the exact skills directory
-            for skill_name in skill_names:
-                # Guard against path traversal from a corrupted registry entry:
-                # reject names that are absolute, contain path separators, or
-                # resolve to a path outside the skills directory.
-                sn_path = Path(skill_name)
-                if sn_path.is_absolute() or len(sn_path.parts) != 1:
-                    continue
-                try:
-                    skill_subdir = (skills_dir / skill_name).resolve()
-                    skill_subdir.relative_to(skills_dir.resolve())  # raises if outside
-                except (OSError, ValueError):
-                    continue
-                if not skill_subdir.is_dir():
-                    continue
-                # Safety check: only delete if SKILL.md exists and its
-                # metadata.source matches exactly this extension — mirroring
-                # the fallback branch — so a corrupted registry entry cannot
-                # delete an unrelated user skill.
-                skill_md = skill_subdir / "SKILL.md"
-                if not skill_md.is_file():
-                    continue
-                try:
-                    import yaml as _yaml
-                    raw = skill_md.read_text(encoding="utf-8")
-                    source = ""
-                    if raw.startswith("---"):
-                        parts = raw.split("---", 2)
-                        if len(parts) >= 3:
-                            fm = _yaml.safe_load(parts[1]) or {}
-                            source = (
-                                fm.get("metadata", {}).get("source", "")
-                                if isinstance(fm, dict)
-                                else ""
-                            )
-                    if source != f"extension:{extension_id}":
-                        continue
-                except (OSError, UnicodeDecodeError, Exception):
-                    continue
-                shutil.rmtree(skill_subdir)
-        else:
-            # Fallback: scan all possible agent skills directories
-            from . import AGENT_CONFIG, AGENT_SKILLS_DIR_OVERRIDES, DEFAULT_SKILLS_DIR
-
-            candidate_dirs: set[Path] = set()
-            for override_path in AGENT_SKILLS_DIR_OVERRIDES.values():
-                candidate_dirs.add(self.project_root / override_path)
-            for cfg in AGENT_CONFIG.values():
-                folder = cfg.get("folder", "")
-                if folder:
-                    candidate_dirs.add(self.project_root / folder.rstrip("/") / "skills")
-            candidate_dirs.add(self.project_root / DEFAULT_SKILLS_DIR)
-
-            for skills_candidate in candidate_dirs:
-                if not skills_candidate.is_dir():
-                    continue
-                for skill_name in skill_names:
-                    # Same path-traversal guard as the fast path above
-                    sn_path = Path(skill_name)
-                    if sn_path.is_absolute() or len(sn_path.parts) != 1:
-                        continue
-                    try:
-                        skill_subdir = (skills_candidate / skill_name).resolve()
-                        skill_subdir.relative_to(skills_candidate.resolve())  # raises if outside
-                    except (OSError, ValueError):
-                        continue
-                    if not skill_subdir.is_dir():
-                        continue
-                    # Safety check: only delete if SKILL.md exists and its
-                    # metadata.source matches exactly this extension. If the
-                    # file is missing or unreadable we skip to avoid deleting
-                    # unrelated user-created directories.
-                    skill_md = skill_subdir / "SKILL.md"
-                    if not skill_md.is_file():
-                        continue
-                    try:
-                        import yaml as _yaml
-                        raw = skill_md.read_text(encoding="utf-8")
-                        source = ""
-                        if raw.startswith("---"):
-                            parts = raw.split("---", 2)
-                            if len(parts) >= 3:
-                                fm = _yaml.safe_load(parts[1]) or {}
-                                source = (
-                                    fm.get("metadata", {}).get("source", "")
-                                    if isinstance(fm, dict)
-                                    else ""
-                                )
-                        # Only remove skills explicitly created by this extension
-                        if source != f"extension:{extension_id}":
-                            continue
-                    except (OSError, UnicodeDecodeError, Exception):
-                        # If we can't verify, skip to avoid accidental deletion
-                        continue
-                    shutil.rmtree(skill_subdir)
-
     def check_compatibility(
         self,
         manifest: ExtensionManifest,
@@ -883,10 +601,6 @@ class ExtensionManager:
             manifest, dest_dir, self.project_root
         )
 
-        # Auto-register extension commands as agent skills when --ai-skills
-        # was used during project initialisation (feature parity).
-        registered_skills = self._register_extension_skills(manifest, dest_dir)
-
         # Register hooks
         hook_executor = HookExecutor(self.project_root)
         hook_executor.register_hooks(manifest)
@@ -898,8 +612,7 @@ class ExtensionManager:
             "manifest_hash": manifest.get_hash(),
             "enabled": True,
             "priority": priority,
-            "registered_commands": registered_commands,
-            "registered_skills": registered_skills,
+            "registered_commands": registered_commands
         })
 
         return manifest
@@ -977,15 +690,9 @@ class ExtensionManager:
         if not self.registry.is_installed(extension_id):
             return False
 
-        # Get registered commands and skills before removal
+        # Get registered commands before removal
        metadata = self.registry.get(extension_id)
         registered_commands = metadata.get("registered_commands", {}) if metadata else {}
-        raw_skills = metadata.get("registered_skills", []) if metadata else []
-        # Normalize: must be a list of plain strings to avoid corrupted-registry errors
-        if isinstance(raw_skills, list):
-            registered_skills = [s for s in raw_skills if isinstance(s, str)]
-        else:
-            registered_skills = []
 
         extension_dir = self.extensions_dir / extension_id
 
@@ -994,9 +701,6 @@ class ExtensionManager:
         registrar = CommandRegistrar()
         registrar.unregister_commands(registered_commands, self.project_root)
 
-        # Unregister agent skills
-        self._unregister_extension_skills(registered_skills, extension_id)
-
         if keep_config:
             # Preserve config files, only remove non-config files
             if extension_dir.exists():
@@ -44,7 +44,7 @@ Load only the minimal necessary context from each artifact:
 
 - Overview/Context
 - Functional Requirements
-- Success Criteria (measurable outcomes — e.g., performance, security, availability, user success, business impact)
+- Non-Functional Requirements
 - User Stories
 - Edge Cases (if present)
 
@@ -71,7 +71,7 @@ Load only the minimal necessary context from each artifact:
 
 Create internal representations (do not include raw artifacts in output):
 
-- **Requirements inventory**: For each Functional Requirement (FR-###) and Success Criterion (SC-###), record a stable key. Use the explicit FR-/SC- identifier as the primary key when present, and optionally also derive an imperative-phrase slug for readability (e.g., "User can upload file" → `user-can-upload-file`). Include only Success Criteria items that require buildable work (e.g., load-testing infrastructure, security audit tooling), and exclude post-launch outcome metrics and business KPIs (e.g., "Reduce support tickets by 50%").
+- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
 - **User story/action inventory**: Discrete user actions with acceptance criteria
 - **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
 - **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
@@ -105,7 +105,7 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
 
 - Requirements with zero associated tasks
 - Tasks with no mapped requirement/story
-- Success Criteria requiring buildable work (performance, security, availability) not reflected in tasks
+- Non-functional requirements not reflected in tasks (e.g., performance, security)
 
 #### F. Inconsistency
 
@@ -145,7 +145,7 @@ Execution steps:
 - Functional ambiguity → Update or add a bullet in Functional Requirements.
 - User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
 - Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
-- Non-functional constraint → Add/modify measurable criteria in Success Criteria > Measurable Outcomes (convert vague adjective to metric or explicit target).
+- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
 - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
 - Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
 - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
@@ -1,640 +0,0 @@
|
|||||||
"""
|
|
||||||
Unit tests for extension skill auto-registration.
|
|
||||||
|
|
||||||
Tests cover:
|
|
||||||
- SKILL.md generation when --ai-skills was used during init
|
|
||||||
- No skills created when ai_skills not active
|
|
||||||
- SKILL.md content correctness
|
|
||||||
- Existing user-modified skills not overwritten
|
|
||||||
- Skill cleanup on extension removal
|
|
||||||
- Registry metadata includes registered_skills
|
|
||||||
"""
|
|
||||||
|
|
||||||
import json
|
|
||||||
import pytest
|
|
||||||
import tempfile
|
|
||||||
import shutil
|
|
||||||
import yaml
|
|
||||||
from pathlib import Path
|
|
||||||
|
|
||||||
from specify_cli.extensions import (
|
|
||||||
ExtensionManifest,
|
|
||||||
ExtensionManager,
|
|
||||||
ExtensionError,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
# ===== Helpers =====
|
|
||||||
|
|
||||||
def _create_init_options(project_root: Path, ai: str = "claude", ai_skills: bool = True):
|
|
||||||
"""Write a .specify/init-options.json file."""
|
|
||||||
opts_dir = project_root / ".specify"
|
|
||||||
opts_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
opts_file = opts_dir / "init-options.json"
|
|
||||||
opts_file.write_text(json.dumps({
|
|
||||||
"ai": ai,
|
|
||||||
"ai_skills": ai_skills,
|
|
||||||
"script": "sh",
|
|
||||||
}))
|
|
||||||
|
|
||||||
|
|
||||||
def _create_skills_dir(project_root: Path, ai: str = "claude") -> Path:
|
|
||||||
"""Create and return the expected skills directory for the given agent."""
|
|
||||||
# Match the logic in _get_skills_dir() from specify_cli
|
|
||||||
from specify_cli import AGENT_CONFIG, AGENT_SKILLS_DIR_OVERRIDES, DEFAULT_SKILLS_DIR
|
|
||||||
|
|
||||||
if ai in AGENT_SKILLS_DIR_OVERRIDES:
|
|
||||||
skills_dir = project_root / AGENT_SKILLS_DIR_OVERRIDES[ai]
|
|
||||||
else:
|
|
||||||
agent_config = AGENT_CONFIG.get(ai, {})
|
|
||||||
agent_folder = agent_config.get("folder", "")
|
|
||||||
if agent_folder:
|
|
||||||
skills_dir = project_root / agent_folder.rstrip("/") / "skills"
|
|
||||||
else:
|
|
||||||
skills_dir = project_root / DEFAULT_SKILLS_DIR
|
|
||||||
|
|
||||||
skills_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
return skills_dir
|
|
||||||
|
|
||||||
|
|
||||||
def _create_extension_dir(temp_dir: Path, ext_id: str = "test-ext") -> Path:
    """Create a complete extension directory with manifest and command files."""
    ext_dir = temp_dir / ext_id
    ext_dir.mkdir()

    manifest_data = {
        "schema_version": "1.0",
        "extension": {
            "id": ext_id,
            "name": "Test Extension",
            "version": "1.0.0",
            "description": "A test extension for skill registration",
        },
        "requires": {
            "speckit_version": ">=0.1.0",
        },
        "provides": {
            "commands": [
                {
                    "name": f"speckit.{ext_id}.hello",
                    "file": "commands/hello.md",
                    "description": "Test hello command",
                },
                {
                    "name": f"speckit.{ext_id}.world",
                    "file": "commands/world.md",
                    "description": "Test world command",
                },
            ]
        },
    }

    with open(ext_dir / "extension.yml", "w") as f:
        yaml.dump(manifest_data, f)

    commands_dir = ext_dir / "commands"
    commands_dir.mkdir()

    (commands_dir / "hello.md").write_text(
        "---\n"
        "description: \"Test hello command\"\n"
        "---\n"
        "\n"
        "# Hello Command\n"
        "\n"
        "Run this to say hello.\n"
        "$ARGUMENTS\n"
    )

    (commands_dir / "world.md").write_text(
        "---\n"
        "description: \"Test world command\"\n"
        "---\n"
        "\n"
        "# World Command\n"
        "\n"
        "Run this to greet the world.\n"
    )

    return ext_dir

# ===== Fixtures =====


@pytest.fixture
def temp_dir():
    """Create a temporary directory for tests."""
    tmpdir = tempfile.mkdtemp()
    yield Path(tmpdir)
    shutil.rmtree(tmpdir)


@pytest.fixture
def project_dir(temp_dir):
    """Create a mock spec-kit project directory."""
    proj_dir = temp_dir / "project"
    proj_dir.mkdir()

    # Create .specify directory
    specify_dir = proj_dir / ".specify"
    specify_dir.mkdir()

    return proj_dir


@pytest.fixture
def extension_dir(temp_dir):
    """Create a complete extension directory."""
    return _create_extension_dir(temp_dir)


@pytest.fixture
def skills_project(project_dir):
    """Create a project with --ai-skills enabled and skills directory."""
    _create_init_options(project_dir, ai="claude", ai_skills=True)
    skills_dir = _create_skills_dir(project_dir, ai="claude")
    return project_dir, skills_dir


@pytest.fixture
def no_skills_project(project_dir):
    """Create a project without --ai-skills."""
    _create_init_options(project_dir, ai="claude", ai_skills=False)
    return project_dir

# ===== ExtensionManager._get_skills_dir Tests =====


class TestExtensionManagerGetSkillsDir:
    """Test _get_skills_dir() on ExtensionManager."""

    def test_returns_skills_dir_when_active(self, skills_project):
        """Should return skills dir when ai_skills is true and dir exists."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        result = manager._get_skills_dir()
        assert result == skills_dir

    def test_returns_none_when_no_ai_skills(self, no_skills_project):
        """Should return None when ai_skills is false."""
        manager = ExtensionManager(no_skills_project)
        result = manager._get_skills_dir()
        assert result is None

    def test_returns_none_when_no_init_options(self, project_dir):
        """Should return None when init-options.json is missing."""
        manager = ExtensionManager(project_dir)
        result = manager._get_skills_dir()
        assert result is None

    def test_returns_none_when_skills_dir_missing(self, project_dir):
        """Should return None when skills dir doesn't exist on disk."""
        _create_init_options(project_dir, ai="claude", ai_skills=True)
        # Don't create the skills directory
        manager = ExtensionManager(project_dir)
        result = manager._get_skills_dir()
        assert result is None

# ===== Extension Skill Registration Tests =====


class TestExtensionSkillRegistration:
    """Test _register_extension_skills() on ExtensionManager."""

    def test_skills_created_when_ai_skills_active(self, skills_project, extension_dir):
        """Skills should be created when ai_skills is enabled."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Check that skill directories were created
        skill_dirs = sorted([d.name for d in skills_dir.iterdir() if d.is_dir()])
        assert "speckit-test-ext.hello" in skill_dirs
        assert "speckit-test-ext.world" in skill_dirs

    def test_skill_md_content_correct(self, skills_project, extension_dir):
        """SKILL.md should have correct agentskills.io structure."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        skill_file = skills_dir / "speckit-test-ext.hello" / "SKILL.md"
        assert skill_file.exists()
        content = skill_file.read_text()

        # Check structure
        assert content.startswith("---\n")
        assert "name: speckit-test-ext.hello" in content
        assert "description:" in content
        assert "Test hello command" in content
        assert "source: extension:test-ext" in content
        assert "author: github-spec-kit" in content
        assert "compatibility:" in content
        assert "Run this to say hello." in content

    def test_skill_md_has_parseable_yaml(self, skills_project, extension_dir):
        """Generated SKILL.md should contain valid, parseable YAML frontmatter."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        skill_file = skills_dir / "speckit-test-ext.hello" / "SKILL.md"
        content = skill_file.read_text()

        assert content.startswith("---\n")
        parts = content.split("---", 2)
        assert len(parts) >= 3
        parsed = yaml.safe_load(parts[1])
        assert isinstance(parsed, dict)
        assert parsed["name"] == "speckit-test-ext.hello"
        assert "description" in parsed

    def test_no_skills_when_ai_skills_disabled(self, no_skills_project, extension_dir):
        """No skills should be created when ai_skills is false."""
        manager = ExtensionManager(no_skills_project)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Verify registry
        metadata = manager.registry.get(manifest.id)
        assert metadata["registered_skills"] == []

    def test_no_skills_when_init_options_missing(self, project_dir, extension_dir):
        """No skills should be created when init-options.json is absent."""
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        metadata = manager.registry.get(manifest.id)
        assert metadata["registered_skills"] == []

    def test_existing_skill_not_overwritten(self, skills_project, extension_dir):
        """Pre-existing SKILL.md should not be overwritten."""
        project_dir, skills_dir = skills_project

        # Pre-create a custom skill
        custom_dir = skills_dir / "speckit-test-ext.hello"
        custom_dir.mkdir(parents=True)
        custom_content = "# My Custom Hello Skill\nUser-modified content\n"
        (custom_dir / "SKILL.md").write_text(custom_content)

        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Custom skill should be untouched
        assert (custom_dir / "SKILL.md").read_text() == custom_content

        # But the other skill should still be created
        metadata = manager.registry.get(manifest.id)
        assert "speckit-test-ext.world" in metadata["registered_skills"]
        # The pre-existing one should NOT be in registered_skills (it was skipped)
        assert "speckit-test-ext.hello" not in metadata["registered_skills"]

    def test_registered_skills_in_registry(self, skills_project, extension_dir):
        """Registry should contain registered_skills list."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        metadata = manager.registry.get(manifest.id)
        assert "registered_skills" in metadata
        assert len(metadata["registered_skills"]) == 2
        assert "speckit-test-ext.hello" in metadata["registered_skills"]
        assert "speckit-test-ext.world" in metadata["registered_skills"]

    def test_kimi_uses_dot_notation(self, project_dir, temp_dir):
        """Kimi agent should use dot notation for skill names."""
        _create_init_options(project_dir, ai="kimi", ai_skills=True)
        _create_skills_dir(project_dir, ai="kimi")
        ext_dir = _create_extension_dir(temp_dir, ext_id="test-ext")

        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            ext_dir, "0.1.0", register_commands=False
        )

        metadata = manager.registry.get(manifest.id)
        # Kimi should use dots, not hyphens
        assert "speckit.test-ext.hello" in metadata["registered_skills"]
        assert "speckit.test-ext.world" in metadata["registered_skills"]

    def test_missing_command_file_skipped(self, skills_project, temp_dir):
        """Commands with missing source files should be skipped gracefully."""
        project_dir, skills_dir = skills_project

        ext_dir = temp_dir / "missing-cmd-ext"
        ext_dir.mkdir()
        manifest_data = {
            "schema_version": "1.0",
            "extension": {
                "id": "missing-cmd-ext",
                "name": "Missing Cmd Extension",
                "version": "1.0.0",
                "description": "Test",
            },
            "requires": {"speckit_version": ">=0.1.0"},
            "provides": {
                "commands": [
                    {
                        "name": "speckit.missing-cmd-ext.exists",
                        "file": "commands/exists.md",
                        "description": "Exists",
                    },
                    {
                        "name": "speckit.missing-cmd-ext.ghost",
                        "file": "commands/ghost.md",
                        "description": "Does not exist",
                    },
                ]
            },
        }
        with open(ext_dir / "extension.yml", "w") as f:
            yaml.dump(manifest_data, f)

        (ext_dir / "commands").mkdir()
        (ext_dir / "commands" / "exists.md").write_text(
            "---\ndescription: Exists\n---\n\n# Exists\n\nBody.\n"
        )
        # Intentionally do NOT create ghost.md

        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            ext_dir, "0.1.0", register_commands=False
        )

        metadata = manager.registry.get(manifest.id)
        assert "speckit-missing-cmd-ext.exists" in metadata["registered_skills"]
        assert "speckit-missing-cmd-ext.ghost" not in metadata["registered_skills"]

# ===== Extension Skill Unregistration Tests =====


class TestExtensionSkillUnregistration:
    """Test _unregister_extension_skills() on ExtensionManager."""

    def test_skills_removed_on_extension_remove(self, skills_project, extension_dir):
        """Removing an extension should clean up its skill directories."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Verify skills exist
        assert (skills_dir / "speckit-test-ext.hello" / "SKILL.md").exists()
        assert (skills_dir / "speckit-test-ext.world" / "SKILL.md").exists()

        # Remove extension
        result = manager.remove(manifest.id, keep_config=False)
        assert result is True

        # Skills should be gone
        assert not (skills_dir / "speckit-test-ext.hello").exists()
        assert not (skills_dir / "speckit-test-ext.world").exists()

    def test_other_skills_preserved_on_remove(self, skills_project, extension_dir):
        """Non-extension skills should not be affected by extension removal."""
        project_dir, skills_dir = skills_project

        # Pre-create a custom skill
        custom_dir = skills_dir / "my-custom-skill"
        custom_dir.mkdir(parents=True)
        (custom_dir / "SKILL.md").write_text("# My Custom Skill\n")

        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        manager.remove(manifest.id, keep_config=False)

        # Custom skill should still exist
        assert (custom_dir / "SKILL.md").exists()
        assert (custom_dir / "SKILL.md").read_text() == "# My Custom Skill\n"

    def test_remove_handles_already_deleted_skills(self, skills_project, extension_dir):
        """Gracefully handle case where skill dirs were already deleted."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Manually delete skill dirs before calling remove
        shutil.rmtree(skills_dir / "speckit-test-ext.hello")
        shutil.rmtree(skills_dir / "speckit-test-ext.world")

        # Should not raise
        result = manager.remove(manifest.id, keep_config=False)
        assert result is True

    def test_remove_no_skills_when_not_active(self, no_skills_project, extension_dir):
        """Removal without active skills should not attempt skill cleanup."""
        manager = ExtensionManager(no_skills_project)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Should not raise even though no skills exist
        result = manager.remove(manifest.id, keep_config=False)
        assert result is True

# ===== Command File Without Frontmatter =====


class TestExtensionSkillEdgeCases:
    """Test edge cases in extension skill registration."""

    def test_command_without_frontmatter(self, skills_project, temp_dir):
        """Commands without YAML frontmatter should still produce valid skills."""
        project_dir, skills_dir = skills_project

        ext_dir = temp_dir / "nofm-ext"
        ext_dir.mkdir()
        manifest_data = {
            "schema_version": "1.0",
            "extension": {
                "id": "nofm-ext",
                "name": "No Frontmatter Extension",
                "version": "1.0.0",
                "description": "Test",
            },
            "requires": {"speckit_version": ">=0.1.0"},
            "provides": {
                "commands": [
                    {
                        "name": "speckit.nofm-ext.plain",
                        "file": "commands/plain.md",
                        "description": "Plain command",
                    }
                ]
            },
        }
        with open(ext_dir / "extension.yml", "w") as f:
            yaml.dump(manifest_data, f)

        (ext_dir / "commands").mkdir()
        (ext_dir / "commands" / "plain.md").write_text(
            "# Plain Command\n\nBody without frontmatter.\n"
        )

        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            ext_dir, "0.1.0", register_commands=False
        )

        skill_file = skills_dir / "speckit-nofm-ext.plain" / "SKILL.md"
        assert skill_file.exists()
        content = skill_file.read_text()
        assert "name: speckit-nofm-ext.plain" in content
        # Fallback description when no frontmatter description
        assert "Extension command: speckit.nofm-ext.plain" in content
        assert "Body without frontmatter." in content

    def test_gemini_agent_skills(self, project_dir, temp_dir):
        """Gemini agent should use .gemini/skills/ for skill directory."""
        _create_init_options(project_dir, ai="gemini", ai_skills=True)
        _create_skills_dir(project_dir, ai="gemini")
        ext_dir = _create_extension_dir(temp_dir, ext_id="test-ext")

        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            ext_dir, "0.1.0", register_commands=False
        )

        skills_dir = project_dir / ".gemini" / "skills"
        assert (skills_dir / "speckit-test-ext.hello" / "SKILL.md").exists()
        assert (skills_dir / "speckit-test-ext.world" / "SKILL.md").exists()

    def test_multiple_extensions_independent_skills(self, skills_project, temp_dir):
        """Installing and removing different extensions should be independent."""
        project_dir, skills_dir = skills_project

        ext_dir_a = _create_extension_dir(temp_dir, ext_id="ext-a")
        ext_dir_b = _create_extension_dir(temp_dir, ext_id="ext-b")

        manager = ExtensionManager(project_dir)
        manifest_a = manager.install_from_directory(
            ext_dir_a, "0.1.0", register_commands=False
        )
        manifest_b = manager.install_from_directory(
            ext_dir_b, "0.1.0", register_commands=False
        )

        # Both should have skills
        assert (skills_dir / "speckit-ext-a.hello" / "SKILL.md").exists()
        assert (skills_dir / "speckit-ext-b.hello" / "SKILL.md").exists()

        # Remove ext-a
        manager.remove("ext-a", keep_config=False)

        # ext-a skills gone, ext-b skills preserved
        assert not (skills_dir / "speckit-ext-a.hello").exists()
        assert (skills_dir / "speckit-ext-b.hello" / "SKILL.md").exists()

    def test_malformed_frontmatter_handled(self, skills_project, temp_dir):
        """Commands with invalid YAML frontmatter should still produce valid skills."""
        project_dir, skills_dir = skills_project

        ext_dir = temp_dir / "badfm-ext"
        ext_dir.mkdir()
        manifest_data = {
            "schema_version": "1.0",
            "extension": {
                "id": "badfm-ext",
                "name": "Bad Frontmatter Extension",
                "version": "1.0.0",
                "description": "Test",
            },
            "requires": {"speckit_version": ">=0.1.0"},
            "provides": {
                "commands": [
                    {
                        "name": "speckit.badfm-ext.broken",
                        "file": "commands/broken.md",
                        "description": "Broken frontmatter",
                    }
                ]
            },
        }
        with open(ext_dir / "extension.yml", "w") as f:
            yaml.dump(manifest_data, f)

        (ext_dir / "commands").mkdir()
        # Malformed YAML: invalid key-value syntax
        (ext_dir / "commands" / "broken.md").write_text(
            "---\n"
            "description: [invalid yaml\n"
            " unclosed: bracket\n"
            "---\n"
            "\n"
            "# Broken Command\n"
            "\n"
            "This body should still be used.\n"
        )

        manager = ExtensionManager(project_dir)
        # Should not raise
        manifest = manager.install_from_directory(
            ext_dir, "0.1.0", register_commands=False
        )

        skill_file = skills_dir / "speckit-badfm-ext.broken" / "SKILL.md"
        assert skill_file.exists()
        content = skill_file.read_text()
        # Fallback description since frontmatter was invalid
        assert "Extension command: speckit.badfm-ext.broken" in content
        assert "This body should still be used." in content

    def test_remove_cleans_up_when_init_options_deleted(self, skills_project, extension_dir):
        """Skills should be cleaned up even if init-options.json is deleted after install."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Verify skills exist
        assert (skills_dir / "speckit-test-ext.hello" / "SKILL.md").exists()

        # Delete init-options.json to simulate user change
        init_opts = project_dir / ".specify" / "init-options.json"
        init_opts.unlink()

        # Remove should still clean up via fallback scan
        result = manager.remove(manifest.id, keep_config=False)
        assert result is True
        assert not (skills_dir / "speckit-test-ext.hello").exists()
        assert not (skills_dir / "speckit-test-ext.world").exists()

    def test_remove_cleans_up_when_ai_skills_toggled(self, skills_project, extension_dir):
        """Skills should be cleaned up even if ai_skills is toggled to false after install."""
        project_dir, skills_dir = skills_project
        manager = ExtensionManager(project_dir)
        manifest = manager.install_from_directory(
            extension_dir, "0.1.0", register_commands=False
        )

        # Verify skills exist
        assert (skills_dir / "speckit-test-ext.hello" / "SKILL.md").exists()

        # Toggle ai_skills to false
        _create_init_options(project_dir, ai="claude", ai_skills=False)

        # Remove should still clean up via fallback scan
        result = manager.remove(manifest.id, keep_config=False)
        assert result is True
        assert not (skills_dir / "speckit-test-ext.hello").exists()
        assert not (skills_dir / "speckit-test-ext.world").exists()
@@ -1170,12 +1170,8 @@ class TestPresetCatalog:
         assert not catalog.cache_file.exists()
         assert not catalog.cache_metadata_file.exists()
 
-    def test_search_with_cached_data(self, project_dir, monkeypatch):
+    def test_search_with_cached_data(self, project_dir):
         """Test search with cached catalog data."""
-        from unittest.mock import patch
-
-        # Only use the default catalog to prevent fetching the community catalog from the network
-        monkeypatch.setenv("SPECKIT_PRESET_CATALOG_URL", PresetCatalog.DEFAULT_CATALOG_URL)
         catalog = PresetCatalog(project_dir)
         catalog.cache_dir.mkdir(parents=True, exist_ok=True)
 
@@ -1204,26 +1200,23 @@ class TestPresetCatalog:
             "cached_at": datetime.now(timezone.utc).isoformat(),
         }))
 
-        # Isolate from community catalog so results are deterministic
-        default_only = [PresetCatalogEntry(url=catalog.DEFAULT_CATALOG_URL, name="default", priority=1, install_allowed=True)]
-        with patch.object(catalog, "get_active_catalogs", return_value=default_only):
-            # Search by query
-            results = catalog.search(query="agile")
-            assert len(results) == 1
-            assert results[0]["id"] == "safe-agile"
+        # Search by query
+        results = catalog.search(query="agile")
+        assert len(results) == 1
+        assert results[0]["id"] == "safe-agile"
 
         # Search by tag
        results = catalog.search(tag="hipaa")
         assert len(results) == 1
         assert results[0]["id"] == "healthcare"
 
         # Search by author
         results = catalog.search(author="agile-community")
         assert len(results) == 1
 
         # Search all
         results = catalog.search()
         assert len(results) == 2
 
     def test_get_pack_info(self, project_dir):
         """Test getting info for a specific pack."""