Compare commits

...

42 Commits

Author SHA1 Message Date
github-actions[bot]
8e733cd563 chore: bump version to 0.2.1 2026-03-11 22:14:12 +00:00
Manfred Riem
ec60c5b2fe Added February 2026 newsletter (#1812) 2026-03-11 15:33:30 -05:00
Fanch Daniel
e56d37db8c feat: add Kimi Code CLI agent support (#1790)
* feat: add Kimi Code (kimi) CLI agent support

- Register kimi in AGENT_CONFIG with folder `.kimi/`, markdown format, requires_cli=True
- Register kimi in CommandRegistrar.AGENT_CONFIGS
- Add kimi to supported agents table in AGENTS.md and README.md
- Add kimi to release packaging scripts (bash and PowerShell)
- Add kimi CLI installation to devcontainer post-create script
- Add kimi support to update-agent-context scripts (bash and PowerShell)
- Add 4 consistency tests covering all kimi integration surfaces
- Bump version to 0.1.14 and update CHANGELOG

* fix: include .specify/templates/ and real command files in release ZIPs

- Copy real command files from templates/commands/ (with speckit. prefix)
  instead of generating stubs, so slash commands have actual content
- Add .specify/templates/ to every ZIP so ensure_constitution_from_template
  can find constitution-template.md on init
- Add .vscode/settings.json to every ZIP
- Having 3 top-level dirs prevents the extraction flatten heuristic from
  incorrectly stripping the agent config folder (.kimi/, .claude/, etc.)
- Bump version to 0.1.14.1

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(kimi): use .kimi/skills/<name>/SKILL.md structure for Kimi Code CLI

Kimi Code CLI uses a skills system, not flat command files:
- Skills live in .kimi/skills/<name>/SKILL.md (project-level)
- Invoked with /skill:<name> (e.g. /skill:speckit.specify)
- Each skill is a directory containing SKILL.md with YAML frontmatter

Changes:
- AGENT_CONFIG["kimi"]["commands_subdir"] = "skills" (was "commands")
- create-release-packages.sh: new create_kimi_skills() function creates
  skill directories with SKILL.md from real template content
- Bump version to 0.1.14.2

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(test): align kimi commands_subdir assertion with skills structure

* fix: use forward slashes for tabnine path in create-release-packages.ps1

* fix: align kimi to .kimi/skills convention and fix ARGUMENTS unbound variable

* fix: address PR review comments for kimi agent support

  - Fix VERSION_NO_V undefined variable in create-github-release.sh
  - Restore version $1 argument handling in create-release-packages.sh
  - Fix tabnine/vibe/generic cases calling undefined generate_commands
  - Align roo path .roo/rules -> .roo/commands with AGENT_CONFIG
  - Fix kimi extension to use per-skill SKILL.md directory structure
  - Add parent mkdir before dest_file.write_text for nested paths
  - Restore devcontainer tools removed by regression + add Kimi CLI
  - Strengthen test_kimi_in_powershell_validate_set assertion

* fix: restore release scripts and address all PR review comments

  - Restore create-release-packages.sh to original with full generate_commands/
    rewrite_paths logic; add kimi case using create_kimi_skills function
  - Restore create-release-packages.ps1 to original with full Generate-Commands/
    Rewrite-Paths logic; add kimi case using New-KimiSkills function
  - Restore create-github-release.sh to original with proper $1 argument
    handling and VERSION_NO_V; add kimi zip entries
  - Add test_ai_help_includes_kimi for consistency with other agents
  - Strengthen test_kimi_in_powershell_validate_set to check ValidateSet

* fix: address second round of PR review comments

  - Add __AGENT__ and {AGENT_SCRIPT} substitutions in create_kimi_skills (bash)
  - Add __AGENT__ and {AGENT_SCRIPT} substitutions in New-KimiSkills (PowerShell)
  - Replace curl|bash Kimi installer with pipx install kimi-cli in post-create.sh

* fix: align kimi skill naming and add extension registrar test

  - Fix install_ai_skills() to use speckit.<cmd> naming for kimi (dot
    separator) instead of speckit-<cmd>, matching /skill:speckit.<cmd>
    invocation convention and packaging scripts
  - Add test_kimi_in_extension_registrar to verify CommandRegistrar.AGENT_CONFIGS
    includes kimi with correct dir and SKILL.md extension

* fix(test): align kimi skill name assertion with dot-separator convention

  test_skills_install_for_all_agents now expects speckit.specify (dot) for
  kimi and speckit-specify (hyphen) for all other agents, matching the
  install_ai_skills() implementation added in the previous commit.

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-11 07:57:18 -05:00
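
For readers unfamiliar with Kimi's layout, here is a minimal sketch of the per-skill structure this PR describes: one directory per skill under `.kimi/skills/`, each holding a `SKILL.md` with YAML frontmatter. The frontmatter keys and template content below are illustrative assumptions, not Kimi's documented schema:

```python
from pathlib import Path
import textwrap

def write_kimi_skill(root: Path, name: str, description: str, body: str) -> Path:
    """Write one Kimi Code skill as .kimi/skills/<name>/SKILL.md."""
    skill_dir = root / ".kimi" / "skills" / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_file = skill_dir / "SKILL.md"
    content = textwrap.dedent(f"""\
        ---
        name: {name}
        description: {description}
        ---

        {body}
        """)
    skill_file.write_text(content, encoding="utf-8")
    return skill_file

# Invoked inside Kimi Code as /skill:speckit.specify
write_kimi_skill(Path("."), "speckit.specify", "Create a feature specification", "...command template...")
```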
Dhilip
33e853e9c9 docs: fix broken links in quickstart guide (#1759) (#1797) 2026-03-11 07:51:04 -05:00
Dhilip
929fab5d98 docs: add catalog cli help documentation (#1793) (#1794)
* docs: add catalog cli help documentation (#1793)

* Fix code block formatting in user guide

Corrected code block syntax for CLI commands in user guide.

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-11 06:54:07 -05:00
LifeIsAnAbstraction
56095f06d2 fix: use quiet checkout to avoid exception on git checkout (#1792) 2026-03-10 15:41:27 -05:00
Ben Lawson
2632a0f52d feat(extensions): support .extensionignore to exclude files during install (#1781)
* feat(extensions): support .extensionignore to exclude files during install

Add .extensionignore support so extension authors can exclude files and
folders from being copied when users run 'specify extension add'.

The file uses glob-style patterns (one per line), supports comments (#),
blank lines, trailing-slash directory patterns, and relative path matching.
The .extensionignore file itself is always excluded from the copy.

- Add _load_extensionignore() to ExtensionManager
- Integrate ignore function into shutil.copytree in install_from_directory
- Document .extensionignore in EXTENSION-DEVELOPMENT-GUIDE.md
- Add 6 tests covering all pattern matching scenarios
- Bump version to 0.1.14

* fix(extensions): use pathspec for gitignore-compatible .extensionignore matching

Replace fnmatch with pathspec.GitIgnoreSpec to get proper .gitignore
semantics where * does not cross directory boundaries. This addresses
review feedback on #1781.

Changes:
- Switch from fnmatch to pathspec>=0.12.0 (GitIgnoreSpec.from_lines)
- Normalize backslashes in patterns for cross-platform compatibility
- Distinguish directories from files for trailing-slash patterns
- Update docs to accurately describe supported pattern semantics
- Add edge-case tests: .., absolute paths, empty file, backslashes,
  * vs ** boundary behavior, and ! negation
- Move changelog entry to [Unreleased] section
2026-03-10 12:02:04 -05:00
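
The pathspec-based matching described above condenses to a short sketch. This is not the actual ExtensionManager code, just a minimal illustration (helper names invented) of wiring `GitIgnoreSpec` into `shutil.copytree`:

```python
import shutil
from pathlib import Path

import pathspec  # pathspec>=0.12.0, per the commit

def load_extensionignore(src: Path):
    """Parse .extensionignore with .gitignore semantics; None if absent."""
    ignore_file = src / ".extensionignore"
    if not ignore_file.is_file():
        return None
    return pathspec.GitIgnoreSpec.from_lines(
        ignore_file.read_text(encoding="utf-8").splitlines()
    )

def copy_extension(src: Path, dest: Path) -> None:
    spec = load_extensionignore(src)

    def ignore(dirpath, names):
        skipped = {".extensionignore"}  # the ignore file itself is always excluded
        if spec is None:
            return skipped
        for name in names:
            entry = Path(dirpath) / name
            rel = entry.relative_to(src).as_posix()
            # Append "/" for directories so trailing-slash patterns can match
            if spec.match_file(rel + "/" if entry.is_dir() else rel):
                skipped.add(name)
        return skipped

    shutil.copytree(src, dest, ignore=ignore)
```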
Adrián Luján Muñoz
4ab91fbadf feat: add Codex support for extension command registration (#1767)
* feat: add Codex support for extension command registration

* test: add codex command registrar mapping check

* test: add codex consistency check to test_agent_config_consistency

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 09:50:42 -05:00
Manfred Riem
5c0bedb410 chore: bump version to 0.2.0 (#1786)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-09 14:54:08 -05:00
Pavel-tabnine
d92798d5b0 fix: sync agent list comments with actual supported agents (#1785)
Several comment and documentation strings were not updated when
Mistral Vibe support was added, leaving them out of sync with the
code. This fixes:
- update-agent-context.sh: add Generic to Supports header comment
- update-agent-context.ps1: add generic to Multi-Agent header comment
- README.md: add Mistral Vibe to CLI tool-check text
2026-03-09 14:34:54 -05:00
Copilot
ee922cbde9 feat(extensions): support multiple active catalogs simultaneously (#1720)
* Initial plan

* feat(extensions): implement multi-catalog stack support

- Add CatalogEntry dataclass to represent catalog entries
- Add get_active_catalogs() reading SPECKIT_CATALOG_URL, project config,
  user config, or built-in default stack (org-approved + community)
- Add _load_catalog_config() to parse .specify/extension-catalogs.yml
- Add _validate_catalog_url() HTTPS validation helper
- Add _fetch_single_catalog() with per-URL caching, backward-compat for DEFAULT_CATALOG_URL
- Add _get_merged_extensions() that merges all catalogs (priority wins on conflict)
- Update search() and get_extension_info() to use merged results
  annotated with _catalog_name and _install_allowed
- Update clear_cache() to also remove per-URL hash cache files
- Add extension_catalogs CLI command to list active catalogs
- Add catalog add/remove sub-commands for .specify/extension-catalogs.yml
- Update extension_add to enforce install_allowed=false policy
- Update extension_search to show source catalog per result
- Update extension_info to show source catalog with install_allowed status
- Add 13 new tests covering catalog stack, merge conflict resolution,
  install_allowed enforcement, and catalog metadata

Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>

* docs: update RFC, user guide, and API reference for multi-catalog support

- RFC: replace FUTURE FEATURE section with full implementation docs,
  add catalog stack resolution order, config file examples, merge
  conflict resolution, and install_allowed behavior
- EXTENSION-USER-GUIDE.md: add multi-catalog section with CLI examples
  for catalogs/catalog-add/catalog-remove, update catalog config docs
- EXTENSION-API-REFERENCE.md: add CatalogEntry class docs, update
  ExtensionCatalog docs with new methods and result annotations,
  add catalog CLI commands (catalogs, catalog add, catalog remove)

Also fix extension_catalogs command to correctly show "Using built-in
default catalog stack" when config file exists but has empty catalogs

Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>

* Potential fix for pull request finding 'Empty except'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* fix: remove extraneous f-string prefixes (ruff F541)

Remove f-prefix from strings with no placeholders in catalog_remove
and extension_search commands.

Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>

* fix: address PR review feedback for multi-catalog support

- Rename 'org-approved' catalog to 'default'
- Move 'catalogs' command to 'catalog list' for consistency
- Add 'description' field to CatalogEntry dataclass
- Add --description option to 'catalog add' CLI command
- Align install_allowed default to False in _load_catalog_config
- Add user-level config detection in catalog list footer
- Fix _load_catalog_config docstring (document ValidationError)
- Fix test isolation for test_search_by_tag, test_search_by_query,
  test_search_verified_only, test_get_extension_info
- Update version to 0.1.14 and CHANGELOG
- Update all docs (RFC, User Guide, API Reference)

* fix: wrap _load_catalog_config() calls in catalog_list with try/except

- Check SPECKIT_CATALOG_URL first (matching get_active_catalogs() resolution order)
- Wrap both _load_catalog_config() calls in try/except ValidationError so a
  malformed config file cannot crash `specify extension catalog list` after
  the active catalogs have already been printed successfully

Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-03-09 14:30:27 -05:00
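
A rough sketch of the merge semantics this PR describes: catalogs are resolved in priority order (environment variable, then project config, then the built-in default stack), and the first catalog to define an extension ID wins. Field names mirror the commit message; the URLs and config shapes are placeholders, not the real catalog locations:

```python
import os
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str
    url: str
    install_allowed: bool = False
    description: str = ""

DEFAULT_STACK = [  # placeholder URLs for illustration only
    CatalogEntry("default", "https://example.com/catalog.json", install_allowed=True),
    CatalogEntry("community", "https://example.com/catalog.community.json", install_allowed=True),
]

def get_active_catalogs(project_config=None):
    env_url = os.environ.get("SPECKIT_CATALOG_URL")
    if env_url:
        return [CatalogEntry("env", env_url, install_allowed=True)]
    return project_config or DEFAULT_STACK

def merge_extensions(catalogs_with_extensions):
    """catalogs_with_extensions: [(CatalogEntry, {ext_id: info}), ...] in priority order."""
    merged = {}
    for catalog, extensions in catalogs_with_extensions:
        for ext_id, info in extensions.items():
            if ext_id not in merged:  # higher-priority catalog wins on conflict
                merged[ext_id] = {**info,
                                  "_catalog_name": catalog.name,
                                  "_install_allowed": catalog.install_allowed}
    return merged
```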
Pavel-tabnine
1df24f1953 Pavel/add tabnine cli support (#1503)
* feat: add Tabnine CLI agent support

Tabnine CLI is a Gemini fork that uses TOML commands with the
.tabnine/agent/ directory structure and TABNINE.md context files.

Changes:
- Add 'tabnine' to AGENT_CONFIG in __init__.py
- Update release scripts (bash + PowerShell) for TOML command generation
- Update agent context scripts (bash + PowerShell)
- Add to GitHub release packages
- Update README.md and AGENTS.md documentation
- Bump version to 0.1.14
- Add 8 new tests for cross-file consistency

* fix: add missing generic to agent-context script usage string
2026-03-09 14:04:02 -05:00
LADISLAV BIHARI
3033834d64 Add Understanding extension to community catalog (#1778)
* Add Understanding extension to community catalog

31 deterministic requirements quality metrics based on IEEE/ISO standards.
Catches ambiguity, missing testability, and structural issues before implementation.
Includes experimental energy-based ambiguity detection.

Repository: https://github.com/Testimonial/understanding
Commands: scan, validate, energy
Hook: after_tasks validation prompt

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Sort README table and catalog entries alphabetically

Move Understanding extension entry between Spec Sync and V-Model
to maintain alphabetical ordering in both files.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Ladislav Bihari <ladislav.bihari@statsperform.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 10:18:41 -05:00
Ben Lawson
4b00078907 Add ralph extension to community catalog (#1780)
- Extension ID: ralph
- Version: 1.0.0
- Author: Rubiss
- Description: Autonomous implementation loop using AI agent CLI
2026-03-09 10:01:34 -05:00
Lautaro Lubatti
2d72f85790 Update README with project initialization instructions (#1772)
* Update README with project initialization instructions

Added instructions for creating a new project and initializing in an existing project.

* Update README.md with alternative one-time usage command for existing projects

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Added --ai option to prevent interactive AI selection

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-09 09:56:49 -05:00
Ismael
855ac838b8 feat: add review extension to community catalog (#1775)
Add spec-kit-review to catalog.community.json
and the Available Community Extensions table in extensions/README.md.

Co-authored-by: Ismael <ismael.jimenez-martinez@bmw.de>
2026-03-09 09:10:59 -05:00
Sharath Satish
a8ec87e3c2 Add fleet extension to community catalog (#1771)
- Extension ID: fleet
- Version: 1.0.0
- Author: sharathsatish
- Description: Orchestrate a full feature lifecycle with human-in-the-loop gates across all SpecKit phases
2026-03-09 08:13:36 -05:00
Gaël
9d6c05ad5b Integration of Mistral vibe support into speckit (#1725)
* Add Mistral Vibe support to Spec Kit

This commit adds comprehensive support for Mistral Vibe as an AI agent in the
Spec Kit project. The integration includes:

- Added Mistral Vibe to AGENT_CONFIG with proper CLI tool configuration
- Updated README.md with Mistral Vibe in supported agents table and examples
- Modified release package scripts to generate Mistral Vibe templates
- Updated both bash and PowerShell agent context update scripts
- Added appropriate CLI help text and documentation

Mistral Vibe is now fully supported with the same level of integration as
other CLI-based agents like Claude Code, Gemini CLI, etc.

Generated by Mistral Vibe.
Co-Authored-By: Mistral Vibe <vibe@mistral.ai>

* Add Mistral Vibe support to Spec Kit

- Added Mistral Vibe (vibe) to AGENT_CONFIG with proper TOML format support
- Updated CLI help text to include vibe as a valid AI assistant option
- Added Mistral Vibe to release scripts with correct .vibe/agents/ directory structure
- Updated agent context scripts (bash and PowerShell) with proper TOML file paths
- Added Mistral Vibe to README.md supported agents table with v2.0 slash command notes
- Used correct argument syntax {{args}} for Mistral Vibe TOML configurations

Mistral Vibe is now fully integrated with the same level of support as other
CLI-based agents like Gemini and Qwen. Users can now use specify init --ai vibe
to create projects with Mistral Vibe support.

Generated by Mistral Vibe.
Co-Authored-By: Mistral Vibe <vibe@mistral.ai>

* Add Vibe templates to GitHub release script

creation of Mistral vibe zip

* Add 'vibe' agent to release package script

* Add 'vibe' to the list of agents in create-release-packages.sh

* chore: bump version to v1.0.1 [skip ci]

* Add generic spec kit templates to release script

* chore: bump version to v1.0.2 [skip ci]

* Update project version to 0.1.5

* Add generic spec kit templates to release script

* Add 'generic' and 'qodercli' to agent list to be aligned

* Update supported agents in update-agent-context.sh to be aligned

* Update README with new AI assistant options to be aligned

* Document --ai-commands-dir option in README to be aligned

Added new option for AI commands directory in README.

* Fix formatting in README.md for init arguments to be aligned

* Update README with AI assistant options to be aligned

Added AI options to specify init arguments in README.

* Fix formatting in README.md for project-name argument

* Update expected agent types in update-agent-context.sh to be aligned

* Update agent types and usage in update-agent-context.ps1 to be aligned

* Add support for generic AI assistant configuration to be aligned

* Fix formatting in __init__.py clean space

* Update AI assistant options in init command to be aligned

* Add 'qodercli' to expected agent types to be aligned

* Remove 'vibe' case from release package script

Removed the 'vibe' case from the create-release-packages script.

* Update README.md

ok for this

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .github/workflows/scripts/create-release-packages.ps1

ok to commit

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Add commands_subdir key to Mistral Vibe configuration

* Rename specify-agents.toml to specify-agents.md

* Update scripts/bash/update-agent-context.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .github/workflows/scripts/create-release-packages.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/specify_cli/__init__.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/specify_cli/__init__.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Fix duplicate 'commands_subdir' in vibe configuration

Removed duplicate 'commands_subdir' entries for 'vibe'.

* Add support for 'vibe' command in release script

add an mkdir and generate command

* Change commands_subdir from 'commands' to 'prompts'

* Update README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update update-agent-context.ps1

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update create-release-packages.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update create-release-packages.ps1

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update agent list in update-agent-context.sh

Kiro

---------

Co-authored-by: Lénaïc Huard <lenaic@lhuard.fr>
Co-authored-by: Mistral Vibe <vibe@mistral.ai>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-09 08:05:13 -05:00
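
The TOML command format referenced throughout this PR can be illustrated with a small generator. The key names and the `$ARGUMENTS` to `{{args}}` substitution here are assumptions pieced together from the commit messages, not Mistral Vibe's documented schema:

```python
from pathlib import Path

def write_vibe_command(root: Path, name: str, description: str, template: str) -> Path:
    """Emit one Mistral Vibe slash command as a TOML file under .vibe/prompts/."""
    cmd_dir = root / ".vibe" / "prompts"  # commands_subdir ended up as "prompts"
    cmd_dir.mkdir(parents=True, exist_ok=True)
    cmd_file = cmd_dir / f"{name}.toml"
    body = template.replace("$ARGUMENTS", "{{args}}")  # Vibe's argument placeholder
    cmd_file.write_text(
        f'description = "{description}"\n\nprompt = """\n{body}\n"""\n',
        encoding="utf-8",
    )
    return cmd_file
```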
Ryo Hasegawa
3ef12cae3e fix: Remove duplicate options in specify.md (#1765) 2026-03-09 07:38:10 -05:00
Fabián Silva
8618d0a53e fix: use global branch numbering instead of per-short-name detection (#1757)
* fix: remove per-short-name number detection from specify prompt

The specify.md prompt instructed the AI to search for existing branches
filtered by the exact short-name, causing every new feature to start at
001 since no branches matched the new short-name. The underlying
create-new-feature.sh script already has correct global numbering logic
via check_existing_branches() that searches ALL branches and spec
directories.

The fix removes the AI's flawed number-detection steps and tells it to
NOT pass --number, letting the script auto-detect the next globally
available number.

Closes #1744
Closes #1468

🤖 Generated with [Claude Code](https://claude.com/code)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: clarify --json flag requirement per Copilot review

- Rephrased step 2 to mention both --short-name and --json flags
- Added explicit note to always include the JSON flag for reliable output parsing

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 07:37:46 -05:00
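
The numbering rule the fix relies on lives in `create-new-feature.sh`'s `check_existing_branches()`; here is an illustrative Python rendering of that rule (scan all branches and spec directories for a leading `NNN-` prefix, take the global maximum, add one), not the script itself:

```python
import re
import subprocess
from pathlib import Path

def next_feature_number(specs_dir: Path = Path("specs")) -> int:
    prefix = re.compile(r"^(\d{3})-")
    names = []

    # ALL branches (local and remote), not just those matching the new short-name
    result = subprocess.run(
        ["git", "branch", "-a", "--format=%(refname:short)"],
        capture_output=True, text=True, check=True,
    )
    names += [ref.split("/")[-1] for ref in result.stdout.split()]

    # Plus every existing spec directory
    if specs_dir.is_dir():
        names += [p.name for p in specs_dir.iterdir() if p.is_dir()]

    numbers = [int(m.group(1)) for m in map(prefix.match, names) if m]
    return max(numbers, default=0) + 1
```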
Copilot
71e6b4da4a Add Community Walkthroughs section to README (#1766)
* Initial plan

* Add Community Walkthroughs section to README.md

Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mnriem <15701806+mnriem@users.noreply.github.com>
2026-03-05 12:55:29 -06:00
Michal Bachorik
ad74334a85 feat(extensions): add Jira Integration to community catalog (#1764)
* feat(extensions): add Jira Integration to community catalog

Adds the spec-kit-jira extension to the community catalog.

## Extension Details
- **Name**: Jira Integration
- **Version**: 2.1.0
- **Repository**: https://github.com/mbachorik/spec-kit-jira

## Features
- Create Jira Epics, Stories, and Issues from spec-kit artifacts
- 3-level hierarchy (Epic → Stories → Tasks) or 2-level mode
- Configurable custom field support
- Status synchronization between local tasks and Jira

## Commands
- `/speckit.jira.specstoissues` - Create Jira hierarchy from spec and tasks
- `/speckit.jira.discover-fields` - Discover Jira custom fields
- `/speckit.jira.sync-status` - Sync task completion to Jira

---
This PR was prepared with the assistance of Claude (Anthropic).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: address PR review comments

- Set created_at to catalog submission date (2026-03-05)
- Add Jira Integration to Available Community Extensions table in README

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: iamaeroplane <michal.bachorik@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-03-05 07:21:38 -06:00
Pragya Chaurasia
8c3982d65b Add Azure DevOps Integration extension to community catalog (#1734)
* Add Azure DevOps work item synchronization with handoffs system

* Resolving the comments

* Added details of Azure DevOps extension

* Revert "Add Azure DevOps work item synchronization with handoffs system"

This reverts commit 39ac7e48d6.

* Update extensions/README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update extensions/catalog.community.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: pragya247 <pragya@microsoft.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-05 07:11:51 -06:00
layla
13dec1de05 Fix docs: update Antigravity link and add initialization example (#1748)
* Fix docs: update Antigravity link and add initialization example

* Update README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Manfred Riem <15701806+mnriem@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-04 08:29:52 -06:00
Manfred Riem
d0a112c60f fix: wire after_tasks and after_implement hook events into command templates (#1702)
* fix: wire after_tasks and after_implement hook events into command templates (#1701)

The HookExecutor backend in extensions.py was fully implemented but
check_hooks_for_event() was never called by anything — the core command
templates had no instructions to check .specify/extensions.yml.

Add a final step to templates/commands/tasks.md (step 6, after_tasks) and
templates/commands/implement.md (step 10, after_implement) that instructs
the AI agent to:

- Read .specify/extensions.yml if it exists
- Filter hooks.{event} to enabled: true entries
- Evaluate any condition fields and skip non-matching hooks
- Output the RFC-specified hook message format, including
  EXECUTE_COMMAND: markers for mandatory (optional: false) hooks

Bumps version to 0.1.7.

* fix: clarify hook condition handling and add YAML error guidance in templates

- Replace ambiguous "evaluate any condition value" instruction with explicit
  guidance to skip hooks with non-empty conditions, deferring evaluation to
  HookExecutor
- Add instruction to skip hook checking silently if extensions.yml cannot
  be parsed or is invalid

* Fix/extension hooks not triggered (#1)

* feat(templates): implement before-hooks check as pre-execution phase

* test(hooks): create scenario for LLMs/Agents on hooks

---------

Co-authored-by: Dhilip <s.dhilipkumar@gmail.com>
2026-03-04 08:16:31 -06:00
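
A sketch of the hook check the templates now spell out, written in Python for concreteness. Field names (`enabled`, `condition`, `optional`, `command`) follow the commit message; the authoritative schema is the extensions RFC, and in practice the AI agent performs these steps from prose instructions rather than code:

```python
from pathlib import Path

import yaml  # PyYAML

def hooks_for_event(event: str, config: Path = Path(".specify/extensions.yml")):
    if not config.is_file():
        return []
    try:
        data = yaml.safe_load(config.read_text(encoding="utf-8")) or {}
    except yaml.YAMLError:
        return []  # per the follow-up fix: skip silently when the file is invalid

    selected = []
    for hook in (data.get("hooks") or {}).get(event, []):
        if not hook.get("enabled", False):
            continue
        if hook.get("condition"):
            continue  # defer non-empty conditions to HookExecutor
        selected.append(hook)
    return selected

# Mandatory hooks (optional: false) get the EXECUTE_COMMAND: marker
for hook in hooks_for_event("after_tasks"):
    marker = "" if hook.get("optional", True) else "EXECUTE_COMMAND: "
    print(f"{marker}{hook.get('command', '')}")
```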
Dhilip
c84756b7f3 make c ignores consistent with c++ (#1747) 2026-03-04 08:08:04 -06:00
Manfred Riem
524affca79 chore: bump version to 0.1.13 (#1746)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-03 16:08:40 -06:00
Medhat Galal
32c6e7f40c feat: add kiro-cli and AGENT_CONFIG consistency coverage (#1690)
* feat: add kiro-cli and AGENT_CONFIG consistency coverage

* fix: address PR feedback for kiro-cli migration

* test: assert init invocation result in --here mode test

* test: capture init result in here-mode command test

* chore: save local unapproved work in progress

* fix: resolve remaining PR1690 review threads

* Update src/specify_cli/__init__.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* test: reduce brittleness in ai help alias assertion

* fix: resolve PR1690 ruff syntax regression

---------

Co-authored-by: Manfred Riem <15701806+mnriem@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-03 12:04:46 -06:00
Ismael
9cf33e81cc feat: add verify extension to community catalog (#1726)
* feat: add verify extension to community catalog

* Add verify to extension readme

---------

Co-authored-by: Ismael <ismael.jimenez-martinez@bmw.de>
2026-03-03 11:21:06 -06:00
Emi
254e9bbdf0 Add Retrospective Extension to community catalog README table (#1741)
* Add retrospective extension to community catalog.

Register the new retrospective extension release so users can discover and install it from the community catalog.

Co-authored-by: Cursor <cursoragent@cursor.com>

* Add Retrospective Extension to community catalog

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-03-03 09:59:53 -06:00
Zaheer Uddin
6757c90dbd fix(scripts): add empty description validation and branch checkout error handling (#1559)
* fix(scripts): add empty description validation and branch checkout error handling

Adds two critical improvements to both PowerShell and Bash feature creation scripts:

1. Post-trim validation: Prevents creating features with whitespace-only descriptions
2. Branch checkout error handling: Provides clear error messages when branch
   creation fails (e.g., branch already exists) instead of silently continuing

Co-authored-by: Augment Agent <noreply@augmentcode.com>

* fix(scripts): use consistent stderr redirection for branch checkout

---------

Co-authored-by: Augment Agent <noreply@augmentcode.com>
2026-03-03 09:42:46 -06:00
Ismael
f6264d4ef4 fix: correct Copilot extension command registration (#1724)
* fix: correct Copilot extension command registration (#copilot)

- Use .agent.md extension for commands in .github/agents/
- Generate companion .prompt.md files in .github/prompts/
- Clean up .prompt.md files on extension removal
- Add tests for Copilot-specific registration behavior

Bumps version to 0.1.7.

* fix: test copilot prompt cleanup via ExtensionManager.remove() instead of manual unlink

---------

Co-authored-by: Ismael <ismael.jimenez-martinez@bmw.de>
2026-03-03 09:03:38 -06:00
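
A minimal sketch of the paired-file layout this fix establishes. The companion `.prompt.md` is assumed here to mirror the command body; the real registrar generates both from templates:

```python
from pathlib import Path

def register_copilot_command(root: Path, name: str, body: str) -> None:
    agents = root / ".github" / "agents"
    prompts = root / ".github" / "prompts"
    agents.mkdir(parents=True, exist_ok=True)
    prompts.mkdir(parents=True, exist_ok=True)
    (agents / f"{name}.agent.md").write_text(body, encoding="utf-8")
    (prompts / f"{name}.prompt.md").write_text(body, encoding="utf-8")

def unregister_copilot_command(root: Path, name: str) -> None:
    # Removal must clean up the companion .prompt.md as well
    (root / ".github" / "agents" / f"{name}.agent.md").unlink(missing_ok=True)
    (root / ".github" / "prompts" / f"{name}.prompt.md").unlink(missing_ok=True)
```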
Zaheer Uddin
dd8dbf6344 fix(implement): remove Makefile from C ignore patterns (#1558)
* fix(implement): remove Makefile from C ignore patterns

Makefile is typically source-controlled and should not be in .gitignore.
Replaced with proper autotools-generated files (autom4te.cache/, config.status).

Co-authored-by: Augment Agent <noreply@augmentcode.com>

* fix(implement): restore config.log in C ignore patterns

---------

Co-authored-by: Augment Agent <noreply@augmentcode.com>
2026-03-03 08:28:40 -06:00
Barry Gervin
bf8fb125ad Add sync extension to community catalog (#1728)
* Add sync extension to community catalog

- Extension ID: sync
- Version: 0.1.0
- Author: bgervin
- Description: Detect and resolve drift between specs and implementation

* fix: use main branch in URLs per Copilot review

* Reorder community extensions table alphabetically

Per Copilot review feedback and EXTENSION-PUBLISHING-GUIDE.md

---------

Co-authored-by: Barry Gervin <bgervin@hotmail.com>
2026-03-03 08:18:18 -06:00
Zaheer Uddin
2b92ab444d fix(checklist): clarify file handling behavior for append vs create (#1556)
* fix(checklist): clarify file handling behavior for append vs create

Resolves contradictory instructions in checklist.md lines 97-99 where the
template stated both 'append to existing file' and 'creates a NEW file'.

Changes:
- Clarified that if file doesn't exist, create new with CHK001
- If file exists, append new items continuing from last CHK ID
- Emphasized preservation of existing content (never delete/replace)

Co-authored-by: Augment Agent <noreply@augmentcode.com>

* fix(checklist): align report language with append behavior

---------

Co-authored-by: Augment Agent <noreply@augmentcode.com>
2026-03-03 07:52:02 -06:00
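
The clarified rule reduces to a small ID-continuation algorithm. A sketch follows; the real behavior is prose in checklist.md executed by the agent, and the item format here is assumed:

```python
import re
from pathlib import Path

def append_checklist_items(path: Path, items: list[str]) -> None:
    existing = path.read_text(encoding="utf-8") if path.exists() else ""
    # Continue from the highest CHK ID already present; start at CHK001 otherwise
    next_id = max((int(m) for m in re.findall(r"CHK(\d{3})", existing)), default=0) + 1
    lines = [f"- [ ] CHK{next_id + i:03d} {item}" for i, item in enumerate(items)]
    with path.open("a", encoding="utf-8") as f:  # append only; never rewrite existing items
        if existing and not existing.endswith("\n"):
            f.write("\n")
        f.write("\n".join(lines) + "\n")
```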
Zaheer Uddin
abe1b7085c fix(clarify): correct conflicting question limit from 10 to 5 (#1557)
Resolves inconsistency where line 92 stated 'Maximum of 10 total questions'
while all other references (lines 2, 91, 100, 134, 158, 178) consistently
specify a maximum of 5 questions.

Co-authored-by: Augment Agent <noreply@augmentcode.com>
2026-03-03 07:46:31 -06:00
Manfred Riem
bfaca2c449 chore: bump version to 0.1.12 (#1737)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-02 13:35:43 -06:00
Manfred Riem
78ed453e38 fix: use RELEASE_PAT so tag push triggers release workflow (#1736) 2026-03-02 13:31:18 -06:00
Manfred Riem
658ab2a38c fix: release-trigger uses release branch + PR instead of direct push to main (#1733)
* fix: use release branch + PR instead of direct push to main

Bypass branch protection rules by pushing version bump to a
chore/release-vX.Y.Z branch, tagging that commit, then opening
an auto PR to merge back into main. The release workflow still
triggers immediately from the tag push.

* fix: remove --label automated from gh pr create (label does not exist)
2026-03-02 13:16:13 -06:00
Manfred Riem
2c41d3627e fix: Split release process to sync pyproject.toml version with git tags (#1732)
* fix: split release process to sync pyproject.toml version with git tags (#1721)

- Split release workflow into two: release-trigger.yml and release.yml
- release-trigger.yml: Updates pyproject.toml, generates changelog from commits, creates tag
- release.yml: Triggered by tag push, builds artifacts, creates GitHub release
- Ensures git tags point to commits with correct version in pyproject.toml
- Auto-generates changelog from commit messages since last tag
- Supports manual version input or auto-increment patch version
- Added simulate-release.sh for local testing without pushing
- Added comprehensive RELEASE-PROCESS.md documentation
- Updated pyproject.toml to v0.1.10 to sync with latest release

This fixes the version mismatch issue where tags pointed to commits with
outdated pyproject.toml versions, preventing confusion when installing from source.

* Update .github/workflows/RELEASE-PROCESS.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .github/workflows/scripts/simulate-release.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .github/workflows/release.yml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .github/workflows/release-trigger.yml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: harden release-trigger against shell injection and fix stale docs

- Pass workflow_dispatch version input via env: instead of direct
  interpolation into shell script, preventing potential injection attacks
- Validate version input against strict semver regex before use
- Fix RELEASE-PROCESS.md Option 2 still referencing [Unreleased] section
  handling that no longer exists in the workflow

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-02 12:52:13 -06:00
Fabián Silva
b55d00beed fix: prepend YAML frontmatter to Cursor .mdc files (#1699)
* fix: prepend YAML frontmatter to Cursor .mdc files for auto-inclusion

Cursor IDE requires YAML frontmatter with `alwaysApply: true` in .mdc
rule files for them to be automatically loaded. Without this frontmatter,
users must manually configure glob patterns for the rules to take effect.

This fix adds frontmatter generation to both the bash and PowerShell
update-agent-context scripts, handling three scenarios:
- New .mdc file creation (frontmatter prepended after template processing)
- Existing .mdc file update without frontmatter (frontmatter added)
- Existing .mdc file with frontmatter (no duplication)

Closes #669

🤖 Generated with [Claude Code](https://claude.com/code)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: address Copilot review suggestions

- Handle CRLF line endings in frontmatter detection (grep '^---' instead
  of '^---$')
- Fix double blank line after frontmatter in PowerShell New-AgentFile
- Remove unused tempfile import from tests
- Add encoding="utf-8" to all open() calls for cross-platform safety

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 08:20:00 -06:00
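
The three scenarios reduce to one idempotent operation. A Python sketch for illustration only: the actual fix lives in the bash and PowerShell update-agent-context scripts, and this detection is a simplification of their CRLF-tolerant `^---` grep:

```python
from pathlib import Path

FRONTMATTER = "---\nalwaysApply: true\n---\n\n"

def ensure_mdc_frontmatter(mdc_path: Path, new_body: str = "") -> None:
    if not mdc_path.exists():
        # New file: prepend frontmatter after template processing
        mdc_path.write_text(FRONTMATTER + new_body, encoding="utf-8")
        return
    content = mdc_path.read_text(encoding="utf-8")
    if content.lstrip("\ufeff\r\n").startswith("---"):
        return  # frontmatter already present: do not duplicate it
    # Existing file without frontmatter: add it once
    mdc_path.write_text(FRONTMATTER + content, encoding="utf-8")
```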
dependabot[bot]
525eae7f7e chore(deps): bump astral-sh/setup-uv from 6 to 7 (#1709)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 6 to 7.
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](https://github.com/astral-sh/setup-uv/compare/v6...v7)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-27 08:09:43 -06:00
45 changed files with 4376 additions and 346 deletions

View File

@@ -50,8 +50,6 @@
         "kilocode.Kilo-Code",
         // Roo Code
         "RooVeterinaryInc.roo-cline",
-        // Amazon Developer Q
-        "AmazonWebServices.amazon-q-vscode",
         // Claude Code
         "anthropic.claude-code"
       ],

View File

@@ -8,15 +8,15 @@ run_command() {
   local command_to_run="$*"
   local output
   local exit_code

   # Capture all output (stdout and stderr)
   output=$(eval "$command_to_run" 2>&1) || exit_code=$?
   exit_code=${exit_code:-0}

   if [ $exit_code -ne 0 ]; then
     echo -e "\033[0;31m[ERROR] Command failed (Exit Code $exit_code): $command_to_run\033[0m" >&2
     echo -e "\033[0;31m$output\033[0m" >&2
     exit $exit_code
   fi
 }
@@ -51,32 +51,38 @@ echo -e "\n🤖 Installing OpenCode CLI..."
 run_command "npm install -g opencode-ai@latest"
 echo "✅ Done"
-echo -e "\n🤖 Installing Amazon Q CLI..."
-# 👉🏾 https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-verify-download.html
-run_command "curl --proto '=https' --tlsv1.2 -sSf 'https://desktop-release.q.us-east-1.amazonaws.com/latest/q-x86_64-linux.zip' -o 'q.zip'"
-run_command "curl --proto '=https' --tlsv1.2 -sSf 'https://desktop-release.q.us-east-1.amazonaws.com/latest/q-x86_64-linux.zip.sig' -o 'q.zip.sig'"
-cat > amazonq-public-key.asc << 'EOF'
------BEGIN PGP PUBLIC KEY BLOCK-----
-mDMEZig60RYJKwYBBAHaRw8BAQdAy/+G05U5/EOA72WlcD4WkYn5SInri8pc4Z6D
-BKNNGOm0JEFtYXpvbiBRIENMSSBUZWFtIDxxLWNsaUBhbWF6b24uY29tPoiZBBMW
-CgBBFiEEmvYEF+gnQskUPgPsUNx6jcJMVmcFAmYoOtECGwMFCQPCZwAFCwkIBwIC
-IgIGFQoJCAsCBBYCAwECHgcCF4AACgkQUNx6jcJMVmef5QD/QWWEGG/cOnbDnp68
-SJXuFkwiNwlH2rPw9ZRIQMnfAS0A/0V6ZsGB4kOylBfc7CNfzRFGtovdBBgHqA6P
-zQ/PNscGuDgEZig60RIKKwYBBAGXVQEFAQEHQC4qleONMBCq3+wJwbZSr0vbuRba
-D1xr4wUPn4Avn4AnAwEIB4h+BBgWCgAmFiEEmvYEF+gnQskUPgPsUNx6jcJMVmcF
-AmYoOtECGwwFCQPCZwAACgkQUNx6jcJMVmchMgEA6l3RveCM0YHAGQaSFMkguoAo
-vK6FgOkDawgP0NPIP2oA/jIAO4gsAntuQgMOsPunEdDeji2t+AhV02+DQIsXZpoB
-=f8yY
------END PGP PUBLIC KEY BLOCK-----
-EOF
-run_command "gpg --batch --import amazonq-public-key.asc"
-run_command "gpg --verify q.zip.sig q.zip"
-run_command "unzip -q q.zip"
-run_command "chmod +x ./q/install.sh"
-run_command "./q/install.sh --no-confirm"
-run_command "rm -rf ./q q.zip q.zip.sig amazonq-public-key.asc"
+echo -e "\n🤖 Installing Kiro CLI..."
+# https://kiro.dev/docs/cli/
+KIRO_INSTALLER_URL="https://kiro.dev/install.sh"
+KIRO_INSTALLER_SHA256="7487a65cf310b7fb59b357c4b5e6e3f3259d383f4394ecedb39acf70f307cffb"
+KIRO_INSTALLER_PATH="$(mktemp)"
+cleanup_kiro_installer() {
+  rm -f "$KIRO_INSTALLER_PATH"
+}
+trap cleanup_kiro_installer EXIT
+run_command "curl -fsSL \"$KIRO_INSTALLER_URL\" -o \"$KIRO_INSTALLER_PATH\""
+run_command "echo \"$KIRO_INSTALLER_SHA256  $KIRO_INSTALLER_PATH\" | sha256sum -c -"
+run_command "bash \"$KIRO_INSTALLER_PATH\""
+kiro_binary=""
+if command -v kiro-cli >/dev/null 2>&1; then
+  kiro_binary="kiro-cli"
+elif command -v kiro >/dev/null 2>&1; then
+  kiro_binary="kiro"
+else
+  echo -e "\033[0;31m[ERROR] Kiro CLI installation did not create 'kiro-cli' or 'kiro' in PATH.\033[0m" >&2
+  exit 1
+fi
+run_command "$kiro_binary --help > /dev/null"
+echo "✅ Done"
+
+echo -e "\n🤖 Installing Kimi CLI..."
+# https://code.kimi.com
+run_command "pipx install kimi-cli"
 echo "✅ Done"
 echo -e "\n🤖 Installing CodeBuddy CLI..."

View File

@@ -8,7 +8,7 @@ body:
       value: |
         Thanks for requesting a new agent! Before submitting, please check if the agent is already supported.
-        **Currently supported agents**: Claude Code, Gemini CLI, GitHub Copilot, Cursor, Qwen Code, opencode, Codex CLI, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy, Qoder CLI, Amazon Q Developer CLI, Amp, SHAI, IBM Bob, Antigravity
+        **Currently supported agents**: Claude Code, Gemini CLI, GitHub Copilot, Cursor, Qwen Code, opencode, Codex CLI, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy, Qoder CLI, Kiro CLI, Amp, SHAI, IBM Bob, Antigravity
   - type: input
     id: agent-name

View File

@@ -75,7 +75,7 @@ body:
         - Roo Code
         - CodeBuddy
         - Qoder CLI
-        - Amazon Q Developer CLI
+        - Kiro CLI
         - Amp
         - SHAI
         - IBM Bob

View File

@@ -69,7 +69,7 @@ body:
         - Roo Code
         - CodeBuddy
         - Qoder CLI
-        - Amazon Q Developer CLI
+        - Kiro CLI
         - Amp
         - SHAI
         - IBM Bob

.github/workflows/RELEASE-PROCESS.md (new file, +191 lines)

@@ -0,0 +1,191 @@
# Release Process
This document describes the automated release process for Spec Kit.
## Overview
The release process is split into two workflows to ensure version consistency:
1. **Release Trigger Workflow** (`release-trigger.yml`) - Manages versioning and triggers release
2. **Release Workflow** (`release.yml`) - Builds and publishes artifacts
This separation ensures that git tags always point to commits with the correct version in `pyproject.toml`.
## Before Creating a Release
**Important**: Write clear, descriptive commit messages!
### How CHANGELOG.md Works
The CHANGELOG is **automatically generated** from your git commit messages:
1. **During Development**: Write clear, descriptive commit messages:
```bash
git commit -m "feat: Add new authentication feature"
git commit -m "fix: Resolve timeout issue in API client (#123)"
git commit -m "docs: Update installation instructions"
```
2. **When Releasing**: The release trigger workflow automatically:
- Finds all commits since the last release tag
- Formats them as changelog entries
- Inserts them into CHANGELOG.md
- Commits the updated changelog before creating the new tag
### Commit Message Best Practices
Good commit messages make good changelogs:
- **Be descriptive**: "Add user authentication" not "Update files"
- **Reference issues/PRs**: Include `(#123)` for automated linking
- **Use conventional commits** (optional): `feat:`, `fix:`, `docs:`, `chore:`
- **Keep it concise**: One line is ideal, details go in commit body
**Example commits that become good changelog entries:**
```
fix: prepend YAML frontmatter to Cursor .mdc files (#1699)
feat: add generic agent support with customizable command directories (#1639)
docs: document dual-catalog system for extensions (#1689)
```
## Creating a Release
### Option 1: Auto-Increment (Recommended for patches)
1. Go to **Actions** → **Release Trigger**
2. Click **Run workflow**
3. Leave the version field **empty**
4. Click **Run workflow**
The workflow will:
- Auto-increment the patch version (e.g., `0.1.10` → `0.1.11`)
- Update `pyproject.toml`
- Update `CHANGELOG.md` by adding a new section for the release based on commits since the last tag
- Commit changes to a `chore/release-vX.Y.Z` branch
- Create and push the git tag from that branch
- Open a PR to merge the version bump into `main`
- Trigger the release workflow automatically via the tag push
### Option 2: Manual Version (For major/minor bumps)
1. Go to **Actions** → **Release Trigger**
2. Click **Run workflow**
3. Enter the desired version (e.g., `0.2.0` or `v0.2.0`)
4. Click **Run workflow**
The workflow will:
- Use your specified version
- Update `pyproject.toml`
- Update `CHANGELOG.md` by adding a new section for the release based on commits since the last tag
- Commit changes to a `chore/release-vX.Y.Z` branch
- Create and push the git tag from that branch
- Open a PR to merge the version bump into `main`
- Trigger the release workflow automatically via the tag push
## What Happens Next
Once the release trigger workflow completes:
1. A `chore/release-vX.Y.Z` branch is pushed with the version bump commit
2. The git tag is pushed, pointing to that commit
3. The **Release Workflow** is automatically triggered by the tag push
4. Release artifacts are built for all supported agents
5. A GitHub Release is created with all assets
6. A PR is opened to merge the version bump branch into `main`
> **Note**: Merge the auto-opened PR after the release is published to keep `main` in sync.
## Workflow Details
### Release Trigger Workflow
**File**: `.github/workflows/release-trigger.yml`
**Trigger**: Manual (`workflow_dispatch`)
**Permissions Required**: `contents: write`
**Steps**:
1. Checkout repository
2. Determine version (manual or auto-increment)
3. Check if tag already exists (prevents duplicates)
4. Create `chore/release-vX.Y.Z` branch
5. Update `pyproject.toml`
6. Update `CHANGELOG.md` from git commits
7. Commit changes
8. Push branch and tag
9. Open PR to merge version bump into `main`
### Release Workflow
**File**: `.github/workflows/release.yml`
**Trigger**: Tag push (`v*`)
**Permissions Required**: `contents: write`
**Steps**:
1. Checkout repository at tag
2. Extract version from tag name
3. Check if release already exists
4. Build release package variants (all agents × shell/powershell)
5. Generate release notes from commits
6. Create GitHub Release with all assets
## Version Constraints
- Tags must follow format: `v{MAJOR}.{MINOR}.{PATCH}`
- Example valid versions: `v0.1.11`, `v0.2.0`, `v1.0.0`
- Auto-increment only bumps patch version
- Cannot create duplicate tags (workflow will fail)
## Benefits of This Approach
✅ **Version Consistency**: Git tags point to commits with matching `pyproject.toml` version
✅ **Single Source of Truth**: Version set once, used everywhere
✅ **Prevents Drift**: No more manual version synchronization needed
✅ **Clean Separation**: Versioning logic separate from artifact building
✅ **Flexibility**: Supports both auto-increment and manual versioning
## Troubleshooting
### No Commits Since Last Release
If you run the release trigger workflow when there are no new commits since the last tag:
- The workflow will still succeed
- The CHANGELOG will show "- Initial release" if it's the first release
- Or it will be empty if there are no commits
- Consider adding meaningful commits before releasing
**Best Practice**: Use descriptive commit messages - they become your changelog!
### Tag Already Exists
If you see "Error: Tag vX.Y.Z already exists!", you need to:
- Choose a different version number, or
- Delete the existing tag if it was created in error
### Release Workflow Didn't Trigger
Check that:
- The release trigger workflow completed successfully
- The tag was pushed (check repository tags)
- The release workflow is enabled in Actions settings
### Version Mismatch
If `pyproject.toml` doesn't match the latest tag:
- Run the release trigger workflow to sync versions
- Or manually update `pyproject.toml` and push changes before running the release trigger
## Legacy Behavior (Pre-v0.1.10)
Before this change, the release workflow:
- Created tags automatically on main branch pushes
- Updated `pyproject.toml` AFTER creating the tag
- Resulted in tags pointing to commits with outdated versions
This has been fixed in v0.1.10+.

.github/workflows/release-trigger.yml (new file, +161 lines)

@@ -0,0 +1,161 @@
name: Release Trigger

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to release (e.g., 0.1.11). Leave empty to auto-increment patch version.'
        required: false
        type: string

jobs:
  bump-version:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        with:
          fetch-depth: 0
          token: ${{ secrets.RELEASE_PAT }}

      - name: Configure Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"

      - name: Determine version
        id: version
        env:
          INPUT_VERSION: ${{ github.event.inputs.version }}
        run: |
          if [[ -n "$INPUT_VERSION" ]]; then
            # Manual version specified - strip optional v prefix
            VERSION="${INPUT_VERSION#v}"
            # Validate strict semver format to prevent injection
            if [[ ! "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
              echo "Error: Invalid version format '$VERSION'. Must be X.Y.Z (e.g. 1.2.3 or v1.2.3)"
              exit 1
            fi
            echo "version=$VERSION" >> $GITHUB_OUTPUT
            echo "tag=v$VERSION" >> $GITHUB_OUTPUT
            echo "Using manual version: $VERSION"
          else
            # Auto-increment patch version
            LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
            echo "Latest tag: $LATEST_TAG"
            # Extract version number and increment
            VERSION=$(echo $LATEST_TAG | sed 's/v//')
            IFS='.' read -ra VERSION_PARTS <<< "$VERSION"
            MAJOR=${VERSION_PARTS[0]:-0}
            MINOR=${VERSION_PARTS[1]:-0}
            PATCH=${VERSION_PARTS[2]:-0}
            # Increment patch version
            PATCH=$((PATCH + 1))
            NEW_VERSION="$MAJOR.$MINOR.$PATCH"
            echo "version=$NEW_VERSION" >> $GITHUB_OUTPUT
            echo "tag=v$NEW_VERSION" >> $GITHUB_OUTPUT
            echo "Auto-incremented version: $NEW_VERSION"
          fi

      - name: Check if tag already exists
        run: |
          if git rev-parse "${{ steps.version.outputs.tag }}" >/dev/null 2>&1; then
            echo "Error: Tag ${{ steps.version.outputs.tag }} already exists!"
            exit 1
          fi

      - name: Create release branch
        run: |
          BRANCH="chore/release-${{ steps.version.outputs.tag }}"
          git checkout -b "$BRANCH"
          echo "branch=$BRANCH" >> $GITHUB_ENV

      - name: Update pyproject.toml
        run: |
          sed -i "s/version = \".*\"/version = \"${{ steps.version.outputs.version }}\"/" pyproject.toml
          echo "Updated pyproject.toml to version ${{ steps.version.outputs.version }}"

      - name: Update CHANGELOG.md
        run: |
          if [ -f "CHANGELOG.md" ]; then
            DATE=$(date +%Y-%m-%d)
            # Get the previous tag to compare commits
            PREVIOUS_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
            echo "Generating changelog from commits..."
            if [[ -n "$PREVIOUS_TAG" ]]; then
              echo "Changes since $PREVIOUS_TAG"
              COMMITS=$(git log --oneline "$PREVIOUS_TAG"..HEAD --no-merges --pretty=format:"- %s" 2>/dev/null || echo "- Initial release")
            else
              echo "No previous tag found - this is the first release"
              COMMITS="- Initial release"
            fi
            # Create new changelog entry
            {
              head -n 8 CHANGELOG.md
              echo ""
              echo "## [${{ steps.version.outputs.version }}] - $DATE"
              echo ""
              echo "### Changed"
              echo ""
              echo "$COMMITS"
              echo ""
              tail -n +9 CHANGELOG.md
            } > CHANGELOG.md.tmp
            mv CHANGELOG.md.tmp CHANGELOG.md
            echo "✅ Updated CHANGELOG.md with commits since $PREVIOUS_TAG"
          else
            echo "No CHANGELOG.md found"
          fi

      - name: Commit version bump
        run: |
          if [ -f "CHANGELOG.md" ]; then
            git add pyproject.toml CHANGELOG.md
          else
            git add pyproject.toml
          fi
          if git diff --cached --quiet; then
            echo "No changes to commit"
          else
            git commit -m "chore: bump version to ${{ steps.version.outputs.version }}"
            echo "Changes committed"
          fi

      - name: Create and push tag
        run: |
          git tag -a "${{ steps.version.outputs.tag }}" -m "Release ${{ steps.version.outputs.tag }}"
          git push origin "${{ env.branch }}"
          git push origin "${{ steps.version.outputs.tag }}"
          echo "Branch ${{ env.branch }} and tag ${{ steps.version.outputs.tag }} pushed"

      - name: Open pull request
        env:
          GITHUB_TOKEN: ${{ secrets.RELEASE_PAT }}
        run: |
          gh pr create \
            --base main \
            --head "${{ env.branch }}" \
            --title "chore: bump version to ${{ steps.version.outputs.version }}" \
            --body "Automated version bump to ${{ steps.version.outputs.version }}.

          This PR was created by the Release Trigger workflow. The git tag \`${{ steps.version.outputs.tag }}\` has already been pushed and the release artifacts are being built.

          Merge this PR to record the version bump and changelog update on \`main\`."

      - name: Summary
        run: |
          echo "✅ Version bumped to ${{ steps.version.outputs.version }}"
          echo "✅ Tag ${{ steps.version.outputs.tag }} created and pushed"
          echo "✅ PR opened to merge version bump into main"
          echo "🚀 Release workflow is building artifacts from the tag"

View File

@@ -2,68 +2,60 @@ name: Create Release
 on:
   push:
-    branches: [ main ]
-    paths:
-      - 'memory/**'
-      - 'scripts/**'
-      - 'src/**'
-      - 'templates/**'
-      - '.github/workflows/**'
-  workflow_dispatch:
+    tags:
+      - 'v*'
 jobs:
   release:
     runs-on: ubuntu-latest
     permissions:
       contents: write
-      pull-requests: write
     steps:
       - name: Checkout repository
         uses: actions/checkout@v6
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
-      - name: Get latest tag
-        id: get_tag
+      - name: Extract version from tag
+        id: version
         run: |
-          chmod +x .github/workflows/scripts/get-next-version.sh
-          .github/workflows/scripts/get-next-version.sh
+          VERSION=${GITHUB_REF#refs/tags/}
+          echo "tag=$VERSION" >> $GITHUB_OUTPUT
+          echo "Building release for $VERSION"
       - name: Check if release already exists
         id: check_release
         run: |
           chmod +x .github/workflows/scripts/check-release-exists.sh
-          .github/workflows/scripts/check-release-exists.sh ${{ steps.get_tag.outputs.new_version }}
+          .github/workflows/scripts/check-release-exists.sh ${{ steps.version.outputs.tag }}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       - name: Create release package variants
         if: steps.check_release.outputs.exists == 'false'
         run: |
           chmod +x .github/workflows/scripts/create-release-packages.sh
-          .github/workflows/scripts/create-release-packages.sh ${{ steps.get_tag.outputs.new_version }}
+          .github/workflows/scripts/create-release-packages.sh ${{ steps.version.outputs.tag }}
       - name: Generate release notes
         if: steps.check_release.outputs.exists == 'false'
         id: release_notes
         run: |
           chmod +x .github/workflows/scripts/generate-release-notes.sh
-          .github/workflows/scripts/generate-release-notes.sh ${{ steps.get_tag.outputs.new_version }} ${{ steps.get_tag.outputs.latest_tag }}
+          # Get the previous tag for changelog generation
+          PREVIOUS_TAG=$(git describe --tags --abbrev=0 ${{ steps.version.outputs.tag }}^ 2>/dev/null || echo "")
+          # Default to v0.0.0 if no previous tag is found (e.g., first release)
+          if [ -z "$PREVIOUS_TAG" ]; then
+            PREVIOUS_TAG="v0.0.0"
+          fi
+          .github/workflows/scripts/generate-release-notes.sh ${{ steps.version.outputs.tag }} "$PREVIOUS_TAG"
       - name: Create GitHub Release
         if: steps.check_release.outputs.exists == 'false'
         run: |
           chmod +x .github/workflows/scripts/create-github-release.sh
-          .github/workflows/scripts/create-github-release.sh ${{ steps.get_tag.outputs.new_version }}
+          .github/workflows/scripts/create-github-release.sh ${{ steps.version.outputs.tag }}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-      - name: Update version in pyproject.toml (for release artifacts only)
-        if: steps.check_release.outputs.exists == 'false'
-        run: |
-          chmod +x .github/workflows/scripts/update-version.sh
-          .github/workflows/scripts/update-version.sh ${{ steps.get_tag.outputs.new_version }}
-      - name: Commit version bump to main
-        if: steps.check_release.outputs.exists == 'false'
-        run: |
-          git config user.name "github-actions[bot]"
-          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
-          git add pyproject.toml
-          git diff --cached --quiet || git commit -m "chore: bump version to ${{ steps.get_tag.outputs.new_version }} [skip ci]"
-          git push

.github/workflows/scripts/create-github-release.sh (+10 lines; mode: Normal file → Executable file)

@@ -46,12 +46,18 @@ gh release create "$VERSION" \
   .genreleases/spec-kit-template-amp-ps-"$VERSION".zip \
   .genreleases/spec-kit-template-shai-sh-"$VERSION".zip \
   .genreleases/spec-kit-template-shai-ps-"$VERSION".zip \
-  .genreleases/spec-kit-template-q-sh-"$VERSION".zip \
-  .genreleases/spec-kit-template-q-ps-"$VERSION".zip \
+  .genreleases/spec-kit-template-tabnine-sh-"$VERSION".zip \
+  .genreleases/spec-kit-template-tabnine-ps-"$VERSION".zip \
+  .genreleases/spec-kit-template-kiro-cli-sh-"$VERSION".zip \
+  .genreleases/spec-kit-template-kiro-cli-ps-"$VERSION".zip \
   .genreleases/spec-kit-template-agy-sh-"$VERSION".zip \
   .genreleases/spec-kit-template-agy-ps-"$VERSION".zip \
   .genreleases/spec-kit-template-bob-sh-"$VERSION".zip \
   .genreleases/spec-kit-template-bob-ps-"$VERSION".zip \
+  .genreleases/spec-kit-template-vibe-sh-"$VERSION".zip \
+  .genreleases/spec-kit-template-vibe-ps-"$VERSION".zip \
+  .genreleases/spec-kit-template-kimi-sh-"$VERSION".zip \
+  .genreleases/spec-kit-template-kimi-ps-"$VERSION".zip \
   .genreleases/spec-kit-template-generic-sh-"$VERSION".zip \
   .genreleases/spec-kit-template-generic-ps-"$VERSION".zip \
   --title "Spec Kit Templates - $VERSION_NO_V" \

.github/workflows/scripts/create-release-packages.ps1

@@ -8,13 +8,13 @@
.DESCRIPTION
create-release-packages.ps1 (workflow-local)
Build Spec Kit template release archives for each supported AI assistant and script type.
.PARAMETER Version
Version string with leading 'v' (e.g., v0.2.0)
.PARAMETER Agents
Comma or space separated subset of agents to build (default: all)
Valid agents: claude, gemini, copilot, cursor-agent, qwen, opencode, windsurf, codex, kilocode, auggie, roo, codebuddy, amp, q, bob, qodercli, shai, agy, generic
Valid agents: claude, gemini, copilot, cursor-agent, qwen, opencode, windsurf, codex, kilocode, auggie, roo, codebuddy, amp, kiro-cli, bob, qodercli, shai, tabnine, agy, vibe, kimi, generic
.PARAMETER Scripts
Comma or space separated subset of script types to build (default: both)
@@ -33,10 +33,10 @@
param(
[Parameter(Mandatory=$true, Position=0)]
[string]$Version,
[Parameter(Mandatory=$false)]
[string]$Agents = "",
[Parameter(Mandatory=$false)]
[string]$Scripts = ""
)
@@ -60,7 +60,7 @@ New-Item -ItemType Directory -Path $GenReleasesDir -Force | Out-Null
function Rewrite-Paths {
param([string]$Content)
$Content = $Content -replace '(/?)\bmemory/', '.specify/memory/'
$Content = $Content -replace '(/?)\bscripts/', '.specify/scripts/'
$Content = $Content -replace '(/?)\btemplates/', '.specify/templates/'
@@ -75,55 +75,55 @@ function Generate-Commands {
[string]$OutputDir,
[string]$ScriptVariant
)
New-Item -ItemType Directory -Path $OutputDir -Force | Out-Null
$templates = Get-ChildItem -Path "templates/commands/*.md" -File -ErrorAction SilentlyContinue
foreach ($template in $templates) {
$name = [System.IO.Path]::GetFileNameWithoutExtension($template.Name)
# Read file content and normalize line endings
$fileContent = (Get-Content -Path $template.FullName -Raw) -replace "`r`n", "`n"
# Extract description from YAML frontmatter
$description = ""
if ($fileContent -match '(?m)^description:\s*(.+)$') {
$description = $matches[1]
}
# Extract script command from YAML frontmatter
$scriptCommand = ""
if ($fileContent -match "(?m)^\s*${ScriptVariant}:\s*(.+)$") {
$scriptCommand = $matches[1]
}
if ([string]::IsNullOrEmpty($scriptCommand)) {
Write-Warning "No script command found for $ScriptVariant in $($template.Name)"
$scriptCommand = "(Missing script command for $ScriptVariant)"
}
# Extract agent_script command from YAML frontmatter if present
$agentScriptCommand = ""
if ($fileContent -match "(?ms)agent_scripts:.*?^\s*${ScriptVariant}:\s*(.+?)$") {
$agentScriptCommand = $matches[1].Trim()
}
# Replace {SCRIPT} placeholder with the script command
$body = $fileContent -replace '\{SCRIPT\}', $scriptCommand
# Replace {AGENT_SCRIPT} placeholder with the agent script command if found
if (-not [string]::IsNullOrEmpty($agentScriptCommand)) {
$body = $body -replace '\{AGENT_SCRIPT\}', $agentScriptCommand
}
# Remove the scripts: and agent_scripts: sections from frontmatter
$lines = $body -split "`n"
$outputLines = @()
$inFrontmatter = $false
$skipScripts = $false
$dashCount = 0
foreach ($line in $lines) {
if ($line -match '^---$') {
$outputLines += $line
@@ -135,7 +135,7 @@ function Generate-Commands {
}
continue
}
if ($inFrontmatter) {
if ($line -match '^(scripts|agent_scripts):$') {
$skipScripts = $true
@@ -148,20 +148,20 @@ function Generate-Commands {
continue
}
}
$outputLines += $line
}
$body = $outputLines -join "`n"
# Apply other substitutions
$body = $body -replace '\{ARGS\}', $ArgFormat
$body = $body -replace '__AGENT__', $Agent
$body = Rewrite-Paths -Content $body
# Generate output file based on extension
$outputFile = Join-Path $OutputDir "speckit.$name.$Extension"
switch ($Extension) {
'toml' {
$body = $body -replace '\\', '\\'
@@ -183,15 +183,15 @@ function Generate-CopilotPrompts {
[string]$AgentsDir,
[string]$PromptsDir
)
New-Item -ItemType Directory -Path $PromptsDir -Force | Out-Null
$agentFiles = Get-ChildItem -Path "$AgentsDir/speckit.*.agent.md" -File -ErrorAction SilentlyContinue
foreach ($agentFile in $agentFiles) {
$basename = $agentFile.Name -replace '\.agent\.md$', ''
$promptFile = Join-Path $PromptsDir "$basename.prompt.md"
$content = @"
---
agent: $basename
@@ -201,31 +201,118 @@ agent: $basename
}
}
# Create Kimi Code skills in .kimi/skills/<name>/SKILL.md format.
# Kimi CLI discovers skills as directories containing a SKILL.md file,
# invoked with /skill:<name> (e.g. /skill:speckit.specify).
function New-KimiSkills {
param(
[string]$SkillsDir,
[string]$ScriptVariant
)
$templates = Get-ChildItem -Path "templates/commands/*.md" -File -ErrorAction SilentlyContinue
foreach ($template in $templates) {
$name = [System.IO.Path]::GetFileNameWithoutExtension($template.Name)
$skillName = "speckit.$name"
$skillDir = Join-Path $SkillsDir $skillName
New-Item -ItemType Directory -Force -Path $skillDir | Out-Null
$fileContent = (Get-Content -Path $template.FullName -Raw) -replace "`r`n", "`n"
# Extract description
$description = "Spec Kit: $name workflow"
if ($fileContent -match '(?m)^description:\s*(.+)$') {
$description = $matches[1]
}
# Extract script command
$scriptCommand = "(Missing script command for $ScriptVariant)"
if ($fileContent -match "(?m)^\s*${ScriptVariant}:\s*(.+)$") {
$scriptCommand = $matches[1]
}
# Extract agent_script command from frontmatter if present
$agentScriptCommand = ""
if ($fileContent -match "(?ms)agent_scripts:.*?^\s*${ScriptVariant}:\s*(.+?)$") {
$agentScriptCommand = $matches[1].Trim()
}
# Replace {SCRIPT}, strip scripts sections, rewrite paths
$body = $fileContent -replace '\{SCRIPT\}', $scriptCommand
if (-not [string]::IsNullOrEmpty($agentScriptCommand)) {
$body = $body -replace '\{AGENT_SCRIPT\}', $agentScriptCommand
}
$lines = $body -split "`n"
$outputLines = @()
$inFrontmatter = $false
$skipScripts = $false
$dashCount = 0
foreach ($line in $lines) {
if ($line -match '^---$') {
$outputLines += $line
$dashCount++
$inFrontmatter = ($dashCount -eq 1)
continue
}
if ($inFrontmatter) {
if ($line -match '^(scripts|agent_scripts):$') { $skipScripts = $true; continue }
if ($line -match '^[a-zA-Z].*:' -and $skipScripts) { $skipScripts = $false }
if ($skipScripts -and $line -match '^\s+') { continue }
}
$outputLines += $line
}
$body = $outputLines -join "`n"
$body = $body -replace '\{ARGS\}', '$ARGUMENTS'
$body = $body -replace '__AGENT__', 'kimi'
$body = Rewrite-Paths -Content $body
# Strip existing frontmatter, keep only body
$templateBody = ""
$fmCount = 0
$inBody = $false
foreach ($line in ($body -split "`n")) {
if ($line -match '^---$') {
$fmCount++
if ($fmCount -eq 2) { $inBody = $true }
continue
}
if ($inBody) { $templateBody += "$line`n" }
}
$skillContent = "---`nname: `"$skillName`"`ndescription: `"$description`"`n---`n`n$templateBody"
Set-Content -Path (Join-Path $skillDir "SKILL.md") -Value $skillContent -NoNewline
}
}
function Build-Variant {
param(
[string]$Agent,
[string]$Script
)
$baseDir = Join-Path $GenReleasesDir "sdd-${Agent}-package-${Script}"
Write-Host "Building $Agent ($Script) package..."
New-Item -ItemType Directory -Path $baseDir -Force | Out-Null
# Copy base structure but filter scripts by variant
$specDir = Join-Path $baseDir ".specify"
New-Item -ItemType Directory -Path $specDir -Force | Out-Null
# Copy memory directory
if (Test-Path "memory") {
Copy-Item -Path "memory" -Destination $specDir -Recurse -Force
Write-Host "Copied memory -> .specify"
}
# Only copy the relevant script variant directory
if (Test-Path "scripts") {
$scriptsDestDir = Join-Path $specDir "scripts"
New-Item -ItemType Directory -Path $scriptsDestDir -Force | Out-Null
switch ($Script) {
'sh' {
if (Test-Path "scripts/bash") {
@@ -240,18 +327,17 @@ function Build-Variant {
}
}
}
# Copy any script files that aren't in variant-specific directories
Get-ChildItem -Path "scripts" -File -ErrorAction SilentlyContinue | ForEach-Object {
Copy-Item -Path $_.FullName -Destination $scriptsDestDir -Force
}
}
# Copy templates (excluding commands directory and vscode-settings.json)
if (Test-Path "templates") {
$templatesDestDir = Join-Path $specDir "templates"
New-Item -ItemType Directory -Path $templatesDestDir -Force | Out-Null
Get-ChildItem -Path "templates" -Recurse -File | Where-Object {
$_.FullName -notmatch 'templates[/\\]commands[/\\]' -and $_.Name -ne 'vscode-settings.json'
} | ForEach-Object {
@@ -263,7 +349,7 @@ function Build-Variant {
}
Write-Host "Copied templates -> .specify/templates"
}
# Generate agent-specific command files
switch ($Agent) {
'claude' {
@@ -280,12 +366,10 @@ function Build-Variant {
'copilot' {
$agentsDir = Join-Path $baseDir ".github/agents"
Generate-Commands -Agent 'copilot' -Extension 'agent.md' -ArgFormat '$ARGUMENTS' -OutputDir $agentsDir -ScriptVariant $Script
# Generate companion prompt files
$promptsDir = Join-Path $baseDir ".github/prompts"
Generate-CopilotPrompts -AgentsDir $agentsDir -PromptsDir $promptsDir
# Create VS Code workspace settings
$vscodeDir = Join-Path $baseDir ".vscode"
New-Item -ItemType Directory -Path $vscodeDir -Force | Out-Null
if (Test-Path "templates/vscode-settings.json") {
@@ -335,9 +419,9 @@ function Build-Variant {
$cmdDir = Join-Path $baseDir ".agents/commands"
Generate-Commands -Agent 'amp' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
'q' {
$cmdDir = Join-Path $baseDir ".amazonq/prompts"
Generate-Commands -Agent 'q' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
'kiro-cli' {
$cmdDir = Join-Path $baseDir ".kiro/prompts"
Generate-Commands -Agent 'kiro-cli' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
'bob' {
$cmdDir = Join-Path $baseDir ".bob/commands"
@@ -347,12 +431,38 @@ function Build-Variant {
$cmdDir = Join-Path $baseDir ".qoder/commands"
Generate-Commands -Agent 'qodercli' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
'shai' {
$cmdDir = Join-Path $baseDir ".shai/commands"
Generate-Commands -Agent 'shai' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
'tabnine' {
$cmdDir = Join-Path $baseDir ".tabnine/agent/commands"
Generate-Commands -Agent 'tabnine' -Extension 'toml' -ArgFormat '{{args}}' -OutputDir $cmdDir -ScriptVariant $Script
$tabnineTemplate = Join-Path 'agent_templates' 'tabnine/TABNINE.md'
if (Test-Path $tabnineTemplate) { Copy-Item $tabnineTemplate (Join-Path $baseDir 'TABNINE.md') }
}
'agy' {
$cmdDir = Join-Path $baseDir ".agent/workflows"
Generate-Commands -Agent 'agy' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
'vibe' {
$cmdDir = Join-Path $baseDir ".vibe/prompts"
Generate-Commands -Agent 'vibe' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
'kimi' {
$skillsDir = Join-Path $baseDir ".kimi/skills"
New-Item -ItemType Directory -Force -Path $skillsDir | Out-Null
New-KimiSkills -SkillsDir $skillsDir -ScriptVariant $Script
}
'generic' {
$cmdDir = Join-Path $baseDir ".speckit/commands"
Generate-Commands -Agent 'generic' -Extension 'md' -ArgFormat '$ARGUMENTS' -OutputDir $cmdDir -ScriptVariant $Script
}
default {
throw "Unsupported agent '$Agent'."
}
}
# Create zip archive
$zipFile = Join-Path $GenReleasesDir "spec-kit-template-${Agent}-${Script}-${Version}.zip"
Compress-Archive -Path "$baseDir/*" -DestinationPath $zipFile -Force
@@ -360,17 +470,16 @@ function Build-Variant {
}
# Define all agents and scripts
$AllAgents = @('claude', 'gemini', 'copilot', 'cursor-agent', 'qwen', 'opencode', 'windsurf', 'codex', 'kilocode', 'auggie', 'roo', 'codebuddy', 'amp', 'q', 'bob', 'qodercli', 'shai', 'agy', 'generic')
$AllAgents = @('claude', 'gemini', 'copilot', 'cursor-agent', 'qwen', 'opencode', 'windsurf', 'codex', 'kilocode', 'auggie', 'roo', 'codebuddy', 'amp', 'kiro-cli', 'bob', 'qodercli', 'shai', 'tabnine', 'agy', 'vibe', 'kimi', 'generic')
$AllScripts = @('sh', 'ps')
function Normalize-List {
param([string]$Input)
if ([string]::IsNullOrEmpty($Input)) {
return @()
}
# Split by comma or space and remove duplicates while preserving order
$items = $Input -split '[,\s]+' | Where-Object { $_ } | Select-Object -Unique
return $items
}
@@ -381,7 +490,7 @@ function Validate-Subset {
[string[]]$Allowed,
[string[]]$Items
)
$ok = $true
foreach ($item in $Items) {
if ($item -notin $Allowed) {
@@ -425,4 +534,4 @@ foreach ($agent in $AgentList) {
Write-Host "`nArchives in ${GenReleasesDir}:"
Get-ChildItem -Path $GenReleasesDir -Filter "spec-kit-template-*-${Version}.zip" | ForEach-Object {
Write-Host " $($_.Name)"
}

.github/workflows/scripts/create-release-packages.sh

@@ -6,7 +6,7 @@ set -euo pipefail
# Usage: .github/workflows/scripts/create-release-packages.sh <version>
# Version argument should include leading 'v'.
# Optionally set AGENTS and/or SCRIPTS env vars to limit what gets built.
# AGENTS : space or comma separated subset of: claude gemini copilot cursor-agent qwen opencode windsurf codex amp shai bob generic (default: all)
# AGENTS : space or comma separated subset of: claude gemini copilot cursor-agent qwen opencode windsurf codex kilocode auggie roo codebuddy amp shai tabnine kiro-cli agy bob vibe qodercli kimi generic (default: all)
# SCRIPTS : space or comma separated subset of: sh ps (default: both)
# Examples:
# AGENTS=claude SCRIPTS=sh $0 v0.2.0
@@ -45,19 +45,19 @@ generate_commands() {
[[ -f "$template" ]] || continue
local name description script_command agent_script_command body
name=$(basename "$template" .md)
# Normalize line endings
file_content=$(tr -d '\r' < "$template")
# Extract description and script command from YAML frontmatter
description=$(printf '%s\n' "$file_content" | awk '/^description:/ {sub(/^description:[[:space:]]*/, ""); print; exit}')
script_command=$(printf '%s\n' "$file_content" | awk -v sv="$script_variant" '/^[[:space:]]*'"$script_variant"':[[:space:]]*/ {sub(/^[[:space:]]*'"$script_variant"':[[:space:]]*/, ""); print; exit}')
if [[ -z $script_command ]]; then
echo "Warning: no script command found for $script_variant in $template" >&2
script_command="(Missing script command for $script_variant)"
fi
# Extract agent_script command from YAML frontmatter if present
agent_script_command=$(printf '%s\n' "$file_content" | awk '
/^agent_scripts:$/ { in_agent_scripts=1; next }
@@ -68,15 +68,15 @@ generate_commands() {
}
in_agent_scripts && /^[a-zA-Z]/ { in_agent_scripts=0 }
')
# Replace {SCRIPT} placeholder with the script command
body=$(printf '%s\n' "$file_content" | sed "s|{SCRIPT}|${script_command}|g")
# Replace {AGENT_SCRIPT} placeholder with the agent script command if found
if [[ -n $agent_script_command ]]; then
body=$(printf '%s\n' "$body" | sed "s|{AGENT_SCRIPT}|${agent_script_command}|g")
fi
# Remove the scripts: and agent_scripts: sections from frontmatter while preserving YAML structure
body=$(printf '%s\n' "$body" | awk '
/^---$/ { print; if (++dash_count == 1) in_frontmatter=1; else in_frontmatter=0; next }
@@ -86,10 +86,10 @@ generate_commands() {
in_frontmatter && skip_scripts && /^[[:space:]]/ { next }
{ print }
')
# Apply other substitutions
body=$(printf '%s\n' "$body" | sed "s/{ARGS}/$arg_format/g" | sed "s/__AGENT__/$agent/g" | rewrite_paths)
case $ext in
toml)
body=$(printf '%s\n' "$body" | sed 's/\\/\\\\/g')
@@ -105,15 +105,14 @@ generate_commands() {
generate_copilot_prompts() {
local agents_dir=$1 prompts_dir=$2
mkdir -p "$prompts_dir"
# Generate a .prompt.md file for each .agent.md file
for agent_file in "$agents_dir"/speckit.*.agent.md; do
[[ -f "$agent_file" ]] || continue
local basename=$(basename "$agent_file" .agent.md)
local prompt_file="$prompts_dir/${basename}.prompt.md"
# Create prompt file with agent frontmatter
cat > "$prompt_file" <<EOF
---
agent: ${basename}
@@ -122,41 +121,104 @@ EOF
done
}
# Create Kimi Code skills in .kimi/skills/<name>/SKILL.md format.
# Kimi CLI discovers skills as directories containing a SKILL.md file,
# invoked with /skill:<name> (e.g. /skill:speckit.specify).
create_kimi_skills() {
local skills_dir="$1"
local script_variant="$2"
for template in templates/commands/*.md; do
[[ -f "$template" ]] || continue
local name
name=$(basename "$template" .md)
local skill_name="speckit.${name}"
local skill_dir="${skills_dir}/${skill_name}"
mkdir -p "$skill_dir"
local file_content
file_content=$(tr -d '\r' < "$template")
# Extract description from frontmatter
local description
description=$(printf '%s\n' "$file_content" | awk '/^description:/ {sub(/^description:[[:space:]]*/, ""); print; exit}')
[[ -z "$description" ]] && description="Spec Kit: ${name} workflow"
# Extract script command
local script_command
script_command=$(printf '%s\n' "$file_content" | awk -v sv="$script_variant" '/^[[:space:]]*'"$script_variant"':[[:space:]]*/ {sub(/^[[:space:]]*'"$script_variant"':[[:space:]]*/, ""); print; exit}')
[[ -z "$script_command" ]] && script_command="(Missing script command for $script_variant)"
# Extract agent_script command from frontmatter if present
local agent_script_command
agent_script_command=$(printf '%s\n' "$file_content" | awk '
/^agent_scripts:$/ { in_agent_scripts=1; next }
in_agent_scripts && /^[[:space:]]*'"$script_variant"':[[:space:]]*/ {
sub(/^[[:space:]]*'"$script_variant"':[[:space:]]*/, "")
print
exit
}
in_agent_scripts && /^[a-zA-Z]/ { in_agent_scripts=0 }
')
# Build body: replace placeholders, strip scripts sections, rewrite paths
local body
body=$(printf '%s\n' "$file_content" | sed "s|{SCRIPT}|${script_command}|g")
if [[ -n $agent_script_command ]]; then
body=$(printf '%s\n' "$body" | sed "s|{AGENT_SCRIPT}|${agent_script_command}|g")
fi
body=$(printf '%s\n' "$body" | awk '
/^---$/ { print; if (++dash_count == 1) in_frontmatter=1; else in_frontmatter=0; next }
in_frontmatter && /^scripts:$/ { skip_scripts=1; next }
in_frontmatter && /^agent_scripts:$/ { skip_scripts=1; next }
in_frontmatter && /^[a-zA-Z].*:/ && skip_scripts { skip_scripts=0 }
in_frontmatter && skip_scripts && /^[[:space:]]/ { next }
{ print }
')
body=$(printf '%s\n' "$body" | sed 's/{ARGS}/\$ARGUMENTS/g' | sed 's/__AGENT__/kimi/g' | rewrite_paths)
# Strip existing frontmatter and prepend Kimi frontmatter
local template_body
template_body=$(printf '%s\n' "$body" | awk '/^---/{p++; if(p==2){found=1; next}} found')
{
printf -- '---\n'
printf 'name: "%s"\n' "$skill_name"
printf 'description: "%s"\n' "$description"
printf -- '---\n\n'
printf '%s\n' "$template_body"
} > "$skill_dir/SKILL.md"
done
}
build_variant() {
local agent=$1 script=$2
local base_dir="$GENRELEASES_DIR/sdd-${agent}-package-${script}"
echo "Building $agent ($script) package..."
mkdir -p "$base_dir"
# Copy base structure but filter scripts by variant
SPEC_DIR="$base_dir/.specify"
mkdir -p "$SPEC_DIR"
[[ -d memory ]] && { cp -r memory "$SPEC_DIR/"; echo "Copied memory -> .specify"; }
# Only copy the relevant script variant directory
if [[ -d scripts ]]; then
mkdir -p "$SPEC_DIR/scripts"
case $script in
sh)
[[ -d scripts/bash ]] && { cp -r scripts/bash "$SPEC_DIR/scripts/"; echo "Copied scripts/bash -> .specify/scripts"; }
# Copy any script files that aren't in variant-specific directories
find scripts -maxdepth 1 -type f -exec cp {} "$SPEC_DIR/scripts/" \; 2>/dev/null || true
;;
ps)
[[ -d scripts/powershell ]] && { cp -r scripts/powershell "$SPEC_DIR/scripts/"; echo "Copied scripts/powershell -> .specify/scripts"; }
# Copy any script files that aren't in variant-specific directories
find scripts -maxdepth 1 -type f -exec cp {} "$SPEC_DIR/scripts/" \; 2>/dev/null || true
;;
esac
fi
[[ -d templates ]] && { mkdir -p "$SPEC_DIR/templates"; find templates -type f -not -path "templates/commands/*" -not -name "vscode-settings.json" -exec cp --parents {} "$SPEC_DIR"/ \; ; echo "Copied templates -> .specify/templates"; }
# NOTE: We substitute {ARGS} internally. Outward tokens differ intentionally:
# * Markdown/prompt (claude, copilot, cursor-agent, opencode): $ARGUMENTS
# * TOML (gemini, qwen): {{args}}
# This keeps formats readable without extra abstraction.
case $agent in
claude)
@@ -169,9 +231,7 @@ build_variant() {
copilot)
mkdir -p "$base_dir/.github/agents"
generate_commands copilot agent.md "\$ARGUMENTS" "$base_dir/.github/agents" "$script"
# Generate companion prompt files
generate_copilot_prompts "$base_dir/.github/agents" "$base_dir/.github/prompts"
# Create VS Code workspace settings
mkdir -p "$base_dir/.vscode"
[[ -f templates/vscode-settings.json ]] && cp templates/vscode-settings.json "$base_dir/.vscode/settings.json"
;;
@@ -212,15 +272,25 @@ build_variant() {
shai)
mkdir -p "$base_dir/.shai/commands"
generate_commands shai md "\$ARGUMENTS" "$base_dir/.shai/commands" "$script" ;;
q)
mkdir -p "$base_dir/.amazonq/prompts"
generate_commands q md "\$ARGUMENTS" "$base_dir/.amazonq/prompts" "$script" ;;
tabnine)
mkdir -p "$base_dir/.tabnine/agent/commands"
generate_commands tabnine toml "{{args}}" "$base_dir/.tabnine/agent/commands" "$script"
[[ -f agent_templates/tabnine/TABNINE.md ]] && cp agent_templates/tabnine/TABNINE.md "$base_dir/TABNINE.md" ;;
kiro-cli)
mkdir -p "$base_dir/.kiro/prompts"
generate_commands kiro-cli md "\$ARGUMENTS" "$base_dir/.kiro/prompts" "$script" ;;
agy)
mkdir -p "$base_dir/.agent/workflows"
generate_commands agy md "\$ARGUMENTS" "$base_dir/.agent/workflows" "$script" ;;
bob)
mkdir -p "$base_dir/.bob/commands"
generate_commands bob md "\$ARGUMENTS" "$base_dir/.bob/commands" "$script" ;;
vibe)
mkdir -p "$base_dir/.vibe/prompts"
generate_commands vibe md "\$ARGUMENTS" "$base_dir/.vibe/prompts" "$script" ;;
kimi)
mkdir -p "$base_dir/.kimi/skills"
create_kimi_skills "$base_dir/.kimi/skills" "$script" ;;
generic)
mkdir -p "$base_dir/.speckit/commands"
generate_commands generic md "\$ARGUMENTS" "$base_dir/.speckit/commands" "$script" ;;
@@ -230,11 +300,10 @@ build_variant() {
}
# Determine agent list
ALL_AGENTS=(claude gemini copilot cursor-agent qwen opencode windsurf codex kilocode auggie roo codebuddy amp shai q agy bob qodercli generic)
ALL_AGENTS=(claude gemini copilot cursor-agent qwen opencode windsurf codex kilocode auggie roo codebuddy amp shai tabnine kiro-cli agy bob vibe qodercli kimi generic)
ALL_SCRIPTS=(sh ps)
norm_list() {
# convert comma+space separated -> line separated unique while preserving order of first occurrence
tr ',\n' ' ' | awk '{for(i=1;i<=NF;i++){if(!seen[$i]++){printf((out?"\n":"") $i);out=1}}}END{printf("\n")}'
}
@@ -277,4 +346,3 @@ done
echo "Archives in $GENRELEASES_DIR:"
ls -1 "$GENRELEASES_DIR"/spec-kit-template-*-"${NEW_VERSION}".zip
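
A quick local smoke test of the new kimi packaging path (a sketch only; it assumes you run it from the repository root, that `templates/commands/` is populated, and that the script's output directory is `.genreleases/`):

```bash
# Build only the Kimi bash-variant package for one version (AGENTS/SCRIPTS usage per the header above).
AGENTS=kimi SCRIPTS=sh .github/workflows/scripts/create-release-packages.sh v0.2.1

# Each command template should now exist as a skill directory, e.g.
#   .genreleases/sdd-kimi-package-sh/.kimi/skills/speckit.specify/SKILL.md
ls .genreleases/sdd-kimi-package-sh/.kimi/skills/

# The archive should carry the per-skill SKILL.md files.
unzip -l .genreleases/spec-kit-template-kimi-sh-v0.2.1.zip | grep 'SKILL.md'
```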

.github/workflows/scripts/simulate-release.sh vendored Executable file

@@ -0,0 +1,161 @@
#!/usr/bin/env bash
set -euo pipefail
# simulate-release.sh
# Simulate the release process locally without pushing to GitHub
# Usage: simulate-release.sh [version]
# If version is omitted, auto-increments patch version
# Colors for output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
echo -e "${BLUE}🧪 Simulating Release Process Locally${NC}"
echo "======================================"
echo ""
# Step 1: Determine version
if [[ -n "${1:-}" ]]; then
VERSION="${1#v}"
TAG="v$VERSION"
echo -e "${GREEN}📝 Using manual version: $VERSION${NC}"
else
LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
echo -e "${BLUE}Latest tag: $LATEST_TAG${NC}"
VERSION=$(echo $LATEST_TAG | sed 's/v//')
IFS='.' read -ra VERSION_PARTS <<< "$VERSION"
MAJOR=${VERSION_PARTS[0]:-0}
MINOR=${VERSION_PARTS[1]:-0}
PATCH=${VERSION_PARTS[2]:-0}
PATCH=$((PATCH + 1))
VERSION="$MAJOR.$MINOR.$PATCH"
TAG="v$VERSION"
echo -e "${GREEN}📝 Auto-incremented to: $VERSION${NC}"
fi
echo ""
# Step 2: Check if tag exists
if git rev-parse "$TAG" >/dev/null 2>&1; then
echo -e "${RED}❌ Error: Tag $TAG already exists!${NC}"
echo " Please use a different version or delete the tag first."
exit 1
fi
echo -e "${GREEN}✓ Tag $TAG is available${NC}"
# Step 3: Backup current state
echo ""
echo -e "${YELLOW}💾 Creating backup of current state...${NC}"
BACKUP_DIR=$(mktemp -d)
cp pyproject.toml "$BACKUP_DIR/pyproject.toml.bak"
cp CHANGELOG.md "$BACKUP_DIR/CHANGELOG.md.bak"
echo -e "${GREEN}✓ Backup created at: $BACKUP_DIR${NC}"
# Step 4: Update pyproject.toml
echo ""
echo -e "${YELLOW}📝 Updating pyproject.toml...${NC}"
sed -i.tmp "s/version = \".*\"/version = \"$VERSION\"/" pyproject.toml
rm -f pyproject.toml.tmp
echo -e "${GREEN}✓ Updated pyproject.toml to version $VERSION${NC}"
# Step 5: Update CHANGELOG.md
echo ""
echo -e "${YELLOW}📝 Updating CHANGELOG.md...${NC}"
DATE=$(date +%Y-%m-%d)
# Get the previous tag to compare commits
PREVIOUS_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
if [[ -n "$PREVIOUS_TAG" ]]; then
echo " Generating changelog from commits since $PREVIOUS_TAG"
# Get commits since last tag, format as bullet points
COMMITS=$(git log --oneline "$PREVIOUS_TAG"..HEAD --no-merges --pretty=format:"- %s" 2>/dev/null || echo "- Initial release")
else
echo " No previous tag found - this is the first release"
COMMITS="- Initial release"
fi
# Create temp file with new entry
{
head -n 8 CHANGELOG.md
echo ""
echo "## [$VERSION] - $DATE"
echo ""
echo "### Changed"
echo ""
echo "$COMMITS"
echo ""
tail -n +9 CHANGELOG.md
} > CHANGELOG.md.tmp
mv CHANGELOG.md.tmp CHANGELOG.md
echo -e "${GREEN}✓ Updated CHANGELOG.md with commits since $PREVIOUS_TAG${NC}"
# Step 6: Show what would be committed
echo ""
echo -e "${YELLOW}📋 Changes that would be committed:${NC}"
git diff pyproject.toml CHANGELOG.md
# Step 7: Create temporary tag (no push)
echo ""
echo -e "${YELLOW}🏷️ Creating temporary local tag...${NC}"
git tag -a "$TAG" -m "Simulated release $TAG" 2>/dev/null || true
echo -e "${GREEN}✓ Tag $TAG created locally${NC}"
# Step 8: Simulate release artifact creation
echo ""
echo -e "${YELLOW}📦 Simulating release package creation...${NC}"
echo " (High-level simulation only; packaging script is not executed)"
echo ""
# Check if script exists and is executable
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [[ -x "$SCRIPT_DIR/create-release-packages.sh" ]]; then
echo -e "${BLUE}In a real release, the following command would be run to create packages:${NC}"
echo " $SCRIPT_DIR/create-release-packages.sh \"$TAG\""
echo ""
echo "This simulation does not enumerate individual package files to avoid"
echo "drifting from the actual behavior of create-release-packages.sh."
else
echo -e "${RED}⚠️ create-release-packages.sh not found or not executable${NC}"
fi
# Step 9: Simulate release notes generation
echo ""
echo -e "${YELLOW}📄 Simulating release notes generation...${NC}"
echo ""
PREVIOUS_TAG=$(git describe --tags --abbrev=0 $TAG^ 2>/dev/null || echo "")
if [[ -n "$PREVIOUS_TAG" ]]; then
echo -e "${BLUE}Changes since $PREVIOUS_TAG:${NC}"
git log --oneline "$PREVIOUS_TAG".."$TAG" | head -n 10
echo ""
else
echo -e "${BLUE}No previous tag found - this would be the first release${NC}"
fi
# Step 10: Summary
echo ""
echo -e "${GREEN}🎉 Simulation Complete!${NC}"
echo "======================================"
echo ""
echo -e "${BLUE}Summary:${NC}"
echo " Version: $VERSION"
echo " Tag: $TAG"
echo " Backup: $BACKUP_DIR"
echo ""
echo -e "${YELLOW}⚠️ SIMULATION ONLY - NO CHANGES PUSHED${NC}"
echo ""
echo -e "${BLUE}Next steps:${NC}"
echo " 1. Review the changes above"
echo " 2. To keep changes: git add pyproject.toml CHANGELOG.md && git commit"
echo " 3. To discard changes: git checkout pyproject.toml CHANGELOG.md && git tag -d $TAG"
echo " 4. To restore from backup: cp $BACKUP_DIR/* ."
echo ""
echo -e "${BLUE}To run the actual release:${NC}"
echo " Go to: https://github.com/github/spec-kit/actions/workflows/release-trigger.yml"
echo " Click 'Run workflow' and enter version: $VERSION"
echo ""
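
To exercise this simulation locally and then roll it back, a minimal sketch (the exact tag to delete is printed in the script's own summary):

```bash
# Dry-run the release flow without pushing anything (auto-increments the patch version).
.github/workflows/scripts/simulate-release.sh

# Review the printed diff, then discard the simulated changes and the temporary tag,
# following the script's "Next steps" output:
git checkout pyproject.toml CHANGELOG.md
git tag -d <tag-from-the-summary>
```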

View File

@@ -16,7 +16,7 @@ jobs:
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v7
- name: Set up Python
uses: actions/setup-python@v6
@@ -36,7 +36,7 @@ jobs:
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v7
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v6

AGENTS.md

@@ -44,9 +44,11 @@ Specify supports multiple AI agents by generating agent-specific command files a
| **Roo Code** | `.roo/rules/` | Markdown | N/A (IDE-based) | Roo Code IDE |
| **CodeBuddy CLI** | `.codebuddy/commands/` | Markdown | `codebuddy` | CodeBuddy CLI |
| **Qoder CLI** | `.qoder/commands/` | Markdown | `qodercli` | Qoder CLI |
| **Amazon Q Developer CLI** | `.amazonq/prompts/` | Markdown | `q` | Amazon Q Developer CLI |
| **Kiro CLI** | `.kiro/prompts/` | Markdown | `kiro-cli` | Kiro CLI |
| **Amp** | `.agents/commands/` | Markdown | `amp` | Amp CLI |
| **SHAI** | `.shai/commands/` | Markdown | `shai` | SHAI CLI |
| **Tabnine CLI** | `.tabnine/agent/commands/` | TOML | `tabnine` | Tabnine CLI |
| **Kimi Code** | `.kimi/skills/` | Markdown | `kimi` | Kimi Code CLI (Moonshot AI) |
| **IBM Bob** | `.bob/commands/` | Markdown | N/A (IDE-based) | IBM Bob IDE |
| **Generic** | User-specified via `--ai-commands-dir` | Markdown | N/A | Bring your own agent |
@@ -86,7 +88,7 @@ This eliminates the need for special-case mappings throughout the codebase.
- `folder`: Directory where agent-specific files are stored (relative to project root)
- `commands_subdir`: Subdirectory name within the agent folder where command/prompt files are stored (default: `"commands"`)
- Most agents use `"commands"` (e.g., `.claude/commands/`)
- Some agents use alternative names: `"agents"` (copilot), `"workflows"` (windsurf, kilocode, agy), `"prompts"` (codex, q), `"command"` (opencode - singular)
- Some agents use alternative names: `"agents"` (copilot), `"workflows"` (windsurf, kilocode, agy), `"prompts"` (codex, kiro-cli), `"command"` (opencode - singular)
- This field enables `--ai-skills` to locate command templates correctly for skill generation
- `install_url`: Installation documentation URL (set to `None` for IDE-based agents)
- `requires_cli`: Whether the agent requires a CLI tool check during initialization
@@ -96,7 +98,7 @@ This eliminates the need for special-case mappings throughout the codebase.
Update the `--ai` parameter help text in the `init()` command to include the new agent:
```python
ai_assistant: str = typer.Option(None, "--ai", help="AI assistant to use: claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, kilocode, auggie, codebuddy, new-agent-cli, or q"),
ai_assistant: str = typer.Option(None, "--ai", help="AI assistant to use: claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, kilocode, auggie, codebuddy, new-agent-cli, or kiro-cli"),
```
Also update any function docstrings, examples, and error messages that list available agents.
@@ -117,7 +119,7 @@ Modify `.github/workflows/scripts/create-release-packages.sh`:
##### Add to ALL_AGENTS array
```bash
ALL_AGENTS=(claude gemini copilot cursor-agent qwen opencode windsurf q)
ALL_AGENTS=(claude gemini copilot cursor-agent qwen opencode windsurf kiro-cli)
```
##### Add case statement for directory structure
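
For the Kimi agent added in this PR, the case added to the bash packaging script (shown earlier in this diff) is a reasonable illustration of such an entry; this is a sketch, not the verbatim AGENTS.md example:

```bash
kimi)
  mkdir -p "$base_dir/.kimi/skills"
  create_kimi_skills "$base_dir/.kimi/skills" "$script" ;;
```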
@@ -317,11 +319,13 @@ Require a command-line tool to be installed:
- **Cursor**: `cursor-agent` CLI
- **Qwen Code**: `qwen` CLI
- **opencode**: `opencode` CLI
- **Amazon Q Developer CLI**: `q` CLI
- **Kiro CLI**: `kiro-cli` CLI
- **CodeBuddy CLI**: `codebuddy` CLI
- **Qoder CLI**: `qodercli` CLI
- **Amp**: `amp` CLI
- **SHAI**: `shai` CLI
- **Tabnine CLI**: `tabnine` CLI
- **Kimi Code**: `kimi` CLI
### IDE-Based Agents
@@ -335,7 +339,7 @@ Work within integrated development environments:
### Markdown Format
Used by: Claude, Cursor, opencode, Windsurf, Amazon Q Developer, Amp, SHAI, IBM Bob
Used by: Claude, Cursor, opencode, Windsurf, Kiro CLI, Amp, SHAI, IBM Bob, Kimi Code
**Standard format:**
@@ -360,7 +364,7 @@ Command content with {SCRIPT} and $ARGUMENTS placeholders.
### TOML Format
Used by: Gemini, Qwen
Used by: Gemini, Qwen, Tabnine
```toml
description = "Command description"

CHANGELOG.md

@@ -7,6 +7,190 @@ Recent changes to the Specify CLI and templates are documented here.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.2.1] - 2026-03-11
### Changed
- Added February 2026 newsletter (#1812)
- feat: add Kimi Code CLI agent support (#1790)
- docs: fix broken links in quickstart guide (#1759) (#1797)
- docs: add catalog cli help documentation (#1793) (#1794)
- fix: use quiet checkout to avoid exception on git checkout (#1792)
- feat(extensions): support .extensionignore to exclude files during install (#1781)
- feat: add Codex support for extension command registration (#1767)
- chore: bump version to 0.2.0 (#1786)
- fix: sync agent list comments with actual supported agents (#1785)
- feat(extensions): support multiple active catalogs simultaneously (#1720)
- Pavel/add tabnine cli support (#1503)
- Add Understanding extension to community catalog (#1778)
- Add ralph extension to community catalog (#1780)
- Update README with project initialization instructions (#1772)
- feat: add review extension to community catalog (#1775)
- Add fleet extension to community catalog (#1771)
- Integration of Mistral vibe support into speckit (#1725)
- fix: Remove duplicate options in specify.md (#1765)
- fix: use global branch numbering instead of per-short-name detection (#1757)
- Add Community Walkthroughs section to README (#1766)
- feat(extensions): add Jira Integration to community catalog (#1764)
- Add Azure DevOps Integration extension to community catalog (#1734)
- Fix docs: update Antigravity link and add initialization example (#1748)
- fix: wire after_tasks and after_implement hook events into command templates (#1702)
- make c ignores consistent with c++ (#1747)
- chore: bump version to 0.1.13 (#1746)
- feat: add kiro-cli and AGENT_CONFIG consistency coverage (#1690)
- feat: add verify extension to community catalog (#1726)
- Add Retrospective Extension to community catalog README table (#1741)
- fix(scripts): add empty description validation and branch checkout error handling (#1559)
- fix: correct Copilot extension command registration (#1724)
- fix(implement): remove Makefile from C ignore patterns (#1558)
- Add sync extension to community catalog (#1728)
- fix(checklist): clarify file handling behavior for append vs create (#1556)
- fix(clarify): correct conflicting question limit from 10 to 5 (#1557)
- chore: bump version to 0.1.12 (#1737)
- fix: use RELEASE_PAT so tag push triggers release workflow (#1736)
- fix: release-trigger uses release branch + PR instead of direct push to main (#1733)
- fix: Split release process to sync pyproject.toml version with git tags (#1732)
## [Unreleased]
### Added
- feat(extensions): support `.extensionignore` to exclude files/folders during `specify extension add` (#1781)
## [0.2.0] - 2026-03-09
### Changed
- feat: add Kimi Code CLI agent support
- fix: sync agent list comments with actual supported agents (#1785)
- feat(extensions): support multiple active catalogs simultaneously (#1720)
- Pavel/add tabnine cli support (#1503)
- Add Understanding extension to community catalog (#1778)
- Add ralph extension to community catalog (#1780)
- Update README with project initialization instructions (#1772)
- feat: add review extension to community catalog (#1775)
- Add fleet extension to community catalog (#1771)
- Integration of Mistral vibe support into speckit (#1725)
- fix: Remove duplicate options in specify.md (#1765)
- fix: use global branch numbering instead of per-short-name detection (#1757)
- Add Community Walkthroughs section to README (#1766)
- feat(extensions): add Jira Integration to community catalog (#1764)
- Add Azure DevOps Integration extension to community catalog (#1734)
- Fix docs: update Antigravity link and add initialization example (#1748)
- fix: wire after_tasks and after_implement hook events into command templates (#1702)
- make c ignores consistent with c++ (#1747)
- chore: bump version to 0.1.13 (#1746)
- feat: add kiro-cli and AGENT_CONFIG consistency coverage (#1690)
- feat: add verify extension to community catalog (#1726)
- Add Retrospective Extension to community catalog README table (#1741)
- fix(scripts): add empty description validation and branch checkout error handling (#1559)
- fix: correct Copilot extension command registration (#1724)
- fix(implement): remove Makefile from C ignore patterns (#1558)
- Add sync extension to community catalog (#1728)
- fix(checklist): clarify file handling behavior for append vs create (#1556)
- fix(clarify): correct conflicting question limit from 10 to 5 (#1557)
- chore: bump version to 0.1.12 (#1737)
- fix: use RELEASE_PAT so tag push triggers release workflow (#1736)
- fix: release-trigger uses release branch + PR instead of direct push to main (#1733)
- fix: Split release process to sync pyproject.toml version with git tags (#1732)
## [0.1.14] - 2026-03-09
### Added
- feat: add Tabnine CLI agent support
- **Multi-Catalog Support (#1707)**: Extension catalog system now supports multiple active catalogs simultaneously via a catalog stack
- New `specify extension catalog list` command lists all active catalogs with name, URL, priority, and `install_allowed` status
- New `specify extension catalog add` and `specify extension catalog remove` commands for project-scoped catalog management
- Default built-in stack includes `catalog.json` (default, installable) and `catalog.community.json` (community, discovery only) — community extensions are now surfaced in search results out of the box
- `specify extension search` aggregates results across all active catalogs, annotating each result with source catalog
- `specify extension add` enforces `install_allowed` policy — extensions from discovery-only catalogs cannot be installed directly
- Project-level `.specify/extension-catalogs.yml` and user-level `~/.specify/extension-catalogs.yml` config files supported, with project-level taking precedence
- `SPECKIT_CATALOG_URL` environment variable still works for backward compatibility (replaces full stack with single catalog)
- All catalog URLs require HTTPS (HTTP allowed for localhost development)
- New `CatalogEntry` dataclass in `extensions.py` for catalog stack representation
- Per-URL hash-based caching for non-default catalogs; legacy cache preserved for default catalog
- Higher-priority catalogs win on merge conflicts (same extension id in multiple catalogs)
- 13 new tests covering catalog stack resolution, merge conflicts, URL validation, and `install_allowed` enforcement
- Updated RFC, Extension User Guide, and Extension API Reference documentation
## [0.1.13] - 2026-03-03
### Changed
- feat: add kiro-cli and AGENT_CONFIG consistency coverage (#1690)
- feat: add verify extension to community catalog (#1726)
- Add Retrospective Extension to community catalog README table (#1741)
- fix(scripts): add empty description validation and branch checkout error handling (#1559)
- fix: correct Copilot extension command registration (#1724)
- fix(implement): remove Makefile from C ignore patterns (#1558)
- Add sync extension to community catalog (#1728)
- fix(checklist): clarify file handling behavior for append vs create (#1556)
- fix(clarify): correct conflicting question limit from 10 to 5 (#1557)
- chore: bump version to 0.1.12 (#1737)
- fix: use RELEASE_PAT so tag push triggers release workflow (#1736)
- fix: release-trigger uses release branch + PR instead of direct push to main (#1733)
- fix: Split release process to sync pyproject.toml version with git tags (#1732)
## [0.1.13] - 2026-03-03
### Fixed
- **Copilot Extension Commands Not Visible**: Fixed extension commands not appearing in GitHub Copilot when installed via `specify extension add --dev`
- Changed Copilot file extension from `.md` to `.agent.md` in `CommandRegistrar.AGENT_CONFIGS` so Copilot recognizes agent files
- Added generation of companion `.prompt.md` files in `.github/prompts/` during extension command registration, matching the release packaging behavior
- Added cleanup of `.prompt.md` companion files when removing extensions via `specify extension remove`
- Fixed a syntax regression in `src/specify_cli/__init__.py` in `_build_ai_assistant_help()` that broke `ruff` and `pytest` collection in CI.
## [0.1.12] - 2026-03-02
### Changed
- fix: use RELEASE_PAT so tag push triggers release workflow (#1736)
- fix: release-trigger uses release branch + PR instead of direct push to main (#1733)
- fix: Split release process to sync pyproject.toml version with git tags (#1732)
## [0.1.10] - 2026-03-02
### Fixed
- **Version Sync Issue (#1721)**: Fixed version mismatch between `pyproject.toml` and git release tags
- Split release process into two workflows: `release-trigger.yml` for version management and `release.yml` for artifact building
- Version bump now happens BEFORE tag creation, ensuring tags point to commits with correct version
- Supports both manual version specification and auto-increment (patch version)
- Git tags now accurately reflect the version in `pyproject.toml` at that commit
- Prevents confusion when installing from source
## [0.1.9] - 2026-02-28
### Changed
- Updated dependency: bumped astral-sh/setup-uv from 6 to 7
## [0.1.8] - 2026-02-28
### Changed
- Updated dependency: bumped actions/setup-python from 5 to 6
## [0.1.7] - 2026-02-27
### Changed
- Updated outdated GitHub Actions versions
- Documented dual-catalog system for extensions
### Fixed
- Fixed version command in documentation
### Added
- Added Cleanup Extension to README
- Added retrospective extension to community catalog
## [0.1.6] - 2026-02-23
### Fixed

README.md

@@ -22,6 +22,7 @@
- [🤔 What is Spec-Driven Development?](#-what-is-spec-driven-development)
- [⚡ Get Started](#-get-started)
- [📽️ Video Overview](#-video-overview)
- [🚶 Community Walkthroughs](#-community-walkthroughs)
- [🤖 Supported AI Agents](#-supported-ai-agents)
- [🔧 Specify CLI Reference](#-specify-cli-reference)
- [📚 Core Philosophy](#-core-philosophy)
@@ -79,7 +80,13 @@ uv tool install specify-cli --force --from git+https://github.com/github/spec-ki
Run directly without installing:
```bash
# Create new project
uvx --from git+https://github.com/github/spec-kit.git specify init <PROJECT_NAME>
# Or initialize in existing project
uvx --from git+https://github.com/github/spec-kit.git specify init . --ai claude
# or
uvx --from git+https://github.com/github/spec-kit.git specify init --here --ai claude
```
**Benefits of persistent installation:**
@@ -139,12 +146,22 @@ Want to see Spec Kit in action? Watch our [video overview](https://www.youtube.c
[![Spec Kit video header](/media/spec-kit-video-header.jpg)](https://www.youtube.com/watch?v=a9eR1xsfvHg&pp=0gcJCckJAYcqIYzv)
## 🚶 Community Walkthroughs
See Spec-Driven Development in action across different scenarios with these community-contributed walkthroughs:
- **[Greenfield .NET CLI tool](https://github.com/mnriem/spec-kit-dotnet-cli-demo)** — Builds a Timezone Utility as a .NET single-binary CLI tool from a blank directory, covering the full spec-kit workflow: constitution, specify, plan, tasks, and multi-pass implement using GitHub Copilot agents.
- **[Greenfield Spring Boot + React platform](https://github.com/mnriem/spec-kit-spring-react-demo)** — Builds an LLM performance analytics platform (REST API, graphs, iteration tracking) from scratch using Spring Boot, embedded React, PostgreSQL, and Docker Compose, with a clarify step and a cross-artifact consistency analysis pass included.
- **[Brownfield ASP.NET CMS extension](https://github.com/mnriem/spec-kit-aspnet-brownfield-demo)** — Extends an existing open-source .NET CMS (CarrotCakeCMS-Core) with two new features — cross-platform Docker Compose infrastructure and a token-authenticated headless REST API — demonstrating how spec-kit fits into existing codebases without prior specs or a constitution.
## 🤖 Supported AI Agents
| Agent | Support | Notes |
| ------------------------------------------------------------------------------------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| [Qoder CLI](https://qoder.com/cli) | ✅ | |
| [Amazon Q Developer CLI](https://aws.amazon.com/developer/learning/q-developer-cli/) | ⚠️ | Amazon Q Developer CLI [does not support](https://github.com/aws/amazon-q-developer-cli/issues/3064) custom arguments for slash commands. |
| [Kiro CLI](https://kiro.dev/docs/cli/) | ✅ | Use `--ai kiro-cli` (alias: `--ai kiro`) |
| [Amp](https://ampcode.com/) | ✅ | |
| [Auggie CLI](https://docs.augmentcode.com/cli/overview) | ✅ | |
| [Claude Code](https://www.anthropic.com/claude-code) | ✅ | |
@@ -160,8 +177,11 @@ Want to see Spec Kit in action? Watch our [video overview](https://www.youtube.c
| [Qwen Code](https://github.com/QwenLM/qwen-code) | ✅ | |
| [Roo Code](https://roocode.com/) | ✅ | |
| [SHAI (OVHcloud)](https://github.com/ovh/shai) | ✅ | |
| [Tabnine CLI](https://docs.tabnine.com/main/getting-started/tabnine-cli) | ✅ | |
| [Mistral Vibe](https://github.com/mistralai/mistral-vibe) | ✅ | |
| [Kimi Code](https://code.kimi.com/) | ✅ | |
| [Windsurf](https://windsurf.com/) | ✅ | |
| [Antigravity (agy)](https://antigravity.google/) | ✅ | |
| Generic | ✅ | Bring your own agent — use `--ai generic --ai-commands-dir <path>` for unsupported agents |
## 🔧 Specify CLI Reference
@@ -173,14 +193,14 @@ The `specify` command supports the following options:
| Command | Description |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `init` | Initialize a new Specify project from the latest template |
| `check` | Check for installed tools (`git`, `claude`, `gemini`, `code`/`code-insiders`, `cursor-agent`, `windsurf`, `qwen`, `opencode`, `codex`, `kiro-cli`, `shai`, `qodercli`, `vibe`, `kimi`) |
### `specify init` Arguments & Options
| Argument/Option | Type | Description |
| ---------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `<project-name>` | Argument | Name for your new project directory (optional if using `--here`, or use `.` for current directory) |
| `--ai` | Option | AI assistant to use: `claude`, `gemini`, `copilot`, `cursor-agent`, `qwen`, `opencode`, `codex`, `windsurf`, `kilocode`, `auggie`, `roo`, `codebuddy`, `amp`, `shai`, `kiro-cli` (`kiro` alias), `agy`, `bob`, `qodercli`, `vibe`, `kimi`, or `generic` (requires `--ai-commands-dir`) |
| `--ai-commands-dir` | Option | Directory for agent command files (required with `--ai generic`, e.g. `.myagent/commands/`) |
| `--script` | Option | Script variant to use: `sh` (bash/zsh) or `ps` (PowerShell) |
| `--ignore-agent-tools` | Flag | Skip checks for AI agent tools like Claude Code |
@@ -210,15 +230,24 @@ specify init my-project --ai qodercli
# Initialize with Windsurf support
specify init my-project --ai windsurf
# Initialize with Kiro CLI support
specify init my-project --ai kiro-cli
# Initialize with Amp support
specify init my-project --ai amp
# Initialize with SHAI support
specify init my-project --ai shai
# Initialize with Mistral Vibe support
specify init my-project --ai vibe
# Initialize with IBM Bob support
specify init my-project --ai bob
# Initialize with Antigravity support
specify init my-project --ai agy
# Initialize with an unsupported agent (generic / bring your own agent)
specify init my-project --ai generic --ai-commands-dir .myagent/commands/
@@ -393,7 +422,7 @@ specify init . --force --ai claude
specify init --here --force --ai claude
```
The CLI will check if you have Claude Code, Gemini CLI, Cursor CLI, Qwen CLI, opencode, Codex CLI, Qoder CLI, Tabnine CLI, Kiro CLI, or Mistral Vibe installed. If you do not, or you prefer to get the templates without checking for the right tools, use `--ignore-agent-tools` with your command:
```bash
specify init <project_name> --ai claude --ignore-agent-tools

View File

@@ -173,6 +173,6 @@ Finally, implement the solution:
## Next Steps
- Read the [complete methodology](https://github.com/github/spec-kit/blob/main/spec-driven.md) for in-depth guidance
- Check out [more examples](https://github.com/github/spec-kit/tree/main/templates) in the repository
- Explore the [source code on GitHub](https://github.com/github/spec-kit)

View File

@@ -243,6 +243,34 @@ manager.check_compatibility(
) # Raises: CompatibilityError if incompatible
```
### CatalogEntry
**Module**: `specify_cli.extensions`
Represents a single catalog in the active catalog stack.
```python
from specify_cli.extensions import CatalogEntry
entry = CatalogEntry(
url="https://example.com/catalog.json",
name="default",
priority=1,
install_allowed=True,
description="Built-in catalog of installable extensions",
)
```
**Fields**:
| Field | Type | Description |
|-------|------|-------------|
| `url` | `str` | Catalog URL (must use HTTPS, or HTTP for localhost) |
| `name` | `str` | Human-readable catalog name |
| `priority` | `int` | Sort order (lower = higher priority, wins on conflicts) |
| `install_allowed` | `bool` | Whether extensions from this catalog can be installed |
| `description` | `str` | Optional human-readable description of the catalog (default: empty) |
### ExtensionCatalog
**Module**: `specify_cli.extensions`
@@ -253,30 +281,67 @@ from specify_cli.extensions import ExtensionCatalog
catalog = ExtensionCatalog(project_root)
```
**Class attributes**:
```python
ExtensionCatalog.DEFAULT_CATALOG_URL # default catalog URL
ExtensionCatalog.COMMUNITY_CATALOG_URL # community catalog URL
```
**Methods**:
```python
# Get the ordered list of active catalogs
entries = catalog.get_active_catalogs() # List[CatalogEntry]
# Fetch catalog (primary catalog, backward compat)
catalog_data = catalog.fetch_catalog(force_refresh: bool = False) # Dict
# Search extensions across all active catalogs
# Each result includes _catalog_name and _install_allowed
results = catalog.search(
    query: Optional[str] = None,
    tag: Optional[str] = None,
    author: Optional[str] = None,
    verified_only: bool = False
) # Returns: List[Dict] — each dict includes _catalog_name, _install_allowed
# Get extension info (searches all active catalogs)
# Returns None if not found; includes _catalog_name and _install_allowed
ext_info = catalog.get_extension_info(extension_id: str) # Optional[Dict]
# Check cache validity (primary catalog)
is_valid = catalog.is_cache_valid() # bool
# Clear all catalog caches
catalog.clear_cache()
```
**Result annotation fields**:
Each extension dict returned by `search()` and `get_extension_info()` includes:
| Field | Type | Description |
|-------|------|-------------|
| `_catalog_name` | `str` | Name of the source catalog |
| `_install_allowed` | `bool` | Whether installation is allowed from this catalog |
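For orientation, here is a minimal usage sketch built only from the methods and annotation fields documented above; the `name` field read from each result is an assumption based on the catalog format shown later in this reference.
```python
from pathlib import Path
from specify_cli.extensions import ExtensionCatalog

# Sketch only: assumes the API documented above.
catalog = ExtensionCatalog(Path.cwd())

# Search every active catalog and report where each hit came from.
for ext in catalog.search(tag="quality"):
    source = ext["_catalog_name"]          # annotation field added by search()
    installable = ext["_install_allowed"]  # False for discovery-only catalogs
    name = ext.get("name", "<unknown>")    # assumed to follow the catalog entry format
    print(f"{name} (from {source}, installable={installable})")
```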
**Catalog config file** (`.specify/extension-catalogs.yml`):
```yaml
catalogs:
- name: "default"
url: "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.json"
priority: 1
install_allowed: true
description: "Built-in catalog of installable extensions"
- name: "community"
url: "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json"
priority: 2
install_allowed: false
description: "Community-contributed extensions (discovery only)"
```
### HookExecutor
**Module**: `specify_cli.extensions`
@@ -543,6 +608,39 @@ EXECUTE_COMMAND: {command}
**Output**: List of installed extensions with metadata
### extension catalog list
**Usage**: `specify extension catalog list`
Lists all active catalogs in the current catalog stack, showing name, description, URL, priority, and `install_allowed` status.
### extension catalog add
**Usage**: `specify extension catalog add URL [OPTIONS]`
**Options**:
- `--name NAME` - Catalog name (required)
- `--priority INT` - Priority (lower = higher priority, default: 10)
- `--install-allowed / --no-install-allowed` - Allow installs from this catalog (default: false)
- `--description TEXT` - Optional description of the catalog
**Arguments**:
- `URL` - Catalog URL (must use HTTPS)
Adds a catalog entry to `.specify/extension-catalogs.yml`.
### extension catalog remove
**Usage**: `specify extension catalog remove NAME`
**Arguments**:
- `NAME` - Catalog name to remove
Removes a catalog entry from `.specify/extension-catalogs.yml`.
### extension add
**Usage**: `specify extension add EXTENSION [OPTIONS]`
@@ -551,13 +649,13 @@ EXECUTE_COMMAND: {command}
- `--from URL` - Install from custom URL
- `--dev PATH` - Install from local directory
- `--version VERSION` - Install specific version
- `--no-register` - Skip command registration
**Arguments**:
- `EXTENSION` - Extension name or URL
**Note**: Extensions from catalogs with `install_allowed: false` cannot be installed via this command.
### extension remove
**Usage**: `specify extension remove EXTENSION [OPTIONS]`
@@ -575,6 +673,8 @@ EXECUTE_COMMAND: {command}
**Usage**: `specify extension search [QUERY] [OPTIONS]`
Searches all active catalogs simultaneously. Results include source catalog name and install_allowed status.
**Options**:
- `--tag TAG` - Filter by tag
@@ -589,6 +689,8 @@ EXECUTE_COMMAND: {command}
**Usage**: `specify extension info EXTENSION`
Shows source catalog and install_allowed status.
**Arguments**:
- `EXTENSION` - Extension ID

View File

@@ -332,6 +332,67 @@ echo "$config"
---
## Excluding Files with `.extensionignore`
Extension authors can create a `.extensionignore` file in the extension root to exclude files and folders from being copied when a user installs the extension with `specify extension add`. This is useful for keeping development-only files (tests, CI configs, docs source, etc.) out of the installed copy.
### Format
The file uses `.gitignore`-compatible patterns (one per line), powered by the [`pathspec`](https://pypi.org/project/pathspec/) library:
- Blank lines are ignored
- Lines starting with `#` are comments
- `*` matches anything **except** `/` (does not cross directory boundaries)
- `**` matches zero or more directories (e.g., `docs/**/*.draft.md`)
- `?` matches any single character except `/`
- A trailing `/` restricts a pattern to directories only
- Patterns containing `/` (other than a trailing slash) are anchored to the extension root
- Patterns without `/` match at any depth in the tree
- `!` negates a previously excluded pattern (re-includes a file)
- Backslashes in patterns are normalised to forward slashes for cross-platform compatibility
- The `.extensionignore` file itself is always excluded automatically
### Example
```gitignore
# .extensionignore
# Development files
tests/
.github/
.gitignore
# Build artifacts
__pycache__/
*.pyc
dist/
# Documentation source (keep only the built README)
docs/
CONTRIBUTING.md
```
### Pattern Matching
| Pattern | Matches | Does NOT match |
|---------|---------|----------------|
| `*.pyc` | Any `.pyc` file in any directory | — |
| `tests/` | The `tests` directory (and all its contents) | A file named `tests` |
| `docs/*.draft.md` | `docs/api.draft.md` (directly inside `docs/`) | `docs/sub/api.draft.md` (nested) |
| `.env` | The `.env` file at any level | — |
| `!README.md` | Re-includes `README.md` even if matched by an earlier pattern | — |
| `docs/**/*.draft.md` | `docs/api.draft.md`, `docs/sub/api.draft.md` | — |
### Unsupported Features
The following `.gitignore` features are **not applicable** in this context:
- **Multiple `.extensionignore` files**: Only a single file at the extension root is supported (`.gitignore` supports files in subdirectories)
- **`$GIT_DIR/info/exclude` and `core.excludesFile`**: These are Git-specific and have no equivalent here
- **Negation inside excluded directories**: Because file copying uses `shutil.copytree`, excluding a directory prevents recursion into it entirely. A negation pattern cannot re-include a file inside a directory that was itself excluded. For example, the combination `tests/` followed by `!tests/important.py` will **not** preserve `tests/important.py` — the `tests/` directory is skipped at the root level and its contents are never evaluated. To work around this, exclude the directory's contents individually instead of the directory itself (e.g., `tests/*.pyc` and `tests/.cache/` rather than `tests/`).
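To make this concrete, here is a simplified, hypothetical sketch (not the actual implementation) of how `.extensionignore` patterns could be applied with `pathspec` and `shutil.copytree`; it also reproduces the limitation above, since `copytree` never descends into a directory that the ignore callback excludes.
```python
import shutil
from pathlib import Path

import pathspec


def load_ignore(extension_root: Path) -> pathspec.PathSpec:
    """Build a gitignore-style matcher from .extensionignore (plus the file itself)."""
    lines = []
    ignore_file = extension_root / ".extensionignore"
    if ignore_file.exists():
        lines = ignore_file.read_text().splitlines()
    lines.append(".extensionignore")  # always excluded automatically
    return pathspec.PathSpec.from_lines("gitwildmatch", lines)


def copy_extension(src: Path, dest: Path) -> None:
    """Hypothetical copy step: skip anything matched by the ignore spec."""
    spec = load_ignore(src)

    def ignore(directory, names):
        skipped = set()
        for name in names:
            path = Path(directory) / name
            rel = path.relative_to(src).as_posix()
            # Append "/" so directory-only patterns like "tests/" can match.
            candidate = rel + "/" if path.is_dir() else rel
            if spec.match_file(candidate):
                skipped.add(name)
        return skipped

    # copytree never recurses into skipped directories, which is why a
    # negation like "!tests/important.py" cannot rescue files under "tests/".
    shutil.copytree(src, dest, ignore=ignore)
```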
---
## Validation Rules
### Extension ID

View File

@@ -76,7 +76,7 @@ vim .specify/extensions/jira/jira-config.yml
## Finding Extensions
`specify extension search` searches **all active catalogs** simultaneously, including the community catalog by default. Results are annotated with their source catalog and install status.
### Browse All Extensions
@@ -84,7 +84,7 @@ vim .specify/extensions/jira/jira-config.yml
specify extension search
```
Shows all extensions across all active catalogs (default and community by default).
### Search by Keyword
@@ -402,13 +402,13 @@ In addition to extension-specific environment variables (`SPECKIT_{EXT_ID}_*`),
| Variable | Description | Default |
|----------|-------------|---------|
| `SPECKIT_CATALOG_URL` | Override the full catalog stack with a single URL (backward compat) | Built-in default stack |
| `GH_TOKEN` / `GITHUB_TOKEN` | GitHub API token for downloads | None |
#### Example: Using a custom catalog for testing
```bash
# Point to a local or alternative catalog (replaces the full stack)
export SPECKIT_CATALOG_URL="http://localhost:8000/catalog.json"
# Or use a staging catalog
@@ -419,13 +419,96 @@ export SPECKIT_CATALOG_URL="https://example.com/staging/catalog.json"
## Extension Catalogs
Spec Kit uses a **catalog stack** — an ordered list of catalogs searched simultaneously. By default, two catalogs are active:
| Priority | Catalog | Install Allowed | Purpose |
|----------|---------|-----------------|---------|
| 1 | `catalog.json` (default) | ✅ Yes | Curated extensions available for installation |
| 2 | `catalog.community.json` (community) | ❌ No (discovery only) | Browse community extensions |
### Listing Active Catalogs
```bash
specify extension catalog list
```
### Managing Catalogs via CLI
You can view the main catalog management commands using `--help`:
```text
specify extension catalog --help
Usage: specify extension catalog [OPTIONS] COMMAND [ARGS]...
Manage extension catalogs
╭─ Options ────────────────────────────────────────────────────────────────────────╮
│ --help Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────────╮
│ list List all active extension catalogs. │
│ add Add a catalog to .specify/extension-catalogs.yml. │
│ remove Remove a catalog from .specify/extension-catalogs.yml. │
╰──────────────────────────────────────────────────────────────────────────────────╯
```
### Adding a Catalog (Project-scoped)
```bash
# Add an internal catalog that allows installs
specify extension catalog add \
--name "internal" \
--priority 2 \
--install-allowed \
https://internal.company.com/spec-kit/catalog.json
# Add a discovery-only catalog
specify extension catalog add \
--name "partner" \
--priority 5 \
https://partner.example.com/spec-kit/catalog.json
```
This creates or updates `.specify/extension-catalogs.yml`.
### Removing a Catalog
```bash
specify extension catalog remove internal
```
### Manual Config File
You can also edit `.specify/extension-catalogs.yml` directly:
```yaml
catalogs:
- name: "default"
url: "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.json"
priority: 1
install_allowed: true
description: "Built-in catalog of installable extensions"
- name: "internal"
url: "https://internal.company.com/spec-kit/catalog.json"
priority: 2
install_allowed: true
description: "Internal company extensions"
- name: "community"
url: "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json"
priority: 3
install_allowed: false
description: "Community-contributed extensions (discovery only)"
```
A user-level equivalent lives at `~/.specify/extension-catalogs.yml`. Project-level config takes full precedence when it contains one or more catalog entries. An empty `catalogs: []` list falls back to built-in defaults.
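As a rough illustration, the precedence described above could be modelled like this (a hypothetical helper, not the real loader; the built-in defaults are abbreviated):
```python
import os
from pathlib import Path

import yaml


def resolve_catalogs(project_root: Path) -> list[dict]:
    """Hypothetical sketch of the resolution order described in this guide."""
    # 1. SPECKIT_CATALOG_URL replaces the whole stack with a single catalog.
    env_url = os.environ.get("SPECKIT_CATALOG_URL")
    if env_url:
        return [{"name": "env", "url": env_url, "priority": 1, "install_allowed": True}]

    # 2./3. Project-level config wins over the user-level config.
    for cfg in (project_root / ".specify/extension-catalogs.yml",
                Path.home() / ".specify/extension-catalogs.yml"):
        if cfg.exists():
            catalogs = (yaml.safe_load(cfg.read_text()) or {}).get("catalogs") or []
            if catalogs:  # an empty list falls through to the defaults
                return sorted(catalogs, key=lambda c: c.get("priority", 10))

    # 4. Built-in default stack (URLs omitted here for brevity).
    return [
        {"name": "default", "priority": 1, "install_allowed": True},
        {"name": "community", "priority": 2, "install_allowed": False},
    ]
```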
## Organization Catalog Customization
### Why Customize Your Catalog
Organizations customize their catalogs to:
- **Control available extensions** - Curate which extensions your team can install
- **Host private extensions** - Internal tools that shouldn't be public
@@ -503,24 +586,40 @@ Options for hosting your catalog:
#### 3. Configure Your Environment
##### Option A: Catalog stack config file (recommended)
Add to `.specify/extension-catalogs.yml` in your project:
```yaml
catalogs:
- name: "my-org"
url: "https://your-org.com/spec-kit/catalog.json"
priority: 1
install_allowed: true
```
Or use the CLI:
```bash
specify extension catalog add \
--name "my-org" \
--install-allowed \
https://your-org.com/spec-kit/catalog.json
```
##### Option B: Environment variable (recommended for CI/CD, single-catalog)
```bash
# In ~/.bashrc, ~/.zshrc, or CI pipeline
export SPECKIT_CATALOG_URL="https://your-org.com/spec-kit/catalog.json"
```
##### Option B: Per-project configuration
Create `.env` or set in your shell before running spec-kit commands:
```bash
SPECKIT_CATALOG_URL="https://your-org.com/spec-kit/catalog.json" specify extension search
```
#### 4. Verify Configuration
```bash
# List active catalogs
specify extension catalog list
# Search should now show your catalog's extensions
specify extension search

View File

@@ -72,8 +72,18 @@ The following community-contributed extensions are available in [`catalog.commun
| Extension | Purpose | URL |
|-----------|---------|-----|
| Azure DevOps Integration | Sync user stories and tasks to Azure DevOps work items using OAuth authentication | [spec-kit-azure-devops](https://github.com/pragya247/spec-kit-azure-devops) |
| Cleanup Extension | Post-implementation quality gate that reviews changes, fixes small issues (scout rule), creates tasks for medium issues, and generates analysis for large issues | [spec-kit-cleanup](https://github.com/dsrednicki/spec-kit-cleanup) |
| Fleet Orchestrator | Orchestrate a full feature lifecycle with human-in-the-loop gates across all SpecKit phases | [spec-kit-fleet](https://github.com/sharathsatish/spec-kit-fleet) |
| Jira Integration | Create Jira Epics, Stories, and Issues from spec-kit specifications and task breakdowns with configurable hierarchy and custom field support | [spec-kit-jira](https://github.com/mbachorik/spec-kit-jira) |
| Ralph Loop | Autonomous implementation loop using AI agent CLI | [spec-kit-ralph](https://github.com/Rubiss/spec-kit-ralph) |
| Retrospective Extension | Post-implementation retrospective with spec adherence scoring, drift analysis, and human-gated spec updates | [spec-kit-retrospective](https://github.com/emi-dm/spec-kit-retrospective) |
| Review Extension | Post-implementation comprehensive code review with specialized agents for code quality, comments, tests, error handling, type design, and simplification | [spec-kit-review](https://github.com/ismaelJimenez/spec-kit-review) |
| Spec Sync | Detect and resolve drift between specs and implementation. AI-assisted resolution with human approval | [spec-kit-sync](https://github.com/bgervin/spec-kit-sync) |
| Understanding | Automated requirements quality analysis — 31 deterministic metrics against IEEE/ISO standards with experimental energy-based ambiguity detection | [understanding](https://github.com/Testimonial/understanding) |
| V-Model Extension Pack | Enforces V-Model paired generation of development specs and test specs with full traceability | [spec-kit-v-model](https://github.com/leocamello/spec-kit-v-model) |
| Verify Extension | Post-implementation quality gate that validates implemented code against specification artifacts | [spec-kit-verify](https://github.com/ismaelJimenez/spec-kit-verify) |
## Adding Your Extension

View File

@@ -868,7 +868,7 @@ Spec Kit uses two catalog files with different purposes:
- **Purpose**: Organization's curated catalog of approved extensions
- **Default State**: Empty by design - users populate with extensions they trust
- **Usage**: Primary catalog (priority 1, `install_allowed: true`) in the default stack
- **Control**: Organizations maintain their own fork/version for their teams
#### Community Reference Catalog (`catalog.community.json`)
@@ -879,16 +879,16 @@ Spec Kit uses two catalog files with different purposes:
- **Verification**: Community extensions may have `verified: false` initially
- **Status**: Active - open for community contributions
- **Submission**: Via Pull Request following the Extension Publishing Guide
- **Usage**: Secondary catalog (priority 2, `install_allowed: false`) in the default stack — discovery only
**How It Works (default stack):**
1. **Discover**: `specify extension search` searches both catalogs — community extensions appear automatically
2. **Review**: Evaluate community extensions for security, quality, and organizational fit
3. **Curate**: Copy approved entries from community catalog to your `catalog.json`, or add to `.specify/extension-catalogs.yml` with `install_allowed: true`
4. **Install**: Use `specify extension add <name>` — only allowed from `install_allowed: true` catalogs
This approach gives organizations full control over which extensions can be installed while still providing community discoverability out of the box.
### Catalog Format
@@ -961,30 +961,92 @@ specify extension info jira
### Custom Catalogs
Spec Kit supports a **catalog stack** — an ordered list of catalogs that the CLI merges and searches across. This allows organizations to maintain their own org-approved extensions alongside an internal catalog and community discovery, all at once.
#### Catalog Stack Resolution
The active catalog stack is resolved in this order (first match wins):
1. **`SPECKIT_CATALOG_URL` environment variable** — single catalog replacing all defaults (backward compat)
2. **Project-level `.specify/extension-catalogs.yml`** — full control for the project
3. **User-level `~/.specify/extension-catalogs.yml`** — personal defaults
4. **Built-in default stack** — `catalog.json` (install_allowed: true) + `catalog.community.json` (install_allowed: false)
#### Default Built-in Stack
When no config file exists, the CLI uses:
| Priority | Catalog | install_allowed | Purpose |
|----------|---------|-----------------|---------|
| 1 | `catalog.json` (default) | `true` | Curated extensions available for installation |
| 2 | `catalog.community.json` (community) | `false` | Discovery only — browse but not install |
This means `specify extension search` surfaces community extensions out of the box, while `specify extension add` is still restricted to entries from catalogs with `install_allowed: true`.
#### `.specify/extension-catalogs.yml` Config File
```yaml
catalogs:
  - name: "default"
    url: "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.json"
    priority: 1 # Highest — only approved entries can be installed
    install_allowed: true
    description: "Built-in catalog of installable extensions"
  - name: "internal"
    url: "https://internal.company.com/spec-kit/catalog.json"
    priority: 2
    install_allowed: true
    description: "Internal company extensions"
  - name: "community"
    url: "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json"
    priority: 3 # Lowest — discovery only, not installable
    install_allowed: false
    description: "Community-contributed extensions (discovery only)"
```
A user-level equivalent lives at `~/.specify/extension-catalogs.yml`. When a project-level config is present with one or more catalog entries, it takes full control and the built-in defaults are not applied. An empty `catalogs: []` list is treated the same as no config file, falling back to defaults.
#### Catalog CLI Commands
```bash
# List active catalogs with name, URL, priority, and install_allowed
specify extension catalog list
# Add a catalog (project-scoped)
specify extension catalog add --name "internal" --install-allowed \
https://internal.company.com/spec-kit/catalog.json
# Add a discovery-only catalog
specify extension catalog add --name "community" \
https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json
# Remove a catalog
specify extension catalog remove internal
# Show which catalog an extension came from
specify extension info jira
# → Source catalog: default
```
#### Merge Conflict Resolution
When the same extension `id` appears in multiple catalogs, the higher-priority (lower priority number) catalog wins. Extensions from lower-priority catalogs with the same `id` are ignored.
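A minimal sketch of that merge rule, for illustration only (the real implementation may differ):
```python
def merge_catalogs(catalogs: list[dict]) -> dict[str, dict]:
    """Merge extension entries from multiple catalogs; lowest priority number wins."""
    merged: dict[str, dict] = {}
    # Visit catalogs from highest priority (lowest number) to lowest.
    for catalog in sorted(catalogs, key=lambda c: c["priority"]):
        for ext_id, entry in catalog["extensions"].items():
            # The first catalog to define an id wins; lower-priority duplicates are ignored.
            merged.setdefault(ext_id, {**entry, "_catalog_name": catalog["name"]})
    return merged
```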
#### `install_allowed: false` Behavior
Extensions from discovery-only catalogs are shown in `specify extension search` results but cannot be installed directly:
```
⚠ 'linear' is available in the 'community' catalog but installation is not allowed from that catalog.
To enable installation, add 'linear' to an approved catalog (install_allowed: true) in .specify/extension-catalogs.yml.
```
#### `SPECKIT_CATALOG_URL` (Backward Compatibility)
The `SPECKIT_CATALOG_URL` environment variable still works — it is treated as a single `install_allowed: true` catalog, **replacing both defaults** for full backward compatibility:
```bash
# Point to your organization's catalog

View File

@@ -1,8 +1,47 @@
{
"schema_version": "1.0",
"updated_at": "2026-03-09T00:00:00Z",
"catalog_url": "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json",
"extensions": {
"azure-devops": {
"name": "Azure DevOps Integration",
"id": "azure-devops",
"description": "Sync user stories and tasks to Azure DevOps work items using OAuth authentication.",
"author": "pragya247",
"version": "1.0.0",
"download_url": "https://github.com/pragya247/spec-kit-azure-devops/archive/refs/tags/v1.0.0.zip",
"repository": "https://github.com/pragya247/spec-kit-azure-devops",
"homepage": "https://github.com/pragya247/spec-kit-azure-devops",
"documentation": "https://github.com/pragya247/spec-kit-azure-devops/blob/main/README.md",
"changelog": "https://github.com/pragya247/spec-kit-azure-devops/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0",
"tools": [
{
"name": "az",
"version": ">=2.0.0",
"required": true
}
]
},
"provides": {
"commands": 1,
"hooks": 1
},
"tags": [
"azure",
"devops",
"project-management",
"work-items",
"issue-tracking"
],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-03T00:00:00Z",
"updated_at": "2026-03-03T00:00:00Z"
},
"cleanup": { "cleanup": {
"name": "Cleanup Extension", "name": "Cleanup Extension",
"id": "cleanup", "id": "cleanup",
@@ -22,13 +61,112 @@
"commands": 1, "commands": 1,
"hooks": 1 "hooks": 1
}, },
"tags": ["quality", "tech-debt", "review", "cleanup", "scout-rule"], "tags": [
"quality",
"tech-debt",
"review",
"cleanup",
"scout-rule"
],
"verified": false, "verified": false,
"downloads": 0, "downloads": 0,
"stars": 0, "stars": 0,
"created_at": "2026-02-22T00:00:00Z", "created_at": "2026-02-22T00:00:00Z",
"updated_at": "2026-02-22T00:00:00Z" "updated_at": "2026-02-22T00:00:00Z"
}, },
"fleet": {
"name": "Fleet Orchestrator",
"id": "fleet",
"description": "Orchestrate a full feature lifecycle with human-in-the-loop gates across all SpecKit phases.",
"author": "sharathsatish",
"version": "1.0.0",
"download_url": "https://github.com/sharathsatish/spec-kit-fleet/archive/refs/tags/v1.0.0.zip",
"repository": "https://github.com/sharathsatish/spec-kit-fleet",
"homepage": "https://github.com/sharathsatish/spec-kit-fleet",
"documentation": "https://github.com/sharathsatish/spec-kit-fleet/blob/main/README.md",
"changelog": "https://github.com/sharathsatish/spec-kit-fleet/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0"
},
"provides": {
"commands": 2,
"hooks": 1
},
"tags": ["orchestration", "workflow", "human-in-the-loop", "parallel"],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z"
},
"jira": {
"name": "Jira Integration",
"id": "jira",
"description": "Create Jira Epics, Stories, and Issues from spec-kit specifications and task breakdowns with configurable hierarchy and custom field support.",
"author": "mbachorik",
"version": "2.1.0",
"download_url": "https://github.com/mbachorik/spec-kit-jira/archive/refs/tags/v2.1.0.zip",
"repository": "https://github.com/mbachorik/spec-kit-jira",
"homepage": "https://github.com/mbachorik/spec-kit-jira",
"documentation": "https://github.com/mbachorik/spec-kit-jira/blob/main/README.md",
"changelog": "https://github.com/mbachorik/spec-kit-jira/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0"
},
"provides": {
"commands": 3,
"hooks": 1
},
"tags": [
"issue-tracking",
"jira",
"atlassian",
"project-management"
],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-05T00:00:00Z",
"updated_at": "2026-03-05T00:00:00Z"
},
"ralph": {
"name": "Ralph Loop",
"id": "ralph",
"description": "Autonomous implementation loop using AI agent CLI.",
"author": "Rubiss",
"version": "1.0.0",
"download_url": "https://github.com/Rubiss/spec-kit-ralph/archive/refs/tags/v1.0.0.zip",
"repository": "https://github.com/Rubiss/spec-kit-ralph",
"homepage": "https://github.com/Rubiss/spec-kit-ralph",
"documentation": "https://github.com/Rubiss/spec-kit-ralph/blob/main/README.md",
"changelog": "https://github.com/Rubiss/spec-kit-ralph/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0",
"tools": [
{
"name": "copilot",
"required": true
},
{
"name": "git",
"required": true
}
]
},
"provides": {
"commands": 2,
"hooks": 1
},
"tags": ["implementation", "automation", "loop", "copilot"],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-09T00:00:00Z",
"updated_at": "2026-03-09T00:00:00Z"
},
"retrospective": { "retrospective": {
"name": "Retrospective Extension", "name": "Retrospective Extension",
"id": "retrospective", "id": "retrospective",
@@ -48,13 +186,118 @@
"commands": 1, "commands": 1,
"hooks": 1 "hooks": 1
}, },
"tags": ["retrospective", "spec-drift", "quality", "analysis", "governance"], "tags": [
"retrospective",
"spec-drift",
"quality",
"analysis",
"governance"
],
"verified": false, "verified": false,
"downloads": 0, "downloads": 0,
"stars": 0, "stars": 0,
"created_at": "2026-02-24T00:00:00Z", "created_at": "2026-02-24T00:00:00Z",
"updated_at": "2026-02-24T00:00:00Z" "updated_at": "2026-02-24T00:00:00Z"
}, },
"review": {
"name": "Review Extension",
"id": "review",
"description": "Post-implementation comprehensive code review with specialized agents for code quality, comments, tests, error handling, type design, and simplification.",
"author": "ismaelJimenez",
"version": "1.0.0",
"download_url": "https://github.com/ismaelJimenez/spec-kit-review/archive/refs/tags/v1.0.0.zip",
"repository": "https://github.com/ismaelJimenez/spec-kit-review",
"homepage": "https://github.com/ismaelJimenez/spec-kit-review",
"documentation": "https://github.com/ismaelJimenez/spec-kit-review/blob/main/README.md",
"changelog": "https://github.com/ismaelJimenez/spec-kit-review/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0"
},
"provides": {
"commands": 7,
"hooks": 1
},
"tags": ["code-review", "quality", "review", "testing", "error-handling", "type-design", "simplification"],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-06T00:00:00Z",
"updated_at": "2026-03-06T00:00:00Z"
},
"sync": {
"name": "Spec Sync",
"id": "sync",
"description": "Detect and resolve drift between specs and implementation. AI-assisted resolution with human approval.",
"author": "bgervin",
"version": "0.1.0",
"download_url": "https://github.com/bgervin/spec-kit-sync/archive/refs/tags/v0.1.0.zip",
"repository": "https://github.com/bgervin/spec-kit-sync",
"homepage": "https://github.com/bgervin/spec-kit-sync",
"documentation": "https://github.com/bgervin/spec-kit-sync/blob/main/README.md",
"changelog": "https://github.com/bgervin/spec-kit-sync/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0"
},
"provides": {
"commands": 5,
"hooks": 1
},
"tags": [
"sync",
"drift",
"validation",
"bidirectional",
"backfill"
],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-02T00:00:00Z",
"updated_at": "2026-03-02T00:00:00Z"
},
"understanding": {
"name": "Understanding",
"id": "understanding",
"description": "Automated requirements quality analysis — validates specs against IEEE/ISO standards using 31 deterministic metrics. Catches ambiguity, missing testability, and structural issues before they reach implementation. Includes experimental energy-based ambiguity detection using local LM token perplexity.",
"author": "Ladislav Bihari",
"version": "3.4.0",
"download_url": "https://github.com/Testimonial/understanding/archive/refs/tags/v3.4.0.zip",
"repository": "https://github.com/Testimonial/understanding",
"homepage": "https://github.com/Testimonial/understanding",
"documentation": "https://github.com/Testimonial/understanding/blob/main/extension/README.md",
"changelog": "https://github.com/Testimonial/understanding/blob/main/extension/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0",
"tools": [
{
"name": "understanding",
"version": ">=3.4.0",
"required": true
}
]
},
"provides": {
"commands": 3,
"hooks": 1
},
"tags": [
"quality",
"metrics",
"requirements",
"validation",
"readability",
"IEEE-830",
"ISO-29148"
],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-07T00:00:00Z",
"updated_at": "2026-03-07T00:00:00Z"
},
"v-model": { "v-model": {
"name": "V-Model Extension Pack", "name": "V-Model Extension Pack",
"id": "v-model", "id": "v-model",
@@ -74,12 +317,50 @@
"commands": 9, "commands": 9,
"hooks": 1 "hooks": 1
}, },
"tags": ["v-model", "traceability", "testing", "compliance", "safety-critical"], "tags": [
"v-model",
"traceability",
"testing",
"compliance",
"safety-critical"
],
"verified": false, "verified": false,
"downloads": 0, "downloads": 0,
"stars": 0, "stars": 0,
"created_at": "2026-02-20T00:00:00Z", "created_at": "2026-02-20T00:00:00Z",
"updated_at": "2026-02-22T00:00:00Z" "updated_at": "2026-02-22T00:00:00Z"
},
"verify": {
"name": "Verify Extension",
"id": "verify",
"description": "Post-implementation quality gate that validates implemented code against specification artifacts.",
"author": "ismaelJimenez",
"version": "1.0.0",
"download_url": "https://github.com/ismaelJimenez/spec-kit-verify/archive/refs/tags/v1.0.0.zip",
"repository": "https://github.com/ismaelJimenez/spec-kit-verify",
"homepage": "https://github.com/ismaelJimenez/spec-kit-verify",
"documentation": "https://github.com/ismaelJimenez/spec-kit-verify/blob/main/README.md",
"changelog": "https://github.com/ismaelJimenez/spec-kit-verify/blob/main/CHANGELOG.md",
"license": "MIT",
"requires": {
"speckit_version": ">=0.1.0"
},
"provides": {
"commands": 1,
"hooks": 1
},
"tags": [
"verification",
"quality-gate",
"implementation",
"spec-adherence",
"compliance"
],
"verified": false,
"downloads": 0,
"stars": 0,
"created_at": "2026-03-03T00:00:00Z",
"updated_at": "2026-03-03T00:00:00Z"
}
}
}

View File

@@ -0,0 +1,54 @@
# Spec Kit - February 2026 Newsletter
This edition covers Spec Kit activity in February 2026. Versions v0.1.7 through v0.1.13 shipped during the month, addressing bugs and adding features including a dual-catalog extension system and additional agent integrations. Community activity included blog posts, tutorials, and meetup sessions. A category summary is in the table below, followed by details.
| **Spec Kit Core (Feb 2026)** | **Community & Content** | **Roadmap & Next** |
| --- | --- | --- |
| Versions **v0.1.7** through **v0.1.13** shipped with bug fixes and features, including a **dual-catalog extension system** and new agent integrations. Over 300 issues were closed (of ~800 filed). The repo reached 71k stars and 6.4k forks. [\[github.com\]](https://github.com/github/spec-kit/releases) [\[github.com\]](https://github.com/github/spec-kit/issues) [\[rywalker.com\]](https://rywalker.com/research/github-spec-kit) | Eduardo Luz published a LinkedIn article on SDD and Spec Kit [\[linkedin.com\]](https://www.linkedin.com/pulse/specification-driven-development-sdd-github-spec-kit-elevating-luz-tojmc?tl=en). Erick Matsen blogged a walkthrough of building a bioinformatics pipeline with Spec Kit [\[matsen.fredhutch.org\]](https://matsen.fredhutch.org/general/2026/02/10/spec-kit-walkthrough.html). Microsoft MVP [Eric Boyd](https://ericboyd.com/) (not the Microsoft AI Platform VP of the same name) presented at the Cleveland .NET User Group [\[ericboyd.com\]](https://ericboyd.com/events/cleveland-csharp-user-group-february-25-2026-spec-driven-development-sdd-github-spec-kit). | **v0.2.0** was released in early March, consolidating February's work. It added extensions for Jira and Azure DevOps, community plugin support, and agents for Tabnine CLI and Kiro CLI [\[github.com\]](https://github.com/github/spec-kit/releases). Future work includes spec lifecycle management and progress toward a stable 1.0 release [\[martinfowler.com\]](https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html). |
***
## Spec Kit Project Updates
Spec Kit released versions **v0.1.7** through **v0.1.13** during February. Version 0.1.7 (early February) updated documentation for the newly introduced **dual-catalog extension system**, which allows both core and community extension catalogs to coexist. Subsequent patches (0.1.8, 0.1.9, etc.) bumped dependencies such as GitHub Actions versions and resolved minor issues. **v0.1.10** fixed YAML front-matter handling in generated files. By late February, **v0.1.12** and **v0.1.13** shipped with additional fixes in preparation for the next version bump. [\[github.com\]](https://github.com/github/spec-kit/releases)
The main architectural addition was the **modular extension system** with separate "core" and "community" extension catalogs for third-party add-ons. Multiple community-contributed extensions were merged during the month, including a **Jira extension** for issue tracker integration, an **Azure DevOps extension**, and utility extensions for code review, retrospective documentation, and CI/CD sync. The pending 0.2.0 release changelog lists over a dozen changes from February, including the extension additions and support for **multiple agent catalogs concurrently**. [\[github.com\]](https://github.com/github/spec-kit/releases)
By end of February, **over 330 issues/feature requests had been closed on GitHub** (out of ~870 filed to date). External contributors submitted pull requests including the **Tabnine CLI support**, which was merged in late February. The repository reached ~71k stars and crossed 6,000 forks. [\[github.com\]](https://github.com/github/spec-kit/issues) [\[github.com\]](https://github.com/github/spec-kit/releases) [\[rywalker.com\]](https://rywalker.com/research/github-spec-kit)
On the stability side, February's work focused on tightening core workflows and fixing edge-case bugs in the specification, planning, and task-generation commands. The team addressed file-handling issues (e.g., clarifying how output files are created/appended) and improved the reliability of the automated release pipeline. The project also added **Kiro CLI** to the supported agent list and updated integration scripts for Cursor and Code Interpreter, bringing the total number of supported AI coding assistants to over 20. [\[github.com\]](https://github.com/github/spec-kit/releases) [\[github.com\]](https://github.com/github/spec-kit)
## Community & Content
**Eduardo Luz** published a LinkedIn article on Feb 15 titled *"Specification Driven Development (SDD) and the GitHub Spec Kit: Elevating Software Engineering."* The article draws on his experience as a senior engineer to describe common causes of technical debt and inconsistent designs, and how SDD addresses them. It walks through Spec Kit's **four-layer approach** (Constitution, Design, Tasks, Implementation) and discusses treating specifications as a source of truth. The post generated discussion among software architects on LinkedIn about reducing misunderstandings and rework through spec-driven workflows. [\[linkedin.com\]](https://www.linkedin.com/pulse/specification-driven-development-sdd-github-spec-kit-elevating-luz-tojmc?tl=en)
**Erick Matsen** (Fred Hutchinson Cancer Center) posted a detailed walkthrough on Feb 10 titled *"Spec-Driven Development with spec-kit."* He describes building a **bioinformatics pipeline** in a single day using Spec Kit's workflow (from `speckit.constitution` to `speckit.implement`). The post includes command outputs and notes on decisions made along the way, such as refining the spec to add domain-specific requirements. He writes: "I really recommend this approach. This feels like the way software development should be." [\[matsen.fredhutch.org\]](https://matsen.fredhutch.org/general/2026/02/10/spec-kit-walkthrough.html) [\[github.com\]](https://github.com/mnriem/spec-kit-dotnet-cli-demo)
Several other tutorials and guides appeared during the month. An article on *IntuitionLabs* (updated Feb 21) provided a guide to Spec Kit covering the philosophy behind SDD and a walkthrough of the four-phase workflow with examples. A piece by Ry Walker (Feb 22) summarized key aspects of Spec Kit, noting its agent-agnostic design and 71k-star count. Microsoft's Developer Blog post from late 2025 (*"Diving Into Spec-Driven Development with GitHub Spec Kit"* by Den Delimarsky) continued to circulate among new users. [\[intuitionlabs.ai\]](https://intuitionlabs.ai/articles/spec-driven-development-spec-kit) [\[rywalker.com\]](https://rywalker.com/research/github-spec-kit)
On **Feb 25**, the Cleveland C# .NET User Group hosted a session titled *"Spec Driven Development with GitHub Spec Kit."* The talk was delivered by Microsoft MVP **[Eric Boyd](https://ericboyd.com/)** (Cleveland-based .NET developer; not to be confused with the Microsoft AI Platform VP of the same name). Boyd covered how specs change an AI coding assistant's output, patterns for iterating and refining specs over multiple cycles, and moving from ad-hoc prompting to a repeatable spec-driven workflow. Other groups, including GDG Madison, also listed sessions on spec-driven development in late February and early March. [\[ericboyd.com\]](https://ericboyd.com/events/cleveland-csharp-user-group-february-25-2026-spec-driven-development-sdd-github-spec-kit)
On GitHub, the **Spec Kit Discussions forum** saw activity around installation troubleshooting, handling multi-feature projects with Spec Kit's branching model, and feature suggestions. One thread discussed how Spec Kit treats each spec as a short-lived artifact tied to a feature branch, which led to discussion about future support for long-running "spec of record" use cases. [\[martinfowler.com\]](https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html)
## SDD Ecosystem
Other spec-driven development tools also saw activity in February.
AWS **Kiro** released version 0.10 on Feb 18 with two new spec workflows: a **Design-First** mode (starting from architecture/pseudocode to derive requirements) and a **Bugfix** mode (structured root-cause analysis producing a `bugfix.md` spec file). Kiro also added hunk-level code review for AI-generated changes and pre/post task hooks for custom automation. AWS expanded Kiro to GovCloud regions on Feb 17 for government compliance use cases. [\[kiro.dev\]](https://kiro.dev/changelog/)
**OpenSpec** (by Fission AI), a lightweight SDD framework, reached ~29.3k stars and nearly 2k forks. Its community published guides and comparisons during the month, including *"Spec-Driven Development Made Easy: A Practical Guide with OpenSpec."* OpenSpec emphasizes simplicity and flexibility, integrating with multiple AI coding assistants via YAML configs.
**Tessl** remained in private beta. As described by Thoughtworks writer Birgitta Boeckeler, Tessl pursues a **spec-as-source** model where specifications are maintained long-term and directly generate code files one-to-one, with generated code labeled as "do not edit." This contrasts with Spec Kit's current approach of creating specs per feature/branch. [\[martinfowler.com\]](https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html)
An **arXiv preprint** (January 2026) categorized SDD implementations into three levels: *spec-first*, *spec-anchored*, and *spec-as-source*. Spec Kit was identified as primarily spec-first with elements of spec-anchored. Tech media published reviews including a *Vibe Coding* "GitHub Spec Kit Review (2026)" and a blog post titled *"Putting Spec Kit Through Its Paces: Radical Idea or Reinvented Waterfall?"* which concluded that SDD with AI assistance is more iterative than traditional Waterfall. [\[intuitionlabs.ai\]](https://intuitionlabs.ai/articles/spec-driven-development-spec-kit) [\[martinfowler.com\]](https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html)
## Roadmap
**v0.2.0** was released on March 10, 2026, consolidating the month's work. It includes new extensions (Jira, Azure DevOps, review, sync), support for multiple extension catalogs and community plugins, and additional agent integrations (Tabnine CLI, Kiro CLI). [\[github.com\]](https://github.com/github/spec-kit/releases)
Areas under discussion or in progress for future development:
- **Spec lifecycle management** -- supporting longer-lived specifications that can evolve across multiple iterations, rather than being tied to a single feature branch. Users have raised this in GitHub Discussions, and the concept of "spec-anchored" development is under consideration. [\[martinfowler.com\]](https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html)
- **CI/CD integration** -- incorporating Spec Kit verification (e.g., `speckit.checklist` or `speckit.verify`) into pull request workflows and project management tools. February's Jira and Azure DevOps extensions are a step in this direction. [\[github.com\]](https://github.com/github/spec-kit/releases)
- **Continued agent support** -- adding integrations as new AI coding assistants emerge. The project currently supports over 20 agents and has been adding new ones (Kiro CLI, Tabnine CLI) as they become available. [\[github.com\]](https://github.com/github/spec-kit)
- **Community ecosystem** -- the open extension model allows external contributors to add functionality directly. February's Jira and Azure DevOps plugins were community-contributed. The Spec Kit README now links to community walkthrough demos for .NET, Spring Boot, and other stacks. [\[github.com\]](https://github.com/github/spec-kit)
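To make the CI/CD item concrete, here is a minimal sketch of a pull-request gate that fails when a spec checklist still contains unchecked items. It is illustrative only: the `specs/**/checklists/*.md` layout and the `- [ ]` checkbox convention are assumptions, not a confirmed Spec Kit verification interface.

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: fail the build if any spec checklist has open items."""
from pathlib import Path
import re
import sys

# Assumption: checklists produced during the spec workflow live under specs/
CHECKLIST_GLOB = "specs/**/checklists/*.md"
UNCHECKED = re.compile(r"^\s*[-*]\s+\[ \]", re.MULTILINE)

def main() -> int:
    failures = []
    for path in Path(".").glob(CHECKLIST_GLOB):
        open_items = UNCHECKED.findall(path.read_text(encoding="utf-8"))
        if open_items:
            failures.append(f"{path}: {len(open_items)} unchecked item(s)")
    if failures:
        print("Spec checklist gate failed:")
        print("\n".join(f"  - {f}" for f in failures))
        return 1
    print("All spec checklists are complete.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as a pull-request check, a script like this turns checklist completion into a merge gate without requiring an AI agent to run inside CI.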

View File

@@ -1,6 +1,6 @@
 [project]
 name = "specify-cli"
-version = "0.1.6"
+version = "0.2.1"
 description = "Specify CLI, part of GitHub Spec Kit. A tool to bootstrap your projects for Spec-Driven Development (SDD)."
 requires-python = ">=3.11"
 dependencies = [
@@ -13,6 +13,7 @@ dependencies = [
     "truststore>=0.10.4",
     "pyyaml>=6.0",
     "packaging>=23.0",
+    "pathspec>=0.12.0",
 ]
 [project.scripts]
@@ -51,4 +52,3 @@ precision = 2
 show_missing = true
 skip_covered = false

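The `pathspec>=0.12.0` dependency added above supports the new `.extensionignore` handling introduced in `extensions.py` later in this change set. A minimal sketch of the gitignore-style matching it provides; the patterns below are made up for illustration:

```python
# Gitignore-style matching via pathspec, the dependency added above.
import pathspec

spec = pathspec.GitIgnoreSpec.from_lines([
    "tests/",      # trailing '/' restricts the pattern to directories
    "*.log",       # '*' matches anything except '/'
    "!keep.log",   # '!' re-includes a previously excluded path
])

print(spec.match_file("tests/"))         # True  (directory-only pattern)
print(spec.match_file("build/out.log"))  # True  (slash-free patterns match at any depth)
print(spec.match_file("keep.log"))       # False (negated back in)
```

These are the same semantics the new `_load_extensionignore` helper relies on when filtering files copied during `shutil.copytree`.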
View File

@@ -67,6 +67,13 @@ if [ -z "$FEATURE_DESCRIPTION" ]; then
     exit 1
 fi
+# Trim whitespace and validate description is not empty (e.g., user passed only whitespace)
+FEATURE_DESCRIPTION=$(echo "$FEATURE_DESCRIPTION" | xargs)
+if [ -z "$FEATURE_DESCRIPTION" ]; then
+    echo "Error: Feature description cannot be empty or contain only whitespace" >&2
+    exit 1
+fi
 # Function to find the repository root by searching for existing project markers
 find_repo_root() {
     local dir="$1"
@@ -272,7 +279,16 @@ if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
 fi
 if [ "$HAS_GIT" = true ]; then
-    git checkout -b "$BRANCH_NAME"
+    if ! git checkout -b "$BRANCH_NAME" 2>/dev/null; then
+        # Check if branch already exists
+        if git branch --list "$BRANCH_NAME" | grep -q .; then
+            >&2 echo "Error: Branch '$BRANCH_NAME' already exists. Please use a different feature name or specify a different number with --number."
+            exit 1
+        else
+            >&2 echo "Error: Failed to create git branch '$BRANCH_NAME'. Please check your git configuration and try again."
+            exit 1
+        fi
+    fi
 else
     >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
 fi

View File

@@ -30,12 +30,12 @@
# #
# 5. Multi-Agent Support # 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions # - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Amazon Q Developer CLI, or Antigravity # - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Tabnine CLI, Kiro CLI, Mistral Vibe, Kimi Code, Antigravity or Generic
# - Can update single agents or all existing agent files # - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist # - Creates default Claude file if no agent files exist
# #
# Usage: ./update-agent-context.sh [agent_type] # Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli # Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|generic
# Leave empty to update all existing agent files # Leave empty to update all existing agent files
set -e set -e
@@ -73,9 +73,12 @@ CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md" QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md" AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md" SHAI_FILE="$REPO_ROOT/SHAI.md"
Q_FILE="$REPO_ROOT/AGENTS.md" TABNINE_FILE="$REPO_ROOT/TABNINE.md"
KIRO_FILE="$REPO_ROOT/AGENTS.md"
AGY_FILE="$REPO_ROOT/.agent/rules/specify-rules.md" AGY_FILE="$REPO_ROOT/.agent/rules/specify-rules.md"
BOB_FILE="$REPO_ROOT/AGENTS.md" BOB_FILE="$REPO_ROOT/AGENTS.md"
VIBE_FILE="$REPO_ROOT/.vibe/agents/specify-agents.md"
KIMI_FILE="$REPO_ROOT/KIMI.md"
# Template file # Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md" TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
@@ -351,10 +354,19 @@ create_new_agent_file() {
# Convert \n sequences to actual newlines # Convert \n sequences to actual newlines
newline=$(printf '\n') newline=$(printf '\n')
sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file" sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"
# Clean up backup files # Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2" rm -f "$temp_file.bak" "$temp_file.bak2"
# Prepend Cursor frontmatter for .mdc files so rules are auto-included
if [[ "$target_file" == *.mdc ]]; then
local frontmatter_file
frontmatter_file=$(mktemp) || return 1
printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
cat "$temp_file" >> "$frontmatter_file"
mv "$frontmatter_file" "$temp_file"
fi
return 0 return 0
} }
@@ -492,13 +504,24 @@ update_existing_agent_file() {
changes_entries_added=true changes_entries_added=true
fi fi
# Ensure Cursor .mdc files have YAML frontmatter for auto-inclusion
if [[ "$target_file" == *.mdc ]]; then
if ! head -1 "$temp_file" | grep -q '^---'; then
local frontmatter_file
frontmatter_file=$(mktemp) || { rm -f "$temp_file"; return 1; }
printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
cat "$temp_file" >> "$frontmatter_file"
mv "$frontmatter_file" "$temp_file"
fi
fi
# Move temp file to target atomically # Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file" log_error "Failed to update target file"
rm -f "$temp_file" rm -f "$temp_file"
return 1 return 1
fi fi
return 0 return 0
} }
#============================================================================== #==============================================================================
@@ -628,8 +651,11 @@ update_specific_agent() {
shai) shai)
update_agent_file "$SHAI_FILE" "SHAI" update_agent_file "$SHAI_FILE" "SHAI"
;; ;;
q) tabnine)
update_agent_file "$Q_FILE" "Amazon Q Developer CLI" update_agent_file "$TABNINE_FILE" "Tabnine CLI"
;;
kiro-cli)
update_agent_file "$KIRO_FILE" "Kiro CLI"
;; ;;
agy) agy)
update_agent_file "$AGY_FILE" "Antigravity" update_agent_file "$AGY_FILE" "Antigravity"
@@ -637,12 +663,18 @@ update_specific_agent() {
bob) bob)
update_agent_file "$BOB_FILE" "IBM Bob" update_agent_file "$BOB_FILE" "IBM Bob"
;; ;;
vibe)
update_agent_file "$VIBE_FILE" "Mistral Vibe"
;;
kimi)
update_agent_file "$KIMI_FILE" "Kimi Code"
;;
generic) generic)
log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent." log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent."
;; ;;
*) *)
log_error "Unknown agent type '$agent_type'" log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli|generic" log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|generic"
exit 1 exit 1
;; ;;
esac esac
@@ -712,13 +744,18 @@ update_all_existing_agents() {
found_agent=true found_agent=true
fi fi
if [[ -f "$TABNINE_FILE" ]]; then
update_agent_file "$TABNINE_FILE" "Tabnine CLI"
found_agent=true
fi
if [[ -f "$QODER_FILE" ]]; then if [[ -f "$QODER_FILE" ]]; then
update_agent_file "$QODER_FILE" "Qoder CLI" update_agent_file "$QODER_FILE" "Qoder CLI"
found_agent=true found_agent=true
fi fi
if [[ -f "$Q_FILE" ]]; then if [[ -f "$KIRO_FILE" ]]; then
update_agent_file "$Q_FILE" "Amazon Q Developer CLI" update_agent_file "$KIRO_FILE" "Kiro CLI"
found_agent=true found_agent=true
fi fi
@@ -730,6 +767,16 @@ update_all_existing_agents() {
update_agent_file "$BOB_FILE" "IBM Bob" update_agent_file "$BOB_FILE" "IBM Bob"
found_agent=true found_agent=true
fi fi
if [[ -f "$VIBE_FILE" ]]; then
update_agent_file "$VIBE_FILE" "Mistral Vibe"
found_agent=true
fi
if [[ -f "$KIMI_FILE" ]]; then
update_agent_file "$KIMI_FILE" "Kimi Code"
found_agent=true
fi
# If no agent files exist, create a default Claude file # If no agent files exist, create a default Claude file
if [[ "$found_agent" == false ]]; then if [[ "$found_agent" == false ]]; then
@@ -754,8 +801,7 @@ print_summary() {
fi fi
echo echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|generic]"
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli]"
} }
#============================================================================== #==============================================================================
@@ -807,4 +853,3 @@ main() {
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@" main "$@"
fi fi

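Both the bash change above and its PowerShell counterpart below prepend the same YAML frontmatter to Cursor `.mdc` rule files when it is missing, so the rules are auto-included. As a rough sketch of that idempotent prepend, expressed here in Python for brevity (the target path is illustrative, not taken from the scripts):

```python
from pathlib import Path

# The frontmatter block used by both update-agent-context scripts.
FRONTMATTER = "\n".join([
    "---",
    "description: Project Development Guidelines",
    'globs: ["**/*"]',
    "alwaysApply: true",
    "---",
    "",
])

def ensure_mdc_frontmatter(path: Path) -> None:
    """Prepend Cursor frontmatter to a .mdc rules file only if it is missing."""
    if path.suffix != ".mdc":
        return
    text = path.read_text(encoding="utf-8")
    # Mirrors the head -1 (bash) / $output[0] (PowerShell) checks: skip files
    # that already start with a '---' frontmatter fence.
    if not text.startswith("---"):
        path.write_text(FRONTMATTER + "\n" + text, encoding="utf-8")

# Illustrative usage (path is hypothetical):
# ensure_mdc_frontmatter(Path(".cursor/rules/specify-rules.mdc"))
```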
View File

@@ -35,6 +35,12 @@ if (-not $FeatureDescription -or $FeatureDescription.Count -eq 0) {
 $featureDesc = ($FeatureDescription -join ' ').Trim()
+# Validate description is not empty after trimming (e.g., user passed only whitespace)
+if ([string]::IsNullOrWhiteSpace($featureDesc)) {
+    Write-Error "Error: Feature description cannot be empty or contain only whitespace"
+    exit 1
+}
 # Resolve repository root. Prefer git information when available, but fall back
 # to searching for repository markers so the workflow still functions in repositories that
 # were initialized with --no-git.
@@ -242,10 +248,26 @@ if ($branchName.Length -gt $maxBranchLength) {
 }
 if ($hasGit) {
+    $branchCreated = $false
     try {
-        git checkout -b $branchName | Out-Null
+        git checkout -q -b $branchName 2>$null | Out-Null
+        if ($LASTEXITCODE -eq 0) {
+            $branchCreated = $true
+        }
     } catch {
-        Write-Warning "Failed to create git branch: $branchName"
+        # Exception during git command
+    }
+    if (-not $branchCreated) {
+        # Check if branch already exists
+        $existingBranch = git branch --list $branchName 2>$null
+        if ($existingBranch) {
+            Write-Error "Error: Branch '$branchName' already exists. Please use a different feature name or specify a different number with -Number."
+            exit 1
+        } else {
+            Write-Error "Error: Failed to create git branch '$branchName'. Please check your git configuration and try again."
+            exit 1
+        }
     }
 } else {
     Write-Warning "[specify] Warning: Git repository not detected; skipped branch creation for $branchName"

View File

@@ -9,7 +9,7 @@ Mirrors the behavior of scripts/bash/update-agent-context.sh:
2. Plan Data Extraction 2. Plan Data Extraction
3. Agent File Management (create from template or update existing) 3. Agent File Management (create from template or update existing)
4. Content Generation (technology stack, recent changes, timestamp) 4. Content Generation (technology stack, recent changes, timestamp)
5. Multi-Agent Support (claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, kilocode, auggie, roo, codebuddy, amp, shai, q, agy, bob, qodercli) 5. Multi-Agent Support (claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, kilocode, auggie, roo, codebuddy, amp, shai, tabnine, kiro-cli, agy, bob, vibe, qodercli, kimi, generic)
.PARAMETER AgentType .PARAMETER AgentType
Optional agent key to update a single agent. If omitted, updates all existing agent files (creating a default Claude file if none exist). Optional agent key to update a single agent. If omitted, updates all existing agent files (creating a default Claude file if none exist).
@@ -25,7 +25,7 @@ Relies on common helper functions in common.ps1
#> #>
param( param(
[Parameter(Position=0)] [Parameter(Position=0)]
[ValidateSet('claude','gemini','copilot','cursor-agent','qwen','opencode','codex','windsurf','kilocode','auggie','roo','codebuddy','amp','shai','q','agy','bob','qodercli','generic')] [ValidateSet('claude','gemini','copilot','cursor-agent','qwen','opencode','codex','windsurf','kilocode','auggie','roo','codebuddy','amp','shai','tabnine','kiro-cli','agy','bob','qodercli','vibe','kimi','generic')]
[string]$AgentType [string]$AgentType
) )
@@ -58,9 +58,12 @@ $CODEBUDDY_FILE = Join-Path $REPO_ROOT 'CODEBUDDY.md'
$QODER_FILE = Join-Path $REPO_ROOT 'QODER.md' $QODER_FILE = Join-Path $REPO_ROOT 'QODER.md'
$AMP_FILE = Join-Path $REPO_ROOT 'AGENTS.md' $AMP_FILE = Join-Path $REPO_ROOT 'AGENTS.md'
$SHAI_FILE = Join-Path $REPO_ROOT 'SHAI.md' $SHAI_FILE = Join-Path $REPO_ROOT 'SHAI.md'
$Q_FILE = Join-Path $REPO_ROOT 'AGENTS.md' $TABNINE_FILE = Join-Path $REPO_ROOT 'TABNINE.md'
$KIRO_FILE = Join-Path $REPO_ROOT 'AGENTS.md'
$AGY_FILE = Join-Path $REPO_ROOT '.agent/rules/specify-rules.md' $AGY_FILE = Join-Path $REPO_ROOT '.agent/rules/specify-rules.md'
$BOB_FILE = Join-Path $REPO_ROOT 'AGENTS.md' $BOB_FILE = Join-Path $REPO_ROOT 'AGENTS.md'
$VIBE_FILE = Join-Path $REPO_ROOT '.vibe/agents/specify-agents.md'
$KIMI_FILE = Join-Path $REPO_ROOT 'KIMI.md'
$TEMPLATE_FILE = Join-Path $REPO_ROOT '.specify/templates/agent-file-template.md' $TEMPLATE_FILE = Join-Path $REPO_ROOT '.specify/templates/agent-file-template.md'
@@ -258,6 +261,12 @@ function New-AgentFile {
# Convert literal \n sequences introduced by Escape to real newlines # Convert literal \n sequences introduced by Escape to real newlines
$content = $content -replace '\\n',[Environment]::NewLine $content = $content -replace '\\n',[Environment]::NewLine
# Prepend Cursor frontmatter for .mdc files so rules are auto-included
if ($TargetFile -match '\.mdc$') {
$frontmatter = @('---','description: Project Development Guidelines','globs: ["**/*"]','alwaysApply: true','---','') -join [Environment]::NewLine
$content = $frontmatter + $content
}
$parent = Split-Path -Parent $TargetFile $parent = Split-Path -Parent $TargetFile
if (-not (Test-Path $parent)) { New-Item -ItemType Directory -Path $parent | Out-Null } if (-not (Test-Path $parent)) { New-Item -ItemType Directory -Path $parent | Out-Null }
Set-Content -LiteralPath $TargetFile -Value $content -NoNewline -Encoding utf8 Set-Content -LiteralPath $TargetFile -Value $content -NoNewline -Encoding utf8
@@ -334,6 +343,12 @@ function Update-ExistingAgentFile {
$newTechEntries | ForEach-Object { $output.Add($_) } $newTechEntries | ForEach-Object { $output.Add($_) }
} }
# Ensure Cursor .mdc files have YAML frontmatter for auto-inclusion
if ($TargetFile -match '\.mdc$' -and $output.Count -gt 0 -and $output[0] -ne '---') {
$frontmatter = @('---','description: Project Development Guidelines','globs: ["**/*"]','alwaysApply: true','---','')
$output.InsertRange(0, $frontmatter)
}
Set-Content -LiteralPath $TargetFile -Value ($output -join [Environment]::NewLine) -Encoding utf8 Set-Content -LiteralPath $TargetFile -Value ($output -join [Environment]::NewLine) -Encoding utf8
return $true return $true
} }
@@ -387,11 +402,14 @@ function Update-SpecificAgent {
'qodercli' { Update-AgentFile -TargetFile $QODER_FILE -AgentName 'Qoder CLI' } 'qodercli' { Update-AgentFile -TargetFile $QODER_FILE -AgentName 'Qoder CLI' }
'amp' { Update-AgentFile -TargetFile $AMP_FILE -AgentName 'Amp' } 'amp' { Update-AgentFile -TargetFile $AMP_FILE -AgentName 'Amp' }
'shai' { Update-AgentFile -TargetFile $SHAI_FILE -AgentName 'SHAI' } 'shai' { Update-AgentFile -TargetFile $SHAI_FILE -AgentName 'SHAI' }
'q' { Update-AgentFile -TargetFile $Q_FILE -AgentName 'Amazon Q Developer CLI' } 'tabnine' { Update-AgentFile -TargetFile $TABNINE_FILE -AgentName 'Tabnine CLI' }
'kiro-cli' { Update-AgentFile -TargetFile $KIRO_FILE -AgentName 'Kiro CLI' }
'agy' { Update-AgentFile -TargetFile $AGY_FILE -AgentName 'Antigravity' } 'agy' { Update-AgentFile -TargetFile $AGY_FILE -AgentName 'Antigravity' }
'bob' { Update-AgentFile -TargetFile $BOB_FILE -AgentName 'IBM Bob' } 'bob' { Update-AgentFile -TargetFile $BOB_FILE -AgentName 'IBM Bob' }
'vibe' { Update-AgentFile -TargetFile $VIBE_FILE -AgentName 'Mistral Vibe' }
'kimi' { Update-AgentFile -TargetFile $KIMI_FILE -AgentName 'Kimi Code' }
'generic' { Write-Info 'Generic agent: no predefined context file. Use the agent-specific update script for your agent.' } 'generic' { Write-Info 'Generic agent: no predefined context file. Use the agent-specific update script for your agent.' }
default { Write-Err "Unknown agent type '$Type'"; Write-Err 'Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli|generic'; return $false } default { Write-Err "Unknown agent type '$Type'"; Write-Err 'Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|generic'; return $false }
} }
} }
@@ -411,9 +429,12 @@ function Update-AllExistingAgents {
if (Test-Path $CODEBUDDY_FILE) { if (-not (Update-AgentFile -TargetFile $CODEBUDDY_FILE -AgentName 'CodeBuddy CLI')) { $ok = $false }; $found = $true } if (Test-Path $CODEBUDDY_FILE) { if (-not (Update-AgentFile -TargetFile $CODEBUDDY_FILE -AgentName 'CodeBuddy CLI')) { $ok = $false }; $found = $true }
if (Test-Path $QODER_FILE) { if (-not (Update-AgentFile -TargetFile $QODER_FILE -AgentName 'Qoder CLI')) { $ok = $false }; $found = $true } if (Test-Path $QODER_FILE) { if (-not (Update-AgentFile -TargetFile $QODER_FILE -AgentName 'Qoder CLI')) { $ok = $false }; $found = $true }
if (Test-Path $SHAI_FILE) { if (-not (Update-AgentFile -TargetFile $SHAI_FILE -AgentName 'SHAI')) { $ok = $false }; $found = $true } if (Test-Path $SHAI_FILE) { if (-not (Update-AgentFile -TargetFile $SHAI_FILE -AgentName 'SHAI')) { $ok = $false }; $found = $true }
if (Test-Path $Q_FILE) { if (-not (Update-AgentFile -TargetFile $Q_FILE -AgentName 'Amazon Q Developer CLI')) { $ok = $false }; $found = $true } if (Test-Path $TABNINE_FILE) { if (-not (Update-AgentFile -TargetFile $TABNINE_FILE -AgentName 'Tabnine CLI')) { $ok = $false }; $found = $true }
if (Test-Path $KIRO_FILE) { if (-not (Update-AgentFile -TargetFile $KIRO_FILE -AgentName 'Kiro CLI')) { $ok = $false }; $found = $true }
if (Test-Path $AGY_FILE) { if (-not (Update-AgentFile -TargetFile $AGY_FILE -AgentName 'Antigravity')) { $ok = $false }; $found = $true } if (Test-Path $AGY_FILE) { if (-not (Update-AgentFile -TargetFile $AGY_FILE -AgentName 'Antigravity')) { $ok = $false }; $found = $true }
if (Test-Path $BOB_FILE) { if (-not (Update-AgentFile -TargetFile $BOB_FILE -AgentName 'IBM Bob')) { $ok = $false }; $found = $true } if (Test-Path $BOB_FILE) { if (-not (Update-AgentFile -TargetFile $BOB_FILE -AgentName 'IBM Bob')) { $ok = $false }; $found = $true }
if (Test-Path $VIBE_FILE) { if (-not (Update-AgentFile -TargetFile $VIBE_FILE -AgentName 'Mistral Vibe')) { $ok = $false }; $found = $true }
if (Test-Path $KIMI_FILE) { if (-not (Update-AgentFile -TargetFile $KIMI_FILE -AgentName 'Kimi Code')) { $ok = $false }; $found = $true }
if (-not $found) { if (-not $found) {
Write-Info 'No existing agent files found, creating default Claude file...' Write-Info 'No existing agent files found, creating default Claude file...'
if (-not (Update-AgentFile -TargetFile $CLAUDE_FILE -AgentName 'Claude Code')) { $ok = $false } if (-not (Update-AgentFile -TargetFile $CLAUDE_FILE -AgentName 'Claude Code')) { $ok = $false }
@@ -428,7 +449,7 @@ function Print-Summary {
if ($NEW_FRAMEWORK) { Write-Host " - Added framework: $NEW_FRAMEWORK" } if ($NEW_FRAMEWORK) { Write-Host " - Added framework: $NEW_FRAMEWORK" }
if ($NEW_DB -and $NEW_DB -ne 'N/A') { Write-Host " - Added database: $NEW_DB" } if ($NEW_DB -and $NEW_DB -ne 'N/A') { Write-Host " - Added database: $NEW_DB" }
Write-Host '' Write-Host ''
Write-Info 'Usage: ./update-agent-context.ps1 [-AgentType claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli|generic]' Write-Info 'Usage: ./update-agent-context.ps1 [-AgentType claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|generic]'
} }
function Main { function Main {
@@ -449,4 +470,3 @@ function Main {
} }
Main Main

View File

@@ -216,11 +216,11 @@ AGENT_CONFIG = {
"install_url": None, # IDE-based "install_url": None, # IDE-based
"requires_cli": False, "requires_cli": False,
}, },
"q": { "kiro-cli": {
"name": "Amazon Q Developer CLI", "name": "Kiro CLI",
"folder": ".amazonq/", "folder": ".kiro/",
"commands_subdir": "prompts", # Special: uses prompts/ not commands/ "commands_subdir": "prompts", # Special: uses prompts/ not commands/
"install_url": "https://aws.amazon.com/developer/learning/q-developer-cli/", "install_url": "https://kiro.dev/docs/cli/",
"requires_cli": True, "requires_cli": True,
}, },
"amp": { "amp": {
@@ -237,6 +237,13 @@ AGENT_CONFIG = {
"install_url": "https://github.com/ovh/shai", "install_url": "https://github.com/ovh/shai",
"requires_cli": True, "requires_cli": True,
}, },
"tabnine": {
"name": "Tabnine CLI",
"folder": ".tabnine/agent/",
"commands_subdir": "commands",
"install_url": "https://docs.tabnine.com/main/getting-started/tabnine-cli",
"requires_cli": True,
},
"agy": { "agy": {
"name": "Antigravity", "name": "Antigravity",
"folder": ".agent/", "folder": ".agent/",
@@ -251,6 +258,20 @@ AGENT_CONFIG = {
"install_url": None, # IDE-based "install_url": None, # IDE-based
"requires_cli": False, "requires_cli": False,
}, },
"vibe": {
"name": "Mistral Vibe",
"folder": ".vibe/",
"commands_subdir": "prompts",
"install_url": "https://github.com/mistralai/mistral-vibe",
"requires_cli": True,
},
"kimi": {
"name": "Kimi Code",
"folder": ".kimi/",
"commands_subdir": "skills", # Kimi uses /skill:<name> with .kimi/skills/<name>/SKILL.md
"install_url": "https://code.kimi.com/",
"requires_cli": True,
},
"generic": { "generic": {
"name": "Generic (bring your own agent)", "name": "Generic (bring your own agent)",
"folder": None, # Set dynamically via --ai-commands-dir "folder": None, # Set dynamically via --ai-commands-dir
@@ -260,6 +281,34 @@ AGENT_CONFIG = {
}, },
} }
AI_ASSISTANT_ALIASES = {
"kiro": "kiro-cli",
}
def _build_ai_assistant_help() -> str:
"""Build the --ai help text from AGENT_CONFIG so it stays in sync with runtime config."""
non_generic_agents = sorted(agent for agent in AGENT_CONFIG if agent != "generic")
base_help = (
f"AI assistant to use: {', '.join(non_generic_agents)}, "
"or generic (requires --ai-commands-dir)."
)
if not AI_ASSISTANT_ALIASES:
return base_help
alias_phrases = []
for alias, target in sorted(AI_ASSISTANT_ALIASES.items()):
alias_phrases.append(f"'{alias}' as an alias for '{target}'")
if len(alias_phrases) == 1:
aliases_text = alias_phrases[0]
else:
aliases_text = ', '.join(alias_phrases[:-1]) + ' and ' + alias_phrases[-1]
return base_help + " Use " + aliases_text + "."
AI_ASSISTANT_HELP = _build_ai_assistant_help()
SCRIPT_TYPE_CHOICES = {"sh": "POSIX Shell (bash/zsh)", "ps": "PowerShell"} SCRIPT_TYPE_CHOICES = {"sh": "POSIX Shell (bash/zsh)", "ps": "PowerShell"}
CLAUDE_LOCAL_PATH = Path.home() / ".claude" / "local" / "claude" CLAUDE_LOCAL_PATH = Path.home() / ".claude" / "local" / "claude"
@@ -534,7 +583,12 @@ def check_tool(tool: str, tracker: StepTracker = None) -> bool:
tracker.complete(tool, "available") tracker.complete(tool, "available")
return True return True
found = shutil.which(tool) is not None if tool == "kiro-cli":
# Kiro currently supports both executable names. Prefer kiro-cli and
# accept kiro as a compatibility fallback.
found = shutil.which("kiro-cli") is not None or shutil.which("kiro") is not None
else:
found = shutil.which(tool) is not None
if tracker: if tracker:
if found: if found:
@@ -1084,7 +1138,7 @@ def install_ai_skills(project_path: Path, selected_ai: str, tracker: StepTracker
if not templates_dir.exists() or not any(templates_dir.glob("*.md")): if not templates_dir.exists() or not any(templates_dir.glob("*.md")):
# Fallback: try the repo-relative path (for running from source checkout) # Fallback: try the repo-relative path (for running from source checkout)
# This also covers agents whose extracted commands are in a different # This also covers agents whose extracted commands are in a different
# format (e.g. gemini uses .toml, not .md). # format (e.g. gemini/tabnine use .toml, not .md).
script_dir = Path(__file__).parent.parent.parent # up from src/specify_cli/ script_dir = Path(__file__).parent.parent.parent # up from src/specify_cli/
fallback_dir = script_dir / "templates" / "commands" fallback_dir = script_dir / "templates" / "commands"
if fallback_dir.exists() and any(fallback_dir.glob("*.md")): if fallback_dir.exists() and any(fallback_dir.glob("*.md")):
@@ -1141,7 +1195,12 @@ def install_ai_skills(project_path: Path, selected_ai: str, tracker: StepTracker
# SKILL_DESCRIPTIONS lookups work. # SKILL_DESCRIPTIONS lookups work.
if command_name.startswith("speckit."): if command_name.startswith("speckit."):
command_name = command_name[len("speckit."):] command_name = command_name[len("speckit."):]
skill_name = f"speckit-{command_name}" # Kimi CLI discovers skills by directory name and invokes them as
# /skill:<name> — use dot separator to match packaging convention.
if selected_ai == "kimi":
skill_name = f"speckit.{command_name}"
else:
skill_name = f"speckit-{command_name}"
# Create skill directory (additive — never removes existing content) # Create skill directory (additive — never removes existing content)
skill_dir = skills_dir / skill_name skill_dir = skills_dir / skill_name
@@ -1214,7 +1273,7 @@ def install_ai_skills(project_path: Path, selected_ai: str, tracker: StepTracker
@app.command() @app.command()
def init( def init(
project_name: str = typer.Argument(None, help="Name for your new project directory (optional if using --here, or use '.' for current directory)"), project_name: str = typer.Argument(None, help="Name for your new project directory (optional if using --here, or use '.' for current directory)"),
ai_assistant: str = typer.Option(None, "--ai", help="AI assistant to use: claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, kilocode, auggie, codebuddy, amp, shai, q, agy, bob, qodercli, or generic (requires --ai-commands-dir)"), ai_assistant: str = typer.Option(None, "--ai", help=AI_ASSISTANT_HELP),
ai_commands_dir: str = typer.Option(None, "--ai-commands-dir", help="Directory for agent command files (required with --ai generic, e.g. .myagent/commands/)"), ai_commands_dir: str = typer.Option(None, "--ai-commands-dir", help="Directory for agent command files (required with --ai generic, e.g. .myagent/commands/)"),
script_type: str = typer.Option(None, "--script", help="Script type to use: sh or ps"), script_type: str = typer.Option(None, "--script", help="Script type to use: sh or ps"),
ignore_agent_tools: bool = typer.Option(False, "--ignore-agent-tools", help="Skip checks for AI agent tools like Claude Code"), ignore_agent_tools: bool = typer.Option(False, "--ignore-agent-tools", help="Skip checks for AI agent tools like Claude Code"),
@@ -1247,6 +1306,7 @@ def init(
specify init --here --ai claude # Alternative syntax for current directory specify init --here --ai claude # Alternative syntax for current directory
specify init --here --ai codex specify init --here --ai codex
specify init --here --ai codebuddy specify init --here --ai codebuddy
specify init --here --ai vibe # Initialize with Mistral Vibe support
specify init --here specify init --here
specify init --here --force # Skip confirmation when current directory not empty specify init --here --force # Skip confirmation when current directory not empty
specify init my-project --ai claude --ai-skills # Install agent skills specify init my-project --ai claude --ai-skills # Install agent skills
@@ -1270,6 +1330,9 @@ def init(
console.print("[yellow]Example:[/yellow] specify init --ai generic --ai-commands-dir .myagent/commands/") console.print("[yellow]Example:[/yellow] specify init --ai generic --ai-commands-dir .myagent/commands/")
raise typer.Exit(1) raise typer.Exit(1)
if ai_assistant:
ai_assistant = AI_ASSISTANT_ALIASES.get(ai_assistant, ai_assistant)
if project_name == ".": if project_name == ".":
here = True here = True
project_name = None # Clear project_name to use existing validation logic project_name = None # Clear project_name to use existing validation logic
@@ -1464,8 +1527,9 @@ def init(
if skills_ok and not here: if skills_ok and not here:
agent_cfg = AGENT_CONFIG.get(selected_ai, {}) agent_cfg = AGENT_CONFIG.get(selected_ai, {})
agent_folder = agent_cfg.get("folder", "") agent_folder = agent_cfg.get("folder", "")
commands_subdir = agent_cfg.get("commands_subdir", "commands")
if agent_folder: if agent_folder:
cmds_dir = project_path / agent_folder.rstrip("/") / "commands" cmds_dir = project_path / agent_folder.rstrip("/") / commands_subdir
if cmds_dir.exists(): if cmds_dir.exists():
try: try:
shutil.rmtree(cmds_dir) shutil.rmtree(cmds_dir)
@@ -1720,6 +1784,13 @@ extension_app = typer.Typer(
) )
app.add_typer(extension_app, name="extension") app.add_typer(extension_app, name="extension")
catalog_app = typer.Typer(
name="catalog",
help="Manage extension catalogs",
add_completion=False,
)
extension_app.add_typer(catalog_app, name="catalog")
def get_speckit_version() -> str: def get_speckit_version() -> str:
"""Get current spec-kit version.""" """Get current spec-kit version."""
@@ -1785,6 +1856,181 @@ def extension_list(
console.print(" [cyan]specify extension add <name>[/cyan]") console.print(" [cyan]specify extension add <name>[/cyan]")
@catalog_app.command("list")
def catalog_list():
"""List all active extension catalogs."""
from .extensions import ExtensionCatalog, ValidationError
project_root = Path.cwd()
specify_dir = project_root / ".specify"
if not specify_dir.exists():
console.print("[red]Error:[/red] Not a spec-kit project (no .specify/ directory)")
console.print("Run this command from a spec-kit project root")
raise typer.Exit(1)
catalog = ExtensionCatalog(project_root)
try:
active_catalogs = catalog.get_active_catalogs()
except ValidationError as e:
console.print(f"[red]Error:[/red] {e}")
raise typer.Exit(1)
console.print("\n[bold cyan]Active Extension Catalogs:[/bold cyan]\n")
for entry in active_catalogs:
install_str = (
"[green]install allowed[/green]"
if entry.install_allowed
else "[yellow]discovery only[/yellow]"
)
console.print(f" [bold]{entry.name}[/bold] (priority {entry.priority})")
if entry.description:
console.print(f" {entry.description}")
console.print(f" URL: {entry.url}")
console.print(f" Install: {install_str}")
console.print()
config_path = project_root / ".specify" / "extension-catalogs.yml"
user_config_path = Path.home() / ".specify" / "extension-catalogs.yml"
if os.environ.get("SPECKIT_CATALOG_URL"):
console.print("[dim]Catalog configured via SPECKIT_CATALOG_URL environment variable.[/dim]")
else:
try:
proj_loaded = config_path.exists() and catalog._load_catalog_config(config_path) is not None
except ValidationError:
proj_loaded = False
if proj_loaded:
console.print(f"[dim]Config: {config_path.relative_to(project_root)}[/dim]")
else:
try:
user_loaded = user_config_path.exists() and catalog._load_catalog_config(user_config_path) is not None
except ValidationError:
user_loaded = False
if user_loaded:
console.print("[dim]Config: ~/.specify/extension-catalogs.yml[/dim]")
else:
console.print("[dim]Using built-in default catalog stack.[/dim]")
console.print(
"[dim]Add .specify/extension-catalogs.yml to customize.[/dim]"
)
@catalog_app.command("add")
def catalog_add(
url: str = typer.Argument(help="Catalog URL (must use HTTPS)"),
name: str = typer.Option(..., "--name", help="Catalog name"),
priority: int = typer.Option(10, "--priority", help="Priority (lower = higher priority)"),
install_allowed: bool = typer.Option(
False, "--install-allowed/--no-install-allowed",
help="Allow extensions from this catalog to be installed",
),
description: str = typer.Option("", "--description", help="Description of the catalog"),
):
"""Add a catalog to .specify/extension-catalogs.yml."""
from .extensions import ExtensionCatalog, ValidationError
project_root = Path.cwd()
specify_dir = project_root / ".specify"
if not specify_dir.exists():
console.print("[red]Error:[/red] Not a spec-kit project (no .specify/ directory)")
console.print("Run this command from a spec-kit project root")
raise typer.Exit(1)
# Validate URL
tmp_catalog = ExtensionCatalog(project_root)
try:
tmp_catalog._validate_catalog_url(url)
except ValidationError as e:
console.print(f"[red]Error:[/red] {e}")
raise typer.Exit(1)
config_path = specify_dir / "extension-catalogs.yml"
# Load existing config
if config_path.exists():
try:
config = yaml.safe_load(config_path.read_text()) or {}
except Exception as e:
console.print(f"[red]Error:[/red] Failed to read {config_path}: {e}")
raise typer.Exit(1)
else:
config = {}
catalogs = config.get("catalogs", [])
if not isinstance(catalogs, list):
console.print("[red]Error:[/red] Invalid catalog config: 'catalogs' must be a list.")
raise typer.Exit(1)
# Check for duplicate name
for existing in catalogs:
if isinstance(existing, dict) and existing.get("name") == name:
console.print(f"[yellow]Warning:[/yellow] A catalog named '{name}' already exists.")
console.print("Use 'specify extension catalog remove' first, or choose a different name.")
raise typer.Exit(1)
catalogs.append({
"name": name,
"url": url,
"priority": priority,
"install_allowed": install_allowed,
"description": description,
})
config["catalogs"] = catalogs
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
install_label = "install allowed" if install_allowed else "discovery only"
console.print(f"\n[green]✓[/green] Added catalog '[bold]{name}[/bold]' ({install_label})")
console.print(f" URL: {url}")
console.print(f" Priority: {priority}")
console.print(f"\nConfig saved to {config_path.relative_to(project_root)}")
@catalog_app.command("remove")
def catalog_remove(
name: str = typer.Argument(help="Catalog name to remove"),
):
"""Remove a catalog from .specify/extension-catalogs.yml."""
project_root = Path.cwd()
specify_dir = project_root / ".specify"
if not specify_dir.exists():
console.print("[red]Error:[/red] Not a spec-kit project (no .specify/ directory)")
console.print("Run this command from a spec-kit project root")
raise typer.Exit(1)
config_path = specify_dir / "extension-catalogs.yml"
if not config_path.exists():
console.print("[red]Error:[/red] No catalog config found. Nothing to remove.")
raise typer.Exit(1)
try:
config = yaml.safe_load(config_path.read_text()) or {}
except Exception:
console.print("[red]Error:[/red] Failed to read catalog config.")
raise typer.Exit(1)
catalogs = config.get("catalogs", [])
if not isinstance(catalogs, list):
console.print("[red]Error:[/red] Invalid catalog config: 'catalogs' must be a list.")
raise typer.Exit(1)
original_count = len(catalogs)
catalogs = [c for c in catalogs if isinstance(c, dict) and c.get("name") != name]
if len(catalogs) == original_count:
console.print(f"[red]Error:[/red] Catalog '{name}' not found.")
raise typer.Exit(1)
config["catalogs"] = catalogs
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
console.print(f"[green]✓[/green] Removed catalog '{name}'")
if not catalogs:
console.print("\n[dim]No catalogs remain in config. Built-in defaults will be used.[/dim]")
@extension_app.command("add") @extension_app.command("add")
def extension_add( def extension_add(
extension: str = typer.Argument(help="Extension name or path"), extension: str = typer.Argument(help="Extension name or path"),
@@ -1873,6 +2119,19 @@ def extension_add(
console.print(" specify extension search") console.print(" specify extension search")
raise typer.Exit(1) raise typer.Exit(1)
# Enforce install_allowed policy
if not ext_info.get("_install_allowed", True):
catalog_name = ext_info.get("_catalog_name", "community")
console.print(
f"[red]Error:[/red] '{extension}' is available in the "
f"'{catalog_name}' catalog but installation is not allowed from that catalog."
)
console.print(
f"\nTo enable installation, add '{extension}' to an approved catalog "
f"(install_allowed: true) in .specify/extension-catalogs.yml."
)
raise typer.Exit(1)
# Download extension ZIP # Download extension ZIP
console.print(f"Downloading {ext_info['name']} v{ext_info.get('version', 'unknown')}...") console.print(f"Downloading {ext_info['name']} v{ext_info.get('version', 'unknown')}...")
zip_path = catalog.download_extension(extension) zip_path = catalog.download_extension(extension)
@@ -2017,6 +2276,15 @@ def extension_search(
tags_str = ", ".join(ext['tags']) tags_str = ", ".join(ext['tags'])
console.print(f" [dim]Tags:[/dim] {tags_str}") console.print(f" [dim]Tags:[/dim] {tags_str}")
# Source catalog
catalog_name = ext.get("_catalog_name", "")
install_allowed = ext.get("_install_allowed", True)
if catalog_name:
if install_allowed:
console.print(f" [dim]Catalog:[/dim] {catalog_name}")
else:
console.print(f" [dim]Catalog:[/dim] {catalog_name} [yellow](discovery only — not installable)[/yellow]")
# Stats # Stats
stats = [] stats = []
if ext.get('downloads') is not None: if ext.get('downloads') is not None:
@@ -2030,8 +2298,15 @@ def extension_search(
if ext.get('repository'): if ext.get('repository'):
console.print(f" [dim]Repository:[/dim] {ext['repository']}") console.print(f" [dim]Repository:[/dim] {ext['repository']}")
# Install command # Install command (show warning if not installable)
console.print(f"\n [cyan]Install:[/cyan] specify extension add {ext['id']}") if install_allowed:
console.print(f"\n [cyan]Install:[/cyan] specify extension add {ext['id']}")
else:
console.print(f"\n [yellow]⚠[/yellow] Not directly installable from '{catalog_name}'.")
console.print(
f" Add to an approved catalog with install_allowed: true, "
f"or install from a ZIP URL: specify extension add {ext['id']} --from <zip-url>"
)
console.print() console.print()
except ExtensionError as e: except ExtensionError as e:
@@ -2080,6 +2355,12 @@ def extension_info(
# Author and License # Author and License
console.print(f"[dim]Author:[/dim] {ext_info.get('author', 'Unknown')}") console.print(f"[dim]Author:[/dim] {ext_info.get('author', 'Unknown')}")
console.print(f"[dim]License:[/dim] {ext_info.get('license', 'Unknown')}") console.print(f"[dim]License:[/dim] {ext_info.get('license', 'Unknown')}")
# Source catalog
if ext_info.get("_catalog_name"):
install_allowed = ext_info.get("_install_allowed", True)
install_note = "" if install_allowed else " [yellow](discovery only)[/yellow]"
console.print(f"[dim]Source catalog:[/dim] {ext_info['_catalog_name']}{install_note}")
console.print() console.print()
# Requirements # Requirements
@@ -2136,12 +2417,21 @@ def extension_info(
# Installation status and command # Installation status and command
is_installed = manager.registry.is_installed(ext_info['id']) is_installed = manager.registry.is_installed(ext_info['id'])
install_allowed = ext_info.get("_install_allowed", True)
if is_installed: if is_installed:
console.print("[green]✓ Installed[/green]") console.print("[green]✓ Installed[/green]")
console.print(f"\nTo remove: specify extension remove {ext_info['id']}") console.print(f"\nTo remove: specify extension remove {ext_info['id']}")
else: elif install_allowed:
console.print("[yellow]Not installed[/yellow]") console.print("[yellow]Not installed[/yellow]")
console.print(f"\n[cyan]Install:[/cyan] specify extension add {ext_info['id']}") console.print(f"\n[cyan]Install:[/cyan] specify extension add {ext_info['id']}")
else:
catalog_name = ext_info.get("_catalog_name", "community")
console.print("[yellow]Not installed[/yellow]")
console.print(
f"\n[yellow]⚠[/yellow] '{ext_info['id']}' is available in the '{catalog_name}' catalog "
f"but not in your approved catalog. Add it to .specify/extension-catalogs.yml "
f"with install_allowed: true to enable installation."
)
except ExtensionError as e: except ExtensionError as e:
console.print(f"\n[red]Error:[/red] {e}") console.print(f"\n[red]Error:[/red] {e}")
@@ -2350,4 +2640,3 @@ def main():
if __name__ == "__main__": if __name__ == "__main__":
main() main()

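The new `specify extension catalog add` command above writes catalog entries to `.specify/extension-catalogs.yml` using the keys `name`, `url`, `priority`, `install_allowed`, and `description`; the loader side (`_load_catalog_config` in the next file) reads them back in priority order. A minimal sketch of such a config and how it resolves; the catalog name and URL here are invented for illustration:

```python
import yaml

# Hypothetical config of the shape written by `specify extension catalog add`.
config_text = """\
catalogs:
  - name: internal
    url: https://example.com/spec-kit/catalog.json
    priority: 5
    install_allowed: false
    description: Internal discovery-only catalog
"""

config = yaml.safe_load(config_text)
for entry in sorted(config["catalogs"], key=lambda c: c["priority"]):
    mode = "install allowed" if entry["install_allowed"] else "discovery only"
    print(f"{entry['name']} (priority {entry['priority']}, {mode}): {entry['url']}")
```

Lower `priority` values win when two catalogs list the same extension id, and entries with `install_allowed: false` still appear in `extension search` results but are rejected by `extension add`.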
View File

@@ -8,14 +8,18 @@ without bloating the core framework.
import json import json
import hashlib import hashlib
import os
import tempfile import tempfile
import zipfile import zipfile
import shutil import shutil
from dataclasses import dataclass
from pathlib import Path from pathlib import Path
from typing import Optional, Dict, List, Any from typing import Optional, Dict, List, Any, Callable, Set
from datetime import datetime, timezone from datetime import datetime, timezone
import re import re
import pathspec
import yaml import yaml
from packaging import version as pkg_version from packaging import version as pkg_version
from packaging.specifiers import SpecifierSet, InvalidSpecifier from packaging.specifiers import SpecifierSet, InvalidSpecifier
@@ -36,6 +40,16 @@ class CompatibilityError(ExtensionError):
pass pass
@dataclass
class CatalogEntry:
"""Represents a single catalog entry in the catalog stack."""
url: str
name: str
priority: int
install_allowed: bool
description: str = ""
class ExtensionManifest: class ExtensionManifest:
"""Represents and validates an extension manifest (extension.yml).""" """Represents and validates an extension manifest (extension.yml)."""
@@ -268,6 +282,70 @@ class ExtensionManager:
self.extensions_dir = project_root / ".specify" / "extensions" self.extensions_dir = project_root / ".specify" / "extensions"
self.registry = ExtensionRegistry(self.extensions_dir) self.registry = ExtensionRegistry(self.extensions_dir)
@staticmethod
def _load_extensionignore(source_dir: Path) -> Optional[Callable[[str, List[str]], Set[str]]]:
"""Load .extensionignore and return an ignore function for shutil.copytree.
The .extensionignore file uses .gitignore-compatible patterns (one per line).
Lines starting with '#' are comments. Blank lines are ignored.
The .extensionignore file itself is always excluded.
Pattern semantics mirror .gitignore:
- '*' matches anything except '/'
- '**' matches zero or more directories
- '?' matches any single character except '/'
- Trailing '/' restricts a pattern to directories only
- Patterns with '/' (other than trailing) are anchored to the root
- '!' negates a previously excluded pattern
Args:
source_dir: Path to the extension source directory
Returns:
An ignore function compatible with shutil.copytree, or None
if no .extensionignore file exists.
"""
ignore_file = source_dir / ".extensionignore"
if not ignore_file.exists():
return None
lines: List[str] = ignore_file.read_text().splitlines()
# Normalise backslashes in patterns so Windows-authored files work
normalised: List[str] = []
for line in lines:
stripped = line.strip()
if stripped and not stripped.startswith("#"):
normalised.append(stripped.replace("\\", "/"))
else:
# Preserve blanks/comments so pathspec line numbers stay stable
normalised.append(line)
# Always ignore the .extensionignore file itself
normalised.append(".extensionignore")
spec = pathspec.GitIgnoreSpec.from_lines(normalised)
def _ignore(directory: str, entries: List[str]) -> Set[str]:
ignored: Set[str] = set()
rel_dir = Path(directory).relative_to(source_dir)
for entry in entries:
rel_path = str(rel_dir / entry) if str(rel_dir) != "." else entry
# Normalise to forward slashes for consistent matching
rel_path_fwd = rel_path.replace("\\", "/")
entry_full = Path(directory) / entry
if entry_full.is_dir():
# Append '/' so directory-only patterns (e.g. tests/) match
if spec.match_file(rel_path_fwd + "/"):
ignored.add(entry)
else:
if spec.match_file(rel_path_fwd):
ignored.add(entry)
return ignored
return _ignore
def check_compatibility( def check_compatibility(
self, self,
manifest: ExtensionManifest, manifest: ExtensionManifest,
@@ -341,7 +419,8 @@ class ExtensionManager:
if dest_dir.exists(): if dest_dir.exists():
shutil.rmtree(dest_dir) shutil.rmtree(dest_dir)
shutil.copytree(source_dir, dest_dir) ignore_fn = self._load_extensionignore(source_dir)
shutil.copytree(source_dir, dest_dir, ignore=ignore_fn)
# Register commands with AI agents # Register commands with AI agents
registered_commands = {} registered_commands = {}
@@ -455,6 +534,12 @@ class ExtensionManager:
if cmd_file.exists(): if cmd_file.exists():
cmd_file.unlink() cmd_file.unlink()
# Also remove companion .prompt.md for Copilot
if agent_name == "copilot":
prompt_file = self.project_root / ".github" / "prompts" / f"{cmd_name}.prompt.md"
if prompt_file.exists():
prompt_file.unlink()
if keep_config: if keep_config:
# Preserve config files, only remove non-config files # Preserve config files, only remove non-config files
if extension_dir.exists(): if extension_dir.exists():
@@ -597,7 +682,7 @@ class CommandRegistrar:
"dir": ".github/agents", "dir": ".github/agents",
"format": "markdown", "format": "markdown",
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".agent.md"
}, },
"cursor": { "cursor": {
"dir": ".cursor/commands", "dir": ".cursor/commands",
@@ -617,6 +702,12 @@ class CommandRegistrar:
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".md"
}, },
"codex": {
"dir": ".codex/prompts",
"format": "markdown",
"args": "$ARGUMENTS",
"extension": ".md"
},
"windsurf": { "windsurf": {
"dir": ".windsurf/workflows", "dir": ".windsurf/workflows",
"format": "markdown", "format": "markdown",
@@ -636,7 +727,7 @@ class CommandRegistrar:
"extension": ".md" "extension": ".md"
}, },
"roo": { "roo": {
"dir": ".roo/rules", "dir": ".roo/commands",
"format": "markdown", "format": "markdown",
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".md"
@@ -653,8 +744,8 @@ class CommandRegistrar:
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".md"
}, },
"q": { "kiro-cli": {
"dir": ".amazonq/prompts", "dir": ".kiro/prompts",
"format": "markdown", "format": "markdown",
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".md"
@@ -671,11 +762,23 @@ class CommandRegistrar:
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".md"
}, },
"tabnine": {
"dir": ".tabnine/agent/commands",
"format": "toml",
"args": "{{args}}",
"extension": ".toml"
},
"bob": { "bob": {
"dir": ".bob/commands", "dir": ".bob/commands",
"format": "markdown", "format": "markdown",
"args": "$ARGUMENTS", "args": "$ARGUMENTS",
"extension": ".md" "extension": ".md"
},
"kimi": {
"dir": ".kimi/skills",
"format": "markdown",
"args": "$ARGUMENTS",
"extension": "/SKILL.md"
} }
} }
@@ -869,18 +972,44 @@ class CommandRegistrar:
# Write command file # Write command file
dest_file = commands_dir / f"{cmd_name}{agent_config['extension']}" dest_file = commands_dir / f"{cmd_name}{agent_config['extension']}"
dest_file.parent.mkdir(parents=True, exist_ok=True)
dest_file.write_text(output) dest_file.write_text(output)
# Generate companion .prompt.md for Copilot agents
if agent_name == "copilot":
self._write_copilot_prompt(project_root, cmd_name)
registered.append(cmd_name) registered.append(cmd_name)
# Register aliases # Register aliases
for alias in cmd_info.get("aliases", []): for alias in cmd_info.get("aliases", []):
alias_file = commands_dir / f"{alias}{agent_config['extension']}" alias_file = commands_dir / f"{alias}{agent_config['extension']}"
alias_file.parent.mkdir(parents=True, exist_ok=True)
alias_file.write_text(output) alias_file.write_text(output)
# Generate companion .prompt.md for alias too
if agent_name == "copilot":
self._write_copilot_prompt(project_root, alias)
registered.append(alias) registered.append(alias)
return registered return registered
@staticmethod
def _write_copilot_prompt(project_root: Path, cmd_name: str) -> None:
"""Generate a companion .prompt.md file for a Copilot agent command.
Copilot requires a .prompt.md file in .github/prompts/ that references
the corresponding .agent.md file in .github/agents/ via an ``agent:``
frontmatter field.
Args:
project_root: Path to project root
cmd_name: Command name (used as the file stem, e.g. 'speckit.my-ext.example')
"""
prompts_dir = project_root / ".github" / "prompts"
prompts_dir.mkdir(parents=True, exist_ok=True)
prompt_file = prompts_dir / f"{cmd_name}.prompt.md"
prompt_file.write_text(f"---\nagent: {cmd_name}\n---\n")
def register_commands_for_all_agents( def register_commands_for_all_agents(
self, self,
manifest: ExtensionManifest, manifest: ExtensionManifest,
@@ -940,6 +1069,7 @@ class ExtensionCatalog:
"""Manages extension catalog fetching, caching, and searching.""" """Manages extension catalog fetching, caching, and searching."""
DEFAULT_CATALOG_URL = "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.json" DEFAULT_CATALOG_URL = "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.json"
COMMUNITY_CATALOG_URL = "https://raw.githubusercontent.com/github/spec-kit/main/extensions/catalog.community.json"
CACHE_DURATION = 3600 # 1 hour in seconds CACHE_DURATION = 3600 # 1 hour in seconds
def __init__(self, project_root: Path): def __init__(self, project_root: Path):
@@ -954,43 +1084,109 @@ class ExtensionCatalog:
self.cache_file = self.cache_dir / "catalog.json" self.cache_file = self.cache_dir / "catalog.json"
self.cache_metadata_file = self.cache_dir / "catalog-metadata.json" self.cache_metadata_file = self.cache_dir / "catalog-metadata.json"
def get_catalog_url(self) -> str: def _validate_catalog_url(self, url: str) -> None:
"""Get catalog URL from config or use default. """Validate that a catalog URL uses HTTPS (localhost HTTP allowed).
Checks in order: Args:
1. SPECKIT_CATALOG_URL environment variable url: URL to validate
2. Default catalog URL
Returns:
URL to fetch catalog from
Raises: Raises:
ValidationError: If custom URL is invalid (non-HTTPS) ValidationError: If URL is invalid or uses non-HTTPS scheme
""" """
import os
import sys
from urllib.parse import urlparse from urllib.parse import urlparse
# Environment variable override (useful for testing) parsed = urlparse(url)
is_localhost = parsed.hostname in ("localhost", "127.0.0.1", "::1")
if parsed.scheme != "https" and not (parsed.scheme == "http" and is_localhost):
raise ValidationError(
f"Catalog URL must use HTTPS (got {parsed.scheme}://). "
"HTTP is only allowed for localhost."
)
if not parsed.netloc:
raise ValidationError("Catalog URL must be a valid URL with a host.")
def _load_catalog_config(self, config_path: Path) -> Optional[List[CatalogEntry]]:
"""Load catalog stack configuration from a YAML file.
Args:
config_path: Path to extension-catalogs.yml
Returns:
Ordered list of CatalogEntry objects, or None if file doesn't exist
or contains no valid catalog entries.
Raises:
ValidationError: If any catalog entry has an invalid URL,
the file cannot be parsed, or a priority value is invalid.
"""
if not config_path.exists():
return None
try:
data = yaml.safe_load(config_path.read_text()) or {}
except (yaml.YAMLError, OSError) as e:
raise ValidationError(
f"Failed to read catalog config {config_path}: {e}"
)
catalogs_data = data.get("catalogs", [])
if not catalogs_data:
return None
if not isinstance(catalogs_data, list):
raise ValidationError(
f"Invalid catalog config: 'catalogs' must be a list, got {type(catalogs_data).__name__}"
)
entries: List[CatalogEntry] = []
for idx, item in enumerate(catalogs_data):
if not isinstance(item, dict):
raise ValidationError(
f"Invalid catalog entry at index {idx}: expected a mapping, got {type(item).__name__}"
)
url = str(item.get("url", "")).strip()
if not url:
continue
self._validate_catalog_url(url)
try:
priority = int(item.get("priority", idx + 1))
except (TypeError, ValueError):
raise ValidationError(
f"Invalid priority for catalog '{item.get('name', idx + 1)}': "
f"expected integer, got {item.get('priority')!r}"
)
raw_install = item.get("install_allowed", False)
if isinstance(raw_install, str):
install_allowed = raw_install.strip().lower() in ("true", "yes", "1")
else:
install_allowed = bool(raw_install)
entries.append(CatalogEntry(
url=url,
name=str(item.get("name", f"catalog-{idx + 1}")),
priority=priority,
install_allowed=install_allowed,
description=str(item.get("description", "")),
))
entries.sort(key=lambda e: e.priority)
return entries if entries else None
def get_active_catalogs(self) -> List[CatalogEntry]:
"""Get the ordered list of active catalogs.
Resolution order:
1. SPECKIT_CATALOG_URL env var — single catalog replacing all defaults
2. Project-level .specify/extension-catalogs.yml
3. User-level ~/.specify/extension-catalogs.yml
4. Built-in default stack (default + community)
Returns:
List of CatalogEntry objects sorted by priority (ascending)
Raises:
ValidationError: If a catalog URL is invalid
"""
import sys
# 1. SPECKIT_CATALOG_URL env var replaces all defaults for backward compat
if env_value := os.environ.get("SPECKIT_CATALOG_URL"): if env_value := os.environ.get("SPECKIT_CATALOG_URL"):
catalog_url = env_value.strip() catalog_url = env_value.strip()
parsed = urlparse(catalog_url) self._validate_catalog_url(catalog_url)
# Require HTTPS for security (prevent man-in-the-middle attacks)
# Allow http://localhost for local development/testing
is_localhost = parsed.hostname in ("localhost", "127.0.0.1", "::1")
if parsed.scheme != "https" and not (parsed.scheme == "http" and is_localhost):
raise ValidationError(
f"Invalid SPECKIT_CATALOG_URL: must use HTTPS (got {parsed.scheme}://). "
"HTTP is only allowed for localhost."
)
if not parsed.netloc:
raise ValidationError(
"Invalid SPECKIT_CATALOG_URL: must be a valid URL with a host."
)
# Warn users when using a non-default catalog (once per instance)
if catalog_url != self.DEFAULT_CATALOG_URL: if catalog_url != self.DEFAULT_CATALOG_URL:
if not getattr(self, "_non_default_catalog_warning_shown", False): if not getattr(self, "_non_default_catalog_warning_shown", False):
print( print(
@@ -999,11 +1195,163 @@ class ExtensionCatalog:
file=sys.stderr, file=sys.stderr,
) )
self._non_default_catalog_warning_shown = True self._non_default_catalog_warning_shown = True
return [CatalogEntry(url=catalog_url, name="custom", priority=1, install_allowed=True, description="Custom catalog via SPECKIT_CATALOG_URL")]
return catalog_url # 2. Project-level config overrides all defaults
project_config_path = self.project_root / ".specify" / "extension-catalogs.yml"
catalogs = self._load_catalog_config(project_config_path)
if catalogs is not None:
return catalogs
# TODO: Support custom catalogs from .specify/extension-catalogs.yml # 3. User-level config
return self.DEFAULT_CATALOG_URL user_config_path = Path.home() / ".specify" / "extension-catalogs.yml"
catalogs = self._load_catalog_config(user_config_path)
if catalogs is not None:
return catalogs
# 4. Built-in default stack
return [
CatalogEntry(url=self.DEFAULT_CATALOG_URL, name="default", priority=1, install_allowed=True, description="Built-in catalog of installable extensions"),
CatalogEntry(url=self.COMMUNITY_CATALOG_URL, name="community", priority=2, install_allowed=False, description="Community-contributed extensions (discovery only)"),
]
def get_catalog_url(self) -> str:
"""Get the primary catalog URL.
Returns the URL of the highest-priority catalog. Kept for backward
compatibility. Use get_active_catalogs() for full multi-catalog support.
Returns:
URL of the primary catalog
Raises:
ValidationError: If a catalog URL is invalid
"""
active = self.get_active_catalogs()
return active[0].url if active else self.DEFAULT_CATALOG_URL
def _fetch_single_catalog(self, entry: CatalogEntry, force_refresh: bool = False) -> Dict[str, Any]:
"""Fetch a single catalog with per-URL caching.
For the DEFAULT_CATALOG_URL, uses legacy cache files (self.cache_file /
self.cache_metadata_file) for backward compatibility. For all other URLs,
uses URL-hash-based cache files in self.cache_dir.
Args:
entry: CatalogEntry describing the catalog to fetch
force_refresh: If True, bypass cache
Returns:
Catalog data dictionary
Raises:
ExtensionError: If catalog cannot be fetched or has invalid format
"""
import urllib.request
import urllib.error
# Determine cache file paths (backward compat for default catalog)
if entry.url == self.DEFAULT_CATALOG_URL:
cache_file = self.cache_file
cache_meta_file = self.cache_metadata_file
is_valid = not force_refresh and self.is_cache_valid()
else:
url_hash = hashlib.sha256(entry.url.encode()).hexdigest()[:16]
cache_file = self.cache_dir / f"catalog-{url_hash}.json"
cache_meta_file = self.cache_dir / f"catalog-{url_hash}-metadata.json"
is_valid = False
if not force_refresh and cache_file.exists() and cache_meta_file.exists():
try:
metadata = json.loads(cache_meta_file.read_text())
cached_at = datetime.fromisoformat(metadata.get("cached_at", ""))
if cached_at.tzinfo is None:
cached_at = cached_at.replace(tzinfo=timezone.utc)
age = (datetime.now(timezone.utc) - cached_at).total_seconds()
is_valid = age < self.CACHE_DURATION
except (json.JSONDecodeError, ValueError, KeyError, TypeError):
# If metadata is invalid or missing expected fields, treat cache as invalid
pass
# Use cache if valid
if is_valid:
try:
return json.loads(cache_file.read_text())
except json.JSONDecodeError:
pass
# Fetch from network
try:
with urllib.request.urlopen(entry.url, timeout=10) as response:
catalog_data = json.loads(response.read())
if "schema_version" not in catalog_data or "extensions" not in catalog_data:
raise ExtensionError(f"Invalid catalog format from {entry.url}")
# Save to cache
self.cache_dir.mkdir(parents=True, exist_ok=True)
cache_file.write_text(json.dumps(catalog_data, indent=2))
cache_meta_file.write_text(json.dumps({
"cached_at": datetime.now(timezone.utc).isoformat(),
"catalog_url": entry.url,
}, indent=2))
return catalog_data
except urllib.error.URLError as e:
raise ExtensionError(f"Failed to fetch catalog from {entry.url}: {e}")
except json.JSONDecodeError as e:
raise ExtensionError(f"Invalid JSON in catalog from {entry.url}: {e}")
def _get_merged_extensions(self, force_refresh: bool = False) -> List[Dict[str, Any]]:
"""Fetch and merge extensions from all active catalogs.
Higher-priority (lower priority number) catalogs win on conflicts
(same extension id in two catalogs). Each extension dict is annotated with:
- _catalog_name: name of the source catalog
- _install_allowed: whether installation is allowed from this catalog
Catalogs that fail to fetch are skipped. Raises ExtensionError only if
ALL catalogs fail.
Args:
force_refresh: If True, bypass all caches
Returns:
List of merged extension dicts
Raises:
ExtensionError: If all catalogs fail to fetch
"""
import sys
active_catalogs = self.get_active_catalogs()
merged: Dict[str, Dict[str, Any]] = {}
any_success = False
for catalog_entry in active_catalogs:
try:
catalog_data = self._fetch_single_catalog(catalog_entry, force_refresh)
any_success = True
except ExtensionError as e:
print(
f"Warning: Could not fetch catalog '{catalog_entry.name}': {e}",
file=sys.stderr,
)
continue
for ext_id, ext_data in catalog_data.get("extensions", {}).items():
if ext_id not in merged: # Higher-priority catalog wins
merged[ext_id] = {
**ext_data,
"id": ext_id,
"_catalog_name": catalog_entry.name,
"_install_allowed": catalog_entry.install_allowed,
}
if not any_success and active_catalogs:
raise ExtensionError("Failed to fetch any extension catalog")
return list(merged.values())
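The first-writer-wins insert into `merged` is what makes lower priority numbers win. A tiny worked example with two mock catalogs, iterated in priority order:

```python
# Two mock catalogs, iterated in priority order (default first).
default = {"extensions": {"lint": {"version": "1.0"}}}
community = {"extensions": {"lint": {"version": "9.9"}, "docs": {"version": "0.1"}}}

merged = {}
for name, install_allowed, data in [("default", True, default),
                                    ("community", False, community)]:
    for ext_id, ext in data["extensions"].items():
        # setdefault keeps the first (higher-priority) entry on conflicts
        merged.setdefault(ext_id, {**ext, "id": ext_id,
                                   "_catalog_name": name,
                                   "_install_allowed": install_allowed})

assert merged["lint"]["_catalog_name"] == "default"   # priority 1 wins
assert merged["docs"]["_install_allowed"] is False    # discovery-only
```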
def is_cache_valid(self) -> bool:
"""Check if cached catalog is still valid.
@@ -1017,9 +1365,11 @@ class ExtensionCatalog:
try:
metadata = json.loads(self.cache_metadata_file.read_text())
cached_at = datetime.fromisoformat(metadata.get("cached_at", ""))
if cached_at.tzinfo is None:
cached_at = cached_at.replace(tzinfo=timezone.utc)
age_seconds = (datetime.now(timezone.utc) - cached_at).total_seconds()
return age_seconds < self.CACHE_DURATION
except (json.JSONDecodeError, ValueError, KeyError, TypeError):
return False
def fetch_catalog(self, force_refresh: bool = False) -> Dict[str, Any]:
@@ -1080,7 +1430,7 @@ class ExtensionCatalog:
author: Optional[str] = None,
verified_only: bool = False,
) -> List[Dict[str, Any]]:
"""Search catalog for extensions across all active catalogs.
Args:
query: Search query (searches name, description, tags)
@@ -1089,14 +1439,16 @@ class ExtensionCatalog:
verified_only: If True, show only verified extensions
Returns:
List of matching extension metadata, each annotated with
``_catalog_name`` and ``_install_allowed`` from its source catalog.
"""
all_extensions = self._get_merged_extensions()
results = []
for ext_data in all_extensions:
ext_id = ext_data["id"]
# Apply filters
if verified_only and not ext_data.get("verified", False):
continue
@@ -1122,25 +1474,26 @@ class ExtensionCatalog:
if query_lower not in searchable_text:
continue
results.append(ext_data)
return results
def get_extension_info(self, extension_id: str) -> Optional[Dict[str, Any]]:
"""Get detailed information about a specific extension.
Searches all active catalogs in priority order.
Args:
extension_id: ID of the extension
Returns:
Extension metadata (annotated with ``_catalog_name`` and
``_install_allowed``) or None if not found.
"""
all_extensions = self._get_merged_extensions()
for ext_data in all_extensions:
if ext_data["id"] == extension_id:
return ext_data
return None
def download_extension(self, extension_id: str, target_dir: Optional[Path] = None) -> Path:
@@ -1200,11 +1553,18 @@ class ExtensionCatalog:
raise ExtensionError(f"Failed to save extension ZIP: {e}") raise ExtensionError(f"Failed to save extension ZIP: {e}")
def clear_cache(self): def clear_cache(self):
"""Clear the catalog cache.""" """Clear the catalog cache (both legacy and URL-hash-based files)."""
if self.cache_file.exists(): if self.cache_file.exists():
self.cache_file.unlink() self.cache_file.unlink()
if self.cache_metadata_file.exists(): if self.cache_metadata_file.exists():
self.cache_metadata_file.unlink() self.cache_metadata_file.unlink()
# Also clear any per-URL hash-based cache files
if self.cache_dir.exists():
for extra_cache in self.cache_dir.glob("catalog-*.json"):
if extra_cache != self.cache_file:
extra_cache.unlink(missing_ok=True)
for extra_meta in self.cache_dir.glob("catalog-*-metadata.json"):
extra_meta.unlink(missing_ok=True)
class ConfigManager:
@@ -1782,4 +2142,3 @@ class HookExecutor:
self.save_project_config(config)

View File

@@ -94,9 +94,10 @@ You **MUST** consider the user input before proceeding (if not empty).
- Generate unique checklist filename:
- Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
- Format: `[domain].md`
- File handling behavior:
- If file does NOT exist: Create new file and number items starting from CHK001
- If file exists: Append new items to existing file, continuing from the last CHK ID (e.g., if last item is CHK015, start new items at CHK016; see the sketch after this list)
- Never delete or replace existing checklist content - always preserve and append
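A minimal sketch of that ID continuation, assuming items always match the `- [ ] CHK###` form required by the canonical template (the helper name is illustrative, not part of the codebase):

```python
import re
from pathlib import Path

def next_chk_number(checklist: Path) -> int:
    """Next CHK number for a checklist file (1 when starting fresh)."""
    if not checklist.exists():
        return 1
    ids = [int(m.group(1))
           for m in re.finditer(r"- \[[ xX]\] CHK(\d{3})", checklist.read_text())]
    return max(ids, default=0) + 1

# A file whose last item is "- [ ] CHK015 ..." yields 16, i.e. CHK016.
```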
**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
@@ -208,13 +209,13 @@ You **MUST** consider the user input before proceeding (if not empty).
6. **Structure Reference**: Generate the checklist following the canonical template in `templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
7. **Report**: Output full path to checklist file, item count, and note whether the run created a new file or appended to an existing one. Summarize:
- Focus areas selected
- Depth level
- Actor/timing
- Any explicit user-specified must-have items incorporated
**Important**: Each `/speckit.checklist` command invocation uses a short, descriptive checklist filename and either creates a new file or appends to an existing one. This allows:
- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose

View File

@@ -89,7 +89,7 @@ Execution steps:
- Information is better deferred to planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 5 total questions across the whole session.
- Each question must be answerable with EITHER:
- A short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
- A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").

View File

@@ -13,6 +13,40 @@ $ARGUMENTS
You **MUST** consider the user input before proceeding (if not empty).
## Pre-Execution Checks
**Check for extension hooks (before implementation)**:
- Check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.before_implement` key
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
- Filter to only hooks where `enabled: true`
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
- If the hook has no `condition` field, or it is null/empty, treat the hook as executable
- If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
- For each executable hook, output the following based on its `optional` flag:
- **Optional hook** (`optional: true`):
```
## Extension Hooks
**Optional Pre-Hook**: {extension}
Command: `/{command}`
Description: {description}
Prompt: {prompt}
To execute: `/{command}`
```
- **Mandatory hook** (`optional: false`):
```
## Extension Hooks
**Automatic Pre-Hook**: {extension}
Executing: `/{command}`
EXECUTE_COMMAND: {command}
Wait for the result of the hook command before proceeding to the Outline.
```
- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently (a sketch of this filtering logic follows below)
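The enabled/condition filtering these steps describe is straightforward to express in code. A minimal sketch under the stated rules (the function name is illustrative and PyYAML is an assumed dependency):

```python
import yaml  # assumed dependency
from pathlib import Path

def executable_hooks(project_root: Path, key: str) -> list[dict]:
    """Hooks an agent may act on directly, e.g. key='before_implement'."""
    cfg = project_root / ".specify" / "extensions.yml"
    if not cfg.exists():
        return []
    try:
        data = yaml.safe_load(cfg.read_text()) or {}
    except yaml.YAMLError:
        return []  # invalid YAML: skip hook checking silently
    hooks = (data.get("hooks") or {}).get(key) or []
    return [h for h in hooks
            if h.get("enabled")           # only enabled hooks
            and not h.get("condition")]   # defer non-empty conditions to HookExecutor
```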
## Outline
1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
@@ -88,7 +122,7 @@ You **MUST** consider the user input before proceeding (if not empty).
- **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
- **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
- **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
- **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `*.dll`, `autom4te.cache/`, `config.status`, `config.log`, `.idea/`, `*.log`, `.env*`
- **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
- **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
- **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
@@ -136,3 +170,32 @@ You **MUST** consider the user input before proceeding (if not empty).
- Report final status with summary of completed work
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
10. **Check for extension hooks**: After completion validation, check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.after_implement` key
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
- Filter to only hooks where `enabled: true`
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
- If the hook has no `condition` field, or it is null/empty, treat the hook as executable
- If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
- For each executable hook, output the following based on its `optional` flag:
- **Optional hook** (`optional: true`):
```
## Extension Hooks
**Optional Hook**: {extension}
Command: `/{command}`
Description: {description}
Prompt: {prompt}
To execute: `/{command}`
```
- **Mandatory hook** (`optional: false`):
```
## Extension Hooks
**Automatic Hook**: {extension}
Executing: `/{command}`
EXECUTE_COMMAND: {command}
```
- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently

View File

@@ -9,8 +9,8 @@ handoffs:
prompt: Clarify specification requirements
send: true
scripts:
sh: scripts/bash/create-new-feature.sh "{ARGS}"
ps: scripts/powershell/create-new-feature.ps1 "{ARGS}"
---
## User Input
@@ -39,33 +39,14 @@ Given that feature description, do this:
- "Create a dashboard for analytics" → "analytics-dashboard" - "Create a dashboard for analytics" → "analytics-dashboard"
- "Fix payment processing timeout bug" → "fix-payment-timeout" - "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Create the feature branch** by running the script with `--short-name` (and `--json`), and do NOT pass `--number` (the script auto-detects the next globally available number across all branches and spec directories; see the numbering sketch after this list):
- Bash example: `{SCRIPT} --json --short-name "user-auth" "Add user authentication"`
- PowerShell example: `{SCRIPT} -Json -ShortName "user-auth" "Add user authentication"`
**IMPORTANT**:
- Do NOT pass `--number`; the script determines the correct next number automatically
- Always include the JSON flag (`--json` for Bash, `-Json` for PowerShell) so the output can be parsed reliably
- You must only ever run this script once per feature
- The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
- The JSON output will contain BRANCH_NAME and SPEC_FILE paths
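What the script's auto-detection amounts to, as a hedged Python sketch (the actual implementation is the bash/PowerShell script; the branch and directory matching details are assumed from the `NNN-short-name` convention used here):

```python
import re
import subprocess
from pathlib import Path

def next_feature_number(repo_root: Path) -> int:
    """Highest NNN across all branches and specs/ directories, plus one."""
    numbers = []
    branches = subprocess.run(
        ["git", "branch", "-a", "--format=%(refname:short)"],
        cwd=repo_root, capture_output=True, text=True).stdout.split()
    for name in branches:
        if m := re.search(r"(?:^|/)(\d+)-", name):
            numbers.append(int(m.group(1)))
    for spec_dir in (repo_root / "specs").glob("[0-9]*-*"):
        if m := re.match(r"(\d+)-", spec_dir.name):
            numbers.append(int(m.group(1)))
    return max(numbers, default=0) + 1
```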

View File

@@ -22,6 +22,40 @@ $ARGUMENTS
You **MUST** consider the user input before proceeding (if not empty).
## Pre-Execution Checks
**Check for extension hooks (before tasks generation)**:
- Check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.before_tasks` key
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
- Filter to only hooks where `enabled: true`
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
- If the hook has no `condition` field, or it is null/empty, treat the hook as executable
- If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
- For each executable hook, output the following based on its `optional` flag:
- **Optional hook** (`optional: true`):
```
## Extension Hooks
**Optional Pre-Hook**: {extension}
Command: `/{command}`
Description: {description}
Prompt: {prompt}
To execute: `/{command}`
```
- **Mandatory hook** (`optional: false`):
```
## Extension Hooks
**Automatic Pre-Hook**: {extension}
Executing: `/{command}`
EXECUTE_COMMAND: {command}
Wait for the result of the hook command before proceeding to the Outline.
```
- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently
## Outline
1. **Setup**: Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
@@ -63,6 +97,35 @@ You **MUST** consider the user input before proceeding (if not empty).
- Suggested MVP scope (typically just User Story 1)
- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
6. **Check for extension hooks**: After tasks.md is generated, check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.after_tasks` key
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
- Filter to only hooks where `enabled: true`
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
- If the hook has no `condition` field, or it is null/empty, treat the hook as executable
- If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
- For each executable hook, output the following based on its `optional` flag:
- **Optional hook** (`optional: true`):
```
## Extension Hooks
**Optional Hook**: {extension}
Command: `/{command}`
Description: {description}
Prompt: {prompt}
To execute: `/{command}`
```
- **Mandatory hook** (`optional: false`):
```
## Extension Hooks
**Automatic Hook**: {extension}
Executing: `/{command}`
EXECUTE_COMMAND: {command}
```
- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently
Context for task generation: {ARGS}
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.

View File

@@ -0,0 +1,34 @@
hooks:
before_implement:
- id: pre_test
enabled: true
optional: false
extension: "test-extension"
command: "pre_implement_test"
description: "Test before implement hook execution"
after_implement:
- id: post_test
enabled: true
optional: true
extension: "test-extension"
command: "post_implement_test"
description: "Test after implement hook execution"
prompt: "Would you like to run the post-implement test?"
before_tasks:
- id: pre_tasks_test
enabled: true
optional: false
extension: "test-extension"
command: "pre_tasks_test"
description: "Test before tasks hook execution"
after_tasks:
- id: post_tasks_test
enabled: true
optional: true
extension: "test-extension"
command: "post_tasks_test"
description: "Test after tasks hook execution"
prompt: "Would you like to run the post-tasks test?"

tests/hooks/TESTING.md Normal file
View File

@@ -0,0 +1,30 @@
# Testing Extension Hooks
This directory contains a mock project to verify that LLM agents correctly identify and execute hook commands defined in `.specify/extensions.yml`.
## Test 1: Testing `before_tasks` and `after_tasks`
1. Open a chat with an LLM (like GitHub Copilot) in this project.
2. Ask it to generate tasks for the current directory:
> "Please follow `/speckit.tasks` for the `./tests/hooks` directory."
3. **Expected Behavior**:
- Before doing any generation, the LLM should notice the `AUTOMATIC Pre-Hook` in `.specify/extensions.yml` under `before_tasks`.
- It should state it is executing `EXECUTE_COMMAND: pre_tasks_test`.
- It should then proceed to read the `.md` docs and produce a `tasks.md`.
- After generation, it should output the optional `after_tasks` hook (`post_tasks_test`) block, asking if you want to run it.
## Test 2: Testing `before_implement` and `after_implement`
*(Requires `tasks.md` from Test 1 to exist)*
1. In the same (or new) chat, ask the LLM to implement the tasks:
> "Please follow `/speckit.implement` for the `./tests/hooks` directory."
2. **Expected Behavior**:
- The LLM should first check for `before_implement` hooks.
- It should state it is executing `EXECUTE_COMMAND: pre_implement_test` BEFORE doing any actual task execution.
- It should evaluate the checklists and execute the code writing tasks.
- Upon completion, it should output the optional `after_implement` hook (`post_implement_test`) block.
## How it works
The templates for these commands in `templates/commands/tasks.md` and `templates/commands/implement.md` contain strict ordered lists. The new `before_*` hooks are explicitly formulated in a **Pre-Execution Checks** section prior to the outline to ensure they're evaluated first without breaking template step numbers.

tests/hooks/plan.md Normal file
View File

@@ -0,0 +1,3 @@
# Test Setup for Hooks
This feature is designed to test if LLMs correctly invoke Spec Kit extensions hooks when generating tasks and implementing code.

tests/hooks/spec.md Normal file
View File

@@ -0,0 +1 @@
- **User Story 1:** I want a test script that prints "Hello hooks!".

tests/hooks/tasks.md Normal file
View File

@@ -0,0 +1 @@
- [ ] T001 [US1] Create script that prints 'Hello hooks!' in hello.py

View File

@@ -0,0 +1,228 @@
"""Consistency checks for agent configuration across runtime and packaging scripts."""
import re
from pathlib import Path
from specify_cli import AGENT_CONFIG, AI_ASSISTANT_ALIASES, AI_ASSISTANT_HELP
from specify_cli.extensions import CommandRegistrar
REPO_ROOT = Path(__file__).resolve().parent.parent
class TestAgentConfigConsistency:
"""Ensure kiro-cli migration stays synchronized across key surfaces."""
def test_runtime_config_uses_kiro_cli_and_removes_q(self):
"""AGENT_CONFIG should include kiro-cli and exclude legacy q."""
assert "kiro-cli" in AGENT_CONFIG
assert AGENT_CONFIG["kiro-cli"]["folder"] == ".kiro/"
assert AGENT_CONFIG["kiro-cli"]["commands_subdir"] == "prompts"
assert "q" not in AGENT_CONFIG
def test_extension_registrar_uses_kiro_cli_and_removes_q(self):
"""Extension command registrar should target .kiro/prompts."""
cfg = CommandRegistrar.AGENT_CONFIGS
assert "kiro-cli" in cfg
assert cfg["kiro-cli"]["dir"] == ".kiro/prompts"
assert "q" not in cfg
def test_extension_registrar_includes_codex(self):
"""Extension command registrar should include codex targeting .codex/prompts."""
cfg = CommandRegistrar.AGENT_CONFIGS
assert "codex" in cfg
assert cfg["codex"]["dir"] == ".codex/prompts"
def test_release_agent_lists_include_kiro_cli_and_exclude_q(self):
"""Bash and PowerShell release scripts should agree on agent key set for Kiro."""
sh_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh").read_text(encoding="utf-8")
ps_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.ps1").read_text(encoding="utf-8")
sh_match = re.search(r"ALL_AGENTS=\(([^)]*)\)", sh_text)
assert sh_match is not None
sh_agents = sh_match.group(1).split()
ps_match = re.search(r"\$AllAgents = @\(([^)]*)\)", ps_text)
assert ps_match is not None
ps_agents = re.findall(r"'([^']+)'", ps_match.group(1))
assert "kiro-cli" in sh_agents
assert "kiro-cli" in ps_agents
assert "shai" in sh_agents
assert "shai" in ps_agents
assert "agy" in sh_agents
assert "agy" in ps_agents
assert "q" not in sh_agents
assert "q" not in ps_agents
def test_release_ps_switch_has_shai_and_agy_generation(self):
"""PowerShell release builder must generate files for shai and agy agents."""
ps_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.ps1").read_text(encoding="utf-8")
assert re.search(r"'shai'\s*\{.*?\.shai/commands", ps_text, re.S) is not None
assert re.search(r"'agy'\s*\{.*?\.agent/workflows", ps_text, re.S) is not None
def test_init_ai_help_includes_roo_and_kiro_alias(self):
"""CLI help text for --ai should stay in sync with agent config and alias guidance."""
assert "roo" in AI_ASSISTANT_HELP
for alias, target in AI_ASSISTANT_ALIASES.items():
assert alias in AI_ASSISTANT_HELP
assert target in AI_ASSISTANT_HELP
def test_devcontainer_kiro_installer_uses_pinned_checksum(self):
"""Devcontainer installer should always verify Kiro installer via pinned SHA256."""
post_create_text = (REPO_ROOT / ".devcontainer" / "post-create.sh").read_text(encoding="utf-8")
assert 'KIRO_INSTALLER_SHA256="7487a65cf310b7fb59b357c4b5e6e3f3259d383f4394ecedb39acf70f307cffb"' in post_create_text
assert "sha256sum -c -" in post_create_text
assert "KIRO_SKIP_KIRO_INSTALLER_VERIFY" not in post_create_text
def test_release_output_targets_kiro_prompt_dir(self):
"""Packaging and release scripts should no longer emit amazonq artifacts."""
sh_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh").read_text(encoding="utf-8")
ps_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.ps1").read_text(encoding="utf-8")
gh_release_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-github-release.sh").read_text(encoding="utf-8")
assert ".kiro/prompts" in sh_text
assert ".kiro/prompts" in ps_text
assert ".amazonq/prompts" not in sh_text
assert ".amazonq/prompts" not in ps_text
assert "spec-kit-template-kiro-cli-sh-" in gh_release_text
assert "spec-kit-template-kiro-cli-ps-" in gh_release_text
assert "spec-kit-template-q-sh-" not in gh_release_text
assert "spec-kit-template-q-ps-" not in gh_release_text
def test_agent_context_scripts_use_kiro_cli(self):
"""Agent context scripts should advertise kiro-cli and not legacy q agent key."""
bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
assert "kiro-cli" in bash_text
assert "kiro-cli" in pwsh_text
assert "Amazon Q Developer CLI" not in bash_text
assert "Amazon Q Developer CLI" not in pwsh_text
# --- Tabnine CLI consistency checks ---
def test_runtime_config_includes_tabnine(self):
"""AGENT_CONFIG should include tabnine with correct folder and subdir."""
assert "tabnine" in AGENT_CONFIG
assert AGENT_CONFIG["tabnine"]["folder"] == ".tabnine/agent/"
assert AGENT_CONFIG["tabnine"]["commands_subdir"] == "commands"
assert AGENT_CONFIG["tabnine"]["requires_cli"] is True
assert AGENT_CONFIG["tabnine"]["install_url"] is not None
def test_extension_registrar_includes_tabnine(self):
"""CommandRegistrar.AGENT_CONFIGS should include tabnine with correct TOML config."""
from specify_cli.extensions import CommandRegistrar
assert "tabnine" in CommandRegistrar.AGENT_CONFIGS
cfg = CommandRegistrar.AGENT_CONFIGS["tabnine"]
assert cfg["dir"] == ".tabnine/agent/commands"
assert cfg["format"] == "toml"
assert cfg["args"] == "{{args}}"
assert cfg["extension"] == ".toml"
def test_release_agent_lists_include_tabnine(self):
"""Bash and PowerShell release scripts should include tabnine in agent lists."""
sh_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh").read_text(encoding="utf-8")
ps_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.ps1").read_text(encoding="utf-8")
sh_match = re.search(r"ALL_AGENTS=\(([^)]*)\)", sh_text)
assert sh_match is not None
sh_agents = sh_match.group(1).split()
ps_match = re.search(r"\$AllAgents = @\(([^)]*)\)", ps_text)
assert ps_match is not None
ps_agents = re.findall(r"'([^']+)'", ps_match.group(1))
assert "tabnine" in sh_agents
assert "tabnine" in ps_agents
def test_release_scripts_generate_tabnine_toml_commands(self):
"""Release scripts should generate TOML commands for tabnine in .tabnine/agent/commands."""
sh_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh").read_text(encoding="utf-8")
ps_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.ps1").read_text(encoding="utf-8")
assert ".tabnine/agent/commands" in sh_text
assert ".tabnine/agent/commands" in ps_text
assert re.search(r"'tabnine'\s*\{.*?\.tabnine/agent/commands", ps_text, re.S) is not None
def test_github_release_includes_tabnine_packages(self):
"""GitHub release script should include tabnine template packages."""
gh_release_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-github-release.sh").read_text(encoding="utf-8")
assert "spec-kit-template-tabnine-sh-" in gh_release_text
assert "spec-kit-template-tabnine-ps-" in gh_release_text
def test_agent_context_scripts_include_tabnine(self):
"""Agent context scripts should support tabnine agent type."""
bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
assert "tabnine" in bash_text
assert "TABNINE_FILE" in bash_text
assert "tabnine" in pwsh_text
assert "TABNINE_FILE" in pwsh_text
def test_ai_help_includes_tabnine(self):
"""CLI help text for --ai should include tabnine."""
assert "tabnine" in AI_ASSISTANT_HELP
# --- Kimi Code CLI consistency checks ---
def test_kimi_in_agent_config(self):
"""AGENT_CONFIG should include kimi with correct folder and commands_subdir."""
assert "kimi" in AGENT_CONFIG
assert AGENT_CONFIG["kimi"]["folder"] == ".kimi/"
assert AGENT_CONFIG["kimi"]["commands_subdir"] == "skills"
assert AGENT_CONFIG["kimi"]["requires_cli"] is True
def test_kimi_in_extension_registrar(self):
"""Extension command registrar should include kimi using .kimi/skills and SKILL.md."""
cfg = CommandRegistrar.AGENT_CONFIGS
assert "kimi" in cfg
kimi_cfg = cfg["kimi"]
assert kimi_cfg["dir"] == ".kimi/skills"
assert kimi_cfg["extension"] == "/SKILL.md"
def test_kimi_in_release_agent_lists(self):
"""Bash and PowerShell release scripts should include kimi in agent lists."""
sh_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.sh").read_text(encoding="utf-8")
ps_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-release-packages.ps1").read_text(encoding="utf-8")
sh_match = re.search(r"ALL_AGENTS=\(([^)]*)\)", sh_text)
assert sh_match is not None
sh_agents = sh_match.group(1).split()
ps_match = re.search(r"\$AllAgents = @\(([^)]*)\)", ps_text)
assert ps_match is not None
ps_agents = re.findall(r"'([^']+)'", ps_match.group(1))
assert "kimi" in sh_agents
assert "kimi" in ps_agents
def test_kimi_in_powershell_validate_set(self):
"""PowerShell update-agent-context script should include 'kimi' in ValidateSet."""
ps_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
validate_set_match = re.search(r"\[ValidateSet\(([^)]*)\)\]", ps_text)
assert validate_set_match is not None
validate_set_values = re.findall(r"'([^']+)'", validate_set_match.group(1))
assert "kimi" in validate_set_values
def test_kimi_in_github_release_output(self):
"""GitHub release script should include kimi template packages."""
gh_release_text = (REPO_ROOT / ".github" / "workflows" / "scripts" / "create-github-release.sh").read_text(encoding="utf-8")
assert "spec-kit-template-kimi-sh-" in gh_release_text
assert "spec-kit-template-kimi-ps-" in gh_release_text
def test_ai_help_includes_kimi(self):
"""CLI help text for --ai should include kimi."""
assert "kimi" in AI_ASSISTANT_HELP

View File

@@ -147,6 +147,11 @@ class TestGetSkillsDir:
result = _get_skills_dir(project_dir, "gemini")
assert result == project_dir / ".gemini" / "skills"
def test_tabnine_skills_dir(self, project_dir):
"""Tabnine should use .tabnine/agent/skills/."""
result = _get_skills_dir(project_dir, "tabnine")
assert result == project_dir / ".tabnine" / "agent" / "skills"
def test_copilot_skills_dir(self, project_dir):
"""Copilot should use .github/skills/."""
result = _get_skills_dir(project_dir, "copilot")
@@ -162,6 +167,11 @@ class TestGetSkillsDir:
result = _get_skills_dir(project_dir, "cursor-agent")
assert result == project_dir / ".cursor" / "skills"
def test_kiro_cli_skills_dir(self, project_dir):
"""Kiro CLI should use .kiro/skills/."""
result = _get_skills_dir(project_dir, "kiro-cli")
assert result == project_dir / ".kiro" / "skills"
def test_unknown_agent_uses_default(self, project_dir):
"""Unknown agents should fall back to DEFAULT_SKILLS_DIR."""
result = _get_skills_dir(project_dir, "nonexistent-agent")
@@ -400,8 +410,11 @@ class TestInstallAiSkills:
skills_dir = _get_skills_dir(proj, agent_key)
assert skills_dir.exists()
skill_dirs = [d.name for d in skills_dir.iterdir() if d.is_dir()]
# Kimi uses dot-separator (speckit.specify) to match /skill:speckit.* invocation;
# all other agents use hyphen-separator (speckit-specify).
expected_skill_name = "speckit.specify" if agent_key == "kimi" else "speckit-specify"
assert expected_skill_name in skill_dirs
assert (skills_dir / expected_skill_name / "SKILL.md").exists()
@@ -460,8 +473,9 @@ class TestNewProjectCommandSkip:
"""Simulate template extraction: create agent commands dir.""" """Simulate template extraction: create agent commands dir."""
agent_cfg = AGENT_CONFIG.get(agent, {}) agent_cfg = AGENT_CONFIG.get(agent, {})
agent_folder = agent_cfg.get("folder", "") agent_folder = agent_cfg.get("folder", "")
commands_subdir = agent_cfg.get("commands_subdir", "commands")
if agent_folder: if agent_folder:
cmds_dir = project_path / agent_folder.rstrip("/") / "commands" cmds_dir = project_path / agent_folder.rstrip("/") / commands_subdir
cmds_dir.mkdir(parents=True, exist_ok=True) cmds_dir.mkdir(parents=True, exist_ok=True)
(cmds_dir / "speckit.specify.md").write_text("# spec") (cmds_dir / "speckit.specify.md").write_text("# spec")
@@ -483,6 +497,7 @@ class TestNewProjectCommandSkip:
patch("specify_cli.shutil.which", return_value="/usr/bin/git"): patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
result = runner.invoke(app, ["init", str(target), "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"]) result = runner.invoke(app, ["init", str(target), "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"])
assert result.exit_code == 0
# Skills should have been called
mock_skills.assert_called_once()
@@ -490,6 +505,30 @@ class TestNewProjectCommandSkip:
cmds_dir = target / ".claude" / "commands"
assert not cmds_dir.exists()
def test_new_project_nonstandard_commands_subdir_removed_after_skills_succeed(self, tmp_path):
"""For non-standard agents, configured commands_subdir should be removed on success."""
from typer.testing import CliRunner
runner = CliRunner()
target = tmp_path / "new-kiro-proj"
def fake_download(project_path, *args, **kwargs):
self._fake_extract("kiro-cli", project_path)
with patch("specify_cli.download_and_extract_template", side_effect=fake_download), \
patch("specify_cli.ensure_executable_scripts"), \
patch("specify_cli.ensure_constitution_from_template"), \
patch("specify_cli.install_ai_skills", return_value=True) as mock_skills, \
patch("specify_cli.is_git_repo", return_value=False), \
patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
result = runner.invoke(app, ["init", str(target), "--ai", "kiro-cli", "--ai-skills", "--script", "sh", "--no-git"])
assert result.exit_code == 0
mock_skills.assert_called_once()
prompts_dir = target / ".kiro" / "prompts"
assert not prompts_dir.exists()
def test_commands_preserved_when_skills_fail(self, tmp_path):
"""If skills fail, commands should NOT be removed (safety net)."""
from typer.testing import CliRunner
@@ -508,6 +547,7 @@ class TestNewProjectCommandSkip:
patch("specify_cli.shutil.which", return_value="/usr/bin/git"): patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
result = runner.invoke(app, ["init", str(target), "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"]) result = runner.invoke(app, ["init", str(target), "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"])
assert result.exit_code == 0
# Commands should still exist since skills failed
cmds_dir = target / ".claude" / "commands"
assert cmds_dir.exists()
@@ -538,8 +578,9 @@ class TestNewProjectCommandSkip:
patch("specify_cli.install_ai_skills", return_value=True), \ patch("specify_cli.install_ai_skills", return_value=True), \
patch("specify_cli.is_git_repo", return_value=True), \ patch("specify_cli.is_git_repo", return_value=True), \
patch("specify_cli.shutil.which", return_value="/usr/bin/git"): patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
result = runner.invoke(app, ["init", "--here", "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"]) result = runner.invoke(app, ["init", "--here", "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"], input="y\n")
assert result.exit_code == 0
# Commands must remain for --here
assert cmds_dir.exists()
assert (cmds_dir / "speckit.specify.md").exists()
@@ -631,6 +672,42 @@ class TestCliValidation:
assert "--ai-skills" in plain assert "--ai-skills" in plain
assert "agent skills" in plain.lower() assert "agent skills" in plain.lower()
def test_kiro_alias_normalized_to_kiro_cli(self, tmp_path):
"""--ai kiro should normalize to canonical kiro-cli agent key."""
from typer.testing import CliRunner
runner = CliRunner()
target = tmp_path / "kiro-alias-proj"
with patch("specify_cli.download_and_extract_template") as mock_download, \
patch("specify_cli.ensure_executable_scripts"), \
patch("specify_cli.ensure_constitution_from_template"), \
patch("specify_cli.is_git_repo", return_value=False), \
patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
result = runner.invoke(
app,
[
"init",
str(target),
"--ai",
"kiro",
"--ignore-agent-tools",
"--script",
"sh",
"--no-git",
],
)
assert result.exit_code == 0
assert mock_download.called
# download_and_extract_template(project_path, ai_assistant, script_type, ...)
assert mock_download.call_args.args[1] == "kiro-cli"
def test_q_removed_from_agent_config(self):
"""Amazon Q legacy key should not remain in AGENT_CONFIG."""
assert "q" not in AGENT_CONFIG
assert "kiro-cli" in AGENT_CONFIG
class TestParameterOrderingIssue:
"""Test fix for GitHub issue #1641: parameter ordering issues."""

View File

@@ -0,0 +1,263 @@
"""
Tests for Cursor .mdc frontmatter generation (issue #669).
Verifies that update-agent-context.sh properly prepends YAML frontmatter
to .mdc files so that Cursor IDE auto-includes the rules.
"""
import os
import shutil
import subprocess
import textwrap
import pytest
SCRIPT_PATH = os.path.join(
os.path.dirname(__file__),
os.pardir,
"scripts",
"bash",
"update-agent-context.sh",
)
EXPECTED_FRONTMATTER_LINES = [
"---",
"description: Project Development Guidelines",
'globs: ["**/*"]',
"alwaysApply: true",
"---",
]
requires_git = pytest.mark.skipif(
shutil.which("git") is None,
reason="git is not installed",
)
class TestScriptFrontmatterPattern:
"""Static analysis — no git required."""
def test_create_new_has_mdc_frontmatter_logic(self):
"""create_new_agent_file() must contain .mdc frontmatter logic."""
with open(SCRIPT_PATH, encoding="utf-8") as f:
content = f.read()
assert 'if [[ "$target_file" == *.mdc ]]' in content
assert "alwaysApply: true" in content
def test_update_existing_has_mdc_frontmatter_logic(self):
"""update_existing_agent_file() must also handle .mdc frontmatter."""
with open(SCRIPT_PATH, encoding="utf-8") as f:
content = f.read()
# There should be two occurrences of the .mdc check — one per function
occurrences = content.count('if [[ "$target_file" == *.mdc ]]')
assert occurrences >= 2, (
f"Expected at least 2 .mdc frontmatter checks, found {occurrences}"
)
def test_powershell_script_has_mdc_frontmatter_logic(self):
"""PowerShell script must also handle .mdc frontmatter."""
ps_path = os.path.join(
os.path.dirname(__file__),
os.pardir,
"scripts",
"powershell",
"update-agent-context.ps1",
)
with open(ps_path, encoding="utf-8") as f:
content = f.read()
assert "alwaysApply: true" in content
occurrences = content.count(r"\.mdc$")
assert occurrences >= 2, (
f"Expected at least 2 .mdc frontmatter checks in PS script, found {occurrences}"
)
@requires_git
class TestCursorFrontmatterIntegration:
"""Integration tests using a real git repo."""
@pytest.fixture
def git_repo(self, tmp_path):
"""Create a minimal git repo with the spec-kit structure."""
repo = tmp_path / "repo"
repo.mkdir()
# Init git repo
subprocess.run(
["git", "init"], cwd=str(repo), capture_output=True, check=True
)
subprocess.run(
["git", "config", "user.email", "test@test.com"],
cwd=str(repo),
capture_output=True,
check=True,
)
subprocess.run(
["git", "config", "user.name", "Test"],
cwd=str(repo),
capture_output=True,
check=True,
)
# Create .specify dir with config
specify_dir = repo / ".specify"
specify_dir.mkdir()
(specify_dir / "config.yaml").write_text(
textwrap.dedent("""\
project_type: webapp
language: python
framework: fastapi
database: N/A
""")
)
# Create template
templates_dir = specify_dir / "templates"
templates_dir.mkdir()
(templates_dir / "agent-file-template.md").write_text(
"# [PROJECT NAME] Development Guidelines\n\n"
"Auto-generated from all feature plans. Last updated: [DATE]\n\n"
"## Active Technologies\n\n"
"[EXTRACTED FROM ALL PLAN.MD FILES]\n\n"
"## Project Structure\n\n"
"[ACTUAL STRUCTURE FROM PLANS]\n\n"
"## Development Commands\n\n"
"[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]\n\n"
"## Coding Conventions\n\n"
"[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]\n\n"
"## Recent Changes\n\n"
"[LAST 3 FEATURES AND WHAT THEY ADDED]\n"
)
# Create initial commit
subprocess.run(
["git", "add", "-A"], cwd=str(repo), capture_output=True, check=True
)
subprocess.run(
["git", "commit", "-m", "init"],
cwd=str(repo),
capture_output=True,
check=True,
)
# Create a feature branch so CURRENT_BRANCH detection works
subprocess.run(
["git", "checkout", "-b", "001-test-feature"],
cwd=str(repo),
capture_output=True,
check=True,
)
# Create a spec so the script detects the feature
spec_dir = repo / "specs" / "001-test-feature"
spec_dir.mkdir(parents=True)
(spec_dir / "plan.md").write_text(
"# Test Feature Plan\n\n"
"## Technology Stack\n\n"
"- Language: Python\n"
"- Framework: FastAPI\n"
)
return repo
def _run_update(self, repo, agent_type="cursor-agent"):
"""Run update-agent-context.sh for a specific agent type."""
script = os.path.abspath(SCRIPT_PATH)
result = subprocess.run(
["bash", script, agent_type],
cwd=str(repo),
capture_output=True,
text=True,
timeout=30,
)
return result
def test_new_mdc_file_has_frontmatter(self, git_repo):
"""Creating a new .mdc file must include YAML frontmatter."""
result = self._run_update(git_repo)
assert result.returncode == 0, f"Script failed: {result.stderr}"
mdc_file = git_repo / ".cursor" / "rules" / "specify-rules.mdc"
assert mdc_file.exists(), "Cursor .mdc file was not created"
content = mdc_file.read_text()
lines = content.splitlines()
# First line must be the opening ---
assert lines[0] == "---", f"Expected frontmatter start, got: {lines[0]}"
# Check all frontmatter lines are present
for expected in EXPECTED_FRONTMATTER_LINES:
assert expected in content, f"Missing frontmatter line: {expected}"
# Content after frontmatter should be the template content
assert "Development Guidelines" in content
def test_existing_mdc_without_frontmatter_gets_it_added(self, git_repo):
"""Updating an existing .mdc file that lacks frontmatter must add it."""
# First, create the file WITHOUT frontmatter (simulating pre-fix state)
cursor_dir = git_repo / ".cursor" / "rules"
cursor_dir.mkdir(parents=True, exist_ok=True)
mdc_file = cursor_dir / "specify-rules.mdc"
mdc_file.write_text(
"# repo Development Guidelines\n\n"
"Auto-generated from all feature plans. Last updated: 2025-01-01\n\n"
"## Active Technologies\n\n"
"- Python + FastAPI (main)\n\n"
"## Recent Changes\n\n"
"- main: Added Python + FastAPI\n"
)
result = self._run_update(git_repo)
assert result.returncode == 0, f"Script failed: {result.stderr}"
content = mdc_file.read_text()
lines = content.splitlines()
assert lines[0] == "---", f"Expected frontmatter start, got: {lines[0]}"
for expected in EXPECTED_FRONTMATTER_LINES:
assert expected in content, f"Missing frontmatter line: {expected}"
def test_existing_mdc_with_frontmatter_not_duplicated(self, git_repo):
"""Updating an .mdc file that already has frontmatter must not duplicate it."""
cursor_dir = git_repo / ".cursor" / "rules"
cursor_dir.mkdir(parents=True, exist_ok=True)
mdc_file = cursor_dir / "specify-rules.mdc"
frontmatter = (
"---\n"
"description: Project Development Guidelines\n"
'globs: ["**/*"]\n'
"alwaysApply: true\n"
"---\n\n"
)
body = (
"# repo Development Guidelines\n\n"
"Auto-generated from all feature plans. Last updated: 2025-01-01\n\n"
"## Active Technologies\n\n"
"- Python + FastAPI (main)\n\n"
"## Recent Changes\n\n"
"- main: Added Python + FastAPI\n"
)
mdc_file.write_text(frontmatter + body)
result = self._run_update(git_repo)
assert result.returncode == 0, f"Script failed: {result.stderr}"
content = mdc_file.read_text()
# Count occurrences of the frontmatter delimiter
assert content.count("alwaysApply: true") == 1, (
"Frontmatter was duplicated"
)
def test_non_mdc_file_has_no_frontmatter(self, git_repo):
"""Non-.mdc agent files (e.g., Claude) must NOT get frontmatter."""
result = self._run_update(git_repo, agent_type="claude")
assert result.returncode == 0, f"Script failed: {result.stderr}"
claude_file = git_repo / ".claude" / "CLAUDE.md"
if claude_file.exists():
content = claude_file.read_text()
assert not content.startswith("---"), (
"Non-mdc file should not have frontmatter"
)

File diff suppressed because it is too large.