---
description: Validate the lifecycle of an extension from the catalog.
---
# Extension Self-Test: $ARGUMENTS

This command drives a self-test that simulates the developer experience with the $ARGUMENTS extension.
## Goal

Validate the end-to-end lifecycle (discovery, installation, registration) for the extension: $ARGUMENTS.

If $ARGUMENTS is empty, you must tell the user to provide an extension name, for example: `/speckit.selftest.extension linear`.
## Steps

### Step 1: Catalog Discovery Validation

Check whether the extension exists in the Spec Kit catalog.

Execute the following command and verify that it completes successfully and that the returned extension ID exactly matches $ARGUMENTS. If the command fails or the ID does not match, fail the test.

```shell
specify extension info "$ARGUMENTS"
```
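The check above can be sketched in Python. This is a minimal, hypothetical helper, not part of spec-kit: the exact output format of `specify extension info` is an assumption here, so the sketch only checks that the command exits successfully and that its output mentions the expected ID.

```python
import subprocess


def check_discovery(extension_id: str, cmd=None) -> bool:
    """Run the catalog info command and confirm the returned ID matches.

    `cmd` defaults to the spec-kit CLI invocation; it is parameterized so
    the check can be exercised against a stub command. We treat a zero
    exit code plus the extension ID appearing in stdout as a pass.
    """
    cmd = cmd or ["specify", "extension", "info", extension_id]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0 and extension_id in result.stdout
```

A stricter implementation would parse the CLI's structured output and compare the ID field exactly, once that format is known.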
### Step 2: Simulate Installation

First, try to add the extension to the current workspace configuration directly. If the catalog marks the extension as `install_allowed: false` (discovery-only), this step is expected to fail.

```shell
specify extension add "$ARGUMENTS"
```

Then, simulate adding the extension by installing it from its catalog download URL, which should bypass the restriction.

Obtain the extension's `download_url` from the catalog metadata (for example, via a catalog info command or UI), then run:

```shell
specify extension add "$ARGUMENTS" --from "<download_url>"
```
### Step 3: Registration Verification

Once the add command completes, verify the installation by checking the project configuration.

Use terminal tools (such as `cat`) to verify that the following file contains a record for $ARGUMENTS:

```shell
cat .specify/extensions/.registry/$ARGUMENTS.json
```
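Programmatically, the same check can be sketched as follows. The registry record schema is not documented here, so this hypothetical helper asserts only what the `cat` check above implies: the file exists and parses as JSON.

```python
import json
from pathlib import Path


def check_registration(extension_id: str,
                       registry_dir: str = ".specify/extensions/.registry") -> bool:
    """Return True if a well-formed registry record exists for the extension.

    Mirrors `cat .specify/extensions/.registry/<id>.json`: the record must
    exist and contain valid JSON. Field-level validation is deliberately
    omitted because the record schema is an unknown here.
    """
    record = Path(registry_dir) / f"{extension_id}.json"
    if not record.is_file():
        return False
    try:
        json.loads(record.read_text())
    except json.JSONDecodeError:
        return False
    return True
```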
### Step 4: Verification Report

Analyze the standard output of the three steps above. Generate a terminal-style test report detailing the results of discovery, installation, and registration, and return it directly to the user.

Example output format:

```text
============================= test session starts ==============================
collected 3 items

test_selftest_discovery.py::test_catalog_search [PASS/FAIL]
    Details: [Provide execution result of specify extension info]

test_selftest_installation.py::test_extension_add [PASS/FAIL]
    Details: [Provide execution result of specify extension add]

test_selftest_registration.py::test_config_verification [PASS/FAIL]
    Details: [Provide execution result of registry record verification]

============================== [X] passed in ... ==============================
```
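For illustration, the example layout above can be rendered from structured results with a small Python sketch. This is a hypothetical helper (the real report is produced by the assistant, not by code); the test names and layout follow the example format.

```python
def format_report(results):
    """Render (test_name, passed, details) tuples in the pytest-style
    layout shown in the example output format above."""
    lines = [
        "=" * 29 + " test session starts " + "=" * 30,
        f"collected {len(results)} items",
        "",
    ]
    for name, passed, details in results:
        lines.append(f"{name} {'PASS' if passed else 'FAIL'}")
        lines.append(f"    Details: {details}")
        lines.append("")
    n_pass = sum(1 for _, passed, _ in results if passed)
    lines.append("=" * 30 + f" {n_pass} passed " + "=" * 30)
    return "\n".join(lines)
```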